How to Use AI to Test and Improve Your Team’s Training Retention in Minutes
In this episode of The Real Estate Growth Hackers, join Zach Hammer and Charlie Madison as they delve into the groundbreaking approach of using AI to achieve remarkable team training retention in just minutes! Discover the synergies of technology and training, and learn practical insights to propel your team’s learning abilities to the next level.
The future is now; tune in to not just keep pace but stay ahead in the fast-evolving real estate industry. Don’t miss out!
Other subjects we covered on the show:
- Exploring the power of AI in real estate training.
- Instant testing and improvements of training with AI.
- Customized team learning through AI for better outcomes.
- Real-world examples of AI applications in the industry.
- Potential pitfalls to avoid while implementing AI.
- Future projections of AI in team training.
AND MORE TOPICS COVERED IN THE FULL INTERVIEW!!! You can check that out and subscribe on YouTube.
If you want to know more about Zach Hammer and Charlie Madison, you may reach out to them at:
[00:00:00] Zach Hammer: Alright, welcome to another episode of Real Estate Growth Hackers. I am here, Zach Hammer, with my buddy Charlie Madison of Realtor Waiting List, Referrals While You Sleep, all sorts of different things. Software developer, realtor, green screen aficionado, all sorts of things to say about Charlie.
[00:00:25] Charlie is here.
[00:00:25] Charlie Madison: Whatever it takes to not have a W-2.
[00:00:29] Zach Hammer: Yeah. Yeah. There you go. That’s what’s gonna be on your tombstone: did whatever it took to not have a W-2. What we’re talking about today is how to leverage AI in the process of your onboarding and your training, so that you can take training materials that you’ve already put together and put together questions, quizzes, et cetera, easily and on autopilot, in order to actually test for understanding.
[00:00:55] So that’s what we’re talking about today. Now, why is that important? Let’s cover that real [00:01:00] quick. When you are going through and giving people training, one of the things that I have found is it is a lot easier for somebody to claim that they went through training than it is for you to actually pay attention and make sure that they fully listened to it.
[00:01:13] Ideally people are self-motivated and doing that on their own. But sometimes it does take a little bit of oversight. For better or worse, if something isn’t observed, if something isn’t checked or watched, it inevitably degrades to about the worst quality possible. And so when you’re bringing people onto your team and getting ’em trained up, it’s really good to have quick ways to be able to test.
[00:01:33] Did they go through the materials? Do they understand it? Let’s be optimistic here: even if they are wanting to understand it, maybe, for better or worse, it wasn’t clear in the training material. Maybe you actually didn’t communicate something as clearly as you thought you did.
[00:01:48] And so when you test for understanding, you may not get the answers back that you expect, et cetera. Either way, the end goal of training is always for the people that are going through it to have [00:02:00] an understanding of that material and to be able to put it into practice.
[00:02:04] Does that sound accurate? That’s like the reason why we give people training, right?
[00:02:08] Charlie Madison: Exactly, and it’s the old inspect what you expect, right?
[00:02:15] Zach Hammer: Exactly. And so now, why? So that’s why it’s important, but why don’t we already do this, right? Why don’t we already put the time into putting together these questions and checking for understanding? Pretty simply: it’s hard, right? It’s a completely different frame of mind to try and convey a concept versus putting yourself in the shoes of somebody who doesn’t know the information, and trying to think through questions that aren’t just “literally regurgitate the words that came out of my mouth back to me,” ’cause that’s not really what you wanna do. That doesn’t make for good testing of understanding.
[00:02:51] You could train a parrot to repeat words; that doesn’t mean it knows how to implement your systems. It just means it can repeat your words. [00:03:00] And what’s powerful right now about leveraging AI to do this is that we can take something that was previously hard, where you have to put yourself into a frame of mind that’s hard to put yourself into. If we’re gonna be honest, it’s hard to put yourself in the shoes of somebody who doesn’t know what you know.
[00:03:20] It’s probably one of the most difficult things that I’ve seen for humans in general: we aren’t good at remembering what it was like before we knew what we know now. What is really good at it?
[00:03:32] Charlie Madison: I am guessing
[00:03:32] Zach Hammer: AI. It’s been trained on basically all of language, all of language that is publicly available.
[00:03:39] It’s pretty good at that. It’s pretty good at pretending like it’s in a different scenario. So that’s what we’re gonna cover today. How does that sound? Any thoughts or questions, or things that you think people might want to take away from this as we get into it and I start diving into how we implement this?
[00:03:53] Charlie Madison: I’m excited about this because, truthfully for me, as you show this, it’s finally [00:04:00] possible for me to constantly improve my training, because I can get feedback from the results, actual feedback. Because there’s a reason that teachers get paid.
[00:04:14] Like, it’s a different animal. And so to be able to not have to know that skill, and still get at least 80% of the way there, is pretty exciting. So I’m looking forward to this.
[00:04:29] Zach Hammer: Awesome. Awesome. So let’s dive into it. The goal is to make this as painless and repeatable as possible. Anybody who’s heard me talk about AI implementation knows one of the caveats that I do throw out there: I’m going to make this drastically easier, but that doesn’t mean there isn’t work.
[00:04:47] So if you’re expecting AI to make it not take work, you’re going to be disappointed. But if you want to be able to get a result that is drastically more leveraged [00:05:00] and a lot easier than it used to be, then this is gonna help. So with that in mind, let’s go through the process.
[00:05:06] So first off, typically I find this process will work whether you’re giving people written instructions or video instructions. If you give them video instructions, you do have to process that first in order to be able to go through the rest of this process. I do find that most people, when they’re doing some sort of training, tend to flow easiest
[00:05:27] recording videos, doing things like Looms, screen shares where they’re showing the process, explaining the process however makes sense to them. You can definitely start there, and that is a great place to do this. If you happen to be somebody who leads with writing and you think best by writing, then feel free to go that way as well.
[00:05:43] That is perfectly fine and acceptable; it’ll work great for this process as well. So you can write out instructions, you can write out your training, and that would be fine. You don’t have to, but it’s a good, valid option if that’s what works for you. If you have written training, you can skip the first part of this process.
[00:05:59] [00:06:00] For most people, who are gonna be using video, that first part is getting your video transcribed. There are lots of different tools to be able to do this. One of my favorite ones for the Mac is called MacWhisper. It uses OpenAI’s Whisper technology. The app is completely free and allows you to upload video or audio and get back a complete transcript, completely for free, using the hardware on your computer to do the processing.
[00:06:25] I love that one because it tends to do a great job. It runs locally, it runs quickly, it runs effectively, and it doesn’t cost anything. So that’s powerful. If you don’t have a Mac and you need something else, there are options. A few other things that I’ve seen work well: if you have Slack, you can upload videos into Slack, and the transcriptions from that will be good.
[00:06:43] If you don’t have Slack, you can upload videos into YouTube, get automated captions, and download that caption file. That will also be good. It’s not gonna be perfect; I do find that the OpenAI Whisper technology is a little bit better. But honestly, this stuff’s improving all the time.
[00:06:59] So it’s very [00:07:00] possible it’s gonna be equivalent regardless, and it’s just a matter of preference for your process. So those are some good options. You can go there, get those downloaded, and literally just throw ’em into a text document, a Google document, whatever. Just get the entire transcript.
[00:07:18] Now, this is separate, but typically I recommend when it comes to training: strive for only teaching 15-minute segments at a time, at most. Shorter is okay too. If a concept takes longer than 15 minutes to explain, you probably wanna break it out into multiple videos and teach substeps rather than teaching the whole thing.
[00:07:36] If you do that, then this should work, because the size of your transcript should fit within the context window of ChatGPT using GPT-4 (you’ve gotta be paying for ChatGPT Plus). If not, or if you’ve got a really long transcript, you might need a longer context window.
[00:07:51] And you could use a tool like Claude for that. Claude has a massive window of memory, so it can understand large swaths of text. Actually, it can be good at this [00:08:00] in general. I tend to find that GPT-4’s ability to process this kind of data is a little bit better,
[00:08:05] as long as it fits within the window of what it can process. If the transcript goes beyond that window, then you might need to rely on Claude. I’ve just found Claude isn’t quite as good, if that makes sense. It’s not as good at coming up with the next steps in this process, the actual creating of the questions, et cetera. It’ll still be good, and it’ll be a great starting point; just keep those caveats in mind.
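One way to handle the "does it fit in the window" question is to estimate the transcript's size up front and split it into chunks if it's too big. Here's a rough sketch; the four-characters-per-token ratio and the token budget are rule-of-thumb assumptions, not any model's exact tokenizer.

```python
# Sketch: estimate whether a transcript fits a context window, and split it
# on paragraph breaks if not. ~4 characters per token is a common English
# rule of thumb, not an exact count.

def estimate_tokens(text):
    """Very rough token estimate: ~4 characters per token for English."""
    return len(text) // 4

def chunk_transcript(text, max_tokens=6000):
    """Split a transcript on blank lines so each chunk stays under budget."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        candidate = current + "\n\n" + para if current else para
        if estimate_tokens(candidate) > max_tokens and current:
            chunks.append(current)   # close out the current chunk
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# Demo with a synthetic transcript of ten ~1,000-character paragraphs.
transcript = "\n\n".join("Paragraph %d: " % i + "word " * 200 for i in range(10))
chunks = chunk_transcript(transcript, max_tokens=300)
print(len(chunks), "chunks")
```

If a chunk still won't fit, that's usually a sign the underlying video is longer than the 15-minute segments recommended above.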
[00:08:20] And so then, now that you have your transcript for your training, you take that and you’re gonna feed it into ChatGPT, or you’re gonna feed it into Claude, and then you’re gonna ask it to generate questions for you, right?
[00:08:32] Now, you probably want to ask it to generate questions in a way where the questions are actually good, where they test for understanding in good ways, et cetera. Would you like me to go over some of what you should tell the AI system so that it gives you back those kinds of questions?
[00:08:47] Would you want me to go over that?
[00:08:48] Charlie Madison: Yes, please do.
[00:08:52] Zach Hammer: All right, great. First off, just like normal, we’re gonna use our mega prompt framework as how we build this out. There’s a couple of things that we want to tell it.[00:09:00] We want to tell it to simulate a persona. The kind of persona for this is gonna be something like: I want you to be an expert instructional designer, familiar with testing and learning adoption and understanding.
[00:09:15] You could come up with something better than that, but that’s the key idea. We want it to take on the role of somebody who really understands how to ask good questions; that’s the basic idea. Then we’re gonna give it the task of coming up with these questions.
[00:09:24] So the task is gonna be something like: I want you to develop questions to test for understanding of the training material I will provide to you. We’re going to give it the goal of saying: the goal is to create excellent questions that thoroughly test for understanding of the training material, based on the transcript or on the written information.
[00:09:43] Then we’re gonna have it do some steps as well. Steps would be things like, step one, I want you to fully understand the training material. Step two, I want you to generate an extensive list of questions that would test for understanding of this material based on criteria that [00:10:00] I’ll give you below.
[00:10:00] And then you might not even have any other steps. If you use AI, it’ll probably help you come up with better steps than that, but that’s the basic idea. Then add any context or constraints you have; say you’re using a specific training system. I like to use what’s called an LMS, a Learning Management System,
[00:10:15] and that allows for specific types of questions. So if you have that sort of thing, where you know that you have limitations because you want to integrate it into a tool, you can actually tell it the kinds of questions it can give you: multiple choice, matching, fill in the blank,
[00:10:29] whatever your different types are; you can tell it those kinds of criteria. We’ll go over some of those options in a second. So you’re gonna give it context. You could tell it things like tone of voice. You might give it more context about your business, or what you’re trying to get people to learn, et cetera.
[00:10:43] And so that’s your overall instructions for it. The other thing that you’re gonna wanna throw in is examples or information about what makes for good questions that test for understanding, ’cause not all questions are created equal. If somebody just regurgitates the information,[00:11:00] that’s not the same caliber of testing as something that really forces ’em to take and apply the knowledge in a way that you can see is relevant.
[00:11:07] And, in terms of how to do this, there’s partially a question of implementation on your part. Do you have the bandwidth to actually review answers? Do you want that level of involvement? Do you want to, somebody goes through this process, goes through your quiz. Do you want to have to review the information in order for them to move forward?
[00:11:24] Or would you rather it be completely automated? If it’s completely automated, you’re probably not gonna be able to test to as high a degree of understanding as you will be if you’re willing to review it. But again, you set these based on, done is better than perfect. What is the level of involvement you’re able to put into something like this?
[00:11:40] And then adapt as needed. So, that’s kinda the idea of what you would throw in, into the prompt. I’m gonna give you the criteria here in a second that would also go into that. But so far any questions on the laying the groundwork for how we’re instructing the AI to go through this process and to start to give us these questions?
[00:11:55] Any questions so far?
[00:11:57] Charlie Madison: I think that makes sense. We [00:12:00] get the transcript if we don’t have it already. We send the transcript to ChatGPT if it’s small enough, or Claude if it’s larger. And first we tell it who it is, then we say, understand this, and then third, we tell it the types of questions we want it to ask.
[00:12:22] Is that right?
[00:12:24] Zach Hammer: Right. That at a basic level. So we’ve done an episode before on the mega prompt framework. Really, we’re just applying that concept to this end goal of creating good questions. So the same concepts that I apply there, we just use those same building blocks of persona, task, goal steps, context and constraints, right? We build our prompt based on those concepts.
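Those building blocks (persona, task, goal, steps, context and constraints) can be sketched as a simple template. This is one illustrative way to assemble the pieces; the function name and the exact wording are ours, not a fixed format, so adapt each block to your own training material.

```python
# Sketch of assembling a "mega prompt" from the building blocks discussed:
# persona, task, goal, steps, and context/constraints. The wording below is
# illustrative only.

def build_mega_prompt(persona, task, goal, steps, constraints, transcript):
    steps_text = "\n".join("%d. %s" % (i, s) for i, s in enumerate(steps, 1))
    constraints_text = "\n".join("- %s" % c for c in constraints)
    return (
        "Persona: %s\n\n"
        "Task: %s\n\n"
        "Goal: %s\n\n"
        "Steps:\n%s\n\n"
        "Context and constraints:\n%s\n\n"
        "Training material:\n%s"
    ) % (persona, task, goal, steps_text, constraints_text, transcript)

prompt = build_mega_prompt(
    persona="You are an expert instructional designer familiar with testing "
            "for learning adoption and understanding.",
    task="Develop questions to test for understanding of the training "
         "material I will provide to you.",
    goal="Create excellent questions that thoroughly test for understanding "
         "of the training material.",
    steps=[
        "Fully understand the training material.",
        "Generate an extensive list of questions based on the criteria below.",
    ],
    constraints=[
        "Only use these question types: multiple choice, matching, fill in the blank.",
        "Include plausible distractors in multiple choice questions.",
    ],
    transcript="(paste your training transcript here)",
)
print(prompt)
```

You'd paste the assembled text into ChatGPT or Claude along with your transcript; the template just keeps the blocks in a consistent order.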
[00:12:48] And so, let’s go ahead and continue, and let’s flesh out what that context and those constraints would be. When you’re deciding if you wanna go more toward automated versus deeper understanding, you might tell it to adapt based on [00:13:00] what you’re looking for; or if you want a mix of both, then you can just feed in all of these, depending on what you’re looking for.
[00:13:05] But for the automated processing end, the kinds of questions that you’re going to allow it to give you back are gonna be things like multiple choice, where you’re testing for recognition and ensuring clarity and relevance, making sure that they understand how to find the right answer among a number of answers. You can give them true or false to test their basic understanding of what is the right way to think about something. And fill in the blank; fill in the blank can be combined with multiple choice.
[00:13:32] If there’s a clear, exact right answer, then fill in the blank can be automated. Fill in the blank might require a little bit of manual review if there might be some variation in what a right answer could be. Matching can also be useful, depending on what you’re testing for.
[00:13:47] You and I, for instance, we just recorded an awesome episode on the Lead Efficiency Index. If we were gonna test somebody on that concept, we might present them with a few different [00:14:00] descriptions of a lead and then have them match each one to a Lead Efficiency Index example.
[00:14:06] And so it’s a combination of multiple choice, but applying the information, and that can be automated where there is a clear right answer. So that kind of concept can work. Those are the automated processing options. So, some tips for that: you want to tell the AI to use clear, concise language.
[00:14:22] You wanna make sure to tell it what the learning objective is, like what are the takeaways that you wanna make sure somebody walks away understanding. Feed that into the AI as well, if possible, so that it knows how to give you back something as close to the right kinds of questions as possible.
[00:14:38] You want it to include plausible distractors in something like a multiple choice. So what that means is you want there to be at least a few answers that look like they could be right and have them have to really understand what makes them different in order to pick the one that is right.
[00:14:55] So again, you feed that concept into the prompt for multiple choice, so it knows there need to [00:15:00] be plausible distractors. Now, do note, on this level AI is hit or miss on how good a result it’s gonna get you. It’s gonna try for what it thinks is a plausible distractor, but this is really a thing where, think of AI as being able to get you, like you said, 80% of the way there.
[00:15:19] You’re gonna need to go through the questions personally. You’re gonna need to double check the potential answers, the potential options, and you are going to need to make sure that they add up and make sense because it’s probably gonna be a little bit off, a little bit wrong, in the vein of trying to get you to that right answer. Just make sure that you’re planning for that.
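Since you're going to review the generated questions by hand anyway, a small script can catch the purely mechanical problems first, so your review time goes to judging the wording of the distractors. This is a sketch; the dict shape for a question is an assumed format, not any particular LMS's schema.

```python
# Sanity check for AI-generated multiple-choice questions before human review.
# Catches mechanical issues: too few options, not exactly one correct answer,
# duplicate option text. It cannot judge whether distractors are plausible.

def validate_mcq(question, min_options=4):
    """Return a list of problems found; an empty list means it passed."""
    problems = []
    options = question.get("options", [])
    if len(options) < min_options:
        problems.append("only %d options; want at least %d" % (len(options), min_options))
    correct = [o for o in options if o.get("correct")]
    if len(correct) != 1:
        problems.append("expected exactly 1 correct answer, found %d" % len(correct))
    texts = [o.get("text", "").strip().lower() for o in options]
    if len(set(texts)) != len(texts):
        problems.append("duplicate answer options")
    return problems

q = {
    "prompt": "What does testing for understanding primarily verify?",
    "options": [
        {"text": "That the learner can apply the material", "correct": True},
        {"text": "That the video was played to the end", "correct": False},
        {"text": "That the learner can repeat the words verbatim", "correct": False},
        {"text": "That the training was recorded in HD", "correct": False},
    ],
}
print(validate_mcq(q))  # []
```

Anything the check can't verify, like whether a distractor is genuinely plausible, still needs the human pass described above.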
[00:15:36] And then the other thing that could be useful, and this is less of a big deal for AI specifically, but if the tools that you use can do this, it can be helpful, is to randomize the order of the questions and answers, and maybe even which questions each person gets.
[00:15:49] Depending on the system that you’re using, you might have a pool of questions that it pulls from, rather than just a set of five that everybody gets, if that makes sense. [00:16:00] A lot of this is based on the tools that you have available, but those are some of the concepts for the automated processing.
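The pool-and-randomize idea can be sketched in a few lines, assuming the same simple question format as before. Whether you can actually use something like this depends on the LMS you have, as noted.

```python
# Sketch: draw a random subset of questions from a pool and shuffle each
# question's answer order, so each learner gets a different quiz rather than
# the same fixed five questions.

import random

def draw_quiz(pool, n=5, seed=None):
    """Pick n questions at random and return copies with shuffled options."""
    rng = random.Random(seed)
    quiz = []
    for q in rng.sample(pool, min(n, len(pool))):
        q = dict(q)                                   # shallow copy; don't mutate the pool
        q["options"] = rng.sample(q["options"], len(q["options"]))  # shuffled copy
        quiz.append(q)
    return quiz

pool = [{"prompt": "Question %d" % i, "options": ["A", "B", "C", "D"]} for i in range(12)]
quiz = draw_quiz(pool, n=5, seed=42)
print([q["prompt"] for q in quiz])
```

Passing a seed makes a draw reproducible for testing; leave it off in production so every learner gets a fresh shuffle.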
[00:16:06] Now, you might need to test for deeper understanding, to say: this is more of a nuanced concept and there isn’t exactly a right answer; or, I really need to make sure that somebody understands this from an application sort of way, from applied learning. Those types of things are typically gonna require some sort of human review. You can maybe do some where the answers are fed into AI and then the AI tells you if it thinks they answered right or not; I wouldn’t rely on that yet.
[00:16:33] That’s gonna be hit or miss. If you feel like it requires a deep understanding, you probably wanna have a human actually review it. The kinds of questions that fit for that are gonna be things like short answer, or long answer slash essay. Those sorts of things test for the ability to articulate the concepts, especially on their own, where they’re putting it into their own words, and you’re trying to flesh out the concepts and see how well they understand it.
[00:16:56] Longer answers are gonna require depth and the [00:17:00] ability to apply the knowledge, maybe in different settings, so you get more opportunity to see somebody applying that.
[00:17:05] And then one of the last ones in this vein, which you may not even be able to have them submit something for: it may be that you literally design a test that’s meant to be done together.
[00:17:16] And sometimes you could do it this way, sometimes not, but demonstrating the learned skillset. So this could be things like if you went through a training on sales skills, you might have somebody either role play with you to demonstrate those things happening and that’s the kind of environment where you need to test something like that.
[00:17:32] Or it might be that you literally have somebody hop on the phone and start making calls and get to test things that they’re doing or demonstrate the information by writing an email, putting together a design piece creating a video, creating whatever it is, right?
[00:17:47] You might have somebody, not just a written answer, but you might have them literally demonstrate understanding of whatever skill or concept you’re training them on by having them, go through and do something that showcases that learning.[00:18:00] In those you’re really wanting to make sure that you’re probing for real world application of the topic.
[00:18:05] Depending on what you’re teaching, you might provide different opportunities for different ways to demonstrate: video, audio, design, in person. It really depends on what you’re testing for. On all of these, you wanna make sure that you’re providing sufficient time for thoughtful responses, but not so much time that it drags on.
[00:18:27] So you wanna have that right mix; time as a constraint can be really helpful in testing for some of these concepts. And then, regardless, on all of these you want to make sure, both in what you tell the AI and in what you’re selecting for, that you’re testing for a variety of aspects of the topic, that you’re trying to look at it from a nuanced perspective, and that you’re looking at it from different angles as much as possible. So again, all of those concepts.
[00:18:52] If you take those, list them out, and say, “Hey, AI, I’d like you to generate questions,” (and I would select from what I just said based [00:19:00] on what’s relevant for your use case and what you’re looking to do), then you can put together a prompt.
[00:19:05] And then when you feed in the transcript plus that prompt, which includes the instructions plus your criteria, what it’ll get you back is a list of questions based on the actual training that you put together, not just based on random stuff, but based on your provided context. And it’ll give you back really good, relevant stuff that you can throw into an LMS, or you can print out and test people that way if you’re doing something less dynamic.
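If your LMS accepts a spreadsheet import, the generated questions can be flattened to CSV in a few lines. The column layout here (one row per question, answer in the last column) is an assumed example, not any specific LMS's format; check your system's documented import spec before using it.

```python
# Sketch: write cleaned-up questions out to CSV for an LMS import, or just
# print it for a paper quiz. Column layout is illustrative only.

import csv
import io

questions = [
    {
        "prompt": "Why test for understanding rather than just completion?",
        "options": ["To confirm the material can be applied",
                    "To prove the video was watched",
                    "To measure typing speed",
                    "To rank agents by tenure"],
        "answer": "To confirm the material can be applied",
    },
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["prompt", "option_1", "option_2", "option_3", "option_4", "answer"])
for q in questions:
    writer.writerow([q["prompt"], *q["options"], q["answer"]])
print(buffer.getvalue())
```

Writing to an in-memory buffer keeps the example self-contained; swap in `open("quiz.csv", "w", newline="")` to produce a real file.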
[00:19:31] And then, when tied together with a system around the things that you need somebody to learn, the skill sets that you need them to learn, you can actually flow people through a process where you’re training and testing for understanding, training and testing for understanding.
[00:19:43] And people only flow through when they’ve actually demonstrated some level of acquired knowledge, acquired expertise on the topic. And again, AI makes that drastically easier than it ever has been before. I knew that this kind of thing was good and useful, and I pretty much never [00:20:00] did it because it was too much work to be able to do it properly.
[00:20:04] But now that AI allows that to happen, it’s actually really easy to go through, generate these, get ’em set up, and start being able to test for understanding, all based off of the training itself, which, maybe it’s not easy, but it’s a lot easier to create than it used to be. And when you have the training,
[00:20:20] you can create the derivative questions off of it pretty readily, leveraging AI. Does that all make sense?
[00:20:26] Charlie Madison: That does. Yeah, I might actually do it now.
[00:20:32] Zach Hammer: There you go. There you go. Yeah. What’s your biggest takeaway from this process? What kind of impact do you think it’ll make for you or your business that would be applicable to our audience of real estate professionals here?
[00:20:45] Charlie Madison: It reminds me, I was in a high-level mastermind a few years ago, and there were a lot of team leaders that were successful. One in particular had built a team, and [00:21:00] it was just struggling. He said they weren’t doing well, and he actually created a test for ’em and gave it to ’em, and they all flunked.
[00:21:09] And I think he actually ended up firing them all, even though he was like, I’ve gotta do better. I’ve gotta build a new foundation. And he built the new foundation on making sure that they knew what their brand was, what their USP was, what their buyer process was, what their seller process was. And then he built out a test.
[00:21:35] They got together once a week, and he did much better. He said, we got community, we’re training, we’re testing. And the big aha from that, he said, was the 80/20 rule: my top agents all got nine out of ten on the test. My lower agents still don’t know what we do. So I think, one, it’s a great way to make sure that I’m, [00:22:00] I love the word, not confabulating how well I teach, not thinking that I’m teaching well when I’m not.
[00:22:07] But then also it gives me a great tool: if one of my team members is not doing well, I can look and be like, how well do you know our processes? Because if they don’t know the processes, that could be a simple way to fix it.
[00:22:24] Zach Hammer: Right, yeah, exactly, exactly. And it really does. My typical way of thinking through this is, I like to see people tested on understanding quickly, so that you’re going through a process of learn, test, learn, test. Whenever possible, people learn best when they actually get to put something into practice, into implementation.
[00:22:45] And testing is at least a low-level form of that, where it switches from consumption into some form where they’re having to actively create and think in order to apply that knowledge. [00:23:00] The more they have to apply it, the more strongly they’ve likely internalized it. And so a lot of people miss this, but literally, testing, application, all of that, they’re part of the learning process.
[00:23:12] They’re part of what actually forces you to learn the information. Even if you explain something perfectly, it’s worth understanding that people only retain so much through what they hear. They retain drastically more through what they actually do or what they have to re-teach themselves.
[00:23:29] And so this gives you the tools to be able to try and increase that level of adoption, that level of understanding, of people taking in what you’re teaching. I hope everybody who creates some level of training for their teams, for their business, believes it’s important information that people actually need to understand.
[00:23:47] And if so, it’s probably important that you’re setting them up for success and making sure to test for understanding and whatnot, too. So there you go. If anybody wants more on this, again, we [00:24:00] are in a process right now of figuring out: what are you guys interested in? What do you wanna know more about?
[00:24:05] What do you care about? What’s exciting to you? What kind of topics that we talk about here at Real Estate Growth Hackers are tickling your fancy, so to speak? That’s what we’re on the search for. If you want more of this information, feel free to reach out to us at Real Estate Growth Hackers.
[00:24:18] I don’t know if I got something for you. Maybe I have an SOP. Maybe I have a course. Maybe that’s what you want. I don’t know. You let me know. You reach out, you say, Hey, Zach, I love that information that you said about this thing. Give me more. Here’s how I think I need your help. And we’ll see what we could do for you.
[00:24:33] In the meantime, feel free to check out what we’ve got for you at Real Estate Growth Hackers. If you ever need one-on-one consulting, I do offer some level of that. It is limited, but if you get signed up for my list, or you check us out online, you can find out more information about how to do that, how to get involved in some of our communities, see our courses, all that sort of stuff.
[00:24:50] And until next time, I think that about covers it for how to leverage AI to generate some good questions to test for understanding. So thanks so much for coming out, [00:25:00] Charlie.
[00:25:00] Charlie Madison: I’m looking forward to taking one of my trainings and running it through Claude and seeing what happens.
[00:25:09] Zach Hammer: There you go. There you go. Make sure to share it back with me. I’m curious to see what happens. Bye everyone.
Real Estate Growth Hackers Founder
Zach Hammer is the co-founder of Real Estate Growth Hackers. Over the last 36 months Zach and his team have managed ad budgets well over $100,000, generated over 25,000 real estate leads, and helped create over $50,000,000 in business revenue for their clients. Zach is also a highly sought-after speaker and consultant whose work has impacted some of the top real estate teams and brokerages across the country.