Likeability is Likeable, but Ineffective: Better Ways to Evaluate Training Impact
Updated: Nov 17, 2020
A client told me,
When we do X, only 50% of people say they like the training; when we do Y, more people say they like it. I'm not saying we have to do it that way, but that's just what we're hearing.
Here's another one to consider:
300 people took this class. At the end, we asked them 'would you recommend this course to other people?', and 97% of them said yes. They especially liked the instructor.
So, I ask you: is likeability a good measure of whether you should keep an approach in your training toolkit or remove it? Certainly, user experience is important.
But no. Alone, likeability is a poor measure of instructional effectiveness.
I ask you to look at the examples above. What do the numbers say about performance?
If my instructional goal was, for example, to get more people to self-report when they make errors on the job, does it mean they're doing that simply because they enjoyed the training?
If my goal was to get my customer service team to improve the customer experience, does their recommendation of the course to other colleagues mean that customers are happier?
I'm guessing you see the answer is no. There are far better measures, and for the examples above, you can see them built into the questions themselves. You could look at error-reporting rates, or in the case of customer service, you could check whether customer complaints on social media have gone down. Those would tell you a more meaningful story about training impact.
So why are we still making decisions about learning delivery based on likeability alone? It's because likeability is both quick and simple to quantify. It's an easy trap to fall into, particularly when you haven't outlined goals or an evaluation plan from the beginning of your project.
The Difference Between Treats and Treatments
The difference in approaches really comes down to goal-defining and goal-setting.
For example, my dog really likes treats, and she'll do or try to learn any trick for a treat. If I gave Cupid a dog training satisfaction survey, she'd probably say something like "BARK BARK BARK BARK WOOF BARK BARK."
TRANSLATION: Treats are the best training tool. More treats, please!
That means, according to the latest dog training satisfaction survey, the best training strategy is unlimited treats.
Kinda ridiculous, right?
Well, I'd be measuring the wrong things. My real goal is to improve my dog's obedience so that she is safe.
That's the goal I've defined, and that's the goal I need to measure. So when I decide whether the training has been successful, I should look at things like:
Does Cupid acknowledge when her name is called?
How many times has she ignored me when off the leash outside?
How many times do I have to say "Come" before she returns to me?
Will she try to bite me if I take food away from her or don't give it to her?
Any of those measures would give me a much better idea of whether she's being properly trained (treatment) or not than whether she enjoys training (treat).
I can easily tell you that my dog comes when called for a treat, but if she's interested in something outside, she will ignore me-- that training has not been effective, as it only works in the training room.
On the flip side, if I say "leave it" or I put my hand near her mouth while she's eating, she leaves the food alone and does not bite or show aggression-- that training has been effective, whether she enjoyed practicing that skill or not.
In this case, more treats for training is not an option-- Cupid is a little chihuahua mix, and she'll become overweight quickly if I give her too many treats. That affects her joint health. So I have to be creative-- toys, clickers, using dog friends to model behaviors, pretending it's bed time (my dog is really into napping), etc. I have to find methods that might not be as likeable as treats, but have actual demonstrable impact in the field.
It's a work in progress, but with the right measures, I can see what training methods work (treatment), not just what my dog likes doing (treats).
When you start your learning project, plan for the long term. Even if you're a freelance consultant who will never get to see that far ahead, make sure your client has parameters for evaluating the impact of their training and the tools they need to successfully perform these evaluations.
Create a list of questions and measurements that can help you decide to what degree the training was effective. That's the only way you'll truly understand whether the work being done was impactful and achieved its purpose. Then, create a treatment/training approach based on those goals.
If you can improve the learner experience and still maintain effective training, awesome! Otherwise, your primary target is demonstrable training impact (and really, likeability often comes with the realization that training has been effective, whether a student was "entertained" or not).
Want to learn more about needs analysis and goal setting? Check out From Data to Design!