Delivering training is not an end in itself; the training has to produce the results you intended. When it comes to developing and delivering a course, many of us have breathed a sigh of relief when the course is ready for learners because we think our job is done. In fact, measuring the effectiveness of the training we produce should be as much a part of the production process as research, revisions and quality assurance.
Looking at how effective your training program is helps to refine your approach to learning and development by keeping what works best and avoiding what doesn’t. This will result in a workforce that is more motivated to learn, among other less-obvious benefits. Let’s take a closer look at how you can find out if your training program is delivering the results you want.
Is Your Training Program Working?
There is a standard model for evaluating training effectiveness called the Kirkpatrick Model, which covers four levels:
- Level 1: Reaction
- Level 2: Learning
- Level 3: Behaviour
- Level 4: Results
Each of these levels measures a different component of post-training results and using all four will give you a complete picture of how your training is doing. The problem, though, is we don’t always have the ability to see or measure all of these levels. For example, if you are in charge of L&D for a national (or global!) organization, you might be delivering training to people you never normally get to interact with. With a little planning, however, it’s not impossible to see how effective your training is, even if that training leaves your control when it’s completed.
We’ve discussed the concepts of these levels before, but let’s review them now and then take a look at how to measure them when you don’t have complete access to your learners post-training.
Level 1: Reaction
This level of feedback is the one you are the most likely to be measuring already. It captures the learners’ reaction to the course based on their own personal impressions and tastes. Learner feedback forms that ask them to rate the course’s clarity, interest and organization on a scale of 1-10 are one example of how to collect data at this level of training effectiveness. You could also interview learners to get their feedback, or sit in on a pilot program and collect their thoughts along the way.
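If you do collect rating-scale feedback, even a handful of responses can be summarized quickly. Here is a minimal sketch of averaging 1-10 ratings per question; the question names and scores are invented for illustration.

```python
from statistics import mean

# Hypothetical Level 1 feedback: each learner rates the course 1-10
# on clarity, interest and organization.
responses = [
    {"clarity": 8, "interest": 7, "organization": 9},
    {"clarity": 6, "interest": 8, "organization": 7},
    {"clarity": 9, "interest": 6, "organization": 8},
]

# Average each question's ratings across all learners.
averages = {
    question: mean(r[question] for r in responses)
    for question in responses[0]
}
print(averages)
```

Low averages on a single question (say, clarity) point you at what to revise first, rather than leaving you with a vague sense that "the feedback was mixed."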
You may not be able to get feedback from actual learners, but that doesn’t mean you have to skip this level! It can be helpful to take the course yourself from the beginning, as if you were a learner, and see what stands out to you. Is there a lot of text to sit and read or listen to? Is it clear? Does the order make sense?
It can be hard to pretend to have no prior knowledge of a course if you had a hand in creating it, so another useful strategy is to have someone else, like a colleague, take your course. If they’re willing, you can even gain a lot of insight by watching them take the course. Do they seem confused or bored? Are there certain screens or concepts that they have to read multiple times or repeat? Seeing how a fresh set of eyes experiences the course can be illuminating, and a decent substitute if you’re not able to ask “real” learners.
Level 2: Learning
This level of the model deals with measuring how much of the content the learners retained. The simplest way to measure this is by providing pre-tests to see how much they knew upon starting the course and comparing that to their results on a final exam. This can also be seen by supervisors in task assessments or practical evaluations, where learners perform a task and are marked on how well they perform it according to the standard set out in the course.
Including practical evaluations or delayed assessments is a great way to get data on this level. If pre-tests, post-tests, task evaluations or an on-the-job assessment are included as a part of the course, instructors are much more likely to include them as a part of the training module and share this data with you as well.
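The pre-test/post-test comparison above can be boiled down to a single number. The sketch below uses normalized learning gain (the share of the possible improvement each learner actually achieved); the scores are hypothetical and assumed to be percentages.

```python
def average_learning_gain(pre_scores, post_scores):
    """Average normalized learning gain across learners.

    Normalized gain = (post - pre) / (100 - pre): the fraction of
    the room for improvement that each learner actually closed.
    Scores are assumed to be percentages (0-100).
    """
    gains = [
        (post - pre) / (100 - pre)
        for pre, post in zip(pre_scores, post_scores)
        if pre < 100  # learners who started at 100% have no room to gain
    ]
    return sum(gains) / len(gains) if gains else 0.0

# Hypothetical scores for five learners
pre = [40, 55, 60, 70, 50]
post = [75, 80, 90, 85, 70]
print(round(average_learning_gain(pre, post), 2))  # → 0.56
```

Normalized gain is fairer than a raw score difference because a learner who starts at 70% has less room to improve than one who starts at 40%.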
Level 3: Behaviour
Levels 3 and 4 are the ones that we most often forget to measure, as they are more complex and require more time. Since they also require access to learners beyond the scope of the specific training, they depend on a willingness to do follow-up work. If you are developing training for your own personnel or members, you are in a great position to see how the objectives in the training are put into practice after the formal evaluations are over.
One simple way to see behaviour change is to provide job aids or other reference materials, with the expectation that learners will refer to them as needed later on. Seeing that people who took a course are accessing the references is a good indicator that they are seeking to modify their behaviour and checking that their work aligns with the new standard.
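If the reference materials live somewhere you can log access to (a shared drive or intranet page, say), a rough adoption rate is easy to compute. The learner names and access log below are invented for illustration.

```python
# Hypothetical Level 3 signal: which course completers later opened
# the job aid, based on an access log you control.
completers = {"ana", "bo", "cy", "dee"}
job_aid_views = ["ana", "cy", "ana", "dee"]  # one entry per view event

# Learners who both finished the course and came back to the job aid.
adopters = completers & set(job_aid_views)
adoption_rate = len(adopters) / len(completers)
print(f"{adoption_rate:.0%} of learners referred back to the job aid")
```

A rate like this is only a proxy for behaviour change, but tracked over a few months it tells you whether the course's standards are actually being consulted on the job.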
Level 4: Results
The idea of this level of the model is to see what impact, if any, the training had on specific outcomes like revenue, safety, or efficiency. Since this measurement has to come well after the training is delivered to learners, the planning has to cover a long time span. One way to make sure this level isn’t forgotten is to include the specific desired outcomes in the training needs analysis during the initial planning stages. Is the goal of this training to increase sales? To comply with a new regulation? To change a process? Including the answers to questions like these will help improve the quality of your training, and give you metrics to measure after the training is implemented.
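Once a desired outcome is captured as a baseline metric during the needs analysis, measuring Level 4 can be as simple as comparing before and after. The sales figures below are hypothetical, and real results would also need to account for factors besides the training.

```python
def percent_change(baseline, current):
    """Percent change in a business metric relative to its pre-training baseline."""
    return (current - baseline) / baseline * 100

# Hypothetical example: monthly sales recorded during the training
# needs analysis (baseline) and again six months after rollout.
baseline_sales = 120_000
post_training_sales = 138_000
print(f"Sales change: {percent_change(baseline_sales, post_training_sales):.1f}%")
```

The key is that the baseline is recorded *before* the training goes out; without it, there is nothing to compare the post-training numbers against.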
Determining if your training does what it’s supposed to do is fundamental, yet it’s often overlooked due to the effort and follow-up required. Using a framework like the Kirkpatrick model can help you decide what kinds of results you’re targeting, and whether your training gets you there. It’s tempting to conclude that getting good metrics on training effectiveness is impossible if you don’t manage the training delivery, but with the support of your end users, you can track your training’s impact through all four levels.