The ABC’s of Good Training: How to Evaluate a Training Program
Poor training programs are a waste of time and company resources
Good training incorporates Accountability, Benchmarking, and Context into its design
You can learn to evaluate potential training using these elements
The two-day management workshop you and your management team attended was amazing. The discussion was deep, the points were insightful, and you have tons of ideas you’re going to take back to your team. Six months later, though, nothing’s changed.
The good news is that the training in this scenario likely had excellent content. The bad news is that, in spite of this, the resulting lack of real change is commonplace. For the company in this scenario, pre- and immediately post-training surveys indicated that attitudes had changed and that progress would follow. Yet within six months, the company discovered that attendees hadn’t retained the material or changed their behavior. The status quo persisted. This company is not alone. Over the last three years, companies have spent more than $480 billion on training and education, yet we still have untrained leaders, ineffective management, and under-trained employees.
The reason for this is that most training falls short in one way or another.
Nobody wants to be responsible for bad training, but most of us aren’t equipped to evaluate whether a training program is a valuable investment. If you’re not already using a system like human-centered instruction to design your Learning & Development, you can use three quick criteria to assess a training program’s effectiveness. This is the ABC test.
No matter how good the content, how great your learning management system, or how competent your experts, if you don’t address the ABC’s of accountability, benchmarking, and context, your training will be ineffective. Luckily, you can learn to assess these elements before buying into a training program.
A is for Accountability
Accountability is the most critical aspect of developing quality training programs. Research has shown that no matter how good the material, the majority of learning takes place after the formal instruction. Lack of accountability and follow-up is the single largest reason that “nothing changes” after training.
In addition, unless the learner has self-opted into the training, it’s also unrealistic to ask the learner to be solely accountable for applying the training content to their role. More likely than not, the daily pressures of the learner’s role won’t afford them the space to put new learnings into practice. Only one individual has the authority to both follow up with the learner and create opportunities to apply the instruction: the learner’s manager.
The manager is the single most influential person in the learner’s success. However, most managers are unaware of the role they play. Even something as simple as a manager checking in post-training to ask what was learned and what the learner is planning to do with the training can more than double knowledge retention. And when the manager actively creates opportunities for learning and application, retention jumps as high as 80%.
When training takes accountability seriously, it invests as much into ensuring that managers have guidance on how to support their learners as it invests into the instruction itself. Prepared managers can use 1:1s, goal setting, and other facilitation tools to both reinforce learning and to demonstrate that the learning is valued.
The Test for Accountability: Does the training include guidelines, instructions, and tools for the learner’s managers in order to facilitate learning and retention after the main instruction? Has the instructor made clear their commitment to learners and their managers post-instruction?
B is for Benchmarking
Training programs that teach skills but never measure outcomes are far more common than anyone likes to admit. The purpose of training is to effect behavioral change, but to do this, the behavior must be measured. This process of checking and comparing against a point of reference is known as “benchmarking,” and it does two important things for good training.
First, the existence of a benchmark implies both that something is being measured and that the results of the training will be compared against the starting point. Good measurement goes deeper than satisfaction surveys; it leverages the key business metrics to which the training is connected. When no metrics exist, good instructors will establish, quantify, and report on new metrics before the training begins. If it matters, it’s measured.
Second, benchmarking requires instructors to identify how the training will close the gaps in what was measured. In practice, this often results in “Learning Objectives,” which set the expectations for the knowledge, skills, and abilities learners will gain through the training. Coupled with accountability, these Learning Objectives inform managers about what opportunities must be created and what the expectations are for learner ability. Coupled with context, these Learning Objectives guide the customization of instruction to better meet the needs of the individuals.
Yet so much training never includes either aspect of benchmarking, and companies miss out on the ROI they sought when they invested in the program.
The Test for Benchmarking: Does the training have clearly stated Learning Objectives, and does it outline how meeting these objectives will affect the desired organizational change? Are the effects of organizational change being measured and reported?
C is for Context
Generic training sucks. From multi-month programs to half-day workshops, too many trainings are optimized for the instructor, not the learner. Most training programs are meant to be delivered broadly and repeatedly, designed one-size-fits-all at the expense of nuance. This laziness is a sign of bad training. Your business has nuance. Your team has different personalities, different Learning Objectives, and different gaps that need to be addressed. And training that lacks context will fail to maximize its opportunity to effect change.
The ability to adapt is at the heart of training that leverages context. In some cases, that means extensive learner interviews ahead of instruction to ensure the training is real, relevant, and repeatable for your specific context. In others, it means allowing the training to evolve when new information emerges.
One of the most common techniques instructional designers use to build context is a Needs Assessment. Unlike benchmarking, which focuses on the training’s impact on business value, a Needs Assessment focuses on the individual gaps in knowledge, abilities, interests, and approaches pertinent to the learners. Most assessments are done with a combination of focus groups, interviews, community meetings, and surveys.
To create both individual and organizational alignment, the results of the Needs Assessment should be placed alongside the benchmarking and organizational needs. Together, these assessments paint a holistic picture of the training needs and allow the instructor to better select instructional tools, delivery techniques, and accountability systems to reinforce the training.
The Test for Context: Will the learners be interviewed or otherwise assessed before the training? Can the instructor lay out a plan for how the training will evolve in response to these interviews?
Putting the ABC’s to the Test
You can apply the ABC’s to any training program, whether it covers leadership, technical, or vocational skills. As an example, let’s look at how I applied the ABC’s to the technical training program I built at Nav.
Accountability - Managers were directly involved in creating opportunities for learners to apply the lessons in their daily roles. Managers received additional coaching on how to mentor, guide, and develop talent with new technical responsibilities.
Benchmarking - Models of effective React development, along with bootcamp curricula used by incubators, informed the training’s design. At the outset, we chose “story points” as the velocity indicator for measuring improved productivity.
Context - Halfway through React Ready, the needs of the organization shifted from React fundamentals to the application of React at Nav. The second half of the training was restructured around this new information.
It’s a lot to ask of training programs to think holistically about accountability, benchmarking, and context. However, the best trainings - the ones worth paying for - make the ABC’s a priority. While the Nav training would have produced some results if it had utilized only two of these three aspects, its effectiveness would have been greatly reduced.
So learn your ABC’s. They’re a fast way to check your existing training and evaluate potential new training programs. They can also help you identify what’s missing and suggest fixes to get the most value from your L&D programs. It’s as easy as ABC.