Wednesday, January 3, 2007

What's Easy vs. What's Right

I've been a member of the "eLearning" community for over 15 years now. Throughout that time, I've been consistently frustrated by the elephant in the room that no one wants to acknowledge:

More often than not, T&D departments and the CXOs to whom they report are more interested in going through the motions of training than in the actual impact and effectiveness of those efforts.

What I mean by this is that a disturbingly large amount of money, time, and energy is devoted to getting course-completion "checkmarks" for the entire employee population, regardless of whether that tornado of effort and funding actually makes any difference.

For example, why are Multiple-Choice Questions (MCQs) so entrenched in corporate (and K-12) training? Is it because they are especially effective? No. It's because they are efficient. And efficiency and efficacy are rarely related.

MCQs are a terrifically efficient way to move a large number of people through an evaluation process in a quick and easily quantifiable fashion. The problem is that most MCQs are structured in such a ridiculous way that it's often relatively easy to discern the correct answer without any actual knowledge of or background in the subject. (I once took an internal "certification exam" that a client used to qualify their employees, sight unseen and with no background in the subject, and scored a passing grade of 78%, if I recall correctly.) Even if the MCQs you create don't fall into this category (and it is possible to create decent MCQs with a little effort and creativity), the extra effort may not be justified, because life and work don't often consist of selecting from menus of options. Recognition doesn't equal Recall, and neither equals Performance. And Performance is (usually) what we are really seeking.

This old rant of mine was recently rekindled by a terrific article by Sarah Boehle on the disconnect between Level 1 evaluation ("smiley sheet") results and the actual effectiveness of that training as measured at Levels 3 and 4.

I won't attempt to reframe her arguments, as she makes a strong and pointed case, leveraging a few folks I respect (Roger Chevalier and Will Thalheimer), but I will pull out a couple of items that had me cheering:
"Why do so many companies make this mistake? In all likelihood, they do so because the steps they must take in order to get their hands on the objective information necessary to accurately assess training effectiveness can be complex, difficult, costly and time-intensive. "
and
"From my experience talking to corporate training departments, the calculus in the field seems to be, 'Don't do good evaluations,' " Thalheimer says. "Why would anyone want to? If you evaluate at all levels, only bad things can happen. If you use Level I and get good results, then you're maintaining the status quo and everyone assumes that training is doing a good job. If you evaluate more rigorously and get bad results, however, all hell breaks loose."
There seems to be an unspoken "don't ask, don't tell" agreement between T&D and upper management. T&D keeps shoveling meaningless stats to the boardroom that help justify its existence, and the boardroom keeps accepting them because they make it feel better about the "necessary evil" it considers T&D to be. It's an easier, cleaner, and friendlier way to go about business, but it's an enormous waste of resources all around.

T&D professionals have an obligation to "get dirty" and dig into some of the hard questions and actions associated with making training *matter*. And that involves taking the road less traveled and providing some solid business rationale for why the annual budget has a line item for T&D, whether the CxO is asking for it or not. Until that happens, T&D will never be considered anything more than a "necessary evil" in the boardroom.
