Monday, January 22, 2007

Speed vs. Quality - Still Relevant?

January’s “Big Question” at the Learning Circuits Blog is:

What are the trade-offs between quality learning programs and rapid e-learning, and how do you decide?

My knee-jerk reaction is that the question makes an unspoken assumption that quality and speed are diametrically opposed – that “rapid” should be equated with “low quality”. Although that certainly may be the conventional wisdom position, I’m not sure it holds water under closer inspection.

It’s similar to asking about the trade-offs between ‘fast’ food and ‘good’ food. Upon initial inspection, there’s a shared tacit agreement that a drive-through dinner from your local burger franchise won’t (can’t) hold a candle to the quality of a multi-course meal at your favorite fancy sit-down restaurant.

But if you think more deeply about it for a moment, you may discover that (just as with most things) the “trade-offs” are intimately tied to objectives – or more simply put, “it depends”.

If you are in the midst of running a marathon, “good” and “fast” eating are defined quite differently than if you are having an anniversary dinner with your spouse. Only when you fully understand the needs and expectations of the parties involved in a situation can you properly assess what (if any) trade-offs must be made to serve their objectives effectively.

In the case of (e)Learning, the same argument stands. It’s generally accepted (in theory, at least) that one solution rarely addresses the needs of more than a few – that what is ‘fast’ and ‘good’ to one person or group isn’t the same as for another. The “value” of a solution is highly dependent upon the needs of the target audience.

Consider, for example, the case of Google and how their primary service has been so deeply adopted into the fabric of society that their company name has morphed into a generic verb meaning “to search”. One of the more dramatic results of the service Google provides is that information on (nearly) any topic at (nearly) any time is available to anyone, all with near-instant response times (rapid). Granted, there’s no guarantee of the quality of the information Google supplies in response to a query, but (and here’s the interesting part) that lack of guarantee also doesn’t preclude quality – it actually says nothing about it at all!

So, we are witnessing an interesting swing away from the monolithic course model toward a JIT search-browse-synthesize-apply model. In this world, ‘rapid’ and ‘quality’ can (and often do) comfortably co-exist. There’s no reason a similar, albeit more constrained, replica of this model cannot be adopted and leveraged within individual organizations as well. Instead of leaving the determination of ‘quality’ up to the seeker to discern (as is largely the case today when searching broadly with Google, despite the confidence its ranking algorithms are supposed to proffer), a company could create and offer a multitude of high-quality but quickly accessible and consumable ‘learning bites’ on the most frequent/timely/damaging/profitable topics of interest. By aiming succinct responses at the most leverageable “20” of the 80/20 Pareto Principle – the business/sales/performance themes that matter most – it would be possible to serve the two masters of Speed and Quality without any contradiction (a rough sketch of the idea follows).
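To make the Pareto idea concrete, here’s a minimal sketch in Python. The topic names, impact scores, and the 20% cutoff are entirely invented for illustration – nothing here prescribes a real scoring method.

```python
# Hypothetical sketch: rank candidate topics by some measure of
# business impact (ticket volume, revenue at risk, etc.) and build
# "learning bites" for the top ~20% first.

# Illustrative data only -- names and scores are invented.
topics = {
    "password resets": 120,
    "expense reporting": 95,
    "CRM data entry": 340,
    "product returns": 410,
    "quarterly forecasting": 60,
}

# Sort topics from highest to lowest impact.
ranked = sorted(topics.items(), key=lambda kv: kv[1], reverse=True)

# Take the top 20% (at least one topic) as the first wave.
cutoff = max(1, round(len(ranked) * 0.20))
first_wave = ranked[:cutoff]

for topic, impact in first_wave:
    print(f"Build a learning bite for: {topic} (impact score {impact})")
```

The slicing is trivial; the real work is deciding what “impact” means for your organization – which is exactly the understand-your-objectives point above.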

It’s important to note that in many cases one element will take precedence over the other – when the validity and comprehensiveness of information trumps the amount of time necessary to locate and understand it, or when something that is half-right today is infinitely more valuable than something that is completely right tomorrow. Again, it all comes down to understanding the underlying objectives of your audience and adjusting your ‘S vs. Q’ knob accordingly.

(My response, obviously, takes a consumption-oriented interpretation of the BQ. If one were to read the BQ as relating to production, I might have a slightly different stance in answering.

But then again, maybe not… :-) )

Wednesday, January 3, 2007

What's Easy vs. What's Right

I've been a member of the "eLearning" community for over 15 years now. Throughout that time, I've been consistently frustrated with the elephant in the room that no one wants to acknowledge:

More often than not, T&D departments and the CxOs to whom they report are more interested in going through the motions of training than in the actual impact and effectiveness of those efforts.

What I mean by this is that a disturbingly large amount of money, time, and energy is devoted to getting course-completion "checkmarks" for the entire employee population, regardless of whether that tornado of effort and funding actually makes any difference.

For example, why are Multiple-Choice Questions (MCQs) so entrenched in corporate (and K-12) training? Is it because they are especially effective? No. It's because they are efficient. And efficiency and efficacy are rarely related.

MCQs are a terrifically efficient way to move a large number of people through an evaluation process in a quick and easily quantifiable fashion. The problem is that most MCQs are structured so poorly that it's often relatively easy to discern the correct answer without any actual knowledge of, or background in, the subject. (I once took an internal "certification exam" that a client used to qualify their employees – sight unseen and with no background in the subject – and scored a passing grade: 78%, if I recall correctly.)

Even if the MCQs you create don't fall into this category (and it is possible to create decent MCQs, with a little effort and creativity), the extra effort may not be justified, because life and work don't often consist of selecting from menus of options. Recognition doesn't equal Recall, and neither equals Performance. And Performance is (usually) what we are really seeking.

This old rant of mine was recently rekindled by a terrific article by Sarah Boehle on the disconnect between Level 1 evaluation ("smiley sheet") results and the actual effectiveness of that training as measured at Levels 3 and 4 (on-the-job behavior and business results).

I won't attempt to reframe her arguments, as she makes a strong and pointed case, leveraging a few folks I respect (Roger Chevalier and Will Thalheimer), but I will pull out a couple of items that had me cheering:
"Why do so many companies make this mistake? In all likelihood, they do so because the steps they must take in order to get their hands on the objective information necessary to accurately assess training effectiveness can be complex, difficult, costly and time-intensive. "
and
"From my experience talking to corporate training departments, the calculus in the field seems to be, 'Don't do good evaluations,' " Thalheimer says. "Why would anyone want to? If you evaluate at all levels, only bad things can happen. If you use Level I and get good results, then you're maintaining the status quo and everyone assumes that training is doing a good job. If you evaluate more rigorously and get bad results, however, all hell breaks loose."
There seems to be some sort of unspoken conspiracy/agreement between T&D and upper management: "don't ask, don't tell". T&D keeps shoveling meaningless stats to the boardroom that help justify its existence, and the boardroom keeps accepting them as a way to feel better about the "necessary evil" it sees T&D to be. It's an easier, cleaner, friendlier way to go about business, but it's an enormous waste of resources all around.

T&D professionals have an obligation to "get dirty" and dig into some of the hard questions and actions associated with making training *matter*. And that involves taking the road less traveled and providing some solid business rationale for why the annual budget has a line item for T&D, whether the CxO is asking for it or not. Until that happens, T&D will never be considered anything more than a "necessary evil" in the boardroom.