Monday, December 19

Training is NOT Snakeoil


After reflecting on the recent topic of Snakeoil for a while, I have decided that it simply does not jibe with the facts.

Laurie Bassi's research shows that organizations that make large investments in training do much better than others. This is because training has both direct and indirect effects on the organization:

  • The direct effect is that employees have the skills and competencies they need to do their jobs.
  • The indirect, and perhaps more important, effects are that employees:
    • Are less likely to leave (provided that leaders are effective and wages are competitive).
    • Develop valuable relationships with customers.

Her research is so powerful that it actually shows that organizations that make large investments in training return 16.3% per year, compared with 10.7% for the S&P 500 index.
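To put that gap in perspective, here is a rough back-of-the-envelope sketch (my own illustration, not part of Bassi's analysis) of how those two annual returns compound over a ten-year horizon:

    # Illustration only: compounding the two annual return figures quoted above.
    def compound_growth(annual_return, years=10):
        """Value of $1 after `years` at a constant annual return."""
        return (1 + annual_return) ** years

    print(f"Training-intensive portfolio: ${compound_growth(0.163):.2f} per $1 invested")
    print(f"S&P 500 index:                ${compound_growth(0.107):.2f} per $1 invested")
    # Roughly $4.53 vs. $2.76 -- a modest yearly edge compounds into a large difference.

A roughly six-point annual advantage, sustained over a decade, comes out to well over half again the cumulative return of the index.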

In The Human Equation, Jeffrey Pfeffer writes that "Virtually all descriptions of high performance management practices emphasize training, and the amount of training provided by commitment as opposed to control-oriented management is substantial" (p. 85).

On the very next page, Pfeffer writes that in times of economic stringency, many U.S. organizations reduce training to make profit goals. Why? Perhaps because if we as trainers have no faith in training, why should the decision-makers?

Yet training works! It is one of the best predictors of organizational success! So why do we on the inside, who perhaps should know better, bash training just as readily as those on the outside? Perhaps because we deal with the most complicated organization of matter in the known universe -- the human brain.

The brain struggling to understand the brain is society trying to explain itself. - Colin Blakemore

Training works...but not always as we predict...and the reason we cannot always predict it is because we are trying to get a set number of neurons in the human brain to light up at exactly the right time...yet we are not quite sure which neurons actually need to light up...a complicated thing training is indeed...yet for the most part, we do quite well...that's pretty good since we are learning ourselves...and the most exciting part is that we are not there yet...we are still learning...

9 comments:

Stuart Kruse said...

Two thoughts come to mind. If perhaps the most important value of training is,

"...that employees:

Are less likely to leave (provided that leaders are effective and wages are competitive).

Develop valuable relationships with customers."

Would we be better off looking at interventions that more directly target these areas? That is, could we be doing other (perhaps more direct, effective, and cheaper) things to meet these goals? For example, better working practices and environments to encourage learners to stay at one workplace.

The other thought:
The issue isn't really whether training works or not; it is WHAT KIND of training or performance interventions work. How can we get a bigger bang for our buck?

Dave Lee said...

Nice post, Don. I agree with you that training, in and of itself, isn't snake oil. Would you want to fly with an airline pilot who has never logged an hour in a simulator or a cockpit? No way! I want him or her trained and then some!

But I think in mindful learner's response I see where some of the confusion comes in. I totally agree with the holistic approach mindful learner advocates. Training is definitely not the cure-all that some may have expected it to be in the past.

Mindful learner suggests we provide "better work processes and environments" to keep employees happy and engaged. First, I've never seen work processes and environments high on either list of why employees stay or why they leave. But more importantly, are work processes and environments actually something that learning professionals can impact?

If a training needs assessment shows solid evidence that it's actually the desk chairs that are causing poor performance in a call center, should the CLO order up new chairs for everyone? Of course, the data should be taken to the business unit head and/or facilities, but the CLO isn't going to spend a cent on either new chairs or, hopefully, on designing any sort of learning intervention.

In the end, I agree with mindful learner that our goal is to get a bigger bang for our buck. But let's not get distracted by what are really non-issues for the learning group.

michael hotrum said...

As trainers/training developers we are necessarily an introspective lot, prone to periods of disillusionment and reflection that often lead to professional self-recrimination - and criticism of our activities. Why is that? From my experience, here are the reasons I often spend time wondering what I'm doing or not doing:

a) I have limited control over the factors that impact my learning design and delivery.
b) I don't have an effective, inexpensive way to identify the impact of my learning design and delivery.
c) My actions are not considered strategic, and I am therefore not able to respond to the real needs or respond at the right time.
d) My strategies and methods are constrained by learning technologies or approaches that were decided upon and implemented for organizational and management reasons, not because they are the best for training purposes.
e) Opportunities for effective learning design that is creative and progressive are sacrificed to expediency and cost - in fact, all training design is thwarted by having to stay with the "way things are always done" - and in turn this doesn't help my job enjoyment.
f) Training is still not seen as a business priority, and opportunities for developing organizational educational plans are sacrificed for episodic training interventions.
g) After twenty years serving the training interests of the corporate sector, I'm taking a different tack - I am now working in a fee-for-service training unit operating out of an academic institution, serving internal and external clients. Will it prove any different? Too soon to say, but as a trainer I am naturally skeptical, so...

Godfrey Parkin said...

"Laurie Bassi's research shows that organizations that make large investments in training do much better than others."

I have not studied her methodology, but the question that springs to my mind whenever I read such a bold assertion is this: does the research prove causality in the direction being interpreted? In other words, do companies who spend more on training produce better profits as a provable result of that spending; or do companies with good profits spend more on training as a result of having more to spend?

And if it is the former, how does this disprove the snake oil assertion? Perhaps 90% of everything spent on training is snake oil -- but the 10% that is not is what tips the balance...

Godfrey Parkin

Donald Clark said...

Actually, Laurie Bassi puts her research into practice -- one of her companies is an investment firm that, of course, invests in companies that do a lot of training -- she practices what she preaches. If you type her name into Google you should get some good hits.

As far as causality, I have never read whether she has checked for it. But if it is the opposite, then we are left pondering why good companies invest in training if it is so bad.

I'm a former military person, so I know training works -- I got to see it first-hand every day for 22 years. That alone proves to me that the snakeoil assertion is dead wrong.

The company I now work for is a Fortune 500 company and one of the best companies to work for. It believed in training from its early days and still does. Did training help it get there? My personal opinion is that it was not the main factor (leadership and vision were), yet it helped, and the money spent on it was not and is not wasted.

Peter Isackson said...

Laurie Bassi’s findings are encouraging and I don’t know anyone in this business who wouldn’t agree that doing training is better than not doing it. But as Godfrey points out that doesn’t tell us anything about how effective the methods and products we call training may be.

Recent medical research has shown that placebos actually do achieve results… but of course not because of their active ingredients. Does that mean we should abandon pharmaceutical research (much too expensive!) and simply put abusive labels on as many jars and bottles as possible to make sure that more people end up “feeling better”? Laurie Bassi’s purely economic reasoning may amount to no more than that. Her financial ambitions need go no further. She’s betting on stable correlations without really needing to wonder why they exist.

The question is how many people might benefit from something that isn't just leveraging the placebo effect. The challenge is doing something better than snake oil. And the ultimate question is (as Michael Hotrum frames it), "Is anyone interested?"

Donald Clark said...

While it's easy to dismiss Bassi's research, it is also just as easy to dismiss the studies cited in the Snakeoil post. Studies of this nature are normally based on a small number of cases. And, surprisingly, they are often quite wrong for peer-reviewed, published papers.

One published paper often cited to show the failure of training to transfer to the job is George Alliger and Elizabeth Janak's (1989) "Kirkpatrick's Levels of Training Criteria: Thirty Years Later."

While the paper is mostly about the "failure" of Kirkpatrick's model to show causality, it has also been used to show that training often fails to transfer to the job.

Yet the studies that the paper cites lump every conceivable form of learning in an organization into one big pile called "training." For example, some of the forms of so-called "training" that the paper uses are spirit-building, inculcation of company history or philosophy, and individual growth programs. If the concept of informal learning had been in vogue at the time, I'm sure they would have included it as a form of "training" that fails to transfer.

And it is amazing how these papers are then used as the basis for other papers. Wang, Dou, and Li published one back in the summer of 1992 called "A systems approach to measuring return on investment for HRD interventions." When I asked one of the authors why they used such a flawed paper as one of the foundations of their own, he replied that he had only read the abstract. Give me a break! If a poor layman like me spends a few hours tracking down a copy of a referenced paper so that I can fully understand their work, then I would expect an assistant professor and a couple of Ph.D. candidates from a respected university to have done the same!

The snakeoil post was about the same, if I remember right. While someone did ask about the references, no one really questioned the research. In addition, from the way the references were listed, I don't believe the author actually read the real research. I tried to go back and check, since it has been some time since I fully read the comments, but that section seems to be closed off right now. If it opens back up I might even check the references myself.

Anonymous said...

Donald, in all the high-caliber, intellectually stimulating Snake Oil commentary, you are the first person to resort to personal attacks.

As Glen Whitman writes in Logical Fallacies and the Art of Debate, "It is always bad form to use the fallacy of argumentum ad hominem." This is because it completely fails to prove an argument. Alas, argumentum ad hominem is Snake Oil. It does not work.

In one fell swoop you have maligned my work and the credibility of Alliger, Janak, Wang, Dou, and Li. You have also misrepresented Bassi’s work. First of all, I am honored to be included in such fine company. Until now I never considered myself one of their peers. It is quite flattering.

I strongly agree with you that people who cite research to prove their point should actually read the research. I am a well-paid professional and I make a good living in learning technology research. Reading the research I cite is one of the foundations of my practice (your "belief" notwithstanding). I suggest you try it.

Had you read Bassi's work, you would have seen that she is very careful about the causality issue and has never maintained that her data proved causality. Bassi is a fine researcher, known in the industry for her careful design and attention to detail. However, by cutting and pasting sound bites, you have misrepresented her work. This is another logical fallacy called "Cum hoc ergo propter hoc" (with this, therefore because of this). As Glen Whitman writes, "This is the familiar fallacy of mistaking correlation for causation -- i.e., thinking that because two things occur simultaneously, one must be a cause of the other." Bassi would never make such a mistake and has always been very careful to avoid it.

In the June 2004 paper she wrote with three other researchers, "The Impact of U.S. Firms' Investments in Human Capital on Stock Prices," the researchers did find a significant correlation between training expenditures and stock price. Significance is a technical, statistical term in research meaning that the finding has a high probability of not being due to chance. However, the researchers are very careful to point out in several places that while the data suggests possible causality, it does not prove causality.

In it she writes, "Since the analysis is based on non-experimental data, it was, of course, not possible to determine that the relationships being estimated are truly causal with no effects from confounding or omitted variables." By "non-experimental data," she means they did not perform controlled studies to eliminate other confounding variables.

Later in the same paper, the researchers write, “As previously noted, however, in the absence of data from an experimental design it is impossible to rule out the possibility that the training measure used in our analysis is serving, at least in part, as a marker for other unmeasured firm-level attributes that are correlated with a firm’s long-term profitability (and thus equity market valuation).”
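As a purely illustrative aside (my own toy simulation, not anything from Bassi's paper), a few lines of code show how an unmeasured firm-level attribute can produce exactly this kind of strong, "significant" correlation between training spend and returns even when training has no causal effect at all:

    import numpy as np

    # Toy simulation: an unobserved "firm quality" factor drives BOTH training
    # spend and stock returns, while training itself has zero causal effect.
    rng = np.random.default_rng(0)
    n_firms = 500

    quality = rng.normal(size=n_firms)                       # unmeasured confounder
    training_spend = 2.0 * quality + rng.normal(size=n_firms)
    stock_return = 3.0 * quality + rng.normal(size=n_firms)  # no training term here

    r = np.corrcoef(training_spend, stock_return)[0, 1]
    print(f"Correlation between training spend and returns: r = {r:.2f}")
    # The correlation comes out strong (and would test as highly significant),
    # yet by construction none of it is caused by training.

That possibility is exactly why the authors describe their evidence as suggestive rather than causal.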

I have never read the two papers you cite from Alliger and Janak or Wang et al. I have read several papers by Greg Wang and have a great deal of respect for his work. He is what we call a Measurement and Evaluation (M&E) expert and does not do research on actual transfer methods, so I am surprised he would comment on the actual interventions or use unrelated data to support his claims.

Alliger and Janak are also well-known (and highly respected) M&E experts and in all the research I have read that cites them, I have yet to see them cited as proof of the inefficacy of transfer. Yet, since you claim this is what Wang did, I have to take your word for it until I know otherwise. I have looked for the paper you cite but can only find a 2002 paper, and cannot find the 1992 paper you cite.

Even if Wang et al. had used Alliger's data improperly as you claim, it does not mean that Alliger's data is flawed (as you also claim). Even if Wang had not read beyond the abstract, it does not make the position he is taking untenable. And even if he did not read the research, it does not mean other researchers follow the same practice. All of these can be summed up as another logical fallacy on your part called Dicto simpliciter (i.e., sweeping generalization).

By the way, the original Snake Oil post is archived in its entirety on Jay Cross's Internet Time server. Most of the reference material I read was generously provided to me by colleagues (acknowledged in the post) in a spirit of cooperation and professional collaboration.
I suggest you read the post. The caliber of the responses might be instructive.

Donald Clark said...

If I offended you, Mr. Adkins, then I'm sorry, as that was not my intent. As far as being general, I find your post to be just about as general as most others, e.g., "Training does not work." Blogs for the most part are general in nature. And yes, my statements are also general, but no more so than most others.

Your statement, "We spend about $65 billion every year in the US for training that has a dismal knowledge transfer ratio (2%), a dismal learning transfer rate (20-30%) and only accounts for 10% of the way we acquire knowledge," is a pretty good example of the generality that you used. You take a study that Bloom performed on his college students and then make a sweeping statement that it applies to the entire field of training. I find that quite general and misleading.

As far as my questioning whether you read the research, you wrote, "I am indebted to Tony O'Driscoll at IBM for providing me with these two data sets." I took that to mean that you only looked at the numbers. If I'm wrong, then I'm quite sorry and I apologize for my earlier statement.

Yes, Alliger and Janak are well-known and highly respected, but that does not mean that one of their papers cannot be wrong. I stand by the statement I made about the paper, as I have read it and it makes a very sweeping generalization about training that is quite misleading. Does that make them bad researchers? I doubt it. Karl Popper once said something to the effect that scientific statements are those that can be put to the test and potentially proven wrong. It sounds as if a researcher who never gets proven wrong is not much of a researcher.

And thank you for the link to your original post. The one used on the main page of Learning Circuits goes to your post, but it would not let me read the comments that follow. Yours works quite nicely, as it allows one to read the comments.