Separating Good Research from Bad (in Workplace Learning)

==============

Author's Note:

This article is Chapter 31 from my recently published book, The CEO's Guide to Training, eLearning & Work: Empowering Learning for a Competitive Advantage. You can learn more about the book at the book's website (https://www.ceosguide.net/) or on Amazon (https://amzn.to/4674JGS).

=================

Preface

In the book, I write as if I'm writing to a CEO, letting them know how to manage their learning function to get a competitive advantage. I tell CEOs how they might manage us better, and I tell them how they can get the most out of our good work.

The book is not just intended for CEOs and other senior leaders. It's also intended for us, learning and performance professionals, so we can empower ourselves to our full potential.

The book has received advance praise from leaders in the workplace learning field.


===== START OF CHAPTER =====

Separating Good Research from Bad

In this chapter, I describe how some members of your learning team have been fooled by “research” into following mediocre practices—benchmarking themselves against industry averages and following pied pipers toward new and unproven technologies and learning methods. They are wasting time and your organization’s money and are building inadequate practices, thus eroding your organization’s effectiveness. You might share the chapters in this section with your CLO to encourage a small investment in developing research wisdom and using research more thoughtfully.

We’ve talked already about scientific research on learning. But scientific research exists in many other areas relating to organizational functioning—coaching, managing, onboarding, creativity, habits, human-technology interface design, embodied cognition, office design, leadership, and much, much more—that can also be utilized to design training, elearning, and workflow learning interventions.

Without the benefits of this research, your organization’s managers and teams are likely to make suboptimal decisions. For example, most people think that brainstorming in groups is better than brainstorming as individuals, when the science is clear that the opposite is true. Most organizations focus on conveying information during onboarding, when research shows that creating personal connections for new hires is much more important to employee success and loyalty. These are just two examples of how well-translated scientific research can improve your organization’s success.

Indeed, if you’re running a well-funded organization, you might consider hiring a half-dozen people whose sole job is to index your organization’s needs and search for and translate scientific research into practical recommendations based on those needs. Yes! It’s an extra cost, but it’s a relatively small cost for a competitive advantage.

Organizations typically underutilize scientific research—at least in relation to learning and employee performance—and rely on vendors and consultants who underutilize it as well. As we might expect, this underutilization leads to ineffective learning, development, and workplace practices.

Why do learning teams underutilize scientific research? Primarily because L&D measurement practices don’t capture what’s effective and what’s not. With poor measurement, we just can’t tell an effective program from an ineffective one. Measurement blindness allows other forces to fill the vacuum. Vendors tend to convey information and market their products in ways that get sales—routinely ignoring what works.

Also, consultants and vendors know that “research” brings credibility, so some of them look for the quickest, cheapest way that can earn research credibility. They often turn to industry research, one of the most hidden-in-plain-sight problems in the learning field.

Let’s now switch away from scientific research. People in every field hear regularly about other types of research, including survey research and qualitative research. These methods have important contributions to make, but they can be dangerous when not understood for what they are. Indeed, a major problem in the learning field—and certainly elsewhere as well—is that many of us who are practitioners see the word “research” and assume all research is cut from the same cloth. It is not!

Industry research uses surveys (and sometimes focus groups or interviews) to compile opinions from people in the learning field. The biggest strength of industry research is that most people find the results very compelling. When we see that 70% of organizations are using microlearning videos, we can’t help but think that maybe we should be using microlearning videos as well. Again, if we had access to scientific research or A/B testing, we would know that microlearning videos used alone are unlikely to be effective. However, since we don’t have this data (and our managers don’t have this data), we tend to follow the crowd over the cliff—utilizing practices that are not very effective.
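As an aside for readers who want to see what A/B testing can look like in practice, here is a minimal sketch of comparing two training conditions on post-training assessment scores. It is only an illustration: the conditions, scores, and sample sizes are invented, and a real test would use random assignment and far larger groups.

# Minimal sketch of an A/B comparison of two training conditions.
# All names and numbers below are invented for illustration only.
from scipy import stats

# Post-training assessment scores for two randomly assigned groups of learners
videos_only = [62, 58, 71, 65, 60, 55, 68, 63, 59, 66]
videos_plus_practice = [74, 70, 81, 77, 69, 72, 79, 75, 71, 78]

# Two-sample t-test: is the difference larger than chance alone would suggest?
t_stat, p_value = stats.ttest_ind(videos_plus_practice, videos_only)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the practice condition genuinely outperformed videos
# alone, which is the kind of evidence that opinion surveys cannot provide.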

It’s absurd to blindly trust survey research in the learning industry when so many people join the field without education and experience, when so many fads float through the ether, when so little good measurement is getting done that would provide guidance on what works! Many who complete these industry surveys are likely to have incomplete knowledge on at least some of the questions asked. Yet organizations like yours fall for industry survey research over and over.

Your organization almost certainly pays good money to industry analysts or trade associations who make recommendations based on the “research” surveys they do. They survey, survey, and survey on different topics to bring forth new reports and new analyses—and learning professionals like me keep paying for this “research” because we fear we might be missing out on some new secret technology, method, or practice.

I am not saying that well-done industry research is worthless. Some very thoughtful industry research analysts are doing great work, and their findings can give us great insights into how learning professionals see the field. What I am saying is that, too often, problems creep into this work and into the interpretation of the data.

I have done industry research myself at Work-Learning Research and when I worked in a team at TiER1 Performance on the annual Learning Trends survey. Given these experiences, I am intimately familiar with the strengths and limitations of industry survey research, as well as how to maximize its benefits and limit its dangers.

What are the dangers? There are many. The biggest problem is mediocrity. How will your learning team get a competitive advantage by aspiring to industry averages? Worse, what will they learn from industry averages in a learning-and-development field where the constraints on professionals have too often hampered best practices? They will learn to aspire to mediocre practices!

The mediocrity problem is compounded by the designs of most industry surveys. The questions asked on the surveys accentuate people’s tendency toward current practices and shiny new objects of faddish affection. Where surveys present a list of practices for respondents to choose from, they include too many popular but dubious practices. They too often omit research-inspired models and frameworks. And where surveys rely on open-ended responses, in the false hope that these will give a truer view of the industry, the designers fail to realize that people’s top-of-mind thinking is more likely to surface traditional practices and exciting new technologies than foundational, proven practices.

Remember too that learning evaluation is so bereft and broken in the learning field that people in the field—those responding on industry surveys—have likely gotten their sense of what works and what doesn’t from poorly-designed smile sheets and lazy platitudes circulating in the industry.

In addition to the mediocrity problem, industry survey research almost always suffers from sampling bias. The surveyors (me included) don’t have the resources to do the difficult work of ensuring a truly representative sample. Surveys are blasted out to anyone who is willing to take them. This isn’t a fatal flaw, because demographic data can be used to get a sense of the population that was sampled, and the reports generated can communicate important characteristics of the audience that was surveyed. However, report writers often don’t caveat their findings based on their respondent samples. Also, report readers don’t often notice who took the survey; they just look at the data and assume it is a fair snapshot of the industry in general.
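For the curious, here is a minimal sketch of what even a basic respondent-sample check might look like: comparing who actually answered a survey against an assumed population mix and computing simple reweighting factors. The groups, counts, and population shares are invented for illustration.

# Minimal sketch of checking a survey's respondent mix against an assumed
# population mix and computing simple post-stratification weights.
# Groups, counts, and population shares are invented for illustration only.
respondents = {"corporate L&D": 620, "vendor/consultant": 280, "academic": 100}
population_share = {"corporate L&D": 0.75, "vendor/consultant": 0.15, "academic": 0.10}

total = sum(respondents.values())
for group, count in respondents.items():
    sample_share = count / total
    weight = population_share[group] / sample_share  # rebalances toward the population
    print(f"{group:18s} sample {sample_share:.0%} vs. population "
          f"{population_share[group]:.0%} -> weight {weight:.2f}")

# Over-represented groups get weights below 1; under-represented groups get
# weights above 1. Reports that skip even this basic check invite readers to
# treat a convenience sample as a snapshot of the whole industry.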

The mediocrity problem and the representation problem are bad enough. What’s worse is that, when people in the field think “research,” they too often don’t distinguish between scientific research and industry survey research. This leaves folks on your learning team thinking they are following science-based best practices, when in fact they are following survey-based mediocre practices.

And, of course, there are vendors, suppliers, and consultants who exacerbate the problem exponentially because they use the industry-survey research to bolster their claims and their credibility. Vendors amplify the noise, and your learning team hears messages of mediocrity over and over again.

The best industry analysts use survey research with wisdom, bringing in other sources of information and expertise to make sense of the survey data—even highlighting when industry perceptions are out of whack with evidence-based best practices. They also work hard to ensure that casual readers of their reports can’t misinterpret their data.

Again, this doesn’t prevent vendors and consultants from picking and choosing from these research reports, decontextualizing the data to send their preferred messages.

Another practice that harms your learning team’s performance is web searching or GenAI searching for learning best practices—even when searching for research-based practices. The internet is filled with poor recommendations for the practice of learning. When your learning team searches for research-based learning practices, they’re liable to run across bogus information that looks credible. Similarly, large language models like ChatGPT were trained on the internet and their results can also be faulty. The bottom line is that these types of casual searches are no replacement for expertise.

Given all these problems with your learning team’s common practices in using research, what can you do as CEO? Probably, you’re not going to want to get down into the weeds on this, but, when the annual budget gets discussed, your organization should ensure your learning team spends its research money wisely—not making strategic decisions based primarily on industry research. It should only use industry survey research that (1) is specifically designed to lessen the siren song of mediocrity, (2) caveats findings based on the respondent sample, (3) communicates clearly about those caveats, and (4) connects survey findings to scientific recommendations.

Your learning team should have access to scientific research and research translators, and should be encouraged to use them to help make sense of the evidence. If there is any doubt about the quality of the industry data your learning team is paying for, those who negotiate the team’s budget might consider dropping the industry-research subscription—often a very expensive subscription—until the team can do its due diligence to ensure the quality of the data and its strategic usefulness.

You can also fund and encourage your learning team to bring in unbiased outside help in interpreting industry data—enabling them to hire trusted advisors on an occasional basis or on retainer, or to get an outside audit of their learning practices. By bringing in experts with a research background in learning and in learning practices, your learning team will get clearer insights from the data.

===== END OF CHAPTER =====


===== CHAPTER NOTES =====

(Provided at the back of the book so you can dig deeper or peruse the research support.)

Let me provide a few real examples that show how survey research in the learning field causes problems. I won’t reveal the perpetrators—the problem is endemic, so it wouldn’t be fair to call out these folks specifically.

Here is a question on an industry survey on learning evaluation.

What types of measures do you use to support your evaluation goals?

  • Perception (satisfaction, instructor or format performance, etc.).
  • Efficiency (completions, enrollment, score, etc.).
  • Effectiveness (behavior change, on-the-job actions, etc.).
  • Business Impact (business outcome measures).
  • We do not have evaluation goals.
  • I don’t know.

The problem here is that the question doesn’t ask about learning factors that might be measured. This is a black-hole-sized oversight that gets respondents thinking too narrowly about learning evaluation and produces data for which the most important constructs have been sucked into oblivion. Because learning professionals must gather data on learning to get feedback they can leverage to maintain and improve effectiveness, this question simply gets respondents and report readers thinking all wrong about learning evaluation!

I was asked recently by a doctoral student to provide feedback on a draft survey—one that will be sent to several thousand learning professionals. It’s not finalized yet, but the student’s dissertation advisor is insisting he use only questions from previously utilized surveys. Unfortunately, the previous surveys were poorly designed. Here’s a question that is being contemplated:

Select all levels of evaluation that are used to any extent in your organization.

  • Level 1 (Reaction)—The learner’s reaction to the training.
  • Level 2 (Learning)—The change in the learner’s attitude, knowledge, or skills.
  • Level 3 (Behavior/OJTP)—The learner’s ability to apply knowledge, skills, and attitudes to their on-the-job performance.
  • Level 4 (Results)—The impact on organizational goals or metrics.
  • Level 5 (ROI)—Financial impact of the training.
  • None of the above.

Egads! This is terrible! It reinforces the 1960s-era thinking around learning evaluation. It also, by using the model labels, pushes respondents to think at a surface level, focusing on the labels and likely ignoring the more specific words and meaning behind those labels—thus creating biased and corrupted data. In addition, by smashing all learning metrics into one category (that is, “Level 2 Learning”), the question leaves out important nuances. For example, it does not distinguish between measures of recognition, knowledge, decision-making competence, or skills.

One more. This survey question asks learning professionals to select the practices they have used most frequently. I’ve modified the question slightly to simplify it and protect the identity of the group who created it.

What processes does your team employ in the design and/or development of your [learning] assets?

  • ADDIE.
  • Agile.
  • Design Thinking.
  • Gagné’s Nine Events of Instruction.
  • Lean Startup.
  • Systems View Diagramming.
  • Understanding by Design.
  • (And there were several more options as well).

The answer choices limit what learning professionals might consider selecting, and several of the most proven processes are not even included. For example, where is Cathy Moore’s Action Mapping? Where are David Merrill’s First Principles of Instruction? Where are the HPT and HPI models?

In this chapter, I mentioned a few examples of how research on human performance can improve organizational functioning, including brainstorming and onboarding.

Brainstorming has been thoroughly researched, and the findings show that brainstorming individually is more effective than brainstorming in groups.

  • Mullen, B., Johnson, C., & Salas, E. (1991). Productivity loss in brainstorming groups: A meta-analytic integration. Basic and Applied Social Psychology, 12, 3–23.
  • Diehl, M., & Stroebe, W. (1987). Productivity loss in brainstorming groups: Toward the solution of a riddle. Journal of Personality and Social Psychology, 53, 497–509.

Brainstorming electronically—rather than in person—has been shown to improve creativity in many situations.

  • DeRosa, D. M., Smith, C. L., & Hantula, D. A. (2007). The medium matters: Mining the long-promised merit of group interaction in creative idea generation tasks in a meta-analysis of the electronic group brainstorming literature. Computers in Human Behavior, 23(3), 1549–1581.

Interestingly, it matters who is brainstorming. Adding star performers to groups significantly improves groups’ creative ideas.

  • Kenworthy, J. B., Marusich, L. R., Paulus, P. B., Abellanoza, A., & Bakdash, J. Z. (2020). The impact of top performers in creative groups. Psychology of Aesthetics, Creativity, and the Arts. Advance online publication. https://doi.org/10.1037/aca0000365

The following article by long-time creativity researcher Paul Paulus and colleagues provides an excellent overview of how creative ideas can be generated within organizational contexts—and is available to anyone using the link below.

Onboarding new employees—sometimes called “induction” in parts of the world—is critical for the longevity, satisfaction, and productivity of new hires. Research shows that just providing new hires with information is not enough; social and emotional considerations are also important for success.

  • Bauer, T. N., & Erdogan, B. (2011). Organizational socialization: The effective onboarding of new employees. In S. Zedeck (Ed.), APA handbook of industrial and organizational psychology, Vol. 3. Maintaining, expanding, and contracting the organization (pp. 51–64). American Psychological Association.
  • Klein, H. J., Polin, B., & Sutton, K. L. (2015). Specific onboarding practices for the socialization of new employees. International Journal of Selection and Assessment, 23(3), 263–283.

Of course, this doesn’t mean socialization is the only critical goal of onboarding; learning new ways of thinking and acting may also be important.

  • Becker, K., & Bish, A. (2021). A framework for understanding the role of unlearning in onboarding. Human Resource Management Review, 31(1), Article 100730.

===== END OF CHAPTER NOTES =====



How To Learn More

THE BOOK. You can learn more about the book at the book's website (https://www.ceosguide.net/) or on Amazon (https://amzn.to/4674JGS).

LTEM (The Learning-Transfer Evaluation Model) is rapidly replacing older evaluation models. I invite you to join me in the LTEM Boot Camp open-enrollment workshop or contact me to arrange a private Boot Camp for your organization.

My Website. Access my research-to-practice reports, my blog, and job aids, and get an introduction to my consulting services at WorkLearning.com/.

Coaching. I'm available as a coach and offer a pay-what-you-can pricing option. Check out my coaching options.



Sign Up For This Newsletter

You can sign up for this LinkedIn newsletter if you like. There's a link somewhere here where you can do that.


Ilona Boomsma

Better, More Fun, Faster with AI | Keynotes, training, and advice


Like this, Will! Looking forward to the book. I fully agree that we should NOT use ChatGPT to determine best practices for learning. Its recommendations are often bad; it still advises doing something with learning styles. However, if we provide it with good research, it can assist us in evaluating our designs against established good practices.

Dr. Jim Sellner, PhD. DipC.

Vivo Team is the ONLY digital L&D company that uses unique, internationally award-winning processes and analytics to build your company into one that is winning in the marketplace with people & profits.


I think, if I was doing this, I'd cut it to fewer chapters for CEOs - WIIFT to read this? Jim
