E-Learning Instructional Design

Recently, I facilitated a series of meetings that quickly reminded me of a few commonly held Instructional Design beliefs. These beliefs work their way into our eLearning and classroom training courses under the guise of being “instructionally sound” (a term, by the way, that I believe should be banned from existence, but that’s another conversation for another time). Often, Instructional Designers do these things without questioning why, as if they were requirements. These are what I consider myths of Instructional Design. I’d be interested in hearing what other myths you can add to the list, so leave a comment and join in the conversation.

1. Courses Must Start With The List Of Learning Objectives

Nearly every course I’ve ever reviewed (or participated in)—both eLearning and instructor-led—begins with an objectives screen. And, in most courses, learners and facilitators simply breeze past, or entirely skip, this screen, paying little to no attention to the content. So, why do we continue to start our courses with objectives?

Theory-driven Instructional Designers will likely cite the second of Gagne’s Nine Events of Instruction, “informing the learner of the objectives,” as the reason this screen is a necessary component of any course (Robert Gagne, The Conditions of Learning, 1965). This is deep-rooted Instructional Design history, so calling it a myth will likely ruffle some feathers. Allow me to clarify: while I do believe informing the learner of the objectives is necessary, I do not believe we should share them in the same form in which we develop them.

After working for Dr. Michael Allen for many years, I’ve found that a much more effective way of sharing objectives is to help the learner understand how the course material will help them perform better.

For example, let’s say you’re creating a course on Coaching and Feedback for your leaders. Instead of placing a bulleted screen that reads “After completing this course, you will be able to 1) Identify moments when immediate feedback is more appropriate than delayed, and 2) Recognize the visual cues and body language of the employee …”, what if your course started with the following?

“For the past two Mondays, one of your star performers, Mary, has been late to work. Today, Monday, you receive a text message from her stating that she is having car trouble and will be in as soon as possible. Do you respond to her text? Do you approach her as soon as she arrives, or wait until later in the day? What do you say? How do you react to her outburst of tears?
“Being a leader is difficult. You have to navigate a variety of conversations with your team members. Do you know when to deliver immediate feedback, when to delay feedback, and when to have a coaching conversation? Can you recognize the visual cues and body language and adjust your message accordingly? This course places you in several real-world challenges you may face as a leader, giving you an opportunity to build confidence in your ability to provide Coaching and Feedback to your employees.”

So, put that bulleted list of objectives away! In its place, give the learner a real-world example of how this course will help them. (Bonus points if you immediately put them into a challenge to prove they need the instruction.) It will still meet the goal of the second event, which is to prepare the student for instruction by informing them of the objectives of the learning that will take place, but it will do so in an interesting, meaningful, and impactful way.

2. Narration Is Necessary

For me, there are few things more frustrating, mind-numbing, or insulting to a learner than eLearning narration that reads aloud every word on screen. Even worse is when I, as a learner, am not permitted to move forward on my own, forced to wait for the narrator to finish, which is typically long after I’ve finished reading. I see this often in compliance training courses, or in those where you must “ensure your learners are paying attention”. The belief here is that narration causes people who would otherwise blow past the content to pay attention. Narration doesn’t ensure they are paying attention; it ensures they are waiting until they hear a break in the narration to look up, or to return to the screen and hit the Next button. That’s all it accomplishes.

For many Instructional Designers, there is a belief that narration helps “auditory” learners. Setting aside the fact that the entire theory of learning styles is currently under debate, I find fault with this argument for why narration is necessary. When a narrator simply reads the text that is on screen, they provide no additional benefit for an auditory learner. Reading the text in my own head serves the same purpose.

I’m not opposed to narration or audio in courses. Quite the contrary, actually. However, I believe that audio should add to the learner experience. It should help add context to an eLearning course. Perhaps you need to replicate the hustle and bustle of a busy distribution center, or you are asking learners to diagnose a patient’s illness based on the sound of a cough, or your learner needs to hear the inflection in a caller’s voice to understand whether they are frustrated when they say “Well, that was really helpful!”. In each of these cases, the course works better with audio. So, while word-for-word narration isn’t necessary, using audio to set context might be.

3. Interactivity Is Created By Having Movement On Screen

In the past, I had clients say they wanted a “highly interactive eLearning course”. Then, come to find out, what they actually wanted was a highly animated course. Or a course that had an “interaction” every third screen, such as a true/false question or a drag-and-drop. Having interactions is different from being interactive.

Interactive, in the purest form of the definition, means “mutually or reciprocally active”. Some may argue that inserting a true/false question, in which the program gives you feedback on your selection, meets that definition. I would define interactive courses as those that adjust and modify based on the learner’s input. For example, a learner gets more difficult scenarios as they demonstrate mastery of the easier scenarios, as in the sketch below. Or, the outcome of the event is modified based on the decisions of the learner.
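To make the distinction concrete, here is a minimal sketch in Python of what “adjusting based on the learner’s input” might look like. The scenario names, levels, and thresholds are all hypothetical illustrations, not a prescribed design: the point is simply that the learner is promoted to harder scenarios only after demonstrating mastery, and struggling drops them back for more practice.

```python
# Minimal sketch of adaptive scenario selection: difficulty rises or falls
# based on the learner's demonstrated mastery, not a fixed screen sequence.
# Scenario names and thresholds are hypothetical, for illustration only.

SCENARIOS = {
    1: ["late_arrival_text", "missed_deadline"],             # easier
    2: ["tearful_outburst", "defensive_pushback"],           # moderate
    3: ["repeat_issue_star_performer", "team_conflict"],     # harder
}

MASTERY_STREAK = 2  # consecutive successes needed to level up


class AdaptiveCourse:
    def __init__(self):
        self.level = 1
        self.streak = 0

    def next_scenario(self) -> str:
        """Pick the next challenge at the learner's current level."""
        options = SCENARIOS[self.level]
        return options[self.streak % len(options)]

    def record_result(self, succeeded: bool) -> None:
        """Adjust difficulty based on what the learner just did."""
        if succeeded:
            self.streak += 1
            if self.streak >= MASTERY_STREAK and self.level < max(SCENARIOS):
                self.level += 1   # promote to harder scenarios
                self.streak = 0
        else:
            self.streak = 0
            if self.level > 1:
                self.level -= 1   # drop back for more practice
```

However you implement it, the defining feature is that the program’s next move depends on the learner’s choices, which a page of animated text can never claim.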

Throwing animated videos on screen, having words appear and disappear, and using “whiteboard” drawings are certainly entertaining, but they do not engage the mind in a way that causes learning to occur. Studies on learning and on the complexity of behavioral (performance) change easily negate any belief that animation, text, or quizzes alone will achieve the intended outcomes of the course, assuming you are looking for performance change and not just a method of delivering content.

Instead, choose to design real interactive experiences, in which learners draw on existing mental models, make choices, and fail and try something new until they succeed, rather than simply presenting information (even if that text is animated and cool!) in a context-less void and then quizzing on that information. Unless, of course, you just want to entertain your learner audience, in which case go ahead and throw some glitter on that content! Make that text dance! Add that spark-flare to a video! Ask that true/false question. Just don’t call it interactivity.

4. An Assessment At The End Of The Course Is Necessary To Demonstrate Mastery

Assessments are one of the carry-overs from the educational system that have invaded the corporate learning workspace. The idea is simple: ask the learner a few questions at the end of the course to see how much of the information they have retained. If they score below a set threshold, they do not pass the program and must repeat the lesson until they earn the necessary score.

There are a few issues I have with using assessments as a demonstration of mastery. First, and foremost, most Instructional Designers I know are not experts at assessment writing. Sure, we can pull together some multiple-choice or true/false questions, write some distractors, and then indicate which is the correct answer. However, there is an entire science devoted to the validity of testing, and a lot more goes into writing a test than stringing together a series of multiple-choice questions. It may not seem like a big deal. So what? They have to repeat an eLearning course. What’s the harm?

The big deal is what is done with the assessment data. I have had leaders at organizations ask if the LMS could retain and report on the number of attempts it took a learner to pass the test, or provide a ranking of associates by assessment scores. In asking for this information, they were looking for “additional data points” to help make decisions about a learner’s career: promotions, moves within the company, or even supporting evidence for termination. The fact is, unless the assessment is valid, it should never be used in career decisions (even as just “another data point”).

The other concern I have with post-course assessments (multiple-choice, true/false, etc.) is that they do little to prove mastery of a performance. We are not in the business of educating a workforce; we are in the business of helping a workforce perform. There is a difference. I could be educated on musical notes and learn to read music well enough to pass a test. However, I couldn’t pick up an instrument and play a single song.

In organizations, I feel we spend too much time educating people and evaluating whether or not they know the content we taught. But our role in an organization is to train our workforce, not simply educate them. The difference is knowing versus doing. I’d much rather have a workforce that can perform the actual task than one that can answer a bunch of questions about the task. Wouldn’t you?

So, what does this do to your Instructional Design? It typically means you put the content meant for knowing or reference outside of your training program. In your design, you should allow for self-directed exploration of the content needed to perform the job (instruction manuals or how-to guides). The design of the training itself should focus on real-world application of the content. The test is whether or not they can successfully perform the task at hand: safely, without errors, without taking too long. That’s mastery.
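As a rough sketch of what that kind of performance test might record, here is one way to score an observed task attempt on the three dimensions named above: safety, accuracy, and time. The record shape and thresholds are hypothetical illustrations, not a prescribed rubric.

```python
# Sketch of a performance-based mastery check: the learner passes by doing
# the task safely, without errors, and within a time budget -- not by
# answering questions about it. All fields and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class TaskAttempt:
    safety_violations: int
    errors: int
    seconds_taken: float


def demonstrates_mastery(attempt: TaskAttempt, time_budget: float) -> bool:
    """Mastery = safe, error-free, and within the allotted time."""
    return (attempt.safety_violations == 0
            and attempt.errors == 0
            and attempt.seconds_taken <= time_budget)


# Example: one observed task attempt (hypothetical numbers)
attempt = TaskAttempt(safety_violations=0, errors=0, seconds_taken=240.0)
print(demonstrates_mastery(attempt, time_budget=300.0))  # True
```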

5. Training Metrics Are Our Measure Of Value And Success

I was once at a conference where a Chief Learning Officer of a global organization stood before the crowd and boasted about how robust his organization’s training was and how great his training team was. I was excited to hear what he had to say! Then, he gave his proof points: we have ___ thousand courses available to our learners in our LMS; last year our average learner spent ____ hours in training programs; we have an average completion rate of _____ per training module. I walked out.

If you do a quick search on training KPIs, you’ll come across articles that reference the same metrics that misguided CLO cited. Typically, these articles are written by Learning Management System companies. It’s what they want you to measure, because it’s what they built their systems to measure.

Here’s a list of metrics I literally found today when searching: Activity Pass/Fail Rate, Average Test Score, Training Completion Percentage Rate, Job Role Competency Rate (in which competency was described as “tracking the training progress of people in a given team or department”), Compliance Percentage Rate, Class Attendance Rate, and, worst of all, Average Time to Completion.

These metrics seem all well and good, until you really think about the job we are supposed to be doing and the value that we add. We should be in the business of helping people do their jobs better. Therefore, the same metrics the business is evaluating are what we should evaluate. If the business is looking to increase margins, then our training should be designed to help our associates increase productivity, decrease waste, and sell higher-margin products more often. If our organization values a safe work environment, then our training should be measured by our ability to reduce safety violations, decrease the number of workplace hazards reported, and increase the usage of the appropriate PPE.

Often, the Chief Learning Officer or other learning leader feels hesitant to tie their work to these metrics. It goes back to myth #4: are we educating our workforce or are we training them? When simply educating employees, learning leaders may fall victim to the post hoc ergo propter hoc fallacy (the whole correlation/causation worry). But if our training is designed to teach employees to do their jobs, to practice the skills they will use in the real world, then we can take confidence in knowing our training was a part of the improvements seen year over year. Of course, this also means we should be held accountable if the desired improvement does not materialize, which is something learning leaders likely fear. It’s a whole lot easier when our metrics sit outside of the business metrics. We can hold our heads up high and say, “The business may be struggling, but the training team is doing our job! They’re just not using what they learned!”

And sure, it is easy to pull an “average time to completion” report, and not so easy to obtain data on average time to competency. But we shouldn’t fall victim to measuring what is easy. We should push ourselves to measure our true value to the organization.
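To illustrate the difference in effort, here is a rough sketch contrasting the two measurements. The record shapes and dates are hypothetical: completion time falls straight out of LMS course records, while competency time requires a dated, on-the-job performance signal that the LMS does not capture on its own.

```python
# Sketch contrasting the easy metric (time to completion) with the harder,
# more valuable one (time to competency). Record shapes are hypothetical;
# the competency date would come from observed on-the-job performance.

from datetime import date
from statistics import mean

# Easy: the LMS already stores enrollment and completion dates.
completions = [
    {"enrolled": date(2024, 1, 8), "completed": date(2024, 1, 9)},
    {"enrolled": date(2024, 1, 8), "completed": date(2024, 1, 15)},
]
avg_days_to_completion = mean(
    (r["completed"] - r["enrolled"]).days for r in completions
)

# Hard: competency needs the date the person first performed the task to
# standard (safely, without errors, within the time budget).
competency_events = [
    {"enrolled": date(2024, 1, 8), "competent": date(2024, 3, 4)},
    {"enrolled": date(2024, 1, 8), "competent": date(2024, 2, 19)},
]
avg_days_to_competency = mean(
    (e["competent"] - e["enrolled"]).days for e in competency_events
)

print(f"avg days to completion: {avg_days_to_completion}")   # small number
print(f"avg days to competency: {avg_days_to_competency}")   # much larger
```

The first number is a single LMS report away; the second requires partnering with the business to define and observe competent performance. That harder number is the one worth chasing.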
