Issue #6: Evals: You took WHAT from my course???

This content series covers all things instructional design and learning and development. Subscribe to get notified about new content each month!

In our last issue we had no relatable comments, so let's change that for this issue! Remember, I'm selecting people with relevant comments to tag and shout out in future issues. On to today's issue...


If we take the typical teenager coming home from school, it usually goes one or two ways…

Parent: “What did you learn today, honey?”

Teenager: “Nothing much.”

OR

Parent: “What did you learn today, honey?”

Teenager: “Oh, my teacher got arrested.”

Parent: WHAT?!

Haha, but in truth, feedback is often this way: very minimal, or willingly revealed only when it's shocking. As instructional designers and people working in an online environment, we have even less ability to gather feedback than a traditional teacher who can “read the room”. We don't have the luxury of seeing body language, eye rolls, disgruntled mood swings, heavy sighs, or crying breakdowns from frustration and fatigue (perhaps better that way sometimes…). However, we still need ways to “read the room”.

Much in the same way that I outlined getting to know our learners at the outset, we need to find ways to capture their experience on the flip side of the journey too. There are a number of ways we can look at this, some of which overlap with the pre-learning journey phase.

[Image: evaluation methods graphic with brain emojis]

End-of-Course Survey

Yup, you guessed it. Just as we can pre-survey learners about their interests and needs, we should also have a data capture method for how things went at the end of their experience. Keep it anonymous. Try to administer it as close to the end as possible, but not so close that learners fear any impact on their grades. Be sure to balance multiple-choice questions with open-ended ones. A few samples are below, followed by a quick sketch of how you might tally the results.

How did you find the pacing of this course?

- Too fast

- Too slow

- Just right


My instructor was attentive to my academic needs ___ of the time.

- Less than 30%

- 30-50%

- 51-75%

- More than 75%


What, if anything, would you like to improve about this course?

[open-ended response]
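
If your survey tool can export responses as a CSV, a short script can tally the multiple-choice answers for you. Below is a minimal sketch in Python, assuming a hypothetical export with a "pacing" column; the file and column names are invented, so match them to whatever your survey platform actually produces.

```python
import csv
from collections import Counter

# Tally answers to one multiple-choice question from a hypothetical
# survey export. "pacing" is an assumed column name.
def tally(path, column):
    with open(path, newline="") as f:
        return Counter(row[column] for row in csv.DictReader(f) if row[column])

pacing = tally("end_of_course_survey.csv", "pacing")
total = sum(pacing.values())
for answer, count in pacing.most_common():
    print(f"{answer}: {count} ({count / total:.0%})")
```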


Retention Metrics

If people dropped out of the course or stopped using the product, when did they do so? Was there a clear pattern toward the beginning, middle, or end of the experience? (Note that most colleges have an add/drop week at the start of the semester, so that needs to be factored in.) Are the dropout rates for this course excessively high? Examine retention metrics alongside things like teacher feedback, academic preparation (i.e., any prerequisites or entrance exams), and marketing strategies. For the latter: was the course advertised as a lightweight liberal arts romp that then turned into a rigorous analytical experience? What expectations might have been mismatched between the learner and the delivery along the way?
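
To spot where in the timeline people leave, one approach is to bucket withdrawal dates by course week and set aside the add/drop window. Here's a rough sketch using pandas, assuming a hypothetical enrollment export with a "dropped_on" date column that is blank for learners who finished; the file layout, dates, and column names are all invented for illustration.

```python
import pandas as pd

# Hypothetical enrollment export: one row per learner, with a
# "dropped_on" date that is blank for learners who completed.
df = pd.read_csv("enrollments.csv", parse_dates=["dropped_on"])
course_start = pd.Timestamp("2024-01-08")  # assumed term start date
ADD_DROP_WEEKS = 1  # penalty-free withdrawal window to exclude

drops = df.dropna(subset=["dropped_on"]).copy()
drops["week"] = (drops["dropped_on"] - course_start).dt.days // 7 + 1
drops = drops[drops["week"] > ADD_DROP_WEEKS]  # ignore add/drop churn

# Where are people leaving: beginning, middle, or end?
print(drops["week"].value_counts().sort_index())
print(f"Post-add/drop dropout rate: {len(drops) / len(df):.0%}")
```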

Eye Tracking Studies

This method can be used to go back and examine what might be going wrong, but it can also be run while a course is still in development, as part of product preparation. It's a bit more costly and time consuming: it involves a focus group navigating the course or online product while the attention they spend on different parts of the screen is tracked and analyzed. This can tell instructional designers and product developers whether people are spending time in the right places or getting lost along the way. There is plenty of further reading on the method if you want to dig deeper.
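
For a sense of what the analysis side looks like: eye trackers typically output a stream of fixations (screen coordinates plus a duration), which get aggregated into dwell time per area of interest. The sketch below uses invented region boundaries and fixation data purely to illustrate that aggregation step; real eye-tracking software has its own export formats and tooling.

```python
# Aggregate fixation time into dwell time per screen region
# (area of interest). All regions and fixations are invented.
regions = {
    "navigation": (0, 0, 1280, 100),        # (left, top, right, bottom)
    "lesson_content": (0, 100, 900, 720),
    "sidebar": (900, 100, 1280, 720),
}
fixations = [(640, 300, 2.1), (1000, 400, 0.4), (640, 50, 0.9)]  # (x, y, seconds)

dwell = {name: 0.0 for name in regions}
for x, y, seconds in fixations:
    for name, (left, top, right, bottom) in regions.items():
        if left <= x < right and top <= y < bottom:
            dwell[name] += seconds
            break

for name, seconds in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {seconds:.1f}s")
```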

LMS Metrics

Learning management systems can often provide instructors with reports on time spent in specific parts of shared resources. For example, how much time did a learner spend on a particular page or video? Note that these metrics only work if the learner viewed the video while it was still embedded in the LMS; they won't be accurate if the learner clicked through to watch it directly on YouTube in another window. This method runs hot or cold in many instructors' eyes and isn't always favored. Built-in checks and balances, however, tend to be better received: for example, not being able to proceed to the next module or content block before certain mini-milestones are reached. This can be done with a brief pop-up quiz, an assignment due, or a required discussion board post based on the readings.
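
If your LMS lets you export a raw activity log, time-on-page is usually estimated from the gaps between a learner's consecutive events. Here's a rough sketch, assuming a hypothetical event export with "user", "page", and "timestamp" columns; note the caveat above still applies, since time spent off-platform (e.g., on YouTube itself) never shows up in the log.

```python
import pandas as pd

# Hypothetical LMS event log: one row per click or page view.
events = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])
events = events.sort_values(["user", "timestamp"])

# Estimate time on a page as the gap until the user's next event,
# capped so an abandoned browser tab doesn't count as hours of study.
gap = events.groupby("user")["timestamp"].diff().shift(-1)
events["seconds"] = gap.dt.total_seconds().clip(upper=1800).fillna(0)

# Average minutes spent per page across all learners.
print((events.groupby("page")["seconds"].mean() / 60).round(1))
```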

Real World Performance

Finally, there's nothing like handing the learning off to the learner to be implemented in real time. For a medical student, this could be on-the-job training in a medical rotation or residency. For an online learner in a higher education class, this might be the product they produce for their assignment. In either case, we can look carefully at what the learner did and didn't retain and make adjustments to course design and delivery from there.

Ultimately, we need to be willing to try different evaluation techniques and find the one that works best for our niche. There isn't a one-size-fits-all solution. But there is a need to find a solution to the “nothing much” response described earlier.

I want to hear from you! Check out the polling question below and reply in the comments. I'll shout out some responses in the next issue (an easy chance to get a little publicity!).

"what's one way you evaluate your learners" polling question

Julie Ann is a junior instructional designer with a strong background in education and STEM. She's available for project-based, part-time, and flexible work while in school. This could include anything from an advisory session to updating training decks or learning materials. If you have a need in mind, reach out via DM!

P.S. You can help fuel this educational series by dropping a donation here. No-cost ways to support the series, like comments, people tags, follows, and reshares, are also always welcome!

Jesse Garza (Software Engineering), 4 months ago:

This was such an eye-opener, thanks for sharing! I learned some new things, but more interestingly, I found many parallels with my work in enterprise software support. My customers were the “learners” of the software. I never thought about it before (insert ADHD blame here), but the goal is the same. Understanding the target audience to create and provide quality content. My previous roles required me to create knowledge articles including tutorials, How-To, and other reference material. The more information you can collect from customers, the better. We used info from survey responses, phone and email conversations, and cloud analytics to help guide us when writing KB articles.
