Part 2: Steal This Event Program Evaluation

This second article in a three-part series on event learning program evaluations outlines the different levels of program evaluation and shares examples of good questions to ask across key domains.

When evaluating event program effectiveness, there are five levels to consider:

Level 1: Reaction: The degree to which participants find the program favorable, engaging and relevant to their jobs.

Level 2: Learning: The degree to which participants acquire the intended knowledge, skills, attitude, confidence and commitment based on their participation in the program.

Level 3: Behavior/Application: The degree to which participants apply what they learned during the program when they are back on the job.

Level 4: Business Results/Impact: The degree to which participants’ targeted outcomes occur as a result of the program (performance improvement).

Level 5: Return on Investment (ROI): The degree to which the program’s results deliver monetary value (cost savings or revenue generated) relative to its cost. (A formula sketch follows this list.)
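
For reference, Level 5 is conventionally computed with the standard ROI formula: net program benefits divided by program costs, expressed as a percentage. Here is a minimal sketch in Python; the benefit and cost figures are hypothetical, purely for illustration:

```python
def program_roi(monetary_benefits: float, program_costs: float) -> float:
    """Standard ROI formula: net program benefits over costs, as a percentage."""
    return (monetary_benefits - program_costs) / program_costs * 100

# Hypothetical figures: $50,000 program cost, $80,000 in documented
# cost savings or revenue attributable to the program.
print(f"ROI: {program_roi(80_000, 50_000):.0f}%")  # -> ROI: 60%
```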

According to The ROI Institute, the percentage of organizations evaluating learning programs at each level typically declines as the evaluation levels progress, due to increasing complexity and resource requirements. Here’s a general breakdown based on industry research and reports:

  • Level 1: Reaction: ~90%
  • Level 2: Learning: ~60-70%
  • Level 3: Behavior/Application: ~30-40%
  • Level 4: Business Results/Impact: ~10-20%
  • Level 5: Return on Investment - ROI: ~5-10%

There is no comparable data available for the events industry, but based on my experience, we significantly lag behind these benchmarks. As a result, we cannot say with any certainty whether participants derived value from attending event-based learning programs.

Questions to Ask

Here are some questions to consider asking to determine the value of your learning programs and the level they’re aligned with. These questions use either a 5-point Likert scale (e.g., Strongly Agree to Strongly Disagree) or an open-ended format (text box).
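
To aggregate Likert-scale responses, it helps to code the labels numerically so you can report a mean and an agreement percentage per question. A minimal sketch, assuming a simple 1-5 coding; the response data is hypothetical:

```python
# Map 5-point Likert labels to numeric codes (1-5) for aggregation.
LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# Hypothetical responses to a single Level 1 question.
responses = ["Agree", "Strongly Agree", "Neutral", "Agree", "Disagree"]

scores = [LIKERT[r] for r in responses]
mean_score = sum(scores) / len(scores)
pct_agree = 100 * sum(s >= 4 for s in scores) / len(scores)

print(f"Mean: {mean_score:.1f}/5; top-two-box agreement: {pct_agree:.0f}%")
```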

Level 1: Reaction

  • The program objectives were clearly defined and met.
  • The content of this program was relevant to my role or career goals.
  • The presenter was knowledgeable about the subject.
  • The presenter’s energy and enthusiasm kept me actively engaged.
  • The delivery method (e.g., lecture, activities, discussion) kept me engaged.
  • The learning environment (e.g., venue, materials, technology) was conducive to achieving the objectives.

Level 2: Learning

  • I learned new knowledge from this program. What specific new knowledge did you gain from this program?
  • I learned new skills from this program. What specific skill did you learn and how do you plan to use it?

Level 3: Behavior/Application

  • I feel confident I can apply the knowledge I learned in this program to my job.
  • I feel confident I can apply the skills I learned in this program to my job.
  • What percentage of your current work tasks do you believe will benefit from the knowledge presented in this program?
  • What percentage of the knowledge and skills gained in this program do you think you will apply in your current role?
  • Do you anticipate any challenges in applying what you learned in this program to your job? If yes, please describe.

Level 4: Business Results/Impact

  • I expect this program to improve my job performance. What specific aspects of your job performance do you believe will improve as a result of this program?
  • What percentage of your job performance do you believe is directly influenced by the knowledge or skills covered in this program?
  • What percentage improvement in your job performance do you expect as a direct result of this program?

Note: If a learning program targets specific results, such as increasing quality or sales, or decreasing costs or risk, you can include additional, customized questions here.

Level 5: Return on Investment (ROI)

  • What specific business outcomes do you expect to achieve as a result of this program?
  • Can you estimate the monetary value of the improvements this program will help you achieve (e.g., cost savings, increased efficiency)?
  • This program was a worthwhile investment in my career development.

How to Use These Questions

Use several questions from each level to gain a clear understanding of each outcome and gather the most actionable data. The objective of any program evaluation is to systematically assess the program’s effectiveness, efficiency, and impact, ensuring that it meets its goals, delivers value to stakeholders, and provides actionable insights for continuous improvement.

There is no right number of questions to ask in a program evaluation. It all depends on the evaluation's goals, the complexity of the program, the time available for respondents, the type of data needed (qualitative or quantitative), and the stakeholders' priorities for actionable insights.

Ten to 20 questions is a good range to strive for. That’s enough to gather valid and reliable data while ensuring a respectable response rate.

When it comes to response rates, here are some ranges for your consideration:

General Expectations:

  • A 10-20% response rate is common for post-event surveys, particularly for large groups or busy professional audiences.
  • 20-50% may be achievable with strong follow-up efforts, tailored questions, and incentives.

Small Groups (~100 participants):

  • A response rate of 30-50% may be possible, as smaller, more engaged groups typically have higher completion rates.

Large Groups (~500-5000 participants):

  • Response rates closer to 10-20% are more common unless significant effort is made to boost participation (e.g., onsite reminders, personalized follow-ups).

Response rates in these ranges give you reasonable confidence to make informed decisions about your learning programs; the sketch below shows one way to quantify that confidence as a margin of error.
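
One way to sanity-check whether a given response rate is “enough” is to compute the survey’s margin of error, using the standard formula with a finite population correction. A minimal sketch; the attendee and respondent counts are hypothetical:

```python
import math

def margin_of_error(population: int, respondents: int,
                    p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error at ~95% confidence (z = 1.96), with a finite
    population correction for bounded attendee groups.
    Returns percentage points."""
    se = math.sqrt(p * (1 - p) / respondents)
    fpc = math.sqrt((population - respondents) / (population - 1))
    return 100 * z * se * fpc

# Hypothetical: 500 attendees, 15% response rate -> 75 completed surveys.
print(f"±{margin_of_error(500, 75):.1f} points")  # about ±10.4 points
```

Note that this only quantifies sampling error; it cannot correct for nonresponse bias, which is one reason the engagement tactics below still matter.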

Other factors that influence your response rate:

  • Engagement Levels: Attendees who feel personally connected to the program or see clear value in providing feedback are more likely to respond. Encourage responses by explaining how feedback will improve future events or directly benefit attendees.
  • Survey Timing: Surveys completed immediately after the session often yield higher response rates compared to post-event follow-ups.
  • Survey Method: Digital surveys during sessions (via QR codes or event apps) generally have better uptake than emails sent post-event. Brief evaluations embedded into the session close can also boost participation.
  • Incentives: Offering small but meaningful incentives (e.g., access to session slides, raffle entries) can significantly increase response rates.
  • Participant Burden: Keep surveys short and focused (10-20 questions) to maximize completions.

For efficiency’s sake, some questions can be combined (“knowledge and skills” or “knowledgeable, energetic, and enthusiastic”). Just keep in mind that if you combine descriptors, you lose the ability to discriminate between them.

Finally, not every event needs a Level 5 (ROI) evaluation; reserve that effort for the programs that matter most.

Supplemental Questions

I often supplement specific questions like those above with open-ended questions that provide additional insights:

  • What about this program was most useful to you?
  • What about this program was least useful to you?
  • How might this program be improved to make it more relevant to your job?
  • Would you recommend this learning program to a peer? Why?

The last question (“Would you recommend this learning program to a peer?”) aligns with the Net Promoter Score (NPS) framework and can be considered a measure of perceived value. However, be wary of relying on NPS-type questions for program evaluations. Perceived value does not indicate achievement of key learning outcomes such as knowledge retention, behavior change, or business impact. NPS-type questions should only be used in combination with other qualitative and quantitative metrics that directly measure those outcomes.
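
For completeness, here is how NPS is conventionally computed from a 0-10 recommendation question: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch; the scores are hypothetical:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical 0-10 responses to "Would you recommend this program?"
print(nps([10, 9, 8, 7, 6, 9, 5, 10, 8, 3]))  # -> 10.0
```

As cautioned above, treat the result as one signal of perceived value, not a measure of learning outcomes.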

Satisfaction is Unsatisfactory

Notice that none of these questions ask about participant “satisfaction”. That’s because “satisfaction” is highly subjective and often influenced by conscious and unconscious biases. “Satisfaction” also does not correlate with learner or business outcomes. Simply put, “satisfaction” is an unsatisfactory metric and provides no actionable insights.

For added emphasis, let’s compare two promotional claims for an event:

  • “90% of our attendees were satisfied with the learning opportunities we provided.”
  • “90% of our attendees gained knowledge and skills that will directly enhance their performance and their organization’s success.”

Which event would you choose to attend?

Measuring What Matters

Effective program evaluation is not just about checking boxes; it’s about creating a clear and measurable link between your learning programs and the outcomes they are designed to achieve.

By aligning your evaluation questions with different levels, you can gather actionable insights that go beyond subjective satisfaction metrics.

Whether your goal is to improve content relevance, enhance learning retention, drive workplace behavior change, demonstrate business impact, or justify investment, a thoughtful approach to evaluation ensures you can make data-informed decisions to continually optimize your programs.

Remember, the ultimate goal of evaluation is to demonstrate value—not only to participants but to your organization and stakeholders.

The third and final article in this series on event program evaluations explores practical strategies for implementing and leveraging post-event insights, highlighting actionable steps to transform attendee feedback and data into informed decisions that enhance the value and impact of future events.

For a customizable template of recommended program evaluation questions, email: [email protected]. Use “Program Evaluation Template” in the Subject line.

John Nawn is a business strategist and thought leader in event-based learning program evaluation. With expertise as a trained psychometrician specializing in the measurement of human behavior, John leverages decades of experience in data-driven decision-making to help event professionals assess and enhance the value of their programs. His work emphasizes aligning educational outcomes with attendee and organizational goals, transforming events into strategic opportunities for growth and engagement.
