Does It Impact Student Learning Outcomes? Evaluating Professional Development

This post was first published on my company's website blog and on Medium.

Modern life, and the fast pace set by rapidly evolving fields such as science and technology, has imposed a "continuous improvement" imperative on organizations and individuals alike. One facet of continuous improvement is formal professional development. In institutions of learning in particular, however, the formal professional development of teaching faculty often goes unevaluated. Where evaluation does occur, it typically gauges the faculty's reaction and, at best, the degree of their learning during or immediately after the professional development event. Rarely does it gauge the impact of the professional development on participants' behavior or on-the-job performance, and almost never does it collect evidence from which to draw plausible conclusions about the impact on student learning outcomes. This is partly because measuring the impact on teacher or instructor performance and on student learning outcomes is complex, and partly because institutions' professional development programs are not grounded in their unique needs for improving student learning outcomes. Yet training has no value unless what is learned is applied on the job, and the subsequent on-the-job performance contributes to key organizational outcomes.

This post will highlight an influential learning evaluation model from the corporate world and then present an adaptation of that model for the education sector.

Learning Evaluation in the Corporate World

The Kirkpatrick Model's four levels of learning have been very influential in measuring the impact of professional development in the corporate world. Its four levels, used by many learning and development professionals, are:

1. Reaction: the degree to which participants react favorably to the learning event.
2. Learning: the degree to which participants acquire the intended knowledge, skills, and attitudes based on their participation in the learning event.
3. Behavior: the degree to which participants apply what they learned during training when they are back on the job.
4. Results: the degree to which targeted outcomes occur as a result of the learning event(s) and subsequent reinforcement.
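For readers who like to see structure expressed as code, here is a minimal sketch of the four levels as a Python data structure. The class and the example measures are my own illustration, not part of the Kirkpatrick Model itself:

```python
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    """The four Kirkpatrick levels (names and structure are illustrative)."""
    REACTION = 1   # Did participants react favorably to the learning event?
    LEARNING = 2   # Did they acquire the intended knowledge, skills, attitudes?
    BEHAVIOR = 3   # Do they apply what they learned back on the job?
    RESULTS  = 4   # Did targeted organizational outcomes occur?

# A hypothetical evaluation plan: one measure per level for a single program.
evaluation_plan = {
    KirkpatrickLevel.REACTION: "post-session satisfaction survey",
    KirkpatrickLevel.LEARNING: "pre/post skills assessment",
    KirkpatrickLevel.BEHAVIOR: "on-the-job observation after three months",
    KirkpatrickLevel.RESULTS:  "year-over-year organizational outcome data",
}

for level in KirkpatrickLevel:
    print(f"Level {level.value} ({level.name.title()}): {evaluation_plan[level]}")
```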

However, the model itself did not explain the many instances where a participant learns from the professional development but cannot implement that learning on the job because of work-environment constraints, from restrictive policies to a lack of resources and encouragement. This is why the New World Kirkpatrick Model added the element of "Required Drivers" at Level 3, Behavior. Required drivers are processes and systems that reinforce, monitor, encourage, and reward the performance of critical behaviors on the job. Common examples include job aids, coaching, work review, pay-for-performance systems, and recognition for a job well done. They decrease the likelihood of people falling through the cracks, or deliberately crawling through the cracks if they are not interested in performing the required behaviors.

Indeed, organizations that reinforce the knowledge and skills learned during training with accountability and support systems can expect as much as 85% application on the job. Conversely, companies that rely primarily on training events alone to create good job performance achieve around a 15% success rate (Brinkerhoff, “Telling Training’s Story,” 2006).

Levels 1 and 2 of the New World Kirkpatrick Model provide data related to effective training. These levels measure the quality of the training and the degree to which it resulted in knowledge and skills that can be applied on the job. These measurements are useful primarily to the training function to internally measure the quality of the programs they design and deliver. Levels 3 and 4 provide the needed data related to training effectiveness. These levels measure on-the-job performance and subsequent business results that occur, in part, due to training and reinforcement. Training effectiveness data is key to demonstrating the value that the training has contributed to the organization and is typically the type of data that stakeholders find valuable.

Putting It All Together

Two caveats are in order here. First, professional development needs to be planned backwards: a learning professional should start with the desired Results (Level 4) and work back to design the desired Behavior (Level 3), the desired Learning (Level 2), and the desired Reaction (Level 1). When you start training with a focus on the Level 4 Results you need to accomplish, efforts are automatically focused on what is most important. Conversely, if you follow the common, old-school approach to planning and implementing training, thinking first about how you will evaluate Level 1 Reaction, then Level 2 Learning, then Level 3 Behavior, it is easy to see why few people ever get to Level 4 Results.

Second, Level 4, Results, is the most misunderstood of all the levels. A common misapplication occurs when professionals or functional departments define results in terms of their own small area of the organization instead of globally for the entire company. This creates silos and fiefdoms that are counterproductive to organizational effectiveness, and the resulting misalignment causes layers upon layers of dysfunction and waste. Clarity about an organization's true Level 4 Result is therefore critical. By definition, it is some combination of the organizational purpose and mission: in a for-profit company, profitably delivering the product or service to the marketplace; in a not-for-profit, government, or military organization, accomplishing the mission. Every organization has just one Level 4 Result. A good test of whether the correct Level 4 Result has been identified is a positive answer to the question, “Is this what the organization exists to do, deliver, or contribute?”

Professional Development Evaluation in Education

The Kirkpatrick Model of learning evaluation was adapted for education by Guskey (2004). Guskey also recognized that the original Kirkpatrick Model did not take into account the ecological barriers and facilitators of the work environment, which is why he added a level (at Level 3), "Organization Support and Change." Guskey (2004) claims that by using five critical levels of evaluation, you can improve your school’s professional development program, provided you start with the desired result: improved student outcomes.

Good evaluations don’t have to be complicated. They simply require thoughtful planning, the ability to ask good questions, and a basic understanding of how to find valid answers. What’s more, they can provide meaningful information that you can use to make thoughtful, responsible decisions about professional development processes and effects.

Level 1: Participants’ Reactions

The first level of evaluation looks at participants’ reactions to the professional development experience. This is the most common form of professional development evaluation, and the easiest type of information to gather and analyze. At Level 1, you address questions focusing on whether or not participants liked the experience. Did they feel their time was well spent? Did the material make sense to them? Were the activities well planned and meaningful? Was the leader knowledgeable and helpful? Did the participants find the information useful? Important questions for professional development workshops and seminars also include: Was the coffee hot and ready on time? Was the room at the right temperature? Were the chairs comfortable? To some, questions such as these may seem silly and inconsequential, but experienced professional developers know the importance of attending to these basic human needs.

Some educators refer to these measures of participants’ reactions as “happiness quotients,” insisting that they reveal only the entertainment value of an activity, not its quality or worth. But measuring participants’ initial satisfaction with the experience can help you improve the design and delivery of programs or activities in valid ways.

Level 2: Participants’ Learning

In addition to liking their professional development experience, we also hope that participants learn something from it. Level 2 focuses on measuring the knowledge and skills that participants gained. Depending on the goals of the program or activity, this can involve anything from a pencil-and-paper assessment (Can participants describe the crucial attributes of mastery learning and give examples of how these might be applied in typical classroom situations?) to a simulation or full-scale skill demonstration (Presented with a variety of classroom conflicts, can participants diagnose each situation and then prescribe and carry out a fair and workable solution?). You can also use oral personal reflections or portfolios that participants assemble to document their learning.

Although you can usually gather Level 2 evaluation information at the completion of a professional development activity, it requires more than a standardized form. Measures must show attainment of specific learning goals. This means that indicators of successful learning need to be outlined before activities begin. You can use this information as a basis for improving the content, format, and organization of the program or activities.

Level 3: Organization Support and Change

This is the level that Guskey (2004) added to the Kirkpatrick Model. At Level 3, the focus shifts to the organization. Lack of organization support and change can subvert any professional development effort, even when all the individual facets of professional development are done right. Suppose, for example, that several secondary school educators participate in a professional development program on cooperative learning. They gain a thorough understanding of the theory and develop a variety of classroom activities based on cooperative learning principles. Following their training, they try to implement these activities in schools where students are graded “on the curve”—according to their relative standing among classmates—and great importance is attached to selecting the class valedictorian. Organization policies and practices such as these make learning highly competitive and will thwart the most valiant efforts to have students cooperate and help one another learn (Guskey, 2000b). The lack of positive results in this case doesn’t reflect poor training or inadequate learning, but rather organization policies that undermine implementation efforts. Problems at Level 3 have essentially canceled the gains made at Levels 1 and 2 (Sparks & Hirsh, 1997). That’s why professional development evaluations must include information on organization support and change.

At Level 3, you need to focus on questions about the organization characteristics and attributes necessary for success. Did the professional development activities promote changes that were aligned with the mission of the school and district? Were changes at the individual level encouraged and supported at all levels? Were sufficient resources made available, including time for sharing and reflection? Were successes recognized and shared? Issues such as these can play a large part in determining the success of any professional development effort.

Gathering information at Level 3 is generally more complicated than at previous levels. Procedures differ depending on the goals of the program or activity. They may involve analyzing district or school records, examining the minutes from follow-up meetings, administering questionnaires, and interviewing participants and school administrators. You can use this information not only to document and improve organization support but also to inform future change initiatives.

Level 4: Participants’ Use of New Knowledge and Skills

At Level 4 we ask, Did the new knowledge and skills that participants learned make a difference in their professional practice? The key to gathering relevant information at this level rests in specifying clear indicators of both the degree and the quality of implementation. Unlike Levels 1 and 2, this information cannot be gathered at the end of a professional development session. Enough time must pass to allow participants to adapt the new ideas and practices to their settings. Because implementation is often a gradual and uneven process, you may also need to measure progress at several time intervals.

You may gather this information through questionnaires or structured interviews with participants and their supervisors, oral or written personal reflections, or examination of participants’ journals or portfolios. The most accurate information typically comes from direct observations, either with trained observers or by reviewing video- or audiotapes. These observations, however, should be kept as unobtrusive as possible (for examples, see Hall & Hord, 1987).

You can analyze this information to help restructure future programs and activities to facilitate better and more consistent implementation.

Level 5: Student Learning Outcomes

Level 5 addresses “the bottom line”: How did the professional development activity affect students? Did it benefit them in any way? The particular student learning outcomes of interest depend, of course, on the goals of that specific professional development effort.

In addition to the stated goals, the activity may result in important unintended outcomes. For this reason, evaluations should always include multiple measures of student learning (Joyce, 1993). Consider, for example, elementary school educators who participate in study groups dedicated to finding ways to improve the quality of students’ writing and devise a series of strategies that they believe will work for their students. In gathering Level 5 information, they find that their students’ scores on measures of writing ability over the school year increased significantly compared with those of comparable students whose teachers did not use these strategies.

On further analysis, however, they discover that their students’ scores on mathematics achievement declined compared with those of the other students. This unintended outcome apparently occurred because the teachers inadvertently sacrificed instructional time in mathematics to provide more time for writing. Had information at Level 5 been restricted to the single measure of students’ writing, this important unintended result might have gone unnoticed.

Measures of student learning typically include cognitive indicators of student performance and achievement, such as portfolio evaluations, grades, and scores from standardized tests. In addition, you may want to measure affective outcomes (attitudes and dispositions) and psychomotor outcomes (skills and behaviors). Examples include students’ self-concepts, study habits, school attendance, homework completion rates, and classroom behaviors.

You can also consider such school-wide indicators as enrollment in advanced classes, memberships in honor societies, participation in school related activities, disciplinary actions, and retention or dropout rates. Student and school records provide the majority of such information. You can also include results from questionnaires and structured interviews with students, parents, teachers, and administrators.

Level 5 information about a program’s overall impact can guide improvements in all aspects of professional development, including program design, implementation, and follow-up. In some cases, information on student learning outcomes is used to estimate the cost effectiveness of professional development, sometimes referred to as “return on investment” or “ROI evaluation” (Parry, 1996; Todnem & Warner, 1993).
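As a rough illustration of the ROI idea, one common formulation divides net program benefits by program costs. The figures below are invented for the example, not drawn from any study:

```python
# Hypothetical figures, purely for illustration.
program_cost = 40_000.0       # workshops, coaching time, substitute coverage
estimated_benefit = 66_000.0  # monetized gains attributed to the program

# ROI as a percentage: net benefit relative to cost.
roi_percent = (estimated_benefit - program_cost) / program_cost * 100
print(f"ROI: {roi_percent:.0f}%")  # -> ROI: 65%
```

The hard part in practice is not the arithmetic but defensibly monetizing the benefit side, which is why the sources cited above treat ROI evaluation as an estimate rather than an exact accounting.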

Look for Evidence, Not Proof

Using these five levels of information in professional development evaluations, are you ready to “prove” that professional development programs make a difference? Can you now demonstrate that a particular professional development program, and nothing else, is solely responsible for the school’s 10 percent increase in student achievement scores or its 50 percent reduction in discipline referrals?

Of course not. Nearly all professional development takes place in real-world settings. The relationship between professional development and improvements in student learning in these real-world settings is far too complex and includes too many intervening variables to permit simple causal inferences (Guskey, 1997; Guskey & Sparks, 1996). What’s more, most schools are engaged in systemic reform initiatives that involve the simultaneous implementation of multiple innovations (Fullan, 1992). Isolating the effects of a single program or activity under such conditions is usually impossible.

But in the absence of proof, you can collect good evidence about whether a professional development program has contributed to specific gains in student learning. Superintendents, board members, and parents rarely ask, “Can you prove it?” Instead, they ask for evidence. Above all, be sure to gather evidence on measures that are meaningful to stakeholders in the evaluation process.

Consider, for example, the use of anecdotes and testimonials. From a methodological perspective, they are a poor source of data. They are typically highly subjective, and they may be inconsistent and unreliable. Nevertheless, as any trial attorney will tell you, they offer the kind of personalized evidence that most people believe, and they should not be ignored as a source of information. Of course, anecdotes and testimonials should never form the basis of an entire evaluation. Setting up meaningful comparison groups and using appropriate pre- and post-measures provide valuable information. Time-series designs that include multiple measures collected before and after implementation are another useful alternative.
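To make the comparison-group idea concrete, here is a minimal sketch, with fabricated scores, of comparing mean pre-to-post gains between a participating group and a comparison group (essentially the simplest form of a difference-in-differences comparison):

```python
# Fabricated writing-assessment scores, for illustration only.
treatment_pre   = [62, 58, 71, 65, 60]  # teachers who joined the program
treatment_post  = [74, 70, 80, 75, 72]
comparison_pre  = [61, 59, 70, 66, 62]  # comparable classes, no program
comparison_post = [64, 61, 72, 68, 63]

def mean_gain(pre, post):
    """Average per-student pre-to-post change."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Gain attributable (in part) to the program: treatment gain minus comparison gain.
diff = mean_gain(treatment_pre, treatment_post) - mean_gain(comparison_pre, comparison_post)
print(f"Gain difference (treatment vs. comparison): {diff:.1f} points")  # -> 9.0 points
```

A real evaluation would, of course, add significance testing and controls for confounding variables; this sketch only shows the logic of using a comparison group rather than raw post-scores alone.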

Working Backward Through the Five Levels

Three important implications stem from this model for evaluating professional development. First, each of these five levels is important. The information gathered at each level provides vital data for improving the quality of professional development programs.

Second, tracking effectiveness at one level tells you nothing about the impact at the next. Although success at an early level may be necessary for positive results at the next higher one, it’s clearly not sufficient. Breakdowns can occur at any point along the way. It’s important to be aware of the difficulties involved in moving from professional development experiences (Level 1) to improvements in student learning (Level 5) and to plan for the time and effort required to build this connection.

The third implication, and perhaps the most important, is this: In planning professional development to improve student learning, the order of these levels must be reversed. You must plan “backward” (Guskey, 2001), starting where you want to end and then working back.

In backward planning, you first consider the student learning outcomes that you want to achieve (Level 5). For example, do you want to improve students’ reading comprehension, enhance their skills in problem solving, develop their sense of confidence in learning situations, or improve their collaboration with classmates? Critical analyses of relevant data from assessments of student learning, examples of student work, and school records are especially useful in identifying these student learning goals.

Then you determine, on the basis of pertinent research evidence, what instructional practices and policies will most effectively and efficiently produce those outcomes (Level 4). You need to ask, What evidence verifies that these particular practices and policies will lead to the desired results? How good or reliable is that evidence? Was it gathered in a context similar to ours? Watch out for popular innovations that are more opinion-based than research-based, promoted by people more concerned with “what sells” than with “what works.” You need to be cautious before jumping on any education bandwagon, always making sure that trustworthy evidence validates whatever approach you choose.

Next, consider what aspects of organization support need to be in place for those practices and policies to be implemented (Level 3). Sometimes, as mentioned earlier, aspects of the organization actually pose barriers to implementation. “No tolerance” policies regarding student discipline and grading, for example, may limit teachers’ options in dealing with students’ behavioral or learning problems. A big part of planning involves ensuring that organization elements are in place to support the desired practices and policies.

Then, decide what knowledge and skills the participating professionals must have to implement the prescribed practices and policies (Level 2). What must they know and be able to do to successfully adapt the innovation to their specific situation and bring about the sought-after change?

Finally, consider what set of experiences will enable participants to acquire the needed knowledge and skills (Level 1). Workshops and seminars, especially when paired with collaborative planning and structured opportunities for practice with feedback, action research projects, organized study groups, and a wide range of other activities can all be effective, depending on the specified purpose of the professional development.

This backward planning process is so important because the decisions made at each level profoundly affect those at the next. For example, the particular student learning outcomes you want to achieve influence the kinds of practices and policies you implement. Likewise, the practices and policies you want to implement influence the kinds of organization support or change required, and so on.
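To summarize the sequence, here is an illustrative sketch of backward planning as an ordered checklist walked from Level 5 down to Level 1. The function and the wording of the prompts are mine, paraphrased from the discussion above:

```python
# Guskey's backward-planning order, Level 5 down to Level 1.
BACKWARD_PLANNING_STEPS = [
    (5, "Student learning outcomes",
        "What student outcomes do we want to achieve?"),
    (4, "Use of new knowledge and skills",
        "What practices and policies will most effectively produce those outcomes?"),
    (3, "Organization support and change",
        "What organizational supports must be in place for implementation?"),
    (2, "Participants' learning",
        "What must participants know and be able to do?"),
    (1, "Participants' reactions",
        "What experiences will enable participants to acquire that knowledge?"),
]

def plan_backward(goal: str) -> None:
    """Print the planning prompts in backward (5 -> 1) order for a given goal."""
    print(f"Goal: {goal}")
    for level, name, prompt in BACKWARD_PLANNING_STEPS:
        print(f"  Level {level} ({name}): {prompt}")

plan_backward("Improve students' reading comprehension")
```

Note that evaluation later runs in the opposite direction, from Level 1 upward, which is exactly why the planning decisions at each level constrain the next.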

The context-specific nature of this work complicates matters further. Even if we agree on the student learning outcomes that we want to achieve, what works best in one context with a particular community of educators and a particular group of students might not work as well in another context with different educators and different students. This is what makes developing examples of truly universal “best practices” in professional development so difficult. What works always depends on where, when, and with whom.

Making Evaluation Central!

A lot of good things are done in the name of professional development. But so are a lot of rotten things. What educators haven’t done is provide evidence to document the difference between the two. Evaluation provides the key to making that distinction. By including systematic information gathering and analysis as a central component of all professional development activities, we can enhance the success of professional development efforts everywhere.

One Last Thought

Like Kirkpatrick and Guskey, we believe that the work ecology has a significant impact on the success of professional development, and hence on student learning outcomes. However, we disagree with Guskey on the need to add a level for organization support and change. Adding such a level assumes that organization support is inherent to the professional learning program, and it may even hold learning professionals accountable for factors beyond their control. By treating organizational supports instead as required drivers for success, we can distinguish the evaluation inherent to the professional development itself from the external drivers that must be in place to make it successful.
