Dr. Jeff Sheldon discusses building the evaluation capacity of international school leaders on-line and asynchronously: yes, it can be done – Part 2
Jeff Sheldon, Ed.M., Ph.D.
Social Scientist: Applied Research, Evaluation, and Learning | Project Manager | Educator | Technical Assistant | Coach | Data Analyst | Peer Reviewer/Editor | RFP Proposal Developer/Grant Writer | Author | Leader
Introduction
In the first part of this two-part series I described the evaluation courses I taught at the University of San Diego, elaborated on the essentials of an on-line course, and then discussed course machinations. If you’ve ever taught on-line, the latter two aspects of that piece were likely a familiar refrain; only the course descriptions were probably new to you. For those of you who haven’t taught on-line but are considering that form of higher education purgatory, the information provided therein should have given you an idea of what to expect. Here I discuss how to build evaluation capacity, hence competency, through knowledge, skill, and self-efficacy development, all on-line. Yes, it can be done and done successfully.
Building Evaluation Capacity: Knowledge and Self-Efficacy
Knowledge
That background, while a necessary part of the discussion, was merely the prelude to the main point of this article: the ways in which evaluation capacity – knowledge and skill competencies along with self-efficacy – can be built on-line. The process of building evaluation capacity provides a context in which student empowerment is fostered such that students are better able to produce evaluations and consume evaluation findings. Knowledge about evaluation (i.e., how it started and how it’s evolved, what it is, how it’s used, how it’s done, how it’s become professionalized, its future, etc.) is, relatively speaking, the easiest capacity to build. Providing knowledge and opportunities to acquire knowledge is mostly rhetorical and topical, and students can apply that knowledge, as a form of self-efficacy, through various assessments that allow them to exercise and integrate it in their own contexts and experiences. Self-efficacy, by definition, is a person’s appraisal of their ability, and their motivation and agency, to change their life circumstances – in this case, to be able to consume and produce evaluations. Specifically, self-efficacy refers to how people perceive the challenges they face and how they begin to appraise their ability to influence social and political systems.

In that regard, the knowledge-based topics I covered in LEAD 609 included: an introduction and overview of evaluation; evaluation audiences and stakeholders; the American Evaluation Association’s standards, guiding principles, and cultural competencies; evaluation models and approaches; theories used in evaluation; theory-driven evaluation; organizational learning and creating a learning organization through evaluative inquiry; assessing, as an evaluation consumer, six exemplary evaluations from the Community Tool Box (and one not-so-exemplary evaluation for context); and an overview of evaluation methods and design.
Self-efficacy: Knowledge
To introduce self-efficacy and assess knowledge, assignments included:

- writing about prior experience with evaluation of any kind (i.e., professional or personal);
- describing the ways in which evaluation might be incorporated into their own work and that of their organization, and the ways in which evaluation might be used and by whom;
- identifying likely evaluation audiences – by type – in their own context and why, who the stakeholders are likely to be at each level and why, and the necessary attendant cultural competencies;
- writing about, individually and collectively, which evaluation branch and model is most resonant and why;
- determining which evaluation model is the best fit within their context and why;
- creating a theoretical model of a program they are familiar with that shows both the action and change models along with inputs, outputs, outcomes, and ultimately, impacts (sketched below);
- identifying the indicators of, and elements of, organizational learning in evidence in their context;
- analyzing an evaluation and describing its purpose, how it was used and why, and for whom it was important;
- reading an evaluation and describing the program, its goals, its theory, what the stakeholders wanted to know and the evaluation questions that were asked (i.e., the purpose of the evaluation), who was involved, the type of evaluation that was conducted (e.g., participatory), the type of data that was collected (i.e., quantitative, qualitative, or both), how the data were collected, the results and the audience to whom the results were disseminated, and how the evaluation was ultimately used;
- discussing whether they lean toward qualitative methods, quantitative methods, or a bit of both, and why;
- and last, considering a program they are involved with or familiar with, developing one process and one outcome evaluation question for something they would like to know about that program, and identifying the best methods for answering those two questions.

All of this foundational knowledge, however, was only the first step in developing evaluation capacity, as the next course built on it to develop the skills and self-efficacy necessary for evaluation production.
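To make the logic-model assignment concrete, here is a minimal sketch of the kind of simple, left-to-right linear program theory students were asked to produce. The program name and every element in it are hypothetical, invented purely for illustration; this is not one of the students’ actual models, just one plausible way to lay the pieces out.

```python
# Hypothetical example only: a simple, left-to-right linear program theory
# (logic model) for an invented after-school tutoring program.
# Each stage feeds the next: inputs -> activities -> outputs -> outcomes -> impact.
logic_model = {
    "program": "After-School Tutoring (hypothetical)",
    "inputs": ["funding", "certified tutors", "classroom space"],
    "activities": ["three tutoring sessions per week", "monthly parent check-ins"],
    "outputs": ["120 students served", "90% session attendance"],
    "outcomes": {
        "proximal": ["homework completion improves"],
        "intermediate": ["course grades improve"],
        "distal": ["students promoted on time"],
    },
    "impact": ["higher graduation rates in the served community"],
}

if __name__ == "__main__":
    # Walk the model left to right, printing each stage and its elements.
    for stage in ("inputs", "activities", "outputs", "outcomes", "impact"):
        print(f"{stage:>10}: {logic_model[stage]}")
```

The point of the exercise is the left-to-right chain itself: students have to articulate how each stage plausibly produces the next before they ever write an evaluation question.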
Building Evaluation Capacity: Skills and Self-Efficacy
Skills
While LEAD 609 focused on evaluation theory, LEAD 614 focused on application, which meant moving beyond evaluation knowledge and into evaluation skill competencies while continuing to build self-efficacy. The topics covered in LEAD 614 included: ethics in evaluation and research using CITI course materials (i.e., human subjects protection, IRB, informed consent, and data security); an overview of mixed methods; quantitative methods useful in evaluation (i.e., survey research); qualitative methods useful in evaluation (i.e., interviews); mixed-methods designs (i.e., putting QUAL and QUANT methods together sequentially in an evaluation study); and the peer review process. So, how to develop skills and self-efficacy? Well, the rest of the course was dedicated to developing an evaluation proposal (5 – 7 pages) that would include and build on elements of all the previous material covered in both LEAD 609 and 614 (minus a budget, which was not covered during the course but in hindsight should have been – next time), followed by the production of a five-minute “big pitch” video selling their mixed-methods proposal, incorporating a survey and an interview, to a mock evaluation RFP committee.
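As an illustration of the sequential QUANT–QUAL designs the course covered, the sketch below shows one common pattern: results from a quantitative survey phase are used to select participants for follow-up interviews in the qualitative phase. All of the data, respondent identifiers, and thresholds here are hypothetical and are not drawn from the course materials; it is only meant to show what “putting the methods together sequentially” can look like in practice.

```python
# Hypothetical sketch of an explanatory sequential mixed-methods design:
# a quantitative survey phase (QUANT) followed by a qualitative interview
# phase (QUAL) whose purposive sample is chosen from the survey results.
from statistics import mean

# Phase 1 (QUANT): mean scores on a hypothetical 5-point satisfaction scale.
survey_scores = {
    "resp_01": 4.6, "resp_02": 2.1, "resp_03": 3.8,
    "resp_04": 1.9, "resp_05": 4.9, "resp_06": 3.2,
}

overall_mean = mean(survey_scores.values())
print(f"Phase 1 mean satisfaction: {overall_mean:.2f}")

# Phase 2 (QUAL): purposively sample the highest- and lowest-scoring
# respondents for semi-structured follow-up interviews, so the qualitative
# phase can help explain the quantitative results.
ranked = sorted(survey_scores, key=survey_scores.get)
interviewees = ranked[:2] + ranked[-2:]   # two lowest + two highest scorers
print("Invite to interview:", interviewees)
```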
Self-efficacy: Skills
Let me take you through the linear, seven-step sequence used to get students to the final proposal and, ultimately, the “big pitch.” Their first assignment was to choose a program (or aspect of a program) they would be interested in evaluating and provide an overview of that program, including its name, where it’s located, who it serves, who runs it, how long it’s been in existence, what it does, etc. They were not asked to provide the program theory, just the overview, as they would include action and change models in their proposal. They also needed to consider who their human subjects might be, the potential risks to those subjects, how they would be protected, and any special considerations that might need to be taken into account (e.g., special classes of subjects).

Second, based on the program they were going to evaluate, they were asked to choose two measurable, quantifiable constructs from the literature and provide 3 – 5 survey or scale questions for each of those constructs (no more than 10 total), the types of responses that would make the most sense for their survey/scale items, and the scale type (e.g., Likert-type, dichotomous, open-ended, some combination thereof, etc.). An important note here is that teaching a survey/scale approach to quantitative research made sense because it is the approach most doctoral students use in their dissertation studies if the study is either quantitative or mixed-methods.

Third, based on the program they were going to evaluate and their two measurable constructs, I asked them to provide 3 – 5 interview questions for each construct (no more than 10 total) and explain whether the interview protocol would be open-ended, structured, or semi-structured and why that would be the best option.

The fourth assignment was to write an outline of the evaluation they were planning to conduct using the 16 elements of a proposal I provided (see below). However, because we were in the evaluation development stage, if they weren’t sure how to address any of the elements they could say “I’m not sure,” which would buy them time to think it through for inclusion in the final proposal. The purpose of this exercise was to get them to think not just about the whole of the evaluation but about each of the elements that comprise an evaluation, what each contributes individually, and how each contributes to the whole in a linearly linked way (i.e., the scientific method applied to the social sciences). As I told them, they might not have known everything at that moment, but the important thing was to begin thinking it through and planning accordingly. The following are the 16 elements I wanted them to include in their proposals and “big pitch”:
1. Description of the program or aspect of the program to be evaluated.
2. Why an evaluation is necessary.
3. Program theory (includes a very simple, left-to-right, linear model, as they practiced).
4. Purpose of the evaluation and which aspects of the program will be evaluated.
5. Context in which the evaluation will take place and whether the context is conducive to an evaluation.
6. Cultural competencies required for the evaluation.
7. Stakeholders and primary audience of the evaluation.
8. Potential users and uses/utilization of the evaluation.
9. Constructs to be measured.
10. Evaluation questions (based on the program theory).
11. Evaluation model/approach that would work best for the context, stakeholders, and your evaluation ethos/value system.
12. Human subjects and why they might need protecting.
13. Evaluation design (i.e., qual.–quant., quant.–qual., concurrent quant.–qual., etc.).
14. Methods.
15. Proposed analytical strategy (a small illustrative sketch follows this list).
16. How findings will likely be disseminated and to whom.
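To ground elements 9, 14, and 15 (and the survey-construction assignment described above), here is a minimal sketch of how two measurable constructs, a few Likert-type items for each, and a first-pass analytical strategy might be laid out. The constructs, item wording, and response data are hypothetical and are not taken from any student’s proposal; the analysis shown is only simple descriptives (item means and per-respondent scale scores), one plausible starting point among many.

```python
# Hypothetical sketch: two constructs, a few Likert-type items per construct,
# and a simple descriptive first pass (item means and per-construct scale scores).
from statistics import mean

# Constructs and items are invented for illustration only.
survey = {
    "teacher_self_efficacy": [
        "I am confident I can use evaluation findings in my planning.",
        "I can explain our program's theory of change to a colleague.",
        "I can design a short survey for my own classroom.",
    ],
    "organizational_learning": [
        "My school uses data when making program decisions.",
        "Staff regularly discuss what is and is not working.",
        "Evaluation results are shared openly across the school.",
    ],
}

# Responses on a 1-5 Likert-type scale (strongly disagree .. strongly agree);
# one list of item scores per respondent, per construct.
responses = {
    "teacher_self_efficacy": [[4, 5, 3], [3, 4, 4], [5, 5, 4]],
    "organizational_learning": [[2, 3, 3], [4, 4, 5], [3, 3, 4]],
}

for construct, rows in responses.items():
    item_means = [mean(col) for col in zip(*rows)]   # mean for each item
    scale_scores = [mean(row) for row in rows]       # scale score per respondent
    print(construct)
    print("  item means:", [round(m, 2) for m in item_means])
    print("  scale mean:", round(mean(scale_scores), 2))
```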
The fifth assignment was broken into two parts. For the first part, based on their proposal outline, students had to write a rough draft of their proposal, making sure to follow APA guidelines. For the second part, based on peer reviewing guidelines I provided (see below), they had to peer review two draft proposals to which they had been randomly assigned. Peer reviewing is an important professional evaluation skill, so it was included in the curriculum as part of my pedagogical approach. The peer review guidelines were as follows:
1. Read the proposal with a clear, objective, critical eye from the perspective of an evaluation expert.
2. Make sure the author has included all the required elements and make note of those that are missing or incomplete.
3. Because you’re reviewing an evaluation proposal there will be no results, discussion, or conclusion sections, and likely no statistic/data tables or graphs. Everything else is in play, along with the elements unique to these proposals.
4. Make sure you start your review with what’s good or positive about the proposal and what you liked.
5. Be thoughtful in providing constructive feedback that is instructive.
6. Feedback should be provided about information you didn’t understand, what wasn’t readily clear to you, what was missing that you thought should have been included, and any questions raised about any aspect of the proposal that should have been addressed in the proposal.
7. Cite specific examples from the proposal, including paragraph and page. For example, “On page two I noticed that your theoretical program model didn’t show any outcomes. A theoretical program model typically shows different types of outcomes (i.e., proximal, intermediate, and distal) that indicate….”
For the sixth assignment students were to write the final draft of their proposal, with all elements included, as informed by peer review feedback as well as feedback from me; there was no class that week, so they had nearly two weeks to complete the final draft. The seventh assignment was in two parts. In part one, students had to create a “big pitch” video for their evaluation proposal. I asked them to imagine that they were standing before representatives of the organization running the program they were proposing to evaluate and to explain why they were the best candidate (i.e., they had the best design, the best methods, etc.). They were told there were numerous evaluators vying for the same evaluation contract, so the organization was giving them only five minutes to hear their “big pitch.” In real life we typically don’t create a “big pitch” video, as the proposal serves that purpose, but the competition is no less fierce and there is no money for coming in second place. For part two, their final assignment, I asked them to act as a representative of two different organizations (randomly assigned by me) sitting on an RFP committee to which evaluation proposals were being pitched. Their job was to watch two different “big pitch” videos created by their colleagues and render a judgment, with clear justification: contract or no contract.
Summary
And that’s how you build evaluation capacity – knowledge and skill competencies along with self-efficacy – all on-line. To me, the most important aspect of the two courses was inculcating an evaluator’s mindset by taking students through the same linear sequence of thought exercises and evaluation activities they would go through if they were developing a proposal to bid on a real evaluation contract, while capitalizing on all the knowledge accrued during the first course. Not only did they complete the course with two products (i.e., an evaluation proposal and a pitch video), but there was also an affective component to all of this: they viscerally felt what it’s like to go through a process that is often fraught and challenging, but ever so satisfying when you develop a really good final product that you know you can execute if need be. The bottom line is that if you want to be a better evaluation consumer or producer, you have to be able to think like an evaluator, do what an evaluator does, and feel what an evaluator feels throughout the process. It’s challenging enough to teach people how to do this in person, and even more challenging on-line because it is, and I won’t lie to you, a lot of work in both planning and implementation. Regardless, I hope these two pieces provided you with some insights and ideas should you ever be tasked with teaching evaluation on-line. Is it the ideal way to develop evaluation capacity? No, it is not, but it can be done well, with rigour, and at a very high level.
Thanks for reading; if you have questions or comments, please leave them below. I wish you much success. Cheers.