How to Build Evidence – Some Excerpts from my Forthcoming Book on Performance Evaluation

I am completing a book manuscript entitled Delivering Measurable Performance: Performance Evaluation Methods, Strategies and Tools for Policy Makers and Program Managers, which I started in 2004. Why has it taken me so long to finish? Well, it might help to know that this is my fourth book, so I do know how to finish a book! My writing slowed because I was a single parent with four daughters, holding a full-time day job, teaching only part-time, and lacking regular faculty academic resources and summers off. But I'm close to finishing and would love some feedback. I'll be sharing occasional excerpts from the manuscript in my LinkedIn posts and look forward to comments from colleagues. I'm going to start with some "hot topics" on evaluation and evidence. I was recently intrigued by a post by Michael Quinn Patton in which he discussed ten differences among evaluators.

As the title suggests, the book is designed to link mixed research methods with both evidence-based evaluation and performance measurement. As you can imagine, this writing project has evolved as each of these fields has advanced methodologically. The purpose of the book is to codify and define the emerging methods of performance evaluation from the singular perspective of a research methodologist. The goal is to create a "commons" for performance evaluation that is shared by methodologists, program managers, policy makers, and donors.

Like most evaluators, I began from a specific and unique starting point, and my understanding of the field has evolved as my practice evolved. In graduate school I was heavily trained as a political scientist in quantitative research methods and designs, behavioralism, and the philosophy of science and social science. I had the good fortune to study with Professor John S. Jackson, a well-known political party scholar, who taught me that I would not understand or study politics well as a political scientist without participant observation, and who engaged me in the Party Elite Study he launched, which provided rigorous over-time surveys of nominating convention delegates; the late Roy E. Miller, one of the great methodologist minds (who, as a late-deafened adult, turned from political science to contribute to telecommunications for the deaf), who taught me the benefits and limits of experimental designs and engaged me in the pathbreaking 1980 field experiments; and the late Elizabeth Eames, a philosopher and renowned Bertrand Russell and John Dewey expert, who taught me to think critically about the philosophy of science. My early published scholarly work focused on institutional reforms, why social movements succeed and fail, and when and how political parties set the agenda and lead (democratic) public opinion. I also turned to qualitative methods through my studies of women's marginalization and empowerment, focusing on how women enter public life and on how focus groups and key informant interviews can add value.

I next got my footing in evaluation policy while working in Congress in the 1990s during the performance revolution, when the U.S. government discovered the distinction between outputs and outcomes and enacted the Government Performance and Results Act (GPRA). Tracking how U.S. federal agencies responded to GPRA, and the twists and turns that followed, was fascinating: President George W. Bush elevated performance as a presidential management tool with the help of the Office of Management and Budget's (OMB's) Program Assessment Rating Tool (PART), under which some programs were ridiculously rated as ineffective even though they had never been evaluated; President Obama narrowly treated randomized control trials (RCTs) as the only "gold standard"; President Trump directed the Centers for Disease Control and Prevention (CDC) to stop using the term "evidence"; and President Biden has now included social science as part of the evidence process.

My next step, as a consultant, was working in criminal and juvenile justice on prevention science and the integration of information systems for projects funded by the Department of Justice's Office of Juvenile Justice and Delinquency Prevention and the National Governors Association. I also gained experience in democracy promotion, consulting for the National Democratic Institute, one of the four core institutes of the National Endowment for Democracy (NED), and for USAID. From there I moved to international development and democracy promotion. I have also served as a reviewing evaluator for the Corporation for National and Community Service, one of the early adopters of best-practices registries. As Founding Director of the Evaluation Department at the Center for International Private Enterprise (CIPE), also a NED institute, I built and led an evaluation team working on the monitoring and assessment of private sector engagement projects in over 100 emerging democracies. I have also learned a great deal from former students I've taught at George Washington University, American University, and Georgetown University, who challenged me to explain evaluation terms in understandable language, just as Jim Rugh did for logical frameworks with his "Rosetta Stone."

So my evaluator path predates much of the recent professionalization of the program evaluation field (training and certificates from The Evaluators Institute, Humentum, and evaluation associations, and degree-granting graduate programs at a growing number of major universities), and it crosscuts many evaluation communities of practice, from education to prevention to policy reform, evaluation policy, democracy promotion, and women's empowerment, among others.

Will my book resolve all these differences that Michael Quinn Patton has identified? Probably not. I am realistic about that: the field of evaluation has very real structural differences in how evaluators are trained and in the distinct content areas where evaluators work, and there are persistent differences between pure research and applied evaluation work. But if we accept that evaluation is evolving into a transdisciplinary method, I do think some answers can be found if we aim to professionalize and elevate evaluation as a distinct methodology. If we are looking to build knowledge in accountable and transparent ways, and to move beyond treating existing differences in practice as equally valid differences in values toward considering how we balance competing values with justifications, then we can move forward on a problem-solving path.

So, more formally, what gap does my book fill? Despite the growing number of frameworks for performance evaluation, performance evaluation remains quite misunderstood as a systematic research method. This book seeks to provide that framework and introduces the Matrix Logic Model, a tool I developed to improve on the basic logic model and logframe approaches. The latter two are fine for proposals but ill-suited for developing measurable evaluation frameworks that integrate causal linkages and benchmarks for impact. But my book offers more than a tool: it provides a framework to elevate evaluation as a strategic and adaptive management resource.

Let me add some differences to Michael Quinn Patton's list of ten: For some, performance evaluation is most commonly understood as a management tool for developing and implementing evidence-based policy. For others, such as in the U.S., where several federal agencies have separated "performance evaluation" from "impact evaluation," performance evaluation is limited to the project level of analysis and centered only on project objectives rather than population-level results.

Using the lens of a research methodologist, I seek to place performance evaluation on firm ground as a social science methodology in its own right. Research methods are an essential but unrecognized part of the accountability that is promised by a focus on performance. This book comes some thirty years after performance measurement (and evaluation) entered the national scene as a major agenda item of a U.S. President (Bill Clinton), and about forty years after performance measurement (and evaluation) began to take shape from a changed policy and research environment.

In the book, I use the term performance evaluation as the broader and more proper nomenclature for the methods employed. So what is the essential difference between performance measurement, performance evaluation, and performance management?

In my view, the term "performance measurement" should be limited to the measurement process, which, while distinct, is only a part or subset of the full methodology. And the term "performance management" should be limited to how managers use performance data and methods within their teams to make data-driven executive and administrative decisions, and to the degree to which they manage projects to ensure high-quality evaluability and impact.

While some use all these terms interchangeably, there are important reasons to make the distinction.

  • Without a clear consensus over what we are focusing on and talking about, the methods are obscured. Fidelity to the methods guarantees a credible (valid and reliable) and accountable (scientifically speaking) answer.
  • Measurement is just one part of the scientific method and is uniquely derivative rather than determinative of program theory, program goals, and desired policy outcomes. Among laypersons, it has sometimes incorrectly appeared that measurement precedes science.
  • Managing for performance is a subset of management skills, but it is not a substitute for the technical skills that only evaluators possess and that should guide managers.

So the science of performance evaluation matters, because it is based on a distinct type of scientific method. It comprises the methods used to ensure the objective tracking of the outcomes of a meaningful process on communities and populations as they unfold and are managed for continued success over time. In other words, it is a hybrid method combining the systematic application of research methods with the business process to assess the design, implementation, improvement, outcomes, or impact of a project, program, portfolio, or policy as it unfolds over time. Performance evaluation provides a new way to address and justify causality that can be compared to other scientific methods. At its base, I argue, performance evaluation involves being able to make scientific attributions of causality and thinking seriously about how we know what we know.

The premise of this book is that in addition to being a distinctive policy and management tool, performance evaluation is also a research method in its own right. Understanding and exploiting this body of methodological knowledge is critical to the effective use of evidence-based and data-driven evaluations and adaptive management. I have written this book to define and explain the method for policymakers, administrators, and program managers as well as researchers.

When performance evaluation is based upon objective data and uses research methods to draw appropriate inferences, it can also provide distinctive benefits for evidence-based policymaking. Done well, performance evaluation provides greater accountability by tracking the success and failure of programs and policies, and it serves as a management early warning system when improvements and mid-course corrections are needed so that programs work as intended and achieve desired goals.

So I look forward to sharing occasional excerpts as I finalize the full manuscript for publication and would appreciate any comments and/or questions. I especially appreciate skeptical or critical comments, as they help me refine and improve my work and learn from the insights of others, which is what all evaluators need to do. My next post will focus on the Evidence Act of 2018 and its relevance to the practice of evaluation.

Salome Tsereteli-Stephens

Monitoring, Evaluation and Learning Director at American Bar Association Rule of Law Initiative. Any posts made are in personal capacity.

2y

Congratulations Denise! Read this post in one breath. Would love to discuss and learn more of your insights on this incredibly timely subject!

Jerim Obure

Senior Specialist -Monitoring, Evaluation & Learning- International Justice Mission

2y

Well done, Denise. I look forward to the book coming out. Sounds exciting!
