OECD GLOSSARY FOR REPORTS
Ricardo Jorge Medeiros Fonseca, PhD
Social media manager for the Toyota Gazoo Racing fan community
This post compiles the technical terms used in OECD reports. Understanding these terms supports an accurate reading of the reports.
Accountability: Obligation to demonstrate that work has been conducted in compliance with agreed rules and standards or to report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. This may require a careful, even legally defensible, demonstration that the work is consistent with the contract terms. Note: Accountability in development may refer to the obligations of partners to act according to clearly defined responsibilities, roles and performance expectations, often with respect to the prudent use of resources. For evaluators, it connotes the responsibility to provide accurate, fair and credible monitoring reports and performance assessments. For public sector managers and policy-makers, accountability is to taxpayers/citizens.
Activity: Actions taken or work performed through which inputs, such as funds, technical assistance and other types of resources are mobilized to produce specific outputs. Related term: development intervention.
Analytical tools: Methods used to process and interpret information during an evaluation.
Appraisal: An overall assessment of the relevance, feasibility and potential sustainability of a development intervention prior to a decision of funding. Note: In development agencies, banks, etc., the purpose of appraisal is to enable decision makers to decide whether the activity represents an appropriate use of corporate resources. Related term: ex-ante evaluation.
Assumptions: Hypotheses about factors or risks which could affect the progress or success of a development intervention. Note: Assumptions can also be understood as hypothesized conditions that bear on the validity of the evaluation itself, e.g., about the characteristics of the population when designing a sampling procedure for a survey. Assumptions are made explicit in theory-based evaluations, where the evaluation systematically tracks the anticipated results chain.
Attribution: The ascription of a causal link between observed (or expected to be observed) changes and a specific intervention. Note: Attribution refers to that which is to be credited for the observed changes or results achieved. It represents the extent to which observed development effects can be attributed to a specific intervention or to the performance of one or more partners, taking into account other interventions, (anticipated or unanticipated) confounding factors, or external shocks.
Audit: An independent, objective assurance activity designed to add value and improve an organization’s operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to assess and improve the effectiveness of risk management, control and governance processes. Note: a distinction is made between regularity (financial) auditing, which focuses on compliance with the applicable statutes and regulations; and performance auditing, which is concerned with relevance, economy, efficiency and effectiveness. Internal auditing provides an assessment of internal controls undertaken by a unit reporting to management, while external auditing is conducted by an independent organization.
Base-line study: An analysis describing the situation prior to a development intervention, against which progress can be assessed or comparisons made.
Benchmark: Reference point or standard against which performance or achievements can be assessed. Note: A benchmark refers to the performance that has been achieved in the recent past by other comparable organizations, or what can be reasonably inferred to have been achieved in the circumstances.
Beneficiaries: The individuals, groups, or organizations, whether targeted or not, that benefit, directly or indirectly, from the development intervention. Related terms: reach, target groups.
Cluster evaluation: An evaluation of a set of related activities, projects and/or programs.
Conclusions: Conclusions point out the factors of success and failure of the evaluated intervention, with special attention paid to the intended and unintended results and impacts, and more generally to any other strength or weakness. A conclusion draws on data collection and analyses undertaken, through a transparent chain of arguments.
Counterfactual: The situation or condition which hypothetically may prevail for individuals, organizations, or groups in the absence of the development intervention.
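The counterfactual is what makes attribution estimable. As an illustrative formalisation (the notation is an addition of this post, not part of the OECD definitions), the effect attributable to an intervention can be written as the difference between the outcome observed with the intervention and the outcome that would have prevailed without it:

\[ \text{attributable effect} = Y_{\text{observed}} - Y_{\text{counterfactual}} \]

Since the counterfactual outcome is never observed directly, evaluations approximate it, for example through comparison groups or baseline trends.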
Country Program Evaluation/ Country Assistance Evaluation: Evaluation of one or more donor’s or agency’s portfolio of development interventions, and the assistance strategy behind them, in a partner country.
Data collection tools: Methodologies used to identify information sources and collect information during an evaluation. Note: Examples are informal and formal surveys, direct and participatory observation, community interviews, focus groups, expert opinion, case studies, literature search.
Development intervention: An instrument for partner (donor and non-donor) support aimed to promote development. Note: Examples are policy advice, projects and programs.
Development objective: Intended impact contributing to physical, financial, institutional, social, environmental, or other benefits to a society, community, or group of people via one or more development interventions.
Economy: Absence of waste for a given output. Note: An activity is economical when the costs of the scarce resources used approximate the minimum needed to achieve planned objectives.
Effect: Intended or unintended change due directly or indirectly to an intervention. Related terms: results, outcome.
Effectiveness: The extent to which the development intervention’s objectives were achieved, or are expected to be achieved, taking into account their relative importance. Note: Also used as an aggregate measure of (or judgment about) the merit or worth of an activity, i.e., the extent to which an intervention has attained, or is expected to attain, its major relevant objectives efficiently in a sustainable fashion and with a positive institutional development impact. Related term: efficacy.
Efficiency: A measure of how economically resources/inputs (funds, expertise, time, etc.) are converted to results.
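One schematic way to read this definition (an illustrative convention, not wording from the OECD glossary) is as a unit-cost ratio comparing what was spent with what was delivered:

\[ \text{cost per unit of result} = \frac{\text{total cost of inputs}}{\text{units of output or outcome delivered}} \]

The lower the ratio for a given quality of result, the more efficiently the intervention has converted its resources.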
Evaluability: Extent to which an activity or program can be evaluated in a reliable and credible fashion. Note: Evaluability assessment calls for the early review of a proposed activity in order to ascertain whether its objectives are adequately defined and its results verifiable.
Evaluation: The systematic and objective assessment of an on-going or completed project, program or policy, its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors. Evaluation also refers to the process of determining the worth or significance of an activity, policy or program. An assessment, as systematic and objective as possible, of a planned, on-going, or completed development intervention. Note: Evaluation in some instances involves the definition of appropriate standards, the examination of performance against those standards, an assessment of actual and expected results and the identification of relevant lessons. Related term: review.
Ex-ante evaluation: An evaluation that is performed before implementation of a development intervention. Related terms: appraisal, quality at entry.
Ex-post evaluation: Evaluation of a development intervention after it has been completed. Note: It may be undertaken directly after or long after completion. The intention is to identify the factors of success or failure, to assess the sustainability of results and impacts, and to draw conclusions that may inform other interventions.
External evaluation: The evaluation of a development intervention conducted by entities and/or individuals outside the donor and implementing organizations.
Feedback: The transmission of findings generated through the evaluation process to parties for whom it is relevant and useful so as to facilitate learning. This may involve the collection and dissemination of findings, conclusions, recommendations and lessons from experience.
Finding: A finding uses evidence from one or more evaluations to allow for a factual statement.
Formative evaluation: Evaluation intended to improve performance, most often conducted during the implementation phase of projects or programs. Note: Formative evaluations may also be conducted for other reasons such as compliance, legal requirements or as part of a larger evaluation initiative. Related term: process evaluation.
Goal: The higher-order objective to which a development intervention is intended to contribute. Related term: development objective.
Impacts: Positive and negative, primary and secondary, long-term effects produced by a development intervention, directly or indirectly, intended or unintended.
Independent evaluation: An evaluation carried out by entities and persons free of the control of those responsible for the design and implementation of the development intervention. Note: The credibility of an evaluation depends in part on how independently it has been carried out. Independence implies freedom from political influence and organizational pressure. It is characterized by full access to information and by full autonomy in carrying out investigations and reporting findings.
Indicator: Quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of a development actor.
Inputs: The financial, human, and material resources used for the development intervention.
Institutional Development Impact: The extent to which an intervention improves or weakens the ability of a country or region to make more efficient, equitable, and sustainable use of its human, financial, and natural resources, for example through: (a) better definition, stability, transparency, enforceability and predictability of institutional arrangements and/or (b) better alignment of the mission and capacity of an organization with its mandate, which derives from these institutional arrangements. Such impacts can include intended and unintended effects of an action.
Internal evaluation: Evaluation of a development intervention conducted by a unit and/or individuals reporting to the management of the donor, partner, or implementing organization. Related term: self-evaluation.
Joint evaluation: An evaluation in which different donor agencies and/or partners participate. Note: There are various degrees of “jointness” depending on the extent to which individual partners cooperate in the evaluation process, merge their evaluation resources and combine their evaluation reporting. Joint evaluations can help overcome attribution problems in assessing the effectiveness of programs and strategies, the complementarity of efforts supported by different partners, the quality of aid coordination, etc.
Lessons learned: Generalizations based on evaluation experiences with projects, programs, or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design, and implementation that affect performance, outcome, and impact.
Logical framework (Logframe): Management tool used to improve the design of interventions, most often at the project level. It involves identifying strategic elements (inputs, outputs, outcomes, impact) and their causal relationships, indicators, and the assumptions or risks that may influence success and failure. It thus facilitates planning, execution and evaluation of a development intervention. Related term: results-based management.
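To make the strategic elements concrete, here is a minimal sketch of one level of a logframe represented as a small data structure. The field names and the example content are illustrative assumptions, not prescribed by the OECD glossary or any particular agency's template.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeLevel:
    # One level of the results chain: input, output, outcome or impact.
    level: str
    statement: str                                          # what the intervention intends at this level
    indicators: List[str] = field(default_factory=list)     # how achievement will be verified
    assumptions: List[str] = field(default_factory=list)    # conditions outside the intervention's control

# Hypothetical example row for a training project (made-up content, for illustration only).
output_row = LogframeLevel(
    level="output",
    statement="200 health workers trained in cold-chain management",
    indicators=["number of workers certified by end of year 1"],
    assumptions=["trained staff remain in post"],
)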
Meta-evaluation: The term is used for evaluations designed to aggregate findings from a series of evaluations. It can also be used to denote the evaluation of an evaluation to judge its quality and/or assess the performance of the evaluators.
Mid-term evaluation: Evaluation performed toward the middle of the period of implementation of the intervention. Related term: formative evaluation.
Monitoring: A continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds. Related term: performance monitoring, indicator.
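A minimal sketch of the underlying idea: the latest reported value of each specified indicator is compared against its target. The indicator names and figures below are hypothetical.

# Report progress on each indicator as a share of its target achieved.
indicators = {
    "children vaccinated": {"target": 10_000, "actual": 7_250},
    "clinics refurbished": {"target": 40, "actual": 31},
}

for name, values in indicators.items():
    progress = values["actual"] / values["target"]
    print(f"{name}: {progress:.0%} of target achieved")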
Outcome: The likely or achieved short-term and medium-term effects of an intervention’s outputs. Related terms: result, outputs, impacts, effect.
Outputs: The products, capital goods and services that result from a development intervention; may also include changes resulting from the intervention which are relevant to the achievement of outcomes.
Participatory evaluation: Evaluation method in which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation.
Partners: The individuals and/or organizations that collaborate to achieve mutually agreed upon objectives. Note: The concept of partnership connotes shared goals, common responsibility for outcomes, distinct accountabilities and reciprocal obligations. Partners may include governments, civil society, non-governmental organizations, universities, professional and business associations, multilateral organizations, private companies, etc.
Performance: The degree to which a development intervention or a development partner operates according to specific criteria/standards/guidelines or achieves results in accordance with stated goals or plans.
Performance indicator: A variable that allows the verification of changes in the development intervention or shows results relative to what was planned. Related terms: performance monitoring, performance measurement.
Performance measurement: A system for assessing the performance of development interventions against stated goals. Related terms: performance monitoring, indicator.
Performance monitoring: A continuous process of collecting and analyzing data to compare how well a project, program, or policy is being implemented against expected results.
Process evaluation: An evaluation of the internal dynamics of implementing organizations, their policy instruments, their service delivery mechanisms, their management practices, and the linkages among these. Related term: formative evaluation.
Program evaluation: Evaluation of a set of interventions, marshaled to attain specific global, regional, country, or sector development objectives. Note: a development program is a time bound intervention involving multiple activities that may cut across sectors, themes and/or geographic areas. Related term: Country program/strategy evaluation.
Project evaluation: Evaluation of an individual development intervention designed to achieve specific objectives within specified resources and implementation schedules, often within the framework of a broader program. Note: Cost benefit analysis is a major instrument of project evaluation for projects with measurable benefits. When benefits cannot be quantified, cost-effectiveness is a suitable approach.
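As a rough illustration of the cost-benefit logic mentioned in the note above, the sketch below discounts yearly benefits and costs to present value and compares them. The discount rate and cash flows are made-up assumptions, not OECD guidance.

# Discount a stream of yearly amounts to present value, then compare benefits to costs.
def npv(flows, rate):
    # flows[t] is the amount in year t (year 0 = the present)
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

benefits = [0, 40_000, 60_000, 60_000]         # hypothetical yearly benefits
costs = [100_000, 10_000, 10_000, 10_000]      # hypothetical yearly costs
rate = 0.05                                    # assumed discount rate

bcr = npv(benefits, rate) / npv(costs, rate)   # benefit-cost ratio
print(f"Benefit-cost ratio: {bcr:.2f}")        # a ratio above 1 suggests benefits exceed costs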
Project or program objective: The intended physical, financial, institutional, social, environmental, or other development results to which a project or program is expected to contribute.
Purpose: The publicly stated objectives of the development program or project.
Quality assurance: Quality assurance encompasses any activity that is concerned with assessing and improving the merit or the worth of a development intervention or its compliance with given standards. Note: examples of quality assurance activities include appraisal, RBM, reviews during implementation, evaluations, etc. Quality assurance may also refer to the assessment of the quality of a portfolio and its development effectiveness.
Results-Based Management (RBM): A management strategy focusing on performance and achievement of outputs, outcomes and impacts.
Review: An assessment of the performance of an intervention, periodically or on an ad hoc basis. Note: Frequently “evaluation” is used for a more comprehensive and/or more in-depth assessment than “review.” Reviews tend to emphasize operational aspects. Sometimes the terms “review” and “evaluation” are used as synonyms. Related term: evaluation.
Risk analysis: An analysis or an assessment of factors (called assumptions in the logframe) that affect or are likely to affect the successful achievement of an intervention’s objectives. A detailed examination of the potential unwanted and negative consequences to human life, health, property, or the environment posed by development interventions; a systematic process to provide information regarding such undesirable consequences; the process of quantification of the probabilities and expected impacts for identified risks.
Sector program evaluation: Evaluation of a cluster of development interventions in a sector within one country or across countries, all of which contribute to the achievement of a specific development goal. Note: a sector includes development activities commonly grouped together for the purpose of public action such as health, education, agriculture, transport, etc.
Self-evaluation: An evaluation by those who are entrusted with the design and delivery of a development intervention.
Stakeholders: Agencies, organizations, groups or individuals who have a direct or indirect interest in the development intervention or its evaluation.
Summative evaluation: A study conducted at the end of an intervention (or a phase of that intervention) to determine the extent to which anticipated outcomes were produced. Summative evaluation is intended to provide information about the worth of the program. Related term: impact evaluation.
Sustainability: The continuation of benefits from a development intervention after major development assistance has been completed. The probability of continued long-term benefits. The resilience to risk of the net benefit flows over time.
Target group: The specific individuals or organizations for whose benefit the development intervention is undertaken.
Terms of reference: Written document presenting the purpose and scope of the evaluation, the methods to be used, the standard against which performance is to be assessed or analyses are to be conducted, the resources and time allocated, and reporting requirements. Two other expressions sometimes used with the same meaning are “scope of work” and “evaluation mandate.”
Thematic evaluation: Evaluation of a selection of development interventions, all of which address a specific development priority that cuts across countries, regions, and sectors.
Triangulation: The use of three or more theories, sources or types of information, or types of analysis to verify and substantiate an assessment. Note: by combining multiple data sources, methods, analyses or theories, evaluators seek to overcome the bias that comes from single informants, single methods, single observer or single theory studies.
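A toy sketch of the idea: the same indicator is estimated from three independent sources and large disagreement is flagged for follow-up before any single figure is relied on. The sources and figures are hypothetical.

# Combine estimates of one indicator from three independent sources
# and flag substantial divergence.
estimates = {
    "household survey": 0.62,
    "administrative records": 0.58,
    "key-informant interviews": 0.65,
}

values = list(estimates.values())
spread = max(values) - min(values)
consensus = sum(values) / len(values)

print(f"Consensus estimate: {consensus:.2f} (spread {spread:.2f})")
if spread > 0.10:
    print("Sources diverge substantially; investigate before drawing conclusions.")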
Validity: The extent to which the data collection strategies and instruments measure what they purport to measure.
Source: OECD 2002a.