Lost opportunity cost, the pendulum effect and why we need to rethink the idea of “fidelity” in maths education interventions

by Tierney Kennedy


Why does so much education research go seemingly unnoticed or ignored by teachers (Joram et al., 2020)? Is this related to the way educational policy seems to swing back and forth between approaches like a pendulum? This article raises some serious issues regarding the practical implementation of interventions in schools and points to a possible way forward.


Issue 1: Requiring fidelity in research projects

Principals and other decision makers in schools spend considerable time and effort deciding which intervention or professional learning to undertake in order to best meet the needs of their students and teachers. While the results of many education interventions are only considered valid if teachers implemented the recommendations with a high degree of fidelity, schools simply do not have that luxury. Even teachers who are motivated to use research in their classes cite lack of time as a barrier to implementing interventions (Williams and Coles, 2007). Most rarely have the time or energy required to implement a new approach perfectly, and schools are dynamic places with constant interruptions and external pressures. This means that school decision makers need to know how robust and reliable an intervention is when implementation does not go according to plan. Information such as the “minimum dosage” or the “benefit for effort” would be far more useful to decision makers than fidelity. Perhaps we should reframe “failure by the teacher to implement with fidelity” as “failure of the intervention to be implementable by teachers”.


Issue 2: The cost of an intervention vs the effectiveness of the intervention

One factor that is often weighed up somewhat unofficially in decision making is the cost of an intervention balanced against its potential outcomes. While most interventions report some measure of student outcomes, practical studies on the cost-effectiveness of education interventions are fairly rare (Levin and McEwan, 2001). To illustrate the importance of this factor, consider two interventions. Intervention A[1] adds 3 months of gain in an average year for a primary school student in maths at a moderate cost ($321-$1200 per child per year). Intervention B[2] also adds 3 months of gain in an average year for a primary school student, but at a very low cost (less than $160 per child per year). Even if Intervention B had a lower impact than Intervention A, it might still be worth doing because of its low cost. Choosing Intervention B would also free up funds for other relatively inexpensive interventions, which may lead to greater overall gains for students than Intervention B on its own. Decision makers need to be able to weigh up the costs and benefits when considering which interventions to implement.
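As a rough back-of-envelope illustration of this comparison (not part of either toolkit entry), the arithmetic might look like the sketch below. The “months of gain per $100” metric, the function name and the choice of representative costs (the midpoint of Intervention A’s range, the upper bound for Intervention B) are my own assumptions for the purpose of the example:

```python
# Back-of-envelope comparison of the two hypothetical interventions above.
# The "months of gain per $100 spent" metric, the function name and the
# representative costs (range midpoint for A, upper bound for B) are
# illustrative assumptions, not figures from the toolkit itself.

def months_per_100_dollars(months_gained: float, cost_per_child_per_year: float) -> float:
    """Extra months of learning gained per $100 spent per child per year."""
    return months_gained / cost_per_child_per_year * 100

cost_a = (321 + 1200) / 2   # Intervention A: midpoint of the $321-$1200 range
cost_b = 160                # Intervention B: upper bound of "less than $160"

print(f"Intervention A: {months_per_100_dollars(3, cost_a):.2f} months per $100")
print(f"Intervention B: {months_per_100_dollars(3, cost_b):.2f} months per $100")
# Roughly 0.4 months of gain per $100 for A versus about 1.9 for B:
# the same headline gain, but B delivers it far more cheaply.
```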


Issue 3: The impact of targeted interventions and lost opportunity costs on overall maths growth

At times the real costs of an intervention are somewhat hidden. One often overlooked cost is the “opportunity cost” of the instructional time spent on implementing the intervention that could have been spent on something else (Levin et al., 2017). If we only measure the impact of an intervention on closely matched measures, we can miss the trade-offs in terms of overall progress in mathematics. For example, consider a fictional fractions intervention that had an extremely high impact on closely matched, researcher-designed tests. The intervention was designed to take 12 lessons, with an additional 4 lessons recommended for maximum impact. The class teacher usually spent 6 lessons on fractions. Implementing the intervention as recommended therefore cost 10 lessons of additional instruction time that would usually have been spent on other topics. To decide whether that was a good choice overall, we would need to examine the impact of the intervention on more than just fractions, because the cost was drawn from more than just fractions. Distal measures, such as improvements on annual standardised testing across all areas of maths, might be more useful than closely matched or bespoke assessments, as they capture the overall impact, including the opportunity cost of the intervention.
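To make the arithmetic explicit, here is a minimal sketch of the fictional example above. The lesson counts come straight from the text; the variable names and the bookkeeping are mine, not a formal economic evaluation:

```python
# Lesson-count arithmetic for the fictional fractions intervention above.
# All numbers come from the worked example in the text.

usual_fractions_lessons = 6     # lessons the teacher normally spends on fractions
recommended_lessons = 12 + 4    # recommended lessons plus the extras for maximum impact

extra_lessons = recommended_lessons - usual_fractions_lessons
print(f"Additional instruction time drawn from other topics: {extra_lessons} lessons")

# Any measured gain on fractions therefore needs to be weighed against whatever
# those 10 lessons would have contributed to the rest of the maths program.
```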


The pendulum effect

All approaches to teaching have a balance of pros and cons (Kennedy, 2023). We also know that methods that are effective for novices tend to be substantially less effective, or even detrimental, for those with more experience (Sweller et al., 2011). Good research, although somewhat rare, takes this into account by pointing out not only the benefits of its intervention but also the benefits of competing approaches, and the difficulties or problems with both. Scarcer still is research that considers how much additional gain is possible from a teacher’s current starting point. For example, consider Teacher A of Class A, who predominantly uses Technique A, and Teacher B of Class B, who predominantly uses Technique B. Class A is unlikely to gain much further benefit from an intervention based on Technique A because they are already receiving that benefit as part of their normal teaching. Likewise, Class B would receive little benefit from an intervention based on Technique B. Teacher A may be better off trading a little of the time spent on Technique A for the benefits of Technique B, and vice versa.


So, what does this all mean and what do we do about it? All interventions take time, effort, energy and funding. All of those finite resources have to come from somewhere, so we need a way of balancing them.


Here are a few questions that I find useful for decision makers to work through:

1. Which approaches are already used commonly in my school? What benefits do our students already receive from these choices? What might be missing that could add a different kind of benefit to our students?

2. How much time do we realistically have for an intervention? What is the amount of time recommended for the intervention I am considering? Will the time we have available be sufficient to gain us the benefit we are looking for? What will we choose to drop or miss out on by spending time on this intervention?

3. How much effort is required by teachers to implement the intervention? Is this requirement realistic for my teachers given the current status of my school? What is the result likely to be if teachers cannot put in this effort or if something goes wrong? Is the benefit worth the cost?

4. Does the intervention target exactly what we need? If our students are not understanding, does the intervention focus on developing understanding of concepts? If we need help with extension, does it use teaching strategies that are targeted towards experts?


Here are a few requests that principals I work with have raised for researchers to consider:

· Focus quantitative data analysis on the idea of “minimum dosage for effectiveness”. For example, how much intervention time is needed for an observable effect?

· Consider the idea of “maximum benefit for effort”. For example, Approach A might have a good effect when implemented once per week and twice as much benefit when implemented twice per week, but then the added gain tapers off.

· Consider the idea of implementation “with integrity” rather than “with fidelity”. How much can the approach be messed up and still have a reasonably good gain? What happens when my teacher does not quite understand it?

· Consider what my teachers already know and are good at, so that we can add a different kind of benefit with our limited time. Consider the impact of what we have to stop doing in order to implement the intervention.


Working together on this helps us all.


References

Joram, E., Gabriele, A. J., & Walton, K. (2020). What influences teachers’ “buy-in” of research? Teachers’ beliefs about the applicability of educational research to their practice. Teaching and Teacher Education, 88(102980), 1-12.

Kennedy, T. (2023). Improving learning rates on standardised testing through a balanced teaching cycle: A tale of three schools. Teaching Mathematics, 48(1), 10-17.

Levin, H. M., & McEwan, P. J. (2001). Cost-effectiveness analysis: Methods and applications (Vol. 4). Sage.

Levin, H. M., McEwan, P. J., Belfield, C., Bowden, A. B., & Shand, R. (2017). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis. SAGE Publications.

Sweller, J., Ayres, P., & Kalyuga, S. (2011). The expertise reversal effect. In Cognitive load theory (pp. 155-170). Springer.

Williams, D., & Coles, L. (2007). Teachers’ approaches to finding and using research evidence: An information literacy perspective. Educational Research, 49(2), 185-206.



[1] Teaching & Learning Toolkit information on teaching assistants. https://evidenceforlearning.org.au/education-evidence/teaching-learning-toolkit/teaching-assistant-interventions

[2] Teaching & Learning Toolkit information on homework. https://evidenceforlearning.org.au/education-evidence/teaching-learning-toolkit/homework
