The Implementation Gap - Assessment
(For a summary read the topic sentences of each paragraph - a 1-minute read)
Preramble: Assessment is your most vital tool...a statement of absolutes that I shouldn't say. But I have your attention! This article will elucidate the gap that a lot of institutions encounter, as well as my take on this form of data collection. Oh, and I would like to cite my preramble from a previous article on assessment. 'Knowledge parties' are entirely the intellectual property of Dennis Allen, a Math teacher I had the absolute pleasure of learning from in my NQT year.
Speaking of previous articles, I wanted to acknowledge Gilbert Halcrow for further enlightening me on these topics through his comments. If you want to read considered expansions upon these topics, I would suggest diving into his work. In his consultancy, he focuses not only on the initiative (GenAI, LISC or formative assessment) but also on its implementation in the school.
"...but that is not the problem - it is always implementation" - Gilbert Halcrow
The implementation gap in assessment refers to the discrepancy between the intended outcomes of educational assessments (such as accurately measuring student learning, informing instruction, and supporting educational decision-making) and the actual outcomes realised in practice. The gap is often perceived from the point of view of whole-school summative assessments, mainly due to the logistical difficulties of pulling these off. However, I have made this error as a teacher when applying formative assessment, even after self-proclaiming my authority on the topic. This gap can be a pernicious one: we can underestimate its impact right up until we have collected the data over the long term, by which point it can be too late.
Implementation gaps in assessment can arise from misalignment with curricula, ineffective design, inadequate teacher training, resource constraints, poor use of data, stakeholder misunderstandings, technological challenges, and an overemphasis on high-stakes testing. The accumulation of these factors commonly results in a quagmire. More egregious errors include punitive measures built on bad data, or stemming from a misunderstanding of the data itself; for example, using the mean to represent a data set that includes anomalies. I've seen this lead to disproportionate responses to misbehaviour through policy changes that have affected entire cohorts of students, who ultimately lose liberties unjustly.
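As a concrete illustration of that last point, here is a minimal sketch in Python using invented incident counts (none of these figures come from a real cohort); it shows how a single anomaly drags the mean upward while the median still describes the typical student:

```python
# Hypothetical behaviour-incident counts for a class of ten students:
# nine students with 0-2 incidents each, and one outlier with 30.
incidents = [0, 1, 0, 2, 1, 0, 1, 2, 0, 30]

mean = sum(incidents) / len(incidents)  # 3.7 incidents per student

counts = sorted(incidents)
mid = len(counts) // 2
median = (counts[mid - 1] + counts[mid]) / 2  # 1.0 incidents per student

print(f"mean = {mean:.1f}, median = {median:.1f}")
# A cohort-wide policy justified by the mean (~3.7) responds to the
# behaviour of one student as if it belonged to everyone; the median (1.0)
# makes the anomaly visible instead of hiding it.
```

The same comparison could, of course, be done in a spreadsheet; the point is simply that the choice of summary statistic is itself an implementation decision.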
The implementation gap in assessment typically occurs in at least one of the following stages: design, administration, analysis, or follow-up. Depending on what data you gather, your implementation priority for each of these steps will vary. If you intend to report upon a student's conceptual understanding of content, then your question design must be meticulous. However, if you want to choose the most appropriate task for a student to learn from, you can accept a larger error margin in question design to favour agility and expediency.
The gap usually resides where too much emphasis is placed on the results of assessment as a one-dimensional measure. Not all assessments are true representations of understanding...actually, no assessments are. If we're honest, it's impossible even for standardised examinations to reveal the objective reality of a learner's ability. Since assessments and examinations typically get conflated, this can result in disdain for any measure being applied. But without measuring in this way, we remove a way of knowing from our perspective. Let's move from feet to metres, shall we?
So how do we ensure that both the assessments we administer and the way we utilise the data can actualise our intended outcomes? Below I address each factor that widens the gap.
If you've enjoyed what you've read here, feel free to repost this article. It appears that's a favourite morsel of the algorithm. Let's feed it fat =)
References/Further Reading: