The Inefficiencies of Running Records

This post was originally published on The Literacy Architects' resource blog.

As an educator, you may be familiar with the term “running records.” Running records are a benchmarking tool teachers use to assess a student’s reading level. As a student reads a text aloud, the teacher records the student’s word-reading accuracy, errors, and self-corrections, then analyzes the results to try to determine how the student is decoding words (whether they are relying on meaning, structure, or visual cues).

While this form of assessment is used in elementary schools around the world to determine a student’s reading level, it has inefficiencies that are often overlooked. Let’s look at why you may not want to use running records and what you can use instead.

The Argument Against Running Records

Running records are ingrained in teacher preparation programs and district professional development, but quick skills-based measures can provide more actionable information in less time. Examples include the Letter Naming Fluency (LNF) and Nonsense Word Fluency (NWF) measures from the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). These assessments ask students to read a series of letters or nonsense words aloud for one minute.

But surely, it would be better to sit next to a student and listen to them read (i.e., a running record) instead of giving them a random list of letters or nonsense words to read quickly . . . right?

In actuality, it’s not. Not only are running records more time-consuming, but the end result is usually a reading level on the A-Z scale, a scale with little research base behind it. Since background knowledge plays a large role in reading comprehension, a student’s ability to read one Level J book independently doesn’t necessarily mean they will be able to read a different Level J text. There is also wide disparity between books at the same reading level: two Level D books can contain words with very different phonics components, and depending on which letter-sound combinations a student has learned so far, they may be able to decode one but not the other.

Another reason to move away from running records is that they combine accuracy, fluency, and comprehension into one assessment. The Simple View of Reading tells us that decoding and language comprehension are two separate components, with reading comprehension being the product of the two. When a single assessment measures both at once, it is difficult to determine whether decoding or language comprehension is the cause of a student’s reading difficulty.

Alternatives to Running Records

Instead of running records, measure literacy skills individually so you can pinpoint the cause of a reading weakness. For example, a good NWF measure is structured so that you can easily see patterns of errors, such as trouble with particular letter patterns (VC, CVC, CVVC), word lengths, or letter-sound correspondences, and know where to center your explicit phonics instruction. Finding these patterns is far more efficient with skills-based measures than with a running record.

For language comprehension, use a listening comprehension assessment or a measure of receptive vocabulary such as the Peabody Picture Vocabulary Test (PPVT). Alternatively, oral reading fluency measures such as DIBELS Oral Reading Fluency (DIBELS ORF) correlate highly with reading comprehension and could be used as a substitute for measuring language comprehension.

Final Thoughts

Shifting your assessment practice away from running records and learning a new assessment system can be difficult. It can also feel counterintuitive not to spend some of your assessment time listening to your students read aloud. But we encourage you to take the first step and try it out. We hope you will find it a better, more efficient way to gather data and tailor instruction to each student.
