Evidence-Based Instructional Design: Text Difficulty

The best instructional designers are constant learners, always searching for ways to improve their craft. Unfortunately, many of the field's popular publications are plagued with fads and poorly supported opinion pieces. Even the most dedicated designers occasionally get swept up in the hype and find themselves nodding in agreement with articles that are little more than thinly disguised advertisements.

Peer-reviewed scientific journals provide a more reliable source of evidence-based practices, but these sources are often difficult to understand. Most articles in refereed journals are written by academics for academics. They're full of jargon and are concerned primarily with building and refining scientific theories of learning. To benefit from these reports, working instructional designers have to pull double duty: not only must we be careful and critical readers, we must also take on the task of translating this research into practical recommendations which can be applied to our daily work.

The goal of this article is to do a little of that heavy lifting for you. I'll review a few peer-reviewed articles on text difficulty--a topic that is often overlooked in instructional design circles--and then explain exactly how you can apply this research in your work.

Text Difficulty

Simply put, text difficulty is how easy or hard a text is to read. It does not refer to the difficulty or complexity of the text's topic, but to the ease with which the text itself can be read. Researchers have identified two main factors that influence text difficulty: readability and cohesion.

Readability deals with vocabulary and sentence complexity. A highly readable text uses common words and relatively short, simple sentences. A text with low readability, on the other hand, uses rare words or specialized jargon unfamiliar to the reader and presents them in long, complex sentences that are difficult to follow.

Cohesion refers to the degree to which a text connects its ideas together into an understandable whole. A highly cohesive text has a sense of unity, in which each sentence has an easily understandable relationship to all of the other sentences. A text with low cohesion, on the other hand, tends to ramble. It goes off on tangents, which makes it hard for the reader to figure out what the sentences have to do with each other.

Example of a text with high difficulty: "One day, in the indeterminate past, two children ascended a naturally raised area of land to gather and convey a vessel of water. (The canister, it should be noted, had a handle, to ease transport.) One of the children, a male named Jack (surname unknown), tripped and fell down the side of the raised area of land and injured his cranium; the other juvenile, a female named Jill (surname also unknown), also descended the slope, though it is unknown what obstacle or impediment caused the children to fall or the extent of their injuries."

Example of a text with low difficulty: "Jack and Jill went up the hill to fetch a pail of water. Jack fell down and broke his crown, and Jill came tumbling after."

Linguists, educational psychologists, and reading specialists pay a lot of attention to text difficulty, but the topic is rarely discussed among instructional designers. We should care about text difficulty, though, because many of the resources we create are composed--at least in part--of text. eLearning, job aids, manuals, presentation slides, and facilitator guides all rely on text. In addition, our day-to-day jobs require us to communicate with each other, and with our stakeholders, through text-based media such as email, instant messages, design documents, and team platforms like Slack or Microsoft Teams.

Now, let's take a look at some recent research on text difficulty and mind-wandering.

Research on Text Difficulty

In the first study, participants were asked to read both easy and difficult texts and prompted to report when they noticed their minds wandering away from the reading task. After reading the texts, they were given a reading comprehension assessment. Participants who read the difficult texts reported more mind-wandering than those who read the less difficult texts. Further, participants who read the difficult texts scored lower on the comprehension test. (Mind Wandering While Reading Easy and Difficult Texts, Psychonomic Bulletin & Review)

The second study was similar in design to the first, but assessed additional outcomes. For starters, the researchers measured both voluntary and involuntary mind-wandering. In addition, participants in this study also reported their interest in the topic of the text. Participants who read difficult texts reported both more voluntary and involuntary mind-wandering than those who read the less difficult texts. Participants who read the less difficult texts outperformed the participants who read the more difficult texts in reading comprehension. In addition, participants who read the more difficult texts reported lower levels of interest in the topic than participants who read the less difficult texts. After analyzing the data, the researchers reported that text difficulty impedes reading comprehension by reducing interest in the topic and increasing mind-wandering. (Text Difficulty, Topic Interest, and Mind Wandering During Reading, Learning and Instruction)

The third study followed a similar experimental design but examined the effect of section length. The researchers found that participants who read longer sections of high-difficulty text experienced substantially more instances of mind-wandering than participants who read shorter sections of high-difficulty text. (On the Relation Between Reading Difficulty and Mind-Wandering: A Section-Length Account, Psychological Research)

Conclusions

Although peer-reviewed research is generally more reliable than articles in popular magazines and websites, it's important to note that all studies--including those discussed above--have limitations. Nevertheless, these studies were controlled experiments with relatively robust outcomes. We can conclude with a high level of confidence, therefore, that:

  • On average, texts with a high level of difficulty cause more instances of voluntary and involuntary mind-wandering than texts with low difficulty.
  • On average, texts with a high level of difficulty impede reading comprehension and learning.
  • On average, longer sections of high-difficulty text cause more mind-wandering than shorter sections of high-difficulty text.

Important Caveat! The relationship between text difficulty and mind-wandering has only been demonstrated for reading. For other kinds of tasks, there is evidence that the opposite is sometimes true. That is, increasing the difficulty actually decreases mind-wandering.

Practical Implications for Instructional Designers

Some implications follow naturally from this research. First, if we want to maximize reading comprehension and learning, we should ensure our text is low difficulty. Second, when high-difficulty text is unavoidable, we should ensure that we present it in short sections rather than long ones.

The implications are simple, but the execution is hard. How do we assess text difficulty? After all, our perception of difficulty is naturally influenced by our own level of reading fluency and vocabulary. What seems like a low-difficulty passage to one designer might seem like a high-difficulty passage to another.

Fortunately, there are tools that make this task more objective and standardized. Here are three to add to your toolbox.

The Hemingway Editor

Composing your text in the Hemingway Editor gives you immediate feedback on your text's readability, passage length, and vocabulary. It's simple, free, and available as both a browser-based web app and a desktop app.

The Coh-Metrix Text Easability Assessor

The TEA provides a more detailed analysis of your text, including percentile scores on Narrativity, Syntactic Simplicity, Word Concreteness, Referential Cohesion, and Deep Cohesion. TEA is based on Coh-Metrix, a sophisticated text analysis engine used by linguists to provide objective, research-based feedback on readability and cohesion. It is much more complex than the Hemingway Editor, but provides a more rigorous analysis. It is also free.

Microsoft Word

Good ol' Word has its own readability function built right in. Word can display readability scores for both the Flesch Reading Ease and Flesch-Kincaid Grade Level formulas. Although less sophisticated than either the Hemingway Editor or the TEA, using the built-in functions of a tool that's already part of your design toolbox has obvious convenience. Here's a short article that explains how to use this function.
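If you're curious what Word is actually computing, both Flesch formulas are public and simple: they combine average sentence length (words per sentence) with average word length (syllables per word). Here's a minimal Python sketch of the two formulas. Note that the syllable counter below is a crude vowel-group heuristic I'm using for illustration; real tools like Word use more sophisticated methods, so expect their scores to differ somewhat.

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels, then
    # subtract one for a silent final 'e'. Real readability tools
    # use pronunciation dictionaries instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

ease, grade = flesch_scores(
    "Jack and Jill went up the hill to fetch a pail of water. "
    "Jack fell down and broke his crown, and Jill came tumbling after."
)
# Reading Ease comes out around 99 (very easy) and Grade Level
# around 2.5 for the nursery rhyme, as you'd expect.
```

Higher Reading Ease scores mean easier text (90-100 is roughly "very easy"), while the Grade Level formula maps the same two ratios onto U.S. school grades, which is why the low-difficulty nursery rhyme scores near second grade.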

If you found this article useful, please leave a comment and share it with others. If you found it confusing, or if you have any questions, please leave a comment and I'll do my best to clarify.

Alexander Schwall

Co-Founder and Chief Science Officer at Rhabit Analytics, Inc.

5 years ago

Thank you, this is very useful.

Ben Butina, Ph.D., SPHR

Host of Department 12: An #IOPsych Podcast | Director @ ASPCA | Learning & Development Leader

5 years ago

Note: Industry jargon and organization-specific acronyms are, by definition, not commonly used in ordinary speech. For this reason, if you include them in your text, they will inflate your scores when using the tools mentioned in the article. If your target audience uses these terms, however, it still makes good sense to use them because they will improve the readability of the text. If you try to break them down using simpler language, you'll get a better readability score, but you'll actually make your text harder to read for the target audience.
