THE END OF EXPLICIT READING INSTRUCTION
David Boulton
Learning Activist, Steward, Architect, Speaker-Presenter, Consultant, and Coach
Kids in the future will not be 'taught' to read. Every interaction with every word on every device will support them in learning to read on their own.
We only sense now. We only feel now. We only think now. We only learn now. We are naturally ‘wired’ to learn from what is happening on the living edge of now. Humans learn best by differentiating, refining, and extending their participation on the living edge of now.
Modern human life requires an unnatural kind of learning. Reading, writing, math, and all their abstract, conventional, and technological outgrowths require our brains to process information in complexly artificial ways. Whereas we learn to move, feel, touch, smell, taste, hear, emote, walk, and talk by reference to the immediate internal feel of learning them, in the artificial domains we learn by reference to the external, abstract authority of whoever or whatever we are learning from. In natural modes of learning, we learn from immediately synchronous (self-generated) feedback on the edge of participating (falling while learning to walk). In the artificial modes, the (other-provided) feedback can be far out of 'sync' with the learning it relates to (test results in school arrive far downstream from the learning they measure).
Most of the children who struggle in school are struggling with artificial learning challenges.
In reading, for example, our brains must process a human-invented 'code' and construct a simulation of language. This unique form of neural circuitry conscripts the biologically based language processes of our brains to perform in programmably mechanical ways (according to the instructions and information contained in a c-o-d-e). The virtual machinery that must form in our brains to do this is as artificial as a CD player.
The Absurdity of Explicit Reading Instruction
Can you imagine trying to help a toddler learn to walk by giving them verbal 'how to' instructions while they are sitting? Can you imagine trying to teach kids to ride a bicycle without using a bicycle, relying on abstract exercises rather than a guiding hand during the real-time, live act of trying to ride?
All prevailing models of reading instruction share a similar absurdity. They all involve methods of instruction that are abstractly removed from the live act of reading they intend to improve. They are all designed to train learners' brains to perform unconsciously automatic code-processing operations that will later, when engaged in actual reading, result in fluent word recognition. Why? Because the technology we have used to teach reading has been incapable of interactively coaching and supporting children on the living edge of their learning to read. Unable to respond to learners during the real-time flow of their learning to work out unfamiliar words, we've been forced to train them in the abstract, offline ways we do.
At Learning Stewards, we have turned the process completely upside down and inside out. Rather than using abstract training exercises, we have created a technology-based pedagogy built on instantaneously responding to and coaching learners, word by word, as needed.
Our tech provides autonomous learning-to-read guidance and support that safely and differentially stretches the learner's mind into learning to decode. Instead of teaching phonics rules and spelling patterns to be applied later (hopefully) to the decoding of unfamiliar words, our tech interactively guides students through the process of working out unfamiliar words and teaches them the rules and patterns along the way. With this model, kids learn three simple steps that enable them to learn to read (thereafter without any 'offline' instruction).
Learning to Read 1-2-3: 1) Click on ANY word. 2) Try to read the word in the pop-up. Can't? Click the word in the pop-up. 3) Repeat.
Every time students encounter a word they don't recognize, they touch or click it. This brings up a pop-up box containing the word. Clicking on the word in the pop-up produces visual and audible 'cues' that reduce, and often eliminate, the (letter-sound-pattern) confusions in the word. With each click, the cues advance through a consistent series of steps that reveal (where applicable): the word's segments, long and short sounds, silent letters, letter-sound exceptions, and groupings (blends and combinations). At each step, the student uses the cues to try again to recognize the word. If they can't, they click again. If all of the cues (seen and heard) after the initial clicks aren't sufficient to guide recognition of the word, a final click causes the pop-up to animate (visually and audibly) the 'sounding out' of the word and, lastly, to play the word's sound as it is normally heard.
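For readers who want a more concrete picture, here is a minimal sketch (in Python) of the click-through cue progression just described. The cue names, class, and methods are hypothetical illustrations, not Learning Stewards' actual implementation; the point is simply that each click on the pop-up advances one step through a fixed sequence of cues until the word is finally sounded out and spoken.

```python
# Hypothetical sketch of the pop-up's cue progression (not the actual
# Learning Stewards code): each click advances one step through a fixed
# series of cues until the word is sounded out and played aloud.

CUE_SEQUENCE = [
    "show_segments",         # reveal the word's segments
    "mark_vowel_sounds",     # indicate long and short sounds
    "mark_silent_letters",   # point out silent letters
    "mark_exceptions",       # flag letter-sound exceptions
    "show_groupings",        # highlight blends and combinations
    "animate_sounding_out",  # animate the 'sounding out' of the word
    "play_whole_word",       # play the word as it is normally heard
]

class WordPopup:
    """Tracks how far a learner has clicked through the cues for one word."""

    def __init__(self, word: str):
        self.word = word
        self.step = -1  # no cues shown yet; the pop-up just displays the word

    def click(self) -> str:
        """Advance to the next cue (stopping at the last one) and return it."""
        if self.step < len(CUE_SEQUENCE) - 1:
            self.step += 1
        return CUE_SEQUENCE[self.step]

    def recognized(self) -> None:
        """Called when the learner reads the word; the pop-up closes."""
        print(f"'{self.word}' recognized after {self.step + 1} cue step(s)")


# Example: a learner clicks an unfamiliar word, tries again after each cue,
# and recognizes it after the silent-letter cue.
popup = WordPopup("island")
print(popup.click())   # show_segments
print(popup.click())   # mark_vowel_sounds
print(popup.click())   # mark_silent_letters
popup.recognized()
```

In the real tool these steps are, of course, coupled to on-screen highlighting and audio; the sketch captures only the click-by-click logic.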
See for yourself: https://www.learningstewards.org/mldemo/
Kids in the future will not be explicitly-systematically taught to read any more than they are explicitly-systematically taught to talk. They will learn to read during their every interaction with every word on every device (phones, tablets, computers, TV sets, augmented reality). They will learn to read as a background process pervasively available while they are playing and learning with anything involving written words. All words - all devices - all the time.
Decades of research, thousands of studies, and billions of dollars later: 60+% of U.S. children are still chronically less than grade-level proficient in reading. We are dedicated to ending the archaic, abstract, tedious, precarious, and ineffective (and consequently life-maligning) ways we have historically taught reading. It's time to get our heads out of the past and recognize that learning to read is a technological process and, as such, a process best facilitated by technology.
---
Learning Stewards is a 501(c)(3) non-profit organization working to provide the technology described in this article to every public school student in the U.S. for free. Help us steward this transformation.
Resources: What is Reading - The Brain's Challenge - Reading Shame - Reading In The Brain - Oral Language and Reading Comprehension - Children of the Code
---

Comments

Jennifer, Senior Business Analyst (CBAP) at City of Charlotte · 7 years ago:
My son didn't automatically learn how to talk due to a speech disability, so he worked with a speech therapist who explicitly taught him how to make the sounds for speech. So I am skeptical about how you extrapolate from learning to do motor tasks like riding a bike or walking to a non-motor task like reading. About 15-20% of kids have some level of dyslexia, and learning explicitly is how they get it. My son also has dyslexia and a very low working memory. He can't memorize words. He has no word memory. It's taken him years to just get first-grade-level sight words. What happens when kids encounter words they don't know outside of a device, in the real world? Who wants to spend 10 minutes going over an unknown word? By then they need to go back and reread what they just read. Chances are a child like my son won't know most of the words. I learned to read by some unknown process: I had an excellent memory for words, I was read to a lot, and it magically happened, but now that I am explicitly teaching my son I really wish someone had taught me this way. He was read to a lot and didn't learn. I do think kids need to learn by doing instead of sitting in a classroom. Project-based learning should be implemented everywhere. But I also think explicit reading instruction and Structured Word Inquiry should be as well, for all kids. It makes the language make sense.

David Boulton (author) · 7 years ago:
Thank you Jennifer. First, the number of kids who have dyslexia is debatable (see Shaywitz https://goo.gl/FxGy9m, Lyon https://goo.gl/WxAi4F, Wendorf https://goo.gl/UKWX7f, Hennessy https://goo.gl/hfK4Sd). However, no matter which number you support for the percentage of children with neurobiologically innate dyslexia, at least three times that many children (arguably ten times that many) have difficulty with learning to read (NAEP: 60+% of grades 4, 8, and 12 are less than grade-level proficient). Did you visit the examples and actually experience what I am describing? Did you let your son play with this? The tech acts like 'training wheels' for getting up to speed with our orthography. It 'teaches' the same things (letters, sounds, letter-sound correspondences, spelling patterns); it just does so in response to the real, live flow of reading at a word-by-word level (rather than abstractly offline, like prevailing explicit instructional methods). Good readers recognize words as wholes; they don't decode them. However, they build up their inventory of recognized words by decoding the unfamiliar words they encounter. This tool guides that process in real time. The more they use it, the less they need it. Kids who learn this way are still learning the orthographic patterns; they are just doing so in different and more neurologically efficient ways, and as they do, they can transition to regular two-dimensional print (see Dehaene: https://goo.gl/51oEsN). Re making the language make sense: talk to any 25 adults who are not English teachers or linguists and ask them what role their conceptual understanding of written language plays in their reading. It's all unconsciously automatic until they hit an unfamiliar word. Even then, they don't have a conscious thought routine for working out word recognition. It's reflexive. I am not saying this will work for all kids with neurobiologically innate LD; however, even with such kids I think this real-time 'doing it' based learning will be a great help and serve as the ground-floor level to wrap supplemental instruction around.
Commenter ("The advocate, that is where it starts – with one person. The start of the flood begins with the first drop.") · 7 years ago:
And what is the plan for failed states, countries, or territories? And, um, the zombie apocalypse? Or remote areas without internet? Or nuclear-bombed zones?

David Boulton (author) · 7 years ago:
Why insert such dark comedic noise into this, Posie? Other than lack of access to the cheap tech required, do you have a cogent argument against the points I am making that we can discuss?