May 2024: The growing problem with EdTech research
eSpark Learning
Engaging instruction and practice for reading, writing, and math in grades PK-8
In this edition of EdTech Evolved, we're taking a deep dive into the current reality of "efficacy research" and why the studies we've all come to rely on might not actually be so reliable.
Is EdTech research a sham?
It seems like such a simple question. And it can usually be answered by a relatively quick Google search. But what if we told you that everything we know about edtech effectiveness is an illusion? That the modern reality of “efficacy research” is so riddled with conflicts of interest, pay-to-play barriers, and cherry-picked data that we may not have any idea what actually works and what doesn’t?
Many of the criteria used today to allocate federal funding, create state-approved vendor lists, or obtain school board approval for massive contracts are shaky, inconsistent, and too often not at all reflective of the real-life impact a program will have in a school or district.
Everybody has some evidence of efficacy at this point, or they wouldn’t be around anymore. So how is it that the treatment group always wins?
In 2015, the Every Student Succeeds Act (ESSA) introduced evidence-based requirements for federal funding eligibility, serving as the catalyst for a massive shift in the edtech landscape. Schools began requiring publishers to demonstrate evidence of efficacy, and publishers scrambled to commission studies or put together their own to check this new, mandatory box. Those who lacked the resources to do so began fading into oblivion.
The tiers of evidence defined under ESSA are similar to benchmarks from the What Works Clearinghouse (WWC) often referenced in discussions about evidence-based practices. But it's risky to point to the WWC as an unassailable source of truth.
A quick review of the WWC website shows “Promising Evidence” for the much-maligned Reading Recovery program dated July 2013, with no updates to reflect follow-up research showing a significant negative impact on children. Popular programs like i-Ready and Achieve3000 are also represented on the site with damning reports showing no positive impact on student achievement.
These heavyweight publishers have had no trouble simply commissioning more studies and sidestepping the WWC to check the evidence-based box for school district purchasers. It’s not hard to find the resources or justify the investment when you have district contracts worth $7-10 million apiece.
So the programs with the most evidence to display aren’t necessarily the most effective; they’re just the ones that can afford to maintain their own in-house research teams. It’s certainly possible to commission independent, third-party studies, but those don’t come cheap, either.
Depending on the scope of the study and the ESSA evidence level a publisher is looking for, independent studies demonstrating efficacy can cost anywhere from $15-20k to well into six figures.
Once a study has been published, it has no enforceable shelf life. For instance, DreamBox (now part of Discovery Education) has long touted its status as “the only comprehensive K-8 math program rated STRONG by Evidence for ESSA,” but the studies that earned it that rating date back to the 2010-2011 school year. Digital learning was still so new at the time that the control group for the study didn’t use any online tools at all.
Also trending this month:
The eternal tug-of-war between teacher choice and district mandates
Do we lean into teacher choice or do we drive consistency across classrooms? It's a push and pull that's been around for decades. Now, as the pendulum swings back toward consolidation, questions are surfacing about what’s really best for today's generation of learners.
What do we want driving the decisions that shape our classrooms? Are we unintentionally squeezing out innovation in favor of the slower-moving establishment? And why are so many schools still paying millions of dollars to companies that have proven unable to reverse the negative trends that started at the same time that digital learning boomed?
Predicting the challenges of AI use in schools (and how to avoid them)
One of the most surprising things about artificial intelligence is just how quickly it has progressed. That’s also one of the reasons some are still wary about using it. When it’s changing so quickly, how can we predict what the challenges of AI use will be? How do we make sure to prepare for those challenges, so that our students don’t wind up suffering the consequences?
There’s no way to guarantee what the future will hold, but we can make predictions based on the current state of AI in schools and the most common issues associated with introducing any new technology into classrooms. Here are the most likely challenges educators will face when bringing AI into the classroom, and what you can do to avoid them.
Adding AI to your school district’s acceptable use policy
Wherever you are in your journey to AI-readiness, your district’s acceptable use policy likely looms as a high-priority update.
Know this: you are not alone. Thousands of school systems are still wrestling with the whos, hows, whys, whats, and whens of AI implementation. Get a head start with both a templated outline and a sample Acceptable Use Policy addendum here.
That's a wrap on this edition of EdTech Evolved! Subscribe to make sure you catch next month's edition.
EdTech Evolved is brought to you by eSpark Learning, a leading provider of highly personalized math, reading, and writing curriculum for grades K-8. eSpark's AI-driven approach to personalized instruction has been featured in EdSurge, AP, and more.