Engineering Learning Science Experiments in EdTech Platforms: A Worked Example with UpGrade & Battleship Numberline
Overview
In this article, I describe how to engineer learning science experiments in EdTech apps using UpGrade, an open-source platform for running randomized controlled trials in digital learning platforms. Experimentation enables researchers to improve edtech products using data and evidence.
Let's start by looking at UpGrade, and what it does.
UpGrade - An Open Source Platform for Experimentation in EdTech
UpGrade is a tool for managing experiments in digital learning platforms. Researchers create their experiment designs in UpGrade, and engineers integrate the SDK, which connects to the UpGrade server to deliver random condition assignments to users.
Experiment creation happens through a wizard:
We will cover UpGrade and its core concepts in another post. Head over to the UpGrade website or go through the documentation for more details.
Learning Engineering - The Process to Systematically Improve EdTech
Systematically enhancing the value of an edtech tool requires rigorous experimentation. Comparing the effects of different interventions in the field takes a lot of effort, while running experiments online offers a cost-effective way to find out what works for learners. For classrooms with digital access that want to participate, engineering the experiments inside the applications makes randomized field trials much easier to run. It is important to keep in mind, though, that only a subset of education experiments can be run online.
The massive amount of edtech use these days offers organizations a great testbed that can be utilized to improve product efficacy based on data.
In my previous article, "How to Improve EdTech Engagement Using Learning Engineering Principles: Case Study of an Educational Game", I described the high-level process of Learning Engineering that aims to:
(1) create engaging and effective learning experiences, (2) by applying evidence-based principles and methods, (3) to support the individual challenges of learners and to better understand learners and learning.
Let's dive into one worked example of creating an experiment in an educational game.
Battleship Numberline - Number Sense Made Fun!
We conducted the experiment in Battleship Numberline, a math game for developing number sense. Students play by finding the location of toy ships on a number line and exploding them by throwing pineapples at the ships.
You can play the game here: Play Battleship Numberline
Worked Example - Studying the Effects of Game Difficulty on Student Motivation
Hypothetical Research Question
Studies show an inverted-U relationship between difficulty and engagement, e.g., in online chess play. What is the relationship between difficulty and engagement in Battleship Numberline?
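As a toy illustration of the inverted-U hypothesis (not a model fitted to any game data, and all numbers are made up), engagement can be pictured as peaking at a "sweet spot" difficulty and falling off on both sides:

```javascript
// Toy inverted-U model: engagement peaks at a sweet-spot difficulty.
// The sweet spot and width are illustrative parameters, not measurements.
function predictedEngagement(difficulty, sweetSpot = 0.5, width = 0.3) {
  // difficulty in [0, 1]; a Gaussian bump centered on the sweet spot
  const z = (difficulty - sweetSpot) / width;
  return Math.exp(-0.5 * z * z);
}

// Engagement is highest near the sweet spot and lower at the extremes:
predictedEngagement(0.5); // 1 (peak)
predictedEngagement(0.1); // lower than at 0.5
predictedEngagement(0.9); // lower than at 0.5
```

An A/B test like the one below samples two points on this hypothesized curve (easy vs. hard) and compares the observed engagement.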
Experiment Design
Our experiment will be an A/B test, comparing the outcomes of two conditions (easy play and hard play) on student engagement.
The questions in the game can be made easier or harder by changing the size of the target that needs to be hit. If the target is bigger, users can explode it even when they are less accurate. If exploding more ships keeps students playing for longer, they may end up learning more. Measuring the learning gains won't be part of this experiment; I will cover that aspect in a subsequent article.
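One minimal way to implement the two conditions, sketched here with hypothetical names and numbers (the actual game code differs), is to map the assigned condition to a hit tolerance around the ship's true position on the number line:

```javascript
// Hypothetical mapping from condition to target size (hit tolerance),
// expressed as a fraction of the number line's length.
const HIT_TOLERANCE = { easy: 0.10, hard: 0.03 };

// A throw "explodes" the ship if it lands within the tolerance window
// around the ship's true position on the [0, 1] number line.
function isHit(throwPosition, shipPosition, condition) {
  return Math.abs(throwPosition - shipPosition) <= HIT_TOLERANCE[condition];
}

// The same throw can hit under "easy" but miss under "hard":
isHit(0.42, 0.5, 'easy'); // true  (off by 0.08, within 0.10)
isHit(0.42, 0.5, 'hard'); // false (off by 0.08, outside 0.03)
```

The rest of the game logic stays identical across conditions; only the tolerance changes, which keeps the comparison clean.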
UpGrade Configuration
The experimenter starts by creating an experiment in UpGrade and setting its status to Enrolling. Experiments and the edtech app are connected via "Decision Points" (also called "Experiment Points"), each made up of a Site (roughly, the function that requests the random assignment) and a Target (the piece of content or resource where the experiment is happening). The screenshots below show the process of configuring an experiment in UpGrade:
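For concreteness, the decision point for this experiment can be thought of as a Site/Target pair. The strings below are illustrative; whatever the app sends must match exactly what was configured in the UpGrade portal, or no condition will be returned:

```javascript
// Illustrative decision point descriptor; the Site and Target strings
// must match the experiment configuration in the UpGrade portal.
const decisionPoint = {
  site: 'SetDifficulty', // the function requesting the random assignment
  target: 'Default',     // the content/resource being experimented on
};

// The app references these strings when asking the SDK for a condition:
const label = `${decisionPoint.site} / ${decisionPoint.target}`;
console.log(label); // "SetDifficulty / Default"
```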
Experiment Engineering - Using UpGrade SDK in Battleship Numberline
// initialization of the SDK
import UpgradeClient from 'upgrade_client_lib';

// the client takes the user ID and the UpGrade server URL
const upgradeClient = new UpgradeClient("userid_goes_here", "https://upgrade-backend.app/");

// initialize the user on the UpGrade server
await upgradeClient.init();

// getting the condition assignment for the user
// difficultyLevel will be either "easy" or "hard", randomly chosen;
// the same user ID always gets the same assignment for simple experiments
const difficultyLevel = await upgradeClient.getExperimentCondition('assign-prog', 'SetDifficulty', 'Default');

// use difficultyLevel in the application to show the appropriate
// experience to the user

// once the user has experienced the condition, mark the decision point,
// passing along the condition that was actually shown
await upgradeClient.markExperimentPoint('SetDifficulty', difficultyLevel, 'Default');

// after completing the above steps, UpGrade will start showing data for the
// experiment in the web portal (see example below)
Observing Data
Once enrollments start coming in, UpGrade shows the condition assignment data for the various conditions. You can also track outcome metrics in UpGrade; we will cover metric tracking in another post.
Users can export the condition assignment data from UpGrade to do further analysis.
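As a small example of such analysis (the actual export format may differ; I am assuming a simple CSV with userId and condition columns here), one could count enrollments per condition to sanity-check the randomization:

```javascript
// Count users per condition from an exported assignment CSV.
// Assumes a header row and columns: userId,condition (layout is assumed).
function countByCondition(csvText) {
  const counts = {};
  const rows = csvText.trim().split('\n').slice(1); // skip header row
  for (const row of rows) {
    const condition = row.split(',')[1];
    counts[condition] = (counts[condition] || 0) + 1;
  }
  return counts;
}

// usage with a tiny made-up export:
const sample = 'userId,condition\nu1,easy\nu2,hard\nu3,easy';
countByCondition(sample); // { easy: 2, hard: 1 }
```

With roughly equal group sizes confirmed, the exported data can then be joined with engagement logs for the actual analysis.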
Conclusion
Experimental evaluation of new features and instructional approaches in your edtech platform can provide data to back up claims of efficacy. Integrating UpGrade into your digital learning platform makes it easy to keep improving edtech products after their initial evaluation.
If you need help using UpGrade, please feel free to message me. The UpGrade project is led by Dr. Steve Ritter at Carnegie Learning, and Playpower Labs is a key contributor to the project.