EdTech & Algorithmic Transparency

How the investigation into Kamilah Campbell’s SAT score is a reminder of the importance of transparent and explainable artificial intelligence

The recent news surrounding the investigation of Florida high-school student Kamilah Campbell’s SAT score, which was flagged by the Educational Testing Service (ETS) and The College Board for possible cheating, offers an interesting perspective on issues of algorithmic transparency in the growing EdTech sector. While it’s likely that only a portion of the process used to flag the test was automated, the responses from both parties highlight issues that will become commonplace in an educational landscape that is increasingly dominated by opaque algorithms.

Background

As reported by CNN and other outlets, Campbell’s second attempt at the SAT, which yielded a 330-point improvement, is being withheld and investigated by The College Board after it was found, among other things, that her answers closely resembled those of other students. Represented by civil rights attorney Ben Crump, Campbell is petitioning to have her SAT score released in time for her to be accepted into college and apply for scholarships. “This 1230 (score) makes a big difference whether she’s going to get into the college of her dreams and whether she can afford it,” says Crump.

The College Board released a statement clarifying their review process, which considers a variety of factors in determining whether a student’s scores should be held or canceled for suspected cheating. Evidence of cheating might include a high degree of similarity between a student’s answers and those of a group of other test takers, the presence in that group of students who have had scores canceled before, similarity between the answers and a confiscated “cheat sheet,” and the absence of scratch work in the student’s testing booklet. The letter sent to Campbell from The College Board cited “substantial agreement between (her) answers on one or more scored sections of the test and those of other test takers” as a reason for flagging the exam for review. Though not stated in the original article or the review statement, this initial flag is likely the only automated part of the process.
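
To make the similarity criterion concrete, here is a minimal sketch (in Python) of what an automated answer-similarity flag could look like. It is purely illustrative: the actual criteria, similarity measures, and thresholds used by ETS and The College Board are not public, and the 0.9 cutoff and function names below are assumptions.

```python
# Purely illustrative: a toy answer-similarity flag. The actual criteria,
# similarity measures, and thresholds used by ETS and The College Board
# are not public; the 0.9 cutoff below is an arbitrary assumption.

from typing import Dict, List, Sequence


def similarity(a: Sequence[str], b: Sequence[str]) -> float:
    """Fraction of questions on which two answer sheets agree."""
    if len(a) != len(b):
        raise ValueError("answer sheets must cover the same questions")
    return sum(x == y for x, y in zip(a, b)) / len(a)


def flag_for_review(student: Sequence[str],
                    others: Dict[str, Sequence[str]],
                    threshold: float = 0.9) -> List[str]:
    """Return the IDs of test takers whose answers agree with the
    student's at or above the (hypothetical) threshold."""
    return [test_id for test_id, answers in others.items()
            if similarity(student, answers) >= threshold]


# Toy data: one of three other test takers matches on all four answers.
student = ["A", "C", "B", "D"]
cohort = {
    "taker-1": ["A", "C", "B", "D"],
    "taker-2": ["B", "C", "A", "D"],
    "taker-3": ["D", "D", "B", "A"],
}
print(flag_for_review(student, cohort))  # -> ['taker-1']
```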

Relevance for Automation in EdTech

Regardless of the outcome of the review process, or Campbell’s demand for the scores to be released, the rhetoric surrounding this story highlights several issues important for an EdTech industry that is investing heavily in artificial intelligence and machine learning as its future.

Transparency

Crump made it clear to CNN that transparency of the process is a critical issue, saying, “She is now being accused of cheating. And why? They say, ‘Oh, you just have to take our word for it, that there’s something that we see that’s wrong,’” adding, “they need to tell us (what they see).”

As reported by The Miami Times, Bob Schaeffer, a public education director with FairTest, also highlights the apparent secrecy of the flagging criteria, saying "Some kids cheat; there is no question about that... But to hold scores arbitrarily based on secret evidence is fundamentally un-American."

The director of media relations with the College Board, Maria Eugenia Alcon-Heraux, stated that “earlier this week ETS sent the student a report with initial evidence that led to the review,” and that “[the report] is en route.”

This is a reminder of how important transparency will be when the majority of skills and competency assessments are fully automated. The College Board’s initial letter, and the statement released afterward, attempted to address this issue by sketching the process in broad strokes, but its incompleteness left many questions open: Was the flagging process automated? How similar do the answers need to be for them to be flagged? How was this cutoff chosen? What is the false positive rate for similarity above this cutoff, and how was it determined? The answers to these questions exist, but have they been effectively communicated to the relevant stakeholders? What form of transparency, if any, could have avoided this issue?
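
To illustrate the kind of false-positive question being raised, the toy simulation below estimates how often a given similarity cutoff would flag two students who answered completely independently, under the deliberately unrealistic assumption that they pick answers uniformly at random. Every parameter in it is hypothetical and not drawn from the actual test or review process.

```python
# Hypothetical back-of-the-envelope simulation: how often would two students
# answering 50 four-choice questions independently and uniformly at random
# exceed a 0.6 agreement cutoff? Every parameter here is an assumption made
# for illustration; none reflect the actual test or review process.

import random


def simulated_false_positive_rate(n_questions: int = 50,
                                  n_choices: int = 4,
                                  cutoff: float = 0.6,
                                  n_pairs: int = 100_000,
                                  seed: int = 0) -> float:
    """Estimate the chance that two independent test takers are flagged."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n_pairs):
        a = [rng.randrange(n_choices) for _ in range(n_questions)]
        b = [rng.randrange(n_choices) for _ in range(n_questions)]
        agreement = sum(x == y for x, y in zip(a, b)) / n_questions
        if agreement >= cutoff:
            flagged += 1
    return flagged / n_pairs


# Expected agreement between two random answer sheets is only 0.25, so a
# 0.6 cutoff almost never flags an independent pair in this toy model.
print(simulated_false_positive_rate())
```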

The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) have drafted ethical guidelines for what they’re calling “Trustworthy AI”, and they offer particular advice on transparency which may apply to this situation. They suggest that vendors “provide, in a clear and proactive manner, information to stakeholders … about the AI system’s capabilities and limitations…”, suggesting further that they must “strive to facilitate the auditability of AI systems, particularly in critical contexts or situations. To the extent possible, design your system to enable tracing individual decisions to your various inputs; data, pre-trained models, etc. Moreover, define explanation methods of the AI system.”
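
As a rough sketch of what tracing individual decisions to their inputs could look like in practice, the example below records each flag together with the similarity scores, cutoff, and rule version that produced it. The record fields and log format are illustrative assumptions, not a prescription from the AI HLEG or a description of how ETS or The College Board operate.

```python
# A rough illustration of traceable, auditable flag decisions: each decision
# is stored with the inputs and parameters that produced it. The record
# fields, file format, and version label are illustrative assumptions.

import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Dict


@dataclass
class FlagDecision:
    test_taker_id: str
    flagged: bool
    similarity_scores: Dict[str, float]   # similarity to each compared test taker
    threshold: float                      # the cutoff in force at decision time
    rule_version: str                     # which rule set or model produced the flag
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


def log_decision(decision: FlagDecision,
                 path: str = "flag_audit_log.jsonl") -> None:
    """Append the decision to a JSON-lines audit log so that any
    individual flag can later be traced back to its inputs."""
    with open(path, "a") as log_file:
        log_file.write(json.dumps(asdict(decision)) + "\n")


log_decision(FlagDecision(
    test_taker_id="anon-001",
    flagged=True,
    similarity_scores={"anon-002": 0.94},
    threshold=0.9,
    rule_version="rules-2019.01",
))
```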

One worry might be: how much information can ETS and The College Board make available without risking people attempting to “game the system”? In the words of the AI HLEG, “be mindful that there might be fundamental tensions between different objectives (transparency can open the door to misuse; identifying and correcting bias might contrast with privacy protections). Communicate and document these trade-offs.”

The fact that Crump has chosen to focus on transparency is telling, and the lesson for EdTech is that these issues must be considered well in advance of deploying new assessment algorithms that operate at scale.

Accountability

Another focus of Crump’s rhetorical argument for the release of the scores is the issue of reciprocal accountability. He argues that “they want these students to be accountable to them, but this system is not accountable to anybody. … Well, this time, they’re going to have to be accountable as well.”

This message should resonate strongly with those who develop and deploy algorithmic solutions to assess, score, measure, and predict human skills and behavior. The consequences of decisions made by machines affect real people in potentially significant ways, and the question of who is accountable when something goes wrong is becoming increasingly pressing. A Forrester Consulting survey (commissioned by KPMG International) found that 62% of respondents would blame the company that developed the faulty software behind an accident caused by an autonomous vehicle, and 54% would also blame the manufacturer that installed the software and the driver who could have taken manual control (it was a “select all that apply” question). The analogy in EdTech is that the vendor creating the AI software is on the hook in the case of a mistake, followed closely by the school district or institution that bought the software and the educator who chooses to use it in place of direct human assessment.

None of this is meant to imply that ETS or The College Board did anything wrong, and both organizations have put considerable effort into communicating their commitment to fairness and transparency. While AI has the potential to revolutionize and democratize education, the issues raised by the Kamilah Campbell case give EdTech innovators reason to pause and carefully consider transparency and accountability at every step of the development process.

This article was originally published in Towards Data Science, with minor revisions.
