Experts, crowds and algorithms… or legal rock, paper and scissors
We know that experts are wise. And the concept of the wisdom of crowds can be traced back at least to Charles Darwin’s cousin’s musings about guessing the weight of oxen over a hundred years ago.
But can algorithms be wise? And which is better: an algorithm, an expert or a crowd? Who wins in this game of legal rock, paper and scissors?
Professor Dan Katz has been looking at the implications of these questions in the field of legal analytics, in particular asking which is more effective at predicting court decisions. Earlier this week, Richard Moorhead invited him to UCL to give an update on where he has got to.
Dan’s work builds on the classic Supreme Court Forecasting Project from 2004, in which a statistical model correctly predicted the decisions of the Supreme Court of the US (SCOTUS – like POTUS but with more of a Star Trek vibe) 75% of the time, compared with a 59.1% success rate for a stellar panel of experts. (Given that the choice SCOTUS faced was between affirming the decision below and reversing it, and that around 60% of all decisions are reversals in any event, the experts did particularly poorly: just bet on reverse and you have a 60% chance of being right.) So in that round, the algorithm wins.
Prof Katz has now built and refined multiple prediction models, based on a "forest" of multi-layered decision trees, which is proving more sensitive at predicting outcomes in tough SCOTUS years. The approach has been benchmarked against decisions going back many years to test the algorithm, and it is getting better all the time.
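For the technically curious, here is a minimal sketch of the kind of tree-forest classifier this describes. It is not Prof Katz's model: the features, labels and library choice (Python with scikit-learn) are placeholders I have invented, and with random synthetic labels the accuracy will sit around chance. The point is only the shape of the approach: many decision trees, each trained on a bootstrap sample of past cases, voting on affirm or reverse.

```python
# Illustrative sketch only: an affirm/reverse classifier built from a
# forest of decision trees, trained on synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases = 2000

# Hypothetical case features: circuit of origin, issue-area code,
# direction of the decision below, and whether the US is the petitioner.
X = np.column_stack([
    rng.integers(1, 14, n_cases),
    rng.integers(1, 15, n_cases),
    rng.integers(0, 2, n_cases),
    rng.integers(0, 2, n_cases),
])
y = rng.integers(0, 2, n_cases)  # 1 = reverse, 0 = affirm (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The "forest": many trees, each fitted to a bootstrap sample of past
# cases, voting on the outcome of each new case.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```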
He has also set up a competitive prediction site, FantasySCOTUS, the principles of which will be familiar to anyone who has run a fantasy sports team. Amateurs compete with their predictions of SCOTUS decisions, with a prize of $10,000 for the best predictor. The best get about 80% right, and some of them are not even lawyers. And they do better than the algorithm. Informed, determined amateur beats algorithm beats expert.
What is most fascinating, though, is that the average of the amateur predictions on FantasySCOTUS beats the best experts over the long run. Crowd beats amateur.
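To see why averaging helps, here is a toy simulation with made-up numbers (not FantasySCOTUS data): a hundred forecasters who are each right about 65% of the time, aggregated by a simple majority vote. Because the simulated errors are independent, the effect is exaggerated compared with real forecasters, whose mistakes tend to be correlated, but the direction is the same.

```python
# Illustrative wisdom-of-crowds sketch: many modestly accurate
# predictors, aggregated by majority vote, beat any one of them,
# provided their errors are not too correlated.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, 1000)  # 1 = reverse, 0 = affirm

def predictor(accuracy):
    """Simulate a forecaster who is right with the given probability."""
    correct = rng.random(truth.size) < accuracy
    return np.where(correct, truth, 1 - truth)

crowd = np.array([predictor(0.65) for _ in range(100)])  # 100 amateurs
majority = (crowd.mean(axis=0) > 0.5).astype(int)        # case-by-case vote

print("one amateur:   ", (crowd[0] == truth).mean())
print("crowd majority:", (majority == truth).mean())
```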
So in this legal rock, paper, scissors game, should we not just rely on the crowd-sourced view of informed amateurs to predict legal outcomes?
Well, no. Dan has looked at the impact of predictions of imminent SCOTUS decisions on the share prices of traded securities in companies affected by the decision. The short-term financial impact can be as significant as that of many other external factors, and the stakes are potentially too serious to be left to amateurs, however gifted. In that context, these predictions have real commercial value.
Dan is convinced that a weighting can be applied to predictions from the three different sources, much as Nate Silver weights different polls to give an overall prediction of US presidential election results, getting 99 out of 100 state calls right over the last two elections. So, for certain types of case, what the algorithm predicts is worth more than the crowd's view. For others, it's the expert's.
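Conceptually, the weighting might look something like the toy sketch below. The weights and probabilities are invented for illustration; in practice they would be learned from each source's historical accuracy on that kind of case.

```python
# Illustrative sketch of blending the three sources (expert, crowd,
# algorithm) with weights that depend on the type of case.
def blend(p_expert, p_crowd, p_algorithm, weights):
    """Weighted average of three probability-of-reverse estimates."""
    w_e, w_c, w_a = weights
    return (w_e * p_expert + w_c * p_crowd + w_a * p_algorithm) / (w_e + w_c + w_a)

# Hypothetical case types and weights, not real calibration data.
weights_by_case_type = {
    "economic activity": (0.2, 0.3, 0.5),  # lean on the algorithm
    "civil rights":      (0.5, 0.3, 0.2),  # lean on the expert
}

p = blend(0.55, 0.70, 0.80, weights_by_case_type["economic activity"])
print(f"blended probability of reverse: {p:.2f}")
```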
The real point is that human plus algorithm beats anything else any time. This is AI as Augmented (or is it "Augmenting"?) Intelligence rather than Artificial.
As with any recipe, though, the secret in the sauce is all about the proportions of the ingredients for different circumstances.
Dan and Richard: thank you for a fascinating evening. We await the next instalment with great interest.