Some would argue that the Supreme Court is utterly predictable: We all know that Scalia is going to go scour the statutes, Thomas is going to complain about Commerce Clauses (dormant or otherwise), and Sotomayor is going to tick off her colleagues.
And with 9-0 decisions at a high point (66 percent this term, versus an average of 44 percent over the preceding five terms), prediction can't be that difficult, can it?
According to Prof. Josh Blackman, the creator of FantasySCOTUS, the "power predictors" hit 75 percent. Next season, FantasySCOTUS players will go up against more than a few SCOTUS nerds with lucky guesses: They'll battle a predictive algorithm designed by Blackman and his colleagues, one that predicts individual justices' votes 70.9 percent of the time and overall affirm/reverse outcomes 69.7 percent of the time, tested against 7,000 past cases.
How Does This Algorithm Work?
According to Blackman's blog, the model uses something called an "extremely randomized trees" classifier: an ensemble of decision trees in which the variables and split points are chosen with extra randomness, then tested to measure each variable's predictive power. Think of it as trial-and-error on meth, resulting in a cocktail of 90+ variables weighted by predictive power [PDF].
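For the curious, here is a minimal sketch of how an extremely randomized trees classifier works in practice, using scikit-learn's `ExtraTreesClassifier`. This is not the authors' actual model; the data is synthetic, and the feature setup (a dozen made-up case variables instead of their 90+) is purely an illustrative assumption:

```python
# Sketch only: an extremely randomized trees classifier on fake data,
# standing in for the 90+ real case variables the actual model uses.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for case-level variables (issue area, lower court,
# petitioner type, etc. -- all hypothetical here).
n_cases, n_features = 2000, 12
X = rng.normal(size=(n_cases, n_features))

# Fake outcome: 1 = reverse, 0 = affirm, loosely driven by two features.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_cases) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree in the ensemble picks split variables and thresholds with
# extra randomness, which is what distinguishes extra-trees from a
# plain random forest.
model = ExtraTreesClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)

# Feature importances play the role of the "weights" the post describes:
# how much each variable contributes to the predictions.
importances = model.feature_importances_
```

The "cocktail of variables weighted by predictive power" falls out of the ensemble automatically: after training, `feature_importances_` tells you which inputs actually drove the votes.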
Does It Work?
Seventy percent overall is nothing to sneeze at, but there are a few shortcomings.
The first, and most understandable, is that some justices are more unpredictable than others -- Justice Felix Frankfurter was a complete wild card, while Justice William O. Douglas was pretty much a sure thing. Justice Clarence Thomas is relatively predictable, while Justice William Rehnquist was predictable as an Associate Justice, and far less so once he became Chief Justice.
Blackman has a heatmap of all justices' predictability according to the model.
More interesting, however, are affirmances. Blackman notes:
Our model struggles to identify in advance cases that the Court ultimately decides to affirm -- especially unanimous affirmances. Since 1953, the Court has affirmed 2,623 cases or 34.1% of its fully argued cases. On this subset of cases, our model does not perform particularly well. In some years, we are able to forecast less than 25% of these cases correctly.
OK, None of That Made Sense, but I Want to Play!
We couldn't agree more. If you are interested in the algorithm, both Prof. Blackman's post and his team's paper are worth a read.
But if you want to get into the fun and games part, FantasySCOTUS is, as always, where it's at. Much like Fantasy Football, we forgot to set our lineups last year, but this year, we'll be better -- we promise. You can find me in the Learned and Noble Hands league.
And if you want a war against a machine, details on the tournament against the algorithm are TBA -- we'll keep you updated when we learn more.