It turns out that the information collected by the Innocence Project is serving another purpose: helping to develop better lie-detection software.
Researchers at the University of Michigan have used clips of witnesses collected by the Innocence Project and other sources to put together a system of "tells" that trains a computer to predict whether or not someone is lying. And you know what? It's about fifty percent more accurate than humans.
Much to Learn, Young Grasshopper
The project began with properly training the program. To do this, the research team needed a pool of recorded interviewees who they knew were either telling the truth or lying. To train the program, they used 120 video clips of actual trials that are freely available on the Innocence Project's website.
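That training setup is classic supervised learning: clips with known truth/lie labels teach the system which behavioral "tells" separate the two. Here's a toy sketch of the idea; the feature names, numbers, and the nearest-class-mean rule are all invented for illustration, not the Michigan team's actual method.

```python
# Toy supervised classifier in the spirit of the study (purely illustrative).
# Each clip is a hypothetical feature vector: (hand_gestures, eye_contact_seconds).
from statistics import mean

# Made-up labeled training clips.
truthful = [(2, 10), (1, 12), (3, 9)]
deceptive = [(6, 20), (5, 18), (7, 22)]

def class_mean(clips):
    """Mean feature vector (centroid) for one class of labeled clips."""
    return [mean(clip[i] for clip in clips) for i in range(len(clips[0]))]

def predict(clip, truth_mean, lie_mean):
    """Nearest-centroid rule: label a new clip by whichever class
    centroid its features are closer to (squared distance)."""
    d_truth = sum((a - b) ** 2 for a, b in zip(clip, truth_mean))
    d_lie = sum((a - b) ** 2 for a, b in zip(clip, lie_mean))
    return "deceptive" if d_lie < d_truth else "truthful"

truth_mean = class_mean(truthful)
lie_mean = class_mean(deceptive)

# A gesture-heavy, high-eye-contact clip lands nearer the deceptive centroid.
print(predict((6, 19), truth_mean, lie_mean))  # → deceptive
```

The real system presumably uses far richer features extracted from video and audio, but the core loop is the same: labeled examples in, a decision rule out.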
To test the accuracy of the newly trained system, it was necessary to try it on fresh subjects. The problem, evidently, is creating a genuine reason for a test subject to lie. In the lab, the setting is too clinical; out in the real world, there are many motives to lie and deceive: fear, jealousy, etc.
Results so Far
The findings have led to some rather interesting conclusions. Liars tend to gesticulate with their hands more. Opening the eyes wider is more associated with truth-telling, as are raising the eyebrows and shaking the head. Also, strangely enough, maintaining eye contact is more strongly correlated with lying. Bet you didn't expect that one.
The eye contact finding is actually not all that surprising. More recent studies have indicated that avoiding eye contact is very much culturally inculcated and is increasingly recognized as a poor indicator of veracity.
Issues With the Study
It will be a while yet before this technology sees the inside of a courtroom, and we're sure the researchers are aware of that. Initial problems lie (no pun intended) with the original source material. The validity of the computer's training depends almost entirely on the truth and falsity labels lining up perfectly with the actual outcomes of the Innocence Project trials. Additionally, as the team has already pointed out, "laboratory lying" is not as realistic as real-world lying, and that has significant drawbacks for honing accuracy.
Nonetheless, the project is noteworthy. Even with crude calibration, it seems that the program is already better than humans at telling when someone is lying. That either means the machine is excellent already, or we are terrible at picking up subtle lies. It also suggests that human beings are susceptible to lies delivered in comforting and inviting ways.