It may be hard to believe, but the future is now. Thanks to the futuristic world we live in, our clients can be faced with damning evidence that was generated via automation or artificial intelligence. Soon, prosecutors may be asking the robot gardener what happened.
What's worse, the systems that generate the evidence are prohibitively expensive to examine forensically. And let's be real: the custodian of records who gets called to testify to admit the evidence will likely be as unhelpful a witness as they come. Upping the ante, federal litigators are being trained to use this type of evidence as a sword in both criminal and civil litigation.
So, how do you rebut seemingly unimpeachable, robot-generated evidence?
Check the Data for Obvious Errors
Perhaps the most frequently used automated evidence in courts today involves those hated red-light cameras. Interestingly, one of the best methods of contesting this automatically generated evidence comes courtesy of Shaggy. While we all know it really was him, when it comes to the scourge known as the red-light camera ticket, one of the quickest ways out of the ticket is showing that the robot got the wrong person. Computers can sometimes be wrong.
While this may not be so revolutionary, the same premise can be applied to other forms of automatically generated evidence. Actually looking at the data, rather than just the data's conclusions, could reveal an obvious error (that even a custodian of records could see).
Data's Only as Good as What's Collected
Beyond the obvious errors, depending on what sort of data and evidence we're talking about, you could find substantial issues involving automation bias or other systemic failures (though these are likely to require an expert or two, or other evidence, to prove).
When the automated data is not something as basic as a photograph of a car speeding through an intersection, you may be able to capitalize on what is missing from the data, or on an alternative explanation. For example, just because a phone's GPS indicates that it was at a certain location, that doesn't necessarily mean the phone's owner was there as well. While automated, or machine-generated, data tends to be treated as inherently trustworthy, that trust can be shattered by exposing its limitations.