AI Facial Recognition Technology Raises the Risk of Wrongful Arrests

In January 2020, Robert Williams of Michigan was wrongfully arrested based on a match from facial recognition technology (FRT). The incident began with a police call to his wife and ended with his arrest at his home for the alleged theft of Shinola watches, after the technology matched grainy surveillance footage to an old driver's license photo. It was the first documented instance of a wrongful arrest attributed to FRT.
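At its core, this kind of FRT search is a nearest-neighbor lookup: the system converts faces into numerical embeddings and compares a probe image against a gallery of enrolled photos, flagging whoever scores highest above a similarity threshold. The sketch below is a minimal, hypothetical illustration of that pipeline, not any vendor's actual system; the gallery size, embedding dimension, noise level, and threshold are all synthetic assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(probe, gallery):
    """Cosine similarity between one probe vector and each gallery row."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery @ probe

# Hypothetical gallery: 1,000 enrolled identities, 128-dim embeddings.
gallery = rng.normal(size=(1000, 128))

# A probe from "grainy footage": a real identity's embedding plus heavy
# noise, mimicking low image quality that pushes the probe toward other
# identities and raises the risk of a false match.
true_identity = 42
probe = gallery[true_identity] + rng.normal(scale=3.0, size=128)

scores = cosine_similarity(probe, gallery)
candidate = int(np.argmax(scores))
THRESHOLD = 0.25  # an arbitrary operating point; real systems tune this

if scores[candidate] >= THRESHOLD:
    verdict = "correct" if candidate == true_identity else "FALSE MATCH"
    print(f"Top candidate: identity {candidate} "
          f"(score {scores[candidate]:.2f}, {verdict})")
else:
    print("No candidate above threshold")
```

The key point is that such a system always has a best-scoring candidate; whether that candidate is the right person depends on image quality and the chosen threshold, neither of which the person being searched for controls.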

Williams is among at least seven people known to have been misidentified by facial recognition technology, six of whom are Black, a troubling pattern of racial disparity. The cases of Nijeer Parks, Porcha Woodruff, Michael Oliver, Randall Reid, and Alonzo Sawyer, alongside Williams's, underscore the technology's flaws, particularly its lower reliability for people of color. Studies suggest these inaccuracies could stem from algorithmic bias, the underrepresentation of Black faces in training data, and the amplification of existing police biases.
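One way to see how training-data gaps can translate into unequal error rates is to measure the false-match rate per group at a single global threshold. The toy evaluation below uses purely synthetic embeddings (the groups, cluster shapes, and numbers are assumptions, not real measurements): a model that separates one group's faces less well produces far more false matches for that group at the same operating point.

```python
import numpy as np

rng = np.random.default_rng(1)

def false_match_rate(embeddings, threshold, trials=5000):
    """Fraction of random impostor pairs (two different identities)
    whose cosine similarity clears the match threshold."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    i = rng.integers(0, len(embeddings), size=trials)
    j = rng.integers(0, len(embeddings), size=trials)
    keep = i != j  # discard the rare self-pairings
    scores = np.einsum("ij,ij->i", normed[i[keep]], normed[j[keep]])
    return float(np.mean(scores >= threshold))

# Synthetic stand-ins: group B's embeddings sit in a tighter cluster,
# a crude proxy for a model that learned less-discriminative features
# for faces underrepresented in its training data.
group_a = rng.normal(size=(2000, 64))
group_b = rng.normal(size=(2000, 64)) * 0.5 + 1.0

THRESHOLD = 0.5  # one global threshold applied to everyone
print("Group A false-match rate:", false_match_rate(group_a, THRESHOLD))
print("Group B false-match rate:", false_match_rate(group_b, THRESHOLD))
```

At the shared threshold, the compressed group yields dramatically more false matches, which is the mechanism critics point to: a single headline accuracy figure can hide very different error rates for different communities.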

The misuse of FRT in law enforcement mirrors past controversies over forensic methods such as bite mark analysis and hair comparison, which have contributed to numerous wrongful convictions. Critics, including Chris Fabricant of the Innocence Project, caution against the unchecked adoption of AI in the criminal justice system, warning of the danger of accepting proponents' own claims of efficacy without rigorous, independent scrutiny.

DNA evidence has been instrumental in overturning wrongful convictions, many of which rested on flawed forensic evidence. Notably, 60% of the people exonerated through DNA evidence between 1989 and 2020 were Black. Cases involving AI misuse, however, pose challenges that DNA evidence alone cannot resolve.

To combat these issues, the Innocence Project is pursuing pretrial litigation and policy advocacy aimed at keeping unreliable AI out of policing, with particular attention to its impact on communities of color. Its new Neighborhood Project initiative studies how surveillance technologies affect communities and advocates for putting control of those technologies in the hands of the people most affected.

Despite an executive order from the Biden administration aimed at managing AI risks, comprehensive federal regulation of AI use in policing is still lacking. Advocates encourage people to engage with local government on the adoption and regulation of surveillance technologies, emphasizing the power of public presence and advocacy in shaping policies that affect their lives and liberties.

Original article: https://innocenceproject.org/artificial-intelligence-is-putting-innocent-people-at-risk-of-being-incarcerated/ (note: content has been edited)

