
This piece was originally posted by the Alliance for Citizen Engagement (Center). It was written by Abigail Gaetz.


Introduction

A facial image is like a fingerprint: a unique piece of human data that can identify an individual or connect them to a crime. Law enforcement uses facial recognition to identify suspects, monitor large crowds, and ensure public safety.

Facial recognition software is used by local, state, and federal law enforcement, but its adoption is uneven. Some cities, like San Francisco and Boston, have banned its use by law enforcement, while others have embraced it. Despite this, the technology has been instrumental in solving cold cases, tracking suspects, and finding missing persons, and is considered a game changer by some in law enforcement.

Facial recognition software can be integrated with existing police databases, including mugshots and driver’s license records. Private companies like Clearview AI and Amazon’s Rekognition also provide law enforcement with databases containing information gathered from the internet. 

Here’s how police use facial recognition technology:

  1. Law enforcement collects a snapshot of a suspect drawn from traditional investigative methods, such as surveillance footage or independent intelligence. 
  2. They then input the image into the facial recognition software database to search for potential matches. 
  3. The system populates a series of similar facial images ranked by the software’s algorithm, along with personal information such as name, address, phone number, and social media presence. 
  4. Law enforcement analyzes the results and gathers information about a potential suspect, which is later confirmed through additional police work. 
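The ranking step above (step 3) can be sketched in a few lines of code. This is a hypothetical illustration only, not any vendor's actual method: it assumes each face has already been reduced to a numeric feature vector (an "embedding"), and ranks database records by cosine similarity to the probe image. All names and numbers are made up.

```python
# Hypothetical sketch of the match-ranking step (step 3 above).
# Real systems use learned face embeddings from deep neural networks;
# here we assume faces are already vectors and rank by cosine similarity.
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_matches(probe, database):
    """Return database records ranked from most to least similar."""
    scored = [(name, cosine_similarity(probe, vec))
              for name, vec in database.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

probe = [0.9, 0.1, 0.3]  # embedding of the suspect snapshot (made up)
database = {
    "record_A": [0.88, 0.12, 0.31],  # near-duplicate of the probe
    "record_B": [0.10, 0.90, 0.20],
    "record_C": [0.50, 0.50, 0.50],
}
print(rank_matches(probe, database))  # record_A ranks first
```

In a real deployment the top-ranked records would then be linked to the personal information described in step 3, which is why accuracy of the ranking matters so much.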

The exact number of law enforcement agencies using facial recognition software is difficult to know.

Much like other investigative techniques, law enforcement tends to keep the practice of facial recognition identification out of the public eye to protect ongoing investigations. Furthermore, facial recognition technology is not used as evidence in court proceedings, meaning that it is hard to track the frequency of use of this technology in criminal prosecutions. 

However, studies conducted on facial recognition and law enforcement give a broad understanding of the scope and scale of this debate. A 2017 study conducted by Georgetown Law’s Center on Privacy & Technology estimates that 3,947 of the roughly 15,388 state and local law enforcement agencies operating in 2013, or about one in four, “can run face recognition searches of their own databases, run those searches on another agency’s face recognition system, or have the option to access such a system.” Furthermore, a 2021 study from the Government Accountability Office (GAO) found that 42 federal agencies used facial recognition technology that they either owned or were provided by another agency or company. 

Supporters of this technology celebrate the use of facial recognition to solve crimes and find suspects faster than ever before. 

“The obvious effect of [this] technology is that ‘wow’ factor,” says Terrance Liu, vice president of research at Clearview AI, a leading provider of facial recognition software to law enforcement. “You put any photo in there, as long as it’s not a very low-quality photo, and it will find matches ranked from most likely to ones that are similar in a short second.” 

Before facial recognition technology, identifying suspects caught on surveillance cameras was difficult, especially without substantial leads. Law enforcement argues that this technology can help investigators develop and pursue leads at faster rates. 

How accurate are the results produced by this software? 

Due to advances in processing power and data availability in recent years, facial recognition technology is more accurate than it was ten years ago, according to a study conducted by the National Institute of Standards and Technology (NIST). 

However, research conducted by Joy Buolamwini at the MIT Media Lab demonstrates that while some facial recognition software boasts more than 90% accuracy, this number can be misleading. When broken down into demographic categories, the technology was 11.8%–19.2% less accurate when matching faces of color. Critics argue that this reliability gap endangers people of color, making them more likely to be misidentified by the technology. After the initial release of the study, the researchers noted that IBM and Microsoft were able to correct the accuracy differentials across specific demographics, indicating that with more care and attention when crafting these technologies, adverse effects like these can be prevented. 

Image clarity plays a large role in determining the accuracy of a match. A 2024 study from NIST found that matching errors are “in large part attributable to long-run aging, facial injury, and poor image quality.” Furthermore, when the technology was tested in a real-world venue, such as a sports stadium, NIST found that accuracy ranged between 36% and 87%, depending on camera placement. However, as low-cost, high-resolution cameras become more widely available, researchers suggest the technology will improve. 

Law enforcement emphasizes that because facial recognition cannot be used as probable cause, investigators must use traditional investigative measures before making an arrest, safeguarding against misuse of the technology. However, a study conducted by Georgetown Law’s Center on Privacy & Technology says that despite this assurance, there is evidence that facial recognition technology has been used as the primary basis for arrest. Publicized misidentification cases reinforce both concerns: Porcha Woodruff, a Black woman who was eight months pregnant, was wrongfully arrested for carjacking and armed robbery after a false match from the police’s facial recognition software, reaffirming that police can treat a facial recognition match as grounds for arrest and that the technology is less reliable for faces of color. 

Opponents argue that law enforcement is not doing enough to ensure that their systems are accurate and reliable. This is in part due to how law enforcement uses facial recognition software, and for what purpose. Facial recognition software allows users to adjust the confidence score, or reliability threshold, of the image returns. Matches above a high confidence threshold are more accurate, but fewer are returned. In some cases, law enforcement might use a lower confidence score to generate as many leads as possible. 

For example, the ACLU of Northern California captured media attention with an investigative study that found that Amazon’s Rekognition software falsely matched 28 members of Congress with mugshots, 40% of whom were congresspeople of color. The study used Rekognition’s default confidence score of 80%, which is considered relatively low, to generate the matches. However, in response to the study, Amazon advised users that a 95% confidence threshold is recommended when matching human faces. 
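The tradeoff at the center of the ACLU study can be shown with a minimal sketch. The candidate names and scores below are invented for illustration; real systems report a vendor-specific confidence value for each candidate match.

```python
# Hedged sketch of the confidence-threshold tradeoff described above.
# Names and scores are made up; they do not come from any real system.
candidates = [
    ("match_1", 0.97),
    ("match_2", 0.91),
    ("match_3", 0.84),
    ("match_4", 0.76),
]

def filter_by_confidence(matches, threshold):
    """Keep only candidates at or above the confidence threshold."""
    return [name for name, score in matches if score >= threshold]

# A high threshold (like Amazon's recommended 95%) yields fewer,
# more reliable leads...
print(filter_by_confidence(candidates, 0.95))  # ['match_1']
# ...while a lower threshold (like the 80% default) generates more
# leads at a higher risk of false matches.
print(filter_by_confidence(candidates, 0.80))  # ['match_1', 'match_2', 'match_3']
```

The same underlying candidate list thus produces one lead or three depending entirely on where the operator sets the threshold, which is why critics focus on how agencies configure these systems, not just on the algorithms themselves.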

Both proponents and critics advocate for comprehensive training to help law enforcement navigate the software and protect against misuse. A GAO study of seven federal agencies using facial recognition technology found that all seven had used the software without prior training. After the study, the DHS and DOJ concurred with the GAO’s recommendations and took steps to rectify the issue. 

Civil liberties advocates and policy groups argue that without regulation and transparency in this area, it is hard to ensure that the systems are used correctly and in good faith. 

Privacy Concerns and Freedom of Assembly 

Social media companies have raised concerns about how the information hosted on their platforms is used in facial recognition software. Facial recognition software is powered by algorithms that scrape the internet for facial images and personal information, which are then used to generate returns for the software. Facebook, YouTube, and Twitter are among the largest companies to speak out against the practice, sending cease-and-desist letters. However, legal precedent established in a 2019 ruling, hiQ Labs v. LinkedIn, allows third parties to harvest publicly available information on the internet because it is already public. 

Facial recognition technology can be used to monitor large crowds and events, which is commonplace in airports, sports venues, and casinos. Furthermore, law enforcement is known to have used facial recognition software to find individuals present at the January 6th insurrection and the nationwide Black Lives Matter protests. Law enforcement argues that surveilling large crowds with this technology can help protect the public from unlawful actors and help catch suspects quickly. However, privacy and civil liberties activists worry about the impact of surveillance on the freedom of assembly. 

Regulatory Landscape and Conclusions 

In 2023, Senator Ed Markey (D) reintroduced a bill to place a moratorium on the use of facial recognition technology by local, state, and federal entities, including law enforcement, which has yet to progress through Congress. However, states like Maine and California have enacted laws that address some of the challenges presented by the technology, along with a patchwork of other local laws across the country. 

Critics continue to argue that a lack of transparency and accountability among law enforcement drives uncertainty in this area. The ACLU is currently suing the FBI, DEA, ICE, and Customs and Border Protection to compel them to turn over all records regarding their use of facial recognition technology. Proponents, however, maintain that the benefits outweigh the concerns and that the technology is a useful tool for law enforcement.