Senate Democrats introduced new legislation Tuesday attempting, once again, to outright ban facial recognition use by federal agencies and officials, particularly law enforcement. If passed, the Facial Recognition and Biometric Technology Moratorium Act would also cut the feds off from a variety of other sensitive biometric identifiers, like voice and gait recognition, and limit federal grant funding to states with their own “substantially similar” facial recognition or biometric moratoriums.
All of that might sound like music to privacy advocates’ ears, but it’s worth remembering that proposed federal facial recognition bans are a lot like hot new restaurants in New York: they pop up often but quickly disappear into the ether.
“The year is 2023, but we are living through 1984,” Massachusetts Dem Sen. Ed Markey, the bill’s lead author, said in a statement. “The continued proliferation of surveillance tools like facial recognition technologies in our society is deeply disturbing. Biometric data collection poses serious risks of privacy invasion and discrimination, and Americans know they should not have to forgo personal privacy for safety.”
Markey received support from 15 other Democratic lawmakers, including former presidential candidates Bernie Sanders and Elizabeth Warren. The lawmakers say they united to push the bill through partly in response to a rise in reports of the tech being used by law enforcement at both the state and federal levels. An audit conducted by the Government Accountability Office in 2021 determined at least 20 federal agencies used facial recognition, often with little to no meaningful oversight or safeguards. Some of those agencies didn’t even know how many of their employees had access to the tech.
The proposed bans, if passed, could only be lifted by an act of Congress. In addition to restricting the tech’s use, the bill would also prohibit any biometric data collected from those systems from being used in courts or judicial proceedings, a provision likely in response to reports of false arrests connected to inaccurate facial recognition matches. Federal agencies, according to the bill, would also be barred from spending money on biometric surveillance tech. Some of the bill’s authors, like Michigan Rep. Rashida Tlaib, said the ban is needed to prevent harm caused by racially biased algorithms.
“Facial recognition is a racist technology that is being used in our neighborhoods to invade privacy, surveil, and criminalize,” Tlaib said. “In the City of Detroit, facial recognition has already falsely identified our residents, making them suspects in crimes they did not commit. This technology is making us less safe.”
Documents show feds are interested in ‘radically’ expanding facial recognition power
The proposed bans come the same day the American Civil Liberties Union shared a trove of internal government documents with Gizmodo detailing efforts by the FBI and Pentagon to develop “truly unconstrained” facial recognition capable of being deployed in public street cameras, mobile drones, and cops’ body cameras. That project, dubbed the Janus Program, offered a glimpse into a world of unfettered federal facial recognition use. The program, which officially ended in 2020, sought to “radically” expand real-world scenarios where facial recognition can establish identity, from subway cars and street corners to hospitals and schools. Experts told Gizmodo that vision would be a nightmare for privacy and civil rights, especially since the federal government currently lacks comprehensive biometric privacy protections.
“The government is opening Pandora’s box on a terrifying technical capability, which can enable pervasive tracking of anyone or everyone in a way never before possible in a free society,” ACLU Speech, Privacy, and Technology Project Deputy Director Nathan Freed Wessler told Gizmodo. “Lawmakers need to close the door on government abuse of this technology now, before it is too late.”
We’ve been here before
Markey introduced similar legislation, also called the Facial Recognition and Biometric Technology Moratorium Act, back in 2021. Like the most recent effort, the 2021 bill sought to ban government use of biometric tech. It managed to gain support from a wide swath of civil society groups, including the ACLU, but ultimately died in committee.
Last year, another group of Democratic lawmakers led by California Rep. Ted Lieu attempted to pass a separate piece of legislation more narrowly tailored to restricting law enforcement use of the tech. Among other safeguards, the Facial Recognition Act of 2022 would have required police to first obtain a judge-authorized warrant before using facial recognition in an investigation. It’s unclear, however, whether that bill has any chance of advancing in the newly Republican-controlled House.
Federal action on facial recognition may be lacking, but local, city-level action is alive and well. To date, at least 16 municipalities, including San Francisco, Boston, and Oakland, have already passed laws and ordinances governing facial recognition use by law enforcement or in public. At the same time, rising violent crime rates in recent years have reportedly pressured leaders in at least some of those areas to reconsider their facial recognition prohibitions.
“Cities and counties across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so,” Electronic Frontier Foundation Director of Federal Affairs India McKinney said in a statement. “It’s past time for the federal government to catch up to what cities and states already know.”
External variables like secret government surveillance operations or perceived rising crime can play crucial roles in determining how far the public and lawmakers are willing to take proposed bans. Recent polling suggests people are still split on facial recognition generally, making the issue potentially more fluid than others.
Around 46% of U.S. adults surveyed by Pew last year said they thought widespread police use of facial recognition seemed like a “good idea.” A majority (66%), however, said they thought police would use the tech to monitor Black and Hispanic neighborhoods at higher rates, and 53% said the tech’s widespread use would probably lead to more false arrests.