The Biden Administration on Tuesday revealed a first-of-its-kind “AI bill of rights” calling on developers and policymakers to address longstanding issues of algorithmic bias and discrimination. The blueprint document details numerous examples where AI can negatively impact communities and urges developers to embed equitable practices into their design philosophy.
While the blueprint marks the federal government’s most significant effort to address AI harms to date, the document lacks meaningful enforcement mechanisms, experts told Gizmodo. Worse still, some critics fear the policy prescriptions could counterproductively normalize harmful uses of AI, particularly when implemented by the military or law enforcement. The bill of rights, which focuses primarily on bias from AI systems deployed in the private sector, largely sidesteps growing concerns over the federal government’s own use of AI surveillance tools.
The AI bill of rights as proposed Tuesday rests on five key protections: protection from unsafe or ineffective systems, protection from algorithmic discrimination, strengthened data privacy, notice of when and how automated systems are being used, and the ability to opt out of such systems and have access to a human being. The Biden administration hopes these core tenets can ultimately help guide the “design, development, and deployment” of AI systems.
At its core, the AI bill of rights aims to cobble together a semblance of standards and frameworks to help policymakers and AI developers address the negative societal consequences of automated systems. The document points to a rise in workplace and school surveillance, exacerbated bias in housing, and biases in hiring algorithms as key areas where AI tools are doing real-world harm. Those issues disproportionately affect communities of color.
“The practices laid out in the Blueprint for an AI Bill of Rights aren’t just aspirational; they are achievable and urgently necessary to build technologies and a society that works for all of us,” White House Office of Science and Technology Policy Deputy Director for Science and Society Dr. Alondra Nelson said in a statement.
The blueprint’s recommendations, which are non-binding, call on AI developers to take “proactive and continuous measures” to protect individuals from discrimination and to create tools with a philosophy of equity baked in from the start. The blueprint condemns the use of “continuous surveillance and monitoring” and advocates for meaningful human oversight of AI systems used in critical areas like criminal justice and healthcare.
During a press briefing, policymakers involved in drafting the blueprint repeatedly said AI protections represent a modern extension of civil liberties protections. The racialized component of algorithmic bias and discrimination, the policymakers said, ties the AI bill of rights to the Biden Administration’s broader equity agenda.
“The harms that automated systems can cause constitute a new civil rights frontier,” Chiraag Bains, White House Deputy Director for Racial and Economic Justice said during the briefing.
Fabian Rogers, a community advocate from Brooklyn, New York, spoke during the briefing and recounted an experience in which his landlord attempted to implement a facial recognition system for entry into a large apartment complex. Had it been implemented, residents would have had no choice but to provide a face scan to enter their homes, a sacrifice Rogers described as a “clear violation of rights.” He and other advocates were able to stop the rollout, but he said the scenario illustrated the fundamental lack of meaningful protections for everyday people.
“It’s far too easy for landlords to deploy untested, unregulated, and unsafe technology we didn’t want or ask for,” Rogers said.
Policymakers speaking on Tuesday said the bill of rights represents the conclusion of a year-long conversation with privacy advocates, journalists, technologists, and members of communities impacted by automated systems. While the bill of rights is the clearest-eyed statement of AI principles issued by the U.S. government to date, the lengthy document provides little in the way of actual enforcement power. It restates many of the issues long raised by privacy advocates but fails to detail avenues by which the federal government could meaningfully compel AI developers to act responsibly. The document also punts on the topic of new federal data legislation.
Some fear the bill of rights could actually do more harm than good. Critics, like Surveillance Technology Oversight Project (STOP) Executive Director Albert Fox Cahn, expressed concerns the blueprint, while well-intentioned, risks further normalizing biased surveillance. That normalization, he warned, could potentially amplify discrimination.
“I respect the folks at WhiteHouse OSTP [Office of Science and Technology Policy] who worked on this, but I couldn’t disagree more,” Fox Cahn wrote on Twitter. “This is a blueprint for normalizing and accelerating AI surveillance, not combatting it. This ignores the way AI fuels discrimination and oppression in the real world.”
In a statement, STOP said the bill of rights document endorses law enforcement use of AI systems during a time when advocates are calling for bans of particularly harmful AI tools. Over a dozen cities including Portland and Boston have passed ordinances and legislation banning automated systems like facial recognition from public use.
“When police and companies are rolling out new and destructive forms of AI every day, we need to push pause across the board on the most invasive technologies,” Fox Cahn said. “While the White House does take aim at some of the worst offenders, they do far too little to address the everyday threats of AI, particularly in police hands.”