New York City will be among the first cities in the US to earnestly tackle black box algorithms, the automated decision-making systems that are rarely made public but have ever greater influence over people's lives. A bill passed by the city council this week orders the creation of a local task force to monitor and assess the effect of these algorithms on the public. Unless Mayor Bill de Blasio vetoes the bill, which he is not expected to do, the task force will audit the city's algorithms for disproportionate impacts on different communities and come up with ways to inform the public about the role of automation.
In a city of 8 million people, decisions have to be made. In New York City, algorithms factor into nearly every part of municipal life, from school placement for students to deciding which suspects are released or kept in jail until trial. Even firefighters use them: The FDNY aims to inspect at least 10 percent of the city’s 300,000 buildings each year, so it uses an algorithmic tool to help decide which buildings are inspected and in what order. Such algorithms enable public agencies to process cases more quickly, but removing the human element means it’s difficult to hold anyone accountable, especially when most people don’t realize algorithms are the reason they’ve been denied a loan, detained in jail instead of being released, or haven’t had their building inspected for fire safety.
“A part of this task force’s mission will be to determine how we can evaluate the outputs of automated systems and figure out if and when there is harm done,” Councilman James Vacca, chair of the City Council Committee on Technology and sponsor of the bill, told Gizmodo over email. “I do hope this task force evaluates how algorithmic tools impact school seat distribution, the policing of neighborhoods, the determination of fire safety resources, the allocation of benefits, and that it also brings up other use cases we aren’t already thinking about.”
The bill has been submitted to Mayor de Blasio to sign, though it passes into law automatically unless he moves to veto it. Vacca has full confidence that he will not, meaning the task force will work with public agencies to determine whether their “automated decision making processes” impact different groups unfairly, what should be done if so, and how to inform people that they’ve been impacted.
This issue of how to inform the public is key, and it distinguishes the bill passed on Monday from a previous version. The older version included a clause that would’ve required agencies to publicly disclose each algorithm’s source code (the instructions that define how inputs are turned into outputs), a first for the nation. That clause didn’t make it into this version of the bill, though Vacca says he doesn’t rule out eventual source code disclosures.
“In drafting and amending the legislation, we had to work through a number of complex things, from both a legal and technical standpoint,” he said. “The task force formed will examine the best ways to ensure algorithmic accountability and to mitigate algorithmic bias, and its recommendations will be public. These recommendations could be, and I expect they will be, considered by the administration and the next Council, and as of now, no measures are off the table.”
A growing number of scholars have committed to researching issues of fairness and bias in algorithms, though few laws requiring such transparency are actually on the books.