How Algorithms Can Become as Prejudiced as People


We're in an era when major decisions are being made by algorithms poring over large datasets. They regulate everything from the stocks we trade to where cities deploy police officers, and sometimes these algorithms suffer from the exact same prejudices that people do.

In a fascinating essay for Next City, Alexis Stephens writes about how algorithms and big data are not the objective measures of reality we hope they are. The way data is gathered and analyzed can wind up reproducing very human problems of racism and discrimination against the poor. Ultimately, we still desperately need human analysts to examine the ethics of how algorithms are being used, and to judge whether they are, in fact, providing us with unbiased data.


On Next City, Stephens writes about how researchers have found several examples of apps that reinforce existing human prejudices:

[Princeton researcher Solon] Barocas' report cites Boston's Street Bump as an example. When smartphone users drive over Boston potholes, the widely acclaimed app reports the location to the city. While inventive, the differences in smartphone ownership across Boston's populations might cause the app to unintentionally underserve the infrastructural needs of poorer communities.

"Historically disadvantaged communities tend to be simultaneously over-surveilled — if you are a part of the welfare system, you have a lot of information being collected by the state at all times — and severely underrepresented, because you might not be an attractive consumer," says Barocas ...

The questions that data miners ask and the way that the results are categorized are extremely important. Barocas brings up an anecdote about Evolv, a San Francisco startup that develops hiring models. In searching for predictors for employee retention, the company found that employees who live farther from call centers were more likely to quit. But because the results also could have an unintentional link to race, Evolv declined to use that information as a caution against violating equal opportunity laws.

"You can use data mining to do something completely different," Barocas points out. "You could ask 'If I adjust workplace policies or workplace conditions, might I be able to recruit or retain different people?'" Rather than blindly using data that might unintentionally discriminate, employers can intentionally reverse prior hiring practices that have adversely affected job candidates based on their race, ethnicity, gender and income.


The more we understand algorithms, the more obvious it becomes that they are just as fallible as human beings.

Read the full story at Next City