
Democrats Introduce Bill Requiring Tech Companies To Check Algorithms For Bias

Photo: Zach Gibson (Getty)

Inside Silicon Valley, the consensus is that artificial intelligence will be the most important technology for the next decade. Okay, but what the hell does that mean for the rest of us?


In an attempt to build transparency and accountability into the next generation of world-changing technology, American lawmakers introduced a bill on Wednesday that would require large companies to audit machine learning systems for bias.

Democratic Senators Ron Wyden and Cory Booker introduced the Algorithmic Accountability Act on Wednesday. Democratic Congresswoman Yvette Clarke introduced an equivalent bill in the House of Representatives.


Machine learning and artificial intelligence already power a deceptively wide sweep of crucial processes and tools like facial recognition, self-driving cars, ad targeting, customer service, content moderation, policing, hiring, and even war. It’s a huge list, and sometimes it’s fun to sit back and marvel at how different all those uses are.

Exactly how those decisions are made and whether or not they’re fair, however, is often opaque or unknowable. That problem has led lawmakers to this attempt to pry open the “black box.”

The new bill would task the Federal Trade Commission with crafting regulations requiring companies to conduct “impact assessments” of automated decision systems, evaluating those systems and their training data “for impacts on accuracy, fairness, bias, discrimination, privacy and security.”
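The legislation doesn’t spell out what an “impact assessment” would actually look like in practice, but one common bias check involves comparing a system’s outcomes across demographic groups. The following is a rough, purely illustrative sketch of that idea in Python; the group labels, data, and the four-fifths threshold are assumptions for demonstration, not anything taken from the bill.

```python
# Hypothetical sketch of one check an "impact assessment" might run: comparing
# approval rates across demographic groups (the "four-fifths rule" often used
# in disparate-impact analysis). All data and group names here are made up.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Toy audit: a ratio below roughly 0.8 is a common red flag for adverse impact.
decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
             [("group_b", True)] * 35 + [("group_b", False)] * 65)
ratio = disparate_impact_ratio(decisions, protected="group_b", reference="group_a")
print(f"Disparate impact ratio: {ratio:.2f}")  # ~0.58, which would warrant review
```

A real assessment under the bill would presumably go well beyond a single ratio, covering the training data, privacy, and security questions the text lists.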

The bill would apply to companies making over $50 million per year or holding the data of more than one million people.


“Computers are increasingly involved in so many of the key decisions Americans make with respect to their daily lives—whether somebody can buy a home, get a job or even go to jail,” Wyden said in an interview with The Associated Press.

This is not a hypothetical scenario. Look at a situation that arose at Amazon in 2018. The American tech giant, a world leader in AI, used the technology to help decide which applicants it should interview for jobs. After two years of development, the company found that the system automatically blasted female applicants to the back of the list.


The AI Now Institute’s Kate Crawford discussed the incident in a recent interview with Kara Swisher.

“It tells us two things,” Crawford said. “One, it’s actually much harder to automate these tools than you might imagine, because Amazon’s got some pretty great engineers. It’s not like they don’t know what they’re doing. It also tells you something about the pile of résumés that they had. What were they training it on? What was the training data? Surprise, surprise: a lot of white dudes in basically their entire engineering pool.”
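Crawford’s point, that the bias lives in the training data, is easy to reproduce in miniature. The sketch below is a fabricated illustration, not Amazon’s model: a logistic regression trained on synthetic “historical” hiring decisions that favored candidates without a gendered résumé term ends up assigning that term a negative weight, even though it says nothing about job performance.

```python
# Illustrative only (synthetic data, not Amazon's system): a model trained on
# skewed historical hiring decisions learns to penalize a gendered proxy term.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: relevant experience (the signal we actually want the model to use).
# Feature 1: resume mentions a term associated with women (a gendered proxy).
experience = rng.normal(size=n)
gendered_term = rng.integers(0, 2, size=n)

# Fabricated "historical" labels: past hiring favored candidates without the
# gendered term, regardless of experience, so the training data itself is biased.
hired = (experience + 1.5 * (1 - gendered_term) + rng.normal(size=n)) > 1.0

X = np.column_stack([experience, gendered_term])
model = LogisticRegression().fit(X, hired)

print("weight on experience:    %+.2f" % model.coef_[0][0])
print("weight on gendered term: %+.2f" % model.coef_[0][1])
# The second weight comes out strongly negative: the model reproduces the bias
# in its training data rather than learning anything about job performance.
```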


Within the last month, Facebook’s discriminatory ad practices came into the spotlight, and the company now also faces charges from the Department of Housing and Urban Development for discrimination under the Fair Housing Act.

“By requiring large companies to not turn a blind eye towards unintended impacts of their automated systems, the Algorithmic Accountability Act ensures 21st Century technologies are tools of empowerment, rather than marginalization, while also bolstering the security and privacy of all consumers,” Clarke said.


Reporter in Silicon Valley. Contact me: Email poneill@gizmodo.com, Signal +1-650-488-7247


DISCUSSION

Akinetopsia

That’s actually a good thing. Reposting from a reply to a comment on this article yesterday:

Nerd11135 > Andrew Liszewski
4/09/19 11:07pm

I took a quick look at the product. The more of this stuff that comes out, and the more I look at it, the more I realize the following.

1-The AI takeover/robot uprising is inevitable.

2-It’ll be our own fault when it happens. (Why do we waste our time making computers that could do this shit, really.)

3-The new AI/robot civilization won’t last all that long.

-An Anonymous Nerd

Akinetopsia > Nerd11135
4/10/19 2:53am

1: Yes. It’s already here. But it’s not ‘AI’ per se, more like glorified algorithms. Welcome to the era of borderline-gibberish fake everything and strong data signals from the masses. That equates to lotsa money.

2: Yes. Because who cares, that equates to lotsa money.

3: At least until v2.0, which will equate to even more lotsa money.

Unless someone can point to something positive here, I don’t see any active ‘AI’ development that is done for the general good of humanity. All progress is made for profit, either to deceive, to capitalize on us and our flaws, and/or to better predict us. That can’t be good in the long run, considering that the end goal is potential, actual intelligence.

I mean, as it is right now, it’s pretty damn biased already.


‘AI’ components as they are right now are used mainly to manipulate our decisions and predict our actions. Often it’s just a service provider trying to optimize ROI; other times it’s an organization with sketchy motives gaming popular platforms. The issue is that we can’t know how our information will be used, and that we can’t opt in, or more importantly opt out, of that whole process once we click that OK button. Moreover, nothing tells us that anything different happens with our data once we opt out.

Data is the new economy. Just like money (and bitcoin proponents won’t like me for this), it needs to be regulated, and sorry to say, our current politicians are ill-equipped to deal with this new phenomenon. Moreover, the average person needs to understand that everything they see online has most likely been filtered through one of those algorithms that exist for the sole purpose of generating more revenue in a more efficient way at every page refresh or load.

The Internet has been turned against us.

‘AI’ already has an inherent bias woven into its fabric.