Stanford's New Institute to Ensure AI Is 'Representative of Humanity' Mostly Staffed by White Guys

Photo: Justin Sullivan (Getty)

Stanford University, the bastion of higher education known for manufacturing Silicon Valley’s future, launched the Institute for Human-Centered Artificial Intelligence this week with a massive party. Big names and billionaires like Bill Gates and Gavin Newsom filed into campus to back the stated mission that “the creators and designers of AI must be broadly representative of humanity.”

The new AI institute has more than 100 faculty members listed on its website and, on Thursday, cybersecurity executive Chad Loder noticed that not a single member of Stanford’s new AI faculty was black.

What happened next was a weird feat of public relations.

When Gizmodo reached out to Stanford on Thursday morning, the institute’s website was quickly updated to include one previously unlisted faculty member, Juliana Bidadanure, an assistant professor of philosophy. Bidadanure was not listed among the institute’s staff prior to our email to the school on Thursday, according to a version of the page preserved on the Internet Archive’s Wayback Machine, but she did speak this week at the institute’s opening event. In fact, the school appeared to be adding Bidadanure, and later her bio, to the faculty page as I was writing this article.

Based on our count, the institute’s faculty includes 72 white men out of 114 total staffers, or 63 percent—a figure that apparently can change at any moment. Stanford did not respond to our questions.

About a one-hour drive from Stanford, I waited on Wednesday night in a long line of 150 people in Berkeley, California, to get into a sold-out auditorium. We all came to hear the Oxford Internet Institute’s Dr. Safiya Noble, author of the 2018 book Algorithms of Oppression, talk about how Silicon Valley’s algorithms—the code driving everything from search engines to artificial intelligence—can reinforce racism.

“It was very difficult to find people who would be on a dissertation committee in 2010 that would be willing to put their name on the line and say we think technology can discriminate or that algorithms can discriminate,” Noble, who began her research a decade ago, said in Berkeley last night. “What most people were saying at the time was that, ‘It’s just math. Code can’t discriminate.’ That was the dominant discourse. I took a lot of body-blows trying to argue that there can be racist and sexist bias in our technology platform. And yet here we are today.”

Today, we live in an age where predictive policing is real and can disproportionately hit minority communities, where job hiring is handled by AI that can discriminate against women, and where Google and Facebook’s algorithms often decide what information we see and which conspiracy theory YouTube serves up next. But the algorithms making those decisions are closely guarded company secrets with global impact.

In Silicon Valley and the broader Bay Area, the conversation and the speakers have shifted. It’s no longer a question of if technology can discriminate. The questions now include who can be impacted, how we can fix it, and what are we even building anyway?

When a group of mostly white engineers gets together to build these systems, the impact on black communities is particularly stark. Algorithms can reinforce racism in domains like housing and policing. Algorithmic bias mirrors the bias we see in the real world, and artificial intelligence mirrors its developers and the data sets it’s trained on.

Where there used to be a popular mythology that algorithms were just technology’s way of serving up objective knowledge, there’s now a loud and increasingly global argument about just who is building the tech and what it’s doing to the rest of us. The rise of the artificial intelligence industry is raising the stakes even further, as AI systems that will dominate our lives learn and automate decisions through processes that are increasingly opaque and less accountable.

Last month, over 40 civil rights groups sent a letter calling on Congress to address data-driven discrimination. And in December, the Electronic Privacy Information Center (EPIC) sent a statement to the House Judiciary Committee detailing the argument that “algorithmic transparency” should be required of tech firms.

“At the intersection of law and technology, knowledge of the algorithm is a fundamental human right,” Marc Rotenberg, EPIC’s president, said on the issue.

The stated goal of Stanford’s new human-AI institute is admirable. But to assemble a group that is truly “broadly representative of humanity,” the institute has a long way to go.

Update 9:35am, March 22: Stanford HAI told Gizmodo in a statement that it agrees “we are not where we should be” but is “in the process of recruiting 20 new faculty to Stanford HAI and funding seed grants for research—diversity is a top priority for us on both fronts.” Here’s the institute’s full statement:

One of the many reasons why we created Stanford HAI is to spark discussions, conduct research and address critical issues like diversity, inclusion and representation in AI. We agree we’re not where we should be, but we’re extremely proud of the group of faculty we’ve assembled across Stanford. We’re also in the process of recruiting 20 new faculty to Stanford HAI and funding seed grants for research—diversity is a top priority for us on both fronts.

Additionally, Stanford is the birthplace and now a partner of AI4All, a non-profit whose express aim is to increase diversity in AI for generations to come by engaging students from a range of under-represented backgrounds. An important part of the long-term solution—both at Stanford and in the industry writ large—is increasing incoming talent into STEM fields. We’re heartened by our progress to date and remain committed to proactively improving diversity in the field, and within HAI.

About the author

Patrick Howell O'Neill

Reporter in Silicon Valley. Contact me: Email poneill@gizmodo.com, Signal +1-650-488-7247