Google Reportedly Told AI Scientists To 'Strike A Positive Tone' In Research

Photo: Leon Neal / Staff (Getty Images)

Nearly three weeks after the abrupt exit of Black artificial intelligence ethicist Timnit Gebru, more details are emerging about the shady new set of policies Google has rolled out for its research team.


After reviewing internal communications and speaking to researchers affected by the rule change, Reuters reported on Wednesday that the tech giant recently added a “sensitive topics” review process for its scientists’ papers, and on at least three occasions explicitly requested that scientists abstain from casting Google’s technology in a negative light.

Under the new procedure, scientists are required to meet with special legal, policy and public relations teams before pursuing AI research related to so-called controversial topics that might include facial analysis and categorizations of race, gender or political affiliation.

In one example reviewed by Reuters, scientists who had studied the recommendation AI used to populate user feeds on platforms like YouTube — a Google-owned property — drafted a paper detailing concerns that the tech could be used to promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.” After review by a senior manager, who instructed the researchers to strike a more positive tone, the final publication instead suggested that the systems can promote “accurate information, fairness, and diversity of content.”

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one internal webpage outlining the policy reportedly states.

In recent weeks — and particularly after the departure of Gebru, a widely renowned researcher who reportedly fell out of favor with higher-ups after she raised the alarm about censorship infiltrating the research process — Google has faced increased scrutiny over potential biases in its internal research division.

Four staff researchers who spoke to Reuters validated Gebru’s claims, saying that they too believe that Google is beginning to interfere with critical studies of technology’s potential to do harm.


“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Margaret Mitchell, a senior scientist at the company, said.

In early December, Gebru claimed that she had been fired by Google after she pushed back against an order not to publish research claiming that AI capable of mimicking speech could put marginalized populations at a disadvantage.


DISCUSSION

This should not surprise anyone who lives in the real world.

I am a writer/editor for a Fortune 500 company in the insurance industry. Much of my work is in thought leadership: reports, short articles, and blog posts that do not make an explicit sales pitch for my company but cover issues and topics that relate to our products and services.

Everything I produce for external consumption is subject to review by others in marketing (I am part of the global marketing team) plus legal and the appropriate leaders in the business. Everything I produce is intended to support our sales objectives. I’ve had pieces shelved (after a lot of work and arguments about them) because they are about sensitive topics (one particular climate change piece comes to mind) or because they could end up hurting us in the sales process. I’ve been told to stay away from certain topics or to get extra approval on them before I start writing. And I’ve had to rewrite pieces in part or in full before receiving approval for publication.

I imagine it works pretty much the same way at every large company, regardless of industry, and regardless of whether the people producing the content sit in marketing departments (like me) or hold lofty titles such as “scientist.”

You want high-quality, independent research about the role and danger of artificial intelligence and other things that technology companies have their hands in? Google is not going to deliver that. Google is not your friend — it is a business.