Once Again, Google Promoted Disinformation and Propaganda After a Mass Shooting [Updated]

Photo: AP

As authorities named Devin Patrick Kelley as the shooter in Sunday's horrifying massacre in Sutherland Springs, Texas, which left at least 26 people dead, Google once again served up misinformation and posts from conspiracy theorists at the top of search results for his name.

On Sunday evening, Googling “Devin Patrick Kelley” delivered results from Paul Joseph Watson, the far-right personality behind InfoWars-affiliated conspiracy website Prison Planet. It also pulled up unfounded allegations that Kelley was a member of antifa, an umbrella term for loosely organized groups of anti-fascist activists across the country that have become an exaggerated boogeyman in conservative media.


The posts arrived in the form of Google’s “Popular on Twitter” module, which appears directly below “Top stories” at the top of search results.

Image: Screengrab via Google

As noted by NYC Media Lab executive director Justin Hendrix on Twitter, other dubious information pulled into the Twitter module included that Kelley was a member of a “Pro Bernie Sanders Group,” a “#MUSLIM Convert,” “a radical Alt-left, with potential ties to ANTIFA,” or named “Samir Al-Hajeeda.”


According to the Daily Beast, the real Kelley was a former U.S. Air Force member who Defense Department records show was sentenced to a “bad-conduct discharge, 12 months confinement, and two reductions in rank to basic airman” after he was court-martialed in 2012. Screenshots of his Facebook obtained by the Beast included a photo of a scoped semi-automatic rifle with the caption “She’s a bad bitch,” but no anti-fascist symbols. Per CBS, officials were unclear as to Kelley’s motive as of Sunday evening, but said he “doesn’t appear to be linked to any organized terrorist groups.”

The spread of unverified or deliberately falsified information from gutter-level sources in the wake of crises, aided by venues like Google and Twitter, has become a real problem with real consequences. In the hours after the shooting, Texas Rep. Vicente Gonzalez fell for a recurring far-right social media meme claiming comedian Sam Hyde was responsible and repeated that information during a live CNN broadcast.


The Twitter module is designed to give users a view of the social media conversation surrounding a trending event, but as this incident makes clear, deliberately inflammatory posts that play to readers’ prejudices are often the ones it scoops up.


After another massacre in Las Vegas in October, Google’s top stories module linked to 4chan’s far-right board /pol/, which identified the wrong perpetrator and claimed he was motivated by his opposition to President Donald Trump. Afterwards, Google subsidiary YouTube’s search results promoted unfounded theories the killings were a false flag attack.

Google, Twitter, and Facebook have all regularly shifted the blame to algorithms when this happens, but those companies write the algorithms, making them responsible for what the algorithms churn out. As CNN’s Jonathon Morgan noted, those algorithms are often designed to “show attention-grabbing, influential content to exactly the people most likely to be manipulated by it.”


Of course, platforms are only part of the picture. As with other stories like the murder of Democratic National Committee staffer Seth Rich, uncritical promotion of conspiracy theories by prominent media and political figures plays an additional role in elevating and keeping the misinformation alive long after it originally spread online. But at the end of the day, a handful of tech companies that dominate how Americans access information online don’t seem to have been particularly active in addressing the problem.

Update 11/6/2017: In a statement, Google told Gizmodo:

The search results appearing from Twitter, which surface based on our ranking algorithms, are changing second by second and represent a dynamic conversation that is going on in near real-time. For the queries in question, they are not the first results we show on the page. Instead, they appear after news sources, including our Top Stories carousel which we have been constantly updating. We’ll continue to look at ways to improve how we rank tweets that appear in search.


Additionally, Google’s public liaison for search Danny Sullivan told Gizmodo in a phone interview that the company wants to ensure the Twitter module does not pull in misinformation. He added that Google is not simply relaying tweets suggested by Twitter but ranking them itself, though he called fine-tuning the process a “moving target”—and said the Twitter module returned more trustworthy results over time as more reliable sources reported on search terms like the shooter’s name.

“It’s simply not a case of we’re taking in exactly what Twitter’s putting out,” Sullivan said. “... And it’s important because on the one hand you might say, it would be great if we could show whatever Twitter’s doing and it’s not our fault, but that’s not what’s happening nor is that sort of something we want to reach for. The concern here is, is there is something on our search results page that needs to be improved—we want to improve it.”


Sullivan added that Google staff had been monitoring results in real time and would continue to tweak what appears on those pages in the future.

“You can try to deconstruct how the Twitter results are showing up on the page,” Sullivan said. “... We weren’t happy that those tweets that people were pointing out to us were showing up that way. We’re like, okay, we may need to make some changes here. For whatever reason those are getting there, it wasn’t by intent, it wasn’t by design, and it wasn’t something we’re striving to keep.”



About the author

Tom McKay

"... An upperclassman who had been researching terrorist groups online." - Washington Post