The anti-vaccination movement might be getting a boost from Twitter bots and Russian trolls, suggests a new study published Thursday in the American Journal of Public Health. It found that certain kinds of bot accounts were more likely to send antivax tweets than were accounts belonging to actual people, while alleged troll accounts were more likely to stoke the flames of controversy by promoting both pro- and anti-vaccination messages.
The researchers behind this study, led by David Broniatowski of George Washington University in DC, were initially hoping to use Twitter for good. They wanted to see if people’s tweets could be an accurate proxy for surveys that try to suss out the public’s overall attitude toward vaccination. These surveys are essential, but relying on social media might allow scientists to more quickly and cheaply track changing public sentiment.
But as they started to sift through vaccine-related tweets, they ran into a problem. The vast majority of people, judging by large, nationally representative surveys, correctly believe that vaccines are safe and essential (the flu shot alone prevents thousands of deaths annually). But Twitter seemed to be swarming with antivaxxers and their false and misleading arguments. Theorizing that at least some of this chatter could be the work of bots and other not-so-genuine actors, the researchers decided to compare how different types of accounts talked about vaccines.
Broniatowski and his team looked at thousands of vaccine-related tweets posted between 2014 and 2017, along with a random 1 percent sample of all tweets from the same period. They then used an algorithm to predict whether each account was likely run by a human or a bot. They also compared these accounts to ones alleged to be linked to Russian troll farms like the Internet Research Agency.
Likely human accounts rarely talked about vaccines, but when they did, they were usually pro-vaccine. Troll accounts and more sophisticated bot accounts (meaning those that don’t simply send out incoherent gibberish), on the other hand, talked about vaccines more often. And content polluters, a class of bots that spew out links to malware and clickbait, more often spouted anti-vaccine tweets.
“That suggests these bots were using anti-vaccine content as part of an attempt to get people to follow their accounts and expose them to malware or spam,” Broniatowski told Gizmodo. “Or alternatively, anti-vaccine activists might be using these bots to get their message out. Though we would need to do more in-depth digging to know for sure.”
Russian trolls, meanwhile, were more insidious. They had an equal mix of positive and negative vaccine-related tweets, often using popular hashtags circulated by antivaxxers. “Pharmacy companies want to develop #vaccines to cash, not to prevent deaths #VaccinateUS,” read one of these tweets collected by the team.
“This was part of a much larger effort to promote political discord, and vaccines simply happened to be one of the topics they were focused on,” Broniatowski said. “They were playing both sides of the field.”
The team’s findings aren’t the first to suggest that bots are spreading propaganda about vaccines and other controversial topics. And as much as we try to ignore spammy-looking messages when we come across them, Broniatowski says they could have a real influence on our perceptions of vaccine safety.
“We know, from prior research, that exposure to the ‘vaccine debate’—even if it’s pro-vaccine messages—can increase vaccine hesitancy, delays in vaccinating, and ultimately lead to a higher likelihood of vaccine-preventable illness. So this is of great concern to us,” he said.
That said, it’s hard to know just how many people were exposed to the antivax tweets his team found. Broniatowski notes some of the accounts they flagged had hundreds of thousands of followers, and as a lowball estimate, he says these tweets collectively had millions of impressions.
There are some important limitations to the team’s findings, mainly that most accounts tweeting about vaccines (93 percent) couldn’t be clearly identified as belonging to either a person or bot. Some of these accounts undoubtedly are real, Broniatowski says, but there’s probably a large proportion that belong to more elusive bots and trolls. Others might even be so-called cyborg accounts, meaning accounts that are managed by both humans and bots, or are legitimate accounts that were later hacked.
Because of how complex these bad-faith accounts are, there isn’t an easy way for companies like Twitter to nip them in the bud.
“I don’t believe the answer is better machine-learning algorithms, because that simply puts us in an arms race. The people who make bots have gotten very good at developing one that can avoid detection from our best algorithms,” Broniatowski said. “On the other hand, what companies might be able to do is communicate the context behind how these bots are operating.”
One example of how that might work, he added, could be public warnings about suspect accounts likely engaging in trolling or spamming. As for public health experts, the authors say, the best way to fend off anti-vaccine sentiment online might not be fighting the antivaxxers in their mentions, but undercutting the legitimacy of their message. If bot networks are being used by antivaxxers, for example, then identifying and exposing these networks could torpedo their public credibility more than simply debunking any of their actual claims. But these and other communication strategies will need more research to figure out if they really work.
In the meantime, there’s a familiar lesson to be gleaned about what you see and hear on the internet.
“The main message here is that people tweet about vaccines for many different reasons, with many different hidden agendas, and just because you see a tweet about vaccines, it may not actually be about vaccines,” Broniatowski said.