Like everyone else in journalism or the journalism-adjacent world, or anyone who owns a television or internet-connected device, or pretty much anyone who lacked the foresight to drive into the desert yesterday, I found myself powerless to do anything but stare into the semi-redacted collapsing star that is the Mueller Report. Still, as I read through the black, I was struck by one element the report highlights once again: the impressive extent to which cheap automation is responsible for this mess.
The report explains, of course, that “the Russian government interfered in the 2016 election in sweeping and systematic fashion” and did so primarily via two operations: an extensive social media campaign conducted by its Internet Research Agency (IRA), and the hacking of entities supporting or working for the Clinton campaign. There aren’t a ton of new revelations about how the IRA manipulated social media to try to steer the political conversation in Trump’s favor, but the final report is a good opportunity to revisit what actually happened here—and remind us that it absolutely can and will happen again.
“[The Mueller report] should remind people that if an account or tweet pops up into your Twitter feed from an unverified account and you don’t follow it, there MAY be someone creating fake engagement to push that account’s very partisan tweet,” the disinformation researcher and CNN-dubbed Russian troll hunter Josh Russell told me in a direct message.
After the summary and the details of the investigation, the first substantive section of the report focuses on the IRA and how, beginning in 2014, it set up hundreds, then thousands of social media accounts, most of them representing fictitious Americans, on Facebook, Twitter, Tumblr, etc. Over the next couple of years, it started beefing them up, amplifying the dummy accounts’ posts with a botnet tens of thousands of accounts strong.
On Facebook, IRA-created groups attracted hundreds of thousands of followers, and IRA posts reached up to 126 million Americans, the report says. Twitter’s own internal investigation last year found that “during the time period we investigated, the 3,814 identified IRA-linked accounts posted 175,993 Tweets”—and the Mueller report notes that these Tweets were shared, discussed, or retweeted by right-wing celebrities like Sean Hannity, quoted by credulous media outlets, and even commented on by Trump himself.
Russia’s social media propaganda machine was able, essentially, to brute force its messaging into the mainstream conservative media, with a handful of made-up accounts and a low-rent automated system. Because that’s the thing here: the IRA was able to do all this for relative pennies on the propagandic unit. Whether or not its relentless posting and eventual infiltration of the Twitter feeds of your racist uncle and the right’s intellectual giants actually influenced the election, the amount of discord it ultimately caused is undeniable—so-called Russiagate (are we still calling it that?) and the Mueller investigation were all a significant swath of the media and your #Resistance uncle could focus on for the last two years.
According to RBC Magazine, the IRA employed fewer than 100 people in the “American Department” of its so-called “troll farm.” The department’s budget for two years of operations was $2 million. If your goal is to sow nationwide political discord and get that nation’s media to fixate on little else for years, that is an astonishingly small outlay.
It also reportedly spent just $100,000 on Facebook ads, which is kind of a hilariously paltry sum if you’re hoping to swing elections, though experts regard it as likely just an experiment, a small part of the IRA’s posting regimen. And Twitter botnets are even cheaper. Dapper cyberlord Joseph Cox wrote about assembling his own Russian botnets for less than $100 in 2017, and security researchers have determined that they’ve only gotten more sophisticated since the 2016 election.
Twitter identified some 50,000 automated, Russia-linked accounts that were retweeting pro-Trump messages leading up to the election. Cox bought 1,000 accounts for $45. You don’t have to be an experienced coder to set these botnets up, either; you just need a little cash, the ability to Google ‘botnet services’ (or better yet, poke around on the dark web for them), and an openness to getting scammed here and there. It’s really easy to do.
“Overseas it’s a pretty cheap service,” Russell tells me. “They even advertise ON Twitter for it. Lots of Arabic bots I have ran into actually advertised for botting using Twitter.”
But all those automated bots will need something to retweet, which is where the dummy accounts come in. One of the better known IRA troll accounts is @Pamela_Moore13 (many of her tweets are collected here), and she provides a pretty good example of how this operation worked.
In the summer of 2017, as the results of Georgia’s special congressional election were being tallied—Democrat Jon Ossoff would narrowly lose—the very not-real Pamela Moore tweeted, “This is priceless! #Fakenews #GA06 #Handel #Ossoff” with a screengrab of glum-looking CNN pundits presumably discussing the results. The frowning faces captured therein were meant to offer proof that CNN was biased against Trump, and to serve as red meat to his base. The botnet presumably then retweeted her tweet, which circulated widely enough to be noticed by Sean Hannity, who also retweeted the post.
From there, the clip spread into the right-wing media ecosystem: One conservative radio station wrote an entire blog post dedicated to the Russian troll’s screengrabbed tweet. The American Thinker, a conservative website, taking issue with the notion that the mainstream media is unbiased, cited the Russian troll’s tweet as evidence that “blows that nonsense out of the water.” The New Hampshire Register included the tweet in its election coverage, and it showed up embedded in a ton of conservative blogs, and so on and so forth.
It’s certainly absurd to imagine some poor troll watching CNN online in a crappy office building in St. Petersburg, recognizing in a flash the potential gold, screengrabbing the money shot, and hitting publish (which is probably exactly what happened), then watching an automated system lift his or her work to mild meme status. But it’s also extremely unclear what kind of impact this actually had. Sure, it’s another cliché-perpetuating right-wing voice, but it’s also a phony troll amongst a chorus of obnoxious real ones. Even if this happened a thousand times over the course of the election, I’m not sure it does much beyond what standard-grade conservative posters were already doing naturally. It’s not like Trump loses the election if there were 12 fewer semi-viral dunks on CNN.
If anything, it’s a pretty good reminder of just how weird these times are, which tends to get lost in the very serious collusion narratives and spy novel fanfictioning that seems to infect so much of the conversation about Russian interference. After all, Russia had a concerted, state-run propaganda operation that was built to use an automated botnet to amplify a legion of trolls to, uh, get RT’d by Sean Hannity? Other actions documented in the report—the hacking, and how the trolls managed to organize real-world conservative protests from Russia—are more serious, but few are as brazenly 2019 as that.
Then again, it was pretty cheap. And as I mentioned, botnet tools are apparently getting more sophisticated—so we can clearly go ahead and assume that, as lame and ultimately pointless as they may be, the confounding days of automation-boosted social media political propaganda are only beginning.
“I expect it to keep happening, they will always be playing the game of cat and mouse with Twitter’s integrity team,” Russell says. “Just like spammers when they notice their message isn’t getting out they will change tactics, and mask the fact they are using bots. The troll accounts just have to make no mistakes and Twitter will never be able to catch them.”
“I expect it still happens,” Russell tells me, “just not at that scale. Yet.”