Like most companies in the tech realm, Twitter is no stranger to weirdly hollow apologies. Some of the company’s greatest hits happened when it was grilled on offensive trending topics (whoops!), leaking account info to hackers (oopsie!), and failing to deal with the harassment-filled cesspool a lot of us know the platform to be (lol, sorry).
Now, it might have to apologize to at least some of the advertisers who keep the platform afloat.
First, a little bit of context: A BBC news investigation from last week found anyone could pay to target the site’s most hateful users with paid ad campaigns. In response to the report—which found that an ad targeted with keywords like “Islamaphobia” had the potential to reach tens of thousands of Twitter users—the company said that it was “very sorry this happened,” and “rectified” the issue “as soon as [representatives] were made aware of [it].”
As of Tuesday, Gizmodo found that the issue was not, in fact, rectified. A quick test found it was still ridiculously easy to target people using keywords like “men’s rights” and “incels” and “8chan,” along with a slew of racial slurs.
Using the keywords and key phrases above (plus a few that we’d rather not reprint), Gizmodo was, as of Tuesday evening, able to “boost” a pre-existing tweet and reach nearly 2,000 people over the span of three hours—all for $2.20. We were also able to run multiple ads targeting the exact keywords targeted by the BBC without a hitch.
By Wednesday morning—after we contacted Twitter about the issue and gave the company time to figure out what the hell was going on—not only was it impossible to target banned keywords, as the company claimed it would be; it appeared to be impossible to target keywords at all, at least if you were only spending a few bucks. Keyword targeting was working fine on Tuesday, before we reached out to Twitter.
When Gizmodo originally reached out for clarification on the company’s ad-targeting policy, Twitter offered the following:
Many of the search words listed are indeed prohibited as hateful content and will not actually register as keywords for the ad once it’s published. This is an automated process in the next step before final posting. We understand the user experience is not as intuitive as it should be and we’re working to explore ways to simplify it.
Put another way, Twitter won’t necessarily stop you from throwing slurs into your targeting—it just won’t process them as targetable keywords once the ad eventually runs.
But that answer doesn’t explain how both the BBC and Gizmodo were able to run advertisements targeting only those words, and were still able to rack up hundreds of eyeballs apiece. It also doesn’t explain how Twitter’s able to calculate the “potential audience” you might reach by targeting these keywords. How can the company calculate that using the phrase “neonazi” in your targeting mix will get you in front of 450,000 tweeters if that word can’t be used for targeting?
For what it’s worth, the company also seemingly did rectify the issue in question—by throwing a wrench in its ad system that prevented us and other low-spenders from running ads at all.
Following Twitter’s response, we attempted to run six separate campaigns—either targeting the offensive phrases in question, or targeting totally kosher phrases like “cats” or “floofs”—and the result was the same every time: zero “impressions,” or eyeballs, on the ad, and $0 being spent. We even reached out to an outside advertiser to see if the same thing was happening on their end, and they confirmed: Ads that earlier would have reached hundreds of people were suddenly garnering no traction at all.
When we asked Twitter to clarify what was going on, a spokesperson declined to comment further, saying only that the company was working to fix the busted and confusing design of its ad platform.
On paper, at least, Twitter does provide guardrails for targeting the phrases typed onto timelines or into search bars. On a page describing the platform’s “policies for keyword targeting,” Twitter states that it’s verboten to target certain “sensitive” keywords—like those relating to race, politics, religion, or sexual orientation—with any sort of ads. At the same time, the page states that the onus for abiding by those policies and “applicable laws” falls on the advertiser, not on Twitter itself.
As with all advertising platforms, there are certain obligations to follow when using Twitter for advertising. Review our guidelines and make sure you understand the requirements for your brand, business, promoted content, and targeting criteria. You are responsible for all your promoted content and targeting on Twitter. This includes complying with applicable laws and regulations regarding online advertisements.
It’s an approach that sharply contrasts with what companies like Facebook have done in the past. When that platform was confronted by pesky journalists over letting advertisers target unsavory groups with keywords like “white genocide” or “Jew Haters,” it managed to nix those phrases from the platform entirely.
Unfortunately for Twitter, the company’s business—and its appeal to advertisers—is intrinsically tied to the platform’s ability to reach “the right people” during the “right conversations.” In a playbook shared with its advertiser partners last year, the company explicitly highlighted conversation targeting as one of Twitter’s key strengths:
Conversation targeting reaches audiences based on the content of everyday conversations they take part in across 25+ categories and 10,000+ topics. These people have Tweeted about, engaged with, or dwelled on Tweets about the selected topic(s).
On Twitter at least, it just so happens that a lot of those conversations happen to be about Nazis.