Facebook’s Ad Tools Labeled Thousands of Users as ‘Interested’ in Treason

“Facebook CEO Mark Zuckerberg Testifies At House Hearing”
Photo: Getty

Facebook is apologizing after its algorithms tagged 65,000 Russian users as “interested in treason.” Facebook algorithmically tags users based on their behavior, making it easier for advertisers to target people interested in specific topics. In this case, however, the tag “treason” may have put users under the threat of government intervention. Facebook says it has since removed the interest category.

“Treason was included as a category, given its historical significance. Given it’s an illegal activity, we’ve removed it as an interest category,” a Facebook spokesperson told the Guardian.

Automated profiling is useful when you’re an orange juice vendor looking for people who say they like orange juice, but a landmark 2016 report from ProPublica found that many of the interests Facebook links to users aren’t self-selected. Facebook records your behavior, then makes inferences about who you may be or what you might like, including your race, gender, sexuality, and religion. As an example, Facebook wouldn’t explicitly ask a user in the profile section whether they are an East Coast liberal or a Southern conservative, but it knows whether you live on the East Coast or in the South, whether you completed high school or college, and, of course, it can make sharp inferences based on “Likes” and whether you, say, clicked an ad for “Defend 2A” versus “March For Our Lives.”


In the case of Facebook’s “treason” label, it would be shockingly feasible to unmask some of these users without access to Facebook’s internal data, as the Guardian’s write-up explains. Advertisers need only create ads targeted specifically at people tagged with that interest, then keep track of whoever clicks through.

Let’s be clear: Facebook is an automated profiling machine that synthesizes the enormous amounts of behavior data we create as we click, share, and friend other users. Advertisers can tap into that machine whenever they want, for the right price, and governments can request data. Overall, Facebook hands over data about 75 percent of the time, according to its 2018 Transparency Report.

“Officially, the internet is not censored in Russia,” Mette Skak, an authority on Russia, told the Guardian. “However, these methods, which Facebook has probably unwittingly given the Russian authorities, make it much easier for governmental agencies to systematically track persons marked as potential traitors.”

When I reported on Facebook’s hidden profiles two years ago, I asked readers to send in the interests Facebook had assigned to them. One journalist found that Facebook had marked him as interested in Hezbollah, a terrorist group. What happens when information like that, absurd and erroneous as it is, ends up in the hands of repressive regimes? It’s bad enough in the hands of advertisers.


Facebook is currently fighting off a civil rights suit from housing advocates, who say advertisers used the hidden profiles to exclude Hispanic and disabled people from seeing housing ads. The method is simple: Advertisers could show housing ads to everyone except those interested in “Disability.gov,” “wheelchairs,” “English as a second language,” or “Telemundo.”

Facebook knows. It knows things users provide willingly, and it knows things that users were never asked (and thus, couldn’t refuse). Facebook doesn’t have to ask; in fact, even if you don’t have a Facebook account, it may still know who you are. All of this is useful when targeting orange juice enthusiasts, but it also means oppressive governments across the world have the means to identify or interrogate users based on concealed algorithms. A tool that makes life easier for advertisers can also do the same for authoritarians.


[The Guardian]


