Twitter is so close to understanding the problem. So close.
“A lot of what people consider abusive on the service doesn’t actually violate our policies,” Twitter executive Kayvon Beykpour admitted on Monday.
“Because what Kara finds abusive is different from what you find abusive and so on and so forth,” Beykpour continued.
Beykpour, who serves as product lead at Twitter, made the observation on the first day of Recode’s Code Conference, taking place in Scottsdale, Arizona. And he seems so close to getting it, while also trying to have it both ways.
Many Twitter users are fed up with the abuse on Twitter, though it’s not quite as complicated as Beykpour wants to make it sound. Twitter is a social media company that would rather treat individual neo-Nazis, white supremacists, and adherents of other inherently violent ideologies with kid gloves than actually step up and create an environment where the worst aspects of humanity aren’t welcome.
“One of the things that we’ve really had to step up, from a product and technology standpoint, is proactively de-amplifying content that we don’t think should be amplified,” Beykpour continued.
How does that serve anyone? Twitter essentially hides content it has already made a value judgement about, but refuses to actually get rid of the toxicity because it insists it doesn’t want to make a value judgement. Your guess is as good as ours.
The discussion, which also included Vijaya Gadde, lead counsel at Twitter, got into other uncomfortable topics, like the fact that Twitter is basically a radicalization machine. But Gadde seemed to contradict Beykpour when it came to how stringent Twitter’s policies were.
“I think there is content on Twitter and every platform that contributes to radicalization, no doubt,” Gadde said. “But I think we have a lot of mechanisms and policies in place that we enforce very, very effectively that combat this.”
“We’ve taken over 1.6 million accounts down for terrorism on the platform, over 90 percent of that is detected by our own technologies proactively without any user reports,” Gadde said. “That’s work that we’ve been doing for many, many years.”
And Gadde insists that they’re actually getting rid of the Nazis.
“We have a violent extremist group policy that has banned over 110 violent extremist groups—90 plus percent of those are white supremacist or white nationalist groups, including the American Nazi Party, the Proud Boys, the KKK.”
But that doesn’t quite explain why prominent hate-mongers like David Duke, former Grand Wizard of the KKK, are still allowed on the platform. Why is the KKK organization banned from Twitter while KKK members and leaders are not, as they systematically abuse people on the platform?
Later in the discussion, Gadde said that people with “any affiliation” to white supremacist groups weren’t allowed on Twitter. Again, there are countless examples of that not being the case.
There are a handful of other insights from the Code Conference discussion with Twitter, including the fact that there used to be 1.5 billion brute-force password guesses per day on the platform. Twitter now says that it has “brought that down” to about 600,000 attempts per day. And while that’s all well and good, it does nothing to address the biggest problem with Twitter: the fucking Nazis.
You can watch the full discussion on YouTube.