It’s been a few weeks since our last edition of Hellfeed, and in the meantime, things on the feed have gotten somewhat closer to baseline dispiriting: While the president’s efforts to overturn the election results have degenerated from coup attempt to dangerous levels of cringe, social media firms are eager to return to pretending everything is totally back to normal.
This is true in a sense, but stuff is still pretty bad out there! Here are some of the highlights of the last few weeks.
Throughout this year, the Trump administration tried to push a batshit executive order that would direct the Federal Communications Commission to reinterpret Section 230 of the Communications Decency Act, the federal law that protects websites from most legal liability for user content or their moderation decisions.
To make a long story short, the order demands that the FCC investigate conspiracy theories that sites like Facebook, Twitter, and YouTube systematically censor proud conservative patriots at the bidding of liberal elites. If the FCC finds those sites insufficiently loyal to the president, it would then act to strip them of key protections allowing fast-track dismissal of frivolous lawsuits, like a white supremacist suing Twitter over a ban. The order is nonsensical bullshit that not even Trump supporters understand.
This week, the White House and Republican-controlled Senate rewarded the author of the Section 230 order, Nathan Simington, by confirming him to a slot on the FCC’s five-member commission. Trump’s plan is likely dead in the water, considering that in about 40 days he will no longer be president, and there’s a snowball’s chance in hell that Simington, the frothing ideologue now seated on the commission, will be able to resurrect it on his own.
However, the order did prove Simington to be the kind of useful political hack the Senate needs to deadlock the FCC commission at 2-2 in 2021 and prevent it from rolling back massively unpopular Trump-era policies. As TechDirt noted, if the FCC does manage to “reinterpret” Section 230 before Inauguration Day, Simington could still hamstring social media companies trying to have stunt lawsuits thrown out of court until Democrats are able to retake a 3-2 majority.
Joe Biden and his campaign’s long-running feud with Facebook is far from over, with a report in the New York Times this week detailing some lingering sore spots. Among them, sources told the paper:
- Facebook promised unequivocal, zero-tolerance, and decisive action against the use of misinformation to undermine mail-in voting, even when Trump did it. It then rolled over and did nothing but attach useless warning labels to Trump’s posts.
- Facebook failed to fact-check conspiracy theories asserting various Democratic plots to register fake voters before the election, at least until after they went mega-viral across the entire site.
- Facebook didn’t answer questions about its fact-checking processes or provide in-depth guidance as to how political content spreads on the site.
- Facebook executives met with “federal elections officials, and both Democratic and Republican campaigns” to discuss proposed changes that would require more transparency around political ads and limit targeting options. One employee told the paper, “If anyone in those constituencies said, ‘We don’t like this idea,’ then Facebook would abandon it.”
In 2019, a white supremacist terrorist posted a manifesto on far-right internet hellhole 8chan before live-streaming mass shootings at two separate mosques in Christchurch, New Zealand, on Facebook, murdering 51 people and wounding at least 40 others. A new report issued by the New Zealand government has indicated that at least part of the blame also falls on YouTube and its algorithmically fueled rabbit holes, which the authors say helped radicalize the gunman.
Among the findings of the report was the claim that although the perpetrator consumed and spread extreme far-right content on other sites, YouTube seemed to have a formative role in the Christchurch attack. Per the report:
The individual claimed that he was not a frequent commenter on extreme right-wing sites and that YouTube was, for him, a far more significant source of information and inspiration. Although he did frequent extreme right-wing discussion boards such as those on 4chan and 8chan, the evidence we have seen is indicative of more substantial use of YouTube and is therefore consistent with what he told us.
Research by Stanford Ph.D. candidate Becca Lewis has highlighted the breadth of YouTube’s “alternative influence” network: a loosely linked web of right-wing channels, ranging from conservative pundits to raging neo-Nazis, that overlap in viewership and often collaborate. The Christchurch report specifically mentions that the shooter donated to alleged cult leader Stefan Molyneux’s Freedomain Radio and Austrian white supremacist Martin Sellner, both of whose YouTube channels have subsequently been deleted.
The New Zealand government also called out Facebook, where the shooter was a member of Facebook groups belonging to Australian far-right organizations United Patriots Front, True Blue Crew, and the Lads Society, and where he discussed conspiracy theories that Muslim immigrants would take over Australia.
Every goddamn app now has a clone of Instagram Stories, the disappearing-post feature itself ripped off from Snapchat. LinkedIn rolled out LinkedIn Stories in September, Twitter rolled out “Fleets” last month, and now Spotify has its own baffling version. To put it another way, the feed is bleeding over into what little is left of your non-feed-based online existence. It is inescapable.
Parler, the social media website for conservatives seeking a safe space from fictional liberal tech boogeymen, has very lax rules regarding content that doesn’t involve swearing. Per the Washington Post, in addition to being flooded with conspiracy theorists and white supremacists, the site is now overrun with hardcore pornography targeting its slack-jawed, almost certainly male supermajority userbase. As of Dec. 2, hashtags like #sexytrumpgirls, #keepamericasexy, and #milfsfortrump2020 were flooding the site.
It appears the issue is rooted in a recent decision in which Parler further slashed its laissez-faire rulebook to better accomplish its goal of being a vague free speech thing by—among other things—allowing porn. Spam is still banned, but Parler relies on a volunteer-based moderation team that the Post reported is overwhelmed by sheer volume. Parler COO Jeffrey Wernick essentially told the paper he couldn’t possibly be aware of the problem because he doesn’t look at porn:
After this story was published online, Parler Chief Operating Officer Jeffrey Wernick, who had not responded to repeated pre-publication requests seeking comment on the proliferation of pornography on the site, said he had little knowledge regarding the extent or nature of the nudity or sexual images that appeared on his site but would investigate the issue.
“I don’t look for that content, so why should I know it exists?” Wernick said, but he added that some types of behavior would present a problem for Parler. “We don’t want to be spammed with pornographic content.”
It’s doing great! Actually no, that’s wrong. It is doing really badly because everyone on Parler hates the Environmental Protection Agency.
2020 has been rough for everyone, but it’s been a blockbuster year for OnlyFans, a subscription website that primarily specializes in parasocial pornography. Founder and CEO Tim Stokely told Bloomberg in a recent interview that the site has made $2 billion in sales over the last year, with one million creators who make around $200 million in aggregate each month. That’s far larger than Patreon. Bloomberg reported that OnlyFans’ 20 percent cut of creator earnings means it stands to make $400 million in net sales throughout 2020, and the company has plans to expand its non-porn offerings with a streaming service called OFTV.
Twitter ruled that the Arizona Republican Party was totally within bounds and not violating its policies on violence or extremism when it asked followers to “Live for nothing, or die for something”: namely, the president’s bumbling efforts to secure a second term with some kind of legislative or judicial coup.
On the other hand, Twitter also began applying “manipulated media” warnings to tweets from prominent politicians in India, apparently the first time it has done so in the country.
Last week, the Supreme Court expressed skepticism of the Computer Fraud and Abuse Act, a notoriously vague 1986 law that criminalizes unauthorized access to computer systems or obtaining “information from any protected computer.” When the CFAA was passed, computing was in its relative infancy, and the law could be primarily understood to refer to break-ins of government, military, and corporate systems. Now that computers are ubiquitous, the nebulousness of the law is a real problem—it could be read to cover anything from actual hacking to unauthorized access to a database or simply using a work computer in a manner prohibited by an employer. The issue of the CFAA’s scope came up at SCOTUS in the context of a police officer charged under the act for allegedly accepting a bribe to check a license plate database to determine whether a stripper was an undercover cop.
One of the justices’ concerns was that the CFAA was so broad it could make lying on a dating website or checking one’s social media feeds at work a federal crime, per Politico:
Alito asked [Stanford University professor Jeffrey Fisher, the lawyer of the defendant in the case] to explain how the CFAA would criminalize one of his example scenarios: lying about one’s weight on a dating website. Fisher responded that, by receiving interested messages from potential romantic partners based on a falsified weight, the user would be “obtaining” information from a computer in violation of the website’s terms of service — and thus also the CFAA.
Similarly, Fisher told Justice Elena Kagan, checking Instagram at work constituted obtaining words and pictures from one’s Instagram feed. And if a company prohibited social media browsing on work computers, obtaining that information would violate the CFAA by contravening the employer’s policy.
As Trump becomes increasingly desperate to overturn the results of the 2020 presidential elections in his favor—or at least convince his supporters that Joe Biden cheated his way into office—the president has been regularly retweeting QAnon accounts.
Here’s who and what got shitcanned over the past few weeks:
- A six-year-old who is really good at Call of Duty: Warzone was banned from playing Call of Duty: Warzone, presumably because he is six and the game is rated M for mature (17, which is 11 more than six).
- YouTube wants a pat on the back for banning, as of Dec. 9, videos claiming Donald Trump won the election. Astute readers will note Dec. 9 is 36 days after the Nov. 3 elections, which Trump lost, and that during those 36 days countless videos claiming Trump won the election went viral.
- Twitch has specifically banned the Confederate flag, blackface, and swastikas, which the service said were already covered by its policy on hateful images but merited explicit mention because of widespread harassment.
- A major QAnon account on Twitter, whose videos were spread by Trump and his toxic offspring, has made the world a slightly better place by disappearing.
- Proud Boys-affiliated band Trapt was kicked off Twitter after, uh, offering some thoughts on how ok it is to do... pedophilia.
Honorable mention: Horrible transphobic feminists whose community was banned from Reddit earlier this year are regrouping on their own platform called “Ovarit,” coming one step closer to just being Mumsnet without an annoying British accent. According to the Atlantic, most of the discussions on Ovarit center around how they were unfairly “canceled” by other feminists and censored by Big Tech. Sounds like they are not in fact over it.