If you thought last year’s clusterf*ck of a Senate hearing on social media was a good use of everyone’s time, congrats! The Senate is considering calling Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and the rest of the gang back together for another hearing, this time before the Judiciary Committee.
Per Politico, Senator Chris Coons told the site on Thursday that he “[thinks] there’s reason for us to ask them to come before us again.” While the plans aren’t final and Coons said he was still negotiating with his Republican counterparts, he added his expectation is that “we’ll look at the dynamics of social media and I think we’ll look at the intersection between privacy, civil liberties and civil rights in the digital context.”
Last year’s hearing was before the Commerce Committee. At the time, it was still controlled by Republicans, but Democrats joined their colleagues across the aisle in a unanimous vote to subpoena Zuckerberg, Dorsey, and Alphabet-Google CEO Sundar Pichai. Democrats’ rationale at the time was that the committee chair, GOP Senator Roger Wicker, had promised the hearing would reserve time for Dems’ preferred issues like antitrust and not solely serve as a vehicle for conservatives to scream at the assembled CEOs about liberal bias. Of course, the latter thing is exactly what happened.
With Democrats in control, perhaps this hearing will go a little more smoothly. Anything’s possible, right? ¯\_(ツ)_/¯
It’s been a while since our last edition of Hellfeed, so here are some of the biggest developments in the social media world over the last few weeks.
It’s long been the case—based both on safety concerns like bullying and pedophiles and, more cynically, laws surrounding the collection of user data on children—that Facebook and its subsidiary Instagram have been age-gated to those 13 and older. Of course, this has been completely unenforceable without solutions nobody likes, such as requiring new users to provide photos of their IDs. Children have slipped onto the site in droves, and like their teenage counterparts, sometimes face extreme amounts of bullying and harassment, not to mention the occasional message from pedophiles.
As originally reported by BuzzFeed, Facebook has a jaw-dropping solution to this: A post on an internal company message board by Instagram vice president of product Vishal Shah said the company is working on “a version of Instagram that allows people under the age of 13 to safely use Instagram for the first time.” What could go wrong? Well, YouTube Kids—which unlike an Instagram for children, doesn’t even involve kids uploading videos of themselves—resulted in claims of illegal data collection and the site being flooded with disturbing videos uploaded by bots or horrible trolls. YouTube was eventually forced to overhaul the whole product. Facebook is mulling a product for children based around one that lets adults upload everything from drug cartel glamour posts to pro-eating disorder content, so... yeah.
As Gizmodo colleague Matt Novak pointed out, pretty much everything about this product and how it will function is an unknown at this point. But it does reek of an effort to get ever-younger users signed up for the Facebook data machine, thinly veiled with the excuse that it’s trying to make kids already on Instagram safer. Yeesh.
Facebook also announced this week that it’s taking steps to clean up Groups, the interest-based communities that it tried to juice in recent years before many of said groups inevitably became hives full of QAnon conspiracists, election truthers, anti-vaxxers, far-right propagandists, and the people who organized the Capitol riots. Changes include prohibiting users who break rules from posting or commenting in Groups for a period of time, putting warning labels on groups that have broken rules, and requiring tighter moderation of rules-violating communities. Surely they’ll whack that mole this time!
A few fun updates from our friends at Parler, the far-right Facebook/Twitter clone for people who love issuing death threats and would marry a gun if they could just choose one:
- While the site has managed to crawl back onto the web after losing its web hosting and app store placements over its role in the Jan. 6 riots at the Capitol, it hasn’t convinced any of the tech companies that ditched it—Amazon, Apple, and Google—to do business with it again.
- Parler now claims to have algorithms to detect content calling for violence, but there’s no reason to believe anything will change. Apple rejected the company’s appeal to get back on the App Store, after which Parler reportedly fired its whole iOS team.
- Republican megadonor and Parler investor Rebekah Mercer, a hardliner on the whole giving-racists-and-conspiracy-theorists-a-giant-megaphone-to-spew-hate-online issue, is reportedly personally bankrolling the site with “big checks” at this point and flexing her muscles to preserve that vision. The new CEO, apparently a Mercer pick, is a Tea Party activist.
Definitely not a ticking time bomb waiting to go off for a second time or anything.
Gab, Parler’s neo-Nazi uncle, has been hacked—big time. Whistleblower site DDoSecrets announced it had released some 70 gigabytes of data lifted from the company’s servers—including profile and user data, posts, private messages, and more—to a group of reporters.
A similar situation played out on a far smaller scale with white supremacist forum Iron March, which had its SQL database dumped on the Internet Archive by an unknown hacker in 2019. The result was that numerous white nationalists/supremacists, fascists, and current/former members of violent groups like the terroristic Atomwaffen Division had their identities publicly revealed, which is sort of inconvenient when you’re trying to anonymously spark a race war.
You may remember MyPillow founder Mike Lindell from his previous best hits, such as months of increasingly depraved promotion of voter fraud hoaxes (TL;DR: Donald Trump won, apparently!) and the $1.3 billion lawsuit he is facing from an election tech manufacturer over that. He’s definitely not mad that he got banned from Twitter, which is why he’s announced he is launching his own free speech site, Vocl. Per Business Insider:
In an interview with Insider, [Mike] Lindell said he plans to call the site “Vocl” and he described it as a cross between Twitter and YouTube.
“It’s not like anything you’ve ever seen,” he said to Insider in a Wednesday interview. “It’s all about being able to be vocal again and not to be walking on egg shells.”
Sure thing, Mike.
Facebook, Telegram, PayPal, and other big tech firms are continuing to serve as a vehicle for crowdfunding the Islamic State terror group, often via accounts that are fake or run by sympathizers and middlemen posing as humanitarian interests, according to an in-depth feature on Rest of World:
Vera Mironova, a visiting fellow at Harvard University who has extensively monitored online terrorist fundraising campaigns, notes that posts follow the mores of their host platform. “So secretive campaigns would not be posted on Facebook, or if they were, they would sound more humanitarian and not use words like ‘ISIS.’ But the ones on Telegram go full hurrah,” she explained. This same dynamic plays out on a country-by-country level, Mironova added, and is especially apparent on payment platforms. “Some countries — let’s say Russia or parts of Eastern Europe, Uzbekistan, Tajikistan — they just do not care,” she said. “ISIS-linked campaigns coming from those places absolutely won’t hide anything. … They could use any platform; they even transfer money between bank cards.”
The full thing is worth a read, because this type of thing is now a permanent fixture of the internet and will only become more relevant going forward.
Twitter, which has been introducing new features at a rate of approximately 10 per minute, has announced that it is working on Super Follows, a tool for users to launch paid subscriptions with access to private feeds or posts. While feed-addicted journalism and media types might be salivating at the prospect of being paid to waste time, Twitter has yet to clarify whether it will allow the most obvious application that will actually make money: porn.
The Washington Post has an interesting feature on how apps like TikTok have tried to implement accessibility features, but still lag far behind on implementing or improving features like speech to text transcription—making them harder to use for those with deafness, hearing loss, or visual impairments. A good roundup of the technical challenges behind implementing such features on the one hand, but also how tech firms have sometimes failed to prioritize working on them on the other.
Newsletter platform Substack isn’t really a social media site. But it essentially wouldn’t exist without Facebook and Twitter, where the various journalists, commentators, and web personalities that actually write those newsletters generated and cultivated their followings in the first place. Besides, what we will euphemistically refer to as “Substack discourse” is now approximately three hundred percent of Twitter.
In the past week, Substack has come under fire for its practice of luring high-profile writers to set up shop on the site by writing huge “advance payment” checks. That might be less controversial were it not for the fact that many of its most prominent power users regularly write raving diatribes about supposedly out-of-control leftism, “cancel culture,” “identity politics,” and stuff like that. Glenn Greenwald, one of the site’s biggest success stories (and who says he did not accept an advance check from Substack), uses his account to further vitriolic feuds, such as one with a specific New York Times reporter. Another, Irish TV writer Graham Linehan, aggressively promotes anti-trans rhetoric.
Annalee Newitz, founder of our sister blog io9, penned a Medium post arguing that Substack’s habit of paying writers, sometimes without disclosure, and seemingly allowing others with huge followings to violate its rules essentially makes it less of a platform than an editorial publication—except one with none of the editorial standards followed by reputable ones:
So Substack has an editorial policy, but no accountability. And they have terms of service, but no enforcement. If you listen to [co-founder Hamish McKenzie], they don’t even hire writers! They just give money to people who write things that happen to be on Substack. It’s the usual Silicon Valley sleight-of-hand move, very similar to Uber reps claiming drivers aren’t “core” to their business. I’m sure Substack is paying a writer right now to come up with a catchy way of saying that Substack doesn’t pay writers.
(No, no one means “publication” in the way Josh Hawley does, stop asking.)
Substack wrote in a blog post that misunderstandings about the actual makeup of the advance payments program have resulted in a “distorted perception of the overall makeup of the group, leading to incorrect inferences about Substack’s business strategy.” But because there’s no transparency into who Substack is paying beyond those writers who have chosen to disclose they cashed a check, you’re just gonna have to take their word for it.
An HBO documentary series airing this weekend claims to have discovered the identity of QAnon’s Q, the individual or individuals behind a sprawling pro-Trump conspiracy theory that infected the Republican Party (primarily via Facebook) and provided much of the manpower at the Capitol riots. It’s not exactly a huge surprise that the culprit named here is Ron Watkins, the administrator of imageboard sites 8chan/8kun, where Q posted for years after leaving 4chan.
That doesn’t necessarily solve the mystery of who came up with Q in the first place, as Watkins may have simply taken over the Q account from its original creator, and whatever case Q: Into the Storm believes it has to prove Watkins is Q has yet to be vetted. Either way, don’t think we’re done with this whole mess anytime soon.
Ladies and gentlemen, drum roll please...
- QAnon cheerleader and (unfortunately) Representative Marjorie Taylor Greene was temporarily suspended from Twitter for 12 hours thanks to an “error,” though one could argue one wasn’t actually made.
- YouTube took down a video from bigoted talk show host Steven Crowder, not for mocking Black speech and culture in an explicitly racist way or suggesting Chinese restaurants spread the novel coronavirus, but for violating anti-misinformation policies by conflating the pandemic death toll with that of the common flu. That’s because they’re cowards afraid of backlash from conservatives.
- Facebook banned the military of Myanmar, which perhaps might have been more effective had it done so before the military used the site to incite genocide.
- Also, Facebook briefly banned news links across the entire country of Australia in an inspiring corporate protest against a law forcing it to pay out a share of revenue to news sites.
- Twitter accidentally auto-banned a lot of people, including Gizmodo weekend editor Alyse Stanley, for posting the word “Memphis.”
- TikTok banned the use of the “super straight” hashtag—part of a trend framing transphobia as a gender identity—along with its creator, Kyle Royce.
- World’s worst lawyer Rudy Giuliani was banned from YouTube for two weeks for refusing to stop insisting his ex-boss, who hates him, won the 2020 election.
Honorable mention: Neera Tanden, Joe Biden’s nominee to run the Office of Management and Budget, didn’t get banned from Twitter. But her tweets attacking numerous members of Congress did get her “banned,” in a sense, from further consideration for the job.