The platforms witness a crime and show the impossibility of content moderation

This is the challenge that social media sites are up against: people motivated to spread harmful content. It doesn’t take a lot of people; a handful with enough determination can create these problems.


19 May 2022 – As I have noted in previous posts, it is difficult to write about America because although I am no longer a U.S. citizen (I moved to Europe 20 years ago and dropped my citizenship a few years back), I find myself irrevocably tangled in America’s hopes, arrogance, and despair. I am still “American” at heart, I suppose.

The gunman who murdered 10 people last weekend and wounded 3 others at a supermarket in Buffalo, New York – his weapon painted with a white-supremacist slogan – had live-streamed his attack. Prior to the shooting, he also posted a manifesto that relies heavily on the so-called great replacement theory, a racist conspiracy theory that has become increasingly mainstream in a number of Western countries, from France to Hungary to the United States. There are people far better versed in that theory than I am, and in the dangers of white-supremacist violence more broadly and how it all ties into the white power movement and paramilitary America.

In that respect, all I have grasped in the past week is this: we could get into the textual background of these terms, but it is basically a new language for the same set of ideas that have long worked to connect many different kinds of social threats into one broadly motivating, violent, and frightening worldview for people in the white-power movement and on the militant right. The idea is simply that many different kinds of social change are connected to a plot by a cabal of élites to eradicate the white race, which people in this movement believe is their nation. It connects things such as abortion, immigration, gay rights, feminism, and residential integration – all of them seen as part of a series of threats to the white birth rate. Beyond that – well, consult the scholars, if that is the right word.

But as a tech guy who knows how the tubes and wires and networks and infrastructure of the web and platforms work, I am more intrigued by the usual questions about what role platforms may have played in radicalizing the shooter – or may continue to play in spreading his extremist ideology. And as awful as all that is, due to a recent court ruling, this latest mass shooting raises an unsettling new possibility: that platforms may in fact be required to carry the shooter’s racist posts in the future, for fear of violating a new Texas law that bans “viewpoint-based” discrimination. Daphne Keller calls that Texas law a “litigation DDOS attack” and warns that no one really knows where in hell this is all going.

But a few thoughts anyway.

First, the tech platforms are very much implicated in the story, at least insofar as they apparently (and unwittingly) provided the infrastructure for the shooter to organize and promote his crimes. In that, it was similar to the 2019 shootings in Christchurch, New Zealand, where a white supremacist broadcast himself during a rampage that killed 51 people and injured 40 more. The point of the live broadcast had been to radicalize other terrorists, and it worked: the Buffalo shooter said in a long screed posted online that he had been inspired by it. (Here’s a concise, sober summary of the Buffalo shooter’s screed from a Bellingcat researcher.)

Most consequentially, the shooter broadcast his crimes on the Amazon-owned, gaming-centric platform Twitch. But in an impressive feat of content moderation, Twitch said it had suspended the stream just two minutes after it started – basically about as fast as could be hoped for in a situation like this. Only 22 people were tuned into the stream while it was live.

But as numerous media and platform analysts noted, 22 people were enough to do damage for the indefinite future. Several of those Twitch viewers had been primed that the shooter’s stream was coming, so all it took was for one of them to save a copy and redistribute it online – which three of them did. A jumble of video-hosting sites, extremist message boards, and some of Silicon Valley’s biggest names did the rest, ensuring that millions of people would view the video.

NOTE: one copy made its way onto the little-known video site Streamable, where, thanks to links posted on much larger sites, it was viewed more than 3 million times before it was removed. One link to that copy on Facebook received more than 500 comments and 46,000 shares; Facebook did not remove it for more than 10 hours.

Streamable is used extensively by Redditors, who post copyrighted material to the site with ease; there is virtually no content moderation.

On one hand, none of those engagement numbers look all that big. But the distribution metrics were huge. Streamable was clearly the top host for the video – links to it were everywhere – and it got less than half of the views that “Plandemic” did. On Facebook, it made the list of most-posted links of the day, but we don’t know how many people actually viewed the link there or were “linked onward”. At the very least, it seems, 2-3 million people clicked on it across the ecosystem. In comparison, Kendrick Lamar’s latest video did 8.1 million views on YouTube in two days.

At the same time, the shooter’s sole aim was to inspire more violence. And for that, you don’t need to reach a Super Bowl-sized audience – you just need to reach one more disturbed racist. The internet is built in such a way that nearly anything posted online can remain there forever, if people are committed enough; sadly, this video will almost certainly fall into that category.

NOTE: media analyst Jack Trimbole, who covers TikTok, told me he found TikTok videos sharing the accounts and search terms that allowed Twitter users to find and view the shooter’s full video: “It was all over Twitter and all over TikTok within minutes”.

For that reason, even in a situation where platforms say they are performing toward the high end of their capabilities, it’s worth asking how these murders get live-streamed at all. Again, it comes back to the lack of – or impossibility of – content moderation. The Buffalo shooter’s screed said he had chosen Twitch over Facebook partly because of his perception that Facebook has stricter rules. And the New York Times notes that YouTube requires people to verify their accounts and amass at least 50 followers before going live, which helps discourage terrorists from broadcasting via burner accounts.

NOTE: many problems related to the spread of harmful posts stem from a lack of friction in sharing. Policy teams are now trying to figure out what steps they could take to make their platforms less attractive to would-be shooters.

Plus, the Buffalo shooter had been openly planning the attack on a private Discord server for months. Platform analysts noted that the shooter was sharing logs with several public Discord groups as part of an effort to draw attention to his Twitch stream, where he broadcast the attack live. In December alone, he referenced his plans to commit the attack at least 17 times, according to the logs. Between November and May 14, the shooter referenced the Christchurch terrorist’s name 31 times, the word “gun” 200 times, the word “shoot” 119 times, and the word “attack” over 200 times. He also abundantly used racist and anti-Semitic language, including extremist phrases identified by multiple anti-hate research centers.

You see how bizarre this is. During a time when several countries are pushing to end encryption in the name of public safety, murderers are openly planning their crimes in unencrypted chats and promoting them on public servers, using language that should be easily detectable via plain-text searches. Of all the platforms implicated in this weekend’s terrible events, Discord has the most to answer for.
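To make the point concrete, here is a minimal sketch of the kind of plain-text search I mean. The file name, the export format, and the watch-term list are all hypothetical, and a real detection system would use curated term lists and a lot of context to avoid false positives. But counting whole-word hits across months of unencrypted chat logs really is this simple:

```python
import re
from collections import Counter
from typing import Iterable

# Hypothetical watch terms; real systems would use curated lists from
# anti-hate research centers plus surrounding context, not bare keywords.
WATCH_TERMS = ["gun", "shoot", "attack"]

def count_watch_terms(lines: Iterable[str], terms=WATCH_TERMS) -> Counter:
    """Count whole-word occurrences of each watch term in plain-text chat logs."""
    patterns = {t: re.compile(rf"\b{re.escape(t)}\w*\b", re.IGNORECASE) for t in terms}
    counts = Counter()
    for line in lines:
        for term, pattern in patterns.items():
            counts[term] += len(pattern.findall(line))
    return counts

if __name__ == "__main__":
    # "discord_export.txt" is a placeholder for an exported, unencrypted chat log.
    with open("discord_export.txt", encoding="utf-8") as f:
        for term, n in count_watch_terms(f).most_common():
            print(f"{term}: {n}")
```

None of this is sophisticated, which is the point: the planning was sitting in plain, searchable text.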

There is, of course, a bigger story. Had these attacks taken place a month ago, the shape of what would happen next online would be relatively clear: tech companies would remove instances of the video and the terrorist’s rant whenever they appeared; motivated adversaries would alter them slightly and attempt to re-post them; and the cat-and-mouse cycle would continue until the next act of terrorism started it anew.

But then came Texas law HB20, which allows the state’s attorney general and ordinary users to sue platforms with more than 50 million users if they believe the platform removed a post because of the viewpoint it expresses. Not their own posts, mind you: whenever any post is removed for violating a platform’s community standards, with very few exceptions, the platform can now be sued by anybody.

On Wednesday, a Fifth Circuit panel voted 2-1 to let the law take effect pending further appeal. In the aftermath of that decision, the big platforms declined to comment. That’s likely because (1) they are still hoping the law gets struck down by the Supreme Court, (2) they are still sorting through its implications with their lawyers, and (3) they don’t yet know how they would respond if it stands.

In the run-up to the decision, critics observed that the law could force platforms to host, among other things, a lot of noxious extremist content – Nazi manifestos, racist screeds, that sort of thing. Well, here’s our test case. While the Texas bill’s sponsor suggested Twitch could not be sued for removing the shooter’s channel, it’s not clear why Texans would not be able to sue over platforms removing his manifesto.

All of this sets aside the fact that there is a strong market demand for content moderation. Platforms that allow extremism to proliferate tend to become ghost towns; most people don’t want to see racist idiocy every time they log on. Platforms remove hate speech and various other forms of harm primarily because allowing it to remain poses an existential threat to their businesses.

But lots of critics, including folks who used to work at these platforms, have noted that while all of this may be true, their old bosses have gone too far. And as with everything in content moderation, all of us would draw the lines differently according to our own beliefs. But Texas now makes drawing almost any lines at all legally actionable, and it’s extremely difficult to imagine how any big site could function in the state in a world where HB20 stands.

What’s on deck? Perhaps the Supreme Court will intervene, and sanity will prevail. Or perhaps Clarence Thomas, who has been openly lusting for a chance to overturn Section 230, will seize the moment – and platforms will be legally required to host the worst that humanity has to offer.

Reality? There was a time when legal cases like these proceeded toward fairly predictable outcomes, but it feels like that time has passed. The future of content moderation feels like it’s about to come down to a coin flip, and I’m not sure anyone is fully prepared for what comes next.

There are a lot of problems here and a lot of nuances, and I have but scratched the surface. A quick recap:

1. Streamable isn’t among the 18 online services that are members of the Global Internet Forum to Counter Terrorism, so it didn’t have access to the forum’s full set of tools major apps use to detect and remove videos of mass shootings. Amazon, which owns Twitch, is a member of the forum. Failure all around.

2. The ominous lesson is that as long as there are small services unable or unwilling to thoroughly scrub out mass-shooting videos, the industry will have a difficult time stamping them out. And the problem replicates: there will always be some sketchier company happy to make a buck off the dregs of the internet.

3. This week Facebook and Twitter said that their internal teams were working with the industry anti-extremism forum to cut off dissemination of the Buffalo video, in line with their terms of service. As I have noted in previous posts, at the center of the companies’ strategy is a shared database of information about extremist videos. As mass shootings and other violent attacks occur, the 18 services participating in the industry forum add more information to the database, allowing all of the participating companies to know the digital fingerprints of violent videos, detect matches and speed up the takedown process. Companies continue to share more digital fingerprints, known as “hashes,” as they discover altered versions of banned videos.

4. But the problem, as noted by scores of platform analysts, is that three years after the Christchurch shooting they continue to get hashes upon hashes – Christchurch data alone makes up 5 percent of the entire shared database, which also includes data about videos from the Islamic State terrorist group, better known as ISIS, and other extremist groups. Members keep adding new hashes of perpetrator-produced content, but they keep finding newer versions, and the ease of manipulating the content means there is never going to be a state of “all right, we’ve got enough hashes”. As I have noted before, among the common ways people alter videos are to overlay text or add banners. Users can sometimes alter videos enough to evade the matching software on established platforms, which then report the near-duplicates back to the industry forum. (A minimal sketch after this list illustrates both the matching and why altered copies slip through.)
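To make points 3 and 4 concrete, here is a minimal, purely illustrative sketch of hash-based matching and why altered copies can slip past it. This is not GIFCT’s actual pipeline or any platform’s real code: the two-tone “frame”, the 64-bit average hash, and the distance threshold are all assumptions for the sake of the example. The key idea is that an exact fingerprint such as SHA-256 changes completely when a single byte changes, so shared databases also rely on perceptual hashes compared with a distance threshold – and a sufficiently heavy edit can still push a copy past that threshold.

```python
import hashlib

import numpy as np


def exact_fingerprint(video_bytes: bytes) -> str:
    """Exact-match fingerprint: changing a single byte yields an entirely new hash."""
    return hashlib.sha256(video_bytes).hexdigest()


def average_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Toy perceptual hash: block-average a grayscale frame down to size x size,
    then record which cells are brighter than the overall mean (64 bits for size=8)."""
    h, w = frame.shape
    frame = frame[: h - h % size, : w - w % size]  # crop so blocks divide evenly
    bh, bw = frame.shape[0] // size, frame.shape[1] // size
    blocks = frame.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()


def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of bits that differ between two perceptual hashes."""
    return int(np.count_nonzero(a != b))


# A crude stand-in "frame" (dark left half, bright right half), plus the same
# frame with a banner pasted across the top (one of the common alterations
# mentioned above).
original = np.full((480, 640), 50.0)
original[:, 320:] = 200.0
altered = original.copy()
altered[:60, :] = 255.0  # simulated overlaid banner

# The exact fingerprints no longer match at all...
print(exact_fingerprint(original.tobytes()) == exact_fingerprint(altered.tobytes()))  # False

# ...but the perceptual hashes differ by only a few bits, so a distance
# threshold (say, <= 10 of 64 bits) still flags the copy as a near-duplicate.
d = hamming_distance(average_hash(original), average_hash(altered))
print(f"perceptual hash bits changed: {d} / 64")
```

In this toy example the overlaid banner flips only a handful of the 64 bits, so a near-duplicate match would still fire; crop the video, re-encode it, mirror it, and splice in other footage, and the distance grows until it doesn’t. That, in miniature, is why there is never “enough” hashes.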

FINAL NOTE: all of this leads to an uncomfortable conclusion, and it is this: the internet’s worst websites aren’t algorithmic. That’s just a fact. The overwhelming majority of the violent extremists who were “radicalized online” were not radicalized by Zuckerberg’s products, but by other angry men online using simple websites to trade hateful, evil ideas and then do horrifying things with them. There is absolutely, definitely an amplification that can happen when that bile gets posted to the sites I noted above, some of which have automated social feeds. But the internet didn’t invent American racism. People developed that and uploaded it. And facing that head-on is a much harder thing to do than just blaming it all on a bunch of companies or recommendation widgets.
