Watchdog site’s YouTube channel hit with short-lived suspension as company quickly backtracks
Large social-media companies like YouTube, Facebook, and Twitter have long struggled with the spread of right-wing extremism and its attendant conspiracism on their platforms, with toxic consequences including acts of violent terrorism and insurrection. This week, YouTube inadvertently demonstrated how far removed it still is from figuring it out.
The popular video platform took down the channel of Right Wing Watch (RWW), People for the American Way’s website devoted to monitoring and reporting on the activities of far-right extremists, for supposedly violating YouTube’s terms of service with videos featuring hate speech. After the predictable uproar, the company suddenly reversed itself, but in the process it revealed that blunders like this are systemic, rooted in the platform’s approach to free speech and extremism as well as in its revenue model, and that they will keep happening, along with the spread of hate speech.
Apparently responding to a reporting campaign, YouTube notified RWW last week that it was removing the site’s six-plus years’ worth of content for “severe or repeated violations of our Community Guidelines.” When RWW appealed the takedown, YouTube responded: “We have decided to keep your account suspended.”
“Our efforts to expose the bigoted views and dangerous conspiracy theories spread by right wing activists has now resulted in YouTube banning our channel and removing thousands of our videos,” RWW tweeted. “We attempted to appeal this decision, and YouTube rejected it.”
After that tweet—and a resulting outcry on social media—YouTube reversed itself, blaming the error on the large volume of videos it has to monitor: “We’d like to thank you for your patience while we reviewed this case. Our goal is to make sure content doesn’t violate our Community Guidelines so that YouTube can be a safe place for all—and sometimes we make mistakes trying to get it right.”
YouTube’s explanation is less than wholly satisfactory. It appears to suggest that the suspension was triggered by its algorithms, but fails to explain why YouTube managers initially upheld the suspension, which is a step involving human review. But for now, RWW is moving forward.
“We are glad that by reinstating our account, YouTube recognizes our position that there is a world of difference between reporting on offensive activities and committing them,” Right Wing Watch director Adele Stan said in a statement. “Without the ability to accurately portray dangerous behavior, meaningful journalism and public education about that behavior would cease to exist.”
Stan added, “We hope this is the end of a years-long struggle with YouTube to understand the nature of our work. We also hope the platform will become more transparent about the process it uses to determine whether a user has violated its rules, which has always been opaque and has led to frustrating and inexplicable decisions and reversals such as the one we experienced today. We remain dedicated to exposing threatening and harmful activities on the Far Right and we are glad to have YouTube again available to us to continue our work.”
Kyle Mantyla, a senior fellow at RWW, explained to the Daily Beast that the site, whose YouTube channel has 60,000 subscribers, had been struggling with YouTube for several years, plagued by frequent disputes in which the platform failed to distinguish between content dedicated to spreading extremism and content, like RWW’s, dedicated to monitoring and exposing it, even though RWW’s videos carried disclaimers explaining their purpose. These disputes, he said, had escalated during the past year as RWW intensified its efforts to counter a rising tide of misinformation.
In April, YouTube issued two strikes against RWW for videos it posted, Mantyla explained, so the watchdog group refrained from posting any more content to the site until the strikes dropped off after 90 days. By then, RWW had already shifted most of its video content to rival platform Vimeo.
“And then they found some video from eight years ago that they flagged, took that down, and that was our third strike,” he said. “And they took down our entire account.”
A very similar thing happened to the Southern Poverty Law Center’s Hatewatch channel at YouTube, which I helped oversee between 2013 and 2018. Our videos were primarily edited collections of extremist content intended to document and substantiate our written reportage, which they inevitably accompanied and to which they were linked.
For the most part, we received few complaints. But in early 2018, a reporting campaign targeting our videos resulted in the channel receiving two strikes, which YouTube upheld upon appeal. Rather than risk losing six years’ worth of content, we simply ceased posting any further videos on the channel. I left Hatewatch in January 2019, but as far as I can tell, the SPLC no longer uses YouTube, nor any other video platform.
I’ve also become intimately acquainted since then with the obtuseness of social-media platforms’ oversight functions, and particularly with the inconsistent enforcement of their rules that supposedly forbid extremist, hateful, or conspiracist content and misinformation, but in reality touch only a small fraction of the deluge of such content on their platforms. When Twitter suspended my account in 2019 over the use of my book’s cover design in my banner, I ran headlong into the reality that while social-media companies claim to be fighting hate speech, they consistently fail to distinguish between such speech and the efforts to oppose it. As I noted then:
In other words, Twitter’s algorithm has the net effect of privileging alt-right extremism—which specializes in this kind of appropriation and “ironic” use of disguised hate symbols, ranging from the Kek banner to the “OK” sign. And it punishes the serious work of combating white supremacism, which supposedly is what Twitter had in mind when it announced its intention to investigate whether it should actively de-platform hate groups and far-right extremists.
I eventually reached a kind of rapprochement with Twitter, which despite its vows to eliminate extremist content nonetheless still provides a platform for raging white nationalists like Nicholas Fuentes. However, even Twitter operates with more integrity than YouTube, which frequently proclaims the removal of various kinds of extremist and conspiracist content, removals that inevitably turn out to be a relative handful amid a continuous tide of toxic material.
These companies largely hide behind a dictum that treats all political speech as functionally equal, folding the toxic effects of hate speech and conspiracism into a one-size-fits-all algorithmic approach to enforcing their rules. But behind their steadfast refusal to address those algorithms lies a powerful bottom line: these companies’ revenue models are built on providing as much such content as possible.
The top priority at YouTube, as Mark Bergen at Bloomberg News explored in 2019, is “Engagement”: getting people to come to the site and remain there, measured as views, time spent viewing, and interactions. Moderating extremist content is often devalued when it interferes with that goal.
“Scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread,” Bergen reported. “Each time they got the same basic response: Don’t rock the boat.”
The company announced early in 2019 that it intended to crack down on the conspiracism. However, part of its problem is that YouTube in fact created a huge market for these crackpot and often harmful theories by unleashing an unprecedented boom in conspiracism. And that same market is where it now makes its living.
The result has been a steady toxic bloom of online radicalization, producing an army of “redpilled” young men disconnected from reality and caught up in radical-right ideology, a phenomenon the New York Times documented in 2019.
A 2018 study by Bellingcat researchers found that YouTube, in fact, was the No. 1 factor in fueling that radicalization. The study noted: “Fascists who become red-pilled through YouTube often start with comparatively less extreme right-wing personalities, like Ben Shapiro or Milo Yiannopoulos.”
A more recent study by researchers at Raditube found that YouTube’s moderation efforts, such as they are, systematically fail to catch problematic content before it goes viral. This means that even when such content is found and removed, it continues to circulate, taking on a second life on other channels and platforms.
YouTube’s claims to be cleaning up its act notwithstanding, it ultimately remains one of the biggest reservoirs of toxic misinformation and hate speech on the Internet, and a powerful engine in the spread of far-right ideologies and activism.
Certainly its suspension of the RWW channel was greeted with hoots of acclamation from right-wing extremists. Rick Wiles, the End Times conspiracy theorist who is a frequent subject of RWW’s reports, celebrated prematurely on his “TruNews” program Monday night, with remarks that now appear rather foolish.
“I suspect that there will be layoffs very soon inside the organization because there’s no platform for them to spew their lies and propaganda,” Wiles said. “So their writers, their editors, all the people that they had working to smear us and other ministries, what are they gonna do? I suspect they’re gonna lose their jobs this week.”
He continued: “Let me make this very clear today: Jesus Christ shut down Right Wing Watch. Not YouTube. Jesus Christ shut down Right Wing Watch today. This is an example of God working through unsaved people at YouTube to carry out his vengeance against those who attack and smear his servants. So I didn’t have to lift a finger against Right Wing Watch. I think they’ll disappear in the coming weeks and months. There’s no purpose for them now.”
RWW posted the video of that rant on YouTube.