Hurricanes and Censorship

Workers remove debris from a pole following the passing of Hurricane Helene in Marshall, N.C., October 8, 2024. (Eduardo Munoz/Reuters)

Natural disasters are frequently accompanied by rumor and speculation, often ridiculous, sometimes malicious. That this has been the case with hurricanes Helene and Milton is unsurprising, as is the fact that it has been seized on by those agitating for more censorship (particularly online censorship). As Hillary Clinton, John Kerry, and Tim Walz have recently reminded us, the would-be censors have been on the offensive lately.

Turn to X and you will find a post from The Hill, illustrated by what looks like hurricane damage, announcing that the “House urges action on misinformation.” Click on the story and you will find that it was not the House (oh), but:

A few House Democrats called on major tech leaders to do more when it comes to the spread of misinformation and conspiracy theories about hurricanes Helene and Milton.

This includes increased monitoring and removal of misinformation and disinformation, enhanced fact-checking partnerships with local agencies and disaster relief organizations and stronger safeguards against scams.

Algorithms also should be “strengthened” to better flag conspiracy theories, the lawmakers argued.

Stronger safeguards against scams? Sure.

But what could possibly go wrong with “strengthening” algorithms, a process opaque to most of us? How do these algorithms “decide” what a conspiracy theory is? “Monitoring” potential dis/misinformation during a disaster is a good idea, as is countering it with accurate information. But “removing” dis/misinformation raises not only First Amendment issues but also (once again) the question of who decides what is or is not true. And while we’re on that topic, please check out Rich Lowry’s article on how climate warriors in the media and government have portrayed the hurricanes.

Meanwhile, Media Matters and Vox alumnus Carlos Maza tells his 141,000 followers on X that:

“Free speech” was, and is, an unmitigated disaster. I know it sounds inflammatory, but every piece of evidence points to the fact that we need really aggressive government regulation of speech platforms. The free market doesn’t work and it never will.

“Free speech” gets scare quotes? Today’s discourse is what it is. As for the claim that “every piece of evidence” points to the need for “really aggressive” regulation of “speech platforms,” well, let’s call it hyperbole. There is considerable debate over how much of a problem dis/misinformation really is. Its most pernicious effect may be to foster distrust, an effect that exaggerating dis/misinformation’s reach will only deepen. And history does not suggest that censorship (sorry, “aggressive government regulation”) will increase trust.

Maza was reacting to an article in The Atlantic by Charlie Warzel in which, among other matters, Warzel complains about some of the nonsense that circulated about the hurricanes. Reason’s Liz Wolfe didn’t think much of that nonsense either, but:

It’s not clear to me either from observing it play out OR from reading this piece that the spread of stupid conspiracy theories online has actually created very much real-world harm. What IS clear to me is that the framing of this article is itself fearmongering, baiting people for clicks.

The framing, however, is about more than clicks; it’s about pushing for censorship.

Warzel’s article is also about the relativization of truth, something he links to “the far right,” a common enough notion. But the assault on the idea of objective truth in recent decades has its origins in academia, came (and comes) mainly from the left, and has metastasized from there into popular culture (“my truth” and all that). That doesn’t excuse the tales spun by some on the right, but still…

Warzel is on firmer ground here:

So much of the conversation around misinformation suggests that its primary job is to persuade. But as Michael Caulfield, an information researcher at the University of Washington, has argued, “The primary use of ‘misinformation’ is not to change the beliefs of other people at all. Instead, the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary.”

There’s a good deal of evidence that this is correct. And there is nothing particularly new about it.

British philosopher Dan Williams, writing at Conspicuous Cognition, offers a calming (but never complacent) perspective. He would, I think, agree with Caulfield’s comment, and he argues that once we (to generalize) commit to a political or cultural tribe, we become:

Highly receptive to false claims when they support our tribe’s favored narratives. Even when we know such claims are not exactly true on the specifics, we often give them a free pass if we think they send the right message—if they celebrate the righteousness of our causes and convictions or discredit and vilify the right enemies.

But to move from that to talk of:

A conflict between those who value truth and those who do not… is a symptom of polarization, not an analysis of it. It involves demonizing those who do not belong to the right team and delegitimizing their perspective.

The underlying conflict is more complex, reflecting:

Sharp disagreements over what the truth about various matters is and, at a more fundamental level, over which people and institutions are trustworthy sources of truth.

The solution, argues Williams, is to “build trust in institutions—most importantly, by making them more trustworthy.”

He’s right. And those who think this will be achieved by “fact-checkers” (Quis custodiet ipsos custodes, anybody?), let alone outright censorship, have lost their minds.
