Why Gonzalez v. Google Matters

In an internet of algorithms, excluding controversial content leaves us all worse off.

Given that YouTube users alone upload more than 70 years of video to the platform every day, it is impossible to manually sort through content posted online across billions of websites. While the early internet could be organized and moderated by forum administrators, algorithms keep the present internet humming along.

Today, the U.S. Supreme Court will hear a case that threatens to change our use of algorithms by purging them of controversial content.

In Gonzalez v. Google, the court will decide whether Section 230 of the Communications Decency Act protects the use of algorithms to recommend speech. Since 1996, Section 230 has blocked lawsuits that treat websites or platforms “as the publisher or speaker” of content submitted by users.

The Gonzalez plaintiffs are relatives of victims of the 2015 ISIS Bataclan terror attack. They allege that YouTube’s recommendation of ISIS content makes the platform a developer or co-creator of that content, placing it beyond Section 230’s protection. They argue that this makes Google liable for the harm their loved ones suffered.

The plaintiffs are attempting to distinguish Google’s protected display of speech from algorithmic recommendation, but the act of organizing speech is inextricable from publishing. A newspaper must decide which stories to run on the front page and which to bury in the back. Holding the newspaper liable for a story’s placement would erect a roadblock to its publication, just as legal liability for the story’s content does.

Given the infinitude of content uploaded every day across nearly 2 billion websites, the mere act of displaying any particular piece of content is itself a recommendation.

Platforms have neither the time nor the resources to review everything posted by their users. Recognition of this simple fact motivated Section 230’s passage, and courts have long appreciated that the same logic applies to the content and preferences included in algorithmic search and recommendation systems.

In its 2019 Force v. Facebook ruling, the U.S. Court of Appeals for the Second Circuit held that Section 230 protects Facebook friend suggestions. The court observed that algorithms “take the information provided by Facebook users and ‘match’ it to other users . . . based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers.”

Whether it’s YouTube video recommendations, Facebook friend suggestions, Tinder swipe queues, Reddit upvote weightings, or tweets displayed in a Twitter feed, the modern internet relies on algorithms to determine who sees what. Relying on myriad signals, these algorithms respond to our preferences, showing us more of what we, or those like us, enjoy.

But if plaintiffs’ arguments are accepted, and platforms are made liable for algorithmic recommendations, another concern will be injected into this mix. Risk-averse platform lawyers and the most litigious members of our society will suddenly have a say in previously personalized algorithms.

Section 230’s value to free speech is a product of its procedural protections. Creating an exception to Section 230 for algorithmic recommendations would ultimately benefit large incumbent firms. Even if platforms would eventually succeed on the merits — there is no evidence any of the Bataclan attackers were radicalized by YouTube videos — taking every such case to trial would be financially crippling for growing platforms.

Intermediaries would limit their liability by attempting to cleanse algorithmic recommendations of potentially litigable content. YouTube already prohibits videos that praise or are produced by terrorist groups, but at scale, some extremist content slips through. To render algorithmic feeds utterly safe, platforms would have to accept many false positives.

That means the costs of imposing liability on algorithmic recommendations would fall on users, with controversial speakers and those who hold minority opinions hit hardest. Platforms cannot easily or quickly determine which Arabic-language videos offer praise for jihad and which condemn it, how a pictured firearm will be used, what might happen on a date between strangers, or whether a recommended political rally will turn into a riot.

Hopefully the Supreme Court will avoid disturbing settled precedent in a misguided effort to clean up the internet. If changes are made to Section 230, they should come from Congress, not the courts. Holding platforms liable for algorithmic recommendations would break the tools Americans use to organize and access information online.

Will Duffield is a policy analyst in the Cato Institute’s Center for Representative Government.