A Misguided Proposal for Regulating Artificial Intelligence

Eliminating Section 230 for generative AI would be dangerous.

On Wednesday, Senators Josh Hawley (R., Mo.) and Richard Blumenthal (D., Conn.) attempted to fast-track their first effort at artificial-intelligence regulation. The bill, S. 1993, would eliminate Section 230 protection for producers of generative-AI systems, and it is part of a larger regulatory framework that the two senators proposed over the summer. Though the bill failed to receive unanimous consent in the Senate, it is likely to find its way back into the policy debate in the near term. It would present a profound threat to U.S. technological dynamism: in just a handful of sentences, this proposed legislation has the potential to diminish, and even destroy, the nascent American AI industry.

Section 230 is a provision of the Communications Decency Act of 1996 that shields internet companies from liability for content that third parties post on their websites. For example, if someone posts a death threat on a comments page or social-media site, the person who posted the threat is liable, but the owner of the website is not. This simple provision, passed when the internet was still in its infancy, is widely credited as one of the key enablers of the flourishing of the internet and digital communication over the past 30 years.

It also reflects a simple moral and legal intuition: Individuals are responsible for their conduct. If a person assaults someone at a bar or restaurant, most people understand that the person who committed the assault should be held legally responsible, not (usually) the owner of the establishment in which the crime took place. Section 230 merely translated this commonsense idea from the physical to the digital world.

The proposed legislation by Senators Hawley and Blumenthal would strip companies that make generative-AI systems, such as OpenAI’s ChatGPT, of those protections and dismantle this commonsense legal framework. Imagine that someone uses ChatGPT to produce an investor presentation for a fraudulent company. We can all agree that defrauded investors would have the right to sue the fraudster, and that prosecutors could bring criminal charges against him. But should OpenAI be exposed to liability for this malfeasance, even though there is no way that ChatGPT could have “known” that it was being used in furtherance of a crime? If the criminal were to write an email connected to his fraud using Gmail’s generative-AI features, would Google also be liable?

Generative AI, and AI in general, is a general-purpose technology. These systems have been used to write code, draft prose, predict protein structures, discover new materials, and create images, to name just a few of their countless applications. With time, they will suffuse many areas of our economy and daily lives. Like any technology, AI is a tool that will be put to both good and bad ends. A hammer can be used to build a house, but it can also be used as a weapon; no one thinks the hammer’s manufacturer is responsible for either use. Though ChatGPT is far more complex, it is, at the end of the day, another tool that individuals and firms can add to their toolkit if they find it useful.

If the Hawley-Blumenthal bill becomes law, that tool may no longer be available to most people. The liability risks of deploying these AI systems would likely be far too high, and if the makers of such products remained in business at all, they would be forced to sell only to institutional clients such as government agencies, research universities, and a handful of large corporations. AI’s potential to start a consumer-technology revolution, and the United States’ enviable position as the world leader in this field, would be lost overnight. Other countries would take up the mantle, and Americans would watch as those countries reaped the economic benefits. Perhaps those countries would be U.S. allies, such as France, Japan, or the U.K., each of which has a promising AI sector. But perhaps they would be our adversaries, above all China, which today is likely in second place in the global AI race. No matter the specifics, it is unclear how this outcome would leave any American safer or better off.

AI is a technology with much promise, but, like any powerful new tool, it presents the risk of abuse. No technology, however sophisticated, and no law or regulation, however carefully crafted, can evade this basic fact. We should face that reality head-on rather than pretend that it is possible to design regulation that prevents all misuse. Attempts to do so in the name of safety or consumer protection would be counterproductive at best. At worst, they could spell the demise of one of America’s most dynamic industries.

Dean W. Ball is the senior program manager at the Hoover Institution’s State and Local Governance Initiative.