The Tech Doomers Are Wrong about Artificial Intelligence

A Galaxy Note9 smartphone at the Samsung Electronics booth during the Consumer Electronics Show in Las Vegas, Nev., in 2019. (Steve Marcus/Reuters)

Today, American policy-makers must choose with respect to AI: freedom or technocracy, prosperity or economic insignificance.

Typifying the unseriousness with which many in Washington treat AI, President Joe Biden’s inspiration to regulate the technology reportedly includes a viewing of Mission: Impossible — Dead Reckoning Part One (you know, the famous documentary). Let’s hope a rewatch of Ghostbusters doesn’t persuade him to prevent urban rampage by nationalizing the marshmallow industry.

Unfortunately, many in government, the media, and private industry share Biden’s overwrought fears. Technologists such as Elon Musk often speak of AI primarily as a mortal threat. Softer variations include theories that AI, if not micromanaged by regulators, inevitably will cause mass unemployment or widespread discrimination. One commentator on X (formerly Twitter) recently voiced the maximalist version of this perspective, advocating that “we kill the demon robots before they kill us.”

The catastrophist perspective has manifested itself in legislative proposals such as the bill from Senators Josh Hawley (R., Mo.) and Richard Blumenthal (D., Conn.) to strip generative-AI products of Section 230 protections. With it, the senators aim to create legal carve-outs and special liabilities that categorically disadvantage AI products — not just the nascent supercomputer overlords.

Section 230 protects online content-hosting platforms from civil liability for third-party speech. Without it, anyone who hosts a website that allows third-party posts — from micro-bloggers to the largest social-media companies — would face potentially crippling liability for user-generated content. Removing its protections from AI-generated content would disincentivize investment and innovation in AI without regard for any specific product’s potential benefits or risks.

This effort smacks of culture-war-driven technophobia, not clear-eyed policy-making. Legislation born of these irrational fears would retard American economic growth and technological innovation. According to a recent Goldman Sachs report, generative AI “could drive a 7% (or almost $7 trillion) increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period.” Enacting laws that discourage American innovation in the sector would hamstring U.S. firms competing with foreign rivals (e.g., Chinese firms). That not only would make American consumers poorer, but it likely would end decades of America’s global technological dominance, which Washington has thus far promoted through light-touch regulation.

Largely free from technocratic strangulation, the U.S. digital economy has generated tremendous wealth. In 2021, it “accounted for $3.70 trillion of gross output, $2.41 trillion of value added (translating to 10.3 percent of U.S. gross domestic product (GDP)), $1.24 trillion of compensation, and 8.0 million jobs,” according to the Bureau of Economic Analysis. Should Washington impose significant new burdens (related to AI or otherwise), the tech sector’s productivity would atrophy accordingly.

By framing AI policy in apocalyptic terms, policy-makers ignore the fact that most AI-enabled products have more to do with mundane activities such as shipping logistics, data analysis, and spell-check than with supercomputers trying to take over the world. These common tools, which never star in movies, help individuals complete ordinary daily tasks or businesses increase operational efficiencies.

For example, the aforementioned Hawley–Blumenthal bill would affect many common tools, including Grammarly, Vimeo, and smartphone cameras, as the R Street Institute’s Shoshana Weissmann explains. “Because it’s impossible to know if content will be used in illegal ways, it’s unclear how these companies could comply with the law without removing all AI features from their products,” Weissmann writes. “The resulting deluge of lawsuits could bring AI development in the United States to a grinding halt.”

Government certainly should monitor advanced systems that could, if abused, threaten national security. But regulatory regimes must grow from realistic assessments of risk rather than Hollywood plotlines. Moreover, they must promote permissionless innovation and, in turn, economic and technological dynamism.

Prosperity occurs where government declines to erect barriers to private citizens who innovate, collaborate, trade, and pursue their own goals. This dynamic has asserted itself throughout history, from ancient Egypt to post-communist Europe and China. It has allowed America’s relatively free tech sector to dominate while Europe’s heavily regulated tech sector stagnates.

Today, American policy-makers must choose with respect to AI: freedom or technocracy, prosperity or economic insignificance.
