Keep the FTC’s Hands off Artificial Intelligence

Federal Trade Commission Chair Lina Khan listens as members of a congressional committee question her during a hearing on “Oversight of the Federal Trade Commission,” on Capitol Hill in Washington, D.C., July 13, 2023. (Kevin Wurm/Reuters)

Competition itself is a discovery process, and regulators at the FTC do not have a cheat sheet with all the answers.

The general panic over artificial intelligence and the Federal Trade Commission’s aim to insert itself into every corner of the U.S. economy recently dovetailed in a 20-page investigative letter the agency sent to OpenAI, the owner of ChatGPT.

The FTC had warned us: Chairwoman Lina Khan publicly claimed in April that her agency already has the authority to regulate artificial-intelligence applications. She elaborated on her concerns in a New York Times guest opinion piece this May. In it, she laments the lack of government intervention at the dawn of social media: “What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.” She counsels that as “the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.”

Granting that the rise of the platform economy and social media brought challenges, it also brought convenient (often free) new services and the largest expansion of speech ever experienced in human history. Doubtless, the online revolution created uncomfortable creative destruction in certain industries, new trials for parents with social media, and other concerns, but it’s hard to argue that life was better before Uber, Instagram, Amazon Prime, Airbnb, Waze, Zoom, or their competitors. Anyone who doesn’t agree these developments are beneficial on balance remains very free not to use them, casting some doubt on Khan’s description of them above as “key.”

More broadly, the Department of Commerce put the digital economy at $2.4 trillion, or 10.3 percent of total U.S. GDP in 2021, and credited it with 8 million jobs adding up to $1.2 trillion in compensation. Globally, 19 of the top 25 tech firms by market cap are U.S. companies. Not too shabby, and perhaps suggestive of the superiority of America’s light-touch regulatory approach.

Despite that evidence to the contrary, the odd mindset that Web 2.0 is a tale of regulatory failure is echoed in the FTC’s blog posts on the dangers of AI. The frequent references to scary science-fiction scenarios suggest the agency is stoking Luddite sentiments instead of giving practical guidance to companies. What advice is offered is already in line with market incentives: Improve your data set, don’t let prejudice get in the way of good AI results, and let experts evaluate your AI process to improve accuracy.

But some guidance is murkier. “Do more harm than good” is subjective direction unworthy of such a powerful agency. And what are businesses to make of finding “concern about [AI tool’s] malicious use goes well beyond FTC jurisdiction” and “design elements that trick people into making harmful choices are a common element in FTC cases” in the same paragraph? Lastly, the bullying tone of many posts may betray an antibusiness bias that has already taken shape in failed merger challenges and an unprecedented attempted expansion of the agency’s power. Use of phrases such as “we’re in FTC territory” and “if you think you can get away with baseless claims that your product is AI-enabled, think again” seems inappropriately glib. It’s true that the FTC has certain oversight of business practices and that lying is not okay, but the tough-guy language doesn’t inspire a presumption of innocent-until-proven-guilty civility.

AI, like every new technology ever created, will bring challenges. Whether those will mean that individuals have to learn and adjust accordingly, or designers of AI applications are required to make tweaks, or even that regulatory, legal, or legislative solutions are needed remains to be seen. For the most part, only time will tell. But surely public policy is better crafted in response to issues that emerge (or have a high likelihood of emerging), rather than on the basis of the assumption that an FTC-staff badge bestows a magical ability to forecast the future. Competition itself is a discovery process, and regulators at the FTC do not have a cheat sheet with all the answers.

Trade-offs between gains and risks are an unavoidable reality of any new technology. The essential question is who decides what those trade-offs will be. Is it the free-enterprise system, with its bounty of decentralized knowledge and millions of individuals signaling where AI should go? Or should this decision be left to centralized bodies like the FTC, which mistakenly believe they can predict the future and “fairly” account for varied preferences of risk and reward?

Comparing the track record of the free market with government planning, the answer should be obvious.

Jessica Melugin is director of the Center for Technology and Innovation at the Competitive Enterprise Institute and a 2023 Innovators Network Foundation Antitrust and Competition Policy Fellow.