Don’t Trust the SEC Techno-Pessimists on AI

SEC chairman Gary Gensler testifies before a House Financial Services Committee oversight hearing on Capitol Hill in Washington, D.C., September 27, 2023. (Jonathan Ernst/Reuters)

The agency’s application of unworkable rules to new technology means we’ll get less of it — and consumers will lose out.



“We are being lied to.”

So begins the “Techno-Optimist Manifesto,” an exhortation by venture capitalist Marc Andreessen to unleash markets and technology to aggressively improve humanity’s material and (in some sense) spiritual well-being. According to Andreessen, the lies of the techno-pessimists include that technology risks “ruining everything” and therefore must be shackled.

Securities and Exchange Commission (SEC) chairman Gary Gensler has long been in the running for the federal government’s chief techno-pessimist. While that competition is stiff among the mandarins of the administrative state, the SEC’s latest proposed rule is second to none in its ignorant fearmongering about technology, particularly artificial intelligence and related tools (which the agency refers to as “predictive data analytics”), and in its overreaching response.

From the SEC’s perspective, the greater “scale and scope” of AI and similar tools make them so powerful and complex that they can fundamentally transform and turbocharge risks to investors. Specifically, the commission believes that AI can amplify financial professionals’ (i.e., brokers’ and advisers’) conflicts of interest with their investors, and generate new types of conflicts of interest, to such a degree as to render investors utterly incapable of thinking for themselves. One way that the SEC thinks a financial professional’s use of AI might do this is by nudging an investor towards a product that’s more profitable for the broker or adviser while concealing that fact from the investor.

The SEC therefore has rushed in with a radically new, invasive, and unworkable compliance regime that abandons the traditional remedy of disclosures to investors. But the commission gets both the problem and the solution wrong.

Among the SEC’s principal concerns about the use of AI by financial professionals appears to be that targeted and behavioral advertising techniques that leverage AI are just too persuasive to let stand in the world of finance. But anyone who has ever ignored an online ad will intuitively understand why the SEC’s argument is overblown. Research indicates that these types of marketing techniques likely have a far smaller impact on investor behavior than the SEC fears and that targeted ads can even lead to smarter, not mindless, shopping.

Moreover, the SEC’s idea that AI-related technology can somehow create new conflicts of interest plainly doesn’t make sense. The main source of potential conflict between a broker or adviser and an investor is the fact that a salesperson is in the business of, well, sales. This is understood by anyone who has ever bought a car. There’s nothing about new technology that changes the salesperson’s interest in making more sales. Salespeople may also know more about the product (an “information asymmetry” as economists say) than those to whom they sell it, but if new technology gives them an even greater leg up in terms of knowledge, the right remedy is the very technique the SEC would toss in the trash: disclosure.

What of the arguments that certain advanced AI models are too complex and inscrutable — with inner workings often described as “black boxes” — for effective disclosure? While it’s true that the step-by-step logic of certain AI models might be obscure, this does not pose the insurmountable challenge that the SEC thinks it does. Even the use of a completely opaque, black-box AI model by a financial professional changes little about the ultimate potential conflict that stems from the broker’s or adviser’s stake in the sale. That interest can be seen in the AI tool’s output — its placement of a particular product in front of the customer — regardless of what goes into the model’s “thought” process. Just as it’s possible to explain, for example, that nicotine is addictive without getting into the weeds of biochemistry or epidemiology, it’s equally possible to understand that your broker will make more when you buy more (or when you buy this versus that) without the need to study a textbook on machine learning.

Therefore, neither the complexity of the technology that delivers information to an investor nor the financial professional’s incentive structure makes it impossible to explain a conflict of interest to a customer. In fact, technology makes disclosure easier, not harder. Through user-experience design and touches as simple as labeling paid-for content as “Ad” or “Sponsored,” digital platforms can signal whether there’s a financial stake behind a piece of information. There’s no reason these practices can’t be adapted to disclose conflicted financial products or services.

The SEC’s getting it wrong isn’t just embarrassing; it’s consequential. Applying unworkable rules to new technology means we’ll get less of it. That’s a problem because digital, automated, and intelligent technologies are how we get more, cheaper, and better financial services.

Digital investment technologies have made markets accessible to more investors. If the SEC keeps AI out of the mix, consumers will lose out. They might lose tools that help deliver information in their native language and at their reading level. They might not get generative chatbots that can answer user-specific questions and follow-ups. And they might be denied the broad category of technologies that help some of the world’s most adept financial firms spot risks and opportunities.

Don’t be fooled by the SEC’s poorly reasoned fears. Financial technology, including AI, holds the promise of making the world better for investors — provided the techno-pessimists’ rules don’t get in the way.

Jack Solowey is a policy analyst at the Cato Institute’s Center for Monetary and Financial Alternatives, where Jennifer J. Schulp is the director of financial-regulation studies.
