How the Biden-Harris Administration Created an Unlawful AI Regulatory Agency

Vice President Kamala Harris attends the AI Safety Summit in Bletchley Park, near Milton Keynes, Britain, November 2, 2023. (Toby Melville/Reuters)

The AI Safety Institute has no authorization from Congress to receive funding or even exist, yet it could shape AI’s future.

Last week, the titans of the artificial-intelligence (AI) industry sent a letter to Congress demanding that their favored quasi-governmental agency be enshrined into law and empowered to regulate their industry. But Congress did not create this agency in the first place, nor has it ever appropriated any funding for it. No matter what you think about AI, you shouldn’t want an unlawful agency, captured by the industry it oversees, regulating it.

Last year, the White House published the longest executive order in history, on “Safe, Secure, and Trustworthy Artificial Intelligence,” which established a new AI Safety Institute under the National Institute of Standards and Technology (NIST). Vice President Harris then announced the institute in a speech, pitching it as a vehicle to coordinate AI regulation with other countries. Yet the institute operates without any legislative authority and lacks the accountability necessary for an agency with such sweeping ambitions.

Even more troubling is how this initiative is being funded.

The AI Safety Institute diverted $10 million from the General Services Administration’s Technology Modernization Fund — money meant for targeted technology upgrades, not for the general operations of an unaccountable new regulator. It also appears to be benefiting from another backdoor maneuver: $337 million repurposed from the Broadband Equity, Access, and Deployment (BEAD) program, a key initiative meant to connect rural households to high-speed internet.

Not a single household has been connected by BEAD, yet its funds are now being siphoned away for AI regulation. This kind of fiscal juggling shows a clear misalignment of priorities from the Biden-Harris administration. Broadband infrastructure — a pressing need in many underserved communities — and upgrading government technology have taken a back seat to a “wannabe” agency and an ill-defined AI regulatory framework.

What’s worse, the AI Safety Institute is staffed by individuals who bring questionable perspectives to the table. One prominent figure, Paul Christiano (formerly of OpenAI), has alarmed the public with his assertions that there’s a “10 to 20% chance of AI takeover . . . [leaving] most humans dead.” Such apocalyptic thinking is not just fringe. It has infiltrated the very institutions now charged with shaping the future of AI development. The institute has also attracted cultural anthropologists and other academics whose experience with technology is limited.

The real danger, however, lies in the fact that major players in the tech industry are now calling on Congress to codify the AI Safety Institute into law. This is classic regulatory capture — where powerful industries influence the creation of rules that benefit them at the expense of everyone else.

The tech giants are maneuvering to choose their own regulators. And rather than protecting the public from potential harms, the AI Safety Institute could become another tool for these companies to stifle competition and consolidate their control. This concern is clearly exacerbated by the total lack of independence this agency would have if it were to owe its existence to a rogue process and shrewd lobbying tactics rather than congressional intent.

By creating what amounts to a shadow agency filled with left-wing academics and tech-industry insiders, the White House is building a regulatory framework that benefits a select few. That Big Tech is fully supportive of this initiative should raise immediate red flags. And that Democrats are funding it through the repurposing of money intended for critical infrastructure shows the administration’s skewed priorities.

We need a regulatory approach to AI that is transparent, accountable, and reflective of the broad public interest — not one driven by tech oligarchs and ivory-tower academics. If Congress truly cares about the future of AI and the well-being of the American people, it should reject any attempt to codify this flawed institute into law. The stakes are simply too high to get this wrong.

Paul Winfree is the president and CEO of the Economic Policy Innovation Center. He was previously the director of budget policy and the deputy director of the Domestic Policy Council during the Trump administration.