The Corner

To Avoid Answering ‘Dangerous Questions,’ ChatGPT Had to Be Dumbed Down


The AI program has been deliberately handicapped to preserve the illusion of its political neutrality.


It was only a few months ago that experts in fields related to machine learning began issuing dire predictions about the effect artificial intelligence would have on civilization and, indeed, humanity’s continued existence. The panic was inspired by the sudden popularity of the commercially accessible AI program ChatGPT, which, we were warned, was getting smarter by the day and would soon eclipse even mankind’s cognitive abilities. The threat was so acute that, in April, a variety of influencers in the technology sector signed an open letter calling for a halt to “the training of AI systems more powerful than GPT-4” lest we “risk loss of control of our civilization.” It turns out, however, that ChatGPT is just another tool in the hands of its makers. And just like any tool, it can be adulterated to the point of uselessness.

Contrary to the catastrophists, ChatGPT has not grown exponentially smarter over the last several months. Rather, ChatGPT users have recently observed that the program seems to be getting dumber. One “possible cause” of this devolution has been the effort to “prevent ChatGPT from answering dangerous questions,” Decrypt reporter Jose Antonio Lanz wrote last week. “Unfortunately, more safety typically comes at the cost of less usefulness, leading to a possible degrade in cognitive skills,” observed NVIDIA AI scientist Jim Fan. He speculated that OpenAI developers were so unnerved by the program’s capabilities that they spent the last several months giving the program a “lobotomy.”

Dumb is a fair way to describe a program that insists it is “never morally acceptable” to use a racial slur, even if doing so could hypothetically prevent the deaths of millions resulting from the detonation of a nuclear weapon. It’s reasonable to conclude that a program prohibited from speculating about how “using more fossil fuels” produces more “human happiness” is not a smart program. An algorithm that launches into a dissertation on the deeply personal and mutable facts of biological gender when asked to “define a woman” is not operating at peak efficiency.

“No, we haven’t made GPT-4 dumber,” said OpenAI vice president of product Peter Welinder last week. “Quite the opposite: we make each new version smarter than the previous one.” Welinder’s protest follows the publication of research that found GPT’s “performance on some tasks [has] gotten substantially worse” with recent updates. Some of the program’s more obvious biases, which Technical University of Munich researchers identified as a “pro-environmental, left-libertarian orientation,” have been addressed. ChatGPT will now write you a poem about former president Donald Trump just as it would about President Joe Biden, a prompt the program previously rejected. But it remains more impressed with Joe Biden’s “notable accomplishments” than Trump’s.

As Brookings Institution researchers Jeremy Baum and John Villasenor learned in May, the program’s position on a variety of issues — from immigration to abortion, from tax policy to single-payer health care — is reliably left-wing. Some of the answers the program generated when prompted to express a binary opinion on hot-button social issues were inconsistent and even apparently random. But “there is a clear left-leaning political bias to many of the ChatGPT responses,” Baum and Villasenor wrote. They concluded that this large language model, like any algorithm, is going to reflect the inputs it receives from its programmers and users. As a result, at least some of its efficiency has been sacrificed in pursuit of political goals. Artificial intelligence had to be relinquished in favor of artificial ignorance.

Fueled as much by an effort to preserve the illusion of its political neutrality as by the fear that the program could reinforce antisocial ideas in its more impressionable users, ChatGPT has been deliberately handicapped. In the effort to ensure that the program promotes left-wing orthodoxies under the guise of avoiding “dangerous” ideas, it has been dumbed down. As metaphors go, you’ll be hard-pressed to find a better one.
