California Bans Political ‘Deepfakes’ during Election Season

California governor Gavin Newsom (D., Calif.) reacts as he speaks to the members of the press after the first presidential debate hosted by CNN in Atlanta, Ga., June 27, 2024. (Marco Bello/Reuters)

California governor Gavin Newsom signed three new laws on Tuesday to regulate or outright ban politically charged “deepfakes” powered by artificial intelligence weeks ahead of the 2024 presidential election.

One of the laws took effect immediately, prohibiting the knowing distribution of deceptive election-related deepfake videos, images, or audio. Like election-related deepfake legislation in other states, the law can be enforced starting 120 days before an election, but it goes a step further by remaining in effect until 60 days after the election is held. The bill was enacted amid concerns about how AI-generated content could affect the 2024 election.

Newsom vowed to sign the legislation in July after Elon Musk reposted a deepfake video that mimicked Vice President Kamala Harris’s voice in a campaign ad without disclosing that it had been manipulated, seemingly in violation of X’s policies. The altered ad showed the Democratic presidential nominee admitting that President Joe Biden is senile and that she is the “ultimate diversity hire.”

Newsom promised to prohibit such practices regardless of which presidential candidate a deepfake targeted.

“I could care less if it was Harris or Trump,” Newsom said at an AI conference in San Francisco on Tuesday. “It was just wrong on every level.” He added the bills were “long overdue.”

The Democratic governor signed a second bill requiring disclaimers on AI-generated political ads so that voters are not deceived. A third requires social-media companies with more than 1 million users in California to label or remove deepfakes within 72 hours of receiving a complaint; if they fail to do so, a court can order them to comply. Both of these laws take effect in January.

Musk responded to the slate of new laws late Tuesday, saying Newsom effectively made “parody illegal” in California because of an altered video shared on X.

The laws exempt satiric content so long as it is clearly labeled.

Newsom has made it his mission to crack down on the spread of AI in his state, and not only in politics. On Tuesday, he also signed two laws governing Hollywood's use of deepfake technology. One requires consent to use deepfakes of actors and performers, and the other prohibits digital replicas of a dead performer's voice or likeness in commercial media without the estate's permission.

Meanwhile, Newsom has yet to sign a separate bill, SB 1047, that would require tech companies to test the safety of AI tools before releasing them. If signed, the measure would also allow California's attorney general to sue tech companies whose technology causes serious harm to the public. The AI-safety legislation has divided tech leaders in Silicon Valley.

While Meta and Google have argued that it would deter AI development, Musk threw his support behind the bill, despite his opposition to Newsom's election-related deepfake measures.

“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” the owner of X wrote on the social-media platform last month. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

David Zimmermann is a news writer for National Review. Originally from New Jersey, he is a graduate of Grove City College and currently writes from Washington, D.C. His writing has appeared in the Washington Examiner, the Western Journal, Upward News, and the College Fix.