A deepfake video of former President Obama in 2018 was a harbinger of a new threat to elections in the form of deceptive political ads.

AI Makes Creating Digital Avatars to Deceive Voters as Easy as Sending Email

U.S. elections are already under a lot of pressure from election deniers. Now election officials are facing a daunting new challenge – AI-aided political deepfake advertising.

The stakes are high. One observer described AI-aided deepfakes as fake people dishing out real disinformation. AI has made creating “digital avatars” “as easy as sending email,” according to one AI developer. In politics, digital twins of real politicians can say untrue things intended to generate viral social media mayhem.

Deepfake videos sprang up during the Obama presidency. Now artificial intelligence has made them easier to produce and harder to detect. They will be a politician’s worst nightmare and turn election officials into video detectives.

The role of detectives is to follow leads on criminal acts. Deepfake videos may be political dirty pool, but they aren’t illegal everywhere. Only a few states have enacted laws regulating or banning political deepfake videos. Federal legislation hasn’t passed and the Federal Election Commission has deadlocked over whether it should regulate them.

Senate Majority Leader Chuck Schumer and a bipartisan cross-section of senators hosted a closed-door AI Insight Forum with technology leaders, including Elon Musk and Sam Altman, to discuss AI and the need for regulation. Their discussion included the European Union’s AI Act, which some observers argue should be a model for U.S. regulation. A central element of the EU’s legislation is consistent standards for all AI systems.

The Biden White House has issued what it calls a “Blueprint for an AI Bill of Rights”.

State Action on AI
Texas was the first state to address AI in the political sphere in 2019. California, Washington and Minnesota followed suit. Michigan just passed measures to regulate the use of AI and deepfakes in political communications. Legislation has been introduced in Illinois, New Jersey, New York and Wisconsin. Oregon doesn’t have legislation banning or regulating AI-aided political advertising.

Over the summer, the National Conference of State Legislatures published a report stressing the importance of adopting best AI practices in laws and regulations, warning that without them “there will be a race to the bottom for AI if no guardrails are offered”. Toward that end, 12 states, including Washington and Oregon, have tasked governmental agencies with gaining a better understanding of AI capabilities.

Governor Kotek recently signed an executive order forming a 15-member AI advisory council to explore how Oregon can employ or regulate AI technology. “Artificial intelligence is an important new frontier, bringing the potential for substantial benefits to our society, as well as risks we must prepare for,” Kotek said. “This rapidly developing technological landscape leads to questions that we must take head-on, including concerns regarding ethics, privacy, equity, security and social change.” Though she didn’t single out AI in politics, it was implied.

A similar AI task force in Vermont morphed into a new state agency that will conduct yearly inventories of the use and impacts of AI systems within state government.

Deepfake Political Ads
As the blueprints, studies and task forces take shape, election officials face the daunting task next year of dealing with AI-aided political deepfake advertising. The discovery of deepfakes should be relatively easy as aggrieved politicians shout, “That’s not me.” Without a legal basis to force them off the air, election officials in Oregon may be powerless to do anything but condemn them.

Television stations could decide to pull them, or they could require advertisers to verify that ads don’t contain deepfakes, whether or not they are generated by AI. Social media platforms could also decide to remove deepfakes if they are uncovered, but only after they have already circulated.

The prospect of deepfakes will require close inspection of negative political advertising, which seeks to show an opponent in a bad or embarrassing light. That may not require a digital avatar, just some tinkering with voices. AI has advanced voice cloning capabilities that can capture the distinct tonality and emotional nuances of someone’s voice. Microsoft’s new AI system claims it can simulate a voice with only three seconds of audio.

Prosecuting Deepfake Political Ads
State attorneys general may be forced to discourage would-be deepfake artists by threatening legal action under data privacy statutes. Earlier this year, Kotek signed the Oregon Consumer Privacy Act, which goes into effect July 1, 2024. That might require legal interpretation that someone’s image or voice is included in the definition of “personal data” and that the statute’s reach can stretch to include political advertising intended for mass consumption.

A potentially more straightforward avenue of prosecution would involve Oregon statutes dealing with fraud and deception. Under these laws, a deepfake would be treated like a forgery by someone who made a deepfake representation they knew was false.

The challenge with online deepfake videos is pinpointing who is responsible and stopping them before they spread widely. The time between when a politician cries foul and authorities verify a deepfake and take steps to remove it can be a political eternity, and certainly enough time for the misrepresentation to achieve its purpose.

“AI-aided political deepfake videos are fake people dishing out real disinformation.”

The AI statutes in California and Washington directly address media manipulation. California prohibits the distribution of materially deceptive media, which is defined as images, videos or audio depicting a candidate for office that “falsely appear . . . to be authentic” and transmit a “fundamentally different understanding or impression” than reality – with the intent to injure a candidate’s reputation or deceive voters.

The statute has limitations. A prosecutor must show deceptive intent. The ban applies only to depictions of a candidate distributed within 60 days of an election, and the law permits media that clearly discloses the content was manipulated. That could make the job facing California election officials in dealing with deepfakes a whole lot messier.

Washington’s statute is similar to California’s. The law in Texas is more clear-cut, banning media created “to depict a real person performing an action that did not occur in reality.” The Texas ban presumably would apply equally to images altered by AI or Photoshop.

Laws in Texas and Minnesota prohibit media created with the “intent to injure a political candidate or influence the result of an election,” regardless of its subject. That could give standing to election officials to go after critics who misrepresent their conduct in distributing, collecting and counting ballots.

Deepfakes Deepen Distrust
Distrust of elections and election officials is already high. How election officials deal with political deepfakes has the prospect of deepening distrust or, more optimistically, earning praise from their electorate by exposing actual fakes.