Artificial Intelligence Poses Regulatory Riddle
Deepfake videos are just one of the abuses that make artificial intelligence creators, government officials and potential targets wary of AI's potential.

EU, Individual States and Christchurch Call Offer Options to Tame AI Abuses

Deepfake videos reflect the dilemma over whether artificial intelligence is a friend or a fiend. Attempts to police AI-generated deepfake porn and political videos show the challenge of regulating this rapidly expanding and evolving technology, as Europe, individual states and a New Zealand collaborative explore options.

States, including Washington and California, have enacted statutes to curb the malicious creation and dissemination of deepfake videos created with AI software. However, the jury is still out on the effectiveness of state-by-state legislation of a technology that is at once global and invisible.

California’s legislative efforts are a case in point. Assembly Bill 602 and Assembly Bill 730, which deal with pornographic and political deepfakes respectively, put enforcement in the hands of victims, who must sue for redress, a difficult challenge because deepfake videos are often posted anonymously from anywhere in the world.

The Washington statute focuses on political deepfakes and enables candidate-victims to “seek injunctive or other equitable relief prohibiting the publication” of so-called “synthetic media.” Good luck with that if the deepfake campaign ad airs right before an election.

Opponents of deepfake legislation point to potential infringement of First Amendment free-speech rights. That argument doesn’t sit well with celebrities, mostly women, whose faces have been grafted onto the bodies of performers in sex videos posted maliciously online.

Politicians aren’t happy either when they appear, falsely, doing something unpopular or out of character. The Trump camp released a so-called “parody” deepfake video of Florida Governor Ron DeSantis’ presidential campaign announcement on Twitter Spaces that featured “appearances” by Democratic donor George Soros, Adolf Hitler and the devil.

Trump was the butt of a farcical deepfake video on YouTube’s Sassy Justice channel that showed the former president improbably telling a story about a reindeer. Former President Barack Obama was spoofed in a satirical deepfake in which actor Jordan Peele closely impersonated his voice and gestures. Hillary Clinton was impersonated by an SNL cast member in a deepfake video so well done that a later analysis could not detect the fakery of her face.

“AI is kind of scary but exciting. We will just have to see where it leads.”

Facial Recognition Software
Facial recognition software, the secret sauce of faking someone’s appearance, has greatly improved from earlier versions that faltered with Black faces. So has the production capability of fake videos, to the point that it may not be instantly obvious that a video is a fake.

The European Parliament has produced draft legislation that seeks to restrict what it considers the riskiest uses of AI by curtailing applications of facial recognition software. European officials have debated AI regulation for two years but were galvanized to action after the release of ChatGPT and other generative AI software.

Congress has been encouraged by AI pioneers and by Sam Altman, chief executive of OpenAI, which produced ChatGPT, to pursue regulation, but that effort lags behind the EU’s progress.

The latest draft version of the EU regulation would impose transparency requirements, including publishing summaries of copyrighted material used to train chatbots and adopting safeguards to prevent the generation of illegal content. AI manufacturers are split over these provisions, which technology developers contend may be “technically infeasible.”

The risk-based approach embraced by the EU is aimed at applications with the greatest potential for human harm, such as AI systems controlling water or energy systems or determining who is entitled to government benefits. AI producers would be on the hook to conduct risk assessments of their applications before putting them into operation.

The trick will be to balance risk avoidance with technological innovation by AI developers. That’s where the use of facial recognition software becomes a critical focus. The draft legislation would ban companies from scraping biometric data from social media sites but would leave open the use of facial recognition software for law enforcement and national security purposes.

The City of Portland became a national leader in banning the use of facial recognition software, based on the technology’s poor track record of misidentifying Black faces. Not all facial recognition software is alike, however, and more advanced versions have improved identification across skin types.

The most serious concern is over facial recognition software that is commercially available to create deepfake videos, especially the kind of intentionally damaging videos that states like Washington are trying to prevent.

Christchurch Call to Action Model
After resigning as New Zealand’s prime minister, Jacinda Ardern has turned her attention to the dilemmas posed by AI. In an op-ed in The Washington Post, Ardern touted the Christchurch Call to Action, a large-scale collaborative effort launched after a 2019 terrorist attack in which 51 Muslims were massacred and the killings were livestreamed for 17 minutes. The video circulated widely on social media; YouTube estimated there was one upload per second.

“Afterward, New Zealand was faced with a choice,” Ardern wrote: “accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.”

“Within two months, we launched the Christchurch Call to Action,” Ardern explained. “Today it has more than 120 members, including governments, online service providers and civil society organizations – united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.”

The collaboration, which includes French participation, has widened its reach to AI. “From its start, the Christchurch Call anticipated challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online,” Ardern said. “Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself.”

“It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress,” according to Ardern. “It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI.”

“I see collaboration on AI as the only option,” Ardern said. “The technology is evolving too quickly for any single regulatory fix. Solutions need to be dynamic, operable across jurisdictions, and able to quickly anticipate and respond to problems. There’s no time for open letters. And government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.”

Ardern’s hopeful conclusion: “Together, we stand the best chance to create guardrails, governance structures and operating principles that act as the option of least regret. We don’t have to create a new model for AI governance. It already exists, and it works. So let’s get on with it.”

Last Beatles Album
AI has many beneficial uses that we already experience in everyday life. Its creative potential poses both risks and rewards. Paul McCartney just announced he used AI to retrieve a pure recording of John Lennon’s voice singing Now and Then from a demo tape; the song will headline what he called the Beatles’ last album.

Now and Then was considered a reunion song for the famous band in 1995. George Harrison said he didn’t like it, and the band dropped it. Yoko Ono, Lennon’s widow, provided the demo. “It didn’t have a very good title and it needed a bit of reworking,” McCartney said, “but it had a beautiful verse and it had John singing it.”

As for AI technology, McCartney said it is “kind of scary but exciting.” He added, “We will just have to see where it leads.”