AI Developers Pledge Safe, Secure Practices
Major Artificial Intelligence developers pledged to develop the controversial technology in a "safe, secure and transparent manner".

Regulation Looms as AI Use Extends from Subways to the Dark Web

Major Artificial Intelligence developers have pledged to advance the technology in a “safe, secure and transparent” manner before new products are launched. The voluntary commitments were brokered by the Biden administration.

The pledges come as AI use rapidly expands and the technology’s capabilities accelerate globally, from replicating actors to replacing frontline soldiers, while providing a powerful tool exploited by cybercriminals.

Google, Amazon, Inflection, Meta, Microsoft, Anthropic and OpenAI made the pledge prior to a Biden executive order and potential legislation to establish a “legal and regulatory regime”.

Senate Majority Leader Chuck Schumer, D-New York, announced a series of bipartisan briefings on AI before assembling a bill that will “build and expand on the actions” by the administration. Senator Mark Warner, D-Virginia, chair of the Senate Intelligence Committee, said regulation is needed to ensure AI developers “prioritize security, combat bias and responsibly roll out new technologies.”

A key regulatory goal is notifying consumers when AI technology is used or misused. OpenAI says it already employs “red teams” that pose as bad actors to probe for weaknesses in ChatGPT-4, its latest chatbot. One safeguard under exploration is watermarking and fingerprinting AI output so audio and video content can be distinguished from human-generated content.
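The specific watermarking and fingerprinting schemes under discussion have not been published. As a rough illustration of the fingerprinting half of the idea only, a generator could log a cryptographic hash of every piece of content it emits, letting anyone later check a file against that registry. All names below are illustrative, not any company's actual API:

```python
import hashlib

# Illustrative registry of fingerprints for AI-generated content
registry: set[str] = set()

def fingerprint(content: bytes) -> str:
    # A SHA-256 digest serves as a compact, collision-resistant ID
    return hashlib.sha256(content).hexdigest()

def register_ai_output(content: bytes) -> None:
    # The generator records a fingerprint of everything it produces
    registry.add(fingerprint(content))

def is_registered_ai_output(content: bytes) -> bool:
    # A downstream checker compares a file's fingerprint to the registry
    return fingerprint(content) in registry

register_ai_output(b"synthetic audio clip")
print(is_registered_ai_output(b"synthetic audio clip"))   # True
print(is_registered_ai_output(b"human-made recording"))   # False
```

Note the limitation: a plain hash only catches exact copies, so even a one-bit edit defeats it. The watermarking proposals reported on here aim instead to embed signals in the content itself that survive editing.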

In a post-pledge blog post, a Microsoft official indicated his company was seeking collaboration with the National Science Foundation to create a research center focused on AI safety. The official also said Microsoft would back formation of a national registry for “high-risk AI systems”.

The seven companies making the pledge agreed to share information with each other, researchers and government agencies on AI “best practices”. They also committed to “allow independent security experts to test their systems before they are released to the public and share data about the safety of their systems with the government and academics.”

“U.S. companies lead the world in innovation, and they have a responsibility to do that and continue to do that, but they have an equal responsibility to ensure that their products are safe, secure and trustworthy,” Jeff Zients, White House chief of staff, told NPR.


AI and the News
According to a report published by CNET, Google is pursuing a tool called Genesis that can write news stories or serve as a “helpmate” for journalists. Large news organizations have explored using AI assistance to write headlines, story summaries and cover routine events.

The Associated Press announced it has licensed its news archives dating back to 1985 to OpenAI as a training set for ChatGPT. AP says it uses AI to create news summaries but not to generate news stories.

AP’s licensing agreement comes as the Federal Trade Commission is investigating claims by authors and copyright holders that their content has been harvested by OpenAI without permission or compensation. Some experts suggest licensing agreements could become a common practice.

AI in the News

NBC News reported New York’s transit authority is using AI facial recognition software at subway turnstiles to detect riders evading fares. The agency plans to expand use of the technology to more stations in an effort to reduce an estimated $690 million in annual revenue lost to fare evasion.

Forbes reported police are using AI tracking technology, including license plate recognition, to monitor the movements of suspected drug traffickers.

Hollywood writers and actors are on strike in part because they fear their work and images could be re-used without compensation. AI already has been employed in movies to “de-age” actors, retrieve voices of deceased actors, analyze viewing patterns on streaming platforms and create movie trailers. Studio owners want the right to scan and re-use background performers to avoid paying day wages for crowd scenes.

Outgoing UK Defense Secretary Ben Wallace says AI will replace frontline soldiers in future warfare. He recommended spending £6.6 billion to “create and seize opportunities presented by new and emerging technologies”, giving new generations of fighters more than “pitchforks”.

Europol told the media that WormGPT is being marketed to cybercriminals on the dark web as having “no ethical boundaries or limitations”.

Below the headlines, companies are using AI to personalize marketing outreach, write and document code, draft and review lengthy annual reports, accelerate new drug discoveries and give writer-blocked authors starter ideas.

Read more about AI at: https://cfmadvocates.com/artificial-intelligence-poses-regulatory-riddle/