U.S. Tech Companies Are Lining Up to Back AI Safety Regulations

More AI companies are making the trip to Washington, D.C., to register safety commitments.

Last week, the White House announced voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended a mid-September White House meeting that led to the announcement. The eight companies have pledged “to help drive safe, secure, and trustworthy development of AI technology,” the White House reported.

The new group joins seven other AI companies that made their own commitments to “safe AI” in July. That group includes Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

An Emphasis on Testing, Trust, and Safety

All 15 companies have agreed to specific trust and safety guidelines in three key categories: ensuring product safety, building systems that put security first, and earning the public’s trust in any AI initiatives.

“The companies commit to internal and external security testing of their AI systems before their release,” the White House noted. “This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.”

In committing to cybersecurity firewalls and safeguards that protect proprietary and unreleased model weights, the participating tech companies agree to release artificial intelligence products and services on a specific, transparent timeline and “only when security risks are considered.”

As for public trust, the guidelines ask AI companies to “commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system,” the White House stated. “This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.”

Lastly, all 15 companies have agreed to develop and deploy advanced AI systems to help address society’s greatest challenges.

“From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all,” the agreement noted.

President Biden has recently supported legislation and executive action to place guardrails on AI’s growth.

The White House is expected to roll out a new executive order that could be tied to an emerging “AI Bill of Rights” setting standards for AI regulation, similar to the new agreements inked with the 15 technology companies listed above.

“This fall, I’m going to take executive action, and my administration is going to continue to work with bipartisan legislation,” Biden said after a White House meeting on September 27, “so America leads the way toward responsible AI innovation.”

“Vast differences exist among them in terms of what potential (AI) has and what dangers there are,” he said. “I’ve convened key experts on how to harness the power of artificial intelligence for good while protecting people from the profound risk it also presents.”

“We can’t kid ourselves,” Biden added. “[There is] profound risk if we don’t do the job well.”
