White House Shifts to Executive Order on AI Oversight

The White House has been vocal about installing safeguards on artificial intelligence in 2023, convening multiple onsite meetings with technology leaders to establish regulatory ground rules.

In late September, for example, the White House announced voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended a mid-September White House meeting that led to the announcement. The eight companies have pledged “to help drive safe, secure, and trustworthy development of AI technology,” the White House reported.

The new group joins seven other AI technology companies that made their own commitments to “safe AI” in July: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

Now, roughly a month later, the White House is taking another big step: curbing bad actors and fraudsters who may use AI to take advantage of consumers, and holding companies that deploy AI tools and applications to regulatory standards.

On October 30, President Joe Biden issued an executive order that sets new standards for AI “safety and security,” according to the White House. The executive order builds on previous actions the President has taken, “including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development” of AI.

Here’s what the executive order does:

— Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.

“In accordance with the Defense Production Act, the order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests,” the EO states. “These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.”

— Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
The National Institute of Standards and Technology “will set the rigorous standards for extensive red-team testing to ensure safety before public release,” the White House noted. “The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board.”

— Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.

Agencies that fund life-science projects will “establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI,” the EO added.

— Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.

“The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content,” the EO stated. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

— Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff.

This document will “ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI,” the EO noted.

Fighting “Irresponsibility”

These and ensuing private sector safeguards are meant to protect user privacy in the development and use of AI tools, but that won’t be easy, and it won’t happen overnight.

“Without safeguards, AI can put Americans’ privacy further at risk,” the EO stated. “AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”

The actions that President Biden directed today are “vital steps forward in the U.S.’s approach on safe, secure, and trustworthy AI,” the EO concluded. “More action will be required, and the administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”
