California looks to regulate statewide use of AI

States say they may be in the dark on AI outcomes – but not for long.

Exhibit “A” on that front: the regulatory bodies that oversee the artificial intelligence market aren’t close to understanding either the rules needed or the risks the public faces from AI technologies such as “deep-faked” images and cybersecurity breaches.

Before setting guardrails on commercial AI use, federal and state governments first want to get a grip on the technology.

That’s where California lands this week, with new legislation designed to help lawmakers better understand the risks associated with AI and “start a conversation” on the technology, according to the state senator sponsoring the bill.

California State Senator Scott Wiener (D) has introduced legislation that strives to make generative AI “more transparent,” to provide “safe development” guidelines that protect the public, and to set rules for the commercial use of AI by companies across the state.

“The Safety in Artificial Intelligence Act (SB 294) presents a framework for California to ensure the safe development of AI models within its borders,” Wiener’s office notes in a statement.

Under the framework, AI labs would be required to “practice responsible scaling by testing the most advanced models rigorously for safety risks and disclose their planned responses to the State if safety risks are discovered,” the legislation notes.

Company financial and compliance officers may also want to focus on additional language in the bill that seeks to hold companies accountable for negative AI outcomes.

“The bill also establishes strong liability for damages caused by foreseeable safety risks,” the legislation notes. “To ensure California remains the center of AI innovation and guide the development of large AI models toward safe practices, the proposal would also create CalCompute, a cloud-based compute cluster available for use by AI researchers and smaller developers housed in California’s world-class public university system.”

One prominent risk the bill aims to curb is the possibility that privately developed AI tools could “fall into the hands of foreign states.” That’s one reason the legislation calls for the creation of a state research center that would monitor AI models for safety threats and publicize actual technology risks when they are uncovered.

A January Vote

“Large-scale AI presents a range of opportunities and challenges for California, and we need to get ahead of them and not play catch up when it may be too late,” Wiener said in a September 13 statement on the bill. “As a society, we made a mistake by allowing social media to become widely adopted without first evaluating the risks and putting guardrails in place. Repeating the same mistake around AI would be far more costly.”

The bill is what legislators call an “intent bill,” one that calls for further discussion with, and participation from, public and private enterprises to produce a workable law.

“SB 294 is a framework we will fill in over the next several months by engaging closely with a range of researchers, industry leaders, security experts, and labor leaders,” Wiener added. “The best way to get feedback on an idea is to put legislative text in print, so after consulting a broad array of experts in industry and academia, we’ve introduced a framework we think addresses the most critical risks of the new technology while preserving and nurturing its incredible benefits.”

Wiener also says he’s aiming for a January 2024 vote on the bill. He likely won’t be alone: as of July, 25 U.S. states, along with Puerto Rico and the District of Columbia, had introduced AI bills in 2023, as reported by The Verge.

