Existing law requires, on or before September 1, 2024, the Department of Technology to conduct, in coordination with other interagency bodies as it deems appropriate, a comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, any state agency. The California AI Transparency Act requires a covered provider, as defined, of a generative artificial intelligence (GenAI) system to offer the user the option to include, in image, video, or audio content, or content that is any combination thereof, created or altered by the covered provider's GenAI system, a manifest disclosure that, among other things, identifies the content as AI-generated.
This bill would establish a process by which the Attorney General designates, for a renewable period of 3 years, a private entity as a multistakeholder regulatory organization (MRO) if that entity meets certain requirements, including that the entity presents a plan that ensures acceptable mitigation of risk from any MRO-certified artificial intelligence models and artificial intelligence applications. The bill would require an applicant for designation by the Attorney General as an MRO to submit with its application a plan that contains certain elements, including the applicant's approach to mitigating specific high-impact risks, among them cybersecurity; chemical, biological, radiological, and nuclear threats; malign persuasion; and artificial intelligence model autonomy and exfiltration.
This bill would require an MRO to perform various responsibilities related to certifying the safety of artificial intelligence models and artificial intelligence applications, including decertifying an artificial intelligence model or artificial intelligence application that does not meet the requirements prescribed by the MRO and submitting an annual report to the Legislature and the Attorney General that addresses, among other things, the adequacy of existing evaluation resources and mitigation measures to address observed and potential risks.
This bill would provide that in a civil action asserting claims for personal injury or property damage caused by an artificial intelligence model or artificial intelligence application, it is an affirmative defense to liability that the artificial intelligence model or artificial intelligence application in question was certified by an MRO at the time of the plaintiff's injuries.

Statutes affected:
SB 813: 2570.18.5 BPC
02/21/25 - Introduced: 2570.18.5 BPC
03/26/25 - Amended Senate: 2570.18.5 BPC