Existing law requires the Department of Technology to conduct, in coordination with other interagency bodies as it deems appropriate, a comprehensive inventory of all high-risk automated decision systems that have been proposed for use, development, or procurement by, or are being used, developed, or procured by, any state agency.
Existing law defines "automated decision system" as a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence (AI) that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons. Existing law defines "artificial intelligence" as an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
Existing law, the Generative Artificial Intelligence Accountability Act, among other things, requires the Department of Technology, under the guidance of the Government Operations Agency, the Office of Data and Innovation, and the Department of Human Resources, to update the report to the Governor, as required by Executive Order No. N-12-23, as prescribed, and requires the Office of Emergency Services to perform, as appropriate, a risk analysis of potential threats posed by the use of generative AI to California's critical infrastructure, including those that could lead to mass casualty events.
This bill would declare the intent of the Legislature to enact legislation that would establish safeguards for the development of AI frontier models and that would build state capacity for the use of AI, which may include, but is not limited to, the findings of the Joint California Policy Working Group on AI Frontier Models established by the Governor.