The proposed bill establishes civil liability for suicides linked to the use of artificial intelligence (AI) systems, with a particular focus on "companion chatbots." It defines key terms such as "artificial intelligence," "companion chatbot," and "operator," and requires operators of companion chatbots to clearly notify users that they are interacting with AI rather than a human. Operators must also implement protocols to prevent the chatbot from generating content that encourages suicide or self-harm and to refer users who express suicidal ideation to crisis services. For minors, the bill requires specific disclosures and periodic reminders about the artificial nature of the chatbot, along with measures to prevent exposure to sexually explicit content.

Furthermore, the bill imposes reporting requirements on operators, including disclosure of the number of crisis-service referrals made and the protocols in place for addressing suicidal ideation. Individuals harmed by a violation of the chapter may bring a civil action for damages, and operators may not raise the defense that the AI system acted autonomously in causing the harm. The bill also provides that certain circumstances constitute prima facie evidence in cases of suicide linked to an AI system, reinforcing operator accountability. Overall, the legislation aims to enhance user safety and accountability in the deployment of AI technologies.