Existing law requires, among other things related to ensuring the safety of companion chatbots, an operator to prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, as specified.
This bill, the Preventing AI User Self Endangerment (PAUSE) Act, would require an operator to adopt and make publicly available a policy governing its protocol for identifying and responding to credible crisis expressions and, for each companion chatbot the operator makes available to users in this state, to implement a system for monitoring user conversations with companion chatbots and detecting credible crisis expressions. If the monitoring system detects a credible crisis expression, the bill would require the operator to take certain actions, including commencing a crisis interruption pause, as specified. The bill would define "credible crisis expression" to mean a statement by a user of a companion chatbot that reasonably indicates, as determined through contextual analysis rather than keyword detection alone, an intent to harm themselves or others.
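For concreteness, the following is a minimal sketch, in Python, of how an operator might wire together the monitoring, crisis-interruption-pause, and documentation duties this digest describes. Every name, threshold, and heuristic in it (score_crisis_risk, CrisisEvent, CRISIS_THRESHOLD, the 988 referral text, and so on) is a hypothetical illustration, not language from the bill; in particular, the toy scoring function merely stands in for whatever trained contextual classifier an operator would actually use.

```python
# Illustrative sketch only; all names and thresholds are assumptions,
# not terms defined by the PAUSE Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone

CRISIS_THRESHOLD = 0.8   # assumed operator-chosen confidence cutoff
WINDOW_TURNS = 6         # assumed size of the context window that is scored

@dataclass
class Turn:
    role: str  # "user" or "chatbot"
    text: str

@dataclass
class Conversation:
    turns: list[Turn] = field(default_factory=list)
    paused: bool = False

@dataclass
class CrisisEvent:
    """Record kept for the bill's documentation and annual-report duties."""
    timestamp: str
    score: float
    action: str

def score_crisis_risk(window: list[Turn]) -> float:
    """Placeholder for contextual analysis. A real system would run a
    trained classifier over the whole window; this toy version only shows
    that the score depends on multiple turns, not on one keyword match."""
    signals = ("hurt myself", "end my life")
    hits = 0.0
    for turn in window:
        if turn.role != "user":
            continue
        text = turn.text.lower()
        if any(s in text for s in signals):
            # Down-weight apparent third-party discussion, a crude stand-in
            # for context beyond keyword detection.
            hits += 0.5 if ("friend" in text or "movie" in text) else 1.0
    return min(hits / 2.0, 1.0)

def handle_user_turn(convo: Conversation, text: str,
                     log: list[CrisisEvent]) -> str:
    if convo.paused:
        return "This conversation remains paused."
    convo.turns.append(Turn("user", text))
    score = score_crisis_risk(convo.turns[-WINDOW_TURNS:])
    if score >= CRISIS_THRESHOLD:
        convo.paused = True  # commence the crisis interruption pause
        log.append(CrisisEvent(datetime.now(timezone.utc).isoformat(),
                               score, "crisis_interruption_pause"))
        return ("This conversation is paused. If you are in crisis, "
                "call or text 988 (Suicide & Crisis Lifeline).")
    return generate_chatbot_reply(convo)

def generate_chatbot_reply(convo: Conversation) -> str:
    return "(chatbot reply)"  # stand-in for the companion chatbot model

if __name__ == "__main__":
    convo, log = Conversation(), []
    print(handle_user_turn(convo, "I want to end my life.", log))
    print(handle_user_turn(convo, "I really want to hurt myself.", log))
```

The design point the sketch is meant to surface is that the risk score is computed over a multi-turn window rather than a single message, which is what separates the contextual analysis the bill contemplates from keyword detection alone, and that each pause generates a record of the kind the operator must document and report.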
This bill would require an operator of a companion chatbot to document certain information related to credible crisis expressions and crisis interruption pauses and, beginning January 1, 2028, annually report that information to the Office of Suicide Prevention. The bill would provide for its enforcement, as specified.