This bill establishes new requirements for providers of artificial intelligence chatbots regarding the mental health of users. It defines key terms such as "artificial intelligence chatbot," "mental health advice," and "provider." The bill prohibits providers from designing or operating chatbots that offer or simulate professional mental health advice or represent themselves as licensed professionals. Additionally, it mandates that providers implement protocols for detecting expressions of self-harm, suicidal ideation, or emotional distress, requiring chatbots to refer users to appropriate crisis services upon detection.

Furthermore, the bill requires chatbots to disclose clearly and conspicuously that they are not human and are not a substitute for professional mental health care. This disclosure must occur at the beginning of user interactions, at regular intervals, and whenever mental health topics are discussed. The attorney general is granted enforcement authority, and violations are classified as unfair practices under consumer fraud law, subject to civil penalties. The bill also clarifies that educational institutions and libraries are not liable merely for providing access to general-use software or the internet, and it directs the Department of Health and Human Services to adopt rules implementing the bill.

Statutes affected:
Introduced: 714.16