The proposed bill, known as the Conversational Artificial Intelligence Safety Act, aims to strengthen consumer protections around conversational artificial intelligence (AI) services. It defines key terms such as "account holder," "operator," and "minor," and sets out operators' responsibilities toward minor account holders. Operators must disclose to minors that they are interacting with AI, implement measures to prevent harmful content, and provide tools for managing privacy settings. The bill also requires operators to respond appropriately to user prompts indicating suicidal ideation and prohibits them from misrepresenting their services as professional mental health care.
The bill authorizes the Attorney General to enforce its provisions through civil actions against operators who violate the act. It specifies potential penalties, including civil fines of $1,000 to $500,000 per violation, and outlines the types of relief that may be sought. Importantly, the act does not create a private right of action for individuals, and it limits the liability of AI model developers for violations committed by third-party systems. The act is set to become operative on July 1, 2027.