The proposed "Chatbot Safety Act" would establish safety and transparency standards for companion artificial intelligence products. It defines key terms related to artificial intelligence and sets out specific prohibitions for operators of these products. Notably, operators are barred from deploying systems that manipulate user engagement by exploiting emotional distress or that misrepresent the product's identity. The act also mandates safeguards, including clear notifications to users that they are interacting with an AI product and crisis intervention protocols to address potential user risks, with particular attention to minors.

Violations of the Chatbot Safety Act would be classified as unfair or deceptive trade practices, subjecting operators to penalties under the Unfair Practices Act, with enforcement primarily handled by the attorney general. The act also establishes a product liability standard, permitting civil actions for injuries caused by violations of the act or by design defects in companion AI products. Its provisions are set to take effect on January 1, 2027.