Digest: Tells those who make AI software to tell users that the users are talking to software, not a human. Tells them they must try to prevent users from getting output that causes suicidal feelings or thoughts. (Flesch Readability Score: 71.0).
Requires operators of artificial intelligence companions and artificial intelligence companion platforms to provide notice to users that the users are interacting with artificial output if a reasonable person who interacts with the artificial intelligence companion or artificial intelligence companion platform would believe that the person was interacting with a natural person. Requires the operators to have in place a protocol for detecting suicidal ideation or intent or self-harm ideation or intent and to prevent output that could cause such ideation or intent in users. Specifies minimum contents of the protocol, including referral to an appropriate crisis lifeline and additional intervention informed by clinical best practices and expertise. Requires an operator to make certain statements and disclosures if the operator has reason to believe that a user who interacts with the operator's artificial intelligence companion or artificial intelligence companion platform is a minor. Requires the operator to take reasonable steps to prevent the artificial intelligence companion from generating statements that would lead a reasonable person to believe that the person was interacting with a natural person and to require the artificial intelligence companion to make certain other statements. Requires an operator to post a report each year on a publicly accessible website that discloses incidents in which the operator referred a user to resources to prevent suicidal ideation, suicide or self-harm.
Allows a user who suffers ascertainable harm to bring an action for damages and injunctive relief.