Digest: Tells those who make AI software to tell users that the users are talking to software, not a human. Tells them they must try to prevent users from getting output that causes suicidal feelings or thoughts. (Flesch Readability Score: 71.0).
Requires operators of artificial intelligence companions and artificial intelligence platforms to provide notice to users that the users are interacting with artificial output. Requires the operators to have in place a protocol for detecting suicidal ideation or intent, or self-harm ideation or intent, and to prevent output that could cause such ideation or intent in users. Specifies minimum contents of the protocol. Requires an operator to make certain statements and disclosures if the operator has reason to believe that a user who interacts with the operator's artificial intelligence companion or artificial intelligence platform is a minor, and prohibits the operator from causing the artificial intelligence companion to perform certain actions. Requires an operator to report each year to the Oregon Health Authority concerning incidents in which the operator referred a user to resources to prevent suicidal ideation, suicide or self-harm.
Allows a user who suffers ascertainable harm to bring an action for damages and injunctive relief.