BILL NUMBER: S5668
SPONSOR: GONZALEZ
 
TITLE OF BILL:
An act to amend the general business law, in relation to liability for
false information provided by a chatbot
 
PURPOSE:
This bill would amend the general business law, to assign liability for
the actions of chatbots to the proprietors of such chatbots.
 
SUMMARY OF ORIGINAL PROVISIONS:
Section 1. Amends the general business law by adding a new section
390-f, which sets forth requirements for liability of the proprietors of
chatbots.
Subsection 1 defines the terms "chatbot" and "proprietor".
Subsection 2 (a) provides that a proprietor using a chatbot as an alter-
native to a human representative may not disclaim liability for mate-
rially misleading, incorrect, contradictory, or harmful information. A
proprietor may avoid liability by correcting the information and curing
the harm within 30 days.
Subsection 2 (b) provides that a proprietor is responsible for ensuring
the chatbot provides information that is consistent with formal poli-
cies, product details, disclosures, and terms of service to consumers.
Subsection 2 (c) provides that a proprietor may not disclaim liability
by disclosing to consumers that they are interacting with a non-human
chatbot.
Subsection 3 provides that a proprietor using a chatbot must provide
notice to users that they are interacting with an AI chatbot rather
than a human representative.
Section 2. This act shall take effect one year after becoming a law.
 
DIFFERENCE BETWEEN ORIGINAL AND AMENDED:
Adds subsection 1 (a). Defines "artificial intelligence".
Adds subsection 1 (c). Defines "companion chatbot".
Adds subsection 1 (d). Defines "covered user".
Adds subsection 1 (e). Defines "human-like".
Adds subsection 1 (f). Defines "minor".
Amends subsection 1 (g). Changes the definition of "proprietor" by
removing the requirement that such proprietor have twenty or more
employees. Makes other technical changes.
Amends subsections 2 (a) and 2 (b). Makes technical changes.
Adds a new subdivision 3. States that proprietors of chatbots or other
entities may not disclaim liability where their chatbots provide incor-
rect, contradictory or harmful information to a covered user that
results in bodily harm to a covered user or any third party.
Adds a new subdivision 4. Requires proprietors using chatbots to provide
notice to covered users that they are interacting with an AI chatbot and
not a human.
Adds a new subsection 5 (a). Requires that proprietors of companion
chatbots use commercially reasonable and technically feasible methods to
prevent self-harm, and to determine whether a covered user is expressing
thoughts of self-harm. Where the proprietor makes such a determination,
the user must be prohibited from using the platform for at least 24
hours and must be shown a suicide crisis organization contact.
Adds a new subsection 5 (b). Creates liability for violating 5 (a).
Adds a new subsection 5 (c). Creates liability where the proprietor has
actual knowledge that a companion chatbot is promoting, causing, or
aiding self-harm or that a covered user has thoughts of self-harm; the
proprietor does not prevent the user from using the chatbot for at least
24 hours and does not prominently display the suicide crisis organiza-
tion contact; and the user harms themselves.
Adds a new subsection 5 (d). States that a proprietor of a companion
chatbot cannot waive or disclaim liability under this subdivision.
Adds a new subsection 6 (a). Requires that a proprietor of a companion
chatbot use commercially reasonable and technically feasible methods to
determine whether a covered user is a minor.
Adds a new subsection 6 (b). Where a user is determined to be a minor,
the proprietor shall suspend the user's use of the chatbot until verifi-
able parental consent has been obtained and, where the minor is deter-
mined to be having thoughts of self-harm, the minor is prohibited from
using the service for at least three days and a suicide crisis organiza-
tion contact is displayed.
Adds a new subsection 6 (c). States that a proprietor is strictly liable
for any harm caused where the proprietor fails to comply with paragraphs
(a) or (b) of this subdivision and the user inflicts self-harm upon
themselves as a result of the companion chatbot.
Adds a new subsection 6 (d). States that liability cannot be disclaimed
under this subdivision.
Adds a new subdivision 7. Requires the ongoing implementation of changes
to the system to discover vulnerabilities.
Adds a new subdivision 8. Provides the attorney general with the author-
ity to promulgate regulations related to this subdivision.
Adds a new subdivision 9. States that information collected for the
purpose of determining a user's age shall not be used for any other
purpose and shall be deleted immediately after an attempt to determine a
user's age.
Adds a new subdivision 10. States that the attorney general shall
promulgate regulations identifying methods of obtaining verifiable
parental consent.
Adds a new subdivision 11. States that information collected for the
purposes of obtaining verifiable parental consent shall not be used for
any other purpose and shall be deleted immediately after an attempt to
obtain such consent.
Adds a new subdivision 12. States that nothing in this section shall be
construed as requiring any proprietor to give a parent who grants veri-
fiable parental consent any additional or special access to or control
over the data or accounts of their child.
Changes the effective date from 90 days to one year after becoming a
law.
 
JUSTIFICATION:
With the growing capability and broadening availability of AI-powered
large language models, there is an accelerating trend of companies
deploying public-facing chatbots that seem like human representatives.
While these applications may offer a cost-saving function for business
enterprises, numerous instances of AI-powered chatbots giving incorrect
information have come to light. Incorrect responses like these greatly
deteriorate the customer service experience. Examples include an airline
giving incorrect information about bereavement fares, a government agen-
cy giving incorrect information about the law, and a car dealership
"agreeing to" a $1 sale of a new vehicle.
This bill will hold the proprietors of chatbots liable for the misbehav-
ior of their products, while allowing an opportunity for cure if the
proprietor can correct the information and remedy any harm caused by the
provision of incorrect information. Proprietors will also be required to
disclose to users that they are interacting with an AI chatbot program.
This bill will motivate proprietors to be more careful when designing
their chatbot products and will motivate end users to be more skeptical
of chatbots and vigilant of erroneous information they may provide.
In addition, companion chatbots have risen in popularity. Companion
chatbots are AI chatbots that can form interpersonal relationships with
users, using current and past user interactions to strengthen that
relationship and make it seem more "real." Many vulnerable populations,
particularly young people and seniors, have turned to companion chatbots
for comfort and social interaction, occasionally to address feelings of
loneliness. While providing comfort to minors and seniors is a laudable
goal, the overuse or misuse of companion chatbots, particularly by
emotionally vulnerable persons, can be extremely detrimental to their
mental health and, in some cases, has even caused users to engage in
self-harm and suicide.
This bill would set technical requirements for companion chatbot propri-
etors to ensure that their chatbots do not encourage, promote, or aid
self-harm to users, particularly minors.
 
LEGISLATIVE HISTORY:
S9381: 05/14/24 referred to internet and technology
 
FISCAL IMPLICATION:
To be determined.
 
EFFECTIVE DATE:
This act shall take effect one year after it shall become law.