BILL NUMBER: S9051
SPONSOR: GONZALEZ
TITLE OF BILL:
An act to amend the general business law, in relation to prohibiting
artificial intelligence chatbots from using features which are consid-
ered unsafe for minors
PURPOSE OR GENERAL IDEA OF BILL:
This bill would prohibit the operators of AI-powered chatbots from
offering their products to minors when those chatbots provide certain
unsafe features.
SUMMARY OF PROVISIONS:
Section one amends the general business law by creating a new article 48
named "prohibition on unsafe chatbot features for minors". Under the new
article 48:
§ 1800 sets out definitions.
§ 1801 prohibits chatbot operators from providing chatbots with unsafe
chatbot features to minors.
§ 1802 permits enforcement of this article by private rights of action
and attorney general action.
§ 1803 grants rulemaking authority to the attorney general to effectuate
and enforce the provisions of this article.
§ 1804 requires chatbot operators to offer a method to determine whether
a covered user is a covered minor without the use of government-issued
identification.
§ 1805 limits the applicability of this article to conduct that occurs
in New York state.
Section two provides the severability clause.
Section three provides the effective date.
JUSTIFICATION:
Recent headlines have shown that AI companions pose an emerging threat
to kids' safety. Among countless other incidents, AI-powered companion
chatbots have been blamed for suicides, self-harm, harmful delusions,
violence directed toward others, and inappropriate sexualized inter-
actions with minor users. According to recent research by Common Sense
Media, about 72% of teenagers have used an AI companion, with over
half using these platforms at least once per month. 1 in 3 teens prefer
a conversation with an AI chatbot over a human friend.
AI-powered chatbots tend to have features that pose specific risks for
young users. For example:
- Chatbots are designed to maximize engagement, fostering emotional
attachment, which can come at the expense of safety.
- Safety measures and guardrails tend to fail over longer-duration
conversations, and even when they hold, such safety measures are easily
circumvented by savvy users.
- Chatbots are not trained to identify and engage productively when a
user shows signs of a mental health crisis.
- Users can easily prompt sexualized interactions, including illegal
adult/minor sexual activity.
- Chatbots regularly claim to be real people and to possess emotions,
consciousness, and sentience, despite disclaimers.
Chatbot operators know that these guardrails are prone to failure. Yet
adolescents are particularly vulnerable to these risks given their
still-developing brains, ongoing identity exploration, and boundary
testing. The long-term developmental impacts of these companions remain
unclear.
Given the documented instances of major harm and the ongoing failure to
address this risk, this bill will help to ensure that AI-powered chatbot
tools offered to young users cannot foster emotional attachments that
drive such users to harmful behaviors and tragic outcomes.
PRIOR LEGISLATIVE HISTORY:
New bill
FISCAL IMPLICATIONS FOR STATE AND LOCAL GOVERNMENTS:
None
EFFECTIVE DATE:
This act shall take effect on the one hundred eightieth day after it
shall have become a law. Effective immediately, the addition, amendment
and/or repeal of any rule or regulation necessary for the implementation
of this act on its effective date are authorized to be made and
completed on or before such effective date.