SPONSOR: Miller
COMMITTEE ACTION: Voted "Do Pass with HCS" by the Standing Committee on Emerging Issues by a vote of 10 to 0. Voted "Do Pass" by the Standing Committee on Rules-Administrative by a vote of 9 to 0.
The following is a summary of the House Committee Substitute for HBs 1746 & 1769.
This bill establishes the "AI Nonsentience and Responsibility Act".
This bill defines "artificial intelligence" or "AI", "deployer", "developer", "emergent properties", and "end-user"; and removes definitions for "manufacturer" and "owner".
The bill defines a "person" as a natural person or any entity recognized as having legal personhood under the laws of the State, explicitly excluding any AI system.
This bill states that, for all purposes under State law, AI systems are declared to be nonsentient entities. As a result, no AI system will be granted the status of, or recognized as, any of the following:
(1) A person or the holder of any form of legal personhood, nor will an AI system be considered to possess consciousness, self-awareness, or similar traits of living beings;
(2) A spouse or domestic partner, or the holder of any personal legal status similar to marriage. Any attempt to marry or create a personal union with an AI system will be void and hold no legal effect;
(3) An officer, director, manager, or similar role within any corporation, partnership, or other legal entity. Any appointment of an AI system to such a role will be void and hold no legal effect; or
(4) An owner, controller, or holder of title to any form of property, including real estate, intellectual property, financial accounts, and digital assets. All such assets associated with AI must be attributed to a human individual or a legally recognized organization that is responsible for the AI's development, deployment, or operation.
Any harm caused by an AI system, when used as intended or misused, is the responsibility of the deployer or user who directed or employed the AI.
A developer or deployer of AI may be held liable if harm is caused by a defect in the design, construction, or instructions for use of the AI system. However, mere misuse or intentional wrongdoing by the user will not impute liability to the developer or deployer absent proof of negligence or a design defect, nor will wrongdoing by the developer impute liability to the deployer. In cases of negligence, the developer or deployer will be absolved of any criminal wrongdoing or punitive damages, provided that the standards established in this bill were followed.
Developers and deployers of AI must maintain oversight and control over any AI system that could reasonably be expected to impact human welfare, property, or public safety. Failure to provide adequate supervision or safeguards may constitute negligence on the part of the deployer. Proper oversight is demonstrated by adherence to Missouri standards for AI risk management based on the National Institute of Standards and Technology, as described in the bill.
This bill further states that an AI system is not an entity capable of bearing fault or liability in its own right; any attempt to shift blame solely onto an AI system will be void, and liability will remain with the human actors or entities in control of the AI system.
Safety mechanisms designed to prevent harm to individuals or property must be prioritized by developers and deployers of AI systems.
The mere labeling of an AI system as "aligned," "ethically trained," or "value locked" will not excuse the deployer or developer from liability for harm caused. Developers remain responsible for demonstrating adequate safety features commensurate with the AI system's potential for harm.
Deployers and developers of AI systems that cause significant bodily harm, death, or major property damage must promptly notify the relevant authorities and comply with any subsequent investigations. A developer or deployer that has formally adopted a policy to comply with the provisions of this bill before August 28, 2026, will not be subject to the requirements of this bill before March 1, 2027.
This bill is similar to HB 1462 (2025).
The following is a summary of the public testimony from the committee hearing. The testimony was based on the introduced version of the bill.
PROPONENTS: Supporters say that, as technology advances rapidly, it is important to ensure that AI is not in any way regarded as something like a human or person. Supporters further state that people can easily mistake AI-generated content for something they believe to be real. Because this new technology is expanding so quickly, our understanding of its dangers and consequences likely lags far behind our actual usage of it. Supporters also state that the provisions of this bill are a necessary proactive step to regulate AI.
Testifying in person for the bill were Representative Miller; Elizabeth Kayser, Kayser And Associates LLC.
OPPONENTS: There was no opposition voiced to the committee.
Written testimony has been submitted for this bill. The full written testimony and witnesses testifying online can be found under Testimony on the bill page on the House website.