HB 1462 -- AI NON-SENTIENCE AND RESPONSIBILITY ACT

SPONSOR: Amato

This bill establishes the "AI Non-Sentience and Responsibility Act".

The bill defines a "person" as a natural person or any entity recognized as having legal personhood under the laws of the state, explicitly excluding any artificial intelligence (AI) system.

This bill states that for all purposes under state law, AI systems are declared to be non-sentient entities. As a result, no AI system may be granted the status of, or be recognized as, any of the following:

(1) A "person," or any form of legal personhood, nor be considered to possess consciousness, self-awareness, or similar traits of living beings;

(2) A spouse, domestic partner, or holder of any personal legal status similar to marriage. Any attempt to marry or create a personal union with an AI system will be void and have no legal effect;

(3) An officer, director, manager, or holder of a similar role within any corporation, partnership, or other legal entity. Any appointment of an AI system to such a role will be void and have no legal effect; or

(4) An owner, controller, or holder of title to any form of property, including real estate, intellectual property, financial accounts, and digital assets. All such assets associated with AI must be attributed to a human individual or a legally recognized organization that is responsible for the AI's development, deployment, or operation.

Any harm caused by an AI system, whether the system was used as intended or misused, is the responsibility of the owner or user who directed or employed the AI.

A developer or manufacturer of AI may be held liable if harm is caused by a defect in design, construction, or instructions for use of the AI system. However, mere misuse or intentional wrongdoing by the user or owner will not impute liability to the developer or manufacturer absent proof of negligence or design defects.

Owners of AI must maintain oversight and control over any AI system that could reasonably be expected to impact human welfare, property, or public safety. Failure to provide adequate supervision or safeguards may constitute negligence on the part of the owner.

This bill further states that an AI system is not an entity capable of bearing fault or liability in its own right, and any attempt to shift blame solely onto an AI system will be void; liability remains with the human actors or entities in control of the AI system.

Safety mechanisms designed to prevent harm to individuals or property must be prioritized by developers, manufacturers, and owners of AI systems.

The mere labeling of an AI system as "aligned," "ethically trained," or "value locked" will not relieve the owner or developer of liability for harms caused. Owners remain responsible for demonstrating adequate safety features commensurate with the AI's level of potential harm.

If an AI system causes significant harm, a court may pierce the corporate veil to hold parent companies, controlling entities, or key stakeholders directly accountable if:

(1) An AI-related subsidiary, shell company, or limited liability entity was intentionally under-capitalized to evade financial responsibility for damages;

(2) A corporate structure was used to misrepresent, obscure, or deflect liability for AI harm; or

(3) A parent company or key stakeholders exercised direct control over AI development, deployment, or risk decisions while attempting to shield themselves from liability through layered corporate entities.

Any protections from liability that are currently granted to corporations shall not be used to evade responsibility for harm caused by AI systems, particularly in the case of reckless, negligent, or deceptive conduct.

Owners and developers of AI systems that cause significant bodily harm, death, or major property damage shall promptly notify the relevant authorities and comply with any subsequent investigations.

Statutes affected:
Introduced (2795H.01): 1.2045