The Artificial Intelligence Consumer Protection Act establishes guidelines and protections for consumers interacting with high-risk artificial intelligence (AI) systems. Effective February 1, 2026, the Act requires developers and deployers of such systems to use reasonable care to protect consumers from algorithmic discrimination. Developers must provide documentation describing a system's intended uses, limitations, and risks, while deployers must conduct impact assessments and notify consumers when an AI system makes a significant decision about them. The Act also permits developers to designate certain information as proprietary, exempting it from disclosure, and clarifies that compliance with other applicable laws may satisfy the Act's documentation requirements.

Enforcement of the Act rests with the Attorney General, who must issue a notice of violation before taking action against a developer or deployer. The Act provides an affirmative defense for those who demonstrate compliance with a recognized risk management framework, and it expressly creates no private right of action for individuals, so enforcement is solely the Attorney General's responsibility. Additionally, disclosure is not required when it would be obvious to a reasonable person that they are interacting with an AI system. The Act also outlines various exemptions for developers and deployers, intended to protect consumer safety without infringing on individual rights.