The Artificial Intelligence Consumer Protection Act aims to mitigate algorithmic discrimination associated with high-risk artificial intelligence systems. Effective February 1, 2026, the bill requires developers to take reasonable care to protect consumers from known risks of algorithmic discrimination and to provide deployers with documentation of their systems' uses, limitations, and risks. Developers must disclose any known risks to deployers without unreasonable delay, and the Attorney General may request relevant documentation for investigations. The legislation emphasizes transparency and accountability, protecting consumers while allowing continued development of AI technologies.

Additionally, the bill imposes new obligations on deployers of high-risk AI systems, requiring them to implement risk management policies and to conduct impact assessments detailing each system's purpose and associated risks. Consumers must be notified when a high-risk AI system is involved in a consequential decision about them. The bill allows developers to designate certain information as proprietary, exempting it from disclosure, and provides that compliance with other applicable laws may satisfy the bill's requirements. Enforcement rests exclusively with the Attorney General; the Act creates no private right of action. Overall, the bill seeks to balance consumer protection with the operational needs of AI developers and deployers.