The Artificial Intelligence Consumer Protection Act aims to mitigate algorithmic discrimination arising from high-risk artificial intelligence systems by establishing clear responsibilities for developers and deployers. Effective February 1, 2026, developers must take reasonable care to protect consumers from known risks of algorithmic discrimination and provide documentation describing each system's uses, limitations, and risks. Deployers, in turn, must implement a risk management policy, conduct an impact assessment for each high-risk AI system they deploy, and ensure consumers are informed when such a system is involved in a consequential decision that affects them.

The bill emphasizes transparency and accountability: the Attorney General may request relevant documentation during investigations, with confidentiality maintained for proprietary information. The disclosure requirement is waived where it would be obvious to a reasonable person that they are interacting with an AI system, and the Act clarifies that it does not infringe on individual rights. Enforcement authority rests exclusively with the Attorney General, who must issue a notice of violation before taking further action; the Act creates no private right of action.