The Artificial Intelligence Consumer Protection Act aims to mitigate algorithmic discrimination arising from high-risk artificial intelligence systems by assigning distinct responsibilities to developers and deployers. Effective February 1, 2026, developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination and must provide documentation describing each system's intended uses, limitations, and risks. Deployers must likewise exercise reasonable care and implement a risk management policy, including an impact assessment for each high-risk AI system they deploy. In addition, consumers must be notified whenever a high-risk AI system is a substantial factor in a consequential decision affecting them, ensuring transparency about how the system operates and what data it uses.

The Act grants the Attorney General exclusive authority to enforce its provisions, including the power to request documentation from both developers and deployers during compliance investigations. It balances consumer protection against confidentiality interests by allowing proprietary information to be designated a trade secret and withheld from disclosure. The Act also clarifies its limits: it does not infringe on individual rights, does not apply to certain high-risk AI systems already subject to federal regulation, and does not create a private right of action, leaving enforcement solely with the Attorney General.