The proposed "Artificial Intelligence Act" establishes a regulatory framework in New Mexico aimed at preventing algorithmic discrimination by high-risk AI systems. It requires developers to provide comprehensive documentation of their AI systems, including intended uses, data sources, and potential discrimination risks. Developers must disclose known risks within 90 days of an incident and maintain transparency about the data used for training. Deployers, in turn, must implement risk management policies and conduct annual impact assessments to evaluate discrimination risks. The Act also permits consumers to bring civil actions for algorithmic discrimination, promoting responsible and transparent deployment of AI technologies.
The bill further emphasizes consumer protection: deployers must inform consumers about the AI systems in use, including potential risks and data collection practices. Before making a significant decision based on an AI system, a deployer must notify affected consumers and allow them to appeal adverse outcomes. The bill also provides for reporting incidents of algorithmic discrimination and sets conditions under which trade secrets may be withheld. For enforcement, consumers may bring civil actions against violators, subject to a notice-and-cure period before an action can commence. The department is tasked with adopting implementation rules by January 1, 2027, with the Act set to take effect on July 1, 2026.