The proposed "Artificial Intelligence Act" establishes a regulatory framework for the use of high-risk artificial intelligence (AI) systems in New Mexico, focusing on consumer protection and accountability. Developers of AI systems are required to provide comprehensive documentation regarding their intended uses, risks of algorithmic discrimination, and data governance measures. They must disclose any known risks of algorithmic discrimination within ninety days of an incident and maintain transparency about the data used for training their systems. Deployers of these AI systems are also mandated to implement risk management policies, conduct annual impact assessments, and provide clear information to consumers about the AI systems in use, including risks and data collection practices.
The legislation empowers consumers to bring civil actions against developers or deployers who fail to comply with the Act's requirements, and it tasks the state Department of Justice with enforcing compliance and adopting implementing rules. Deployers must also notify consumers before an AI system is used to make a consequential decision about them, describing how the decision will be made and, in the case of an adverse outcome, the reasons for it. The bill includes exemptions related to trade secrets and compliance with federal law, ensuring that the regulations do not impede lawful activities. The Act's provisions take effect July 1, 2026.