The proposed "Artificial Intelligence Act" establishes a regulatory framework in New Mexico aimed at preventing algorithmic discrimination by high-risk artificial intelligence (AI) systems. It requires developers to provide detailed documentation of their AI systems' intended uses, data sources, and potential discrimination risks. Developers must disclose any incidents of algorithmic discrimination within ninety days and maintain transparency regarding their methodologies. Deployers of these systems must, in turn, implement risk management policies, conduct annual impact assessments, and provide consumers with clear information about the AI systems in use, including associated risks and data collection practices.

The legislation emphasizes consumer protection by allowing civil actions for discrimination and requiring deployers to notify consumers before using AI systems to make significant decisions about them. Consumers must be informed of the reasons behind adverse decisions made by AI systems and given an opportunity to appeal. The Department of Justice will oversee enforcement and establish compliance rules, which it must promulgate by January 1, 2027. The Act is set to take effect on July 1, 2026, and includes provisions that protect trade secrets while ensuring consumers remain aware of AI's role in decisions that affect them.