The proposed "Artificial Intelligence Act" introduces a new chapter in Title 6 of the General Laws, focusing on the regulation of high-risk artificial intelligence (AI) systems. Effective October 1, 2026, the bill outlines the responsibilities of developers, integrators, and deployers of AI systems, emphasizing the need for transparency and consumer protection against algorithmic discrimination.

Developers of high-risk AI systems are required to use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination, document the uses, limitations, and risks of their AI systems, and disclose known risks to the attorney general and affected parties. They must also provide documentation to deployers that includes a general statement of foreseeable uses, known harmful uses, and the data governance measures used.

Integrators are required to enter into contracts with developers and to use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination. They must also provide a statement summarizing the types of high-risk AI systems they have integrated and how they manage the risks of algorithmic discrimination those systems pose.

Deployers of high-risk AI systems must implement a risk management policy, conduct regular impact assessments, and notify consumers when a high-risk AI system is used to make a decision concerning them, including information about the nature of the decision and the right to appeal an adverse outcome. Deployers must also disclose any known risks of algorithmic discrimination to the attorney general and to affected consumers.

The act also mandates that synthetic digital content generated by AI be clearly marked and detectable as such, with exceptions for certain types of content. The attorney general is granted exclusive enforcement authority, with an emphasis on encouraging compliance before pursuing legal action. The act preserves existing legal rights and defenses, and it provides exemptions for AI systems governed by equivalent federal standards, used for internal business purposes, or developed for specific federal agencies.

Overall, the legislation aims to promote responsible innovation in AI while safeguarding consumer rights in critical areas such as employment, healthcare, and education.
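To make the marking-and-detectability requirement concrete, here is a minimal sketch of what a machine-readable provenance mark might look like. The bill (as summarized here) does not prescribe any format, field names, or technology; everything below, including the ProvenanceRecord fields such as generator and content_sha256, is an illustrative assumption rather than anything drawn from the act.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    """Hypothetical disclosure record attached to AI-generated content."""
    generator: str        # name of the AI system that produced the content (assumed field)
    generated_at: str     # ISO-8601 timestamp of generation
    content_sha256: str   # digest binding the record to the exact content bytes
    ai_generated: bool = True


def mark_synthetic(content: bytes, generator: str) -> dict:
    """Build a provenance record for `content`.

    Illustrative only: the act requires that synthetic content be marked and
    detectable, but this particular format is an assumption of this sketch.
    """
    record = ProvenanceRecord(
        generator=generator,
        generated_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=hashlib.sha256(content).hexdigest(),
    )
    return asdict(record)


def is_marked_synthetic(record: dict, content: bytes) -> bool:
    """Check that a record claims AI generation and matches the content hash."""
    return (
        record.get("ai_generated") is True
        and record.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    body = b"Example AI-generated article text."
    manifest = mark_synthetic(body, generator="ExampleModel-1")
    print(json.dumps(manifest, indent=2))
    print("detectable:", is_marked_synthetic(manifest, body))
```

In practice, a marking scheme would likely embed such a record in the content's metadata or an accompanying manifest so that downstream tools can detect AI-generated material; the sketch only shows the binding between a disclosure record and the content it describes.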