The proposed Committee Bill No. 2 seeks to establish a comprehensive regulatory framework for artificial intelligence (AI) systems in Connecticut, with a focus on preventing algorithmic discrimination and strengthening consumer protection. The bill introduces definitions for key terms such as "algorithmic discrimination," "artificial intelligence system," and "high-risk artificial intelligence system," and mandates that developers, integrators, and deployers of high-risk AI systems exercise reasonable care to mitigate discrimination risks. Effective October 1, 2026, developers must provide detailed documentation regarding their AI systems, including known risks of algorithmic discrimination, and must notify the Attorney General if such discrimination affects at least one thousand consumers. The bill also emphasizes the importance of conducting impact assessments and maintaining transparency about the use of AI systems in consequential decision-making.

Additionally, the bill establishes several initiatives, including an artificial intelligence regulatory sandbox program, a Connecticut AI Academy, and a Technology Talent and Innovation Fund Advisory Committee. It directs state agencies to explore incorporating generative AI to improve operational efficiency and establishes a working group to recommend best practices for AI implementation. The bill also addresses the unlawful dissemination of intimate images, classifying it as a misdemeanor or felony depending on the circumstances, and clarifies that interactive computer service providers will not be held liable for user-generated content. Overall, the legislation aims to foster innovation in AI while ensuring accountability and consumer protection.

Statutes affected:
Committee Bill: 10-21l, 32-7p, 32-39e