The proposed bill, titled the "Artificial Intelligence Safety and Security Transparency Act," mandates that large developers implement comprehensive safety and security protocols to manage critical risks associated with foundation models. It defines a "large developer" as an entity that has developed foundation models using significant computing power and outlines the specific duties such developers must fulfill, chief among them the creation and publication of detailed safety protocols. These protocols must address core elements of risk management, including the identification of critical risks, testing procedures, and incident reporting. The bill also sets a compliance timeline, requiring developers to begin implementing these protocols by January 1, 2026.

Additionally, the bill protects employees who report potential critical risks, prohibiting retaliation by large developers against such employees. It allows employees who face discrimination for reporting risks to bring civil actions, and it outlines penalties for large developers that fail to comply with their established safety protocols. The Attorney General is granted authority to pursue civil actions against developers for violations, with fines of up to $1,000,000 per violation. Overall, the bill aims to enhance transparency and accountability in the development and deployment of artificial intelligence technologies, ensuring that such risks are adequately managed and reported.