The proposed bill, titled the "Artificial Intelligence Safety and Security Transparency Act," mandates that large developers implement comprehensive safety and security protocols to manage critical risks associated with foundation models. It defines key terms such as "artificial intelligence model," "critical risk," and "foundation model," and outlines the responsibilities of large developers, including producing and publishing a detailed safety and security protocol by January 1, 2026. The protocol must specify risk assessment procedures, thresholds for intolerable risks, and the conditions under which incidents must be reported. Large developers must also publish regular transparency reports and retain third-party auditors to assess their compliance with these protocols.

The bill also protects employees who report potential critical risks, prohibiting large developers from retaliating against them. Employees who face discrimination for such reporting may bring civil actions, and large developers who violate the act's provisions are subject to civil penalties. The Attorney General is granted authority to seek civil fines and injunctive relief for violations, underscoring the bill's emphasis on accountability in the management of artificial intelligence technologies.