The proposed bill, titled the "Artificial Intelligence Safety and Security Transparency Act," mandates that large developers implement comprehensive safety and security protocols to manage critical risks posed by foundation models. It defines a "large developer" as an entity that has developed foundation models using significant computing power, and it enumerates the duties such developers must fulfill, including creating and publishing detailed safety protocols. These protocols must address several aspects of risk management, including the exclusion of certain models, thresholds for intolerable risks, testing procedures, and incident reporting. The bill also sets a compliance timeline, requiring developers to begin implementing these protocols by January 1, 2026.

Additionally, the bill protects employees who report potential critical risks, prohibiting large developers from retaliating against them. Employees who face discrimination for reporting such risks may bring civil actions, and the bill outlines penalties for large developers who fail to comply with the required safety protocols. The Attorney General is authorized to seek civil fines and injunctive relief for violations of the act, underscoring the importance of accountability in the development and deployment of artificial intelligence technologies.