The proposed bill, titled the "Artificial Intelligence Safety and Security Transparency Act," mandates that large developers implement comprehensive safety and security protocols to manage critical risks associated with foundation models. It defines key terms such as "artificial intelligence model," "critical risk," and "large developer," and outlines the duties these developers must fulfill, including drafting and publishing detailed safety protocols that address testing procedures, deployment conditions, and incident reporting. The bill also requires large developers to produce transparency reports every 90 days and to undergo annual audits by a third-party auditor to verify compliance with their published protocols.
Furthermore, the bill protects employees who report potential critical risks, prohibiting large developers from retaliating against them. Affected employees may bring civil actions for violations of these protections, seeking remedies such as damages and reinstatement. The Attorney General is authorized to enforce compliance through civil actions, which may result in significant fines for violations. Overall, the legislation aims to enhance accountability and transparency in the development and deployment of artificial intelligence technologies, particularly those that pose substantial risks to public safety.