The Senate resolution urges generative artificial intelligence companies to adopt voluntary commitments that strengthen whistleblower protections for their employees. It highlights the dual nature of artificial intelligence technology, which offers significant benefits but also poses serious risks, including the potential to perpetuate inequalities and spread misinformation. The resolution notes that many of these risks remain unregulated and that existing whistleblower protections are insufficient to shield employees from retaliation when they disclose concerns about their companies. It emphasizes that employees are often the people most knowledgeable about the risks of AI technologies, yet broad confidentiality agreements frequently prevent them from voicing concerns externally.

To address these issues, the resolution outlines several principles that AI companies should commit to: refraining from enforcing agreements that prohibit criticism related to risk concerns, establishing anonymous reporting channels for employees, fostering a culture of open criticism, and providing legal and technical safe harbor for good-faith evaluations of AI systems. It also calls for protection from retaliation for employees who share risk-related information publicly after other reporting avenues have failed. The resolution aims to create a safer environment for employees to raise concerns, ultimately contributing to the responsible development and deployment of artificial intelligence technologies.