The resolution urges generative artificial intelligence (AI) companies to adopt voluntary commitments aimed at strengthening employee whistleblower protections. It highlights the dual nature of AI technology, which offers significant benefits but also poses serious risks, including the potential to perpetuate inequalities and spread misinformation. The resolution notes that many of these risks remain unaddressed by regulation and that existing whistleblower protections are insufficient to shield employees from retaliation when they disclose concerns about company practices. It emphasizes the critical role employees play in holding these companies accountable, especially in the absence of government oversight, and observes that confidentiality agreements often prevent them from voicing concerns.

To address these issues, the resolution outlines several principles that AI companies should commit to: refraining from enforcing agreements that prohibit risk-related criticism, establishing anonymous reporting processes, fostering a culture of open criticism, and providing legal and technical safe harbor for good-faith evaluations of their systems. It also calls for protections against retaliation for employees who share risk-related information publicly after other reporting avenues have failed. The resolution aims to ensure that employees can safely raise concerns about the risks of AI technologies while still protecting companies' trade secrets and intellectual property.