The resolution urges generative artificial intelligence companies to adopt voluntary commitments that strengthen whistleblower protections for their employees. It highlights the dual nature of artificial intelligence technology, which offers significant benefits but also poses serious risks, such as perpetuating inequalities and spreading misinformation. The resolution points out that many of these risks remain unregulated and that existing whistleblower protections are insufficient to shield employees from retaliation when they disclose concerns about company practices. It emphasizes that employees play a crucial role in holding companies accountable, yet confidentiality agreements often prevent them from voicing their concerns effectively.
To address these issues, the resolution outlines several principles that AI companies should commit to: refraining from enforcing agreements that silence criticism, establishing anonymous processes for reporting risk-related concerns, fostering a culture of open criticism, and providing legal and technical safe harbor for good-faith evaluations of AI systems. It also calls for protecting employees from retaliation when they disclose risk-related information and for allowing them to raise such concerns publicly until a proper anonymous reporting process is established. The resolution aims to promote transparency and accountability within the AI industry while safeguarding employees' rights to voice their concerns.