The resolution urges generative artificial intelligence companies to adopt voluntary commitments that strengthen whistleblower protections for their employees. It highlights the dual nature of artificial intelligence technology, which offers significant benefits but also poses serious risks, including the potential to perpetuate inequality and spread misinformation. The resolution notes that many of these risks remain unregulated and that existing whistleblower protections are insufficient to shield employees from retaliation when they disclose concerns about company practices. It emphasizes that employees are often the people best positioned to identify these risks, yet broad confidentiality agreements frequently bar them from voicing their concerns externally.
To address these issues, the resolution outlines several principles that AI companies should commit to: refraining from enforcing agreements that prohibit risk-related criticism, establishing anonymous reporting processes, fostering a culture of open criticism, and providing legal and technical safe harbor for good-faith evaluations of their systems. It also calls on companies to ensure that employees can report concerns publicly, without fear of retaliation, particularly when internal processes fail. The resolution aims to promote transparency and accountability within the AI industry, ultimately seeking to mitigate the risks associated with artificial intelligence technologies.