Filling the A.I. Gap: How Domestic and International Law Fails to Protect Artificial Intelligence Whistleblowers [Note]
Citation
42 Ariz. J. Int'l & Comp. L. 377 (2025)

Description
Note

Additional Links
http://arizonajournal.org

Abstract
As artificial intelligence (A.I.) development accelerates beyond the reach of current regulatory frameworks, whistleblowers in the A.I. sector, particularly those employed by privately held firms, face a dangerous legal void. This Note identifies a critical regulatory shortfall, termed the "A.I. Gap," in which employees seeking to expose unsafe but not explicitly illegal A.I. practices are left unprotected under both U.S. and EU law. Through a detailed analysis of high-profile whistleblower cases, including the 2024 "Right to Warn" letter and disclosures by former OpenAI and Microsoft employees, the Note demonstrates how existing laws, such as the Dodd-Frank Act, the False Claims Act, and the EU Whistleblower Directive, fail to protect individuals who raise concerns about speculative or ethical A.I. risks. The Note also examines how non-disclosure agreements (NDAs) are strategically used to suppress internal dissent and limit legal recourse. Ultimately, this Note proposes a multi-step reform framework to protect A.I. whistleblowers across the internal, governmental, and post-disclosure stages, emphasizing the need for confidential, responsive, and independent reporting channels; a statutory redefinition of whistleblowing that encompasses risk-related concerns; and robust anti-retaliation safeguards. Without these reforms, the public remains vulnerable to unaccountable A.I. development practices, and the individuals best positioned to expose them remain silenced.

Type
Article
