The Fiduciary Failure at OpenAI: Why Character Is a Mission-Critical Risk
In the high-stakes race to build Artificial General Intelligence (AGI), the most dangerous variables aren't unaligned algorithms or rogue code; they are the human beings controlling them. For OpenAI, an organization whose own charter proclaims a "fiduciary duty to humanity," the governance crisis is no longer theoretical. It is here, and it has a name: the unchecked behavioral risk profile of its leadership.
The recent resignation of board member Larry Summers following the exposure of his ties to Jeffrey Epstein, combined with the explosive federal lawsuit accusing CEO Sam Altman of childhood sexual abuse, reveals a pattern that transcends bad PR. Viewed alongside Altman's financial embrace of a political figure found civilly liable for sexual abuse, these data points form a constellation of red flags that no responsible board can ignore. Under Delaware law, specifically the Caremark oversight standard, ignoring these indicators isn't just bad business; it is a potential breach of fiduciary duty that demands immediate intervention by state Attorneys General.
The Myth of the "Private Matter"
The prevailing defense in Silicon Valley is that a founder’s private life is distinct from their corporate stewardship. This creates a dangerous blind spot. In January 2025, Ann Altman filed a federal lawsuit (Altman v. Altman) alleging years of severe sexual and psychological abuse by her brother, Sam. The Altman family’s response—utilizing what forensic psychologists term "DARVO" tactics (Deny, Attack, and Reverse Victim and Offender) to pathologize the accuser—mirrors the very strategies of manipulation and reality distortion that former colleagues at Y Combinator and OpenAI have described in professional contexts.
When a CEO is accused of predatory behavior in their private life, and that CEO controls technology capable of reshaping human civilization, "character" becomes a tangible asset or a catastrophic liability. Delaware courts have increasingly recognized that personal conduct that destroys reputational capital can trigger derivative claims for breach of duty. By failing to rigorously investigate these allegations, dismissing them as a "family dispute" rather than as potential indicators of a psychological profile incompatible with safety-critical leadership, the OpenAI board is failing its duty of care.
The Company They Keep
The board's negligence is further illuminated by the tenure of Larry Summers. Brought in to provide "adult supervision" after Altman's brief ouster in 2023, Summers was forced to resign in late 2025 after released emails revealed that he had maintained a "wingman" dynamic with convicted sex offender Jeffrey Epstein long after Epstein's 2008 conviction.
How did the vetting process at the world's most important AI company miss this? Or worse, did the board know and decide it didn't matter? The failure to vet, or the decision to ignore, Summers' toxic associations exposes a governance culture that prioritizes political connectivity over ethical hygiene. Summers was allowed to resign quietly rather than answer for the reputational damage he caused, and his appointment demonstrates that the board lacks the mechanisms to filter out high-risk individuals. It is operating reactively, cleaning up messes only after they become public scandals.
Ethical Incoherence as Governance Failure
Perhaps the most damning evidence of the board's failure to enforce its "duty to humanity" is Sam Altman's personal $1 million donation to Donald Trump's inaugural fund. At the time of the donation, Trump had already been found civilly liable by a federal jury for the sexual abuse of E. Jean Carroll.
Consider the incoherence: the CEO of a nonprofit-governed entity dedicated to "humanity" is funding the celebration of a leader judicially determined to have violated a woman's bodily autonomy. This is not a political preference; it is an endorsement of impunity. It signals to employees, regulators, and the public that judicial findings of sexual violence are trivial inconveniences. For a board mandated to ensure that AGI is developed "safely," allowing the CEO to normalize sexual misconduct fosters a culture that is fundamentally unsafe.
A Legal Theory for Intervention
Under Delaware law, the Caremark doctrine, as sharpened in Marchand v. Barnhill, requires directors to make a good-faith effort to implement and monitor reporting systems for "mission-critical" risks. For OpenAI, public trust is such a mission-critical asset. A CEO with a high-liability behavioral profile and a board that tolerates predatory associations together threaten that asset.
Furthermore, OpenAI's unique structure, a nonprofit controlling a for-profit arm, grants standing to specific enforcers: the state Attorneys General charged with policing charitable purposes. Because the board's primary beneficiary is "humanity," not shareholders, its directors cannot hide behind the Business Judgment Rule to justify actions that enrich insiders while eroding the organization's moral standing.