This post is about Elon Musk’s xAI, its use of the Public Benefit Corporation (PBC) form, and what it says about the AI industry more broadly.
For the last several months, LASST has been helping to coordinate a public advocacy campaign addressing OpenAI’s corporate restructuring, working to keep profit motives from undermining OpenAI’s charitable purpose of ensuring artificial general intelligence (AGI) is safe and benefits all of humanity.
In a May post updating the public on its restructuring plans, OpenAI emphasized that its for-profit LLC would transition to a PBC, which it describes as “a purpose-driven company structure that has to consider the interests of both shareholders and the mission.” Further, the announcement explains, “PBCs have become the standard for-profit structure for other AGI companies like Anthropic and X.ai… We think it makes sense for us, too.”
This prompted Tyler to co-write an op-ed explaining why the PBC form on its own is insufficient to protect OpenAI’s charitable purpose. We also decided to look into how the PBC form was being used at other AI companies, including how those companies were fulfilling their reporting requirements. Because xAI is incorporated in Nevada, and Nevada requires PBCs to report publicly (unlike Delaware, which only requires reporting to shareholders), we expected to find public reports from xAI. What we found instead were public records suggesting that xAI is not a PBC at all and hasn’t been since May 2024. That is directly contrary to representations Musk and xAI have made in court in their lawsuit against OpenAI.
The xAI timeline
As a Nevada corporation, xAI is required to publicly file certain documents with Nevada’s Secretary of State, including its articles of incorporation and any amendments to them. From reviewing copies of xAI’s filings, LASST established the following timeline of xAI’s PBC status:
March 9, 2023 – X.AI Corp (better known as xAI) was incorporated as a standard profit corporation in Nevada.
April 20, 2023 – xAI amended its corporate charter to become a PBC with the public benefit purpose “[t]o create a material positive impact on society and the environment, taken as a whole.” Notably, this vague statement is arguably inconsistent with how xAI presents its mission to the public—for example, that it aims “to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.”
December 2023 – News outlets publicly reported on xAI’s PBC status, and since then, commentators and other AI companies, including OpenAI, have consistently referred to it as such.
May 9, 2024 – xAI quietly amended its corporate charter again, this time “to terminate its status as a benefit corporation.”
March 28, 2025 – After completing its merger with X, xAI executed what is now the operative charter.
Although xAI terminated its PBC status in May 2024, it has been representing itself in court as a PBC since November 2024, when it filed suit against OpenAI as a “Nevada benefit corporation.” xAI most recently claimed PBC status to the court in May 2025, a full year after it terminated its status, stating in its amended complaint that “Plaintiff X.AI Corp. is a public benefit corporation formed under the laws of Nevada.” As of the date of this post, xAI has yet to correct its representations with the court, as reported by CNBC.
Why this matters
AI companies have been using corporate tools like the PBC form to pitch themselves as uniquely capable of self-regulation. But the PBC form itself does little to hold them to account. Without legally binding commitments enforceable in the public interest, PBCs are functionally indistinguishable from normal corporations—as xAI’s actions show.
Here we have a leading AI company seemingly misrepresenting its PBC status to the public and getting away with it for over a year. Meanwhile, xAI has continued to set new lows for problematic behavior, releasing its largest model to date while remaining entirely opaque about its safety practices. xAI’s misrepresentation of its corporate status drives home a simple point: self-policing cannot replace actual oversight. And what’s true of xAI is true of the rest of the industry. At LASST, we maintain that any organization developing transformative technologies must be answerable to the people it affects. True accountability requires clear and enforceable rules, not voluntary pledges or easy-to-abandon corporate structures. These companies invoke grand missions directed toward society, humanity, and the universe. If their aspirations are sincere, the leaders developing AI should welcome real, binding standards to keep their missions on course.