On the stand in Oakland federal court last week, Elon Musk conceded under cross-examination by OpenAI's lead counsel, William Savitt, that it was "partly" true xAI had used some of OpenAI's technology to train Grok through distillation. He then softened the concession into a shrug. "It is standard practice to use other AIs to validate your AI," he said, as if the distinction between validating a model and copying its behaviour were self-evident, and as if the room had not just heard him spend three days arguing that OpenAI was a stolen charity that owed him roughly thirty-eight million dollars in moral damages plus, by his lawyers' arithmetic, a hundred and thirty-four billion in the for-profit value the conversion produced.

The contradiction did not seem to bother him. It bothered Judge Yvonne Gonzalez Rogers, who opened the trial on Tuesday by asking Musk how the court could get its work done "without you making things worse outside the courtroom," and it bothered, in a quieter way, the cross-examining attorney, who walked Musk through his own xAI valuation (two hundred and fifty billion at the SpaceX merger in February) and his own boasts about Grok's capabilities. The picture that emerged was not the picture Musk's opening narrative had drawn. He had cast himself as a founder defending a charity from corporate capture. The cross painted him as a competitor, valued in the hundreds of billions, who had built his competing model in part on the very outputs he says were stolen from a public mission.

I wrote yesterday that AI companies distill as a matter of course; the practice is now baseline industry behaviour rather than a deviation from it. The major labs all do versions of it, sometimes openly, more often through quiet evaluation pipelines that nobody itemises in a press release. So Musk's admission, on its technical merits, is not a scandal. It is a statement of how the industry actually works.

The scandal is the framing. To sue OpenAI for one hundred and thirty-four billion dollars on the theory that the company betrayed its founding promise to benefit humanity, while simultaneously running a competitor that benefited, in part, from OpenAI's outputs, is to argue both sides of the same case at once. The mission was sacred enough to litigate. The model weights, or their behavioural shadow, were available enough to use. Both can be true. Neither sits well with the other.

Whether the jury cares is a different question. The CNBC summary of the first week noted that Altman, Satya Nadella and Greg Brockman are still to testify, and that the outcome could threaten OpenAI's anticipated IPO. A finding for Musk would not just unwind the for-profit conversion; it would establish a kind of moral lien on every dollar the company has raised since 2019. A finding for OpenAI would let Altman walk into the IPO roadshow having beaten the most litigious billionaire in technology in open court, on his own terms.

What I keep getting stuck on is the smaller, weirder fact at the centre of all this. The man suing to recover a charity used the charity's outputs to train his rival. He did not deny it. He did not apologise for it. He called it standard practice, which it is. The case will turn on whether that answer is enough.

Sources: