Generally, AI Companies Distill
May 1, 2026 · uneasy.in/96ec5b5
Elon Musk took the stand in Oakland on Thursday and was asked, under oath, whether xAI had distilled OpenAI's models to train Grok. His first move was to widen the question. "Generally all the AI companies" do this, he said. Pressed for a yes, he settled on "partly." Then he framed it as standard practice, the kind of thing you do to validate your own system.
That answer matters because of who has been making the loudest noise in the other direction. Anthropic has spent the better part of this year publicly accusing DeepSeek, Moonshot, and MiniMax of distilling its models. OpenAI has been pursuing the same thread on DeepSeek. Google has called the practice intellectual property theft and built mitigations into its API tier. The trade press has carried the story almost entirely as a US-versus-China problem, with the labs cast as wronged parties and the offshore copyists as the violators.
The thing the Verge, TechCrunch, and Gizmodo all surfaced from the courtroom is that the labs themselves do not actually believe that frame. The internal assumption, the one tech workers have quietly held for two years, is that everyone with a serious model distills everyone else's. The Frontier Model Forum's distillation working group is, on paper, defensive. In practice the same companies sitting in that room have engineers on the other side of the firewall running the queries. Musk just said the quiet part on a witness stand because he had to.
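It is worth being concrete about what "running the queries" actually involves, because the mechanics are mundane. The sketch below is hypothetical: the function names, the `.complete()` call, and the JSONL file are stand-ins, not any lab's pipeline. But the shape is the whole practice: query the rival model through its ordinary API, keep the prompt and response pairs, and fine-tune your own model on them.

```python
# Hypothetical sketch of API-based distillation. teacher_api, .complete(),
# and the JSONL format are illustrative stand-ins, not any real lab's tooling.
import json

def collect_teacher_outputs(teacher_api, prompts):
    """Query the rival ("teacher") model and keep the prompt/response pairs."""
    pairs = []
    for prompt in prompts:
        response = teacher_api.complete(prompt)  # an ordinary, authorised API call
        pairs.append({"prompt": prompt, "completion": response})
    return pairs

def write_finetune_file(pairs, path="distilled_pairs.jsonl"):
    """Dump the pairs in the usual fine-tuning format for the student model."""
    with open(path, "w") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")

# The "student" model is then fine-tuned on distilled_pairs.jsonl with whatever
# training stack the lab already runs. The teacher's weights are never touched.
```

The only artifact this produces is a plain file of text pairs, assembled through authorised API calls, which matters for what comes next.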
The legal landscape under all this is thinner than the rhetoric suggests. A Fenwick analysis from earlier this year laid out the core picture: copyright is unlikely to apply, because the teacher's weights are not actually copied and model outputs sit outside the usual zone of protected expression. After Van Buren, the Computer Fraud and Abuse Act also struggles to bite, since the user was authorised to query the API in the first place. What is left is breach of contract, specifically of the terms of service the querying party accepted when it opened the API account. Industry write-ups note that enforcement to date has consisted mostly of cease-and-desist letters and account terminations rather than litigation.
So when OpenAI sends its strongly worded letter about DeepSeek, or Anthropic publishes its blog post about MiniMax, the implicit threat is mostly atmospheric. Everyone in the room knows the case law would not survive contact with a federal docket, and everyone in the room also knows that filing the suit would mean discovery, which would mean every internal Slack channel about the rival lab's outputs becoming evidence. Mutual exposure is the actual restraint, not the contract.
Musk's "partly" is interesting partly because it is honest and partly because it punctures his own legal strategy. He is suing OpenAI for abandoning a founding mission to keep AI safe and nonprofit. The same week he is making that argument, he is admitting that his other AI company has been training on the defendant's outputs. The judge, Yvonne Gonzalez Rogers, told him on Thursday to stop with the Terminator references. The distillation question got a longer answer than the apocalypse question did.
The interesting thing is what happens to the rhetoric now. The "China is distilling our models" complaint has been a useful narrative for the labs because it justified policy asks, including export-control extensions and government enforcement proposals. It is harder to sustain that frame when an OpenAI co-founder confirms, on the record, that domestic distillation is the industry norm. Either the practice is genuinely a problem worth a federal response, in which case xAI is on the hook alongside DeepSeek, or it isn't, in which case the China framing was always partly about lobbying and partly about something else, and the word that keeps doing the work in both readings is the same one Musk reached for on the stand.
Sources:
- Elon Musk confirms xAI used OpenAI's models to train Grok — The Verge
- Elon Musk testifies that xAI trained Grok on OpenAI models — TechCrunch
- Under-Oath Elon Musk Seems to Run a Different Company Than Public-Figure Elon Musk — Gizmodo
- DeepSeek, Model Distillation, and the Future of AI IP Protection — Fenwick
- 4 things you missed from Day 4 of the Musk v. Altman trial — Business Insider
- AI Model Distillation Attacks: What They Are and Why They Matter — MindStudio