The European Commission said on Monday that OpenAI had offered to give it access to a new "cyber" model for pre-deployment review, and that Anthropic, after four or five meetings, had not agreed to the same arrangement for the model it calls Mythos. The story was reported within a few hours by Reuters and CNBC, both quoting the Commission spokesperson Thomas Regnier. The framing on both sides was cordial. Talks are ongoing. Nothing has broken down. It is the kind of language a regulator uses when it does not actually have a stick.

That is the part worth dwelling on. The AI Act is in force. The General-Purpose AI Code of Practice is in force. The Commission has named a model that it wants to inspect before deployment, and the company that built it has politely declined to schedule the inspection. There is no provision in the current enforcement framework that lets Brussels compel the access it is asking for. The whole regime, at this stage, runs on the willingness of American labs to hand over their weights, their evaluations, and sometimes their pre-release prompts to a foreign regulator whose jurisdiction has no frontier model of its own to hold to the same rules in return.

OpenAI's calculus is easy to read. Cooperating builds goodwill in the market that is home to the most aggressive AI regulator outside China, and it costs nothing OpenAI was not going to spend on red-teaming anyway. The cyber-capability evaluations the Commission wants overlap heavily with the work the UK AI Safety Institute has been running since late 2023 and the US AI Safety Institute since 2024. Sharing them with a third regulator is a diplomatic gesture priced into the cost of doing business.

Anthropic's calculus is harder to read from the outside, and the lack of an on-the-record reason is itself the interesting signal. The company has been publicly enthusiastic about pre-deployment testing in the United States; I wrote about its shift from a hands-off posture to pre-deployment evaluation only a few days ago. To accept that framework at home and decline it in Europe is a position that requires a reason, and the Commission's statement offers none. That silence is exactly the gap where the structural problem lives.

Maybe Mythos is closer to release than is publicly understood, and an EU evaluation cycle would delay the launch. Maybe the Commission is asking for something more invasive than its American counterparts do, and Anthropic is testing whether "voluntary" really is voluntary. Maybe the company has concluded that the AI Act will be re-scoped in the next twelve months and that early cooperation locks in a baseline it would rather not set. Any of these would be a rational reason to wait, and any of them exposes the same thing about the regime: it is enforceable exactly to the degree that the regulated party finds enforcement convenient.

Europe has staked a great deal on being the place where the rules are written first and the consequences come later. The first half of that bet has paid out. The second half assumes that companies without a European competitor breathing down their neck will turn up to the appointments they have been invited to. One of them has. One of them has not. The next eighteen months will tell us which posture the rest of the industry copies. My suspicion is that it will be whichever one carries the smaller bill.

Sources: