Handelsblatt reported this week, and Reuters confirmed through a Commission spokesperson, that Brussels is days away from designating ChatGPT as a Very Large Online Search Engine under the Digital Services Act. If the decision lands as expected, it will be the first time a generative AI product has been pulled into the DSA's most demanding compliance tier, and it will happen because OpenAI's own numbers forced the question.

The trigger is scale, not function. The DSA hands its harshest obligations to any platform or search engine that averages more than 45 million monthly active users in the EU. OpenAI disclosed that ChatGPT's search feature hit 120.4 million EU monthly active users over the six months ending September 2025. That's 2.7 times the threshold. OpenAI was required to publish those user numbers every six months anyway, so the evidence arrived in the Commission's inbox via the company's own transparency reporting. The only remaining question was whether the Commission would treat ChatGPT as a search engine at all, and a spokesperson has already indicated the question will be handled "case-by-case."
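The threshold arithmetic is simple enough to verify, using the figures cited above (the 45 million floor comes from the DSA's VLOP/VLOSE designation criteria; the 120.4 million figure is OpenAI's own disclosure):

```python
# DSA designation threshold: 45 million average monthly active EU users.
DSA_THRESHOLD_MILLIONS = 45.0

# OpenAI's reported EU MAU for ChatGPT search, six months to Sept 2025.
REPORTED_EU_MAU_MILLIONS = 120.4

ratio = REPORTED_EU_MAU_MILLIONS / DSA_THRESHOLD_MILLIONS
print(f"{ratio:.1f}x the designation threshold")  # → 2.7x the designation threshold
```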

Translation: yes, probably.

What follows a VLOSE designation is not trivial. The Commission's own page lays out the schedule plainly. Four months to comply. Mandatory annual risk assessments covering illegal content, fundamental rights, electoral processes, public health, and the protection of minors. Independent audits. A crisis response mechanism. Researcher data access. Supervisory fees calculated as a percentage of EU turnover. The list reads like a regulator trying to catch up on ten years of unchecked product design all at once, aimed at a company that has been public-facing for less than three years.

OpenAI's position is awkward. The company has spent the past year arguing, plausibly, that ChatGPT is not really a search engine, that it retrieves, synthesises, generates, and does several other things besides. The DSA's definitional scaffolding doesn't care. It cares about the search-shaped function and the user count, and OpenAI built the former and reported the latter. The company can contest the designation at the General Court, which is the path VLOP designees have used before, but that doesn't pause the clock. You still comply while you litigate.

The broader pattern is worth naming. Europe's regulatory posture toward American AI firms has stopped being consultative. The AI Act, the DSA, Ireland's Media Commission calculating supervisory fees — this isn't one framework, it's a stacking set of them, and the interaction effects are where the real enforcement pressure will land. A model provider can comply with the AI Act's GPAI rules and still be on the hook for DSA systemic-risk obligations for the consumer product that wraps it. The state-level pressure in the US is crude by comparison, a threat to yank broadband money to keep states from passing their own laws. Brussels just does the work.

There is a version of this story where the designation is a tidy procedural event, OpenAI ships the risk report on time, the audit clears, nothing visibly changes for the EU user. That is probably how the next six months go. But the precedent is the point. Once one generative AI service is inside the VLOSE tent, every other chatbot that reports EU usage becomes a candidate by arithmetic. Gemini already clears the threshold. Claude will, if it hasn't. Perplexity is smaller but arguably more search-shaped than any of them. The Commission has been handed an instrument and a user-count floor, and it knows how to use both.

The regulators caught up faster than I expected. That might be the most interesting part.

Sources: