OpenAI announced this week that it's building a cybersecurity product. Not a new model but a product, layered on existing capabilities and delivered through its Trusted Access for Cyber pilot. Ten million dollars in API credits. Invite-only. Enterprise partners apply through their account rep.

The timing is conspicuous. Anthropic launched Project Glasswing four days ago, putting Claude Mythos in the hands of twelve major partners for defensive vulnerability research. By Tuesday, the Treasury Secretary and the Fed Chair had convened five bank CEOs to discuss what that meant. By Wednesday, Axios was reporting that OpenAI had its own cybersecurity product in the works.

Gizmodo's framing was blunt: OpenAI is riding the coattails of Anthropic's announcement to avoid being left behind in the hype cycle. The distinction between the two offerings matters, though. Mythos is a model with emergent capabilities Anthropic didn't explicitly train for: autonomous vulnerability discovery and exploit development that appeared as a downstream consequence of general improvements in code and reasoning. OpenAI's offering is a product wrapper around existing models, with monitoring and access controls.

Both approaches share the same thesis. Cybersecurity AI is now a product category, and every major lab needs one.

But a study published this week by AISLE complicates things considerably. They ran eight models against Mythos's headline discoveries: the FreeBSD NFS exploit, the 27-year-old OpenBSD bug. All eight detected the vulnerabilities. A 3.6-billion-parameter open-weight model, costing eleven cents per million tokens, correctly identified the buffer overflow that cost Mythos fifty dollars in compute to find.

Their conclusion: the moat is the system, not the model. What makes Mythos dangerous isn't raw capability. It's the orchestration around it: containerisation, iterative testing, crash oracles, attack surface ranking. The targeting, the iterative deepening, the validation, the triage, the maintainer trust. A small model inside a well-designed pipeline catches what a frontier model catches. Without the frontier price tag. Or its access restrictions.

If AISLE is right, Glasswing's controlled access buys less time than Anthropic assumes. And OpenAI's product is competing not just with Mythos but with any competent team running a three-billion-parameter model and decent tooling. Alex Stamos told Platformer the window is six months before open-weight models catch up to foundation models in bug-finding. AISLE's data suggests the window might already be closing.

Picus Security puts numbers on the downstream problem. Fewer than 1% of Mythos-discovered vulnerabilities have been patched. Discovery at machine speed, remediation at calendar speed. Adding a second lab's cybersecurity product adds more discoveries. It doesn't add more patches.

OpenAI's move is rational. When your competitor gives the Treasury Department a reason to hold emergency meetings, you announce your own programme. But the question isn't whether both labs will ship cybersecurity products. They will. The question is whether the relevant competition is between OpenAI and Anthropic at all, or between the orchestration systems anyone can build and the access controls nobody can enforce.

Sources: