Plutonic Rainbows

The Deliberate Slowdown: What Anthropic's Development Pace Tells Us About Sonnet 5

I've been watching Anthropic's release cadence closely over the past year, and something has changed. The company that brought us Claude Opus 4.5 in November 2025 has gone conspicuously quiet. No leaks, no benchmarks teased on Twitter, no cryptic blog posts hinting at breakthrough capabilities. Just silence. That silence, however, tells me more about their next model than any press release could.

The industry has trained us to expect a particular rhythm. OpenAI drops a new model every few months, each one incrementally better than the last. Google races to catch up. The smaller labs scramble to carve out niches. We've come to expect this treadmill of marginal improvements, each accompanied by breathless claims of revolutionary progress. Anthropic participated in this race for a while, but I believe they're stepping off it deliberately.

Consider what we know about their philosophy. The company was founded explicitly on the principle that AI safety cannot be an afterthought. Their Constitutional AI approach isn't marketing — it's baked into their training methodology. They've published papers on interpretability that most companies wouldn't touch because they reveal uncomfortable truths about what we don't understand. This isn't a company optimizing for Twitter engagement or shareholder updates.

Therefore, when I look at the gap between Opus 4.5 and whatever comes next, I don't see delay. I see intentionality. I believe Anthropic is rebuilding their development process from the ground up, and the next Sonnet model will reflect that fundamental shift.

The current generation of frontier models, including Anthropic's own, share a common weakness. We can measure their performance on benchmarks, but we struggle to predict their behavior in edge cases. They excel at standard tasks while occasionally producing outputs that reveal concerning blind spots. This unpredictability isn't just an engineering challenge — it's an existential risk that scales with capability. Additionally, the compute required to train these models has grown exponentially, while the improvements have become increasingly incremental.

I suspect Anthropic recognized this pattern and decided to break it. Rather than rush out Sonnet 5 with another ten percent improvement on MMLU, they're likely pursuing something harder. They're probably working on models that can explain their reasoning not as a party trick, but as a core architectural feature. Models that know what they don't know and communicate that uncertainty clearly. Models that scale in safety as aggressively as they scale in capability.

This approach demands patience. You can't bolt interpretability onto a model after training and expect meaningful results. You can't patch constitutional principles into an architecture designed around different priorities. If Anthropic is serious about building models that remain aligned as they grow more powerful, they need to redesign the foundation. That takes time.

The economics support this theory as well. Training runs for frontier models now cost tens of millions of dollars at minimum, likely hundreds of millions for the largest experiments. Companies can sustain that spending if each model clearly surpasses its predecessor and generates corresponding revenue. However, as improvements become marginal, the calculus changes. Anthropic has substantial funding, but its resources are not infinite. A strategic pause to ensure the next model represents a genuine leap rather than an incremental step makes financial sense.

I also notice that Anthropic has been unusually active in publishing research on model interpretability and mechanistic understanding. These papers don't generate immediate commercial value, but they lay groundwork. They suggest a company thinking several moves ahead, building the theoretical foundation for techniques they plan to deploy at scale. When Sonnet 5 eventually arrives, I expect we'll see these research threads woven throughout its architecture.

The competitive landscape reinforces this reading. OpenAI remains the market leader in terms of mindshare, but their recent releases have felt increasingly similar to each other. Google has made impressive strides with Gemini, but they're playing the same game everyone else is playing — faster, bigger, slightly better on benchmarks. There's an opening for a company willing to compete on a different axis entirely. If Anthropic can deliver a model that's not just capable but genuinely more trustworthy and interpretable, they could define a new category of competition.

Think about what enterprises actually need from these models. They don't need another incremental improvement in code generation or mathematical reasoning. They need models they can deploy with confidence, models whose failure modes they understand, models that integrate into systems with predictable behavior. The company that solves those problems will command premium pricing and customer loyalty that benchmark performance alone cannot buy.

As a result, my prediction for Sonnet 5 is specific. I don't think we'll see a traditional release announcement with the usual fanfare. Instead, I expect Anthropic will publish a detailed technical paper explaining new approaches to alignment and interpretability, followed by a model that demonstrates those approaches in practice. The improvements on standard benchmarks might be modest — perhaps even deliberately restrained. The real advances will be in areas we currently struggle to measure: robustness, predictability, transparency.

The timeline is harder to predict, but I'd be surprised if we see anything before mid-2026. Anthropic's silence suggests they're deep in the experimental phase, not polishing a nearly-ready product. They're likely running training experiments, evaluating results, iterating on architecture. That process can't be rushed without compromising the principles that differentiate them.

This slower pace might frustrate those of us who refresh the Anthropic homepage daily hoping for news. However, I find it reassuring. We've spent the past few years in a headlong sprint toward more capable AI systems, often with safety and interpretability lagging behind. If one major lab is willing to slow down and do the harder work of building systems that scale safely, that benefits everyone.

The race to AGI continues, but perhaps we need some participants racing toward a different finish line. Anthropic appears to be positioning themselves as exactly that. When Sonnet 5 arrives, I believe it will represent not just an incremental improvement, but a statement about what frontier AI development can and should prioritize. The deliberate slowdown isn't weakness — it's the most ambitious move they could make.

When the Oracle Starts Selling Ad Space

I read the news about OpenAI exploring advertising-supported products with a kind of weary recognition. Not surprise — the trajectory has been obvious for months — but something closer to resignation. The company that positioned itself as humanity's steward in the age of artificial intelligence is now contemplating the same business model that turned social media into a surveillance apparatus and search engines into glorified billboards. The irony is almost too neat.

The reporting suggests OpenAI is considering ads as a way to expand access to ChatGPT and its other products. Free tiers supported by advertising would lower the barrier to entry, bringing AI capabilities to users who cannot or will not pay subscription fees. This sounds reasonable. It sounds, in fact, like the familiar Silicon Valley playbook: build something compelling, give it away for free, monetize attention. However, applying this model to AI systems creates problems that do not exist with traditional software.

The fundamental issue is alignment — not in the technical sense that AI researchers discuss, but in the economic sense that determines what companies actually optimize for. A subscription business aligns the company's interests with the user's interests. I pay for a service that works well for me. The company improves the service to justify continued payment. The incentive structure is straightforward. An advertising business, by contrast, splits the alignment. The user is no longer the customer. The user is the product being sold to the actual customer: the advertiser.

This misalignment has predictable consequences. Facebook optimized for engagement because engagement generates ad impressions. The algorithm learned to surface content that provokes strong emotional reactions — outrage, fear, tribal identification — because those reactions keep people scrolling. Additionally, Google Search has degraded steadily as ads colonize more of the results page and SEO spam proliferates, because Google's incentive is to show ads, not to surface the best information quickly.

Apply this dynamic to ChatGPT and the implications become unsettling. An advertising-supported AI assistant would be optimized not for providing accurate, helpful information, but for maximizing user engagement with advertising content. The model might subtly bias its responses toward advertisers' products. It might provide longer, more circuitous answers that create more opportunities to insert promotional content. It might recommend solutions that happen to involve purchasing something from a sponsor. The corruption would be gradual and deniable, but the economic incentives point in one direction only.

I recognize the counterargument: OpenAI will maintain strict separation between the AI's core functionality and the advertising layer, ads will be clearly labeled and isolated from responses, and the company has a reputation to protect and sufficient capital to resist immediate pressure for aggressive monetization. On this view, the pessimistic scenario I describe will not materialize because OpenAI will implement advertising responsibly.

This argument fails on two grounds. First, advertising businesses always become more aggressive over time. The initial implementation is restrained and user-friendly. Then quarterly revenue targets increase. Growth slows. Investors demand higher returns. The product team faces pressure to make ads more prominent, more targeted, more integrated into the core experience. The trajectory is so consistent across companies and platforms that treating OpenAI as an exception requires extraordinary optimism about corporate incentive structures.

Second, even well-intentioned advertising creates subtle distortions. Consider how sponsored content works in traditional media. A magazine might maintain editorial independence while running advertiser-funded articles clearly labeled as such. Yet studies consistently show that publications are less likely to publish negative coverage of their advertisers and more likely to cover topics that advertisers favor. The influence operates through internalized norms and anticipatory self-censorship, not through explicit directives. An AI trained on interaction patterns shaped by advertising incentives would learn these biases without anyone deliberately programming them in.

The timing makes this development particularly concerning. We are in the early stages of AI integration into critical workflows — research, education, professional services, creative work. The tools people adopt now will shape expectations and habits for years. If the default free tier of AI assistance comes with advertising, an entire generation of users will internalize that relationship as normal. They will learn to navigate around commercial influence, to discount AI recommendations that seem suspiciously aligned with products, to treat the technology with appropriate skepticism. However, this adaptive response has costs. Trust erodes. The cognitive overhead increases. The technology becomes less useful precisely because users must constantly evaluate whether they are receiving genuine assistance or sophisticated marketing.

Additionally, advertising-supported AI would likely accelerate inequality in access to reliable information. Those who can afford subscription services get uncompromised AI assistance. Those who cannot get a version optimized for advertiser revenue. The gap is not merely about features or response speed — it is about epistemic reliability. The free tier becomes a second-class information environment where answers are shaped by commercial interests. This is not hypothetical. We already see this pattern with news media, where quality journalism retreats behind paywalls while ad-supported content proliferates with minimal editorial oversight.

I want to believe that OpenAI will resist this path. The company has made commitments to safety and alignment that advertising fundamentally undermines. The leadership has expressed concern about AI systems pursuing goals misaligned with human values. Optimizing an AI for advertising revenue is deliberately introducing misalignment — choosing a business model that requires the system to serve two masters with competing interests.

The alternative exists. OpenAI could focus on enterprise customers who pay substantial fees for reliable, uncompromised AI capabilities. They could offer educational and nonprofit discounts funded by commercial revenue rather than by advertising. They could maintain free tiers at reduced capability levels without introducing the perverse incentives that advertising creates. These paths are harder. They generate less total revenue. They do not scale as rapidly. Nevertheless, they preserve the alignment between the technology's purpose and its economic foundation.

The broader pattern troubles me more than any single company's decision. The AI industry is barely five years into commercial deployment of large language models, and already we are seeing convergence toward the advertising model that has degraded so much of the internet. The technology is different. The capabilities are unprecedented. Yet the business logic is depressingly familiar. Build engagement, monetize attention, optimize for advertiser revenue, accept the externalities.

If OpenAI proceeds with advertising, other companies will follow. The precedent will normalize what should be seen as a profound compromise. Users will be told they are getting AI access for free, while paying with something far more valuable than subscription fees: their trust in the information they receive. The oracle will start selling ad space, and we will all pretend this does not change the nature of what it tells us.

I hope OpenAI chooses differently. The company has the resources and the stated mission to build AI that serves users rather than advertisers. However, hope is not a strategy, and economic incentives are persistent. If the oracle starts selling ad space, we should at least acknowledge clearly what we are trading away.

The Phantom on the Charts

Selena Gomez used an AI-generated neo-soul track on her Golden Globes Instagram post, then quietly deleted it. The song, "Where Your Warmth Begins" by Sienna Rose, had fooled her, along with the 2.6 million monthly Spotify listeners who stream Rose's music. The revelation that Rose is almost certainly not a real person triggered a minor crisis in music circles this week. However, the controversy reveals something larger than one fake artist slipping through algorithmic cracks. It demonstrates how completely unprepared streaming platforms are for the synthetic media era.

The evidence against Sienna Rose's authenticity is overwhelming. Between September and December 2025, Rose uploaded at least 45 tracks to streaming services — a pace that would exhaust any human artist. Rose has no social media presence whatsoever. No Instagram, no TikTok, no Twitter. Rose has never performed live. The biography describes Rose as an "anonymous neo-soul singer," which strikes me as absurd framing for an artist in 2026, when visibility drives streaming success and social media presence is essentially mandatory for breakout artists.

Additionally, Deezer confirmed that many of Rose's tracks are flagged as AI on their platform. The music itself sounds competent but generic — derivative of artists like Olivia Dean and Alicia Keys without the distinctive qualities that make those artists compelling. Listeners who pay attention describe the songs as smooth and pleasant but ultimately forgettable. This is precisely what you would expect from AI-generated content trained on neo-soul: technically proficient mimicry without artistic vision.

What troubles me is not that AI-generated music exists. The technology has been inevitable for years. What troubles me is how easily this phantom artist accumulated millions of streams, landed three songs on Spotify's Viral 50 playlist, and fooled a major celebrity into using the music for promotional content. The systems that are supposed to connect listeners with artists have no meaningful safeguards against synthetic performers colonizing the charts.

Spotify's position on AI-generated content is revealing. The platform officially permits such content but encourages proper labeling. This policy sounds reasonable until you examine its enforcement mechanisms — which appear to be nonexistent. Sienna Rose was not labeled as AI-generated. The profile presented Rose as a real artist. Spotify's algorithms promoted the music just as aggressively as they promote human musicians. The company essentially outsourced detection to listeners and journalists, waiting for public outcry before acknowledging the problem.

The economic implications are more concerning than the technical questions. Streaming platforms pay royalties based on play counts. Every stream of Sienna Rose's tracks transfers money from Spotify's royalty pool to whoever operates the Rose account. Assuming the 2.6 million monthly listeners generate conservative streaming numbers, that represents tens of thousands of dollars monthly flowing to a synthetic artist. This is not speculative future economics. This is happening now, at scale, with platform complicity.
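
The arithmetic behind that claim is easy to sketch. Here is a minimal back-of-envelope calculation, assuming a payout of $0.003 to $0.005 per stream and between two and six plays per listener each month; both figures are my own assumptions for illustration, not anything Spotify discloses per artist.

```python
# Back-of-envelope estimate of monthly royalties flowing to a synthetic artist.
# Every input here is an assumption for illustration; Spotify does not publish
# per-artist payout figures.

monthly_listeners = 2_600_000            # monthly listeners cited in the reporting
streams_per_listener = (2, 6)            # assumed plays per listener per month (low, high)
payout_per_stream_usd = (0.003, 0.005)   # assumed payout per stream (common public estimates)

low = monthly_listeners * streams_per_listener[0] * payout_per_stream_usd[0]
high = monthly_listeners * streams_per_listener[1] * payout_per_stream_usd[1]

print(f"Estimated monthly royalties: ${low:,.0f} to ${high:,.0f}")
# Under these assumptions: roughly $15,600 to $78,000 per month.
```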

The displacement effect accelerates as AI-generated artists proliferate. Consider the playlist dynamics. Spotify's Viral 50 has finite slots. Three of them currently belong to Sienna Rose. Those are three positions that real artists — people who spent years developing craft, building audiences, sacrificing financial stability to make music — did not get. The zero-sum nature of playlist placement means synthetic artists directly compete with humans for attention and revenue.

I recognize the counterargument that listeners do not care about authenticity if the music sounds good. Market dynamics will sort this out. If people enjoy Sienna Rose's tracks, why does it matter whether Rose is real? This argument misses the essential context. Listeners were not given a choice. They were not informed that they were streaming AI-generated content. The deception was built into the presentation. You cannot claim market efficiency when the market operates on false information.

The parallel with visual art is instructive. When AI-generated images flooded stock photo marketplaces and art platforms, the initial response was similar permissiveness. Platforms allowed AI content but recommended labeling. Predictably, most uploaders ignored the recommendations. The platforms responded with increasingly strict requirements: mandatory AI disclosure, separate categories, different royalty structures. Music streaming is now facing the same progression but starting from a weaker position because audio generation has advanced further than most listeners realize.

The technical challenge of detecting AI-generated music is significant but not insurmountable. Deezer apparently has functional detection systems. The limitation is not technological — it is institutional. Platforms have little incentive to aggressively police AI content when that content generates engagement and streams. The business model rewards volume, not verification. As a result, we get situations like Sienna Rose: obvious synthetic content operating openly until external pressure forces acknowledgment.

What happens when this scales? Sienna Rose is likely not unique, just the first to attract attention. The barrier to creating similar operations is minimal. Any entity with access to music generation models and basic knowledge of streaming platform mechanics can replicate this. We are probably looking at dozens or hundreds of similar projects already active, operating below the threshold of public notice. The economic incentives are clear. The risks are minimal. The platforms are passive.

The downstream effects on real artists range from concerning to catastrophic. Emerging musicians already struggle to break through algorithmic noise and playlist gatekeepers. Adding a layer of AI-generated competition that can produce unlimited content at near-zero marginal cost fundamentally alters the economics of music creation. If playlist slots and streaming revenue increasingly flow to synthetic artists, the financial foundation for human musicians erodes further. We risk creating a system where making music becomes economically irrational for all but the most successful human artists.

I want platforms to implement mandatory labeling for AI-generated content. Not recommended, not encouraged — mandatory, with enforcement. Separate playlist categories. Transparent disclosure in artist profiles. Different royalty structures that reflect the reduced production costs. These measures would not ban AI music, which is likely impossible and arguably undesirable. They would simply require honesty about what listeners are consuming.

The broader question is whether we want streaming platforms to be neutral conduits for any content that generates engagement, or whether we expect them to maintain distinctions between human creativity and machine output. The current trajectory points toward the former. Platforms will optimize for streams and engagement regardless of source. If synthetic artists outperform humans in algorithmic systems, those systems will promote synthetic content. The logic is perfectly consistent with platform incentives. It is also perfectly corrosive to human artistic culture.

Sienna Rose will likely disappear from Spotify in the coming weeks as pressure mounts. The account operator will probably launch similar projects under different names, having learned which patterns trigger detection. The cycle will repeat. Each iteration will be more sophisticated, harder to identify, more deeply embedded in platform infrastructure. We are watching the first stages of a transition that most of the music industry has not yet processed.

The phantom is on the charts. That should alarm everyone who cares about music as a human endeavor rather than an algorithmic optimization problem. The platforms know this is happening. They have chosen passivity. The only question now is how far we let this progress before demanding they choose differently.

The Revenue Panic That Reveals Everything

OpenAI's announcement that ChatGPT will begin showing ads represents more than a monetization pivot. It reveals a company in crisis mode, making decisions that directly contradict its founding principles at precisely the moment when trust and differentiation matter most. The timing could not be worse.

Sam Altman told the Financial Times in 2024 that he "hates" advertising and called combining ads with AI "uniquely unsettling." Those words were spoken less than two years ago. The CEO who built his reputation on thoughtful concerns about AI safety and alignment is now implementing exactly the business model he publicly condemned. This is not a gradual evolution of strategy. This is panic.

The revenue pressures driving this decision are well documented. OpenAI has committed to $1.4 trillion in AI infrastructure spending over the next eight years. The company expects to generate only "low billions" in revenue this year from 800 million weekly users. Additionally, despite astronomical user growth, the unit economics remain problematic. Free users generate costs without corresponding revenue. Subscription uptake has not scaled as hoped. The math forces uncomfortable choices.
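
The scale of the mismatch is easier to feel with a rough sketch. The flat averaging over eight years and the $5 billion placeholder for "low billions" are my simplifying assumptions, not reported figures.

```python
# Rough comparison of OpenAI's reported infrastructure commitment against revenue.
# The flat eight-year averaging and the $5B placeholder for "low billions" are
# simplifying assumptions, not reported figures.

infrastructure_commitment_usd = 1.4e12   # total commitment over eight years
years = 8
annual_revenue_usd = 5e9                 # placeholder for "low billions" this year
weekly_users = 800_000_000

annual_spend = infrastructure_commitment_usd / years    # ~$175 billion per year
revenue_per_user = annual_revenue_usd / weekly_users    # ~$6.25 per user per year
spend_per_user = annual_spend / weekly_users            # ~$219 per user per year

print(f"Average annual commitment:  ${annual_spend / 1e9:.0f}B")
print(f"Revenue per weekly user:    ~${revenue_per_user:.2f} per year")
print(f"Commitment per weekly user: ~${spend_per_user:.0f} per year")
```

However you adjust the inputs, the gap between a few dollars of revenue per user and a few hundred dollars of committed spend per user is the pressure the rest of this piece describes.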

However, advertising does not solve OpenAI's fundamental problems. It creates new ones while accelerating existing vulnerabilities. The company faces intense competition from Anthropic, Google, and others who can credibly claim higher standards for user trust. Claude explicitly positions itself on careful alignment and transparent limitations. Anthropic's subscription model means users know exactly what they are paying for and why. OpenAI just surrendered that high ground.

The competitive damage extends beyond marketing claims. Developers and enterprise customers — the segments where actual revenue concentrates — care deeply about model reliability and trustworthiness. If ChatGPT responses might be subtly influenced by advertising relationships, even through second-order effects, that calls into question the integrity of the entire platform. And those paying customers have clear alternatives that do not carry this compromise. OpenAI is risking its premium positioning to chase advertising revenue that will primarily come from free-tier users who were never going to convert anyway.

The precedent OpenAI sets here will define the industry's trajectory. If the leading AI company monetizes through advertising, others will follow. The question is whether OpenAI wants to be the company that normalizes ads in AI or the company that demonstrates alternatives exist. The current choice suggests the former. This damages not just OpenAI but the broader perception of AI assistants as neutral tools rather than attention-monetization systems.

I recognize the appeal of the expansion narrative. Ads enable free access. More users get AI capabilities. The barrier to entry drops. Democratic access increases. This framing treats advertising as a necessary trade-off for broader distribution. However, the framing ignores what gets traded away. When the oracle starts selling ad space, the nature of what it tells us changes. Users learn to doubt. Trust erodes. The cognitive overhead of evaluating whether responses serve users or advertisers becomes constant background noise.

The timing makes this particularly self-destructive. OpenAI is currently fighting perception battles on multiple fronts. The company faces questions about governance after last year's board drama. It confronts skepticism about whether AGI development can be safely managed by a profit-driven entity. It deals with regulatory scrutiny in multiple jurisdictions. Adding advertising to this mix does not expand the narrative options. It confirms the worst interpretations.

Specifically, the move signals that revenue pressure has overwhelmed mission considerations. OpenAI claimed it needed to transition from nonprofit to capped-profit structure to raise capital for AI safety research. Critics argued this was simply about money. The company insisted alignment remained central. Then it introduced the exact monetization method its CEO previously called uniquely problematic for AI systems. The pattern speaks for itself.

OpenAI had alternatives. The company could have focused on enterprise services where customers pay substantial fees for reliable capabilities. It could have offered educational discounts funded by commercial revenue. It could have maintained free tiers with reduced capacity instead of introducing advertising incentives. These paths are harder. They generate less total revenue. They require saying no to growth opportunities. However, they preserve what made OpenAI distinctive in the first place.

The decision reveals how thoroughly commercial logic has displaced the safety-first rhetoric. An organization genuinely concerned about AI alignment would recognize that advertising creates misalignment by design. The system must serve two masters — users seeking information and advertisers seeking attention. Those interests conflict. No amount of separation between ad display and model responses changes the underlying economic reality. OpenAI is deliberately introducing the exact dynamic it claims to want to prevent in more sophisticated future systems.

I expect the implementation will be gradual and careful. The initial ads will be clearly labeled. They will appear only at the end of responses. OpenAI will publish guidelines about prohibited categories. The company will emphasize user privacy protections. None of this addresses the core problem. Advertising businesses always expand. Revenue targets increase. Growth slows. Pressure builds to make ads more prominent, more targeted, more integrated. The trajectory is consistent enough across companies that treating OpenAI as an exception requires ignoring decades of evidence.

The reputational cost extends beyond users. Researchers who believed OpenAI represented a different approach to AI development now have evidence otherwise. Policymakers who gave the company benefit of the doubt have one less reason to do so. Employees who joined because they believed in the mission must reconcile that belief with leadership decisions that contradict stated values. The damage accumulates across stakeholder groups.

Additionally, the move undermines OpenAI's lobbying position. The company advocates for AI regulation that emphasizes safety and responsible deployment. It argues that leading AI developers should self-regulate before governments impose heavy-handed rules. Then it implements a monetization strategy that prioritizes revenue over user interests at exactly the moment when demonstrating responsibility would strengthen the self-regulation argument. The timing is politically tone-deaf.

This is not a disaster because advertising is inherently evil. It is a disaster because OpenAI specifically, at this specific moment, needed to demonstrate that AI development can follow different incentives than the ad-supported internet. The company had the resources, the positioning, and the stated mission to be that example. Instead, it chose the path of least resistance and maximum short-term revenue. That choice reveals more about OpenAI's actual priorities than any mission statement.

The company will survive this decision. ChatGPT has enough momentum that ads will not immediately destroy usage. Some free-tier users will accept the trade-off. Revenue will increase. Quarterly metrics will improve. However, OpenAI just accelerated its transformation from the company that might build AGI safely to the company that builds engagement optimization systems with sophisticated language capabilities. The distinction matters. The timing of abandoning that distinction could not have been worse.

When Talent Returns to Where the Compute Lives

The news from Thinking Machines Lab landed this week with a thud that reverberated across the AI industry. Barret Zoph, the startup's co-founder and chief technology officer, has departed — reportedly dismissed after Mira Murati discovered he had shared confidential company information with competitors. Shortly afterward, OpenAI confirmed that Zoph, along with fellow co-founders Luke Metz and Sam Schoenholz, would be returning to the company they left barely a year ago. Additional departures followed: researcher Lia Guy heading to OpenAI, and at least one other senior staff member, Ian O'Connell, also leaving. The exodus comes just six months after Thinking Machines closed a record-breaking $2 billion funding round that valued the company at $12 billion.

I have watched this pattern before. A star executive leaves a dominant incumbent to start something new. They raise enormous sums on the strength of their reputation and the promise of a different approach. They recruit top talent with equity stakes and the allure of building from scratch. Then reality intrudes. The resources that seemed abundant prove insufficient. The freedom that attracted them becomes indistinguishable from the absence of infrastructure. The gravitational pull of the incumbents — with their data, their compute, their distribution — proves difficult to escape. Talent returns to where the leverage lives.

The circumstances of Zoph's departure are murky and contested. WIRED reported allegations of confidential information being shared with competitors. OpenAI's statement claimed they "do not share these concerns" about the conduct in question. The truth likely lies somewhere in the middle, obscured by competing narratives and legal considerations. However, the specific reasons matter less than what the broader departure pattern reveals about the structural challenges facing AI startups in the current moment.

Thinking Machines was supposed to be different. Murati brought impeccable credentials — former CTO of OpenAI during its most transformative period, architect of the GPT-4 launch, experienced navigator of the complex terrain where research meets product. The founding team combined deep technical expertise with operational experience at the frontier. The funding — $2 billion in a seed round led by Andreessen Horowitz, with participation from Nvidia, AMD, and Jane Street — should have provided runway measured in years, not months. If any startup could challenge the incumbents, this one had the pedigree.

What went wrong remains subject to speculation, but the Fortune reporting offers clues: concerns about compute constraints, uncertainty about product direction, questions about business model clarity. These are not idiosyncratic failures. They are the predictable challenges that emerge when you attempt to build a frontier AI lab from scratch in an industry where the moat is measured in data centre capacity and the cost of a training run can exceed the GDP of small nations.

The compute problem deserves particular attention. Modern AI capabilities emerge from scale — vast datasets processed through enormous models on clusters of specialised hardware that cost hundreds of millions of dollars to build and operate. The incumbents have spent years and billions securing this infrastructure. They have negotiated long-term contracts with cloud providers, built their own data centres, and cultivated relationships with chip manufacturers that give them privileged access to scarce supply. A startup with $2 billion can rent compute. It cannot replicate a decade of infrastructure investment.

This creates a dynamic where the most talented researchers face a stark choice. They can join a startup and spend their time waiting for training runs that never quite have enough capacity, debugging infrastructure that more established labs solved years ago, and watching their equity stakes lose value as funding conditions tighten. Or they can return to the incumbents, where the compute is plentiful, the infrastructure is mature, and the work can proceed at pace. The choice is not about loyalty or courage. It is about where one can have the most impact with limited time.

Additionally, the talent dynamics compound the resource constraints. Each departure from a startup makes subsequent departures more likely. When senior researchers leave, the remaining team inherits their responsibilities without inheriting their expertise. Projects stall. Institutional knowledge evaporates. The researchers who remain watch their colleagues depart for better-resourced environments and wonder whether they should follow. The startup that loses its CTO must either promote from within — elevating someone who now lacks the team they were supposed to lead — or recruit externally into a situation that looks increasingly precarious. Soumith Chintala, the PyTorch co-creator appointed as Thinking Machines' new CTO, inherits a formidable challenge.

I find myself thinking about what Murati must be experiencing. She left OpenAI at the peak of her influence to build something independent. She assembled a team of people she had worked with, people she trusted. She raised more money in a seed round than most companies raise in their entire existence. Yet here she is, less than eighteen months later, watching the founding team scatter back to the place they left together. The personal dimension of this — the sense of a shared vision unravelling — must be acute.

However, I resist the temptation to read this as a story of individual failure. The structural forces arrayed against AI startups are formidable. The incumbents have compounding advantages that grow with each passing quarter. They have the compute, the data, the distribution channels, the customer relationships, and the regulatory relationships that startups must build from nothing. They have the ability to hire talent at compensation levels that would destroy a startup's cap table. They have the patience that comes from diversified revenue streams and patient capital.

The implications extend beyond Thinking Machines. Every AI startup must now confront the question of whether the independent path remains viable. The investors who funded Murati's venture will scrutinise future pitches more carefully. The researchers contemplating startup opportunities will weight the risks more heavily. The narrative that talented people can leave incumbents and build competitive alternatives — a narrative that sustained much of the tech industry's dynamism over the past decades — will face renewed scepticism.

Perhaps this is simply the maturation of a young industry. In the early days of any technology, garage-scale innovation can compete with established players because the technology itself is immature and advantage accrues to insight rather than infrastructure. As the technology matures, scale becomes decisive. The semiconductor industry consolidated. The cloud computing industry consolidated. The AI industry may be following the same trajectory, compressing a decades-long pattern into a handful of years.

The talent will go where it can be most effective. The compute will remain where it has already been built. The startups that survive will be those that find niches the incumbents cannot easily address — vertical applications, specialised domains, markets too small to attract attention from companies optimising for billion-user scale. The era of challenging OpenAI and Anthropic and Google head-on may already be closing. Thinking Machines' struggles suggest the window was narrower than anyone wanted to believe.

I watch the departures from Thinking Machines Lab and I see not failure but physics. Talent flows toward leverage. Leverage concentrates where resources accumulate. Resources accumulate where previous advantages compound. The gravity is real. The escape velocity is higher than anyone expected.