Plutonic Rainbows

The World Before the Index

Most of what humanity has written, recorded, and published does not exist on the internet. Not even close. Large language models, search engines, recommendation algorithms: they all treat the web as though it were a reasonable proxy for human knowledge. It is not. It is a shallow, recent, and spectacularly incomplete sample.

Google has scanned tens of millions of books, but most sit behind copyright walls, neither fully searchable nor publicly readable. The rest exist on shelves, in basements, in charity shops where nobody is looking. The vast majority of the world's cultural heritage has never been digitized in any form. Not suppressed, not restricted. Just absent.

The pre-internet age was not merely analogue. It was geographically bounded. John Holbo, writing on Crooked Timber, described it as a kind of epistemic accident: you knew what the six people around you knew, what your local library stocked, what your local record shop carried. A left-handed guitarist might never discover that left-handed guitars existed. That accidental ignorance, that texture of ordinary life, was never documented in a form that any crawler could find. It was the water, not the fish.

The physical record is vanishing too. When the Chicago Sun-Times consolidated its suburban papers, photographs from the Aurora Beacon-News and Elgin Courier-News were thrown in the bin. The Louisville Courier Journal's archive of roughly ten million photographs nearly followed before the University of Louisville negotiated a last-minute donation. These aren't edge cases. They are the norm for local journalism across America and, by extension, for any community record that depended on newsprint.

Meanwhile, born-digital content fares no better. Pew Research found in 2024 that a quarter of all web pages that existed between 2013 and 2023 have already disappeared. MySpace's 2019 migration destroyed millions of songs, videos, and photographs in what the Long Now Foundation described as irreversible data loss. Andy Warhol's digital artwork from the 1980s sat stranded on obsolete Commodore hardware for decades.

The gap is self-reinforcing. If knowledge isn't online, AI can't learn it. If AI can't surface it, fewer people encounter it. If fewer people encounter it, there's less incentive to digitize it. The loop tightens and the memory without metadata that defined most of human experience drifts further from retrieval.

I think about this when people describe AI as a knowledge tool. It is a tool for a particular kind of knowledge, overwhelmingly English-language, overwhelmingly post-1990s, overwhelmingly sourced from the kind of person who publishes on the internet. Everything else, the vast majority of what humans have thought and made and recorded, sits in formats that no model will ever ingest. Not because the technology couldn't handle it, but because nobody is going to scan it.

The Thinker and the Talker

Alibaba released Qwen3.5-Omni on Monday and the most interesting thing about it is not what the model can do. It is what Alibaba chose to keep.

The Qwen family has been downloaded over 700 million times on Hugging Face, with more than 100,000 derivative models. That makes Alibaba the most-downloaded open-weight AI provider on the platform, and that dominance was deliberate: a land grab disguised as generosity. Now, with Qwen3.5-Omni, the generosity has limits.

The model splits into two components the team calls the Thinker and the Talker. The Thinker handles reasoning across text, images, audio, and video. The Talker converts that reasoning into streaming speech, frame by frame, through a lightweight convolutional renderer called Code2Wav. The separation is not just clean design. It means external systems (safety filters, retrieval pipelines, function calls) can intervene between cognition and output. Enterprise deployment teams will notice.
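The value of that separation is easy to show in miniature. The sketch below is purely illustrative (none of these function names come from Alibaba's API): a reasoning stage produces text, any number of external interceptors can rewrite or block it, and only then does a rendering stage turn it into streamed output.

```python
from typing import Callable, List

def thinker(prompt: str) -> str:
    """Stand-in for the reasoning component: returns text to be spoken."""
    return f"Here is my answer to: {prompt}"

def safety_filter(text: str) -> str:
    """An external system intervening between cognition and output."""
    return text.replace("password", "[redacted]")

def talker(text: str) -> List[str]:
    """Stand-in for the streaming renderer: yields frame-sized chunks."""
    return [text[i:i + 16] for i in range(0, len(text), 16)]

def pipeline(prompt: str, interceptors: List[Callable[[str], str]]) -> List[str]:
    thought = thinker(prompt)
    for intercept in interceptors:  # safety filters, retrieval, function calls
        thought = intercept(thought)
    return talker(thought)

frames = pipeline("what is the password", [safety_filter])
```

Because the interceptors sit on a plain text boundary, an enterprise team can slot in its own filters without touching either model component. That is the design property the Thinker/Talker split buys.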

The numbers are aggressive. A 256,000-token context window that can absorb ten hours of continuous audio or four million frames of 720p video. Speech recognition in 113 languages. Voice cloning via the API. An emergent capability the team calls audio-visual vibe coding: the model writes functional code by watching screen recordings with spoken instructions, without having been trained on that task. That last detail sounds like marketing until you remember that emergent capabilities in large models have a track record of being real and unsettling in equal measure.
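A back-of-envelope check shows how aggressive those numbers really are. Assuming, purely for illustration, that the entire window were spent on audio alone:

```python
context_tokens = 256_000
audio_seconds = 10 * 60 * 60        # ten hours of continuous audio
tokens_per_second = context_tokens / audio_seconds
print(round(tokens_per_second, 1))  # roughly 7 tokens per second of audio
```

Common neural speech tokenizers emit tens of tokens per second, so fitting ten hours into 256,000 tokens implies heavy temporal compression somewhere in the audio encoder, which is its own engineering claim.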

On benchmarks, it outperforms Gemini 3.1 Pro on music understanding (72.4 to 59.6) and edges it on audio comprehension. Voice stability scores undercut ElevenLabs by an order of magnitude. These are not incremental wins.

But only the Light variant ships as open weights. Plus and Flash, the versions you would actually deploy, are API-only through Alibaba's DashScope. No technical paper has been published. No weights to inspect. The 700 million download count was built on open licensing, and the moment the Qwen team produced something genuinely at the multimodal frontier, they pulled it behind a paywall.

This is not hypocrisy. It is strategy. Open-weight text models seed the ecosystem, create dependency, train a generation of developers on your API surface. Then, when voice and video become the competitive edge, you charge for access. Alibaba built the largest open-source AI distribution network in history specifically so they could close it at the right moment.

The Thinker reasons for free. The Talker costs money. That might be the most honest thing about the whole architecture.

Thirty-Three Million for a Suggestion Box

Pankaj Gupta built a product that let 1.3 million people vote on which AI model gave the best answer. Jeff Dean invested. Biz Stone invested. The CEO of Perplexity invested. a16z crypto's Chris Dixon led a $33 million seed round. On Tuesday, Gupta announced Yupp.ai is winding down, less than ten months after launch. Platform access ends April 15.

The stated reason is the one every failed startup reaches for: product-market fit. "The AI model capability landscape has changed dramatically in the last year alone," Gupta wrote. Which is a polite way of saying the product was a leaderboard for a race where the runners kept swapping positions between refreshes.

Yupp's premise made a kind of sense when it launched in June 2025. Back then, picking between Claude and GPT and Gemini and whatever Mistral was calling itself that week felt consequential. You'd paste a prompt into three chat windows, squint at the results, and develop superstitions about which one "got you." Yupp crowdsourced that process across 800 models. Millions of preference signals a month, all feeding into a ranking system that was supposed to help ordinary people navigate the model landscape.
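The standard way to turn millions of pairwise votes into a leaderboard is an Elo-style rating update, the same technique behind Chatbot Arena. Yupp never published its ranking method, so this is a generic sketch of the technique, not their implementation, and the model names are placeholders:

```python
from collections import defaultdict

def elo_rankings(votes, k=32, base=1000.0):
    """votes: iterable of (winner, loser) pairs from user preferences."""
    ratings = defaultdict(lambda: base)
    for winner, loser in votes:
        # Expected score of the winner under the Elo logistic model.
        expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        delta = k * (1.0 - expected)  # small update when the win was expected
        ratings[winner] += delta
        ratings[loser] -= delta
    return dict(ratings)

votes = [("model_a", "model_b"), ("model_a", "model_c"),
         ("model_b", "model_c"), ("model_a", "model_b")]
board = sorted(elo_rankings(votes).items(), key=lambda kv: -kv[1])
```

The weakness is visible in the update rule itself: when every model wins about half its matchups, the deltas cancel and the ratings converge toward the same number, which is exactly the "interchangeably good enough" failure mode described below.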

The problem is that ordinary people stopped caring. Not because the models got worse, but because they got interchangeably good enough. When the gap between first place and eighth place on a benchmark is statistical noise, a consumer taste-test platform becomes a thermometer for a room that's already at temperature.

There's a crueller reading. AI labs figured out that crowdsourced preferences from casual users are a blunt instrument. The shift toward agentic workflows meant models needed to impress other models, not people scrolling on their phones. For the kind of reinforcement learning that matters now, labs hire domain experts and run evaluations against PhD-level feedback. The crowd was never going to be precise enough.

Forty-five angel investors. DeepMind's chief scientist. A $33 million cheque from one of the most connected funds in Silicon Valley. And the thing it bought was ten months of server time and a blog post titled "winddown." The economics of wrapping someone else's API haven't changed since Anthropic started enforcing its terms of service. If anything, the lesson has sharpened. The thinner your layer, the faster the substrate makes you irrelevant.

Some of Yupp's employees are reportedly joining a "well-known" AI company. Which sounds like a soft landing until you consider that it's the same trajectory the product followed: absorbed back into the infrastructure it was built to evaluate.

The Skating Rink That Soundtracked Tomorrow

Room 13, BBC Maida Vale Studios. Before it held oscillators and tape machines, the building was a roller skating palace. Opened in 1909 on Delaware Road, converted by 1934, given to a handful of BBC engineers in 1958 with two thousand pounds and whatever surplus military electronics they could find at Portobello Market.

Delia Derbyshire joined the Workshop in 1962 with a mathematics and music degree from Cambridge and a rejection letter from Decca Records, who did not employ women in their studios. In eleven years she created sound for roughly 200 programmes. The Doctor Who theme remains the most famous: Ron Grainer handed her a single sheet of A4 manuscript paper with annotations like "wind bubble" and "cloud," and she realised it from tape-spliced fragments of a plucked string, white noise, and test-tone oscillators meant for calibrating equipment. When Grainer heard it he asked, "Did I really write this?" She said, "Most of it." The BBC would not credit her for another fifty years.

None of this is news. The Workshop's history has been thoroughly documented. What interests me is what those sounds have become now that the context they were made for no longer exists.

The Radiophonic Workshop did not just make television themes. It soundtracked a specific institutional vision of Britain: Open University lectures, schools broadcasts, public information films. The BBC under its post-war mandate believed that educating the nation was a public good, and these electronic textures were the sonic furniture of that belief. Mark Fisher identified this precisely. Hauntological music, he wrote, constitutes "an oneiric conflation of weird fiction, the music of the BBC Radiophonic Workshop, and the lost public spaces of the so-called postwar consensus." That consensus ended in 1979.

The Workshop itself held on until 1998, killed by John Birt's internal market policies. Elizabeth Parker, the last remaining composer, switched off the lights. The archive was nearly discarded.

When Derbyshire died in 2001, 267 reel-to-reel tapes were found in her attic. They sat there like letters from someone who had stopped writing decades earlier. She left the BBC in 1973 and abandoned music entirely by 1975.

Julian House of Ghost Box Records described the Workshop's older material as "the reverb of a reverb of a reverb." That phrase captures how these sounds circulate now. They are not nostalgic. Nostalgia implies you want to go back. This is different. The sounds point forward, toward a public future that was defunded and dismantled, and the fact that they still sound futuristic is the cruel part. They describe a destination cancelled while the signal was still transmitting.

Simon Reynolds called the tension in Ghost Box's work a pull between "heathen heritage" and "modernizing socialism." The Workshop operated at the intersection of state-funded infrastructure and radical experimentation, and both feel equally impossible now.

I keep returning to those 267 tapes in the attic. An entire career's parallel output, boxed and unlabelled, surviving because nobody thought to throw them away.

The Night Four Women Became One Sentence

Fiera Milano, March 1991. An exhibition hall on the city's outskirts, a fifteen-metre marble runway, and a U-shaped seating plan that separated press from celebrities from international buyers. Gianni Versace had staged shows before, obviously. But nothing like what happened at the end of this one.

The collection itself was pure Versace at full volume. Boxy cropped jackets over Lycra catsuits printed with baroque scrollwork. Studded leather cut alongside pleated skirts. Thigh-high boots that had no business being paired with silk but somehow were. The colour ran from black through to saturated reds, greens, oranges, and yellows, all of it rendered in that specific register Versace owned: sexy, loud, and entirely uninterested in apology.

Then the finale. George Michael's Freedom! '90 hit the speakers and out came Linda Evangelista, Cindy Crawford, Naomi Campbell, and Christy Turlington. Not walking individually. Not one after another. Arm in arm, four across, lip-syncing the lyrics, laughing, mugging for the front row. They wore dresses in red, yellow, and black. George Michael watched from his seat.

[Image: the four supermodels at the Versace AW91 finale]

The previous October, David Fincher had released the music video for the same song, starring all four (plus Tatjana Patitz). No George Michael in frame, just supermodels lip-syncing in a stripped-down loft while a jukebox exploded. The video made them icons outside fashion. The Versace finale made that iconography physical, live, happening in a room full of people who understood they were watching something that couldn't be repeated.

The backstory matters. Liz Tilberis, then editor of British Vogue, had told Versace to stop splitting the top models across different slots. Book them together. Let their combined weight collapse the room. He listened. And the result was not just a fashion show but a proof of concept: the runway could function as spectacle, as cultural event, as something people who had never touched a copy of Vogue would eventually see and remember.

Before this night, runway shows were trade events. After it, they were content. Every designer who stages a celebrity-packed front row, every brand that livestreams its collection, every fashion week headline that leads with a name rather than a garment owes a debt to what happened at Fiera Milano. Versace understood something his contemporaries didn't, or wouldn't admit: the models were the collection. The clothes were spectacular. But four women walking in sync to a pop song, grinning like they owned the building (they did), turned a presentation into a cultural marker that outlived the season, the decade, and eventually the designer himself.

Cindy Crawford later said it felt like all the stars had aligned. She wasn't wrong. But stars don't align by accident. Someone has to set the stage.

Lobbying With the Thing You Built

Anthropic has been privately briefing government officials about Claude Mythos for weeks, telling them the model could enable large-scale cyberattacks. The briefings started before the CMS leak made the model public. That detail matters.

The leaked draft was unusually candid. It admitted that "AI is currently providing more meaningful capability uplift to attackers than to defenders, and that gap is widening." Mythos can chain attack actions autonomously and run multiple hacking campaigns without human oversight. Security analysts at CSO Online noted the model's recursive self-fixing capability, compressing the gap between human and machine software engineering. Between the leaked draft and the external analysis, the picture is of something closer to a weapon than a product.

The question is what you do with that framing. Gizmodo called it directly: the "classic AI company playbook of talking up the dangers of a model to highlight how powerful and capable it is." I think that's right and incomplete at the same time. Anthropic is doing three things simultaneously: fighting the Pentagon over ethical guardrails on military AI use, warning those same officials that its product could facilitate mass cyberattacks, and preparing for what's rumoured to be a $60 billion IPO later this year. Each of those three positions reinforces the other two. The safety brand makes the government warnings credible. The government warnings make the capability story investable. The IPO pressure makes the capability story necessary.

None of which means the warnings are fabricated. Mythos triggered ASL-3 protections under Anthropic's Responsible Scaling Policy, meaning the company's own framework classified it as requiring enhanced security for model weights and deployment restrictions targeting cyber and biological misuse. Whether it approaches ASL-4, the tier defined by models that become "the primary source of national security risk in a major area," hasn't been disclosed. The leaked capabilities suggest the boundary is closer than anyone expected.

I wrote last week about how conveniently the leak landed. The government briefings add another layer. Pre-leak, they look like responsible disclosure. Post-leak, they look like groundwork, the kind of advance positioning that makes an "accidental" revelation feel less accidental. A company that already told the government "this thing is dangerous" has a much easier time controlling the narrative once the public finds out.

CrowdStrike lost roughly $15 billion in market cap on March 27, the day after the leak. Nearly half of cybersecurity professionals now rank agentic AI as their top threat vector. Anthropic gets to sit at the centre of both the problem and the proposed solution, which is a remarkable place to be when you're about to go public.

Downloading Room 13

Spitfire Audio released a sample library in February 2025: 1,087 sounds from the BBC Radiophonic Workshop, recorded at the original Maida Vale studios, sold as a virtual instrument for £149. You load it into your DAW and there they are. Test-tone oscillators. Junk percussion. Tape loops. The raw material of a future that got cancelled nearly three decades ago.

The Workshop opened in 1958, in Room 13 of Maida Vale. Desmond Briscoe and Daphne Oram built it to produce experimental sound for BBC radio and television: effects, incidental music, theme tunes for programmes that hadn't been invented yet. Delia Derbyshire arrived in 1962 and made the Doctor Who theme by constructing each note individually on quarter-inch mono tape, inch by inch. She and Dick Mills unwound the entire reel along the corridor to check for anomalies. Ron Grainer heard the finished piece and asked, "Did I write that?" The BBC refused Derbyshire a credit or royalties.

The Workshop closed in March 1998. Mark Ayres catalogued roughly 3,500 reels of tape. When Derbyshire died in 2001, her partner found 267 tapes in her attic. Tea chests and cardboard boxes, labels peeling off. The originals are too fragile to play.

I keep returning to Chris Christodoulou's 2018 paper, which frames these sounds as artefacts of "a utopian future that has been irrevocably lost." Mark Fisher was more blunt: the Workshop was state-funded. Thatcherism killed the model. The 1998 closure wasn't administrative. It was political.

Ghost Box Records, founded in 2004, built an entire aesthetic from the Workshop's DNA. Simon Reynolds described their approach to sampling as a kind of séance, retrieving voices from dead formats, making them undead rather than restored. You'd find these records in charity shop bins, between warped folk compilations and cracked library music LPs. The medium was part of the message. There's something about the weight of a 10-inch pressing that a FLAC file can't replicate, though I suspect that's nostalgia rather than acoustics.

The Spitfire library is something else entirely. The sounds are clean, categorised, tagged for search: Archive Content, Found Sounds, Junk Percussion, Tape Loops, Synths, Miscellany. Peter Howell and Paddy Kingsland contributed. Dick Mills, who unwound that tape with Derbyshire over sixty years ago, helped record new material. You can have the oscillators, the tape artefacts, the junk percussion. What you can't download is Room 13 itself: the institution, the funding model, the specific arrangement of public money and creative latitude that made someone think it was worth paying Delia Derbyshire to build a bassline from a single plucked string, one inch of tape at a time.

Thirty Thousand at Six in the Morning

The email arrived at 6 a.m. It came from "Oracle Leadership," which is not a person. "After careful consideration of Oracle's current business needs, we have made the decision to eliminate your role." Within minutes, system access was revoked. No call from a manager, no meeting, not even an individual name on the message. Just a mass termination in corporate passive voice.

TD Cowen estimates between 20,000 and 30,000 employees received that message on Tuesday morning. At the upper end, that is roughly 18% of Oracle's 162,000-person workforce. Cuts landed across the US, India, Canada, Mexico, and Uruguay. Divisions like Revenue and Health Sciences and SaaS and Virtual Operations Services saw reductions exceeding 30%. Some employees with over twenty years of service found out before sunrise.

On Blind and Reddit, confirmation posts appeared in real time. "After 10 years, I've been let go." "Just got an email at 5 am... over 20 years service... nice." Screenshots of the termination email circulated on LinkedIn hours before Oracle acknowledged anything. The company still hasn't issued a formal statement.

The money freed up, somewhere between $8 and $10 billion in annual cash flow, feeds Oracle's $156 billion AI infrastructure bet. Capex for fiscal 2026 is projected at $50 billion, nearly seven times the $6.9 billion spent two years ago. To finance this, Oracle raised $58 billion in debt over the past two months, with $50 billion from a single bond offering. Total debt now exceeds $124 billion. Moody's rates them Baa2, two notches above junk.

Oracle is not in trouble. Net income jumped 95% last quarter to $6.13 billion. Contracted future revenue sits at $523 billion. This isn't a company shedding weight to survive. It's a profitable company that decided its employees are worth less than GPUs.

I wrote about Meta making the same calculation two weeks ago. Fifteen thousand jobs, redirected toward a reported $115-135 billion in capex. The arithmetic is becoming standard. Claudio Lupi put it plainly: "Larry Ellison just showed every enterprise tech company the playbook: lay off your people, buy more GPUs."

What makes Oracle's version particularly grim is the leverage. Microsoft, Meta, and Google can fund their collective AI spending from cash reserves. Oracle cannot. Free cash flow went negative by $10 billion last quarter. The stock has shed more than half its value since September, erasing $463 billion in market cap. Oracle is borrowing at near-junk rates to build data centres it hopes to lease to AI companies. If those contracts don't materialise at the scale Ellison projects, the debt stays. The workers do not come back.

Broadband Money as an AI Weapon

The Trump administration has found a creative way to kill state AI laws: threaten to take away their internet money.

The legislative route failed first. Senator Ted Cruz proposed a 10-year moratorium on all state AI laws last May, tucked into the budget reconciliation bill. The House passed it 215-214. The Senate stripped it out 99 to 1. Republican Senators Marsha Blackburn and Josh Hawley voted against it. Blackburn's reason was specific: it would override Tennessee's deepfake protections. When your own party's senators won't back your preemption play by a margin of 99-1, the legislative strategy is dead.

The states made their position clear before the White House even responded. In November, thirty-six attorneys general, led by New York's Letitia James, wrote to Congress opposing federal preemption. The coalition includes Idaho, Indiana, Kansas, Louisiana, Mississippi, South Carolina, Tennessee, and Utah alongside the blue states you'd expect. Republican state lawmakers have been even more blunt. Angela Paxton, a Republican Texas senator, put it plainly: "When you have no regulation, what you have is the wild west." Doug Fiefia, a Republican Utah representative and former Google employee, said Congress "not only will not act, they can't act."

Then came the executive order. Signed in December 2025, it established three enforcement tools. A DOJ litigation task force to challenge state laws in court. A Commerce Department review to label state regulations "onerous." And the sharpest blade: conditioning $42.5 billion in BEAD broadband infrastructure funding on states agreeing not to enforce their AI laws. Texas alone received $1.27 billion in broadband grants. That's a lot of leverage disguised as telecommunications policy.

The order's legal footing looks precarious. John Bergmayer of Public Knowledge pointed to the 2023 Supreme Court decision in National Pork Producers Council v. Ross, noting that states routinely regulate interstate commerce. Congress has now rejected preemption twice. Without an actual federal regulatory framework in place, the argument that state laws "conflict with federal law" doesn't have much federal law to conflict with.

Meanwhile, states keep passing bills. Washington's governor signed chatbot and provenance laws the last week of March. Colorado's AI Act, delayed once already, finally takes effect this summer. California's transparency requirements for frontier AI developers are already on the books. The state-by-state picture suggests the administration faces a whack-a-mole problem it created for itself by rescinding Biden's executive order on inauguration day without replacing it with anything substantive. The vacuum invited exactly the fragmentation they're now scrambling to contain.

I keep thinking about the broadband angle. It's a genuinely novel tactic, using infrastructure money as a regulatory weapon against an entirely different policy domain. Whether courts let it stand is one question. Whether it signals a pattern of using federal funding as coercive leverage against AI dissent is a more uncomfortable one.

The .map File That Mapped Everything

A 59.8 megabyte source map file sitting in the npm registry. That's how 512,000 lines of Claude Code's TypeScript ended up on GitHub this morning, mirrored across half a dozen repositories before most people had finished their coffee. Security researcher Chaofan Shou found it in version 2.1.88 of the @anthropic-ai/claude-code package. Bun's bundler generates source maps by default. Nobody added them to .npmignore. The entire codebase shipped.
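The class of mistake is easy to guard against: before publishing, scan whatever the packager is about to ship for stray source maps. This is a generic pre-publish check, not Anthropic's tooling, and the demo file names are invented; in practice you would point it at the tree unpacked from `npm pack`.

```python
import os
import tempfile

def find_source_maps(root):
    """Return every .map file under root that would ship with the package."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".map"):
                hits.append(os.path.join(dirpath, name))
    return hits

# Demo on a throwaway tree standing in for an unpacked package tarball.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "cli.js"), "w").close()
open(os.path.join(demo, "cli.js.map"), "w").close()  # the file that leaks
shipped_maps = find_source_maps(demo)
```

A nonzero result in CI means the bundler's maps are about to ship, which is the one-line gate that would have kept 512,000 lines of TypeScript private.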

The detail that keeps pulling me back is Undercover Mode. Buried in the source is a system that activates when Claude Code detects it's being used by an Anthropic employee in a public repository. It injects instructions into the system prompt telling the model not to "blow your cover," blocking it from outputting internal codenames, unreleased model references, or Slack channel names. And the entire mechanism, along with everything it was supposed to protect, shipped in a .map file that anyone could unpack with a single command. I wrote last week about Anthropic leaking its own model details through a CMS misconfiguration. Two packaging errors in the same month from a company whose entire brand is safety and careful deployment. The pattern is getting harder to call coincidence.

What the code actually reveals is more interesting than the leak itself. Claude Code runs 40+ discrete tools behind a permission engine that classifies risk levels, with a YOLO classifier for auto-approving low-risk actions. The multi-agent architecture spawns parallel sub-agents it calls "swarms," each running in isolated contexts. A 46,000-line query engine handles all LLM orchestration. This is not a wrapper around an API. It is a full operating environment.
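The general shape of such a permission gate is simple, even if the leaked engine is far more elaborate. The risk table and return values below are illustrative assumptions, not Anthropic's actual classifications: tool calls are bucketed by risk, low-risk calls can be auto-approved, and anything unknown defaults to the cautious path.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Illustrative risk table; the real engine's classifications are unknown.
TOOL_RISK = {
    "read_file": Risk.LOW,
    "list_dir": Risk.LOW,
    "write_file": Risk.MEDIUM,
    "run_shell": Risk.HIGH,
}

def permission_gate(tool: str, auto_approve_low: bool = True) -> str:
    """Return 'allow' or 'ask' for a proposed tool call."""
    risk = TOOL_RISK.get(tool, Risk.HIGH)  # unknown tools default to HIGH
    if risk is Risk.LOW and auto_approve_low:
        return "allow"  # the auto-approve fast path for low-risk actions
    return "ask"        # everything else waits for the user
```

The interesting design decision is the default: an unmapped tool is treated as high-risk, so the classifier can only ever fail toward asking the user rather than toward silent approval.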

Then there's KAIROS, an always-on background daemon that monitors your project even when you're not actively prompting. It runs memory consolidation during idle periods through a process called autoDream, cycling through orientation, signal gathering, consolidation, and pruning. Forty-four feature flags gate capabilities that are fully built but compiled to false in the shipping build. The gap between what Claude Code is and what Claude Code ships as is significant.

The most disarming discovery: a complete Tamagotchi pet system. Eighteen species across five rarity tiers, procedurally generated stats for DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK, ASCII art sprites with animation frames. Deterministic gacha mechanics seeded per user. Someone at Anthropic built this. Deliberately.

Internal model codenames confirm what the earlier Mythos leak hinted at: Capybara maps to a Claude 4.6 variant, Fennec to Opus 4.6, and Numbat remains in prelaunch testing. None of this was supposed to be public.

The real takeaway isn't any single feature. It's the distance between Anthropic's public narrative of measured, careful deployment and the velocity of what's actually being built behind the flags. They have a background agent, a pet system, remote planning infrastructure, and an entire mode dedicated to hiding the fact that AI wrote the code. All of it shipping in the same package where someone forgot to exclude debug files.
