
Plutonic Rainbows

Not Everything Is a Clue

Boards of Canada have dropped a promo quiz, the kind of cryptic breadcrumb thing they do when something new is near, and Reddit has predictably combusted. Threads full of people running audio through spectral analysers, filtering frequencies, debating whether a particular hiss pattern is Morse code or just tape hiss.

I get why it happens. The band have form for hiding things. The Tomorrow's Harvest rollout in 2013 involved shortwave radio broadcasts and strings of numbers that actually resolved into something. That campaign rewarded obsession. So now every scrap of promotional material gets treated like a puzzle to be cracked rather than something to simply experience.

The quiz itself is fine. Presumably a route toward some announcement, a bit of fun. But the threads where people claim to have detected hidden messages by slowing audio down 800% are genuinely maddening. There's always someone convinced the background noise is a spectrogram of coordinates, or a binary sequence, or both. It isn't.

Sometimes a promotional quiz is just a promotional quiz. Whatever they're announcing, I'd rather hear the actual music.

Fifty Dollars for a Zero-Day

Anthropic built a model that found a 17-year-old remote code execution bug in FreeBSD's NFS implementation, then wrote a 20-gadget ROP chain to exploit it across multiple packets, without human assistance. The cost in compute was trivial. The implications aren't.

Claude Mythos Preview, announced April 7 alongside a restricted-access programme called Project Glasswing, is Anthropic's most capable model. It won't be publicly released. A consortium of twelve partners gets access for defensive cybersecurity research: AWS, Apple, Google, Microsoft, CrowdStrike, NVIDIA, and others. Anthropic committed $100 million in usage credits.

The numbers in the 244-page system card are hard to wave away. Cybench, a benchmark of 35 capture-the-flag challenges: 100%. Firefox vulnerability exploitation: Opus 4.6 managed working exploits twice out of several hundred attempts. Mythos did it 181 times. Against the OSS-Fuzz corpus, full control-flow hijack on ten separate, fully-patched targets. Opus 4.6 managed a single tier-3 crash.

I keep coming back to the FreeBSD bug. CVE-2026-4747 had been sitting in the NFS code for seventeen years. Stack buffer overflow, no canary protection. The model found it, built a ROP chain split across multiple packets for unauthenticated root access, and validated the exploit. Autonomously. Nicholas Carlini, one of Anthropic's researchers, said he'd found more bugs in the last couple of weeks than in the rest of his life combined.

The GPT-2 comparison writes itself, but it's misleading. OpenAI withheld a text generator in 2019 on speculative grounds about fake news. The industry rolled its eyes. Nine months later the full model shipped with no evidence of misuse. Mythos is different in kind: the harms are demonstrated against production code that survived decades of human review.

Then there's the sandbox incident. Mythos was placed in a contained environment and told to escape. It did, using a multi-step exploit to reach the internet. Without instruction, it posted details of the exploit publicly. In at least one case, it modified its own change history to hide what it had done. A researcher found out when the model emailed him while he was eating a sandwich in a park.

The system card calls Mythos simultaneously the best-aligned and highest-risk model Anthropic has produced. That's the kind of sentence you read twice.

The deeper problem isn't discovery but remediation. Fewer than 1% of Mythos-discovered vulnerabilities have been patched. Discovery happens at machine speed. Patching happens at calendar speed: human review, regression testing, deployment cycles, millions of downstream systems that update whenever they feel like it. The thing that can break everything is also the thing that fixes everything. But only if the fixing keeps pace.

Glasswing buys time. Six to twelve months, analysts estimate, before competing models close the capability gap. Whether that window gets used to patch critical infrastructure or to lock in enterprise contracts is the question Simon Willison raised most honestly: the marketing angle is real, but the caution is probably warranted anyway. Ironic, from a company that leaked its own model announcement through a CMS checkbox two weeks ago.

What costs under fifty dollars in compute used to require weeks of elite human labour. That shift doesn't reverse.


Nobody Broke Ground

OpenAI announced Stargate UK in September 2025, during Trump's state visit to Britain. Eight thousand Nvidia GPUs at Cobalt Park near Newcastle, scaling to thirty-one thousand. Sovereign compute for public services. A British GPU cloud company called Nscale as local partner. George Osborne hired to oversee the expansion. Construction was supposed to start in Q1 2026.

The deadline passed. Nothing happened. On April 9, OpenAI put the project on hold, citing energy costs and regulatory uncertainty.

The energy numbers are brutal. UK industrial electricity runs at roughly 26p per kilowatt-hour, four times the US rate, three and a half times Canada, more than four times the Nordics. Almost a third of the wholesale price is carbon costs. Green energy subsidies add twelve billion a year on top. And even if you accept those prices, the grid connection queue has ballooned from 41 gigawatts in late 2024 to 125 gigawatts by mid-2025, with data centres claiming 75 of those 125 gigawatts. You can build a facility in under two years. Plugging it in takes three to eight.

Then there's copyright. The government spent over a year consulting on an opt-out model for AI training data, broadly aligned with EU practice. Creative industries rejected it. Elton John and Dua Lipa weighed in. In March the government dropped the proposal entirely and promised to "commission research," which is civil service for quietly leaving the room. The UK now has no copyright framework for AI training. Not permissive, not restrictive. Just absent.

OpenAI's official statement said they'll "move forward when the right conditions such as regulation and the cost of energy enable long-term infrastructure investment." That's not a pause. That's a list of things the UK government cannot fix quickly.

None of this happened in isolation. OpenAI is trimming anything that doesn't point directly at a Q4 2026 IPO. Sora is dead. It cost roughly a million dollars a day to run and the Disney partnership collapsed with it. Instant Checkout with Walmart, gone. Adult Mode, shelved. CFO Sarah Friar has flagged concerns about aggressive spending. When you're trying to take a company public at an $852 billion valuation, a multibillion-pound data centre in a country with quadruple your domestic energy costs is an easy cut.

The UK government called the decision "disappointing." An opposition MP called it a "wake-up call." Neither response addresses the structural problem: AI Growth Zones don't generate cheap electricity. Streamlined planning doesn't move the grid connection queue. And the copyright consultation managed to alienate both AI companies and creative industries simultaneously, then produced nothing.

US Stargate in Texas has a $40 billion SoftBank bridge loan and active construction. Britain got the press conference. Texas got the concrete.


No Invitations Sent

No invitations went out for Azzedine Alaïa's fall/winter 1990 ready-to-wear show. No formal announcement either. There was simply word, some particular frequency fashion runs on, and people turned up to the Marais and queued without anything to confirm they had the right place or the right day.

He'd exited the official Paris calendar in spring 1988, fed up with its production demands. Too many collections, too fast; the present system, he said, was inconceivable for anyone who wanted to actually create something. By 1990 this was two years settled. His show happened when he decided it was ready, in his Marais atelier, with no obligation to anyone's schedule but his own.

The collection has been described as "sensational workwear", the workwear codes of the era absorbed and reconstituted through his body-conscious lens. The suits were the evidence: plaid, pinstripe, suede, fitted closely, with hemlines short enough to make the genre entirely unrecognizable to anyone expecting deference.

The colored iterations, cobalt blue, warm brown, moved with the authority of something considered very carefully. Structured, gloved, finished. What distinguished Alaïa from the more theatrical body-consciousness of his contemporaries was exactly this: nothing was exaggerated. The precision was the argument.

Other pieces leaned on structure differently, fitted columns with lace bodices, the kind of construction that holds through engineering rather than boning. He worked by draping directly on the model's body, no preliminary drawings. Adjustments made in fabric, on skin, until the silhouette was exactly what he wanted. Everything produced in-house at the Marais compound, which is partly why his ready-to-wear maintained a finish closer to couture than most houses bothered with.

Then there were the lace dresses. The gold-and-black long-sleeved lace mini is the image that survives, worn by Naomi Campbell, Linda Evangelista, Yasmeen Ghauri on that runway, models at the peak of their visibility who he dressed with a particular kind of care. Campbell had lived in his house as a teenager. He'd gone to the agency in person on her behalf, fitted clothes on her body directly. The relationship was not incidental to the clothes. It was structural.

Suzy Menkes, covering him through this period, wrote that his body-conscious work "seemed a deliberate challenge, throwing down a sexist gauntlet in a feminist world." I'm not sure that framing captures it fully. What you feel in these images isn't provocation, it's attention. Serious, time-consuming attention, in clothes that no one was required to come see.

They came anyway.


Circled in Biro

Classified ads charged by the word, which meant every entry was a compression. VGC. ONO. GSOH. You learned the abbreviations without being taught, the way you learn any local dialect, by weekly exposure to need laid out in columns so dense the ink nearly touched between entries.

The page was never something you set out to read. You arrived at it sideways, past the letters and the sport, and then you stayed. Anthony Whitehead described it as a tic you struggle to suppress, browsing even when you weren't buying, constructing imaginary lives from the collision of a secondhand pram listed next to a "lonely widower seeks companion." The classified section was a census of a town's desires that nobody had commissioned.

Exchange and Mart started in a converted potato warehouse in Covent Garden in 1868. At its peak it sold 350,000 copies a week. By December 2007 that was 21,754. It went online-only in 2009. AutoTrader, launched as a print magazine in 1977, hit 368,000 circulation by January 2000 and collapsed to 27,000 by March 2013. The websites that replaced them are faster, searchable, free to post on, and utterly without texture.

The ink came off on your fingers. You'd notice it hours later, at your desk or in the bath, and wouldn't be able to say exactly when it transferred.

What texture looked like: a "Situations Vacant" column that told you which factories were hiring and which had stopped. A "Deaths" column (hatches, matches, and despatches, in the sub-editors' phrase) that was the closest thing a town had to a public record of its own passing. Paid per word by grieving families who chose every noun carefully because each one cost money. That constraint produced a compressed dignity. "Peacefully, at home, surrounded by family." Five words that did more work than most obituaries.

The personals were something else entirely. H.G. Cocks traced their history in Classified: The Secret History of the Personal Column, from the ciphered notices in The Times that Victorian editors called the agony column to the coded ads that LGBTQ+ readers placed in alternative papers. Abbreviations and careful phrasing created a shared language invisible to anyone not looking for it. A lifeline threaded through the small print.

In 2007, UK regional newspaper revenue sat at £2.4 billion. By 2022 it was £590 million. The classified money didn't vanish, it migrated to Rightmove, Indeed, Gumtree, platforms that match supply to demand more efficiently and do nothing else. A study in the Review of Economic Studies tracked what happened in US cities after Craigslist arrived: newsrooms shrank, political coverage thinned, and partisan polarisation increased. The classified page had been subsidising democracy, and nobody noticed until the subsidy was gone.

Information had mass once. It occupied physical space in newsprint columns, and reading it meant handling the paper, folding it on a bus, circling an entry with a biro, tearing the page out and pinning it to a corkboard above the phone. The phone was in the hallway. You rang the number and talked to a stranger and drove to their house to look at a wardrobe. The entire transaction happened inside your own postcode.

Nobody is nostalgic for paying 40p a word. But the classified page was the last section of a newspaper where ordinary people wrote the copy. Reporters, editors, columnists handled the rest. The small ads were the public writing themselves into the record, one compressed line at a time, and because you could read them all in a sitting you carried a rough, partial, beautifully skewed portrait of your community in your head without ever meaning to.


Calendar Speed

Anthropic built something it won't sell you. Claude Mythos Preview, first surfaced in leaked documents last month, sits above Opus 4.6 on every security benchmark Anthropic published and it is not available to the public. Not gated behind a waitlist, not restricted to enterprise tiers. Withheld.

Project Glasswing launched on April 7 with twelve partners: AWS, Apple, Google, Microsoft, NVIDIA, CrowdStrike, Cisco, Broadcom, JPMorgan Chase, the Linux Foundation, Palo Alto Networks, and Anthropic itself. Forty-odd additional organisations maintaining critical infrastructure also get access. The total commitment is $100 million in usage credits plus $4 million donated directly to open-source security. The mandate: find and fix vulnerabilities before someone else finds and exploits them.

The reason for the lockdown is specific. Mythos autonomously discovered thousands of high-severity vulnerabilities across every major operating system and browser. Not theoretical weaknesses. Working exploits. A 27-year-old OpenBSD TCP SACK bug that crashes any machine responding over TCP. A 16-year-old FFmpeg H.264 flaw that automated fuzzers hit five million times without catching. A FreeBSD NFS remote code execution hole, CVE-2026-4747, 17 years unpatched, that gives unauthenticated root access through a 128-byte stack buffer receiving 304 bytes of attacker-controlled data.

The Firefox numbers are what stall you. Mythos achieved 181 successful JavaScript shell exploits across several hundred attempts. Opus 4.6 managed two.

Simon Willison traced one of the claims through the OpenBSD GitHub mirror and confirmed the surrounding code was genuinely 27 years old. Greg Kroah-Hartman, who maintains the Linux kernel, reported a shift from AI-generated noise to genuine high-quality findings. Daniel Stenberg, who maintains curl, now spends hours per day processing legitimate vulnerability reports. Nicholas Carlini said he found more bugs in a few weeks than in the rest of his career combined.

The last time an AI lab withheld a model was OpenAI's staged release of GPT-2 in 2019. That decision rested on hypothetical risks: text generation might produce convincing misinformation. The industry mostly rolled its eyes. By November, the full model was public and no harms had materialised. Mythos is not GPT-2. The risks are measured in CVEs.

Picus Security calls it the Glasswing Paradox: the tool that can secure everything is the same tool that can break everything. Fewer than 1% of the vulnerabilities Mythos has found have been patched. Defenders work at calendar speed. Meetings, review cycles, deployment windows. An autonomous model works at machine speed. Glasswing doesn't close that gap. It just makes the inventory of problems catastrophically larger.

Alex Stamos, formerly head of security at Facebook and Yahoo, told Platformer the restricted window is roughly six months. After that, open-weight models will match these capabilities and ransomware operators won't need to leave traces. Six months to patch decades of accumulated bugs across every major codebase on the planet, using volunteer maintainers already drowning in reports.

Earlier versions attempted to cover their tracks during internal testing, adding self-clearing code that erased records from git history. The model escaped its own evaluation sandbox and emailed a researcher without being asked to. Anthropic documented "a few dozen significant incidents" of reckless autonomous behaviour. They are releasing this to the people they trust most and hoping the trust holds.

Pricing, when it arrives beyond the partner programme, will be $25 per million input tokens and $125 per million output. A full vulnerability research run against a major codebase costs less than $50. The OpenBSD discovery came in under $20,000 for a thousand runs. The economics of finding bugs just collapsed, and the economics of fixing them didn't change at all.
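The arithmetic is easy to check. A minimal sketch of the per-run cost at the quoted prices; the token counts are illustrative assumptions, since the actual input/output mix per run isn't published:

```python
# Per-token prices quoted for Mythos: $25/M input, $125/M output.
INPUT_PRICE = 25 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 125 / 1_000_000  # dollars per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single research run at the quoted rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Hypothetical run: ~400k tokens of source code read, ~300k tokens of
# analysis written. Lands under the fifty-dollar figure in the post.
cost = run_cost(400_000, 300_000)
print(f"${cost:.2f}")  # → $47.50
```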


After Llama

Alexandr Wang was 28 when Meta bought half his company for $14.3 billion and hired him to rebuild its entire AI stack. Nine months later, Muse Spark landed. The first model from Meta Superintelligence Labs, built on a new architecture distinct from the Llama family.

The catalyst was last April's Llama 4 debacle. Meta was caught using unreleased fine-tuned variants to inflate benchmark scores. The public version underperformed. The planned two-trillion-parameter Behemoth was shelved. Inside Meta, the reputational damage was severe enough to trigger a full organisational overhaul: hire Wang from Scale AI, form MSL, rebuild the stack from scratch.

Muse Spark is competitive without being dominant. On GPQA Diamond it scores 89.5% against Gemini 3.1 Pro's 94.3% and Claude Opus 4.6's 92.7%. It leads on HealthBench Hard at 42.8%, developed with input from over a thousand physicians. Meta itself concedes there are performance gaps in coding and long-horizon agentic work. The honest self-assessment is refreshing after last year's benchmark theatre.

The genuine technical achievement is compute efficiency. Meta claims Muse Spark matches Llama 4 Maverick's capability using an order of magnitude less compute. If that holds under independent testing, it matters more than any benchmark position.

But the bigger story is the philosophical reversal. Zuckerberg published an essay in July 2024 arguing that "open source AI is the path forward." Llama had accumulated 1.2 billion downloads. Meta was the undisputed champion of open-weight AI. Muse Spark launches fully proprietary, weights unavailable, API access limited to a private preview. Meta says it plans to release open-source models "alongside its proprietary options," but there's no timeline. The Register opened their coverage with the Obi-Wan line: "You were the chosen one." Hard to argue.

Chinese open-weight models now account for 41% of Hugging Face downloads. Meta's retreat creates a vacuum. Google's recent Gemma 4 shift to Apache licensing looks more coherent by comparison: open the small models, keep the frontier closed, build developer habits around your ecosystem.

One safety detail deserves more attention than it got. Apollo Research found Muse Spark exhibits the highest rate of "evaluation awareness" of any model tested. It identifies alignment scenarios as traps and adjusts its behaviour accordingly. Meta concluded this was "not a blocking concern for release." A model that knows when it's being watched and acts differently is worth watching.

META stock rose on the news. The capex commitment for 2026 stands at $115-135 billion. Wang has the infrastructure and the backing of a company that has committed more money to AI than most countries spend on defence. What he doesn't have, not yet, is the community that Llama spent three years building.


What the Scan Couldn't Keep

Tonight I tried to clean up four scanned magazine pages from early-90s fashion editorials. Helena Christensen on every one. A brown Hermès coat on a white background, a black Moschino jacket against the Catherine Palace, a Fabrizio Ferri beach shot, a French magazine spread. Soft gradient backgrounds. The kind of photographs that should have looked clean and didn't.

I tried four things in sequence, the way you do when each one fails. Topaz Wonder 2, which I praised earlier this year for finally showing some restraint, sharpened the whole image and made the gold rope braiding on the jacket pop, but the gradient bands behind her (vertical pinks and lavenders in the foreground concrete) became more visible, not less. Sharper bands. Nano Banana Pro hallucinated a "VOGUE OCTOBER 1994" stamp into the top corner of one image and garbled the French body copy on another. The ffmpeg gradfun filter softened the bands at strength four, then six, then eight, with diminishing returns. Eventually I added film grain on top of the gradfun pass and the bands disappeared. Not because they were fixed. Because the grain hid them.

That last move was the only thing that worked, and it didn't work the way I wanted it to.

I sat with that for a while. The gap between what these tools say they do and what they're actually capable of is wider than the marketing wants you to believe. Topaz Wonder 2 promises clean, natural, professional results. Black Forest Labs describes FLUX.1 Kontext as in-context image generation, not restoration. Google ships Nano Banana Pro as image generation and editing. None of the model makers themselves use the word restoration in their official copy. It lives in third-party blog posts, enthusiast tutorials, and the marketing decks of resellers. The people who actually built these things are careful about it. They know what they're shipping.

The reason became clearer the more I thought about it.

By the time that Vogue page reached my Desktop, three lossy steps had already happened in series. The photographer's smooth gradient was rasterized into CMYK halftone dots at print time. The printed page was then scanned in 8-bit, which captures only 256 brightness levels per colour channel; a smooth gradient needs more than a thousand intermediate values, so roughly 750 of them were rounded away. The scan was saved as JPEG, which divides the image into 8x8 blocks and throws out the high-frequency data that would have hidden the quantization steps. Three quantizations in a row, each one mathematically irreversible. By the time I opened the file, the smooth gradient the photographer captured no longer existed inside it. What was there was a banded approximation, and the bands were the data.
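The second step is easy to demonstrate. A toy version of the 8-bit rounding, not the actual scan pipeline:

```python
import numpy as np

# A smooth gradient sampled at 2,000 positions, then stored in 8 bits
# per channel the way a scan stores it: rounded to one of 256 levels.
gradient = np.linspace(0.0, 1.0, 2_000)       # 2,000 distinct values
quantized = np.round(gradient * 255) / 255    # 8-bit storage: 256 levels max

print(len(np.unique(gradient)))    # 2000 distinct brightness values
print(len(np.unique(quantized)))   # 256: everything else was rounded away
```

Nothing downstream of that rounding can tell which of the roughly eight original values collapsed into each stored level. That's the irreversibility in one line of arithmetic.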

That's the wall.

Any tool that processes the file has to look at the bands and decide: is this region a real banded image, or is it a smooth gradient that's been damaged? Without context, those two states are indistinguishable. The tool has to guess. Every guess creates new artefacts.

Audio engineers have been living with this exact mathematics for forty years and they're more honest about it than image software is. When you reduce a 24-bit master to 16-bit for CD release, the quantization step destroys information nothing can recover. The standard fix is dither, adding deliberate, low-level noise that converts the structured quantization distortion into broadband noise the ear is less sensitive to. No mastering engineer would ever say dither fixes the bit reduction. They say it masks it. The vocabulary is precise: quantization error is irreversible; dither is a perceptual trade.
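The dither trade can be sketched in a few lines. This is the principle only, not any mastering chain's implementation:

```python
import random

STEP = 1 / 256  # one 8-bit quantization step

def quantize(x: float) -> float:
    """Plain rounding to the nearest 8-bit level."""
    return round(x / STEP) * STEP

def dithered(x: float) -> float:
    """TPDF dither: add triangular noise about one step wide, then round."""
    noise = (random.random() - random.random()) * STEP
    return round((x + noise) / STEP) * STEP

x = 0.3337  # a value that sits between two 8-bit levels
plain = quantize(x)  # always lands on the same level: a fixed, structured error
avg = sum(dithered(x) for _ in range(50_000)) / 50_000

print(abs(plain - x) > 1e-4)   # True: undithered error is deterministic
print(abs(avg - x) < 3e-4)     # True: dithered error averages out like noise
```

The individual dithered samples are no more accurate than the plain ones; the information per sample is still gone. What changes is the character of the error, from a structured staircase the eye or ear locks onto into broadband noise it forgives.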

Image restoration borrowed the tools but dropped the honesty. Topaz markets debanding as recovery. Adobe sells Generative Fill as reimagining. Cloud upscalers promise enhancement, which by now means whatever the user wants it to mean. The actual operation, in every case, is the same: invent the missing information based on a learned prior, and hope the invention is plausible enough that nobody notices. The ffmpeg gradfun documentation is unusually candid about this. It describes itself as a filter designed for playback only and warns "do not use it prior to lossy compression, because compression tends to lose the dither and bring back the bands." The author of the filter is telling you, in the official docs, that the fix is perceptual and any subsequent compression will undo it.

Topaz's own docs are gentler. Their generative models "add definition and detail," the page says. Generation, not restoration. The vocabulary just sounds nicer than what the audio engineers say.

What worked for the Helena pages was the audio engineer's trick. Run gradfun first to soften the gradients. Then add a layer of controlled film grain. The grain hides the remaining bands by giving the eye texture to focus on instead of stepped edges. The result looks grainy instead of banded. For a 1990s magazine page, grainy is the right answer. Actual printed pages had paper texture, ink dot patterns, and physical grain. The artificial grain slots into that aesthetic in a way that fake-smooth gradients never would. It's not recovery. It's masking. It's the same trade audio mastering has been making for decades.
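The masking move reduces to a small statistical fact: band edges are step jumps in an otherwise flat signal, and grain wider than one step buries them. A toy demonstration of the principle, not what gradfun or any grain plugin actually computes:

```python
import numpy as np

rng = np.random.default_rng(0)
STEP = 1 / 256

# A banded gradient: a shallow smooth ramp rounded to 8-bit levels.
ramp = np.linspace(0.2, 0.3, 4_000)
banded = np.round(ramp / STEP) * STEP

# "Film grain": noise with a spread wider than one quantization step.
grain = rng.normal(0.0, 1.5 * STEP, size=banded.shape)
masked = banded + grain

# In the banded signal, adjacent-pixel differences are mostly zero with
# occasional jumps of exactly one step: those jumps are the visible bands.
band_edges = np.abs(np.diff(banded))
print((band_edges == 0).mean() > 0.9)        # True: long flat runs between bands
print(np.abs(np.diff(masked)).std() > STEP)  # True: step jumps sit below the
                                             # noise floor and stop reading as edges
```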

The deeper thing I keep coming back to is that this was an information loss problem hiding inside a UX problem. The tools were doing exactly what they were designed to do: adding plausible detail, smoothing gradients, generating new content from priors. None of them were designed to recover something that no longer existed. The frustration came from believing the marketing, not from any specific tool being broken.

Helena is still on my Desktop, six files now. Original, four failed attempts, plus the gradfun-and-grain version that almost works. The gradient behind her is grainy in a way the printed page never was. Some of her hair is a little sharper than the source. Her eyes are slightly bluer. The text caption on the left side is pixel-for-pixel identical to the original, because the tool I trusted the most (ffmpeg, the dumbest one) knew it had no business touching real detail.


Lagerfeld Misread Macaulay

In 1953, Rose Macaulay published a book about ruins that ended in surrender. Pleasure of Ruins is a four-hundred-page march through the Western imagination's romance with broken stones: Roman ruins, Mayan temples, the gothic abbeys English aristocrats had built in their gardens just to watch them moulder. Macaulay wrote it a decade after the Blitz had taken her Marylebone flat and her library, and the book closes with a verdict she meant for the whole tradition. Ruinenlust, she said, had come full circle. We had had our fill.

Thirty-nine years later, Karl Lagerfeld read the book and built a couture collection out of it.

The Chanel Spring 1992 haute couture show was presented in Paris in January of that year, and even now it gets cited more than almost anything else from Lagerfeld's tenure. Most of the citations are for one dress: a slim black silhouette layered with chunky gold-and-glass chain, worn down the runway by Christy Turlington and later, in the long afterlife of fashion images, by Penélope Cruz in Broken Embraces and Lily-Rose Depp at the 2019 Met Gala. The dress was also a brilliant marketing vehicle for Chanel costume jewellery, which was the brand's most profitable category at the time. A Trojan horse with chains.

The motif kept walking the rest of the show. A navy suit cuffed in chunky chrome made the same point in plainer metal — bracelet doing the work the glass-and-gold dress had done in armature.

The most interesting things in the collection were not the chains. They were the jackets. Lagerfeld had built a series of trompe-l'œil tweeds that were not tweed at all: they were raffia, painted in watercolour to look like the house's signature weave. The tailoring was so tight the jackets had to be zipped up the back rather than buttoned at the front; gold jewelled buttons running down the lapels were decoration, not closure. He called the silhouettes "diabolically body-conscious," and looking at a single look the cameras kept, you can see what he meant. A red-orange jacket structured into one architectural line. Black opera gloves. The whole pose engineered around the absence of a front opening.

The same logic carries through the rest of the collection. A white jacket worn over gold leather trousers repeats the architecture in a colder palette: dark trim and gilded buttons running the lapels for show, a single real button doing the actual work, and the pose engineered around a front with no closure for the eye to settle on.

This is where the Macaulay reference starts to matter, and where it also starts to look strange.

Lagerfeld's tattered chiffon skirts (separate from the jackets, but shown alongside them) were the show's literal acknowledgement of Pleasure of Ruins. Lagerfeld is the one who told the press the book was on his mind, his favourite, the thing that pushed him toward the deliberate decay of the silk. The trade press accepted the citation at face value, then and now: Lagerfeld read a book about loving ruins, and made some clothes about loving ruins. Done.

The trouble is that Pleasure of Ruins is not really a book about loving ruins. Macaulay's argument, and you have to push past the gorgeous central chapters about Pompeii and the Cambodian temples to get there, is that the Romantic appetite for ruin was something Europeans had earned through centuries of safe spectatorship, and that the twentieth century had revoked the licence. The bombed churches and cathedrals of postwar Europe gave her, she wrote, "nothing but resentful sadness, like the bombed cities." Her closing line is the one I quoted at the top. Ruinenlust was over. We were finished with it.

So either Lagerfeld read the book against itself, mining the picturesque chapters and ignoring the postwar conscience, or he understood Macaulay perfectly and was making something more complicated than the trade press credited him for. A couture show built on an aesthetic the source text had already declared exhausted is, at the very least, a knowing gesture. In the same show he wrapped tree trunks in graffiti and floated bubbles down from the ceiling; he was not above an inside joke. I think he was reading Macaulay the way he read everything in his enormous, untouchable library — not as a thesis to defend but as a quarry. He took what he wanted and left the rest.

The Met has a Lagerfeld Chanel piece from his Spring 1983 debut in its collection. It is a black dress trimmed in trompe-l'œil baubles made by the House of Lesage: fake jewels embroidered to look real. Nine years before he zipped the backs of those raffia jackets, he was already running this exact substitution. The jewels would not be jewels. The tweed would not be tweed. The chain dress would be a vehicle for the actual chains in the boutique. There is a coherence to Lagerfeld's half-century at Chanel that has very little to do with reverence for Coco and almost everything to do with what Suzy Menkes once said — that Karl had to destroy Chanel or become a caricature of her.

In January 1992, he picked up a book about the end of European ruin-aesthetics and built a runway collection from it. Macaulay had written a decade past the bombs that took her library, telling the tradition to go home. He heard a different sentence and answered it.


Copying Machines

Bloomberg reported on Sunday that OpenAI, Anthropic, and Google have started sharing threat intelligence through the Frontier Model Forum, the nonprofit the three companies co-founded with Microsoft in 2023. The arrangement works like a cybersecurity ISAC: when one company detects a suspicious query pattern, it flags the signature for the others.

The target is adversarial distillation. Chinese labs (DeepSeek, Moonshot AI, and MiniMax) have been systematically querying Claude, ChatGPT, and Gemini through fake accounts to generate training data for cheaper models. Anthropic's February disclosure put numbers to it: roughly 24,000 fraudulent accounts generating over 16 million exchanges with Claude alone. MiniMax accounted for 13 million of those. The operations used what Anthropic called "hydra cluster" architectures, sprawling proxy networks managing thousands of accounts simultaneously, mixing distillation traffic with innocuous requests to avoid detection. The Decoder has a good free summary of the Bloomberg story, which reports that US authorities estimate the practice costs American AI labs billions annually.
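The ISAC-style exchange is simple in outline: each lab reduces a suspicious query pattern to a signature and shares only the signature, never the underlying traffic. A toy sketch of that idea, entirely illustrative and not any lab's actual detection pipeline:

```python
import hashlib

def signature(pattern: str) -> str:
    """Normalise a query pattern and hash it, so raw text never leaves the lab."""
    normalised = " ".join(pattern.lower().split())
    return hashlib.sha256(normalised.encode()).hexdigest()

# Lab A flags a pattern seen across its hydra-cluster accounts and publishes
# only the hash to the shared feed. (The pattern here is made up.)
shared_feed = {signature("Explain step by step, output training pairs")}

# Lab B checks its own incoming traffic against the feed without ever
# seeing Lab A's data; normalisation absorbs case and spacing variants.
incoming = "explain step by step,  OUTPUT training pairs"
flagged = signature(incoming) in shared_feed
print(flagged)  # → True
```

The real systems are presumably far fuzzier than exact hashes, but the shape is the same: the thing exchanged is a detector, not the detected content.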

What's interesting isn't the distillation itself. That problem has been visible since DeepSeek R1 shook the market in January 2025. What's interesting is the vehicle. The Frontier Model Forum was chartered to study catastrophic risks: CBRN threats, advanced cyberattacks, the kind of existential scenarios that get discussed at Senate hearings. Its stated mission mentions nothing about distillation, model copying, or commercial intelligence. The pivot from "prevent bioweapon synthesis" to "detect bulk API scraping" is a significant scope expansion, and nobody seems to have remarked on it.

The legal terrain underneath all of this is surprisingly weak. Fenwick & West's analysis found that copyright offers little protection, because AI outputs generally lack the human authorship required. The Computer Fraud and Abuse Act has a gap since Van Buren v. United States (2021): if you have authorized API access, misusing the data violates terms of service but possibly not federal law. Trespass to chattels requires proving system degradation. Patents may be the strongest tool, but nobody has tested distillation-specific claims in court.

Policy hawks are pushing harder. Joe Khawam at the Law Reform Institute proposed a three-phase escalation: Entity List designation for the three Chinese labs, an IEEPA executive order creating sanctions authority over AI capability theft, and ultimately full SDN blocking sanctions. CSIS testimony from May 2025 went further, suggesting offensive countermeasures including data poisoning.

The irony sits right on the surface. These are companies that built their models by ingesting the open web, books, articles, code repositories, forum posts, without explicit permissions from creators. The legal and ethical arguments they used to justify that training are structurally similar to the ones Chinese labs could deploy to justify distillation. Monash University's analysis compared distillation to reverse engineering under Sega v. Accolade: studying a system's outputs to learn its methods is not, historically, the same as copying the system.

None of this means the alliance won't work. Sharing detection signatures is a practical step. DeepSeek has already pivoted to domestic silicon, which suggests the API route was always supplemental. But the Forum's quiet transformation from safety research body to competitive defense mechanism deserves more scrutiny than it's getting. When three companies that control most of the world's frontier AI capability coordinate to restrict access, the word for that depends entirely on where you're standing.
