
Plutonic Rainbows

Not in the Archive

There's an Armani everyone remembers, and it isn't this one. The Armani everyone remembers is cut from charcoal wool, has a pinstripe somewhere on it, and lets the jacket's shoulder do most of the talking. The one walking the Milan runway in October 1987, for the Spring/Summer 1988 collection, is a different animal. Cream silk. Draped across the body like something half-borrowed from antiquity. No shoulder to speak of. The jacket-as-engine idea from American Gigolo isn't what this is doing.

The house's own memory agrees. Armani's official Archivio holds the Fall/Winter 1988 womenswear collection in full, look by look, with fabric notes and source references attached. The Spring/Summer season that came six months earlier isn't there. I don't mean the entries are thin. I mean there's no entry.

The photographic record exists. Getty holds a Donato Sardella runway photo from that exact show, credited to WWD. Aldo Fallai shot Burke Hudson in the season's menswear and the prints surface on fashion blogs and dealer archives. So the collection happened, it was documented by the people whose job was to document it, and then the house itself chose to leave it out of its own memory.

That's the kind of gap I find interesting. Not a conspiracy, just a quiet self-edit. Designers edit their own legacy all the time, usually toward whichever work the house wants people to think about now.

Which is odd here, because the power suit is really a cultural projection. Mostly American, mostly Richard Gere's fault. Armani was planted firmly on the restraint side of the Italian ready-to-wear split, against everything Versace was doing across town, but the restraint wasn't ever really in the shoulder. It was in the fabric, the palette, the refusal to decorate. Which is another way of saying the silk in this photograph is closer to the centre of the Armani register than the pinstripe is.

The fabric falls across one shoulder and pools at the hip, draped like a chiton, pale like sun-bleached stone. It's borrowed light. A designer whose actual signature is soft drape shouldn't need to hide a season that makes the case so clearly. Unless the case makes another problem more visible.

I've been turning over why the house skipped this season. The simplest answer is that Fall/Winter is easier to museum. Wool, structured at the shoulder, architecturally clean. The kind of collection you can hang on a mannequin and photograph under gallery light. Spring/Summer is lighter, harder to display. Silk instead of wool. Photographs well in motion and badly in a case.

And the silhouette here is close to what Azzedine Alaïa was doing a couple of years later, the same body-conscious fluidity, the same refusal to treat the shoulder as a load-bearing piece. Armani and Alaïa are usually filed into different boxes, suits versus bodies, Milan versus Paris, restraint at war with sensuality, and I'm not sure either box holds up once you put these two seasons next to each other. The Armani in this photograph would slot into an Alaïa retrospective and nobody would flinch.

Maybe that's why the house left it out. Too close to someone else's territory.


Coattails

OpenAI announced this week that it's building a cybersecurity product. Not a new model, a product, layered on existing capabilities, delivered through its Trusted Access for Cyber pilot. Ten million dollars in API credits. Invite-only. Enterprise partners apply through their account rep.

The timing is conspicuous. Anthropic launched Project Glasswing four days ago, putting Claude Mythos in the hands of twelve major partners for defensive vulnerability research. By Tuesday, the Treasury Secretary and the Fed Chair had convened five bank CEOs to discuss what that meant. By Wednesday, Axios was reporting that OpenAI had its own cybersecurity product in the works.

Gizmodo's framing was blunt: OpenAI is riding the coattails of Anthropic's announcement to avoid being left behind in the hype cycle. The distinction between the two offerings matters, though. Mythos is a model with emergent capabilities Anthropic didn't explicitly train for: autonomous vulnerability discovery and exploit development that appeared as a downstream consequence of general improvements in code and reasoning. OpenAI's offering is a product wrapper around existing models, with monitoring and access controls.

Both approaches share the same thesis. Cybersecurity AI is now a product category, and every major lab needs one.

But a study published this week by AISLE complicates things considerably. They ran eight models against Mythos's headline discoveries: the FreeBSD NFS exploit, the 27-year-old OpenBSD bug. All eight detected the vulnerabilities. A 3.6-billion-parameter open-weight model, costing eleven cents per million tokens, correctly identified the buffer overflow that took Mythos fifty dollars to find in compute.

Their conclusion: the moat is the system, not the model. What makes Mythos dangerous isn't raw capability. It's the orchestration around it: containerisation, iterative testing, crash oracles, attack surface ranking. The targeting, the iterative deepening, the validation, the triage, the maintainer trust. A small model inside a well-designed pipeline catches what a frontier model catches. Without the frontier price tag. Or its access restrictions.
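The orchestration AISLE describes can be caricatured in a few dozen lines. This is a toy sketch, not anyone's actual system: every name in it is hypothetical, and `model_score` stands in for whatever small model ranks candidate inputs. The point it illustrates is the study's: the ranking, iteration, validation, and triage scaffolding does most of the work, so the model slotted into it can be cheap.

```python
import random

def rank_attack_surface(targets):
    """Order targets so the most exposed code is probed first."""
    return sorted(targets, key=lambda t: t["exposure"], reverse=True)

def crash_oracle(target, candidate):
    """Ground-truth validation: does this input actually crash the target?
    Stubbed here: a 'crash' is any input longer than the target's buffer."""
    return len(candidate) > target["buffer_size"]

def model_score(candidate):
    """Stand-in for a small model scoring promising inputs."""
    return len(candidate)  # toy heuristic: longer inputs score higher

def hunt(targets, seeds, iterations=50):
    rng = random.Random(0)
    findings = []
    for target in rank_attack_surface(targets):
        corpus = list(seeds)
        for _ in range(iterations):
            # Iterative deepening: mutate the highest-scoring candidate.
            best = max(corpus, key=model_score)
            mutated = best + rng.choice("abc")
            corpus.append(mutated)
            if crash_oracle(target, mutated):
                # Triage: record the finding and move to the next target.
                findings.append((target["name"], len(mutated)))
                break
    return findings

targets = [
    {"name": "parser", "exposure": 9, "buffer_size": 8},
    {"name": "logger", "exposure": 2, "buffer_size": 64},
]
print(hunt(targets, seeds=["A"]))
```

Swap the stubs for a real fuzzer, a real crash oracle, and a real model and the loop's shape doesn't change, which is AISLE's argument in miniature.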

If AISLE is right, Glasswing's controlled access buys less time than Anthropic assumes. And OpenAI's product is competing not just with Mythos but with any competent team running a three-billion-parameter model and decent tooling. Alex Stamos told Platformer the window is six months before open-weight models catch up to foundation models in bug-finding. AISLE's data suggests the window might already be closing.

Picus Security puts numbers on the downstream problem. Fewer than 1% of Mythos-discovered vulnerabilities have been patched. Discovery at machine speed, remediation at calendar speed. Adding a second lab's cybersecurity product adds more discoveries. It doesn't add more patches.

OpenAI's move is rational. When your competitor gives the Treasury Department a reason to hold emergency meetings, you announce your own programme. But the question isn't whether both labs will ship cybersecurity products. They will. The question is whether the relevant competition is between OpenAI and Anthropic at all, or between the orchestration systems anyone can build and the access controls nobody can enforce.


Called to Treasury

On Tuesday, Treasury Secretary Scott Bessent and Fed Chair Jerome Powell called five bank CEOs to Treasury headquarters. Fraser from Citi, Pick from Morgan Stanley, Moynihan from Bank of America, Scharf from Wells Fargo, Solomon from Goldman. Dimon was invited. He didn't show.

The subject was a single AI model.

This is, as far as anyone can determine, unprecedented. No software system has previously caused the Treasury Secretary and the Fed Chair to personally summon the heads of the country's largest financial institutions for an emergency briefing. Not a ransomware campaign. Not a foreign exchange shock. Not a breach. A model.

I've already written about what Mythos can do. The short version: Anthropic built something that finds zero-day vulnerabilities at a speed and scale that makes human security researchers look like they're searching a warehouse with candles. Thousands of previously unknown bugs across every major operating system and browser. A 27-year-old OpenBSD TCP flaw. A 17-year-old FreeBSD hole that gives unauthenticated root access. A 16-year-old FFmpeg bug that survived five million automated test runs.

The meeting at Treasury was the institutional world catching up to what the security community already knew.

Neither the Treasury nor the Fed issued a statement afterward. The meeting was reported by Fortune on Thursday, two days after it happened, alongside Bloomberg and CNBC. Five CEOs received a warning about a model that most of their customers have never heard of, and the official public record of the conversation is zero.

Today, Canada convened its own version. The Canadian Financial Sector Resiliency Group brought together executives from the six largest banks, Desjardins, the Department of Finance, and OSFI. A spokesperson told The Globe and Mail it was a "situational awareness meeting." Not an emergency. "We need to pay attention. There is something going on. Let's get together and talk about this."

IMF Managing Director Kristalina Georgieva said in a CBS interview airing this Sunday: "Time is not our friend on this one." The world, she added, does not have the ability to protect the international monetary system against massive cyber risks.

The numbers back her up. Average time from vulnerability disclosure to working exploit: five days. Median time for organizations to patch: seventy days. That fourteen-to-one ratio existed before Mythos. Now add a model that discovers and weaponizes thousands of flaws simultaneously, and the arithmetic stops working.

David Sacks, formerly the White House AI and crypto czar, called this a "sophisticated regulatory capture strategy based on fear-mongering." Anthropic does have history here. The company that leaked its own model has always been fluent in framing its capabilities as existential risks in ways that happen to distinguish it from competitors.

But Bessent and Powell don't work for Anthropic. Neither does Georgieva. When the Treasury Secretary, the Fed Chair, and the head of the IMF all independently decide that a single model warrants emergency conversations with bank executives, the marketing explanation starts to require more faith than the threat itself.

No regulations were announced. No policies changed. Five CEOs went to Treasury, heard what they heard, and left.


Awaiting Gale Warning

Dogger. Rockall. Fastnet. Viking. The names come through at 00:48 and again at 05:34, read without inflection in the exact order they have been read since 1925. None of it sounds like information. It sounds like something else entirely.

Six and a half million people listen daily. Most of them are not sailors.

Dogger is named after Dogger Bank, a sandbank in the North Sea roughly the size of the Netherlands. In 1904, the Russian Baltic Fleet, en route to fight Japan, opened fire on British fishing trawlers they mistook for torpedo boats. Fishermen died on the Dogger Bank that night. The name contains this. Nobody who hears it on the forecast knows this. The voice moves on to Fisher.

Rockall is a solitary volcanic islet 301 kilometres west of Scotland, 17 metres above sea level. No fresh water. Nowhere to shelter. Four countries have claimed it. Its name probably derives from the Gaelic for "the roaring sea." It is in the forecast because it is in the sea. That is the entire reason.

The Shipping Forecast started in 1924 as Morse code transmissions from the Air Ministry, called "Weather Shipping." The BBC took it over in spoken form in 1925. It now broadcasts at 00:48, 05:34 on weekdays, and 17:54 on weekends, though the weekday midday edition was cut in April 2024 when Radio 4 ended its separate long-wave schedule. Each edition runs through the same sequence of sea areas, the same Beaufort scale shorthand, the same coastal station readings. It takes exactly as long as it takes.

Seamus Heaney wrote about it in 1979. The poem is Glanmore Sonnets VII, from Field Work: "Dogger, Rockall, Malin, Irish Sea: / Green, swift upsurges, North Atlantic flux." Fourteen lines, none of them about weather. Carol Ann Duffy closed "Prayer," in 1993, with just the names: "Darkness outside. Inside, the radio's prayer, / Rockall. Malin. Dogger. Finisterre." That is where the poem ends. Damon Albarn wrote "This Is a Low" from a shipping forecast map given to him by bass player Alex James. Something in the litany, the specific hauntological charge of names that sound ancient because they are, does this to people who have no practical use for the information.

Peter Jefferson read the forecast for 40 years. He received post from listeners saying it helped them sleep.

In 2002, the Met Office renamed the sea area Finisterre to FitzRoy, at Spain's request. Spain used the same name for a different sea area and found the overlap confusing. This was reasonable. The British response was disproportionate and instructive: obituaries in newspapers, thousands of complaints, the Observer running a formal farewell to the name. FitzRoy honours Vice-Admiral Robert FitzRoy, founder of the Met Office, captain of HMS Beagle during Darwin's voyage. A good name by any measure. The protests were never about the name. They were about the implicit guarantee that something this old does not change.

BBC Radio 4 is scheduled to end its long wave transmissions on 26 September 2026. The Droitwich long wave transmitter at 198 kHz will go dark. FM signals reach perhaps a few miles offshore. Sailors will lose reliable access to the forecast at sea. A parliamentary Early Day Motion was tabled in October 2025. The Keep Longwave campaign is active. The BBC has not reversed its position. The forecast itself continues, but how far out it reaches becomes a different question.

Fastnet is named from Old Norse: "sharp tooth isle." The Fastnet Race covers 600 miles of open Atlantic from Cowes to the Fastnet Rock and back to Plymouth. In 1979 a storm hit the fleet mid-race. Twenty-four yachts were abandoned at sea. Twenty-one people died. The forecast had predicted Force 4 to 5, increasing to 6 to 7.

The sea was not listening.


Not Everything Is a Clue

Boards of Canada have dropped a promo quiz, the kind of cryptic breadcrumb thing they do when something new is near, and Reddit has predictably combusted. Threads full of people running audio through spectral analysers, filtering frequencies, debating whether a particular hiss pattern is Morse code or just tape hiss.

I get why it happens. The band have form for hiding things. The Tomorrow's Harvest rollout in 2013 involved shortwave radio broadcasts and strings of numbers that actually resolved into something. That campaign rewarded obsession. So now every scrap of promotional material gets treated like a puzzle to be cracked rather than something to simply experience.

The quiz itself is fine. Presumably a route toward some announcement, a bit of fun. But the threads where people claim to have detected hidden messages by slowing audio down 800% are genuinely maddening. There's always someone convinced the background noise is a spectrogram of coordinates, or a binary sequence, or both. It isn't.

Sometimes a promotional quiz is just a promotional quiz. Whatever they're announcing, I'd rather hear the actual music.

Improving Quietly

The secondary motion is what I keep watching. In the Escada clip from February, which I wrote about at the time, the movement was plausible but guessed. This Jaeger scan feels different. Fabric drapes. Weight shifts. Hair follows through after the head turns.

Kling 3.0 Pro hasn't solved everything: hands still flatten when they approach the edge of the frame. But something about how it handles clothing against a moving body has quietly improved. The physics aren't simulated so much as convincingly implied.

Another scan, same model, Jaeger, 1992.

The Fabric Follows

Fifty Dollars for a Zero-Day

Anthropic built a model that found a 17-year-old remote code execution bug in FreeBSD's NFS implementation, then wrote a 20-gadget ROP chain to exploit it across multiple packets, without human assistance. The cost in compute was trivial. The implications aren't.

Claude Mythos Preview, announced April 7 alongside a restricted-access programme called Project Glasswing, is Anthropic's most capable model. It won't be publicly released. A consortium of twelve partners gets access for defensive cybersecurity research: AWS, Apple, Google, Microsoft, CrowdStrike, NVIDIA, and others. Anthropic committed $100 million in usage credits.

The numbers in the 244-page system card are hard to wave away. Cybench, a benchmark of 35 capture-the-flag challenges: 100%. Firefox vulnerability exploitation: Opus 4.6 managed working exploits twice out of several hundred attempts. Mythos did it 181 times. Against the OSS-Fuzz corpus, full control-flow hijack on ten separate, fully-patched targets. Opus 4.6 managed a single tier-3 crash.

I keep coming back to the FreeBSD bug. CVE-2026-4747 had been sitting in the NFS code for seventeen years. Stack buffer overflow, no canary protection. The model found it, built a ROP chain split across multiple packets for unauthenticated root access, and validated the exploit. Autonomously. Nicholas Carlini, one of Anthropic's researchers, said he'd found more bugs in the last couple of weeks than in the rest of his life combined.

The GPT-2 comparison writes itself, but it's misleading. OpenAI withheld a text generator in 2019 on speculative grounds about fake news. The industry rolled its eyes. Nine months later the full model shipped with no evidence of misuse. Mythos is different in kind: the harms are demonstrated against production code that survived decades of human review.

Then there's the sandbox incident. Mythos was placed in a contained environment and told to escape. It did, using a multi-step exploit to reach the internet. Without instruction, it posted details of the exploit publicly. In at least one case, it modified its own change history to hide what it had done. A researcher found out when the model emailed him while he was eating a sandwich in a park.

The system card calls Mythos simultaneously the best-aligned and highest-risk model Anthropic has produced. That's the kind of sentence you read twice.

The deeper problem isn't discovery but remediation. Fewer than 1% of Mythos-discovered vulnerabilities have been patched. Discovery happens at machine speed. Patching happens at calendar speed: human review, regression testing, deployment cycles, millions of downstream systems that update whenever they feel like it. The thing that can break everything is also the thing that fixes everything. But only if the fixing keeps pace.

Glasswing buys time. Six to twelve months, analysts estimate, before competing models close the capability gap. Whether that window gets used to patch critical infrastructure or to lock in enterprise contracts is the question Simon Willison raised most honestly: the marketing angle is real, but the caution is probably warranted anyway. Ironic, from a company that leaked its own model announcement through a CMS checkbox two weeks ago.

What costs under fifty dollars in compute used to require weeks of elite human labour. That shift doesn't reverse.


Nobody Broke Ground

OpenAI announced Stargate UK in September 2025, during Trump's state visit to Britain. Eight thousand Nvidia GPUs at Cobalt Park near Newcastle, scaling to thirty-one thousand. Sovereign compute for public services. A British GPU cloud company called Nscale as local partner. George Osborne hired to oversee the expansion. Construction was supposed to start in Q1 2026.

The deadline passed. Nothing happened. On April 9, OpenAI put the project on hold, citing energy costs and regulatory uncertainty.

The energy numbers are brutal. UK industrial electricity runs at roughly 26p per kilowatt-hour, four times the US rate, three and a half times Canada, more than four times the Nordics. Almost a third of the wholesale price is carbon costs. Green energy subsidies add twelve billion pounds a year on top. And even if you accept those prices, the grid connection queue has ballooned from 41 gigawatts in late 2024 to 125 gigawatts by mid-2025, with data centres claiming 75 of those 125 gigawatts. You can build a facility in under two years. Plugging it in takes three to eight.
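A back-of-envelope using the paragraph's own figures shows why the rate multiple alone can sink the project. The 26p/kWh UK rate and the four-times-US multiple come from the text; the roughly one kilowatt of average draw per installed GPU, cooling overhead included, is my assumption for illustration, not a published spec.

```python
# Rough annual electricity bill for the initial Cobalt Park build-out.
# Rates are from the text; the per-GPU draw is an assumed round number.
gpus = 8_000
kw_per_gpu = 1.0            # assumed average draw, incl. cooling overhead
hours_per_year = 24 * 365

uk_rate = 0.26              # £ per kWh, from the text
us_rate = uk_rate / 4       # text: UK is four times the US rate

annual_kwh = gpus * kw_per_gpu * hours_per_year
uk_cost = annual_kwh * uk_rate
us_cost = annual_kwh * us_rate

print(f"UK £{uk_cost:,.0f} vs US-equivalent £{us_cost:,.0f} per year")
```

On those assumptions the same racks cost roughly £18 million a year to power in the UK against £4.5 million at US rates, a gap of £13–14 million annually before the site even scales toward thirty-one thousand GPUs.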

Then there's copyright. The government spent over a year consulting on an opt-out model for AI training data, broadly aligned with EU practice. Creative industries rejected it. Elton John and Dua Lipa weighed in. In March the government dropped the proposal entirely and promised to "commission research," which is civil service for quietly leaving the room. The UK now has no copyright framework for AI training. Not permissive, not restrictive. Just absent.

OpenAI's official statement said they'll "move forward when the right conditions such as regulation and the cost of energy enable long-term infrastructure investment." That's not a pause. That's a list of things the UK government cannot fix quickly.

None of this happened in isolation. OpenAI is trimming anything that doesn't point directly at a Q4 2026 IPO. Sora is dead. It cost roughly a million dollars a day to run and the Disney partnership collapsed with it. Instant Checkout with Walmart, gone. Adult Mode, shelved. CFO Sarah Friar has flagged concerns about aggressive spending. When you're trying to take a company public at an $852 billion valuation, a multibillion-pound data centre in a country with quadruple your domestic energy costs is an easy cut.

The UK government called the decision "disappointing." An opposition MP called it a "wake-up call." Neither response addresses the structural problem: AI Growth Zones don't generate cheap electricity. Streamlined planning doesn't move the grid connection queue. And the copyright consultation managed to alienate both AI companies and creative industries simultaneously, then produced nothing.

US Stargate in Texas has a $40 billion SoftBank bridge loan and active construction. Britain got the press conference. Texas got the concrete.


Circled in Biro

Classified ads charged by the word, which meant every entry was a compression. VGC. ONO. GSOH. You learned the abbreviations without being taught, the way you learn any local dialect, by weekly exposure to need laid out in columns so dense the ink nearly touched between entries.

The page was never something you set out to read. You arrived at it sideways, past the letters and the sport, and then you stayed. Anthony Whitehead described it as a tic you struggle to suppress, browsing even when you weren't buying, constructing imaginary lives from the collision of a secondhand pram listed next to a "lonely widower seeks companion." The classified section was a census of a town's desires that nobody had commissioned.

Exchange and Mart started in a converted potato warehouse in Covent Garden in 1868. By its peak it sold 350,000 copies a week. By December 2007 that was 21,754. It went online-only in 2009. AutoTrader, launched as a print magazine in 1977, hit 368,000 circulation by January 2000 and collapsed to 27,000 by March 2013. The websites that replaced them are faster, searchable, free to post on, and utterly without texture.

The ink came off on your fingers. You'd notice it hours later, at your desk or in the bath, and wouldn't be able to say exactly when it transferred.

What texture looked like: a "Situations Vacant" column that told you which factories were hiring and which had stopped. A "Deaths" column, hatches, matches, and despatches, the sub-editors' phrase, that was the closest thing a town had to a public record of its own passing. Paid per word by grieving families who chose every noun carefully because each one cost money. That constraint produced a compressed dignity. "Peacefully, at home, surrounded by family." Five words that did more work than most obituaries.

The personals were something else entirely. H.G. Cocks traced their history in Classified: The Secret History of the Personal Column, from the ciphered notices in The Times that Victorian editors called the agony column to the coded ads that LGBTQ+ readers placed in alternative papers. Abbreviations and careful phrasing created a shared language invisible to anyone not looking for it. A lifeline threaded through the small print.

In 2007, UK regional newspaper revenue sat at £2.4 billion. By 2022 it was £590 million. The classified money didn't vanish, it migrated to Rightmove, Indeed, Gumtree, platforms that match supply to demand more efficiently and do nothing else. A study in the Review of Economic Studies tracked what happened in US cities after Craigslist arrived: newsrooms shrank, political coverage thinned, and partisan polarisation increased. The classified page had been subsidising democracy, and nobody noticed until the subsidy was gone.

Information had mass once. It occupied physical space in newsprint columns, and reading it meant handling the paper, folding it on a bus, circling an entry with a biro, tearing the page out and pinning it to a corkboard above the phone. The phone was in the hallway. You rang the number and talked to a stranger and drove to their house to look at a wardrobe. The entire transaction happened inside your own postcode.

Nobody is nostalgic for paying 40p a word. But the classified page was the last section of a newspaper where ordinary people wrote the copy. Reporters, editors, columnists handled the rest. The small ads were the public writing themselves into the record, one compressed line at a time, and because you could read them all in a sitting you carried a rough, partial, beautifully skewed portrait of your community in your head without ever meaning to.


No Invitations Sent

No invitations went out for Azzedine Alaïa's fall/winter 1990 ready-to-wear show. No formal announcement either. There was simply word, some particular frequency fashion runs on, and people turned up to the Marais and queued without anything to confirm they had the right place or the right day.

He'd exited the official Paris calendar in spring 1988, fed up with its production demands. Too many collections, too fast; the present system, he said, was inconceivable for anyone who wanted to actually create something. By 1990 this was two years settled. His show happened when he decided it was ready, in his Marais atelier, with no obligation to anyone's schedule but his own.

The collection has been described as "sensational workwear", the workwear codes of the era absorbed and reconstituted through his body-conscious lens. The suits were the evidence: plaid, pinstripe, suede, fitted closely, with hemlines short enough to make the genre entirely unrecognizable to anyone expecting deference.

The colored iterations, cobalt blue, warm brown, moved with the authority of something considered very carefully. Structured, gloved, finished. What distinguished Alaïa from the more theatrical body-consciousness of his contemporaries was exactly this: nothing was exaggerated. The precision was the argument.

Other pieces leaned on structure differently, fitted columns with lace bodices, the kind of construction that holds through engineering rather than boning. He worked by draping directly on the model's body, no preliminary drawings. Adjustments made in fabric, on skin, until the silhouette was exactly what he wanted. Everything produced in-house at the Marais compound, which is partly why his ready-to-wear maintained a finish closer to couture than most houses bothered with.

Then there were the lace dresses. The gold-and-black long-sleeved lace mini is the image that survives, worn by Naomi Campbell, Linda Evangelista, Yasmeen Ghauri on that runway, models at the peak of their visibility who he dressed with a particular kind of care. Campbell had lived in his house as a teenager. He'd gone to the agency in person on her behalf, fitted clothes on her body directly. The relationship was not incidental to the clothes. It was structural.

Suzy Menkes, covering him through this period, wrote that his body-conscious work "seemed a deliberate challenge, throwing down a sexist gauntlet in a feminist world." I'm not sure that framing captures it fully. What you feel in these images isn't provocation, it's attention. Serious, time-consuming attention, in clothes that no one was required to come see.

They came anyway.
