
Plutonic Rainbows

Cut Along the Dotted Line

The coupon was usually in the bottom corner of the right-hand page, bordered with a dashed line and the instruction to cut here. You cut. You filled in your name and address in capital letters. You walked to the post office with a parent or on your own if you were old enough, stood in the queue, and asked for a postal order for one pound ninety-nine. The woman behind the grille filled in the amount, stamped it, and handed it across with a receipt. You folded the order into an envelope with the coupon, stuck on a second-class stamp, and dropped it into the pillar box on the way home.

Then you waited.

The waiting is the part that has become unrecoverable. Most small ads specified "please allow 28 days for delivery," and that figure was realistic rather than padded. The ad you'd torn out lived in Exchange & Mart, or the back pages of Roy of the Rovers, or somewhere inside Smash Hits. The company processing your order might be one man operating out of a garage. Your postal order had to clear. The catalogue had to be printed. Stock had to be located. The padded envelope came back eventually, franked by a post office hundreds of miles away, the address scrawled in handwriting you'd never seen.

In the month between sending and arriving, the object lived entirely in your head. X-ray specs that saw through skin. Sea monkeys that performed synchronised dances. A magic set whose tricks the advert had strongly implied would fool everyone. The mental image grew more specific and more extraordinary the longer the padded envelope took. You could not check on its progress. There was no tracking number, no notification of dispatch, no photograph of the warehouse worker who'd packed it. The order entered a system and disappeared from view, and your imagination filled the silence.

When the padded envelope arrived, the object inside could not win. The X-ray specs were pieces of plastic with cardboard lenses that made everything look striped and red. The sea monkeys were brine shrimp that hatched to roughly the size of a comma. The magic set came with trick cards you could see through in good light. You held the thing in your hand and felt the distance between the copy in the advert and the object the padding had protected. Then you played with it for an afternoon and mostly forgot.

Nostalgia is not the right register for any of this. The objects were nearly always a let-down. What has disappeared is not the stuff but the structure of expectation the stuff was suspended inside. A whole month of specific, named waiting, knowing exactly what you'd ordered and unable to retrieve or cancel or check. The desire had time to become baroque before meeting reality.

The small ads themselves have mostly vanished from regional press. Their mail-order equivalents migrated to online storefronts that list, illustrate, price, and review everything in one page without requiring you to tear anything out of anywhere. The padded envelope has been replaced by the brown Amazon box, which arrives on a schedule you can track hour by hour. That the product inside is often the same plastic tat imported from the same factories is beside the point. Next-day delivery cannot sustain the same quality of imagining. The real object arrives before the imagined one has started to grow.

Plastic tricks and broken toys still turn up in charity shops and at car boot sales, slipped from their time but intact. The X-ray specs outlast the magazine that advertised them. Someone ordered them in 1983 and kept them in a drawer. The padded envelope is long gone, but the object still carries the shape of something that was waited for. An artefact that came through the post moves on differently from one that came over a counter. It always contained an interval.

A postal order can still be bought at any British post office. I don't know anyone under forty who has ever held one.


Cheaper Per Token, More Per Task

The sticker price of a frontier model has been falling for eighteen months. The bill I run up using one has not. Both are true, and they are both true for a reason.

GPT-5.4 Standard runs $2.50 per million input tokens and $10 per million on output. Claude 4.6 Sonnet sits at $3 and $15. Gemini 3.1 Pro is $1.25 on input, with the output price landing somewhere between $5 and $12 depending on the table you trust. Eighteen months ago the comparable tier was meaningfully higher. At the budget end the collapse has been more dramatic. GPT-5 Nano is $0.05 in, $0.40 out. Gemini 3.1 Flash-Lite hovers around $0.10 and $0.40. DeepSeek halved its prices in late 2025 and the copycat pressure has not let up. If all you are doing is matching 2024 workloads to 2026 models, you are paying less.

That is the sticker answer. The receipt answer is worse.

The first thing that moved is the premium tier. GPT-5.4 has a High Reasoning mode that runs $10 in, $40 out — roughly 4× the standard tier for the same provider, same parent model, just with the thinking dial turned up. A Claude Opus Fast Mode clears $30 per million input tokens. Long-context windows, the big selling feature of 2025, became a billing surface: GPT-5.4 doubles input pricing beyond 272K tokens and adds 1.5× on output, and Gemini 3.1 Pro doubles input beyond 200K. Anthropic, to its credit, removed its long-context premium on March 13; the full 1M window bills at standard rates now.

The second thing is the token count itself. Gemini 3.1 Pro's chain-of-thought reasoning generates internal tokens billed at output rates, and a simple prompt can consume three to five times more tokens than expected. This is the quiet version of a price hike. You did not pay more per token. You paid for more tokens. Any workflow that shipped in 2024 with a predictable output length is now spending meaningfully more on the same question if the model is thinking before answering. Which it will be, because every provider is pushing you toward the reasoning variants as the default for serious work.
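The arithmetic behind that quiet hike is easy to sketch. A minimal Python calculator, using the post's quoted Gemini 3.1 Pro figures as illustrative inputs (not an official rate card — the multiplier is the assumption):

```python
# Effective cost of one request when hidden reasoning tokens bill at output rates.
# Prices are the post's illustrative figures for Gemini 3.1 Pro, not official.

def task_cost(input_tokens, visible_output_tokens, reasoning_multiplier,
              price_in_per_m=1.25, price_out_per_m=10.0):
    """Total dollars for one request.

    reasoning_multiplier: billed output-rate tokens per visible output token
    (1.0 means no hidden chain-of-thought; the post cites 3-5x in practice).
    """
    billed_output = visible_output_tokens * reasoning_multiplier
    return (input_tokens / 1e6) * price_in_per_m \
         + (billed_output / 1e6) * price_out_per_m

# Same prompt, same visible answer -- only the hidden thinking differs.
flat = task_cost(2_000, 800, reasoning_multiplier=1.0)
thinking = task_cost(2_000, 800, reasoning_multiplier=4.0)
print(f"${flat:.4f} vs ${thinking:.4f}")
```

The per-token price never moves in that comparison; the bill still more than triples, which is the whole shape of the complaint.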

Third is where most of the enterprise analysis actually lands: context caching. Both platforms discount cached reads heavily, up to 90% off base rates on repeated context. If your workload has a stable system prompt and repeated document context — customer support, code assistance, document processing — the effective per-million blended rate can compress substantially. If your workload does not — agentic tools that spin up fresh contexts, one-shot research queries — the cache discount saves you nothing. The bill diverges based on access pattern, not sticker price.
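The divergence by access pattern can be made concrete. A sketch of the blended input rate, assuming the post's 90% cache discount and the GPT-5.4 Standard input price as the base (both taken from the text above; the hit-rate figures are hypothetical):

```python
# Blended effective input rate under context caching.
# Assumes a 90% discount on cached reads and a $2.50/M base input rate,
# both from the post; cache_hit_fraction values are hypothetical workloads.

def blended_input_rate(cache_hit_fraction, base_rate_per_m=2.50,
                       cache_discount=0.90):
    cached_rate = base_rate_per_m * (1 - cache_discount)  # cost of a cached read
    fresh_rate = base_rate_per_m                          # cost of a fresh read
    return cache_hit_fraction * cached_rate \
         + (1 - cache_hit_fraction) * fresh_rate

# Support bot re-reading the same system prompt and docs on every call:
print(blended_input_rate(0.85))
# One-shot research agent with a fresh context every time -- no saving at all:
print(blended_input_rate(0.0))
```

At an 85% hit rate the effective input price falls to well under a quarter of sticker; at zero it is exactly sticker, which is why two customers reading the same pricing page can get bills that differ by 4× on identical token volumes.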

Against all of that, the budget tier deserves its own column. A Gemini 3.1 fast variant at $0.075 in, $0.30 out is not a frontier model. It is a utility-grade language model that happens to be smarter than GPT-4 was two years ago. That tier has collapsed so far below the frontier that calling any of this a "price rise" misses the entire rearrangement. For high-volume, low-stakes work, the cost per task has fallen by an order of magnitude. For frontier-quality work on harder tasks, the cost per task has held steady or climbed.

I suspect this is what the providers want. The price floor drops aggressively enough that casual users move on sticker alone. The premium ceiling goes up just as aggressively, and the people who have actual hard problems — research, production coding agents, long-horizon reasoning work — end up on the tier where the margin actually lives. Between the two, the flagship standard tier holds a middling price that lets the pricing page look competitive without giving anything away.

The honest answer, then: per token, no. Per unit of intelligence applied to a specific task, probably yes for anything hard, and flatly yes if you are using the reasoning modes the providers are quietly making default. The budget I set in 2024 for a particular agentic task has to stretch much further on 2026 Opus at $5 and $25, because the agent thinks three times as hard before producing the same output and the output itself is longer. That is not a price rise. It is also not a price cut. It is the industry offering a cheaper ruler and then giving you longer things to measure.

A ruler I keep thinking about: Anthropic's automated alignment-researcher experiment cost $18,000 in compute for nine Claude instances running for five days. A year ago the per-token figure would have been higher. I doubt the total would have been lower.


Eight Years, No Walkout

Google is negotiating with the Department of Defense to let the Pentagon run Gemini on classified networks, The Information reported on Thursday. Reuters, Engadget, and a handful of others picked it up the same day. The proposed contract reportedly carves out two exclusions: no mass domestic surveillance, no autonomous lethal weapons. It is, in shape, the OpenAI deal.

That framing is the story.

What Google already runs inside DoD is larger than I realised. Since December, the GenAI.mil portal has given Gemini to around 1.2 million Defense Department staff across a user base of more than three million. Roughly forty million prompts and four million documents have gone through it. Eight pre-built agents handle work the Pentagon apparently considers administrative: meeting notes, budgets, sanity-checks against the national defense strategy. There's a feature called Agent Designer that lets personnel build their own agents in plain English. None of that is classified. It is also not nothing.

The new deal is the next step. Same infrastructure, cleared for secret and top-secret environments. The Under Secretary of Defense for Research and Engineering was quoted saying expansion talks are "underway."

Eight years ago this would not have happened. In 2018, after employee protests, Google declined to renew Project Maven, the drone-footage computer-vision contract, and walked away from the work.

The two new exclusions, mass domestic surveillance and autonomous lethal weapons, are the same two items Anthropic refused to drop in February, when the Pentagon blacklisted them for keeping those commitments. Hegseth gave Amodei a Friday deadline; Amodei refused; the company was designated a supply chain risk. OpenAI accepted the terms Anthropic wouldn't and kept its contract. Google, eight years after walking away, is now pitching itself on roughly that middle ground.

It is a narrower position than 2018 Google held. It is a wider position than February Anthropic held. In the current window, it is where the business lives.

The part I keep circling is the silence. In 2018 the Maven protest was a company-wide story with a visible fracture line and a public exit. This week's news is a Reuters summary citing an Information scoop, picked up through the wire services, noted by industry press. There's no internal petition making the rounds. No engineers are speaking anonymously to the Times. The deal and its terms are being negotiated in the normal way, which is to say without anyone getting in the way of it.

That might be because the workforce has changed. It might be because what looked plainly wrong in 2018, helping the military see, has been reclassified as ordinary productivity software that happens to have some optional national security use cases attached. The agents summarising budgets look very much like the ones Anthropic was meant to be, before the February argument.

I don't think this is the same company that walked away from Maven. The contract terms say something Google once said it wouldn't sign. The absence of a visible fight says the company isn't planning to argue about it.


Rosalind Without the Promises

OpenAI released GPT-Rosalind yesterday — the first entry in a new "Life Sciences" model series, gated to a trusted-access programme for a handful of enterprise partners. Amgen, Moderna, Thermo Fisher, the Allen Institute, and Los Alamos National Laboratory are on the preview list. Named after the crystallographer who made DNA legible, the model ships in ChatGPT Enterprise, Codex, and the API, behind enterprise-grade security controls and the standard "no training on your data" clause.

The benchmark numbers are good. On BixBench, GPT-Rosalind scores 0.751 Pass@1. On LABBench2, it wins six of eleven subtasks. Against human experts on two representative tasks, it sits in the 95th and 84th percentiles. Reasonable results for a domain model built on top of the frontier reasoning stack.

But the interesting thing isn't the benchmarks. It's the language around them.

Read the announcement carefully. OpenAI doesn't say GPT-Rosalind will design drugs. The model is described as a tool to accelerate the early stages of discovery — evidence synthesis, hypothesis generation, experimental planning. That's the research-assistant frame, not the autonomous-designer frame. The Codex Life Sciences plugin talks to more than fifty scientific databases. The model reads papers, cross-checks datasets, drafts experiments. That is useful. It is not a cure.

Contrast that with the last five years of pure-play AI drug discovery. Exscientia, Recursion, BenevolentAI — the three companies that raised the most money on the premise that AI could find drugs faster — have all had their first clinical readouts. All three were negative. Recursion absorbed Exscientia. BenevolentAI has been in retrenchment. The sector is sitting on roughly a billion and a half in market cap and has approved zero drugs. UCL's Peter Coveney, cited in a recent Nature piece, has made the structural case: discovery isn't the bottleneck. Validation is. You can generate ten thousand candidate molecules. Testing them is the part that takes a decade.

GPT-Rosalind isn't promising to solve validation. It's promising to make the scientists who do validation a little faster at reading papers. That's a smaller claim. It might also be a correct one.

There's something honest about the framing. Opus 4.7 shipped yesterday with Mythos held back, carefully gated — the same instinct in a different domain. OpenAI's move here rhymes: trusted-access only, enterprise-only, a short partner list, no individual researchers, dual-use safety language that reads as if somebody who has read the risk literature wrote the announcement.

Whether this is restraint or marketing discipline is open. It's possible GPT-Rosalind is simply a harder model to sell on hype because pharma buyers have been burned too many times and know better. It's possible the trusted-access structure is there to keep the model from generating plausible-but-wrong bioinformatics claims at industrial scale.

Either way, it's the first new model family in a while whose launch language reads like it was written by people who watched the AI-for-biology narrative play out in public. Rosalind Franklin did careful work and died before seeing it credited. Putting her name on a frontier model is either a very nice gesture or a warning about what happens when you overstate.


Left to Weather

The roof of a pagoda at Orford Ness is a pillow of shingle sat on top of concrete piers. That isn't decoration. If one of the WE177 initiators had gone wrong during a vibration test, the piers were supposed to give way and drop the shingle-laden roof down onto the blast, smothering it from above. No fissile material was ever on site, only the conventional explosives that start the chain reaction. The buildings were designed on the assumption that they might, occasionally, explode.

The Atomic Weapons Research Establishment ran this Suffolk spit from 1953 until 1971. Britain's bombs were shaken, frozen, baked, and spun here before they were shipped to the deterrent. The specific pagodas — Laboratories E2 and E3 — went up in 1960 to test the WE177 and ET317. When the Ministry of Defence eventually cleared out, they took the equipment and not much else. In 1993 the National Trust bought the spit for conservation, inherited the concrete, and eventually adopted a policy they call curated decay.

It means: we will not restore these buildings, and we will not demolish them. We keep roofs from collapsing. We fence off the worst of it. The sea and the gulls do the rest. In 2023 the Trust sent a robot dog in to survey the interiors because the floors are no longer trustworthy.

Sebald walked through here in 1992 for The Rings of Saturn and wrote of feeling he was "amidst the remains of our own civilisation after its extinction." That line is quoted in every essay about Orford Ness because if you stand in front of a pagoda on a low cloud day, with the shingle crunching under your feet, you will probably think it yourself. The future the pagodas were built for, the one where we actually used them, did not arrive. The future that replaced it does not need them. They stand in a kind of double-negative tense, unused and unusable.

The pull toward the hauntological register is strong, but a stubborn counter-reading deserves weight too. Orford Ness is one of the largest vegetated shingle spits in Europe. For the avocets and the spoonbills, the pagodas are just another headland feature, colder than the rest. The Trust knows this. Half the reason the curated-decay policy works is that ripping the concrete out would wreck the ground underneath, which is older and more fragile than anything the MoD ever poured.

There's also the question of whether ruin aesthetics hide the politics of the places they prettify. Nothing went wrong here, which is the point. The bombs worked. Calling the pagodas beautiful in decay elides the fact that their decay is the eventual downstream of a successful deterrent, which is predicated on the possibility of cities burning somewhere else. A curated ruin is a ruin given a meaning it didn't have when it was working. You don't know whether to respect that or not.

The robot dog made it in and out. The floors held.


Pierce's Verdict

In November 1966, seven scientists delivered a thirty-four-page report to three federal agencies that had been paying for machine translation research for a decade. The committee was called ALPAC, chaired by John R. Pierce of Bell Labs, and it had been asked to decide whether continued funding was justified. Ninety pages of appendices backed up the body of the report. The committee's conclusion was direct: machine translation was slower than human translation, less accurate, and more expensive. It recommended a pivot toward basic research in computational linguistics.

Funding collapsed within months. The Department of Defense, the National Science Foundation, and the CIA (the three agencies that had been sponsoring the work through the Joint Automatic Language Processing Group) effectively stopped paying for MT research. Graduate pipelines dried up. American MT labs closed or reoriented. The field survived mostly in Europe and Japan, where institutions drew different conclusions from the same evidence, and it did not meaningfully recover in the United States until IBM's 1990 statistical MT paper.

What makes ALPAC worth revisiting is not that it was wrong. On the facts in front of Pierce's committee in 1966, the report was largely correct. The rule-based systems being built with government money really were producing stilted output that needed heavy human post-editing to be useful. And the committee had found something sharper than performance numbers. The whole target was Russian-English translation, because the motivation was military intelligence, and the United States already had more human translators of Russian than it needed. The Joint Publications Research Service had four thousand contract translators on the books and was using an average of three hundred a month. A reasonable committee looking at a reasonable set of numbers concluded that the field was not delivering against a need that did not fully exist.

What makes ALPAC worth revisiting is that the committee's framing — compare what we have now against a human translator, ask whether it is cheaper — foreclosed a direction that, thirty years later, would turn out to be exactly the right one.

The statistical approach that rescued MT in the 1990s did not look like the systems Pierce was evaluating. It was not written by linguists. It was a probabilistic model trained on parallel corpora, which is to say, it won by being dumber and using more data. Rich Sutton's Bitter Lesson names the pattern: over seventy years of AI research, clever knowledge-intensive systems lose to methods that scale with compute. ALPAC sits in the pre-history of that pattern. The committee evaluated the clever systems honestly and killed their funding, which is, in the Sutton frame, exactly what should have happened. What Sutton does not say, and what the ALPAC story does say, is that the method which eventually wins can take decades to arrive.

John Hutchins's analysis of the (in)famous report argues that ALPAC's impact is often overstated: MT work continued quietly at Wayne State and the University of Texas through the 1970s, and European groups grew into the American gap. This is fair, and a useful correction to the tidy winter narrative. But the American pipeline did die. The Transformer paper, which finally cracked machine translation as a problem, arrived fifty-one years after ALPAC was filed.

A report is a compressed judgment. It takes a field as it finds it and asks whether the trajectory justifies the cost. ALPAC answered no, defensibly, and was right about the trajectory of the systems it evaluated. It just was not evaluating the systems that would win.


Opus 4.7 Ships With Mythos in Reserve

Anthropic shipped Claude Opus 4.7 this morning, about ten weeks after Opus 4.6 landed in February. Same pricing: $5 per million input tokens, $25 per million output. Same 1M context window in the extended variant. A handful of new knobs in Claude Code and the API. And one unusually candid line in the release materials, which I think is the most interesting thing about the launch.

First the numbers Anthropic actually cites. On CursorBench, 4.7 hits ~70%, up from 58% for 4.6. That is a twelve-point jump on a benchmark that tracks how the model behaves inside a working IDE, which is closer to the work than most evals. On Rakuten-SWE-Bench, Anthropic says 4.7 resolves three times as many production tasks as 4.6. SWE-bench Verified, SWE-bench Pro, and Terminal-Bench 2.0 numbers have been circulating on third-party blogs, but I cannot find them on Anthropic's own pages, so I am not going to quote them.

Where 4.7 actually feels different, to me, is in the developer affordances. There is a new xhigh effort level above high, which pushes the model into longer deliberation on hard tasks. A "task budgets" public beta caps how much compute a single agentic run can consume before it checks in. A /ultrareview command was added to Claude Code. The model is better at using file-system based memory across sessions. Vision inputs accept images up to 2,576 pixels on the long edge with higher fidelity than before. Small things, individually. They compound.

Holding $5 input and $25 output across another generation is a concession to the shape of current demand. Nobody wants Opus priced out of daily use, and this is now a fairly stable frontier band.

And then there is Claude Mythos Preview, referenced in the Opus 4.7 launch materials as Anthropic's "most powerful model" — one that 4.7 is described as "less broadly capable than." Mythos itself was announced on April 7 under the name Project Glasswing, with a limited rollout to roughly fifty partner organisations and its own public system card. Opus 4.7 is today's general release. Mythos is the one most people cannot touch.

That is a strange thing for a frontier lab to put in an announcement post. The usual move is to ship your best and frame it as the best. Anthropic is instead shipping what it calls a production-ready step up from 4.6 while pointing openly at a more capable internal model nobody else can use at scale. The reason, per Anthropic's own framing, is not alignment. They describe Mythos as the best-aligned model they have trained. The concern is capability: Mythos is good enough at certain offensive-security tasks that Anthropic would rather gate it than ship it broadly.

That reframing matters, because my first instinct was to reach for the chain-of-thought honesty problem and assume Mythos was withheld because its reasoning could not yet be audited. That is not what Anthropic is saying. What they are saying is closer to: the model is aligned enough, but the capabilities it has are the kind that turn a careless user into a serious problem, so general access waits. That is a different kind of caution, and more interesting than "the new model is not safe enough yet."

For the work I actually do — which is how I end up judging any model release — Opus 4.6 was already the best coding agent I had used, and 4.7 in initial testing feels like a modest but real step. The task-budget control is genuinely useful if you run long agentic jobs that can spiral. xhigh is the knob for when you want to burn tokens thinking about something hard. The rest is refinement.

What I cannot do, yet, is compare any of it to Mythos. I suspect that comparison is the one Anthropic wants us to think about.


Three Minutes, Thirteen Years

The new Boards of Canada track showed up on their own YouTube channel on April 16, 2026, without a press release. It is called Tape 05 and runs a little over three minutes. This is the first original music they have released since Tomorrow's Harvest in June 2013.

Thirteen years is a long time to wait for a three-minute song, and by the standards of most artists that gap would be career-ending. Sandison and Eoin are not most artists. Their silences are part of the work.

The delivery fits that pattern. In the weeks before the drop, Warp mailed unmarked VHS cassettes carrying only a Hexagon Sun logo to fans who had ordered from the Bleep store, and posters went up in London, Los Angeles, and Manhattan showing children with whited-out eyes, a deliberate callback to the faceless family on the cover of Music Has the Right to Children in 1998. No text. No barcode. No URL. Fans on bocpages logged each one as it surfaced. The rollout carried the same cryptography Warp used to pre-announce Tomorrow's Harvest in 2013: Cosecha numbers stations broadcast over shortwave, a mystery Record Store Day 12-inch that later resold for thousands, an augmented-reality puzzle built on six numeric codes. A release does not arrive at Boards of Canada. It surfaces, under conditions.

Tape 05 itself is quieter than the machinery around it. A slow synthesized wash, drifting pitch, faint tape hiss at the back. No percussion. No obvious hook. More Geogaddi than Campfire Headphase, if you want a landmark. At three minutes it is not big enough to carry an announcement on its own, which makes the other signals matter more. It sounds like a door being tried, not a door being opened.

Whether a full album follows is the open question. Resident Advisor is calling it "first new music in 13 years" and stopping short of album confirmation. Billboard is writing around Warp's poster campaign without a release date attached. DJ Mag has noted that the audio on the VHS tapes shares sonic signatures with the Societas x Tape mix from 2019, which supports the most deflating read, that this could be an archival dig rather than new compositions.

I lean toward believing in an album. The VHS campaign is too expensive and too coordinated to ride on a single short drone piece, and it echoes the pre-release shape of 2013's Tomorrow's Harvest too closely to be coincidence. But this is a band that has spent thirty years rewarding patience and punishing prediction, so I will not stake anything on the schedule.

What is already certain is that the ritual works. I spent an evening reading fan forums parsing every frame of the posters. I pulled up Music Has the Right to Children and played it in sequence with Tape 05, listening for the join. Whatever the track is on its own, the event around it is doing what it is supposed to do. The signal went out, the receivers replied, and for a few days the rest of the internet has to wait while a small group of people decode a piece of tape.


Blacklisted, Then Summoned

In February, the Pentagon decided Anthropic was too dangerous to trust. In April, the Treasury decided Anthropic was too dangerous to avoid.

Six weeks.

The February story is already documented. Defense Secretary Pete Hegseth gave Dario Amodei a Friday deadline. Drop the ban on fully autonomous weapons and the ban on mass surveillance of US citizens, or lose a $200 million defense contract. Amodei refused. Within hours the company was designated a "supply chain risk to national security," a phrase normally reserved for hostile foreign actors. Trump ordered federal agencies to stop using Anthropic technology, with a six-month phase-out window for the Pentagon itself. OpenAI signed the deal Anthropic wouldn't.

That was the administration's public position on the company. It still is.

On April 10, Scott Bessent and Jerome Powell summoned five bank CEOs to Treasury to discuss Claude Mythos, the Anthropic model that had launched three days earlier under the Project Glasswing programme. The recommendation was that banks consider using it for defensive vulnerability work. Four days later, Bloomberg reported that Treasury CIO Sam Corcos had gone further. He wasn't asking for a briefing. He was asking Anthropic for access to the model itself, so Treasury could run its own vulnerability tests. He hoped to have it, per the reporting, "as soon as this week."

Summoning CEOs is a warning. Asking for access is procurement reconnaissance. You don't request a working copy of a model unless you're thinking about using it, or thinking about understanding it well enough to regulate it. Either answer requires Treasury to be in active technical conversation with a vendor the administration has formally declared untrustworthy.

The easy reading is division of labor. Pentagon handles weapons and surveillance; Treasury handles financial stability; the agencies can disagree on the same company because they're optimising for different risks. From inside each building both calls look rational. Hegseth wanted Anthropic to remove safety features it considered load-bearing. Bessent and Powell want Anthropic to help defend the US financial system against a capability Anthropic itself warned about. No contradiction, just specialisation.

The harder reading is that "supply chain risk" means something. In February, the objection wasn't that Anthropic's technology didn't work. It was that the values embedded in the product — the specific guardrails Anthropic refused to remove — made the company unfit for government business. Those guardrails are still there. If they rendered the company unfit in February they render it unfit now. Treasury asking five banks to consider the technology, and then asking the vendor for a copy, doesn't unbrand the company. It ignores the brand.

There's a third reading worth naming, which the skeptics have been making for a week. Bruce Schneier called Glasswing a PR play. Alex Stamos called the Mythos framing "marketing schtick." AISLE replicated the headline findings with a 3.6-billion-parameter open-weight model costing eleven cents per million tokens. If they're right, then both the February blacklist and the April summoning are overreactions. One kind of overreaction got Anthropic banned from federal agencies. A different kind of overreaction is now getting its model briefed to the largest banks in the country, with access potentially approved for Treasury's own staff. The administration hasn't changed its mind about the company. It just changes which version of the company it's talking to.

Nothing has been retracted. The supply-chain designation stands. The phase-out order stands. The briefing happened. The access request is open. An AI policy reader trying to make the two positions cohere has to pick one, and the Trump administration has been remarkably unbothered about which one you pick.

Whichever you choose, the other one is still government policy.


Built to Last Ten Years

Churchill proposed them in March 1944, before the war had ended. The Housing (Temporary Accommodation) Act went through Parliament the same year. The target was 300,000 prefabricated homes within ten years, built in factories and shipped out to bomb sites, edge-of-town fields, and anywhere else that could take them. The country managed 156,623. It was the fastest mass housing programme in British history.

The houses were meant to last ten years.

Most had a built-in refrigerator, unusual for 1946, when many permanent homes still relied on a pantry and the milkman. Flush toilets indoors. Hot water from an immersion heater. A fitted kitchen, essentially. The scheme delivered factory-made domestic convenience in emergency housing assembled in aircraft factories and shipyards from timber, asbestos cement, aluminium, and wood wool.

The Uni-Seco Mk3 was one of the main models: 29,000 built, timber-framed, steel windows, asbestos cladding. You can still find them. The Excalibur Estate in Catford has the largest surviving cluster — 189 bungalows put up in 1945 and 1946, many of them assembled by German and Italian prisoners of war still awaiting repatriation. Around 700 survive in the Bristol area. Others are scattered from the Isle of Lewis to the south-west. Individual prefabs around the country have been Grade II listed.

The ten-year deadline kept being extended. Councils needed the housing stock. Residents, who had been given something strange — a private house, with a garden, for council rent — refused to leave. Some have stayed in the same prefabs for more than seventy years.

The hauntological register is unusual. Most abandoned buildings are haunted by a future that was supposed to last and didn't. Prefabs are the inverse: a temporary future that quietly became the actual past. The "permanent" houses that were to replace them got built too, went up in towers and estates, and in many cases came down before the prefabs did. Trinity Square in Gateshead lasted forty years. The Heygate is gone. The Aylesbury is going now. The Excalibur prefabs were still being lived in while the Heygate was being demolished — temporary housing outlasting its replacement.

Lewisham has been tearing Excalibur down in phases since 2013, though the six listed bungalows on Persant Road remain. Six houses out of 189. A kind of settlement: most of it goes, a token survives.

The Prefab Museum ran a temporary exhibition at 17 Meliot Road in 2014. Former residents came back with photographs and letters from decades of campaigns against demolition. Most of the estate has come down in the years since. The listed row on Persant Road is what's left.
