Plutonic Rainbows

Lilith, Fall 1992

Rei Kawakubo married Adrian Joffe at the Paris city hall on the fourth of July, 1992. The bride wore a black skirt and a plain white shirt. He had joined Comme des Garçons five years earlier, in 1987. That detail of the wedding outfit is the part most people skip past, because it sounds like a refusal of bridal theatre, when it is actually the opposite. It is the costume of a woman who has thought about clothing for thirty years and decided that on the day she gets married she will wear what she always wears.

The collection she showed for Fall 1992, the one in shops by the winter of her marriage, was called Lilith. The reference is to the female demon of Jewish folklore, the night-figure whom God, in some readings of the Talmud, made out of "filth and sediment" because Adam complained the first woman was too much like him. Kawakubo had been quietly working with myths that side-stepped the bridal one for years. Lilith was the moment she made the side-step the entire architecture of the show.

Vogue described the clothes as semi-destroyed, and very sophisticated. Predominantly black, with flashes of pink and strokes of white polka dot. Chiffon layers yoked to cone-shaped knitted turtlenecks that masked the face from the nose down. Sorcerer's sleeves, that long medieval drape from shoulder to floor that the rest of fashion had let go of sometime around the early seventies. The colour she gave the collection was nightshade, a botanical word doing the work of a moral one.

Sandra Bernhard walked the show. Kawakubo had met her when Bernhard came in to be dressed for an event, and put her in a men's suit instead, which Bernhard later said she preferred. The casting tells you something: Kawakubo had spent the late eighties putting Basquiat and Malkovich on her menswear runways, but rarely brought non-models into the women's shows. Bernhard in Lilith was an exception, and the choice rhymed with the collection's premise that the woman it dressed was not the one fashion expected.

The finale was the part everyone remembers. Cocoon-like silhouettes with the models' arms crossed inside the garments, walking in formation as if held. "I do not find clothes that reveal the body attractive," Kawakubo had told Vogue once, plainly, and the cocoon walk was the most literal statement of that position she had ever staged. The body was inside. The garment was a wall around it. You looked at the wall.

What I find striking, looking back, is the timing. By 1992 Yamamoto's decade-long argument with Paris had also been won, and the two of them, who had debuted together in 1981 to cries of beggar-look and Hiroshima chic, were no longer a Japanese incident in European fashion. They were the house style for anyone interested in the body as a problem rather than a display. Lagerfeld was running Chanel like a stage. Versace was running Versace like a magazine cover. Kawakubo was running her own house like a thesis, and the Lilith show is maybe the cleanest single statement of the thesis she made in that decade.

What I keep coming back to is the trick of the collection. The clothes look, in photographs, as if they should be sad, and on the runway they were not. They were funny, in a stern Kawakubo way, and grand, and self-contained. A demon-myth, retold by a woman who had married five months earlier in a black skirt and a white shirt, and was wearing both as a uniform rather than a renunciation.

Cold Ashby to Thorny Gale

A four-foot truncated pyramid of cast concrete, planted in a Northamptonshire field on 18 April 1936. Brass fitting on top, three grooves at 120 degrees, designed to lock a theodolite into a forced centre. The pillar at Cold Ashby was the first. The pillar at Thorny Gale, in a Cumbrian valley near Appleby, was the last to be used, on 4 June 1962. Between them sit twenty-six years and roughly six thousand five hundred concrete twins, scattered across hilltops and ridgelines, each one a node in a survey network meant to redraw the country.

You can still find around five and a half thousand of them. Not because anyone maintains them. The Ordnance Survey stopped doing that in the early nineties when GPS arrived and the entire arithmetic of the Retriangulation collapsed. The modern OS Net manages the same job with about a hundred and ten GNSS stations, none of which are visible on a hillside. The pillars are no longer measuring anything. They are simply still there.

Brigadier Martin Hotine designed the pillar in 1935, and the slightly martial profile carries that fact forward whether you know it or not. There is something inescapably interwar about the shape, the same confident geometry you find in lighthouse outbuildings, electricity substations, the smaller pieces of municipal infrastructure that were quietly being made standard. The Retriangulation itself was halted by the war in 1939 and resumed in 1945, which means a fair number of the pillars were observed by men who had spent the intervening years doing other things with theodolites.

What I find genuinely strange about a trig pillar, standing beside one on a wet day in the Pennines or on a ridge in mid-Wales, is that the future it was built for actually arrived. This is not the usual hauntological story, where some confident postwar futurity curdled and the building became a monument to disappointment. The Retriangulation worked. The OSGB36 datum it produced is still the foundation of British mapping. The grid references on every map in every glove compartment trace back to those six thousand five hundred bolt-heads. The mission completed and the apparatus became redundant on the same axis, by the same logic, like a scaffold dismantled the day the building opens.

And yet the pillars don't read as monuments. They read as abandoned equipment. The brass spider has often gone, prised off by a passer-by decades ago. The concrete is stained, sometimes split where frost has worked in, sometimes graffitied. Trig-baggers visit them as a kind of ambient collecting hobby, ticking pillars off lists, and the OS itself now treats this affection as the pillars' main remaining function. The Twentieth Century Society has applied to list the Cold Ashby and Thorny Gale pillars, which would mark the start and end of the project as heritage objects, fixing them in time the way a museum plaque fixes a once-living object behind glass.

The thing the pillar measured was the country. The country got measured. Now the pillar is part of the landscape it helped describe, the way a punctuation mark might end up inside the sentence it was meant to organise. They tend to be on hilltops because that is where the sightlines are best, which means anyone walking up to a trig pillar in 2026 is following a route chosen by a surveyor in 1948 for a reason that no longer applies. You arrive at the summit and find a piece of equipment older than your parents, looking out at nothing in particular, doing nothing.

Twice Across From Musk

The lawyer leading Sam Altman's defence in the Oakland federal courtroom this past week is the same one who, four years ago, made Elon Musk buy Twitter. William Savitt, a partner at Wachtell, Lipton, Rosen & Katz, represented Twitter in 2022 when the company sued to force Musk to go through with the forty-four billion dollar purchase he had spent the spring trying to wriggle out of. The case never got to a verdict; Musk capitulated shortly before trial, signed the cheque, and renamed the company. Savitt won by closing the exits.

He is now back across the courtroom from the same person, this time defending OpenAI and Altman against Musk's claim that the 2019 conversion to a for-profit structure betrayed the company's founding charter. According to Business Insider's profile this weekend, Altman picked Savitt specifically because of the prior result. The lore travels with him. The reasoning, told to a journalist by people who work with him, is that Musk responds poorly to opposing counsel who refuse to be impressed.

This is one of the small, structural facts about American high-stakes litigation that does not get talked about much. The top corporate trial bar is small. The same fifteen or twenty people show up across most of the cases that matter, and they remember each other. A defendant who has been across from a particular litigator before knows what that lawyer will do on cross, knows which exhibits they will dwell on, knows the register of voice they use when they are about to spring something. Savitt has had Musk's deposition strategy in his head since 2022. Most of the work was done before he walked into the Oakland building.

It also tells you something about how Altman thinks about risk. He could have hired any of the white-shoe firms in the country. The choice of the lawyer who had specifically beaten Musk in a high-profile commercial case was a tell. It said: this is going to be conducted as a contest of personalities, and we are bringing the personality who won last time. The trial coverage this week, which has spent as much time on Musk's demeanour as on the underlying corporate-governance question, suggests Altman read the room correctly. The judge, Yvonne Gonzalez Rogers, has already told Musk to stop making things worse outside the courtroom. Savitt has barely had to push.

What I find genuinely interesting is what this implies about the shape of the AI industry's coming decade in the courts. There will be many more of these cases. The IP questions around training data, the contractual questions around model distillation that came up on the stand last week, the corporate-governance questions about who actually owns a foundation-model lab when its mission and its valuation pull in opposite directions: all of these will be litigated by the same small bar, in front of the same handful of judges who handle complex commercial disputes in the relevant districts. The matchups will repeat. Reputations will compound. The lawyers who win one of these early cases will be the ones every defendant calls for the next one, and the lawyers on the other side will be the ones the plaintiffs' bar reaches for.

The trial is not over. Brockman and Altman himself are still to testify. The verdict could go either way. But one piece of the dynamic is already settled, which is that the most litigious billionaire in American technology has been put back across the table from the lawyer who got him to write a forty-four billion dollar cheque he did not want to write. That is a small, lawyerly victory in itself, and it happened before opening arguments.

Seven Vendors, One Holdout

On Friday the Department of Defense announced agreements with seven companies to run AI on its highest-classification networks, the IL6 and IL7 tiers where the actual operational data lives. The list is the predictable one and the surprising one at the same time: SpaceX, OpenAI, Google, Microsoft, Nvidia, Amazon Web Services, and the comparatively new Reflection. Notably absent, and absent on purpose, is Anthropic. The company that has spent two years building its brand around frontier-safety commitments just got formally cut out of the most lucrative classified contract round in the sector's short history.

The exclusion is not procedural. Defense Secretary Pete Hegseth has been trying for months to label Anthropic a "supply chain risk," a designation usually reserved for foreign-adversary sabotage threats, after Anthropic insisted on contract language that would prohibit Claude from being used in fully autonomous weapons or for surveillance of Americans. Hegseth's position, as relayed through the Pentagon's CTO Emil Michael, is that vendors must allow any use the department deems lawful, full stop. That sentence does a lot of work. "Lawful" inside an IL7 environment is whatever the executive branch and its lawyers say it is on a given Tuesday, and the whole point of Anthropic's clause was to have something more durable than a memo.

OpenAI got there first. Its March deal was, in the words of people involved, structured to "replace Anthropic with ChatGPT in classified environments." That framing is unusual to see in print. Most procurement language is bloodless. This one names a loser. Read alongside the earlier blacklist move and the West Wing meeting that followed it, the trajectory is now legible: the administration tested whether it could pressure Anthropic into dropping the clause, found that it couldn't, and built the classified stack around the six companies that didn't push back. SpaceX and Reflection are the bonus picks; the rest of the list is just the hyperscalers.

Inside Google the deal is not landing quietly. Six hundred employees signed a letter to Sundar Pichai before the announcement asking him to keep Gemini out of classified work, and after the news broke a smaller group floated a strike before backing off over retaliation fears. DeepMind staff in particular have been here before; the Pentagon-autonomy budget piece laid out the $13.4 billion that's now flowing toward exactly the applications they were promised in 2018 their work would never support. The promise expired. The compute did not.

What's striking is how cleanly the disagreement has been allowed to surface. Anthropic could have signed and quietly carved exceptions into the statement of work, the way large vendors usually do. They didn't. They sued the administration, they publicly held the line on autonomous-weapons use, and now they're watching seven competitors split a contract pool they are not allowed to bid into. Whether that turns out to be principled or expensive is a question for the next funding round, but it is the first time in this cycle a frontier lab has paid a real commercial cost for a stated safety position rather than just gesturing at one.

The cost is the news. The position was already on the record.

Square Outlived the Drive

There is a recurring joke on social media, the one where a Gen Z kid sees a real 3.5 inch floppy and laughs that someone has 3D printed the file save icon. The joke works because the disk and the icon have changed places. The icon is the original now. The plastic square is the artefact, an object that has been back-projected from a glyph most users have only ever seen on a toolbar.

Sony introduced the 3.5 inch floppy in 1981. By the mid 80s it was the dominant portable storage medium for personal computers, which is why the first generation of graphical interface designers reached for it when they needed an image to mean "save." The floppy was the thing in your hand that held the work. You ejected it, you carried it, you put it in a sleeve. The metaphor was direct. The icon was a picture of a real object doing the job the action described.

Then the object stopped doing the job. Through the late 1990s and into the early 2000s the floppy was gradually phased out. USB drives ate the consumer market, optical media handled bulk, and network shares quietly absorbed the rest. The icon stayed exactly where it was.

The Nielsen Norman Group has been running studies on this for years, and the finding holds up across age groups: people still read the floppy as save, even people who have never touched one. The icon shifted, in their terminology, from a resemblance icon (a picture of a thing you use) to a reference icon (a shape that points at a concept). It works because it has been used consistently for forty years, not because anyone recognises the object. The training data is the icon itself.

This is the part that gets interesting. The sign has outlived its referent. The picture is now older than the thing it pictures, in any practical sense, because the thing it pictures is in landfill and the picture is on every desktop. There is no analogy for this in older media. A book illustration of a horse does not stop depicting a horse just because cars arrived. But the floppy icon is no longer a picture of anything. It is a private language between the interface and the user, a sign whose referent has been quietly evacuated.

Microsoft noticed. Windows 11 ships with a save icon that has been abstracted, just enough floppy left to keep the muscle memory, not enough to look like a piece of dead hardware. The Simple Thread piece on this called it anachromorphism, the icon that has detached from its original referent and now simply means itself. Apple has done the same thing, the rounded square, the diagonal line that used to be the metal shutter, all softened into a visual logo that no longer pretends to be a disk.

You could read this as design cowardice, refusing to redesign something that does not work any more. I think it is closer to the inverse. The floppy icon is one of the few stable shapes in software, a fossil that refuses to disintegrate, and replacing it would be the cowardly move. It earned its place by working for a generation, then earned a second tenure by being the only shape that everyone, regardless of age, can read at twelve pixels square.

What the icon really preserves is not the floppy. It is the act of saving as a deliberate, separate gesture, distinct from the work itself. Modern software has largely abolished that gesture. Your phone autosaves. Google Docs autosaves. The cloud assumes it. The floppy icon points at a habit of mind from a period when saving was a thing you remembered to do, and could forget, and could lose work because you forgot. The plastic square is gone. The fear of losing work is still there, just barely, kept alive by a glyph that nobody under twenty has ever held.

Just Like a Christmas Tree

Karl Lagerfeld showed Chanel's Fall 1991 ready-to-wear in Paris in March, and the line he gave the wire reporters that week was that the models were dressed "just like a Christmas tree." He meant the chains. Layered gold chains, stacked CC pendants, cuffs, hoops, dog collars, the camellia turned into hardware hanging off a mesh body stocking. Vogue called it taking the house "to the edge of an abyss of kitsch and funk." Reuters described models tearing off black vinyl trench coats to expose sheer mesh catsuits, to a Madonna soundtrack. The collection has been called Lagerfeld's hip-hop show ever since.

The framing is half right. There is hip-hop in the styling, the CNN retrospective on the show makes the case at length, and the layered gold and baseball caps and oversized chains were unmistakably borrowed from a New York street vocabulary that European luxury had spent the previous decade not noticing. But the Madonna soundtrack and the Boy Toy plaques and the conical-bra-adjacent silhouettes point somewhere else too. This was a "Like a Virgin" turn as much as a hip-hop one, two American provocations folded into the same Cambon set.

What made it work, and what makes it still readable now, was that Lagerfeld did not abandon the Chanel codes in order to do this. He doubled them. Quilted leather is a Chanel signature; the Fall 1991 show paired quilted biker jackets with tulle ball skirts. Tweed is a Chanel signature; Karen Mulder walked in a frayed denim mini and a curve-hugging pink tweed jacket. The chain belt is a Chanel signature; here it became a gladiator belt, gold and oversized, the CHANEL wordmark reading like a nameplate. None of these were new vocabulary. They were existing vocabulary turned up loud enough to refuse the dignified register the house had been trained into.

The cast did the rest. Linda Evangelista, Karen Mulder, Helena Christensen, Kristen McMenamy, the same names that were carrying every other March 1991 Paris show. The supermodel era is sometimes flattened into a single mood, but the Chanel show that month sat in deliberate contrast to what the rest of Paris was doing. Valentino's couture in the same window was about gravitas. Lagerfeld answered with chains over tweed and a Madonna track loud enough to make the front row flinch.

It is easy to read this now as cynical, a luxury house mining a Black American street vocabulary for runway shock value with no intention of returning the favour. That reading is also half right. The same Women's Wear Daily issue that month put a "Rap Attack" cover together, and the racial coding around the trend in mainstream fashion press was uncomfortable then and reads worse now. Lagerfeld was doing what Lagerfeld always did, which was synthesise on top of whatever signal was in the air, with limited interest in the source. The show's afterlife in hip-hop itself, which CNN traces through Dapper Dan and onward, ended up being more generous to the house than the house was to its inputs.

Still, the show stuck. It stuck because it was built out of Chanel codes rather than against them, and because Lagerfeld understood that a luxury house's signatures only stay alive if they are allowed to behave badly in public every few years. The Christmas tree line was throwaway, but it was honest. He had stacked the codes until they sparkled and clinked and looked slightly absurd, and the absurdity is what kept them legible.

Five Will Never Return Home

Apaches was shot fast on a Home Counties farm in February 1977, twenty-six minutes long, six children from a Maidenhead junior school cast as themselves. The Health and Safety Executive had counted around thirty children killed in farm accidents the year before, and commissioned the Central Office of Information to do something about it. What they got back was a piece of folk horror with the BFI's catalogue number attached.

John Mackenzie directed it, three years before The Long Good Friday. Neville Smith wrote it. Phil Méheux shot it. The production values were below Poverty Row, the slurry pit was real, and the title sequence used the Playbill typeface from Stagecoach. Six children play cowboys and Indians on a working farm. Five never get home.

The deaths are not edited around. Kim falls from a tractor and is run over. Tom slips into a slurry pit, which is liquefied cow excrement, and goes under. Sharon drinks chemicals from an unmarked bottle while pretending it is alcohol, and dies in the night, screaming, off-camera, while her parents stand in the bedroom doorway. Robert is crushed by a gate that Michael has knocked over. Danny, the narrator, crashes a tractor into a ditch and goes through the windscreen. Michael, the cousin, the one who knocked the gate over, is the only child left alive at the end.

Danny narrates the film after he is dead. This is the part that does the damage. He is calm, almost pleased, walking us through what we have just watched, while his family arrives for what he calls a party. We see the table laid for the wake, the sandwiches under cling film, the relatives in their dark clothes. The film does not break the spell to tell you Danny is wrong about the party. It lets him keep talking. The register is wrong in a way that only an English public information film could get wrong on purpose.

The COI made dozens of these. Most have curdled into nostalgia, a YouTube reel of teatime menaces. Apaches has not. It still plays as something a state did to its children with a clear budget line and a director's chair and a clapperboard. It was shown in schools all over Britain, broadcast by ITV companies on slow Sunday afternoons, and exported to Canada, Australia, and the United States. Prints kept being struck on 16mm long after every other PIF had been retired to videotape.

The strangeness is not that it is grim. Plenty of films are grim. The strangeness is the genre confusion, a western-themed death drama wearing the costume of a teaching aid. The form is didactic. The content is folk horror. The narrator is dead. The accidents are the point. There is no third-act reveal where the children turn out to have been spared, no closing title card that softens the lesson. The lesson is the deaths, and the deaths are filmed with the patience of someone who already knows how they end.

If you grew up rural and were shown this in a darkened school hall, you remember it. If you grew up urban and never saw it, you might wonder how a country could decide that scaring children to death was the responsible choice. Both reactions are correct.

Partly True, Says Musk

On the stand in Oakland federal court last week, Elon Musk conceded, under cross-examination by OpenAI's lead counsel William Savitt, that it was "partly" true xAI had used some of OpenAI's technology to train Grok through distillation. He then softened the concession into a shrug. "It is standard practice to use other AIs to validate your AI," he said, as if the distinction between validating a model and copying its behaviour were self-evident, and as if the room had not just heard him spend three days arguing that OpenAI was a stolen charity owed him roughly thirty-eight million dollars in moral damages plus, by his lawyers' arithmetic, a hundred and thirty-four billion in the for-profit value the conversion produced.

The contradiction did not seem to bother him. It bothered Judge Yvonne Gonzalez Rogers, who opened the trial on Tuesday by asking Musk how the court could get its work done "without you making things worse outside the courtroom," and it bothered, in a quieter way, the cross-examining attorney, who walked Musk through his own xAI valuation (two hundred and fifty billion at the SpaceX merger in February) and his own boasts about Grok's capabilities. The picture that emerged was not the picture Musk's opening narrative had drawn. He had cast himself as a founder defending a charity from corporate capture. The cross painted him as a competitor, valued in the hundreds of billions, who had built his competing model in part on the very outputs he says were stolen from a public mission.

I wrote yesterday that, as a general matter, AI companies distill: the practice is now baseline industry behaviour rather than a deviation from it. The major labs all do versions of it, sometimes openly, more often through quiet evaluation pipelines that nobody itemises in a press release. So Musk's admission, on its technical merits, is not a scandal. It is a statement of how the industry actually works.
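For readers outside the industry, the mechanics the testimony gestures at can be sketched in a few lines. This is a generic illustration of distillation, not any lab's actual pipeline: a "student" model is trained to match a "teacher's" output distribution rather than hard labels, with the temperature value here an arbitrary example.

```python
# A generic sketch of distillation: the student is penalised for
# diverging from the teacher's probability distribution, not just
# its top answer. Illustrative only; not any lab's real pipeline.
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened output
    # distribution and the student's: zero when they agree exactly.
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The gap between "validating your AI" and training on another model's behaviour is a question of what you do with numbers like these, not of the numbers themselves, which is roughly what the cross-examination was circling.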

The scandal is the framing. To sue OpenAI for one hundred and thirty-four billion dollars on the theory that the company betrayed its founding promise to benefit humanity, while simultaneously running a competitor that benefited, in part, from OpenAI's outputs, is to argue both sides of the same case at once. The mission was sacred enough to litigate. The model weights, or their behavioural shadow, were available enough to use. Both can be true. Neither sits well with the other.

Whether the jury cares is a different question. The CNBC summary of the first week noted that Altman, Satya Nadella and Greg Brockman are still to testify, and that the outcome could threaten OpenAI's anticipated IPO. A finding for Musk would not just unwind the for-profit conversion; it would establish a kind of moral lien on every dollar the company has raised since 2019. A finding for OpenAI would let Altman walk into the IPO roadshow having beaten the most litigious billionaire in technology in open court, on his own terms.

What I keep getting stuck on is the smaller, weirder fact at the centre of all this. The man suing to recover a charity used the charity's outputs to train his rival. He did not deny it. He did not apologise for it. He called it standard practice, which it is. The case will turn on whether that answer is enough.

No Threshold to Call the Police

Seven families filed lawsuits against OpenAI in San Francisco last Wednesday, alleging that ChatGPT and its CEO bear direct responsibility for the February shooting in Tumbler Ridge, British Columbia, which killed eight people including six children. The complaints argue something narrower and stranger than the headlines suggest. They argue that OpenAI's own safety staff, in June 2025, flagged the shooter's account for "gun violence activity and planning", urged senior leadership to call Canadian police, and were overruled. The account was deactivated instead. The shooter opened a second one and went on talking to the model for another seven months.

That is the procedural fact at the centre of the cases. The emotional fact is the letter Altman published the Friday before, on the local news site Tumbler RidgeLines, saying he was "deeply sorry that we did not alert law enforcement to the account that was banned in June." David Eby, the BC premier, posted the letter to social media with the comment that the apology was "necessary, and yet grossly insufficient." Cia Edmonds, whose twelve-year-old daughter remains in hospital, said the apology read like it had been written by ChatGPT.

The question the apology accidentally raises is what it concedes. If "deeply sorry that we did not alert law enforcement" is the right thing to say in May 2026, then there is some implied threshold above which the company believes it should have called the Mounties, and below which it should not. That threshold has never been published. It is not in the usage policy, not in the model spec, not in any white paper from the Frontier Model Forum. The industry has spent the past two years building elaborate public language about safety teams, evaluation suites, and red-teaming, but no part of that vocabulary describes a duty to report a specific user to a specific police force in a specific country.

There is a reason for the silence. A formal threshold creates a formal liability. Once an AI lab publishes the rule it uses to decide when to call the police, it can be sued for failing to follow that rule, and it can be sued for the rule being too narrow. So the practice has been to have the rule operationally, inside the company, while not committing to it externally. The internal Slack messages cited in the complaints suggest exactly that arrangement: a safety team with a working notion of "this one warrants a call", senior management with a competing notion of "the privacy and PR cost is too high", and a unilateral deactivation as the compromise that satisfies neither.

What makes the gap concrete is the second account. Treating deactivation as the response presumes that an account is identity-bearing in a way it isn't. If the threat lives in a person and the person can sign up again with a new email in ninety seconds, deactivation is a containment theatre directed at auditors rather than a containment measure directed at risk. The safety staff knew this. The lawsuits' theory of the case is that management knew it too.

The federal politics around this are already moving in the opposite direction. OpenAI is, as Wired reported earlier this month, backing legislation in Illinois that would shield AI companies from liability in incidents where a hundred or more people are killed or injured. There is a Florida criminal investigation in progress over a separate ChatGPT-linked shooting at Florida State University last year. The same week the Tumbler Ridge complaints landed, the Frontier Model Forum was quietly running a working group on distillation rather than a working group on mandatory reporting.

I keep thinking about the second account. Somebody at OpenAI opened a ticket in June 2025 about a person whose conversations they had read, whose plans they had inferred, whose name they might or might not have known, and decided that the right action was to revoke a token and not pick up a phone. Eight months later, six children were dead. The Illinois bill would make sure that the next time, in some sense that the lawyers will argue about, the phone does not need to be picked up either.

Capita Holds the Frequency

About a hundred and thirty thousand pagers are still in use across the NHS, which works out, on the most-quoted government count, to roughly ten per cent of all the pagers left running anywhere on earth. The hospital corridor in 2026 is one of the few places in the country where you can still hear a one-way radio device beep to summon a human being. Most of the rest of British public life has moved on. The cardiac arrest team has not.

The protocol underneath the bleep is POCSAG, the Post Office Code Standardisation Advisory Group's "Radiopaging Code No. 1", adopted in 1981 out of a British Post Office working group that had been nailing down a format for radio paging. It is a low-bitrate, unencrypted, one-way broadcast standard. A central transmitter sends short numeric or alphanumeric messages over a narrow VHF channel; every receiver in range listens passively for its own seven-digit capcode, ignores the rest, and beeps when its number comes up. The architecture is closer to a radio station that talks to one listener at a time than to anything you would call a network.
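The addressing model described above is simple enough to sketch. This is an illustration of the one-way broadcast idea, not the real POCSAG batch and codeword format; the frame layout and names are invented for the example.

```python
# Sketch of the POCSAG addressing model: every frame goes out to
# everyone, and each receiver simply ignores frames whose capcode
# is not its own. Illustrative layout, not the real wire format.
from dataclasses import dataclass, field

@dataclass
class Frame:
    capcode: int    # seven-digit receiver address
    message: str    # short numeric or alphanumeric payload

@dataclass
class Pager:
    capcode: int
    received: list = field(default_factory=list)

    def hear(self, frame: Frame) -> bool:
        # Passive listening: beep only when our number comes up.
        if frame.capcode == self.capcode:
            self.received.append(frame.message)
            return True   # beep
        return False      # stay silent

def broadcast(frame: Frame, pagers: list) -> list:
    # One transmitter, many listeners: every pager sees every
    # frame; selection happens at the receiver, not the network.
    return [p for p in pagers if p.hear(frame)]
```

Note that the selection is purely receiver-side, which is why adding encryption later is so hard: there is no session, no handshake, no return channel, just a number compared against a number.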

What makes it still useful is exactly what makes it sound obsolete. The signal travels at frequencies that get through the thick walls of a hospital, including the lead-lined ones around radiology and the awkward concrete around the basement plant rooms. Mobile coverage in those parts of an estate is often nominal at best. A POCSAG transmitter sitting on the roof reaches the whole footprint reliably, including the lift shafts and the bits of the Edwardian wing nobody has rewired since the eighties. Battery life on a receiver runs to weeks. There is no app to update, no SIM to provision, no cellular handover to fail at the moment of a code blue.

Matt Hancock, as Health Secretary, announced in February 2019 that the NHS would have rid itself of the things by the end of 2021. That deadline came and went. Vodafone had already left the business in March 2018, leaving Capita's PageOne as the only wide-area paging carrier in the UK, supplemented by a handful of specialist suppliers, including Multitone and Swissphone, for the hospital-by-hospital cardiac systems. The cost of the residual estate to the NHS was put at £6.6 million a year at the time of the ban announcement. Five years later, the bleeps are still going.

The unsettling part, as TechCrunch and others reported in 2019, is that POCSAG was specified in an era when intercepting it required a few thousand pounds of radio gear and a working knowledge of VHF demodulation. A software-defined radio dongle costs well under fifty pounds now. The traffic is still in the clear, because retrofitting encryption into a thirty-year installed base of one-way receivers is essentially impossible. So the same property that keeps the protocol alive, its mechanical simplicity, also keeps it readable to anyone with a laptop and a back garden.

I keep coming back to the fact that a 1981 specification is still the load-bearing communications layer for the most time-critical moments in British emergency medicine. Not as a bridge, not as a fallback, but as the thing that actually works when a patient is arresting on Ward 4. The ban did not retire the bleep. The bleep outlived the ban, because in the part of the building where seconds matter and Wi-Fi does not, a 1980s broadcast standard is still the most reliable thing in the room.
