Plutonic Rainbows

The People Who Simply Vanished

A girl I knew at school moved to another town in 1988. I never saw her again. I don't know where she went, what she did with her life, whether she's alive. There was no forwarding address, no email, no profile to search. She left on a Friday, and by Monday she had ceased to exist in any verifiable sense. I was fifteen. This was ordinary.

Before the internet, people disappeared from your life with a regularity that would seem pathological today. Not dramatically — not in the way true crime podcasts mean when they say "disappeared." Quietly. A colleague took a job somewhere else. A friend moved. A neighbour emigrated. A person you spoke to every day became, over the course of a single week, permanently irretrievable. The world absorbed them and offered nothing back.

I keep thinking about how casually we accepted this. The finality of it. You could spend three years sitting next to someone in a classroom, sharing jokes and minor confidences, and then one of you would leave — and that was it. There was no mechanism for reconnection beyond extraordinary effort. You might try directory enquiries, if you remembered their surname and guessed which town they'd landed in. You might write a letter to their old address and hope it was forwarded. More often, you did nothing. The loss barely registered as loss. It was just how things worked.

The infrastructure of connection was laughably thin. Landline telephones required you to know the number, and numbers changed when people moved. Phone books covered local areas. Letters required a postal address. If someone relocated and didn't tell you — and why would they, if you were a casual friend rather than a close one — the connection severed cleanly and permanently. There was no search engine to type their name into. No social graph linking mutual acquaintances. No algorithm to reconnect you. No suggested friends. Just silence, and eventually acceptance.

I think about a specific group of people I worked with in 1993 at a small office in Sheffield. We shared a space five days a week for almost a year. I remember first names, a few surnames, fragments of personality. One woman was saving for a house. A man was obsessed with rally driving. Someone's mother was unwell. These details survive in my memory with surprising clarity, but the people themselves are gone. When the contract ended, we dispersed. No one suggested staying in touch because staying in touch required sustained, deliberate effort — regular phone calls, letters, visits — and we all understood, without saying so, that the relationship did not warrant that level of maintenance. The threshold for sustained contact was much higher than it is now.

This created a strange emotional texture. You accumulated a growing catalogue of people you had genuinely known and would never encounter again. Not estranged. Not deliberately lost. Simply — gone. The butcher's son who moved to Canada. The woman at the next desk who left to have a baby. The friend from university who returned to Malaysia. Each departure was a small, quiet severance. You carried forward a version of them frozen at the moment of last contact, and that version slowly degraded, merging with invention, losing specificity until only an impression remained.

What strikes me now is how much this resembled a kind of low-grade grief that no one acknowledged. Writers at Psychology Today have described the concept of "commemorative friends" — people who were important to you earlier in life, with the understanding that you might never see or hear from them again. Before the internet, nearly everyone in your life outside your immediate circle was a commemorative friend in waiting. The category was so large it was invisible. You didn't mourn each departure because there were too many of them, and because the culture offered no framework for treating a drifted friendship as a genuine loss. It was simply what happened.

The asymmetry with the present is difficult to overstate. Today I can find almost anyone. A name typed into a search bar will surface a LinkedIn profile, a social media account, a local news mention, an obituary. The mystery has been eliminated so thoroughly that we've forgotten it ever existed. But for decades, the default condition of human relationships was impermanence followed by permanent silence. You met people, you knew them, they vanished, and the world closed over the gap they left behind.

I've written before about how pre-internet life was never designed to be archived — how it existed as lived experience rather than data, and how the absence of records is not a failure of retrieval but a genuine absence. The disappearance of people operates on the same principle. Those connections were not documented, tracked, or preserved. They existed in person, in proximity, in shared physical space. When the proximity ended, the connection ended. No trace remained in any system. The only archive was your own memory, and memory — as I've explored in thinking about how memories detach from their temporal anchors — is not a reliable archive of anything.

I sometimes wonder whether those people think of me. Whether the woman from Sheffield ever recalls the office we shared, the specific quality of light through those windows, the coffee machine that never worked properly. Probably not. Or if she does, she remembers a vague shape — a young man whose name she cannot retrieve, whose face has blurred into a composite of several faces from that era. This is how it goes. We were real to each other for a period, and then we became ghosts in each other's pasts. Not dead, not absent — just permanently unreachable.

There was something honest about it, though I'm reluctant to romanticise. The impermanence forced a certain presence. You paid attention to people because you sensed, even unconsciously, that this might be all the time you'd get. Conversations carried more weight when you couldn't resume them later via text message. Departures had gravity. When someone left, you understood — really understood — that this was probably the end, and you conducted yourself accordingly. There were more proper goodbyes. More deliberate last conversations. More attention to the fact of someone's physical presence before it was withdrawn.

My father had a friend called Roy whom he'd known since childhood. Roy moved to Australia in 1971 and they lost contact almost immediately. For over thirty years, my father mentioned Roy occasionally — wondering aloud what had become of him, whether he'd married, whether he was still alive. There was no way to find out. In 2004, after my father had been online for a few years, he searched for Roy's name and found him within minutes. They exchanged emails. It was friendly but brief. The gap was too wide. They had become different people. The reunion answered the question but couldn't restore the relationship. The mystery had been more sustaining than the resolution.

I suspect that is the real loss here. Not the people themselves — they are out there, or they aren't, living their lives independent of my curiosity. The loss is of a world where not-knowing was a permanent and accepted condition. Where you could carry someone with you for decades as an unanswered question, and the question itself was a form of connection. The internet resolved the questions but dissolved the carrying. Now everything is either findable or confirmed dead. The middle state — alive in memory, unknown in fact — has been almost entirely eliminated.

I don't want to go back to it. But I notice its absence.

What Oxidation Does to Memory

I keep a drawer of bottles that I rarely open. Not because they're precious in the collector's sense — nobody is bidding on half-used flacons of discontinued Dior — but because each one carries a specific temporal charge that I'm not always prepared to encounter. Opening them is not like playing an old record or flipping through photographs. It's stranger than that, and more destabilising.

The world of 1990 vanished so completely that even infinite resources couldn't reconstruct it. I've written about this before — the cold clarity of realising that entire atmospheres have disappeared without ceremony. But fragrance is unlike almost any other surviving artefact from that period, and it's an idea worth dwelling on.

A compact disc from 1990 plays back identically to how it played in 1990. The data is frozen. It gives you the music but nothing of the room, nothing of the moment, nothing of you. A photograph, if you had one, would show you a surface — a face, a place — but flattened, stripped of dimension and sensation. These are recordings, but they're recordings of information, not of experience.

Fragrance is different. When you open one of those bottles in the drawer, what reaches you is a chemical substance that was actually present in the era you're grieving. Those molecules were manufactured in the late 1980s or early 1990s. They sat in department stores that no longer exist, were worn by people who have aged or died or disappeared from your life entirely. In a very literal sense, you are inhaling something that belonged to that world. It's not a representation of the past — it's a remnant of it.

But here's where the drift comes in. Fragrance degrades. Top notes evaporate over decades. Oxidation shifts the balance of a composition — terpenes and aldehydes break down into new compounds, hydroperoxides forming and collapsing into ketones and alcohols that weren't part of the original design. What you smell when you open a thirty-five-year-old bottle of something is not quite what it smelled like in 1990. It's close — recognisably close — but altered. The signal is still transmitting, but it has wandered. And that wandering is what makes it so uncanny, because it sits in a space that is neither faithful reproduction nor complete loss. It's the past almost reaching you, but not quite. A hand extended across time that falls just short of touching yours.

There's a reason smell does this more violently than sight or sound. The olfactory bulb feeds directly into the amygdala and hippocampus — the brain's emotional and memory centres — without the interpretive detour that visual and auditory signals take through the thalamus. A photograph gives you time to brace yourself. A scent does not. It arrives before you've decided whether you're ready for it, which is why opening an old bottle can feel less like remembering and more like being ambushed.

Jacques Derrida coined the term hauntology in his 1993 work Spectres of Marx to describe the persistence of things that are neither fully present nor fully absent — ghosts in the philosophical sense, not the supernatural one. Mark Fisher later applied the concept to culture and sound, exploring how certain recordings and artefacts carry the residue of futures that never arrived. I've spent time with that framework before, mostly through music. But fragrance may be its most literal expression.

A record from 1981 can be hauntological because it evokes a cultural moment that has vanished. A fragrance from 1990 is hauntological because it is the vanished moment — or what remains of it after thirty-five years of molecular decay. The distinction matters. One is a representation of loss. The other is loss actively happening, right there on your wrist.

And that near-miss is arguably more painful than total absence. If the fragrance were gone entirely, you could grieve cleanly. If it were perfectly preserved, you could close your eyes and almost believe. But instead you get this third thing — a haunted version, a ghost of a scent carrying just enough of the original to remind you of exactly what has been lost, while simultaneously proving that even the physical traces are slipping away.

I wrote recently about objects that outlive their context — things that become unsettling not through decay but through persistence, surviving into a world that no longer makes sense of them. Fragrance fits that description, with a cruel additional dimension. The object isn't merely out of time. It's actively changing while out of time, drifting further from its original state with each passing year. The drawer doesn't preserve the bottles. It slows their departure.

Perfumers understand this intuitively, even if they frame it differently. The IFRA regulations and serial reformulation of classic compositions have been debated exhaustively in fragrance circles, often with genuine anger. People talk about "vintage batches" the way audiophiles talk about original pressings — as though the earlier version contains something sacred that the new one has lost. They're not entirely wrong. But the reformulation debate concerns commercial products altered by manufacturers. What I'm describing is different. It's the slow, unauthorised revision that time itself performs on a sealed bottle. Nobody decided to change what's in there. Chemistry did. And chemistry doesn't care what the bottle meant to you.

I sprayed some Escada Pour Homme the other day — a bottle from approximately 1993, discontinued and long forgotten by anyone who doesn't haunt fragrance forums. The opening was thinner than I remembered. Sharper. Some of the warmth had retreated behind a veil of something slightly medicinal, which I suspect is the aldehydes shifting after three decades. The heart was still there, though. That particular woody amber signature that I associate with a very specific period in my life, when that fragrance was ordinary enough to buy in any department store and unremarkable enough that nobody commented on it. It reached me the way a voice reaches you through a bad phone connection — recognisable, but with parts missing. And those missing parts were precisely what hurt, because they confirmed that even the most intimate physical traces of a period are subject to the same entropy as everything else.

That's what makes vintage fragrance such a powerful hauntological object. It doesn't just represent the passage of time. It enacts it, right there on your skin.

Claude Will Return Soon

"Claude will return soon." Five words on a grey screen, and suddenly two thousand people remembered what it felt like to think without assistance.

The outage hit around midday UTC. Authentication fell over — the API kept running, but claude.ai and Claude Code went dark. Downdetector lit up within minutes. Forty percent of reports were about the web chat. Another third couldn't get the mobile app to load. The rest were presumably just refreshing the status page in a state of quiet dread.

I noticed because I was mid-conversation. Not a casual one — I was deep into a debugging session on this very blog, trying to work out why the X API had been throwing 503s for five days. Claude vanished and I sat there staring at a blinking cursor like someone had unplugged my brain's external hard drive. I opened four tabs looking for alternatives. Closed all four. Went and made a cup of tea instead.

The timing was almost comic. Anthropic had just reached number one on the App Store, the company was dealing with a Pentagon blacklisting over its refusal to drop ethical red lines, and Opus 4.6 had been pulling in users at a rate that apparently exceeded what the login infrastructure could handle. Success as a denial-of-service attack on yourself.

What struck me wasn't the outage itself — everything goes down eventually. It was the speed of the collective panic. Two thousand Downdetector complaints in a few minutes. People weren't annoyed the way you're annoyed when Netflix buffers. They were annoyed the way you're annoyed when your electricity cuts out mid-sentence. Claude has become load-bearing infrastructure for a lot of people's daily work, and most of them didn't fully realise it until they got a grey screen and five polite words.

Anthropic fixed it within a couple of hours. I finished my tea, logged back in, and picked up exactly where I'd left off. The 503 problem turned out to be a lapsed billing tier on the X API, not anything Claude could have helped with anyway. But for those two hours I was genuinely, embarrassingly adrift.

OpenAI's Two-Hour Conscience

Dario Amodei told the Pentagon he "cannot in good conscience accede" to its demands. Within hours, the Trump administration blacklisted Anthropic from every federal agency. Before that Friday was over, Sam Altman had signed a deal to put OpenAI's models on classified Pentagon networks. The whole sequence took less than a day.

That timeline deserves to sit with you for a moment.

Anthropic had a $200 million military contract on the table. The company wanted two conditions: no mass surveillance of American citizens, and no fully autonomous weapons systems. These are not fringe demands. They are the kind of restrictions that sound so obviously reasonable you'd assume they were already law. Anthropic's position was that current frontier AI models are not reliable enough for autonomous lethal force, and that mass domestic surveillance violates fundamental rights. The Pentagon told them to drop the conditions or lose the contract. Anthropic dropped the contract.

Defense Secretary Pete Hegseth didn't just cancel the deal. He designated Anthropic a "supply chain risk to national security" — a designation normally reserved for hostile foreign actors, not American companies exercising their right to negotiate terms. Trump ordered all federal agencies to begin a six-month phase-out of Anthropic technology. The message was blunt: comply absolutely, or we will make an example of you.

Amodei's response was equally blunt. "Disagreeing with the government is the most American thing in the world," he said. He's right. However, being right in Washington has never been a reliable survival strategy.

Here is where it gets ugly.

On Thursday evening — the night before the blacklisting — Sam Altman sent a memo to OpenAI staff. He wrote that this was "no longer just an issue between Anthropic and the Pentagon; this is an issue for the whole industry and it is important to clarify our stance." He told CNBC he didn't "personally think the Pentagon should be threatening [the Defense Production Act] against these companies." He said OpenAI shared the same red lines as Anthropic: no mass surveillance, no autonomous weapons, humans in the loop for lethal decisions.

Then, on Friday night — roughly two hours after Anthropic was officially blacklisted — Altman announced that OpenAI had reached an agreement with the Department of War to deploy its models on classified networks.

The deal includes language permitting the government to use OpenAI's technology for "all lawful purposes."

Read that clause again. "All lawful purposes" is a phrase that swallows everything. Surveillance programmes that haven't been ruled illegal yet? Lawful. Autonomous targeting systems that Congress hasn't specifically prohibited? Lawful. The entire architecture of restriction that Anthropic fought for — the architecture Altman publicly praised — dissolves inside three words. OpenAI didn't negotiate the same protections Anthropic demanded. It negotiated the appearance of them.

Altman claimed the DoW "agrees with these principles, reflects them in law and policy, and we put them into our agreement." This is lawyering, not principle. Anthropic asked for contractual guarantees. OpenAI accepted the Pentagon's assurance that existing law already covers it. The difference between those two positions is the difference between a lock on the door and a sign that says "please knock."

The timing is what makes it indefensible. If OpenAI had signed this deal three months ago, you could debate the merits. Companies make different risk calculations. However, Altman didn't sign it three months ago. He waited until the exact moment his competitor had been destroyed for holding the line he publicly endorsed, and then walked through the door Anthropic's corpse was holding open. There is a word for this, and it is not "principled."

OpenAI has form here. Altman told the Financial Times in 2024 that he "hates" advertising and called combining ads with AI "uniquely unsettling." ChatGPT now shows ads. He told the world OpenAI would remain a nonprofit. It converted to a for-profit. He told staff the company shares Anthropic's red lines on military use. The company signed a deal without them. At some point the pattern stops being strategic flexibility and starts being something else entirely.

I keep thinking about what Amodei actually risked. He didn't lose a debate. He lost access to the entire federal government. Anthropic's commercial future in government contracting — worth potentially billions over the next decade — is now in jeopardy. The company has said it will challenge the supply chain risk designation in court, arguing it is legally unsound and sets a dangerous precedent for any American company that attempts to negotiate with the government rather than capitulate. Senator Mark Warner called it an attempt to "bully" the company. Senator Thom Tillis — a Republican — criticised the Pentagon's public approach.

Google and xAI had already accepted military contracts without the restrictions Anthropic demanded. OpenAI was the last major lab besides Anthropic that hadn't signed. The industry had every incentive to quietly fold. That Anthropic didn't — that it chose financial pain over moral compromise — is the kind of corporate behaviour people claim to want but rarely reward.

My own position probably doesn't need stating, given that I'm writing this on a site built around Claude. I use Anthropic's models daily. I think they make the best reasoning systems available right now. However, that's not why this matters to me. Amodei's stand would be just as significant if Claude were mediocre. The question was never about product quality. It was about whether an AI company would accept hard limits on how its technology gets used, even when the most powerful government on earth told it the alternative was annihilation.

Anthropic said yes. OpenAI said whatever you need to hear.

Nano Banana 2 Lands With Half the Price and Twice the Speed

Google shipped Nano Banana 2 today. The model — internally Gemini 3.1 Flash Image — replaces the original Nano Banana as the default across Gemini, Search, and Ads. I've already added it to my image editor and made it the default there too.

The numbers matter here. Eight cents per image at 1K, twelve at 2K, sixteen at 4K. The original cost fifteen cents at 1K and thirty at 4K. That's roughly half, and the output is sharper. Text rendering — which the original botched reliably — now validates character by character before the final render. I tested it on watermark removal and text overlays this afternoon. Both worked first time.
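The "roughly half" claim is easy to sanity-check. A back-of-envelope sketch in Python, using only the per-image prices quoted above (illustrative figures from this post, not an official rate card):

```python
# Per-image prices quoted in the post (USD); illustrative, not a rate card.
OLD = {"1K": 0.15, "4K": 0.30}                # original Nano Banana
NEW = {"1K": 0.08, "2K": 0.12, "4K": 0.16}    # Nano Banana 2

def batch_cost(prices: dict, tier: str, n_images: int) -> float:
    """Total cost for a batch of images at a given resolution tier."""
    return round(prices[tier] * n_images, 2)

for tier in ("1K", "4K"):
    saving = 1 - NEW[tier] / OLD[tier]
    print(f"{tier}: {OLD[tier]:.2f} -> {NEW[tier]:.2f} ({saving:.0%} cheaper)")
    # both tiers come out 47% cheaper, i.e. "roughly half"

# A 500-image job at 1K drops from $75.00 to $40.00.
print(batch_cost(OLD, "1K", 500), batch_cost(NEW, "1K", 500))
```

The saving is 47% at both the 1K and 4K tiers, which is what "roughly half" cashes out to.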

The architectural shift underneath is more interesting than the price cut. Nano Banana 2 runs a reasoning loop rather than straight diffusion — plan, evaluate, improve — which explains why spatial relationships and multi-element scenes hold together in ways the original couldn't manage. Four times faster despite doing more work per image.

I'm not sure it fully replaces FLUX 2 Pro for everything. FLUX still handles certain structural edits with more precision. But with Nano Banana 2 at eight cents against FLUX's five, the price gap is small enough that Nano Banana 2 will be where I start most jobs now.

Jack Spence's Forty-Year Tape Delay

Freedom To Spend has a specific talent for finding records that fell through every categorical crack available. Jack Spence's Bamboo Sun — originally pressed in 1985 on the tiny Equator Music imprint — is exactly that kind of find. Flute, bongos, vocal harmonics drifting somewhere between choral and accidental, all produced with a sharpness that doesn't match the deliberately loose playing. Spence handled keys, drums, and flute himself. Bob Glaub — a session bassist who played on Jackson Browne and Lennon records — held down the low end. That combination shouldn't cohere. Mostly it does.

The cover tells you what territory you're entering — sepia, handmade, a figure that might be a bird or a body or both. I can't decide, and I don't think Spence could either.

Freedom To Spend's uncommon¢ series has quietly become the most reliable excavation project in experimental reissues. They don't surface lost tapes. They surface records that were pressed in small runs, sold a few hundred copies, and vanished because nobody knew where to shelve them. Forty-one years later, the shelving problem hasn't been solved. The music just found an audience that doesn't need it solved.

Six Hundred Billion and Counting

Microsoft, Alphabet, Amazon, and Meta will spend somewhere between $650 billion and $700 billion on AI infrastructure this year. Gartner projects worldwide AI spending at $2.52 trillion for 2026. These numbers have become so large they've lost the ability to mean anything. A billion dollars used to be noteworthy. Six hundred billion barely makes it past the earnings call.

The question that keeps nagging — the one the earnings presentations spend entire segments avoiding — is what, exactly, all of this money is buying.

The honest answer: cloud growth, mostly. Microsoft's Azure grew 40% year over year in Q2, with AI contributing about 16 percentage points of that growth. Google Cloud hit $17.7 billion in Q4 2025, up 48%. Those are real numbers. Real revenue. Real customers signing real contracts. However — and this is where the narrative curdles — the total direct AI revenue across the industry last year was roughly $51 billion against $527 billion in spending. That is a gap you could park a civilisation in.
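The scale of that gap is worth making explicit. A quick sketch using the two headline figures above (the $51 billion and $527 billion are the post's numbers, not audited accounts):

```python
# Headline figures from the post (USD billions); illustrative only.
ai_revenue = 51    # direct AI revenue across the industry last year
ai_capex = 527     # industry AI infrastructure spending in the same period

coverage = ai_revenue / ai_capex            # revenue per dollar of spend
years_to_payback = ai_capex / ai_revenue    # naive payback at flat revenue

print(f"coverage: {coverage:.1%}")          # ~9.7% of spend covered
print(f"naive payback: {years_to_payback:.1f} years at flat revenue")
```

Under ten cents of direct revenue per dollar spent, and a naive ten-year payback even if revenue never grows — which is exactly why the bet only makes sense if revenue grows enormously.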

An MIT study found that up to 95% of firms investing in AI have not yet seen tangible returns. Only 14% of CFOs report measurable ROI. Despite this, 68% of CEOs plan to increase spending again next year. The logic is circular: we must spend because our competitors are spending, and our competitors are spending because we must spend. Nobody wants to be the one who blinked and missed the platform shift.

I keep returning to the comparison with OpenAI's revenue panic. A company that raised hundreds of billions, has 800 million weekly users, and still can't make the economics work without plastering ads across a product its CEO called "uniquely unsettling" to monetise that way. The unit economics are a warning sign for the entire sector, not just one company.

What frustrates me is that the useful stuff gets buried. Barclays cut £2 billion through AI-driven efficiency programmes. Anthropic just embedded Claude into Excel and PowerPoint, which is boring and practical and probably where the actual value lives — in incremental productivity gains that never make investor presentations exciting. The flashy demos get the funding. The spreadsheet automation gets the results.

Analyst projections warn that Big Tech free cash flow could drop as much as 90% in 2026 as capex outpaces revenue. Ninety percent. That is not a rounding error. That's a structural choice to defer profitability on the bet that whoever builds the most data centres fastest wins the next decade. Maybe they're right. The companies making this bet have been right before — about cloud, about mobile, about search. But they've also been wrong before, about the metaverse and crypto and social audio and a dozen other things that consumed billions before quietly disappearing from earnings calls.

The money is real. The infrastructure is real. The revenue is not — not yet, not at the scale the spending demands.

When the Money Goes in Circles

WeWork raised $22 billion, peaked at a $47 billion valuation, and filed for bankruptcy in November 2023. SoftBank alone lost $14.4 billion. The coworking company didn't fail because coworking was a bad idea — it failed because the money propping up its growth never connected to a sustainable business underneath.

The AI industry has a version of this problem, and it's getting harder to ignore.

Bloomberg recently mapped the circular deal structure connecting Microsoft, OpenAI, and Nvidia. The pattern is striking. Nvidia committed up to $100 billion to OpenAI. OpenAI's CFO Sarah Friar acknowledged that the money "will go back to Nvidia" in GPU purchases. Nvidia also backs CoreWeave, which buys Nvidia GPUs to build data centres, then sells capacity back to OpenAI. The money moves. Whether it actually goes anywhere is a different question entirely.

Tomasz Tunguz drew an explicit comparison to Nortel's vendor financing during the telecom bubble — a company that lent money to its own customers so they could buy its products. Nortel's revenue looked real on paper. Until it didn't.

WeWork had the same circularity, just cruder. SoftBank invested billions. WeWork used those billions to sign long-term leases on buildings it didn't need yet. The expansion justified the valuation. The valuation justified more investment. Adam Neumann called it a "community company" and a "state of consciousness." The market called it a $47 billion technology company when it was a landlord with a beer tap.

The AI version is more sophisticated. The companies involved are profitable elsewhere. Microsoft and Google have cloud businesses generating hundreds of billions. Nvidia sells real products to real customers beyond the AI startup loop. And unlike WeWork — which was locked into leases it couldn't escape when demand fell — data centres have repurposing options. You can run cloud workloads, render farms, scientific computing. I keep reminding myself of this whenever the parallel starts feeling too neat.

The differences matter. I'm not arguing this is WeWork reborn.

What I am arguing is that the circular financing pattern should alarm anyone who watched a bubble before. When revenue from Company A depends on investment from Company B, which depends on revenue from Company A, the system is more fragile than the topline numbers suggest. The spending gap — $527 billion in, $51 billion out — looks especially precarious through this lens.

OpenAI is projected to lose $14 billion in 2026 while seeking another $100 billion in funding. The company that started the whole frenzy still can't make the economics work, even after turning to advertising despite its CEO calling the idea "uniquely unsettling" barely a year earlier.

WeWork's original sin wasn't ambition. It was the gap between the story and the balance sheet — the willingness to let growth narratives paper over unit economics that never worked. SoftBank kept writing cheques because the alternative was admitting the previous cheques were wasted. The AI industry hasn't reached that point. But the circular deals, the vendor financing, the ever-growing commitments justified by ever-larger projected returns — the architecture of the bet looks familiar.

The hardware is different. The founders are different. The technology does more real things for more real people. But money that goes in circles still ends up back where it started.

Claude Sat Down at Your Desk

Anthropic shipped Claude directly into Excel and PowerPoint last week — not as a separate app, not as a browser tab you alt-tab to, but as a resident inside the file you're already working in. Generate slides from a prompt. Build pivot tables by describing what you want. Edit charts, rewrite bullet points, restructure entire decks. All native objects, not screenshots or static images. You keep editing after Claude finishes.

The Cowork launch bundled this with customisable "plugins" — pre-configured agents for financial analysis, HR, design, operations — and the stock market responded like someone had pulled a fire alarm. A software industry ETF dropped nearly 6% in a single session. IBM had already lost 13% of its market cap over an Anthropic blog post about COBOL the day before. Two positioning statements, two market convulsions.

Boris Cherny, who created Claude Code, told Fortune he thinks the title "software engineer" will start to disappear by the end of the year. Dario Amodei, Anthropic's own CEO, published an essay warning that AI will cause "unusually painful" disruption to jobs — a shock bigger than any before. When the people building the tool are this candid about the damage, the alarm feels earned.

But I keep snagging on specifics. The PowerPoint integration is a research preview. It doesn't support advanced features, loses chat history between sessions, and Anthropic themselves flag prompt injection risks from malicious templates. The Excel plugin handles pivot tables and conditional formatting, which is useful — genuinely — but the gap between "reformats a spreadsheet" and "replaces the analyst who understands what the numbers mean" is enormous.

The pattern is the same one playing out with AI-driven efficiency programmes in banking. Automation compresses the mechanical work. Headcount shrinks at the junior end. The people who survive are the ones who know which questions to ask, not which buttons to press. The spreadsheet jockey who builds one pivot table a week is not the person at risk. The person at risk is the one who builds fifty — because that volume is precisely the kind of repetitive, pattern-matching labour that an LLM handles well.

Anthropic is positioning Claude as the "default operational layer across enterprise workflows." L'Oréal, Deloitte, and Thomson Reuters are already deploying custom agents. The plugins are open-source and portable, which is a deliberate play against Microsoft's Copilot lock-in. Whether that matters depends on whether enterprises actually want portability or just want one vendor to blame when something breaks.

The job panic will continue. Some of it is justified. Most of it is aimed at the wrong targets.

Bluesky and the Empty Room Problem

Forty million registered accounts. Roughly three million daily actives. That's a 92 percent no-show rate. Every time X does something stupid — and it does something stupid often — Bluesky gets a spike, people poke around, and most of them leave within the fortnight. The baseline nudges up slightly each time, which Bluesky's supporters treat as vindication. It isn't. It's a platform running on someone else's dysfunction.
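The no-show arithmetic, made explicit (the registered and daily-active figures are the rough public numbers quoted above):

```python
# Rough public figures quoted in the post; treat as order-of-magnitude only.
registered = 40_000_000
daily_active = 3_000_000

dau_rate = daily_active / registered    # fraction active on a given day
no_show = 1 - dau_rate

print(f"DAU rate: {dau_rate:.1%}")      # 7.5%
print(f"no-show:  {no_show:.1%}")       # 92.5% — the "92 percent" above
```

For comparison, a DAU-to-registered ratio in the high single digits is weak but not unheard of; the problem is less the ratio than the absence of any revenue attached to it.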

The business model is the real problem. No ads, no subscriptions, no revenue. Twenty-three million in funding and around thirty employees burning through it. Leadership says they have multiple years of runway, which in startup language means they need another round before 2028. The AT Protocol is technically interesting — genuinely — but "technically interesting" and "sustainable" occupy different postcodes.

I signed up. I posted a few times. The timeline felt like a conference afterparty where everyone agrees with each other and nobody's buying drinks. Good conversations happen there, I'm told. They also happen on Discord servers and group chats and park benches. The question isn't whether Bluesky is pleasant — it is — but whether pleasant is enough to build something that lasts without eventually becoming the thing it defined itself against.
