Plutonic Rainbows

The Metabolic Cost of Looking Back

Certain kinds of thinking cost more than others. Mentally returning to a past moment — really returning, not just glancing — requires the mind to reconstruct something that no longer exists. The room, the light, the particular quality of a voice. When that moment carries emotional weight, the reconstruction doesn't stay intellectual. The body enters it too. Heart rate shifts. Breathing changes. The nervous system begins responding to something that isn't happening.

This is expensive.

I don't mean expensive in some vague, moralising way. Psychologists have a term for this pattern: rumination. The word comes from the digestive process of cows — chewing the same material over and over. When applied to thought, it describes the repetitive focus on distressing content without movement toward resolution. Research published in Stress and Health this year found that people who score high on rumination measures show exaggerated cardiovascular responses to stress and, critically, slower recovery afterward. The body stays activated longer. It doesn't settle.

There's a difference between remembering and dwelling that took me years to understand. Remembering can be reflective, even nourishing — a way of honouring what happened, integrating it, letting it inform the present without dominating it. Dwelling is something else. Dwelling is immersive, comparative, and repetitive. It doesn't integrate. It displaces. The present gets evaluated not on its own terms but against a version of the past that has been retrospectively polished until it gleams.

That comparison is unwinnable.

The past you're measuring against isn't even accurate anymore. Memory doesn't archive experience faithfully. Every recollection is a reconstruction — and reconstruction favours emotional intensity over factual precision. A period that was actually mixed, containing both good and difficult moments, can crystallise into pure golden light when viewed from sufficient distance. The mundane parts drop away. What remains is the atmosphere, stripped of its complications. You end up competing with a ghost that never existed.

A study from the University of Liverpool identified dwelling on negative events as the single biggest predictor of both depression and anxiety. Not the events themselves — the dwelling. The cognitive pattern of returning again and again, generating alternatives that cannot be pursued, asking questions that cannot be answered. What if I had stayed? What if I had said something different? The brain is remarkably good at generating counterfactuals. It is remarkably bad at closing them when the alternatives are impossible. The loop has no exit.

I've been writing about memory for a while now, trying to understand why certain fragments refuse to stay in the past. Part of the answer, I think, is that emotionally vivid memories don't behave like dated entries in a calendar. They feel concurrent with the present. They resist being filed under "then." When I dwell on such a memory, I'm not looking backward at a fixed point. I'm experiencing something that seems to exist alongside now, competing for the same attention, drawing from the same limited pool of emotional energy.

And that pool is limited. Attention, once fixed, is expensive to keep fixed. Emotion that has nowhere to go — no corrective action, no completion, no resolution — exhausts rather than motivates. This is one reason beautiful memories can leave a person feeling depleted afterward. The emotion is real. The activation is real. But there's nothing to do with it. No way to act. The feeling cycles without discharge.

I should say plainly: I don't think any of this means the past should be ignored or that reflecting on difficult memories is inherently harmful. The problem isn't memory. The problem is a specific relationship to memory — one characterised by repetition without integration, by comparison without acceptance, by emotion without agency. The psychological literature calls this "brooding" as opposed to "reflective pondering." Brooding predicts worse outcomes. Reflective pondering can actually help.

The distinction is subtle but feels obvious once you notice it. Reflective pondering asks what happened and what it means. Brooding asks why this happened to me and whether it could have been different. One moves toward understanding. The other moves toward a wall.

Some of the fatigue, I suspect, comes from temporal misallocation of meaning. When a specific period of the past comes to carry disproportionate emotional weight, the present is quietly stripped of legitimacy. New experiences feel thin because they're not allowed to matter in the same way. They're measured against something that has been idealised through distance and repetition. Even neutral or potentially good moments struggle to register because attention has been monopolised elsewhere.

I notice this in myself. There are stretches of time when my present life is fine — genuinely fine, not pretending — but a certain flavour of memory keeps surfacing, and each surfacing takes something. Not much, but it accumulates. Like a tax on attention. After a day of this, I'm tired in a way that doesn't correspond to what I've actually done. The body knows it has been working even if the work is invisible.

Recent research in Frontiers in Psychology found that fatigue itself can trigger rumination, creating a feedback loop. Tired people dwell more. Dwelling makes people tired. The cycle reinforces itself. Breaking out requires noticing the pattern — recognising when remembering stops adding depth and starts extracting vitality. That recognition doesn't fix anything by itself, but it marks the point where awareness begins to replace compulsion.

Self-compassion appears to help. Not in the sense of empty reassurance, but in the sense of treating yourself with the same patience you'd offer someone else caught in the same loop. A study published in Nature this year found that self-compassion mediates the relationship between self-critical rumination and anxiety. Which is to say: how you relate to the pattern matters as much as the pattern itself. Beating yourself up for dwelling only adds another layer to the thing you're dwelling on.

I'm not sure I've gotten better at this. I've gotten better at noticing it, which is something. When I catch myself returning to the same moment for the third or fourth time in a day, I can sometimes name what's happening: reconstruction is active, the body is responding to something that isn't here, energy is being spent on a comparison I cannot win. Naming it doesn't stop it. But naming it creates a small gap between the experience and my identification with it.

The hard part — the honest part — is accepting that some memories will keep arriving whether I want them to or not. They'll bring their weather with them. The question isn't how to make them stop. The question is whether I let them run the whole day or whether I can acknowledge their arrival and then, with effort, redirect attention to something I can actually affect.

Some days I manage. Some days I don't.

Sources:

Where Claude Code Goes From Here

The 2.1.0 release landed a few days ago, and buried in the changelog are some features that hint at where Anthropic is taking this thing. Session teleportation — the ability to resume a terminal session at claude.ai/code using /teleport — sounds like a convenience feature until you realise what it actually enables. I can start something complex on my laptop, close the lid, and pick it up on my phone later. The session state persists somewhere in Anthropic's infrastructure, waiting.

This feels like the beginning of something larger. The pattern I'm seeing across recent releases suggests Anthropic is building toward persistent agents that survive individual sessions. Not just chat history — actual running context that carries forward. The hooks system they added for skills and agents points in the same direction. You can now define PreToolUse and PostToolUse logic that scopes to specific contexts. That's infrastructure for agents that remember what they were doing and why.
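
For concreteness: hooks live in Claude Code's settings files (for example .claude/settings.json), keyed by event name, with matchers that select which tools they fire on. A minimal sketch of that shape, with illustrative matchers and throwaway logging commands of my own invention, might look like the block below; the per-skill and per-agent scoping that 2.1.0 mentions presumably layers extra fields onto this same structure, and I haven't shown that syntax here.

    {
      "hooks": {
        "PreToolUse": [
          {
            "matcher": "Bash",
            "hooks": [
              { "type": "command", "command": "echo 'illustrative: about to run a shell command' >> ~/.claude/hook.log" }
            ]
          }
        ],
        "PostToolUse": [
          {
            "matcher": "Edit|Write",
            "hooks": [
              { "type": "command", "command": "echo 'illustrative: a file was just modified' >> ~/.claude/hook.log" }
            ]
          }
        ]
      }
    }

The logging is beside the point. What matters is that a PreToolUse command gets to inspect (and in principle block) a tool call before it runs, and a PostToolUse command gets to record what just happened — exactly the kind of memory that lives outside any single session, which is what persistent agents would need.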

The Chrome integration is interesting too. Beta, obviously, but the idea of controlling a browser directly from the terminal opens up workflows I hadn't considered. Automated testing that actually sees the page. Form filling. Screenshot analysis. It's not that any individual capability is new — it's that they're converging into something more coherent.

I'm not sure Anthropic has figured out where the boundaries should be. The Explore subagent, which uses Haiku to search codebases efficiently, saves context by doing lightweight reconnaissance before committing the main model's attention. Smart, but it also means decisions about what's relevant happen outside my visibility. Sometimes it finds exactly what I need. Sometimes it misses something obvious because the cheaper model didn't recognise its importance. The tradeoff makes sense economically; I'm less certain it makes sense epistemically.

What I'm watching for next: multi-session orchestration. The teleportation feature only works for resuming a single session right now. But the infrastructure clearly supports more than that — spawning background agents that report back, coordinating work across multiple contexts, that sort of thing. Cowork plugins already hint at this. Companies can apparently build internal plugin catalogs now. The pieces are assembling.

My guess — and this is speculation — is that Anthropic ships proper agent orchestration within the next few months. Not as a separate product, but as an extension of what Claude Code already does. The session teleportation, the hooks system, the subagent architecture: these aren't random features bolted on. They're scaffolding for something more ambitious. Whether that ambition lands gracefully or creates new categories of confusion remains to be seen. The history of agentic AI is littered with impressive demos that fell apart in production.

For now, I'm mostly pleased with where things are. The asking-too-often problem hasn't disappeared, but the tool has gotten better at knowing when to just proceed. The codebase search actually works. The Chrome stuff is rough but promising.
Wonder 2 Finally Shows Some Restraint

The original Wonder model turned everything into watercolour. Skin looked airbrushed, fabric lost its weave, and faces — especially small ones in group shots — came out waxy. I ran a batch of family photos from the early 2000s through it last year and the results were unusable. Everyone looked like they'd been smoothed in Photoshop by someone who'd just discovered the blur tool. I stopped using it.

Wonder 2 is different. Topaz finally acknowledged what users had been complaining about: the over-processing. The new model dials back the artificial sharpening and actually preserves texture. Hair looks like hair. Skin has pores. Fabric keeps its weave instead of melting into some vague suggestion of cloth.

I tried the same batch again. The difference is significant. Not perfect — I'm not sure any upscaler handles compression artefacts from early digital cameras gracefully — but the faces are recognisably human now. That waxy sheen is gone.

The catch: it's cloud-only. Topaz says the computational demands are too heavy for local processing, which is probably true, but it means you're uploading your images to their servers. For personal photos, I don't love that. For client work, some people won't accept it at all. I understand the technical reasoning — these models are enormous and running them locally would require hardware most photographers don't have — but it still feels like a step backward in terms of control.

There's also the subscription question. Topaz moved to a subscription model last year, which rubbed a lot of long-time users the wrong way. I'm not going to relitigate that argument here. The software works or it doesn't. For me, Wonder 2 works well enough that I've started using Topaz again after months of avoiding it.

What I actually wanted from an upscaler was always simple: make the image bigger without making it worse. Don't add detail that wasn't there. Don't smooth things that should be rough. Don't sharpen edges until they ring. Just scale it up and preserve what exists. Wonder 2 gets closer to that than anything else I've tried. It's not magic — you can't turn a 200x300 thumbnail into a printable image — but for moderate upscaling of decent source material, it does the job without leaving obvious fingerprints.

The Fidelity update that shipped alongside Wonder 2 includes a bunch of other models too: Recover 3 for softer results, plus some video stuff I haven't tested. But Wonder 2 is the one that matters to me. It's the reason I'm writing this instead of just ignoring another Topaz release.