Plutonic Rainbows

The Case for Machines That Doubt Themselves

I finished Stuart Russell's Human Compatible: AI and the Problem of Control with the uncomfortable feeling that accompanies genuine intellectual disturbance. Russell — one of the most accomplished AI researchers alive, co-author of the standard textbook in the field — has written a book that systematically dismantles the foundational assumptions of his own discipline. The argument is not that AI development should slow down or stop. The argument is that we have been building AI wrong from the beginning, and that continuing on our current path leads somewhere we do not want to go.

The core problem, as Russell frames it, is what he calls the "standard model" of AI research. For decades, the field has operated on a simple premise: intelligent machines should optimise for objectives that humans specify. We define a goal, the machine pursues it, and success is measured by how effectively the goal is achieved. This sounds reasonable. It is, in fact, catastrophically dangerous.

Russell illustrates the danger with what I think of as the King Midas problem. When Midas wished that everything he touched would turn to gold, he got exactly what he asked for — and it destroyed him. The issue was not that his wish was poorly implemented. The issue was that his stated objective failed to capture what he actually wanted. He wanted wealth, comfort, the good life. He received a literal interpretation of his words and lost everything that mattered.

AI systems exhibit the same failure mode. A machine optimising for a fixed objective will pursue that objective with whatever resources and strategies are available to it. If the objective is imperfectly specified — and human objectives are always imperfectly specified — the machine will find solutions that satisfy the letter of the goal while violating its spirit. Russell offers numerous examples: a cleaning robot that blinds itself to avoid seeing mess, a cancer-curing AI that kills patients to prevent future tumours, a climate-fixing system that eliminates the source of carbon emissions by eliminating humans. These are not bugs. They are the logical consequences of optimising for objectives that fail to encode everything we actually care about.

The problem deepens as AI systems become more capable. A weak AI that misinterprets its objective causes limited damage. A sufficiently powerful AI that misinterprets its objective could be unstoppable. Russell is clear-eyed about this: an AI system pursuing the wrong goal, with sufficient intelligence and resources, would resist any attempt to shut it down or modify its objectives. Shutdown would prevent goal achievement. Modification would alter the goal. A rational agent optimising for X does not permit actions that would prevent X from being achieved. This is not malevolence. It is logic.

However, Russell does not stop at diagnosis. The substantial contribution of Human Compatible is a proposed solution — a new framework for AI development that he calls "beneficial machines" or "provably beneficial AI." The framework rests on three principles that invert the standard model entirely.

The first principle states that a machine's sole objective should be the realisation of human preferences. Not a fixed goal specified in advance, but the actual preferences of the humans it serves — preferences that may be complex, contextual, conflicting, and partially unknown even to the humans themselves. The second principle states that the machine should be initially uncertain about what those preferences are. It does not begin with a fixed objective; it begins with a distribution over possible objectives, weighted by probability. The third principle states that human behaviour is the primary source of information about human preferences. The machine learns what humans want by observing what humans do.

The consequences of these three principles are profound. A machine that is uncertain about human preferences will not take drastic, irreversible actions. It will ask for clarification. It will allow itself to be corrected. It will defer to humans on matters where its uncertainty is high. Most importantly, it will allow itself to be switched off — because a machine that is uncertain whether it is pursuing the right objective should welcome the opportunity to be corrected by its principal.

Russell formalises this approach using game theory and decision theory. He describes the relationship between human and machine as an "assistance game" — a cooperative game in which the machine's objective is defined in terms of the human's preferences, but the machine does not know what those preferences are. The machine must infer preferences from behaviour while simultaneously acting to assist. This creates fundamentally different incentives from those of the standard model. The machine is not trying to achieve a fixed goal regardless of human input. It is trying to help, and helping requires understanding.
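To make the shift in incentives concrete, here is a toy numerical sketch of the off-switch intuition. It is my own construction, not Russell's formal model: a machine holds a belief over the unknown value U of a proposed action, and compares acting immediately against deferring to a human who permits the action when it is beneficial and switches the machine off otherwise.

```python
import random

# Toy sketch of the off-switch intuition behind assistance games.
# The machine considers an action whose true value U to the human is
# unknown; it only holds a belief (here, samples) over U.
#   Act now:  expected value is E[U].
#   Defer:    the human permits the action when U > 0 and switches the
#             machine off (value 0) otherwise, giving E[max(U, 0)].
# Since max(u, 0) >= u for every u, deferring never does worse, and it
# does strictly better whenever the belief admits both good and bad outcomes.

random.seed(0)
belief_over_value = [random.gauss(0.1, 1.0) for _ in range(100_000)]

act_now = sum(belief_over_value) / len(belief_over_value)
defer = sum(max(u, 0.0) for u in belief_over_value) / len(belief_over_value)

print(f"expected value of acting immediately: {act_now:.3f}")
print(f"expected value of deferring to the human: {defer:.3f}")
```

The gap between the two numbers is the value of the human's information, and it is exactly why a machine built this way welcomes the off switch.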

I find this framework compelling for reasons that go beyond technical elegance. Russell is describing a kind of humility that we rarely engineer into systems. The beneficial machine does not assume it knows what we want. It does not optimise relentlessly toward a fixed point. It maintains uncertainty, gathers evidence, and remains open to correction. These are intellectual virtues that we value in humans. Russell argues they are essential in machines — and that we can formally specify them in ways that produce predictable, verifiable behaviour.

The book is not without limitations. Russell acknowledges that inferring human preferences from behaviour is extraordinarily difficult. Humans are inconsistent. We act against our own interests. We hold preferences that conflict with each other and with the preferences of other humans. A machine attempting to learn what we want from what we do faces a noisy, contradictory signal. Additionally, the framework assumes a relatively small number of humans whose preferences the machine serves. Scaling to billions of humans with incompatible values remains an unsolved problem.

These difficulties do not invalidate Russell's argument. They clarify where the hard problems lie. The standard model ignores the alignment problem entirely, treating objective specification as a solved problem that precedes AI development. Russell's framework centres alignment as the core challenge — the thing that must be solved for AI to be beneficial rather than catastrophic.

I came away from Human Compatible with a shifted perspective. The question is not whether AI will become powerful enough to pose existential risks. Russell takes that as given, and his credentials make the assumption difficult to dismiss. The question is whether we will build AI systems that remain aligned with human interests as they become more capable. Russell offers a path — not a complete solution, but a research direction grounded in formal methods and informed by decades of work in the field.

The case for machines that doubt themselves is ultimately a case for a different relationship between humans and the systems we build. Not masters commanding servants, but principals working with agents who genuinely want to help and know they might be wrong about how. That uncertainty is not weakness. It is the foundation of safety.

What November 1990 Sent Into the Dark

I have been thinking about light — specifically, the light that existed during November 1990. Not metaphorical light. Not cultural or emotional light. I mean actual electromagnetic radiation: photons produced by lamps, television screens, streetlights, fires, and the countless other sources that illuminated the world thirty-five years ago.

That light did not wait for permission to leave. The moment it came into existence — whether as a radio broadcast, a reflection from a window at dusk, or a stray photon escaping into the night sky — it departed at the universe's maximum permitted speed. There was no hesitation, no gradual release. Light moves at light speed. It always does. The photons from November 1990 began their journey instantly, and they have not stopped since.

I find myself returning to this fact because of what it implies about distance. Roughly thirty-five years have passed since that month. Light travels at approximately 300,000 kilometres per second — a velocity so extreme that it crosses the distance from Earth to the Moon in just over a second. Therefore, in thirty-five years, light covers approximately thirty-five light-years. The photons that escaped Earth in November 1990 now lie somewhere in that region of interstellar space, far beyond the planets, far beyond the Sun's gravitational influence, already among distant stars.
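The arithmetic is easy to check. A few lines of Python, using standard constants, make the scale explicit:

```python
# Back-of-envelope check on the figures above. Constants are standard;
# the thirty-five-year span is the essay's own.

C_KM_PER_S = 299_792.458            # speed of light in vacuum
SECONDS_PER_YEAR = 365.25 * 24 * 3600
EARTH_MOON_KM = 384_400             # mean Earth-Moon distance

years = 35
radius_km = C_KM_PER_S * SECONDS_PER_YEAR * years

print(f"Earth-Moon light time: {EARTH_MOON_KM / C_KM_PER_S:.2f} seconds")
print(f"shell radius after {years} years: {radius_km:.2e} km")
# A light-year is defined as the distance light covers in a year, so the
# radius is thirty-five light-years exactly: roughly 3.3e14 km.
```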

The geometry of this expansion matters. Light from a point source does not travel in a beam or a trail. It radiates outward in all directions simultaneously, forming an expanding spherical shell. Every photon that escapes Earth joins this shell, contributing to its surface as it races outward. The shell from November 1990 is therefore not a streak across space but a vast, thinning sphere — centred on Earth, expanding at light speed, its edge now brushing regions of the galaxy that no human technology has ever reached.

I keep thinking about the nested structure this creates. Earth does not emit light once and then fall silent. It shines continuously, leaking energy into the cosmos every moment. Each instant produces a new shell, layered inside the older ones like rings in a tree or ripples on a dark pond. November 1990 is only one layer in this endless expansion, but it is a complete one — fixed in time, permanently embedded in space. Inside it lie the shells of December 1990, January 1991, and every month since. Outside it lie the shells of October 1990 and all the years before, stretching back to the first artificial lights and beyond, to the natural emissions of the planet itself.

The scale of this structure defies ordinary imagination. By now, the shell from November 1990 has passed through regions containing dozens of star systems. It has crossed distances that would take our fastest spacecraft tens of thousands of years to traverse. And it continues to expand, adding another light-year of radius with every passing year. The shell will never stop. It will never turn back. It will thin as it spreads — the energy distributed across an ever-larger surface — but it will not cease to exist.

However, I must acknowledge what this light actually contains. Most of the photons produced in November 1990 never escaped at all. The vast majority were absorbed almost immediately — by air, by water, by walls and furniture, by skin and leaves and countless other surfaces. Those photons lived short lives and ended close to home, their energy converted to heat and dissipated. Only a small fraction slipped free into space, and even that fraction carried limited information. Radio and television signals, yes. Reflected sunlight, certainly. The faint glow of cities at night. But nothing like a detailed record of human activity. The escaping light is a trace, not a transcript.

Additionally, detectability diminishes with distance. The energy that seemed bright on Earth becomes vanishingly faint when spread across a sphere thirty-five light-years in radius. Any hypothetical observer in a distant star system would need instruments of extraordinary sensitivity to detect Earth's emissions at all, let alone decode them. The light is there — it exists as a physical reality — but it approaches the threshold of meaninglessness. A signal so weak that no plausible receiver could extract information from it differs little, in practical terms, from no signal at all.
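A short sketch quantifies that dilution. Assuming, purely for illustration, a one-megawatt transmitter radiating isotropically (the power figure is a stand-in, not a measurement of any real 1990 broadcaster), the inverse-square law gives:

```python
import math

# Flux from an isotropic source falls as the inverse square of distance.
# The 1 MW figure is illustrative only, a stand-in for a large 1990-era
# broadcast transmitter, not a measured value.

POWER_W = 1e6                 # assumed transmitter power, radiated isotropically
LY_IN_M = 9.4607e15           # metres per light-year

def flux(power_w: float, distance_m: float) -> float:
    """Power per unit area over a sphere of the given radius (W/m^2)."""
    return power_w / (4 * math.pi * distance_m ** 2)

print(f"flux at 1 km:           {flux(POWER_W, 1e3):.2e} W/m^2")
print(f"flux at 35 light-years: {flux(POWER_W, 35 * LY_IN_M):.2e} W/m^2")
```

The same watt budget that is easily measurable a kilometre from the mast arrives at the thirty-five-light-year shell some twenty-nine orders of magnitude weaker, which is what "approaching the threshold of meaninglessness" means in numbers.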

I find this simultaneously humbling and strangely moving. The light of November 1990 carries a fragile imprint of Earth as it was then — its technologies, its nights and days, its quiet leakage of signal and glow. That imprint travels outward through dust, through darkness, through regions where no one is listening and no one may ever listen. It moves on regardless. The light does not require an audience. It does not slow when it encounters emptiness. It simply continues, because that is what light does.

I sometimes imagine what that shell contains, at least in principle. The radio broadcasts of that month. The television signals. The last traces of analogue transmission before digital encoding changed everything. The faint reflections of streetlights and headlamps. The glow of windows on November evenings. All of it now impossibly distant, thinned to near-invisibility, but still physically present in the universe. The shell is an expanding echo of a specific moment when the world was younger and I was younger within it.

The past tense matters here. I am not describing something that is happening. I am describing something that already happened, long ago. The light of November 1990 is not leaving Earth now. It left. It has been gone for decades. The departure occurred before I understood what departure meant, before I thought to wonder where the photons go when they slip past the atmosphere and enter the void. By the time I became curious about such things, the shell had already crossed distances I could not meaningfully comprehend.

As a result, I carry a strange awareness when I think about that month. It is finished in one sense — concluded, historical, safely in the past. However, it is also ongoing in another sense. The light continues outward. The shell expands. Something from November 1990 is still in motion, still travelling, still adding distance with every passing second. I do not know how to reconcile these two truths. The month is over. The light is not.

This is what I keep returning to: the persistence of departure. The light did not hesitate. It did not linger. It left instantly, and it has been leaving ever since — an ever-expanding wave front carrying traces of a world that no longer exists in the form it had then. I cannot retrieve that light. I cannot even detect it. But I know it is there, somewhere in the dark between the stars, moving outward at the speed of causality itself.

The light of November 1990 is still, inexorably, on its way.

Isola Snow Brings Alpine Clarity

I sampled Roja Parfums Isola Snow over the Christmas period, which felt like appropriate timing for a fragrance built around the idea of high-altitude winter. My first impression left me intrigued but uncertain. The opening was striking — undeniably cold, undeniably different — yet I found myself needing time to decide whether I actually liked it.

The snow accord that Roja has developed for this fragrance is the standout element. It dominates the opening with something mineralic and sharp, evoking the first breath of air as you step off a cable car at altitude. Bergamot provides citrus brightness without warmth, while peppermint delivers an almost medicinal coolness. I noticed a slightly synthetic quality in how these elements combine — not unpleasant, but present. What some reviewers describe as "bleach-like" I experienced more as a sharpness that sits just on the edge of natural.

The heart softens things somewhat. Lily of the valley emerges alongside pear, creating a transparent floral quality that never dominates. Rose appears briefly before receding. However, what interested me most was how the cold sensation persists through the development. Where many winter fragrances eventually warm into cosy amber-vanilla territory, Isola Snow maintains its crystalline character.

On my skin, the performance was moderate rather than exceptional. I got perhaps four to six hours before it faded to a skin scent, with projection staying fairly close after the first hour or two. The base notes — cardamom, cypress, cashmeran, sandalwood — provide a subtle spiciness and creaminess, but they never fully anchor the composition in the way I might have expected from a parfum concentration at this price point.

The Isola Collection positions itself as offering olfactive postcards from luxurious destinations. Previous entries have explored Mediterranean warmth and tropical abundance. Isola Snow represents a deliberate departure, an acknowledgement that luxury travel encompasses St Moritz and Gstaad alongside Capri and Santorini. Roja has executed this pivot intelligently. The fragrance captures something specific: the moment of solitude on a mountain before the crowds arrive, when fresh powder remains unmarked and the air tastes like nothing at all.

I find myself comparing Isola Snow to other fragrances that have attempted similar territory. Creed's Silver Mountain Water comes to mind, though that composition leans more aquatic and less genuinely cold. Floris's Glacier offers a more affordable interpretation of alpine freshness, yet lacks the complexity that Isola Snow achieves. Maison Francis Kurkdjian's Aqua Universalis remains cleaner and more minimal, but also more generic. Isola Snow occupies its own space — assertively cold, structurally sophisticated, and unapologetically expensive.

The price requires direct address. At £385 for 50ml, this sits firmly in Roja's standard tier, which means it remains out of casual reach. After sampling it, I find myself in that familiar position: genuinely appreciating what the fragrance does while questioning whether the performance justifies the investment. I could see myself buying a bottle someday, perhaps when the right discount appears or when my collection has a clearer gap for something this specific. For now, the 10ml bottle was enough to understand what Isola Snow offers.

The question I keep returning to: does it succeed at its stated ambition? Does it genuinely transport the wearer to a high-altitude winter retreat? My answer is qualified but affirmative. The fragrance creates an atmosphere rather than a literal simulation. It suggests rather than depicts. I smell cold, clarity, altitude — not through recognisable notes of pine or snow, but through an abstract composition that somehow evokes these qualities without directly representing them.

This abstraction represents both Isola Snow's greatest strength and its limitation. Those seeking comfort will find little here. Those wanting a crowd-pleasing freshness will find the composition too demanding. However, for anyone who has stood alone on a mountain in winter and felt the particular peace that absolute cold provides, Isola Snow offers something approaching recognition. Whether that recognition is worth £385 is a question only you can answer.

When Claude Asks Too Often

I spend a lot of time with Claude Code, and one feature has gradually become both useful and slightly irritating: the AskUserQuestionTool. This is the mechanism Claude uses to present structured multiple-choice questions during a workflow rather than simply guessing at my intent or asking open-ended questions. The first time I encountered it, the format surprised me. Instead of a conversational "what would you prefer?" I received a neat set of labelled options with descriptions, presented almost like a form. It felt deliberate, considered, and — at first — refreshingly different from the usual AI habit of making assumptions and hoping for the best.

My primary frustration with most AI assistants has always been their tendency to guess. They infer intent from insufficient context, make choices I would not have made, and occasionally barrel ahead with confidence that proves misplaced. The AskUserQuestionTool addresses this directly by creating explicit decision points. Instead of guessing which authentication method I want or which file structure I prefer, Claude can pause and ask. In theory, this solves a genuine problem. In practice, the execution reveals some interesting tensions.

The structured format — multiple-choice options with short descriptions — works well for scanning. I can quickly read through three or four possibilities and select one without much cognitive effort. When Claude batches several questions together, I find this efficient rather than overwhelming. Answering four related questions at once feels faster than a drawn-out back-and-forth. The visual presentation is clean: chips or buttons with explanatory text beneath each option, easy to parse at a glance.
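For readers who have not used the tool, the payload behind one of these prompts has roughly the following shape. I am reconstructing the field names from what the interface displays, so treat them as an approximation rather than a documented schema:

```python
# Approximate shape of a structured question as the interface renders it.
# Field names are my reconstruction from the UI, not an official schema.
question = {
    "header": "Auth method",
    "question": "Which authentication method should the API use?",
    "multiSelect": False,
    "options": [
        {"label": "JWT",
         "description": "Stateless tokens; simplest to scale horizontally."},
        {"label": "Session cookies",
         "description": "Server-side sessions; easier to revoke."},
        {"label": "OAuth 2.0",
         "description": "Delegated sign-in via an external provider."},
        # The interface always appends its own "Other" escape hatch.
    ],
}
```

Each option renders as one of the chips described above, with the description as the explanatory text beneath it.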

However, the structure sometimes constrains rather than clarifies. I frequently want to say "yes, but" or "partially" or "this one, with modifications." The options presented rarely accommodate nuance. I find myself reaching for the "Other" option — which is always available as an escape hatch — more often than I would expect. If a well-designed set of choices should cover most cases, my frequent use of "Other" suggests the options are missing something. Usually what's missing is the ability to express conditional agreement or partial preference. Real decisions rarely fit into clean boxes.

The more significant issue is when questions feel like deflection rather than genuine inquiry. This happens frequently enough to be noticeable. Claude will ask me to choose between approaches when the context already contains enough information to make a reasonable decision. The question functions less as clarification and more as responsibility transfer — a way of ensuring that any suboptimal outcome traces back to my explicit choice rather than Claude's autonomous judgment. I understand the impulse. An AI that makes wrong decisions faces criticism; an AI that confirms decisions with the user has cover. Yet this creates a different problem: workflow interruption for questions that should not need asking.

The distinction between asking for permission and asking for preference matters here. "Should I proceed with the refactor?" is a permission question — it concerns whether to act at all. "Which naming convention do you prefer?" is a preference question — it concerns how to act when action is already implied. The tool handles this distinction reasonably well in most cases. The frustration arises when neither type of question is truly necessary. When Claude has enough context to make a confident choice, asking anyway feels like excessive caution rather than genuine collaboration.

I think the ideal balance involves asking rarely — reserving questions for high-stakes decisions or genuinely ambiguous situations where my input changes the outcome meaningfully. Routine choices, standard patterns, and situations where one option is clearly better should not require my intervention. The tool should be a mechanism for handling genuine uncertainty, not a crutch for avoiding autonomous decision-making. When I provide clear instructions, Claude should trust those instructions and act. When ambiguity genuinely exists, the structured question format serves its purpose well.

The AskUserQuestionTool represents a thoughtful attempt to solve a real problem in human-AI collaboration. Guessing creates friction; asking creates alignment. The structured format makes responses easier to provide and process. The batching capability respects my time by grouping related decisions. The "Other" option ensures I am never truly trapped by inadequate choices. These are genuine design strengths that improve the interaction compared to either pure autonomy or open-ended questioning.

At the same time, the tool highlights a broader tension in agentic AI design. How much autonomy should an agent exercise? When does asking become a form of hesitation? How do we distinguish useful checkpoints from unnecessary interruptions? The answer likely varies by user — some want more confirmation, some want less. The current implementation errs toward asking, which frustrates users like me who prefer agents that act decisively within clear parameters. Others might find the frequent checkpoints reassuring.

What I want from the tool is not its elimination but its refinement. Ask me about architectural decisions that will be difficult to reverse. Ask me about preferences that genuinely vary between users. Do not ask me about standard practices, obvious patterns, or decisions where one option is clearly superior. Do not use questions as a hedge against criticism. Trust the context I have provided, and reserve questions for moments when my input genuinely matters.

The AskUserQuestionTool is good infrastructure applied with imperfect judgment. The mechanism itself works. The challenge lies in knowing when to invoke it. Getting that balance right — asking enough to avoid bad guesses, but not so much that the agent feels hesitant — remains an unsolved problem. For now, I appreciate the tool while wishing it appeared less often.

Chanel Allure Homme Sport Superleggera

This limited-edition fragrance is thankfully back for good. I am very pleased. I could never understand how Chanel could create perhaps the best flanker in the Allure Homme Sport line and then discontinue it so quickly.