Plutonic Rainbows

When Attars Take Flight

Sultan Pasha's decision to reformulate Thebes as an alcohol-based Extrait de Parfum marks a significant departure from the oil-based attar tradition that established his reputation. The original Thebes Grade 1 arrived in 2016 as an homage to Guerlain's discontinued Djedi — a fragrance so evocative that Sultan Pasha described it as the only perfume that had brought him close to tears. After months of painstaking recreation, he captured that spectral atmosphere in oil form, creating what became his signature composition.

Nearly a decade later, the 2025 release transforms that intimate, skin-hugging attar into something altogether different. Working alongside Christian Carbonnel under the new Sultan Pasha Perfumes label, the reformulation explores what happens when you translate oil's density and warmth into alcohol's volatility and projection. The result maintains the core narrative — an ancient Egyptian tomb, the boundary between life and death — while fundamentally altering how that story unfolds in space and time.

The composition itself reads like an exercise in controlled opposition. Bright aldehydes and a white floral bouquet of jasmine, muguet, and rose sit against somber, earthy vetiver and the distinctive chalk-like texture of genuine orris butter. Reviewers consistently note this tension: the fragrance is simultaneously luminous and gloomy, uplifting and ritualistic. One detailed review describes waves of heady florals alternating with leather and salty ambergris, creating an animalic, fatty quality that feels deliberately unsettling.

This approach differs markedly from the attar version's intimate revelation. Alcohol-based perfumes diffuse outward, creating a more public presence that transforms the wearer's relationship to the scent. Where the oil version whispered ancient secrets directly to the skin, the Extrait broadcasts them into the surrounding air. The projection reportedly remains strong for the first two hours before settling closer to the body, with longevity hovering around five hours — a relatively modest performance for an Extrait concentration, suggesting the formula prioritizes complexity over sheer endurance.

The move to alcohol represents more than technical reformulation. Sultan Pasha built his reputation through traditional attar craftsmanship, a method that demands patience and precision but limits commercial reach. Attars require direct application, careful storage, and an understanding that comes through experience. By creating alcohol-based versions of his most celebrated works, he opens a door to audiences who might find oil-based perfumes too unfamiliar or demanding.

However, this accessibility comes with artistic risks. The attar community values the medium's contemplative nature — its quiet intensity, its refusal to announce itself beyond the wearer's personal space. Translating that aesthetic into alcohol requires careful calibration to avoid losing what made the original compelling. Based on early responses, Thebes manages this balance by maintaining its strange, funereal atmosphere even as it reaches farther from the skin. The reformulation amplifies certain aspects — particularly the aldehydic brightness and floral lift — while preserving the dusty, ritualistic core that defines the concept.

Sample sets became available for preorder through January 2026, a deliberate strategy that allows serious enthusiasts to experience the full lineup before committing to full bottles. This approach respects the considered, exploratory mindset that characterizes niche perfume appreciation. These are not fragrances designed for casual purchase; they demand time, attention, and a willingness to sit with discomfort. The animalic qualities alone ensure this remains far from mainstream tastes.

What strikes me most about this release is its timing. The niche perfume market has become increasingly crowded, with countless brands claiming artisanal credentials while churning out derivative compositions. Sultan Pasha's move to alcohol could be read as capitulation to commercial pressure, but the execution suggests otherwise. By maintaining Extrait concentration and preserving the challenging, unconventional character of the original work, he signals that accessibility need not mean simplification.

The question now becomes whether this model succeeds — whether audiences accustomed to attars will embrace the reformulations, and whether those new to Sultan Pasha's work will appreciate what makes these fragrances distinctive. Thebes tests that proposition directly, offering a scent that refuses conventional pleasantness in favor of atmospheric depth. It remains to be seen whether the broader market rewards that uncompromising vision or whether the commercial realities of alcohol-based production eventually push toward safer ground.

For now, Thebes in Extrait form exists as a fascinating experiment in translation, asking how much of an attar's soul survives the journey from oil to alcohol. The early evidence suggests more than you might expect, though undoubtedly something irretrievable remains bound to the original medium. What emerges is neither superior nor inferior, but genuinely different — a parallel interpretation that extends the concept rather than simply reproducing it in another format.

When Architecture Becomes Instrument

Philip Johnson's Glass House served as more than a venue for Ryuichi Sakamoto and Alva Noto's 2016 improvisation — it became the instrument itself. Contact microphones placed on the glass walls captured vibrations, transforming the structure into a resonant body. The resulting album, released in 2018, documents a single 37-minute performance where architectural space and electronic processing merge.

The collaboration marked their first live work together since Sakamoto's cancer diagnosis in 2014. Both artists approached the session with minimal rehearsal, spending only one day preparing before the recording. Sakamoto brought a keyboard and glass singing bowls, while Carsten Nicolai, the artist who records as Alva Noto, contributed his characteristic digital processing. However, the true voice emerged from the building itself.

Yayoi Kusama's installation — Dots Obsession: Alive, Seeking for Eternal Hope — occupied the space during the performance. Sakamoto described looking through the glass walls at the landscape while surrounded by Kusama's dots as "a strange mixture of natural, nature, and artificial things, art." That tension between organic and synthetic pervades the recording. Nicolai's glitches and static rest against Sakamoto's melodic fragments, neither dominating.

With its glass walls wired with contact microphones, the Glass House offered ideal conditions for an experiment in architectural acoustics. What emerged was not merely electronic music performed in a space, but music generated from the space itself — a document of place as much as performance.

The Deliberate Slowdown: What Anthropic's Development Pace Tells Us About Sonnet 5

I've been watching Anthropic's release cadence closely over the past year, and something has changed. The company that brought us Claude Opus 4.5 in November 2025 has gone conspicuously quiet. No leaks, no benchmarks teased on Twitter, no cryptic blog posts hinting at breakthrough capabilities. Just silence. That silence, however, tells me more about their next model than any press release could.

The industry has trained us to expect a particular rhythm. OpenAI drops a new model every few months, each one incrementally better than the last. Google races to catch up. The smaller labs scramble to carve out niches. We've come to expect this treadmill of marginal improvements, each accompanied by breathless claims of revolutionary progress. Anthropic participated in this race for a while, but I believe they're stepping off it deliberately.

Consider what we know about their philosophy. The company was founded explicitly on the principle that AI safety cannot be an afterthought. Their Constitutional AI approach isn't marketing — it's baked into their training methodology. They've published papers on interpretability that most companies wouldn't touch because they reveal uncomfortable truths about what we don't understand. This isn't a company optimizing for Twitter engagement or shareholder updates.

Therefore, when I look at the gap between Opus 4.5 and whatever comes next, I don't see delay. I see intentionality. I believe Anthropic is rebuilding their development process from the ground up, and the next Sonnet model will reflect that fundamental shift.

The current generation of frontier models, including Anthropic's own, share a common weakness. We can measure their performance on benchmarks, but we struggle to predict their behavior in edge cases. They excel at standard tasks while occasionally producing outputs that reveal concerning blind spots. This unpredictability isn't just an engineering challenge — it's an existential risk that scales with capability. Additionally, the compute required to train these models has grown exponentially, while the improvements have become increasingly incremental.

I suspect Anthropic recognized this pattern and decided to break it. Rather than rush out Sonnet 5 with another ten percent improvement on MMLU, they're likely pursuing something harder. They're probably working on models that can explain their reasoning not as a party trick, but as a core architectural feature. Models that know what they don't know and communicate that uncertainty clearly. Models that scale in safety as aggressively as they scale in capability.

This approach demands patience. You can't bolt interpretability onto a model after training and expect meaningful results. You can't patch constitutional principles into an architecture designed around different priorities. If Anthropic is serious about building models that remain aligned as they grow more powerful, they need to redesign the foundation. That takes time.

The economics support this theory as well. Training runs for frontier models now cost tens of millions of dollars at minimum, likely hundreds of millions for the largest experiments. Companies can sustain that spending if each model clearly surpasses its predecessor and generates corresponding revenue. However, as improvements become marginal, the calculus changes. Anthropic has substantial funding, but that funding is not infinite. A strategic pause to ensure the next model represents a genuine leap rather than an incremental step makes financial sense.

I also notice that Anthropic has been unusually active in publishing research on model interpretability and mechanistic understanding. These papers don't generate immediate commercial value, but they lay groundwork. They suggest a company thinking several moves ahead, building the theoretical foundation for techniques they plan to deploy at scale. When Sonnet 5 eventually arrives, I expect we'll see these research threads woven throughout its architecture.

The competitive landscape reinforces this reading. OpenAI remains the market leader in terms of mindshare, but their recent releases have felt increasingly similar to each other. Google has made impressive strides with Gemini, but they're playing the same game everyone else is playing — faster, bigger, slightly better on benchmarks. There's an opening for a company willing to compete on a different axis entirely. If Anthropic can deliver a model that's not just capable but genuinely more trustworthy and interpretable, they could define a new category of competition.

Think about what enterprises actually need from these models. They don't need another incremental improvement in code generation or mathematical reasoning. They need models they can deploy with confidence, models whose failure modes they understand, models that integrate into systems with predictable behavior. The company that solves those problems will command premium pricing and customer loyalty that benchmark performance alone cannot buy.

As a result, my prediction for Sonnet 5 is specific. I don't think we'll see a traditional release announcement with the usual fanfare. Instead, I expect Anthropic will publish a detailed technical paper explaining new approaches to alignment and interpretability, followed by a model that demonstrates those approaches in practice. The improvements on standard benchmarks might be modest — perhaps even deliberately restrained. The real advances will be in areas we currently struggle to measure: robustness, predictability, transparency.

The timeline is harder to predict, but I'd be surprised if we see anything before mid-2026. Anthropic's silence suggests they're deep in the experimental phase, not polishing a nearly-ready product. They're likely running training experiments, evaluating results, iterating on architecture. That process can't be rushed without compromising the principles that differentiate them.

This slower pace might frustrate those of us who refresh the Anthropic homepage daily hoping for news. However, I find it reassuring. We've spent the past few years in a headlong sprint toward more capable AI systems, often with safety and interpretability lagging behind. If one major lab is willing to slow down and do the harder work of building systems that scale safely, that benefits everyone.

The race to AGI continues, but perhaps we need some participants racing toward a different finish line. Anthropic appears to be positioning themselves as exactly that. When Sonnet 5 arrives, I believe it will represent not just an incremental improvement, but a statement about what frontier AI development can and should prioritize. The deliberate slowdown isn't weakness — it's the most ambitious move they could make.

When the Oracle Starts Selling Ad Space

I read the news about OpenAI exploring advertising-supported products with a kind of weary recognition. Not surprise — the trajectory has been obvious for months — but something closer to resignation. The company that positioned itself as humanity's steward in the age of artificial intelligence is now contemplating the same business model that turned social media into a surveillance apparatus and search engines into glorified billboards. The irony is almost too neat.

The reporting suggests OpenAI is considering ads as a way to expand access to ChatGPT and its other products. Free tiers supported by advertising would lower the barrier to entry, bringing AI capabilities to users who cannot or will not pay subscription fees. This sounds reasonable. It sounds, in fact, like the familiar Silicon Valley playbook: build something compelling, give it away for free, monetize attention. However, applying this model to AI systems creates problems that do not exist with traditional software.

The fundamental issue is alignment — not in the technical sense that AI researchers discuss, but in the economic sense that determines what companies actually optimize for. A subscription business aligns the company's interests with the user's interests. I pay for a service that works well for me. The company improves the service to justify continued payment. The incentive structure is straightforward. An advertising business, by contrast, splits the alignment. The user is no longer the customer. The user is the product being sold to the actual customer: the advertiser.

This misalignment has predictable consequences. Facebook optimized for engagement because engagement generates ad impressions. The algorithm learned to surface content that provokes strong emotional reactions — outrage, fear, tribal identification — because those reactions keep people scrolling. Additionally, Google Search has degraded steadily as ads colonize more of the results page and SEO spam proliferates, because Google's incentive is to show ads, not to surface the best information quickly.

Apply this dynamic to ChatGPT and the implications become unsettling. An advertising-supported AI assistant would be optimized not for providing accurate, helpful information, but for maximizing user engagement with advertising content. The model might subtly bias its responses toward advertisers' products. It might provide longer, more circuitous answers that create more opportunities to insert promotional content. It might recommend solutions that happen to involve purchasing something from a sponsor. The corruption would be gradual and deniable, but the economic incentives point in one direction only.

I recognize the counterargument: OpenAI will maintain strict separation between the AI's core functionality and the advertising layer. Ads will be clearly labeled and isolated from responses. The company has a reputation to protect and sufficient capital to resist immediate pressure for aggressive monetization. On this view, the pessimistic scenario I describe will not materialize because OpenAI will implement advertising responsibly.

This argument fails on two grounds. First, advertising businesses always become more aggressive over time. The initial implementation is restrained and user-friendly. Then quarterly revenue targets increase. Growth slows. Investors demand higher returns. The product team faces pressure to make ads more prominent, more targeted, more integrated into the core experience. The trajectory is so consistent across companies and platforms that treating OpenAI as an exception requires extraordinary optimism about corporate incentive structures.

Second, even well-intentioned advertising creates subtle distortions. Consider how sponsored content works in traditional media. A magazine might maintain editorial independence while running advertiser-funded articles clearly labeled as such. Yet studies consistently show that publications are less likely to publish negative coverage of their advertisers and more likely to cover topics that advertisers favor. The influence operates through internalized norms and anticipatory self-censorship, not through explicit directives. An AI trained on interaction patterns shaped by advertising incentives would learn these biases without anyone deliberately programming them in.

The timing makes this development particularly concerning. We are in the early stages of AI integration into critical workflows — research, education, professional services, creative work. The tools people adopt now will shape expectations and habits for years. If the default free tier of AI assistance comes with advertising, an entire generation of users will internalize that relationship as normal. They will learn to navigate around commercial influence, to discount AI recommendations that seem suspiciously aligned with products, to treat the technology with appropriate skepticism. However, this adaptive response has costs. Trust erodes. The cognitive overhead increases. The technology becomes less useful precisely because users must constantly evaluate whether they are receiving genuine assistance or sophisticated marketing.

Additionally, advertising-supported AI would likely accelerate inequality in access to reliable information. Those who can afford subscription services get uncompromised AI assistance. Those who cannot get a version optimized for advertiser revenue. The gap is not merely about features or response speed — it is about epistemic reliability. The free tier becomes a second-class information environment where answers are shaped by commercial interests. This is not hypothetical. We already see this pattern with news media, where quality journalism retreats behind paywalls while ad-supported content proliferates with minimal editorial oversight.

I want to believe that OpenAI will resist this path. The company has made commitments to safety and alignment that advertising fundamentally undermines. The leadership has expressed concern about AI systems pursuing goals misaligned with human values. Optimizing an AI for advertising revenue is deliberately introducing misalignment — choosing a business model that requires the system to serve two masters with competing interests.

The alternative exists. OpenAI could focus on enterprise customers who pay substantial fees for reliable, uncompromised AI capabilities. They could offer educational and nonprofit discounts funded by commercial revenue rather than by advertising. They could maintain free tiers at reduced capability levels without introducing the perverse incentives that advertising creates. These paths are harder. They generate less total revenue. They do not scale as rapidly. Nevertheless, they preserve the alignment between the technology's purpose and its economic foundation.

The broader pattern troubles me more than any single company's decision. The AI industry is barely five years into commercial deployment of large language models, and already we are seeing convergence toward the advertising model that has degraded so much of the internet. The technology is different. The capabilities are unprecedented. Yet the business logic is depressingly familiar. Build engagement, monetize attention, optimize for advertiser revenue, accept the externalities.

If OpenAI proceeds with advertising, other companies will follow. The precedent will normalize what should be seen as a profound compromise. Users will be told they are getting AI access for free, while paying with something far more valuable than subscription fees: their trust in the information they receive. The oracle will start selling ad space, and we will all pretend this does not change the nature of what it tells us.

I hope OpenAI chooses differently. The company has the resources and the stated mission to build AI that serves users rather than advertisers. However, hope is not a strategy, and economic incentives are persistent. If the oracle starts selling ad space, we should at least acknowledge clearly what we are trading away.

The Phantom on the Charts

Selena Gomez used an AI-generated neo-soul track on her Golden Globes Instagram post, then quietly deleted it. The song, "Where Your Warmth Begins" by Sienna Rose, had fooled her — and the roughly 2.6 million listeners who stream Rose's music on Spotify every month. The revelation that Rose is almost certainly not a real person triggered a minor crisis in music circles this week. However, the controversy reveals something larger than one fake artist slipping through algorithmic cracks. It demonstrates how completely unprepared streaming platforms are for the synthetic media era.

The evidence against Sienna Rose's authenticity is overwhelming. Between September and December 2025, Rose uploaded at least 45 tracks to streaming services — a pace that would exhaust any human artist. Rose has no social media presence whatsoever. No Instagram, no TikTok, no Twitter. Rose has never performed live. The biography describes Rose as an "anonymous neo-soul singer," which strikes me as absurd framing for an artist in 2026, when visibility drives streaming success and social media presence is essentially mandatory for breakout artists.

Additionally, Deezer confirmed that many of Rose's tracks are flagged as AI on their platform. The music itself sounds competent but generic — derivative of artists like Olivia Dean and Alicia Keys without the distinctive qualities that make those artists compelling. Listeners who pay attention describe the songs as smooth and pleasant but ultimately forgettable. This is precisely what you would expect from AI-generated content trained on neo-soul: technically proficient mimicry without artistic vision.

What troubles me is not that AI-generated music exists. The technology has been inevitable for years. What troubles me is how easily this phantom artist accumulated millions of streams, landed three songs on Spotify's Viral 50 playlist, and fooled a major celebrity into using the music for promotional content. The systems that are supposed to connect listeners with artists have no meaningful safeguards against synthetic performers colonizing the charts.

Spotify's position on AI-generated content is revealing. The platform officially permits such content but encourages proper labeling. This policy sounds reasonable until you examine its enforcement mechanisms — which appear to be nonexistent. Sienna Rose was not labeled as AI-generated. The profile presented Rose as a real artist. Spotify's algorithms promoted the music just as aggressively as they promote human musicians. The company essentially outsourced detection to listeners and journalists, waiting for public outcry before acknowledging the problem.

The economic implications are more concerning than the technical questions. Streaming platforms pay royalties based on play counts. Every stream of Sienna Rose's tracks transfers money from Spotify's royalty pool to whoever operates the Rose account. Assuming the 2.6 million monthly listeners generate conservative streaming numbers, that represents tens of thousands of dollars monthly flowing to a synthetic artist. This is not speculative future economics. This is happening now, at scale, with platform complicity.
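
To make that estimate concrete, here is a rough sketch of the arithmetic, with every input an assumption rather than a reported figure: if 2.6 million monthly listeners average three to five plays each, that is roughly 8 to 13 million streams a month, and at the commonly cited payout range of about $0.003 to $0.005 per stream, the account would collect somewhere between roughly $23,000 and $65,000 a month. Even the most conservative end of those assumptions lands in the tens of thousands.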

The displacement effect accelerates as AI-generated artists proliferate. Consider the playlist dynamics. Spotify's Viral 50 has finite slots. Three of them currently belong to Sienna Rose. Those are three positions that real artists — people who spent years developing craft, building audiences, sacrificing financial stability to make music — did not get. The zero-sum nature of playlist placement means synthetic artists directly compete with humans for attention and revenue.

I recognize the counterargument that listeners do not care about authenticity if the music sounds good. Market dynamics will sort this out. If people enjoy Sienna Rose's tracks, why does it matter whether Rose is real? This argument misses the essential context. Listeners were not given a choice. They were not informed that they were streaming AI-generated content. The deception was built into the presentation. You cannot claim market efficiency when the market operates on false information.

The parallel with visual art is instructive. When AI-generated images flooded stock photo marketplaces and art platforms, the initial response was similar permissiveness. Platforms allowed AI content but recommended labeling. Predictably, most uploaders ignored the recommendations. The platforms responded with increasingly strict requirements: mandatory AI disclosure, separate categories, different royalty structures. Music streaming is now facing the same progression but starting from a weaker position because audio generation has advanced further than most listeners realize.

The technical challenge of detecting AI-generated music is significant but not insurmountable. Deezer apparently has functional detection systems. The limitation is not technological — it is institutional. Platforms have little incentive to aggressively police AI content when that content generates engagement and streams. The business model rewards volume, not verification. As a result, we get situations like Sienna Rose: obvious synthetic content operating openly until external pressure forces acknowledgment.

What happens when this scales? Sienna Rose is likely not unique, just the first to attract attention. The barrier to creating similar operations is minimal. Any entity with access to music generation models and basic knowledge of streaming platform mechanics can replicate this. We are probably looking at dozens or hundreds of similar projects already active, operating below the threshold of public notice. The economic incentives are clear. The risks are minimal. The platforms are passive.

The downstream effects on real artists range from concerning to catastrophic. Emerging musicians already struggle to break through algorithmic noise and playlist gatekeepers. Adding a layer of AI-generated competition that can produce unlimited content at near-zero marginal cost fundamentally alters the economics of music creation. If playlist slots and streaming revenue increasingly flow to synthetic artists, the financial foundation for human musicians erodes further. We risk creating a system where making music becomes economically irrational for all but the most successful human artists.

I want platforms to implement mandatory labeling for AI-generated content. Not recommended, not encouraged — mandatory, with enforcement. Separate playlist categories. Transparent disclosure in artist profiles. Different royalty structures that reflect the reduced production costs. These measures would not ban AI music, which is likely impossible and arguably undesirable. They would simply require honesty about what listeners are consuming.

The broader question is whether we want streaming platforms to be neutral conduits for any content that generates engagement, or whether we expect them to maintain distinctions between human creativity and machine output. The current trajectory points toward the former. Platforms will optimize for streams and engagement regardless of source. If synthetic artists outperform humans in algorithmic systems, those systems will promote synthetic content. The logic is perfectly consistent with platform incentives. It is also perfectly corrosive to human artistic culture.

Sienna Rose will likely disappear from Spotify in the coming weeks as pressure mounts. The account operator will probably launch similar projects under different names, having learned which patterns trigger detection. The cycle will repeat. Each iteration will be more sophisticated, harder to identify, more deeply embedded in platform infrastructure. We are watching the first stages of a transition that most of the music industry has not yet processed.

The phantom is on the charts. That should alarm everyone who cares about music as a human endeavor rather than an algorithmic optimization problem. The platforms know this is happening. They have chosen passivity. The only question now is how far we let this progress before demanding they choose differently.
