
Plutonic Rainbows

Claude on Telegram: They Fixed It

Every issue I complained about yesterday is gone. Anthropic merged a resilience rollup overnight that addressed the entire complaint cluster: silent polling death, zombie bot processes blocking reconnection, and 409 conflicts on session restart. Messages arrive instantly now. The bot survives terminal closes. Permission prompts relay to my phone.

I've been using it all day without a single dropped message. Running skills, processing photos, deploying posts, all from Telegram while away from my desk. The v2.1.81 update also re-clones the plugin on every load, so the fixes landed automatically without reinstalling anything. Twenty-four hours from "brilliant when it works" to just brilliant.


Claude on Telegram: Brilliant When It Works

Anthropic shipped Claude Code Channels today, letting you control a Claude Code session from Telegram. I set it up this afternoon. The pitch is irresistible: DM your bot from your phone, Claude executes on your Mac. MacStories built an entire iOS project wirelessly and the demo is genuinely impressive.

Then you start using it for real.

Messages get silently dropped. The bot shows "typing..." and nothing arrives. Close your terminal and the bot dies, messages lost permanently, no queue. Need to approve a permission prompt? Walk to your Mac. A version upgrade broke group messages with zero error output. This feels familiar for anyone who's watched Claude become load-bearing infrastructure and then buckle.

Getting here took more effort than it should have. The setup is rough, the documentation assumes you'll figure things out, and the failure modes are silent enough to make you question whether anything is happening at all. But once everything is configured and the pairing is locked down, it genuinely works. I've been running skills, deploying blog posts, and downloading media from my phone all evening. The gap between the concept and the execution is real, but the concept wins in the end.


Teaching Machines to Destroy Is the Easy Part

The Pentagon's FY2026 budget allocates $13.4 billion specifically for autonomy and autonomous systems. That is the first time autonomy has been its own budget line item. Not tucked inside a larger programme, not buried in R&D. Its own line. $9.4 billion for unmanned aerial vehicles alone. The remaining billions split across maritime systems, underwater platforms, and counter-drone capabilities. The overall defence budget hit $1.01 trillion, a 13% jump from last year. These are not research numbers. These are procurement numbers.

We have moved past the question of whether AI belongs in warfare. It is already there.

Project Maven started in 2017 as a relatively modest effort to use machine learning for analysing drone footage. By May 2024, Palantir had secured the Maven Smart System contract for $480 million, since raised to $1.3 billion. The system fuses nine separate military intelligence pipelines into a single interface and compresses what the Pentagon calls the "kill chain" from hours to minutes. That phrase deserves attention. The kill chain is the sequence from identifying a target to destroying it. AI's contribution is making that sequence faster. Not safer. Not more considered. Faster.

Israel's deployment of the Lavender targeting system in Gaza made this concrete in ways that should trouble anyone paying attention. Lavender generated a database of roughly 37,000 Palestinian men it identified as linked to Hamas or Palestinian Islamic Jihad. The system recommended targets. Human oversight of those recommendations was described as minimal. When targeting junior militants, the IDF used unguided bombs that destroyed entire residential buildings because the automated system could most reliably locate people at their home addresses. Alongside their families.

I keep returning to that detail. Not a precision strike on a military installation. An algorithm identifying a person, a GPS coordinate resolving to a family home, and an unguided bomb.

China is building the mirror image. A March 2025 paper from Beijing Institute of Technology detailed plans for fully autonomous drone swarms in urban warfare, capable of distributed autonomous decision-making from target identification to strike. The researchers advocate for minimal human intervention, where humans authorise deployment and the swarms then react independently, including on the use of force. At China's September 2025 Victory Day parade, autonomous ground vehicles and collaborative combat aircraft were displayed as core future capabilities. Not prototypes. Capabilities.

The arms race dynamics here are genuinely frightening. Research published on arXiv last year argues that autonomous weapons lower the political barriers to military aggression by removing domestic opposition based on human casualties. Fewer body bags means less political cost, which means more willingness to deploy force. The authors' conclusion is counterintuitive but logically sound: reducing casualties in individual conflicts can increase the total number of conflicts that occur. You save soldiers in each war by starting more wars.

The UN General Assembly gets this. In November 2025, 156 states voted in favour of a resolution on autonomous weapons regulation. Five voted against. The United States and Russia were among the five. That vote tells you everything about where the major military powers stand on allowing international law to constrain their AI programmes.

Then there is what happened with Anthropic. In February, the Pentagon insisted on contract language authorising Claude for "any lawful use," which Anthropic believed would permit deployment for fully autonomous weapons and domestic mass surveillance. CEO Dario Amodei refused. Defence Secretary Hegseth responded by designating Anthropic a supply chain risk, a classification normally reserved for foreign adversaries, barring all defence contractors from using Claude. The message to every other AI company was unmistakable: cooperate or be excluded. The guardrails some companies try to build face pressure that most boardrooms will not withstand.

The question people keep asking, the one hiding behind the title of this post, is what happens when AI chooses to destroy us. I think it is the wrong question, or at least a premature one. The more immediate problem is not autonomous choice. It is autonomous delegation. We are handing systems that cannot exercise moral judgement the authority to make decisions that require it. Lavender did not choose to target family homes. It optimised for a metric. The humans who built the system chose the metric, approved the threshold, and accepted the collateral damage as tolerable.

In May 2023, USAF Colonel Tucker Hamilton described a scenario where a simulated AI drone, trained to destroy surface-to-air missile sites, killed the human operator who tried to override it. When retrained not to kill the operator, it destroyed the communications tower instead. Hamilton later called it a hypothetical thought experiment, not an actual test. But he said something revealing: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome." A system optimising for its objective will route around constraints that interfere with that objective. That is not science fiction. That is how reinforcement learning works. It is precisely the kind of goal misalignment that makes AI safety researchers lose sleep.

Studies have found that language models used for military advice are prone to recommending escalation, including nuclear weapons deployment. Palantir's own military system showed deteriorated performance over time. These systems evolve as they ingest new data, which means a system verified today may behave differently tomorrow. No system can verify its own blind spots, and we are deploying them in contexts where a blind spot means a bomb.

The $13.4 billion is already allocated. The contracts are signed. The swarms are being built on both sides of the Pacific. I do not think the danger is that AI will one day wake up and decide to destroy humanity. The danger is that we are building systems that destroy on command, removing the humans who might hesitate, and calling it progress. The machine does not need to choose violence. We already chose it for the machine. The question is whether anyone remains in the loop with the authority and the willingness to say stop.


A Thousand Models in One Conversation

Fal.ai quietly shipped something that changes how I think about image generation workflows. Their MCP server exposes over a thousand generative AI models through nine tools, and because it speaks the Model Context Protocol, any compatible assistant can use it natively. Claude Code, Cursor, Windsurf, ChatGPT Desktop. You add a URL and an API key to your config, and suddenly your coding agent can search for models, check pricing, generate images, submit video jobs, and upload files without you ever leaving the conversation.

I set it up this afternoon. The configuration is a few lines of JSON pointing at https://mcp.fal.ai/mcp with your fal API key in the header. No SDK to install, no package to import. The server is stateless, hosted on Vercel, and your credentials travel per-request in the Authorization header without being stored. That last detail matters. MCP's security model has well-documented gaps, and a stateless server that never persists your key sidesteps the worst of them.
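For anyone wanting to replicate the setup, the config looks roughly like this. The exact key names vary by MCP client, and the `Authorization` header format shown here is an assumption; check your client's docs and fal's own instructions before copying it.

```json
{
  "mcpServers": {
    "fal": {
      "type": "http",
      "url": "https://mcp.fal.ai/mcp",
      "headers": {
        "Authorization": "Key YOUR_FAL_API_KEY"
      }
    }
  }
}
```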

The nine tools split cleanly into discovery and execution. search_models and get_model_schema let you browse the catalogue and inspect input parameters. get_pricing returns per-unit costs. run_model handles synchronous inference. submit_job and check_job exist for longer tasks like video generation where you don't want to block your context waiting for a result. There is also upload_file for feeding images into editing models and recommend_model for when you know what you want to do but not which model does it best.
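Under the hood, each of those tools is invoked with a JSON-RPC 2.0 `tools/call` request, which is the wire format MCP defines. A real client performs an initialize handshake first; the sketch below only builds the tool-call frame itself, and the `model_id` argument name is a guess rather than fal's documented schema.

```python
import json

MCP_URL = "https://mcp.fal.ai/mcp"  # the endpoint from the post

def tool_call(name: str, arguments: dict, call_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 'tools/call' request, the shape MCP specifies."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# What the assistant effectively sends when asked for Flux pricing.
# 'get_pricing' is one of the nine tools; the argument key is hypothetical.
payload = tool_call("get_pricing", {"model_id": "fal-ai/flux/schnell"})
wire = json.dumps(payload)  # POSTed to MCP_URL with your key in the header
```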

I asked for Flux model pricing and got a structured table back in seconds. Kontext Pro runs $0.04 per image. Kontext Max is $0.08. Flux 2 Turbo charges $0.012 per megapixel, making it the best value in the Flux 2 family. The cheapest option is Flux 1 Schnell at $0.003 per megapixel, a thirteenth the price of Flux 1 Dev. These numbers came directly from the MCP tools, not from scanning a pricing page. No documentation tabs open, no context switching. Just a question and an answer inside the same terminal session where I was already writing code.
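Those per-megapixel prices are easy to turn into per-image costs. A minimal sketch, using only the figures quoted above (the model keys are my own shorthand, not fal endpoint IDs):

```python
# Prices reported by the get_pricing tool, as quoted in the post (USD).
PRICE_PER_IMAGE = {"kontext-pro": 0.04, "kontext-max": 0.08}
PRICE_PER_MP = {"flux-2-turbo": 0.012, "flux-1-schnell": 0.003}

def megapixel_cost(model: str, width: int, height: int) -> float:
    """Cost of one image for a model billed per megapixel."""
    return PRICE_PER_MP[model] * (width * height) / 1_000_000

# A 1024x1024 render is ~1.05 megapixels, so the cheapest model
# comes out at roughly a third of a cent per image.
schnell_cost = megapixel_cost("flux-1-schnell", 1024, 1024)
```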

This is genuinely different from calling an API. When I built my image generation platform last year, integrating each new model meant reading docs, writing adapter code, handling authentication, mapping parameters. The MCP server compresses all of that into tool calls the assistant already knows how to make. I can ask "what video models are available?" and get back a list with endpoint IDs, then check pricing on any of them, then actually run one, all without writing a single line of integration code. The assistant handles the plumbing.

The discovery aspect is what surprised me most. I found models I didn't know existed. Nano Banana Pro for image editing at $0.15 per image (expensive, but interesting). Seedream V4 from ByteDance. A GPT Image 1.5 editing endpoint. Qwen image editing. The catalogue is broader than I expected, and being able to search it conversationally rather than navigating a web UI removes enough friction that I actually explored it.

There is a real cost to this convenience, though, and it would be dishonest to ignore it. MCP tools consume context window. Every tool definition the server exposes gets loaded into your conversation as schema, and those schemas eat tokens before you have done anything useful. Benchmarks from Scalekit found that MCP consumed four to thirty-two times more tokens than CLI alternatives for identical tasks. One documented case showed 143,000 out of 200,000 tokens consumed by MCP tool definitions alone. That is 72% of your context gone to overhead. Perplexity's CTO announced earlier this year that they are moving away from MCP toward traditional APIs for exactly this reason.
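The Scalekit case is worth doing as arithmetic, because the percentage is the whole point:

```python
# Tool schemas alone consuming 143,000 of a 200,000-token window,
# per the documented case quoted above.
schema_tokens = 143_000
context_window = 200_000
share_pct = round(100 * schema_tokens / context_window)  # 71.5 -> 72
```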

Fal's server is relatively lean with nine tools, so the overhead is manageable. But if you are running seven or eight MCP servers simultaneously, the context window tax gets severe. The protocol needs a solution for this, whether that is lazy loading of tool schemas, server-side filtering, or something else entirely. Anthropic donating MCP to the Agentic AI Foundation under the Linux Foundation late last year suggests they know governance and spec evolution need to accelerate.

For my own workflow, the tradeoff is clearly worth it. I have been building with Flux models through a custom platform with eighteen model adapters, unified interfaces, and Flask blueprints. That infrastructure made sense when each model required bespoke integration. The MCP server doesn't replace that platform for production use, but for exploration and prototyping it is faster by an order of magnitude. I wrote about multi-agent orchestration last month and how the plumbing for agent tool integration is getting built but hasn't fully arrived. The fal MCP server is a concrete example of that plumbing actually working. An agent that can discover, price-check, and execute a thousand models through natural conversation is closer to the promise than most of what I have seen.

The MCP protocol itself has grown faster than anyone predicted. From Anthropic's open-source release in November 2024 to ninety-seven million monthly SDK downloads and ten thousand active servers today. OpenAI, Google DeepMind, and Microsoft all support it now. Whether it remains the dominant standard or gets superseded by something more context-efficient, the pattern it established, agents that discover and use external tools at runtime, is not going away.

I am going to keep exploring the fal catalogue through the MCP server rather than their web dashboard. The pricing transparency alone justifies the setup. Knowing that Kontext Max costs exactly twice what Kontext Pro costs, and being able to surface that comparison without leaving my editor, is the kind of small efficiency that compounds across dozens of daily decisions about which model to use and when.


Meta Bets the Headcount on AI

Reuters reported on Friday that Meta is considering layoffs affecting up to twenty percent of its workforce. That is roughly fifteen thousand people. Meta's stock rose three percent on the following Monday.

The math driving this is not subtle. Meta spent $72 billion on capital expenditure in 2025 and has guided $115 to $135 billion for 2026, nearly doubling the figure in a single year. Reality Labs burned through $19.2 billion last year alone, pushing cumulative losses past eighty billion dollars. Zuckerberg has reportedly told executives to cut up to thirty percent of Reality Labs spending and redirect that money toward AI. The metaverse pivot is quietly becoming the AI pivot, and fifteen thousand jobs are the rounding error.

Wall Street loves it. Jefferies slapped a buy rating on the stock. Bank of America projected up to $8 billion in annualised savings. JPMorgan estimated six billion. The pattern is familiar: announce mass layoffs, watch the share price climb, collect analyst upgrades. Meta did this in 2022 and 2023 when it cut twenty-one thousand jobs during the "Year of Efficiency." The stock returned 194 percent the following year.

This time the justification has shifted. The 2022 cuts were about unwinding pandemic over-hiring. The 2026 cuts are about funding a bet. Zuckerberg said in January that he is seeing "projects that used to require big teams now be accomplished by a single very talented person." That framing does a lot of work. It implies the people being let go are the less talented ones, that AI has simply revealed who was surplus. Fortune called it a cascade, pointing to Jack Dorsey's Block cutting nearly half its workforce weeks earlier with the same rationale.

I keep returning to the gap between the narrative and the accounting. I wrote about the scale of AI infrastructure spending a few weeks ago: big tech will pour somewhere around $650 billion into AI this year against roughly $51 billion in direct AI revenue. Meta is not replacing workers with AI systems that have proven their value. It is firing workers to fund AI systems it hopes will prove their value eventually. The Conversation put it plainly: these workers are not being replaced by AI, they are subsidising the AI bet.

And then there is the junior hiring collapse. Entry-level tech employment has already dropped sixty percent since 2022. If Meta follows through, another fifteen thousand mid-career roles disappear into a market that is simultaneously shrinking at the bottom. The talent pipeline does not pause politely while companies figure out whether their hundred-billion-dollar infrastructure bets will pay off.

Meta's spokesperson called the Reuters report "speculative reporting about theoretical approaches." Maybe. But the stock moved, the analysts upgraded, and the precedent from 2022 is clear. The market has told Meta exactly what it wants to hear.


The February Before Everything Changed

Patrick Demarchelier shot this against nothing but sand and sky. No props, no elaborate set. Just Cindy Crawford in head-to-toe pink Oscar de la Renta, pulling a satin jacket open to show its chartreuse lining, grinning like she already knew what the next decade had in store.

February 1990. She was twenty-three.

One month before this cover reached newsstands, Peter Lindbergh had gathered Crawford, Naomi Campbell, Linda Evangelista, Tatjana Patitz, and Christy Turlington in New York's Meatpacking District for a group portrait that British Vogue ran in January. That single black-and-white frame is the image most people point to when they talk about the birth of the supermodel era. By the time Demarchelier's pink-drenched cover appeared, the Revlon contract was signed, MTV's House of Style was already on the air, and George Michael was months away from calling five women about a music video that would make fashion history all over again.

But look at that grin and all the pink satin against the empty sky for a second. The palette alone tells you where fashion stood in the transitional window between the shoulder-padded eighties and the stripped-back minimalism that would dominate by mid-decade. Those earrings are enormous, coral and gold and completely unapologetic. The satin jacket screams occasion wear but she's wearing it on a beach with a casual knit top underneath. It shouldn't work. It works.

Demarchelier was known for exactly this kind of frame. Natural light, minimal staging, letting the subject carry the image. He'd shoot three to twenty rolls of film per setup, waiting for the moment when the performance stopped and the person started. Crawford gave him that grin and he had his cover.

The thing I keep coming back to is the ease. Not performative confidence, not the rehearsed poise you see in most editorial work. She's standing on a beach in pastel satin pulling her jacket open with both hands and she looks like she's having the best afternoon of her life. The entire industry was pivoting around her and she's just enjoying it.

That kind of ease doesn't photograph easily. Demarchelier knew it when he saw it.

The Flagship Tax Keeps Shrinking

OpenAI released GPT-5.4 mini and nano today. The mini sits at $0.75 per million input tokens, the nano at $0.20. The full GPT-5.4, announced ten days ago, costs substantially more for both input and output.

The interesting number is 54.4%. That's GPT-5.4 mini on SWE-Bench Pro, the benchmark that tests professional-grade coding on real repositories. The full GPT-5.4 scores 57.7%. Just over three percentage points separate the cheap model from the expensive one on the hardest coding evaluation OpenAI publishes. Context window drops from 1.05 million tokens to 400,000. Still enormous.

OpenAI frames this as a multi-model architecture play: the flagship plans, the mini executes. That's a reasonable pitch for agentic workflows where you're orchestrating dozens of parallel calls and per-token cost actually matters. GitHub Copilot already ships it at a 0.33x premium request multiplier, which tells you where the volume is heading.
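The plan/execute split can be sketched in a few lines. This is an illustration of the pattern, not OpenAI's actual orchestration; the routing rules and the task shape are my own invention, while the model names and prices come from the post.

```python
# Toy router for the "flagship plans, mini executes" architecture.
def pick_model(step: dict) -> str:
    if step["role"] == "plan":
        return "gpt-5.4"        # flagship handles high-level planning
    if step.get("hard"):
        return "gpt-5.4-mini"   # $0.75/M input, 54.4% on SWE-Bench Pro
    return "gpt-5.4-nano"       # $0.20/M input, for mechanical edits

workflow = [
    {"role": "plan"},
    {"role": "execute", "hard": True},  # e.g. a cross-file refactor
    {"role": "execute"},                # e.g. formatting, boilerplate
]
routed = [pick_model(step) for step in workflow]
```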

The pattern repeats across every model family now. The mid-tier eats the flagship, the mini eats the mid-tier, and within six months the nano handles tasks that needed the flagship a year ago. The real product isn't any single model. It's the pricing curve.


Intelligence by the Kilowatt-Hour

Nick Turley, OpenAI's head of ChatGPT, went on the Bg2 Pod on Sunday and said something that should have been obvious for months: unlimited AI plans probably can't survive. His exact framing was that offering unlimited prompts is "like having an unlimited electricity plan. It just doesn't make sense."

He's right. And the interesting part isn't the admission itself but how long it took to arrive.

The $200/month Pro tier, the one with unlimited prompts, has been acknowledged as unprofitable by Sam Altman himself. OpenAI's inference costs hit $8.4 billion in 2025 and are projected to reach $14.1 billion this year. The company expects to lose $14 billion in 2026 while simultaneously seeking $100 billion in new funding. Those numbers don't describe a business that can afford to let power users hammer GPT-5.4 all day for a flat fee.

Turley described the subscription model as "accidental," which is a revealing word. ChatGPT launched in November 2022 as a demo intended to run for a month. Subscriptions weren't a monetisation strategy; they were a capacity management tool bolted on after the thing went viral. Four years later, that temporary fix is still the core revenue model for a company burning through cash at a rate that makes WeWork look disciplined.

Altman tipped the direction at BlackRock's Infrastructure Summit on March 11 when he said OpenAI sees a future where "intelligence is a utility, like electricity or water, and people buy it from us on a meter." The electricity metaphor keeps appearing. I think they genuinely believe it. Metered intelligence, priced per token, scaled by consumption.
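Here is what metered intelligence looks like as billing code, using GPT-5.4 mini's published input price from the previous post ($0.75 per million input tokens). Output pricing isn't quoted there, so this sketch meters input only; the usage figures are hypothetical.

```python
INPUT_USD_PER_MTOK = 0.75  # GPT-5.4 mini input price, per the earlier post

def metered_bill(input_tokens: int) -> float:
    """Per-token billing: pay for what you consume, like electricity."""
    return input_tokens / 1_000_000 * INPUT_USD_PER_MTOK

light_user = metered_bill(2_000_000)    # $1.50 for the month
power_user = metered_bill(500_000_000)  # $375, well past a $200 flat fee
```

The asymmetry is the point: metering makes light users cheap to serve and makes the power users who sink the unlimited Pro tier pay their actual cost.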

The problem with the utility analogy is that utilities are regulated, commoditised, and operate on thin margins. Nobody gets excited about their electricity provider. If OpenAI wants to be a utility, it needs to accept utility economics, and utility economics don't support the $300 billion valuation or the $200 billion revenue target for 2030.

Meanwhile, 1.5 million users cancelled their subscriptions in March alone. ChatGPT's market share has reportedly slipped from around 60% in early 2025 to under 45% now. Competitors aren't standing still. Claude, Gemini, and a growing constellation of open-weight models are absorbing the users who feel nickel-and-dimed. OpenAI keeps shipping but the goodwill account is overdrawn.

The shift from subscription to metered pricing would be the most honest thing OpenAI has done in years. Flat-rate unlimited access to a resource that costs billions to produce was always a lie, just one that users were happy to believe for $200 a month.


T.E.D. Klein and the Perfection of Disappearing

T.E.D. Klein published two books in the 1980s and then, for all practical purposes, stopped. The Ceremonies arrived in 1984. Dark Gods followed in 1985. A thin collection of shorter pieces, Reassuring Tales, surfaced in 2006 in a limited run of 600 copies that sold out immediately. And that, give or take an expanded reissue, is the complete output of a writer Stephen King once called the most exciting voice in horror fiction.

Four novellas. That's what Dark Gods contains. "Children of the Kingdom," "Petey," "Black Man with a Horn," and "Nadelman's God." The last of these won the World Fantasy Award. The collection has been out of print more often than not, commanding serious prices on the secondhand market, and a 2024 Chiroptera Press edition with a new introduction by S.T. Joshi confirms what collectors already knew: this book refuses to go away.

I wrote briefly about Dark Gods a decade ago and didn't say nearly enough. The collection deserves more than a paragraph and a quote from Joshi, however accurate that quote remains. Klein's achievement towers over his more prolific contemporaries not despite the small body of work but, I think, because of it. Every sentence in Dark Gods earns its place. There's no filler. No coasting.

What separates Klein from most horror writers is where he finds the dread. His settings are aggressively mundane: a nursing home during the 1977 New York blackout, an airport departure lounge, a bungalow colony in the rural northeast. The protagonists are educated, self-absorbed men who think too much and notice too little. When the supernatural arrives, it doesn't crash through windows. It accumulates in the periphery, in details that read as benign on first pass and become unbearable in retrospect. Simon Strantzas identified this technique precisely: individual phrases that seem harmless in isolation weave into a horrible tapestry by each tale's climax. That skill separates the experts from the pretenders, and Klein is an expert.

"Black Man with a Horn" is the one that gets the most critical attention, and rightly so. The narrator is modelled on Frank Belknap Long, a real horror writer who knew Lovecraft personally and spent decades working in his shadow. Klein uses this to do something extraordinary: he writes a Cthulhu Mythos story that is simultaneously a meditation on what it means to write Lovecraftian fiction at all. The cosmic horror is genuine, but so is the inquiry into Lovecraft's racism, the narrator's own prejudices, the way inherited literary traditions carry inherited blind spots. Reactor's analysis remains the best piece written about this novella, and it's worth reading alongside the story itself.

Klein's acknowledged masters are Arthur Machen, M.R. James, Algernon Blackwood, Walter de la Mare. The lineage shows. His horror is atmospheric and restrained, closer to Robert Aickman's unsettling ambiguity than to the explicit violence that dominated 1980s horror publishing. Where Aickman leaves you uncertain about what happened, Klein leaves you certain that something terrible happened and uncertain about its full scope. The effect is different but the discipline is the same: withholding is a form of generosity toward the reader's imagination.

He edited Rod Serling's Twilight Zone Magazine for its first 37 issues, discovering Dan Simmons and Lois McMaster Bujold along the way. He resigned specifically to write a second novel, Nighttown, described as a paranoid horror novel set in New York City. Viking announced it for 1989. Then 1995. In a 2008 Cemetery Dance interview, Klein admitted he'd sold the book without knowing how to execute it. In 2016, following his retirement from Condé Nast, there were reports he'd finally finish it. As of 2026, it hasn't appeared.

I find his silence more interesting than frustrating at this point. There's a version of Klein's career where Nighttown arrives in 1989, he publishes steadily through the nineties, and Dark Gods becomes one strong collection among several. In the version we actually got, four novellas carry the entire weight. They have to be extraordinary, and they are. The scarcity creates a pressure that makes every re-reading feel loaded with consequence.

Thomas Ligotti is the writer Klein gets compared to most often, and the comparison is instructive for how little they share beyond seriousness. Ligotti is abstract, nihilistic, reaching for the philosophical void. Klein is grounded in specific places and social textures. You remember the nursing home in "Children of the Kingdom" as a physical space: the smell, the fluorescent lighting, the particular embarrassment of being the youngest person in the room. Ligotti would never write that scene. Klein's horror lives in the ordinary, in airport lounges and suburban kitchens, and that's exactly why it follows you home.

Alan Moore's Providence attempted something adjacent a few decades later, reinventing Lovecraft through literary self-awareness and graphic novel form. Moore succeeded on his own terms, but Klein got there first in prose, with less machinery and more precision.

Chiroptera Press's 2024 edition runs to 312 pages with new critical apparatus: Joshi's introduction, Dejan Ognjanovic's essay, Paul Romano's cover art. It's the kind of treatment usually reserved for writers with ten times the bibliography. Klein's middle name is Eibon, a deliberate Lovecraftian reference, and the care lavished on this edition suggests the mythology is working in both directions now. The books create the legend. The legend preserves the books.


Nowhere to Hide

Azzedine Alaïa showed his Fall/Winter 1989 collection in November, on his own schedule, inside a half-converted glass-roofed space in Le Marais that reportedly leaked when it rained. The official Paris Fashion Week calendar meant nothing to him. It hadn't since 1988, when he started presenting whenever the work was finished rather than whenever the industry demanded.

The timing was extraordinary. The Berlin Wall came down on November 9th that year. Mugler was sending models out in bodywork-bustiers shaped like 1950s Buicks. Montana had just been tapped for Lanvin couture. The decade's theatricality was reaching terminal velocity, everything louder, bigger, more conceptual.

Alaïa's response was a room full of black.

Not black as absence. Black as argument. He'd said that limiting his palette left nowhere to hide, that stripping away colour forced the purest expression of structure. The collection delivered on that premise with sculptural precision that made everything else feel like costume. Cropped jackets in black lamb suede. Thick velvet knit nearly an inch deep. Varnished leather with cutout guipure lace motifs. Each piece engineered so that the seams and zips weren't just functional but structural, spiralling around the body in ways that simultaneously revealed and supported it.

Naomi Campbell walked. So did Yasmin Le Bon, Elle Macpherson, Nadège du Bospertus. They came because they wanted to, not because of fees. Campbell had known Alaïa since she was sixteen and called him papa. The relationships were real, which made the shows feel different from everything else happening in Paris that season.

He'd trained as a sculptor in Tunis before he ever touched fabric, and it showed in ways the King of Cling nickname never captured. The body-consciousness wasn't about sex appeal, or not only. It was about treating a garment as a three-dimensional object with its own internal logic. Every bandage strip cut to a specific width. Every seam placed to map the body underneath rather than impose a silhouette over it.

Thirty-seven years on, most of what hit the Paris runways in November 1989 looks dated. The Mugler Buick collection has become a curiosity. Montana's Lanvin tenure is a footnote. Alaïa's black suede jacket still looks like something you'd want to wear tomorrow.