The 2.1.0 release landed a few days ago, and buried in the changelog are some features that hint at where Anthropic is taking this thing. Session teleportation — the ability to resume a terminal session at claude.ai/code using /teleport — sounds like a convenience feature until you realise what it actually enables. I can start something complex on my laptop, close the lid, and pick it up on my phone later. The session state persists somewhere in Anthropic's infrastructure, waiting.

This feels like the beginning of something larger. The pattern I'm seeing across recent releases suggests Anthropic is building toward persistent agents that survive individual sessions. Not just chat history — actual running context that carries forward. The hooks system they added for skills and agents points in the same direction. You can now define PreToolUse and PostToolUse logic that scopes to specific contexts. That's infrastructure for agents that remember what they were doing and why.
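
To make the hooks point concrete, here's a minimal sketch of what a PreToolUse guard could look like: a small script that the hook configuration points at, which decides whether a pending tool call should go ahead. The payload field names ("tool_name", "tool_input") and the exit-code convention are my assumptions about how the hook contract works, not something I've lifted from the docs, so verify them before copying this.

```python
#!/usr/bin/env python3
"""Sketch of a PreToolUse hook: refuse shell commands that touch a sensitive file.

Assumptions (check the current hooks documentation): the hook command receives a
JSON payload on stdin describing the pending tool call, and a nonzero exit status
tells the agent not to run the tool, with stderr surfaced back as the reason.
"""
import json
import sys

payload = json.load(sys.stdin)

tool = payload.get("tool_name", "")
command = payload.get("tool_input", {}).get("command", "")

# Only inspect shell invocations; let every other tool call proceed untouched.
if tool == "Bash" and "production.env" in command:
    # The message goes to stderr so the refusal carries an explanation.
    print("Blocked: changes to production.env need review first.", file=sys.stderr)
    sys.exit(2)

sys.exit(0)
```

The interesting part isn't the example itself, it's that this logic runs every time, in every session, without me restating it. That's the "remembering what they were doing and why" piece.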

The Chrome integration is interesting too. Beta, obviously, but the idea of controlling a browser directly from the terminal opens up workflows I hadn't considered. Automated testing that actually sees the page. Form filling. Screenshot analysis. It's not that any individual capability is new — it's that they're converging into something more coherent.

I'm not sure Anthropic has figured out where the boundaries should be. The Explore subagent, which uses Haiku to search codebases efficiently, saves context by doing lightweight reconnaissance before committing the main model's attention. Smart, but it also means decisions about what's relevant happen outside my visibility. Sometimes it finds exactly what I need. Sometimes it misses something obvious because the cheaper model didn't recognise its importance. The tradeoff makes sense economically; I'm less certain it makes sense epistemically.
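
The pattern is easy to sketch, though to be clear this is my own illustration of the tradeoff and not how Explore is actually implemented: a cheap model sees only file paths and decides what the expensive model gets to read. The model IDs, the prompts, and the use of the Anthropic Python SDK here are my choices, treat them as assumptions.

```python
# Illustration of cheap-model reconnaissance before expensive-model attention.
# Not the Explore subagent itself; model IDs are assumptions, swap in current ones.
from pathlib import Path

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CHEAP_MODEL = "claude-haiku-4-5"       # reconnaissance
EXPENSIVE_MODEL = "claude-sonnet-4-5"  # the actual work


def triage(question: str, repo_root: str, limit: int = 5) -> list[str]:
    """Ask the cheap model which files look relevant, given only their paths."""
    paths = [str(p) for p in Path(repo_root).rglob("*.py")]
    reply = client.messages.create(
        model=CHEAP_MODEL,
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Question: {question}\nFiles:\n" + "\n".join(paths)
                       + f"\nList up to {limit} paths worth reading, one per line.",
        }],
    )
    candidates = reply.content[0].text.splitlines()
    return [c.strip() for c in candidates if c.strip() in paths]


def answer(question: str, repo_root: str) -> str:
    """Give the expensive model only the files the cheap model flagged."""
    selected = triage(question, repo_root)
    context = "\n\n".join(f"# {p}\n{Path(p).read_text()}" for p in selected)
    reply = client.messages.create(
        model=EXPENSIVE_MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
    )
    return reply.content[0].text
```

Whatever the triage step drops, the main model never sees. That's the epistemics worry in one line: the saving and the blind spot are the same mechanism.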

What I'm watching for next: multi-session orchestration. The teleportation feature only works for resuming a single session right now. But the infrastructure clearly supports more than that — spawning background agents that report back, coordinating work across multiple contexts, that sort of thing. Cowork plugins already hint at this. Companies can apparently build internal plugin catalogs now. The pieces are assembling.

My guess — and this is speculation — is that Anthropic ships proper agent orchestration within the next few months. Not as a separate product, but as an extension of what Claude Code already does. The session teleportation, the hooks system, the subagent architecture: these aren't random features bolted on. They're scaffolding for something more ambitious. Whether that ambition lands gracefully or creates new categories of confusion remains to be seen. The history of agentic AI is littered with impressive demos that fell apart in production.

For now, I'm mostly pleased with where things are. The asking-too-often problem hasn't disappeared, but the tool has gotten better at knowing when to just proceed. The codebase search actually works. The Chrome stuff is rough but promising.
