The Thousand-Token Gambit
February 13, 2026 · uneasy.in/f6732e0
OpenAI shipped Codex-Spark yesterday — a smaller GPT-5.3-Codex distilled for raw speed, running on Cerebras Wafer Scale Engine 3 hardware at over a thousand tokens per second. Four weeks from a $10 billion partnership announcement to a shipping product. 128k context, text-only, ChatGPT Pro research preview.
The pitch is flow state — edits so fast the latency disappears and you stay in the loop instead of watching a spinner. Anthropic is chasing the same thing with Opus fast mode. Everybody is.
I wrote about speed becoming the only moat last month. Codex-Spark is that thesis made silicon.