Good Enough Is a Strategy
April 6, 2026 · uneasy.in/a738cab
The Information reported last week that DeepSeek's V4 model will run entirely on Huawei's Ascend 950PR chips. No NVIDIA. No CUDA. A trillion parameters trained and deployed on Chinese silicon, with Alibaba, ByteDance, and Tencent ordering hundreds of thousands of units in anticipation.
The reflexive Western reading is that this proves export controls failed. The reflexive Chinese reading is that domestic chips have caught up. Both are wrong, and the actual situation is more interesting than either.
Huawei's 950PR delivers roughly 1.56 petaflops at FP4 and carries 112 GB of proprietary HiBL memory. Real numbers, not aspirational ones. But the memory bandwidth sits at 1.4 TB/s against the H100's 3.35 TB/s, and a Council on Foreign Relations report projects NVIDIA will be seventeen times more powerful by 2027. The gap is not closing. It is widening.
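The numbers above can be put on a napkin. Using only the figures quoted here (the Ascend's FP4 throughput and both chips' memory bandwidth), a roofline-style balance point shows how compute-heavy a workload must be before the Ascend's bandwidth stops being the bottleneck, and how large the raw bandwidth deficit is:

```python
# Back-of-envelope arithmetic on the spec figures quoted above.
# A napkin sketch, not a benchmark; only the article's numbers are used.

ascend_flops_fp4 = 1.56e15   # 1.56 petaflops at FP4 (Ascend 950PR)
ascend_bw        = 1.4e12    # 1.4 TB/s memory bandwidth (Ascend 950PR)
h100_bw          = 3.35e12   # 3.35 TB/s memory bandwidth (H100)

# FLOPs per byte a workload must sustain before the Ascend's compute
# units outrun its memory system (a roofline-style balance point).
balance_point = ascend_flops_fp4 / ascend_bw
print(f"Ascend 950PR balance point: {balance_point:.0f} FLOPs/byte")

# The bandwidth deficit DeepSeek's architecture has to paper over.
print(f"H100 bandwidth advantage: {h100_bw / ascend_bw:.1f}x")
```

The point of the exercise: a 2.4x bandwidth gap is not something a compiler flag fixes. It has to be designed around, which is where the next paragraph's architectural argument comes in.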
This matters because DeepSeek's entire thesis since V3 has been that architectural efficiency compensates for hardware disadvantage. Mixture-of-experts, multi-token prediction, custom numeric formats designed months in advance for chips that hadn't shipped yet. When DeepSeek shook Silicon Valley last year, the V3 training bill was $5.6 million. The V4 figure, if accurate, is $5.2 million for a trillion parameters.
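Mixture-of-experts is the load-bearing idea here: each token activates only a few of many expert sub-networks, so effective compute per token is a fraction of the full parameter count. The toy sketch below shows top-k routing in the abstract; the shapes, the top-2 choice, and every name in it are illustrative assumptions, not DeepSeek's actual configuration.

```python
# Toy sketch of top-k mixture-of-experts routing. All dimensions and the
# top_k value are illustrative assumptions, not DeepSeek's architecture.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

def moe_layer(x, gate_w, expert_ws):
    """Route each token to its top_k experts; only those experts run."""
    logits = x @ gate_w                               # (tokens, n_experts)
    chosen = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        g = np.exp(logits[t, chosen[t]])              # softmax over the
        g /= g.sum()                                  # chosen experts only
        for w, e in zip(g, chosen[t]):
            out[t] += w * (x[t] @ expert_ws[e])       # weighted expert output
    return out

x = rng.standard_normal((4, d_model))                 # 4 tokens
gate_w = rng.standard_normal((d_model, n_experts))
expert_ws = rng.standard_normal((n_experts, d_model, d_model))
y = moe_layer(x, gate_w, expert_ws)
print(y.shape)  # each token touched 2 of 8 experts
```

With top_k of 2 and 8 experts, each token pays for a quarter of the expert compute a dense layer of the same total size would need, which is the lever that lets total parameters scale faster than per-token FLOPs, and thus faster than the hardware.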
There is a complication. Reports suggest V4 may have been trained on NVIDIA Blackwell chips, with the Huawei optimization focused on inference and deployment rather than training itself. DeepSeek's own R2 model reportedly suffered persistent training failures on Ascend hardware, forcing a reversion to NVIDIA H800s. The headline says "entirely on Huawei." The footnotes are less certain.
None of this diminishes the strategic signal. DeepSeek spent months with Huawei and Cambricon rewriting core code from CUDA to CANN, Huawei's compute framework. They withheld early V4 access from NVIDIA and AMD entirely. The best analysis piece on this framed it simply: when you restrict access to a tool, the people who need it do not stop working. They build a different tool.
The question was never whether Huawei could match NVIDIA chip for chip. It cannot, and the CFR numbers make that plain for at least the next three years. The question is whether a parallel ecosystem can sustain frontier-class AI development at commercially viable cost, on hardware that is worse but available. DeepSeek's answer, backed by trillion-parameter ambition and bulk orders from every major Chinese cloud provider, is that good enough is a strategy. The circular investment logic of the Western AI stack makes this bet look less absurd every quarter.
Sources:
- DeepSeek V4 points to growing use of Huawei chips in AI models — TechWire Asia
- The Frog in the Well Cannot See the Chip War — Shashi.co
- Inside China's AI Machine: Models, Chips, and Strategy — Digital in Asia
- China's AI Chip Deficit: Why Huawei Can't Catch Nvidia — Council on Foreign Relations
- DeepSeek Withholds V4 Model from US Chipmakers — The China Academy