DeepSeek, a Chinese start-up, has developed a free large language model called R1 that matches the performance of leading AI models such as those from OpenAI. Its rapid uptake and immediate popularity, alongside a reported development cost of around $5.6 million, have unsettled both investors and major US tech firms. DeepSeek's innovative training methods and the model's open-source release are especially significant given the US ban on exporting powerful computer chips to China.

Markets have reacted by selling off shares in large US technology companies that have been investing heavily in data centres to train and deploy AI. Investors are now questioning whether expensive cloud computing and vast hardware reserves are truly necessary, given DeepSeek's demonstration of a more cost-efficient approach. Tech leaders such as Microsoft's Satya Nadella have acknowledged the impact of DeepSeek's model and urged that China's progress in AI research be taken seriously.

DeepSeek's decision to release R1 as open source has been lauded as a "profound gift" but also raises concerns over potential misuse by malicious parties. Some Western developers may be wary of Chinese censorship controls embedded in the model, while others see the release as evidence of China's resilience in circumventing US chip restrictions. Influential figures such as Marc Andreessen have likened the development to a "Sputnik moment", prompting calls for greater US investment and strategic planning to maintain a competitive edge in the global AI race.