DeepSeek's newly launched AI model shows how the Chinese startup's focus on open source, low-cost models could help China advance in the geopolitical AI race, especially as the vendor begins to break its reliance on U.S. AI chip giant Nvidia.

DeepSeek released DeepSeek-V4 in preview on April 24. The open source model can process longer prompts than previous DeepSeek models. The new model is the startup's most significant release since its R1 launch in January 2025, which shocked the AI market with its reasoning capability and low price.

DeepSeek-V4 comes in two versions: V4-Pro and V4-Flash. V4-Pro is the larger model, with 1.6 trillion total parameters, while V4-Flash is a high-speed version with 284 billion parameters. The models are efficient in long-context scenarios, supporting context lengths of up to a million tokens, according to DeepSeek. That long context length makes the V4 models comparable to models such as Google Gemini and Anthropic's Claude. The models are designed for long-horizon reasoning, coding and agentic workflows.


DeepSeek's V4 release highlights the vendor's commitment to open source. While the fact that the vendor is from China may discourage some enterprises from using the model, the model's cost adds appeal for enterprises concerned about recent price increases from other model makers. 

The Cost Factor

V4-Pro costs $1.74 per million input tokens and $3.48 per million output tokens, while V4-Flash costs $0.14 for input and $0.28 for output. By comparison, Gemini 3.1 Pro costs $2 for input and $12 for output, GPT-5.5 costs $5 for input and $30 for output, and Claude Opus 4.7 costs $5 for input and $25 for output.
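To make the gap concrete, a quick back-of-the-envelope calculation using the list prices above shows what a month of usage would cost on each model. The workload figures (100 million input tokens, 20 million output tokens) are hypothetical, chosen only for illustration:

```python
# Per-million-token list prices (USD) cited above: (input, output).
PRICES = {
    "DeepSeek V4-Pro": (1.74, 3.48),
    "DeepSeek V4-Flash": (0.14, 0.28),
    "Gemini 3.1 Pro": (2.00, 12.00),
    "GPT-5.5": (5.00, 30.00),
    "Claude Opus 4.7": (5.00, 25.00),
}

def monthly_cost(model: str, input_millions: float, output_millions: float) -> float:
    """Return the cost in USD for a workload given in millions of tokens."""
    price_in, price_out = PRICES[model]
    return input_millions * price_in + output_millions * price_out

# Hypothetical workload: 100M input tokens and 20M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 100, 20):,.2f}")
```

On that hypothetical workload, V4-Pro comes in around a quarter of GPT-5.5's cost, and V4-Flash at a small fraction of any of the Western frontier models, which is the dynamic Kompella describes below.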

"The token pricing is a third of the frontier labs' pricing," said Kashyap Kompella, CEO of RPA2AI Research. "That kind of pricing can change buying behavior."

He added that while DeepSeek's models may trail frontier models from OpenAI, Google or Anthropic by three to six months, that gap matters little when the cost gap is this large.

"Enterprises do not always need the absolute best model for all use cases," Kompella said. "They need good enough performance, predictable cost, and control. DeepSeek is forcing Western frontier labs to innovate on cost, not only on model capabilities."

Moving Away from Nvidia

The V4 series also marks a significant shift for DeepSeek, as the models are optimized for inference on Chinese chipmaker Huawei's Ascend supernode. Previous DeepSeek models such as V3 were trained on Nvidia H800 chips. V4-Flash is reported to have been partially trained on Huawei hardware, while V4-Pro was still trained on Nvidia chips because of its massive compute needs.


The relationship benefits both DeepSeek and Huawei, which has gained little traction with its own Pangu series of models, said Lian Jye Su, an analyst at Omdia, a division of Informa TechTarget.

"Being able to support DeepSeek now natively, showcasing pretty reasonable performance, is a very advanced achievement," Su said. "It does allow China to gain a bit more respect from other vendors." He added that China-friendly countries will be more open to adopting other products from China, "given that now China seems to be able to fight through all the restrictions and the limitations and now emerging stronger as compared to 12 months ago." 

However, Huawei is still behind Nvidia and the broader global AI ecosystem in chip fabrication and software development, Su noted.

The Commitment to Open Source

DeepSeek's adherence to open source also matters on the geopolitical front. While U.S. vendors like Meta and OpenAI previously embraced open source, they have since largely departed from it. Open source could be DeepSeek's best differentiator.

"Open source helps them attract developers, build trust and create an ecosystem," Kompella said, adding that open source is also helpful for the Chinese market. Other Chinese vendors -- including Alibaba with Qwen, Moonshot AI with Kimi, and MiniMax -- also offer open-weight models.


"Open source fundamentally is just a commercial strategy," Su said. He added that, for DeepSeek, open source is built into the vendor's approach; it is more than a ploy to entice the market before eventually becoming a closed model provider.

On the geopolitical front, open source also offers China an opening in price-sensitive markets outside the U.S., Kompella said. 

"It also lets China influence global AI markets without needing to own every application layer," he said.

Furthermore, DeepSeek's move away from Nvidia chips advances the Chinese government's goal of reducing the country's dependence on Nvidia and other Western vendors.

However, the Western market is still wary of Chinese vendors, Su said.

"There are now significant reservations in the Western camp about adopting any open source solution from China," he said. "There is a large pushback from large enterprises, especially those in critical industries, which are avoiding Chinese models entirely, mainly because of scrutiny from governments."

Nevertheless, for enterprises already using Huawei's products, V4 models may be worth considering, Su added.

Moreover, for some enterprises, DeepSeek's low-cost models and commitment to open source may be enough to win adoption, which could shift the global AI race.

"The global AI race is about who can deliver intelligence at scale, at low cost, on a sovereign technology stack," Kompella said.