Gate News message, April 27 — DeepSeek postponed the release of its V4 model to fine-tune its software stack for Huawei’s Ascend chips, reflecting Beijing’s broader initiative to develop a domestic AI supply chain as access to advanced foreign semiconductors becomes increasingly constrained.
DeepSeek’s V4-Pro model matches OpenAI and Anthropic models on major performance benchmarks while offering significantly lower API costs: $1.74 per million input tokens, well below Western competitors’ rates. The company reported that V4-Pro is 27% more computationally efficient than its V3.2 predecessor, using substantially less computing power when processing a 1 million-token context. DeepSeek previously demonstrated cost efficiency with its R1 model, which the company said cost less than $6 million to develop.
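The pricing claim is straightforward per-token arithmetic; a minimal sketch of how such a comparison works is below. The $1.74-per-million rate comes from the article, while the comparison rate and request size are placeholder assumptions for illustration, not published figures.

```python
def input_cost_usd(tokens: int, rate_per_million: float) -> float:
    """Cost of a request's input tokens at a per-million-token rate."""
    return tokens / 1_000_000 * rate_per_million

DEEPSEEK_RATE = 1.74          # $/M input tokens (reported in the article)
ASSUMED_COMPETITOR_RATE = 3.00  # $/M input tokens (placeholder assumption)

# Example: a request that fills a 1 million-token context.
tokens = 1_000_000
print(f"DeepSeek:   ${input_cost_usd(tokens, DEEPSEEK_RATE):.2f}")
print(f"Competitor: ${input_cost_usd(tokens, ASSUMED_COMPETITOR_RATE):.2f}")
```

At these rates, a full 1 million-token input would cost $1.74 versus $3.00 under the assumed competitor price; real savings depend on actual competitor rates and output-token pricing, which the article does not break down.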
Market reaction reflected the shift toward domestic chip adoption. Shares in Chinese AI companies MiniMax and Zhipu (Knowledge Atlas Technology) each fell approximately 8%, while chipmakers benefited: SMIC, China’s largest contract chip manufacturer, rose 9% and Hua Hong Semiconductor climbed 15%.
However, DeepSeek’s technical report suggests the company remains partly dependent on Nvidia chips. Chinese semiconductors currently handle model inference, but only portions of V4 training appear to have been adapted for domestic hardware, and the report does not clarify whether Nvidia chips performed the majority of the training run.
Related News
DeepRoute.ai reports Advanced Driver Assistance System breakthrough: over 300,000 vehicles deployed; targets a 1 million-vehicle City NOA fleet by 2026
US State Dept Warns on DeepSeek AI Model Distillation
DeepSeek V4-Flash goes live on US-hosted Ollama Cloud: one-click integration with Claude Code and OpenClaw