China’s Ministry of Industry and Information Technology (MIIT) has announced that the country’s top telecom companies—China Mobile, China Telecom, and China Unicom—have fully adopted the open-source DeepSeek AI model.
These providers use the DeepSeek-R1 model across various products and services, offering tailored computing power solutions and supportive environments to enhance operations.
AI is playing a key role in transforming telecom services. During the 2025 Spring Festival, telecom companies combined AI with 5G, cloud platforms, and big data to introduce new applications and improve AI-driven services.
The operators enhanced digital services using AI and 5G, delivering billions of precise location data points for smart vehicle navigation during the holiday. According to MIIT, they also rolled out AI-powered voice analysis tools to detect fraud for financial institutions and launched cloud-based solutions to support efficient remote work.
AI and big data were also used to strengthen public services. For instance, China Unicom monitored travel patterns and visitor numbers at scenic attractions in real time, analyzed key infrastructure, and reviewed holiday spending trends to assess post-holiday economic recovery.
DeepSeek puts China at the forefront
The DeepSeek model was introduced at the end of January. It offers an AI chatbot designed to compete with OpenAI’s leading model, which powers ChatGPT.
DeepSeek has made significant strides in optimizing hardware by relying on fewer, less powerful chips. It has also improved learning efficiency, making it a more cost-effective option.
The announcement grabbed global media attention, with many predicting DeepSeek could reduce demand for AI hardware. However, this view overlooks four ways in which these advancements could increase the need for AI equipment:
- Reduced training costs will allow more companies to develop their own models, avoiding the high costs of relying on major tech firms.
- Larger tech companies can combine improved training efficiency with significant resources to increase performance.
- Researchers can perform more experiments without needing additional resources.
- Providers like OpenAI might shift from offering one general-purpose model to creating multiple specialized models tailored for specific fields, such as science or writing.
Researchers worldwide are constantly working on improving AI model performance, often building on shared, published innovations. DeepSeek has combined several ideas and made notable progress in hardware use and learning techniques.
DeepSeek’s improved hardware efficiency addresses a common issue in large model training—communication bottlenecks between computers. Its new approach allows simultaneous calculations and communication, reducing idle time and increasing productivity.
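The general idea of overlapping communication with computation can be sketched in a few lines. This is a minimal illustration of the pattern, not DeepSeek's actual implementation: while one chunk's result is being sent (simulated here with a background thread and a sleep), the next chunk is already being computed, so the processor spends less time idle.

```python
import threading
import time

def compute(chunk):
    """Simulate a local computation step (e.g., a gradient calculation)."""
    time.sleep(0.01)
    return chunk * 2

def communicate(result, received):
    """Simulate sending a result to other workers (e.g., a gradient sync)."""
    time.sleep(0.01)
    received.append(result)

def train_overlapped(chunks):
    """Send each result in the background while computing the next chunk."""
    received = []
    sender = None
    for chunk in chunks:
        result = compute(chunk)        # compute overlaps with the previous send
        if sender is not None:
            sender.join()              # wait only when the next send must start
        sender = threading.Thread(target=communicate, args=(result, received))
        sender.start()
    if sender is not None:
        sender.join()
    return received

print(train_overlapped([1, 2, 3, 4]))  # → [2, 4, 6, 8]
```

With this scheduling, each send runs concurrently with the next computation; in a naive loop the same work would take roughly twice as long because every send would block the next calculation.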
It has also innovated in how learning takes place. Typically, language models go through three stages: learning from large amounts of text to predict words, fine-tuning with specific examples for better communication, and generating outputs while receiving feedback for improvement.
In the final stage, instead of finding one “correct” answer, the model learns to identify better outcomes from comparisons. DeepSeek’s process evaluates a wider range of outputs during this phase, allowing the second and third stages to be shorter without compromising results.
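A toy sketch of learning from comparisons over a group of outputs, loosely in the spirit of the process described above: each candidate output is scored, and its quality is measured relative to the group average rather than against a single "correct" answer. The reward function and candidate strings here are invented purely for illustration.

```python
import statistics

def score(output):
    """Toy reward: prefer concise answers that contain the keyword '42'."""
    return (10 if "42" in output else 0) - len(output)

def group_advantages(outputs):
    """Score a group of candidates and compare each to the group mean,
    so better-than-average outputs get a positive signal."""
    scores = [score(o) for o in outputs]
    mean = statistics.mean(scores)
    return {o: s - mean for o, s in zip(outputs, scores)}

candidates = ["the answer is 42", "42", "i do not know"]
adv = group_advantages(candidates)
best = max(adv, key=adv.get)  # outputs above the mean would be reinforced
print(best)  # → "42"
```

Because the signal comes from relative comparisons within each sampled group, a wider range of outputs can be evaluated per step, which is one way the later training stages can be shortened without degrading quality.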
These improvements significantly boost efficiency, but it’s important to note that this isn’t a groundbreaking leap like the introduction of machine learning in the 1990s or neural networks in the 2010s. It’s unlikely to give DeepSeek a lasting edge over competitors in AI development.
DeepSeek demonstrates that innovation in AI can happen anywhere with a skilled team and adequate funding. The competition among researchers and companies worldwide will continue, and leadership in the field is likely to shift over time.