AI is a highly complex topic that even many practitioners do not fully understand. In addition, reporting on it is often inaccurate, whether from the need to simplify or from a hunger for attention. The result is statements that paint a distorted picture of DeepSeek.
DeepSeek is a Chinese company that recently released the DeepSeek-R1 language model. It is said to be just as good as, and in some respects even better than, OpenAI's o1 language model ("ChatGPT").
This sent the market values of AI companies such as Nvidia plummeting, as the financial news outlet Forbes reported on January 27, 2025.
It is often presented as if DeepSeek is significantly more efficient than ChatGPT. This is true in the relevant aspects, but less so in others.
Then you read headlines like this:
As of: January 31st, 2025, source:
'Harmful and toxic output': DeepSeek has 'major security and safety gaps,' study warns.
This gives the impression that the Chinese language model is not secure because user data may be misused.
Most of the statements of this kind circulating in public are not entirely accurate.
DeepSeek is primarily an AI chatbot and is therefore not dissimilar to OpenAI's ChatGPT. Like its competitors, it is available as an app and as a web application. Users can ask the chatbot questions, have it generate texts, or have it solve complex tasks.
However, DeepSeek goes beyond a pure chatbot and, according to the technology portal TechRadar, relies on two main models: the base model DeepSeek-V3 and the reasoning model DeepSeek-R1.
While OpenAI relies on a closed model, DeepSeek is partially open source - a crucial difference. And while OpenAI invested over 100 million US dollars in the development of GPT-4, the final training phase of DeepSeek R1 is said to have cost only 5.6 million dollars - a reduction in costs of 95 percent, according to The Verge.
This was made possible, among other things, by a special Mixture-of-Experts (MoE) architecture, in which not the entire model is activated for each input, but only the relevant sub-networks.
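The idea behind MoE routing can be sketched in a few lines. This is a toy illustration of the principle (a gate selects a few "experts" and only those run), not DeepSeek's actual implementation; all numbers and the gating rule are made up for the example.

```python
# Toy Mixture-of-Experts (MoE): only the top_k highest-scoring experts
# are executed for a given input, so most of the model stays inactive.

def expert(weight, x):
    """A toy 'expert': here just a scaled transform of the input."""
    return [weight * v for v in x]

def moe_forward(x, gate_scores, experts, top_k=2):
    """Run only the top_k experts with the highest gate scores."""
    # Rank experts by gate score, keep the best top_k.
    ranked = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)
    active = ranked[:top_k]
    # Normalize the gate scores of the active experts to weights.
    total = sum(gate_scores[i] for i in active)
    out = [0.0] * len(x)
    for i in active:
        y = experts[i](x)           # only these experts compute anything
        w = gate_scores[i] / total
        out = [o + w * v for o, v in zip(out, y)]
    return out, active

# Four experts, but any single input only activates two of them.
experts = [lambda x, w=w: expert(w, x) for w in (1.0, 2.0, 3.0, 4.0)]
y, active = moe_forward([1.0, 1.0], [0.1, 0.5, 0.3, 0.1], experts, top_k=2)
```

In a real MoE model the experts are feed-forward sub-networks and the gate is itself learned, but the saving is the same: compute scales with the active experts, not with the total parameter count.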
Another key feature is the use of synthetic data, i.e. artificially generated training data. This means that DeepSeek could have found a solution to the so-called "data problem" faced by many AI companies.
DeepSeek is considered a direct competitor to ChatGPT o1 and GPT-4o - but what exactly is the difference between the two models? We have made a brief comparison and presented it in an overview:
| Comparison Criterion | DeepSeek R1 | ChatGPT o1 (OpenAI) |
|---|---|---|
| Model type | Partially open source | Closed model |
| Training costs | 5.6 million USD | Over 100 million USD |
| Computing power | About 2,000 Nvidia chips | About 16,000 Nvidia chips |
| Availability | Free | Free and paid |
| Additional functions | No AI image/video tool | DALL·E, GPTs, plugins |
| Data protection | Chinese servers for the app model; regulation unclear | OpenAI guidelines |
While DeepSeek is impressive in terms of efficiency, the system lacks some of the features that ChatGPT offers, in particular image and video generation as well as more extensive customization options. Users who value these things are, for now, still better off with ChatGPT.
Below are the facts we believe you should know about DeepSeek:
DeepSeek is the name of a Chinese company. If DeepSeek is used instead as a name for a language model, one must distinguish between two variants: the cloud version offered via app and web, and the open-source model that can be downloaded and run locally.
The above-mentioned report that “DeepSeek” is likely to misuse user data can only refer to the cloud version (“app”). This is because the open source language model can be downloaded and run locally, without any internet connection. Without such a connection, user data can hardly be sent to China.
The DeepSeek language model can be used without this security risk, namely in the local version, which can run on one's own AI server.
By the way, ChatGPT is not necessarily secure either. American surveillance laws allow US authorities and intelligence services to access data held by US companies. The EU-US data protection agreement (the Data Privacy Framework, DPF) was never worth much anyway and was little more than a formality. It rests on an executive order signed by Joe Biden, which Donald Trump could revoke at any time.
OpenAI also collects your data. Even if your ChatGPT data is not used for AI training, it may still be used for other purposes, for example to evaluate OpenAI's AI, which in turn makes you more and more dependent (price increases have already been announced).
The training of DeepSeek-R1 is said to have cost around 6 million USD. The actual cost was higher: the figure refers only to the final training run of the base model DeepSeek-V3, not to total development costs. For ChatGPT, a sum of 100 million USD was reported.
In any case, the following is correct:
Why is DeepSeek-R1 smaller than ChatGPT? According to DeepSeek, R1 is a 685B model, meaning it consists of 685 billion parameters.
DeepSeek-R1 works somewhat like the human brain: when you speak, predominantly the speech center is active, and only a fraction of your neurons fire. Technically, DeepSeek-R1 achieves this with a so-called Mixture-of-Experts (MoE) architecture. This architecture is not new; Mistral, for example, already uses it.
Because DeepSeek-R1 is open source, you can download it and run it yourself. To run DeepSeek-R1 on your hardware, you need a server, which costs around 35,000 dollars. Many companies can afford that.
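Why a dedicated server is needed for the full model can be seen with a back-of-the-envelope calculation: the weights alone must fit in memory. The parameter count is taken from the text above; the byte-per-parameter precisions are typical options, not a statement about DeepSeek's exact deployment format.

```python
# Rough memory requirement just to hold the model weights, which
# dominates hardware sizing for local LLM inference.

def weight_memory_gb(num_params, bytes_per_param):
    """Memory in gigabytes to store the raw weights."""
    return num_params * bytes_per_param / 1e9

params = 685e9  # 685 billion parameters, as stated above

fp16 = weight_memory_gb(params, 2)  # 16-bit floats: 2 bytes per parameter
fp8  = weight_memory_gb(params, 1)  # 8-bit quantization: 1 byte per parameter

# fp16 -> 1370 GB, fp8 -> 685 GB: far beyond a single GPU,
# hence the need for a multi-GPU AI server for the full model.
```

Activations and the key-value cache add further memory on top, so these figures are a lower bound.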
ChatGPT, on the other hand, cannot be run on your own hardware: OpenAI has not released the model and does not want you to.
In various benchmarks, DeepSeek-R1 performs just as well as OpenAI o1, even though R1 is smaller and much more efficient than ChatGPT. Some users report that R1 is just as good as ChatGPT; others see R1 ahead.
Chinese censorship has removed or distorted some facts in the model. As a result, answers on some political topics are of poor quality.
However, a general chatbot is a very bad use case for an enterprise AI, so it hardly matters that some political facts in R1 are questionable. With standard procedures such as fine-tuning or RAG (retrieval-augmented generation), text applications can be built very well on R1, and other use cases can be implemented even better with it.
OpenAI occasionally releases a sub-version of ChatGPT. These versions can answer the same question differently. There is no consistency, and without consistency there is no reliability when automating processes.
OpenAI is a paid service. The free version is either irrelevant for companies or simply gets used anyway. The paid chatbot does not help automate your processes. The paid application programming interface (API) brings uncertainties: How often will you have to call the API? How much data will be sent to it? Depending on the volume of data, the costs of using the API rise or fall.
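Why API costs are hard to predict can be made concrete: billing is roughly (tokens in + tokens out) multiplied by a per-token price, so costs track usage volume. The prices and volumes below are placeholders for illustration, not OpenAI's actual rates.

```python
# Sketch of volume-dependent API billing: cost scales linearly with
# the number of calls and the tokens sent and received per call.

def monthly_api_cost(calls, avg_input_tokens, avg_output_tokens,
                     usd_per_1m_input, usd_per_1m_output):
    """Estimated monthly cost for a token-billed LLM API."""
    input_cost = calls * avg_input_tokens / 1e6 * usd_per_1m_input
    output_cost = calls * avg_output_tokens / 1e6 * usd_per_1m_output
    return input_cost + output_cost

# Placeholder scenario: 100,000 calls/month, 1,000 tokens in and
# 500 tokens out per call, at $5 / $15 per million tokens.
cost = monthly_api_cost(100_000, 1_000, 500, 5.0, 15.0)
```

Double the traffic and the bill doubles, which is exactly the uncertainty that fixed-cost local hardware avoids.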
Updates happen when OpenAI schedules them, which also means they do not happen when you want them to. OpenAI, as the supplier, determines which version of ChatGPT you are allowed to use.
DeepSeek works as you would expect: once downloaded, it always responds the same way, and tests and benchmarks reflect this permanently valid state.
DeepSeek can be operated at fixed costs, which essentially consist of the price of hardware (or its rental).
DeepSeek can easily be replaced by other models or newer model variants, and this happens exactly when you want it to. Uncertainties can be eliminated through testing. This is also why it is generally a good idea to target specific use cases with AI: these can be mastered and validated very well.
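The "eliminate uncertainty through testing" point can be sketched as a tiny regression suite: pin down expected behavior on fixed prompts, and any replacement model must pass the same checks before it goes live. The stub model and its answers are hypothetical stand-ins for a real local LLM call.

```python
# Minimal regression harness for validating a model swap.

def stub_model(prompt):
    """Placeholder standing in for a call to a locally hosted LLM."""
    answers = {
        "2+2": "4",
        "capital of France": "Paris",
    }
    return answers.get(prompt, "unknown")

# Pinned prompt/answer pairs that every candidate model must reproduce.
REGRESSION_CASES = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
]

def validate(model):
    """Return True only if the model reproduces every pinned answer."""
    return all(model(p) == expected for p, expected in REGRESSION_CASES)

ok = validate(stub_model)
```

Real suites would compare semantic similarity rather than exact strings, but the principle is the same: a new model version only replaces the old one once it passes the suite.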
It is quite astonishing that, in the public's view, a language model like R1 beats the market leader ChatGPT, even though DeepSeek's model is smaller and DeepSeek used far fewer resources than OpenAI.
But what tops it all off is that DeepSeek-R1 has been published and made freely available. Put simply, this means that anyone can download the model and use it.
In contrast, ChatGPT offers nothing comparable.
But that’s not all.
DeepSeek puts the icing on the cake: the company has published the recipes for creating DeepSeek-R1.
Specifically, the following is available from DeepSeek as open source:
The code to run DeepSeek-R1 is available in Python via the Transformers library, and the model weights have been published. DeepSeek has shown everyone how a ChatGPT-class model can be recreated.
In addition, DeepSeek has shown everyone how existing, relatively small language models can easily be made more intelligent through knowledge transfer.
These smaller models are called distilled models. Such a model is small enough to run on inexpensive hardware: some run on a low-cost AI server, and the smallest can even be installed on a modern smartphone and run without an internet connection.
DeepSeek has also made these distilled models freely available.
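The knowledge transfer behind these distilled models can be sketched in miniature: a small "student" model is trained to match the output distribution of a large "teacher". The distributions below are toy numbers, not real model outputs; actual distillation works on model logits over a full vocabulary.

```python
import math

# Toy sketch of knowledge distillation: the training signal is how far
# the student's output distribution is from the teacher's.

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KL(p || q): distance of student distribution q from teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = softmax([4.0, 1.0, 0.5])   # large model's next-token preferences
student = softmax([2.0, 1.5, 1.0])   # small model before training

loss = kl_divergence(teacher, student)
# Training adjusts the student to minimize this loss;
# identical distributions give a loss of zero.
```

The point is that the student never sees the teacher's weights, only its outputs, which is why even a small, cheap-to-run model can absorb much of a large model's behavior.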
DeepSeek (as a company or AI model) is not a privacy risk unless you use the DeepSeek app. OpenAI, by contrast, is a risk for sensitive data, because only its cloud version can be used.
DeepSeek has revealed how ChatGPT can be replaced. The smaller distilled models are a great by-product and an additional gift. For a fairly manageable amount (hardware purchase or compute rental), every company can now recreate ChatGPT for itself.
Even if DeepSeek comes from China, open source is open source. Of course, all providers of larger AI models have stolen data, not just DeepSeek. Google and Meta also misuse user data.