Using DeepSeek Securely

DeepSeek has shaken up the world of artificial intelligence. Founded in 2023, the Chinese company released its new series of AI models of the same name on January 20, 2025.
What’s special:
- These are open-source models – they can be used free of charge
- The training strategy was published in a paper
- The development costs amounted to only a few million dollars
- Despite lower energy consumption, the models are as powerful as the giants of the AI market
So, what’s the catch? This blog post is less about the capabilities of the models or the circumstances of their origin. Instead, it explains the risks associated with the typical use of DeepSeek’s services and how these can be minimized.
Reactions
Anyone who currently visits platforms such as LinkedIn or Xing is flooded with positive testimonials about DeepSeek from all over the world. This contrasts with statements from European authorities, blogs from data protection agencies, and articles and columns in technical journals.
At the beginning of February, Luxembourg’s National Commission for Data Protection (CNPD) strongly advised against using DeepSeek [1]. To protect national security, US representatives such as Josh Gottheimer are seeking a ban on the use of DeepSeek by government employees and officials in the USA [2], similar to restrictions Australia and Taiwan have already imposed. In Italy, the platform has already been blocked for everyone [3]. The German Federal Office for Information Security (BSI), on the other hand, has not yet commented specifically on DeepSeek (as of February 14, 2025).
Where does this mistrust come from and what justifies the authorities’ high level of caution?
There are two important links at the bottom of the DeepSeek website [4] beneath “Legal & Safety”: the Terms of Use and the Privacy Policy.
Data protection concerns with DeepSeek
DeepSeek’s privacy policy raises significant questions, particularly with regard to the GDPR and the protection of privacy. A central problem is the unclear legal basis for the collection of personal data such as IP addresses, device information and chat histories. The transfer of data to third parties – including advertising partners and analytics services – remains opaque, and there is no information on protective measures for international data transfers. Users also have hardly any control over their data: there are no clear mechanisms for withdrawing consent, deleting data, or contacting a representative within the EU. It is particularly problematic that data is stored in China without any GDPR-compliant protective measures being specified.
Data is stored “for as long as necessary”, i.e. without specific time limits, which can lead to unlimited retention periods. It also remains unclear how long user data is stored after an account has been blocked.
With regard to the EU AI Act, essential transparency measures are missing: there is no information on the training data used or on measures to avoid bias and discrimination.
Overall, the use of DeepSeek’s services is associated with considerable risks, especially for European users who rely on data protection, security and transparent regulation.
The terms of use
The terms of use paint a similar picture. DeepSeek reserves the right to store user data even after an account has been deleted. However, it is unclear exactly what data this includes.
Another paragraph mentions that technical means will be used to monitor user behavior and information – including prompts and outputs. A database is also to be created specifically for “illegal content features”. The next section outlines what such security measures might look like. However, this paragraph remains vague, revealing little about how user data is used and for what purposes.
The penultimate point in the “Input and Outputs” paragraph concerns the use of user data to fulfill requirements under “laws and regulations” and to improve services. The sentence is so long that one has almost forgotten its first few words by the end. What legal requirements must a Chinese company fulfill, and why does it need user prompts and outputs? What happens to this data, and who can access it?
The use of this data is said to be tied to encryption and de-identification, yet this very data was openly accessible on the Internet (see below). Users who do not consent to their data being used can “provide feedback” via email, but the consequences of doing so remain uncertain.
Additionally, the terms of use may be amended at any time without prior notification. They apply immediately and are accepted simply by continuing to use the services. This poses a significant risk for companies and individuals relying on legal clarity. A lack of GDPR compliance and an unclear legal situation could lead to DeepSeek facing further legal restrictions or bans in the EU.
Further concerns
In addition to the regulatory risks, there are other factors that speak against the use of DeepSeek’s services. Some users report extreme real-time censorship of political topics by DeepSeek’s services (see Figure 1) [5]. This is presumably one of the technical means used to act live against unwanted content.
[Figure 1: DeepSeek’s chat service censoring a political topic in real time]
Furthermore, the platform was down several times for extended periods at the end of January. Its API service was even unavailable for over a week following a massive cyber attack (see Figure 2) [6]. For a time, Wiz Research was able to access user data and chat histories via a publicly accessible database endpoint [7]; the chats stored there could be read unencrypted.
[Figure 2: DeepSeek status page showing the prolonged API outage]
And now?
As described at the beginning, this blog post is not only intended to explain the risks and concerns, but also to provide solutions to minimize them. The DeepSeek model series remains a revolutionary technology. Moreover, the concerns and risks lie mainly in the use of the services – not in the models themselves.
Suitable approaches are therefore those based either on hosting the models yourself or on using an alternative platform. However, not every AI platform from the major cloud providers is currently able to provide DeepSeek models.
Things initially look promising for Microsoft, which announced in a blog post at the end of January that DeepSeek had been integrated into its AI platform Azure AI Foundry [8]. In the accompanying video, the implementation also looks very simple. On closer inspection, however, it quickly becomes apparent that DeepSeek, although it is an open model, can only be used via a serverless API in Azure AI Foundry and is only available in the USA [9].
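For teams that can accept the US-only serverless deployment, querying such an endpoint is straightforward. The following is a minimal sketch using the azure-ai-inference Python package; the endpoint URL, API key and deployment name are placeholders and depend entirely on your own Foundry setup.

```python
# Minimal sketch: querying a DeepSeek R1 serverless deployment in Azure AI Foundry.
# Endpoint URL, API key and model/deployment name are hypothetical placeholders.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-serverless-endpoint>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize the GDPR in one sentence."),
    ],
    model="DeepSeek-R1",  # deployment name; depends on your Foundry setup
)
print(response.choices[0].message.content)
```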
AWS is a little further ahead and even offers distilled R1 models via Bedrock in Germany. The original model can currently be used in the London region [10].
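Access via Bedrock works through the standard AWS SDK. The following is a minimal sketch using boto3’s Converse API; the model ID is an assumption and must be checked against what is actually available in your account and region.

```python
# Minimal sketch: invoking a DeepSeek R1 model via Amazon Bedrock's Converse API.
# The model ID below is hypothetical; check the Bedrock console for the exact
# identifier available in your region (e.g. a distilled variant in eu-central-1).
import boto3

client = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = client.converse(
    modelId="deepseek.r1-v1:0",  # assumption; verify in your account
    messages=[
        {"role": "user", "content": [{"text": "Summarize the GDPR in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.6},
)
print(response["output"]["message"]["content"][0]["text"])
```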
The situation is similar with GCP’s Vertex AI, where the non-distilled model is not yet available for production use and can only be run in notebooks (see Figure 3).
[Figure 3: DeepSeek in GCP Vertex AI’s Model Garden, usable only via notebooks]
Locally, the model and its distilled variants can be hosted via platforms such as Ollama [11]. The full R1 model, however, requires around 1.5 TB of VRAM and has a download size of over 400 GB; the distilled models do not require quite such powerful hardware.
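Once a distilled variant has been pulled, Ollama exposes a local REST API, by default on port 11434. The following sketch queries it with Python; the deepseek-r1:7b tag is used here merely as an example of a smaller distilled model.

```python
# Minimal sketch: querying a locally hosted DeepSeek R1 model via Ollama's REST API.
# Assumes Ollama is running on its default port and a distilled variant such as
# deepseek-r1:7b has already been pulled (e.g. with `ollama pull deepseek-r1:7b`).
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1:7b",
        "messages": [
            {"role": "user", "content": "Summarize the GDPR in one sentence."}
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["message"]["content"])
```

Since the model runs entirely on your own hardware in this setup, no prompts or outputs ever leave your infrastructure.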
Our GenAI Accelerator
We already offer a comprehensive accelerator for the implementation of GenAI use cases. It is a collection of Terraform modules for AWS, Azure and GCP that uses a proven, modular architecture to accelerate Generative AI use cases. It reduces complexity by providing basic infrastructure and best practices so that developers can focus on the specific implementation.
Since our Accelerator is also able to integrate open models, e.g. from HuggingFace, it can be used to host DeepSeek models on your own secure platform. Technically, it is even possible to use the 671B R1 model.
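To illustrate what this looks like at the model level, the following sketch loads one of the smaller distilled R1 checkpoints with the HuggingFace transformers library. The model ID and parameters are illustrative; the full 671B model would require a distributed serving stack rather than a single-process setup like this.

```python
# Minimal sketch: running a distilled DeepSeek R1 variant via transformers.
# The model ID refers to one of the published distilled checkpoints; the full
# 671B model is far too large for this approach and needs a dedicated serving stack.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    device_map="auto",  # place weights on a GPU if one is available
)

messages = [{"role": "user", "content": "Summarize the GDPR in one sentence."}]
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])  # last chat turn = model reply
```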
Conclusion
Whether and in what form DeepSeek will remain available in Europe in the long term is uncertain. Current privacy and security concerns should not prevent companies from taking advantage of the technological benefits of these models – but they should take the right measures. Our GenAI Accelerator offers a secure way to deploy DeepSeek models independently of the official platform. With a GDPR-compliant infrastructure and full control over data processing, companies can realize the potential of DeepSeek without taking regulatory risks. Contact us if you would like to find out more about the secure use of open-source AI – we will support you with the implementation.
Sources
1. https://cnpd.public.lu/de/actualites/national/2025/02/deepseek.html
2. https://gottheimer.house.gov/posts/release-gottheimer-lahood-introduce-new-bipartisan-legislation-to-protect-americans-from-deepseek
3. https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/10097450
4. https://www.deepseek.com/
5. https://www.theguardian.com/technology/2025/jan/28/chinese-ai-chatbot-deepseek-censors-itself-in-realtime-users-report?CMP=share_btn_url
6. https://status.deepseek.com/
7. https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak
8. https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github/
9. https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog-overview#content-safety-for-models-deployed-via-serverless-apis
10. https://aws.amazon.com/de/blogs/aws/deepseek-r1-models-now-available-on-aws/
11. https://ollama.com/library/deepseek-r1:671b