Using DeepSeek Securely



DeepSeek has shaken up the world of artificial intelligence. Founded in 2023, the Chinese company released its eponymous new series of AI models on January 20, 2025.

What’s special:

  • These are open-source models – they can be used free of charge
  • The training strategy was published in a paper
  • The development costs amounted to only a few million dollars
  • Despite lower energy consumption, the models rival the giants of the AI market

So, what’s the catch? This blog post is less about the capabilities of the models or the circumstances of their origin. Instead, it explains the risks associated with the common ways of using DeepSeek and how these can be minimized.

Reactions

Anyone who currently visits platforms such as LinkedIn or Xing is flooded with positive testimonials about DeepSeek from all over the world. This contrasts with feeds from European authorities, blogs from data protection agencies, and articles and columns in technical journals.

At the beginning of February, Luxembourg’s National Commission for Data Protection (CNPD) strongly advised against using DeepSeek.1 To protect national security, US representatives such as Josh Gottheimer are seeking a ban on the use of DeepSeek for government employees and officials in the USA2, similar to restrictions Australia and Taiwan have already decided on. In Italy, the platform has already been blocked for everyone.3 Germany’s Federal Office for Information Security (BSI), on the other hand, has not yet commented specifically on DeepSeek (as of 14.02.2025).

Where does this mistrust come from, and what justifies the authorities’ high level of caution?

There are two important links at the bottom of the DeepSeek website4 beneath “Legal & Safety”: the Terms of Use and the Privacy Policy.

Data protection concerns with DeepSeek

DeepSeek’s privacy policy raises significant questions, particularly with regard to the GDPR and the protection of privacy. A central problem is the unclear legal basis for the collection of personal data such as IP addresses, device information and chat histories. The transfer of data to third parties – including advertising partners and analytics services – remains opaque, and there is no information on protective measures for international data transfers. Users also have hardly any control over their data: there are no clear mechanisms for the right of withdrawal, for data deletion, or for a contact point within the EU. It is particularly problematic that data is stored in China without clear GDPR-compliant protective measures being specified.

Data is stored “for as long as necessary”, i.e. without specific time limits, which can lead to unlimited retention periods. It also remains unclear how long user data is stored after an account has been blocked.

With regard to the EU AI Regulation (EU AI Act), essential transparency measures are missing: there is no information on the training data used or on measures to avoid bias and discrimination.

Overall, using DeepSeek is associated with considerable risks, especially for European users who rely on data protection, security and transparent regulation.

The terms of use

The terms of use are similar. DeepSeek reserves the right to store user data even after an account has been deleted. However, it is unclear what data this includes.

Another paragraph mentions that technical means will be used to check user behavior and information – including prompts and outputs. A database is also to be created specifically for “illegal content features.” The next section outlines what such security measures might look like. However, this paragraph remains vague, revealing little about how user data is used and for what purposes.

The penultimate point in the “Input and Outputs” paragraph concerns the use of user data to fulfill requirements under “laws and regulations” and to improve services. The sentence is so long that one almost forgets the first few words by the end. What legal requirements must a Chinese company fulfill, and why does it need user prompts and outputs? What happens to this data, and who can access it?
Its use is said to be linked to encryption and de-identification, yet this very data was openly available on the Internet (see below). Users who do not consent to data usage can “provide feedback” via email, but the consequences remain uncertain.

Additionally, the terms of use may be amended at any time without prior notification. They apply immediately and are accepted simply by continuing to use the services. This poses a significant risk for companies and individuals relying on legal clarity. A lack of GDPR compliance and an unclear legal situation could lead to DeepSeek facing further legal restrictions or bans in the EU.

Further concerns

In addition to the regulatory risks, there are other factors that speak against the use of DeepSeek. Some users report extreme real-time censorship of political topics by DeepSeek services (see Figure 1).5 This is presumably one of the technical methods used to take live action against unwanted content.

Furthermore, the platform was down several times for extended periods at the end of January. Its API service was even unavailable for over a week following a massive cyberattack (see Figure 2).6 At times, Wiz Research was able to access user data and chats via a publicly accessible endpoint.7 Chats could be read there unencrypted.

Figure 2: DeepSeek API service status, January – February 2025 (the status page shows an outage of around ten days)

And now?

As described at the beginning, this blog post is not only intended to explain the risks and concerns, but also to provide solutions to minimize them. The DeepSeek model series remains a revolutionary technology. Furthermore, the concerns and risks lie mainly in the use of the services – not in the models themselves.

Suitable approaches are therefore those based on either hosting the models yourself or using an alternative platform. Not every AI platform from the major cloud providers is currently able to provide DeepSeek models.

Things initially look promising for Microsoft, which announced in a blog post at the end of January that DeepSeek had been integrated into its AI platform Azure AI Foundry.8 In the attached video, the implementation also looks very simple. On closer inspection, however, it quickly becomes apparent that DeepSeek R1, although it is an open model, can only be used via serverless API in Azure AI Foundry and is only available in the USA.9

AWS is a little further ahead and even offers distilled R1 models via Bedrock in Germany. The original model can currently even be used in London.10
The situation is similar in GCP’s Vertex AI, where the non-distilled model is not yet productive and can only be used in notebooks (see Figure 3).

Figure 3: GCP disclaimer stating that DeepSeek R1 is only available in notebooks

Locally, however, the model and its distilled variants can be hosted via platforms such as Ollama.11 The full R1 model, though, requires around 1.5 TB of VRAM and has a download size of over 400 GB – the distilled models do not require nearly as powerful hardware.
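The hardware figures above can be sanity-checked with a quick back-of-envelope calculation: at 16-bit precision, the weights alone already account for most of the quoted 1.5 TB (the remainder goes to KV cache and runtime overhead, which this rough sketch ignores). The 671B parameter count comes from the text; the 7B distilled variant is used here as an illustrative example.

```python
# Back-of-envelope estimate of the memory needed just to hold model
# weights at a given precision. Real serving requirements are higher
# (KV cache, activations, runtime overhead are not included).

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Memory in GB required to store the raw model weights."""
    return num_params * bytes_per_param / 1e9

# Full DeepSeek-R1 has ~671 billion parameters.
full_r1 = weight_memory_gb(671e9, 2)  # 16-bit = 2 bytes per parameter
print(f"R1 671B @ 16-bit: ~{full_r1:.0f} GB")  # ~1342 GB, i.e. ~1.3 TB

# A distilled 7B variant is far cheaper to host.
distilled_7b = weight_memory_gb(7e9, 2)
print(f"Distilled 7B @ 16-bit: ~{distilled_7b:.0f} GB")  # ~14 GB
```

The gap between roughly 1.3 TB and 14 GB is why the distilled variants are the practical choice for most on-premises deployments.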

Our GenAI Accelerator

We already offer a comprehensive accelerator for the implementation of GenAI use cases. It is a collection of Terraform modules for AWS, Azure and GCP that uses a proven, modular architecture to accelerate Generative AI use cases. It reduces complexity by providing basic infrastructure and best practices so developers can focus on the specific implementation.

Figure 4: Schematic overview of our GenAI Accelerator

Since our Accelerator can also integrate open models, e.g. from HuggingFace, it can be used to host DeepSeek models on your own secure platform. Technically, it is even possible to use the 671B R1 model.
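For illustration, here is a minimal sketch of how a self-hosted deployment could be queried. Most local serving stacks (Ollama, vLLM and the like) expose an OpenAI-compatible chat endpoint; the endpoint URL and model tag below are assumptions to be replaced with those of your own deployment. The sketch only builds the request, so prompts never leave your own infrastructure.

```python
import json
import urllib.request

# Hypothetical endpoint of a self-hosted deployment (here: Ollama's
# default OpenAI-compatible port) -- replace with your own.
ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for a
    self-hosted model, keeping all user data on your own platform."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("deepseek-r1:7b", "Summarize the GDPR in one sentence.")
# urllib.request.urlopen(req) would send the request to the local server.
print(req.full_url)
```

Because the request shape is OpenAI-compatible, existing client code can usually be pointed at a self-hosted model by changing only the base URL and model name.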

Conclusion

Whether and in what form DeepSeek will remain available in Europe in the long term is uncertain. Current privacy and security concerns should not prevent companies from taking advantage of the technological benefits of these models – but they should take the right measures. Our GenAI Accelerator offers a secure way to deploy DeepSeek models independently of the official platform. With a GDPR-compliant infrastructure and full control over data processing, companies can realize the potential of DeepSeek without taking on regulatory risks. Contact us if you would like to find out more about the secure implementation of open-source AI – we will support you with the implementation.

Author:

Lukas Schiffers


Sources

DeepSeek License

  1. https://cnpd.public.lu/de/actualites/national/2025/02/deepseek.html
  2. https://gottheimer.house.gov/posts/release-gottheimer-lahood-introduce-new-bipartisan-legislation-to-protect-americans-from-deepseek
  3. https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/10097450
  4. https://www.deepseek.com/
  5. https://www.theguardian.com/technology/2025/jan/28/chinese-ai-chatbot-deepseek-censors-itself-in-realtime-users-report?CMP=share_btn_url
  6. https://status.deepseek.com/
  7. https://www.wiz.io/blog/wiz-research-uncovers-exposed-deepseek-database-leak
  8. https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github/
  9. https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog-overview#content-safety-for-models-deployed-via-serverless-apis
  10. https://aws.amazon.com/de/blogs/aws/deepseek-r1-models-now-available-on-aws/
  11. https://ollama.com/library/deepseek-r1:671b