The arrival of an unexpected, highly functional AI platform from a Chinese developer, at a fraction of the cost of its competitors, has raised a variety of questions for users and organizations alike. These include whether DeepSeek poses a cybersecurity risk – or a greater risk – compared with existing platforms originating from other markets.

A little over a month ago, technology shares plummeted in value as the artificial intelligence (AI) industry was sideswiped by the appearance of a new, free AI chatbot based on the DeepSeek-R1 model, from a vendor founded less than two years earlier. From a standing start, the DeepSeek mobile app had been downloaded an estimated 2.6 million times by January 28, 2025, firmly pegging it to the top of app store download league tables around the world.

DeepSeek’s launch came almost immediately after the announcement by U.S. President Donald Trump of a proposed $500 billion investment fund for AI development, though as yet there is no evidence to suggest that this timing was anything more than a coincidence – not least because spinning up the processing power to support millions of users cannot be achieved overnight.

DeepSeek’s Rapid Ascension

Various theories exist about the reasons for DeepSeek’s meteoric rise. Simply having an alternative to the behemoth that is OpenAI’s ChatGPT (and, by association, to the perceived leading position the U.S. presently holds in the AI industry) is an attractive concept to many. The fact that DeepSeek is free to subscribe to and use also goes a long way, while the open-source approach taken by the vendor has received praise from users and the wider industry.

The big news with DeepSeek is, however, what the new R1 model cost – or at least is claimed to have cost – to build and train. According to a paper from the vendor, the figure was circa $5.6 million, a fraction of the $63 million claimed cost of training OpenAI’s GPT-4. A significant factor in the relatively low cost is that DeepSeek, with its servers based in China, has no access – thanks to U.S. export regulations – to the high-powered processors preferred for AI training. These rules, which have evolved over many years, were reinforced in October 2022 and again in 2023 with specific limitations on the export of microprocessors particularly suited to AI applications; the latter change specifically quoted a U.S. government report from February 2023: “China is rapidly expanding and improving its artificial intelligence (AI) and big data analytics capabilities, which could expand beyond domestic use”.

The commercial upshot of the launch of DeepSeek was profound. Nvidia was universally reported as suffering the biggest-ever single-day loss in market value (US$539 billion, or 17% of its value), but this was a blatant case of the company being a victim of its own success: a 17% fall only equates to more than half a trillion dollars when the starting market capitalization exceeds $3 trillion. Nvidia was far from the only AI-relevant company whose share price was severely dented – others included Oracle (13.8%), Broadcom (17.4%), and Marvell (19%). And it should be noted that the affected companies bounced back in subsequent days, thanks to share dealers knowing an unexpected bargain when they saw one.

Cybersecurity Implications for DeepSeek

What potential impacts could DeepSeek have on the cybersecurity world in which we work?

First, trust. Returning to the open-source nature mentioned earlier, there is unsurprisingly a school of thought that considers the open approach essential as an offset to the inherent secrecy and opacity the world has come to expect from China. However, it should be noted that only some elements of the platform are open source right now. Any academics reading this article will no doubt already be wondering when formal peer reviews will appear for the papers DeepSeek has produced, in order to verify the claims they make.

On a more basic level, there is speculation in various circles about whether DeepSeek has used other AI systems’ output as a shortcut in building its own. Sure enough, while writing this article we asked the same questions of various AI models and got very similar answers, worded slightly differently. Had we asked a number of informed people the same set of questions, the responses would also have contained largely the same facts but, in the same manner, expressed differently and in a different order. For a bit of fun, we asked ChatGPT what it thinks on the subject; it responded: “DeepSeek primarily develops and uses its own large language models (LLMs), such as DeepSeek-V2, to generate answers. However, it's unclear whether they integrate external AI engines (like OpenAI’s GPT, Google Gemini, or other models) for specific tasks … without direct confirmation, we can’t rule out the possibility of them leveraging other AI systems in some capacity”.
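As a purely illustrative aside, the kind of side-by-side comparison we describe can be sketched in a few lines of Python. Everything below is an assumption for demonstration purposes: the answers dictionary contains invented responses, and in practice you would capture real output from each model’s own interface or API. Note, too, that lexical similarity alone proves nothing about whether one model was trained on another’s output – similar facts naturally produce similar sentences.

```python
# A minimal sketch of comparing answers from different AI models for
# lexical similarity. The responses below are invented for illustration;
# real answers would be captured from each model's own interface or API.
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical answers to the same prompt from three different models.
answers = {
    "model_a": "DeepSeek develops its own large language models, such as DeepSeek-V2.",
    "model_b": "DeepSeek primarily builds and trains its own LLMs, for example DeepSeek-V2.",
    "model_c": "The vendor develops proprietary large language models like DeepSeek-V2.",
}

# Pairwise similarity ratio: 0.0 means no shared wording, 1.0 means identical.
for (name_a, text_a), (name_b, text_b) in combinations(answers.items(), 2):
    ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
    print(f"{name_a} vs {name_b}: {ratio:.2f}")
```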

One of the key factors in trust in AI is copyright. Leaving aside whether or not DeepSeek is dipping into other models, there is constant potential for copyright infringement if you quote the output of an AI model without properly understanding the sources from which that output was derived. There is also the ever-present potential for the answers a model gives us simply to be wrong: plenty of the content out there on the internet is inadvertently incorrect or deliberately misleading, and this incorrect data inevitably filters through to the outputs of the AI models that use it.

All AI Presents a Risk

This is the point: leaving aside the trust element, the issues DeepSeek presents to us in cybersecurity are really no different from those of any other public AI model. Why do organizations block access to AI sites? Not because the answers might be wrong – decent corporate controls around researching sources and verifying accuracy can help with that, plus we had to check what search engines and online sources told us long before AI became a thing. No: we block AI sites because of the risk of a non-zero number of our users typing: “Please write me a sales report with predictions for the next four quarters” and uploading sensitive corporate data into the wild blue yonder of the internet, for consumption by an unknowable number of competitors and nation states.
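To make that risk concrete, here is a deliberately simplified sketch of the kind of pre-submission check a data loss prevention (DLP) control might apply before a prompt leaves the corporate network. The patterns, the check_prompt helper, and the example prompt are all hypothetical illustrations, not any real product’s policy; genuine DLP tooling is far more sophisticated.

```python
import re

# Toy patterns standing in for a real DLP policy; the labels and regexes
# here are illustrative assumptions, not a complete or robust rule set.
SENSITIVE_PATTERNS = {
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "classification marking": re.compile(r"\b(confidential|internal only|trade secret)\b", re.I),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = ("Please write me a sales report with predictions for the next "
          "four quarters - internal only figures: 4111 1111 1111 1111")
hits = check_prompt(prompt)
if hits:
    print("Blocked before leaving the network:", ", ".join(hits))
else:
    print("No obvious sensitive markers found (which proves nothing).")
```

The real point, of course, is that no pattern list can catch every piece of sensitive corporate data a user might paste into a public AI prompt – which is precisely why many organizations opt to block the sites outright.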

So, then: as cybersecurity professionals we must be mindful of DeepSeek. While this mindfulness should perhaps be a little more focused than for public AI models that are not based in one of the world’s most secretive, government-driven countries, most of the threats are similar across all the alternative platforms open to us. We know that bad actors have used AI models to commit and assist attacks, precisely because these models are so incredibly powerful and useful.

There have been reports of attempted cyber-attacks against DeepSeek itself, and despite DeepSeek’s rapid rise to stardom the other engines have not exactly gone away (ChatGPT claims to have 300 million active users a week, for instance) – the fact that they have not yet fallen victim to an attack does not mean they never will. Perhaps, then, we should follow the age-old tradition of “assume breach” and treat all the AI sites with a similar level of skepticism.
