Examining the impact of DeepSeek: Democratisation versus data protection

Sam Curry, Global VP & CISO in Residence at Zscaler

The release of the first DeepSeek model sparked the latest wave of the AI rush that has been sweeping industries of all shapes and sizes over the past few years. An open source AI-powered chatbot launched out of China, DeepSeek aims to rival OpenAI’s models and is therefore being viewed as a significant disruptor. But as with any new AI innovation, discussion of its potential benefits must be tempered with consideration of its data privacy impact. With Data Privacy Day just passed, now is a good time to take a closer look at how it stacks up.

Boiled right down, AI centres on handling and enriching data: the more data and computing power an AI engine is fed, the more powerful it becomes. Tools like ChatGPT and DeepSeek rely on data as context for their modelling and outputs. But this raises a number of critical data privacy and protection questions. Who controls the data feeding these tools? Who else has access to it? And does the data carry any in-built bias that will be reflected in the tool’s output?

Democratising the AI industry

DeepSeek claims not only to process massive amounts of data efficiently, but to do so at a substantially lower cost – a claim that was enough to throw stock markets into turmoil. For years, US companies have dominated the digital innovation space, and AI was no exception: for the first two years of the AI rush, many of the leading AI companies, such as OpenAI, were American. Is it any surprise, then, that these incumbents view this Chinese newcomer as a massive threat to their previously unrivalled AI land grab?

By showing that Silicon Valley companies are no longer the only ones capable of shaping the future of this technology, DeepSeek’s entrance is expected to have a positively democratising effect on the AI market. But its open-source nature has to be evaluated carefully – and this is already proving to be a challenge. While the tool’s code is open, its training data sources remain largely opaque, making it difficult to assess potential biases or security risks. More on that to follow.

Rebalancing supply vs. demand

What nevertheless makes DeepSeek so powerful is its unique level of efficiency. The biggest issue Silicon Valley has faced in the wake of the most recent AI rush is the enormous processing power required to support the many, many chatbots and applications being developed, and the soaring energy consumption that has resulted. DeepSeek has demonstrated that AI can consume computing power far more efficiently, and therefore burn through less energy. This could have big ramifications for future developments. The compute curve was previously approaching an asymptote, governed by rising supply constraints and costs, which in turn drove market caps for companies in the industry. With DeepSeek, the balance between supply and demand has shifted.

But this rebalance is likely only temporary. DeepSeek will also act as a catalyst, speeding up demand for new applications. Organisations will accelerate AI innovation and – barring any breakthroughs in energy production or computing – the industry will once again reach the point where energy and compute capacity hits the same asymptote as before.

Setting firm data protection foundations

In the rush to roll out new AI-driven applications like DeepSeek, organisations must not forget to ensure they have solid data protection foundations. With any new AI tool – and particularly with an open source tool that provides limited detail on its training resources – there are governance, privacy, security, legal, social and ethical considerations to weigh alongside the promise of improved efficiency and performance. Organisations have to bring all of these components into alignment before pushing forward; each dimension requires not only a framework and deliberation, but articulation and clarity as well.

And the work doesn’t stop at launch. As organisations accelerate the rate at which information is fed into their existing AI tools to supercharge adoption, they must review the data sets for bias and be transparent about what data they are collecting and using. The final step is to evaluate not only the outputs of their AI tools, but also the supply chain that has access to them. The thinking has to be 360-degree.

Hitting AI gold

Whether rolling out DeepSeek or the next tool that follows, the companies that invest time and effort in their AI governance and data protection mechanisms will be the first to hit AI gold. They will have mature AI policies in place that dictate who they work with and how they treat their data, along with ethical guidelines and oversight of AI projects, enabling the departments that are eagerly evaluating new AI tools and functionality to do so without risk.

Sam Curry

Sam Curry is VP & CISO in Residence at Zscaler. With over three decades as an entrepreneur, information security expert and executive at companies including RSA, Arbor Networks, CA and McAfee, Sam is dedicated to empowering defenders in cyber conflict and fulfilling the promise of security, enabling a safe, reliable, connected world.
