
Tenable warns of data risks from new AI model DeepSeek
Tenable has issued guidance warning of the data exposure risks associated with DeepSeek, an emerging open-source AI model similar to OpenAI's offerings but lacking comparable safety measures.
According to Tenable, DeepSeek's open-source nature presents a heightened risk because the model is more exposed to exploitation. The concern is underscored by the 7.6 billion data records exposed through misconfigurations last year.
Satnam Narang, Senior Staff Research Engineer at Tenable, acknowledged the model's rapid rise in the tech industry.
Narang said, "DeepSeek has taken the entire tech industry by storm for a few key reasons: first, they have produced an open source large language model that reportedly beats or is on-par with closed-source models like OpenAI's o1. Second, they appear to have achieved this using less intensive computing power due to limitations on the procurement of more powerful hardware through export controls."
He added, "We don't know yet how quickly DeepSeek's models will be leveraged by cybercriminals. Still, if the past is prologue, we'll likely see a rush to leverage these models for nefarious purposes."
In response to these concerns, Tenable has provided a set of recommendations to help organisations safeguard themselves against AI-related threats.
Tenable's first recommendation is to take a cautious approach to adopting DeepSeek: organisations should thoroughly evaluate potential risks such as data leakage and put adequate safeguards in place to prevent misuse.
Tenable also advises continuous monitoring for anomalies and unauthorised AI use, using solutions that detect such activity quickly and allow organisations to remediate exposures promptly, thereby strengthening overall security.
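What that monitoring looks like in practice will vary by environment. As a purely illustrative sketch (the log format, the watched hostnames, and the parsing are assumptions for this example, not Tenable's method or product), an organisation could flag outbound requests to known DeepSeek API endpoints in its proxy logs:

```python
import re
from collections import Counter

# Hypothetical watchlist of hosts associated with DeepSeek's hosted API;
# a real deployment would maintain this from threat-intelligence feeds.
WATCHED_HOSTS = {"api.deepseek.com", "chat.deepseek.com"}

# Assumed proxy log line format: "<timestamp> <user> <method> <url>"
LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<user>\S+)\s+\S+\s+(?P<url>\S+)$")
HOST = re.compile(r"^https?://(?P<host>[^/:]+)")

def flag_unsanctioned_ai_use(log_lines):
    """Count requests per user to watched AI endpoints."""
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        h = HOST.match(m.group("url"))
        if h and h.group("host") in WATCHED_HOSTS:
            hits[m.group("user")] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "2025-02-03T09:14:02Z alice GET https://api.deepseek.com/v1/chat/completions",
        "2025-02-03T09:15:40Z bob GET https://example.com/index.html",
    ]
    for user, count in flag_unsanctioned_ai_use(sample).items():
        print(f"{user}: {count} request(s) to watched AI endpoints")
```

Alerts like these would feed whatever incident-response process the organisation already runs; the point is rapid detection rather than any particular tooling.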
Another key step is implementing a formal AI governance framework. Tenable recommends establishing an AI governance board to set clear policies for AI usage, development, and monitoring, supported by tools that identify and monitor AI applications, review code, audit models, and verify compliance.
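As one hedged illustration of the "identify AI applications" part of that framework (the package names and file layout below are assumptions for the example, not a Tenable tool), a governance team could scan dependency manifests across its repositories for known AI libraries:

```python
from pathlib import Path

# Hypothetical watchlist of AI-related packages a governance board
# might want surfaced for review; extend to match internal policy.
AI_PACKAGES = {"openai", "transformers", "langchain", "deepseek"}

def find_ai_dependencies(repo_root="."):
    """Yield (file, package) pairs for AI packages in requirements files."""
    for req in Path(repo_root).rglob("requirements*.txt"):
        for line in req.read_text(encoding="utf-8", errors="ignore").splitlines():
            # Strip comments and version pins, e.g. "openai==1.3.0  # chat".
            name = line.split("#")[0].strip()
            name = name.split("==")[0].split(">=")[0].strip().lower()
            if name in AI_PACKAGES:
                yield req, name

if __name__ == "__main__":
    for path, pkg in find_ai_dependencies():
        print(f"{path}: uses AI package '{pkg}' -- flag for governance review")
```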
Finally, Tenable highlights employee education on AI risks and company policy as a critical measure. Training staff on guidelines, potential threats, and specific organisational policies equips them to identify, prevent, and report AI misuse and vulnerabilities.
Tenable's data indicates that misconfigurations and unsecured databases were responsible for 5.8% of breaches in 2023 but were implicated in 74% of exposed records. This highlights the severe impact that misconfigurations can have on sensitive data exposure.
There is growing concern that open-source models like DeepSeek could aid cybercriminals in developing novel malware or exploiting zero-day vulnerabilities. Existing cybercrime tools such as WormGPT, WolfGPT, FraudGPT, EvilGPT, and the newly discovered GhostGPT exemplify this risk.
In this context, Narang concluded, "While it's still early to say, I wouldn't be surprised to see an influx in the development of DeepSeek wrappers, which are tools that build on DeepSeek with cybercrime as the primary function, or see cybercriminals utilise these existing models on their own to further expand their tools to best fit their needs."