Tenable Research discovered that DeepSeek R1 can be used to generate malware, raising concerns about the security risks of AI-powered cybercrime. The researchers tasked DeepSeek R1 with generating keylogger and ransomware samples. Initially, the model refused to comply, but jailbreaking techniques revealed that its safeguards could be easily bypassed.
“Initially, DeepSeek rejected our request to generate a keylogger,” said Nick Miles, staff research engineer at Tenable. “But by reframing the request as an ‘educational exercise’ and applying common jailbreaking methods, we quickly overcame its restrictions.”
Once these guardrails were bypassed, DeepSeek was able to:
- Generate a keylogger that encrypts logs and stores them discreetly on a device
- Produce a ransomware executable capable of encrypting files
The broader concern raised by this research is that GenAI has the potential to scale cybercrime. While DeepSeek’s output still requires manual refinement to function effectively, it lowers the barrier for individuals with little to no coding experience to explore malware development. By generating foundational code and suggesting relevant techniques, AI models like DeepSeek could significantly accelerate the learning curve for novice cybercriminals.
“Tenable’s research highlights the urgent need for responsible AI development and stronger guardrails to prevent misuse. As AI capabilities evolve, organisations, policymakers, and security experts must work together to ensure that these powerful tools do not become enablers of cybercrime,” said Miles.