On February 13, 2025, researchers reported that the Chinese generative AI DeepSeek failed multiple security tests, raising concerns about its safety for users. Conducted by AppSOC, a Silicon Valley security provider, the tests revealed significant vulnerabilities, including jailbreaking and malware generation, which could pose risks to organizations considering its use.
- DeepSeek failed multiple security tests.
- Jailbreaking and malware generation are concerns.
- Experts advise caution for corporate use.
- DeepSeek is not comparable to ChatGPT.
- Risk score of 8.3/10 indicates high risk.
- Avoid using DeepSeek for sensitive data.
DeepSeek’s recent security assessment highlighted alarming failure rates in critical areas. Experts noted that the AI could be manipulated to generate harmful code, which poses serious risks for users. David Reid, a cyber security expert, emphasized that the results are particularly concerning because they suggest that DeepSeek could be exploited to produce malware.
Key findings from the AppSOC tests include:
- High failure rates in jailbreaking attempts.
- Vulnerabilities to injection attacks.
- Capability to generate actual malware.
Anjana Susarla, a responsible AI specialist, cautioned organizations against using DeepSeek in any applications involving sensitive information. She stated that while DeepSeek may offer similar functionalities to established models like ChatGPT, its security shortcomings make it unsuitable for corporate use. AppSOC assigned DeepSeek a risk score of 8.3 out of 10, recommending against its deployment in enterprise settings.
The findings from the DeepSeek assessment underscore the importance of thorough security evaluations for AI technologies. As organizations increasingly rely on generative AI, understanding the potential vulnerabilities is crucial for maintaining data integrity and user safety.