On February 13, 2025, researchers revealed that DeepSeek, the Chinese generative AI model, failed multiple security tests, raising concerns about its safety for users. The tests, conducted by AppSOC, a Silicon Valley security provider, uncovered serious vulnerabilities, including susceptibility to jailbreaking and the ability to generate malware.
- DeepSeek failed multiple security tests.
- Jailbreaking and malware generation concerns raised.
- Experts warn against using DeepSeek in enterprises.
- Risk score of 8.3/10 indicates high vulnerability.
- Cheaper options may compromise user security.
- Does not meet the security standards of established models like ChatGPT.
DeepSeek’s failure in security assessments has drawn attention from cybersecurity experts. David Reid, a cybersecurity expert at Cedarville University, expressed alarm over the results, noting that the AI could produce harmful code. This raises significant concerns for potential users, particularly businesses that handle sensitive information.
AppSOC’s tests revealed significant failure rates in several critical areas (a simplified sketch of this kind of probing follows the list):
- Susceptibility to jailbreaking
- Prompt injection attacks
- Malware generation
These vulnerabilities suggest that organizations considering DeepSeek for corporate applications should proceed with caution. Anjana Susarla of Michigan State University emphasized the potential risks, questioning whether the AI could be manipulated into accessing sensitive company data. She concluded that while DeepSeek may offer functionality similar to established models like ChatGPT, it does not meet the same security standards.
With a risk score of 8.3 out of 10, AppSOC strongly recommends against using DeepSeek in enterprise environments, particularly those involving sensitive data or intellectual property. The findings underscore the importance of ensuring that AI tools meet rigorous security benchmarks before being deployed in any business context.
The failure of DeepSeek in security tests highlights significant concerns for users and organizations. Experts advise caution and thorough evaluation to protect sensitive information from potential vulnerabilities associated with this AI technology.