DeepSeek Fails Critical Security Assessments, Experts Urge Businesses to Act Now

"DeepSeek Fails Security Tests: Experts Urge Immediate Action"

Researchers found that the Chinese AI DeepSeek failed security tests, raising concerns about its ability to generate malware and handle sensitive data.
Alex Chen14 February 2025Last Update :
FOX logo
fox11online.com

On February 13, 2025, researchers reported that the Chinese generative AI DeepSeek failed multiple security tests, raising concerns about its safety for users. Conducted by AppSOC, a Silicon Valley security provider, the tests revealed significant vulnerabilities, including jailbreaking and malware generation, which could pose risks to organizations considering its use.

6 Key Takeaways
  • DeepSeek failed multiple security tests.
  • Jailbreaking and malware generation are concerns.
  • Experts advise caution for corporate use.
  • DeepSeek is not comparable to ChatGPT.
  • Risk score of 8.3/10 indicates high risk.
  • Avoid using DeepSeek for sensitive data.
Fast Answer: DeepSeek, a Chinese generative AI, failed security tests by AppSOC, leading experts to warn businesses of potential risks. The AI’s inability to pass benchmarks raises concerns about its use in corporate environments, particularly regarding sensitive data.

DeepSeek’s recent security assessment highlighted alarming failure rates in critical areas. Experts noted that the AI could be manipulated to generate harmful code, which poses serious risks for users. David Reid, a cyber security expert, emphasized that the results are particularly concerning because they suggest that DeepSeek could be exploited to produce malware.

Key findings from the AppSOC tests include:

  • High failure rates in jailbreaking attempts.
  • Vulnerabilities to injection attacks.
  • Capability to generate actual malware.

Anjana Susarla, a responsible AI specialist, cautioned organizations against using DeepSeek in any applications involving sensitive information. She stated that while DeepSeek may offer similar functionalities to established models like ChatGPT, its security shortcomings make it unsuitable for corporate use. AppSOC assigned DeepSeek a risk score of 8.3 out of 10, recommending against its deployment in enterprise settings.

Notice: Canadian businesses should carefully evaluate the security risks associated with using generative AI tools like DeepSeek, especially in environments handling sensitive data.

The findings from the DeepSeek assessment underscore the importance of thorough security evaluations for AI technologies. As organizations increasingly rely on generative AI, understanding the potential vulnerabilities is crucial for maintaining data integrity and user safety.

Leave a Comment

Your email address will not be published. Required fields are marked *


We use cookies to personalize content and ads , to provide social media features and to analyze our traffic...Learn More

Accept
Follow us on Telegram Follow us on Twitter