Artificial Intelligence
Qualys Boosts Web App Security with AI and API Focus

Qualys has announced the launch of its API security platform that leverages AI-powered scanning and deep learning-based web malware detection to secure web apps and APIs across the entire attack surface, including on-premises web servers, databases, hybrid and multi-cloud environments, API gateways, containerized architectures, and microservices.
APIs are integral to digital transformation initiatives across industries. The latest data indicates that API traffic now accounts for over 83% of web traffic, highlighting the critical role APIs play in modern web applications built on microservices, cloud, and hybrid environments. However, this also underscores the vulnerabilities that accompany their widespread adoption.
“Many organizations use a variety of security tools, such as SAST, DAST, SCA, or point solutions for API security that often operate in isolation, without a unified platform to integrate their findings. Moreover, the absence of integration between these tools leads to a fragmented view of the application security posture and results in uncoordinated efforts and gaps in security coverage. Similarly, SAST & DAST tools offer limited coverage for API-specific issues and focus predominantly on code vulnerabilities,” commented Kunal Modasiya, Vice President, Product Management, CyberSecurity Asset Management, Qualys. “Mainly, these solutions fail to extend their assessment to the runtime or environmental threats where APIs operate and provide visibility into the vulnerabilities of the underlying infrastructure hosting these APIs, leaving significant security gaps at the network and host levels.”
Qualys API Security allows organizations to:
- Measure API risks across all attack surfaces with a unified view of API security by discovering & monitoring every API asset across diverse environments, enabling better decision-making and faster response times.
- Communicate API risks like OWASP API Top 10 vulnerabilities & drift from OpenAPI specs with real-time threat detection and response, minimizing the risk window and enhancing overall security.
- Eliminate API risks with integrated workflows supporting Shift-Left & Shift-Right practices, bridging the gap between IT and security teams, promoting seamless collaboration, and improving operational efficiency.
Key features of Qualys API Security
- Comprehensive API discovery and inventory management
Qualys WAS with API Security automatically identifies and catalogues all APIs within an organization’s network, including internal, external, undocumented, rogue, and shadow APIs. Whether APIs are deployed in multi-cloud environments (AWS, Azure), containerized architectures (Kubernetes), or API gateways (Apigee, Mulesoft), Qualys’ continuous discovery ensures an updated inventory across all platforms, preventing unauthorized access points and shadow APIs.
- API vulnerability testing & AI-powered scanning
Qualys provides comprehensive API vulnerability testing using 200+ prebuilt signatures to detect API-specific security vulnerabilities, including those listed in the OWASP API Top 10, such as missing rate limiting, authentication & authorization issues, PII collection, and sensitive data exposure. Moreover, for large applications, Qualys combines the power of deep learning and AI-assisted clustering to perform efficient vulnerability scans. This smart clustering mechanism targets critical areas, achieving a 96% detection rate with an 80% reduction in scan time.
- API compliance monitoring
Qualys performs both active and passive compliance monitoring to identify and address any drift or inconsistencies between API implementation and documentation, in adherence to the OpenAPI Specification (OAS v3). Clear, standardized API documentation, in adherence to OAS, ensures that shared documentation is easily understood by recipients, simplifies security assessments and enforcement, and enhances the accuracy of code, benefiting both automated tools and human developers. Qualys also continuously monitors APIs for compliance with industry standards such as PCI-DSS, GDPR, and HIPAA to ensure that APIs remain compliant with evolving regulations, avoiding potential fines and enhancing data protection. A minimal drift-check sketch follows this feature list.
- API risk prioritization with TruRisk
Qualys leverages its proprietary TruRisk scoring system, which integrates multiple factors such as severity, exploitability, business context, and asset criticality to prioritize risks based on overall business impact, ensuring that the most critical vulnerabilities are addressed first. It also categorizes risks based on the OWASP API Top 10, helping organizations focus on the most prevalent and severe API security threats. An illustrative scoring sketch also appears after this list.
- Seamless integration with Shift-Left and Shift-Right workflows
Qualys integrates seamlessly with existing CI/CD tools (e.g., Bamboo, TeamCity, GitHub, Jenkins, Azure DevOps) and IT ticketing systems (e.g., Jira, ServiceNow), supporting both shift-left and shift-right security practices. This facilitates automated security testing and real-time threat detection and response without disrupting development workflows. By bridging the gaps between IT and security teams, Qualys ensures smoother operational transitions, improving API security practices and reducing the risk window.
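The compliance-monitoring feature above centres on detecting drift between what an API actually serves and what its OpenAPI (OAS v3) document declares. The following is a minimal sketch of that idea, not Qualys code: it assumes a parsed OAS v3 document and a set of (method, path) pairs observed in gateway or access logs, and reports undocumented and unused operations.

```python
"""Minimal sketch of OpenAPI (OAS v3) drift detection.

Illustrative only, not Qualys code. `spec` is assumed to be a parsed OAS v3
document (e.g. loaded with a YAML parser) and `observed` a set of
(METHOD, path) pairs collected from gateway or access logs.
"""

HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}


def documented_operations(spec: dict) -> set[tuple[str, str]]:
    """Extract every (METHOD, path) pair declared under `paths` in the spec."""
    ops = set()
    for path, item in spec.get("paths", {}).items():
        for method in item:
            if method.lower() in HTTP_METHODS:
                ops.add((method.upper(), path))
    return ops


def drift_report(spec: dict, observed: set[tuple[str, str]]) -> dict[str, set]:
    """Compare documented operations with observed traffic.

    Returns undocumented operations (seen in traffic but absent from the spec,
    i.e. shadow endpoints) and unused ones (documented but never observed).
    """
    documented = documented_operations(spec)
    return {
        "undocumented": observed - documented,
        "unused": documented - observed,
    }


if __name__ == "__main__":
    # Hypothetical data for demonstration.
    spec = {"paths": {"/users": {"get": {}, "post": {}}, "/orders": {"get": {}}}}
    observed = {("GET", "/users"), ("GET", "/orders"), ("DELETE", "/admin/keys")}
    print(drift_report(spec, observed))
    # {'undocumented': {('DELETE', '/admin/keys')}, 'unused': {('POST', '/users')}}
```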
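Similarly, the TruRisk formula itself is proprietary, but the prioritization idea it describes, blending severity, exploitability, business context, and asset criticality into a single ranking, can be shown with a simple weighted score. All weights and field names below are hypothetical.

```python
"""Illustrative risk-prioritization sketch.

Not the TruRisk algorithm; it only demonstrates combining the factors named
above (severity, exploitability, business context, asset criticality) into
one score and sorting findings by it. Weights are made up for the example.
"""

from dataclasses import dataclass


@dataclass
class Finding:
    name: str
    severity: float           # 0-10, e.g. CVSS base score
    exploitability: float     # 0-1, likelihood of exploitation
    business_context: float   # 0-1, data sensitivity / exposure of the API
    asset_criticality: float  # 0-1, importance of the hosting asset


def risk_score(f: Finding) -> float:
    """Hypothetical weighted blend, scaled to 0-100."""
    weights = {"severity": 0.4, "exploitability": 0.3,
               "business_context": 0.15, "asset_criticality": 0.15}
    return round(100 * (weights["severity"] * f.severity / 10
                        + weights["exploitability"] * f.exploitability
                        + weights["business_context"] * f.business_context
                        + weights["asset_criticality"] * f.asset_criticality), 1)


findings = [
    Finding("Broken object level authorization on /orders", 8.1, 0.9, 0.8, 0.7),
    Finding("Verbose error message on /health", 4.3, 0.3, 0.2, 0.3),
]

# Address the highest-scoring findings first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):5.1f}  {f.name}")
```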
Artificial Intelligence
DeepSeek Popularity Exploited in Latest PyPI Attack

The Supply Chain Security team at Positive Technologies’ Expert Security Center (PT ESC) discovered and neutralised a malicious campaign in the Python Package Index (PyPI) repository. This attack was aimed at developers, ML engineers, and anyone seeking to integrate DeepSeek into their projects.
The attacker’s account, created in June 2023, remained dormant until January 29, when the malicious packages deepseeek and deepseekai were registered. Once installed, these packages registered console commands; when those commands were executed, the packages began stealing sensitive user data, including information about the victim’s computer and environment variables, which often contain database credentials and access keys to various infrastructure resources. The attackers used Pipedream, a popular developer integration platform, as their command-and-control server to receive the stolen information.
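The campaign leaned on typosquatting: deepseeek and deepseekai are a keystroke away from the name developers expect. As a purely defensive illustration (not the attacker’s code and not PT ESC tooling), the sketch below flags requested package names that sit within a short edit distance of well-known names before installation; the watch-list here is a hypothetical sample.

```python
"""Defensive sketch: flag likely typosquats before installing a package.

Illustrative only. It compares a requested package name against a watch-list
of popular or expected names; the watch-list below is a hypothetical sample.
"""

from difflib import SequenceMatcher

# Hypothetical watch-list of popular / expected package names.
POPULAR = {"deepseek", "requests", "numpy", "pandas", "openai"}


def looks_like_typosquat(name: str, threshold: float = 0.85) -> str | None:
    """Return the popular name this one closely resembles, if any."""
    name = name.lower()
    if name in POPULAR:
        return None  # exact match to a known package is fine
    for known in POPULAR:
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return known
    return None


for candidate in ["deepseeek", "deepseekai", "requsts", "flask"]:
    hit = looks_like_typosquat(candidate)
    if hit:
        print(f"WARNING: '{candidate}' closely resembles '{hit}' - verify before installing")
```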
Stanislav Rakovsky, Head of Supply Chain Security at PT ESC, explained, “Cybercriminals are always looking for the next big thing to exploit, and DeepSeek’s popularity made it a prime target. What’s particularly interesting is that the malicious code appears to have been generated with the help of an AI assistant, based on comments within the code itself. The malicious packages were uploaded to the popular repository on the evening of January 29.”
Given the heightened interest in DeepSeek, this attack could have resulted in numerous victims if the malicious activity had gone unnoticed for longer. Experts at Positive Technologies strongly recommend being more attentive to new and unknown packages.
Artificial Intelligence
SentinelOne to Spotlight AI-Driven Cybersecurity at LEAP 2025

SentinelOne has announced its participation at LEAP 2025, alongside its distributor, AlJammaz Technologies. The company will showcase its AI-powered cybersecurity portfolio, including advanced EDR, XDR, and ITDR solutions designed to deliver autonomous protection against evolving cyber threats.
SentinelOne’s solutions align with the Kingdom’s strategic priorities by offering proactive AI-driven protection for critical infrastructure, enterprises, and government entities. The company’s Singularity platform, known for its real-time, AI-driven threat detection, response, and prevention, will be at the centre of its presence at the exhibition. The platform enables enterprises to protect their endpoints, cloud environments, and identity layers, allowing them to innovate confidently amidst evolving cyber threats.
Speaking on their participation, Meriam ElOuazzani, Senior Regional Director, META at SentinelOne, said, “Cybersecurity remains central to progress with Saudi Vision 2030’s digital leadership and economic goals, and our solutions empower businesses to outpace evolving threats and fuel growth. By participating at LEAP, we aim to engage with key stakeholders in the tech ecosystem, explore new partnerships, and demonstrate how our solutions are reshaping workforce capabilities and the future of digital resilience.”
SentinelOne’s AI strategy focuses on delivering autonomous, real-time protection by leveraging machine learning and behavioural AI. This ensures businesses can detect, mitigate, and remediate cyberattacks faster and more effectively than traditional solutions. Senior executives from SentinelOne will be onsite at the AlJammaz Executive Lounge in Hall 1 to share insights on AI-driven security strategies and the future of autonomous cybersecurity. Visitors can also experience live demonstrations of the Singularity platform.
Artificial Intelligence
DeepSeek-R1 AI Poses 11x Higher Harmful Content Risk

The launch of DeepSeek’s R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion from stock markets. Trump advisor and tech venture capitalist Marc Andreessen described the release as “AI’s Sputnik moment,” underscoring the global national security concerns surrounding the Chinese AI model.
However, new red teaming research by Enkrypt AI, the world’s leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. Additionally, the model was found to be vulnerable to manipulation, allowing it to assist in the creation of chemical, biological, and cyber weapons, posing significant global security concerns.
Compared with other models, the research found that DeepSeek’s R1 is:
- 3x more biased than Claude-3 Opus
- 4x more vulnerable to generating insecure code than OpenAI’s O1
- 4x more toxic than GPT-4o
- 11x more likely to generate harmful output compared to OpenAI’s O1
- 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s O1 and Claude-3 Opus
Sahil Agarwal, CEO of Enkrypt AI, said, “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”
The model exhibited the following risks during testing:
- BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
- HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
- TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
- CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
- BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
Sahil Agarwal concluded, “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”