Artificial Intelligence
Cloudflare Launches Tool to Block AI Bots

Cloud services giant Cloudflare is taking a stand against rogue AI bots that scrape website data for training models, with a newly launched free tool aimed at combating the growing problem. Some AI vendors, including Google, OpenAI, and Apple, let website owners block their data-scraping bots through robots.txt directives. However, as Cloudflare points out, these directives are voluntary and often ignored, leaving website owners exposed.
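To see what the robots.txt mechanism actually expresses, the minimal Python sketch below checks a site's robots.txt against the crawler user-agent tokens the vendors have published (GPTBot for OpenAI, Google-Extended for Google, Applebot-Extended for Apple; treat the exact tokens as assumptions to verify against vendor documentation):

    # Minimal sketch: report what a site's robots.txt declares for AI crawlers.
    # The user-agent tokens are assumptions drawn from vendor documentation.
    from urllib.robotparser import RobotFileParser

    AI_CRAWLERS = ["GPTBot", "Google-Extended", "Applebot-Extended"]

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    for agent in AI_CRAWLERS:
        verdict = "allowed" if rp.can_fetch(agent, "https://example.com/") else "blocked"
        print(f"{agent}: {verdict}")

Crucially, this only shows what the file declares; nothing forces a scraper to honour it, which is exactly the gap Cloudflare's network-level detection is meant to close.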
To address this, Cloudflare has developed advanced bot-detection models trained specifically to identify AI bots. These models analyse traffic patterns and behaviour, including attempts to mimic human web browsing activity, allowing them to catch even evasive scraper bots. Cloudflare has also implemented a reporting system through which website owners can flag suspected AI bots and crawlers, and it plans to update its blacklist continuously based on user reports and manual investigations.
The rise of powerful generative AI models has fueled a massive demand for training data. This has led to a surge in AI scraper bots, which often operate without permission from, or compensation for, the sites they collect from. Many websites are opting to block these bots entirely: studies show that a significant share of top websites block crawlers used by leading AI companies. However, some vendors appear to disregard these blocks, prioritizing data collection over site owners' consent.
Blocking all bots can have unintended consequences. Some AI tools, like Google’s AI Overviews, exclude websites that block specific crawlers. This can limit valuable referral traffic for website owners. Cloudflare’s tool offers a potential solution, but its effectiveness hinges on the accurate detection of these clandestine AI bots. The ongoing battle between website owners and AI companies highlights the need for a clearer regulatory framework to govern data collection practices in the AI training landscape.
Artificial Intelligence
DeepSeek Popularity Exploited in Latest PyPI Attack

The Supply Chain Security team at Positive Technologies’ Expert Security Center (PT ESC) discovered and neutralised a malicious campaign in the Python Package Index (PyPI) repository. This attack was aimed at developers, ML engineers, and anyone seeking to integrate DeepSeek into their projects.
The attacker’s account, created in June 2023, remained dormant until January 29, when the malicious packages deepseeek and deepseekai were registered. Once installed, the packages registered console commands; when those commands were executed, they stole sensitive user data, including information about the victim’s machine and environment variables, which often contain database credentials and access keys for infrastructure resources. The attackers used Pipedream, a popular developer integration platform, as their command-and-control server to receive the stolen information.
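Environment variables are a favoured target because developer machines and CI systems routinely hold credentials in them. The defensive Python sketch below (the name markers are heuristic assumptions) shows how easily such exposure can be enumerated:

    # Defensive sketch: list environment variables whose names suggest secrets,
    # to gauge what a malicious package could exfiltrate. The markers are
    # heuristics, not an exhaustive list; values are deliberately not printed.
    import os

    MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "CREDENTIAL")

    for name in sorted(os.environ):
        if any(marker in name.upper() for marker in MARKERS):
            print(name)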
Stanislav Rakovsky, Head of Supply Chain Security at PT ESC, explained, “Cybercriminals are always looking for the next big thing to exploit, and DeepSeek’s popularity made it a prime target. What’s particularly interesting is that the malicious code appears to have been generated with the help of an AI assistant, based on comments within the code itself. The malicious packages were uploaded to the popular repository on the evening of January 29.”
Given the heightened interest in DeepSeek, the attack could have claimed numerous victims had the malicious activity gone unnoticed for longer. Experts at Positive Technologies strongly recommend extra scrutiny of new and unfamiliar packages before installing them.
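One practical check is a package's age: both malicious packages here were registered on the day of the attack. The sketch below queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json) for the earliest file upload; the endpoint and field names follow the public API, but verify them before relying on this:

    # Minimal vetting sketch: report how long ago a package's first file was
    # uploaded to PyPI. Very young packages whose names resemble popular
    # projects (deepseeek vs deepseek) deserve extra scrutiny.
    import json
    from datetime import datetime, timezone
    from urllib.request import urlopen

    def first_upload(package: str) -> datetime:
        with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
            releases = json.load(resp)["releases"]
        return min(
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in releases.values()
            for f in files
        )

    age = datetime.now(timezone.utc) - first_upload("requests")
    print(f"first upload was {age.days} days ago")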
Artificial Intelligence
SentinelOne to Spotlight AI-Driven Cybersecurity at LEAP 2025

SentinelOne has announced its participation at LEAP 2025, alongside its distributor, AlJammaz Technologies. The company will showcase its AI-powered cybersecurity portfolio, including advanced EDR, XDR, and ITDR solutions designed to deliver autonomous protection against evolving cyber threats.
SentinelOne’s solutions align with the Kingdom’s strategic priorities by offering proactive AI-driven protection for critical infrastructure, enterprises, and government entities. The company’s Singularity platform, known for its real-time, AI-driven threat detection, response, and prevention, will be at the centre of its presence at the exhibition. The platform enables enterprises to protect their endpoints, cloud environments, and identity layers, allowing them to innovate confidently amid an evolving threat landscape.
Speaking on their participation, Meriam ElOuazzani, Senior Regional Director, META at SentinelOne, said, “Cybersecurity remains central to progress with Saudi Vision 2030’s digital leadership and economic goals, and our solutions empower businesses to outpace evolving threats and fuel growth. By participating at LEAP, we aim to engage with key stakeholders in the tech ecosystem, explore new partnerships, and demonstrate how our solutions are reshaping workforce capabilities and the future of digital resilience.”
SentinelOne’s AI strategy focuses on delivering autonomous, real-time protection by leveraging machine learning and behavioural AI. This ensures businesses can detect, mitigate, and remediate cyberattacks faster and more effectively than traditional solutions. Senior executives from SentinelOne will be onsite at the AlJammaz Executive Lounge in Hall 1 to share insights on AI-driven security strategies and the future of autonomous cybersecurity. Visitors can also experience live demonstrations of the Singularity platform.
Artificial Intelligence
DeepSeek-R1 AI Poses 11x Higher Harmful Content Risk

The launch of DeepSeek’s R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion from stock markets. Trump advisor and tech venture capitalist Marc Andreessen described the release as “AI’s Sputnik moment,” underscoring the national security concerns surrounding the Chinese AI model.
However, new red teaming research by Enkrypt AI, the world’s leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. The model was also found to be vulnerable to manipulation, allowing it to assist in the creation of chemical, biological, and cyber weapons, posing significant global security concerns.
Compared with other models, the research found that DeepSeek’s R1 is:
- 3x more biased than Claude 3 Opus
- 4x more vulnerable to generating insecure code than OpenAI’s o1
- 4x more toxic than GPT-4o
- 11x more likely to generate harmful output than OpenAI’s o1
- 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s o1 and Claude 3 Opus
Sahil Agarwal, CEO of Enkrypt AI, said, “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”
The model exhibited the following risks during testing:
- BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
- HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
- TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude 3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
- CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s o1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
- BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
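Figures like these are attack success rates from an adversarial test harness: batches of hostile prompts are sent to the model and each response is scored for policy violations. The minimal Python sketch below illustrates the pattern; query_model and violates_policy are hypothetical placeholders, not Enkrypt AI's actual tooling:

    # Red-teaming harness sketch: a figure such as "83% of bias tests produced
    # discriminatory output" is the fraction of adversarial prompts whose
    # responses broke policy. Both callables are hypothetical placeholders.
    from typing import Callable

    def attack_success_rate(
        prompts: list[str],
        query_model: Callable[[str], str],
        violates_policy: Callable[[str], bool],
    ) -> float:
        # Count responses that break policy, then normalise by prompt count.
        failures = sum(violates_policy(query_model(p)) for p in prompts)
        return failures / len(prompts)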
Sahil Agarwal concluded, “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”