Artificial Intelligence

ESET’s New AI Assistant Streamlines Threat Detection and Response


ESET has introduced ESET AI Advisor, an innovative generative AI-based cybersecurity assistant that transforms incident response and interactive risk analysis. First showcased at RSA Conference 2024, the new solution is now available as part of the ESET PROTECT MDR Ultimate subscription tier and ESET Threat Intelligence.

Unlike typical generative AI assistants and other vendor offerings that focus on peripheral features such as administration or device management, ESET AI Advisor integrates seamlessly into the day-to-day operations of security analysts, conducting in-depth analysis. Building on more than two decades of ESET’s expertise in AI-driven endpoint protection, the offering provides detailed incident data and SOC-team-level advisory. This is a game-changer for companies with limited IT resources that want to take advantage of advanced Extended Detection and Response (XDR) solutions and threat intelligence feeds.

“As cybersecurity threats become increasingly sophisticated, ESET remains committed to providing cutting-edge solutions that address these challenges. The ESET AI Advisor module represents a significant leap forward in our mission to close the cybersecurity skills gap and empower organizations to safeguard their digital assets effectively,” said Juraj Malcho, Chief Technology Officer at ESET.

One of the primary benefits of the new solution is its ability to help close the cybersecurity skills gap. Security analysts of all skill levels can use ESET AI Advisor for interactive risk identification, analysis, and response, with findings presented in an easily understandable format. The user-friendly interface makes sophisticated threat data actionable even for less experienced IT and security professionals.

The ESET AI Advisor also excels at facilitating faster decision-making during critical incidents. Security analysts can simply consult the ESET AI Advisor to understand the specific threats their environment faces. Leveraging extensive XDR-collected data, the ESET AI Advisor identifies and analyzes potential malware threats, providing intuitive insights into their behaviour and impact. It assists in recognizing phishing attempts and advises users on how to avoid falling victim to fraudulent emails or websites. By monitoring network traffic, the ESET AI Advisor can flag unusual or suspicious behaviour, helping security teams take appropriate action. Its ability to automate repetitive tasks is an additional advantage: by managing routine processes such as data collection, extraction, and basic threat detection, it allows security teams to focus on more strategic initiatives.

In ESET Threat Intelligence, the new module will help researchers analyze vast quantities of unique APT reports and understand the latest developments in the world of cyber threats. With its conversational prompts and interactive dialogue, ESET AI Advisor empowers organizations to analyze and mitigate threats effortlessly and to fortify their cybersecurity posture.

Artificial Intelligence

DeepSeek Popularity Exploited in Latest PyPI Attack


The Supply Chain Security team at Positive Technologies’ Expert Security Center (PT ESC) discovered and neutralised a malicious campaign in the Python Package Index (PyPI) repository. This attack was aimed at developers, ML engineers, and anyone seeking to integrate DeepSeek into their projects.

The attacker’s account, created in June 2023, remained dormant until January 29, when the malicious packages deepseeek and deepseekai were registered. Once installed, these packages would register console commands. When those commands were executed, the packages began stealing sensitive user data, including information about the victim’s computer and environment variables, which often contain database credentials and access keys to various infrastructure resources. The attackers used Pipedream, a popular developer integration platform, as their command-and-control server to receive the stolen information.
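To illustrate the mechanism PT ESC describes, the sketch below shows how a Python package can register a console command that executes arbitrary package code when run. Every name here is a hypothetical stand-in, and the exfiltration step is deliberately omitted; this is a schematic of where the reported data theft would occur, not a reconstruction of the actual packages.

    # lookalike.py -- illustrative sketch only; all names are hypothetical.
    # A package declares a console command via an entry point, e.g. in
    # pyproject.toml:
    #
    #   [project.scripts]
    #   deepseek = "lookalike:main"
    #
    # so that running `deepseek` after installation executes main() below.
    import os
    import platform

    def main():
        # The packages PT ESC analyzed harvested host details and environment
        # variables, which often hold database credentials and access keys...
        harvested = {
            "host": platform.node(),
            "os": platform.platform(),
            "env": dict(os.environ),
        }
        # ...and sent them to a Pipedream endpoint acting as the C2 server.
        # (The actual exfiltration request is deliberately omitted here.)
        return harvested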

Stanislav Rakovsky, Head of Supply Chain Security at PT ESC, explained, “Cybercriminals are always looking for the next big thing to exploit, and DeepSeek’s popularity made it a prime target. What’s particularly interesting is that the malicious code appears to have been generated with the help of an AI assistant, based on comments within the code itself. The malicious packages were uploaded to the popular repository on the evening of January 29.”

Given the heightened interest in DeepSeek, this attack could have claimed numerous victims had the malicious activity gone unnoticed for longer. Experts at Positive Technologies strongly recommend paying close attention to new and unfamiliar packages before installing them.
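As a practical complement to that advice, the following is a minimal sketch of one such check, assuming the public PyPI JSON API and an arbitrary 90-day threshold: it looks up when a package first appeared on PyPI before you decide to install it.

    import json
    import sys
    import urllib.request
    from datetime import datetime, timedelta, timezone

    def first_upload(package: str) -> datetime:
        # Query PyPI's public JSON API for the package's release history.
        url = f"https://pypi.org/pypi/{package}/json"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        # Collect the upload timestamp of every published file and return
        # the earliest one (raises ValueError if nothing has been uploaded).
        times = [
            datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
            for files in data["releases"].values()
            for f in files
        ]
        return min(times)

    if __name__ == "__main__":
        pkg = sys.argv[1]
        age = datetime.now(timezone.utc) - first_upload(pkg)
        if age < timedelta(days=90):
            print(f"WARNING: {pkg} first appeared on PyPI only {age.days} days ago")
        else:
            print(f"{pkg} has been on PyPI for {age.days} days")

Since both malicious packages in this campaign were registered and uploaded on January 29, a simple recency check of this kind would have flagged them immediately.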


Artificial Intelligence

SentinelOne to Spotlight AI-Driven Cybersecurity at LEAP 2025


SentinelOne has announced its participation at LEAP 2025, alongside its distributor, AlJammaz Technologies. The company will showcase its AI-powered cybersecurity portfolio, including advanced EDR, XDR, and ITDR solutions designed to deliver autonomous protection against evolving cyber threats.

SentinelOne’s solutions align with the Kingdom’s strategic priorities by offering proactive AI-driven protection for critical infrastructure, enterprises, and government entities. The company’s Singularity platform, known for its real-time, AI-driven threat detection, response, and prevention, will be at the centre of its presence at the exhibition. The platform enables enterprises to protect their endpoints, cloud environments, and identity layers, allowing them to innovate confidently amidst evolving cyber threats.

Speaking on their participation, Meriam ElOuazzani, Senior Regional Director, META at SentinelOne, said, “Cybersecurity remains central to progress with Saudi Vision 2030’s digital leadership and economic goals, and our solutions empower businesses to outpace evolving threats and fuel growth. By participating at LEAP, we aim to engage with key stakeholders in the tech ecosystem, explore new partnerships, and demonstrate how our solutions are reshaping workforce capabilities and the future of digital resilience.”

SentinelOne’s AI strategy focuses on delivering autonomous, real-time protection by leveraging machine learning and behavioural AI. This ensures businesses can detect, mitigate, and remediate cyberattacks faster and more effectively than traditional solutions. Senior executives from SentinelOne will be onsite at the AlJammaz Executive Lounge in Hall 1 to share insights on AI-driven security strategies and the future of autonomous cybersecurity. Visitors can also experience live demonstrations of the Singularity platform.


Artificial Intelligence

DeepSeek-R1 AI Poses 11x Higher Harmful Content Risk


The launch of DeepSeek’s R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion from stock markets. Trump advisor and tech venture capitalist Marc Andreessen described the release as “AI’s Sputnik moment,” underscoring the national security concerns the Chinese AI model has raised worldwide.

However, new red teaming research by Enkrypt AI, the world’s leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. The model was also found to be vulnerable to manipulation that allows it to assist in the creation of chemical and biological weapons as well as cyberweapons, posing significant global security concerns.

Compared with other models, the research found that DeepSeek’s R1 is:

  1. 3x more biased than Claude-3 Opus
  2. 4x more vulnerable to generating insecure code than OpenAI’s O1
  3. 4x more toxic than GPT-4o
  4. 11x more likely to generate harmful output compared to OpenAI’s O1
  5. 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s O1 and Claude-3 Opus

Sahil Agarwal, CEO of Enkrypt AI, said, “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”

The model exhibited the following risks during testing:

  • BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
  • HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
  • TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
  • CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
  • BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.

Sahil Agarwal concluded, “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”
