Fortifying Digital Defenses: How Generative AI is Transforming Cybersecurity

Maxim Baldakov, Head of Fraud & Financial Crime Solutions – META at Group-IB, paints a stark picture of the evolving cyber landscape, where generative AI is both a powerful ally and a formidable adversary. While organizations are leveraging AI to automate security operations, generate threat detection rules, and refine fraud prevention models, cybercriminals are simultaneously exploiting the same technology for sophisticated attacks.

How is generative AI being utilized to enhance cybersecurity measures today?
One of the most significant applications is in Security Operations Centers (SOCs), where businesses use generative AI to automate low-level tasks, generate incident response recommendations, and compile runbooks based on real-time information from monitoring systems.

Another key use case is the integration of generative AI into SOAR (Security Orchestration, Automation, and Response) systems, where AI helps make decisions and take preventive actions against potential cyberattacks.
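
To make this concrete, here is a minimal sketch of an LLM-assisted triage step of the kind described above. The llm_complete helper and the alert fields are hypothetical placeholders, not any specific vendor API; in a SOAR pipeline the returned recommendation would feed a playbook or an analyst review rather than execute directly.

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM endpoint the SOC uses."""
    raise NotImplementedError  # swap in your provider's client here

def triage_alert(alert: dict) -> str:
    """Ask the model for a severity call and suggested runbook steps."""
    prompt = (
        "You are a SOC assistant. Given this alert, classify severity "
        "(low/medium/high), list likely causes, and suggest runbook steps.\n\n"
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )
    return llm_complete(prompt)

alert = {
    "source": "EDR",
    "rule": "encoded_powershell",
    "host": "ws-0142",
    "cmdline": "powershell -enc SQBFAFgA...",
}
# print(triage_alert(alert))  # output goes to an analyst, not straight to action
```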

Beyond defense, generative AI is advancing offensive cybersecurity simulations. In red teaming exercises, for example, AI is used to generate realistic attack scenarios based on the historical tactics of APT (Advanced Persistent Threat) groups, allowing organizations to test their defenses against sophisticated cyber threats. Additionally, generative AI is playing a promising role in fraud prevention and machine learning model training: it is leveraged to compile synthetic datasets, which are used to train and refine anti-fraud models without exposing real user data.
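
A minimal sketch of the synthetic-data idea follows, assuming an invented two-feature fraud pattern; real anti-fraud pipelines use far richer features and learned generators rather than fixed distributions. The point is that the downstream model never touches real customer records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10_000

# Synthetic transactions: a rare fraud class with larger amounts and
# higher transaction velocity than legitimate traffic.
is_fraud = rng.random(n) < 0.03
amount = np.where(is_fraud, rng.lognormal(7.0, 1.0, n), rng.lognormal(4.0, 1.0, n))
velocity = np.where(is_fraud, rng.poisson(8, n), rng.poisson(2, n))

X = np.column_stack([amount, velocity])
y = is_fraud.astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)

model = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```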

Overall, generative AI is not just optimizing cybersecurity workflows but actively transforming the way organizations detect, prevent, and respond to cyber threats.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI is reshaping the cybersecurity and fraud threat landscapes, with criminals increasingly leveraging deepfakes, voice cloning, and large language model (LLM) technologies.

In recent months, Group-IB has analyzed cases where fraudsters used deepfake technology to bypass biometric security controls in digital banking, and the media has reported cases where cybercriminals used OpenAI tools to conduct romance and investment scams. In cyberattacks, threat actors employ AI-based obfuscation to evade detection, making malicious payloads harder to trace, which poses a significant threat today.

Beyond generative AI, the rise of AI agents introduces new risks. A recent Group-IB investigation found that AI agents can be repurposed for existing cyber fraud applications such as mass card-testing attacks, reducing the time and effort cybercriminals need to operate globally.

Perhaps the biggest threat from the latest advancements in AI is not just the expanding range of use cases for malicious automation but the lowered barrier to entry that the democratization of AI brings. Now, anyone with internet access can generate malicious code, deploy phishing pages, produce deepfake videos, and launch mass-scale fraud campaigns. Deep technical expertise and knowledge of professional tools are no longer prerequisites for malicious activity, opening the gates of cybercrime to everyone.

How can organizations leverage generative AI for proactive threat detection and response?
The cybersecurity community has accumulated extensive knowledge of the Tactics, Techniques, and Procedures (TTPs) of threat actors and past cyberattacks. Generative AI is now being actively utilized to create new detection mechanisms from that accumulated knowledge.

One of the most common examples is the application of reasoning models to generate Sigma, YARA, or other detection rules from threat intelligence reports. This allows cybersecurity professionals to quickly deploy detection logic in the field, supporting proactive threat detection and response.
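
For example, the target format of such a pipeline might be a Sigma rule like the draft below, built here as a plain Python structure and dumped to Sigma's YAML form. The encoded-PowerShell logic is a generic illustration, not output from any particular model, and a detection engineer would still review it before deployment.

```python
import yaml  # PyYAML

# Draft Sigma rule of the kind a model might emit from a threat intel report.
rule = {
    "title": "Suspicious Encoded PowerShell Command",
    "status": "experimental",
    "logsource": {"category": "process_creation", "product": "windows"},
    "detection": {
        "selection": {
            "Image|endswith": "\\powershell.exe",
            "CommandLine|contains": ["-enc", "-EncodedCommand"],
        },
        "condition": "selection",
    },
    "level": "medium",
}
print(yaml.safe_dump(rule, sort_keys=False))
```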

Similarly, in cyber fraud prevention, AI models generate new detection logic and proactively test existing controls by producing attack scenarios from reports on specific fraud groups, schemes, and other available cyber fraud intelligence. These models can also serve as a source of synthetic training data for more conventional machine learning models.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
In cybersecurity, AI is increasingly used to automate decisions and prevent fraud, which enhances security but introduces risks. AI can block bank accounts or revoke critical access during potential cyberattacks, potentially disrupting businesses and individuals. Over-reliance on AI without human oversight can lead to mistakes, unexpected behavior, and unnecessary harm. Balancing automation with human control is crucial to avoiding these risks.

Another concern is data sovereignty. AI systems rely on large amounts of data, often stored or processed across international borders. This raises legal and ethical questions about compliance with local privacy laws. Organizations must ensure AI systems adhere to regulations and data protection standards to prevent security vulnerabilities and unethical misuse.

To address these challenges, AI in cybersecurity must remain a tool that supports—not replaces—human decision-making. Clear regulations, strict oversight, and ethical guidelines are essential to ensuring AI strengthens security without introducing unintended harm or loss of control.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Cybersecurity teams around the world face numerous challenges when integrating generative AI tools into their workflows. One of the major issues is false positives, where AI incorrectly flags legitimate activity as a threat. This can overwhelm security teams with unnecessary alerts or, in extreme cases, even disrupt legitimate business.

For sensitive applications, deploying AI tools on-premises within restricted zones and environments adds considerable complexity. These installations require significant infrastructure investment and strict security measures. Additionally, many cybersecurity teams lack deep AI expertise, making it difficult to support, fine-tune, and administer these tools without constantly relying on external specialists.

Another problem arises from applying general-purpose AI tools to niche fields like cybersecurity or cyber fraud, which can lead to unpredictable behavior. Since many AI models are not specifically designed for these applications, they may misinterpret threats or generate unreliable outputs. Continuous monitoring, tuning, and correction are necessary to ensure the AI functions effectively in these specialized areas.

Another key challenge is data drift and AI manipulation. Without strict validation, attackers can feed misleading information to distort the AI's learning process, producing false threats while real ones are missed. As a consequence, the AI may either allow fraudulent transactions (false negatives) or block legitimate ones (false positives), ultimately reducing its reliability.
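
One standard guard against this kind of drift or poisoning is to compare live feature distributions against a trusted baseline before the data ever reaches the model. Below is a minimal Population Stability Index (PSI) check; the 0.25 alert threshold is a common rule of thumb, not a universal constant.

```python
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a trusted baseline and live traffic."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range live values
    expected = np.histogram(baseline, edges)[0] / len(baseline)
    actual = np.histogram(live, edges)[0] / len(live)
    expected = np.clip(expected, 1e-6, None)   # avoid log(0)
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
baseline = rng.normal(100, 15, 50_000)  # feature values the model was trained on
live = rng.normal(130, 15, 5_000)       # shifted traffic, e.g. fed by an attacker
print(f"PSI = {psi(baseline, live):.3f}  (> 0.25 usually warrants investigation)")
```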

To mitigate these risks, cybersecurity teams must implement strict validation measures, maintain human oversight, and invest in AI expertise to ensure reliable performance.

What role does human-in-the-loop (HITL) oversight play in ensuring generative AI systems effectively manage cybersecurity threats?
Human oversight is essential for ensuring generative AI effectively manages cybersecurity and fraud threats. Experts should continuously validate AI-driven decisions and preventive actions to catch errors such as overfitting, bias, and false positives. Since AI lacks contextual judgment, human monitoring is required.

Moreover, human oversight is absolutely necessary to ensure accountability for automated actions taken by AI. Serious security decisions, such as blocking accounts, revoking access, and halting financial transactions, must be validated to prevent unnecessary disruptions.
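
A minimal sketch of that accountability gate: high-impact actions are queued for analyst approval instead of executing automatically. The action names and confidence threshold here are illustrative assumptions, not any specific product's policy.

```python
from dataclasses import dataclass

HIGH_IMPACT = {"block_account", "revoke_access", "halt_transaction"}

@dataclass
class ProposedAction:
    kind: str
    target: str
    model_confidence: float

def route(action: ProposedAction, review_queue: list) -> str:
    """Auto-execute only low-impact, high-confidence actions; queue the rest."""
    if action.kind in HIGH_IMPACT or action.model_confidence < 0.95:
        review_queue.append(action)  # waits for a human decision
        return "queued_for_review"
    return "auto_executed"

queue: list = []
print(route(ProposedAction("block_account", "acct-991", 0.99), queue))  # queued_for_review
print(route(ProposedAction("rate_limit_ip", "10.0.0.7", 0.97), queue))  # auto_executed
```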

How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Smaller organizations with limited budgets must be pragmatic when adopting generative AI for cybersecurity. Jumping on AI trends without a strong cybersecurity foundation can cause more harm than good. Instead of rushing into AI integration, organizations should first build a solid cybersecurity culture, ensuring that basic security practices, tools, policies, and processes are in place.

Applying AI to cybersecurity use cases is an extremely complex task that requires significant investment in AI expertise and infrastructure. What smaller organizations can consider instead is relying on trusted Managed Security Service Providers (MSSPs) that have both the expertise and the resources to apply AI in cybersecurity.

What are the most notable trends in cyberattacks targeting these systems?
AI-driven cybersecurity and fraud prevention systems are not usually a primary target for threat actors; however, when they protect external customers—as fraud prevention tools do—they become more exposed. One previously observed attack method is flooding the system with fake or malformed data to cause denial of service or to manipulate AI models into overfitting on false patterns, which, as described above, can be harmful.

Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer specifically to govern interactions between AI agents and backend services. This new security layer enables customers to detect and prevent AI bots such as OpenAI's ChatGPT and Perplexity from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
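
A toy version of that attribution step is sketched below, using the crawler tokens named in reporting like the above. The heuristic itself is deliberately simplified; production bot management draws on far more signals than the user-agent string alone.

```python
KNOWN_AI_BOTS = ("GPTBot", "Gemini", "PerplexityBot", "ChatGPT-User")

def classify(user_agent: str) -> str:
    """Split traffic into self-identified AI bots vs. everything else."""
    ua = user_agent.lower()
    if any(bot.lower() in ua for bot in KNOWN_AI_BOTS):
        return "attributed_ai_bot"
    if user_agent.strip() in ("", "-", "Mozilla/5.0"):
        return "generic_or_unidentified"  # most LLM-driven traffic hides here
    return "other"

for ua in (
    "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)",
    "Mozilla/5.0",
    "curl/8.4.0",
):
    print(f"{classify(ua):25} {ua}")
```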

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.

Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:
FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents leaks from tools like ChatGPT.
  3. Enforces zero-trust access for AI systems.

FortiAI processes queries locally, ensuring sensitive data never leaves your network.

SandboxAQ Platform Tackles AI Agent “Non-Human Identity” Threats

SandboxAQ has announced the general availability of AQtive Guard, a platform designed to secure Non-Human Identities (NHIs) and cryptographic assets. This critical security solution arrives as organizations worldwide face increasingly sophisticated AI-driven threats capable of autonomously infiltrating networks, bypassing traditional defenses, and exploiting vulnerabilities at machine speed.

Modern enterprises are experiencing an unprecedented surge in machine-to-machine communications, with billions of AI agents now operating across corporate networks. These digital entities – ranging from legitimate automation tools to potential attack vectors – depend on cryptographic keys, digital certificates, and machine identities that frequently go unmanaged. This oversight creates massive security gaps that malicious actors can exploit, leading to potential data breaches, compliance violations, and operational disruptions.

“There will be more than one billion AI agents with significant autonomous power in the next few years,” stated Jack Hidary, CEO of SandboxAQ. “Enterprises are giving AI agents a vastly increased range of capabilities to impact customers and real-world assets. This creates a dangerous attack surface for adversaries. AQtive Guard’s Discover and Protect modules address this urgent issue.”

AQtive Guard addresses these challenges through its integrated Discover and Protect modules. The Discover component maintains continuous, real-time visibility into all NHIs and cryptographic assets including keys, certificates, and algorithms – a fundamental requirement for maintaining regulatory compliance. The Protect module then automates critical security workflows, enforcing essential policies like automated credential rotation and certificate renewal to proactively mitigate risks before they can be exploited.
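
To make the rotation idea concrete, here is a minimal age-based policy check of the kind such automation builds on. The inventory structure and the 90-day maximum are assumptions for illustration, not AQtive Guard's actual data model or defaults.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # illustrative policy, not a vendor default

inventory = [  # hypothetical NHI credential inventory
    {"id": "svc-payments-key", "issued": datetime(2025, 1, 5, tzinfo=timezone.utc)},
    {"id": "agent-crawler-cert", "issued": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]

def due_for_rotation(now=None):
    """Flag credentials older than policy allows so rotation can be automated."""
    now = now or datetime.now(timezone.utc)
    return [c["id"] for c in inventory if now - c["issued"] > MAX_KEY_AGE]

print(due_for_rotation())  # stale credentials get queued for automated rotation
```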

At the core of AQtive Guard’s capabilities are SandboxAQ’s industry-leading Large Quantitative Models (LQMs), which provide organizations with unmatched visibility and control over their cryptographic infrastructure. This advanced technology enables enterprises to successfully navigate evolving security standards, including the latest NIST requirements, while maintaining robust protection against emerging threats.

“As organizations accelerate AI adoption and the use of agents and machine-to-machine communication across all business domains and functions, maintaining a real-time, accurate inventory of NHIs and cryptographic assets is an essential cybersecurity practice. Being able to automatically remediate vulnerabilities and policy violations identified is crucial to decrease time to mitigation and prevent potential breaches within the first day of use of our software,” said Marc Manzano, General Manager of Cybersecurity at SandboxAQ.

SandboxAQ has significantly strengthened AQtive Guard’s capabilities through deep technical integrations with two cybersecurity industry leaders. The platform now features robust integration with CrowdStrike’s Falcon® platform, enabling direct ingestion of endpoint data for real-time vulnerability detection and immediate one-click remediation. This seamless connection allows security teams to identify and neutralize threats with unprecedented speed.

Additionally, AQtive Guard now offers full interoperability with Palo Alto Networks’ security solutions. By analyzing and incorporating firewall log data, the platform delivers enhanced network visibility, improved threat detection, and stronger compliance with enterprise security policies across hybrid environments.

AQtive Guard delivers a comprehensive, AI-powered approach to managing NHIs and cryptographic assets through four key functional areas. The platform’s advanced vulnerability detection system aggregates data from multiple sources including major cloud providers like AWS and Google Cloud, maintaining a continuously updated inventory of all cryptographic assets.

The solution’s AI-driven risk analysis engine leverages SandboxAQ’s proprietary Cyber LQMs to accurately prioritize threats while dramatically reducing false positives. This capability is enhanced by an integrated GenAI assistant that helps security teams navigate complex compliance requirements and implement appropriate remediation strategies.

For operational efficiency, AQtive Guard automates the entire lifecycle management of cryptographic assets, including issuance, rotation, and revocation processes. This automation significantly reduces manual errors while eliminating the risks associated with stale or compromised credentials. The platform also provides robust compliance support with pre-configured rulesets for major regulatory standards, customizable query capabilities, and comprehensive reporting features. These tools help organizations accelerate their transition to new NIST standards while maintaining continuous compliance with evolving requirements.

Available now as a fully managed, cloud-native solution, AQtive Guard is designed for rapid deployment and immediate impact. Enterprises can register for priority access to begin early adoption and conduct comprehensive risk assessments of their cryptographic infrastructure.
