Artificial Intelligence

Generative AI is Transforming Cybersecurity Across Detection, Defense, and Governance

Radu Balanescu, Associate Director for Cybersecurity at BCG, says the governance domain benefits from GenAI’s ability to streamline compliance and awareness

How is Generative AI being utilised to enhance cybersecurity measures today?
Generative AI is transforming cybersecurity across three critical domains: detection, defense, and governance. In detection, GenAI excels at analysing vast datasets to identify threats through automated threat-intelligence analysis, rapid malware detection, and deepfake identification. Tools like Google Gemini can process malware samples in seconds rather than hours, dramatically improving response times and enabling a more proactive security posture.

In defense activities, GenAI augments protective capabilities by evaluating language patterns and contextual signals to prevent sophisticated phishing attempts. Organisations are also deploying GenAI to create convincing decoy environments with synthetic data, deliberately misleading attackers while protecting genuine assets. When breaches occur, AI-powered playbooks are invaluable assets for security teams while deciding optimal remediation processes, reducing recovery time while ensuring consistent response protocols.
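A decoy environment of the kind described above starts from plausible synthetic records. The following is a toy sketch only; the field names and domain are invented for illustration, and real deception platforms generate far richer artifacts:

```python
# Toy sketch of seeding a decoy data store with synthetic records.
# Nothing here reflects any vendor's deception tooling; names and
# account numbers are random and map to no real data.
import random
import string

def synthetic_record(rng: random.Random) -> dict:
    """Generate one fake user record for a decoy environment."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@corp.example",   # illustrative decoy domain
        "account": "".join(rng.choices(string.digits, k=10)),
    }

rng = random.Random(42)  # seeded for reproducibility
decoys = [synthetic_record(rng) for _ in range(3)]
print(decoys[0]["email"].endswith("@corp.example"))  # True
```

In practice, decoy records are seeded into honeypot databases and file shares so that any access to them is, by construction, a high-confidence intrusion signal.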

The governance domain benefits from GenAI’s ability to streamline compliance and awareness. AI tools continuously monitor regulatory changes and emerging threats, automatically suggesting policy updates to maintain compliance. Perhaps most promising is GenAI’s ability to create personalised, realistic training scenarios that adapt to individual employee behavior patterns, dramatically improving retention and effectiveness compared to generic security training approaches. These applications represent just the beginning of GenAI’s potential to redefine our approach to digital protection.

What potential risks does Generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI is a double-edged sword, increasing cybersecurity risk as much as it helps protect against attacks. The broadening risk landscape encompasses two critical aspects: GenAI empowers attackers with tools that accelerate and simplify malicious actions, while simultaneously introducing new security risks into the organisations that deploy it.

GenAI per se does not create new categories of attack. However, it simplifies exploit generation, improves the quality of known attacks, and further reduces the cost of building malicious tools. It even enables less sophisticated actors to conduct complex attacks that were once reserved for the most skilled adversaries. AI-generated phishing campaigns now produce sophisticated, human-like emails that increase the likelihood of successful social engineering, and cybercriminals can use GenAI to generate malware variants that bypass traditional, signature-based security systems.

Deepfake-based impersonation using AI-generated audio and video can convincingly mimic executives or government officials, leading to fraud or misinformation campaigns. Particularly concerning is AI-enabled reconnaissance, where threat actors use AI to scan systems for vulnerabilities more efficiently, making cyberattacks more targeted and effective. Deploying GenAI in an organisation introduces new types of security risks, mostly around data exfiltration or manipulation. Hallucination risks in AI security models may produce false positives or misleading security insights, leading to incorrect threat assessments. Data poisoning attacks allow adversaries to manipulate training data to introduce biases or vulnerabilities into AI security models.

Supply chain attacks on AI models present significant risks, as compromised models can provide attackers with unauthorised access. Additionally, sophisticated attackers can manipulate AI-based decision-making, forcing systems to misclassify threats or grant unwarranted access. These emerging risks highlight the need for comprehensive security frameworks specifically designed for the GenAI era, balancing innovation with heightened safeguards against increasingly sophisticated threats.

How can organisations leverage Generative AI for proactive threat detection and response?
Organisations have multiple ways to leverage GenAI in their defense activities, fundamentally transforming security from reactive to proactive postures. AI-powered anomaly detection serves as a foundation, using real-time analysis to identify behavioral deviations that could indicate potential cyber threats before they manifest as full attacks. This works in conjunction with automated threat-hunting capabilities, where GenAI assists security analysts by identifying suspicious patterns and suggesting possible cyberattack vectors that might otherwise remain hidden.
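The behavioral-deviation idea above can be illustrated with a deliberately simple statistical baseline. Production systems use learned models over many correlated signals, but a z-score test conveys the mechanism; the threshold and the event counts below are illustrative assumptions:

```python
# Minimal sketch of behavioral anomaly detection, assuming per-user
# hourly event counts are already collected. The threshold of 2
# standard deviations is illustrative, not a recommended setting.
from statistics import mean, stdev

def find_anomalies(event_counts, threshold=2.0):
    """Flag observations more than `threshold` standard deviations
    from the historical mean (a simple z-score test)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, x in enumerate(event_counts)
            if abs(x - mu) / sigma > threshold]

# Example: a sudden spike in failed logins stands out from the baseline
history = [4, 5, 3, 6, 4, 5, 4, 97]
print(find_anomalies(history))  # → [7], the index of the spike
```

The value of the GenAI layer is in replacing this single hand-picked statistic with models that learn what "normal" looks like per user, per device, and per time of day.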

The predictive capabilities of GenAI enable cybersecurity modeling that analyses historical threat data to forecast and prevent future attacks before they occur. When incidents arise, automated incident triage becomes critical—AI can categorise and prioritise security events, ensuring that the most severe threats receive immediate attention while optimising resource allocation.
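Automated triage, at its simplest, is a scoring-and-sorting problem. A minimal sketch follows, with invented severity weights and alert fields rather than any vendor's model:

```python
# Hedged triage sketch: score and order alerts so the most severe are
# handled first. The weights and tie-breaker are illustrative
# assumptions, not a specific product's prioritisation logic.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def triage(alerts):
    """Sort alerts by severity, breaking ties by affected asset count."""
    return sorted(alerts,
                  key=lambda a: (SEVERITY[a["severity"]], a["assets"]),
                  reverse=True)

alerts = [
    {"id": "A1", "severity": "low", "assets": 1},
    {"id": "A2", "severity": "critical", "assets": 3},
    {"id": "A3", "severity": "high", "assets": 10},
]
print([a["id"] for a in triage(alerts)])  # → ['A2', 'A3', 'A1']
```

AI-assisted triage extends this by inferring the severity score itself from alert context, rather than relying on fixed labels.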

Security operations benefit from contextualised threat-intelligence dashboards where AI summarises and visualises threats in real time, providing actionable insights to cybersecurity teams. On the front lines, phishing-prevention and email-security systems leverage AI to filter and block increasingly sophisticated attacks by detecting language anomalies and metadata inconsistencies. Additionally, AI-powered malware reverse engineering can analyse new strains and generate automated responses to mitigate them rapidly. These capabilities collectively enable organisations to stay ahead of evolving threats, shifting the advantage away from attackers and toward defenders in the ongoing cybersecurity battle.
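One concrete "metadata inconsistency" signal is a sender display name that claims a trusted brand while the actual sending domain differs. A hedged sketch of that single check follows; real filters combine many such signals with ML scoring, and the header format, brand, and domains here are illustrative:

```python
# Sketch of one phishing heuristic: display-name/domain mismatch.
# Illustrative only -- production email security stacks combine dozens
# of signals (SPF/DKIM results, URL reputation, language models).
import re

def display_name_mismatch(from_header: str, trusted_domains: set) -> bool:
    """Return True when the display name invokes a trusted brand
    but the sender's domain is not one of that brand's domains."""
    m = re.match(r'\s*"?([^"<]*)"?\s*<([^>]+)>', from_header)
    if not m:
        return False
    display, address = m.group(1).lower(), m.group(2).lower()
    domain = address.split("@")[-1]
    claims_brand = any(d.split(".")[0] in display for d in trusted_domains)
    return claims_brand and domain not in trusted_domains

# A header claiming "PayPal" but sent from an unrelated domain is flagged
print(display_name_mismatch('"PayPal Support" <help@secure-pay.example>',
                            {"paypal.com"}))  # True
```

Language-model classifiers layer on top of such structural checks to catch the attacks that forge the metadata correctly but read like social engineering.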

How do you see Generative AI evolving in the cybersecurity domain over the next few years?
Looking into the future, we see GenAI gaining more prevalence in enhancing defense capabilities across multiple dimensions of the cybersecurity landscape. Organisations will increasingly deploy stronger AI-powered cyber defenses that automate complex security tasks, dramatically improving efficiency while reducing the need for manual intervention in routine security operations.

The cybersecurity battlefield will transform into sophisticated AI vs. AI cyber battles, where defensive AI systems continuously adapt to counter AI-driven attacks. This evolution will necessitate continuous AI model adaptation and training to stay ahead of increasingly sophisticated threats. Identity management will see significant advancements through AI-based verification systems, with AI-driven biometric and behavioral authentication strengthening defenses against impersonation and credential theft.

Zero Trust Architecture implementations will be revolutionised as AI plays a larger role in enforcing these policies, continuously verifying users and devices before granting access to sensitive resources. This dynamic verification approach will significantly reduce the attack surface available to potential intruders. Simultaneously, governments and international organisations are defining and implementing stricter policies to prevent AI misuse in cyberattacks, striving for ethical AI usage in security contexts.

Organisations will leverage AI to continuously monitor compliance with these evolving cybersecurity regulations, automating what has traditionally been a resource-intensive process. Perhaps most forward-looking, as quantum computing progresses and brings new threats to conventional encryption methods, AI models will be adapted to counteract quantum-powered cyber threats, ensuring security resilience even as computational paradigms shift dramatically.

Artificial Intelligence

CyberKnight Partners with Ridge Security for AI-Powered Security Validation

The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.

To support enterprises and government entities across the Middle East, Turkey and Africa (META) with identifying and validating vulnerabilities and reducing security gaps in real-time, CyberKnight has partnered with Ridge Security, maker of the world’s first AI-powered Offensive Security Validation Platform. Ridge Security’s products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.

RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).

“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”

“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”


Artificial Intelligence

Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer specifically to govern interactions between AI agents and backend services. This new layer enables customers to detect AI bots, such as OpenAI’s ChatGPT and Perplexity, and prevent them from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
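Attributing the transparent minority of this traffic reduces to matching documented crawler tokens in the User-Agent header; the obfuscated majority is precisely what such a check misses, which is why behavioral detection matters. A minimal sketch follows, where the token list covers publicly documented AI crawler user agents and is in no way Cequence's detection logic:

```python
# Illustrative user-agent attribution for access-log analysis.
# Tokens below are publicly documented AI crawler identifiers;
# obfuscated bot traffic will simply fall through as "unattributed".
KNOWN_AI_BOTS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot")

def attribute(user_agent: str) -> str:
    """Return the matching AI crawler token, or 'unattributed'."""
    ua = (user_agent or "").lower()
    for token in KNOWN_AI_BOTS:
        if token.lower() in ua:
            return token
    return "unattributed"

logs = [
    "Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)",
    "python-requests/2.31.0",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
]
print([attribute(ua) for ua in logs])
```

By the telemetry cited above, a string match like this labels under 4% of AI-related bot traffic; the remaining requests hide behind generic agents and require request-pattern analysis instead.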

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.


Artificial Intelligence

Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:
FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents leaks from tools like ChatGPT.
  3. Enforces zero-trust access for AI systems.

FortiAI processes queries locally, ensuring sensitive data never leaves your network.


Copyright © 2021 Security Review Magazine. Rysha Media LLC. All Rights Reserved.