Gen AI is Redefining Cybersecurity’s Future

Subhalakshmi Ganapathy, Chief IT Security Evangelist at ManageEngine, says that by simulating threats, auto-remediating incidents, and decoding attacker tactics, AI empowers organisations to stay ahead of adversaries

How is generative AI being utilised to enhance cybersecurity measures today?
Generative AI is redefining cybersecurity’s future, transforming defenses from reactive to predictive. By simulating threats, auto-remediating incidents, and decoding attacker tactics, it empowers organisations to stay ahead of adversaries. Yet, its true power lies in harmonising human expertise with machine speed—augmenting analysts to focus on strategic risks, not routine alerts. As AI-generated attacks surge, the same technology becomes a double-edged sword, demanding ethical frameworks to prevent misuse.

Forward-thinking leaders must prioritise adaptive AI ecosystems that learn in real time while safeguarding trust. The next frontier isn’t just about stopping threats but fostering resilience through innovation, collaboration, and responsible AI governance. Cybersecurity’s evolution hinges on this balance.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI, while a powerful defender, also introduces sophisticated threats. Even as it empowers defenses, it supercharges attacks: AI-crafted deepfakes erode trust, hyper-personalised phishing bypasses filters, and self-mutating malware evades detection. Adversaries leverage AI to automate exploitation, democratising sophisticated attacks for low-skilled threat actors.

Worse, AI models themselves become targets—poisoned training data or adversarial inputs can corrupt defensive systems. This arms race erodes the asymmetric advantage defenders once relied on. Leaders must confront the paradox: the tools fortifying security also weaponise threats. Mitigation hinges on AI-augmented threat hunting, adversarial testing of models, and global collaboration to govern AI’s ethical use. Proactive resilience, not just reaction, is the new imperative.

How can organisations leverage generative AI for proactive threat detection and response?
Generative AI enables organisations to shift from reactive to anticipatory cybersecurity by synthesising intelligence and automating precision. By training models on historical and synthetic threat data, AI identifies subtle attack patterns—like zero-day exploits or insider risks—before they escalate. Real-time behavioral analysis flags anomalies in user activity or network traffic, while AI-driven simulations stress-test defenses against evolving adversarial tactics (e.g., AI-generated phishing lures).
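
As a toy illustration of the behavioral analysis described here, the sketch below flags anomalous user activity with scikit-learn's IsolationForest. The features, volumes, and contamination rate are invented for illustration; this is not ManageEngine's implementation.

```python
# Minimal behavioral-anomaly sketch: flag unusual user activity with an
# Isolation Forest. Features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline: [logins_per_day, failed_logins, MB_downloaded]
normal = rng.normal(loc=[8, 1, 200], scale=[2, 1, 50], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Today's observations: one typical user, one exfiltration-like outlier
today = np.array([
    [9, 0, 180],      # looks routine
    [45, 20, 9000],   # burst of failures plus a huge download
])
for row, verdict in zip(today, model.predict(today)):
    label = "ANOMALY" if verdict == -1 else "ok"
    print(f"{row} -> {label}")
```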

Automated playbooks, powered by generative AI tools, instantly quarantine threats and patch vulnerabilities, slashing response times. Crucially, generative AI augments human teams—curating actionable insights from noise—enabling analysts to prioritise high-impact risks. The key lies in ethical, explainable AI frameworks that balance autonomy with oversight, fostering trust in machine-augmented defense.
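
A minimal sketch of an automated playbook in this spirit, assuming a hypothetical quarantine call and a confidence gate; the action names and threshold are illustrative, not any vendor's actual workflow.

```python
# Hypothetical auto-remediation playbook: map a detection verdict to a
# containment step, with a confidence gate before any automated action.
from dataclasses import dataclass

@dataclass
class Detection:
    host: str
    kind: str        # e.g. "ransomware", "phishing", "c2-beacon"
    confidence: float

def quarantine_host(host: str) -> None:
    # Placeholder: in practice this would call an EDR / NAC API.
    print(f"[playbook] quarantining {host}")

def open_analyst_ticket(det: Detection) -> None:
    print(f"[playbook] ticket opened for {det.kind} on {det.host}")

def run_playbook(det: Detection, auto_threshold: float = 0.9) -> None:
    """Auto-contain only high-confidence detections; escalate the rest."""
    if det.confidence >= auto_threshold:
        quarantine_host(det.host)
    open_analyst_ticket(det)   # humans always get a record

run_playbook(Detection("srv-db-02", "c2-beacon", 0.97))
```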

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Ethical AI in cybersecurity isn’t just about security; it’s about building a future where security and rights coexist. The ethical frontier of generative AI in cybersecurity demands rigorous introspection, particularly regarding data provenance. The AI’s very efficacy hinges on the data it consumes, a double-edged sword. Which datasets are ethically sound, and which would constitute a privacy minefield?

We must move beyond mere technical accuracy and embrace ethical precision. Training AI on sensitive, personally identifiable information, or data reflecting historical biases, risks perpetuating and amplifying societal inequalities within security systems. This demands a paradigm shift: prioritising anonymised, representative datasets, and rigorously auditing training data for potential biases.
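
One concrete form such an audit can take is scanning training records for obvious PII before they reach a model. The patterns below are deliberately simplistic examples, not a complete anonymisation pipeline.

```python
# Illustrative pre-training audit: flag and redact obvious PII before a
# dataset is used to train a security model.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ipv4":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_and_redact(record: str) -> tuple[str, list[str]]:
    """Return the redacted record plus the PII types found in it."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(record):
            found.append(label)
            record = pattern.sub(f"<{label}>", record)
    return record, found

log = "Failed login for alice@example.com from 203.0.113.7"
clean, hits = audit_and_redact(log)
print(clean)   # Failed login for <email> from <ipv4>
print(hits)    # ['email', 'ipv4']
```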

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Integrating generative AI into cybersecurity workflows presents a formidable challenge: balancing innovation with operational integrity. The crux of the issue lies in the accuracy of AI-driven remediation. Inaccurate detection breeds false positives, overwhelming SOCs and eroding analyst trust. More critically, flawed remediation suggestions risk catastrophic configuration changes, impacting employee experience and potentially crippling critical infrastructure.

Imagine AI incorrectly disabling a crucial user account or altering vital system configurations. This necessitates a paradigm shift: AI as an augmentation, not an automation, tool. Rigorous testing, human-in-the-loop protocols, and granular control are paramount. We must resist the allure of fully automated remediation and instead focus on AI as a powerful analytical tool that empowers human decision-making. The future of AI in cybersecurity hinges on cautious integration, prioritising accuracy and control to prevent unintended consequences.
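
A compact sketch of such a human-in-the-loop protocol: the AI may propose any action, but destructive changes are held until an analyst approves. The action names are hypothetical.

```python
# Human-in-the-loop gate: AI may *propose* a remediation, but
# destructive changes require explicit analyst approval.
DESTRUCTIVE_ACTIONS = {"disable_account", "change_firewall_rule", "wipe_host"}

def apply_remediation(action: str, target: str,
                      approved_by: str | None = None) -> bool:
    if action in DESTRUCTIVE_ACTIONS and approved_by is None:
        print(f"HOLD: '{action}' on {target} queued for analyst review")
        return False
    print(f"APPLIED: {action} on {target}"
          + (f" (approved by {approved_by})" if approved_by else ""))
    return True

apply_remediation("disable_account", "j.doe")              # held for review
apply_remediation("disable_account", "j.doe", "analyst42") # executed
```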

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
While the vision of AI autonomously repelling cyberattacks captivates, the reality remains a journey, not a destination. We’ve achieved a pivotal advancement: AI’s prowess in threat detection. However, the full spectrum of AI-driven mitigation remains largely theoretical, confined to controlled environments and phased deployments. Enterprises are cautiously navigating this landscape, recognising the potential but wary of the unknown.

We stand at the cusp of a paradigm shift, where AI’s predictive capabilities could preemptively neutralise threats. Yet, true realisation requires meticulous testing and controlled integration. The focus must shift from isolated detection to a holistic, AI-powered security ecosystem. The future holds immense promise, but responsible innovation demands a measured approach, acknowledging that the AI-driven cybersecurity revolution is still in its nascent stages.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
The trajectory of generative AI in cybersecurity points towards a significant evolution, primarily aimed at alleviating the chronic resource shortage plaguing Security Operations Centers (SOCs). We’re witnessing a shift from reactive to proactive security, where AI’s extensive training and Retrieval Augmented Generation (RAG) capabilities will dramatically reduce incident investigation times. By seamlessly integrating data from disparate ecosystems, AI will provide enriched, contextualised insights, empowering analysts to make faster, more informed decisions.
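
To make the RAG idea concrete, the toy retrieval step below finds the past incidents most similar to a new alert and assembles them into prompt context. TF-IDF stands in for an embedding model, and the incident corpus is invented.

```python
# Toy RAG retrieval step for incident investigation: find the most
# similar past incidents and assemble them as context for an LLM prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_incidents = [
    "Phishing email led to credential theft and VPN login from new country",
    "Ransomware spread via SMB after unpatched server was exploited",
    "Service account abused for large S3 data transfer at 3am",
]

new_alert = "Unusual 2am bulk download by a service account"

vec = TfidfVectorizer().fit(past_incidents + [new_alert])
scores = cosine_similarity(vec.transform([new_alert]),
                           vec.transform(past_incidents))[0]

top = scores.argsort()[::-1][:2]          # two closest past incidents
context = "\n".join(f"- {past_incidents[i]}" for i in top)
prompt = (f"New alert: {new_alert}\n"
          f"Similar past incidents:\n{context}\n"
          "Suggest next investigation steps.")
print(prompt)
```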

This evolution will not be about replacing human analysts but about augmenting their capabilities. AI will become a powerful force multiplier, automating mundane tasks and freeing up human experts to focus on complex, strategic threats. We’ll see AI evolving into a sophisticated threat intelligence platform, capable of predicting and preempting attacks rather than merely reacting to them. The future of cybersecurity will be defined by a collaborative partnership between human intelligence and AI’s analytical prowess.

What role does human-in-the-loop (HITL) oversight play in ensuring generative AI systems effectively manage cybersecurity threats?
In the dynamic realm of cybersecurity, generative AI serves as a powerful ally, but its efficacy is fundamentally dependent on human oversight. AI excels at processing vast datasets, identifying anomalies, and automating routine tasks. However, it lacks the nuanced understanding of context, ethical considerations, and strategic adaptability that human analysts possess.

HITL ensures that AI-generated alerts are validated, false positives are filtered, and complex threats are accurately assessed. It’s the critical bridge between algorithmic precision and human intuition, ensuring AI remains a tool, not a replacement, for strategic security. Furthermore, human oversight is vital for mitigating bias in AI models and adapting to the ever-evolving threat landscape, ensuring ethical and effective AI deployment.

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
For resource-constrained organisations, AI cybersecurity isn’t a luxury, but a strategic imperative. The key lies in intelligent deployment. Embrace managed security service providers (MSSPs) as force multipliers, gaining access to sophisticated AI defences without prohibitive capital expenditure. Prioritise targeted AI applications, focusing on high-return areas like phishing and anomaly detection, thus maximising impact with finite resources.

Democratise AI access through open-source tools and AI-infused security platforms. Crucially, cultivate an AI-literate workforce. Investing in targeted education ensures these tools are leveraged effectively, transforming potential into tangible security gains. This isn’t about mere adoption; it’s about strategic empowerment, turning budgetary constraints into a catalyst for innovative security.
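
For example, a small team could start with transparent, zero-cost heuristics like the URL scoring below before investing in heavier AI tooling; the rules and weights are illustrative assumptions, not a vetted model.

```python
# Low-cost phishing triage for a small team: a few transparent URL
# heuristics that can run before (or alongside) a paid AI service.
from urllib.parse import urlparse

def phishing_score(url: str) -> int:
    host = urlparse(url).hostname or ""
    score = 0
    if any(c.isdigit() for c in host):
        score += 1                       # digits in the hostname
    if host.count(".") >= 3:
        score += 1                       # deeply nested subdomains
    if any(w in url.lower() for w in ("login", "verify", "update")):
        score += 1                       # urgency / credential keywords
    if not url.startswith("https://"):
        score += 1
    return score                         # 0 = likely fine, 4 = suspicious

print(phishing_score("http://secure-login.paypa1.example.accounts.top/verify"))
```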

What best practices would you recommend for implementing generative AI tools while minimising risks?
To truly unlock generative AI’s cybersecurity potential, we must build a fortified framework, not merely deploy tools. Foundational to this is rigorous data governance, ensuring AI’s intelligence is built on pristine, unbiased data. Continuous model vigilance is non-negotiable; constant monitoring and evaluation are essential to preempt performance drift and bias.

Human-in-the-loop protocols are the linchpin, guaranteeing that critical decisions remain anchored in human wisdom. Proactive risk assessments and relentless security testing transform vulnerabilities into strengths. Transparency, woven into the AI’s decision-making fabric, builds trust. Clear policies and procedures, coupled with a commitment to staying at the forefront of AI evolution, ensure adaptability in a rapidly changing threat landscape. This holistic approach empowers organisations to harness AI’s transformative power, not as a gamble, but as a strategic, risk-mitigated advantage.
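
Continuous model vigilance can be as simple as tracking a detector’s rolling false-positive rate against its deployment baseline, as in this illustrative sketch (the window and tolerance are assumptions):

```python
# Simple model-vigilance check: alert when a detector's rolling
# false-positive rate drifts past a tolerance band around the rate
# measured at deployment.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_fp_rate: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_fp_rate
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # True = false positive

    def record(self, was_false_positive: bool) -> None:
        self.outcomes.append(was_false_positive)

    def drifted(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                      # not enough evidence yet
        rate = sum(self.outcomes) / len(self.outcomes)
        return abs(rate - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_fp_rate=0.02)
for _ in range(500):
    monitor.record(was_false_positive=True)   # pathological stream
print("retrain/review needed:", monitor.drifted())
```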

Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognising this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer to govern interactions between AI agents and backend services specifically. This new layer of security enables customers to detect and prevent AI bots such as ChatGPT from OpenAI and Perplexity from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
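
Cequence has not published its detection logic, but a toy User-Agent check illustrates why transparent attribution covers so little of this traffic: only bots that declare themselves (GPTBot, PerplexityBot, and the like) are caught by header inspection alone.

```python
# Toy attribution of AI-bot traffic by declared User-Agent. As the
# article notes, most real LLM traffic hides behind generic agents,
# which is why header checks alone are not enough.
KNOWN_AI_AGENTS = ("GPTBot", "PerplexityBot", "ClaudeBot")

def classify_agent(user_agent: str) -> str:
    if any(token.lower() in user_agent.lower() for token in KNOWN_AI_AGENTS):
        return "declared-ai-bot"
    if "Mozilla/" not in user_agent:
        return "suspicious-generic"   # non-browser agent, unattributed
    return "browser-or-unknown"

for ua in ("GPTBot/1.0 (+https://openai.com/gptbot)",
           "python-requests/2.31",
           "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"):
    print(f"{ua[:40]:40} -> {classify_agent(ua)}")
```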

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration (see the sketch after this list), ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
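
As referenced above, here is a minimal sketch of the legitimate-versus-exfiltration distinction: each client is compared against its own usage baseline, so routine traffic passes while a harvesting burst trips the alarm. The thresholds and the moving-average update are illustrative assumptions, not Cequence’s algorithm.

```python
# Per-client exfiltration heuristic: compare current API usage with that
# client's own historical baseline and flag sudden harvesting bursts.
from collections import defaultdict

class ExfiltrationDetector:
    def __init__(self, burst_multiplier: float = 10.0):
        self.baseline = defaultdict(lambda: 1.0)  # avg records/min per client
        self.burst_multiplier = burst_multiplier

    def observe(self, client: str, records_per_min: float) -> str:
        limit = self.baseline[client] * self.burst_multiplier
        verdict = "exfiltration-suspect" if records_per_min > limit else "normal"
        if verdict == "normal":
            # Slowly adapt the baseline to legitimate usage (EWMA).
            self.baseline[client] = (0.9 * self.baseline[client]
                                     + 0.1 * records_per_min)
        return verdict

det = ExfiltrationDetector()
for rate in (2, 3, 2, 250):          # steady usage, then a harvesting burst
    print(rate, "->", det.observe("partner-api-key-7", rate))
```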

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.

Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:

FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents leaks from tools like ChatGPT.
  3. Enforces zero-trust access for AI systems.

FortiAI processes queries locally, ensuring sensitive data never leaves your network.

SandboxAQ Platform Tackles AI Agent “Non-Human Identity” Threats

SandboxAQ has announced the general availability of AQtive Guard, a platform designed to secure Non-Human Identities (NHIs) and cryptographic assets. This critical security solution arrives as organizations worldwide face increasingly sophisticated AI-driven threats capable of autonomously infiltrating networks, bypassing traditional defenses, and exploiting vulnerabilities at machine speed.

Modern enterprises are experiencing an unprecedented surge in machine-to-machine communications, with billions of AI agents now operating across corporate networks. These digital entities – ranging from legitimate automation tools to potential attack vectors – depend on cryptographic keys, digital certificates, and machine identities that frequently go unmanaged. This oversight creates massive security gaps that malicious actors can exploit, leading to potential data breaches, compliance violations, and operational disruptions.
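
As a small illustration of this visibility gap, the standard-library sketch below checks how many days remain on an endpoint’s TLS certificate; a real NHI inventory of the kind described would also track keys, service accounts, and internal CAs. The endpoint is a placeholder.

```python
# Minimal certificate-visibility sketch: report days remaining on the
# TLS certificates of known endpoints.
import socket, ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc)
            - datetime.now(timezone.utc)).days

for endpoint in ("example.com",):        # placeholder inventory
    days = cert_days_remaining(endpoint)
    flag = "RENEW SOON" if days < 30 else "ok"
    print(f"{endpoint}: {days} days left ({flag})")
```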

“There will be more than one billion AI agents with significant autonomous power in the next few years,” stated Jack Hidary, CEO of SandboxAQ. “Enterprises are giving AI agents a vastly increased range of capabilities to impact customers and real-world assets. This creates a dangerous attack surface for adversaries. AQtive Guard’s Discover and Protect modules address this urgent issue.”

AQtive Guard addresses these challenges through its integrated Discover and Protect modules. The Discover component maintains continuous, real-time visibility into all NHIs and cryptographic assets including keys, certificates, and algorithms – a fundamental requirement for maintaining regulatory compliance. The Protect module then automates critical security workflows, enforcing essential policies like automated credential rotation and certificate renewal to proactively mitigate risks before they can be exploited.
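
A hypothetical policy check in the spirit of the Protect module’s automated rotation: credentials older than their class’s maximum age are flagged for rotation. The classes, ages, and inventory are invented for illustration.

```python
# Age-based rotation policy: queue any credential that has outlived its
# class's maximum allowed age. Classes and ages are illustrative.
from datetime import datetime, timedelta, timezone

MAX_AGE = {
    "api-key":        timedelta(days=90),
    "tls-cert":       timedelta(days=365),
    "service-secret": timedelta(days=30),
}

inventory = [
    ("payments-api-key", "api-key",  datetime(2024, 1, 10, tzinfo=timezone.utc)),
    ("edge-tls-cert",    "tls-cert", datetime(2025, 3, 1,  tzinfo=timezone.utc)),
]

now = datetime.now(timezone.utc)
for name, kind, issued in inventory:
    if now - issued > MAX_AGE[kind]:
        print(f"ROTATE: {name} ({kind}) issued {issued:%Y-%m-%d}")
    else:
        print(f"ok:     {name}")
```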

At the core of AQtive Guard’s capabilities are SandboxAQ’s industry-leading Large Quantitative Models (LQMs), which provide organizations with unmatched visibility and control over their cryptographic infrastructure. This advanced technology enables enterprises to successfully navigate evolving security standards, including the latest NIST requirements, while maintaining robust protection against emerging threats.

“As organizations accelerate AI adoption and the use of agents and machine-to-machine communication across all business domains and functions, maintaining a real-time, accurate inventory of NHIs and cryptographic assets is an essential cybersecurity practice. Being able to automatically remediate vulnerabilities and policy violations identified is crucial to decrease time to mitigation and prevent potential breaches within the first day of use of our software,” said Marc Manzano, General Manager of Cybersecurity at SandboxAQ.

SandboxAQ has significantly strengthened AQtive Guard’s capabilities through deep technical integrations with two cybersecurity industry leaders. The platform now features robust integration with CrowdStrike’s Falcon® platform, enabling direct ingestion of endpoint data for real-time vulnerability detection and immediate one-click remediation. This seamless connection allows security teams to identify and neutralize threats with unprecedented speed.

Additionally, AQtive Guard now offers full interoperability with Palo Alto Networks’ security solutions. By analyzing and incorporating firewall log data, the platform delivers enhanced network visibility, improved threat detection, and stronger compliance with enterprise security policies across hybrid environments.

AQtive Guard delivers a comprehensive, AI-powered approach to managing NHIs and cryptographic assets through four key functional areas. The platform’s advanced vulnerability detection system aggregates data from multiple sources including major cloud providers like AWS and Google Cloud, maintaining a continuously updated inventory of all cryptographic assets.

The solution’s AI-driven risk analysis engine leverages SandboxAQ’s proprietary Cyber LQMs to accurately prioritize threats while dramatically reducing false positives. This capability is enhanced by an integrated GenAI assistant that helps security teams navigate complex compliance requirements and implement appropriate remediation strategies.

For operational efficiency, AQtive Guard automates the entire lifecycle management of cryptographic assets, including issuance, rotation, and revocation processes. This automation significantly reduces manual errors while eliminating the risks associated with stale or compromised credentials. The platform also provides robust compliance support with pre-configured rulesets for major regulatory standards, customizable query capabilities, and comprehensive reporting features. These tools help organizations accelerate their transition to new NIST standards while maintaining continuous compliance with evolving requirements.

Available now as a fully managed, cloud-native solution, AQtive Guard is designed for rapid deployment and immediate impact. Enterprises can register for priority access to begin early adoption and conduct comprehensive risk assessments of their cryptographic infrastructure.
