Artificial Intelligence
Can AI Outsmart Hackers? How Generative AI is Reshaping Cybersecurity

As generative AI transforms cybersecurity into an AI-versus-AI battleground, organizations must navigate both its defensive potential and its emerging risks. We spoke with Ramprakash Ramamoorthy, Director of AI Research at Zoho, about how this technology is reshaping threat detection, automating responses, and even being weaponized by attackers. From real-world attack prevention to ethical implementation challenges, Ramamoorthy shares critical insights on leveraging generative AI effectively while mitigating its dangers in our increasingly digital world.
How is generative AI being utilized to enhance cybersecurity measures today?
Generative AI has changed the way cybersecurity operates today. It has not only automated tasks but also streamlined workflows, improved threat detection, and enabled organizations to simulate attacks to gauge how prepared they are for cyber threats. Unlike traditional static thresholds that require constant human vigilance, generative AI adapts dynamically, learning from vast data volumes to stay ahead of evolving attacks.
This makes it highly effective in identifying zero-day vulnerabilities and sophisticated threats. Moreover, generative AI streamlines incident response by generating detailed reports, suggesting mitigation steps, and even creating code patches to address security gaps. Its ability to analyse patterns, predict risks, and automate defensive actions has made generative AI an important tool against modern cybersecurity threats.
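To make the contrast with static thresholds concrete, here is a minimal sketch of an adaptive detector whose baseline tracks recent traffic instead of relying on a fixed cutoff. The parameters and data are illustrative choices of ours, not any vendor's implementation:

```python
# Minimal sketch: adaptive anomaly flagging with an exponentially
# weighted moving average (EWMA), versus a fixed static threshold.
# Hypothetical example; parameters are illustrative assumptions.

STATIC_THRESHOLD = 100  # a fixed cutoff a human must keep re-tuning

def make_adaptive_detector(alpha: float = 0.1, k: float = 3.0, warmup: int = 3):
    """Return a closure that flags values far above the moving baseline."""
    state = {"mean": 0.0, "var": 0.0, "n": 0}

    def observe(value: float) -> bool:
        n = state["n"]
        if n < warmup:                         # seed the baseline first
            state["mean"] = (state["mean"] * n + value) / (n + 1)
            state["n"] = n + 1
            return False
        diff = value - state["mean"]
        std = max(state["var"] ** 0.5, 1.0)    # floor avoids noise on tiny variance
        is_anomaly = diff > k * std
        if not is_anomaly:                     # only learn from normal traffic
            state["mean"] += alpha * diff
            state["var"] = (1 - alpha) * (state["var"] + alpha * diff * diff)
        return is_anomaly

    return observe

detect = make_adaptive_detector()
failed_logins_per_minute = [12, 14, 11, 13, 15, 90, 13]
for minute, count in enumerate(failed_logins_per_minute):
    if detect(count):
        print(f"minute {minute}: anomaly ({count} failed logins)")
```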
What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI evolved to make things easier, but it has also become a powerful ally to bad actors. Cyber attackers use Gen AI to create highly convincing phishing emails, fake websites, and deepfakes to deceive users and steal information. It has also led to the development of sophisticated malware that bypasses traditional security defences, leaving less-digitized enterprises at higher risk.
Gen AI can also generate synthetic malware samples, which, while useful for security testing, can be exploited to bypass detection. Large-scale attacks can be deployed with ease as attackers automate malware creation. Datasets containing sensitive information can expose AI models to risks like manipulation and data theft. Additionally, biased models may result in inaccurate threat detection, further complicating cybersecurity efforts.
How can organizations leverage generative AI for proactive threat detection and response?
Generative AI offers a significant advantage in analysing large volumes of data, helping to identify anomalies in real time and reduce the window of exposure. Its advanced pattern-recognition capabilities help organizations proactively identify threats, provide prescriptive insights, and safeguard the organization by adapting to new thresholds. By simulating realistic cyberattacks, generative AI can also test the effectiveness of defence systems, ensuring they are prepared for real-world scenarios.
As organizations increasingly migrate to cloud environments, new security risks emerge, making Gen AI-driven solutions essential. Gen AI can strengthen Identity and Access Management (IAM) by identifying weaknesses in authentication systems, a common target for cybercriminals, and recommending preventive measures. By combining proactive threat detection, adaptive defence mechanisms, and improved IAM strategies, organizations can build a more resilient security framework against evolving cyber threats.
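As one illustration of this kind of anomaly detection over authentication activity, the sketch below uses the open-source scikit-learn library's IsolationForest to score login events. The features and data are invented for the example and are not a production IAM pipeline:

```python
# Minimal sketch: unsupervised anomaly detection over authentication
# events with scikit-learn's IsolationForest. Features and sample data
# are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, failed_attempts, distinct_source_ips]
normal_logins = np.array([
    [9, 0, 1], [10, 1, 1], [11, 0, 1], [14, 0, 1],
    [15, 1, 1], [16, 0, 1], [9, 0, 1], [10, 0, 1],
])
model = IsolationForest(contamination="auto", random_state=42).fit(normal_logins)

# A 3 a.m. burst of failures from many IPs should score as anomalous.
suspicious = np.array([[3, 25, 12]])
print(model.predict(suspicious))        # -1 => anomaly, 1 => normal
```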
What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Using generative AI in cybersecurity comes with important ethical considerations that organizations must address. One key concern is bias, where AI models may unfairly target certain behaviours or user profiles due to biased training data. To prevent this, businesses should use diverse datasets and regularly audit their models. Privacy is another major challenge, as AI systems often analyse large volumes of sensitive information. Strong data encryption, anonymization, and strict access controls can help keep this data secure.
There’s also the issue of accountability, especially when AI is making critical security decisions. Incorporating Human-in-the-Loop (HITL) practices ensures human oversight, adding a layer of responsibility and judgment where needed. Finally, transparency is crucial: AI systems should explain their decisions clearly, allowing security teams to trust and understand the reasoning behind each action.
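A minimal sketch of what such a HITL gate can look like in practice, assuming a hypothetical confidence threshold and alert format:

```python
# Minimal sketch of a Human-in-the-Loop (HITL) gate: the model acts
# autonomously only on high-confidence verdicts and escalates the rest
# to an analyst queue. Thresholds and actions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Verdict:
    alert_id: str
    is_malicious: bool
    confidence: float     # model-reported probability, 0.0-1.0
    rationale: str        # explanation surfaced for transparency

AUTO_ACTION_THRESHOLD = 0.95

def route(verdict: Verdict) -> str:
    if verdict.confidence >= AUTO_ACTION_THRESHOLD:
        # High confidence: automated containment, logged with its rationale.
        return f"auto-block {verdict.alert_id}: {verdict.rationale}"
    # Anything less certain goes to a human for review and accountability.
    return f"escalate {verdict.alert_id} to analyst queue"

print(route(Verdict("a-101", True, 0.99, "known C2 domain in DNS logs")))
print(route(Verdict("a-102", True, 0.70, "unusual but ambiguous traffic")))
```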
What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Integrating Gen AI into cybersecurity workflows presents several challenges. Bias lingering in the models can lead to flawed threat detection, causing false positives that disrupt operations. Adversarial attacks pose another risk, where attackers manipulate data to trick AI models into overlooking malicious activity. Data manipulation is a major concern, as corrupted training data can compromise model accuracy and create security gaps.
Integration challenges may arise when adapting AI tools to legacy systems, requiring significant resources and adjustments; digitally mature organizations will find it easier to fold Gen AI into their workflows. Furthermore, maintaining compliance with data privacy regulations while using AI models adds another layer of complexity. Finally, cybersecurity professionals must continuously update and train AI models to stay effective against evolving threats. Overcoming these challenges requires careful implementation, ongoing monitoring, and collaboration between AI experts and security teams to maximize the benefits of Gen AI tools.
Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Generative AI has proven highly effective in preventing and mitigating cyberattacks through innovative applications. By autonomously analysing large datasets, it can identify threats in real time, flagging phishing attempts and isolating malicious emails before they reach employees, ultimately preventing potential financial losses. In one notable case in 2023, AI-driven threat intelligence successfully detected a major phishing campaign, saving businesses millions by stopping breaches before they occurred.
Generative AI’s predictive capabilities also allow organizations to simulate potential attacks and refine their defences. For instance, a financial institution used AI to anticipate a zero-day attack, enabling it to prevent a breach that could have exposed sensitive customer data. By combining real-time detection, automated responses, and predictive modelling, Gen AI significantly enhances cybersecurity efforts, helping organizations stay one step ahead of evolving threats.
How do you see generative AI evolving in the cybersecurity domain over the next few years?
Generative AI will significantly reshape cybersecurity in the coming years. As cyber threats grow more sophisticated, Gen AI will enhance proactive defence strategies by improving anomaly detection, threat prediction, and automated response systems. By being more context-aware, Gen AI can distinguish between normal behaviour and subtle attack patterns with increased accuracy. Gen AI coupled with AI agents can analyse vast data patterns, identify suspicious behaviour, and act swiftly to head off potential attacks.
AI-driven deception techniques, such as creating realistic decoy assets or fake data, will become more advanced to mislead attackers. However, as AI strengthens security defences, cybercriminals are also expected to use Gen AI to create convincing phishing scams, deepfakes, and adaptive malware.
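One simple deception primitive of the kind described here is the honeytoken: a decoy credential no legitimate process should ever touch, so any use of it signals an intruder. A minimal sketch, with hypothetical names and alerting:

```python
# Minimal sketch of a deception technique: a planted honeytoken
# credential whose use proves an attacker found and tried the decoy.
# Names and alerting are illustrative assumptions.
HONEYTOKEN = "AKIA-DECOY-9F3K2"   # fake cloud key seeded in a config file

def on_credential_use(key_id: str, source_ip: str) -> None:
    if key_id == HONEYTOKEN:
        # No legitimate workflow references this key, so fire an alert.
        print(f"ALERT: honeytoken used from {source_ip}")

on_credential_use("AKIA-DECOY-9F3K2", "203.0.113.7")
```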
What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Generative AI systems are powerful at processing vast amounts of data, detecting anomalies, and automating responses, but they can’t do it alone. Human expertise plays a crucial role in interpreting results, validating decisions, and tackling complex, out-of-the-box scenarios. While Gen AI acts as a protective shield, humans step in to handle the tougher security challenges. For a seamless and secure workplace, both must work together.
Humans guide AI to make fair and ethical decisions, reducing bias and discrimination. When Gen AI explains its reasoning, it not only builds trust but also helps security teams learn from its decision-making process. By refining AI models, adjusting detection thresholds, and ensuring systems stay adaptive, humans keep Gen AI effective. In cases of adversarial attacks, where attackers manipulate AI models, human judgment is key to spotting suspicious patterns and strengthening defences. Together, Gen AI and human insight create a stronger, smarter cybersecurity strategy.
How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Smaller organizations don’t require massive budgets to take advantage of generative AI for cybersecurity. Several cloud-based security tools now come with built-in AI features, such as real-time threat detection and automated response, making them an affordable option. Open-source AI models can also help businesses improve security without hefty licensing fees.
These organizations can also partner with Managed Security Service Providers (MSSPs), eliminating the need for in-house experts. Moreover, AI agents can handle monotonous tasks such as analysing logs, flagging unusual activity, and prioritising alerts. By combining budget-friendly Gen AI tools with human oversight and staff training, smaller businesses can strengthen their cybersecurity without overspending.
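As a sketch of the kind of budget-friendly log triage an open-source stack enables, the example below trains a tiny scikit-learn pipeline to prioritise log lines. The labelled sample is invented for illustration; a real deployment would train on the organization's own labelled logs:

```python
# Minimal sketch: low-budget log triage with an open-source stack
# (scikit-learn). The tiny labelled sample is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_logs = [
    "user alice logged in from known device",
    "scheduled backup completed successfully",
    "multiple failed sudo attempts for user root",
    "outbound connection to known-bad IP blocked",
]
labels = [0, 0, 1, 1]   # 0 = benign, 1 = needs attention

triage = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_logs, labels)

new_logs = ["failed sudo attempts for user admin from new host"]
for line, score in zip(new_logs, triage.predict_proba(new_logs)[:, 1]):
    print(f"priority={score:.2f}  {line}")
```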
What best practices would you recommend for implementing generative AI tools while minimising risks?
Generative AI tools can be implemented effectively with a cautious approach that minimizes risk. Quality data and sound security practices must be ensured so that models are trained without biased data and sensitive information is protected against leaks or manipulation. It is essential to incorporate Human-in-the-Loop (HITL) practices, allowing human oversight to validate AI decisions, reduce errors, and uphold ethical standards.
When handling critical data, strict access-control protocols should restrict any unauthorized use. Adversarial testing, a method for systematically probing an ML model, should be carried out regularly to spot vulnerabilities such as data poisoning or manipulation attempts before attackers can exploit them. Continuous monitoring is essential for identifying performance issues, adapting to evolving threats, and maintaining the model’s accuracy over time. By combining these approaches, organizations can safely and effectively utilize Gen AI in their cybersecurity frameworks.
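A minimal sketch of adversarial testing in this spirit: perturb inputs that a detector is known to catch and check whether its verdict flips. The toy detector and perturbations are illustrative assumptions, not a specific product's test suite:

```python
# Minimal sketch of adversarial testing: perturb known-malicious inputs
# and check whether a detector's verdict flips. The toy detector and
# perturbations are illustrative assumptions.
def naive_detector(log_line: str) -> bool:
    """Toy signature-style check standing in for a real model."""
    return "failed sudo" in log_line.lower()

def perturbations(line: str):
    yield line.replace("sudo", "su do")            # token splitting
    yield line.upper()                              # case manipulation
    yield line.replace("failed", "fai\u200bled")    # zero-width insertion

sample = "multiple failed sudo attempts for user root"
assert naive_detector(sample)                       # caught in original form

for variant in perturbations(sample):
    if not naive_detector(variant):
        print(f"evasion found: {variant!r}")        # a gap to fix before attackers do
```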
Artificial Intelligence
Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.
There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognising this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer to govern interactions specifically between AI agents and backend services. This new layer enables customers to detect and prevent AI bots, such as OpenAI's ChatGPT and Perplexity, from harvesting organizational data.
Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
Key enhancements to Cequence’s UAP platform include:
- Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information (see the sketch after this list).
- Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
- Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
- Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
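For readers curious what one building block of AI-bot blocking can look like, the sketch below rejects requests whose declared user agent matches a known AI crawler. It is a toy illustration only; as the telemetry above notes, most AI traffic hides behind generic user agents, which is why production platforms pair identifiers with behavioural detection:

```python
# Minimal sketch: refuse requests from declared AI crawlers by user
# agent. Illustrative only; not Cequence's implementation, and easily
# evaded by agents that spoof or omit their identity.
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_AI_AGENTS = ("gptbot", "perplexitybot", "ccbot", "claudebot")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "").lower()
        if any(bot in agent for bot in BLOCKED_AI_AGENTS):
            self.send_response(403)   # refuse declared AI crawlers
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```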
“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”
These extended capabilities will be generally available in June.
Artificial Intelligence
Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:
- Stop AI-powered threats
- Automate security and network operations
- Secure AI tools used by businesses
“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”
Key upgrades:
FortiAI-Assist – AI That Works for You
- Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
- Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
- AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.
FortiAI-Protect – Defending Against AI Threats
- Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
- Stops new malware with machine learning.
- Adapts to new attack methods in real time.
FortiAI-SecureAI – Safe AI Adoption
- Protects AI models, data, and cloud workloads.
- Prevents leaks from tools like ChatGPT.
- Enforces zero-trust access for AI systems.
FortiAI processes queries locally, ensuring sensitive data never leaves your network.
Artificial Intelligence
SandboxAQ Platform Tackles AI Agent “Non-Human Identity” Threats

SandboxAQ has announced the general availability of AQtive Guard, a platform designed to secure Non-Human Identities (NHIs) and cryptographic assets. This critical security solution arrives as organizations worldwide face increasingly sophisticated AI-driven threats capable of autonomously infiltrating networks, bypassing traditional defenses, and exploiting vulnerabilities at machine speed.
Modern enterprises are experiencing an unprecedented surge in machine-to-machine communications, with billions of AI agents now operating across corporate networks. These digital entities – ranging from legitimate automation tools to potential attack vectors – depend on cryptographic keys, digital certificates, and machine identities that frequently go unmanaged. This oversight creates massive security gaps that malicious actors can exploit, leading to potential data breaches, compliance violations, and operational disruptions.
“There will be more than one billion AI agents with significant autonomous power in the next few years,” stated Jack Hidary, CEO of SandboxAQ. “Enterprises are giving AI agents a vastly increased range of capabilities to impact customers and real-world assets. This creates a dangerous attack surface for adversaries. AQtive Guard’s Discover and Protect modules address this urgent issue.”
AQtive Guard addresses these challenges through its integrated Discover and Protect modules. The Discover component maintains continuous, real-time visibility into all NHIs and cryptographic assets including keys, certificates, and algorithms – a fundamental requirement for maintaining regulatory compliance. The Protect module then automates critical security workflows, enforcing essential policies like automated credential rotation and certificate renewal to proactively mitigate risks before they can be exploited.
At the core of AQtive Guard’s capabilities are SandboxAQ’s industry-leading Large Quantitative Models (LQMs), which provide organizations with unmatched visibility and control over their cryptographic infrastructure. This advanced technology enables enterprises to successfully navigate evolving security standards, including the latest NIST requirements, while maintaining robust protection against emerging threats.

“As organizations accelerate AI adoption and the use of agents and machine-to-machine communication across all business domains and functions, maintaining a real-time, accurate inventory of NHIs and cryptographic assets is an essential cybersecurity practice. Being able to automatically remediate vulnerabilities and policy violations identified is crucial to decrease time to mitigation and prevent potential breaches within the first day of use of our software,” said Marc Manzano, General Manager of Cybersecurity at SandboxAQ.
SandboxAQ has significantly strengthened AQtive Guard’s capabilities through deep technical integrations with two cybersecurity industry leaders. The platform now features robust integration with CrowdStrike’s Falcon® platform, enabling direct ingestion of endpoint data for real-time vulnerability detection and immediate one-click remediation. This seamless connection allows security teams to identify and neutralize threats with unprecedented speed.
Additionally, AQtive Guard now offers full interoperability with Palo Alto Networks’ security solutions. By analyzing and incorporating firewall log data, the platform delivers enhanced network visibility, improved threat detection, and stronger compliance with enterprise security policies across hybrid environments.
AQtive Guard delivers a comprehensive, AI-powered approach to managing NHIs and cryptographic assets through four key functional areas. The platform’s advanced vulnerability detection system aggregates data from multiple sources including major cloud providers like AWS and Google Cloud, maintaining a continuously updated inventory of all cryptographic assets.
The solution’s AI-driven risk analysis engine leverages SandboxAQ’s proprietary Cyber LQMs to accurately prioritize threats while dramatically reducing false positives. This capability is enhanced by an integrated GenAI assistant that helps security teams navigate complex compliance requirements and implement appropriate remediation strategies.
For operational efficiency, AQtive Guard automates the entire lifecycle management of cryptographic assets, including issuance, rotation, and revocation processes. This automation significantly reduces manual errors while eliminating the risks associated with stale or compromised credentials. The platform also provides robust compliance support with pre-configured rulesets for major regulatory standards, customizable query capabilities, and comprehensive reporting features. These tools help organizations accelerate their transition to new NIST standards while maintaining continuous compliance with evolving requirements.
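As a rough illustration of one such lifecycle step, the sketch below scans a directory of PEM certificates and flags any nearing expiry for renewal. It assumes the open-source cryptography package (version 42 or later for not_valid_after_utc) and is not a description of AQtive Guard's internals:

```python
# Minimal sketch: flag certificates nearing expiry for automated
# renewal. Illustrative assumption; requires the open-source
# `cryptography` package (>= 42 for not_valid_after_utc).
from datetime import datetime, timedelta, timezone
from pathlib import Path

from cryptography import x509

RENEWAL_WINDOW = timedelta(days=30)

def certs_needing_rotation(cert_dir: str):
    now = datetime.now(timezone.utc)
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        if cert.not_valid_after_utc - now < RENEWAL_WINDOW:
            yield pem.name, cert.not_valid_after_utc

# Hypothetical path; point at your own certificate store.
for name, expiry in certs_needing_rotation("/etc/pki/tls/certs"):
    print(f"rotate {name}: expires {expiry:%Y-%m-%d}")
```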
Available now as a fully managed, cloud-native solution, AQtive Guard is designed for rapid deployment and immediate impact. Enterprises can register for priority access to begin early adoption and conduct comprehensive risk assessments of their cryptographic infrastructure.