Artificial Intelligence
AI-Powered Cybersecurity: Proactive Defense in the Age of Automation

Ezzeldin Hussein, the Regional Senior Director for Solution Engineering – META at SentinelOne, underscores the dual nature of generative AI in cybersecurity. He says that while AI offers unprecedented capabilities in threat detection and response, it also opens new avenues for sophisticated cyberattacks.
How is generative AI being utilized to enhance cybersecurity measures today?
Generative AI automates threat detection, response, and prevention. AI models can analyze massive volumes of data to identify anomalies, patterns, and potential threats at speeds far beyond human capability. For example, AI-powered systems are used for predictive analytics, detecting emerging threats and vulnerabilities before they can be exploited. Threat hunting has become more efficient with AI’s ability to sift through vast amounts of security data, identifying subtle indicators of compromise that traditional methods might miss.
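To make the anomaly-detection idea concrete, here is a minimal, hypothetical Python sketch (assuming scikit-learn and NumPy are available) that trains an unsupervised model on synthetic “normal” traffic features and flags outliers. The feature set and contamination rate are placeholders; real systems engineer far richer features from live telemetry.

```python
# Illustrative only: unsupervised anomaly detection over synthetic log features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" behavior: [requests/min, bytes out, failed logins]
normal = rng.normal(loc=[60, 5_000, 1], scale=[10, 1_000, 1], size=(1_000, 3))

# Hypothetical suspicious records: high request rate, heavy egress, many failures
suspicious = np.array([[400, 90_000, 25], [350, 120_000, 40]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for row in suspicious:
    label = model.predict(row.reshape(1, -1))[0]  # -1 = anomaly, 1 = normal
    print(row, "ANOMALY" if label == -1 else "ok")
```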
Generative AI is also being leveraged in incident response to automate repetitive tasks like log analysis, reducing human error and improving response times. Tools like AI-SIEM (Security Information and Event Management) systems provide real-time insights, enriching alerts with contextual information and offering risk mitigation recommendations. Additionally, AI aids in phishing detection, malware analysis, and automated patching, fortifying defenses and enabling a more proactive security posture.
What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Adversaries can use AI to create highly convincing phishing emails, deepfake videos, and social engineering attacks, making fraud harder to detect. AI-generated malware and automated hacking tools can adapt in real time, evading traditional security defenses. Attackers can also exploit AI to scan for vulnerabilities at scale, accelerating the discovery and exploitation of security flaws.
Another risk is data poisoning, where threat actors manipulate AI models by injecting false data, leading to biased or inaccurate threat detection. Privacy concerns arise as AI systems require vast amounts of data, potentially exposing sensitive information. Additionally, false positives and adversarial AI attacks can manipulate security models, causing disruptions in automated defenses.
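The data-poisoning risk can be illustrated with a toy experiment: the sketch below (synthetic data, assuming scikit-learn) flips a share of training labels, the simplest form of poisoning, and compares the poisoned model’s test accuracy against a cleanly trained baseline.

```python
# Illustrative label-flipping attack on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)

# The "attacker" flips 30% of training labels before the model is trained.
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.30
y_poisoned = np.where(flip, 1 - y_tr, y_tr)

poisoned = LogisticRegression(max_iter=1_000).fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean.score(X_te, y_te))
print("poisoned model accuracy:", poisoned.score(X_te, y_te))
```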
How can organizations leverage generative AI for proactive threat detection and response?
Organizations can do this by enhancing real-time analysis, automation, and predictive security. AI-driven models can analyze massive datasets, detecting anomalies, malware signatures, and suspicious behaviors faster than traditional methods. Threat hunting powered by AI enables security teams to simulate cyberattacks, identify vulnerabilities, and strengthen defenses before exploitation occurs. AI can automate remediation for incident response by isolating compromised systems, blocking malicious IPs, and suggesting countermeasures. AI-driven tools also help combat phishing and social engineering by identifying deepfake content and fraudulent emails based on linguistic and behavioral analysis.
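As one concrete picture of such automated remediation, the hypothetical sketch below takes a high-confidence verdict from an upstream AI triage step and blocks the offending IP at a Linux host firewall via iptables (so root privileges are required). Real deployments would instead call their EDR or firewall vendor’s API, and the confidence threshold here is an arbitrary placeholder.

```python
# Illustrative containment step: block a malicious source IP via iptables.
import ipaddress
import subprocess

def block_ip(ip: str) -> None:
    """Append a DROP rule for a single source address (Linux, requires root)."""
    ipaddress.ip_address(ip)  # raises ValueError on malformed input
    subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)

# Hypothetical output of an AI triage pipeline
verdict = {"src_ip": "203.0.113.7", "confidence": 0.97}

# A human-set threshold keeps a gate on fully automated action
if verdict["confidence"] > 0.95:
    block_ip(verdict["src_ip"])
```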
Moreover, AI enhances code security by detecting software vulnerabilities and recommending secure alternatives. With adaptive learning, AI models continuously evolve to recognize emerging threats. Organizations should combine AI-driven security solutions with human expertise to maximize effectiveness, ensuring continuous monitoring, governance, and ethical AI deployment for a resilient cyber defense.
What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
The use of Generative AI in cybersecurity raises several ethical concerns. One major issue is bias in AI models, where inaccurate training data could lead to misidentification of threats or discrimination in decision-making. This can result in false positives or overlooking real threats, jeopardizing security. Another concern is privacy. AI models often require vast amounts of data, raising the risk of exposing sensitive information or violating privacy regulations like GDPR. Additionally, the potential for AI-driven cyberattacks creates an ethical dilemma, as adversaries could misuse generative AI to create sophisticated phishing, deepfake, or malware campaigns.
Organizations should implement transparent AI development to address these concerns, ensuring models are tested for bias and regularly updated with diverse, ethical datasets. Data privacy protocols must be in place to protect sensitive information, and strict regulatory compliance should guide the deployment of AI technologies. Ethical oversight and continuous human monitoring are essential to minimize risks.
What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
AI models are complex, requiring skilled personnel to configure, maintain, and optimize them. Data privacy and compliance concerns arise when AI systems process sensitive information, necessitating strict adherence to regulations. Teams must also contend with the risk of false positives or negatives, which could disrupt operations. Integration with existing tools and workflows can be time-consuming and costly. Finally, trust and accountability issues may arise, as AI decisions need constant human oversight to ensure effectiveness and ethical use.
Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Yes, SentinelOne’s solutions, particularly Purple AI and Singularity Hyperautomation, provide notable examples of how Generative AI can successfully prevent and mitigate cyberattacks.
Purple AI enhances threat detection by enabling security teams to conduct faster and more effective threat hunting and investigations using natural language prompts. This allows analysts of all skill levels to uncover hidden risks and investigate threats deeply, significantly reducing the Mean Time to Respond. By automating complex threat analysis and providing actionable insights, Purple AI helps teams stay ahead of attackers and respond swiftly to potential breaches.
Similarly, Singularity Hyperautomation streamlines and automates security workflows, enabling organizations to respond to threats faster. With over 100 pre-built integrations, it enhances visibility and enriches alerts with context, accelerating response to cyber incidents. By automating repetitive tasks and reducing alert volumes, it boosts overall operational efficiency, helping security teams focus on high-priority threats and mitigate attacks proactively.
How do you see generative AI evolving in the cybersecurity domain over the next few years?
Over the next few years, Generative AI will become increasingly integral to cybersecurity, evolving to provide more sophisticated threat detection, prevention, and response. AI models will advance in predictive capabilities, enabling earlier detection of zero-day exploits and emerging threats. As attackers use AI for more complex tactics, defensive AI will enhance automation, rapidly identifying and neutralizing threats. Adaptive learning will allow AI systems to continuously improve their responses, offering real-time, autonomous decision-making. Additionally, AI-powered tools will enable greater collaboration across security teams and integrate seamlessly with existing technologies, making defenses smarter and more efficient.
What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Human oversight, or Human-in-the-Loop (HITL), is essential to ensuring that Generative AI systems effectively manage cybersecurity threats. While AI can automate threat detection, analysis, and response at scale, human expertise is needed to maintain accuracy, ethics, and context. HITL ensures that experienced security professionals continually monitor and refine AI-driven decisions.
Humans provide critical judgment in complex scenarios where AI models might struggle, such as distinguishing sophisticated threats from false positives or interpreting context that AI may miss. Oversight also ensures compliance with ethical standards and privacy regulations, preventing misuse of sensitive data. Additionally, HITL facilitates feedback loops, allowing AI models to adapt and improve over time based on real-world experience and new threats.
By combining AI’s speed and efficiency with human insight, organizations can optimize their cybersecurity defenses, ensuring that AI systems are aligned with organizational goals and respond effectively to evolving threats.
How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Smaller organizations can leverage cloud-based AI tools and AI-powered cybersecurity platforms with scalable pricing. Many providers offer affordable, subscription-based models that don’t require significant upfront investment. Additionally, organizations can use open-source AI solutions or freemium versions of AI tools to enhance threat detection and response without heavy costs. AI-driven automation can help reduce the burden on small teams by handling repetitive tasks like log analysis and threat hunting. Finally, integrating AI into existing security infrastructures can improve operational efficiency and proactively address emerging threats without extensive resources.
What best practices would you recommend for implementing generative AI tools while minimizing risks?
To implement Generative AI tools effectively while minimizing risks, organizations should prioritize data privacy and compliance by ensuring that all data handled adheres to regulations such as GDPR and by implementing robust data governance policies to protect sensitive information. Maintaining a human-in-the-loop (HITL) approach is also crucial, as human oversight can review AI decisions, particularly in complex or high-risk situations, thereby ensuring accuracy and accountability.
Continuous monitoring of AI systems is essential to keep them aligned with evolving cybersecurity needs, which involves tracking performance, identifying biases, and making necessary adjustments. Additionally, using diverse and high-quality datasets for training improves the AI’s accuracy, and regular testing for vulnerabilities and biases before deployment is fundamental. Organizations should also limit AI autonomy by initially placing tools in a supportive role and gradually increasing their independence as reliability is demonstrated, ensuring a balance between AI capabilities and human control.
Artificial Intelligence
CyberKnight Partners with Ridge Security for AI-Powered Security Validation

The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach between $9 billion and $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.
To support enterprises and government entities across the Middle East, Turkey, and Africa (META) in identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, maker of the world’s first AI-powered Offensive Security Validation Platform. Ridge Security’s products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.
RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).
“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”
“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”
Artificial Intelligence
Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.
There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer specifically to govern interactions between AI agents and backend services. This new layer enables customers to detect and prevent AI bots, such as OpenAI’s ChatGPT and Perplexity, from harvesting organizational data.
Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
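The attribution problem described above (most AI-driven traffic hides behind generic user agents) can be pictured with a first-pass filter like the hypothetical sketch below. String matching catches only bots that identify themselves, which is why behavioral detection of the kind Cequence describes is needed for the rest; the token list here is illustrative and incomplete.

```python
# Illustrative first-pass attribution of AI crawler traffic by user agent.
KNOWN_AI_AGENTS = ("GPTBot", "ChatGPT-User", "PerplexityBot", "ClaudeBot")

def classify(user_agent: str) -> str:
    if any(token in user_agent for token in KNOWN_AI_AGENTS):
        return "declared-ai-bot"
    if not user_agent or user_agent.startswith(("python-requests", "curl")):
        return "generic-client"  # candidate for deeper behavioral analysis
    return "unattributed"

sample_logs = [
    "Mozilla/5.0 AppleWebKit/537.36 (compatible; GPTBot/1.2; +https://openai.com/gptbot)",
    "python-requests/2.32.0",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
]
for ua in sample_logs:
    print(f"{classify(ua):16} <- {ua}")
```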
Key enhancements to Cequence’s UAP platform include:
- Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
- Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss (a simplified illustration of this kind of baselining follows this list).
- Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
- Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
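One simplified way to picture the distinction between legitimate access and exfiltration, referenced in the second bullet above, is a per-client volume baseline. The heuristic below is a hypothetical sketch, not Cequence’s actual method; real platforms correlate many such signals, and the thresholds here are arbitrary.

```python
# Illustrative exfiltration heuristic: flag clients whose per-request record
# volume jumps far above their own historical baseline.
from collections import defaultdict
from statistics import mean, stdev

history: dict[str, list[int]] = defaultdict(list)

def check_request(client_id: str, records_returned: int) -> str:
    past = history[client_id]
    verdict = "learning"  # not enough baseline yet
    if len(past) >= 20:
        threshold = mean(past) + 4 * stdev(past)
        verdict = "suspect-exfiltration" if records_returned > threshold else "ok"
    past.append(records_returned)
    return verdict

# Normal usage builds a baseline, then a sudden bulk pull stands out
for n in [8, 12, 9, 11, 10] * 5:
    check_request("agent-42", n)
print(check_request("agent-42", 5_000))  # -> suspect-exfiltration
```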
“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”
These extended capabilities will be generally available in June.
Artificial Intelligence
Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:
- Stop AI-powered threats
- Automate security and network operations
- Secure AI tools used by businesses
“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”
Key upgrades:
FortiAI-Assist – AI That Works for You
- Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
- Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
- AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.
FortiAI-Protect – Defending Against AI Threats
- Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
- Stops new malware with machine learning.
- Adapts to new attack methods in real time.
FortiAI-SecureAI – Safe AI Adoption
- Protects AI models, data, and cloud workloads.
- Prevents leaks from tools like ChatGPT.
- Enforces zero-trust access for AI systems.
FortiAI processes queries locally, ensuring sensitive data never leaves your network.