Generative AI in Cybersecurity: Transforming Defense Strategies and Navigating Risks

Alexey Lukatsky, Managing Director and Cybersecurity Business Consultant at Positive Technologies, highlights how generative AI is transforming the cybersecurity landscape. He emphasizes its dual role as both a powerful tool for defense—enhancing threat detection, automating response, and improving readiness—and a potential risk, as it introduces new challenges such as AI-driven cyberattacks and ethical concerns.

How is generative AI being utilized to enhance cybersecurity measures today?
Generative AI (GenAI) is revolutionizing cybersecurity by automating threat detection, accelerating incident response, and improving defense mechanisms. AI-driven security tools analyze vast amounts of data to detect anomalies, generate attack simulations, and optimize security policies in real time. In the UAE and the broader Middle East, financial institutions and critical infrastructure sectors are actively adopting AI to mitigate cyber threats.

For instance, Dubai’s Digital Protection Initiative integrates AI for real-time risk assessment in the financial sector. AI-powered SOC automation, including fully autonomous SOCs, is also on the rise, reducing false positives and improving analyst efficiency where qualified personnel are scarce.
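
To make the idea concrete, here is a minimal sketch of the kind of anomaly detection such tools perform, using scikit-learn's IsolationForest on invented authentication-log features; the feature names and values are illustrative, not taken from any product mentioned here:

```python
# A minimal sketch of AI-assisted anomaly detection over authentication logs.
# Features and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical per-session features: login hour, failed attempts, data moved (MB)
normal = np.column_stack([
    rng.normal(10, 2, 500),    # business-hours logins
    rng.poisson(1, 500),       # occasional failed attempts
    rng.normal(50, 15, 500),   # typical data volume
])
suspicious = np.array([[3.0, 12, 900.0]])  # 3 a.m., many failures, bulk transfer

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks the session as anomalous
```

In production, models like this run continuously over streaming telemetry and feed their verdicts into SOC triage rather than acting alone.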

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
While GenAI enhances cybersecurity, it also introduces new attack vectors. Malicious actors can use AI to create highly convincing phishing emails, deepfake scams, and automated malware. Research by Positive Technologies has found that AI-powered phishing attacks are on the rise. Additionally, cybercriminals in the Middle East are using AI for social engineering attacks targeting financial institutions and government agencies. AI can also be exploited to bypass traditional security controls by generating code that evades detection, as demonstrated in a recent UAE-based cybercrime case involving AI-generated ransomware.

How can organizations leverage generative AI for proactive threat detection and response?
Organizations can use GenAI for threat intelligence automation, behavioral analytics, and predictive analytics. AI-driven SIEM, SOAR, and autonomous SOC solutions help detect early-stage cyber threats, reducing response time significantly. For example, MaxPatrol O2 prepares and executes a relevant response scenario to stop an attacker in under a minute.
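
As an illustration of the SOAR-style automation described above, here is a minimal, hypothetical triage playbook; the scoring, thresholds, and actions are invented for the example and are not MaxPatrol O2's actual logic:

```python
# A minimal sketch of a SOAR-style playbook: enrich an alert, score it,
# and choose a response. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    rule: str
    severity: int           # 1 (low) .. 10 (critical)
    asset_criticality: int  # 1 .. 5, from an assumed asset inventory

KNOWN_BAD_IPS = {"203.0.113.50"}  # placeholder threat-intel feed

def triage(alert: Alert) -> str:
    score = alert.severity * alert.asset_criticality
    if alert.source_ip in KNOWN_BAD_IPS:
        score += 20  # threat-intel enrichment raises priority
    if score >= 40:
        return f"isolate host; block {alert.source_ip} at the firewall"
    if score >= 15:
        return "open incident ticket for analyst review"
    return "log and monitor"

print(triage(Alert("203.0.113.50", "lateral-movement", severity=8, asset_criticality=4)))
```

Real platforms chain many more enrichment and containment steps, but the pattern—enrich, score, act—is the same.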

In the UAE, banks and telecom providers are deploying AI to identify fraud patterns in financial transactions. AI can also simulate cyberattacks, improving an organization’s response readiness through continuous penetration testing and attack surface analysis.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Key ethical concerns include bias in AI decision-making, data privacy issues, and potential misuse of AI models. In the Middle East, where data protection laws such as ADGM’s Data Protection Regulation and DIFC’s Data Protection Law are evolving, organizations must ensure AI systems comply with local data privacy regulations. Transparency is essential—companies should implement explainable AI (XAI) models to prevent unjustified access restrictions or false accusations based on AI-driven assessments. Another concern is the use of AI for offensive cybersecurity purposes, which requires global regulations to prevent AI from escalating cyber conflicts.
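
As a simple illustration of explainability in this context, the sketch below trains a toy fraud classifier and reports which features drive its decisions, using scikit-learn's permutation importance as a lightweight stand-in for fuller XAI methods; all data and feature names are invented:

```python
# A minimal sketch of making a model's decisions inspectable, in the spirit
# of XAI. The fraud features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["txn_amount", "new_device", "foreign_ip"]
X = np.column_stack([rng.normal(100, 30, 1000),
                     rng.integers(0, 2, 1000),
                     rng.integers(0, 2, 1000)])
# Label fraud mostly where a new device and a foreign IP coincide
y = ((X[:, 1] == 1) & (X[:, 2] == 1)).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name}: {imp:.3f}")  # shows which signals drive the verdicts
```

Surfacing this kind of evidence alongside each AI-driven decision is what lets an organization contest or justify an access restriction.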

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
The biggest challenges include data quality issues, model explainability, and integration with legacy systems. AI models require massive datasets to function effectively, but many Middle Eastern organizations lack proper data structuring. Another challenge is the high cost of AI implementation, which is a barrier for smaller businesses. Moreover, security teams often lack skilled AI professionals, making it difficult to manage AI-powered SOC operations. The UAE’s Cyber Security Council has launched initiatives to train professionals in AI-driven cybersecurity, but the skills gap remains a major hurdle.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Yes. In Saudi Arabia’s banking sector, AI-powered fraud detection systems have prevented millions in financial losses by identifying suspicious transactions in real time. Similarly, Dubai International Airport uses AI-driven anomaly detection to prevent data breaches in its network infrastructure. Another example is AI-driven endpoint protection, which has successfully blocked zero-day malware attacks in government institutions in the UAE.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
AI in cybersecurity is expected to shift towards autonomous defense systems and real-time threat neutralization. AI-powered self-healing networks will enable organizations to detect and mitigate attacks without human intervention. AI-driven deception technology will also advance, tricking attackers with fake data. The UAE is investing in AI research and cybersecurity R&D, particularly in Abu Dhabi’s Hub71 and Dubai’s Cyber Security Strategy, which will likely drive AI adoption in critical infrastructure protection and smart city security.
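
Deception technology can be illustrated with the classic honeytoken pattern: plant decoy credentials and alert on any use. The sketch below is deliberately simple; real AI-driven deception generates far more convincing decoys, and all names here are hypothetical:

```python
# A minimal sketch of deception via honeytokens: fake credentials that no
# legitimate user should ever touch, so any use signals an intruder.
import secrets

def make_honeytoken() -> dict:
    return {
        "username": f"svc-backup-{secrets.token_hex(2)}",
        "password": secrets.token_urlsafe(12),
    }

PLANTED = {make_honeytoken()["username"] for _ in range(3)}

def on_login_attempt(username: str) -> None:
    if username in PLANTED:
        # Any touch of a decoy account is high-confidence attacker activity
        print(f"ALERT: honeytoken credential used: {username}")

on_login_attempt(next(iter(PLANTED)))
```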

Positive Technologies participated in GISEC 2024 and GITEX 2024 in Dubai, dedicating its exhibits to the use of AI in security products. We saw huge interest in this area, which has led to many pilot projects in government organizations as well as in companies in the financial and oil sectors.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Human oversight is critical in AI-driven security to prevent false positives, biases, and misinterpretations. AI can detect threats, but human analysts provide context and decision-making expertise. UAE’s financial regulators require human verification in AI-powered fraud detection systems to avoid unnecessary account freezes. A hybrid AI-human approach is essential, where AI handles large-scale data analysis, while security experts focus on investigation and strategic response.
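
A minimal sketch of that hybrid workflow might look like the following, where the model scores events but disruptive actions are queued for analyst approval; the thresholds and the account-freeze policy are illustrative assumptions, not any regulator's actual rules:

```python
# A minimal sketch of human-in-the-loop gating: AI scores, humans approve
# anything disruptive. Thresholds are invented for the example.
def handle_fraud_score(account: str, score: float, review_queue: list) -> str:
    if score < 0.5:
        return "allow"
    if score < 0.9:
        review_queue.append(account)  # a human analyst decides
        return "flag for review"
    # Even high-confidence hits get a human check before an account freeze,
    # mirroring the verification requirement described above
    review_queue.append(account)
    return "hold transaction, escalate to analyst"

queue: list = []
print(handle_fraud_score("acct-1042", 0.93, queue))
print(queue)
```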

How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Smaller businesses can leverage AI-powered cloud security solutions that offer cost-effective threat detection. Many vendors provide AI-driven SOC-as-a-Service or virtual Security-Analyst-as-a-Service offerings, allowing SMBs to use AI for endpoint protection and log analysis without large upfront investments. Open-source AI tools provide free or low-cost alternatives. In the Middle East, government initiatives such as the UAE’s Smart Protection Program offer subsidized AI-driven security tools to support SMEs.

What best practices would you recommend for implementing generative AI tools while minimizing risks?

  1. Start with clear objectives: Define what AI should improve—threat detection, response automation, or risk assessment.
  2. Ensure regulatory compliance: Align AI implementation with UAE’s cybersecurity and data protection laws.
  3. Use explainable AI (XAI): Avoid “black-box” AI models that lack transparency in decision-making.
  4. Combine AI with human expertise: Use AI to enhance, not replace, security teams.
  5. Adopt a zero-trust architecture: AI-driven access control should work alongside strong identity verification.
  6. Conduct adversarial testing: Continuously test AI models against evolving threats to prevent exploitation.
  7. Monitor AI outputs regularly: Avoid over-reliance on AI-generated threat intelligence by validating its accuracy (a minimal validation sketch follows this list).
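
As promised above, a minimal sketch of practice 7: cross-check AI-suggested indicators against an independent source before acting on them. The feed and indicator values are placeholders, not real threat data:

```python
# A minimal sketch of validating AI-generated threat intel: act only on
# indicators corroborated by a second, independent source.
AI_SUGGESTED_IOCS = {"198.51.100.7", "203.0.113.9", "192.0.2.1"}
CORROBORATED = {"203.0.113.9"}  # e.g., confirmed by another intel feed

validated = AI_SUGGESTED_IOCS & CORROBORATED
unverified = AI_SUGGESTED_IOCS - CORROBORATED

print(f"block now: {sorted(validated)}")
print(f"hold for analyst review: {sorted(unverified)}")
```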

CyberKnight Partners with Ridge Security for AI-Powered Security Validation

The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.

To support enterprises and government entities across the Middle East, Turkey and Africa (META) in identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, maker of the world’s first AI-powered offensive security validation platform. Ridge Security’s products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulation.

RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).

“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”

“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”

Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer to govern interactions specifically between AI agents and backend services. This new layer enables customers to detect and prevent AI bots, such as OpenAI’s ChatGPT and Perplexity, from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
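
The easy half of this problem, blocking crawlers that declare themselves, can be sketched as WSGI-style middleware, as below; the user-agent list is illustrative, and the obfuscated traffic the article describes would require behavioral detection well beyond this:

```python
# A minimal sketch of user-agent gating for declared AI crawlers. Obfuscated
# or mislabeled agents will pass this check and need deeper analysis.
AI_BOT_SIGNATURES = ("GPTBot", "PerplexityBot", "ClaudeBot", "CCBot")

def ai_gate_middleware(app):
    def wrapped(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(sig in ua for sig in AI_BOT_SIGNATURES):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawling not permitted"]
        return app(environ, start_response)
    return wrapped
```

Well-behaved crawlers also honor robots.txt directives, but as the telemetry above suggests, most AI-related traffic does not identify itself at all.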

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss (a minimal detection sketch follows this list).
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
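
As referenced in the second bullet, here is a minimal sketch of scanning an API response for sensitive-data patterns before it reaches an AI agent; the regexes are simplified illustrations, and a real platform would pair them with behavioral analysis:

```python
# A minimal sketch of pattern-based sensitive-data detection in API
# responses. The regexes are deliberately crude illustrations.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_response(body: str) -> list[str]:
    return [name for name, rx in PATTERNS.items() if rx.search(body)]

hits = scan_response('{"card": "4111 1111 1111 1111", "note": "ok"}')
if hits:
    print(f"blocking response: matched {hits}")  # e.g., ['credit_card']
```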

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.

Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:
FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents data leaks via tools like ChatGPT (see the sketch after this list).
  3. Enforces zero-trust access for AI systems.
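
As referenced in item 2 above, a minimal sketch of screening outbound prompts to public GenAI tools for secrets before they leave the network; the patterns and blocking policy are illustrative assumptions, not FortiAI's implementation:

```python
# A minimal sketch of outbound prompt screening for GenAI leak prevention.
# Patterns are simplified examples of secrets that should never leave.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
]

def allow_prompt(prompt: str) -> bool:
    return not any(rx.search(prompt) for rx in SECRET_PATTERNS)

print(allow_prompt("Summarize this meeting transcript"))    # True
print(allow_prompt("Debug this: api_key = sk-abc123"))      # False
```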

FortiAI processes queries locally, ensuring sensitive data never leaves your network.
