Artificial Intelligence

Generative AI in Cybersecurity: Opportunities, Risks, and the Road Ahead

Ehab Adel, Director of Cybersecurity Solutions at Mindware, highlights the transformative impact of generative AI on the cybersecurity landscape. He emphasises how AI is enhancing threat detection, automating response mechanisms, and addressing vulnerabilities, while also acknowledging potential risks such as AI-driven cyberattacks.

How is generative AI used in cybersecurity today?
Generative AI plays a growing role in cybersecurity by improving threat detection, response times, and vulnerability management. AI can simulate cyberattacks by generating fake data, helping security systems recognise new threats. It also assists in malware analysis by creating new versions of malware to test how systems respond. Additionally, AI tools can automate responses to cyberattacks, speeding up reaction times. In vulnerability management, generative AI helps identify weaknesses in software and predict potential security risks, allowing organisations to take proactive measures before issues arise.
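To make the "simulated attack" idea concrete, the minimal sketch below (illustrative only, not a description of any specific product) generates mutated variants of a known phishing URL and tests whether a toy signature-based detector still flags them. The base URL, mutation rules, and detector are hypothetical placeholders.

```python
# Illustrative sketch only: generate mutated variants of a known-bad phishing
# URL and test whether a toy signature-based detector still flags them.
# The base URL, mutation rules, and detector are hypothetical placeholders.
import random

BASE_URL = "http://paypa1-secure-login.example.com/verify"

def mutate(url: str) -> str:
    """Apply one random, simple obfuscation to simulate an attack variant."""
    tricks = [
        lambda u: u.replace("login", "log1n"),                         # character substitution
        lambda u: u.replace("-", "."),                                 # delimiter swap
        lambda u: u.replace("http://", "https://"),                    # scheme change
        lambda u: u + "?session=" + str(random.randint(1000, 9999)),   # junk parameter
    ]
    return random.choice(tricks)(url)

def naive_detector(url: str) -> bool:
    """Toy signature check: flags URLs containing one known-bad token."""
    return "secure-login" in url

variants = [mutate(BASE_URL) for _ in range(20)]
missed = [v for v in variants if not naive_detector(v)]
print(f"{len(missed)}/{len(variants)} simulated variants evaded the signature")
```

In practice the variants would come from a generative model and the detector would be the organisation's real tooling, but the feedback loop is the same: generate, test, and retrain on the misses.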

What risks does generative AI bring to cybersecurity?
While generative AI offers many benefits, it also introduces significant risks. Cybercriminals could use AI to create more advanced malware or phishing attacks that are harder to detect. Deepfakes, powered by AI, can deceive individuals into revealing sensitive information by producing realistic fake videos or audio. Additionally, AI could help hackers craft attacks that bypass traditional security defenses, such as malware that adapts to avoid detection. Finally, the scale of attacks could increase, as AI enables criminals to quickly generate multiple variations of attacks, making them harder to defend against.

How can organisations use generative AI for proactive threat detection and response?
Organisations can use generative AI for proactive threat detection by utilising AI-driven behavioral analytics to analyse normal system behavior and detect any unusual activity. AI can also simulate attacks, helping organisations identify vulnerabilities before they can be exploited. Automated playbooks powered by AI can instantly trigger predefined actions when a threat is detected, speeding up the response process. Additionally, AI can analyse vast amounts of data from various sources to spot emerging threats and provide actionable insights, helping organisations stay ahead of potential attackers.
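As a rough illustration of behavioural analytics paired with an automated playbook (assuming scikit-learn's IsolationForest as the anomaly detector, invented session features, and a placeholder response action), a sketch might look like this:

```python
# Rough sketch, not a vendor implementation: baseline "normal" behaviour with
# scikit-learn's IsolationForest, then trigger a placeholder playbook action
# when a new session scores as anomalous. Features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [logins per hour, MB transferred, distinct hosts contacted]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[5, 50, 8], scale=[1, 10, 2], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def playbook_isolate_host(session_id: str) -> None:
    # Placeholder for a predefined automated response (e.g., quarantine via an EDR API).
    print(f"[playbook] isolating host for session {session_id}")

new_sessions = {
    "sess-42": [5.2, 48.0, 7.0],      # looks like normal activity
    "sess-99": [40.0, 900.0, 120.0],  # exfiltration-like spike
}
for session_id, features in new_sessions.items():
    if model.predict([features])[0] == -1:  # -1 means the model scored it as an outlier
        playbook_isolate_host(session_id)
```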

What ethical concerns come with using generative AI in cybersecurity, and how can they be addressed?
Generative AI in cybersecurity raises several ethical concerns. One major issue is bias in AI models, which can lead to missed threats or incorrect decisions if AI is trained on biased data. Misuse by cybercriminals is another concern, requiring strong regulations and oversight to prevent malicious use of AI. Privacy issues may arise if AI systems inadvertently collect sensitive information during network traffic monitoring, so clear privacy policies must be established. Additionally, accountability is crucial—organisations must ensure transparency in how AI makes decisions, so it’s clear who is responsible if something goes wrong.

What challenges do cybersecurity teams face when using generative AI?
Cybersecurity teams face several challenges when using generative AI. There is often a skill gap, as many teams may lack the expertise needed to effectively implement and use AI tools. The complexity of AI systems can also make them difficult to integrate into existing security infrastructures. AI can generate false alerts (false positives) or fail to detect real threats (false negatives), requiring ongoing tuning and optimisation. Additionally, AI tools can be resource-intensive, which may be difficult for smaller organisations to afford, creating budgetary constraints.

Are there examples where generative AI has successfully prevented or reduced cyberattacks?
Yes, there are several examples where generative AI has successfully reduced cyberattacks. IBM Watson for Cybersecurity uses AI to analyse vast amounts of security data, helping detect and respond to threats by identifying patterns and emerging risks. Darktrace is another example, where AI monitors systems in real-time and detects attacks, even identifying new threats before they can cause damage. Both solutions highlight the effectiveness of generative AI in improving threat detection and response times.

How do you see generative AI evolving in cybersecurity over the next few years?
Generative AI is expected to evolve significantly in the coming years. One major development will be smarter threat detection, with AI becoming better at recognising subtle threats, like new types of malware, more quickly. Autonomous defense is another key area, where AI will take over more decision-making during a cyberattack, responding without human intervention. Integration with blockchain technology is also likely, where AI could verify transactions and prevent fraud in real-time. The future will likely see a blend of AI and human collaboration, with AI handling analysis and response, while humans focus on higher-level strategic decisions.

What role does human oversight (HITL) play in AI cybersecurity systems?
Human oversight remains critical in AI cybersecurity systems. Humans must validate AI’s decisions to ensure they align with security policies and make sense in complex scenarios. Continuous feedback from security experts helps AI systems improve over time, adapting to new threats and improving accuracy. Additionally, ethical oversight is essential to ensure that AI tools are used responsibly, with due consideration for privacy, fairness, and transparency. Human involvement is key to maintaining trust and accountability in AI-driven cybersecurity systems.

How can smaller organisations with limited budgets use generative AI for cybersecurity?
Smaller organisations with limited budgets can still leverage generative AI for cybersecurity by using cloud-based AI security tools, which allow them to access advanced AI capabilities without the high costs of infrastructure. Open-source AI models are another affordable option, enabling smaller businesses to develop custom security solutions. Additionally, smaller organisations can partner with Managed Security Service Providers (MSSPs) that offer AI-powered cybersecurity solutions, providing access to expertise and advanced tools without the need for in-house specialists.

What best practices would you recommend for using generative AI while minimising risks?
To minimise risks while using generative AI, organisations should ensure that the AI is trained on diverse, high-quality data to avoid bias and inaccuracies. Regular audits are essential to monitor AI systems and verify that they function as intended, reducing the risk of errors. Human oversight is crucial to validate AI decisions and provide ethical guidance. Finally, organisations should start with small, controlled AI projects and gradually scale them as they become more comfortable with the technology and gain experience in managing its risks.
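One concrete way to operationalise the "regular audits" recommendation is sketched below: a hypothetical helper that re-scores a detection model against analyst-reviewed labels and flags when false-positive or false-negative rates drift outside policy. The thresholds and example data are placeholders.

```python
# Hypothetical audit helper: re-score a detection model against analyst-reviewed
# labels and check the error rates against policy thresholds. The thresholds
# and example data are placeholders for illustration only.
def audit(predictions: list[int], labels: list[int],
          max_fpr: float = 0.05, max_fnr: float = 0.02) -> dict:
    false_positives = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    false_negatives = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    fpr = false_positives / max(labels.count(0), 1)
    fnr = false_negatives / max(labels.count(1), 1)
    return {
        "false_positive_rate": round(fpr, 3),
        "false_negative_rate": round(fnr, 3),
        "within_policy": fpr <= max_fpr and fnr <= max_fnr,
    }

# Example run over last month's alerts as triaged by analysts (1 = real threat).
print(audit(predictions=[1, 0, 1, 0, 0, 1], labels=[1, 0, 0, 0, 1, 1]))
```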

Artificial Intelligence

CyberKnight Partners with Ridge Security for AI-Powered Security Validation


The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.

To support enterprises and government entities across the Middle East, Turkey and Africa (META) in identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, the world's first AI-powered Offensive Security Validation Platform. Ridge Security’s products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.

RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).

“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”

“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”

Artificial Intelligence

Cequence Intros Security Layer to Protect Agentic AI Interactions


Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognising this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer specifically to govern interactions between AI agents and backend services. This new layer of security enables customers to detect AI bots such as OpenAI’s ChatGPT and Perplexity, and prevent them from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
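As a simplified illustration of the attribution problem described above (not Cequence’s implementation), the sketch below tags requests whose User-Agent matches known LLM crawler tokens and leaves the rest unattributed, which, per the telemetry, is the common case; real attribution would add IP verification and behavioural signals.

```python
# Simplified sketch, not Cequence's method: attribute requests to known LLM
# crawlers by User-Agent token. The token list is illustrative and incomplete;
# most AI-related traffic hides behind generic agents, so real attribution
# also needs IP verification and behavioural signals.
KNOWN_AI_AGENTS = {
    "GPTBot": "OpenAI",
    "PerplexityBot": "Perplexity",
}

def attribute_request(user_agent: str) -> str:
    ua = user_agent.lower()
    for token, operator in KNOWN_AI_AGENTS.items():
        if token.lower() in ua:
            return operator
    return "unattributed"  # the common case, per the telemetry above

sample_logs = [
    "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # generic agent, could be anything
]
for ua in sample_logs:
    print(attribute_request(ua))
```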

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.

Artificial Intelligence

Fortinet Expands FortiAI Across its Security Fabric Platform


Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:
FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents leaks from tools like ChatGPT.
  3. Enforces zero-trust access for AI systems.

FortiAI processes queries locally, ensuring sensitive data never leaves your network.
