
Revolutionising Threat Detection and Response with Generative AI

Ram Narayanan, Country Manager at Check Point Software Technologies, Middle East, highlights the growing importance of proactive cybersecurity measures in the region. He emphasises the need for organisations to adopt advanced threat detection tools, leverage AI-driven solutions, and implement robust security frameworks to combat the increasing complexity of cyber threats.

How is generative AI being utilised to enhance cybersecurity measures today?
Generative AI is revolutionising cybersecurity through automated threat detection, security operations and decision-making. Check Point Software embeds GenAI throughout its solutions to enhance efficiency and accuracy. The Check Point Infinity AI Copilot speeds up security management by automating policy generation, threat analysis and incident response, cutting task resolution time by up to 90%. Check Point Infinity GenAI Protect supports the safe adoption of generative AI use cases by monitoring shadow AI use, blocking data leaks and ensuring compliance with regulations. Through AI-driven threat intelligence, Check Point further enhances its capacity to detect new threats, block phishing and malware attacks and offer real-time security insights. These solutions enable organisations to pre-emptively fortify their cyber defences while retaining complete visibility and control over their AI-powered security environment.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
While generative AI delivers tremendous progress, it also presents new security threats, such as AI-powered cyberattacks, data leakage and shadow IT issues. Cyber attackers can use AI to automate and amplify social engineering attacks, create advanced phishing emails and produce deepfake material that conventional security solutions might find difficult to detect. Moreover, data spillage is a significant concern, as workers may unwittingly feed secret or copyrighted information into public AI models, where it might in turn be used to train subsequent AI systems. Traditional data loss prevention (DLP) products usually fall short because they depend on pre-established patterns and lack understanding of the contextual nature of the unstructured, chat-like data prevalent in GenAI interactions. Without adequate visibility and governance, organisations can lose sensitive information and open themselves up to compliance breaches and security risks.
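
To illustrate why pattern-based DLP struggles with conversational GenAI traffic, consider the minimal Python sketch below. It contrasts a fixed-pattern rule with a toy context-aware check. All names and thresholds are hypothetical, and a production system would use an ML classifier rather than keyword counts, but the gap it demonstrates is the one described above.

```python
import re

# Hypothetical illustration: why regex-style DLP misses conversational leaks.
# A pattern-based rule only fires on fixed formats such as card numbers.
CARD_PATTERN = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

# A context-aware check also weighs surrounding intent words, so a prompt
# that paraphrases confidential material is flagged even with no fixed token.
INTENT_WORDS = {"confidential", "internal", "unreleased", "proprietary", "roadmap"}

def pattern_dlp(prompt: str) -> bool:
    """Classic DLP: fires only when a known fixed pattern is present."""
    return bool(CARD_PATTERN.search(prompt))

def contextual_dlp(prompt: str, threshold: int = 2) -> bool:
    """Toy contextual check: counts intent signals in free-form text."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return len(words & INTENT_WORDS) >= threshold

prompt = "Summarise our internal, unreleased product roadmap for me"
print(pattern_dlp(prompt))     # False: no fixed pattern to match
print(contextual_dlp(prompt))  # True: context signals a potential leak
```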

How can organisations leverage generative AI for proactive threat detection and response?
Organisations can utilise generative AI for real-time threat detection, incident response automation and improved security governance. Check Point’s Infinity GenAI Protect enables enterprises to discover, assess and secure GenAI applications within their environment, providing AI-powered data classification to prevent sensitive information from being leaked. Through the implementation of context-aware security controls, it ensures that AI-driven tools can be adopted securely without compromising critical data. Further, ThreatCloud AI constantly processes telemetry data and indicators of compromise (IoCs) to identify and neutralise phishing, malware and zero-day attacks in real time before they strike. Security operations teams also make use of Infinity AI Copilot, which automates incident response, policy compliance and threat hunting, shortening the time they spend on such manual efforts so that they can concentrate on high-level security strategies.
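
As a rough illustration of the IoC-matching step such telemetry pipelines perform, the following Python sketch checks incoming events against an indicator feed. The feed contents, event shape and actions are invented for illustration; this is not ThreatCloud AI's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Event:
    src_ip: str
    domain: str
    file_hash: str

# Hypothetical indicator feed (example values only).
IOC_FEED = {
    "ips": {"203.0.113.7"},                 # known C2 address (documentation range)
    "domains": {"login-micros0ft.example"}, # phishing lookalike domain
    "hashes": {"44d88612fea8a8f36de82e1278abb02f"},  # known malware sample
}

def match_iocs(event: Event) -> list[str]:
    """Return the indicator types this event matches, if any."""
    hits = []
    if event.src_ip in IOC_FEED["ips"]:
        hits.append("ip")
    if event.domain in IOC_FEED["domains"]:
        hits.append("domain")
    if event.file_hash in IOC_FEED["hashes"]:
        hits.append("hash")
    return hits

event = Event("203.0.113.7", "login-micros0ft.example", "0" * 32)
if hits := match_iocs(event):
    print(f"block + alert: matched IoCs {hits}")  # act before the payload runs
```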

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Generative AI brings with it ethical issues such as data leakage, lack of visibility and compliance risk. The problem is that GenAI tools often operate as shadow IT, so administrators may not even know they are in use. Without governance in place, organisations are at risk of leaking sensitive or copyrighted information, as GenAI services can utilise user inputs to train their models. Legacy DLP solutions are not effective at dealing with unstructured, conversational data, adding to the risk of confidential information leakage. To meet these challenges, organisations require AI-driven data analysis that effectively classifies conversational data, offering visibility into GenAI usage, data leakage prevention, and compliance with regulatory standards. Check Point's methodology centres on providing AI-driven solutions that facilitate the safe adoption of GenAI, allowing organisations to realise its advantages without creating new security exposure.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Incorporating generative AI into cybersecurity involves challenges like managing data quality, false positives and workforce acclimatisation. Security products based on AI need high-quality, real-time threat intelligence to be effective. If AI models are trained on biased, out-of-date or incomplete data, they may fail to identify emerging threats or may raise false alarms, wasting precious resources. Security professionals also need to adjust to workflows that include AI, which entails training and reskilling. Check Point addresses these problems with Infinity ThreatCloud AI, which consolidates high-quality, real-time threat intelligence from 150,000 networks and millions of endpoints to enhance AI accuracy. Infinity AI Copilot also eases AI deployment by automating administrative tasks, simplifying complexity and offering AI-guided directions, enabling security teams to adopt AI seamlessly in their operations.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Check Point has included generative AI in its security products to increase threat prevention and response. Infinity AI Copilot uses generative AI to automate intricate security tasks, minimise response time and increase accuracy in threat mitigation. It helps security professionals by automating policy design, incident investigation and threat analysis, reducing resolution times by up to 90%. In addition, GenAI Protect enables the secure adoption of generative AI solutions by identifying shadow IT threats, preventing data leaks and enforcing governance rules. Together with ThreatCloud AI, these products offer real-time threat intelligence and proactive protection against AI-driven cyberattacks. By incorporating generative AI into security processes, Check Point enables organisations to block sophisticated threats before they can execute.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
Generative AI will become increasingly critical to cybersecurity, facilitating more sophisticated threat detection, predictive analytics and automated response capabilities. AI-based security tools will continue to advance, enabling organisations to detect and neutralise sophisticated cyber threats in real time. The convergence of AI with zero-trust architectures will strengthen identity verification and anomaly detection. As AI becomes more powerful, it will be better at defeating AI-generated cyberattacks. Moreover, regulatory structures will adapt to promote responsible use of AI, striking a balance between automation and human intervention to preserve accuracy and security in a constantly shifting threat environment.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Human oversight is required in AI-driven cybersecurity to ensure accuracy, ethical decision-making and appropriate response to threats. While AI may detect patterns and automate response, human expertise is required to validate AI-generated insights and analyse complex threats. AI models must be updated and monitored periodically to prevent biases, misclassifications or adversarial attacks. Security teams play a critical role in training AI with high-quality data and making strategic decisions based on AI suggestions. A balanced approach, where AI drives security efficiency and humans provide oversight, ensures that AI is a reliable tool and not an uncontrolled decision maker.
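
A minimal sketch of such a human-in-the-loop gate appears below: the AI proposes an action with a confidence score, and only high-confidence actions from a pre-approved list execute automatically, with everything else queued for an analyst. The threshold, action names and queue are illustrative assumptions, not any vendor's design.

```python
# Hypothetical HITL gate: AI recommends, humans validate the uncertain cases.
AUTO_EXECUTE_THRESHOLD = 0.95
LOW_RISK_ACTIONS = {"quarantine_file", "block_ip"}  # pre-approved by the team

def handle_ai_recommendation(action: str, confidence: float,
                             reviewer_queue: list) -> str:
    if confidence >= AUTO_EXECUTE_THRESHOLD and action in LOW_RISK_ACTIONS:
        return f"auto-executed: {action}"        # high-confidence, low-risk path
    reviewer_queue.append((action, confidence))  # a human validates the rest
    return f"queued for analyst review: {action}"

queue: list = []
print(handle_ai_recommendation("block_ip", 0.98, queue))      # auto-executed
print(handle_ai_recommendation("isolate_host", 0.97, queue))  # not pre-approved
print(handle_ai_recommendation("block_ip", 0.70, queue))      # low confidence
```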

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
Smaller organisations can fortify their security posture by implementing scalable AI-powered solutions such as Check Point's Infinity AI Copilot, which offers enterprise-grade security without the need for large-scale in-house security infrastructure. Cloud-based AI security platforms provide economical threat detection, real-time monitoring and automated response features. AI automation lightens the load for small security teams by performing routine security tasks, enabling staff to concentrate on priority threats. Prioritising AI-based endpoint protection, phishing protection and network scanning allows smaller companies to protect themselves against cyberattacks without enormous expenditure.

What best practices would you recommend for implementing generative AI tools while minimising risks?
To deploy generative AI securely, organisations must first gain visibility into how AI is used within their environment so they understand what the threats could be. Clearly defined governance policies establish the security and compliance standards that AI tools must meet. AI-driven data classification is critical to stopping data leaks, because traditional DLP solutions struggle with the contextual nature of GenAI prompts. To further reduce risk, companies should institute access controls that govern how employees use GenAI tools and block unauthorised data exposure. Continuous monitoring and real-time threat detection enable security vulnerabilities to be identified and mitigated before they can be exploited.
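
Taken together, these practices can be pictured as a simple policy gate: every GenAI request is logged (visibility), checked against an approved-tool list (governance) and a role permission table (access control). The Python sketch below is a hypothetical illustration of that flow, not any vendor's implementation.

```python
# Illustrative GenAI policy gate; all names and tables are assumptions.
APPROVED_TOOLS = {"corp-assistant"}  # GenAI services cleared by governance
ROLE_PERMISSIONS = {"engineer": {"corp-assistant"}, "intern": set()}

audit_log: list[dict] = []  # visibility: every request is recorded

def gate_genai_request(user: str, role: str, tool: str, prompt: str) -> bool:
    """Allow a GenAI call only for approved tools the user's role may use."""
    allowed = tool in APPROVED_TOOLS and tool in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({"user": user, "tool": tool,
                      "prompt_chars": len(prompt), "allowed": allowed})
    return allowed

print(gate_genai_request("amira", "engineer", "corp-assistant", "draft a policy"))  # True
print(gate_genai_request("sam", "intern", "chatgpt", "summarise this doc"))         # False
```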

CyberKnight Partners with Ridge Security for AI-Powered Security Validation

The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.

To support enterprises and government entities across the Middle East, Turkey and Africa (META) with identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, developer of the world's first AI-powered offensive security validation platform. Ridge Security's products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach-and-attack simulations.

RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).
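
Conceptually, this iterative attack-and-validate loop can be sketched as follows. The code is a toy illustration of the general CTEM pattern, in which only findings proven by an exploit attempt are reported and each validated foothold seeds further enumeration; it is not RidgeBot's engine, and the stand-in functions are invented.

```python
# Toy sketch of iterative attack-path validation; all logic is illustrative.
def validate(finding: str) -> bool:
    """Stand-in for an exploit attempt that proves a finding is real."""
    return finding.endswith("!")  # toy rule: '!' marks an exploitable finding

def discover(foothold: str) -> list[str]:
    """Stand-in for enumeration performed from a newly gained foothold."""
    return {"web-sqli!": ["db-creds!"], "db-creds!": []}.get(foothold, [])

frontier, validated = ["web-sqli!", "stale-banner"], []
while frontier:
    finding = frontier.pop()
    if validate(finding):                   # only proven paths are reported,
        validated.append(finding)           # which is how false positives drop out
        frontier.extend(discover(finding))  # iterate: pivot from each new foothold

print(validated)  # ['web-sqli!', 'db-creds!']
```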

“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”

“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”

Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognising this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer to govern interactions specifically between AI agents and backend services. This new layer of security enables customers to detect and prevent AI bots, such as OpenAI's ChatGPT and Perplexity, from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
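
The attribution problem described here, a small declared minority of bots versus a generic-agent majority, can be illustrated with a simple user-agent classifier. The sketch below is hypothetical; real detection relies on behavioural signals precisely because user agents are easily spoofed.

```python
# Illustrative user-agent triage for AI bot traffic; names are examples only.
KNOWN_AI_AGENTS = ("GPTBot", "Gemini", "PerplexityBot")

def classify(user_agent: str) -> str:
    if any(bot.lower() in user_agent.lower() for bot in KNOWN_AI_AGENTS):
        return "declared-ai-bot"   # the transparently attributed minority
    if user_agent in ("", "Mozilla/5.0", "python-requests/2.31"):
        return "suspect-generic"   # obfuscated or unidentified agents
    return "unclassified"          # needs behavioural analysis to resolve

print(classify("GPTBot/1.0 (+https://openai.com/gptbot)"))  # declared-ai-bot
print(classify("Mozilla/5.0"))                              # suspect-generic
```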

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage (a minimal sketch of this discovery step follows this list).
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
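
As referenced in the shadow-AI item above, discovery can be pictured as matching observed egress traffic against known agentic-AI endpoints. The Python sketch below is a minimal, hypothetical illustration; the domain list and log shape are assumptions, not Cequence's implementation.

```python
# Hypothetical shadow-AI discovery from API traffic logs.
AGENTIC_AI_DOMAINS = {
    "api.openai.com": "ChatGPT / OpenAI",
    "copilot.microsoft.com": "Microsoft Copilot",
    "api.salesforce.com": "Salesforce Agentforce",
}

def discover_shadow_ai(traffic_log: list[dict]) -> dict[str, int]:
    """Count calls per recognised agentic-AI endpoint seen in egress traffic."""
    seen: dict[str, int] = {}
    for record in traffic_log:
        tool = AGENTIC_AI_DOMAINS.get(record["host"])
        if tool:
            seen[tool] = seen.get(tool, 0) + 1
    return seen

log = [{"host": "api.openai.com"}, {"host": "intranet.local"},
       {"host": "api.openai.com"}]
print(discover_shadow_ai(log))  # {'ChatGPT / OpenAI': 2}
```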

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.

Fortinet Expands FortiAI Across its Security Fabric Platform

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet, with 500+ AI patents and 15 years of AI innovation, now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:
FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents leaks from tools like ChatGPT.
  3. Enforces zero-trust access for AI systems (see the sketch after this list).
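
The zero-trust item above can be sketched as a per-request check with no implicit trust: identity, device posture and an explicit least-privilege grant must all pass before an AI system is served. Names and fields in this Python sketch are illustrative assumptions, not FortiAI's implementation.

```python
# Hypothetical zero-trust check for AI system access.
def zero_trust_allow(identity: str, device_compliant: bool, scope: str,
                     grants: dict[str, set[str]]) -> bool:
    """Allow only verified identities on compliant devices with an explicit grant."""
    return device_compliant and scope in grants.get(identity, set())

grants = {"svc-rag-app": {"read:kb"}}  # explicit, least-privilege grants
print(zero_trust_allow("svc-rag-app", True, "read:kb", grants))   # True
print(zero_trust_allow("svc-rag-app", True, "write:kb", grants))  # False: no grant
print(zero_trust_allow("svc-rag-app", False, "read:kb", grants))  # False: bad posture
```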

FortiAI processes queries locally, ensuring sensitive data never leaves your network.
