Cyber Security
Generative AI: Revolutionising Cybersecurity, But With Risks

Fadi Kanafani, General Manager – Middle East, Softserve, explores the pivotal role of generative AI in cybersecurity, outlining its benefits, risks, and the ethical considerations organisations must address
How is generative AI being utilised to enhance cybersecurity measures today?
Generative AI is reshaping cybersecurity by making threat detection and response faster, smarter, and more adaptive. It helps identify patterns in vast datasets, uncovering anomalies that traditional systems might miss. AI-driven models analyse attack behaviours in real time, allowing security teams to anticipate threats before they escalate.
It’s also being used to automate response mechanisms, isolating compromised systems and blocking malicious activity within seconds. Another critical use is cyber threat simulation, where AI generates attack scenarios to test an organisation’s defences, helping teams proactively close security gaps. The key isn’t just automation; it’s precision. When deployed effectively, generative AI doesn’t just react to attacks; it helps predict and prevent them.
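The anomaly-spotting idea described above can be illustrated with a minimal statistical baseline. This is a hypothetical sketch, not SoftServe's implementation; production systems use far richer models, but the principle of flagging deviations from a learned baseline is the same:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Flag indices whose value deviates from the baseline by more
    than `threshold` standard deviations (a simple z-score check)."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hourly login-failure counts; the spike at hour 9 stands out.
hourly_failures = [4, 5, 3, 6, 4, 5, 4, 3, 5, 120, 4, 5]
print(flag_anomalies(hourly_failures))  # → [9]
```

A real deployment would replace the z-score with a trained model and feed it many signals at once, but the output is the same kind of artefact: a short list of events worth a human analyst's attention.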
What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Like any powerful technology, generative AI is a double-edged sword. While it strengthens defences, it also introduces new risks. Cybercriminals are already leveraging AI to craft highly sophisticated phishing campaigns, deepfake attacks, and malware that can adapt in real time. AI-powered threats are harder to detect because they mimic human behaviour more convincingly, whether it’s fake emails, voice impersonations, or dynamically generated malicious code.
There’s also the risk of adversarial AI attacks, where threat actors manipulate AI models by feeding them deceptive data to bypass security controls. The challenge now isn’t just about detecting threats, it’s about detecting AI-generated threats before they gain an edge.
How can organisations leverage generative AI for proactive threat detection and response?
Generative AI can significantly shift cybersecurity from reactive to proactive. By analysing historical attack data, network traffic, and behavioral patterns, AI can flag potential threats before they escalate. Automated threat hunting is another game-changer. AI continuously scans for vulnerabilities and simulates attack scenarios to uncover weak spots before cybercriminals do. In incident response, AI speeds up decision-making by providing real-time risk assessments and suggested countermeasures. The key is integration. AI works best when it complements human expertise, enhancing visibility and response times rather than replacing critical decision-making.
What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
With AI making security decisions, bias, accountability, and data privacy are major concerns. AI models learn from data, and if that data contains biases, the AI’s decisions may be flawed. There’s also the issue of explainability: when AI flags a potential threat, security teams need to understand why and how that decision was made. Transparency is crucial. Organisations should implement AI governance frameworks, conduct regular audits, and ensure that AI-driven decisions always have a human checkpoint. Cybersecurity is about trust. AI should enhance it, not erode it.
What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
One of the biggest hurdles is the skill gap. Firstly, AI isn’t a plug-and-play solution, so security teams need a mix of cybersecurity expertise and AI literacy to deploy and manage these tools effectively. Then, we have the matter of data complexity. AI thrives on high-quality data, and cybersecurity environments generate massive, unstructured, and sometimes noisy datasets. Ensuring AI models are trained on the right data without introducing bias is critical.
Other key challenges are false positives, blind spots, and trust. AI can flag potential threats, but it still needs human validation to avoid unnecessary disruptions. Compliance and ethical concerns also come into play, as organisations must ensure AI-driven decisions align with regulatory requirements and don’t compromise user privacy. The best approach is a human-AI partnership, in which AI enhances security teams rather than replaces them, ensuring accuracy, adaptability, and control.
Cyber Security
Positive Technologies Reports 80% of Middle East Cyberattacks Compromise Confidential Data

A new study by cybersecurity firm Positive Technologies has shed light on the evolving cyber threat landscape in the Middle East, revealing that a staggering 80% of successful cyberattacks in the region lead to the breach of confidential information. The research, examining the impact of digital transformation, organized cybercrime, and the underground market, highlights the increasing exposure of Middle Eastern nations to sophisticated cyber threats.
The study found that one in three successful cyberattacks was attributed to Advanced Persistent Threat (APT) groups, which predominantly target government institutions and critical infrastructure. While the rapid adoption of new IT solutions is driving efficiency, it simultaneously expands the attack surface for malicious actors.
Cybercriminals in the region heavily utilize social engineering tactics (61% of cases) and malware (51%), often employing a combination of both. Remote Access Trojans (RATs) emerged as a primary weapon in 27% of malware-based attacks, indicating a common objective of gaining long-term access to compromised systems.
The analysis revealed that credentials and trade secrets (29% each) were the most sought-after data, followed by personal information (20%). This stolen data is frequently leveraged for blackmail or sold on the dark web. Beyond data theft, 38% of attacks resulted in the disruption of core business operations, posing significant risks to critical sectors like healthcare, transportation, and government services.
APT groups are identified as the most formidable threat actors due to their substantial resources and advanced technical capabilities. In 2024, they accounted for 32% of recorded attacks, with a clear focus on government and critical infrastructure. Their activities often extend beyond traditional cybercrime, encompassing cyberespionage and even cyberwarfare aimed at undermining trust and demonstrating digital dominance.
Dark web analysis further revealed that government organizations were the most frequently mentioned targets (34%), followed by the industrial sector (20%). Hacktivist activity was also prominent, with ideologically motivated actors often sharing stolen databases freely, exacerbating the cybercrime landscape.
The United Arab Emirates, Saudi Arabia, Israel, and Qatar, all leaders in digital transformation, were the most frequently cited countries on the dark web in connection with stolen data. Experts suggest that the prevalence of advertisements for selling data from these nations underscores the challenges of securing rapidly expanding digital environments, which cybercriminals are quick to exploit.
Positive Technologies analyst Alexey Lukash said, “In the near future, we expect cyberthreats in the Middle East to grow both in scale and sophistication. As digital transformation efforts expand, so does the attack surface, creating more opportunities for hackers of all skill levels. Governments in the region need to focus on protecting critical infrastructure, financial institutions, and government systems. The consequences of successful attacks in these areas could have far-reaching implications for national security and sovereignty.”
To help organizations build stronger defenses against cyberthreats, Positive Technologies recommends implementing modern security measures. These include vulnerability management systems that automate asset management and identify, prioritize, and remediate vulnerabilities. Positive Technologies also suggests using network traffic analysis tools to monitor network activity and detect cyberattacks. Another critical layer of protection is application security: such solutions are designed to identify vulnerabilities in applications, detect suspicious activity, and take immediate action to prevent attacks.
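The prioritisation step that such vulnerability management systems automate can be sketched as a risk-weighted ranking. The weights, field names, and CVE entries below are hypothetical illustrations, not Positive Technologies' actual product logic:

```python
# Hypothetical risk-based vulnerability prioritisation sketch; real
# systems draw on much richer asset, exposure, and threat-intel data.
vulns = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "internet_facing": True,  "exploit_known": True},
    {"cve": "CVE-2024-0002", "cvss": 7.5, "internet_facing": False, "exploit_known": False},
    {"cve": "CVE-2024-0003", "cvss": 8.1, "internet_facing": True,  "exploit_known": False},
]

def risk_score(v):
    # Weight the base CVSS severity by exposure and known exploitation.
    score = v["cvss"]
    if v["internet_facing"]:
        score *= 1.5
    if v["exploit_known"]:
        score *= 2.0
    return score

# Remediate in descending risk order, not raw CVSS order.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["cve"])
```

The point of the weighting is that an internet-facing flaw with a known exploit outranks a nominally "higher severity" internal one, which is the shift from raw vulnerability scanning toward risk-based remediation.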
Positive Technologies emphasizes the need for a comprehensive, result-driven approach to cybersecurity. This strategy is designed to prevent attackers from disrupting critical business processes. Scalable and flexible, it can be tailored to individual organizations, entire industries, or even large-scale digital ecosystems like nations or international alliances. The goal is to deliver clear, measurable results in cybersecurity—not just to meet compliance standards or rely on isolated technical fixes.
Cyber Security
Axis Communications Sheds Light on Video Surveillance Industry Perspectives on AI

Axis Communications has published a new report that explores the state of AI in the global video surveillance industry. Titled The State of AI in Video Surveillance, the report examines the key opportunities, challenges and future trends, as well as the responsible practices that are becoming critical for organisations in their use of AI. The report draws insights from qualitative research as well as quantitative data sources, including in-depth interviews with carefully selected experts from the Axis global partner network.
A leading insight featured in the report is the unanimous view among interviewees that interest in the technology has surged over the past few years, with more and more business customers becoming curious and increasingly knowledgeable about its potential applications.

Mats Thulin, Director AI & Analytics Solutions at Axis Communications
“AI is a technology that has the potential to touch every corner and every function of the modern enterprise. That said, any implementations or integrations that aim to drive value come with serious financial and ethical considerations. These considerations should prompt organisations to scrutinise any initiative or investment. Axis’s new report not only shows how AI is transforming the video surveillance landscape, but also how that transformation should ideally be approached,” said Mats Thulin, Director AI & Analytics Solutions at Axis Communications.
According to the Axis report, the move by businesses from on-premise security server systems to hybrid cloud architectures continues at pace, driven by the need for faster processing, improved bandwidth usage and greater scalability. At the same time, cloud-based technology is being combined with edge AI solutions, which play a crucial role by enabling faster, local analytics with minimal latency, a prerequisite for real-time responsiveness in security-related situations.
By moving AI processing closer to the source using edge devices such as cameras, businesses can reduce bandwidth consumption and better support real-time applications like security monitoring. As a result, the hybrid approach is expected to continue to shape the role of AI in security and unlock new business intelligence and operational efficiencies.
An emerging trend among businesses is the integration of diverse data for more comprehensive analysis, transforming safety and security. Experts predict that integrating additional sensory data, such as audio and contextual environmental factors captured on camera, can lead to enhanced situational awareness and more actionable insights, offering a more comprehensive understanding of events.
Combining multiple data streams can ultimately lead to improved detection and prediction of potential threats or incidents. For example, in emergency scenarios, pairing visual data with audio analysis can enable security teams to respond more quickly and precisely. This context-aware approach can potentially elevate safety, security and operational efficiency, and reflects how system operators can leverage and process multiple data inputs to make better-informed decisions.
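The fusion of data streams described above can be illustrated with a minimal weighted-score sketch. The weights, threshold, and function names are assumptions for illustration only, not taken from the Axis report; real systems use trained multimodal models rather than fixed weights:

```python
# Hypothetical sketch of combining video and audio detections into one
# context-aware alert decision.
def fused_alert_score(video_conf, audio_conf, weights=(0.6, 0.4)):
    """Weighted fusion of per-sensor detection confidences (0.0-1.0)."""
    wv, wa = weights
    return wv * video_conf + wa * audio_conf

def classify(score, threshold=0.7):
    return "alert" if score >= threshold else "monitor"

# The camera alone is unsure (0.55), but a glass-break sound (0.95)
# pushes the fused score over the alert threshold.
score = fused_alert_score(video_conf=0.55, audio_conf=0.95)
print(classify(score))  # → alert
```

The design point is exactly the one the article makes: neither sensor alone is decisive, but combining them yields an earlier, better-grounded decision for the operator.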
According to the Axis report, interviewees emphasised that responsible AI and ethical considerations are critical priorities in the development and deployment of new systems, raising concerns about decisions potentially based on biased or unreliable AI. Other risks highlighted include those related to privacy violations and how facial and behavioural recognition could have ethical and legal repercussions.
As a result, a recurring theme among interviewees was the importance of embedding responsible AI practices early in the development process. Interviewees also pointed to regulatory frameworks, such as the EU AI Act, as pivotal in shaping responsible use of technology, particularly in high-risk areas. While regulation was broadly acknowledged as necessary to build trust and accountability, several interviewees also stressed the need for balance to safeguard innovation and address privacy and data security concerns.
“The findings of this report reflect how enterprises are viewing the trend of AI holistically, working to have a firm grasp of both how to use the technology effectively and understand the macro implications of its usage. Conversations surrounding privacy and responsibility will continue but so will the pace of innovation and the adoption of technologies that advance the video surveillance industry and lead to new and exciting possibilities,” Thulin added.
Artificial Intelligence
CyberKnight Partners with Ridge Security for AI-Powered Security Validation

The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.
To support enterprises and government entities across the Middle East, Turkey and Africa (META) with identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, the world’s first AI-powered Offensive Security Validation Platform. Ridge Security’s products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.
RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).
“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”
“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”