As Adversarial GenAI Takes Off, Threat Intel Must Modernize

Written by Bart Lenaerts, Senior Product Marketing Manager, Infoblox

Generative AI, particularly large language models (LLMs), is driving a transformation in cybersecurity. Adversaries are drawn to GenAI because it lowers the barrier to creating deceptive content, which they use to sharpen intrusion techniques such as social engineering and detection evasion.

This article provides common examples of malicious GenAI usage, such as deepfakes, chatbot automation, and code obfuscation. More importantly, it makes the case for early warning of threat activity and for predictive threat intelligence capable of disrupting actors before they execute their attacks.

Example 1: Deepfake scams using voice cloning
At the end of 2024, the FBI warned that criminals were using generative AI to commit fraud on a larger scale and make their schemes more believable. GenAI tools like voice cloning reduce the time and effort needed to deceive targets with trustworthy-sounding audio messages. Voice cloning tools can even smooth out human tells, like a foreign accent or unusual vocabulary, that might otherwise signal fraud. While creating synthetic content isn't illegal in itself, it can facilitate crimes such as fraud and extortion. Criminals use AI-generated text, images, audio, and video to enhance social engineering, phishing, and financial fraud schemes.

Especially worrying is how easily cybercriminals can access these tools and how few security safeguards they include. A recent Consumer Reports investigation of six leading, publicly available AI voice cloning tools found that five have safeguards that are easy to bypass, making it simple to clone a person's voice without their consent.

Voice cloning technology works by taking an audio sample of a person speaking and extrapolating that person's voice into a synthetic audio file. Without safeguards in place, however, anyone who registers an account can simply upload audio of an individual speaking, such as a clip from a TikTok or YouTube video, and have the service imitate them.

Voice cloning has been used by threat actors in a variety of scenarios, from large-scale deepfake videos used in cryptocurrency scams to the imitation of voices during individual phone calls. A recent example that drew media attention is the so-called "grandparent" scam, in which a fabricated family emergency is used to persuade the victim to transfer funds.

Example 2: AI-powered chatbots
Actors often pick their victims carefully, gathering insights about their interests before setting them up for a scam. This initial research is used to craft the smishing message that draws the victim into a conversation. Personal notes like "I read your last social post and wanted to become friends" or "Can we talk for a moment?" are some of the openers our intel team discovered (step 1 in picture 2). While some of these messages may be paired with AI-modified pictures, what matters is that the actor invites the victim to the next step: a conversation on Telegram or another actor-controlled medium, far away from security controls (step 2 in picture 2).

Once the victim is on the new medium, the actor uses a range of tactics to keep the conversation going, such as invitations to local golf tournaments, Instagram follows, or AI-generated images. These AI bot-driven conversations can go on for weeks and include additional steps, like asking for a thumbs-up on YouTube or even a social media repost. At this point, the actor is assessing the victim and gauging how they respond. Sooner or later, the actor shows some goodwill and creates a fake investment account. Each time the victim reacts positively to the actor's requests, the balance in the fake account grows. Later, the actor may even request small amounts of investment money, promising an ROI of more than 25 percent. When the victim asks to collect their gains (step 3 in picture 2), the actor requests access to the victim's crypto account and exploits all the trust that has been established. At that moment, the scam comes to an end and the actor steals the crypto funds in the account.

While these conversations are time-intensive, they are rewarding for the scammer and can yield tens of thousands of dollars in ill-gotten gains. By using AI-driven chatbots, actors have found a productive way to automate these interactions and increase the efficiency of their efforts.

Infoblox Threat Intel tracks these scams to optimize threat intelligence production. Common characteristics found in malicious chatbots include:

  1. Grammar errors typical of AI-generated text, such as an extra space after a period or phrasing that references foreign languages
  2. Using vocabulary that includes fraud-related terms
  3. Forgetting details from past conversations
  4. Repeating messages mechanically due to poorly trained AI chatbots (also known as parroting)
  5. Making illogical requests, like asking if you want to withdraw your funds at irrational moments in the conversation
  6. Using false press releases posted on malicious sites
  7. Opening conversations with commonly used phrases to lure the victim
  8. Requesting specific cryptocurrency types that are commonly used in criminal communities

Combinations of these fingerprints allow threat intel researchers to spot emerging campaigns and trace them back to the actors and their malicious infrastructure.
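To make these fingerprints concrete, here is a minimal, illustrative sketch of how a defender might score individual chat messages against a few of the indicators above. The fraud terms, opener phrases, and weights are hypothetical placeholders for this example and do not reflect Infoblox's actual detection logic.

```python
import re

# Hypothetical indicator lists for this sketch only.
FRAUD_TERMS = {"guaranteed returns", "withdraw", "wallet address", "usdt"}
OPENING_LINES = {
    "i read your last social post and wanted to become friends",
    "can we talk for a moment?",
}

def score_message(message: str, history: list[str]) -> int:
    """Return a crude suspicion score for a single chat message."""
    score = 0
    text = message.lower().strip()

    # Fingerprint 1: grammar artifact, e.g. an extra space after a period.
    if re.search(r"\.\s{2,}\S", message):
        score += 1

    # Fingerprint 2: fraud-related vocabulary.
    if any(term in text for term in FRAUD_TERMS):
        score += 2

    # Fingerprint 4: parroting, i.e. the same message repeated mechanically.
    if text in (m.lower().strip() for m in history):
        score += 2

    # Fingerprint 7: commonly used opening phrases.
    if text in OPENING_LINES:
        score += 1

    return score

if __name__ == "__main__":
    past = ["Can we talk for a moment?"]
    print(score_message("Can we talk for a moment?", past))  # opener + parroting
```

In practice, heuristics like these would be correlated with infrastructure data rather than used on their own.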

Example 3: Code obfuscation and evasion
Threat actors are not using GenAI only to create human-readable content. Several news outlets have explored how GenAI helps actors obfuscate their malicious code. Earlier this year, Infosecurity Magazine published details of how threat researchers at HP Wolf discovered social engineering campaigns spreading the VIP Keylogger and 0bj3ctivityStealer malware, both of which involved malicious code embedded in image files.
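As a small defensive illustration of this technique, the sketch below flags image files that carry extra bytes after the format's end-of-image marker, one common way a payload is appended to an otherwise valid image. This is a generic heuristic written for this article, not HP Wolf's detection method, and it deliberately ignores edge cases.

```python
from pathlib import Path

JPEG_EOI = b"\xff\xd9"   # JPEG end-of-image marker
PNG_IEND = b"IEND"       # PNG final chunk name (followed by a 4-byte CRC)

def trailing_bytes(path: str) -> int:
    """Return the number of bytes found after the image's end marker (0 = clean)."""
    data = Path(path).read_bytes()
    if data[:2] == b"\xff\xd8":                       # JPEG magic number
        end = data.rfind(JPEG_EOI)
        return len(data) - (end + 2) if end != -1 else 0
    if data[:8] == b"\x89PNG\r\n\x1a\n":              # PNG signature
        end = data.rfind(PNG_IEND)
        return len(data) - (end + 8) if end != -1 else 0  # name + CRC = 8 bytes
    return 0

if __name__ == "__main__":
    import sys
    for image in sys.argv[1:]:
        extra = trailing_bytes(image)
        if extra:
            print(f"{image}: {extra} bytes after end marker; inspect manually")
```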

To improve the efficiency of their campaigns, actors are repurposing and stitching together existing malware with GenAI to evade detection. This approach also helps them stand up threat campaigns faster and reduces the skills needed to construct infection chains. HP Wolf's threat research estimates an 11 percent increase in evasion for email threats, while other security vendors such as Palo Alto Networks report that GenAI-rewritten samples flipped their own malware classifier's verdicts into false negatives 88 percent of the time. Threat actors are clearly making progress in their AI-driven evasion efforts.

Making the case for modernizing threat research
Because AI-driven attacks pose so many detection-evasion challenges, defenders need to look beyond traditional tools such as sandboxing, or indicators derived from incident forensics, to produce effective threat intelligence. One such opportunity lies in tracking pre-attack activities instead of sending the latest suspicious payload to a slow sandbox.

Just as in a standard software development lifecycle, threat actors go through multiple stages before launching attacks. First, they develop or generate new variants of their malicious code using GenAI. Next, they set up infrastructure such as email delivery networks or hard-to-trace traffic distribution systems. This often happens in combination with domain registrations or, worse, the hijacking of existing domains.

Finally, the attack goes into "production," meaning the domains become weaponized and ready to deliver malicious payloads. This is the stage where traditional security tools attempt to detect and stop threats, because it involves easily accessible endpoints or network egress points within the customer's environment. Because of GenAI-enabled evasion and deception, this point of detection may no longer be effective, as actors continuously alter their payloads or mimic trustworthy sources.
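One pre-attack signal defenders can compute today is the age of a domain the first time it is observed, since freshly registered domains are disproportionately used in the weaponization stage described above. The sketch below assumes the third-party python-whois package and an illustrative 30-day threshold; both are assumptions made for this example rather than recommendations.

```python
from datetime import datetime, timezone

import whois  # third-party "python-whois" package, assumed available for this sketch

def domain_age_days(domain: str) -> int | None:
    """Return the age of a domain in days, or None if WHOIS data is unavailable."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):            # some registries return several dates
        created = min(created)
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days

if __name__ == "__main__":
    age = domain_age_days("example.com")
    if age is not None and age < 30:          # illustrative "newly registered" cut-off
        print(f"example.com is only {age} days old; treat as higher risk")
```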

The Value of Predictive Intelligence Based on DNS Telemetry
To stay ahead of these evolving threats, organizations should consider leveraging predictive intelligence derived from DNS telemetry. DNS data plays a crucial role in identifying malicious actors and their infrastructure before attacks even occur. Unlike payloads that can be altered or disguised using GenAI, DNS data is inherently transparent across multiple stakeholders—such as domain owners, registrars, domain servers, clients, and destinations—and must be 100% accurate to ensure proper connectivity. This makes DNS an ideal source for threat research, as its integrity makes it less susceptible to manipulation.

DNS analytics also provides another significant advantage: domains and malicious DNS infrastructures are often configured well in advance of an attack or campaign. By monitoring new domain registrations and DNS records, organizations can track the development of malicious infrastructure and gain insights into the early stages of attack planning. This approach enables the identification of threats before they’re activated.
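As a minimal illustration of that idea, the sketch below scans a feed of newly observed domains for lookalikes of protected brand names using simple string similarity. The feed file name, brand list, and similarity threshold are all hypothetical; a production system would combine many more DNS-derived signals.

```python
from difflib import SequenceMatcher

# Hypothetical inputs: a local file of newly observed domains (one per line)
# and a list of brand names to protect.
PROTECTED = ["infoblox", "paypal", "microsoft"]
THRESHOLD = 0.8   # illustrative similarity cut-off

def second_level_label(domain: str) -> str:
    """Return the registrable label, e.g. 'paypa1-secure' for 'login.paypa1-secure.com'."""
    parts = domain.lower().strip(".").split(".")
    return parts[-2] if len(parts) >= 2 else parts[0]

def lookalikes(feed_path: str) -> list[tuple[str, str, float]]:
    """Return (domain, brand, similarity) tuples for suspicious new registrations."""
    hits = []
    with open(feed_path, encoding="utf-8") as feed:
        for line in feed:
            domain = line.strip()
            if not domain:
                continue
            label = second_level_label(domain)
            for brand in PROTECTED:
                ratio = SequenceMatcher(None, label, brand).ratio()
                if ratio >= THRESHOLD and label != brand:
                    hits.append((domain, brand, ratio))
    return hits

if __name__ == "__main__":
    for domain, brand, ratio in lookalikes("new_domains.txt"):
        print(f"{domain} resembles {brand} ({ratio:.2f}); candidate for blocking")
```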

Conclusion
The evolving landscape of AI and the impact on security is significant. With the right approaches and strategies, such as predictive intelligence derived from DNS, organizations can truly get ahead of GenAI risks and ensure that they don’t become patient zero.

Help AG and F5 Collaborate on Managed App and API Security

Help AG, the cybersecurity arm of e& enterprise, has become the first Managed Services Provider (MSP) partner for F5 in the Middle East. Building on their existing relationship, Help AG is now offering a new Managed App and API Protection Service based on the F5 Distributed Cloud Platform. This service is designed to provide continuous, cloud-delivered security for modern digital systems, including those in public, private, edge, and hybrid cloud environments.

Today’s threat landscape is increasingly complex. As businesses move towards API-driven architectures, edge computing, and cloud-native applications, they expose a wider attack surface. Security teams face growing pressure from automated bot attacks, API misuse, and sophisticated Distributed Denial of Service (DDoS) attempts. Many organizations also lack the necessary knowledge and tools to defend against these attacks effectively.

Help AG’s new service directly addresses these challenges. It offers multi-layered protection as a managed, Software-as-a-Service (SaaS) solution. The service uses F5’s globally recognized Distributed Cloud Services and is operated 24/7 by Help AG’s expert Security Operations Center (SOC) team. This allows clients to streamline operations, meet compliance requirements, and respond to threats in real time. Businesses can now deploy resilient, compliant, and cost-efficient application protection, backed by Help AG’s local expertise.

Stephan Berner, CEO of Help AG, stated, “This partnership with F5 is a major step forward for enterprise security. It reflects our shared goal of securing every application, API, and digital interaction at scale. This new service provides regional organizations with enterprise-grade security that is proactive, cost-effective, and built for the cloud-first era.”

The new solution offers unified protection that includes Web Application Firewall (WAF), advanced bot mitigation, API discovery and security, and DDoS defense. All these features are managed through a centralized SaaS-based console, providing full visibility and control. Clients also benefit from flexible deployment options across various locations and continuous support and tuning from Help AG’s expert teams.

Mustapha Hlil, Director of Channel Sales for the Middle East, Türkiye and Africa at F5, commented, “As cyber threats grow more sophisticated, the need for always-on, adaptable security is critical. Help AG’s security expertise, managed services leadership, and 24/7 SOC support, combined with the F5 Distributed Cloud platform, offer a powerful solution. This will greatly help enterprises that lack the in-house expertise to deploy and manage security solutions.”

This launch marks a new phase in the Help AG and F5 partnership, reinforcing their commitment to securing the region’s digital future and helping organizations build trust in their digital interactions.

Cloud Security Trade-Offs Rise: 91% of Leaders Face AI Threats

Gigamon has released its 2025 Hybrid Cloud Security Survey, revealing that hybrid cloud infrastructure is under mounting strain from the growing influence of artificial intelligence (AI). The annual study, now in its third year, surveyed over 1,000 Security and IT leaders across the globe. As cyberthreats increase in both scale and sophistication, breach rates have surged to 55 percent during the past year, a 17 percent year-on-year (YoY) rise, with AI-generated attacks emerging as a key driver of this growth.

Security and IT teams are being pushed to a breaking point, with the economic cost of cybercrime now estimated at $3 trillion worldwide according to the World Economic Forum. As AI-enabled adversaries grow more agile, organizations are challenged with ineffective and inefficient tools, fragmented cloud environments, and limited intelligence.

Key findings highlight how AI is reshaping hybrid cloud security priorities:

  • AI’s role in escalating network complexity and accelerating risk is evident. The study reveals that 46 percent of Security and IT leaders say managing AI-generated threats is now their top security priority. One in three organizations report that network data volumes have more than doubled in the past two years due to AI workloads, while nearly half of all respondents (47 percent) are seeing a rise in attacks targeting their organization’s large language model (LLM) deployments. More than half (58 percent) say they’ve seen a surge in AI-powered ransomware, up from 41 percent in 2024, underscoring how adversaries are exploiting AI to outpace and outflank existing defenses.
  • Compromises highlight continued trade-offs in foundational areas of hybrid cloud security. Nine out of ten (91 percent) Security and IT leaders admit to making compromises in securing and managing their hybrid cloud infrastructure. The key challenges behind these compromises include a lack of clean, high-quality data to support secure AI workload deployment (46 percent) and a lack of comprehensive insight and visibility across their environments, including lateral movement in East-West traffic (47 percent).
  • Public cloud risks prompt industry recalibration. Once considered an acceptable risk in the rush to scale post-COVID operations, the public cloud is now coming under increasingly intense scrutiny. Many organizations are rethinking their cloud strategies in the face of their growing exposure, with 70 percent of Security and IT leaders now viewing the public cloud as a greater risk than any other environment. As a result, 70 percent report their organization is actively considering repatriating data from public to private cloud due to security concerns and 54 percent are reluctant to use AI in public cloud environments, citing fears around intellectual property protection.
  • Visibility is top of mind for security leaders. As cyberattacks become more sophisticated, the limitations of existing security tools are coming sharply into focus. Organizations are shifting their priorities toward gaining complete visibility into their environments, a capability now seen as crucial for effective threat detection and response. More than half (55 percent) of respondents lack confidence in their current tools’ ability to detect breaches, citing limited visibility as the core issue. As a result, 64 percent say their number one focus for the next 12 months is achieving real-time threat monitoring delivered through having complete visibility into all data in motion.

With AI driving unprecedented traffic volumes, risk, and complexity, nearly nine in 10 (89 percent) Security and IT leaders cite deep observability as fundamental to securing and managing hybrid cloud infrastructure. Executive leadership is taking notice, as boards increasingly prioritize complete visibility into all data in motion, with 83 percent confirming that deep observability is now being discussed at the board level to better protect hybrid cloud environments.

“Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments,” said Mark Jow, technical evangelist, EMEA, at Gigamon. “Deep observability addresses this challenge by combining MELT data with network-derived telemetry such as packets, flows, and metadata, delivering increased visibility and a more informed view of risk. It enables teams to eliminate visibility gaps, regain control, and act proactively with increased confidence. With 88 percent of Security and IT leaders agreeing it is critical to securing AI deployments, deep observability is fast becoming a strategic imperative.”

“With nearly half of organizations saying attackers are already targeting their large language models, AI security can’t be an afterthought, it needs to be a top priority,” said Mark Walmsley, CISO at Freshfields. “The key to staying ahead? Visibility. When we can clearly see what’s happening across AI systems and data flows, we can cut through the noise and manage risk more effectively. Deep observability helps us spot vulnerabilities early and put the right protections in place before issues arise.”

VAST Data Intros AI OS Built for the Age of Agents

VAST Data has announced the result of nearly a decade of relentless innovation with the unveiling of the VAST AI Operating System, a revolutionary platform purpose-built to fuel the next wave of AI breakthroughs. Since the beginning of computing, every major technological revolution has been defined by the emergence of a new operating system. From the PC, to mobile, to the cloud – a unified software layer has abstracted complexity, democratized the use of new hardware, and reshaped how the world computes, communicates and innovates.

Now, as AI redefines the fabric of business and society, the industry again finds itself at the dawn of a new computing paradigm – one where trillions of intelligent agents will reason, communicate, and act across a global grid of millions of GPUs that are woven across edge deployments, AI factories and cloud data centers. To make this world accessible, programmable, and operational at extreme scale, a new generation of intelligent systems requires a new software foundation.

The launch of the VAST AI Operating System comes as the company has reached a historic milestone: the fastest path to $2 billion in cumulative bookings of any data company in history. With nearly 5x year-over-year growth in the first quarter of this year compared to last, and a cashflow-positive business model, VAST’s hypergrowth reflects the market’s demand for an operating system purpose-built to operationalize AI at unprecedented scale.

The VAST AI Operating System is the product of nearly ten years of engineering toward a single purpose: to create an intelligent platform architecture that can harness this new generation of AI supercomputing machinery and unlock the potential of AI at scale. Developed from a clean sheet of paper, the platform is built on VAST’s groundbreaking Disaggregated Shared-Everything (DASE) architecture, the world’s first true parallel distributed system architecture – making it possible to completely parallelize AI and analytics workloads, federate clusters into a unified computing and data cloud and then feed new AI workloads with near-infinite amounts of data from one fast and affordable tier of storage.

Today, DASE clusters support over 1 million GPUs around the world in many of the world’s most data intensive computing centers. The scope of the AI OS is broad and will consolidate disparate legacy IT technologies into one simple and modern offering designed to democratize AI computing. What VAST was inventing from the start was conceived not as a collection of features, but as an entirely new computing substrate — one that unifies data, compute, messaging, and reasoning. A system built to capture data from the natural world at extreme scale, enrich it with AI-driven context in real time, and drive agentic workflows. Today, that invention takes shape as the VAST AI Operating System…a continuation of VAST’s pursuit toward building a Thinking Machine.

“This isn’t a product release — it’s a milestone in the evolution of computing,” said Renen Hallak, Founder & CEO of VAST Data. “We’ve spent the past decade reimagining how data and intelligence converge. Today, we’re proud to unveil the AI Operating System for a world that is no longer built around applications — but around agents.” The AI Operating System consists of every aspect of a distributed system to run AI at global scale: a kernel to run platform services on from private to public cloud, a runtime to deploy AI agents with, eventing infrastructure for real-time event processing, messaging infrastructure, and a distributed file and database storage system that can be used for real-time data capture and analytics.

This year, AI models and agents now come to life within the VAST AI Operating System. In 2024, VAST previewed the VAST InsightEngine – a service that extracts context from unstructured data using AI embedding tools. If the VAST InsightEngine prepares data for AI using AI, VAST AgentEngine is how AI now comes to life with data – an auto-scaling AI agent deployment runtime that equips users with a low-code environment to build intelligent workflows, select reasoning models, define agent tools, and operationalize reasoning.

The AgentEngine features a new AI agent tool server that lets agents invoke data, metadata, functions, web search, or even other agents as MCP-compatible tools. AgentEngine allows agents to assume multiple personas with different purposes and security credentials, and provides secure, real-time access to different tools. The platform’s scheduler and fault-tolerant queuing mechanisms also ensure agent resilience against machine or service failure.
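To illustrate the general pattern of persona-scoped tool invocation described here, below is a deliberately generic sketch of a tool registry that checks an agent persona's permissions before dispatching a call. The class names and checks are invented for this example; they do not represent VAST's AgentEngine API or the MCP specification.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Persona:
    """A hypothetical agent identity with its own tool permissions."""
    name: str
    allowed_tools: set[str] = field(default_factory=set)

class ToolRegistry:
    """Toy registry: tools are plain callables gated by a persona's permissions."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., object]] = {}

    def register(self, name: str, func: Callable[..., object]) -> None:
        self._tools[name] = func

    def invoke(self, persona: Persona, name: str, *args, **kwargs) -> object:
        # Enforce the persona's security credentials before dispatching the call.
        if name not in persona.allowed_tools:
            raise PermissionError(f"persona '{persona.name}' may not call '{name}'")
        return self._tools[name](*args, **kwargs)

if __name__ == "__main__":
    registry = ToolRegistry()
    registry.register("web_search", lambda query: f"results for {query!r}")

    analyst = Persona("analyst", allowed_tools={"web_search"})
    print(registry.invoke(analyst, "web_search", "dns threat intel"))
```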

Finally, AgentEngine introduces massively scalable agentic workflow observability. With VAST's approach to parallel, distributed tracing, the VAST AI OS makes it simple for developers to get a unified view into massively scaled, complex agentic pipelines.

Just as operating systems ship with pre-built utilities, the VAST AgentEngine will feature a set of open-source Agents that VAST will release, one per month, to help accelerate the journey to AI computing. Some personal assistants will be tailored to industry use cases, whereas others will be designed for general purpose use. Examples include:

  1. A reasoning chatbot, powered by all of an organization’s VAST data
  2. A data engineering agent to curate data automatically
  3. A prompt engineer to help optimize AI workflow inputs
  4. An agent agent, to automate the deployment, evaluation and improvement of agents
  5. A compliance agent, to enforce data and activity level regulatory compliance
  6. An editor agent, to create rich media content
  7. A life sciences researcher, to assist with bioinformatic discovery

In the spirit of enabling organizations to build and build fast on the VAST AI Operating System, VAST Data will be hosting VAST Forward, a series of global workshops, both in-person and online, throughout the year. These workshops will include training on components of the Operating System and sessions on how to develop on the platform.
