Cyber Security

ChatGPT is Being Used for Cyber Attacks

Check Point Research (CPR) is seeing the first instances of cybercriminals using ChatGPT to develop malicious tools. In underground hacking forums, threat actors are creating infostealers and encryption tools, and facilitating fraud activity. CPR warns of the fast-growing interest in ChatGPT among cybercriminals and shares three recent cases, with screenshots, of the development and sharing of malicious tools using ChatGPT.

Case 1: Threat actor recreates malware strains for an infostealer
Case 2: Threat actor creates multi-layer encryption tool
Case 3: Threat actor shows how to create a Dark Web marketplace script for trading illegal goods using ChatGPT

CPR is sharing these three recent cases to warn the public of cybercriminals' growing interest in using ChatGPT to scale and teach malicious activity.

Case 1: Creating Infostealer

Figure 1. Cybercriminal showing how he created an infostealer using ChatGPT

On December 29, 2022, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.

In actuality, whilst this individual could be a tech-oriented threat actor, these posts seemed to be showing less technically capable cybercriminals how to utilise ChatGPT for malicious purposes, with real examples they could immediately use.

Case 2: Creating a Multi-Layered Encryption Tool

Figure 2. Cybercriminal dubbed USDoD posts multi-layer encryption tool

On December 21, 2022, a threat actor dubbed USDoD posted a Python script, which he emphasized was the "first script he ever created". When another cybercriminal commented that the style of the code resembled OpenAI code, USDoD confirmed that OpenAI gave him a "nice [helping] hand to finish the script with a nice scope."

Figure 3. Confirmation that the multi-layer encryption tool was created using Open AI

This suggests that would-be cybercriminals with little to no development skills could leverage ChatGPT to develop malicious tools and become fully fledged cybercriminals with technical capabilities.

All of the aforementioned code can of course be used in a benign fashion. However, the script could easily be modified to encrypt someone's machine completely without any user interaction; once its script and syntax problems are fixed, the code could potentially be turned into ransomware.

Case 3: Using ChatGPT to Facilitate Fraud Activity

Figure 4. Threat actor using ChatGPT to create DarkWeb Market scripts

A cybercriminal showed how to create Dark Web marketplace scripts using ChatGPT. A marketplace's main role in the underground illicit economy is to provide a platform for the automated trade of illegal or stolen goods, such as stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments in cryptocurrencies.

Figure 5. Multiple threads in the underground forums on how to use ChatGPT for fraud activity

Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software, says, “Cybercriminals are finding ChatGPT attractive. In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes. Although the tools that we analyze in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools. CPR will continue to investigate ChatGPT-related cybercrime in the weeks ahead.”

Cyber Security

GISEC Global 2025: Phishing, Data Breaches, Ransomware, and Supply Chain Attacks Causing Challenges

Maher Jadallah, the Vice President for Middle East and North Africa at Tenable, says effective exposure management requires a unified view of the entire attack surface (more…)

Cyber Security

GISEC Global 2025: A Place Where Innovation, Partnerships, and Leadership Come Together

Meriam ElOuazzani, the Senior Regional Director for META at SentinelOne, says the company will showcase its latest developments in AI-powered security solutions, reinforcing its position as a leader in this area (more…)

Artificial Intelligence

Cequence Intros Security Layer to Protect Agentic AI Interactions

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognising this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer to govern interactions between AI agents and backend services specifically. This new layer of security enables customers to detect and prevent AI bots such as ChatGPT from OpenAI and Perplexity from harvesting organizational data.
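For site owners, the most basic way to manage which AI crawlers may collect their content is the robots.txt opt-out that OpenAI publicly documents for GPTBot. This is a cooperative signal that well-behaved crawlers honour, not an enforcement layer like the platform described here:

```text
# robots.txt — ask OpenAI's documented GPTBot crawler to stay out.
# Cooperative only: obfuscated or non-compliant bots ignore this, which
# is why user-agent statistics and active detection matter.
User-agent: GPTBot
Disallow: /
```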

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
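The user-agent attribution described above can be sketched in a few lines. This is an illustrative toy, not Cequence's detection logic, which by necessity relies on many more signals precisely because most traffic hides behind generic user agents; the bot tokens below are the ones named in the article plus two publicly documented crawler tokens:

```python
# Illustrative crawler tokens only; real platforms combine many more signals.
KNOWN_AI_BOTS = {
    "GPTBot": "OpenAI",
    "ChatGPT-User": "OpenAI",
    "PerplexityBot": "Perplexity",
    "Gemini": "Google",
}

def classify_user_agent(user_agent: str) -> str:
    """Return the AI vendor a request claims to come from, or 'unattributed'.

    Traffic hiding behind generic user agents (the ~88% case in the
    article) lands in 'unattributed' -- user-agent strings alone are
    not enough to attribute it.
    """
    ua = user_agent.lower()
    for token, vendor in KNOWN_AI_BOTS.items():
        if token.lower() in ua:
            return vendor
    return "unattributed"

# A transparent crawler vs. a generic, obfuscated client:
print(classify_user_agent("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # OpenAI
print(classify_user_agent("python-requests/2.31.0"))  # unattributed
```

The second call illustrates the article's core point: the bulk of AI-related traffic presents no identifying token at all, so classification must fall back on behavioural signals.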

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
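The "detect and prevent sensitive data exposure" capability listed above can be illustrated with a toy outbound-response scanner. This is a minimal sketch of the general technique (pattern matching plus checksum validation for payment-card numbers), not the platform's actual analysis, which as described also models what normal application access looks like:

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, to weed out digit runs that merely look like card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return payment-card-like numbers found in an outbound response body."""
    hits = []
    for m in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

A gateway applying a check like this to responses bound for an unattributed AI client could block or redact the payload before any exfiltration occurs; a production system would add many more detectors (keys, credentials, personal data) and context about the requesting agent.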

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.


Copyright © 2021 Security Review Magazine. Rysha Media LLC. All Rights Reserved.