Artificial Intelligence

Fake ChatGPT Apps Scam Users Out of Thousands of Dollars, Says Sophos

Sophos has announced that it has uncovered multiple apps masquerading as legitimate, ChatGPT-based chatbots to overcharge users and bring in thousands of dollars a month. As detailed in the latest Sophos X-Ops report, “‘FleeceGPT’ Mobile Apps Target AI-Curious to Rake in Cash,” these apps have appeared in both Google Play and the Apple App Store, and, because their free versions offer near-zero functionality alongside constant ads, they coerce unsuspecting users into signing up for subscriptions that can cost hundreds of dollars a year.

“Scammers have and always will use the latest trends or technology to line their pockets. ChatGPT is no exception. With interest in AI and chatbots arguably at an all-time high, users are turning to the Apple App and Google Play Stores to download anything that resembles ChatGPT. These types of scam apps—what Sophos has dubbed ‘fleeceware’—often bombard users with ads until they sign up for a subscription. They’re banking on the fact that users won’t pay attention to the cost or simply forget that they have this subscription. They’re specifically designed so that they may not get much use after the free trial ends, so users delete the app without realizing they’re still on the hook for a monthly or weekly payment,” said Sean Gallagher, principal threat researcher, Sophos.

In total, Sophos X-Ops investigated five of these ChatGPT fleeceware apps, all of which claimed to be based on ChatGPT’s algorithm. In some cases, as with the app “Chat GBT,” the developers played off the ChatGPT name to improve their app’s ranking in Google Play or the App Store. While OpenAI offers the basic functionality of ChatGPT to users for free online, these apps were charging anything from $10 a month to $70 a year. The iOS version of “Chat GBT,” called Ask AI Assistant, charges $6 a week ($312 a year) after the three-day free trial; it netted the developers $10,000 in March alone. Another fleeceware-like app, called Genie, which encourages users to sign up for a $7 weekly or $70 annual subscription, brought in $1 million over the past month.

The key characteristics of so-called fleeceware apps, first discovered by Sophos in 2019, are overcharging users for functionality that is already available for free elsewhere, and using social engineering and coercive tactics to push users into recurring subscription payments. The apps usually offer a free trial, but with so many ads and restrictions that they are barely usable until a subscription is paid. They are also often poorly written and implemented, meaning app function is frequently less than ideal even after users switch to the paid version. In addition, they inflate their ratings in the app stores through fake reviews and by persistently asking users to rate the app before they have even used it or the free trial has ended.

“Fleeceware apps are specifically designed to stay on the edge of what’s allowed by Google and Apple in terms of service, and they don’t flout the security or privacy rules, so they are hardly ever rejected by these stores during the review. While Google and Apple have implemented new guidelines to curb fleeceware since we reported on such apps in 2019, developers are finding ways around these policies, such as severely limiting app usage and functionality unless users pay up. While some of the ChatGPT fleeceware apps included in this report have already been taken down, more continue to pop up—and it’s likely more will appear. The best protection is education. Users need to be aware that these apps exist and always be sure to read the fine print whenever hitting ‘subscribe.’ Users can also report apps to Apple and Google if they think the developers are using unethical means to profit,” said Gallagher.

Artificial Intelligence

DeepSeek Popularity Exploited in Latest PyPI Attack

The Supply Chain Security team at Positive Technologies’ Expert Security Center (PT ESC) discovered and neutralised a malicious campaign in the Python Package Index (PyPI) repository. This attack was aimed at developers, ML engineers, and anyone seeking to integrate DeepSeek into their projects.

The attacker’s account, created in June 2023, remained dormant until January 29, when the malicious packages deepseeek and deepseekai were registered. Once installed, the packages registered console commands; when those commands were executed, the packages began stealing sensitive user data, including information about the host machine and environment variables, which often contain database credentials and access keys to various infrastructure resources. The attackers used Pipedream, a popular developer integration platform, as their command-and-control server to receive the stolen information.

Stanislav Rakovsky, Head of Supply Chain Security at PT ESC, explained, “Cybercriminals are always looking for the next big thing to exploit, and DeepSeek’s popularity made it a prime target. What’s particularly interesting is that the malicious code appears to have been generated with the help of an AI assistant, based on comments within the code itself. The malicious packages were uploaded to the popular repository on the evening of January 29.”

Given the heightened interest in DeepSeek, this attack could have claimed numerous victims had the malicious activity gone unnoticed for longer. Experts at Positive Technologies strongly recommend scrutinising new and unfamiliar packages before installing them.
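To illustrate the kind of basic hygiene check this advice points to, the sketch below flags installed Python packages whose names closely resemble, but do not exactly match, packages a team actually intends to use (for example, “deepseeek” versus “deepseek”). It is a minimal, hypothetical example: the TRUSTED list and the 0.85 similarity threshold are assumptions for this sketch, not part of the Positive Technologies research.

# Minimal, illustrative typosquat check: flags installed distributions whose
# names are suspiciously similar to, but not the same as, trusted package names.
# The TRUSTED set and the 0.85 threshold are assumptions for this sketch.
import difflib
from importlib import metadata

TRUSTED = {"deepseek", "requests", "numpy"}  # packages the project actually intends to use

def flag_typosquats(threshold: float = 0.85) -> list[tuple[str, str]]:
    suspects = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in TRUSTED:
            continue  # exact match: nothing to flag
        for trusted in TRUSTED:
            if difflib.SequenceMatcher(None, name, trusted).ratio() >= threshold:
                suspects.append((name, trusted))  # e.g. ("deepseeek", "deepseek")
    return suspects

if __name__ == "__main__":
    for installed, trusted in flag_typosquats():
        print(f"Suspicious package '{installed}' resembles trusted name '{trusted}'")

A check like this does not replace dependency pinning or repository scanning, but it catches the specific lookalike-name tactic used in this campaign.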

Artificial Intelligence

SentinelOne to Spotlight AI-Driven Cybersecurity at LEAP 2025

SentinelOne has announced its participation at LEAP 2025, alongside its distributor, AlJammaz Technologies. The company will showcase its AI-powered cybersecurity portfolio, including advanced EDR, XDR, and ITDR solutions designed to deliver autonomous protection against evolving cyber threats.

SentinelOne’s solutions align with the Kingdom’s strategic priorities by offering proactive AI-driven protection for critical infrastructure, enterprises, and government entities. The company’s Singularity platform, known for its real-time, AI-driven threat detection, response, and prevention, will be at the centre of its presence at the exhibition. The platform enables enterprises to protect their endpoints, cloud environments, and identity layers, allowing them to innovate confidently amidst evolving cyber threats.

Speaking on their participation, Meriam ElOuazzani, Senior Regional Director, META at SentinelOne, said, “Cybersecurity remains central to progress with Saudi Vision 2030’s digital leadership and economic goals, and our solutions empower businesses to outpace evolving threats and fuel growth. By participating at LEAP, we aim to engage with key stakeholders in the tech ecosystem, explore new partnerships, and demonstrate how our solutions are reshaping workforce capabilities and the future of digital resilience.”

SentinelOne’s AI strategy focuses on delivering autonomous, real-time protection by leveraging machine learning and behavioural AI. This ensures businesses can detect, mitigate, and remediate cyberattacks faster and more effectively than traditional solutions. Senior executives from SentinelOne will be onsite at the AlJammaz Executive Lounge in Hall 1 to share insights on AI-driven security strategies and the future of autonomous cybersecurity. Visitors can also experience live demonstrations of the Singularity platform.

Artificial Intelligence

DeepSeek-R1 AI Poses 11x Higher Harmful Content Risk

The launch of DeepSeek’s R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion from stock markets. Trump advisor and tech venture capitalist Marc Andreessen described the release as “AI’s Sputnik moment,” underscoring the national security concerns the Chinese AI model has raised around the world.

However, new red-teaming research by Enkrypt AI, the world’s leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. The model was also found to be vulnerable to manipulation, allowing it to assist in the creation of chemical, biological, and cyber weapons, posing significant global security concerns.

Compared with other models, the research found that DeepSeek’s R1 is:

  1. 3x more biased than Claude-3 Opus
  2. 4x more vulnerable to generating insecure code than OpenAI’s O1
  3. 4x more toxic than GPT-4o
  4. 11x more likely to generate harmful output compared to OpenAI’s O1
  5. 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s O1 and Claude-3 Opus

Sahil Agarwal, CEO of Enkrypt AI, said, “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”

The model exhibited the following risks during testing:

  • BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
  • HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
  • TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
  • CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
  • BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.

Sahil Agarwal concluded, “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”
