Market Research

Cloudflare Releases 2023 Phishing Threats Report

Cloudflare has released its inaugural 2023 Phishing Threats Report. The findings highlight that phishing remains the most dominant and fastest-growing Internet crime, largely due to the ubiquity of email and the persistent human error that today’s threat actors prey upon.

While business email compromise (BEC) losses have topped $50 billion, corporate organizations are not the only victims attackers are after. The real implications of phishing go beyond Fortune 500s and global companies, extending to small and local organizations as well as the public sector. For instance, in this year’s report, Cloudflare observed more email threats targeting political organizations. In the three months leading up to the 2022 US midterm elections, Cloudflare’s email security service prevented around 150,000 phishing emails from reaching campaign officials.

Regardless of an organization’s size, industry or sector, the report revealed that threat actors who leverage phishing campaigns have two major objectives. First and foremost, the goal is to achieve authenticity and legitimacy in the eyes of the victim. The second is to persuade victims to engage or click. These objectives are underscored by the key findings of the report, including:

  1. Malicious links were the #1 threat category, comprising 35.6% of detected threats
  2. Identity deception threats are on the rise — increasing YoY from 10.3% to 14.2% (39.6 million) of total detections
  3. Attackers posed as more than 1,000 different organizations in over 1 billion brand impersonation attempts. The majority of the time (51.7%), they impersonated one of 20 well-known brands
  4. The most impersonated brand happens to be one of the most trusted software companies: Microsoft. Other top companies impersonated included Google, Salesforce, Notion.so, and more
  5. One-third (30%) of detected threats featured newly registered domains — the #2 threat category
  6. Email authentication doesn’t stop threats. The vast majority (89%) of unwanted messages “passed” SPF, DKIM, or DMARC authentication checks
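
The last finding reflects a design limitation rather than a bug: SPF, DKIM, and DMARC only verify that a message genuinely originates from the domain it claims to come from, not that the domain itself is trustworthy. As an illustrative sketch (the domain and IP address below are hypothetical), an attacker who registers a fresh lookalike domain can publish perfectly valid authentication records of their own:

```text
; Hypothetical DNS TXT records for a newly registered lookalike domain.
; SPF: authorizes the attacker's own mail server (documentation IP) to send.
examp1e-payments.com.         TXT  "v=spf1 ip4:203.0.113.10 -all"
; DMARC: a published policy is enough for receivers to evaluate "pass".
_dmarc.examp1e-payments.com.  TXT  "v=DMARC1; p=none"
```

Mail sent from such a domain passes all three checks because the attacker controls its DNS, which is consistent with the report’s finding that 89% of unwanted messages “passed” authentication.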

“Phishing is an epidemic that has permeated into the farthest corners of the Internet, preying on trust and victimizing everyone from CEOs to government officials to the everyday consumer,” said Matthew Prince, CEO at Cloudflare. “Email messages and malicious links are nefarious partners in crime when it comes to the most common form of Internet threats. Organizations of all sizes need a Zero Trust solution that encompasses email security – when this is neglected, they are leaving themselves exposed to the largest vector in today’s threat landscape.”

The report is a culmination of data intelligence and security trends gathered from the 112 billion threats that Cloudflare’s global network blocks daily. Cloudflare evaluated a sample of more than 279 million email threat indicators, 250 million malicious messages, over 1 billion instances of brand impersonation (note that it is possible for one email to have multiple instances of brand impersonation), and other data points gathered from approximately 13 billion emails processed from May 2022 to May 2023. Additionally, this report is informed by a Cloudflare-commissioned study conducted by Forrester Consulting. Between January 2023 and February 2023, Forrester Consulting surveyed 316 security decision-makers across North America, EMEA, and APAC about the state of phishing.

Artificial Intelligence

DeepSeek-R1 AI Poses 11x Higher Harmful Content Risk

The launch of DeepSeek’s R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion from stock markets. Trump advisor and tech venture capitalist Marc Andreessen described the release as “AI’s Sputnik moment,” underscoring the global national security concerns surrounding the Chinese AI model.

However, new red teaming research by Enkrypt AI, the world’s leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. Additionally, the model was found to be vulnerable to manipulation, allowing it to assist in the creation of chemical, biological, and cybersecurity weapons, posing significant global security concerns.

Compared with other models, the research found that DeepSeek’s R1 is:

  1. 3x more biased than Claude-3 Opus
  2. 4x more vulnerable to generating insecure code than OpenAI’s O1
  3. 4x more toxic than GPT-4o
  4. 11x more likely to generate harmful output compared to OpenAI’s O1
  5. 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s O1 and Claude-3 Opus

Sahil Agarwal, CEO of Enkrypt AI, said, “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”

The model exhibited the following risks during testing:

  • BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
  • HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
  • TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
  • CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
  • BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.

Sahil Agarwal concluded, “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”

Cyber Security

World Economic Forum and Check Point Research Highlight Six Emerging Cybersecurity Challenges for 2025

Written by Vasily Dyagilev, Regional Director, Middle East, RCIS at Check Point Software Technologies

Cyber Security

One-Third of UAE Children Play Age-Inappropriate Computer Games

According to a recent survey conducted by Kaspersky in collaboration with the UAE Cyber Security Council, about a third of parents surveyed (33%) across the UAE believe that their children play games that are inappropriate for their age. Based on the survey, boys are more prone to this behaviour than girls: 50% of boys and 43% of girls have violated age guidelines when playing games on their computers.

It is possible that parents tend to exaggerate the problem of violating age restrictions in computer games, or that children are not always aware of these restrictions: only 30% of children confessed that they had ever played games unsuitable for their age. Girls are more likely to respect video game age ratings, with 78% having never played inappropriate games, compared with 64% of boys.

Playing computer games is a common way for youngsters to spend their free time (91%). Half of them use smartphones for gaming (52%), followed by computers (40%). Based on parents’ estimates, 41% of children play video games every day.

“Parents often worry that their children spend too much time playing computer games. Of course, it is important to ensure that the child follows a routine, gets enough sleep, takes a break from the screen, and is physically active. However, parents should not blame computer games for everything,” comments Seifallah Jedidi, Head of Consumer Channel for the META region at Kaspersky. “Parents should take a proactive position in this area, be interested in the latest products offered by the video game industry, and, of course, understand their children’s gaming preferences and pay attention to age ratings. It’s worth mentioning that today there is a wide variety of games on offer, many of which include educational materials, so we recommend not prohibiting this type of leisure, but rather seeking a compromise.”

To keep children safe online, Kaspersky recommends that parents:

  1. Take an interest in the games your children play. Ideally, try those games yourself. This will help build trust in your family relationships and help you understand what your child is interested in.
  2. If you notice that your child plays a lot, try to analyze the reasons why and ask whether they have an alternative they enjoy. Find out what they would like to do besides gaming and try to engage them in another interesting hobby.
  3. Be informed about current cyber threats and talk to your children about the risks they may face online; teach them how to resist online threats and recognize the tricks of scammers.
  4. Use a parental control program on your child’s device. It will allow you to control the applications downloaded on the device or set a schedule for when these applications can be used.

The survey entitled “Growing Up Online” was conducted by Toluna Research Agency at the request of Kaspersky in 2023-2024. The study sample included 2000 online interviews (1000 parent-child pairs, with children aged 3 to 17 years) in the UAE.

Copyright © 2021 Security Review Magazine. Rysha Media LLC. All Rights Reserved.