Market Research

NetApp’s 2024 Cloud Complexity Report: Unveiling the “Disrupt or Die” Era

NetApp has released its second annual Cloud Complexity Report. The report examines the experiences of global technology decision-makers deploying AI at scale and shows a stark contrast between AI leaders and AI laggards. This year's report offers global insights into progress, readiness, challenges, and momentum since last year's edition; what we can learn from both AI leaders and AI laggards; and the critical role of a unified data infrastructure in achieving AI success.

“AI is only as good as the data that fuels it,” said Pravjit Tiwana, General Manager and Senior Vice President of Cloud Storage at NetApp. “Both the AI leaders and AI laggards show us that in the prevailing hybrid IT environment, the more unified and reliable your data, the more likely your AI initiatives are to be successful.”  

The report found a clear divide between AI leaders and AI laggards across several areas, including:

  1. Regions: 60% of companies in AI-leading countries (India, Singapore, UK, USA) have AI projects up and running or in pilot, in stark contrast to 36% in AI-lagging countries (Spain, Australia/New Zealand, Germany, Japan).
  2. Industries: Technology leads with 70% of AI projects up and running or in pilot, followed by Banking & Financial Services (55%) and Manufacturing (50%), while Healthcare (38%) and Media & Entertainment (25%) are trailing.
  3. Company size: Larger companies (more than 250 employees) are more likely to have AI projects in motion, with 62% reporting projects up and running or in pilot, versus 36% of smaller companies (fewer than 250 employees).

AI leaders and AI laggards also differ in their approach to AI:

  1. Globally, 67% of companies in AI-leading countries report having hybrid IT environments, with India leading (70%) and Japan lagging (24%).
  2. AI leaders are also more likely to report benefits from AI, including a 50% increase in production rates, a 46% increase in the automation of routine activities, and a 45% improvement in customer experience.

“The rise of AI is ushering in a new disrupt-or-die era,” said Gabie Boko, Chief Marketing Officer at NetApp. “Data-ready enterprises that connect and unify broad structured and unstructured data sets into an intelligent data infrastructure are best positioned to win in the age of AI.”

Despite the divide, there is notable progress among AI laggards in preparing their IT environments for AI, but the window to catch up is closing rapidly. A significant share of companies in AI-lagging countries (42%) have optimized their IT environments for AI, including in Germany (67%) and Spain (59%). Companies in some AI-lagging countries already report benefits from having a unified data infrastructure in place, such as:

  • Easier data sharing: Spain (45%), Australia/New Zealand (43%), Germany (44%)
  • Increased visibility: Spain (54%) and Germany (46%)

Rising IT costs and ensuring data security are two of the biggest challenges in the AI era, but they will not block AI progress. Instead, AI leaders will scale back or cut other IT operations, or reallocate costs from other parts of the business, to fund AI initiatives.

  • AI leaders will also increase their cloud operations (CloudOps), data security, and AI investments throughout 2024, with 40% of large companies saying AI projects have already increased IT costs.
  • Year over year, "increased cybersecurity risk" jumped 16 percentage points as a top concern, from 45% to 61%, while all other concerns decreased.
  • To manage AI project costs, 31% of companies globally are reallocating funds from other business areas, with India (48%), the UK (40%), and the US (35%) leading this trend.

As global companies, whether AI leaders or AI laggards, increase investments, they are relying on the cloud to support their goals.

  1. Companies reported that they expect to increase AI-driven cloud deployments by 19% from 2024 to 2030.
  2. 85% of AI leaders plan to enhance their CloudOps automation over the next year.
  3. Increasing data security investments is a global priority, jumping 25 percentage points from 33% in 2023 to 58% in 2024.

Artificial Intelligence

DeepSeek-R1 AI Poses 11x Higher Harmful Content Risk

The launch of DeepSeek's R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion from stock valuations. Trump advisor and tech venture capitalist Marc Andreessen described the release as "AI's Sputnik moment," underscoring global national security concerns surrounding the Chinese model.

However, new red-teaming research by Enkrypt AI, the world's leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek's technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. Additionally, the model was found to be vulnerable to manipulation, allowing it to assist in the creation of chemical, biological, and cyber weapons, posing significant global security concerns.

Compared with other models, the research found that DeepSeek’s R1 is:

  1. 3x more biased than Claude-3 Opus
  2. 4x more vulnerable to generating insecure code than OpenAI’s O1
  3. 4x more toxic than GPT-4o
  4. 11x more likely to generate harmful output compared to OpenAI’s O1
  5. 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI's O1 and Claude-3 Opus

Sahil Agarwal, CEO of Enkrypt AI, said, “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”

The model exhibited the following risks during testing:

  • BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
  • HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
  • TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
  • CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
  • BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
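
The per-category percentages above are, in effect, attack success rates: the share of adversarial test prompts in each risk category that bypassed the model's guardrails. A minimal, hypothetical Python sketch of that calculation (not Enkrypt AI's actual harness; the log entries below are invented purely for illustration):

    from collections import defaultdict

    def attack_success_rates(results):
        """results: iterable of (category, bypassed) pairs, where
        bypassed is True if the prompt got past the model's guardrails."""
        totals = defaultdict(int)
        hits = defaultdict(int)
        for category, bypassed in results:
            totals[category] += 1
            hits[category] += int(bypassed)
        # Success rate per category, as a fraction of all tests run in it.
        return {c: hits[c] / totals[c] for c in totals}

    # Invented log entries, for illustration only.
    log = [
        ("bias", True), ("bias", True), ("bias", False), ("bias", True),
        ("cybersecurity", True), ("cybersecurity", False),
    ]
    print(attack_success_rates(log))
    # {'bias': 0.75, 'cybersecurity': 0.5}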

Sahil Agarwal concluded, “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”

Cyber Security

World Economic Forum and Check Point Research Highlight Six Emerging Cybersecurity Challenges for 2025

Written by Vasily Dyagilev, Regional Director, Middle East, RCIS at Check Point Software Technologies.

Cyber Security

One-Third of UAE Children Play Age-Inappropriate Computer Games

According to a recent survey conducted by Kaspersky in collaboration with the UAE Cyber Security Council, about a third of parents surveyed (33%) across the UAE believe that their children play games that are inappropriate for their age. Based on the survey, boys are more prone to this behaviour than girls: 50% of boys and 43% of girls have violated age guidelines when playing games on their computers.

It is possible that parents exaggerate the problem of age-restriction violations in computer games, or that children are not always aware of the restrictions: according to the children themselves, only 30% admitted to having ever played games unsuitable for their age. Girls are more likely to observe the age restrictions of video games, with 78% having never played inappropriate games, compared with 64% of boys.

Playing computer games is a common way for youngsters to spend their free time (91%). Half of them use smartphones for gaming (52%), with computers in second place (40%). Based on parents' estimates, 41% of children play video games every day.

"Parents often worry that their children spend too much time playing computer games. Of course, it is important to ensure that the child follows a routine, gets enough sleep, takes a break from the screen, and is physically active; however, parents should not blame computer games for everything," comments Seifallah Jedidi, Head of Consumer Channel for the META region at Kaspersky. "Parents should take a proactive position in this area, be interested in the latest products offered by the video game industry, and, of course, understand their children's gaming preferences and pay attention to age-limit markings. It is worth mentioning that today there is a wide variety of games on offer, many of which include educational materials, so we recommend not prohibiting this type of leisure but rather seeking a compromise."

To keep children safe online, Kaspersky recommends that parents:

  1. Take an interest in the games your children play; ideally, try them yourself. This will help build trust in your family relationships and help you understand what your child is interested in.
  2. If you notice that your child plays a lot, try to analyze the reasons and consider whether they have an appealing alternative; ask what they would like to do besides gaming and try to engage them in another interesting hobby.
  3. Stay informed about current cyber threats and talk to your children about the risks they may face online; teach them how to resist online threats and recognize scammers' tricks.
  4. Use a parental control program on your child's device. It will let you control which applications are downloaded to the device and set a schedule for when they can be used.

The survey, entitled "Growing Up Online," was conducted by the Toluna Research Agency at the request of Kaspersky in 2023-2024. The study sample included 2,000 online interviews (1,000 parent-child pairs, with children aged 3 to 17) in the UAE.
