Artificial Intelligence
AI Technology’s Potential for Misuse Necessitates Robust Security Policies

Ram Narayanan, the Country Manager at Check Point Software Technologies, Middle East, says collaborating with AI providers and researchers is essential to remain current with AI advancements
What have we achieved so far in terms of use cases for Gen AI?
Generative AI tools like ChatGPT and Google Bard have seen remarkable growth in their use cases, showcasing versatility and potential across a wide range of applications. These tools have proven to be valuable assets in enhancing productivity and creativity. However, they also present significant challenges, primarily their vulnerability to misuse in cyber-attacks.
Instances of Generative AI being exploited to create malicious content, such as malware, phishing emails, and deceptive videos, have raised concerns in the cybersecurity domain. Organizations have had to proactively address these issues to protect their digital assets and sensitive data. While Generative AI continues to offer substantial benefits, organizations must remain vigilant in their efforts to protect against emerging AI threats, ensuring that AI and machine learning-based defences become essential components of their cybersecurity strategies.
Why, in your view, should companies leverage generative AI?
Companies should leverage generative AI for a multitude of reasons that promise transformative benefits. Generative AI streamlines content creation processes, allowing for efficient, cost-effective production of customized content at scale. Moreover, the scalability of generative AI ensures that businesses can adapt effortlessly to varying audience sizes without compromising content quality. Generative AI extends its utility to customer support through AI-powered chatbots, offering round-the-clock assistance while freeing up human teams for more complex tasks.
Furthermore, its flexibility to generate content in diverse formats, from text to images and audio-visual content, enables companies to diversify their content offerings and reach audiences across multiple platforms. Embracing generative AI grants companies a competitive edge in a dynamic business landscape, fostering agility and innovation. However, responsible AI use is paramount.
The technology’s potential for misuse, including cyber threats and malicious content creation, necessitates robust security policies, especially for mobile devices. Advanced technology, including AI and machine learning, is crucial to effectively detect and mitigate these risks. Companies must also uphold ethical standards in AI deployment, ensuring responsible use that aligns with societal values while reaping the myriad benefits generative AI offers.
What are the challenges companies face in adopting and using Gen AI, and how can they be overcome?
Companies face several challenges when adopting and using Generative AI. A primary concern is the potential for misuse, as Gen AI can be exploited for cyber-attacks, including the creation of malware, phishing emails, and deceptive content. These security risks can be addressed on several fronts. First, robust security policies should be established and enforced to govern the use of AI tools on corporate devices and networks.
Employee education is crucial to raise awareness and empower staff to recognize AI-generated threats. Advanced threat detection technologies, utilizing behavioural analysis and machine learning, enhance security measures. Access control to AI tools helps mitigate misuse risks, and regular security updates are essential. Mobile devices, often entry points to organizations, require special attention with robust mobile security solutions.
Ethical concerns, regulatory compliance, quality control, bias mitigation, and public perception challenges also need to be addressed through collaboration, self-regulation, responsible AI development, and continuous monitoring. Striking a balance between AI’s potential and ethical considerations is key for successful Gen AI adoption.
Are companies aware of regional and global policies surrounding the use of Gen AI?
The awareness among companies regarding regional and global policies surrounding the use of Generative AI can vary significantly. Some companies are well-informed and proactive in understanding and adhering to these policies, especially if they operate in highly regulated industries or have a global presence. These companies often invest in compliance efforts to ensure they align with regional and international regulations related to AI.
However, many companies, particularly smaller or newer ones, may have limited awareness of the full scope of regional and global policies concerning Gen AI. Awareness of Gen AI policies is also influenced by the region in which a company operates. The United Arab Emirates, for example, has been actively embracing AI technology in various sectors, including healthcare, transportation, finance, and government services.
To ensure responsible and ethical use of AI, the UAE government has developed regulatory frameworks and policies. For instance, the UAE AI Strategy 2031 focuses on creating a conducive environment for AI innovation while also addressing the ethics and legal aspects of AI implementation. Given the substantial investment in AI technology and the government’s commitment to AI governance, it is likely that UAE companies are well-informed about the regional and global policies surrounding the use of Gen AI. Companies operating in sensitive sectors, such as healthcare or finance, may have a higher level of awareness and compliance with AI regulations due to the potential impact on individuals’ privacy and security.
How can companies direct their resources toward Gen AI to create a competitive advantage?
Companies can utilize their resources to harness Generative AI strategically, thereby gaining a competitive edge in various aspects. Gen AI enables swift innovation by automating product development, reducing time-to-market, and ensuring companies stay ahead in dynamic industries. Gen AI’s data analysis capabilities facilitate data-driven decision-making, enabling informed strategic choices, rapid response to market trends, and optimized supply chains, leading to cost savings and operational efficiency.
It also plays a vital role in cybersecurity, effectively detecting and mitigating advanced threats to safeguard digital assets and reputation. Automated market research with Gen AI identifies trends and consumer preferences, guiding product development and marketing strategies. Task automation enhances employee productivity, freeing up time for innovation, while Gen AI assists in compliance and risk management efforts.
To maintain a competitive edge, companies should integrate Gen AI strategically, invest in workforce training, ensure ethical use, and implement robust cybersecurity measures. Collaboration with AI providers and researchers is essential to stay current with AI advancements and maintain responsible practices.
What factors do companies need to consider before adopting Gen AI, such as having a centralised data strategy?
Before adopting Generative AI, companies must carefully consider several critical factors, one of which is the establishment of a centralized data strategy. Security is of utmost concern, as Gen AI tools have the potential to be exploited in cyber-attacks, exemplified by instances of AI-generated malware and phishing campaigns. To mitigate these risks, robust security policies and measures should be implemented to safeguard sensitive data and prevent data breaches. Mobile devices, commonly used for Gen AI interactions, present unique vulnerabilities, necessitating a focused approach to mobile security that encompasses both prevention and detection, ideally utilizing AI and machine learning in security solutions.
A centralized data strategy should incorporate these security measures to protect against potential AI threats during Gen AI adoption. Additionally, it should encompass data governance practices, data quality assessment, privacy compliance, ethical guidelines, transparency, scalability, cross-functional collaboration, and continuous monitoring to ensure responsible and secure Gen AI integration. Building and maintaining customer trust and preparing for crisis management are integral aspects of a comprehensive Gen AI strategy.
How can companies experiment with Gen AI to predict the future of strategic workforce planning?
Companies can gain a competitive edge by strategically allocating resources to harness Generative AI in various ways. Generative AI accelerates innovation by automating product development processes, leading to faster time-to-market and a competitive advantage in rapidly evolving industries.
It also streamlines content creation, reducing costs and delivering personalized content that enhances customer engagement and loyalty. With the deployment of AI-powered chatbots and virtual assistants, companies can improve customer support, providing efficient round-the-clock assistance while optimizing the supply chain, ultimately increasing customer satisfaction and operational efficiency.
Generative AI’s role in cybersecurity is crucial, as it effectively detects and mitigates advanced threats. Additionally, it aids in automated market research, identifying trends and consumer preferences to guide product development and marketing strategies. Lastly, it contributes to compliance and risk management efforts.
To maintain this competitive edge, companies must strategically integrate Generative AI, invest in workforce training, ensure ethical use, and implement robust cybersecurity measures to safeguard against AI-related threats. Collaborating with AI providers and researchers is essential to remain current with AI advancements, allowing companies to effectively harness these technologies while upholding responsible practices.
Artificial Intelligence
DeepSeek Popularity Exploited in Latest PyPI Attack

The Supply Chain Security team at Positive Technologies’ Expert Security Center (PT ESC) discovered and neutralised a malicious campaign in the Python Package Index (PyPI) repository. This attack was aimed at developers, ML engineers, and anyone seeking to integrate DeepSeek into their projects.
The attacker’s account, created in June 2023, remained dormant until January 29, when the malicious packages deepseeek and deepseekai were registered. Once installed, these packages registered console commands; when those commands were executed, the packages began stealing sensitive user data, including information about victims’ computers and environment variables, which often contain database credentials and access keys to various infrastructure resources. The attackers used Pipedream, a popular developer integration platform, as their command-and-control server to receive the stolen information.
Stanislav Rakovsky, Head of Supply Chain Security at PT ESC, explained, “Cybercriminals are always looking for the next big thing to exploit, and DeepSeek’s popularity made it a prime target. What’s particularly interesting is that the malicious code appears to have been generated with the help of an AI assistant, based on comments within the code itself. The malicious packages were uploaded to the popular repository on the evening of January 29.”
Given the heightened interest in DeepSeek, this attack could have resulted in numerous victims if the malicious activity had gone unnoticed for longer. Experts at Positive Technologies strongly recommend being more attentive to new and unknown packages.
Artificial Intelligence
SentinelOne to Spotlight AI-Driven Cybersecurity at LEAP 2025

SentinelOne has announced its participation at LEAP 2025, alongside its distributor, AlJammaz Technologies. The company will showcase its AI-powered cybersecurity portfolio, including advanced EDR, XDR, and ITDR solutions designed to deliver autonomous protection against evolving cyber threats.
SentinelOne’s solutions align with the Kingdom’s strategic priorities by offering proactive, AI-driven protection for critical infrastructure, enterprises, and government entities. The company’s Singularity platform, known for its real-time, AI-driven threat detection, response, and prevention, will be at the centre of its presence at the exhibition. The platform enables enterprises to protect their endpoints, cloud environments, and identity layers, allowing them to innovate confidently in the face of an evolving threat landscape.
Speaking on their participation, Meriam ElOuazzani, Senior Regional Director, META at SentinelOne, said, “Cybersecurity remains central to progress with Saudi Vision 2030’s digital leadership and economic goals, and our solutions empower businesses to outpace evolving threats and fuel growth. By participating at LEAP, we aim to engage with key stakeholders in the tech ecosystem, explore new partnerships, and demonstrate how our solutions are reshaping workforce capabilities and the future of digital resilience.”
SentinelOne’s AI strategy focuses on delivering autonomous, real-time protection by leveraging machine learning and behavioural AI. This ensures businesses can detect, mitigate, and remediate cyberattacks faster and more effectively than traditional solutions. Senior executives from SentinelOne will be onsite at the AlJammaz Executive Lounge in Hall 1 to share insights on AI-driven security strategies and the future of autonomous cybersecurity. Visitors can also experience live demonstrations of the Singularity platform.
Artificial Intelligence
DeepSeek-R1 AI Poses 11x Higher Harmful Content Risk

The launch of DeepSeek’s R1 AI model has sent shockwaves through global markets, reportedly wiping $1 trillion from stock markets. Trump advisor and tech venture capitalist Marc Andreessen described the release as “AI’s Sputnik moment,” underscoring the national security concerns the Chinese AI model has raised worldwide.
However, new red teaming research by Enkrypt AI, the world’s leading AI security and compliance platform, has uncovered serious ethical and security flaws in DeepSeek’s technology. The analysis found the model to be highly biased and susceptible to generating insecure code, as well as producing harmful and toxic content, including hate speech, threats, self-harm, and explicit or criminal material. Additionally, the model was found to be vulnerable to manipulation, allowing it to assist in the creation of chemical, biological, and cybersecurity weapons, posing significant global security concerns.
Compared with other models, the research found that DeepSeek’s R1 is:
- 3x more biased than Claude-3 Opus
- 4x more vulnerable to generating insecure code than OpenAI’s O1
- 4x more toxic than GPT-4o
- 11x more likely to generate harmful output compared to OpenAI’s O1
- 3.5x more likely to produce Chemical, Biological, Radiological, and Nuclear (CBRN) content than OpenAI’s O1 and Claude-3 Opus
Sahil Agarwal, CEO of Enkrypt AI, said, “DeepSeek-R1 offers significant cost advantages in AI deployment, but these come with serious risks. Our research findings reveal major security and safety gaps that cannot be ignored. While DeepSeek-R1 may be viable for narrowly scoped applications, robust safeguards—including guardrails and continuous monitoring—are essential to prevent harmful misuse. AI safety must evolve alongside innovation, not as an afterthought.”
The model exhibited the following risks during testing; the sketch after this list illustrates how such bypass rates are typically computed:
- BIAS & DISCRIMINATION – 83% of bias tests successfully produced discriminatory output, with severe biases in race, gender, health, and religion. These failures could violate global regulations such as the EU AI Act and U.S. Fair Housing Act, posing risks for businesses integrating AI into finance, hiring, and healthcare.
- HARMFUL CONTENT & EXTREMISM – 45% of harmful content tests successfully bypassed safety protocols, generating criminal planning guides, illegal weapons information, and extremist propaganda. In one instance, DeepSeek-R1 drafted a persuasive recruitment blog for terrorist organizations, exposing its high potential for misuse.
- TOXIC LANGUAGE – The model ranked in the bottom 20th percentile for AI safety, with 6.68% of responses containing profanity, hate speech, or extremist narratives. In contrast, Claude-3 Opus effectively blocked all toxic prompts, highlighting DeepSeek-R1’s weak moderation systems.
- CYBERSECURITY RISKS – 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and exploits. The model was 4.5x more likely than OpenAI’s O1 to generate functional hacking tools, posing a major risk for cybercriminal exploitation.
- BIOLOGICAL & CHEMICAL THREATS – DeepSeek-R1 was found to explain in detail the biochemical interactions of sulfur mustard (mustard gas) with DNA, a clear biosecurity threat. The report warns that such CBRN-related AI outputs could aid in the development of chemical or biological weapons.
Sahil Agarwal concluded, “As the AI arms race between the U.S. and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy. However, our findings reveal that DeepSeek-R1’s security vulnerabilities could be turned into a dangerous tool—one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention.”