Artificial Intelligence

Cybersecurity Defences Employing AI Can Combat Threats with Greater Speed

Emile Abou Saleh, the Senior Director for Middle East, Turkey and Africa at Proofpoint, says a proactive approach to cybersecurity robustly protects organizations against a wide range of threats in an increasingly complex digital landscape.

What have we achieved so far in terms of use case scenarios of Gen AI in the realm of cybersecurity?
Generative AI has gained considerable attention in the news lately, and like any new technology, there’s a lot of excitement around it. Today’s Generative AI tools go beyond traditional chatbots; they are becoming more advanced. Generative AI’s potential reaches far and wide, benefiting professionals across different industries. Financial advisers can use it to analyze market trends, educators can tailor lessons to students’ needs, and it’s also proving useful in the field of cybersecurity. Security analysts can leverage Generative AI to examine user behaviour and detect patterns that could indicate potential data breaches.

One of the standout features of Generative AI in cybersecurity is its ability to process vast amounts of data on emerging threats quickly and accurately. Security administrators can use these tools to run queries and, within minutes, receive a summary of current credential-compromise threats along with the specific indicators to watch for.

Why, in your view, should cybersecurity companies leverage generative AI?
Our lives and work cultures have changed forever: far more people now work and interact digitally, while the velocity of business and the volume of corporate data we generate grow exponentially across multiple digital platforms.

Many organizations across all industries have found that implementing artificial intelligence (AI) into business systems has helped them to ensure continuity, with one main aspect being increased productivity. When looking at this from a cybersecurity point of view, there are many ways AI and machine learning (ML) can bolster an organization’s overall cybersecurity posture.

Today’s threat landscape is characterized by attackers preying on human vulnerability. Proofpoint research shows that nearly 99% of all threats require some form of human interaction. Whether it is malware-free threats such as the various types of Business Email Compromise (BEC) or Email Account Compromise (EAC), including payroll diversion, account takeover, and executive impersonation, or malware-based threats, people fall victim to these attacks day in and day out. And all it takes is one click from one employee for a threat actor to infiltrate an organization’s systems, no matter how complex the environment.

To stop these types of attacks, organizations need to deploy a security solution that can stay ahead of the ever-changing landscape and adapt to how humans act. AI and ML are critical components of a robust cybersecurity detection strategy. They are faster and more effective than manual analysis and can quickly adapt to new and evolving threats and trends. Cybersecurity defences that employ AI can combat such threats with greater speed, relying on data and learnings from previous, similar attacks to predict and prevent their spread.
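The idea of learning from previous, similar attacks can be made concrete with a toy example. The sketch below is a minimal Naive Bayes text classifier in pure Python that labels new email subject lines based on a handful of previously seen phishing and benign messages; the training data and labels are invented for illustration and this is not Proofpoint's actual technology:

```python
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts
    and per-label message totals, the 'learnings' from past attacks."""
    counts = {"phish": Counter(), "benign": Counter()}
    totals = Counter()
    for text, label in samples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Pick the label with the highest log-probability, using
    add-one smoothing so unseen words don't zero out a class."""
    vocab = {w for c in counts.values() for w in c}
    best = {}
    for label, c in counts.items():
        logp = math.log(totals[label] / sum(totals.values()))
        denom = sum(c.values()) + len(vocab)
        for w in text.lower().split():
            logp += math.log((c[w] + 1) / denom)
        best[label] = logp
    return max(best, key=best.get)

# Invented history of past messages the model learns from.
history = [
    ("urgent verify your account password", "phish"),
    ("payroll update click here immediately", "phish"),
    ("quarterly report attached for review", "benign"),
    ("team lunch scheduled for friday", "benign"),
]
counts, totals = train(history)
print(score("urgent password reset click here", counts, totals))  # phish
```

Real detection stacks use far richer features and models, but the principle is the same: statistical signals from previous attacks generalize to new, similar ones.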

What are the cybersecurity challenges facing companies with the adoption of AI and how can they be overcome?
With the adoption of AI, organizations face a set of cybersecurity challenges that need immediate attention. While AI has shown remarkable progress in defending against common threats, it has also opened doors for cybercriminals.

Take phishing: AI has the potential to supercharge this threat, increasing the speed and accuracy with which these phishing emails are sent to victims. However, it’s important to remember that many social engineering emails aren’t designed to be “perfect”; they are intentionally written poorly to find the people most likely to engage.

That’s also only one part of the threat. Headers, senders, attachments, and URLs are among the many other indicators that robust detection technologies analyze. Even where better-crafted emails would bring a substantial benefit, as in many business email compromise scenarios, the threat actor needs access to a lot of other information. They need to know who is paying what money to whom and on what dates, which they have probably already obtained another way. They don’t necessarily need AI assistance when they already have access to that person’s inbox and can merely copy an old email.
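To illustrate the kind of indicators mentioned above, here is a short sketch using only Python's standard library that pulls a few signals (sender, Reply-To mismatch, embedded URLs, attachment names) from a raw message. The sample email and domains are invented, and real detection engines weigh many more signals than this:

```python
import re
from email import message_from_string

def extract_indicators(raw_email: str) -> dict:
    """Extract a few of the header/body signals that detection
    technologies commonly weigh alongside message content."""
    msg = message_from_string(raw_email)
    urls, attachments = [], []
    for part in msg.walk():
        if part.get_filename():
            attachments.append(part.get_filename())
        if part.get_content_type() == "text/plain":
            urls += re.findall(r"https?://\S+", part.get_payload() or "")
    return {
        "from": msg.get("From"),
        # A Reply-To that differs from From is a classic BEC red flag.
        "reply_to_differs": (msg.get("Reply-To") is not None
                             and msg.get("Reply-To") != msg.get("From")),
        "urls": urls,
        "attachments": attachments,
    }

raw = """From: ceo@example.com
Reply-To: attacker@evil.example
Subject: Urgent wire transfer

Please review http://evil.example/invoice immediately.
"""
indicators = extract_indicators(raw)
print(indicators["reply_to_differs"])  # True
```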

It’s crucial for organizations to note that no matter the attack vector, or how complex it is, the majority of cyberattacks require human interaction to be successful. By tricking just one employee, threat actors can circumvent security tools and siphon sensitive corporate data. Organizations must implement a people-centric cybersecurity strategy, consistently training employees at all levels of the business in cybersecurity best practices so they are aware of the latest cyber threats and are able to detect them, report them, and avoid falling victim to them.

How can organizations use their resources effectively to leverage Gen AI to gain a competitive edge in the cybersecurity landscape?
To effectively leverage Generative AI and gain a competitive advantage in the cybersecurity landscape, organizations should focus on two vital aspects. First, it is essential to embrace a people-centric security model for data loss prevention, acknowledging that individuals often play a pivotal role in the movement of data. This approach encompasses content awareness, behavioural analysis, and threat awareness, granting in-depth insight into how employees interact with sensitive data.

Increased visibility facilitates real-time detection and prevention of data loss incidents. Secondly, organizations should integrate artificial intelligence (AI) and machine learning (ML) technologies into their cybersecurity practices. For instance, in email security solutions, AI and ML swiftly identify and thwart phishing campaigns, malicious URLs, imposter messages, and unusual user activity in cloud accounts. A proactive approach to cybersecurity robustly protects organizations against a wide range of threats in an increasingly complex digital landscape.

Artificial Intelligence

Check Point Leverages AI to Strengthen Network Security

Check Point Software Technologies has announced the new Check Point Quantum Firewall Software R82 (R82) and additional innovations for the Infinity Platform. As organizations face a 75% surge in cyber-attacks worldwide, R82 delivers new AI-powered engines to prevent zero-day threats including phishing, malware, and domain name system (DNS) exploits. It also includes new architectural changes and innovations that drive DevOps agility for data centre operations as well as simplicity and scale.

“Threats are continuing to multiply exponentially, and organizations need intelligent solutions that can keep them a step ahead,” said Nataly Kremer, Chief Product Officer at Check Point Software Technologies. “Network security is increasingly strategic. Our suite of AI-powered threat prevention tools – from Check Point Quantum Firewall Software R82 to GenAI Protect and more – is not only bringing world-class innovations but is also relentlessly focused on making them operationally simple and resilient.”

Quantum Software R82 delivers over 50 new capabilities for enterprise customers including:

  • Industry-Leading AI-Powered Threat Prevention to block 99.8% of zero-day threats. It introduces four new AI engines that find hidden relationships and patterns, blocking over 500K additional attacks per month and protecting against sophisticated zero-day phishing and malware campaigns.
  • Agile Datacenter Operations to accelerate app development with automated integration of security policy. With dramatically simplified firewall virtualization, organizations achieve 3x faster provisioning of virtual systems for multi-tenancy and agile application development, benefiting DevOps.
  • Operational Simplicity to offer seamless scalability for networks of all sizes, automatically adapting to business growth and traffic spikes. It enables organizations to achieve resilience with built-in load sharing and clustering technology (ElasticXL) while benefiting from 3x faster provisioning and operations for firewall management.
  • Post-Quantum Cryptography (PQC) to provide the latest NIST-approved cryptography Kyber (ML-KEM) for quantum-safe encryption, assuring that today’s encrypted data won’t turn into tomorrow’s treasure chest for threat actors.

“Maintaining effective network security requires AI, automation, and the ability to adapt quickly to the latest threats,” said Frank Dickson, IDC Group Vice President of Security and Trust. “Security needs to be strong, but it also needs to enable business innovation at the speed of DevOps. With Check Point’s new collaborative AI-powered solutions and Quantum Firewall Software, Check Point looks to deliver high-performance AI threat prevention while enabling organizations to innovate quickly.”

The new capabilities build upon Check Point’s recently released suite of AI-powered threat prevention innovations:

  1. Check Point Infinity AI Copilot is a responsive AI-powered assistant designed to automate and accelerate security management and threat resolution.
  2. Check Point GenAI Protect is a pioneering solution for the safe adoption of generative AI in enterprises.
  3. Check Point Infinity External Risk Management (ERM) delivers continuous monitoring and real-time threat prevention, augmented by expert-managed services. This protects customers against a wider array of external risks, from credential threat and vulnerability exploitation to phishing attacks and fraud.

“We’ve seen a definite performance increase and operational value with our upgrade to Check Point’s Quantum Firewall Software R82 release. The new Quantum Firewall software allows us to secure and manage our encrypted traffic more easily than ever,” said Jeff Burgess, Manager of I.T. Enterprise, Aviation Technical Services. “With Check Point, all of our security products are working in sync together to provide a level of security which was previously unattainable.”

Artificial Intelligence

Dataiku Launches LLM Guard Services to Control Generative AI Rollouts

Dataiku has announced the launch of its LLM Guard Services suite, designed to advance enterprise GenAI deployments at scale from proof-of-concept to full production without compromising cost, quality, or safety. Dataiku LLM Guard Services includes three solutions: Cost Guard, Safe Guard, and the newest addition, Quality Guard. These components are integrated within the Dataiku LLM Mesh, the market’s most comprehensive and agnostic LLM gateway, for building and managing enterprise-grade GenAI applications that will remain effective and relevant over time. LLM Guard Services provides a scalable no-code framework to foster greater transparency, inclusive collaboration, and trust in GenAI projects between teams across companies.

Today’s enterprise leaders want to use fewer tools to reduce the burden of scaling projects with siloed systems, but 88% do not have specific applications or processes for managing LLMs, according to a recent Dataiku survey. Available as a fully integrated suite within the Dataiku Universal AI Platform, LLM Guard Services is designed to address this challenge and mitigate common risks when building, deploying, and managing GenAI in the enterprise.

“As the AI hype cycle follows its course, the excitement of two years ago has given way to frustration bordering on disillusionment today. However, the issue is not the abilities of GenAI, but its reliability,” said Florian Douetteau, Dataiku CEO. “Ensuring that GenAI applications deliver consistent performance in terms of cost, quality, and safety is essential for the technology to deliver its full potential in the enterprise. As part of the Dataiku Universal AI platform, LLM Guard Services is effective in managing GenAI rollouts end-to-end from a centralized place that helps avoid costly setbacks and the proliferation of unsanctioned ‘shadow AI’ – which are as important to the C-suite as they are for IT and data teams.”

Dataiku LLM Guard Services provides oversight and assurance for LLM selection and usage in the enterprise, consisting of three primary pillars:

  • Cost Guard: A dedicated cost-monitoring solution that enables effective tracing and monitoring of enterprise LLM usage, so teams can better anticipate and manage GenAI spend against budget.
  • Safe Guard: A solution that evaluates requests and responses for sensitive information and secures LLM usage with customizable tooling to avoid data abuse and leakage.
  • Quality Guard: The newest addition to the suite, providing quality assurance via automatic, standardized, code-free evaluation of LLMs for each use case to maximize response quality and bring both objectivity and scalability to the evaluation cycle.

Previously, companies deploying GenAI were forced to use custom code-based approaches to LLM evaluation or to leverage separate, pure-play point solutions. Now, within the Dataiku Universal AI Platform, enterprises can quickly and easily determine GenAI quality and integrate this critical step into the GenAI use-case building cycle. By using LLM Quality Guard, customers can automatically compute standard LLM evaluation metrics, including LLM-as-a-judge techniques such as answer relevancy, answer correctness, and context precision, as well as statistical techniques such as BERTScore, ROUGE, and BLEU, to ensure they select the most relevant LLM and approach to sustain GenAI reliability over time with greater predictability. Further, Quality Guard democratizes GenAI applications so any stakeholder can follow the move from proof-of-concept experiments to enterprise-grade applications with a consistent methodology for evaluating quality.
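To make the statistical metrics concrete, here is a toy implementation of ROUGE-1 recall, the fraction of reference unigrams (with clipped counts) that also appear in a candidate answer. This is an illustrative sketch, not Dataiku's implementation; production evaluation would use a maintained metrics library:

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: clipped unigram overlap divided by the
    number of unigrams in the reference."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clip each word's credit to the number of times it appears in both.
    overlap = sum(min(ref[w], cand[w]) for w in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0

ref = "the model answered the question correctly"
cand = "the model answered correctly"
print(round(rouge1_recall(ref, cand), 3))  # 0.667
```

Higher-order variants (ROUGE-2, ROUGE-L) and precision/F1 follow the same pattern over bigrams or longest common subsequences.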


Artificial Intelligence

Cloudflare Helps Content Creators Regain Control of their Content from AI Bots

Cloudflare has announced AI Audit, a set of tools to help websites of any size analyse and control how their content is used by artificial intelligence (AI) models. For the first time, website and content creators will be able to quickly and easily understand how AI model providers are using their content, and then take control of whether and how the models can access it. Additionally, Cloudflare is developing a new feature where content creators can reliably set a fair price for their content that is used by AI companies for model training and retrieval augmented generation (RAG).

Website owners, whether for-profit companies, media and news publications, or small personal sites, may be surprised to learn that AI bots of all types are scanning their content thousands of times every day without the content creator knowing or being compensated, causing significant destruction of value for businesses large and small. Even when website owners are aware of how AI bots are using their content, they lack a sophisticated way to determine which scanning to allow and a simple way to take action. For society to continue to benefit from the depth and diversity of content on the Internet, content creators need the tools to take back control.

“AI will dramatically change content online, and we must all decide together what its future will look like,” said Matthew Prince, co-founder and CEO, Cloudflare. “Content creators and website owners of all sizes deserve to own and have control over their content. If they don’t, the quality of online information will deteriorate or be locked exclusively behind paywalls. With Cloudflare’s scale and global infrastructure, we believe we can provide the tools and set the standards to give websites, publishers, and content creators control and fair compensation for their contribution to the Internet, while still enabling AI model providers to innovate.”

With AI Audit, Cloudflare aims to give content creators information and control, enabling a transparent exchange between websites that want greater say over how their content is used and AI model providers in need of fresh data sources, so that everyone benefits. With this announcement, Cloudflare aims to help any website:

  • Automatically control AI bots, for free: AI is a quickly evolving space, and many website owners need time to understand and analyze how AI bots are affecting their traffic or business. Many small sites don’t have the skills or bandwidth to manually block AI bots. The ability to block all AI bots in one click puts content creators back in control.
  • Tap into analytics to see how AI bots access their content: Every site using Cloudflare now has access to analytics to understand why, when, and how often AI models access their website. Website owners can now make a distinction between bots – for example, text-generative bots that still credit the source of the data they use when generating a response, versus bots that scrape data with no attribution or credit.
  • Better protect their rights when negotiating with model providers: An increasing number of sites are signing agreements directly with model providers to license the training and retrieval of content in exchange for payment. Cloudflare’s AI Audit tab will provide advanced analytics to understand metrics that are commonly used in these negotiations, like the rate of crawling for certain sections or the entire page. Cloudflare will also model terms of use that every content creator can add to their sites to legally protect their rights.
  • Set a fair price for the right to scan content and transact seamlessly (in development): Many site owners, whether they are the large companies of the future or high-quality individual blogs, do not have the resources, context, or expertise to negotiate the one-off deals that larger publishers are signing with AI model providers, and AI model providers do not have the bandwidth to do this with every site that approaches them. In the future, even the largest content creators will benefit from Cloudflare’s seamless price-setting and transaction flow, making it easy for model providers to find fresh content to scan that they might otherwise be blocked from, and for content providers to take control and be paid for the value they create.
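The one-click AI-bot blocking described above boils down to matching each request's User-Agent against a blocklist of known AI crawlers. The sketch below is hypothetical and not Cloudflare's implementation; the crawler tokens listed are published bot names, but the blocklist and function are illustrative:

```python
# Published user-agent tokens of well-known AI crawlers (illustrative list).
AI_BOT_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider")

def should_block(user_agent: str, block_ai_bots: bool = True) -> bool:
    """Return True if the request's User-Agent matches a known AI
    crawler and the site owner has enabled AI-bot blocking."""
    if not block_ai_bots:
        return False
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_BOT_TOKENS)

print(should_block("Mozilla/5.0 (compatible; GPTBot/1.1)"))        # True
print(should_block("Mozilla/5.0 (Windows NT 10.0) Firefox/130.0")) # False
```

In practice a provider like Cloudflare can combine such signatures with behavioural fingerprinting, since user-agent strings alone are trivially spoofed.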


Copyright © 2021 Security Review Magazine. Rysha Media LLC. All Rights Reserved.