Artificial Intelligence

Cloudflare Launches Tool to Block AI Bots

Cloud service giant Cloudflare is taking a stand against rogue AI bots that scrape website data to train models, launching a free tool to combat the growing problem. Some AI vendors, including Google, OpenAI, and Apple, let website owners block their data-scraping bots through robots.txt directives. As Cloudflare points out, however, these directives are voluntary and often ignored, leaving website owners exposed.
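For reference, this is what the robots.txt mechanism looks like in practice. The user-agent tokens below are the ones these vendors publicly document for their AI-training crawlers; as the article notes, compliance is entirely voluntary:

```
# robots.txt — ask AI training crawlers not to scrape this site.
# Compliant bots honor these directives; rogue bots simply ignore them.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```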

To address this, Cloudflare has developed advanced bot detection models specifically trained to identify AI bots. These models analyze traffic patterns and behaviour, including attempts to mimic human web browsing activity. This allows them to catch even the most cunning scraper bots. Cloudflare has also implemented a reporting system for website owners to flag suspected AI bots and crawlers. They plan to continuously update their blacklist based on user reports and manual investigations.
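Cloudflare has not published the internals of its detection models, but the kind of behavioural scoring described above can be sketched generically. The signals and thresholds below are hypothetical illustrations, not Cloudflare's actual logic:

```python
from dataclasses import dataclass

@dataclass
class RequestProfile:
    user_agent: str
    requests_per_minute: float
    loads_assets: bool       # human browsers fetch CSS/JS/images; many scrapers fetch HTML only
    honors_robots_txt: bool

# Hypothetical self-identifying AI crawler tokens for illustration.
KNOWN_AI_AGENTS = ("GPTBot", "CCBot", "Bytespider")

def bot_score(req: RequestProfile) -> int:
    """Accumulate a suspicion score from independent traffic signals."""
    score = 0
    if any(agent in req.user_agent for agent in KNOWN_AI_AGENTS):
        score += 3   # crawler identifies itself as an AI bot
    if req.requests_per_minute > 60:
        score += 2   # far faster than human browsing
    if not req.loads_assets:
        score += 1   # fetches pages without the assets a browser would load
    if not req.honors_robots_txt:
        score += 2   # already ignored a disallow rule
    return score

def is_likely_ai_bot(req: RequestProfile, threshold: int = 3) -> bool:
    return bot_score(req) >= threshold
```

A real system would combine many more signals (TLS fingerprints, IP reputation, timing patterns) with machine-learned weights rather than fixed ones.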

The rise of powerful generative AI models has fueled a massive demand for training data. This has led to a surge in AI scraper bots, often operating without permission or compensation for the data they collect. Many websites are opting to block these bots entirely. Studies show a significant number of top websites blocking bots used by leading AI companies. However, some vendors seem to disregard these blockers, prioritizing data collection over user consent.

Blocking all bots can have unintended consequences. Some AI tools, like Google’s AI Overviews, exclude websites that block specific crawlers. This can limit valuable referral traffic for website owners. Cloudflare’s tool offers a potential solution, but its effectiveness hinges on the accurate detection of these clandestine AI bots. The ongoing battle between website owners and AI companies highlights the need for a clearer regulatory framework to govern data collection practices in the AI training landscape.

Artificial Intelligence

Dataiku Launches LLM Guard Services to Control Generative AI Rollouts

Dataiku has announced the launch of its LLM Guard Services suite, designed to advance enterprise GenAI deployments at scale from proof-of-concept to full production without compromising cost, quality, or safety. Dataiku LLM Guard Services includes three solutions: Cost Guard, Safe Guard, and the newest addition, Quality Guard. These components are integrated within the Dataiku LLM Mesh, the market’s most comprehensive and agnostic LLM gateway, for building and managing enterprise-grade GenAI applications that will remain effective and relevant over time. LLM Guard Services provides a scalable no-code framework to foster greater transparency, inclusive collaboration, and trust in GenAI projects between teams across companies.

Today’s enterprise leaders want to use fewer tools to reduce the burden of scaling projects with siloed systems, but 88% do not have specific applications or processes for managing LLMs, according to a recent Dataiku survey. Available as a fully integrated suite within the Dataiku Universal AI Platform, LLM Guard Services is designed to address this challenge and mitigate common risks when building, deploying, and managing GenAI in the enterprise.

“As the AI hype cycle follows its course, the excitement of two years ago has given way to frustration bordering on disillusionment today. However, the issue is not the abilities of GenAI, but its reliability,” said Florian Douetteau, Dataiku CEO. “Ensuring that GenAI applications deliver consistent performance in terms of cost, quality, and safety is essential for the technology to deliver its full potential in the enterprise. As part of the Dataiku Universal AI platform, LLM Guard Services is effective in managing GenAI rollouts end-to-end from a centralized place that helps avoid costly setbacks and the proliferation of unsanctioned ‘shadow AI’ – which are as important to the C-suite as they are for IT and data teams.”

Dataiku LLM Guard Services provides oversight and assurance for LLM selection and usage in the enterprise, consisting of three primary pillars:

  • Cost Guard: A dedicated cost-monitoring solution that traces and monitors enterprise LLM usage, so teams can better anticipate and manage GenAI spend against budget.
  • Safe Guard: A solution that evaluates requests and responses for sensitive information and secures LLM usage with customizable tooling to avoid data abuse and leakage.
  • Quality Guard: The newest addition to the suite that provides quality assurance via automatic, standardized, code-free evaluation of LLMs for each use-case to maximize response quality and bring both objectivity and scalability to the evaluation cycle.
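Dataiku has not published Safe Guard's implementation, but the kind of request/response screening it describes can be sketched with simple pattern matching. The patterns and policy below are illustrative assumptions only; a production guard would use far more robust detection:

```python
import re

# Hypothetical patterns for common sensitive data (illustration only).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Mask sensitive spans and return the labels of what was found."""
    found = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

def guarded_llm_call(prompt: str, llm) -> str:
    """Screen the prompt before it leaves the enterprise boundary."""
    safe_prompt, _labels = redact(prompt)
    # Policy choice here: redact and proceed; a stricter setup might refuse the call.
    return llm(safe_prompt)
```

The same `redact` pass would be applied to model responses on the way back, matching the suite's goal of evaluating both requests and responses.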

Previously, companies deploying GenAI were forced to use custom code-based approaches to LLM evaluation or to adopt separate, pure-play point solutions. Now, within the Dataiku Universal AI Platform, enterprises can quickly and easily assess GenAI quality and integrate this critical step into the GenAI use-case building cycle. With LLM Quality Guard, customers can automatically compute standard LLM evaluation metrics, including LLM-as-a-judge techniques (answer relevancy, answer correctness, context precision) and statistical techniques such as BERTScore, ROUGE, and BLEU, helping them select the most relevant LLM and approach to sustain GenAI reliability over time with greater predictability. Quality Guard also democratizes GenAI applications: any stakeholder can follow the move from proof-of-concept experiments to enterprise-grade applications with a consistent methodology for evaluating quality.
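To make the statistical metrics concrete, here is a minimal sketch of a ROUGE-1-style unigram-overlap score, the simplest member of the family mentioned above. This is a teaching illustration, not Dataiku's implementation (which would use full ROUGE/BLEU variants with proper tokenization):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 in the spirit of ROUGE-1: how much vocabulary
    the model answer shares with a reference answer."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped per-word overlap
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Comparing two candidate LLM answers against the same reference:
reference = "cloudflare offers a free tool to block ai bots"
good = "cloudflare has a free tool that blocks ai bots"
bad = "the weather is nice today"
```

Scoring many candidates against a reference set like this, alongside LLM-as-a-judge scores, is the kind of standardized evaluation loop Quality Guard automates without code.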

Artificial Intelligence

Cloudflare Helps Content Creators Regain Control of their Content from AI Bots

Cloudflare has announced AI Audit, a set of tools to help websites of any size analyse and control how their content is used by artificial intelligence (AI) models. For the first time, website and content creators will be able to quickly and easily understand how AI model providers are using their content, and then take control of whether and how the models can access it. Additionally, Cloudflare is developing a new feature where content creators can reliably set a fair price for their content that is used by AI companies for model training and retrieval augmented generation (RAG).

Website owners, whether for-profit companies, media and news publications, or small personal sites, may be surprised to learn that AI bots of all types are scanning their content thousands of times every day, without the creator's knowledge or compensation, eroding significant value for businesses large and small. Even when website owners are aware of how AI bots are using their content, they lack a sophisticated way to determine what scanning to allow and a simple way to take action. For society to continue to benefit from the depth and diversity of content on the Internet, content creators need the tools to take back control.

“AI will dramatically change content online, and we must all decide together what its future will look like,” said Matthew Prince, co-founder and CEO, Cloudflare. “Content creators and website owners of all sizes deserve to own and have control over their content. If they don’t, the quality of online information will deteriorate or be locked exclusively behind paywalls. With Cloudflare’s scale and global infrastructure, we believe we can provide the tools and set the standards to give websites, publishers, and content creators control and fair compensation for their contribution to the Internet, while still enabling AI model providers to innovate.”

With AI Audit, Cloudflare aims to give content creators the information and control needed for a transparent exchange between websites that want greater say over their content and AI model providers in need of fresh data sources, so that everyone benefits. With this announcement, Cloudflare aims to help any website:

  • Automatically control AI bots, for free: AI is a quickly evolving space, and many website owners need time to understand and analyze how AI bots are affecting their traffic or business. Many small sites don’t have the skills or bandwidth to manually block AI bots. The ability to block all AI bots in one click puts content creators back in control.
  • Tap into analytics to see how AI bots access their content: Every site using Cloudflare now has access to analytics to understand why, when, and how often AI models access their website. Website owners can now make a distinction between bots – for example, text-generative bots that still credit the source of the data they use when generating a response, versus bots that scrape data with no attribution or credit.
  • Better protect their rights when negotiating with model providers: An increasing number of sites are signing agreements directly with model providers to license the training and retrieval of content in exchange for payment. Cloudflare’s AI Audit tab will provide advanced analytics to understand metrics that are commonly used in these negotiations, like the rate of crawling for certain sections or the entire page. Cloudflare will also model terms of use that every content creator can add to their sites to legally protect their rights.
  • Set a fair price for the right to scan content and transact seamlessly (in development): Many site owners, whether future large companies or high-quality individual blogs, do not have the resources, context, or expertise to negotiate the one-off deals that larger publishers are signing with AI model providers, and AI model providers do not have the bandwidth to negotiate with every site that approaches them. In the future, even the largest content creators will benefit from Cloudflare's seamless price-setting and transaction flow, which makes it easy for model providers to find fresh content they might otherwise be blocked from scanning, and for content providers to take control and be paid for the value they create.

Artificial Intelligence

Lenovo PCs Get AI Security Boost from SentinelOne

SentinelOne and Lenovo have announced a multi-year collaboration to bring AI-powered endpoint security to millions of Lenovo devices across the globe. Lenovo will include SentinelOne’s industry-leading Singularity Platform and generative AI capabilities (Purple AI) in new PC shipments, as well as offer upgrades to existing customers to expand its ThinkShield security portfolio and autonomously protect devices from modern attacks.

“The complexity and speed of today’s cyber threats demand an intelligent, adaptable defence,” said Nima Baiati, Executive Director and General Manager, Cybersecurity Solutions, Intelligent Devices Group, Lenovo. “SentinelOne’s Singularity Platform and Purple AI are at the forefront of this evolution, offering unparalleled, AI-powered protection. As Lenovo introduces groundbreaking new AI PCs to the market, we are integrating these cutting-edge AI-powered endpoint security capabilities into Lenovo’s ThinkShield security platform. This will enhance endpoint protection and fortify enterprise resilience against the ever-evolving threat landscape.”

Lenovo is a leading enterprise PC vendor that sells tens of millions of devices annually. The new agreement between the long-time strategic partners is designed to significantly increase the number of Lenovo devices shipping with SentinelOne's AI-powered security, backed by Lenovo's broad global sales and partner network. As a result, Lenovo's direct sales team and channel partners can provide cutting-edge, built-in security to businesses of all sizes.

“Cyber resilience is incredibly important for business continuity as organizations increasingly face the unpredictable. Our security services collaboration with SentinelOne is another key aspect of Lenovo’s cybersecurity and cyber resilience services intended to help protect customers from anomalous threats,” said Patricia Wilkey, SVP and GM of Lenovo Solutions and Services Group International Sales. As part of the expanded collaboration, Lenovo will also build a new Managed Detection and Response (MDR) service using AI and EDR capabilities from SentinelOne’s Singularity Platform as its foundation.

“The endpoint remains a primary vector of cyberattacks and the most critical part of a business’ ongoing operations. By working with market leaders like Lenovo, we can rapidly scale AI-powered security to millions of PCs and servers across the globe,” said Akhil Kapoor, Vice President Embedded Business, SentinelOne. “It’s an opportunity for Lenovo and SentinelOne to give Lenovo customers a clear security and resiliency advantage by delivering intelligent devices that defend themselves in real time.”


Copyright © 2021 Security Review Magazine. Rysha Media LLC. All Rights Reserved.