Cloudflare announced that users of its Application Security Advanced offering will receive an improved web application firewall designed to defend applications that run large language models, reports The Register.
The company said that Firewall for AI will include Advanced Rate Limiting, which lets users set a policy capping the rate of requests from an individual IP address or API key in a given session. This helps block distributed denial-of-service attacks aimed at the model, as well as other traffic surges that would overwhelm and disrupt the LLM. Another feature, Sensitive Data Detection, is designed to keep the model from leaking sensitive data when responding to queries; users can create rules that scan LLM responses for financial information and similar secrets and remove them.
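The per-client rate-limiting idea is straightforward in outline: count requests per client key (an IP address or API key) within a time window and reject traffic that exceeds a budget. The TypeScript sketch below is a minimal, self-contained illustration of that concept only; it is not Cloudflare's API, and the names (`RateLimiter`, `isAllowed`) and the 60-requests-per-minute budget are assumptions made for the example.

```typescript
// Minimal fixed-window rate limiter keyed by a client identifier
// (an IP address or API key). Illustrative only; not Cloudflare's API.

interface WindowState {
  windowStart: number; // epoch ms when the current window began
  count: number;       // requests seen in the current window
}

class RateLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private readonly maxRequests: number, // assumed budget, e.g. 60 requests
    private readonly windowMs: number,    // per 60_000 ms window
  ) {}

  // Returns true if the request identified by `key` is still within budget.
  isAllowed(key: string, now: number = Date.now()): boolean {
    const state = this.windows.get(key);
    if (!state || now - state.windowStart >= this.windowMs) {
      // Start a fresh window for this key.
      this.windows.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (state.count < this.maxRequests) {
      state.count += 1;
      return true;
    }
    return false; // over budget: reject, e.g. with HTTP 429
  }
}

// Usage: limit each API key to 60 LLM requests per minute (assumed policy).
const limiter = new RateLimiter(60, 60_000);
const clientKey = "api-key:abc123"; // hypothetical client identifier
if (!limiter.isAllowed(clientKey)) {
  console.log("429 Too Many Requests: rate limit exceeded for", clientKey);
}
```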
Cloudflare Group Product Manager Daniele Molteni said the firewall can be deployed in front of ChatGPT, Claude by Anthropic, and even private LLMs for in-house use, “whether they are hosted on Cloudflare Workers AI or on other platforms or hosting providers,” as long as “the request [and] response is proxied through Cloudflare.”
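Molteni's proxying point is the key deployment detail: the firewall can only inspect traffic that passes through Cloudflare's network. As a rough illustration of that proxy-and-scan pattern, and not Cloudflare's implementation, the TypeScript sketch below forwards a request to a hypothetical upstream LLM endpoint and redacts card-number-like strings from the reply before returning it; `UPSTREAM_LLM_URL` and the regex are assumptions for the example.

```typescript
// Illustrative reverse-proxy handler: forward the client's request to an
// upstream LLM endpoint, then scan and redact sensitive-looking data in the
// response before it reaches the client. Not Cloudflare's implementation.

const UPSTREAM_LLM_URL = "https://llm.internal.example/v1/chat"; // hypothetical upstream

// Naive pattern for card-number-like digit runs; a real deployment would rely
// on curated detection rules rather than a single regex.
const CARD_PATTERN = /\b(?:\d[ -]?){13,16}\b/g;

async function handleProxiedRequest(request: Request): Promise<Response> {
  // Forward the request body to the model endpoint.
  const upstream = await fetch(UPSTREAM_LLM_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: await request.text(),
  });

  // Read the model's reply and strip anything matching the sensitive pattern.
  const text = await upstream.text();
  const redacted = text.replace(CARD_PATTERN, "[REDACTED]");

  return new Response(redacted, {
    status: upstream.status,
    headers: { "content-type": upstream.headers.get("content-type") ?? "text/plain" },
  });
}
```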