Bot traffic now exceeds human web traffic, and “bad bots” outnumber good ones, according to Imperva’s 2025 Bad Bot Report.
Overall, humans accounted for only 49% of web traffic in 2024, while more than a third (37%) of traffic came from malicious bots.
Imperva said wide accessibility and rapid adoption of AI and large language model (LLM) technology contributed to the overall rise in bot traffic, with threat actors also using AI to enhance their attacks.
“Attackers now use AI not only to generate bots, but also to analyze failed attempts and refine their techniques to bypass detection with greater efficiency,” the report stated.
API threats from bad bots were a major focus of the report, as attacks against APIs made up 44% of advanced bot traffic in 2024, compared with 10% for web applications.
Advanced bots — those that emulate human behavior to evade detection — are also on the rise, increasing from 40% of malicious bot traffic in 2023 to 45% in 2024.
The role of GenAI in bad bot traffic
AI-powered bot attacks, launched with the help of LLM tools, were also highlighted in the report, with Imperva stating that it blocks an average of 2 million AI-powered attacks daily.
“On average, we see almost 700,000 SQL injections, automated attack, RCE, and XSS attack attempts from AI tools on a daily basis,” Imperva Chief Technology Officer for Application Security David Holmes told SC Media. “Attacks are varied, and we’ve seen a variety of distinct payloads.”
Holmes said Imperva can tell when a malicious request comes from an LLM, such as OpenAI’s ChatGPT, Anthropic’s Claude or Google’s Gemini, based on metadata, behavior and known IP addresses. He added that attackers can use jailbreak techniques and LLM browser functions to help facilitate attacks.
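Holmes did not detail how that attribution works, but a minimal sketch of the general idea, checking requests against self-identified AI user agents and published crawler IP ranges, might look like the following (the tokens and IP range below are illustrative placeholders, not Imperva’s actual lists):

```python
import ipaddress

# Placeholder values only: real deployments should rely on each vendor's
# published crawler user agents and IP ranges, which change over time.
KNOWN_AI_USER_AGENTS = ("gptbot", "claudebot", "google-extended", "bytespider")
KNOWN_AI_IP_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # RFC 5737 documentation range, stand-in only

def looks_like_ai_client(user_agent: str, client_ip: str) -> bool:
    """Return True if a request self-identifies as, or originates from, a known AI tool."""
    ua = user_agent.lower()
    if any(token in ua for token in KNOWN_AI_USER_AGENTS):
        return True
    ip = ipaddress.ip_address(client_ip)
    return any(ip in network for network in KNOWN_AI_IP_RANGES)
```

In practice such checks are only a first pass, since attackers can strip or forge identifying headers; behavioral signals carry most of the weight.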
According to the Bad Bot Report, Bytespider, a web crawler used by TikTok owner ByteDance, made up a majority of the AI-related bot traffic blocked by Imperva, at 54%.
“While other AI-related tools like web crawlers aren’t inherently malicious or directly responsible for bot attacks, they are increasingly being leveraged by attackers — both human and automated — to conduct reconnaissance, scan for vulnerabilities, or extract sensitive data,” Holmes said.
How bad bots target APIs, evade detection
Data scraping was the most common bot attack against APIs, making up 31% of API-targeted bad bot activity, followed by payment fraud at 26% and account takeover at 12%. Scalping, where bots rapidly purchase or reserve products or services to later sell at an inflated price, made up 11% of API bot attacks.
Data access (37%), payment checkout (32%) and authentication (16%) API endpoints were the most targeted by bad bots, with financial (40%) and business services (24%) being the most targeted sectors for these API attacks.
Attackers use a range of techniques, including browser imitation, the use of residential proxies and AI-assisted CAPTCHA solving to bypass bot protections. Google Chrome was found to be the most commonly impersonated browser, at 46%, due to its widespread use and the fact that it is whitelisted by many websites, according to the report.
Defending against malicious bot traffic
Imperva recommended several methods for organizations to improve their defenses against bad bot traffic, beginning with identifying the greatest risk areas, such as their most highly targeted APIs and web pages. Organizations should monitor for unusual spikes in traffic to specific endpoints and prioritize these endpoints with measures like rate limiting and authentication hardening.
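The report does not prescribe specific tooling, but a minimal sketch of per-endpoint rate limiting on high-risk paths, with the endpoints and limits below as purely illustrative assumptions, could look like this:

```python
import time
from collections import defaultdict, deque

# Illustrative paths and limits; real values depend on an application's
# normal traffic to its most-targeted endpoints.
ENDPOINT_LIMITS = {"/api/checkout": 20, "/api/login": 10}  # max requests per window
WINDOW_SECONDS = 60

_recent_requests = defaultdict(deque)  # (client_ip, endpoint) -> request timestamps

def allow_request(client_ip: str, endpoint: str) -> bool:
    """Sliding-window rate limit on high-risk endpoints; other endpoints pass through."""
    limit = ENDPOINT_LIMITS.get(endpoint)
    if limit is None:
        return True
    now = time.time()
    window = _recent_requests[(client_ip, endpoint)]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= limit:
        return False  # over the limit: block, challenge, or step up authentication
    window.append(now)
    return True
```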
Organizations can also defend against evasion measures like browser impersonation and proxying by blocking or restricting traffic from specific browser versions and IPs from bulk IP services. For example, bot tools often use outdated browser versions, while human users typically use the latest browser versions due to automatic updates.
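As a rough illustration of that kind of rule, the sketch below challenges clients that claim an outdated Chrome build or arrive from a known bulk-IP address; the version cutoff and blocklist entry are assumptions, and a user-agent string is only one easily spoofed signal among many:

```python
import re

MIN_CHROME_MAJOR = 120                 # illustrative cutoff; tune to the current release cadence
BULK_IP_BLOCKLIST = {"198.51.100.7"}   # stand-in for a commercial proxy/IP-reputation feed

def should_challenge(user_agent: str, client_ip: str) -> bool:
    """Flag clients arriving from known bulk-IP services or claiming an outdated Chrome build."""
    if client_ip in BULK_IP_BLOCKLIST:
        return True
    match = re.search(r"Chrome/(\d+)", user_agent)
    if match and int(match.group(1)) < MIN_CHROME_MAJOR:
        return True  # genuine Chrome auto-updates, so very old versions are suspicious
    return False
```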
Real-time monitoring and dynamic adaptation to malicious bot behavior provide greater security, as static bot defenses let threat actors learn from and evade the protections they encounter. Using AI-powered defense tools can also help accurately identify sophisticated bot traffic and dynamically adjust defense mechanisms.
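One simple behavioral signal such monitoring might track, assuming per-client request timestamps are already being collected, is how uniform a client’s request timing is; the thresholds below are purely illustrative:

```python
import statistics

def timing_looks_automated(timestamps: list[float],
                           min_requests: int = 10,
                           max_jitter_seconds: float = 0.05) -> bool:
    """Heuristic: humans produce irregular gaps between requests, while simple
    bots often fire at near-constant intervals. Thresholds here are assumptions."""
    if len(timestamps) < min_requests:
        return False
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) < max_jitter_seconds
```

Advanced bots deliberately randomize their timing, which is why signals like this are combined and re-weighted continuously rather than applied as fixed rules.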