Nearly half of all internet traffic came from bots last year, a 5.1% increase over the previous year, according to findings from the 2023 Imperva Bad Bot Report. More troubling, the volume of traffic from bad bots increased for the fourth consecutive year, resulting in higher levels of account compromise, data theft, spam, and degraded online services.
Organizations lose billions of dollars every year as a result of automated attacks on their websites, infrastructure, APIs, and applications. What’s more, malicious automation in the form of bad bots is also responsible for higher infrastructure and support costs, customer churn, and tarnished brand reputations.
Now, with the advent of generative artificial intelligence (AI), bots will evolve at an accelerated and more concerning pace over the next 10 years. Regardless of industry, automated attacks will become a greater source of risk for every organization.
Increasing Sophistication of Bad Bots Leads to Higher Levels of Fraud
More than a decade ago, bot technology was used to increase the scale of phishing email attacks. Since then, the technology has evolved rapidly, to the point that advanced bad bots can now mimic human-like keystrokes and mouse movements.
Unfortunately, the proportion of bad bots classified as advanced more than doubled between 2021 and 2022, and they now represent the majority of bad bot traffic globally. This should be an alarm bell for any digital organization, as these bots can evade detection by cycling through random IP addresses, entering through anonymous proxies, and changing identities. Over time, advanced bad bots will lead to more online fraud, data loss, and degraded online services.
We’re already seeing early signs of what advanced automation could mean for the future of the internet. Last year, the volume of account takeover (ATO) attacks grew by an astonishing 155%. Meanwhile, 15% of all login attempts, across all industries, were classified as account takeover attempts. Increasingly, there is a correlation between public data breaches and the volume of account takeover attacks, as motivated cybercriminals leverage leaked credentials before users realize their data has been exposed.
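To illustrate what this kind of activity can look like in practice, here is a minimal, hypothetical sketch of one common defensive heuristic: flagging a source that attempts logins against many distinct accounts with a very low success rate, the typical signature of credential stuffing with leaked credentials. The log fields and thresholds below are illustrative assumptions, not any vendor’s actual detection logic.

```python
# Minimal sketch: flagging credential-stuffing patterns in login logs.
# The log format, field names, and thresholds are illustrative assumptions,
# not a description of any specific product's detection logic.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    source_ip: str
    username: str
    succeeded: bool

def flag_suspicious_sources(attempts, min_attempts=20,
                            min_distinct_users=10, max_success_rate=0.05):
    """Return source IPs whose login behavior looks like credential stuffing:
    many attempts, spread across many accounts, almost all of them failing."""
    by_ip = defaultdict(list)
    for a in attempts:
        by_ip[a.source_ip].append(a)

    suspicious = []
    for ip, ip_attempts in by_ip.items():
        total = len(ip_attempts)
        distinct_users = len({a.username for a in ip_attempts})
        success_rate = sum(a.succeeded for a in ip_attempts) / total
        if (total >= min_attempts
                and distinct_users >= min_distinct_users
                and success_rate <= max_success_rate):
            suspicious.append(ip)
    return suspicious

if __name__ == "__main__":
    # Simulated traffic: one IP cycling through leaked credentials, one normal user.
    attempts = [LoginAttempt("203.0.113.7", f"user{i}", succeeded=False) for i in range(50)]
    attempts += [LoginAttempt("198.51.100.4", "alice", succeeded=True)]
    print(flag_suspicious_sources(attempts))  # -> ['203.0.113.7']
```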
4 Ways the Evolution of Bots Will Disrupt Security in the Next Decade
Bad actors will use generative AI to accelerate the development and sophistication of bots in the coming months and years. As a result, we’ll see four trends emerge:
- The inevitable demise of CAPTCHA. For years, organizations have relied on CAPTCHA puzzles to challenge users and differentiate between human and automated traffic. While this approach was effective at protecting websites and online services in the past, generative AI will render it useless. Sophisticated bots will be able to easily emulate human behavior, obfuscate their actions, and evade detection. Organizations will need to evolve their defenses, placing greater emphasis on behavior-based detection.
- An internet of automated users. The percentage of traffic coming from bots will increase over the next 10 years, overtaking the proportion of human traffic on the internet. We could see an astounding 70-80% of global traffic come from automation, particularly as content scrapers and crawlers multiply alongside the broader adoption of AI tools. This will put pressure on organizations to detect and block bad bot traffic more effectively.
- Dawn of a new age for online fraud. The way fraudsters compromise your identity and steal sensitive information will evolve as a result of generative AI. It will become easier for fraudsters to masquerade as someone else, leading to a new breed of social engineering attacks. For example, a fraudster could create a believable, fake version of you by scraping the internet and social media for information, audio clips, and imagery, then packaging it all together with AI. This illegitimate version of you could be used to create new passwords, open accounts, and more.
- APIs become a ripe target for attackers. In 2022, 17% of all attacks on APIs came from bad bots abusing business logic. Even more concerning, 35% of account takeover attacks in 2022 specifically targeted an API. With the help of AI, bad actors can automate the process of calling an API programmatically to take over accounts, exfiltrate or scrape data, and more, without ever triggering an alarm (see the sketch after this list).
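To make the last point more concrete, the sketch below shows one simple, assumed approach to spotting automated abuse of sensitive API endpoints: counting each client’s calls to those endpoints inside a sliding time window. The endpoint names, thresholds, and client identifier are hypothetical; a real bot-management layer would weigh many more signals than request rate alone.

```python
# Minimal sketch: rate-based monitoring of sensitive API endpoints.
# Endpoint names, thresholds, and the client identifier are assumptions for
# illustration only; production bot management combines many more signals.
import time
from collections import defaultdict, deque

SENSITIVE_ENDPOINTS = {"/api/login", "/api/password-reset", "/api/export"}

class ApiAbuseMonitor:
    def __init__(self, window_seconds=60, max_calls_per_window=30):
        self.window = window_seconds
        self.limit = max_calls_per_window
        self.calls = defaultdict(deque)  # client_id -> timestamps of sensitive calls

    def record(self, client_id, endpoint, now=None):
        """Record one API call; return True if the client now looks abusive."""
        if endpoint not in SENSITIVE_ENDPOINTS:
            return False
        now = time.time() if now is None else now
        q = self.calls[client_id]
        q.append(now)
        # Drop calls that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit

if __name__ == "__main__":
    monitor = ApiAbuseMonitor(window_seconds=60, max_calls_per_window=30)
    # A scripted client hammering the login endpoint trips the check quickly.
    for i in range(40):
        flagged = monitor.record("token-abc123", "/api/login", now=1000.0 + i)
    print(flagged)  # -> True
```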
Gone are the days when you could effectively protect your site from bad bots with just a few configurations and rules. Today, advanced bots can mimic human behavior, making automated threats harder to detect and stop. Organizations need a bot management solution that can identify and stop sophisticated automation targeting APIs and application business logic, without affecting the experience of legitimate users. To do this, organizations should implement a solution with machine learning, device fingerprinting, and behavioral analysis built in, one that can pinpoint anomalies specific to a site’s unique traffic patterns. More aggressive protection measures should be deployed across high-traffic parts of the site, but not necessarily the entire site, to avoid impacting users’ experience.
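As a rough illustration of how behavioral signals and device fingerprinting might feed such a decision, the sketch below combines a few weak, assumed signals into a single bot-likelihood score. The specific signals and weights are invented for illustration; in practice, a machine-learning model would tune them against a site’s own traffic patterns rather than rely on fixed rules.

```python
# Minimal sketch: scoring a request with several weak behavioral signals.
# The signal names and weights are illustrative assumptions; real bot-management
# products learn these from a site's own traffic rather than hard-coding them.
from dataclasses import dataclass

@dataclass
class RequestSignals:
    headless_user_agent: bool      # user agent matches a known automation framework
    fingerprint_mismatch: bool     # device fingerprint changed mid-session
    inter_request_seconds: float   # time since the client's previous request
    mouse_or_touch_events: int     # client-side interaction events reported

def bot_score(s: RequestSignals) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if s.headless_user_agent:
        score += 0.4
    if s.fingerprint_mismatch:
        score += 0.3
    if s.inter_request_seconds < 0.5:   # machine-speed pacing
        score += 0.2
    if s.mouse_or_touch_events == 0:    # no human interaction observed
        score += 0.1
    return min(score, 1.0)

if __name__ == "__main__":
    scripted = RequestSignals(True, True, 0.1, 0)
    human = RequestSignals(False, False, 8.2, 14)
    print(bot_score(scripted), bot_score(human))  # -> 1.0 0.0
```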
The next 10 years will bring significant challenges for security teams as they grapple with the evolving threat of automation and bad bots. By understanding the potential risks and staying informed about the latest trends in generative AI, organizations can more effectively minimize the impact of bad bots across their websites, APIs, and applications.
By Karl Triebes, SVP and General Manager, Application Security, Imperva