Agentic AI is Changing the Fabric of the Internet and Reshaping Fraud Detection

Blog
Aurelie Guerrieri, Chief Marketing & Alliances Officer, DataDome
Aug 06, 2025

Over the past decade, traditional fraud prevention systems have been based on a relatively stable understanding of how users behave online. No longer. Agentic AI is transforming the very fabric of the internet. 

AI agents are the new users, and they don’t act like humans. From how they navigate to how they access content, agents are creating a fundamental shift in how we assess identity, detect fraud, and even monetize digital experiences.

Intent > identity

AI agents are already being used to complete real-world tasks: comparing prices, researching travel destinations, and even purchasing on the user’s behalf. But these agents don’t browse the way we do; they skip pages, go straight to APIs, and follow logic paths instead of emotional ones.

For example, a human comparing flights might visit three travel sites, skim reviews, and toggle between tabs. An AI assistant sends parallel API requests, picks the cheapest fare, and books it instantly—no page views, no browsing behavior, no time on site.
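
To make that contrast concrete, here’s a minimal Python sketch (the fare endpoints and response shape are hypothetical) of what the agent-side workflow might look like: a handful of parallel API calls and one programmatic decision, with none of the usual browsing signals left behind.

```python
# A minimal sketch, assuming hypothetical fare-search endpoints: the agent
# fires parallel API calls and books the cheapest result programmatically.
from concurrent.futures import ThreadPoolExecutor

import requests

FARE_APIS = [  # hypothetical endpoints, for illustration only
    "https://api.airline-a.example/fares",
    "https://api.airline-b.example/fares",
    "https://api.ota-c.example/fares",
]

def fetch_fare(url: str) -> dict:
    """One direct API call: no page views, no scrolling, no time on site."""
    resp = requests.get(
        url,
        params={"origin": "SFO", "destination": "JFK", "date": "2025-09-01"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"carrier": ..., "price": ..., "book_url": ...}

with ThreadPoolExecutor(max_workers=len(FARE_APIS)) as pool:
    quotes = list(pool.map(fetch_fare, FARE_APIS))

cheapest = min(quotes, key=lambda quote: quote["price"])
print("Booking", cheapest["carrier"], "at", cheapest["price"])
```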

While this looks like bot behavior to most fraud systems, it is behavior merchants want to allow. Fraud teams can’t rely on binary classifications like “bot or not” anymore. It’s about understanding intent: is this agent helping a legitimate user or doing harm?

Visibility collapse is already happening

AI agents are changing not just how users behave on-site, but how they arrive in the first place.

Instead of clicking through search results, users are increasingly relying on AI summaries and overviews to get information. But while the humans stay off-site, the LLMs generating those responses are hitting websites in massive numbers, often without identifying themselves or respecting crawl policies like robots.txt.

We’ve seen firsthand how this impacts traffic: some websites report 30–80% drops in referral visits from organic search. At the same time, they're fielding millions of unannounced LLM crawler hits. Case in point: DataDome recorded nearly 1 billion requests from OpenAI-identified crawlers in the last 30 days.

For businesses, this means:

  • Traditional web analytics no longer reflect actual demand.
  • Metrics like click-throughs, bounce rate, or time-on-page are disappearing.
  • Conversion funnels are breaking down, along with upsells, personalization, and merchandising logic. 

Forrester recently reported that 36% of US adults would be somewhat or very interested in letting an AI agent find and book reservations for travel, concerts, or other experiences. Gaining visibility into agentic traffic is now mission-critical.

That’s why DataDome is making its extensive database of AI & LLM crawlers available to the public for free.  
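
As a rough illustration of what that visibility can look like in practice, here’s a minimal Python sketch that scans standard access logs for a few self-identified AI crawler user agents. The token list is illustrative only (a maintained crawler database is far more complete), and this approach only surfaces crawlers that announce themselves.

```python
# A minimal sketch, assuming combined-format access logs: count requests whose
# User-Agent contains a known AI/LLM crawler token. Tokens below are examples;
# crawlers that don't identify themselves will not show up here.
import re
from collections import Counter

AI_CRAWLER_TOKENS = ["GPTBot", "ChatGPT-User", "OAI-SearchBot", "ClaudeBot", "PerplexityBot"]

# In combined log format, the User-Agent is the last quoted field on the line.
UA_PATTERN = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"$')

def count_ai_crawler_hits(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            match = UA_PATTERN.search(line.rstrip())
            if not match:
                continue
            user_agent = match.group("ua")
            for token in AI_CRAWLER_TOKENS:
                if token in user_agent:
                    hits[token] += 1
                    break
    return hits

print(count_ai_crawler_hits("/var/log/nginx/access.log"))
```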

The rise of indistinguishable agents

Some AI agents perform useful functions, but others (whether careless or malicious) create real risks for fraud, scraping, and system abuse.

Take a sneaker brand doing a limited-edition drop. One set of agents helps shoppers purchase quickly. Another set mimics their traffic patterns to buy out the stock at scale and resell it for profit. And without traditional session context (cookies, scrolls, clicks), it’s harder to tell the difference. API calls from both sets of agents can look nearly identical.

Worse, this opens the door to manipulation. A bad actor can spoof a trusted agent’s traffic pattern to bypass defenses and access sensitive data. Even more dangerous is what happens when agents start influencing what users see.

Picture a scenario where an airline unknowingly becomes invisible to a user because their travel assistant, intentionally or not, deprioritized or excluded it. Whether caused by faulty data or subtle manipulation (e.g., a competitor-influenced assistant), the result is the same: loss of control over brand exposure and user experience.

API abuse and signal loss

As traffic shifts from browsers to APIs, AI agents are eroding the behavioral signals that traditional fraud detection systems rely on.

Unlike human users, AI agents are stateless, meaning they don’t maintain sessions, remember carts, or follow logical flows tied to a single visit. Instead, they execute rapid, repetitive requests that look isolated, but are often part of a larger pattern.

Imagine a shopping assistant configured to scan retail sites hourly for price changes. It doesn’t retain context from one scan to the next. It doesn’t pause or scroll. It simply makes identical API calls, over and over, appearing each time as a “new” visitor. Traditional detection tools have nothing to anchor to.
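
One way to get an anchor back, sketched below in Python with hypothetical log fields, is to stop judging requests one session at a time and instead group them over a long window by a coarse fingerprint, looking for the tell-tale regular cadence of a scheduled agent.

```python
# A minimal sketch with hypothetical log fields: group requests by a coarse
# fingerprint over a long window. Near-constant gaps between hits (e.g. ~3600 s)
# suggest a scheduled, stateless agent, even though each request looked like
# a "new" visitor on its own.
from collections import defaultdict
from statistics import pstdev

def find_periodic_callers(requests, min_hits=5, max_jitter_s=120):
    """requests: iterable of dicts like {"ip": ..., "ua": ..., "path": ..., "ts": epoch_seconds}."""
    by_fingerprint = defaultdict(list)
    for r in requests:
        by_fingerprint[(r["ip"], r["ua"], r["path"])].append(r["ts"])

    periodic = []
    for fingerprint, timestamps in by_fingerprint.items():
        if len(timestamps) < min_hits:
            continue
        timestamps.sort()
        gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
        if pstdev(gaps) <= max_jitter_s:  # nearly identical spacing between calls
            periodic.append((fingerprint, sum(gaps) / len(gaps)))
    return periodic  # list of (fingerprint, average interval in seconds)
```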

From protection to monetization

This isn’t just a security challenge; it’s a shift in business models.

AI agents don’t behave like human users. They don’t click on ads, linger on product pages, or respond to upsell prompts. They complete tasks quickly and efficiently, often bypassing the very monetization levers that businesses rely on: ads, cross-sells, engagement, and remarketing.

To stay competitive, companies will need to rethink how they extract value from this new kind of traffic. That starts with real-time visibility: understanding which agents are interacting with your site, what actions they’re taking, and whether those interactions are helping or hurting your business.

In some cases, the right response might be to block or throttle. In others, it might mean creating structured, gated access: offering paid APIs or subscription models that cater to verified agents linked to trusted platforms and known users.
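
A gated model can start as a simple routing decision. The sketch below (the agent registry, keys, and rate limit are hypothetical) shows the idea: block unverified automation, throttle verified agents that get too aggressive, and give trusted agents a structured, monetized lane instead of a blanket “bot or not” verdict.

```python
# A minimal sketch; the registry, keys, and limits are hypothetical.
VERIFIED_AGENTS = {
    "agent-key-123": {"platform": "trusted-assistant", "tier": "paid-api"},
}

def route_agent_request(api_key: str | None, requests_last_minute: int) -> str:
    agent = VERIFIED_AGENTS.get(api_key) if api_key else None
    if agent is None:
        return "block"               # unverified automation stays out
    if requests_last_minute > 60:
        return "throttle"            # verified but too aggressive: slow it down
    return f"allow:{agent['tier']}"  # verified agent on the gated, monetized lane

print(route_agent_request("agent-key-123", requests_last_minute=12))
```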

This marks a turning point, from blocking bots to designing new revenue models around them.

Agentic AI isn’t just changing how people interact with the web; it’s redefining what it means to be a “user.” As digital experiences increasingly depend on AI intermediaries, businesses need fraud protection that adapts to this new internet, where visibility is scarce, bots wear many hats, and the line between user and agent is always moving.

Join me at MRC San Diego for a workshop, AI in Action: How AI Agents Are Reshaping Visibility, Revenue & Fraud. Have a topic idea or speaker to suggest? Drop me a line!



About DataDome

DataDome stops cyberfraud and bots in real time, outpacing AI-driven attacks across websites, apps, and APIs. Named a Leader in The Forrester Wave™ for Bot Management, DataDome is trusted by leading brands like Tripadvisor, Zocdoc, and SoundCloud. Its multi-layered AI engine focuses on intent, not just identity—because it’s not about knowing who’s real, it’s about what they intend to do. With thousands of adaptive AI models, DataDome blocks every fraudulent click, signup, and login in under 2 milliseconds without compromising performance. Backed by a 24/7 SOC and expert threat researchers, DataDome autonomously stops over 400 billion attacks annually. With 50+ integrations, 30+ global PoPs, and record-fast time to value, DataDome is a recognized Leader on G2 and one of G2’s Best Security Products of 2024—delivering protection that outperforms.
