
The Next-Gen Weapon Against AI-Powered Fraud

Blog
Fraud
Friendly Fraud
Identity Fraud
New Account Fraud
Synthetic Identity
Artificial Intelligence
Behavioral Analytics
Machine Learning
Diarmuid Thoma, VP of Fraud and Data Strategy, AtData
May 13, 2025

Fraud leaders do not need another article telling them that bad actors are using AI. You already know. We are seeing it in account creation patterns, promotion abuse, refund claims, synthetic identities, mule activity, and the slow degradation of signals that used to feel reliable.

The more important challenge is harder to admit:
AI is not simply giving fraudsters new tools. It is teaching them how to perform trust.

This is an important distinction. A fraudster no longer has to guess what a "good" customer looks like. They can test, iterate, and shape identities around the signals merchants reward. They can create credible personas, produce normal-looking support interactions, vary behavior across accounts, and build histories that feel organic when viewed inside one merchant's environment. The result is a growing class of identities that don't look risky, because they were designed specifically to avoid looking risky.

This is why fraud teams are running into a strange and frustrating inversion.

Real customers can look messy. They change devices, use old email addresses, mistype information, travel, share households, abandon carts, return later, and behave inconsistently because human behavior is inconsistent.

Prepared fraud can look cleaner. It arrives with the patience and discipline of someone who knows the rules.

This is the real pressure point. Fraud systems built to reward clean presentation may begin trusting the identities that were most carefully manufactured.

And the scale of the problem is no longer abstract. MRC's own payments and fraud research continues to frame ecommerce fraud as a fast-moving, strategically complex operating challenge for merchants.

So the issue is not whether fraud is scaling. It is whether trust systems are evolving fast enough to understand how the trust they grant was formed.

Fraudsters Are Learning the Shape of "Normal"

Most mature merchants have layered defenses. They use device intelligence, velocity rules, identity verification, payment analytics, behavioral signals, consortium data, machine learning, and manual review where needed. The weakness is rarely that a team is relying on one outdated check.

The weakness is that many decisions still depend on a narrow window of activity.

Inside that window, a prepared identity can look fine. The email is valid. The device has history. The account is aged. The behavior is steady. The transaction does not scream for attention.

What the merchant often cannot see is whether that identity developed naturally or was assembled to pass inspection.

AI makes this harder because it reduces the cost of rehearsal. Fraudsters can test different versions of identity behavior, learn which patterns create friction, and refine future attempts. Over time, trust signals become targets. Good behavior is no longer always evidence of good intent. Sometimes it is the output of careful preparation.

That is where many fraud models become vulnerable. They are trained to identify deviation, yet fraudsters are getting better at manufacturing conformity.

This is the uncomfortable part: the next generation of fraud may not look abnormal. It may look statistically acceptable.

The Blind Spot Is Sequence

Fraud professionals know that single signals are fragile. The real insight usually comes from how signals relate to each other.

An email created yesterday and used for a high-value order tells one story. A ten-year-old email with steady engagement across legitimate environments tells another. A cluster of accounts with unique devices but similar timing, similar behavior, and similar activation patterns tells another still.

The signal is not the attribute. The signal is the sequence.
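To make the distinction concrete, here is a minimal sketch of sequence-aware evaluation. It is illustrative only: the function names, event format, and 30-day window are assumptions, not any vendor's actual logic. The point is that it summarizes *how* an identity's history unfolded, rather than checking a single attribute at the moment of decision.

```python
from datetime import datetime, timedelta

def activity_story(events, decision_time, window_days=30):
    """Summarize how an identity's history relates to the decision moment.

    events: list of (timestamp, environment) pairs, e.g.
            [(datetime(2024, 1, 5), "retailer_a"), ...]
    Returns a dict describing the sequence, not any single attribute.
    """
    if not events:
        # No history at all: treat as compressed by definition.
        return {"history_span_days": 0, "environments": 0, "compressed": True}

    times = sorted(t for t, _ in events)
    span_days = (times[-1] - times[0]).days
    environments = len({env for _, env in events})

    # "Compressed" means the entire history fits inside one short
    # window just before the decision -- the preparation pattern.
    recent_cutoff = decision_time - timedelta(days=window_days)
    compressed = times[0] >= recent_cutoff

    return {
        "history_span_days": span_days,
        "environments": environments,
        "compressed": compressed,
    }
```

Run against the two stories above: an email created yesterday and used for a high-value order comes back `compressed=True` with one environment; a years-old address with steady engagement across several legitimate environments comes back `compressed=False`. Same attributes at the surface, very different sequences.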

This is where local decisioning has limits. A merchant may see a clean first purchase. A broader activity network may see the final move in a months-long cycle. One business sees a customer. The network sees a pattern.

That difference is especially important in areas like loyalty abuse, new account fraud, refund fraud, and synthetic identity. These attacks often depend on looking reasonable at the moment of decision. The fraud event is only the last step. The preparation happened earlier, elsewhere, and in smaller increments.

If a fraud stack only evaluates the event, it arrives late.

Activity-driven identity graphs address this gap by looking at the formation of trust over time. They help determine whether an identity has a history that developed naturally, appeared suddenly, went dormant, reactivated suspiciously, or connects to behavior seen across other environments.

That is a more sophisticated standard than "does this identity look good right now?"

It asks whether the identity has earned the confidence the system is about to give it.

Why Email Still Matters, Even When Fraudsters Can Create It Instantly

The obvious objection to an email-centered view of identity is fair. Email addresses are easy to create. Consumers use aliases. Some rely on masked emails. Inboxes go dormant. Accounts get compromised. Email alone should never be treated as proof of legitimacy.

That is precisely why activity matters.

The value of email is not that it exists. The value is that, when viewed through a large enough activity graph, it carries behavioral history. It connects account age, engagement, transaction presence, ecosystem consistency, and identity continuity across channels.

Devices change. Cards are reissued. Addresses shift. IPs are noisy. Phone numbers are recycled and ported. Email often remains one of the few identifiers that follows a consumer across the lifecycle, from account creation to purchase to loyalty to reactivation.

Fraudsters can create an email quickly and at scale. Creating years of believable, distributed, consistent activity around that email is a much harder problem.

That is where email becomes useful again. Not as a static field. Not as a deliverability check. As a behavioral anchor inside a broader identity graph.

For merchants, the practical distinction is enormous. A new email with no meaningful activity history should not be treated the same as a long-standing address with consistent signals across trusted environments. A dormant email that suddenly appears across multiple accounts and promotions should not be treated the same as an address with steady, organic use.

The intelligence is in the difference.
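One way to picture "the intelligence is in the difference" is a rough triage over an email's activity history. This is a hedged sketch, not AtData's scoring: the thresholds (30 days, 180 days, three accounts) and the function signature are invented for illustration.

```python
from datetime import datetime

def classify_email_history(first_seen, last_seen_before_event,
                           event_time, recent_accounts_linked):
    """Illustrative triage of an email address's activity history.

    first_seen: when the address was first observed anywhere
    last_seen_before_event: last activity prior to the current event
    recent_accounts_linked: accounts this address touched in the
        last 30 days (a stand-in for graph-level context)
    """
    age_days = (event_time - first_seen).days
    gap_days = (event_time - last_seen_before_event).days

    if age_days < 30:
        # Easy to create, nothing earned yet.
        return "new_no_history"
    if gap_days > 180 and recent_accounts_linked >= 3:
        # Long dormancy, then sudden reuse across many accounts.
        return "dormant_reactivated"
    # Steady, organic use: the address has earned some confidence.
    return "established_organic"
```

The labels map directly to the distinctions in the paragraph above: a brand-new address, a dormant address suddenly spread across accounts and promotions, and a long-standing address with consistent signals should each be handled differently.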

False Positives Are a Symptom of Weak Identity Context

Fraud losses are easy to measure. False positives are harder because the customer usually leaves without explanation.

That invisibility makes them dangerous.

A legitimate shopper challenged at the wrong moment may abandon. A good customer declined during account creation may never return. A high-value buyer with messy but legitimate behavior may look worse than a synthetic identity built to appear clean.

This is one of the less discussed consequences of AI-driven fraud. As bad actors get better at looking normal, real customers can become the outliers.

A fraud system without strong longitudinal identity context may punish organic messiness and reward manufactured discipline. That is a costly reversal.

Activity-driven graphs help correct that imbalance. They give fraud teams more confidence to approve customers who look unusual locally but trustworthy historically. They also help identify accounts that look clean locally but suspicious in broader context.
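The correction can be sketched as a two-axis decision: a merchant-local risk score crossed with a longitudinal trust score from the broader graph. The thresholds and return labels below are illustrative assumptions, not tuned production values; the point is that the two axes disagree in exactly the cases this section describes.

```python
def trust_decision(local_risk, longitudinal_trust):
    """Combine merchant-local risk with network-level history.

    Both inputs are scores in [0, 1]; thresholds are illustrative.
    """
    if local_risk >= 0.7 and longitudinal_trust >= 0.7:
        # Messy locally, but years of organic history:
        # likely a real customer, so review instead of decline.
        return "review"
    if local_risk < 0.3 and longitudinal_trust < 0.3:
        # Clean locally, but no earned history anywhere:
        # the manufactured-discipline pattern, so add friction.
        return "challenge"
    if local_risk >= 0.7:
        return "decline"
    return "approve"
```

Without the second axis, the first case is declined and the second is approved, which is precisely the costly reversal described above.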

That matters because the goal should not be to simply block more fraud. The goal is to make better trust decisions with less unnecessary friction.

For merchants, this is where fraud strategy intersects directly with revenue quality, customer experience, and operational efficiency. A control that stops fraud while quietly suppressing good customers is not performing as well as the dashboard suggests.

The Next Defense Is Memory

Fraud rings benefit from merchant isolation. Every business sees its own transaction, its own account, its own chargeback, its own promotion abuse. That fragmentation gives prepared fraud room to move.

Activity-driven networks reduce that advantage by creating memory across time and context.

They help answer the questions that matter:

  • How did this identity develop?
  • Has it behaved consistently across legitimate environments?
  • Did trust signals accumulate gradually or appear in a compressed window?
  • Does this account resemble a real customer, or does it resemble a preparation pattern?
  • Are we seeing one risky identity, or one piece of a coordinated network?

This is what separates an activity-driven graph from a basic identity graph. Matching records is useful. Recognizing behavior is becoming more valuable. The graph should not only connect identifiers. It should help interpret whether the activity around those identifiers deserves trust.

That requires scale, recency, and historical depth. It requires signals that show both whether an identity is reachable and whether it has behaved in ways consistent with real human use. It requires seeing identity as something that changes, rather than something verified once and stored.

This is where AtData's perspective matters. After decades of working with email-centered identity intelligence, the lesson is clear: the email address is often the most durable starting point, but the activity surrounding it is what creates meaning. Trust does not live in a single field. It lives in the history, density, freshness, and consistency of signals around that field.

That is the layer merchants increasingly need.

The Strategic Shift for Fraud Leaders

The most advanced fraud teams will not win by adding more. They will win by becoming more precise about where trust came from.

That requires a different internal audit.

  • Where are we rewarding clean presentation without understanding preparation?
  • Where are we treating local history as if it were complete history?
  • Where are we challenging real customers because their behavior is human and imperfect?
  • Where are our models learning from outcomes that may already contain bias, blind spots, or incomplete identity truth?
  • Where are fraudsters manufacturing the exact signals our systems reward?

These are uncomfortable questions, which is why they are useful.

AI-powered fraud will keep improving. The identities will be more coherent. The conversations will feel more human. The transaction patterns will look more ordinary. The easy tells will fade.

The defense has to move upstream from the fraud event into the history of the identity itself.

Longitudinal history is the defense because it brings memory to a system that fraudsters want to keep fragmented. It helps merchants see whether trust was earned over time or assembled for the moment, and it creates a stronger basis for blocking prepared fraud while recognizing legitimate customers faster.

Fraud prevention needs to be able to tell the difference between a real customer with messy signals and a fake identity with perfect ones.

With AI, that difference is becoming harder to see. It is also becoming more important.

About AtData

AtData helps organizations connect with real people, prevent fraud, and improve digital trust through permissioned, email-anchored identity intelligence and the largest network of activity signals. With more than 25 years of experience in data quality, identity, and fraud prevention, AtData supports enterprises across marketing, risk, and data operations.

Contact Us: Learn more at AtData.com and explore fraud detection strategies designed for modern commerce.
