AI Impersonation Scams: How Deepfake and Voice Cloning Are Being Used in Romance Scams

How scammers use AI deepfake video and voice cloning in romance scams. Real cases, detection techniques, and a verification checklist to protect yourself.

How AI Is Changing Romance Scams

Romance scams have traditionally relied on a predictable weakness: the scammer could not prove they were who they claimed to be. They refused video calls, used stolen photos, and relied entirely on text-based communication to maintain the deception. Victims and fraud experts alike pointed to the refusal to appear on video as the single clearest warning sign.

That barrier is eroding. Artificial intelligence tools now available at low or no cost allow scammers to generate synthetic faces, conduct deepfake video calls, and produce cloned voice messages that sound like a specific real person. According to the Federal Trade Commission, reports of romance scams involving suspected AI-generated content increased substantially in 2024 and 2025, though the true scale is difficult to measure because many victims do not realize AI was involved (FTC Staff Report on AI and Consumer Protection, 2024).

This does not mean detection is impossible. AI-generated content has specific tells, and the verification techniques outlined in this guide are designed to expose synthetic media even as the technology improves.

How Scammers Use Deepfake Video

What Deepfake Video Is

Deepfake video uses machine learning models to superimpose one person's face onto another person's live video feed. The scammer runs software on their computer that captures their webcam feed, replaces their face with the face from stolen photos, and outputs the manipulated video to a video call application. The result is a video call where the other person appears to be someone they are not.

How It Is Used in Romance Scams

A scammer using stolen photos of a specific person can now conduct brief video calls that appear to show that person. The calls are typically kept short (30 seconds to 2 minutes), conducted in low lighting, and involve minimal head movement or unusual facial expressions, all of which reduce the likelihood of visible artifacts.

This directly undermines the traditional advice to "insist on a video call" as a verification method. A brief, controlled call no longer provides the same level of assurance it once did.

Real-World Reports

In 2024, the FBI IC3 noted an increase in romance fraud complaints where victims reported having seen the person on video before discovering the fraud. While the FBI does not publicly quantify AI-specific complaint categories, Special Agent in Charge statements at multiple field offices have referenced deepfake video as an emerging tool in romance fraud operations (FBI IC3 Public Statements, 2024).

A 2023 investigative report by Wired documented cases where victims in the UK and Australia had video-called their online partner multiple times before losing money, only to discover through law enforcement that the person's face had been digitally generated. The victims described the calls as "slightly off" but attributed the oddities to poor internet connections.

How to Detect Deepfake Video Calls

Deepfake technology is improving, but current systems have consistent weaknesses:

Visual artifacts to look for:

  1. Flickering, blurring, or warping at the edges of the face, especially the hairline and jaw
  2. Lighting on the face that does not match the lighting in the rest of the frame
  3. Skin texture that looks unnaturally smooth or shifts between frames
  4. Glasses, earrings, or facial hair that distort or vanish during movement
  5. Lip movements that lag behind or fail to match the audio

Behavioral tests you can perform:

  1. Ask them to turn their head to a full profile; side views often break face-swap models
  2. Ask them to pass a hand slowly in front of their face and watch for distortion
  3. Ask them to hold up a specific object you name on the spot
  4. Ask them to change the lighting or move closer to the camera
  5. Extend the call past 10 minutes; artifacts accumulate in long, unscripted conversation

How Scammers Use Voice Cloning

What Voice Cloning Is

Voice cloning uses AI to generate speech that mimics a specific person's voice. Modern voice cloning tools can produce convincing results from as little as 3-10 seconds of sample audio. The technology captures the unique characteristics of a person's voice, including pitch, cadence, accent, and tone, and allows the operator to type text that is then spoken in the cloned voice.

Where Scammers Get Voice Samples

Voice samples are easy to obtain. Common sources include social media videos, voicemail greetings, podcast or interview recordings, and audio the scammer records during earlier calls with the target. If the scammer is impersonating a real person whose voice exists anywhere online, they can clone it.

How to Detect Cloned Voice Messages

Technical indicators:

  1. Flat or mismatched emotional tone that does not fit the content of the message
  2. Unnatural pacing, with pauses in odd places or no audible breathing
  3. Subtle shifts in accent or pronunciation between messages
  4. A faint metallic or over-smooth quality, especially on drawn-out vowels
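One pacing-related indicator can be illustrated with a toy check over speech-segment timestamps (for example, from a transcription tool). The idea that synthetic speech sometimes has suspiciously even pauses is a hedged heuristic, and the code below is an illustration, not a calibrated detector:

```python
import statistics

def pause_uniformity(segments: list[tuple[float, float]]) -> float:
    """Return the standard deviation of the pauses between speech segments.

    Human speech tends to have irregular pauses; some synthetic speech is
    suspiciously even. A value near zero means near-identical gaps.
    This is an illustrative heuristic, not a validated detector.
    """
    gaps = [nxt_start - prev_end
            for (_, prev_end), (nxt_start, _) in zip(segments, segments[1:])]
    return statistics.pstdev(gaps) if len(gaps) > 1 else 0.0

# Segment times in seconds (start, end) -- hypothetical example data.
robotic = [(0.0, 1.0), (1.5, 2.5), (3.0, 4.0), (4.5, 5.5)]  # identical 0.5 s gaps
natural = [(0.0, 1.2), (1.4, 3.0), (4.1, 4.9), (5.0, 6.3)]  # irregular gaps

print(pause_uniformity(robotic))  # 0.0
print(round(pause_uniformity(natural), 2))
```

A real screening pipeline would need many more signals; the point is only that pacing regularity is measurable, not subjective.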

Behavioral indicators:

  1. Voice messages only, with persistent excuses to avoid live, interactive phone calls
  2. Recordings that respond to your messages generically rather than specifically
  3. Inability to answer a spontaneous question in real time during a call

AI-Generated Profile Photos

The End of Reverse Image Search as a Reliable Tool

Traditionally, reverse image search was a powerful way to detect stolen photos. Scammers used real people's images, which could be traced back to the original source. AI-generated photos created by generative adversarial networks (GANs) and diffusion models produce faces of people who have never existed. These photos will not appear in any reverse image search.
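One weak supplementary signal is file metadata. Photos from real cameras and phones usually carry an EXIF segment, while AI-generated images often do not. This is only a heuristic, since social media sites also strip metadata on upload, so absence of EXIF is never proof on its own. A minimal sketch of the check:

```python
def has_exif_marker(jpeg_bytes: bytes) -> bool:
    """Rough heuristic: scan the start of a JPEG for an EXIF APP1 segment.

    Camera photos usually embed EXIF metadata; AI-generated images (and
    images re-encoded by social media platforms) often do not. Treat the
    result as a weak signal, never proof either way.
    """
    # EXIF lives in an APP1 segment tagged with the ASCII string "Exif"
    # near the start of the file; 64 KB is more than enough to cover it.
    return b"Exif" in jpeg_bytes[:65536]


# Minimal demonstration with synthetic byte strings (not real photos):
with_exif = b"\xff\xd8\xff\xe1\x00\x0eExif\x00\x00rest-of-file"
without_exif = b"\xff\xd8\xff\xdbno-metadata-here"

print(has_exif_marker(with_exif))     # True
print(has_exif_marker(without_exif))  # False
```

In practice you would read the bytes from a saved image file; the synthetic byte strings here exist only to keep the example self-contained.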

How to Spot AI-Generated Photos

Despite rapid improvement, generated faces still show recurring flaws:

  1. Mismatched earrings, glasses frames, or asymmetric facial features
  2. Hair strands that dissolve into the background or skin
  3. Garbled text, warped patterns, or melted objects in the background
  4. Hands with distorted proportions or the wrong number of fingers
  5. A profile with only one photo, or several photos in an identical pose and framing

Verification Beyond Photos

Because AI-generated photos cannot be detected through reverse image search, verification must shift to behavioral and contextual checks:

  1. Do they have a social media presence that predates your conversation by years?
  2. Are there tagged photos of them posted by other real people (friends, family)?
  3. Do mutual connections exist who can verify their identity?
  4. Can they provide a LinkedIn profile with a verifiable employment history?
  5. Will they do an extended, unscripted video call with the tests described above?
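The five questions above can be encoded as a simple risk screen. The check names and thresholds below are illustrative assumptions for this sketch, not a validated scoring model:

```python
# Each key corresponds to one of the five verification questions above.
VERIFICATION_CHECKS = [
    "social_media_predates_contact",
    "tagged_photos_from_others",
    "mutual_connections_exist",
    "verifiable_employment_history",
    "passed_extended_video_call",
]

def screen_profile(answers: dict[str, bool]) -> str:
    """Count how many checks pass and map the count to a coarse rating.

    The cutoffs (all five = low risk, three or four = caution, fewer =
    high risk) are illustrative, not empirically calibrated.
    """
    passed = sum(answers.get(check, False) for check in VERIFICATION_CHECKS)
    if passed == len(VERIFICATION_CHECKS):
        return "low risk"
    if passed >= 3:
        return "proceed with caution"
    return "high risk"

print(screen_profile({
    "social_media_predates_contact": True,
    "tagged_photos_from_others": False,
    "mutual_connections_exist": False,
    "verifiable_employment_history": True,
    "passed_extended_video_call": False,
}))  # high risk
```

A missing answer counts as a failed check, which is the conservative default: unverified should never raise the rating.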

Verification Checklist: Defending Against AI Impersonation

Use this checklist for any online relationship where doubt exists:

  1. Request an extended (10+ minute), unscripted video call and run the behavioral tests above: profile views, a hand passed over the face, holding up an object you name.
  2. Listen to voice messages for flat affect, odd pacing, and missing breath sounds, and insist on live, interactive phone calls.
  3. Confirm a social media presence that predates your conversation by years, with tagged photos posted by other real people.
  4. Look for mutual connections or a verifiable employment history that others can confirm.
  5. Treat any request for money, gift cards, or cryptocurrency from an unverified person as a stop signal.

What to Do If You Suspect AI Impersonation

  1. Do not confront the person. They may escalate manipulation or disappear with evidence.
  2. Save everything: screenshots of conversations, photos they sent, voice messages, video call recordings if possible, and financial records.
  3. Talk to someone you trust. A friend or family member can provide perspective you may not have when emotionally involved.
  4. Contact the AARP Fraud Helpline: 1-877-908-3360 (free, weekdays 8am-8pm ET).
  5. File a report with the FTC: reportfraud.ftc.gov
  6. File a complaint with the FBI IC3: ic3.gov
  7. If money was sent: Contact your bank immediately. For cryptocurrency, provide wallet addresses to the FBI. For wire transfers, request a recall through the sending institution.

Frequently Asked Questions

Can AI really make a video call look like someone else?

Yes. Real-time deepfake software can replace a person's face during a live video call. The technology is imperfect and has detectable weaknesses, but brief calls in poor conditions can be convincing. Extended calls with the behavioral tests described in this guide make deepfakes far harder to sustain, though no test is foolproof.

How common are AI-generated romance scams?

The FTC and FBI have both noted increasing reports involving suspected AI-generated content in romance scams. The exact prevalence is hard to measure because many victims do not know AI was involved. As AI tools become cheaper and easier to use, the proportion is expected to grow.

If they pass a video call test, does that guarantee they are real?

No single test provides absolute certainty. However, an extended video call (10+ minutes) with multiple behavioral challenges (profile views, a hand passed over the face, holding up specified objects), combined with contextual checks such as a long-standing social media history and mutual connections, significantly reduces the likelihood of AI impersonation.

Can I use AI detection tools to verify their photos or videos?

AI detection tools exist but are not reliable enough for personal use. They produce both false positives and false negatives. The behavioral verification methods described in this guide are more reliable than current automated detection tools for personal safety decisions.

What if I think I was scammed using AI but I am not sure?

File a report with the FBI IC3 regardless. Include as much detail as possible, including any suspicions about AI-generated content. Law enforcement is actively building expertise in AI-enabled fraud and your report contributes to their understanding of the threat.


If any of this article resonates with your situation, take the free Are They Real? Scam Risk Test now. The quiz evaluates your relationship against documented scam patterns, including AI-enabled deception. It is private, takes five minutes, and nothing is stored or shared.