Romance fraud is one of the most financially and emotionally destructive forms of online crime. According to US figures from the FBI’s Internet Crime Complaint Center (IC3) and the FTC, Americans have reported losing well over a billion dollars a year to romance scams in recent years — consistently one of the highest-loss categories of reported internet crime. The real number is almost certainly higher, since shame and embarrassment prevent many victims from reporting.

What has changed in recent years is the technology. Scammers who once relied on photographs stolen from real people’s social media accounts now have access to AI image generators and deepfake video tools. They can create photorealistic profile photos of people who have never existed, and they can conduct live video calls using real-time face-swapping software. The face on the other end of the screen may not belong to a real person at all.

This guide explains how these scams work, what the warning signs are, and how forensic tools — including free analysis at FakeRadar — can help you verify whether you are dealing with a real person.


How AI-Powered Romance Scams Work

The typical romance scam follows a well-documented pattern, now supercharged with generative AI:

Stage 1: The Profile

The scammer creates a profile on a dating app, social media platform, or messaging service. The profile photo is either:

  • Stolen from a real person (a model, soldier, doctor, or engineer found on Instagram or LinkedIn)
  • AI-generated using tools like Midjourney, DALL-E, or ThisPersonDoesNotExist-style face generators

AI-generated profiles are increasingly preferred by sophisticated operators because they are unique — they do not appear in reverse image search results, which is the most common defensive check victims perform.

Stage 2: Building Trust

The scammer invests weeks or months in conversation, establishing emotional intimacy. They are patient. They never ask for money immediately. They build a detailed fictional biography — often posing as a military officer deployed overseas, an oil rig worker, a surgeon working abroad, or a widowed professional with children.

Stage 3: The Crisis

Eventually, a manufactured emergency creates the pretext for money: a medical crisis, a corrupt official demanding a bribe, a business opportunity that requires temporary funds. The request is framed as borrowing, not gifting. The victim is often deeply emotionally invested before the first financial request arrives.

Stage 4: Escalation

Once money flows, requests multiply. Scammers continue until the victim runs out of money, cuts contact, or realizes they have been deceived.


Detecting a Fake Profile Photo

The profile photograph is the first thing you can verify forensically. Here is what to check:

Reverse Image Search (First Line of Defense)

Upload the profile photo to:

  • Google Images (images.google.com — drag and drop)
  • TinEye (tineye.com)
  • Yandex Images (often better at finding social media matches)

If the photo appears on another website attached to a different name or identity, you have found a stolen image. If it does not appear anywhere — which is increasingly common with AI-generated faces — reverse search alone is insufficient.
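Reverse image search engines rely on perceptual hashing: a fingerprint that changes very little when an image is cropped, recompressed, or lightly edited, so a stolen photo can still be matched even after minor alterations. A minimal average-hash sketch in Python illustrates the idea — note that the 8×8 grayscale grid here stands in for a real downscaling step, which production systems perform with an image library:

```python
def average_hash(gray: list[list[int]]) -> int:
    """Compute a 64-bit average hash from an 8x8 grayscale grid (values 0-255).

    Real pipelines first resize the full image down to 8x8 pixels; this
    sketch takes the already-downscaled grid so it stays dependency-free.
    Each bit records whether a pixel is brighter than the grid's average.
    """
    flat = [p for row in gray for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes; small = likely same image."""
    return bin(a ^ b).count("1")
```

Two hashes within a few bits of each other usually mean the same underlying photo. This is also why AI-generated faces defeat the check: a face that never existed has no original anywhere for the hash to match against.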

AI Detection Analysis

This is where FakeRadar’s analysis becomes useful. Upload the profile photo and look for:

  • High AI detection probability score from the Hive classifier
  • Uniform ELA heatmap with no natural photographic variation (AI-generated faces show consistent error levels across skin, hair, and background)
  • Missing or empty EXIF data with no camera make, model, or timestamp
  • No C2PA manifest — or conversely, a C2PA manifest from an AI tool like DALL-E 3
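The EXIF check is easy to reproduce yourself. A JPEG straight from a camera or phone carries its metadata in an APP1 marker segment; files produced by AI generators (or scraped and re-saved from social media) usually do not. A minimal Python sketch that only tests for the segment’s presence — a weak signal on its own, since many legitimate apps also strip metadata:

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment.

    Walks the JPEG marker segments from the start-of-image (SOI) marker
    until image data begins (SOS). Absence of EXIF is only a weak signal:
    AI generators emit no camera metadata, but so do many messaging apps.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":              # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:                  # segments must start with 0xFF
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                         # SOS: compressed data begins
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                            # APP1 segment carrying EXIF
        i += 2 + length                            # skip marker + segment body
    return False
```

Treat the result as one input among several: an image with full camera EXIF is somewhat more likely to be a genuine photograph, while an empty result tells you only that the provenance trail has been cut.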

Visual Red Flags to Check Yourself

Even without tools, certain visual artifacts suggest AI generation:

  • Eyes — too symmetrical; catchlight reflections may be identical in both eyes
  • Ears — often poorly rendered; jewelry may merge with skin
  • Hair — strands may disappear into the background or blend unnaturally
  • Background — may contain dreamlike or incoherent elements
  • Accessories — glasses frames, earrings, and necklaces may show distortion or asymmetry
  • Skin — overly smooth; no pores, no fine lines, no natural variation
  • Teeth — may be slightly too perfect or oddly shaped

Detecting Deepfakes During Video Calls

Real-time deepfake technology — where a person’s face is swapped live during a video call — has become more accessible. Scammers use tools like DeepFaceLive, Reface, and custom implementations to appear on camera as a fictional person.

Warning Signs During a Live Call

Technical artifacts:

  • Facial edge blur — a soft halo around the face where the synthetic overlay does not perfectly match the background
  • Lighting inconsistency — the face lighting does not match the room lighting (e.g., light appears to come from in front even though the room is lit from the side)
  • Frame rate drops — deepfake rendering is computationally expensive; the face may stutter or freeze briefly while the background continues moving
  • Expression lag — micro-expressions and eye movements may appear slightly delayed
  • Hair and background interaction — stray hairs near face edges may disappear or behave unrealistically when the person moves

Behavioral tactics to test for deepfakes:

  • Ask them to turn sideways — profile views are significantly harder for face-swap models to render convincingly
  • Ask them to touch their face — hands near the face often cause visible glitching at the boundary between the synthetic face and real hands
  • Ask them to hold up a piece of paper with today’s date written on it — this cannot be faked in real time without significant effort
  • Change the lighting — ask them to move toward or away from a window; real people cast consistent shadows, deepfakes often do not update correctly
  • Use a different app — some deepfake software only works with specific video call applications; switching from WhatsApp to FaceTime, for example, may cause problems

Practical Protection Steps

Before Investing Emotionally or Financially

  1. Analyze profile photos with FakeRadar or a similar AI detection tool
  2. Reverse image search all photos they send you
  3. Verify their claimed identity on LinkedIn, professional licensing databases, or military service verification portals
  4. Video call early — within the first week — and apply the deepfake tests above
  5. Never send money to someone you have not met in person, regardless of how convincing the relationship feels

If You Are Already in a Suspected Scam

  • Stop sending money immediately
  • Do not confront the scammer directly — this can lead to threats or escalated manipulation
  • Save all evidence — screenshots, chat logs, email addresses, profile links, transaction records
  • Tell someone you trust — isolation is a key tactic scammers use; breaking that isolation helps

When to Report

Report romance scams to:

  • FBI IC3: ic3.gov (United States)
  • Action Fraud: actionfraud.police.uk (United Kingdom)
  • ACCC Scamwatch: scamwatch.gov.au (Australia)
  • Your national cybercrime unit — most countries have one
  • The platform where contact occurred (dating app, social media)
  • Your bank immediately if money was transferred — some transactions can be reversed within a narrow window

You are not alone and you are not foolish for having been targeted. These operations are run by professional criminal organizations that invest significant effort in appearing authentic. Forensic tools exist precisely because human judgment is not enough.


Suspicious about a profile photo or a video call screenshot? Upload the image to FakeRadar’s free analysis tool — get an AI detection score, ELA heatmap, EXIF report, and C2PA check in under a minute. No account required for your first analysis.