Every week, someone shares a screenshot of an AI detection result with a caption like “Confirmed fake — 98% AI-generated.” Every week, that framing misleads people.

This article explains why responsible AI detection tools — including FakeRadar — report signals, not verdicts, and why that distinction matters.

The Verdict Problem

A verdict is binary: guilty or not guilty, real or fake. A signal is probabilistic: it raises or lowers the likelihood of a hypothesis. These are fundamentally different things.

Current AI detection technology cannot reliably produce verdicts for one simple reason: the space of both genuine and AI-generated images is enormous, overlapping, and constantly shifting. A score of “94% AI-generated” does not mean there is a 94% chance this image is AI-generated. It means the model’s output for this image, on a scale from 0 to 1, was 0.94. Those are not the same thing.
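
That claim is testable. A minimal sketch, using synthetic data and scikit-learn’s calibration_curve, shows how an overconfident detector can score above 0.9 on images that are AI-generated far less than 90% of the time. Every number below is invented for illustration and reflects no real detector.

```python
# Synthetic illustration: a detector whose scores run well ahead of the
# true AI-generated rate. calibration_curve bins the scores and reports
# the observed positive rate in each bin.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
scores = rng.uniform(0.5, 1.0, size=5000)   # an overconfident detector
# Hypothetical ground truth: the actual AI fraction in each score band
# is far lower than the score itself suggests.
is_ai = (rng.random(5000) < (0.2 + 0.5 * (scores - 0.5))).astype(int)

frac_ai, mean_score = calibration_curve(is_ai, scores, n_bins=5)
for s, f in zip(mean_score, frac_ai):
    print(f"images scored ~{s:.2f}: {f:.0%} actually AI-generated")
# A calibrated model would print matching columns. This one scores ~0.9
# on images that are AI-generated only about 40% of the time.
```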

What Signals Actually Are

When FakeRadar analyses an image, it runs multiple independent forensic checks (simplified sketches of three of them follow this list):

1. Hive AI deepfake classifier. A neural network trained on millions of real and AI-generated images. It outputs a probability that the image was generated or significantly manipulated by an AI model. It is the closest thing to a “score”, but it is a classifier output, not a ground-truth probability.

2. Error Level Analysis (ELA). Detects differences in JPEG compression history across regions of an image. Regions whose compression history differs from their surroundings may indicate compositing, cloning, or splicing. Read our ELA guide.

3. FFT (Fast Fourier Transform) frequency analysis. Analyses the spatial frequency patterns of an image. AI-generated images often have characteristic frequency signatures, such as grid-like periodicity or an absence of high-frequency noise, that differ from camera-captured photographs.

4. C2PA content credential verification. Checks whether the image carries a cryptographic provenance record, and whether that record is valid. Read our C2PA guide.

5. EXIF metadata inspection. Checks for the presence, plausibility, and consistency of camera metadata: make, model, GPS, timestamp, software signature. AI tools typically produce files with no EXIF, or with software fields like “Stable Diffusion” or “DALL-E.”
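
To make these concrete, here is a minimal sketch of simplified versions of three of the checks (ELA, FFT, EXIF) using Pillow and NumPy. The cutoffs are arbitrary, the file name photo.jpg is a placeholder, and none of this is FakeRadar’s production code; real forensic pipelines are considerably more involved.

```python
# Simplified illustrations of three forensic checks, not production code.
import io
import numpy as np
from PIL import Image
from PIL.ExifTags import TAGS

def ela_map(path: str, quality: int = 90) -> np.ndarray:
    """Error Level Analysis: re-save as JPEG and measure per-pixel error.
    Regions whose error level differs sharply from their surroundings may
    have a different compression history."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    diff = np.abs(np.asarray(original, dtype=int) - np.asarray(resaved, dtype=int))
    return diff.mean(axis=2)  # per-pixel error level; higher = more changed

def high_freq_fraction(path: str) -> float:
    """FFT check: fraction of spectral energy in high spatial frequencies.
    Camera sensor noise puts energy there; some generators do not."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high = radius > min(h, w) / 4   # arbitrary cutoff for illustration
    return float(spectrum[high].sum() / spectrum.sum())

def exif_summary(path: str) -> dict:
    """EXIF check: list whatever camera metadata the file carries."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag, tag): value for tag, value in exif.items()}

print(exif_summary("photo.jpg"))  # placeholder path; an empty dict is itself a signal
```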

Each of these signals is independent and imperfect. Their value comes from convergence.

Why Convergence Matters

Imagine three scenarios:

Scenario A: Hive scores 0.92. ELA shows uniform pattern. FFT shows no natural noise. EXIF shows “Generated by Midjourney.” C2PA is absent. → Five signals all pointing in the same direction. Confidence is high.

Scenario B: Hive scores 0.88. ELA shows some bright patches. EXIF is present and plausible. C2PA shows a valid Canon camera certificate. → Conflicting signals. The classifier flags it, but provenance is legitimate. This image may have been AI-edited (e.g., face swap) but not wholly generated.

Scenario C: Hive scores 0.31. ELA is uniform. FFT looks natural. EXIF present. C2PA valid. → Signals mostly point toward genuine; at most a low-confidence manipulation flag.

In all three cases, a single number from a single model would mislead you. The multi-signal view gives you something to reason about.
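
Here is a minimal sketch of what that multi-signal view might look like in code. The field names, thresholds, and summary labels are hypothetical, invented to mirror the scenarios above; the point is that the output is a direction-plus-confidence summary, not a verdict.

```python
# Hypothetical aggregation of independent signals into a qualitative summary.
from dataclasses import dataclass

@dataclass
class Signals:
    hive_score: float        # classifier output, 0..1
    ela_anomalous: bool      # compression pattern atypical of a camera JPEG
    fft_synthetic: bool      # frequency signature atypical of camera sensors
    exif_plausible: bool     # consistent camera metadata present
    c2pa_valid: bool         # valid cryptographic provenance record

def summarise(s: Signals) -> str:
    toward_ai = [s.hive_score > 0.8, s.ela_anomalous, s.fft_synthetic,
                 not s.exif_plausible, not s.c2pa_valid]
    n = sum(toward_ai)
    if n >= 4:
        return "high-confidence AI generation signals across multiple indicators"
    if n <= 1:
        return "signals mostly consistent with a genuine photograph"
    return "conflicting signals: manual review recommended"

# Scenario A: everything points the same way.
print(summarise(Signals(0.92, True, True, False, False)))
# Scenario B: the classifier flags it, but provenance is legitimate.
print(summarise(Signals(0.88, True, False, True, True)))
```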

The Base Rate Problem

Here is something almost no detection tool tells you: your prior probability matters enormously.

If you are reviewing an image that arrived via a disinformation researcher who found it on a Telegram channel selling deepfakes, your prior that it is AI-generated is already very high — say 70%. A detection signal of 0.80 from the classifier meaningfully updates your belief upward.

If you are reviewing a photo taken by your friend on a recent hiking trip, your prior is very low — say 2%. Even a classifier score of 0.80 should not override that prior without extraordinary evidence from multiple signals.

This is Bayesian reasoning applied to forensics. It is how trained analysts think, and it is why responsible detection tools emphasise signals over verdicts.
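
Here is that update worked through for the two cases above. The detector’s operating characteristics are hypothetical: assume, purely for illustration, that at this threshold it fires on 80% of AI-generated images (true positive rate) and on 10% of genuine ones (false positive rate).

```python
# A worked Bayesian update. The true/false positive rates are hypothetical,
# chosen only to show how strongly the prior shapes the posterior.
def posterior(prior: float, tpr: float = 0.80, fpr: float = 0.10) -> float:
    """P(image is AI | detector fired), via Bayes' theorem."""
    return (tpr * prior) / (tpr * prior + fpr * (1 - prior))

print(f"Telegram deepfake channel, prior 70%: {posterior(0.70):.0%}")  # ~95%
print(f"Friend's hiking photo, prior 2%:      {posterior(0.02):.0%}")  # ~14%
```

The same detector output yields two very different conclusions: the high-prior image is now very likely AI-generated, while the hiking photo most likely is not.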

Known Failure Modes

False positives (genuine images flagged as AI):

  • Heavily processed photographs (HDR, skin retouching, heavy saturation)
  • Illustrations, digital art, and 3D renders, which are not AI-generated but can share frequency characteristics with synthetic images
  • Screenshots of AI-generated images that have been re-saved multiple times
  • Images of CGI in films or video games

False negatives (AI images not flagged):

  • AI-generated images printed and re-photographed (re-introduces camera noise)
  • AI-generated images processed through social media compression
  • Older models that detectors were not trained on
  • Novel architectures that produce different artefact patterns

Neither type of error is rare. This is not a flaw in any specific tool — it reflects the fundamental difficulty of the task.

How to Use Detection Results Responsibly

  1. Treat high-confidence multi-signal convergence as a strong indicator, not proof. Contact the original source, check publication context, and look for independent corroboration.
  2. Treat low-confidence or single-signal results as inconclusive. A classifier score of 0.55 means the model is sitting near its decision boundary, barely better than a coin flip.
  3. Consider your prior. What do you already know about where this image came from?
  4. Do not publish “AI-confirmed fake” based solely on a tool result. That is not journalism — it is tool output.
  5. Remember that absence of signals is not confirmation of authenticity. Many sophisticated manipulations leave no detectable trace.

Why FakeRadar Uses This Language

FakeRadar’s result pages say “signals detected” rather than “this is fake” deliberately. We believe that communicating uncertainty honestly is a professional and ethical requirement.

Tools that say “94% fake — confirmed” are not more accurate than tools that say “high-confidence AI generation signals detected across multiple indicators.” They are just more confident-sounding — and that confidence is not warranted by the underlying science.

The goal of detection is to inform human judgement, not to replace it.


Analyse an image with full signal transparency — try FakeRadar free.