About FakeRadar

Built to give everyone access to the same forensic signals that media professionals rely on — without requiring technical expertise.

Our Mission

FakeRadar was created to address a growing problem: AI-generated images and deepfake videos are becoming indistinguishable from real content, yet the tools to detect them have remained locked behind enterprise paywalls or academic research.

Our mission is to make forensic-grade AI content verification accessible to everyone — journalists fact-checking viral images, researchers studying synthetic media, and individuals who simply want to know if what they're looking at is real.

We don't claim to provide definitive answers. We provide forensic signals. The interpretation remains yours.

Our Methodology

FakeRadar applies a multi-signal approach to content verification. No single detector is definitive — AI models make mistakes, ELA can be fooled by re-saving, and metadata can be stripped. That's why we layer multiple independent signals and present them together.

Our analysis pipeline works as follows:

  1. SHA-256 fingerprinting — The uploaded file is hashed before analysis. If we've seen this exact file before, we return the cached result instantly without re-processing. This eliminates redundant API calls and speeds up repeat queries.
  2. Multi-engine AI detection — The file is sent simultaneously to Hive AI (primary) and Sightengine (Pro). The two models are trained on different datasets and architectures and return independent confidence scores. Agreement between models strengthens confidence; disagreement signals uncertainty.
  3. Forensic signal analysis (Pro) — ELA, FFT frequency analysis, C2PA credential check, and EXIF metadata extraction run in parallel on our inference server. These signals operate entirely independently of the AI detectors.
  4. Signal aggregation — Results from all engines are aggregated and presented as a unified report. Each signal is shown with its raw confidence value, not summarized into a single opaque score.

We deliberately avoid a single "real/fake" verdict because no detector achieves 100% accuracy on all content types. Instead, we present each signal with its confidence level so you can make an informed judgment.
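The fingerprint-and-cache step described above can be sketched in a few lines of Python. This is an illustrative sketch, not FakeRadar's production code; the `analyze` function and in-memory cache are hypothetical stand-ins for the real pipeline and database.

```python
import hashlib

def sha256_fingerprint(data: bytes) -> str:
    """Hash the raw file bytes; identical uploads map to the same key."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical cache; the real system would use a persistent store.
_cache: dict[str, dict] = {}

def analyze(data: bytes) -> dict:
    key = sha256_fingerprint(data)
    if key in _cache:
        # Seen this exact file before: return the cached report
        # without re-running any detection engines.
        return _cache[key]
    report = {"sha256": key, "signals": {}}  # placeholder for real results
    _cache[key] = report
    return report
```

Because the key is derived from the file's content rather than its name, a re-uploaded copy of the same image hits the cache even if it was renamed.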

Detection Signals Explained

AI Model Detection

Deep learning classifiers trained on millions of real and AI-generated images. These models learn statistical patterns that differ between camera-captured photos and generative model outputs (GAN artifacts, diffusion model fingerprints). Hive AI and Sightengine each use independently trained architectures, reducing the risk of shared blind spots.

Error Level Analysis

ELA re-saves an image at a known compression level and measures the difference between the original and re-saved version. Authentic photos compress uniformly — edited or composited regions compress differently because they've been processed a different number of times. ELA heatmaps make these inconsistencies visible.
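The re-save-and-diff idea is simple enough to sketch with Pillow. This is a minimal illustration of the general ELA technique, not FakeRadar's implementation; the function name and the quality setting of 90 are assumptions.

```python
import io
from PIL import Image, ImageChops

def ela_heatmap(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image at a fixed JPEG quality and return the
    per-pixel difference. Regions processed a different number of
    times than the rest of the image tend to stand out."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Stretch contrast so small differences become visible to the eye.
    extrema = diff.getextrema()
    max_diff = max(hi for _, hi in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda p: int(p * scale))
```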

Frequency Domain Analysis

Fast Fourier Transform analysis converts an image from pixel space to frequency space. Many AI-generated images contain characteristic frequency artifacts — periodic patterns or unusual spectral distributions — that are invisible to the eye but appear clearly in the frequency domain. This technique is particularly effective against GAN-generated images.
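The pixel-space-to-frequency-space conversion can be sketched with NumPy. Again, this illustrates the general technique rather than FakeRadar's pipeline; the log scaling and grayscale conversion are common conventions, assumed here for display purposes.

```python
import numpy as np
from PIL import Image

def fft_spectrum(path: str) -> np.ndarray:
    """Return the log-magnitude frequency spectrum of an image.
    Periodic generator artifacts appear as bright off-center peaks."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # fftshift moves the zero-frequency component to the center.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    # Log scale compresses the huge dynamic range for visualization.
    return np.log1p(np.abs(spectrum))
```

A natural photo's spectrum decays smoothly from the center; a regular grid of bright spots away from the center is the kind of periodic artifact this analysis targets.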

Content Credentials (C2PA)

C2PA (Coalition for Content Provenance and Authenticity) is an open standard for embedding cryptographically signed provenance data into media files. When a camera, phone, or AI platform embeds C2PA credentials, FakeRadar verifies the cryptographic signature and displays the full provenance chain — device model, capture time, software used, and any edits applied.

EXIF Metadata

EXIF metadata records camera settings, GPS coordinates, timestamps, and software used when a photo is taken. AI-generated images typically lack authentic EXIF data, or contain metadata inconsistencies — such as software signatures from image editors applied to a file with a camera model that never existed. We inspect and surface these anomalies.
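Extracting EXIF for this kind of inspection can be sketched with Pillow. This is an illustrative snippet, not FakeRadar's extractor; a real pipeline would also parse maker notes and cross-check fields for the inconsistencies described above.

```python
from PIL import Image, ExifTags

def read_exif(path: str) -> dict:
    """Return EXIF data with numeric tag IDs mapped to readable names
    (e.g. 'Model', 'DateTime', 'Software'). An empty dict is itself a
    signal: AI-generated files typically carry no camera metadata."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}
```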

No single signal is conclusive. EXIF can be stripped; ELA can be confused by heavy compression; AI models have false positive rates. Use FakeRadar results as one input in a broader verification process, not as final proof.

Who It's For

Journalists & Fact-Checkers

Verify viral images and video clips before publication. Generate shareable analysis reports with a permanent link to document your verification process. Our journalist-specific guide covers recommended workflow and limitations.

Researchers

Study AI-generated media patterns with access to raw signal data. Pro tier provides ELA heatmaps, FFT spectra, and Sightengine scores alongside Hive results — suitable for comparative analysis across multiple detectors.

Social Media Moderators

Quickly assess flagged content before escalation. The analysis runs in seconds and generates a timestamped report that can serve as documentation for moderation decisions.

Individuals

Anyone who encounters a suspicious image or video online. Free tier covers basic AI detection for images — no account required for your first analysis.

Technology Stack

FakeRadar is built on Cloudflare's global edge network, with analysis processing distributed to minimize latency worldwide.

Component                  Technology                    Notes
AI Detection (primary)     Hive AI                       Industry-leading deepfake & AI content detection
AI Detection (secondary)   Sightengine                   Independent second opinion — Pro tier
ELA + FFT + C2PA + EXIF    VPS Inference                 Custom FastAPI server, runs independently — Pro tier
Frontend                   Astro + Cloudflare Workers    Server-rendered, edge-deployed
Database                   Cloudflare D1                 SQLite at the edge — analysis results, users
File storage               Cloudflare R2                 ELA heatmaps, FFT images (temporary)

The forensic analysis server (ELA, FFT, C2PA, EXIF) runs our own Python implementation, not a third-party service. This gives us full control over the analysis pipeline and ensures no uploaded content is sent to additional third parties beyond what's documented in our Privacy Policy.

Privacy & Zero Retention

FakeRadar does not store your uploaded files. Here's exactly what happens to your content:

  • Upload: Your file is received by our Cloudflare Worker, processed in memory, and forwarded to the analysis engines.
  • Analysis engines: Hive AI and Sightengine receive your file for classification. Their data handling is governed by their own privacy policies, which we link in ours.
  • Forensic server: For ELA and FFT analysis, the file is sent to our private inference server. The original file is discarded immediately after analysis. Only the generated heatmap images are stored temporarily in R2 (up to 90 days for Pro, shorter for free) for display in your report.
  • What we keep: A SHA-256 hash of the file content (used to cache results and avoid re-processing identical files), the analysis scores, and metadata like file type and dimensions. Never the original file.
  • Shared reports: If you share a result, the report remains accessible at its URL. You can delete shared reports from your dashboard at any time.

For complete details, see our Privacy Policy.

Contact & Background

FakeRadar is built and maintained by Oktay Atalay, an independent software developer based in Istanbul, Turkey. It was created out of direct frustration with how difficult it was to verify a single suspicious image without either paying for an enterprise tool or running Python scripts locally.

If you've found an analysis error, a bug, or want to discuss a partnership or press inquiry: