Every digital photograph carries a hidden layer of data that most people never see. This data — known as EXIF (Exchangeable Image File Format) — is written automatically by the camera or smartphone at the moment of capture. It records the device, the settings, the time, and often the location. For forensic analysts, journalists, and fact-checkers, EXIF metadata is one of the first places to look when verifying an image’s authenticity.
The problem: AI-generated images either have no EXIF data at all, or they carry metadata that does not match any real photographic process. Knowing how to read and interpret this invisible layer is a foundational skill in modern image verification.
What Is EXIF Data?
EXIF is a standard metadata format originally published by the Japan Electronic Industry Development Association (JEIDA) and now maintained by its successor, the Japan Electronics and Information Technology Industries Association (JEITA), together with CIPA. It was designed so that digital cameras could store technical parameters alongside the image pixels themselves, inside the same file.
When you take a photograph with a digital camera or smartphone, the device automatically writes dozens of fields into the EXIF block. These fields travel with the image file wherever it goes — unless something strips them out.
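To make "travels with the file" concrete: in a JPEG, the EXIF block physically lives in an APP1 marker segment near the start of the file, whose body begins with the ASCII identifier "Exif" followed by two NUL bytes. The stdlib-only Python sketch below walks the JPEG marker structure and returns the raw EXIF payload if one is present (a minimal illustration, not a full parser):

```python
import struct

def find_exif_payload(jpeg_bytes):
    """Return the raw EXIF payload of a JPEG, or None if absent.

    EXIF travels in an APP1 segment (marker 0xFFE1) whose body begins
    with the identifier "Exif" plus two NUL bytes; what follows is a
    TIFF structure holding the tags (Make, Model, DateTimeOriginal...).
    """
    if jpeg_bytes[:2] != b"\xff\xd8":              # SOI marker: not a JPEG
        return None
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        marker, length = struct.unpack(">HH", jpeg_bytes[pos:pos + 4])
        if marker == 0xFFE1:                       # APP1 segment
            body = jpeg_bytes[pos + 4:pos + 2 + length]
            if body.startswith(b"Exif\x00\x00"):
                return body[6:]                    # TIFF header + IFDs
        if marker == 0xFFDA:                       # start of scan: no more headers
            break
        pos += 2 + length                          # length covers its own 2 bytes
    return None
```

Running this scan on a file with no APP1 segment simply returns None; note that PNG uses a different container entirely, which is one reason AI-generated PNGs so often carry no EXIF-style metadata at all.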
Common EXIF Fields in a Real Photograph
| Field | What It Contains | Example |
|---|---|---|
| Make | Camera manufacturer | Canon, Nikon, Apple |
| Model | Specific camera/phone model | iPhone 15 Pro, Canon EOS R5 |
| DateTimeOriginal | Exact capture timestamp | 2025-08-14 09:23:41 |
| GPS Latitude / Longitude | Geographic coordinates | 41.0082° N, 28.9784° E |
| LensModel | Lens used | 24-70mm f/2.8 |
| FocalLength | Focal length at capture | 50mm |
| ISO | Light sensitivity setting | ISO 800 |
| ShutterSpeedValue | Exposure time | 1/250 sec |
| ApertureValue | Aperture (f-stop) | f/4.0 |
| Software | Firmware or editing software | Adobe Lightroom 7.0 |
| Flash | Flash fired or not | No flash |
| ColorSpace | Color encoding | sRGB |
| PixelXDimension / PixelYDimension | Image resolution | 4032 x 3024 |
A complete EXIF block from a real photograph tells a coherent story: a specific camera, at a specific place, at a specific time, with physically plausible settings. The details should be internally consistent: a 50mm focal length on a full-frame sensor, for example, implies a horizontal angle of view of roughly 40°, and the perspective visible in the image content should match it.
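That consistency check is simple geometry: the horizontal angle of view follows from the focal length and the sensor width as AOV = 2 · arctan(w / 2f). A quick sketch, assuming a full-frame sensor (36 mm wide) as the default:

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view: AOV = 2 * arctan(w / (2 * f)).

    sensor_width_mm defaults to a full-frame sensor (36 mm wide);
    pass the actual sensor width for crop-sensor cameras.
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))
```

For a 50mm lens this gives about 39.6°; if the EXIF claims 50mm but the image shows the sweeping perspective of an ultra-wide shot, something does not add up.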
EXIF in AI-Generated Images
This is where the gap becomes forensically useful.
Scenario 1: No EXIF at all
Most AI image generators — including Midjourney, Stable Diffusion, and many others — produce PNG or JPEG files with empty or near-empty metadata blocks. There is no camera make, no timestamp, no GPS. The absence of EXIF is not definitive proof of AI generation (some cameras and workflows strip metadata legitimately), but it is a significant flag worth investigating further.
Scenario 2: Synthetic or placeholder metadata
Some pipelines add generic metadata automatically. You might see:
- Software: GIMP 2.10 on an image that shows no GIMP editing artifacts
- A creation date that matches the exact second of file export
- A resolution that does not correspond to any known sensor format
Scenario 3: “Generated by X” disclosure
DALL-E 3 and some other OpenAI products write XMP metadata identifying the generator. This is intentional transparency. A field like dc:creator: OpenAI or an embedded C2PA manifest (see below) makes identification straightforward — as long as the metadata has not been stripped.
Scenario 4: Deliberately falsified EXIF
Anyone with basic tools can write arbitrary EXIF data into an image file. A bad actor can plant a fake camera model, timestamp, and even GPS coordinates to make an AI-generated image look like a real photograph taken at a real location on a real date. This is the most dangerous scenario, and it is why EXIF alone is insufficient — it must be combined with pixel-level analysis.
Red Flags: Metadata Inconsistencies to Look For
The most useful forensic signals are not missing data but contradictory data:
- Software field contradicts the claimed camera. A photo supposedly from a Nikon D850 shows Software: Stable Diffusion WebUI 1.7.0.
- DateTimeOriginal is in the future or impossibly recent. A timestamp from after the event the image allegedly documents.
- GPS coordinates do not match the scene. A photograph of a tropical beach with GPS coordinates placing it in Finland.
- Pixel dimensions do not match any real sensor. A resolution of 1024x1024 is a strong indicator — that is a common AI output size, not a camera sensor format.
- ISO, shutter speed, and aperture do not form a physically consistent exposure triangle for the apparent lighting in the scene.
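The last check above can be made numerical with the standard exposure-value formula, EV100 = log2(N² / t) − log2(ISO / 100). The scene ranges in this sketch are rough illustrative values, not a calibrated reference:

```python
import math

def ev100(f_number, shutter_s, iso):
    """Exposure value normalised to ISO 100:
    EV100 = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Rough EV100 ranges for common scene types (illustrative values only)
SCENE_EV = {
    "sunny daylight": (14, 16),
    "overcast": (11, 13),
    "indoor": (5, 8),
    "night street": (0, 4),
}

def plausible_exposure(f_number, shutter_s, iso, scene):
    """True if the claimed settings fit the scene type, with 1 EV of slack."""
    lo, hi = SCENE_EV[scene]
    return lo - 1 <= ev100(f_number, shutter_s, iso) <= hi + 1
```

The classic "Sunny 16" combination (f/16, 1/125s, ISO 100) lands near EV 15, squarely in the daylight range; the same settings claimed for a night-time street scene would be a red flag.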
How Social Media Strips EXIF
Here is a critical complication: most major social media platforms automatically strip EXIF metadata when images are uploaded. Facebook, Instagram, X (Twitter), TikTok, and WhatsApp all remove EXIF before displaying images to other users. This is done partly for privacy (protecting GPS data) and partly for technical reasons (smaller file sizes).
The consequence: if you download an image from social media and check its EXIF, you will almost certainly find it empty — regardless of whether the original image was a real photograph or AI-generated. Social media stripping is not evidence of manipulation; it is a platform-level behavior.
This is why forensic analysis of social media images must rely on visual artifact detection, ELA, and FFT rather than metadata alone.
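Conceptually, the stripping step is simple: re-encode the image and drop the metadata segments. A simplified stdlib-only sketch of the dropping step for JPEG (real platform pipelines also recompress the pixels, which this sketch does not attempt):

```python
import struct

def strip_app_metadata(jpeg_bytes):
    """Remove APP1..APP15 metadata segments (EXIF, XMP, etc.) from a JPEG.

    A simplified sketch of what platforms do on upload; APP0 (JFIF)
    is kept because decoders may rely on it.
    """
    out = bytearray(jpeg_bytes[:2])                # keep the SOI marker
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        marker, length = struct.unpack(">HH", jpeg_bytes[pos:pos + 4])
        seg_end = pos + 2 + length
        if 0xFFE1 <= marker <= 0xFFEF:             # APP1..APP15: drop
            pos = seg_end
            continue
        if marker == 0xFFDA:                       # start of scan: copy the rest
            out += jpeg_bytes[pos:]
            return bytes(out)
        out += jpeg_bytes[pos:seg_end]             # any other segment: keep
        pos = seg_end
    out += jpeg_bytes[pos:]                        # trailing markers (e.g. EOI)
    return bytes(out)
```

After this pass, find-the-EXIF tools come back empty whether the original was a genuine photo or a synthetic image, which is exactly the ambiguity described above.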
How FakeRadar Analyzes EXIF
FakeRadar’s EXIF analysis reads the full metadata block of every uploaded image and evaluates it across several dimensions:
- Presence check — Is any camera metadata present at all?
- Consistency check — Do the fields tell a coherent photographic story (make, model, lens, exposure settings)?
- Software field analysis — Does the software field indicate an AI generation tool or image editor?
- C2PA manifest check — Is there an embedded Content Credentials manifest identifying an AI model as the creator?
- Resolution plausibility — Does the image resolution match known camera sensor formats?
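A toy version of the presence, software, and resolution checks might look like the sketch below. This is an illustrative sketch only, not FakeRadar's actual implementation; the field names follow the EXIF tags in the table above, and the hint lists are hypothetical examples:

```python
# Hypothetical hint lists for illustration, not an exhaustive reference
AI_SOFTWARE_HINTS = ("stable diffusion", "midjourney", "dall-e", "comfyui")
COMMON_AI_SIZES = {(512, 512), (768, 768), (1024, 1024), (1024, 1792), (1792, 1024)}

def exif_red_flags(exif):
    """Collect red flags from a dict of EXIF fields keyed by tag name,
    e.g. {"Make": ..., "Software": ..., "PixelXDimension": ...}."""
    flags = []
    if not exif.get("Make") and not exif.get("Model"):
        flags.append("no camera make/model")
    software = (exif.get("Software") or "").lower()
    if any(hint in software for hint in AI_SOFTWARE_HINTS):
        flags.append("AI tool in Software field")
    dims = (exif.get("PixelXDimension"), exif.get("PixelYDimension"))
    if dims in COMMON_AI_SIZES:
        flags.append("resolution matches common AI output size")
    return flags
```

An image claiming Stable Diffusion in its Software field at 1024x1024 would trip two flags at once; a plausible camera record with photographic dimensions would trip none.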
The results are displayed alongside the image’s ELA heatmap and AI classifier scores, giving you a combined view of all available signals.
Limitations of EXIF Analysis
EXIF analysis is powerful but not conclusive on its own. Be aware of these limitations:
- EXIF can be stripped from real photographs by any image editor, WhatsApp, or social media platform, making a genuine photo look suspicious.
- EXIF can be fabricated by anyone with a tool like ExifTool. A determined bad actor can write convincing fake metadata.
- EXIF does not survive screenshots. A screenshot of an AI-generated image has no original EXIF; it inherits only the screen capture timestamp from the device.
- Some legitimate workflows strip EXIF. News agencies, legal discovery processes, and some publishing pipelines remove EXIF for privacy compliance.
This is why FakeRadar combines EXIF analysis with pixel-level forensics (ELA, FFT) and AI classifier scores. No single signal is authoritative — the combination of consistent signals across multiple analysis layers is what builds a reliable conclusion.
Want to see what metadata is hiding in an image? Upload any JPEG or PNG to FakeRadar and get a complete EXIF breakdown alongside full forensic analysis — ELA heatmap, FFT spectrum, C2PA verification, and AI detection score — in one report.