Every day, images spread across social media stripped of context. A photo from 2019 becomes “proof” of a 2026 event. An AI-generated portrait becomes a “real person.” Content credentials — and specifically the C2PA standard — are the industry’s attempt to fix this at the file level.

What is C2PA?

C2PA stands for the Coalition for Content Provenance and Authenticity. The coalition publishes an open technical standard of the same name that embeds a cryptographically signed provenance record directly inside an image, video, or audio file.

Think of it as a chain of custody document baked into the file itself.

The record — called a manifest — can contain:

  • The device or software that created the file
  • The date and location of capture (if available)
  • Any edits made and the tools used
  • Whether any AI generation or editing was involved
  • A thumbnail of the content at each edit stage
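To make this concrete, the sketch below shows, as a plain Python dictionary, the kind of information a manifest carries. It is illustrative only: a real manifest is a binary JUMBF/CBOR structure with a formal assertion schema, and the field names here are loosely modelled on it rather than copied from the specification.

```python
# Illustrative view of what a C2PA manifest records.
# Real manifests are binary JUMBF/CBOR structures with formal assertion
# schemas; the field names below are simplified approximations.
example_manifest = {
    "claim_generator": "ExampleCam Firmware 1.2",  # device or software that created the file
    "assertions": [
        {   # edit history: what was done, when, and with which tool
            # (declared AI generation also shows up here, e.g. via a digitalSourceType field)
            "label": "c2pa.actions",
            "data": {"actions": [
                {"action": "c2pa.created", "when": "2026-01-14T09:32:00Z"},
                {"action": "c2pa.color_adjustments", "softwareAgent": "Example Editor 5.0"},
            ]},
        },
        {   # capture metadata, if the device chose to include it
            "label": "stds.exif",
            "data": {"gps_latitude": None, "gps_longitude": None},
        },
        {   # thumbnail of the content at this stage of the history
            "label": "c2pa.thumbnail.claim.jpeg",
            "data": "<binary thumbnail>",
        },
    ],
    "signature_info": {"issuer": "Example Camera Corp"},
}
```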

Because the manifest is cryptographically signed, any tampering — including stripping it out — is detectable.

Who Supports C2PA?

C2PA was founded in 2021 by Adobe, Arm, BBC, Intel, Microsoft, and Truepic. It has since grown to include:

  • Camera manufacturers: Leica, Sony, Nikon (hardware signing in cameras)
  • AI platforms: OpenAI (DALL-E 3 attaches C2PA manifests to generated images), Google (Imagen), Microsoft (Bing Image Creator)
  • Social platforms: LinkedIn verifies and displays C2PA credentials on uploads
  • News organisations: BBC, AFP, Reuters

This is not a niche academic standard — it is being deployed at scale in production tools right now.

How C2PA Works

  1. Signing at creation: A camera, software, or AI tool signs the file with a certificate issued to that device or organisation. The signature covers a hash of the image content.
  2. Embedding: The manifest is stored inside the file’s metadata (JUMBF for JPEG, similar containers for other formats).
  3. Verification: Any tool that reads C2PA (including FakeRadar) can verify the signature chain and display the provenance history.
  4. Chaining: If the image is then edited in Photoshop (which supports C2PA), Photoshop adds a new signed manifest layer on top — preserving the full history.
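The cryptographic core of steps 1 and 3 can be sketched in a few lines of Python. Treat it as a conceptual illustration under simplified assumptions: real C2PA signing uses COSE signatures and X.509 certificate chains over the manifest structure, not a bare Ed25519 key over the raw file, and "photo.jpg" is just a placeholder path.

```python
# Conceptual hash-then-sign-then-verify sketch (not real C2PA signing,
# which uses COSE signatures and X.509 certificate chains).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Signing at creation: the "camera" hashes the content and signs the hash.
camera_key = Ed25519PrivateKey.generate()
with open("photo.jpg", "rb") as f:
    image_bytes = f.read()
content_hash = hashlib.sha256(image_bytes).digest()
signature = camera_key.sign(content_hash)

# 3. Verification: anyone with the public key re-hashes the file and checks
# the signature. Any change to the content changes the hash, so verification fails.
public_key = camera_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("Content matches the signed hash")
except InvalidSignature:
    print("Content has been altered since signing")
```

Chaining (step 4) amounts to repeating this: each editor signs a new manifest that references the previous one, so the full history remains verifiable end to end.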

What Happens When an Image Has No C2PA?

Most images on the internet today have no C2PA data. The absence of content credentials does not mean an image is fake — it simply means no provenance record was attached. This is the current default for virtually all older cameras and most consumer software.

However, as C2PA adoption grows, the absence of credentials will increasingly become a meaningful signal for images that should have been captured by a C2PA-capable device (e.g., a recent Sony camera) or generated by a C2PA-compliant AI tool (e.g., DALL-E 3).
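At the byte level, "no credentials" simply means the file contains no JUMBF manifest store. The sketch below is a rough heuristic presence check for JPEGs, based on the fact that C2PA data is carried in APP11 segments as JUMBF boxes labelled "c2pa"; a real tool would parse the JUMBF structure properly and then verify the signature chain rather than pattern-match raw bytes.

```python
# Rough heuristic: does this JPEG appear to carry embedded C2PA data?
# C2PA manifests live in APP11 (0xFFEB) marker segments as JUMBF boxes.
# This does not parse or verify anything and can misfire on unusual files.
def appears_to_have_c2pa(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    has_app11 = b"\xff\xeb" in data       # APP11 marker bytes somewhere in the stream
    has_jumbf_box = b"jumb" in data       # JUMBF superbox type
    has_c2pa_label = b"c2pa" in data      # label of the C2PA manifest store
    return has_app11 and has_jumbf_box and has_c2pa_label

if __name__ == "__main__":
    print(appears_to_have_c2pa("photo.jpg"))  # placeholder path
```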

What Happens When Credentials Are Stripped?

Stripping metadata — a common step in many image editors, social media platforms, and image downloaders — removes the C2PA manifest. FakeRadar detects when credentials have been stripped and reports this as a separate signal.

A file that claims to come from a C2PA-capable source but arrives with its manifest stripped is itself suspicious.

C2PA vs. Watermarking

C2PA is often confused with invisible watermarking (like Google’s SynthID). They are different:

                               C2PA                           AI Watermarking
What it is                     Metadata manifest              Invisible signal in pixels
Survives screenshot            No                             Sometimes
Survives editing               Partially (chain preserved)    Degrades
Requires creator buy-in        Yes                            Yes (model-level)
Verifiable by third parties    Yes                            Only with provider’s tool

Both are useful; neither is foolproof. They are complementary, not competing.

How FakeRadar Uses C2PA

FakeRadar extracts and verifies the C2PA manifest from every image you upload. The result shows:

  • Valid credentials found — who signed it, when, and what the editing history shows
  • Credentials present but invalid — signature verification failed; the file may have been tampered with
  • Credentials stripped — the file once had a manifest that has been removed
  • No credentials — no manifest was found (the norm for most images today)
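Conceptually, those four outcomes reduce to a small decision tree. The sketch below is a hypothetical illustration of that logic, not FakeRadar's actual implementation; the three boolean parameters stand in for whatever the real extraction, signature verification, and stripping checks return.

```python
from enum import Enum

class CredentialStatus(Enum):
    VALID = "valid credentials found"
    INVALID = "credentials present but invalid"
    STRIPPED = "credentials stripped"
    ABSENT = "no credentials"

# Hypothetical mapping from verification results to the four outcomes above.
def classify(manifest_present: bool, signature_valid: bool,
             stripping_detected: bool) -> CredentialStatus:
    if manifest_present:
        return CredentialStatus.VALID if signature_valid else CredentialStatus.INVALID
    if stripping_detected:
        return CredentialStatus.STRIPPED
    return CredentialStatus.ABSENT
```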

This is combined with error level analysis (ELA), FFT-based frequency analysis, Hive AI deepfake scoring, and EXIF metadata to produce a multi-signal report.

What C2PA Can and Cannot Prove

C2PA can prove:

  • That a specific device or tool produced or modified a file
  • What edits were applied and in what order
  • Whether AI generation was declared by the creating tool

C2PA cannot prove:

  • That an image is genuine if the creating device or tool is compromised
  • Anything about images with no manifest
  • That content is accurate or truthful — only that it came from a stated source

Summary

C2PA is the most promising technical standard for content provenance available today. It is already deployed in major cameras, AI tools, and platforms. It gives journalists, fact-checkers, and platforms a verifiable chain of custody — something that was impossible before.

It is not a silver bullet. But combined with other forensic signals, it meaningfully raises the cost and complexity of undetected manipulation.


Check whether your image carries valid C2PA credentials — analyse it free on FakeRadar.