This page is a procedural recipe. Given an image whose authenticity matters — a viral social-media post, a wire image being considered for publication, a piece of evidence presented in court, a photograph in a fraud investigation — what do you do, in what order, with what tools, to produce a defensible judgment about its origin?
The procedure has six stages, organized roughly from cheapest to most expensive and from highest-confidence to most-judgmental. A typical case is resolved in the first one or two stages; the later stages are needed only when the easier signals are absent or contradictory. The whole chain takes perhaps fifteen minutes for a routine case, hours or days for a contested one. Nothing in the workflow is a single-shot verdict; the discipline is in combining signals.
Stage 1: Frame the question
Before opening tools, state what you are trying to verify. The questions verification can answer are narrower than "is this real":
- Is this image from where the publisher claims it is from?
- Was this image captured when claimed?
- Is this the original image, or has it been edited?
- Is this an AI-generated image?
- Has this image been published before in a different context?
Different questions need different evidence. A publication-date question is answered by reverse search; an AI-or-not question by a combination of provenance, watermark detection, and forensics; an edited-or-not question by provenance and forensic indicators. Skipping this framing produces verification work that gathers evidence without answering any particular question, which is useful only as a research exercise.
Stage 2: Inspect provenance
If the image carries C2PA Content Credentials, that is the strongest signal available. Run the file through a validator and read the report.
The right tools depend on context. The Adobe Content Credentials Verify site (verify.contentauthenticity.org) accepts a file or URL and shows the full chain. The c2patool command-line utility produces a structured JSON report suitable for scripting and archiving. Newer browsers and editors expose Content Credentials inline; for any case that matters, prefer a dedicated validator over a browser badge, because the validator surfaces the structured reasons behind a result and the browser may collapse nuance into a yes-or-no.
A validator's output tells you:
- Whether a manifest was found at all.
- Whether the manifest's signatures validate cryptographically.
- Whether the signing certificate is on the C2PA Trust List.
- What chain of edits is recorded.
- Whether any ingredient (prior version) has a missing or broken sub-chain.
- Whether the binding to the asset bytes is intact, or whether only a soft binding survived.
A valid chain from a trusted signer goes a long way toward answering the question. A missing manifest is not a negative signal — most images have none — but a present-but-invalid manifest is. The limitations page covers what valid chains do not establish.
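The three outcomes above (no manifest, present-but-invalid manifest, valid chain) can be sketched as a small triage function. This is a minimal sketch: the field names below ("manifests", "validation_status") are simplified assumptions, not c2patool's exact schema, which is richer and should be checked against the tool's own documentation.

```python
import json

def classify_provenance(report_json: str) -> str:
    """Triage a provenance report into the three outcomes discussed above.

    Assumes a simplified report shape: a "manifests" list, each entry
    carrying a "validation_status" list of failure codes. Real c2patool
    output is richer; treat these field names as placeholders.
    """
    report = json.loads(report_json)
    manifests = report.get("manifests", [])
    if not manifests:
        # A missing manifest is not a negative signal: most images have none.
        return "no-manifest"
    failures = [
        status
        for m in manifests
        for status in m.get("validation_status", [])
    ]
    if failures:
        # A present-but-invalid manifest IS a negative signal.
        return "invalid-manifest"
    return "valid-chain"

print(classify_provenance('{"manifests": []}'))  # → no-manifest
print(classify_provenance('{"manifests": [{"validation_status": []}]}'))  # → valid-chain
```

Note that "valid-chain" here only means no recorded validation failures; whether the signer is trusted, and what a valid chain does and does not establish, still depends on the trust list and the limitations discussed elsewhere.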
Stage 3: Read metadata
Run the file through exiftool, FotoForensics, or the metadata viewer of your choice. Look for:
- Software / History fields naming editing tools or AI generators.
- Capture timestamps consistent or inconsistent with the claimed event time.
- GPS coordinates matching the claimed location.
- Camera make / model matching the photographer's known equipment.
- Embedded thumbnails matching the main image (or revealingly not).
- IPTC fields with photographer, copyright, and location notes.
Metadata is editable, so consistency is suggestive rather than conclusive. But specific inconsistencies are highly informative: a "Software: Photoshop" field on an image claimed to be straight from a camera; a "Make: Canon" field with a Nikon-formatted MakerNote; a "DateTimeOriginal" that predates the camera model's release. The metadata analysis page covers the technique in detail.
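Two of the checks above can be mechanized as a sketch. The field names follow exiftool's tag naming ("Software", "Model", "DateTimeOriginal"), but exiftool emits dates as strings; here they are assumed already parsed into datetime objects, and the release-date table is a hypothetical stand-in for a maintained database.

```python
from datetime import datetime

# Hypothetical release dates; a real check would consult a maintained
# database of camera models.
CAMERA_RELEASE = {"Canon EOS R5": datetime(2020, 7, 9)}

def flag_inconsistencies(meta: dict, claimed_straight_from_camera: bool) -> list[str]:
    """Return human-readable flags for metadata inconsistencies.

    Field names follow exiftool's conventions; the checks themselves are
    illustrative, not exhaustive.
    """
    flags = []
    if claimed_straight_from_camera and "Photoshop" in meta.get("Software", ""):
        flags.append("Software names an editor on a claimed-original image")
    model = meta.get("Model")
    taken = meta.get("DateTimeOriginal")
    if model in CAMERA_RELEASE and taken and taken < CAMERA_RELEASE[model]:
        flags.append("DateTimeOriginal predates the camera model's release")
    return flags

print(flag_inconsistencies(
    {"Software": "Adobe Photoshop 25.0",
     "Model": "Canon EOS R5",
     "DateTimeOriginal": datetime(2019, 3, 1)},
    claimed_straight_from_camera=True,
))  # both flags fire
```

An empty result from checks like these is weak evidence (metadata is editable); a non-empty result is the informative case.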
Stage 4: Reverse search
Run the image through the chained reverse-search workflow described on the reverse search page: TinEye first, then Google Lens, then Yandex, then Bing. The goal is to find:
- Earlier appearances of the image, with their original captions and contexts.
- Stock-photography origins, which contradict claims of candid capture.
- Pages where the image appears under different captions, which establish the editorial trajectory.
- Confirmation that the image is plausibly new (no matches in any engine), which is necessary but not sufficient to support a claim of newness.
Reverse search is the most cost-effective single verification step. A positive match (the image existed earlier with a different caption) often ends the inquiry. A negative result (no engine matches) does not establish authenticity but moves the inquiry to other stages.
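The chained, stop-at-first-match logic can be sketched as follows. None of the four engines exposes a uniform free API for this, so the engine callables below are stand-ins for what is in practice a manual check or a vendor API; the structure is what matters.

```python
from typing import Callable, Optional

# Each engine is modeled as a callable from image bytes to a list of
# prior-appearance URLs. These are stand-ins, not real API clients.
Engine = Callable[[bytes], list[str]]

def chained_reverse_search(
    image: bytes, engines: list[tuple[str, Engine]]
) -> Optional[tuple[str, list[str]]]:
    """Run engines in the chained order, stopping at the first positive
    match, since a prior appearance often ends the inquiry.
    Returns (engine_name, matches), or None if no engine matched."""
    for name, engine in engines:
        matches = engine(image)
        if matches:
            return name, matches
    return None  # no matches: necessary but not sufficient for newness

# Stub demonstration: TinEye finds nothing, Google Lens finds a prior use.
result = chained_reverse_search(b"...image bytes...", [
    ("TinEye", lambda img: []),
    ("Google Lens", lambda img: ["https://example.org/2019-original"]),
    ("Yandex", lambda img: []),
    ("Bing", lambda img: []),
])
print(result)
```

The ordering encodes the workflow's priority: run the cheapest, highest-precision engine first and only fall through to the broader engines when it returns nothing.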
Stage 5: Forensic indicators
When provenance is absent, metadata is uninformative, and reverse search produces no matches, the residual question is whether the image's pixel content is consistent with its claims. This is the domain of classical image forensics and of AI detection.
Specific checks:
- Run ELA and JPEG-ghost analysis to look for composite indicators.
- Run an AI-detection classifier (Hive, Optic, or research-group equivalents) and treat the result as a probability, not a verdict.
- For images claiming capture, look for CFA traces consistent with the claimed camera.
- Inspect the image for the visual tells described on the AI tells page, with awareness that the tells are eroding.
- Check lighting and shadow consistency in scenes that contain multiple subjects.
Each of these is a weak signal individually. The combination is stronger than any single one, but no combination of pixel-level analysis produces certainty. The forensic stage produces a probabilistic assessment that supports or weakens the working hypothesis built from earlier stages.
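One conventional way to make "the combination is stronger than any single one" concrete is naive Bayesian log-odds combination, sketched below. This is an illustration, not an endorsed scoring method: it treats the signals as independent, which forensic indicators rarely are, so the result is best read as an upper bound on how much confidence the combination justifies.

```python
import math

def combine_signals(prior_prob: float, likelihood_ratios: list[float]) -> float:
    """Combine weak forensic signals into one posterior probability by
    summing log likelihood ratios (naive Bayes).

    Each ratio is P(signal | manipulated) / P(signal | authentic);
    values above 1 push toward "manipulated". Assumes independence,
    which rarely holds, so treat the output as an upper bound."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Three individually weak signals (LR 2.0, 1.5, 3.0) from a neutral prior:
print(round(combine_signals(0.5, [2.0, 1.5, 3.0]), 3))  # → 0.9
```

The arithmetic shows the point of the stage: no single likelihood ratio here is decisive, but together they move a neutral prior to a meaningfully skewed posterior, which is still a probabilistic assessment rather than a verdict.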
Stage 6: Source assessment
The technical stages above establish what the image is. They do not establish whether the source presenting the image is trustworthy. A source-assessment step is essential for any case where the image's status as evidence depends on who is presenting it.
Source assessment is journalistic and qualitative:
- Who is the person or organization presenting the image?
- What is their track record with previous images and reporting?
- Do they have access to the scene the image claims to depict?
- Are there independent corroborations of the image's content from other sources?
- Does the source's account of how they obtained the image withstand questioning?
This is the one stage that the technology in this reference does not address. C2PA can establish that an image came from a specific signer; it cannot establish that the signer's account of how they came to sign it is true. Source assessment is the editorial layer in which all the technical signals are interpreted.
| Stage | Tools | Time | Answers |
|---|---|---|---|
| 1. Frame the question | None | 2 min | What you are looking for |
| 2. Inspect provenance | Content Credentials Verify, c2patool | 3 min | Origin if credentialed |
| 3. Read metadata | exiftool, FotoForensics | 3 min | Capture parameters, edit history |
| 4. Reverse search | TinEye, Google, Yandex, Bing | 5 min | Prior appearances |
| 5. Forensic indicators | FotoForensics, AI detectors | 10–30 min | Probabilistic origin assessment |
| 6. Source assessment | Editorial judgment | Variable | Trustworthiness of presenter |
Documentation and archiving
For any verification that may need to be defended later — a published news image, an evidentiary submission, a takedown decision — document the workflow as you go. The minimum record is: the file analyzed (with its hash for later identification), the timestamps of each check performed, the tools used and their versions, and the result at each stage. For C2PA-credentialed images, save the c2patool JSON output. For reverse-search results, screenshot the relevant matches.
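The minimum record described above (file hash, timestamped checks, tools and versions, per-stage results) maps naturally onto a small helper. A sketch using only the standard library; the record shape is illustrative, not a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def start_record(path: str) -> dict:
    """Open a verification record with the file's SHA-256 hash for
    later identification, plus an empty list of per-stage checks."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"file": path, "sha256": digest, "checks": []}

def log_check(record: dict, stage: str, tool: str, version: str, result: str) -> None:
    """Append one timestamped check: the tool, its version, and the result."""
    record["checks"].append({
        "stage": stage,
        "tool": tool,
        "tool_version": version,
        "result": result,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# Usage: hash the file once, then log each stage as it is performed,
# and archive the record as JSON alongside any tool output.
# record = start_record("photo.jpg")
# log_check(record, "2", "c2patool", "0.9.x", "valid chain, trusted signer")
# print(json.dumps(record, indent=2))
```

Archiving the record as JSON next to the saved c2patool output and reverse-search screenshots gives later reviewers the whole workflow in one place.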
This documentation is not just a defensive measure. It is also how verification practice improves: archived workflows let later reviewers see what was done, what was missed, and what could have been done differently. Several newsroom verification desks maintain shared archives of resolved cases for exactly this purpose.
What this workflow does not cover
The procedure above is for still images. Video, audio, and multi-modal content require adapted workflows that share some steps (provenance inspection, source assessment) and substitute others (frame-by-frame video forensics, audio waveform analysis). The video case is covered partially in the broader Project Origin documentation and in InVID/WeVerify's video toolkit; this site does not cover it in depth.
The workflow also does not cover specialized contexts — evidentiary photography in police investigations, satellite imagery verification, medical-image authenticity. Each has its own established practices, often more rigorous than the general-purpose workflow above, and those practices are the appropriate authority within their domains.
Where the field is moving
The single largest change in verification practice over the next several years will be the broader visibility of C2PA credentials in consumer interfaces. As browsers and platforms surface credentials inline, Stage 2 becomes routine for ordinary readers rather than the specialist step it is today. This will compress the rest of the workflow into the cases where credentials are absent or contested, which is the long tail rather than the bulk.
The other change is regulatory. The EU AI Act's marking obligations create asymmetric pressure for synthetic content to be identifiable, which raises the baseline expectation that synthetic images will carry detectable markers. The verification workflow's Stage 5 — AI-specific forensic checks — becomes the residual stage for content that should have been marked and was not, which makes the absence of a marker itself a more meaningful signal than it is today. None of this eliminates the need for the workflow; it shifts where the workflow's effort is spent.