Classical image forensics is the body of techniques developed from the late 1990s through the 2010s for examining suspect digital images. It predates the AI-generation problem and was built around different threat models — photo composites, retouches, content insertions and removals, screenshot-of-print substitutions — but its tools remain part of the verification toolkit. Many of them transfer usefully to AI-generated content; others have been deprecated by changes in the imaging ecosystem.
This page covers the dominant forensic techniques, what each one tells you, and where each one fails or is widely misinterpreted. The intended audience is anyone who might be asked to examine an image and produce a defensible opinion: forensic examiners, newsroom verifiers, lawyers preparing image evidence. The literature is mature; Hany Farid's textbook "Photo Forensics" (MIT Press, 2016) remains the standard reference, and Farid's continued work at Berkeley keeps it current.
Error Level Analysis (ELA)
ELA is probably the most-cited and most-misinterpreted forensic technique in the popular literature. The procedure is simple: re-save the suspect JPEG at a known quality (typically 90 or 95), then compute the per-region difference between the suspect and the re-saved image. The output is a visualization in which edited regions are supposed to appear at a different brightness from untouched regions.
The mechanism behind ELA is that JPEG re-encoding is approximately idempotent for regions that have already been encoded at the same quality, but produces larger changes in regions that were encoded differently. Pasted-in content from a different-quality source produces a visible difference; untouched original content does not. The technique was popularized by Neal Krawetz through the FotoForensics site, where ELA is the headline visualization.
ELA's limits are extensive. It is sensitive to image content (high-frequency regions like edges always appear different from flat regions, with no manipulation involved), to color (saturated regions behave differently from desaturated ones), to repeated re-encoding (which approaches a fixed point and erases the signal), and to the choice of re-save quality (different qualities produce different ELA outputs). Popular interpretations of ELA images — "bright regions are edited," "dark regions are original" — are not generally correct. ELA is a heuristic input to forensic analysis, not a verdict; treating it as a verdict is the single most common misuse.
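The re-save-and-difference procedure described above fits in a few lines. This is an illustrative sketch using Pillow and NumPy, not the FotoForensics implementation; the default quality and the max-difference scaling are choices made here for readability:

```python
from io import BytesIO

import numpy as np
from PIL import Image


def ela(img: Image.Image, quality: int = 95) -> np.ndarray:
    """Re-save the image as JPEG at a known quality and return the
    per-pixel absolute difference, scaled for visibility."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    resaved = np.asarray(Image.open(buf), dtype=np.int16)
    original = np.asarray(img.convert("RGB"), dtype=np.int16)
    diff = np.abs(original - resaved)
    # Scale so the largest difference maps to 255. Note that content,
    # not only editing, drives how bright each region ends up.
    scale = 255.0 / max(int(diff.max()), 1)
    return (diff * scale).clip(0, 255).astype(np.uint8)
```

Run against a real photograph, much of the output brightness tracks edges and saturation rather than edits, which is exactly the misinterpretation trap described above.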
JPEG ghosts
JPEG ghosts are a related technique that exploits the same JPEG-quantization idempotence. If a region of an image was first compressed at quality Q1, and the composite containing that region was then saved at a different quality Q2, re-saving the whole image at a range of probe qualities produces a characteristic local minimum (a "ghost") in that region near Q1. Farid's 2009 paper, "Exposing Digital Forgeries from JPEG Ghosts," formalized the approach.
JPEG ghosts are more diagnostic than basic ELA because they exploit a specific signature of compositing rather than a generic re-save effect. They are also better suited to detecting regions with different compression histories, which is the actual manipulation pattern: take a Q1 photograph, paste in a Q2 region, save the result. The ghost technique can localize those regions and even estimate the original quality of each.
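The probing procedure can be sketched as follows. This is a simplified illustration, not Farid's reference implementation; the probe-quality range, block size, and function name are assumptions made here:

```python
from io import BytesIO

import numpy as np
from PIL import Image


def ghost_maps(img: Image.Image, qualities=range(50, 100, 5), block: int = 16):
    """Re-save the image at each probe quality and return a blockwise
    mean-squared-difference map per quality. A region whose compression
    history includes quality Q shows a local minimum (a 'ghost') near Q."""
    arr = np.asarray(img.convert("RGB"), dtype=np.float64)
    h, w = arr.shape[0] // block, arr.shape[1] // block
    maps = {}
    for q in qualities:
        buf = BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=q)
        resaved = np.asarray(Image.open(buf), dtype=np.float64)
        sq = (arr - resaved) ** 2
        # Average the squared difference over block x block tiles.
        maps[q] = (sq[:h * block, :w * block]
                   .reshape(h, block, w, block, 3)
                   .mean(axis=(1, 3, 4)))
    return maps
```

The basic read-out is to plot each block's curve over the probe qualities and look for blocks whose minimum sits at a different quality from the rest of the image.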
CFA artifacts and demosaicing
Digital camera sensors use color filter arrays (typically a Bayer pattern) so that each pixel captures only one color channel. The other two are interpolated by a demosaicing algorithm. This process leaves characteristic statistical traces — specific correlations between adjacent pixels that are predictable from the demosaicing algorithm. Pasted-in content from a different source has different CFA traces, or none at all (if the content came from a re-rendered image).
CFA analysis is one of the more reliable classical techniques when the image still carries CFA traces. It is defeated by aggressive resampling (which destroys the traces), by re-encoding at low quality, and by content that has never been through a camera sensor at all (including AI-generated content). For pre-2023-era forensic problems, CFA was a workhorse; for the diffusion-era problem of AI-generated content, it gives a different kind of signal: the absence of CFA traces is suspicious for content claiming camera capture.
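A minimal version of the idea can be sketched by comparing how predictable pixels are on the two checkerboard phases of the green channel. This assumes bilinear-style interpolation on a standard Bayer layout; production detectors (for example, Popescu and Farid's EM formulation) model the interpolation kernel explicitly, so treat this as a toy probe:

```python
import numpy as np


def cfa_phase_variances(green: np.ndarray) -> tuple[float, float]:
    """Predict each interior pixel of the green channel from its four
    neighbors and compare residual variance on the two checkerboard
    phases. Demosaiced content shows a strong imbalance (interpolated
    pixels are far more predictable); resampled or synthetic content
    shows little."""
    pred = (green[:-2, 1:-1] + green[2:, 1:-1]
            + green[1:-1, :-2] + green[1:-1, 2:]) / 4.0
    resid = green[1:-1, 1:-1] - pred
    yy, xx = np.mgrid[1:green.shape[0] - 1, 1:green.shape[1] - 1]
    even = resid[(yy + xx) % 2 == 0]
    odd = resid[(yy + xx) % 2 == 1]
    return float(even.var()), float(odd.var())
```

A large gap between the two variances is consistent with an intact demosaicing trace; near-equal variances mean the trace is absent, which is what resampling, heavy re-encoding, and synthetic rendering all produce.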
Noise residuals and PRNU
Photo-Response Non-Uniformity is a per-sensor noise pattern caused by tiny variations in pixel sensitivity. Every camera sensor has a unique PRNU pattern, and that pattern appears in every image the sensor produces. PRNU has been used since the mid-2000s for camera identification: given a suspect image and a candidate camera, extract the PRNU pattern from a set of reference images from the camera and correlate it with the noise residual of the suspect.
PRNU is the closest thing classical forensics has to a fingerprint. It is generally robust against compression, modest editing, and reasonable resizing, and it has been used in courtrooms for camera identification, particularly in child-exploitation cases where source attribution is critical. It is, however, defeated by adversarial scrubbing, by aggressive editing that overwhelms the residual signal, and by re-rendering (AI generation included).
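The extract-average-correlate pipeline can be sketched with stand-ins for the heavier components. The literature uses a wavelet denoiser and a peak-to-correlation-energy statistic; here a 3x3 mean filter and plain normalized correlation stand in, so this is a toy illustration of the structure, not an evidentiary implementation:

```python
import numpy as np


def noise_residual(img: np.ndarray) -> np.ndarray:
    """Image minus a 3x3 mean-filtered copy: a crude stand-in for the
    wavelet denoiser used in the PRNU literature."""
    p = np.pad(img, 1, mode="edge")
    smooth = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0
    return img - smooth


def fingerprint(reference_images) -> np.ndarray:
    """Estimate the sensor pattern by averaging residuals over a set of
    reference images from the candidate camera."""
    return np.mean([noise_residual(r) for r in reference_images], axis=0)


def correlation(suspect: np.ndarray, fp: np.ndarray) -> float:
    """Normalized correlation between the suspect's residual and the
    candidate fingerprint (real pipelines use PCE instead)."""
    a = noise_residual(suspect).ravel()
    a = a - a.mean()
    b = fp.ravel() - fp.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A suspect image from the candidate camera correlates well above the near-zero baseline of images from other sensors; where the threshold sits for a given error rate is exactly what validated commercial pipelines calibrate.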
Cloning and copy-paste detection
Cloning detection looks for regions of an image that are copies of other regions within the same image — a sign of content removal (paint over with adjacent material) or duplication (repeating a crowd member to make a crowd look larger). The classical algorithms use block-matching with rotation and scale invariance to find near-duplicate regions. Modern implementations use SIFT or learned features to handle more general transformations.
The technique catches a specific class of manipulation reliably. It does not catch composites from external sources, AI-generated content, or any manipulation that does not involve self-duplication. It is one of the older techniques and still useful in specific cases: the 2006 Adnan Hajj scandal, in which smoke plumes were duplicated to thicken the smoke over Beirut, is exactly the manipulation pattern classical clone detection catches.
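The simplest form of block matching, exact duplicate search, can be sketched directly. Real detectors quantize block features (DCT coefficients, SIFT descriptors) so that re-compression and small edits do not break the match; exact byte matching here is a deliberate simplification:

```python
import numpy as np


def find_exact_clones(img: np.ndarray, block: int = 8):
    """Hash every block x block tile and report pairs of distinct
    positions with identical pixel content. Only catches verbatim
    copies; robust detectors match approximately."""
    seen: dict[bytes, tuple[int, int]] = {}
    pairs = []
    h, w = img.shape[:2]
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs
```

A cloned region produces a dense cluster of matched pairs related by a single consistent offset, which is the signature examiners look for in the match map.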
Lighting and shadow consistency
Real scenes obey physics: shadows fall in directions consistent with light sources, and the lighting on objects matches the implied scene illumination. Composited images often violate these constraints in ways that are visually unnoticed but mathematically detectable. The classical work (Kee et al., "Exposing Photo Manipulation from Shading and Shadows," 2014) developed quantitative tests for shadow consistency that can identify many composites.
The technique requires sufficient image content to estimate the lighting environment, which not every image provides. It is most useful for outdoor scenes with strong directional lighting and for scenes with multiple objects whose shadows can be cross-checked. It is one of the techniques most likely to be useful against high-quality composites that defeat lower-level forensic checks.
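One concrete version of such a test is estimating the 2-D light direction from intensities along an occluding contour, in the spirit of Johnson and Farid's earlier lighting-consistency work. The Lambertian-plus-ambient model, the assumption that every sampled contour point is directly lit, and the function name are simplifications made here:

```python
import numpy as np


def estimate_light_direction(normals: np.ndarray,
                             intensities: np.ndarray) -> np.ndarray:
    """Fit I = n . L + ambient by least squares. Along an occluding
    contour the surface normal n lies in the image plane, so a 2-D
    light direction is recoverable from the contour intensities.
    normals: (k, 2) unit vectors; intensities: (k,). Returns unit L."""
    design = np.hstack([normals, np.ones((len(normals), 1))])
    sol, *_ = np.linalg.lstsq(design, intensities, rcond=None)
    light = sol[:2]
    return light / (np.linalg.norm(light) + 1e-12)
```

Fitting L separately for each object in the scene and comparing the resulting directions is the consistency check; a composite can yield directions tens of degrees apart even when it looks visually plausible.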
| Technique | What it catches | Notable failure mode |
|---|---|---|
| ELA | Quality-mismatched composites (sometimes) | Heavily content-dependent; widely misread |
| JPEG ghosts | Multi-quality composites | Defeated by uniform re-save |
| CFA / demosaicing | Non-camera content insertion | Defeated by resampling |
| PRNU | Camera identification | Defeated by adversarial scrubbing |
| Cloning detection | Self-duplicated regions | Misses external composites |
| Lighting consistency | Physically implausible composites | Requires sufficient scene content |
How classical forensics applies to AI-generated content
Many classical techniques transfer usefully to the AI-detection problem in a specific way: they detect the absence of expected signals rather than the presence of unexpected ones. An AI-generated image never passed through a camera sensor, so it has no CFA traces and no PRNU residual matching any known camera. It was rendered as a single image, not composited from differently-encoded sources, so JPEG ghost analysis finds no boundaries.
This makes the techniques diagnostic in a negative sense: a claimed-camera image that lacks CFA traces is suspicious. The challenge is that many legitimate operations — heavy post-processing, format conversion, screenshot capture — also erase these traces. The forensic examiner cannot infer AI generation from absence alone; the absence is one input among several. Combining classical forensics with AI-specific detection (covered on the detection page) and with metadata analysis is the standard practice.
Tooling
The most widely used public tools are Neal Krawetz's FotoForensics (which provides ELA, JPEG quality estimation, and several other classical visualizations), Jonas Wagner's Forensically (a similar set with magnifier, clone detection, and noise analysis), and the InVID/WeVerify suite (which is video-focused but includes still-image tools). Hany Farid's lab has released several reference implementations of more advanced techniques. Commercial forensic software (Amped Authenticate, Cognitech) provides validated pipelines suitable for evidentiary use.
For working examiners, the production tool is typically Amped Authenticate or an equivalent commercial product whose outputs are validated against documented procedures and whose results can be defended in court. The free web tools are useful for triage and reasonable for low-stakes verification; they should not be the sole basis for an evidentiary conclusion.
Where the field is moving
Classical image forensics has been quietly modernizing through 2024 and 2025. Most of the published work focuses on adapting the techniques to modern smartphone imaging pipelines (which apply substantial post-processing on-device, defeating some classical assumptions about pixel statistics) and on detecting the absence of expected camera-pipeline signals in synthetic content. The techniques themselves are mature; what is new is the integration with AI-detection methods and the framing of forensics as a check on metadata claims rather than as a standalone verdict mechanism.
The longer arc is that forensics is shifting from a primary verification method to a secondary one. In the C2PA-credentialed future, the primary signal will be the cryptographic chain; forensics will be the check applied when the chain is absent or when an adversary is plausibly attacking the chain itself. This is a more limited role than forensics has historically played, but probably a more defensible one — the limits of the techniques become less consequential when they are one signal among several rather than the only signal available.