The United States has not enacted a federal image provenance or AI marking statute as of mid-2026. What exists instead is a state-by-state patchwork: legislation passed in roughly half the states since 2019, addressing specific harms (intimate-imagery deepfakes, election-period synthetic political ads) rather than imposing horizontal marking obligations. The result is a body of law that varies substantially across jurisdictions, leaves significant gaps, and intersects with First Amendment law in ways that have produced both upheld statutes and successful constitutional challenges.
This page surveys the operative state laws as of May 2026, the federal proposals that have not become law, and the patterns that have emerged from the early litigation. The intended audience is anyone trying to understand the legal context for image provenance in the US — counsel for a platform, a publisher, an AI provider, or an individual concerned about a specific image. The page does not provide legal advice; jurisdiction-specific questions need jurisdiction-specific counsel.
California SB 942: AI Transparency Act
California SB 942, the California AI Transparency Act, was signed by Governor Newsom in September 2024. The Act requires covered generative AI providers operating in California to offer both visible and machine-readable disclosures of AI-generated content. The visible ("manifest") disclosure is provided at the user's option, as a clear label the user can request on generated content; the machine-readable ("latent") disclosure must be embedded in the content's metadata or carried as a watermark whether or not the user requests a label.
SB 942 also requires covered providers to make available an "AI detection tool": a free, publicly accessible mechanism for users to check whether content was produced by the provider's system. The Act's covered-provider definition turns on user counts (more than one million monthly visitors or users), which in practice scopes the Act to the major commercial generators.
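To make the latent-disclosure idea concrete, here is a minimal sketch that checks an image for a machine-readable AI-generation marker in ordinary metadata fields. It is illustrative only: the `ai_generated_disclosure` key is a hypothetical name, and production deployments embed C2PA manifests (JUMBF boxes) or invisible watermarks that require dedicated tooling to read.

```python
# Minimal sketch: look for a machine-readable AI-generation disclosure in
# simple image metadata. The DISCLOSURE_KEY below is hypothetical; real
# SB 942 deployments use C2PA manifests and/or invisible watermarks, which
# need dedicated parsers rather than this kind of field lookup.
from PIL import Image

DISCLOSURE_KEY = "ai_generated_disclosure"  # hypothetical metadata field name

def find_latent_disclosure(path: str) -> str | None:
    """Return a disclosure string if one is found in basic metadata, else None."""
    with Image.open(path) as img:
        # Format-level metadata (e.g., PNG text chunks) lands in img.info.
        value = img.info.get(DISCLOSURE_KEY)
        if value:
            return str(value)
        # EXIF ImageDescription (tag 0x010E) is another common carrier.
        description = img.getexif().get(0x010E)
        if description and "AI" in str(description):
            return str(description)
    return None

if __name__ == "__main__":
    import sys
    print(find_latent_disclosure(sys.argv[1]) or "no machine-readable disclosure found")
```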
The Act has been the subject of substantial industry comment about implementation details, particularly around what constitutes "machine-readable" disclosure (the C2PA-led private-sector consensus is the working answer) and what the detection-tool requirement means in light of the cross-model brittleness documented on the AI detection page. The first enforcement actions are expected in the second half of 2026.
Election deepfake statutes
Election deepfake statutes are the largest single category of US state legislation related to synthetic media. They typically prohibit, in the period preceding an election, the distribution of materially deceptive AI-generated content depicting candidates with the intent to influence voters. The statutes vary in specifics — coverage windows (often 60 to 90 days before the election), exceptions for clearly labeled satire, and remedies (criminal liability, civil cause of action, or both).
Texas SB 751, enacted in 2019, was the first state election-deepfake law and remains a frequently cited model. It targets deepfakes intended to injure a candidate or influence an election, distributed within 30 days of the election. Several other states followed, including Minnesota (HF 1370), California (AB 730 and AB 2655), Michigan, Washington, and Georgia, and the list continues to grow. The Brennan Center for Justice maintains a tracking database that is the standard reference for the current state of these laws.
The election deepfake laws have produced some early litigation. California AB 2839 was preliminarily enjoined in October 2024 in Kohls v. Bonta on First Amendment grounds, with the court finding the statute likely unconstitutional as applied to satirical content; its companion AB 2655 (the Defending Democracy from Deepfake Deception Act) was later blocked as well. Other state statutes with narrower scope have so far survived constitutional challenge. The doctrinal pattern is that statutes narrowly drawn around specific election-deception harms stand a better chance of surviving than broad prohibitions on synthetic political imagery.
Intimate-imagery deepfake statutes
The second major category of state legislation addresses non-consensual intimate-imagery deepfakes. These statutes typically prohibit the creation or distribution of sexually explicit synthetic imagery depicting an identifiable person without their consent. The harms — extortion, harassment, reputational damage — have been a primary driver of the legislative wave since 2023.
The statutes generally provide criminal penalties and civil causes of action. Most have not faced significant constitutional challenge, partly because the existing doctrine on non-consensual intimate imagery (which developed around real photographs rather than synthetic ones) provides analogical support. The federal Take It Down Act, signed in 2025, added federal-level requirements for platforms to remove non-consensual intimate imagery, including deepfakes, within specified timeframes, supplementing rather than replacing the state-level criminal provisions.
| Category | Notable examples | Status |
|---|---|---|
| AI transparency | California SB 942 | Effective; first enforcement 2026 |
| Election deepfakes | Texas SB 751, Minnesota HF 1370, California AB 730 / AB 2655 | Mostly upheld; some enjoined |
| Intimate-imagery deepfakes | Most states have some statute; federal Take It Down Act (2025) | Effective and mostly unchallenged |
| Identity / impersonation | State-specific statutes vary widely | Patchwork |
| General AI disclosure | Few states beyond California | Pending in several legislatures |
Federal landscape
Federal legislation on image provenance and deepfakes has been proposed repeatedly since 2019 without becoming law. The bills that have advanced furthest include the NO FAKES Act (introduced in successive Congresses) and several deepfake-criminalization proposals; none has passed both chambers. The narrower Identifying Outputs of Generative Adversarial Networks (IOGAN) Act did become law in 2020, but it directs federal research on detecting synthetic outputs rather than imposing marking or provenance obligations.
The federal action that has taken effect comes from the executive branch rather than from Congress: the 2023 White House Executive Order 14110 on Safe, Secure, and Trustworthy AI directed several agencies to develop guidance on AI authentication and content provenance, with NIST taking the lead on technical standards. The 2025 executive order issued by the new administration rescinded EO 14110 and narrowed federal AI direction, but several of the technical-standards initiatives continued. NIST's continuing work on the AI Risk Management Framework (AI RMF) and its companion provenance guidance has been the practical federal touchpoint for industry implementing provenance.
The federal Take It Down Act, mentioned above, is the most consequential federal statute on synthetic imagery to date. It requires platforms to remove non-consensual intimate imagery within 48 hours of notice, with criminal penalties for publishing such content with intent to harm. The Act explicitly covers AI-generated imagery and is the federal-level analogue to the state intimate-imagery statutes.
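For a platform, the operational core of the removal obligation is tracking the deadline that starts when a valid notice arrives. The sketch below illustrates that mechanic; the 48-hour window comes from the Act, but the record fields and the stand-in for notice validity are simplified assumptions rather than statutory definitions.

```python
# Minimal sketch of notice intake and removal-deadline tracking under a
# 48-hour removal-on-notice rule. Field names and the validity stand-in are
# illustrative assumptions, not definitions taken from the statute.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownNotice:
    content_id: str
    received_at: datetime  # when the platform received the notice
    valid: bool            # simplified stand-in for a complete, valid notice

    @property
    def removal_deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.valid and now > self.removal_deadline

notice = TakedownNotice(
    content_id="img-0042",  # hypothetical identifier
    received_at=datetime(2026, 1, 14, 9, 30, tzinfo=timezone.utc),
    valid=True,
)
print(notice.removal_deadline)                                        # 2026-01-16 09:30:00+00:00
print(notice.is_overdue(datetime(2026, 1, 15, tzinfo=timezone.utc)))  # False
```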
Section 230 and platform liability
Platform liability for user-uploaded deepfakes operates against the background of Section 230 of the Communications Decency Act, which generally immunizes platforms from liability for user content. Section 230 does not immunize platforms from federal criminal law, from intellectual property claims, or from specific statutes that explicitly carve out platform liability. The Take It Down Act's removal-on-notice structure operates within these doctrinal constraints.
As to state law, Section 230 preempts statutes that would impose publisher-style liability on platforms for user content. Many of the state deepfake statutes are therefore written to target the original creator and distributor rather than the platform, partly to avoid Section 230 preemption. Whether particular state statutes survive Section 230 challenges varies; the litigation continues to develop the doctrine.
The Federal Rules of Evidence connection
The authentication provisions of the Federal Rules of Evidence (Rules 901 and 902) intersect with image provenance through the digital-records self-authentication rules added in 2017. Rules 902(13) and 902(14) provide for self-authentication of digital records via certification by a qualified person, and they have been invoked in some cases involving authentication of digital photographs and video. The legal evidence page covers the specific evidentiary mechanics.
State evidence rules generally mirror the federal rules on authentication; specific state rules vary. C2PA-credentialed images may eventually be admitted under the self-authentication provisions when accompanied by appropriate certification, though the practice is still developing and the body of case law specifically addressing C2PA credentials remains small.
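Rule 902(14) certifications in practice often rest on hash-value comparison: a qualified person certifies that the digest of the copy offered into evidence matches the digest recorded for the original. A minimal sketch of that comparison follows; the file paths are illustrative and the choice of SHA-256 is an assumption, since the rule itself names no particular algorithm.

```python
# Minimal sketch of the hash comparison that typically underlies a Rule
# 902(14)-style certification. Paths are illustrative; SHA-256 is an
# assumption, as the rule does not mandate a specific algorithm.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

original_digest = sha256_of("evidence/original_photo.jpg")  # hashed at collection
exhibit_digest = sha256_of("exhibits/exhibit_12.jpg")       # hashed at production

if exhibit_digest == original_digest:
    print("Digests match: the exhibit is a bit-for-bit copy of the original.")
else:
    print("Digests differ: the exhibit is not an identical copy.")
```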
The patchwork problem
The dominant feature of the US legal landscape is fragmentation. A platform operating nationally must comply with the most restrictive of the applicable state laws or implement state-by-state differentiation. A producer of synthetic media in one state may face civil or criminal liability for distribution into another state where the conduct is prohibited. The compliance complexity has been a recurring topic in industry comment on federal legislation, which has been proposed in part to preempt the patchwork with a uniform federal rule.
The preemption arguments cut both ways. Industry generally favors federal preemption to reduce compliance complexity; consumer-protection groups and state attorneys general often favor preserving state authority to address harms that federal law does not reach. That split has so far prevented a federal-preemption bill from passing, and the patchwork is likely to deepen rather than resolve over the next several years.
How the laws interact with C2PA
None of the existing US state or federal laws mandates C2PA specifically. California SB 942's "machine-readable disclosure" requirement is satisfiable by C2PA-style marking but does not specify it. The federal Take It Down Act addresses takedown obligations rather than marking. The state election-deepfake statutes prohibit specific harms regardless of whether the content is marked.
The practical effect for major commercial generators is that C2PA-plus-watermarking deployments satisfy whatever marking obligations exist at the state level. The harder cases — open-weights model deployments, individual-user generation, content modified after generation — fall outside the marking framework regardless of jurisdiction and remain subject to substantive prohibitions (intimate imagery, election deception) without the structural defense of being marked.
Where the field is moving
The next several years will likely see continued state-level activity, with the election-deepfake category leading and the intimate-imagery category broadening. Federal action remains uncertain; political conditions have not produced consensus on either substantive prohibition or preemption. NIST's technical-standards work continues and is likely to influence federal procurement and contractor requirements even in the absence of a marking statute.
The litigation pattern is also worth watching. The Kohls preliminary injunction and similar challenges have established that broad election-deepfake statutes face serious First Amendment scrutiny. Narrower statutes — those targeting specific deceptive intent, providing affirmative defenses for satire and reporting, and excluding broadcasters acting in good faith — have a better record. The state legislative response has been to draft new statutes with narrower scope, which may produce a more durable body of law over time. The doctrinal questions about how the First Amendment applies to AI-generated political imagery will continue to develop, with consequences for any provenance-based remedy.