C2PA Verification API

Verify Content Credentials without building C2PA manifest parsing yourself. One endpoint, deterministic results.

By AttestTrail Editorial Team · Reviewed by AttestTrail Research

Content Credentials are becoming the standard for image provenance. The EU AI Act mandates transparency for AI-generated content by August 2026. Camera manufacturers are shipping C2PA-enabled hardware. Adobe, OpenAI, and Google are embedding signed manifests in every image their tools produce.

If your platform accepts user-uploaded images, you will need to verify these credentials. The question is whether you build it yourself or call an API.

Why not implement C2PA yourself

The C2PA specification is thorough (the latest published version is 2.3), and the reference implementation, c2pa-rs, is open source. In theory, you can parse manifests, validate signatures, and extract signer information on your own. In practice, three problems make this harder than it looks. For a broader explainer on the standard itself, start with What is C2PA?; if you want the verification workflow in plain language before integrating the API, read C2PA Verification.

Manifest parsing is the easy part. The hard part is deciding what a valid signature means. A self-signed certificate passes signature validation just like an Adobe Firefly credential does. Without a curated trust list that maps signers to organizations, reputation scores, and signer types (camera, AI generator, editor), you cannot distinguish a trusted provenance chain from a meaningless one.

Trust lists require ongoing maintenance. New signers appear regularly. Certificates get revoked. Organizations change signing keys. Maintaining an accurate, up-to-date signer database is operational work that never ends.

Stripped credentials are invisible to manifest parsing. Social platforms, messaging apps, and CDNs routinely strip C2PA metadata from images. If you only check for embedded manifests, you miss every image that had provenance but lost it in transit. Recovering these requires perceptual fingerprint matching against a known corpus -- a capability that demands its own infrastructure.

AttestTrail handles all three. One endpoint. One response format. Deterministic decisions.

Quick start

Send a POST request to /v1/verify with an image. The API accepts multipart/form-data with the image in the file field.

curl -X POST https://api.attesttrail.com/v1/verify \
  -F "file=@photo.jpg"

The response is a structured provenance report:

{
  "decision_class": "verified_synthetic",
  "provenance": {
    "status": "valid",
    "signer": "Adobe Firefly",
    "signer_type": "ai_generator",
    "trust_decision": "trusted",
    "trust_reason": "signer_on_trust_list",
    "credential_chain": [
      {
        "issuer": "Adobe Inc.",
        "algorithm": "ES256",
        "not_after": "2027-03-01T00:00:00Z"
      }
    ]
  },
  "fingerprint": {
    "matched": false,
    "corpus_size": 142000
  },
  "fallback_risk": null,
  "reason_codes": [
    "c2pa_valid",
    "signer_trusted",
    "signer_is_ai_generator"
  ],
  "recommended_action": "auto_label_ai",
  "human_summary": "Valid Content Credentials from Adobe Firefly (trusted AI generator). This image was generated by AI. Label accordingly."
}

Every field is deterministic. No confidence percentages. No "probably AI-generated." The decision_class tells you what happened, the reason_codes tell you why, and the recommended_action tells you what to do about it.
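Because the fields are deterministic, client code can branch on them directly with no thresholds to tune. An illustrative sketch in Python (the field names follow the sample response above; everything else here is our own scaffolding, not an official SDK):

```python
import json

# Abridged copy of the sample response above.
raw = """
{
  "decision_class": "verified_synthetic",
  "provenance": {"status": "valid", "signer": "Adobe Firefly", "trust_decision": "trusted"},
  "reason_codes": ["c2pa_valid", "signer_trusted", "signer_is_ai_generator"],
  "recommended_action": "auto_label_ai"
}
"""

report = json.loads(raw)

# The decision class is always one of a closed set of four values.
assert report["decision_class"] in {
    "verified_synthetic",
    "verified_camera_origin",
    "unverified_high_risk",
    "unverified_low_risk",
}

print(report["recommended_action"])       # auto_label_ai
print(", ".join(report["reason_codes"]))  # c2pa_valid, signer_trusted, signer_is_ai_generator
```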

The four decision classes

Every response returns exactly one of four decision classes. These are the only possible outcomes, and each maps to a clear moderation action.

verified_synthetic

The image carries a valid C2PA manifest signed by a known AI generation tool -- Adobe Firefly, DALL-E, Google Imagen, or similar. The signer is on the trust list and categorized as an AI generator.

Recommended action: Auto-label as AI-generated. No human review needed.

verified_camera_origin

The image carries a valid C2PA manifest signed by a hardware camera -- Nikon, Canon, Leica, Sony. This is cryptographic proof that the image was captured by a physical device, not synthesized.

Recommended action: Allow. This is the strongest provenance signal available. The image is authenticated at the point of capture.

unverified_high_risk

No valid C2PA manifest was found, but other signals suggest the image may be synthetic or manipulated. This can mean a perceptual fingerprint match against a known AI-generated image, or elevated ML classifier scores. The response includes fallback_risk with details about which signals fired.

Recommended action: Flag for human review. The evidence is suggestive but not cryptographically definitive.

unverified_low_risk

No valid C2PA manifest was found and no strong signals indicate synthetic origin. This is the vast majority of images on the internet today -- ordinary photos that were never signed.

Recommended action: Allow per your existing policy. Absence of provenance is not evidence of manipulation.
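Because the four classes form a closed set, moderation routing can be an exhaustive lookup rather than a score threshold. A minimal sketch (the action names are illustrative labels for the recommended actions above, not API values):

```python
# One moderation action per decision class. Raising on an unknown
# class guards against silently mishandling a future value.
ROUTES = {
    "verified_synthetic": "auto_label_ai",
    "verified_camera_origin": "allow_fast_track",
    "unverified_high_risk": "human_review",
    "unverified_low_risk": "apply_default_policy",
}

def route(decision_class: str) -> str:
    try:
        return ROUTES[decision_class]
    except KeyError:
        raise ValueError(f"unexpected decision_class: {decision_class!r}")

print(route("verified_camera_origin"))  # allow_fast_track
```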

How the trust list works

C2PA signature validation answers one question: "Was this manifest signed with a valid certificate?" It does not answer the question that actually matters: "Should we trust this signer?"

AttestTrail maintains a curated signer database that goes beyond certificate validation. Every signer is mapped to:

  • Organization -- the company or entity behind the signing certificate (Adobe Inc., Nikon Corp., OpenAI)
  • Signer type -- ai_generator, camera, editor, publisher, or unknown
  • Trust status -- trusted, flagged, or unknown, based on ongoing vetting
  • Trust reason -- why the trust decision was made (signer_on_trust_list, certificate_revoked, self_signed_unknown, etc.)

This distinction matters. A self-signed certificate from an unknown entity produces a technically valid C2PA signature. Without trust list matching, your system would treat it identically to a credential from Adobe or Nikon. AttestTrail separates signature validity from signer reputation so your moderation logic can act accordingly.

The trust list is updated continuously as new signers enter the ecosystem. You do not need to track certificate issuances, revocations, or new C2PA adopters -- the API handles it.
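The separation of signature validity from signer trust amounts to two independent checks. Here is a toy model of that idea -- the entries, statuses, and reason strings mirror the fields listed above, but the table and function are illustrative, not the real database:

```python
from typing import NamedTuple, Optional, Tuple

class SignerEntry(NamedTuple):
    organization: str
    signer_type: str   # ai_generator | camera | editor | publisher | unknown
    trust_status: str  # trusted | flagged | unknown

# Toy trust list keyed by certificate issuer.
TRUST_LIST = {
    "Adobe Inc.": SignerEntry("Adobe Inc.", "ai_generator", "trusted"),
    "Nikon Corp.": SignerEntry("Nikon Corp.", "camera", "trusted"),
}

def trust_decision(signature_valid: bool, issuer: str) -> Tuple[str, str]:
    """Return (trust_decision, trust_reason) for a parsed manifest."""
    if not signature_valid:
        return ("untrusted", "invalid_signature")
    entry = TRUST_LIST.get(issuer)
    if entry is None:
        # A technically valid signature from an unknown signer is
        # not the same thing as a trusted one.
        return ("unknown", "self_signed_unknown")
    return (entry.trust_status, "signer_on_trust_list")

print(trust_decision(True, "Adobe Inc."))     # ('trusted', 'signer_on_trust_list')
print(trust_decision(True, "Acme Selfsign"))  # ('unknown', 'self_signed_unknown')
```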

Perceptual fingerprint recovery

When a user uploads an image to a social platform, that platform often re-encodes the file -- stripping EXIF metadata and C2PA manifests in the process. The image itself is visually identical, but the provenance data is gone.

AttestTrail addresses this with perceptual fingerprint matching. When an image with valid C2PA credentials is verified, its perceptual hash is stored. If a later upload matches that fingerprint but lacks embedded credentials, the API recovers the original provenance data and includes it in the response.

This means your platform can identify AI-generated images even when the Content Credentials were stripped before the image reached you. The fingerprint.matched field in the response indicates when this recovery path was used.
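To make the recovery mechanic concrete, here is a toy difference hash ("dHash") over a grayscale pixel grid: two images whose fingerprints differ in only a few bits are treated as the same image even when their bytes (and metadata) differ. Real perceptual hashing works on resized full images and is not AttestTrail's actual algorithm; this sketch only illustrates matching by Hamming distance:

```python
def dhash(rows):
    """Difference hash: one bit per horizontal neighbor comparison."""
    bits = []
    for row in rows:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of bit positions where two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

# A tiny "image" and a re-encoded copy with slightly shifted pixel values.
original  = [[10, 40, 20, 90], [5, 5, 80, 30], [60, 61, 61, 60]]
reencoded = [[12, 41, 19, 88], [6, 6, 79, 31], [59, 62, 61, 60]]

fp_original = dhash(original)
fp_reencoded = dhash(reencoded)

# Bytes and metadata differ, but the fingerprints stay close.
print(hamming(fp_original, fp_reencoded))  # 1
```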

Pricing

AttestTrail is designed to minimize friction at every level of usage.

Free tier: 5 verifications per day without any account. Create an API key to get 100 verifications per month. No credit card required.

Pay-as-you-go: $0.01 per request via the x402 protocol. Pay in USDC on Base. No API key required, no subscription, no billing portal. Send a request, include payment, get a response.

Volume and Enterprise: Custom pricing for high-volume use cases. Dedicated support, custom trust policies, SLA guarantees, and on-premise deployment options. Contact us for details.

x402: crypto-native pay-per-request

The x402 payment protocol is particularly well-suited for API access. It works like this:

  1. Send a request to /v1/verify without an API key
  2. If you have exceeded the free tier, the server responds with 402 Payment Required and a payment header specifying the price and accepted tokens
  3. Your client sends the payment (USDC on Base) and retries the request with a payment proof header
  4. The server verifies payment and returns the provenance report

This flow is self-contained -- no accounts, no subscriptions, no pre-funding. It is especially powerful for AI agents and automated workflows that need to call the API programmatically without human-managed credentials. An agent with a wallet can verify image provenance without any setup.

x402-compatible client libraries exist for TypeScript, Python, and Go.

# With x402, no API key needed -- payment is inline
curl -X POST https://api.attesttrail.com/v1/verify \
  -F "file=@photo.jpg" \
  -H "X-PAYMENT: <x402-payment-header>"
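The retry-on-402 logic from steps 1-4 can be sketched as a pure decision function. The header name below is a placeholder, not the actual x402 wire format -- use one of the client libraries for real integrations:

```python
def next_step(status: int, headers: dict, paid: bool):
    """Decide what an x402-aware client should do with a response.

    Returns ("done" | "pay_and_retry" | "error", detail).
    """
    if status == 200:
        return ("done", None)
    if status == 402 and not paid:
        # Server quoted a price; pay and retry once with a payment proof.
        quote = headers.get("X-PAYMENT-REQUIRED")  # placeholder header name
        if quote is None:
            return ("error", "402 without payment quote")
        return ("pay_and_retry", quote)
    return ("error", f"unrecoverable status {status}")

print(next_step(402, {"X-PAYMENT-REQUIRED": "0.01 USDC on Base"}, paid=False))
# ('pay_and_retry', '0.01 USDC on Base')
```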

Integration patterns

Content moderation pipeline

The most common integration. Images enter your moderation queue, get verified via the API, and are routed based on decision_class:

  • verified_synthetic -> auto-label, no human review
  • verified_camera_origin -> fast-track approval
  • unverified_high_risk -> flag for human moderator
  • unverified_low_risk -> apply existing policy

This eliminates the false-positive problem of ML-only detection. When C2PA credentials are present, the decision is cryptographically definitive.

CMS and upload workflows

Add provenance verification to your upload flow. When a journalist, contributor, or user uploads an image, verify it before publication. Store the decision_class and human_summary alongside the image in your CMS. Surface provenance information to readers using the verification report.

AI agent workflows

Agents that browse, scrape, or process images can verify provenance inline. With x402 payment, an agent can call the API without pre-configured credentials -- it just needs a wallet. This is relevant for fact-checking agents, research pipelines, and content aggregation workflows.

Batch processing

For existing image libraries, call the API for each image and store the results. Build a provenance index across your entire corpus. Identify which images have verified origins and which need additional review.
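A batch pass reduces to aggregating per-image reports into an index keyed by decision class. A minimal sketch, with canned reports standing in for real /v1/verify responses:

```python
from collections import defaultdict

# Canned reports standing in for real API responses per image path.
REPORTS = {
    "a.jpg": {"decision_class": "verified_camera_origin"},
    "b.png": {"decision_class": "verified_synthetic"},
    "c.jpg": {"decision_class": "unverified_low_risk"},
    "d.jpg": {"decision_class": "unverified_high_risk"},
}

def build_index(reports):
    """Group image paths by decision class for later review."""
    index = defaultdict(list)
    for path, report in sorted(reports.items()):
        index[report["decision_class"]].append(path)
    return dict(index)

index = build_index(REPORTS)
print(index["unverified_high_risk"])  # ['d.jpg']
```

The resulting index answers the review question directly: everything under the two `verified_*` keys has cryptographic provenance, and only the `unverified_high_risk` bucket needs human attention.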

What the API does not do

AttestTrail verifies provenance. It does not:

  • Generate C2PA manifests -- if you need to sign images, use the C2PA SDK directly
  • Detect deepfakes via pixel analysis -- ML classifiers are included as fallback signals, but the API is not an AI detection tool. It is a provenance verification tool that uses detection as one signal among several
  • Store your images -- uploaded files are processed in memory and discarded. Only perceptual hashes are retained for fingerprint matching

Next steps

The API is live. The free tier requires no signup. Send a request and see what comes back.