TL;DR
- One POST request signs your asset and writes an immutable provenance record
- Required fields are required by design: provenance without creator and tool attribution is not provenance
- Rate limits are per API key, not per organization; run separate keys for separate pipelines
- The response returns a `provenance_token` you can store, distribute, or embed in asset metadata
- The public verify endpoint lets anyone confirm authenticity without authentication or contacting Ledgible
The Design Goal: Sign at the Speed of a Log Line
The goal behind the Ledgible ingest API was to make signing an asset as lightweight as logging an event. One HTTP call, and your asset is on the ledger — immutably, with a cryptographic signature you can hand to anyone.
Most content provenance systems require you to understand their data model before you can write a record. You configure schemas, map fields, and handle the provenance concern as a separate workflow from your creation pipeline. We built the opposite: a single endpoint that works with the data you already have.
The minimum viable ingest call looks like this:
```bash
curl -X POST https://ledgible.ai/api/v1/assets/ingest \
  -H "X-API-Key: ldg_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "asset_type": "image",
    "creator_id": "org:acme-corp",
    "tool_id": "adobe-firefly@3.0",
    "canonical_hash": "sha256:abc123..."
  }'
```

Response:
```json
{
  "asset_id": "550e8400-e29b-41d4-a716-446655440000",
  "provenance_token": "prov_tok_xyz789...",
  "canonical_hash": "sha256:abc123...",
  "signed_at": "2026-03-10T10:00:00Z",
  "status": "verified_automated"
}
```

That is the complete integration for a basic use case. Store the `asset_id` alongside your asset. Use the `canonical_hash` for verification later. The provenance record is written, signed, and publicly verifiable immediately.
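The same minimal call translates directly into application code. A sketch in standard-library Python, using the endpoint, header, and field values from the curl example above (the `ingest` call is left commented out since it requires a live API key):

```python
import json
import urllib.request

def build_ingest_payload(asset_type, creator_id, tool_id, canonical_hash):
    """Assemble the minimal ingest body with the four required fields."""
    return {
        "asset_type": asset_type,
        "creator_id": creator_id,
        "tool_id": tool_id,
        "canonical_hash": canonical_hash,
    }

def ingest(api_key, payload):
    """POST the payload to the ingest endpoint and return the parsed response."""
    req = urllib.request.Request(
        "https://ledgible.ai/api/v1/assets/ingest",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_ingest_payload(
    "image", "org:acme-corp", "adobe-firefly@3.0", "sha256:abc123..."
)
# response = ingest("ldg_your_api_key", payload)
# then persist response["asset_id"] and response["provenance_token"] with the asset
```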
Why the Required Fields Are Required
The temptation in API design is to make everything optional to maximize flexibility. We pushed back hard on that instinct for the ingest API.
The four required fields — `asset_type`, `creator_id`, `tool_id`, and `canonical_hash` — are required because provenance without them is not really provenance. It is just a timestamp on an unknown file.
`creator_id` identifies who is making the provenance claim. This is the field that distinguishes a provenance record from a hash registry entry. Format: `human:email@org.com` for human creators, `ai:model-name@version` for AI-generated content. Under EU AI Act Article 50, the distinction between human and AI origin is precisely what must be disclosed.
`tool_id` records the instrument of creation. Format: `tool-name@version`. A provenance record that says "this was made" but not "with what" is incomplete for any legal or compliance purpose. When a regulator asks which AI system produced an asset, `tool_id` is where that answer lives.
`canonical_hash` is the cryptographic fingerprint of the asset. If you have not computed a hash yet, the API will compute one server-side from the file bytes — but supplying your own hash lets you maintain the chain from your generation system through to the ledger. This matters for pipeline integrity.
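Computing the hash client-side is a few lines in most languages. A Python sketch, using the `sha256:<hex>` format shown in the examples in this post:

```python
import hashlib

def canonical_hash(data: bytes) -> str:
    """Return the asset fingerprint in the sha256:<hex> form used by the API."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def canonical_hash_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a large file in chunks instead of reading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()
```

Hashing at the point of generation, before the asset touches any other system, is what makes the client-supplied hash stronger than a server-side one.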
`asset_type` scopes the record to a media category: `image`, `video`, `text`, or `audio`. This is used in compliance export filtering and in the verification response.
The Full Request Schema
For production integrations, the complete schema supports additional fields that make provenance records more useful across the full content lifecycle:
```json
{
  "title": "Campaign Hero Image Q2 2026",
  "asset_type": "image",
  "asset_id": "your-internal-uuid",
  "creator_id": "ai:adobe-firefly@3.0",
  "tool_id": "adobe-firefly@3.0",
  "canonical_hash": "sha256:abc123...",
  "parent_hash": "sha256:def456...",
  "metadata": {
    "ai_generated": true,
    "campaign": "Q2-2026",
    "prompt_category": "marketing",
    "department": "brand"
  }
}
```

`title` is for human-readable identification in your dashboard. It does not affect the provenance record.
`asset_id` lets you supply your own UUID if you want the Ledgible record to reference your internal identifier. If omitted, one is generated.
`parent_hash` is for derived assets — if this asset was produced by post-processing an original, record the original's hash here. This creates a verifiable lineage chain: original → processed → published. Each step is signed, and the chain is traversable.
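A lineage chain built from `parent_hash` links can be walked hash-by-hash. A sketch assuming each record is a dict containing the `parent_hash` field; the record set and the `sha256:orig`/`proc`/`pub` hashes are illustrative placeholders, not real API output:

```python
def lineage(records_by_hash, leaf_hash):
    """Walk parent_hash links from a derived asset back to its original.

    records_by_hash maps canonical_hash -> provenance record (a dict).
    Returns hashes ordered leaf -> ... -> original.
    """
    chain = []
    current = leaf_hash
    while current is not None:
        chain.append(current)
        record = records_by_hash.get(current)
        if record is None:
            break  # chain leaves the set of records visible to us
        current = record.get("parent_hash")
    return chain

records = {
    "sha256:orig": {"parent_hash": None},           # original
    "sha256:proc": {"parent_hash": "sha256:orig"},  # post-processed
    "sha256:pub":  {"parent_hash": "sha256:proc"},  # published
}
```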
`metadata` accepts arbitrary JSON. Use it for fields specific to your workflow — AI generation parameters, campaign identifiers, department tags, compliance flags. Metadata is stored and returned in verification responses but is not part of the signed payload.
API Key Design: Per-Pipeline, Not Per-Organization
Rate limits in the ingest API are enforced per API key, not per organization. This is a deliberate design choice.
Enterprise content pipelines are rarely monolithic. A large brand might have separate pipelines for campaign imagery, product photography, video content, and editorial copy — each running in a different system, owned by a different team, and requiring independent monitoring and rate management.
The per-key rate limit model means you can issue one API key per pipeline and track ingest volume, usage patterns, and anomalies independently. In your Ledgible dashboard, each key shows its own usage count and last-used timestamp. Revoking a compromised key does not affect your other pipelines.
Keys are generated with read-once behavior: the plaintext key is shown exactly once at creation and is never stored or retrievable. Only the SHA-256 hash of the key is persisted. If a key is lost, revoke it and generate a new one.
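The read-once pattern is straightforward to reason about: the server only ever persists a digest, so a leaked database cannot yield usable keys. A sketch of that model under the assumption that a plain SHA-256 digest is what is stored (the function names are illustrative, not Ledgible's implementation):

```python
import hashlib
import secrets

def generate_key():
    """Mint a key; return the plaintext once, and the digest to persist."""
    plaintext = "ldg_" + secrets.token_urlsafe(32)
    stored_hash = hashlib.sha256(plaintext.encode()).hexdigest()
    return plaintext, stored_hash  # show plaintext to the user exactly once

def authenticate(presented_key, stored_hash):
    """Check a presented key against the stored digest, in constant time."""
    presented = hashlib.sha256(presented_key.encode()).hexdigest()
    return secrets.compare_digest(presented, stored_hash)
```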
Verification: Closing the Loop
The ingest API is one side of the provenance loop. Verification is the other.
Every ingested asset is immediately queryable via the public verify endpoint — no authentication required:
```bash
curl "https://ledgible.ai/api/v1/verify?hash=sha256:abc123..."
```

```json
{
  "verified": true,
  "asset_id": "550e8400-e29b-41d4-a716-446655440000",
  "signer": "org:acme-corp",
  "tool_id": "adobe-firefly@3.0",
  "signed_at": "2026-03-10T10:00:00Z",
  "status": "verified_automated"
}
```

This is the endpoint your regulators, legal team, brand safety auditors, and publishing partners can call independently — without requiring access to your systems, without contacting Ledgible, and without any prior relationship with your organization. The verification response is cryptographically backed by the signature recorded at ingest time.
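A third-party verifier pairs a locally computed hash with this endpoint: hash the bytes you were handed, query, and check the response. A sketch; the response-shape assumptions mirror the example above, and `is_verified` is an illustrative helper, not part of the API:

```python
import hashlib
import json
import urllib.request

def verify_bytes(data: bytes) -> dict:
    """Hash the asset locally and query the public verify endpoint."""
    digest = "sha256:" + hashlib.sha256(data).hexdigest()
    url = "https://ledgible.ai/api/v1/verify?hash=" + digest
    with urllib.request.urlopen(url) as resp:  # no auth header needed
        return json.load(resp)

def is_verified(response: dict) -> bool:
    """Count a record as verified only if the flag is set and a signer is named."""
    return bool(response.get("verified")) and bool(response.get("signer"))
```

Because the hash is computed from the bytes in hand, a doctored file fails the lookup even if the original was properly signed.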
How Ledgible Differs from Hash Registries and Blockchain Provenance Tools
Hash registries record that a hash existed at a point in time. They do not record who submitted it, which tool produced the underlying asset, or what the creator-to-asset relationship is. The record is "this hash existed" rather than "this organization signed this asset made with this tool."
Blockchain provenance tools add decentralization — the record is stored across a distributed network rather than in a centralized database. This solves the sovereign verifiability problem but introduces integration complexity, gas fees or transaction costs, and latency that is incompatible with high-volume signing pipelines. Ledgible treats blockchain anchoring as an optional Phase 2 feature for enterprise customers who require sovereign verifiability — not as the foundation of the trust model.
Ledgible is built for enterprise content pipelines that produce assets at volume. Single API call. Sub-100ms verification. Unified audit trail across asset types and tools. Legal-grade compliance export. The trust model is infrastructure security and cryptographic signing — not distributed consensus.