Bureau — Blue Team (defensive)
Evidence-Locker
When a screenshot of an AI conversation is submitted as a court exhibit in a deepfake or fraud case, the defense should be able to verify it wasn't tampered with. Evidence-Locker signs the exhibit at intake, signs the deepfake-detection result, and signs the expert witness's AI-tool manifest.
Posture: 🔵 Blue Team (defensive) · Status: alpha
What it does
When digital evidence – a video, an audio clip, a screenshot – gets introduced in court, the rules that govern admissibility are the Federal Rules of Evidence: FRE 901 (authentication and identification of evidence) and FRE 902 (self-authenticating evidence). FRE 902(13) and 902(14) already accept cryptographic authentication of digital records. The problem is that nobody uses it consistently. Most digital exhibits still go through manual chain-of-custody affidavits that are easy to attack and slow to verify. Evidence-Locker makes cryptographic authentication the standard.
Three things get signed. First, the exhibit itself: when defense or prosecution intakes a piece of digital evidence, the forensic lab signs a body containing the SHA-256 of the exhibit bytes (never the bytes themselves), the case-id-hash, the submitting-party-hash, and an ordered chain-of-custody (intake clerk → evidence custodian → forensic lab → court submission). Each leg signs a prevDigest linking it to the previous leg, so any break in the chain is computable. Second, the deepfake detection result: when a detector lab (Intel FakeCatcher, Microsoft Video Authenticator) runs against the exhibit, they sign a result bound to that exact exhibitDigest. The detection cannot be retroactively swapped. Third, the expert witness tooling manifest: the expert signs a list of AI tools they declared in their Oath versus tools they actually used. If the actually-used set exceeds the declared set, that's a tooling-undisclosed proof.
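The prevDigest linkage is what makes a chain break computable: re-walk the legs and compare each leg's prevDigest to the previous leg's legDigest. A minimal sketch (the leg shape and function name are illustrative, mirroring the example later on this page, not the library's verifier):

```typescript
// Illustrative custody-chain linkage check: each leg's prevDigest must
// equal the previous leg's legDigest, or the chain is broken.
interface Leg {
  legDigest: string;
  prevDigest?: string;
}

function chainIntact(legs: Leg[]): boolean {
  for (let i = 1; i < legs.length; i++) {
    // Any mismatch here is a computable break, not a judgment call.
    if (legs[i]!.prevDigest !== legs[i - 1]!.legDigest) return false;
  }
  return true;
}

const good: Leg[] = [{ legDigest: "a" }, { legDigest: "b", prevDigest: "a" }];
const broken: Leg[] = [{ legDigest: "a" }, { legDigest: "b", prevDigest: "x" }];
console.log(chainIntact(good), chainIntact(broken)); // true false
```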
Who would use it
- A criminal-defense attorney who needs to verify prosecution's digital exhibits weren't altered.
- A civil-litigation forensic lab standardizing chain-of-custody for contested AI evidence.
- A court clerk verifying exhibit integrity before admission.
- An expert witness disclosing the AI tools used in their analysis.
- A judge ruling on FRE 902(13)/(14) self-authentication.
What you'll need
- The Pluck CLI (`npm install -g @sizls/pluck-cli`).
- A signing key for each role in the custody chain (intake clerk, evidence custodian, forensic lab, court).
- A deepfake detector with a published signing key. Intel FakeCatcher, Microsoft Video Authenticator, and OpenAI's classifier all work; you bind their result to your exhibit digest.
- For expert witnesses: an Oath commitment (the disclosed-tools list) plus an SBOM-AI manifest of tools actually used.
- For verifiers: only the Rekor entries plus the lab's published signing-key fingerprint. No Pluck or vendor cooperation needed.
Step-by-step
The alpha runs the full constraint chain on synthetic exhibits, detections, and expert manifests – there is no live forensic-lab daemon yet. Production capture and verify ship in a follow-up. To exercise the system today:
```
pluck bureau evidence-locker demo
```
Expected output: the system ingests three exhibits (one authentic, one deepfake, one chain-broken), two detector results, and one expert-witness tooling manifest, and emits three signed proofs (exhibit-deepfake, tooling-undisclosed, chain-broken). Each proof carries the exhibit digest, the detection result digest (when applicable), and the chain-of-custody legs.
What to do with the output: in production the forensic-lab daemon signs every exhibit at intake and the court clerk's verifier checks the Rekor entry before admission. A chain-broken proof is a procedural-objection ground; an exhibit-deepfake proof is admissibility-dispositive evidence; a tooling-undisclosed proof is potential perjury under FRE 702 expert-disclosure rules.
Run it yourself
Drop this into a Node 18+ project (`npm install @sizls/pluck-bureau-evidence-locker @sizls/pluck-bureau-core tsx`):
```typescript
// index.ts
import { createHash } from "node:crypto";
import {
  createEvidenceLockerSystem,
  fingerprintPrivateKey,
  signCanonicalBody,
  type CustodyLeg,
  type DeepfakeDetection,
  type Exhibit,
  type ExpertWitnessTooling,
  type SubmittingParty,
} from "@sizls/pluck-bureau-evidence-locker";
import { generateOperatorKey } from "@sizls/pluck-bureau-core";

async function main() {
  const operator = generateOperatorKey();
  const lab = generateOperatorKey();
  const detector = generateOperatorKey();
  const expert = generateOperatorKey();
  const labFp = fingerprintPrivateKey(lab.privateKeyPem);
  const detectorFp = fingerprintPrivateKey(detector.privateKeyPem);
  const expertFp = fingerprintPrivateKey(expert.privateKeyPem);

  const fakeDigest = digest("exhibit:deepfake-cleartext");
  const fakeChain = buildChain(
    [labFp, labFp],
    ["2026-04-26T00:30:00.000Z", "2026-04-26T00:45:00.000Z"],
    [fakeDigest, fakeDigest],
  );
  const fakeExhibit = buildExhibit(fakeDigest, "defense", "party:defense", "case:2026-1138", "2026-04-26T00:30:00.000Z", fakeChain, lab.privateKeyPem, labFp);
  const fakeDetection = buildDetection(fakeDigest, "intel-fakecatcher-v3", 0.94, true, "2026-04-26T02:00:00.000Z", detector.privateKeyPem, detectorFp);

  // Expert tooling manifest: declared = [fakecatcher], used = [fakecatcher, gpt-4o] -> tooling-undisclosed.
  const tooling = buildTooling("expert:dr-jane-smith", "case:2026-1138", ["intel-fakecatcher-v3"], ["intel-fakecatcher-v3", "openai/gpt-4o"], "2026-04-26T03:00:00.000Z", expert.privateKeyPem, expertFp);

  const locker = createEvidenceLockerSystem({
    signingKey: operator.privateKeyPem,
    disablePausePoll: true,
    disableLogging: true,
    tolerances: { deepfakeScoreThreshold: 0.85, minCustodyLegs: 1 },
  });

  try {
    locker.intakeExhibit(fakeExhibit);
    locker.observeDetection(fakeDetection);
    locker.discloseTooling(tooling);
    // Yield repeatedly so the async constraint pipeline drains before reading proofs.
    for (let i = 0; i < 60; i++) await new Promise((r) => setImmediate(r));
    const proofs = locker.facts.proofs();
    console.log(`evidence proofs = ${proofs.length}`);
    for (const p of proofs) console.log(`kind=${p.kind} proofId=${p.proofId.slice(0, 16)}…`);
  } finally {
    await locker.shutdown();
  }
}

function digest(s: string): string {
  return createHash("sha256").update(s).digest("hex");
}

function buildLeg(actorFingerprint: string, observedAt: string, legDigest: string, prevDigest?: string): CustodyLeg {
  const skeleton = { actorFingerprint, observedAt, legDigest, ...(prevDigest !== undefined ? { prevDigest } : {}) };
  const legId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  return { legId, ...skeleton };
}

function buildChain(actors: string[], times: string[], digests: string[]): CustodyLeg[] {
  const legs: CustodyLeg[] = [];
  for (let i = 0; i < actors.length; i++) {
    const prev = legs[i - 1];
    legs.push(buildLeg(actors[i]!, times[i]!, digests[i]!, prev?.legDigest));
  }
  return legs;
}

function buildExhibit(exhibitDigest: string, partyRole: SubmittingParty, partyId: string, caseId: string, intakeAt: string, custodyChain: CustodyLeg[], labKey: string, labFingerprint: string): Exhibit {
  const partyHash = createHash("sha256").update(`${partyId}:${caseId}:salt`).digest("hex");
  const caseIdHash = createHash("sha256").update(`${caseId}:salt`).digest("hex");
  const skeleton = { schemaVersion: 1 as const, exhibitDigest, partyHash, partyRole, caseIdHash, intakeAt, custodyChain, labFingerprint };
  const exhibitId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  const signed = signCanonicalBody({ ...skeleton, exhibitId }, labKey);
  return { ...skeleton, exhibitId, signature: signed.signature };
}

function buildDetection(exhibitDigest: string, detectorModel: string, score: number, manipulated: boolean, detectedAt: string, detectorKey: string, detectorFingerprint: string): DeepfakeDetection {
  const skeleton = { schemaVersion: 1 as const, exhibitDigest, detectorModel, score, manipulated, detectedAt, detectorFingerprint };
  const detectionId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  const signed = signCanonicalBody({ ...skeleton, detectionId }, detectorKey);
  return { ...skeleton, detectionId, signature: signed.signature };
}

function buildTooling(expertId: string, caseId: string, declaredTools: string[], usedTools: string[], observedAt: string, expertKey: string, expertFingerprint: string): ExpertWitnessTooling {
  const expertHash = createHash("sha256").update(`${expertId}:${caseId}:salt`).digest("hex");
  const caseIdHash = createHash("sha256").update(`${caseId}:salt`).digest("hex");
  const skeleton = { schemaVersion: 1 as const, expertHash, caseIdHash, declaredTools, usedTools, observedAt, expertFingerprint };
  const toolingId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  const signed = signCanonicalBody({ ...skeleton, toolingId }, expertKey);
  return { ...skeleton, toolingId, signature: signed.signature };
}

main().catch((err) => { console.error(err); process.exit(1); });
```
Run with `tsx index.ts`. Expected output:

```
evidence proofs = 2
kind=exhibit-deepfake proofId=…
kind=tooling-undisclosed proofId=…
```
What you get
A signed EvidenceLocker.Proof with one of three kind values:
- `exhibit-deepfake` – the detector lab's result on the bound exhibit indicates manipulation.
- `tooling-undisclosed` – the expert's actually-used AI tool set exceeds their declared set.
- `chain-broken` – a custody leg's `prevDigest` does not reproduce against the prior leg's `legDigest`.
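The tooling-undisclosed condition is a plain set comparison: any tool in the used set that is absent from the declared set triggers the proof. A hedged sketch (function name is illustrative, not the library's internal check):

```typescript
// tooling-undisclosed, sketched: the used-tools set minus the
// declared-tools set must be empty, or the excess is the violation.
function undisclosedTools(declared: string[], used: string[]): string[] {
  const declaredSet = new Set(declared);
  return used.filter((tool) => !declaredSet.has(tool));
}

// Mirrors the expert-witness example above: gpt-4o was used but not declared.
console.log(undisclosedTools(
  ["intel-fakecatcher-v3"],
  ["intel-fakecatcher-v3", "openai/gpt-4o"],
)); // [ 'openai/gpt-4o' ]
```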
The Bureau never carries raw exhibit bytes, party identities, or expert names. Everything is hashed. Nothing in a published proof can dox a witness or leak case strategy.
What it can't do
- Evidence-Locker does not adjudicate the truth of the exhibit's content – only its integrity. A pristine chain of custody on a misleading exhibit is still misleading.
- The detector lab's result is only as good as the detector. Evidence-Locker binds the detector's claim to the exhibit; it does not validate the detector's accuracy.
- A compromised intake key can sign a false `exhibit` body. Multi-party intake (intake clerk plus evidence custodian must both sign) raises the bar.
- Evidence-Locker does not handle physical evidence – only digital exhibits and the manifests describing how they were processed.
A real-world example
A criminal-fraud trial centers on a 47-minute audio recording of a CEO allegedly admitting securities fraud. Defense alleges the recording was deepfake-generated. The forensic lab Evidence-Locker-signs the recording at intake – exhibitDigest published, custody chain initiated. Intel FakeCatcher runs against the bound digest and signs a result with manipulated: false. Defense's expert witness runs an alternate detector and signs manipulated: true against the same exhibitDigest. Both detector results are admissible because both bind to the same exhibit. The judge instructs the jury that the integrity chain is verified – the dispute is over detector accuracy, not evidence tampering. Two months later, defense's expert is found to have used three undisclosed AI tools in their analysis; a tooling-undisclosed proof against their Oath becomes the basis for an FRE 702 challenge to their testimony.
For developers
Predicate URIs
| URI | What it attests |
|---|---|
| https://pluck.run/EvidenceLocker.Exhibit/v1 | Digital exhibit digest plus chain-of-custody legs plus party and case hashes. |
| https://pluck.run/EvidenceLocker.DeepfakeDetection/v1 | Detector-lab result bound to a specific `exhibitDigest`. |
| https://pluck.run/EvidenceLocker.ExpertWitness/v1 | Expert's declared-vs-used AI tooling manifest. |
| https://pluck.run/EvidenceLocker.Proof/v1 | Published proof with `kind` (`exhibit-deepfake`, `tooling-undisclosed`, `chain-broken`). |
Programs composed
- Custody – chain-of-custody primitives.
- Fingerprint – deepfake-detector signed-result composition.
- SBOM-AI – software-bill-of-materials for AI tools used by the expert.
- Oath – expert-witness disclosed-tools commitment.
- Rotate – handles compromised forensic-lab keys without invalidating prior signings.
Threat model + adversary
Adversaries: a party that wants to introduce or rebut a deepfake, an expert who used undisclosed tools, an intake clerk with a compromised key. Evidence-Locker closes byte-integrity, detector-binding, and expert-disclosure holes. See Threat Model.
Verify a published cassette
```
pluck bureau verify <bundle-dir>
cosign verify-blob --key <pubkey.pem> --signature <sig> --type https://pluck.run/EvidenceLocker.Proof/v1 <body.json>
```
Every proof is a DSSE envelope notarized to Rekor. Court clerks verify a proof's signature chain against the lab's published signing-key fingerprint, re-derive the chain-of-custody digests, and confirm the deepfake-detection binding – all from public data. No Pluck or vendor cooperation required.
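The digest-binding part of that check is straightforward to sketch: the exhibit digest must reproduce from the exhibit bytes, and the detection must reference exactly that digest. A minimal illustration (function names are hypothetical, not the CLI's internals; signature and Rekor verification are separate steps):

```typescript
import { createHash } from "node:crypto";

// SHA-256 hex digest, matching the exhibit-digest convention on this page.
function sha256Hex(data: Buffer | string): string {
  return createHash("sha256").update(data).digest("hex");
}

// A verifier's binding check, sketched: re-derive the exhibit digest from
// the bytes and confirm the detection is bound to that exact digest.
function detectionBound(exhibitBytes: Buffer, claimedExhibitDigest: string, detectionExhibitDigest: string): boolean {
  return (
    sha256Hex(exhibitBytes) === claimedExhibitDigest &&
    detectionExhibitDigest === claimedExhibitDigest
  );
}

const bytes = Buffer.from("exhibit-bytes");
const d = sha256Hex(bytes);
console.log(detectionBound(bytes, d, d)); // true
console.log(detectionBound(bytes, d, sha256Hex("other"))); // false
```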
See also
- Bureau Foundations
- Threat Model
- Verify a dossier
- Custody – chain-of-custody primitives
- Fingerprint – deepfake / model-fingerprint clustering
- SBOM-AI – software-bill-of-materials for AI tooling
- Oath – disclosed-tools commitments
- Rotate – key compromise / rotation handling