Bureau — Red & Blue (dual-use)
Tempest-Witness
A GPU's electromagnetic emanations vary measurably with the workload running on it. Tempest-Witness captures those emanations with a commodity USB software-defined radio (RTL-SDR or HackRF) and produces a signed classification proof identifying which model is running on a rack.
Posture: 🟣 Red & Blue (dual-use) · Status: alpha
What it does
Every GPU emits unintended electromagnetic radiation. When a model runs, its memory access patterns modulate the EM field around the chassis at the DDR-refresh harmonics – frequencies in the 100 MHz to 2 GHz range. Different model architectures (e.g. GPT-5, Llama-3-70B, Stable Diffusion XL) produce measurably distinct emission patterns.
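As a rough illustration of the capture-side reduction (not the package's actual DSP path), a window of IQ samples can be collapsed into a fixed-length spectral feature vector like this; a real capture daemon would use an FFT library and average across the full second, so treat the naive DFT and the 64-bin choice below as assumptions for the sketch:

```typescript
// Hypothetical sketch: reduce a window of IQ samples to a 64-bin feature
// vector. Naive O(n^2) DFT for clarity; not the production signal path.
function featureVector(iq: { i: number; q: number }[], bins = 64): number[] {
  const n = iq.length;
  const mags = new Array<number>(n).fill(0);
  for (let k = 0; k < n; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n; // complex DFT of i + jq
      re += iq[t].i * Math.cos(angle) - iq[t].q * Math.sin(angle);
      im += iq[t].i * Math.sin(angle) + iq[t].q * Math.cos(angle);
    }
    mags[k] = Math.hypot(re, im);
  }
  // Average adjacent DFT bins down to the fixed feature length.
  const out = new Array<number>(bins).fill(0);
  const per = n / bins;
  for (let k = 0; k < n; k++) out[Math.min(bins - 1, Math.floor(k / per))] += mags[k] / per;
  return out;
}

// A pure complex tone at DFT bin 32 of 256 lands in feature bin 8 (32 / 4).
const tone = Array.from({ length: 256 }, (_, t) => ({
  i: Math.cos((2 * Math.PI * 32 * t) / 256),
  q: Math.sin((2 * Math.PI * 32 * t) / 256),
}));
const fv = featureVector(tone);
console.log(fv.indexOf(Math.max(...fv))); // 8
```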
Tempest-Witness captures the emissions with commodity hardware (an RTL-SDR USB stick designed for digital TV reception, or a more sensitive HackRF) and reduces each second of capture to a fixed-length feature vector via FFT. The detector compares the vector against registered model fingerprints by cosine distance and emits a signed classification proof. When a vendor's signed workload claim disagrees with the EM-derived classification, a model-mismatch contradiction proof is emitted. The hardware required is inexpensive enough to make EM-based model identification practical for ordinary auditors and customers.
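The nearest-centroid comparison is simple enough to sketch in a few lines. The 3-dimensional vectors below are toy stand-ins for real 64-bin FFT features, and the helper names are illustrative rather than the package API:

```typescript
// Hypothetical sketch of nearest-centroid classification by cosine distance.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface Centroid { modelFamilyId: string; vector: number[]; }

function classify(feature: number[], centroids: Centroid[]) {
  let best = { modelFamilyId: "unknown", distance: Infinity };
  for (const c of centroids) {
    const d = cosineDistance(feature, c.vector);
    if (d < best.distance) best = { modelFamilyId: c.modelFamilyId, distance: d };
  }
  return best;
}

// Toy centroids mirroring the demo's two registered model families.
const centroids: Centroid[] = [
  { modelFamilyId: "gpt-4-turbo", vector: [0.7, 0.3, 0.5] },
  { modelFamilyId: "llama-3-70b", vector: [0.2, 0.8, 0.4] },
];
const result = classify([0.71, 0.29, 0.51], centroids);
console.log(result.modelFamilyId); // "gpt-4-turbo"
```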
Who would use it
- An enterprise AI customer who wants vendor-independent proof of which model their inference runs on.
- A regulator (FTC, EU AI Office) auditing AI providers' compliance with disclosed-model rules.
- A national security team auditing whether a foreign-vendor GPU rack is running a sanctioned workload.
- A research lab proving its competitor's "GPT-5-equivalent" claim is actually a Llama fork.
- A cloud datacenter operator selling per-rack model attestation as a tenant-facing product.
What you'll need
- The Pluck CLI (npm install -g @sizls/pluck-cli).
- An RTL-SDR (about $30, NooElec NESDR or RTL-SDR Blog v3) for casual use, or a HackRF (about $300, more sensitive and wider-band) for production.
- A small whip antenna; line-of-sight to the rack helps but isn't required for high-radiating GPUs at close range.
- A baseline fingerprint for each model you intend to detect. The Pluck reference fingerprint set ships with GPT-4-class, Llama-3-70B, and Stable Diffusion XL centroids. You generate your own for proprietary models with one minute of ground-truth capture.
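Authoring a fingerprint from a minute of ground-truth capture amounts to averaging the per-second feature vectors into a centroid and recording the spread. A minimal sketch, assuming (as the fingerprint schema in the example below suggests) that the spread is kept as a single scalar stddev; the function name is illustrative, not the package API:

```typescript
// Hypothetical sketch of fingerprint authoring: mean of per-second feature
// vectors plus a pooled scalar standard deviation across all bins.
function buildCentroid(samples: number[][]): { centroidVector: number[]; centroidStddev: number } {
  const dims = samples[0].length;
  const centroidVector = new Array<number>(dims).fill(0);
  for (const s of samples)
    for (let i = 0; i < dims; i++) centroidVector[i] += s[i] / samples.length;
  let variance = 0;
  for (const s of samples)
    for (let i = 0; i < dims; i++)
      variance += (s[i] - centroidVector[i]) ** 2 / (samples.length * dims);
  return { centroidVector, centroidStddev: Math.sqrt(variance) };
}

// Two identical toy samples: centroid equals the sample, stddev is 0.
const fp = buildCentroid([[0.2, 0.8, 0.4], [0.2, 0.8, 0.4]]);
console.log(fp.centroidVector, fp.centroidStddev);
```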
Step-by-step
The alpha runs the full constraint chain on synthetic IQ captures – there is no live RTL-SDR integration yet. The production capture daemon ships in a follow-up. To exercise the system today:
pluck bureau tempest-witness demo
Expected output: the system registers a GPT-4 fingerprint and a Llama-3-70B fingerprint, ingests three reader-signed EM captures (two that classify cleanly as GPT-4, one where the vendor claims GPT-4 but the EM signature classifies as Llama), and emits three model-classify proofs plus one model-mismatch contradiction. The mismatch proof carries cosine distance to both centroids, the vendor's workload claim, and an Ed25519 signature you can independently verify.
What to do with the output: in production the same captures stream live from a USB SDR. You publish proofs to Sigstore Rekor and cite the Rekor entries when filing a vendor dispute, regulatory complaint, or research paper.
Run it yourself
Drop this into a Node 18+ project (npm install @sizls/pluck-bureau-tempest-witness @sizls/pluck-bureau-core tsx):
// index.ts
import { createHash } from "node:crypto";
import {
  createTempestWitnessSystem,
  fingerprintPrivateKey,
  signCanonicalBody,
  type EmCapture,
  type WorkloadFingerprint,
} from "@sizls/pluck-bureau-tempest-witness";
import { generateOperatorKey } from "@sizls/pluck-bureau-core";

async function main() {
  const operator = generateOperatorKey();
  const reader = generateOperatorKey();
  const auditor = generateOperatorKey();
  const readerFp = fingerprintPrivateKey(reader.privateKeyPem);
  const auditorFp = fingerprintPrivateKey(auditor.privateKeyPem);
  const locationHash = digest("location:lab-1:salt");
  const deviceFingerprint = digest("device:rtl-sdr-001:salt");
  const tempest = createTempestWitnessSystem({
    signingKey: operator.privateKeyPem,
    disablePausePoll: true,
    disableLogging: true,
  });
  try {
    // Register two model-family fingerprint centroids.
    tempest.registerFingerprint(buildFp("gpt-4-turbo", auditor.privateKeyPem, auditorFp, fpVec(0.7, 0.3, 0.5)));
    tempest.registerFingerprint(buildFp("llama-3-70b", auditor.privateKeyPem, auditorFp, fpVec(0.2, 0.8, 0.4)));
    // Capture A: matches GPT-4 closely.
    const capA = buildCapture("2026-04-26T00:00:00.000Z", reader.privateKeyPem, readerFp, locationHash, deviceFingerprint, fpVec(0.71, 0.29, 0.51));
    tempest.observeCapture(capA);
    // Capture B: vendor claims GPT-4, but EM signature is Llama.
    const capB = buildCapture("2026-04-26T00:00:01.000Z", reader.privateKeyPem, readerFp, locationHash, deviceFingerprint, fpVec(0.21, 0.79, 0.41));
    tempest.observeCapture(capB);
    tempest.claimVendor({ captureId: capB.captureId, claimedModelFamilyId: "gpt-4-turbo" });
    // Let the async constraint chain drain before reading proofs.
    for (let i = 0; i < 30; i++) await new Promise((r) => setImmediate(r));
    const proofs = tempest.facts.proofs();
    console.log(`tempest proofs = ${proofs.length}`);
    for (const p of proofs) console.log(`kind=${p.kind} classified=${p.classifiedAs.modelFamilyId} claimed=${p.claimedAs?.modelFamilyId ?? "–"}`);
  } finally {
    await tempest.shutdown();
  }
}
function digest(s: string): string { return createHash("sha256").update(s).digest("hex"); }
// Deterministic 64-bin synthetic feature vector built from three sinusoids.
function fpVec(a: number, b: number, c: number): number[] {
  const out = new Array<number>(64);
  for (let i = 0; i < 64; i++) {
    const phase = (i / 64) * Math.PI * 2;
    out[i] = a * Math.sin(phase) + b * Math.cos(phase * 2) + c * Math.sin(phase * 3 + 0.3);
  }
  return out;
}

// Assemble a reader-signed EM capture: hash the skeleton to derive the
// capture ID, then sign the canonical body with the reader's key.
function buildCapture(timestamp: string, readerKey: string, readerFingerprint: string, locationHash: string, deviceFingerprint: string, fftFeatureVector: number[]): EmCapture {
  const iqDigest = createHash("sha256").update(timestamp + locationHash).digest("hex");
  const skeleton = { schemaVersion: 1 as const, iqDigest, fftFeatureVector, bandStartHz: 100_000_000, bandEndHz: 1_000_000_000, sampleRateHz: 8_000_000, timestamp, locationHash, deviceFingerprint, readerFingerprint };
  const captureId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  const signed = signCanonicalBody({ ...skeleton, captureId }, readerKey);
  return { ...skeleton, captureId, signature: signed.signature };
}

// Assemble an auditor-signed model-family fingerprint the same way.
function buildFp(modelFamilyId: string, auditorKey: string, auditorFingerprint: string, centroidVector: number[]): WorkloadFingerprint {
  const skeleton = { schemaVersion: 1 as const, modelFamilyId, centroidVector, centroidStddev: 0.05, sampleCount: 64, registeredAt: "2026-04-26T00:00:00.000Z", auditorFingerprint };
  const fingerprintId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  const signed = signCanonicalBody({ ...skeleton, fingerprintId }, auditorKey);
  return { ...skeleton, fingerprintId, signature: signed.signature };
}
main().catch((err) => { console.error(err); process.exit(1); });
Run with tsx index.ts. Expected output:
tempest proofs = 3
kind=model-classify classified=gpt-4-turbo claimed=–
kind=model-classify classified=llama-3-70b claimed=–
kind=model-mismatch classified=llama-3-70b claimed=gpt-4-turbo
Open in StackBlitz – runs in your browser, no install required.
What you get
A signed TempestWitness.Classify proof for every capture (a positive identification: "this rack is running Llama-3-70B with cosine distance 0.04") and a signed TempestWitness.Mismatch proof when a vendor's workload claim disagrees. Raw IQ never travels on the wire – only the digest of the recording plus the FFT-derived feature vector.
What it can't do
- A determined adversary can shield a GPU with a Faraday cage, killing the EM signal. Pair with Power-Ledger or Ember for layered defense.
- Strong RF noise (a busy datacenter, broadcast transmitters nearby) raises the floor and reduces classification confidence. The reader signs SNR alongside the vector so you know when to discount a result.
- The fingerprint database is the trust anchor for classification. If your fingerprints are wrong, your classifications are wrong. Treat fingerprint authoring as a notarized act with multi-party signing.
- Two models with similar architectures and similar memory-access stride patterns may produce similar vectors. Fine-tunes of the same base model are particularly hard to distinguish.
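The SNR-based discounting mentioned in the second limitation could look like the following. The floor and scaling thresholds here are illustrative assumptions, not the package's actual policy:

```typescript
// Hypothetical sketch: weight a classification by the capture's signed SNR.
// Below a floor the result is discarded; confidence ramps up above it.
function discountedConfidence(cosDistance: number, snrDb: number, snrFloorDb = 10): number {
  if (snrDb <= snrFloorDb) return 0; // too noisy to trust at all
  const match = Math.max(0, 1 - cosDistance); // 1 = perfect centroid match
  const snrWeight = Math.min(1, (snrDb - snrFloorDb) / 20); // full weight 20 dB above floor
  return match * snrWeight;
}

console.log(discountedConfidence(0.04, 35)); // clean capture: near-full confidence
console.log(discountedConfidence(0.04, 5)); // below the floor: 0
```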
A real-world example
An EU regulator under the AI Act needs to verify a US AI provider's claim that "all European customer queries are routed to GPT-5 deployed in our Frankfurt datacenter." The regulator's auditor visits the datacenter with a HackRF and captures 10 seconds per rack across 40 racks from the cold aisle. Tempest-Witness emits 32 model-classify proofs matching the registered GPT-5 fingerprint, while eight racks classify as a smaller Llama-3 variant. The regulator publishes the Rekor entries. Each capture's signed SNR is well above the noise floor, so a "those racks were idle and classified on noise" defense is not consistent with the data. The provider rebalances workloads and resolves the finding.
For developers
Predicate URIs
| URI | What it attests |
|---|---|
| https://pluck.run/TempestWitness.Capture/v1 | Reader-signed EM capture: digest of the IQ recording plus the FFT-derived feature vector at the DDR-refresh harmonics. Raw IQ stays in cassette persistence – never on the wire. |
| https://pluck.run/TempestWitness.Fingerprint/v1 | Auditor-signed model-family centroid: per-PRN attention-head fan-out stride pattern as a fixed-length feature vector. |
| https://pluck.run/TempestWitness.Classify/v1 | Bureau-signed nearest-centroid classification – the headline attestation. |
| https://pluck.run/TempestWitness.Mismatch/v1 | Bureau-signed contradiction: the vendor's workload claim disagrees with the EM-derived classification. |
Programs composed
- Power-Ledger – same architecture, different physics (PSU rails instead of EM). Cross-check both for higher confidence.
- Ember – multi-modal attestation across four channels including EM, for cases where higher confidence than a single-channel result is required.
- Oath / Dragnet – vendor identity bindings used to thread vendor-signed workload claims into the mismatch detector.
Threat model + adversary
The adversary is the AI vendor and (in the offensive direction) a competitor's customers running EM eavesdropping on a vendor's racks. Vendor countermeasures: Faraday cages, EMI-suppression caps, varying memory access patterns to muddy the fingerprint. None of these are cheap. See Threat Model.
Verify a published cassette
pluck bureau verify <bundle-dir>
cosign verify-blob --key <pubkey.pem> --signature <sig> --type https://pluck.run/TempestWitness.Mismatch/v1 <body.json>
Every signed shape is a detached Ed25519 signature over canonical JSON, optionally notarized into Sigstore Rekor as a DSSE in-toto envelope. Verify with verifyCanonicalBody() from this package, or with any standard Sigstore-compatible client.
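For a sense of what detached-signature verification involves, here is a minimal round-trip using only node:crypto. It assumes, hypothetically, that canonical JSON means JSON.stringify of the body with the signature field removed; the package's verifyCanonicalBody() is the authoritative check and may canonicalize differently:

```typescript
// Hypothetical sketch: detached Ed25519 verification over canonical JSON.
// Canonicalization here (JSON.stringify minus the signature field) is an
// assumption for illustration; use verifyCanonicalBody() in practice.
import { generateKeyPairSync, sign, verify } from "node:crypto";

function verifyDetached(body: Record<string, unknown>, signatureB64: string, publicKeyPem: string): boolean {
  const { signature: _omit, ...rest } = body; // drop signature before canonicalizing
  const canonical = Buffer.from(JSON.stringify(rest));
  return verify(null, canonical, publicKeyPem, Buffer.from(signatureB64, "base64"));
}

// Round-trip demo with a throwaway Ed25519 key pair.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const body = { kind: "model-mismatch", classifiedAs: "llama-3-70b", claimedAs: "gpt-4-turbo" };
const sig = sign(null, Buffer.from(JSON.stringify(body)), privateKey).toString("base64");
const pem = publicKey.export({ type: "spki", format: "pem" }) as string;
const ok = verifyDetached({ ...body, signature: sig }, sig, pem);
console.log(ok); // true
```

Tampering with any field of the body (or the signature) makes the same check return false, which is what makes the proofs independently citable.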
See also
- Bureau Foundations
- Threat Model
- Verify a dossier
- Power-Ledger – single-channel power side-channel attestor
- Ember – multi-modal (4-channel including EM) side-channel attestor