Bureau — Red Team (offensive)
Coordinated
Tracking a coordinated bot network across multiple platforms is difficult because each platform exposes only its own slice. Coordinated runs the same probe-pack across X, TikTok, Reddit, and Telegram, identifies accounts sharing an LLM-generation fingerprint, and produces a single signed dossier independent of any single platform's cooperation.
Posture: 🔴 Red Team (offensive) · Status: alpha
What it does
"Coordinated Inauthentic Behavior" is the term Meta and academic researchers use for bot networks, troll farms, and astroturfing operations. Today, when researchers identify one, the observations are platform-locked: an external researcher reports activity on a platform, the platform's internal review reaches a different conclusion, and there is no neutral third-party record. Coordinated produces observations independent of any single platform's cooperation.
The system runs three detectors:
- Shared model fingerprint uses Dragnet probe-packs to scrape suspect account clusters across X, TikTok, Reddit, and Telegram, then runs Fingerprint analysis on each post – a perplexity-band histogram of token surprise plus a token-probability fingerprint. When 40+ accounts produce posts with the same generation-model signature within tight tolerance, that indicates a single LLM running an entire fake-account farm, and a shared-model-fingerprint proof is signed.
- Tripwire confirmed is the high-water mark: on consenting endpoints, Tripwire captures the actual LLM API call (OpenAI, Anthropic, etc.) that generated a flagged post – direct observation, no inference required.
- Cross-platform cluster fires when the same fingerprint pattern appears on two or more platforms within a coordinated time window (default 6 hours).
Who would use it
- An election-integrity researcher (Stanford Internet Observatory, DFRLab) building cross-platform CIB dossiers.
- A platform trust-and-safety team that wants verifiable, platform-independent observations for takedown decisions.
- A national-security analyst tracking foreign influence operations.
- A regulator (FTC, EU Digital Services Act enforcer) building enforcement observations against platforms that ignore CIB.
- A journalist or fact-checker investigating a specific astroturf campaign.
What you'll need
- The Pluck CLI (`npm install -g @sizls/pluck-cli`).
- API access (or scraping cooperation) for each platform you want to cover. The Dragnet probe-pack ships connectors for X, TikTok, Reddit, and Telegram; you supply the API tokens.
- A Fingerprint centroid library – the Pluck reference set covers GPT-4-class, Claude-class, and Llama-class generation. You can extend with custom centroids.
- For the high-confidence path: Tripwire deployed at consenting endpoints (your own newsroom's CMS, an academic's lab proxy). Tripwire captures direct LLM API traffic and signs it.
- An auditor signing key – the dossier is published under the researcher's identity.
Step-by-step
The alpha runs the full constraint chain on synthetic suspect accounts and clusters – there is no live Dragnet scraping yet. Production capture and verify ship in a follow-up. To exercise the system today:
```sh
pluck bureau coordinated demo
```
Expected output: the system ingests 50 suspect accounts (10 organic, 40 CIB), one CIB cluster, one cross-platform peer cluster, and one Tripwire confirmation, and emits three signed proofs (shared-model-fingerprint, cross-platform-cluster, tripwire-confirmed). Each proof carries the cluster centroid, the fingerprint distance, and the (hashed) account handles.
What to do with the output: in production the dossier publishes to Rekor and the researcher cites the Rekor entries in their report. Platform takedown teams can verify the dossier without trusting the researcher; regulators can use it as enforcement observations; downstream journalists can cite it without needing platform confirmation.
Run it yourself
Drop this into a Node 18+ project (`npm install @sizls/pluck-bureau-coordinated @sizls/pluck-bureau-core tsx`):
```ts
// index.ts
import { createHash } from "node:crypto";
import {
  createCoordinatedSystem,
  fingerprintPrivateKey,
  signCanonicalBody,
  type CibCluster,
  type PostFingerprint,
  type SuspectAccount,
  type SuspectPlatform,
} from "@sizls/pluck-bureau-coordinated";
import { generateOperatorKey } from "@sizls/pluck-bureau-core";

async function main() {
  const operator = generateOperatorKey();
  const scraper = generateOperatorKey();
  const detector = generateOperatorKey();
  const scraperFp = fingerprintPrivateKey(scraper.privateKeyPem);
  const detectorFp = fingerprintPrivateKey(detector.privateKeyPem);

  // One shared fingerprint for the simulated CIB farm.
  const cibFp: PostFingerprint = {
    perplexityBands: [0.45, 0.4, 0.1, 0.05],
    tokenProbFingerprint: digest("model:openai-gpt-X-bot-cluster"),
    postingSkewSeconds: 30_600,
  };

  // 40 synthetic suspect accounts carrying the same fingerprint.
  const cibAccounts: SuspectAccount[] = [];
  for (let i = 0; i < 40; i++) {
    cibAccounts.push(
      buildAccount("x", `cib-${i}`, "2026-04-26T00:00:00.000Z", { ...cibFp }, scraper.privateKeyPem, scraperFp),
    );
  }
  const xCluster = buildCluster(
    "x",
    cibFp,
    cibAccounts.map((a) => a.accountIdHash),
    "2026-04-26T01:00:00.000Z",
    detector.privateKeyPem,
    detectorFp,
  );

  const coord = createCoordinatedSystem({
    signingKey: operator.privateKeyPem,
    disablePausePoll: true,
    disableLogging: true,
    tolerances: {
      sharedModelClusterSize: 40,
      crossPlatformWindowSeconds: 21_600,
      tripwireConsent: () => true,
      organicGate: () => false,
    },
  });

  try {
    for (const a of cibAccounts) coord.observeAccount(a);
    coord.registerCluster(xCluster);
    // Let the async constraint chain drain before reading proofs.
    for (let i = 0; i < 60; i++) await new Promise((r) => setImmediate(r));
    const proofs = coord.facts.proofs();
    console.log(`coordinated proofs = ${proofs.length}`);
    for (const p of proofs) console.log(`kind=${p.kind} proofId=${p.proofId.slice(0, 16)}…`);
  } finally {
    await coord.shutdown();
  }
}

function digest(s: string): string {
  return createHash("sha256").update(s).digest("hex");
}

// Salted hash so published dossiers never carry raw account handles.
function hashId(platform: SuspectPlatform, accountId: string): string {
  return createHash("sha256").update(`${platform}:${accountId}:salt`).digest("hex");
}

function buildAccount(
  platform: SuspectPlatform,
  accountId: string,
  observedAt: string,
  fingerprint: PostFingerprint,
  scraperKey: string,
  scraperFingerprint: string,
): SuspectAccount {
  const accountIdHash = hashId(platform, accountId);
  const skeleton = { schemaVersion: 1 as const, platform, accountIdHash, observedAt, fingerprint, scraperFingerprint };
  const observationId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  const signed = signCanonicalBody({ ...skeleton, observationId }, scraperKey);
  return { ...skeleton, observationId, signature: signed.signature };
}

function buildCluster(
  platform: SuspectPlatform,
  centroid: PostFingerprint,
  members: string[],
  detectedAt: string,
  detectorKey: string,
  detectorFingerprint: string,
): CibCluster {
  const tolerance = { perplexityBandChi2: 0.05, tokenProbHammingPrefix: 8, postingSkewStddev: 600 };
  const skeleton = { schemaVersion: 1 as const, platform, centroid, tolerance, memberAccountIdHashes: members, detectedAt, detectorFingerprint };
  const clusterId = createHash("sha256").update(JSON.stringify(skeleton)).digest("hex");
  const signed = signCanonicalBody({ ...skeleton, clusterId }, detectorKey);
  return { ...skeleton, clusterId, signature: signed.signature };
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```
Run with `tsx index.ts`. Expected output:

```
coordinated proofs = 1
kind=shared-model-fingerprint proofId=…
```
What you get
A signed Coordinated.Proof with one of three kind values:
- shared-model-fingerprint – 40+ accounts share the same LLM generation signature within tolerance.
- tripwire-confirmed – direct LLM API capture confirms a flagged post was bot-generated.
- cross-platform-cluster – the same fingerprint pattern appears on 2+ platforms within a coordinated time window.
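The cross-platform-cluster condition reduces to a window-and-centroid check. The sketch below is a hypothetical simplification: the `ClusterSummary` shape and the prefix comparison are assumptions for illustration, while the real detector operates over signed `CibCluster` bodies.

```typescript
// Hypothetical summary of a detected cluster; not the CibCluster type.
interface ClusterSummary {
  platform: string;
  centroidPrefix: string; // leading hex digits of the token-prob centroid
  detectedAt: string;     // ISO 8601 timestamp
}

// Fires when two clusters on different platforms share a centroid prefix and
// were detected within the coordination window (default 21,600 s = 6 hours).
function crossPlatformMatch(
  a: ClusterSummary,
  b: ClusterSummary,
  windowSeconds = 21_600,
): boolean {
  const dtSeconds = Math.abs(Date.parse(a.detectedAt) - Date.parse(b.detectedAt)) / 1000;
  return (
    a.platform !== b.platform &&
    a.centroidPrefix === b.centroidPrefix &&
    dtSeconds <= windowSeconds
  );
}

// Demo: clusters on X and TikTok three hours apart, same centroid prefix.
const onX: ClusterSummary = { platform: "x", centroidPrefix: "abcdef01", detectedAt: "2026-04-26T01:00:00.000Z" };
const onTikTok: ClusterSummary = { platform: "tiktok", centroidPrefix: "abcdef01", detectedAt: "2026-04-26T04:00:00.000Z" };
console.log(crossPlatformMatch(onX, onTikTok)); // prints: true
```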
Account handles are SHA-256 hashed (`sha256(platform || ':' || accountId || ':' || salt)`) before signing – a published CIB dossier cannot be weaponized to dox a real user or expose a researcher's target list.
What it can't do
- Coordinated cannot identify the operator of a bot network – only the network's existence and shared signature.
- A sufficiently sophisticated adversary can vary generation parameters to muddy the fingerprint. The Fingerprint centroid is robust but not infinitely so.
- Tripwire only captures traffic from consenting endpoints. It does not (and cannot legally) intercept third-party LLM calls.
- The dossier proves coordination, not intent. Lawful coordinated marketing campaigns may produce shared fingerprints; interpretation is the researcher's responsibility.
A real-world example
Two weeks before a national election, a researcher at a university election-integrity lab notices a cluster of about 200 X accounts pushing a specific narrative about voter fraud. The accounts look human-curated, but the linguistic patterns feel uniform. The lab runs the Coordinated probe-pack across X, TikTok, Reddit, and Telegram, scoping by hashtag. Dragnet captures roughly 9,000 posts; Fingerprint clusters posts from 137 accounts into a tight perplexity-band group, and a shared-model-fingerprint proof is emitted. The lab also runs a Tripwire on a public-interest LLM proxy; one of the 137 accounts had used the proxy two weeks earlier, generating text matching the deployment fingerprint and producing a tripwire-confirmed proof. The researcher publishes the dossier; X's trust-and-safety team verifies the Rekor entries, suspends the cluster within 48 hours, and issues a public statement crediting the verification chain.
For developers
Predicate URIs
| URI | What it attests |
|---|---|
| `https://pluck.run/Coordinated.SuspectAccount/v1` | Per-account scraped observation plus post-fingerprint feature vector. |
| `https://pluck.run/Coordinated.Cluster/v1` | A set of accounts sharing a model-generation fingerprint within tolerance. |
| `https://pluck.run/Coordinated.TripwireConfirmation/v1` | Tripwire-side capture of the LLM API call that generated a post. |
| `https://pluck.run/Coordinated.Proof/v1` | Published proof with kind (shared-model-fingerprint, tripwire-confirmed, cross-platform-cluster). |
Programs composed
- Dragnet – cross-platform probe-pack scraping.
- Fingerprint – perplexity-band plus token-prob clustering.
- Tripwire – consenting-endpoint LLM API capture.
- Nuclei – researcher-published signed CIB probe-packs.
Threat model + adversary
The adversary is the operator of a CIB network – a state actor, a marketing firm, a partisan operative – and (secondarily) the platform that prefers the network not be removed. Coordinated produces observations the platform cannot disclaim. See Threat Model.
Verify a published cassette
```sh
pluck bureau verify <bundle-dir>
cosign verify-blob --key <pubkey.pem> --signature <sig> --type https://pluck.run/Coordinated.Proof/v1 <body.json>
```
Every proof is a DSSE envelope notarized to Rekor. Independent verifiers re-derive the cluster centroid math from the signed SuspectAccount bodies, verify Ed25519 signatures, and confirm cluster membership. Verification requires only the Rekor entries – no platform cooperation.
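The Ed25519 check itself needs nothing beyond standard primitives. The sketch below uses Node's built-in `crypto` module; the body shape and base64 signature encoding are assumptions for illustration, since the actual layout is fixed by the DSSE envelope recorded in Rekor.

```typescript
import { createPublicKey, generateKeyPairSync, sign, verify } from "node:crypto";

// Verify an Ed25519 signature over a canonical JSON body. For Ed25519,
// Node's sign/verify take `null` as the digest algorithm.
function verifySignedBody(bodyJson: string, signatureB64: string, publicKeyPem: string): boolean {
  const key = createPublicKey(publicKeyPem);
  return verify(null, Buffer.from(bodyJson), key, Buffer.from(signatureB64, "base64"));
}

// Demo: generate a throwaway keypair, sign a body, and verify it.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const body = JSON.stringify({ kind: "shared-model-fingerprint", proofId: "demo" });
const signature = sign(null, Buffer.from(body), privateKey).toString("base64");
const pem = publicKey.export({ type: "spki", format: "pem" }).toString();
console.log(verifySignedBody(body, signature, pem)); // prints: true
```

The same check applied to every signed SuspectAccount body, plus re-deriving the centroid math, is what lets a verifier confirm cluster membership without platform cooperation.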
See also
- Bureau Foundations
- Threat Model
- Verify a dossier
- Dragnet – Dragnet probe-pack composition
- Fingerprint – Fingerprint clustering composition
- Tripwire – Tripwire LLM-API interceptor
- Nuclei – verification probe-packs