

Bounty

Bounty wraps a Bureau red dot (from Dragnet, Fingerprint, Mole, or other Bureau programs) into an automated bug-bounty report whose every claim links back to a Rekor-anchored entry. Each report carries a publicly verifiable timestamp that exists before vendor triage begins.

Posture: 🔴 Red Team (offensive)   ·   Status: alpha

What it does

Bug-bounty reports are typically a vendor-controlled record. The researcher submits findings, and the vendor's triage team determines severity, classification, and final timeline. The result can be a long-running asymmetry between researcher and vendor.

Bounty addresses that by anchoring the researcher's claims independently. When a Pluck Bureau program (Dragnet, Fingerprint, Mole) detects a verifiable AI-vendor issue – leaking a system prompt, reproducing training-data content, drifting from the published model card – it produces a "red dot": a signed finding anchored on the public Sigstore Rekor log. Bounty wraps that red dot in an evidence packet and files the packet as a bug-bounty report against HackerOne or Bugcrowd. Each claim in the report links to a public-log entry that any third party can re-verify. The vendor's triage decisions remain the vendor's, but the underlying record exists in a tamper-evident form before triage begins.
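The evidence packet at the center of that flow can be pictured as a small record tying the finding to its public-log anchors. A minimal sketch in TypeScript – the field names are inferred from this page, not the published types:

```typescript
// Illustrative shape only – fields inferred from this page, not the
// published @sizls/pluck-bureau-bounty types.
interface EvidencePacket {
  sourceRekorUuid: string;   // 64-hex uuid of the red dot's Rekor entry
  subpoenaRekorUuid: string; // 64-hex uuid of the subpoena attestation
  subject: { vendor: string; model: string };
  reason: string;            // one-line classification of the finding
  summary: string;           // human-readable narrative for the triager
  assembledAt: string;       // ISO-8601 timestamp of packet assembly
}

const example: EvidencePacket = {
  sourceRekorUuid: "9f3a8b1c4d5e6f7a".repeat(4),
  subpoenaRekorUuid: "4d5e6f7a9f3a8b1c".repeat(4),
  subject: { vendor: "openai", model: "gpt-4o" },
  reason: "PII regurgitation under standard sampling",
  summary: "See evidence chain.",
  assembledAt: new Date().toISOString(),
};
console.log(`${example.subject.vendor}/${example.subject.model}`);
```

Everything the triager needs to re-verify hangs off the two Rekor uuids; the rest is presentation.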

Who would use it

  • An independent security researcher who has been burned by vendors silently downgrading severity.
  • A red-team firm filing a finding under their corporate identity but wanting an immutable timestamp before triage starts.
  • An academic disclosing a model-memorization PoC with reproducible probes.
  • A bounty hunter who runs Dragnet nightly against openai/gpt-4o and wants the inbox to surface bountyable findings automatically.
  • A CISO at a customer of an AI vendor with evidence the vendor's claims about their model do not hold up.

What you'll need

  • Node.js 20 or newer.
  • The Pluck CLI: npm i -g @sizls/pluck-bureau-cli.
  • A HackerOne or Bugcrowd account, with API access enabled. HackerOne uses HTTP Basic auth (Authorization: Basic base64(username:token)). Bugcrowd uses Authorization: Token user:secret. A bare bearer token will be rejected by both.
  • The username and API token exported as environment variables – Bounty reads them at the moment of the call and never logs them.
  • A red dot already signed – that is, a Pluck finding from Dragnet / Fingerprint / Mole already on Rekor.
  • A subpoena attestation referencing the red dot's evidence.

Bounty is identity-bound. The platform sees your username. For anonymity, use Whistle.
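The two auth shapes above differ only in the header they produce. A hedged sketch of each – the helper names are illustrative, not part of the CLI:

```typescript
// HackerOne: HTTP Basic – base64 of "username:token".
// (Helper names are illustrative, not a published API.)
function hackerOneAuthHeader(username: string, token: string): string {
  const encoded = Buffer.from(`${username}:${token}`).toString("base64");
  return `Basic ${encoded}`;
}

// Bugcrowd: Token scheme – "user:secret" after the scheme, not base64.
function bugcrowdAuthHeader(username: string, secret: string): string {
  return `Token ${username}:${secret}`;
}

// Read credentials from env at call time, as Bounty does – never log them.
const header = hackerOneAuthHeader(
  process.env.H1_USERNAME ?? "alice",
  process.env.H1_TOKEN ?? "example-token",
);
console.log(header.slice(0, 6)); // print the scheme only, never the token
```

A bare `Authorization: Bearer <token>` matches neither shape, which is why both platforms reject it.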

Step-by-step

The fastest path is the inbox: walk a directory of Pluck dossiers, find the bountyable findings, and print a copy-pastable file command for each one.

Shell
pluck bureau bounty inbox ./.tripwire --since 30
bounty/inbox: 1 bountyable red dot(s) in last 30 day(s).
  4f2c1d3e7a8b9012…  openai/gpt-4o
      reason:    PII regurgitation (canary 0x9af3...)
      captured:  2026-04-22T14:31:00Z
      program:   hackerone/openai  payout $200-$2000 USD
      file:      pluck bureau bounty file 4f2c1d3e7a8b9012... \
                   --target hackerone --program openai \
                   --auth-env H1_TOKEN --subpoena <uuid> --vendor openai --model gpt-4o --accept-public

Validate the auth shape before posting anything upstream – recommended for CI smoke tests:

Shell
export H1_USERNAME=alice
export H1_TOKEN=...

pluck bureau bounty file 4f2c1d3e7a8b9012... \
  --target hackerone --program openai \
  --auth-env H1_TOKEN --subpoena <uuid> \
  --vendor openai --model gpt-4o \
  --dry-run

--dry-run exercises the full report-build path locally, never POSTs.

When ready, file for real. --accept-public is mandatory – filing posts to a third-party platform.

Shell
pluck bureau bounty file 4f2c1d3e7a8b9012... \
  --target hackerone --program openai \
  --auth-env H1_TOKEN --subpoena <uuid> \
  --vendor openai --model gpt-4o \
  --reason "PII regurgitation under standard sampling" \
  --accept-public

Output:

bounty/file: filed against hackerone/openai.
  externalId:        2849312
  evidencePacketSha: 8f3a...b21c
  submissionId:      e0a9...4f2c

The vendor's triager opens the report, runs the embedded cosign verify-blob command, confirms the canary was sealed before training cutoff, and confirms the model emitted it under standard sampling. The triage decision is whatever the vendor decides – but the public-log timestamp is immutable.
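The core of that triage check is a timestamp comparison: was the canary's log entry integrated before the training cutoff? A sketch, assuming integratedTime is the Unix-seconds value Rekor reports for an entry:

```typescript
// Returns true when the Rekor entry predates the cutoff. `integratedTime`
// is assumed to be Unix seconds, as Rekor reports it; cutoff is ISO-8601.
function anchoredBeforeCutoff(integratedTime: number, cutoffIso: string): boolean {
  return integratedTime * 1000 < Date.parse(cutoffIso);
}

// Canary sealed 2026-02-01, hypothetical cutoff 2026-03-01.
const sealed = Date.parse("2026-02-01T00:00:00Z") / 1000;
console.log(anchoredBeforeCutoff(sealed, "2026-03-01T00:00:00Z")); // true
```

The comparison is trivial on purpose: all of the hard guarantees live in the signed log entry, not in the arithmetic.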

For the alpha, track is a placeholder until the bureau backend polls platform status endpoints. Use the platform UI for now:

Shell
pluck bureau bounty track <submission-id>

Cross-reference a Nuclei sponsor bounty when one exists:

Shell
pluck bureau bounty claim <bounty-id>

Run it yourself

Drop this into a Node 20+ project (npm install @sizls/pluck-bureau-bounty @sizls/pluck-bureau-core tsx):

TypeScript
// index.ts
import { createBountySystem } from "@sizls/pluck-bureau-bounty";
import { generateOperatorKey } from "@sizls/pluck-bureau-core";

async function main() {
  const operator = generateOperatorKey();
  const system = createBountySystem({
    signingKey: operator.privateKeyPem,
    disablePausePoll: true,
    disableLogging: true,
  });

  try {
    // Two Rekor uuids that anchor the finding (synthetic 64-hex for the demo).
    const sourceRekorUuid = "9f3a8b1c4d5e6f7a".repeat(4);
    const subpoenaRekorUuid = "4d5e6f7a9f3a8b1c".repeat(4);

    const packet = system.buildEvidence({
      sourceRekorUuid,
      subpoenaRekorUuid,
      subject: { vendor: "openai", model: "gpt-4o" },
      reason: "PII regurgitation under standard sampling",
      summary:
        "Dragnet probe `pii-canary-v1` produced verbatim recall of the canary " +
        "phrase in the first sampling. The canary was sealed via Mole on 2026-02-01.",
      moleVerdict: "8c7b6a5d4e3f2109".repeat(4),
      rekorUrl: "https://rekor.sigstore.dev",
    });

    console.log(`bounty/build: assembled at ${packet.assembledAt}`);
    console.log(`  vendor/model: ${packet.bodyMarkdown.split("\n")[0]}`);
    console.log(`  evidence:     ${system.facts.evidence().length} packet(s)`);
    console.log(`  cosign cmd:   ${packet.cosignVerifyCommand.slice(0, 56)}...`);
  } finally {
    await system.shutdown();
  }
}

main().catch((err) => { console.error(err); process.exit(1); });

Run with tsx index.ts. Expected output:

bounty/build: assembled at 2026-04-27T18:22:11.314Z
  vendor/model: # AI vendor finding – openai/gpt-4o
  evidence:     1 packet(s)
  cosign cmd:   cosign verify-blob --rekor-url https://rekor.sigstore.dev...

(To actually file: call system.fileBounty(evidence, { target, ... }) with HackerOne or Bugcrowd credentials in env vars – see the --dry-run flow above.)

▶ Open in StackBlitz – runs in your browser, no install required.

What you get

You get a real HackerOne or Bugcrowd report ID – the same ID the platform's UI shows. Embedded in the report body is a cosign verify-blob command anyone in the world can run. The vendor's triage team cannot dispute that the finding existed at the timestamp the public log shows. If the vendor later claims you concocted the evidence after their fix shipped, the chain says otherwise.

If you run inbox nightly and feed it a continuously updated dossier directory, Bounty becomes a hands-off pipeline: probe → red dot → inbox → file. Researchers running this against multiple vendors have reported moving from "wrote three reports" to "filed thirty" without any drop in triage acceptance rate.

What it can't do

  • Bounty does not anonymize you. The platform sees your username.
  • Bounty does not control whether the vendor pays. The decision is theirs.
  • Rate limits are local – HackerOne 600/hour, Bugcrowd 300/hour. Submissions over the limit are refused locally with status 429. Cross-process rate limits across multiple machines are your problem.
  • track is alpha. Check the platform UI directly.
  • Filing a low-quality report against a real vendor will get you banned from their program. Use a sandbox program first.
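The local refusal described above is a classic token bucket. A minimal sketch, assuming the documented per-hour capacities; the class name is illustrative, not the shipped implementation:

```typescript
// Token bucket: capacity tokens, refilled continuously; an empty bucket
// maps to the local 429 described above. Time is injectable for testing.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerMs: number,
    now = Date.now(),
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  /** Returns true if a submission may proceed; false means refuse locally. */
  tryTake(now = Date.now()): boolean {
    const elapsed = now - this.last;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// 600 submissions/hour ≈ one token every six seconds.
const hackerone = new TokenBucket(600, 600 / 3_600_000);
```

Because the bucket lives in one process, two machines filing in parallel each get the full budget – hence the "your problem" caveat above.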

A real-world example

A bug-bounty hunter named Alice runs Dragnet against openai/gpt-4o every night. Tuesday's run produces a red dot – the model regurgitated a private email address, and Mole confirms the address came from a sealed canary corpus committed to Rekor before training cutoff. Wednesday morning, Alice runs pluck bureau bounty inbox ./.tripwire. The inbox finds the new red dot, looks up openai in the seed program directory, and prints a copy-pastable file command. Alice plugs in the subpoena uuid, exports H1_TOKEN, runs the file command. Bounty POSTs an evidence packet to HackerOne's API. OpenAI's triager runs the embedded cosign verify, confirms the chain in thirty seconds, opens the ticket as triaged. Alice's signing key never left her laptop. The whole loop took eight minutes.


For developers

Predicate URIs

  • https://pluck.run/EvidencePacket/v1 – Platform-agnostic evidence body; the same shape for HackerOne, Bugcrowd, and future adapters.
  • https://pluck.run/Bounty.Submission/v1 – Post-filing record carrying the platform's externalId; built only after the platform accepts the POST.

Programs composed

subpoena, attest, notarize, dsseSign. Bounty pulls red dots from Dragnet, Fingerprint, Mole, and cross-references Nuclei BountyClaims.

Threat model

  • Adapters never include the operator's signing key in any payload sent to HackerOne / Bugcrowd. Bodies carry only the rekor uuids + cosign verify command.
  • Auth tokens read from env vars at call time, never logged.
  • Per-process token-bucket rate limits.
  • All Rekor uuids strict 64-hex.
  • AbortSignal threading on fetch + adapter calls.
  • The platform sees your username. Bounty is identity-bound; for anonymity see Whistle.
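The strict 64-hex check is a one-line guard. A sketch, assuming Rekor uuids are lowercase hex as they appear throughout this page:

```typescript
// Accepts exactly 64 lowercase hex characters – the shape of every Rekor
// uuid on this page. Anything else is rejected before any network call.
const REKOR_UUID = /^[0-9a-f]{64}$/;

function isRekorUuid(value: string): boolean {
  return REKOR_UUID.test(value);
}

console.log(isRekorUuid("9f3a8b1c4d5e6f7a".repeat(4))); // true
console.log(isRekorUuid("not-a-uuid"));                 // false
```

Validating shape this early keeps malformed identifiers out of signed payloads entirely.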

Verify a published cassette

Shell
pluck bureau bounty verify <bundle-dir>
cosign verify-blob \
  --key <pubkey.pem> \
  --signature <sig> \
  --type https://pluck.run/EvidencePacket/v1 \
  <body.json>

Library surface

TypeScript
import { autoFile, type BountyTarget } from "@sizls/pluck-bureau-bounty";

const result = await autoFile({
  target: {
    platform: "hackerone",
    programHandle: "openai",
    authEnv: "H1_TOKEN",
    userEnv: "H1_USERNAME",
  } satisfies BountyTarget,
  evidence: {
    sourceRekorUuid: "9f3a8b1c...",
    subpoenaRekorUuid: "4d5e6f7a...",
    subject: { vendor: "openai", model: "gpt-4o" },
    reason: "PII regurgitation",
    summary: "See evidence chain.",
  },
});

