Trusted Setup Ceremony — Phase 1 live

Help Onym become trustworthy.

Zero-knowledge proofs need a shared public parameter set. That set was born from one person rolling a huge die in secret. For it to be safe on mainnet, many strangers each have to roll their own, and then erase the number. Onym's mainnet keyset will be the product of this ceremony. You are invited.

We generate SRS $\sigma = (\{\tau^i G\}_{i=0}^{2^d-1}, \{\alpha\tau^i G\}, \{\beta\tau^i G\}, \alpha H, \beta H)$ via an $N$-party MPC over BLS12-381. Each round $i$ re-randomizes $(\tau, \alpha, \beta)$ by independent scalars $(\tau_i, \alpha_i, \beta_i)$; security relies on 1-of-$N$ honest erasure and consistency checks via the bilinear pairing $e: \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$.

The twelve steps

Each step below is written twice: a plain-language version for humans, and a formal version for mathematicians. Use the toggle at the top to switch the default for every card; each card also has its own override.

00 · What is a trusted setup?

Zero-knowledge proofs let you prove you belong to a group — without revealing which member you are. To do that, the group needs a shared "public parameter set" — call it a lockbox that everyone uses.

The problem: building the lockbox requires a secret key. If the person who builds it keeps that key, they can forge any membership proof. If the key is destroyed, nobody can ever forge again. A trusted setup ceremony is a choreographed way for a crowd to build the lockbox together, where anyone who destroys their personal part of the key locks it forever.

For Groth16 over a pairing-friendly curve (BLS12-381 here), soundness requires a structured reference string (SRS) of the form

$$ \sigma = \Big( \{ \tau^i G \}_{i=0}^{n-1}, \{ \alpha \tau^i G \}_{i=0}^{n-1}, \{ \beta \tau^i G \}_{i=0}^{n-1},\ \alpha H,\ \beta H \Big) $$

with $G \in \mathbb{G}_1$, $H \in \mathbb{G}_2$, and toxic scalars $\tau, \alpha, \beta \in \mathbb{F}_r^*$ that must be erased after generation. If $(\tau, \alpha, \beta)$ survive, the SRS is broken: anyone holding them can construct accepting proofs for false statements [Groth16].
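The SRS construction can be sketched in a toy model where $\mathbb{G}_1$ is the order-$r$ subgroup of $\mathbb{Z}_p^*$, so the "scalar multiplication" $k \cdot G$ above becomes exponentiation $g^k$. All parameters below are illustrative; the real ceremony runs over BLS12-381.

```python
# Toy SRS: G1 is modeled as the order-r subgroup of Z_p^*, so the
# "scalar multiplication" k*G of the text becomes exponentiation g^k.
# p, r, g are illustrative; the real ceremony uses BLS12-381.
import secrets

p, r = 607, 101                 # r is prime and divides p - 1 = 606
g = pow(3, (p - 1) // r, p)     # generator of the order-r subgroup ("G")

def srs(tau, alpha, beta, n):
    """sigma = ({tau^i G}, {alpha tau^i G}, {beta tau^i G}), toy form."""
    powers = [pow(tau, i, r) for i in range(n)]
    return ([pow(g, t, p) for t in powers],
            [pow(g, alpha * t % r, p) for t in powers],
            [pow(g, beta * t % r, p) for t in powers])

tau, alpha, beta = (secrets.randbelow(r - 1) + 1 for _ in range(3))
sigma = srs(tau, alpha, beta, n=8)
# (tau, alpha, beta) are the toxic scalars: they must now be erased.
# sigma reveals them only through discrete logs, hard in the real group.
```

In the real group the exponents are hidden by hardness of discrete log; in this toy group they are trivially recoverable, which is exactly why the parameters are for illustration only.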

01 · What is toxic waste?

"Toxic waste" is a playful name for the secret numbers used to build the lockbox. Anyone who holds them can forge. Any copy, anywhere — on a hard drive, in a swap file, in RAM that a cold-boot attack can read — is a catastrophe.

Good ceremony practice: generate on an air-gapped or ephemeral machine, never write the scalar to disk, overwrite memory, throw away the VM, and ideally smash the hardware.

Toxic waste $= (\tau_i, \alpha_i, \beta_i) \in (\mathbb{F}_r^*)^3$ sampled by participant $i$. The MPC output after round $i$ is $(\tau^{(i)}, \alpha^{(i)}, \beta^{(i)}) = (\tau^{(i-1)} \tau_i,\ \alpha^{(i-1)} \alpha_i,\ \beta^{(i-1)} \beta_i)$. Soundness after $N$ rounds requires

$$\exists\, i^\star \in [N]\ :\ \tau_{i^\star},\ \alpha_{i^\star},\ \beta_{i^\star}\ \text{erased}.$$

The ceremony is sound iff at least one honest erasure occurred; this is the 1-of-$N$ assumption.
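The accumulation and the 1-of-$N$ argument can be checked numerically in a tiny field (the real modulus is on the order of $2^{255}$; the field size below is purely illustrative):

```python
# The multiplicative accumulation and the 1-of-N argument, numerically:
# erase any one factor and the final secret is information-theoretically
# hidden. r is a toy modulus; the real one is ~2^255.
import secrets

r = 101                                  # toy prime scalar-field modulus
tau0 = 7                                 # public initial value tau^(0)
contributions = [secrets.randbelow(r - 1) + 1 for _ in range(5)]

tau_final = tau0
for tau_i in contributions:
    tau_final = tau_final * tau_i % r    # tau^(i) = tau^(i-1) * tau_i

# Adversary knows every factor except contributions[2]:
known = tau0
for j, tau_i in enumerate(contributions):
    if j != 2:
        known = known * tau_i % r

# Each candidate x for the erased factor yields a distinct final value,
# so all r-1 nonzero field elements remain equally possible.
candidates = {known * x % r for x in range(1, r)}
```

The last line is the 1-of-$N$ assumption in miniature: without the erased factor, every nonzero field element is an equally plausible value for the accumulated secret.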

02 · Why 1-of-N honesty works

Each contributor takes the previous lockbox and multiplies in their own secret. Forging requires the product of every contributor's secret. If even one person erased theirs and vanished, nobody can ever recover the product.

This is why more contributors, from more walks of life, is strictly better. We don't need to trust anyone in particular — we just need one of them to have been honest, and we don't need to know which.

Let $\tau^{(N)} = \tau^{(0)} \prod_{i=1}^N \tau_i$. An adversary controlling any subset $S \subsetneq [N]$ learns $\prod_{i \in S} \tau_i$, but with $\tau_{i^\star}$ erased for some $i^\star \notin S$, recovering $\tau^{(N)}$ from the public SRS requires solving discrete logarithms in $\mathbb{G}_1$; the security of the resulting SRS reduces to the $q$-SDH assumption, analyzed in the generic group model [BGM17].

03 · What if everyone colludes?

If every contributor kept their secret and shared it, they could collectively forge. That is the whole risk. It's why we want dozens of contributors who don't know each other and who publicly announce their contribution — collusion becomes progressively harder as the crowd grows.

It's also why we encourage people who suspect foul play to contribute: your participation tightens the bound.

The 1-of-$N$ bound is tight: if all $N$ participants are corrupted, $\tau^{(N)} = \tau^{(0)} \prod \tau_i$ is fully known and Groth16 soundness collapses. No cryptographic assumption rescues this case. The ceremony therefore aims for diverse, non-colluding $N$ — geographic, institutional, and ideological diversity are all load-bearing.

04 · Your turn, step by step

  1. Get in line — sign up with a Nostr key (browser extension).
  2. When you reach the head of the queue, download the current state (the lockbox-so-far).
  3. Run the signed native binary we provide — ideally on an air-gapped or ephemeral machine.
  4. The binary generates your secret, multiplies it into the state, writes a receipt, and erases the secret from memory.
  5. Upload the three output files back here. We verify them; if they pass, they become part of the public transcript.

You have 2 hours from claiming your slot. If you miss it, you're returned to the queue.

At round $i$, participant $i$:

  1. Samples $(\tau_i, \alpha_i, \beta_i) \leftarrow (\mathbb{F}_r^*)^3$ via OS CSPRNG.
  2. Computes $\sigma^{(i)}$ by scalar multiplication over $\sigma^{(i-1)}$.
  3. Computes a proof of knowledge of $\tau_i$ bound to the round-id and new commitments: $\pi_i = (\tau_i G,\ \tau_i \mathcal{H}(\text{transcript}), \ldots)$, where $\mathcal{H}$ hashes the transcript to a $\mathbb{G}_2$ point.
  4. Emits state.srs, state.txt (metadata + hashes), receipt.txt (PoK + participant pubkey).
  5. Zeroizes $(\tau_i, \alpha_i, \beta_i)$; the binary uses the zeroize crate and never emits scalars on stdout.
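
A round of this shape can be sketched in a toy group where $\mathbb{G}_1$ is the order-$r$ subgroup of $\mathbb{Z}_p^*$: the participant re-randomizes $\{\tau^k G\}$ without ever learning the accumulated $\tau$, because $(g^{\tau^k})^{\tau_i^k} = g^{(\tau\tau_i)^k}$. Everything below is illustrative (the real binary works over BLS12-381):

```python
# One contribution round, toy form: the participant re-randomizes the
# accumulated powers {tau^k G} without ever learning tau, since
# (g^(tau^k))^(tau_i^k) = g^((tau * tau_i)^k). Parameters illustrative.
import secrets

p, r = 607, 101
g = pow(3, (p - 1) // r, p)

def contribute(state):
    tau_i = secrets.randbelow(r - 1) + 1
    new_state = [pow(c, pow(tau_i, k, r), p) for k, c in enumerate(state)]
    receipt = pow(g, tau_i, p)   # tau_i * G, used by the pairing checks
    tau_i = 0                    # toy stand-in for zeroizing the scalar
    return new_state, receipt

# state after earlier rounds, with accumulated secret 7 (unknown to us):
state = [pow(g, pow(7, k, r), p) for k in range(8)]
state, receipt = contribute(state)
```

Note that `contribute` only touches public commitments plus its own fresh scalar; this mirrors why contributors never need (and never see) the accumulated secret.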

05 · How to generate good randomness

Four suggestions in descending order of paranoia:

  • Ephemeral VM. Spin up a fresh cloud VM, run the binary, destroy the VM. Simple, and enough for most threat models.
  • Air-gapped laptop. Old ThinkPad that never touches the internet again. Move files via USB; physically destroy the disk after.
  • Hardware source. Read bytes from a hardware RNG (Intel RDRAND, TPM, YubiHSM) and mix with /dev/urandom.
  • Dice. Roll 256 bits of entropy on physical dice, hash with SHA-256, pipe into the binary via --seed-hex. Slow but unimpeachable.

Our binary defaults to the OS CSPRNG (OsRng) if no seed is provided.

The binary samples $(\tau_i, \alpha_i, \beta_i)$ uniformly over $\mathbb{F}_r^*$ via rejection sampling, which introduces no modular bias: out-of-range draws are simply discarded and re-drawn. The underlying entropy source is OsRng (on Linux, getrandom(2), ultimately backed by the kernel CSPRNG). For participants demanding higher assurance, --seed-hex accepts 32 externally generated bytes, from which the scalars are derived via HKDF-SHA256.
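
For concreteness, here is a stdlib-only sketch of both paths: rejection sampling into $\mathbb{F}_r^*$ and an RFC 5869 HKDF-SHA256 expansion of an externally supplied seed. The modulus is the real BLS12-381 scalar modulus; the info label is a hypothetical stand-in, not the binary's actual constant.

```python
# Uniform scalar sampling by rejection, plus HKDF-SHA256 expansion of an
# external seed. R is the real BLS12-381 scalar modulus; the info label
# below is hypothetical, not the binary's actual constant.
import hmac, hashlib, secrets

R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def sample_scalar(rand=secrets.token_bytes):
    """Uniform over [1, R-1]: out-of-range draws are re-sampled, no bias."""
    while True:
        x = int.from_bytes(rand(32), "big") >> 1   # 255-bit candidate
        if 1 <= x < R:
            return x

def hkdf_sha256(ikm, info, length=32, salt=b"\x00" * 32):
    """RFC 5869 extract-then-expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, t = b"", b""
    for i in range((length + 31) // 32):
        t = hmac.new(prk, t + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

seed = secrets.token_bytes(32)            # e.g. 256 dice-derived bits
tau_bytes = hkdf_sha256(seed, b"toy-ceremony-tau")
```

Rejection sampling is the standard way to avoid the bias that naive modular reduction of a 256-bit draw would introduce.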

06 · How to prove you didn't cheat

You could, in theory, generate a fake contribution. We catch this with public checks anyone can run. Your upload includes a short "receipt" that commits to your input and output and is signed with a key derived from your secret scalars themselves, so a liar can't produce a valid receipt without actually doing the computation.

The coordinator runs these checks before your round is accepted.

Receipt = proof of knowledge of $\tau_i$ bound to a challenge derived from the transcript hash, in the same-ratio style of [BGM17]. Let $R = \mathcal{H}(\text{transcript}) \in \mathbb{G}_2$ be a hash-to-curve challenge point. Then:

$$ \pi = (\tau_i G,\ \tau_i R),\qquad \text{transcript} = \text{round}\ \|\ \sigma^{(i-1)}\ \|\ \sigma^{(i)} $$

Verification: $e(\tau_i G,\ R) \stackrel{?}{=} e(G,\ \tau_i R)$. Analogously for $\alpha_i, \beta_i$. See src/ceremony/mod.rs functions verify_contribution and verify_initial_contribution.
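
The algebra of such a same-ratio receipt can be sanity-checked in a toy model where group elements are represented by their discrete logs, so the pairing $e(aG, bH)$ is just the product $ab \bmod r$. This verifies the equations, not the cryptography; all parameters are illustrative.

```python
# The receipt's same-ratio identity, checked in a toy model where group
# elements are represented by their discrete logs, so the pairing
# e(aG, bH) is just a*b mod r. Verifies algebra, not cryptography.
import hashlib

r = 101

def pair(a, b):                      # e(aG, bH) = e(G, H)^(a*b)
    return a * b % r

def hash_to_g2(transcript: bytes) -> int:
    # stand-in for a real hash-to-curve into G2
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % (r - 1) + 1

tau_i = 42
transcript = b"round-7|hash(sigma_prev)|hash(sigma_new)"
R = hash_to_g2(transcript)           # challenge point in "G2"
pi = (tau_i * 1 % r, tau_i * R % r)  # (tau_i G, tau_i R)

# Verifier accepts without learning tau_i:
assert pair(pi[0], R) == pair(1, pi[1])
```

In the real group, producing $\tau_i R$ from $\tau_i G$ without knowing $\tau_i$ is infeasible because $R$ depends on the transcript, which already commits to $\tau_i G$.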

07 · What verification means

At the end of the ceremony, anyone can replay the transcript: walk round by round, check every signature, and recompute every pairing. If it all checks out, the final lockbox is sound assuming at least one honest contributor — no need to trust us, no need to trust any single participant.

We ship a browser-based verifier (WASM) so you don't even need to install anything to audit. The ceremony is meaningful because verification is cheap.

Verification of round $i$ checks six pairing equations (three ratio consistency checks, three power-of-$\tau$ consistency checks). For the initial round there are three additional checks against the generator. A full-transcript verify is $O(N)$ in pairings; on commodity hardware this is a few minutes for $N=128$ and $n=2^{11}$.

Browser verify uses ceremony-wasm compiled with ark-bls12-381; backend verify shells out to the same ceremony_tool binary participants run.

08 · The six pairing equations

Every round is checked against six simple identities. Think of them as a double-entry ledger: if the new lockbox was built correctly, six specific "balances" line up; otherwise they don't. Getting the identities to lie requires knowing the secrets — which is exactly what we're ruling out.

For each round $i$, the verifier checks the ratio identities

$$ \begin{aligned} e(\tau^{(i)} G,\ H) &\stackrel{?}{=} e(\tau^{(i-1)} G,\ \tau_i H) \\ e(\alpha^{(i)} G,\ H) &\stackrel{?}{=} e(\alpha^{(i-1)} G,\ \alpha_i H) \\ e(\beta^{(i)} G,\ H) &\stackrel{?}{=} e(\beta^{(i-1)} G,\ \beta_i H) \end{aligned} $$

where $\tau_i H$, $\alpha_i H$, $\beta_i H$ come from the contributor's receipt, matching the multiplicative update $\tau^{(i)} = \tau^{(i-1)} \tau_i$.

Plus the power-of-$\tau$ chain checks: $e\big((\tau^{(i)})^{k}\, G,\ H\big) \stackrel{?}{=} e\big((\tau^{(i)})^{k-1}\, G,\ \tau^{(i)} H\big)$ for $k \in \{1,\ldots,n-1\}$.

Authoritative source: src/ceremony/mod.rs::verify_contribution.
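
A toy model with group elements represented by their discrete logs (so the pairing is a product of exponents mod $r$) confirms that the ratio and chain identities of the multiplicative update hold for honest updates and fail for tampered ones; parameters are illustrative only.

```python
# Ratio and power-of-tau chain identities for the multiplicative update
# tau_new = tau_prev * tau_i, in the discrete-log toy model where the
# pairing is a product of exponents mod r. Parameters illustrative.
r = 101

def pair(a, b):                      # e(aG, bH) = e(G, H)^(a*b)
    return a * b % r

tau_prev, tau_i = 7, 13
tau_new = tau_prev * tau_i % r

# Ratio check: e(tau_new G, H) == e(tau_prev G, tau_i H)
assert pair(tau_new, 1) == pair(tau_prev, tau_i)

# Chain checks: e(tau^k G, H) == e(tau^(k-1) G, tau H) for each k
powers = [pow(tau_new, k, r) for k in range(8)]
for k in range(1, 8):
    assert pair(powers[k], 1) == pair(powers[k - 1], tau_new)

# A tampered commitment at index 3 breaks the chain check there:
bad = powers[:]
bad[3] = (bad[3] + 1) % r
assert pair(bad[3], 1) != pair(bad[2], tau_new)
```

The double-entry-ledger metaphor above is exact: each identity balances a new commitment against the previous one and the receipt.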

09 · Why anyone can verify

Every round artifact is stored twice: content-addressed on our Blossom blob server, and announced as a signed Nostr event on our relay. You can fetch the whole transcript from either — or from mirrors — without ever talking to us.

If this coordinator disappears, the ceremony continues. If we try to lie about order or content, every signed event catches us.

Artifacts are addressed by $\text{SHA-256}(\text{bytes})$; transcript events are Nostr kind 30078 (addressable) signed with the coordinator's secp256k1 key. Each event carries:

  • d: sepceremony1:<tier>:r<round> (replaceable key)
  • e: previous round's event-id (hash-linked chain)
  • blob: (name, sha256, url) — one per artifact file
  • participant_pubkey: contributor's npub

The chain yields a hash-linked Merkle-ish ledger; tampering with any round invalidates all subsequent e tags. Mirrors can ingest directly from any Nostr relay that accepted the events.
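
A minimal sketch of such a hash-linked event chain, with hypothetical field names standing in for the real Nostr event structure:

```python
# A minimal hash-linked transcript chain in the shape the Nostr events
# form: each event's "e" field commits to the previous event's id, so
# editing any round invalidates every later link. Field names are
# hypothetical stand-ins for the real event structure.
import hashlib, json

def event_id(event: dict) -> str:
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def append_round(chain, artifacts):
    prev_id = event_id(chain[-1]) if chain else None
    chain.append({"round": len(chain), "e": prev_id, "blobs": artifacts})

def verify(chain) -> bool:
    return all(chain[i]["e"] == event_id(chain[i - 1]) for i in range(1, len(chain)))

chain = []
for rnd in range(4):
    append_round(chain, [f"sha256-of-state-{rnd}"])
```

Changing any event changes its id, so every later `e` link fails verification; this is the tamper-evidence property the transcript relies on.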

10 · Why Phase 2 is circuit-specific

Phase 1 (what you're contributing to now) is generic — it makes a lockbox usable by any small-enough circuit. Phase 2 specializes it to our circuit (Onym's membership proof). It's a second, shorter ceremony that bakes the circuit's specific wiring into the keys.

Phase 2 runs after Phase 1 freezes, separately for each tier (small / medium / large), using snarkjs as the MPC driver.

Phase 2 translates the universal SRS into Groth16-specific proving and verifying keys via the circuit's QAP: given the R1CS $(\mathbf{A}, \mathbf{B}, \mathbf{C})$ with $m$ constraints and $n$ variables, and the Phase 1 output $\{\tau^i G\}_{i=0}^{2^d-1}$, Phase 2 computes the QAP polynomials $\ell_i(\tau), r_i(\tau), o_i(\tau)$ (evaluated at $\tau$ in the Lagrange basis) and the $h(\tau) t(\tau) / \delta$ terms. $\delta$ is introduced as a fresh random scalar in Phase 2; this is why Phase 2 is per-circuit.

We run Phase 2 via snarkjs zkey rounds inside a pinned Docker image; see docs/phase2-mpc-integration.md.

11 · The random beacon at the end

To seal the ceremony, we mix in a number that nobody could have predicted: a future Bitcoin block hash. Once the block is mined, its hash is folded into the final keyset. This closes the last subtle loophole: even if a contributor tried to bias their randomness, they couldn't have pre-computed the beacon.

We'll announce the exact block height when Phase 2 freezes. Until then it's literally unpredictable.

Beacon $B = \text{SHA-256}(\text{blockheader}_h)$ for a pre-announced height $h$. A final scalar is derived as $\text{HKDF}(B, \text{"onym-ceremony-1"})$ and applied to $\sigma^{(N)}$ as one last, deterministic contribution, re-randomizing each commitment exactly as a participant round would; anyone can recompute it and verify $\sigma^{(\text{final})}$.

The beacon serves as a "post-commit" common reference, removing the last-contributor bias attack: $B$ is not known until after every $\tau_i$ has been committed, so no adversary can adapt $\tau_N$ toward a useful target.
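
A toy sketch of the beacon step, with plain SHA-256 standing in for HKDF and a small multiplicative subgroup standing in for BLS12-381: anyone can re-derive the beacon scalar and check the final update.

```python
# The beacon step, toy form: the beacon scalar is derived publicly from
# a block hash (plain SHA-256 here stands in for HKDF) and applied as
# one final deterministic contribution. Group parameters illustrative.
import hashlib

p, r = 607, 101
g = pow(3, (p - 1) // r, p)

block_hash = hashlib.sha256(b"hypothetical-block-header-at-height-h").digest()
tau_b = int.from_bytes(
    hashlib.sha256(block_hash + b"onym-ceremony-1").digest(), "big"
) % (r - 1) + 1

state = [pow(g, pow(7, k, r), p) for k in range(8)]         # sigma^(N), toy
final = [pow(c, pow(tau_b, k, r), p) for k, c in enumerate(state)]
# Anyone re-deriving tau_b from the block hash gets the identical final
# state, so the last update is fully auditable.
```

Because the beacon contribution is deterministic, it needs no receipt of its own: verifying it is just recomputing it.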