[Cryptography] Quillon Graph: A private, post-quantum electronic cash system
Viktor S. Kristensen
overdrevetfedmetodologi at pm.me
Thu Mar 26 02:30:03 EDT 2026
List,
This is a follow-up to the Quillon Graph thread from January.
The network launched on February 22, 2026 and has been running
continuously for 31 days. I owe this list an honest status
report, corrections to claims I made in January, and responses
to the criticisms raised -- particularly by Bear, Peter Gutmann,
John Gilmore, and Peter Fairbrother.
1. CORRECTIONS TO JANUARY CLAIMS
Several things I stated or implied in January were wrong or
misleading. I want to correct them before anything else.
(a) The mining algorithm is BLAKE3 + VDF, not SHA-3.
The January posts referenced SHA-3 in several places. The actual
proof-of-work implementation uses iterative BLAKE3 hashing:
h = BLAKE3(header || nonce)
for i in 0..T: h = BLAKE3(h)
check: h < target
SHA-3-256 appears in the protocol layer as an algorithm identifier,
in challenge-response hashing, and in a cfg-gated CPU fallback path
within the hybrid mining library (compiled when gpu-mining feature
is disabled). However, the primary PoW implementation that all
miners and nodes actually execute -- DagKnightVDF in q-miner and
q-api-server -- uses BLAKE3. I should have been precise about this
in January.
(b) The 50ms finality figure needs clarification.
Peter Fairbrother questioned this. There are two metrics:
- User-visible confirmation: <50ms. The SSE streaming system
pushes transaction events to connected wallets within 50ms of
node acceptance. Users see their balance update almost
instantly. This is the number users experience.
- DAG-Knight consensus finality: 1.4 seconds. This is when the
transaction is ordered in the DAG with sufficient probability
of irreversibility (delta=1 confirmation depth).
In January I conflated these. The 50ms figure is real and
measurable -- but it is node-level acceptance and event delivery,
not consensus finality. For a merchant accepting payment, 1.4
seconds is the honest finality number. For a user watching their
wallet, sub-50ms is the honest UX number. Both are legitimate
metrics; I should have distinguished them.
(c) The TPS figures need context.
The January thread referenced 48,000 TPS. Independent benchmarking
does not reproduce that number. The measured throughput is still
substantial: the HTTP binary batch protocol peaks at 12,414 TPS
(1,000-tx batches, P99 latency 125ms, 100% success rate) and
sustains 8,600-9,700 TPS across batch sizes from 100 to 10,000.
Single-transaction PaaS API calls measure 1,273 TPS at 73ms P99.
However, throughput without finality context is misleading. The
full picture across layers:
Layer                        TPS      Latency
─────────────────────────────────────────────────────
HTTP binary batch (peak)     12,414   125ms P99
Optimistic finality (SSE)    —        <100ms
AEGIS-256 affirmation        —        53-160ms
DAG-Knight 1-conf finality   —        ~1.4s avg
Deep confirmation (3-block)  —        ~4.3s avg
The <100ms optimistic finality is real and measurable: when a
sender signs a transaction, the receiver's wallet balance updates
within 100ms via SSE push, backed by an AEGIS-256 authenticated
affirmation certificate. This is not a UI trick -- the balance
update is cryptographically attested -- but it precedes full DAG
consensus ordering by ~1.3 seconds.
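The SSE mechanism itself is the standard text/event-stream format. As
a minimal sketch of the wallet side (Python; nothing Quillon-specific
is assumed, and the event names and payload fields in the example are
generic, not the actual wire schema):

```python
# Minimal parser for the Server-Sent Events wire format a wallet
# client consumes: "event:"/"data:" lines, with a blank line ending
# each event. Generic SSE only -- not the actual Quillon schema.
def parse_sse(stream_text):
    events, event, data = [], None, []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and (event is not None or data):
            events.append((event, "\n".join(data)))
            event, data = None, []
    return events
```

The wallet's "instant" balance update is just such an event arriving
within the node's 100ms push window, before consensus ordering.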
I should have presented all layers in January rather than citing only the
peak throughput.
(d) The codebase size requires clarification.
I cited 680,000+ lines in January, measured across an 83-crate
workspace. The current figure for non-test, non-backup Rust
application code is approximately 767,000 lines (555,000
excluding blanks and comments). Including the TypeScript
frontend and test suites, the full project is approximately
944,000 lines. The January number was roughly accurate for
its time; the codebase has grown since then.
(e) SQIsign: real isogeny arithmetic now available via FFI.
Since the January thread, we completed the FFI integration path
recommended in our own technical review. The official NIST Round 2
SQIsign C reference implementation (SQISign/the-sqisign, Apache-2.0)
is now vendored in the codebase and callable from Rust via two new
crates:
q-sqisign-sys   Raw FFI bindings to the C reference (75,545 lines
                of production C, 203 files). Compiles via CMake,
                links GMP. Exposes keygen, sign, verify, open.
q-sqisign       Safe Rust wrapper with KeyPair::generate(), sign(),
                verify(). Secret keys zeroized on drop. 11 tests
                including roundtrip, tampering, wrong-key rejection.
What is now real (Level I, via C FFI):
- Key generation produces actual isogeny-derived keypairs
(pk=65 bytes j-invariant, sk=353 bytes isogeny coefficients)
- sign() computes the real dimension-4 isogeny push-through
via KLPT + Deuring correspondence (~30ms on modern Intel)
- verify() checks that the isogeny diagram commutes (~1.5ms)
- All Fp2 arithmetic, Velu formulas, quaternion algebra, and
KLPT lattice reduction are implemented in the vendored C
- NIST KAT vectors included (900 test cases per security level)
- Detached signature size: 148 bytes (smaller than the 204-byte
scaffold estimate)
What still uses the hash-based scaffold:
- Levels III and V (only Level I C reference is vendored)
- Builds without the sqisign-ffi feature flag (fallback path)
The integration is feature-gated: cargo build --features sqisign-ffi
enables the real C implementation for Level I. Without the flag, the
hash-based scaffold remains as a structural fallback. The height-gated
activation point (PHASE2_SQISIGN_MANDATORY, block 2,000,000) has
already been passed -- the chain is currently above 11 million blocks.
Mainnet blocks are signed with SQIsign (Phase 2) by default.
The chain transitioned through Ed25519 (Phase 0) → Dilithium5
(Phase 1, now deprecated) → SQIsign (Phase 2, 204-byte compact
signatures, 95.6% smaller than Dilithium5's 4,627 bytes).
2. ADDRESSING THE CRITICISMS
Bear's premature standardization argument (Jan 12):
"Any standard for quantum-resistant cryptography is a ludicrously
premature standard made without any input from real-world systems
or threats."
Bear, you were right about the prescription problem. In January I
argued that hybrid classical+PQ was the answer. Having now run the
system on mainnet for a month, I can report what hybrid actually
costs in practice:
- Dilithium5 signatures are 4,627 bytes vs Ed25519's 64 bytes.
On a DAG with thousands of blocks per minute, this is
significant storage overhead.
- Kyber1024 KEM adds ~3,200 bytes to each P2P handshake.
With 80+ connected peers, this is measurable.
- The real cost is not space -- it is attention. Engineering
time spent on PQ integration is time not spent on consensus
correctness, storage reliability, and operational safety.
In our first month of mainnet, we had:
- A height regression bug that lost 3,000 blocks on restart
- A RocksDB memory leak that OOM-killed the bootstrap node
- A gossipsub flood that crashed the 10Gbit supernode 6x/hour
- An ephemeral port collision that caused bind() failures
None of these were cryptographic failures. All of them were
engineering failures that threatened the network more immediately
than any quantum computer. This is exactly Gutmann's Scenario D
point: "while the world is fixated on dealing with a threat that
no-one has been able to prove exists, we're not addressing actual
vulnerabilities."
I still believe hybrid is correct for an immutable ledger. But
Bear's point about premature standards diverting engineering
resources from real threats is empirically validated by our first
month of operation.
Your suggestion to maintain "a few hundred different attempted
solutions" before standardizing has merit. The crypto-agility
architecture (height-gated validation rules, algorithm field in
block headers) was designed for exactly this: the PQ layer can be
replaced without a chain reset if lattice assumptions fall or
better constructions emerge.
Peter Gutmann's Scenario D -- Desinformatsiya (Jan 11):
"While the world is fixated on dealing with a threat that no-one
has been able to prove exists, we're not addressing actual
vulnerabilities."
Peter, this hit hard. Our operational experience confirms it.
The three most dangerous bugs we encountered in mainnet's first
month were:
1. Sync-down: a peer announcing a lower height could cause the
node to delete blocks down to the peer's height, destroying
the chain. This is a trivial implementation bug with
catastrophic consequences -- orders of magnitude more
dangerous than any quantum attack.
2. Unbounded block-pack allocation: syncing peers requesting
500+ blocks spawned unbounded tokio tasks, each allocating
50-150MB. Four concurrent sync requests = 10GB allocation =
OOM. We gated this with a semaphore (max 4 concurrent, max
200 blocks per response, ~200MB worst case).
3. RocksDB auto-configuring block cache to 1/3 of RAM (16GB on
our main node), leaving insufficient memory for the
application. Fixed by explicit cap: ROCKSDB_BLOCK_CACHE_MB=4096.
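The fix for item 2 is a generic pattern worth spelling out: bound
concurrency with a semaphore and cap per-response size. A sketch in
Python asyncio (the actual implementation uses tokio; fetch_block is a
placeholder, not a real Quillon API, and the constants mirror the
figures above):

```python
import asyncio

# Pattern behind the item-2 fix: a semaphore bounding concurrent
# block-pack responses plus a hard cap on blocks per response.
# fetch_block is a placeholder callback, not a real Quillon API.
MAX_CONCURRENT_SYNCS = 4
MAX_BLOCKS_PER_RESPONSE = 200

sync_gate = asyncio.Semaphore(MAX_CONCURRENT_SYNCS)

async def handle_sync_request(requested, fetch_block):
    n = min(requested, MAX_BLOCKS_PER_RESPONSE)   # cap response size
    async with sync_gate:                         # at most 4 in flight
        return [await fetch_block(i) for i in range(n)]
```

With both bounds in place, worst-case memory is 4 x 200 blocks rather
than unbounded tasks each holding 50-150MB.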
These are pedestrian engineering problems. They are also the ones
that actually threatened user funds. Your "bollocks" framing has
a real engineering corollary: the threat model that dominates in
practice is implementation bugs, not algorithmic breaks.
That said, I maintain the immutable ledger asymmetry argument.
These engineering bugs are fixable (and were fixed, within hours).
A classical cryptographic break of the signature scheme, if it
occurs after keys are exposed on-chain, is not fixable -- those
funds are permanently stealable. The distinction between
"recoverable engineering failure" and "irrecoverable
cryptographic failure" still favors defense-in-depth.
John Gilmore's trust deficit (Jan 10):
"Why is NSA pushing belt-without-suspenders quantum-resistant
crypto? [...] You could bet that they've changed their spots,
but it's a sucker bet."
John, I cannot distinguish your Scenarios A/B/C from the outside.
The implementation responds as follows:
- No NIST-blessed RNG. We use getrandom (OS entropy) and
optionally hardware TRNG, not NIST SP 800-90A.
- Parameter transparency. Dilithium5 and Kyber1024 use the
published NIST parameters, but the protocol is designed for
algorithm replacement. If someone produces a convincing
cryptanalytic result against ML-DSA/ML-KEM, we can activate
a replacement at a future block height without chain reset.
- Belt AND suspenders. Ed25519 remains as the Phase 0 layer.
Dilithium5 is Phase 1. Both signatures must verify for blocks
in the hybrid phase. Breaking one is insufficient.
Whether NSA has broken lattice assumptions is unknowable. Whether
hybrid classical+lattice is strictly harder to break than either
alone is a theorem, not a bet.
Peter Fairbrother's privacy question (Jan 7):
"DAG-based BFTs like Bullshark or Mysticeti produce a consensus
blockchain quickly, but the blockchain isn't particularly private."
This was the most technically important critique. The privacy does
not come from the DAG consensus layer -- it comes from the
transaction layer above it:
- LSAG ring signatures [1] over Ristretto points hide the
sender within a ring of decoy inputs. Key images prevent
double-spends without de-anonymization.
- Stealth addresses ensure each payment goes to a unique
one-time address, preventing address clustering.
- Bulletproofs++ [2] range proofs hide transaction amounts.
- Dandelion++ [3] with embedded Tor (4 circuits per validator,
independently rotated) prevents IP-to-transaction correlation.
The DAG-BFT layer orders opaque transactions. Validators verify
validity proofs without learning transaction contents.
Since v3.4.16-beta, privacy proofs are mandatory -- not opt-in.
Every transaction submitted through the node automatically receives
maximum privacy (Bulletproofs + STARK + LatticeGuard) before
broadcast. Users do not choose a privacy level; the highest
available level is applied by default. If proof generation fails
for any reason, the transaction still proceeds but with reduced
privacy -- this is logged as a warning.
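Behaviorally, the fallback path reduces to a guarded call around the
proving pipeline. A sketch (Python; prove_full stands in for the
Bulletproofs + STARK + LatticeGuard pipeline and is not an actual
Quillon function):

```python
import logging

# Sketch of the mandatory-privacy behavior described above: always
# attempt the full proving pipeline; on failure, log a warning and
# let the transaction proceed with reduced privacy. prove_full is
# a placeholder, not a real API.
def apply_privacy(tx, prove_full):
    try:
        return prove_full(tx)
    except Exception as exc:
        logging.warning("privacy proof generation failed (%s); "
                        "proceeding with reduced privacy", exc)
        return tx
```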
The anonymity set is therefore the entire transaction set, not a
subset of privacy-conscious users. This addresses the fundamental
weakness of opt-in privacy systems (Zcash's shielded pool problem)
where the anonymity set is limited to the small fraction of users
who choose to use privacy features.
3. WHAT ACTUALLY HAPPENED ON MAINNET
The network launched February 22, 2026 at 12:00 UTC. Empirical
observations from 31 days of operation:
Aggregate hashrate: ~7 GH/s (BLAKE3+VDF, CPU-only miners)
Block production: Continuous since genesis, no halts
Bootstrap nodes: 4 (geographically distributed)
Connected miners: 316+ (322 at last measurement)
Chain height: ~11.4M blocks
Finality: ~1.4s (1-confirmation)
Reorgs: 0 (DAG structure absorbs concurrent blocks)
Consensus failures: 0
Unplanned outages: 3 (all engineering bugs, all recovered)
The 7 GH/s figure is notable only because it represents organic
adoption from 316+ CPU miners without exchange listings, mining
pools, or marketing. For context, established CPU-mineable
currencies took years to reach comparable hashrates (Monero
launched April 2014, reached ~7 GH/s around 2019-2020 with
RandomX).
The comparison is imperfect -- 2026 has mature mining
infrastructure that 2014 did not. But the speed of adoption
suggests the BLAKE3+VDF algorithm and economic parameters
(2,625,000 QUG/year, 21M max supply, 4-year halving) are
attractive to existing CPU miners.
4. BLAKE3+VDF MINING DETAILS
Since this was mis-stated in January, the precise algorithm:
Input = prev_block_hash || miner_pubkey || nonce || timestamp
h = BLAKE3(Input)
for i in 0..T:
h = BLAKE3(h) // Sequential -- cannot parallelize
valid = (h < difficulty_target)
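For readers who want to experiment, the loop above translates directly
into code. BLAKE3 is not in the Python standard library, so this
sketch substitutes hashlib's BLAKE2b (32-byte digest) purely as a
stand-in; the sequential structure, not the hash choice, is the point:

```python
import hashlib

def H(data):
    # Stand-in for BLAKE3: the Python stdlib has BLAKE2b, not BLAKE3.
    return hashlib.blake2b(data, digest_size=32).digest()

def pow_valid(prev_hash, miner_pubkey, nonce, timestamp, t, target):
    header = (prev_hash + miner_pubkey
              + nonce.to_bytes(8, "little")
              + timestamp.to_bytes(8, "little"))
    h = H(header)
    for _ in range(t):
        h = H(h)          # sequential: each step needs the last output
    return h < target     # byte-wise lexicographic comparison
```

A miner varies the nonce and re-runs the whole chain; no amount of
parallel hardware shortens any single chain of T iterations.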
BLAKE3 was chosen over SHA-3 for:
- 3-5x faster on x86-64 with AVX2/AVX-512 SIMD
- Excellent ARM NEON performance (future target)
- Cache-friendly: the iterated VDF loop fits in L1
The VDF loop (T sequential iterations) provides ASIC resistance.
Custom hardware can optimize the BLAKE3 compression function by
perhaps 2-3x over a modern CPU, but cannot skip iterations. This
bounds the ASIC advantage to a small constant factor, unlike
SHA-256d where ASICs achieve >10,000x over CPUs.
GPU mining is supported (OpenCL kernel for BLAKE3), but the
sequential VDF loop limits effective GPU parallelism. Each GPU
thread runs an independent VDF chain -- more threads help, but
cannot accelerate any single chain.
5. ARCHITECTURAL OVERVIEW
The system is a Rust workspace of ~80 crates. The consensus-critical
components (listing only the relevant subset):
q-dag-knight    DAG-BFT ordering (Sompolinsky et al. [4])
q-vdf           Wesolowski [5] + Pietrzak [6] VDFs
q-types         Block/tx types, signature verification
q-storage       RocksDB blockchain storage
q-network       libp2p gossipsub + Kademlia DHT
Post-quantum and privacy:
q-quantum-crypto   Dilithium5 (pqcrypto-dilithium) + Kyber1024
q-quantum-mixing   LSAG ring sigs, stealth addrs, Bulletproofs++
q-zk-snark         Groth16/PLONK/Marlin (arkworks [7])
q-zk-stark         Custom AIR/FRI STARK prover
q-dandelion        Dandelion++ relay with Tor bridge
q-tor-client       Embedded arti Tor client
q-tor-circuit      4 dedicated circuits per validator
The signature scheme progression:
Phase 0: Ed25519 only (64-byte sigs) — genesis through early chain
Phase 1: Dilithium5 (4,627-byte sigs) — first PQC phase, now deprecated
Phase 2: SQIsign (204-byte sigs) — current default, via FFI to NIST
Round 2 C reference (real isogeny arithmetic for Level I)
The chain is currently in Phase 2 (above 11 million blocks). Phase
transitions are height-gated: blocks below the activation height
validate under old rules; blocks above under new rules. Old blocks
never need re-validation. This is how we maintain crypto-agility
without chain resets -- the answer to Bear's "what happens when the
standard is wrong."
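In code, the gating is just a height comparison at validation time. A
sketch (Python; the block-2,000,000 SQIsign height comes from the text
above, while the Phase 1 height and constant names are illustrative,
not the real protocol constants):

```python
# Height-gated phase selection as described above. PHASE2_SQISIGN
# mirrors the PHASE2_SQISIGN_MANDATORY height from the text; the
# Phase 1 height is illustrative only.
PHASE1_DILITHIUM = 500_000    # illustrative
PHASE2_SQISIGN = 2_000_000    # from the text

def required_scheme(height):
    if height >= PHASE2_SQISIGN:
        return "sqisign"
    if height >= PHASE1_DILITHIUM:
        return "dilithium5"
    return "ed25519"
```

Because the rule depends only on the block's own height, historical
blocks keep validating under the rules in force when they were mined.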
6. KNOWN LIMITATIONS AND OPEN PROBLEMS
Honest assessment of where the system falls short:
(a) The privacy layer uses curve25519-based ring signatures,
which are quantum-vulnerable. Migrating ring signatures to
lattice-based constructions is an open research problem.
Current candidates (e.g., Esgin et al. [8]) produce
signatures that are orders of magnitude larger, which may
be impractical at the ring sizes needed for meaningful
anonymity. This is the single largest technical debt in
the system.
(b) No independent audit of any cryptographic implementation.
The Dilithium5 integration uses the pqcrypto-dilithium
library (which has had some review), but our ring signature,
Bulletproofs++, and STARK implementations are custom and
unaudited. This is a serious limitation.
(c) The compact signature scheme (Phase 2) now wraps the
official NIST Round 2 SQIsign C reference via FFI for
Level I (real isogeny keygen, sign, verify). Levels III
and V still use a hash-based scaffold. The C reference
is vendored and compiled into the binary -- no pure-Rust
SQIsign library exists as of March 2026. Side-channel
hardening of the C signing path (KLPT, quaternion lattice
reduction) remains an open problem tracked upstream.
(d) VDF security parameters have not been formally analyzed.
The RSA group size and iteration count were empirically
tuned. A formal analysis relating these to concrete security
levels is needed.
(e) The DAG-Knight consensus has not been formally verified.
We test extensively (4,000+ tests including adversarial
scenarios), but testing is not proof.
(f) The Tor integration has not been evaluated against a
global passive adversary. The 4-circuit architecture
provides defense-in-depth but has not been analyzed for
traffic correlation resistance under realistic threat
models.
7. WHAT WE LEARNED FROM THIS LIST
The January thread was the most useful technical review the
project has received. Specific changes made in response:
- Clarified finality claims (50ms UX vs 1.4s consensus)
- Separated HNDL and HNFL threat models in documentation
- Completed SQIsign FFI integration (NIST Round 2 C reference,
Level I real isogeny arithmetic, 148-byte signatures)
- Prioritized engineering reliability over feature additions
(Gutmann's Scenario D was persuasive)
- Documented the hybrid architecture as "belt AND suspenders"
not "belt instead of suspenders" (Gilmore's framing)
- Added algorithm replacement infrastructure (Bear's diversity
argument) via height-gated crypto-agility
The network is stronger for this discussion. I welcome continued
criticism.
REFERENCES
[1] J. Liu, V. Wei, D. Wong, "Linkable Spontaneous Anonymous
Group Signature for Ad Hoc Groups," ACISP 2004, LNCS 3108.
[2] L. Eagen, D. Fiore, A. Gabizon, "Bulletproofs++: Next
Generation Confidential Transactions via Reciprocal Set
Membership Arguments," ePrint 2022/510.
[3] G. Fanti et al., "Dandelion++: Lightweight Cryptocurrency
Networking with Formal Anonymity Guarantees," ACM SIGMETRICS
2018.
[4] Y. Sompolinsky, S. Wyborski, A. Zohar, "DAG-Knight: An
Asynchronous Byzantine Fault Tolerant Consensus Protocol,"
ePrint 2022/1494.
[5] B. Wesolowski, "Efficient Verifiable Delay Functions,"
EUROCRYPT 2019, LNCS 11478.
[6] K. Pietrzak, "Simple Verifiable Delay Functions," ITCS 2019,
LIPIcs 124.
[7] arkworks contributors, "arkworks: An Ecosystem for zkSNARKs,"
https://arkworks.rs
[8] M.F. Esgin, R. Steinfeld, D. Sakzad, J.K. Liu, D. Liu,
"Short Lattice-based One-out-of-Many Proofs and Applications
to Ring Signatures," ACNS 2019, LNCS 11464.
Source: https://quillon.xyz
Binary: https://quillon.xyz/downloads/q-api-server-linux-x86_64
-Viktor