Bridging the Multichain Universe with Zero Knowledge Proofs
Ingonyama - Zero Knowledge Proof Hardware Acceleration
Ingonyama is a next-generation semiconductor company focused on finding and solving computational bottlenecks in ZKP.
Originally published on Ingonyama Blog
Bridges are communication protocols that facilitate the transfer of information such as messages, funds or other data between blockchains. While useful, building bridges is a risky business: some of the most expensive hacks in blockchain history have targeted bridges.
As of 2022, it is estimated that 69% of the funds lost in the past year were due to attacks on bridges, resulting in losses amounting to billions of dollars.
In this article, we focus on specific implementations of bridge constructions using Zero Knowledge Proofs (ZKPs). While using ZKPs does not by itself prevent every such hack, the soundness of a ZKP extends the security of the blockchain consensus protocols to the bridge.
Bridges and Zero Knowledge Proofs
In recent years we have seen tremendous progress in applications of Zero Knowledge Proofs (ZKPs) for rollups, where soundness properties allow for secure and decentralized applications. It therefore makes sense that ZKPs are also being explored to formulate bridge constructions. In the following, we review and compare three interesting developments in this field:

- Succinct Verification of Proof of Consensus (Succinct Labs)
- Bringing IBC to Ethereum (Electron Labs)
- zkBridge (Berkeley RDI)
These projects leverage the properties of zk-SNARKs to redefine how bridges should be designed. All of them assume there exists a light client protocol that ensures nodes can synchronize block headers of a finalized blockchain state. The two main challenges in applying the ideas behind ZKP rollups to bridges are, first, that the circuit sizes involved in bridges are orders of magnitude larger than in rollups, and second, reducing storage and computational overhead on-chain.
Succinct Verification of Proof of Consensus (Succinct Labs)
Succinct Labs has built a light client for Ethereum 2.0 proof-of-stake consensus to construct a trust-minimized bridge between Gnosis and Ethereum, which uses the succinctness property of zk-SNARKs (not zero knowledge) to efficiently verify consensus-validity proofs on-chain.
The setup consists of a sync committee of 512 Ethereum validators, randomly chosen every 27 hours. These validators are required to sign every block header during their period, and if more than 2/3 of the validators sign off on a block header, the state of Ethereum is deemed valid. The validation process essentially consists of verifying that the signers belong to the current sync committee and that more than 2/3 of them have signed the block header.
Verification of the above requires storing 512 BLS public keys on-chain every 27 hours, and each header verification requires checking the signatures, which amounts to 512 elliptic-curve point additions (on the curve BLS12-381) and a pairing check on-chain; this is cost prohibitive. The core idea here is to use a zk-SNARK (Groth16) to produce a validity proof, which is constant-size and can be efficiently verified on-chain on Gnosis.
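To make the on-chain cost concrete, here is a toy sketch of the aggregation step, in which validator public keys are combined by elliptic-curve point addition. This is a minimal sketch under stated assumptions: the tiny curve y^2 = x^3 + 2x + 3 over GF(97) stands in for BLS12-381, the key values are arbitrary, and the final pairing check is omitted.

```python
# Toy sketch of BLS-style public-key aggregation (assumptions: tiny
# illustrative curve, arbitrary secret keys, pairing check omitted).
P_MOD = 97
A = 2
G = (3, 6)  # a point on y^2 = x^3 + 2x + 3 mod 97 (6^2 = 36 = 27 + 6 + 3)

def ec_add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = infinity
    if p1 == p2:
        if y1 == 0:
            return None  # doubling a 2-torsion point gives infinity
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mul(k, p):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, p)
        p = ec_add(p, p)
        k >>= 1
    return acc

# 512 validator public keys pk_i = sk_i * G (sk values are arbitrary)
sks = [7 * i + 1 for i in range(512)]
pks = [ec_mul(sk, G) for sk in sks]

# Aggregating the keys takes 511 point additions -- the on-chain work
# that the Groth16 validity proof moves off-chain.
agg = None
for pk in pks:
    agg = ec_add(agg, pk)

assert agg == ec_mul(sum(sks), G)  # aggregation is homomorphic
```

The point is the cost structure: aggregating 512 keys takes 511 curve additions (plus a pairing check in the real scheme), which is exactly the work the SNARK removes from the chain.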
The Ethereum light client uses a Solidity smart contract on the Gnosis chain, while the off-chain computation consists of constructing circom circuits for the verification of the validators and their BLS signatures, and then computing the zk-SNARK proof. Following this, the block headers and the proof are submitted to the smart contract, which performs the verification on the Gnosis chain. The circuit sizes and proving times of the SNARK part of the computation are summarized below:
Optimizations include committing to the 512 public-key (PK) inputs of the validators with a ZK-friendly Poseidon hash. The Poseidon hash addresses the storage-overhead problem and reduces circuit sizes. The circuit-size reduction works as follows: the trusted committee is updated every 27 hours, and the previous committee signs the new committee via an SSZ (Simple Serialize) digest that employs sequences of SHA-256. Instead of using this directly in the SNARK, which creates large circuits (each bitwise operation takes a gate, and SHA-256 involves a large number of bitwise operations), a commitment to the current PKs is made with the Poseidon hash, yielding a SNARK-friendly representation of the corresponding circuit.
Bottomline: This bridging method is quite specific to its application (consensus-protocol dependent) and derives its security from the soundness property of the zk-SNARK. With the optimizations, it achieves low storage overhead, reduced circuit complexity, and succinct verification, and it appears generalizable. Moreover, the use of a zk-SNARK lowers the trust assumptions, which is ultimately what we are looking for.
Bringing IBC to Ethereum (Electron Labs)
Electron Labs aims to construct a bridge from the Cosmos SDK ecosystem (a framework for application-specific blockchains), which uses IBC (Inter-Blockchain Communication) to communicate across all sovereign blockchains defined in the framework.
This setup is similar to the case discussed earlier, but in the reverse direction: a light client (from the Cosmos SDK) needs to be verified within a smart contract on Ethereum. In practice, running a light client from another blockchain on Ethereum is challenging. In the Cosmos SDK, the Tendermint light client operates on the twisted Edwards curve used by Ed25519, which is not natively supported on Ethereum. Thus on-chain verification of Ed25519 signatures on Ethereum (BN254) becomes inefficient and cost prohibitive.
Similar to our earlier discussion, every block header in the Cosmos SDK is signed off by a set of validators; each block header carries about 128 EdDSA signatures on the curve ed25519 (32 high-stake signatures are required to validate a block). Verifying these signatures generates large circuits, which is a significant computational component. The basic question, then, is how to verify ed25519 signatures from any blockchain in the Cosmos SDK efficiently and cheaply on Ethereum. The solution is to construct a zk-SNARK that proves signature validity off-chain and to verify only the proof itself on the Ethereum chain.
The circom library supports the curves BN128, BLS12-381, and Ed448-Goldilocks. To perform modular arithmetic on the ed25519 curve, with prime p = 2^255 - 19, field elements are broken into smaller 85-bit integers (85 × 3 = 255) for efficient modular arithmetic. The circuit generated by circom is an R1CS representation of the ed25519 signature-verification circuit, consisting of elliptic-curve point additions/doublings with the modular arithmetic defined above. The signature-verification circuit is constructed using the circom library and leads to roughly 2M constraints per signature verification.
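Outside the circuit, the limb decomposition can be sketched as follows. The helper names are ours, and the real circom circuit additionally enforces range checks and carry propagation as R1CS constraints.

```python
# Sketch of non-native field arithmetic via 85-bit limbs: a 255-bit
# element of GF(2^255 - 19) is split into three limbs so that each
# partial product stays small enough for the SNARK's native field.
P = 2**255 - 19
LIMB = 85
MASK = (1 << LIMB) - 1

def to_limbs(x):
    """Split x < 2^255 into three 85-bit limbs, least significant first."""
    return [(x >> (LIMB * i)) & MASK for i in range(3)]

def from_limbs(limbs):
    """Recombine limbs into the integer they represent."""
    return sum(l << (LIMB * i) for i, l in enumerate(limbs))

def limb_mul(a, b):
    # Schoolbook multiplication of limb vectors; in-circuit, each
    # partial product a_i * b_j becomes a multiplication constraint.
    prod = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    # Recombine and reduce mod p (done with carry constraints in-circuit).
    return sum(p_k << (LIMB * k) for k, p_k in enumerate(prod)) % P

x = 2**200 + 12345  # arbitrary field elements for illustration
y = 2**250 + 67890
assert from_limbs(to_limbs(x)) == x
assert limb_mul(to_limbs(x), to_limbs(y)) == (x * y) % P
```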
Following witness computation, a Groth16 proof of ed25519 signature verification is generated with the Rapidsnark library. Unlike BLS signatures, ed25519 signatures are not aggregatable, so a single zk-SNARK proof over aggregated signatures cannot be produced. Instead, signatures are verified in batches, and proving time is observed to scale linearly with the number of signatures in a batch.
Thus, decreasing the number of signatures per batch lowers proving time (reducing latency) but increases cost (gas fees), because more proofs must be generated and verified per block.
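A back-of-the-envelope model of this latency/gas tradeoff; all constants are hypothetical placeholders rather than measured figures.

```python
# Hypothetical cost model for batching signature proofs.
# All constants below are placeholder assumptions, not measurements.
PROOF_TIME_PER_SIG = 2.0        # seconds of proving per signature (assumed linear)
GAS_PER_PROOF_VERIFY = 230_000  # gas per on-chain Groth16 verification (assumed)
TOTAL_SIGS = 128                # signatures per Cosmos block header

def latency_and_gas(batch_size):
    """Return (per-batch proving latency in s, total gas for one header)."""
    proofs = -(-TOTAL_SIGS // batch_size)       # ceil division: proofs needed
    latency = PROOF_TIME_PER_SIG * batch_size   # time to prove one batch
    gas = proofs * GAS_PER_PROOF_VERIFY         # each proof verified on-chain
    return latency, gas

# Smaller batches: lower latency per proof, but more proofs to verify.
assert latency_and_gas(32)[0] < latency_and_gas(128)[0]   # latency drops
assert latency_and_gas(32)[1] > latency_and_gas(128)[1]   # gas cost rises
```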
Bottomline: This bridging method is also specific to its application and enjoys the security level afforded by the soundness of the zk-SNARK proof. In particular, it verifies the ed25519 signatures of the Tendermint light client on Ethereum without introducing any new trust assumptions. The out-of-field modular arithmetic is a valuable optimization for the on-chain verification computation. One technical issue, shared with the Succinct Labs approach, is latency: the block production rate in the Cosmos SDK is ~7 seconds, and to keep up with it the prover time must be significantly lowered. Electron Labs has proposed parallelizing the computation across multiple machines to generate proofs at the block production rate, then using recursion to produce a single zk-SNARK proof.
zkBridge (Berkeley RDI)
Unlike the other two industry-led ZKP bridge constructions, zkBridge is a framework on top of which several applications can be built. The idea is similar to the two approaches discussed earlier and requires a light client and smart contracts on both chains that keep track of the digest corresponding to the most recent state on either side. The core components of the bridge are a block-header relay network, an updater contract, and application-specific contracts (sender: SC1, receiver: SC2).
The block-header relay network consists of relay nodes that listen for state changes on the bridged chains and retrieve block headers from full nodes. The main functionality of a relay node is to generate a ZKP attesting to the correctness of the block headers from one chain and to relay it to the updater contract on the other chain. The updater contract verifies and either accepts or rejects proofs from nodes in the relay network. The main difference between the industry-led approaches and zkBridge is that the trust assumption is essentially reduced to the existence of one honest node in the relay network, plus the soundness of the zk-SNARK.
A key innovation in this construction is a parallelized version of the Virgo zk-SNARK prover, deVirgo, which has succinct verification/proof size and does not require a trusted setup. The motivation is that a circuit verifying N signatures essentially consists of N copies of an identical sub-circuit, known as a data-parallel circuit, with each sub-circuit independent of the rest. This is the case, for instance, in the ed25519 signature verification discussed in the previous section.
The core component of the Virgo prover is a zero-knowledge extension of the GKR protocol, which runs sum-check arguments for each sub-circuit of the layered circuit together with a polynomial commitment scheme. The deVirgo generalization essentially runs the Virgo prover across a set of relay nodes and avoids linear growth of the proof size by aggregating the proofs and polynomial commitments at a master node.
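The per-sub-circuit sum-check at the heart of GKR-style provers can be sketched as follows, for a multilinear polynomial given by its evaluation table on the Boolean hypercube. This is a simplified interactive version: a real system replaces the live randomness with Fiat-Shamir and replaces the final oracle evaluation with a polynomial commitment opening.

```python
import random

# Minimal sum-check sketch for a multilinear polynomial over a prime
# field, represented by its 2^n evaluations on {0,1}^n. The field below
# is an illustrative choice, not the one used by Virgo.
P = 2**61 - 1

def sumcheck(table, p=P, rng=random):
    """Interactively verify that sum(table) is the hypercube sum of f."""
    vals = [v % p for v in table]
    claim = sum(vals) % p              # prover's claimed sum
    transcript = []
    while len(vals) > 1:
        half = len(vals) // 2
        g0 = sum(vals[:half]) % p      # g_i(0): first free variable = 0
        g1 = sum(vals[half:]) % p      # g_i(1): first free variable = 1
        assert (g0 + g1) % p == claim  # verifier's round check
        r = rng.randrange(p)           # verifier's random challenge
        transcript.append(r)
        # Fold the first variable to r: f(r, .) = (1-r)*f(0, .) + r*f(1, .)
        vals = [((1 - r) * vals[i] + r * vals[half + i]) % p
                for i in range(half)]
        claim = ((1 - r) * g0 + r * g1) % p  # new claim: g_i(r)
    # Final check: oracle evaluation of f at the random point.
    assert vals[0] == claim
    return transcript

# 3 variables, 8 evaluations; one challenge per variable.
challenges = sumcheck([3, 1, 4, 1, 5, 9, 2, 6])
```

Because the polynomial is multilinear, each round's univariate polynomial has degree 1 and is fully described by its values at 0 and 1, which is what keeps the per-round communication small.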
For a circuit that validates 100 signatures with about 10M gates, the proof size is 210KB (the same as the Virgo prover's). zkBridge uses a two-step recursion: in the first step, a deVirgo proof is generated, which is then compressed using the Groth16 prover, yielding a Groth16 proof that the deVirgo proof verifies correctly. The main purpose of the recursion is to achieve succinctness (proof size) and reduce verification gas costs.
The relay network then submits the Groth16 proof to the updater contract, which verifies it on-chain. The deVirgo proof system is plausibly post-quantum secure since it relies only on collision-resistant hash functions, and its main computational bottlenecks are Number Theoretic Transforms (NTTs) in large circuits. One thing that seems to have escaped mention is that the relay-network computation suffers the same communication complexities as MPC, which also affects prover time. The GKR multilayered sum-check protocol has a communication complexity of O(N log_2(gates per layer)) for N machines in the relay network. Even in the 32-signature case, with 32 machines in the relay network, this leads to a relatively large number of rounds of communication, which might erase the performance gains from distributed computation.
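Plugging illustrative numbers into this bound (the per-layer gate count is our assumption, not a figure from the paper):

```python
import math

# Rough message-count estimate for the distributed sum-check, using the
# O(N * log2(gates per layer)) complexity quoted above. The gate count
# per layer (2^20) is an illustrative assumption.
def sumcheck_messages(n_machines, gates_per_layer):
    return n_machines * math.ceil(math.log2(gates_per_layer))

# 32 relay machines, ~1M gates per layer: 32 * 20 = 640 messages per layer,
# which illustrates why network communication can dominate prover time.
msgs = sumcheck_messages(32, 2**20)
assert msgs == 640
```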
The problem of verifying ed25519 signatures in the Cosmos SDK-Ethereum light client, discussed earlier, is addressed using this approach. The bridge consists of a relay network that fetches the Cosmos block headers and runs distributed proof generation to produce a deVirgo proof. A Gnark adaptation of the optimized signature-verification circuit (for out-of-field arithmetic) designed by Electron Labs then generates the Groth16 proof in the second step of the recursion.
The updater contract is implemented in Solidity on Ethereum and keeps track of the Cosmos block headers and the relay network's Groth16 proof. Verification costs a constant <230K gas, owing to the constant size of the Groth16 proof. Furthermore, it is possible to batch the verification of B consecutive block headers and generate a single proof for all B headers. Increasing the batch size increases prover time but reduces gas cost, since the on-chain verification burden shrinks. As before, hardware acceleration is likely to further improve the Gnark prover as well.
Bottomline: zkBridge is a framework for building applications on top of the bridge. The design uses a relay network for ZKP generation and has the least trust assumptions of the three. As long as the MPC-like communication complexities in the relay network can be overcome, any parallelizable ZK prover can be used. More specifically, leaving aside the MPC complexity of the deVirgo relay network, NTTs are the bottleneck in the individual Virgo prover component of the relay nodes.
A quick comparison:
Below we provide a quick comparison of the various features of the three bridge constructions discussed in this article.
In summary, using ZKPs to design bridges addresses the problems of decentralization and security, but creates a computational bottleneck due to large circuit sizes.
The computational overhead can be ameliorated with hardware acceleration, while the use of SNARKs, together with tricks for committing to public data, can reduce storage overhead. Since much of the bridge workload is proving data-parallel circuits, generalizations of ZKPs for parallelism, such as deVirgo, are valuable directions for research.
Furthermore, since the blockchains in the multichain universe are defined over a wide variety of domains (fields, curves) depending on the application, optimizations for in-field and out-of-field arithmetic are vital building blocks at the lowest level. Parallelism in proof generation via MPC brings its own bottlenecks in communication complexity, which remain open issues.
Why is the Multi-Chain Universe Fragmented?
The current state of the blockchain ecosystem resembles a heterogeneous distribution of bubble universes (a fragmented multichain universe), each with its own consensus mechanism, design, applications, and use cases. As of the time of writing, there are more than 100 layer-1 (L1) blockchain protocols, each with a growing number of users, and with increasing use cases for blockchains this number is likely to grow.
The Blockchain trilemma states that it is hard to simultaneously achieve the three cornerstones of an ideal blockchain: decentralization, security, and scalability.
Depending on the use case, the order of importance of the three cornerstones may vary, in addition to throughput and cost. Different tradeoffs in the trilemma can be pictured as morphing the triangle while keeping its area fixed: as two corners approach each other, the third moves further away. These tradeoffs lead to different conceptualizations of blockchains, giving developers the freedom to choose the platform that suits their application. The result is the fragmented multichain universe, where each blockchain essentially operates in isolation, oblivious to the existence of the others.
Interchain communication in the multichain universe, often referred to as the interoperability layer, is a foundational infrastructure that acts as a bridge between different blockchains. Bridges enable users to communicate messages between chains, including digital assets (cryptocurrencies), chain state, contract requests, proofs, and more. In short, cross-chain bridges "defragment" the fragmented multichain universe. Hence there is a lot of research and development focused on building this critical component, and as of the time of writing, there are several active cross-chain bridge projects.
Building bridges
A bridge is a two-way communication protocol that proves the occurrence of events on one chain, C1, to applications on another chain, C2, and vice versa. For simplicity we use the terminology origin chain (C1) and target chain (C2), though the roles are interchangeable. A state change on C1 has to be verified "on-chain" on C2. This is typically done by a light client: a contract on C2 that keeps track of a set of block headers from C1 and verifies them with a Merkle proof against a root submitted from the origin chain. In general, C1 and C2 may operate over different domains, so verification requires out-of-field arithmetic. Moreover, since the list of headers keeps growing, the client must store and verify new headers as they arrive. This leads to significant computational and storage overheads and is in general inefficient. To bypass this problem, many bridge constructions have taken a more centralized approach.
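The light-client check described above can be sketched as follows; SHA-256, the leaf encoding, and a power-of-two leaf count are illustrative simplifications of what a real header/receipt trie uses.

```python
import hashlib

# Sketch of the light-client pattern: C2 stores a Merkle root committed
# from C1 and checks inclusion proofs for C1 events against it.
def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a balanced Merkle tree over a power-of-two leaf count."""
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves, idx):
    """Sibling hashes from leaf idx up to the root."""
    layer = [h(l) for l in leaves]
    proof = []
    while len(layer) > 1:
        proof.append(layer[idx ^ 1])  # sibling at this level
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return proof

def verify(root, leaf, idx, proof):
    """The check the C2 contract performs for a claimed C1 event."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if idx % 2 == 0 else h(sibling + node)
        idx //= 2
    return node == root

events = [b"ev0", b"ev1", b"ev2", b"ev3"]  # events emitted on C1
root = merkle_root(events)                 # root submitted to C2
assert verify(root, b"ev2", 2, merkle_proof(events, 2))
assert not verify(root, b"evX", 2, merkle_proof(events, 2))
```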
Achilles' heel: a light-client protocol with a small set of trusted validators signing state changes.
This is typically the case for transfers of funds, where substantial trust assumptions are placed on the centralized bridging entity, usually a small number of trusted parties. Notwithstanding the fact that this goes against the very founding principles of blockchains, it brings with it issues of censorship and security.
The main source of security vulnerabilities is the way a bridge acts as a centralized storage unit. Most existing bridges (for liquidity) operate via a Lock-Mint-Burn-Release mechanism. A typical user interacts with a bridge by sending funds on chain C1 to the bridge protocol, which "locks" these funds into a contract, i.e., the funds become unusable on C1. The bridge then allows the user to mint equivalent funds on another blockchain, C2. Once the user spends some funds and wishes to return the remainder to C1, they "burn" the funds on C2, which the bridging entity verifies before "releasing" the remaining funds on C1. In such an interchain bridge, a substantial amount of funds can be sitting in a bridge whose security relies on a small number of trusted parties, making it an active target for attacks.
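The Lock-Mint-Burn-Release flow can be modeled as a toy state machine, with plain dictionaries standing in for the bridge contracts on C1 and C2. This is purely illustrative; a real bridge gates each step on verified signatures or proofs.

```python
# Toy model of the Lock-Mint-Burn-Release mechanism described above.
class ToyBridge:
    def __init__(self):
        self.locked_c1 = {}  # funds locked in the C1 bridge contract
        self.minted_c2 = {}  # wrapped funds minted on C2

    def lock(self, user, amount):
        # Step 1: user deposits funds into the C1 contract.
        self.locked_c1[user] = self.locked_c1.get(user, 0) + amount

    def mint(self, user):
        # Step 2: bridge verifies the lock event, mints wrapped funds on C2.
        self.minted_c2[user] = self.locked_c1.get(user, 0)

    def burn(self, user, amount):
        # Step 3: user burns wrapped funds on C2 to go back to C1.
        assert self.minted_c2.get(user, 0) >= amount
        self.minted_c2[user] -= amount

    def release(self, user, amount):
        # Step 4: bridge verifies the burn event, releases funds on C1.
        assert self.locked_c1.get(user, 0) >= amount
        self.locked_c1[user] -= amount
        return amount

b = ToyBridge()
b.lock("alice", 100)   # 100 locked on C1 ...
b.mint("alice")        # ... 100 wrapped funds minted on C2
b.burn("alice", 40)    # alice spent 60 on C2, burns the remaining 40
assert b.release("alice", 40) == 40
assert b.locked_c1["alice"] == 60  # 60 remains backing the spent funds
```

The security problem is visible in the model: `locked_c1` concentrates real funds in one contract, and anyone who can forge the verification step in `mint` or `release` can drain it.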
To summarize, the main technical challenges in building bridges are minimizing trust assumptions (avoiding small trusted validator sets) and reducing the computational and storage overhead of on-chain verification.
Acknowledgements:
We would like to thank?Kobi Gurkan,?John Guibas,?Uma Roy, and?Garvit Goel?for comments and suggestions.
Follow our Journey
Twitter:?https://twitter.com/Ingo_zk
Github:?https://github.com/ingonyama-zk
YouTube:?https://www.youtube.com/@ingo_zk
Join us:?https://www.ingonyama.com/careers