A Bird’s Eye View of PLONK
We did all that work to explore the mathematical components behind PLONK and other proof systems, so let’s fit them together to see how they interact.
The starting point of any proof system is the same: "What computation do we need to know has been done correctly?" Once we have that, we can turn the computation into an idealized circuit.
In PLONK, we use Plonkish arithmetization to turn the idealized circuit into a detailed zk-circuit. We now have addition and multiplication gates connected by wires to represent our computation. Some of the wire values are fixed in advance, but some will have to be inserted by the prover.
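As a rough sketch of what such a circuit looks like in code (the gate and wire names here are made up for illustration and are not PLONK's actual data structures), here is a tiny circuit computing out = x * y + 3, where the constant 3 is fixed by the circuit and x and y are left for the prover to fill in:

```python
from dataclasses import dataclass

@dataclass
class Gate:
    op: str      # "add" or "mul"
    left: str    # name of the left input wire
    right: str   # name of the right input wire
    out: str     # name of the output wire

# The circuit layout is public and known in advance.
circuit = [
    Gate("mul", "x", "y", "t"),       # t = x * y
    Gate("add", "t", "three", "out"), # out = t + 3
]

# Wires whose values are fixed by the circuit itself.
fixed_wires = {"three": 3}

def evaluate(circuit, fixed_wires, private_wires):
    """Fill in every wire by executing the gates in order."""
    wires = {**fixed_wires, **private_wires}
    for g in circuit:
        a, b = wires[g.left], wires[g.right]
        wires[g.out] = a * b if g.op == "mul" else a + b
    return wires

# The prover inserts the undecided wire values.
print(evaluate(circuit, fixed_wires, {"x": 4, "y": 5}))
# {'three': 3, 'x': 4, 'y': 5, 't': 20, 'out': 23}
```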
Once the prover has filled in those values, we can list all the left input wire values of the gates and represent that list as a polynomial. We can do that for the right input wires too, as well as the output wires. Then we can commit to the polynomials using our PCS. Now we can make proofs about our computation by opening commitments (called querying, since we query polynomials to get their openings). And this will keep all our data secret thanks to the DLOG assumption.
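The exact PCS is covered elsewhere, but as a toy illustration of how a commitment can hide values while remaining binding under DLOG-style hardness, here is a Pedersen-style commitment to a list of polynomial coefficients. This is not the commitment scheme PLONK actually uses, it only supports revealing everything rather than opening the polynomial at a single point, and the group is far too small to be secure; every parameter below is made up for illustration.

```python
import secrets

# Toy group: p = 2q + 1 with q prime; we work in the subgroup of squares
# mod p, which has prime order q. These sizes are wildly insecure and are
# chosen only so the numbers stay readable.
p, q = 2039, 1019

def gen(seed):
    """Derive an element of the order-q subgroup (a square mod p)."""
    return pow(seed, 2, p)

h = gen(3)                           # generator used for the blinding term
gs = [gen(5 + i) for i in range(4)]  # one generator per coefficient

def commit(coeffs, r):
    """Pedersen-style commitment: C = h^r * prod(g_i^{a_i}) mod p."""
    c = pow(h, r, p)
    for g, a in zip(gs, coeffs):
        c = (c * pow(g, a % q, p)) % p
    return c

coeffs = [7, 0, 13, 2]        # pretend these are a polynomial's coefficients
r = secrets.randbelow(q)      # random blinding factor keeps the coefficients hidden
C = commit(coeffs, r)
print("commitment:", C)

# Later, the committer can reveal coeffs and r, and anyone can recheck:
assert commit(coeffs, r) == C
```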
Lastly, we use the openings of our commitments, together with elliptic curve cryptography, to check that they satisfy the expected relationship. If they do, then by the Schwartz-Zippel Lemma we can conclude that the computation was indeed valid.
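To see why checking a relationship at a single opening is convincing, here is a small sketch of the Schwartz-Zippel idea: if two low-degree polynomials over a large field were actually different, they could only agree at a randomly chosen point with tiny probability, so agreement at a random point is strong evidence that the relation holds everywhere. The field size and polynomials below are made up for illustration.

```python
import random

# A toy prime field; real systems use primes of roughly 256 bits.
P = 2**31 - 1

def ev(coeffs, x):
    """Evaluate a polynomial (coefficients listed lowest degree first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# Suppose the relation we expect is a(x) * b(x) == c(x) for all x.
a = [3, 1]           # a(x) = 3 + x
b = [2, 0, 5]        # b(x) = 2 + 5x^2
c = [6, 2, 15, 5]    # c(x) = 6 + 2x + 15x^2 + 5x^3, which equals a(x) * b(x)

r = random.randrange(P)  # the random "query" point
lhs = ev(a, r) * ev(b, r) % P
rhs = ev(c, r) % P

# By Schwartz-Zippel, if the relation did NOT hold as polynomials, the two
# sides could agree at a random r with probability at most deg/P (here 3/P).
print("relation holds at r:", lhs == rhs)
```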
Before diving into the systems, we briefly need to explore one concept that will help us understand efficiency as it relates to circuits: the constraint count.
The constraint count determines how big a circuit is. The bigger the circuit, the longer it takes to generate the proof and the more data is required to store it. Lower constraint counts mean smaller circuits, faster proving (proof generation) times, and less data needed to store the proof.
During the arithmetization process, the idealized circuit is turned into addition and multiplication gates, and each gate adds 1 to the constraint count. Generally speaking, a low constraint count is good and a high constraint count is bad, so most circuit developers aim for a minimal constraint count. An optimal circuit should contain enough constraints to verify a computation in a secure and valid way, but not so many that proving time, storage, and computational requirements become too burdensome.
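As a toy illustration of this bookkeeping (not PLONK's real cost model), consider representing x^3 + x + 5 with addition and multiplication gates; each gate contributes one constraint:

```python
# A tiny circuit computing x^3 + x + 5, where every addition or
# multiplication gate adds 1 to the constraint count.
# Gate format: (op, left wire, right wire, output wire).
gates = [
    ("mul", "x", "x", "x2"),      # x2  = x * x
    ("mul", "x2", "x", "x3"),     # x3  = x2 * x
    ("add", "x3", "x", "t"),      # t   = x3 + x
    ("add", "t", "five", "out"),  # out = t + 5
]

constraint_count = len(gates)
print("constraint count:", constraint_count)  # 4
```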
Remember that circuits are reusable: both the prover and the verifier know the layout and logic of the circuit in advance. The circuit will contain many wires with known values and some wires with undecided values. The prover puts private information into those undecided wires, for example their private key, their address, or who they are voting for. In the backend, the prover takes the circuit, fills in their private values, and makes a proof; the verifier then takes the original circuit and checks whether the proof behaves as it is supposed to. Because circuits may be reused millions or even billions of times, getting the constraint count right is very important.
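A minimal sketch of that reuse (with made-up wire names, continuing the toy circuit from above): the circuit layout is fixed and public, and only the undecided wires change from one prover to the next.

```python
# One public circuit, reused with different private inputs.
# Gate format: (op, left wire, right wire, output wire).
CIRCUIT = [
    ("mul", "x", "y", "t"),        # t = x * y
    ("add", "t", "three", "out"),  # out = t + 3
]
FIXED = {"three": 3}  # wire values fixed by the circuit

def run(private):
    """Fill in the prover's private wires and execute the gates in order."""
    wires = {**FIXED, **private}
    for op, l, r, o in CIRCUIT:
        wires[o] = wires[l] * wires[r] if op == "mul" else wires[l] + wires[r]
    return wires["out"]

# Two different provers reuse the exact same circuit with their own secrets.
print(run({"x": 4, "y": 5}))  # 23
print(run({"x": 7, "y": 6}))  # 45
```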
The constraint count also varies depending on the type of arithmetization you use. For example, to represent the computation of voting, we could use Plonkish arithmetization or binary arithmetization (what computers do), and each would result in a different constraint count. Some types of problems are better suited to Plonkish arithmetization, and others are better suited to other methods.
The prover will fill in all the empty wire values using their private data. With that done, we have a filled-out circuit. We then list all the left input wires from the gates (a in the image below represents all the left input wires), and get something like $a_1, a_2, \ldots, a_n$. We represent this list as a polynomial by using the values as coefficients. As a result, we get:

$$a(x) = a_1 + a_2 x + a_3 x^2 + \cdots + a_n x^{n-1}.$$
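As a small sketch with made-up wire values, here is how that list of left input wire values becomes a polynomial when used as coefficients, and how the resulting polynomial can be evaluated, for example at a query point:

```python
# Toy left input wire values a_1, ..., a_n (made up for illustration).
left_wires = [4, 7, 1, 9]

def a_poly(x, coeffs=left_wires):
    """a(x) = a_1 + a_2*x + a_3*x^2 + ... + a_n*x^(n-1), via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

print(a_poly(0))  # a_1 = 4
print(a_poly(2))  # 4 + 7*2 + 1*4 + 9*8 = 94
```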