A post-big-block paradigm

Big blocks have long been demonized as a suboptimal answer to throughput constraints: they raise the barrier to entry for block production and decrease decentralization in the process. This criticism holds under the assumption that the blockchain’s construction is naïve and offers minimal mechanisms to mitigate the adverse effects of big blocks.

Scaling small blocks

The most common scaling solution that attempts to sidestep the problem of larger blocks is sharding. Sharding maintains a relatively low hardware barrier for block production because each shard can keep small blocks while the network runs many shards. However, this is an imperfect solution when consensus itself is sharded and the network’s validators are split into smaller sets, one per shard. While shard validator sets can be reshuffled at an arbitrary interval, the security of a given shard is ultimately determined by its validator set, which is only a subset of the validators of the entire network, reducing the benefits of decentralization.

Even for blockchains with small blocks and low hardware costs for block production, there is still a bound on the number of validators that can be managed without performance degradation. This is a large factor in determining the minimum staking requirements for validators. In most classical consensus algorithms, message overhead is quadratic (n²), so allowing 1M validators to contribute to consensus would result in roughly 1T messages per round. Given that the minimum staking requirements to become a validator on most blockchains already constitute a high barrier to entry, often over $100k, it makes little sense to target low-cost hardware for block production.
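As a rough illustration of why validator-set sizes stay bounded, the sketch below (in Python, with purely illustrative numbers) shows how an all-to-all voting phase scales. Real protocols have multiple phases and constant factors, but the quadratic growth is the point.

```python
# Sketch: approximate per-round message count for an all-to-all voting
# phase in a classical BFT-style consensus protocol, where overhead
# grows as n^2. Purely illustrative; real protocols have multiple
# phases and constant factors.

def messages_per_round(n_validators: int) -> int:
    """Each validator sends its vote to every other validator."""
    return n_validators * (n_validators - 1)

for n in (100, 300, 1_000, 10_000, 1_000_000):
    print(f"{n:>9,} validators -> ~{messages_per_round(n):.1e} messages/round")

# 1,000,000 validators -> ~1.0e+12 (a trillion) messages per round,
# which is why validator-set sizes stay bounded in practice.
```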

Changing mechanics

Now the problem shifts from decentralized block production to “centralized” block production with anti-censorship mechanisms. Centralization here doesn’t mean a single node responsible for block production, but rather a validator set that looks centralized (100–300 validators) relative to the common idea of decentralization (1,000+). The idea is that block production can be centralized, with big blocks and only a relatively small number of validators able to participate, while making it extremely difficult for those validators to censor transactions or act maliciously.

It is the cost of verification that must remain cheap so that block producers can be held accountable. If verification costs roughly as much as block production, the only parties that can hold block producers responsible are those that already meet the hardware requirements to be block producers themselves. Verification costs can be further reduced by using fraud or validity proofs to check the current state of the chain cheaply and efficiently. Data availability sampling also helps, as it allows light nodes to verify data availability through rounds of random sampling without downloading the entire block. Block verification can therefore remain highly decentralized.
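To make the sampling argument concrete, here is a minimal sketch of the probability calculation behind data availability sampling. It assumes, purely for illustration, that an adversary must withhold at least half of the erasure-coded shares to prevent reconstruction; the exact threshold depends on the coding scheme used.

```python
# Sketch: the intuition behind data availability sampling (DAS).
# Assumption (illustrative): the block is erasure coded so that an
# adversary must withhold at least 50% of shares to make it
# unrecoverable; the real threshold depends on the coding scheme.
# A light node samples k random shares and rejects the block if any
# sample is missing.

def prob_fooled(withheld_fraction: float, samples: int) -> float:
    """Probability that every sampled share happens to be available
    even though enough data is withheld to block reconstruction."""
    return (1.0 - withheld_fraction) ** samples

for k in (5, 10, 20, 30):
    print(f"{k:>2} samples -> chance of being fooled: {prob_fooled(0.5, k):.2e}")

# With ~30 samples the chance of accepting an unavailable block is
# below one in a billion, yet the light node never downloads the whole
# block; verification stays cheap and decentralized.
```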

In effect, reducing the size of the validator set shifts the preference of the consensus mechanics toward ensuring block producers cannot push invalid state transitions, rather than toward keeping the network live in the presence of >1/3 colluding validators. In other words, we move to a system where safety is favored over liveness. This is also exemplified by the design of zkRollups. While sequencers and provers are currently operated by a single entity, they can be split among a committee where each “leader” is assigned a designated role for a specified period. However, high hardware requirements for those two roles are a centralizing force: they cannot be decentralized across a very large number of participants. The solution is to prefer safety over liveness, so that in the event of censorship by either the prover or the sequencer the network halts, and a force-exit mechanism allows users to withdraw funds to the settlement layer, assuming there is at least one honest full node to generate Merkle proofs of account balances. Additionally, proof generation needs to occur on high-end hardware to compute all the complex cryptographic commitments quickly. High-end hardware becomes even more important considering that throughput can be increased by parallelizing proof generation.
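As a sketch of that last escape hatch, the snippet below shows how an account balance could be verified against a state root with a Merkle proof. The leaf encoding, helper names, and tree shape are hypothetical; production rollups use specific trie constructions and hash domains.

```python
# Sketch: verifying a Merkle proof of an account balance against a
# state root posted on the settlement layer, as a force-exit mechanism
# might require. Leaf encoding and names are hypothetical.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_balance_proof(state_root: bytes,
                         leaf: bytes,
                         proof: list[tuple[bytes, str]]) -> bool:
    """Recompute the root from the leaf and its sibling hashes.
    `proof` is a list of (sibling_hash, 'left'|'right') pairs ordered
    from the leaf level up to the root."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == state_root

# Tiny usage example with a two-leaf tree: an honest full node supplies
# the leaf and sibling path; the settlement layer only needs the root.
leaf_a = b"account:0xabc|balance:100"
leaf_b = b"account:0xdef|balance:250"
root = h(h(leaf_a) + h(leaf_b))
assert verify_balance_proof(root, leaf_b, [(h(leaf_a), "left")])
```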

Force-exit mechanisms are adequate, but better solutions can be developed, particularly for blockchains where users are unable to exit during a liveness failure. This is where anti-censorship mechanisms can be built into the network to make it difficult or even impossible for block producers to get away with malicious actions. One such mechanism is a separate payments channel through which transactions can be pushed in the event of censorship. Rollups could employ methods like this to recover from liveness failures and curb the power of centralized sequencers and provers, particularly with respect to MEV, where the sequencer has full control over ordering and the ability to censor transactions.
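One way such a channel could be structured, sketched here with hypothetical names and parameters, is a forced-inclusion queue on the settlement layer: users escrow transactions there directly, and the sequencer must include them within a deadline or they become executable without it.

```python
# Sketch: a minimal model of a forced-inclusion queue, one possible
# shape for a separate anti-censorship channel. All names and
# parameters are hypothetical placeholders.

from dataclasses import dataclass, field

@dataclass
class ForcedInclusionQueue:
    inclusion_window: int = 100  # blocks the sequencer has to comply
    pending: dict[bytes, int] = field(default_factory=dict)  # tx_hash -> submit height

    def submit(self, tx_hash: bytes, current_height: int) -> None:
        """User escrows a censored transaction on the settlement layer."""
        self.pending[tx_hash] = current_height

    def mark_included(self, tx_hash: bytes) -> None:
        """Sequencer shows the transaction made it into a rollup block."""
        self.pending.pop(tx_hash, None)

    def overdue(self, current_height: int) -> list[bytes]:
        """Transactions the sequencer failed to include in time; these
        can now be force-executed or used as evidence of censorship."""
        return [tx for tx, height in self.pending.items()
                if current_height - height > self.inclusion_window]
```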

Consensus sharding has limitations

Rather than continuing under the current consensus sharding paradigm, a related design offers a potentially better solution: node sharding. Node sharding allows the processes of a node to run on a cluster of machines rather than a single one. This is more intuitive given the prospect of smaller validator sets. For example, if a node is sharded across three machines and there is a validator set of 200, the total number of independent machines that can contribute to block production rises to 600.

Node sharding, however, is a solution to scaling rather than decentralization, because it allows the block size to be raised while each machine in the cluster retains the same hardware cost as a node of the pre-sharded chain. The achievable block size increase depends mainly on the number of machines per cluster and the number of data availability sampling light nodes, assuming the blockchain supports data availability sampling. Because node sharding doesn’t shard the blockchain itself, the validator set isn’t fragmented across multiple shards, so the overall security of the validator set and the network is maintained.
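A back-of-the-envelope model of that relationship is sketched below. Every scaling factor in it is a hypothetical placeholder rather than a protocol specification; the point is only that the block size budget grows with cluster size while being capped by sampling capacity.

```python
# Sketch: a toy model of how node sharding could let the block size
# grow while per-machine cost stays flat. All numbers and scaling
# relationships are hypothetical placeholders, not a specification.

def target_block_size_mb(base_block_size_mb: float,
                         machines_per_cluster: int,
                         sampling_light_nodes: int,
                         base_light_nodes: int = 1_000) -> float:
    """Scale the block budget with cluster size, capped by the sampling
    capacity contributed by DAS light nodes (modelled here, purely for
    illustration, as linear in the light-node count)."""
    compute_bound = base_block_size_mb * machines_per_cluster
    sampling_bound = base_block_size_mb * (sampling_light_nodes / base_light_nodes)
    return min(compute_bound, sampling_bound)

# Three machines per cluster triple the per-machine budget (6 MB), and
# 5x the light nodes keep the sampling bound (10 MB) above it.
print(target_block_size_mb(2.0, machines_per_cluster=3, sampling_light_nodes=5_000))
```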

Conclusion

  1. Centralizing block production with big blocks is inevitable, given how difficult it is to scale small blocks on low-cost hardware and that the minimum staking requirement to become a validator is already extremely high in most cases. There is therefore little pragmatic reason to retain the small-block architecture. To accommodate this paradigm without compromising the network, anti-censorship mechanisms can be used to mitigate the security risks.
  2. Block verification will remain highly decentralized, with a low hardware cost, so that block producers can be held accountable if they attempt to collude or act maliciously. This is aided by succinct proofs and data availability sampling.
  3. Fragmenting validator sets with consensus sharding imposes limits and sacrifices some of the benefits of decentralization. A potentially superior alternative is node sharding, where a node’s processes run on a cluster of machines, allowing the block size to be increased while the validator set remains unfragmented.