ZPoW #1 - Exploiting The Block Time & Block Size

The distributed ledger is the cornerstone of any blockchain technology and has proven its usefulness over the years. A distributed ledger is a consensus of replicated, shared, and synchronized data spread across different nodes. In general, it requires a computing and networking infrastructure and a consensus algorithm so that the ledger can be reliably replicated across the nodes.

A blockchain appends new blocks, of fixed or variable size, to the distributed ledger at a rate determined by the block time. The block time is the time it takes for the miners or validators in a network to verify transactions and produce a block that is added to the blockchain.
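In practice, block time is usually estimated empirically from the timestamps of consecutive block headers. The following is a minimal sketch of such an estimate; the timestamps shown are hypothetical, not taken from any real chain.

```python
def average_block_time(timestamps: list[int]) -> float:
    """Estimate the average block time (in seconds) from the Unix
    timestamps of consecutive block headers."""
    if len(timestamps) < 2:
        raise ValueError("need at least two block timestamps")
    # Elapsed time over the window divided by the number of intervals.
    return (timestamps[-1] - timestamps[0]) / (len(timestamps) - 1)

# Hypothetical timestamps of five consecutive blocks (~12 s apart).
print(average_block_time([1700000000, 1700000012, 1700000025,
                          1700000036, 1700000048]))  # -> 12.0
```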

Optimizing block time and block size is an important problem because together they determine the network's throughput, i.e., the number of transactions per second, and thus directly affect the performance and scalability of applications. The two parameters are in tension: larger blocks carry more transactions, but they need more time to propagate across the distributed infrastructure and can limit the decentralization of the network.

The optimal selection of the block size and block time parameters is not a new research topic. On the contrary, it has been discussed since the early days of Bitcoin and Ethereum [1], which took different approaches to the problem. Bitcoin uses a 10-minute block time and a 1 MB block size, whereas Ethereum chose an average block time of 12 seconds and an average block size of about 100 KB [2], with a maximum size of 1.875 MB. BNB Chain, in turn, selected the shortest block time of the three, 3 seconds, with an average block size of about 10 KB [3] and a maximum block size of 32 MB.
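To make the throughput implications of these parameter choices concrete, a back-of-the-envelope upper bound on transactions per second is (block size / average transaction size) / block time. The sketch below applies this to the figures above; the average transaction sizes are rough assumptions for illustration only, not measured values.

```python
def max_tps(block_size_bytes: float, avg_tx_bytes: float,
            block_time_s: float) -> float:
    """Upper bound on throughput implied by a block size / block time
    pair: (transactions per block) / (block time)."""
    return (block_size_bytes / avg_tx_bytes) / block_time_s

# Block parameters from the text; average transaction sizes (bytes)
# are assumptions chosen purely for illustration.
chains = {
    "Bitcoin":  dict(block_size_bytes=1_000_000, avg_tx_bytes=400, block_time_s=600),
    "Ethereum": dict(block_size_bytes=100_000,   avg_tx_bytes=150, block_time_s=12),
    "BNB":      dict(block_size_bytes=10_000,    avg_tx_bytes=150, block_time_s=3),
}
for name, params in chains.items():
    print(f"{name:9s} ~{max_tps(**params):6.1f} TPS")
```

Even this crude model shows why a short block time alone is not enough: BNB's 3-second blocks yield fewer theoretical transactions per second than Ethereum's 12-second blocks once the smaller average block size is taken into account.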

Although this discussion has been going on for years, it is now more topical than ever. The advent of edge computing and 5G networks completely changes the underlying infrastructure characteristics, setting limits that must be considered when selecting the optimal block size-time pair. This matters because it ensures that blocks can't be arbitrarily large. If blocks could be arbitrarily large, less efficient full nodes would gradually cease being able to keep up with the network due to storage and processing requirements. The larger the block, the more time a node needs to process it and be ready for the next slot. This is a centralizing force, which is resisted by capping the block size.

The trade-off that needs to be addressed is the following: the more transactions included in a given block, the larger its size and thus the higher the latency introduced to reach consensus. Larger blocks also propagate more slowly, so nodes with weaker connectivity fall behind large, well-connected mining pools, and stale (uncle) blocks become more common. If the uncle rate rises rapidly and nodes start leaving the network, the gas limit may be too high, compromising network security.
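A common way to reason about this trade-off is the standard approximation that, with blocks produced as a Poisson process, the probability a block becomes stale is roughly 1 - e^(-tau/T), where tau is the propagation delay and T the block time. The sketch below combines this with a linear propagation model (a fixed base latency plus a per-megabyte term); the model and its constants are assumptions for illustration, not measurements of any real network.

```python
import math

def stale_rate(propagation_delay_s: float, block_time_s: float) -> float:
    """Approximate stale-block probability under Poisson block
    production: a competing block appears while this one propagates
    with probability 1 - exp(-tau / T)."""
    return 1.0 - math.exp(-propagation_delay_s / block_time_s)

def propagation_delay(block_size_bytes: float,
                      base_delay_s: float = 0.5,
                      seconds_per_mb: float = 1.0) -> float:
    """Assumed linear propagation model: fixed latency plus a
    per-megabyte transfer/verification cost."""
    return base_delay_s + seconds_per_mb * block_size_bytes / 1_000_000

for size_mb in (0.1, 1, 8, 32):
    tau = propagation_delay(size_mb * 1_000_000)
    print(f"{size_mb:5.1f} MB blocks, 12 s block time -> "
          f"stale rate ~{stale_rate(tau, 12.0):.1%}")
```

Under these assumptions the stale rate climbs from a few percent for 0.1 MB blocks to well over 50% for 8 MB blocks at a 12-second block time, which is exactly the centralizing pressure described above.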

It becomes obvious that optimal selection is not an easy task, as many objectives must be balanced at the same time. In the Callisto Network, our first step towards a significantly higher number of transactions per second is to exploit this trade-off: to approach the physical limits of the computing and networking infrastructure by selecting the optimal block size and block time values that enhance the performance of the blockchain without limiting the network's decentralization. After selecting the appropriate values, we plan to improve the consensus algorithm to further increase the network's performance without compromising the network's security.

[1] https://bitcoinmagazine.com/technical/on-consensus-or-why-bitcoin-s-block-size-presents-a-political-trade-off-1452887468

[2] Etherscan, https://etherscan.io

[3] BscScan, https://bscscan.com
