Danksharding: The Next Leap in Ethereum’s Scalability Journey
By Sumi Maria Abraham, Research and Development Engineer, Kerala Blockchain Academy
As a public blockchain, Ethereum's major challenge has been scalability. Block times (before the Merge) were 15–20 seconds, the network served an average of 10–12 transactions per second (tps), and transaction costs were high. The delays and high fees frustrated smart contract developers and users alike. Multiple updates were proposed to improve Ethereum's scalability, and a series of upgrades collectively known as Ethereum 2.0 was published.
Do you remember the initial plans for the Ethereum upgrade? They were revised multiple times as development and research progressed. Last year, Ethereum successfully switched its consensus to Proof of Stake, reducing energy consumption by 99.95%. It was a significant upgrade on a live blockchain, executed successfully.
Coming up — Sharding
The next major upgrade is sharding, proposed to deal with the delays in transaction processing. The original idea of sharding was to create multiple parallel chains (shards), each handling a portion of the overall transactions occurring in Ethereum and connected to the main chain. Each shard would have a committee, a group of validators. The shards would be divided among the nodes so that each node does less work and the work gets done faster. The implementation is challenging and will take considerable time.
Meanwhile, smart contract developers tried various methods of their own to deal with scalability, which led to the evolution of Layer 2 (L2) solutions. L2 solutions like rollups succeeded in improving transaction processing times. Their success was followed by a growing number of rollup proposals and a vast user base.
Read more about L2:
Scaling The Ethereum Using Rollups-Layer 2
Breaking Down Ethereum & Scalability: Layer 2 (L2) Sidechains
How did rollups improve scalability?
Rollups are applications that run on top of Ethereum. They have a separate chain, a network of nodes and a consensus mechanism. The rollup chain executes Ethereum transactions on its own chain, "rolls" them up into one batch, and stores the results on Ethereum. Ethereum needs a way to verify the validity of these transactions executed off the Ethereum chain, and for this, rollups use cryptographic proofs: zero-knowledge rollups (zk-rollups) use validity proofs, and optimistic rollups use fraud proofs. The rollup handles the expensive, computationally intensive processing and uses Ethereum only to store some data. For that storage, it uses calldata, the cheapest storage space available. Rollups considerably reduce transaction fees and time.
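To make the idea concrete, here is a minimal Python sketch of the batching step. It is purely illustrative (the function name, the JSON serialization and the SHA-256 commitment are my own simplifications, not any real rollup's code): many off-chain transactions become one compact payload posted to Ethereum, plus a commitment to it.

```python
import hashlib
import json

def roll_up(transactions: list[dict]) -> tuple[bytes, bytes]:
    """Batch transactions off-chain and produce compact data to post on Ethereum.

    Returns the serialized batch (posted as cheap calldata) and a hash
    commitment to it (kept in the rollup contract's state).
    """
    batch = json.dumps(transactions, sort_keys=True).encode()
    commitment = hashlib.sha256(batch).digest()
    return batch, commitment

# Hundreds of rollup transactions become one Ethereum transaction.
txs = [{"from": "alice", "to": "bob", "value": 5},
       {"from": "carol", "to": "dave", "value": 2}]
batch, commitment = roll_up(txs)
print(f"batch: {len(batch)} bytes, commitment: {commitment.hex()[:16]}...")
```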
97% of the fees on a rollup are related to calldata storage on Ethereum. The expense arises because every Ethereum node processes this data, and it persists on the chain forever. But rollups do not actually need the data on-chain for that long: after provers have had sufficient time to verify the rollup transactions, the data can be stored off-chain by rollup operators or other users.
If Ethereum's data storage cost becomes lower, rollups also become cheaper. Since sharding is an L1 scaling solution that alters the protocol, implementing it may take years. The Ethereum community acknowledged that rollups could provide quick results and would improve further once sharding is implemented. So the upgrade plans were revamped into a rollup-centric design, and the sharding plans were updated to Danksharding.
Danksharding
Danksharding refers to an updated sharding design proposed by Ethereum researcher Dankrad Feist. Danksharding supports rollups by providing a separate, cheaper space for storing rollup data. The complete implementation of Danksharding is complex and time-consuming. The current sharding design, EIP-4844, known as Proto-Danksharding, is an intermediate version of Danksharding. It derives its name from Ethereum researchers Protolambda and Dankrad Feist.
EIP-4844: Shard Blob Transactions
EIP-4844 introduces a new transaction type that accepts "blobs" of data to be stored on the beacon node for a short time.
Blob stands for "binary large object", referring to a dynamically sized memory space. A blob can store 4096 elements of 32 bytes each, with a long-term maximum of 16 blobs per block, thereby allowing up to 4096 * 32 bytes * 16 = 2 MiB per block [2]. The blob cap per block can start low and grow over multiple network upgrades [2].
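Spelled out in code (a trivial arithmetic check; the constant names follow the EIP-4844 specification, and the 16-blob cap is the long-term target rather than the launch value):

```python
FIELD_ELEMENTS_PER_BLOB = 4096   # 32-byte field elements per blob (EIP-4844)
BYTES_PER_FIELD_ELEMENT = 32
MAX_BLOBS_PER_BLOCK = 16         # long-term target; the initial cap is lower

blob_size = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT  # 131072 bytes
block_max = blob_size * MAX_BLOBS_PER_BLOCK                    # 2097152 bytes
print(f"{blob_size // 1024} KiB per blob, {block_max // 2**20} MiB per block")
```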
Under Proto-Danksharding, rollup data can be stored in blobs, and these blobs are held by the beacon nodes (the consensus layer). Though blobs can be large, they are relatively cheap to transact with because the consensus layer holds them rather than Ethereum's computationally expensive execution layer. The data stored in a blob is needed to verify the rollup's transaction execution, but it is not required to be available forever, so it is automatically deleted after a specific period (1 to 3 months). Blob data is also not accessible to the EVM. All this makes it cheaper.
Blob storage cost is measured using a separate metering mechanism called data gas: 1 byte = 1 data gas. Data gas is priced using its own pricing logic.
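EIP-4844 specifies an EIP-1559-style rule for that pricing logic: the data gas price grows exponentially as usage exceeds a target. The sketch below follows the helper function from the EIP draft; the constant values shown are draft parameters and may change before deployment.

```python
MIN_DATA_GASPRICE = 1                    # draft constant (wei per data gas)
DATA_GASPRICE_UPDATE_FRACTION = 3338477  # draft constant; subject to change

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_data_gasprice(excess_data_gas: int) -> int:
    # The price rises exponentially with data gas consumed above the target.
    return fake_exponential(MIN_DATA_GASPRICE, excess_data_gas,
                            DATA_GASPRICE_UPDATE_FRACTION)
```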
Rollups are expected to post a commitment to their transaction data on-chain, and the data itself within the blobs. The commitment is accessible to the EVM and persists, whereas the transaction data in the blobs is periodically pruned. If blob data were stored forever, it would shoot up the storage requirements of the consensus clients. Thus rollups can store their data at a lower cost, reducing transaction fees.
In the figure above, the blob-carrying transactions are shown as Data tx. Apart from the usual transaction fields, they carry extra fields, such as a commitment to the corresponding data blob.
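As a rough sketch, the payload of such a transaction might look like the following (the extra field names follow the EIP-4844 draft and may differ in the final specification):

```python
from dataclasses import dataclass

@dataclass
class BlobTransaction:
    # Usual transaction fields (abridged).
    to: str
    value: int
    max_fee_per_gas: int
    # Extra fields for blob-carrying transactions (EIP-4844 draft names).
    max_fee_per_data_gas: int           # bid in the separate data gas market
    blob_versioned_hashes: list[bytes]  # commitments to the attached blobs
    # The blobs themselves travel on the consensus layer, not in this payload.
```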
How can the blobs be verified?
Let me explain using a super-simplified example. Imagine you have a list of data, say 100 items, <D0, D1, D2, … D99>, which can be plotted on a graph and represented by the curve below.
The commitment for this data will be a set of secret data points; let's assume 4 data points <D12, D20, D78, D99>, which evaluate to the highlighted positions <P0, P1, P2, P3> on the curve. This commitment will be stored on the chain.
The prover now tries to fit the same curve to the blob data. To verify it, the prover re-calculates the positions of the data points in the commitment. If any blob data has been changed, the curve will be different, and consequently the commitment data points will not fit the altered graph.
This illustration is a simplified explanation for beginner-level readers. The actual proposed procedure for blob transaction processing uses a cryptographic commitment technique called the KZG scheme, an alternative to Merkle proofs. KZG fits a polynomial equation to the data, and the commitment evaluates the polynomial at some secret points. A prover fits the same polynomial over the data and evaluates it at the same values, checking that the results match.
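A toy version of this check fits in a few lines of Python. It only illustrates the polynomial idea: a real KZG commitment is a single elliptic-curve point produced with a trusted setup and verified via pairings, not a list of plain evaluations.

```python
# Toy polynomial commitment, illustrating the idea behind KZG (NOT the real
# scheme: real KZG uses elliptic-curve pairings and a trusted setup).
P = 2**31 - 1  # a small prime field, for illustration only

def interpolate_eval(points: list[tuple[int, int]], x: int) -> int:
    """Evaluate at x the unique polynomial through `points` (Lagrange form)."""
    result = 0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term = term * (x - xj) % P
                term = term * pow(xi - xj, -1, P) % P
        result = (result + term) % P
    return result

data = [7, 13, 42, 99]          # "blob" data: polynomial values at x = 0..3
points = list(enumerate(data))
secret_xs = [11, 23]            # secret evaluation points
commitment = [interpolate_eval(points, s) for s in secret_xs]

# Verification: re-fit the polynomial to the (possibly tampered) blob and
# re-check the same evaluations. Any change to the data changes the curve.
tampered = list(enumerate([7, 13, 41, 99]))
assert [interpolate_eval(points, s) for s in secret_xs] == commitment
assert [interpolate_eval(tampered, s) for s in secret_xs] != commitment
```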
The Ethereum team has yet to formally announce the timing of the EIP-4844 upgrade, but it is expected by the end of 2023 in the Cancun (execution layer) and Deneb (consensus layer) upgrade. Proto-Danksharding will be a significant upgrade in Ethereum's history: it can help reduce transaction fees and improve network throughput and scalability.
References
[1] https://ethglobal.com/talks/scaling-ethereum-summit-eip-4844-in-44-minutes-or-less-4n29r
[2] https://notes.ethereum.org/@vbuterin/blob_transactions_simple