node distri.js testnetblocks.txt
no blocks 5000
total size (bytes) 57563723
block average size (bytes) 11512.7446
hexadecimal digits counts
a: 5535784, b: 4808945,
c: 4537947, d: 3896688,
e: 4398336, f: 4253774
Some blocks are empty; others have more than 150 smart contract transactions. The average block size is, then, about 11 KB. This is only an informal result, over a short range of blocks. The array at the end of the above output shows the count of each hexadecimal digit. A graphic:
First result: most of the bytes are zeroes.
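The digit counting can be sketched like this (a minimal sketch, not the actual distri.js; it assumes the input is a list of hex-encoded blocks, one per line, which may differ from the real format of testnetblocks.txt):

```javascript
// Sketch: count hexadecimal digit frequencies over hex-encoded blocks.
// Assumes one hex-encoded block per entry (an assumption; the real
// input format may differ).
function countHexDigits(hexBlocks) {
    const counts = {};
    for (const d of '0123456789abcdef') counts[d] = 0;
    let totalSize = 0;
    let noBlocks = 0;
    for (const block of hexBlocks) {
        const hex = block.trim().toLowerCase();
        if (!hex.length) continue;
        noBlocks++;
        totalSize += hex.length / 2; // two hex digits per byte
        for (const ch of hex)
            if (counts[ch] !== undefined) counts[ch]++;
    }
    return { counts, totalSize, noBlocks };
}

// Usage over a file with one hex block per line:
// const fs = require('fs');
// const lines = fs.readFileSync(process.argv[2], 'utf8').split('\n');
// console.log(countHexDigits(lines));
```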
node parts.js testnetblocks.txt
It analyzes the block headers. The output:
no blocks 5000
block headers size 4434741
160000, 160000, 100000,
160000, 160000, 160000,
1280000, 20000, 15000,
15000, 12887, 20000,
85000, 27963, 20000,
3428, 400000, 910464,
So, the average block header size is less than 1 KB. The array gives the byte size of each block header part. A graphic:
Notably, the “heavy” parts are:
- Bloom filter: 256 bytes, many of them are zeroes.
- BTC Merkle Proof: generated in merge mining process
- BTC Coinbase Transaction: generated in merge mining process
- BTC Header: 80 bytes each
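The per-part totals can be obtained by summing field sizes across the decoded headers, roughly like this (a sketch only; it assumes the headers are already RLP-decoded into arrays of Buffers, which is not necessarily how parts.js works):

```javascript
// Sketch: accumulate total byte size per header field across all blocks.
// headers: array of decoded block headers, each an array of Buffers
// (one Buffer per header part). This decoded representation is an
// assumption for illustration.
function accumulatePartSizes(headers) {
    const sizes = [];
    for (const header of headers)
        header.forEach((field, i) => {
            sizes[i] = (sizes[i] || 0) + field.length;
        });
    return sizes;
}
```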
Next steps: analyze the transactions and the uncles. I expect the uncles to show a distribution similar to the block headers. Locate where the zeroes are: only in the headers, in the transactions, in the uncles? Analyze block size as a function of the number and kind of transactions, and the number of uncles.
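That last analysis could start from something like this sketch, which groups average block size by transaction count (the `{ size, txCount }` shape is hypothetical; a real script would decode these values from the raw blocks):

```javascript
// Sketch: average block size grouped by number of transactions.
// blocks: array of { size, txCount } records (hypothetical shape,
// for illustration only).
function sizeByTxCount(blocks) {
    const groups = new Map();
    for (const { size, txCount } of blocks) {
        const g = groups.get(txCount) || { total: 0, n: 0 };
        g.total += size;
        g.n += 1;
        groups.set(txCount, g);
    }
    const result = {};
    for (const [txCount, g] of groups)
        result[txCount] = g.total / g.n; // average size for this tx count
    return result;
}
```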
Although the key-value store (in this case LevelDB) could compress this information (using snappy, as geth does), it would be better not to depend on that feature. Also, a better encoding could allow shorter network messages.
First proposal: encode the bloom filter in an efficient way (I have code in my personal blockchain project; see Bloom, BloomEncoder and their tests). And although it is not in the block, many bloom filters are generated in the transaction receipts (to be analyzed).
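One simple scheme, purely illustrative and not necessarily the one used by BloomEncoder: when few bits are set, store the 2-byte indices of the set bits instead of the full 256-byte bitmap, falling back to the raw filter when it is dense:

```javascript
// Sketch: sparse encoding for a 256-byte bloom filter (2048 bits).
// Illustrative scheme, not necessarily the one in BloomEncoder.
function encodeBloom(bloom) { // bloom: Buffer of 256 bytes
    const setBits = [];
    for (let i = 0; i < bloom.length * 8; i++)
        if (bloom[i >> 3] & (0x80 >> (i & 7))) setBits.push(i);
    // If the sparse form would not be shorter, store the raw bitmap.
    if (setBits.length * 2 + 1 >= bloom.length)
        return Buffer.concat([Buffer.from([0]), bloom]); // marker 0 = raw
    const out = Buffer.alloc(1 + setBits.length * 2);
    out[0] = 1; // marker 1 = sparse list of bit indices
    setBits.forEach((bit, i) => out.writeUInt16BE(bit, 1 + i * 2));
    return out;
}

function decodeBloom(encoded) {
    if (encoded[0] === 0) return Buffer.from(encoded.slice(1));
    const bloom = Buffer.alloc(256);
    for (let off = 1; off < encoded.length; off += 2) {
        const bit = encoded.readUInt16BE(off);
        bloom[bit >> 3] |= 0x80 >> (bit & 7);
    }
    return bloom;
}
```

A mostly empty filter encodes to a few bytes instead of 256, which matters because the data above shows most filter bytes are zeroes.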
The information regarding merge mining with Bitcoin could also be improved, maybe with some zero-knowledge proof, but I'm not sure about this.