Hadoop - Interview Questions
Why are blocks in HDFS huge?
By default, the size of an HDFS data block is 128 MB. The main reasons for using such a large block size are:
 
* To reduce the cost of seeks: with large blocks, the time spent transferring data from the disk is significantly longer than the time spent seeking to the start of the block. A large file split into a few large blocks can therefore be read at close to the full disk transfer rate, because the seek time is amortized over a large amount of data.

* To limit metadata overhead: with small blocks, a file would be split into far too many blocks in Hadoop HDFS, and the NameNode would have to store metadata for each one. Managing such a vast number of blocks and their metadata would create memory overhead on the NameNode and lead to extra traffic on the network.
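
The block size can also be overridden per client or per file rather than relying on the 128 MB default. Below is a minimal sketch using the Hadoop Java client, assuming a reachable HDFS cluster and the Hadoop client libraries on the classpath; the path /tmp/blocksize-demo.txt is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Override the default block size (128 MB) for files written by this client.
        // The cluster-wide default is normally set via dfs.blocksize in hdfs-site.xml.
        conf.setLong("dfs.blocksize", 256L * 1024 * 1024); // 256 MB

        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/blocksize-demo.txt"); // hypothetical path

        // Report the block size the client will use for new files at this path.
        System.out.println("Block size for new files: "
                + fs.getDefaultBlockSize(path) / (1024 * 1024) + " MB");
    }
}
```

Note that changing dfs.blocksize affects only files written after the change; existing files keep the block size they were written with.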