Thursday, February 9, 2012

Use Compression with MapReduce


                               Hadoop is intended for storing large data volumes, so compression becomes almost a mandatory requirement. There are different compression formats available, such as gzip, bzip2, and LZO. Of these, bzip2 is splittable on its own, and LZO can be made splittable by indexing it. bzip2 offers better compression, but its decompression is expensive; when we look at both space and time, LZO is more advisable. LZO also supports indexing, which again helps when you use Hive on your data. While running MapReduce with compression we need to know at least the following:
1.       How to run MapReduce on compressed data
2.       How to produce compressed output from MapReduce

Running MapReduce on compressed data
                It is very straightforward; there is no need to implement any custom input format for this. You can use any input format with compression. The only step is to add the compression codec to the value of io.compression.codecs.

If you are using LZO, your value would look something like
io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzopCodec

Then configure and run your MapReduce jobs as you normally do on uncompressed files. When a map task processes a compressed file, it checks io.compression.codecs and picks a suitable codec from there (matched by file extension) to read the file.
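
For example, with the old JobConf API (org.apache.hadoop.mapred) the codec list can also be set from the job code. This is a minimal sketch, assuming the hadoop-lzo jar is on the classpath; the class name MyJob and the paths are made up for illustration:

JobConf conf = new JobConf(MyJob.class);
// Register the LZO codec alongside the defaults (normally done once in core-site.xml)
conf.set("io.compression.codecs",
    "org.apache.hadoop.io.compress.GzipCodec,"
    + "org.apache.hadoop.io.compress.DefaultCodec,"
    + "com.hadoop.compression.lzo.LzopCodec");
// Nothing else changes: the codec is chosen by file extension (.gz, .lzo, ...)
FileInputFormat.addInputPath(conf, new Path("/data/in/logs.lzo"));
FileOutputFormat.setOutputPath(conf, new Path("/data/out"));
JobClient.runJob(conf);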

Produce compressed data from MapReduce
                It is again straightforward; you can achieve this by setting the following parameters (using LZO here):
mapred.output.compress=true
mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec

Your output now comes out compressed in LZO. Here again you can use any of the normal output formats.
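
The same two properties can be set from the job code as well. A minimal sketch with the old JobConf API, assuming conf is your JobConf and FileOutputFormat is org.apache.hadoop.mapred.FileOutputFormat:

// Either set the raw properties...
conf.setBoolean("mapred.output.compress", true);
conf.setClass("mapred.output.compression.codec",
    com.hadoop.compression.lzo.LzopCodec.class,
    org.apache.hadoop.io.compress.CompressionCodec.class);
// ...or use the equivalent convenience methods on the output format
FileOutputFormat.setCompressOutput(conf, true);
FileOutputFormat.setOutputCompressorClass(conf, com.hadoop.compression.lzo.LzopCodec.class);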

Index LZO files
                It is possible with just two lines of code:

// Run the LZO indexer on a file in HDFS; fs is an org.apache.hadoop.fs.FileSystem
// and filePath is an org.apache.hadoop.fs.Path pointing to the .lzo file
LzoIndexer indexer = new LzoIndexer(fs.getConf());
indexer.index(filePath);
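
The indexer also ships with a main class, so the same can be done from the command line without writing any code; the jar path below is just an example and depends on your hadoop-lzo installation:

hadoop jar /path/to/hadoop-lzo.jar com.hadoop.compression.lzo.LzoIndexer /data/in/logs.lzo

For a large number of files, hadoop-lzo also provides com.hadoop.compression.lzo.DistributedLzoIndexer, which runs the indexing itself as a MapReduce job.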

Compress Intermediate output (map output)
                Compressing intermediate output is also a good idea in MapReduce. The map outputs have to be copied across the network to the reducers, and compressing them saves network bandwidth and transfer time. Just specify the following configuration parameters:
mapred.compress.map.output=true
mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec
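
With the old JobConf API the same can be set in code. A minimal sketch, assuming conf is your JobConf; note that LzoCodec rather than LzopCodec is used here, since intermediate data never has to be read outside the job:

// Compress the intermediate (map) output with LZO
conf.setCompressMapOutput(true);
conf.setMapOutputCompressorClass(com.hadoop.compression.lzo.LzoCodec.class);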

7 comments:

  1. Very good tutorial about compression.

    I have a question-

    If we give a single LZO-compressed file as input to Hadoop, will Hadoop divide the file into smaller parts and give them to multiple mappers?

    Thanks.

  2. LZO is not splittable on its own. You need to index it after file creation to make it splittable. Alternatively, if LZO is used with SequenceFiles it becomes splittable, since SequenceFiles are splittable on their own.

  3. Does Hadoop compress individual records in the output, or the complete file?

  4. Hi Manish,

    Compression depends on the file type you use. If it is a SequenceFile, you can do your compression at record level or block level (a block here means a buffer used by the SequenceFile, not an HDFS block). If the file type is plain text, compression happens at the file level.

    Regards,
    Bejoy

  5. Hi Bejoy,
    If I have 10 files compressed individually with the LZO codec and I need to provide 5 of them as input to another job, do I need to run the indexer on those 5 files alone, or can I index all the files as a one-time activity and then supply the required files as input to the job?

    Thanks in advance!

  6. Hello sir,
    How does compression happen at the file level? I know how block compression happens, but I need some info on file compression (text files). Thank you.

  7. I want to read data from an LZO file; what do I have to do in MapReduce?
