Thursday, May 24, 2012

Why use deflate instead of gzip for text files served by Apache?


What advantages does either method offer for HTML, CSS, and JavaScript files served by a LAMP server? Are there better alternatives?



The server provides information to a map application using JSON, so it serves a high volume of small files.



See also: Is there any performance hit involved in choosing gzip over deflate for HTTP compression?


Source: Tips4all

8 comments:

  1. GZip is simply deflate plus a checksum and header/footer. Deflate is faster, though, as I learned the hard way.
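
    A quick way to see this relationship is to compare the bytes the two formats produce. A minimal sketch using Python's standard gzip and zlib modules (the sample data is made up):

    import gzip
    import zlib

    data = b"the quick brown fox jumps over the lazy dog" * 100

    # zlib.compress() yields the "deflate" HTTP format (RFC 1950):
    # a 2-byte header, the DEFLATE stream, a 4-byte Adler-32 trailer.
    deflated = zlib.compress(data)

    # gzip.compress() yields the gzip format (RFC 1952): a 10-byte
    # header, the same kind of DEFLATE stream, then CRC-32 + length.
    gzipped = gzip.compress(data)

    print(len(deflated), len(gzipped))  # gzip is a few bytes larger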

  2. Why use deflate instead of gzip for text files served by Apache?


    The simple answer is don't.



    RFC 2616 defines deflate as:


    deflate The "zlib" format defined in RFC 1950 in combination with the "deflate" compression mechanism described in RFC 1951


    The zlib format is defined in RFC 1950 as:

      0   1
    +---+---+
    |CMF|FLG|   (more-->)
    +---+---+

    (if FLG.FDICT is set)

      0   1   2   3
    +---+---+---+---+
    |     DICTID    |   (more-->)
    +---+---+---+---+

    +=====================+---+---+---+---+
    |...compressed data...|    ADLER32    |
    +=====================+---+---+---+---+


    So: a short header and an Adler-32 checksum.
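
    Those fields are easy to inspect. A minimal sketch in Python (the payload is made up):

    import zlib

    payload = b"example payload"
    stream = zlib.compress(payload)

    # The first two bytes are the zlib header: CMF and FLG.
    # 0x78 0x9c is the common "deflate, default settings" pair.
    print(stream[:2].hex())  # '789c'

    # The last four bytes are the Adler-32 checksum of the
    # uncompressed data, stored big-endian.
    print(stream[-4:].hex())
    print(format(zlib.adler32(payload), "08x"))  # matches the trailer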

    RFC 2616 defines gzip as:


    gzip An encoding format produced by the file compression program
    "gzip" (GNU zip) as described in RFC 1952 [25]. This format is a
    Lempel-Ziv coding (LZ77) with a 32 bit CRC.


    RFC 1952 defines the compressed data as:


    The format presently uses the DEFLATE method of compression but can be easily extended to use other compression methods.
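
    That structure is just as easy to inspect. A sketch, assuming the default output of Python's gzip module (10-byte header, 8-byte trailer, no optional fields):

    import gzip
    import zlib

    payload = b"example payload"
    stream = gzip.compress(payload)

    # gzip starts with the magic bytes 0x1f 0x8b, then 0x08 (DEFLATE).
    print(stream[:3].hex())  # '1f8b08'

    # Strip the 10-byte header and the 8-byte CRC-32/length trailer,
    # and what remains is a raw DEFLATE stream (negative wbits tells
    # zlib to expect no wrapper).
    print(zlib.decompress(stream[10:-8], -zlib.MAX_WBITS))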


    CRC-32 is slower than Adler-32.


    Compared to a cyclic redundancy check of the same length, it trades reliability for speed (preferring the latter).
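
    A rough micro-benchmark of the two checksums (a sketch; absolute numbers depend on your CPU and zlib build, and hardware-accelerated CRC can narrow the gap):

    import timeit
    import zlib

    data = b"x" * 1_000_000

    crc = timeit.timeit(lambda: zlib.crc32(data), number=200)
    adler = timeit.timeit(lambda: zlib.adler32(data), number=200)
    print(f"crc32:   {crc:.3f}s")
    print(f"adler32: {adler:.3f}s")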


    So we have two compression mechanisms that use the same algorithm for the compressed data, but different headers and checksums.

    Now, the underlying TCP packets are already pretty reliable, so the issue here is not Adler-32 versus the CRC-32 that gzip uses.



    It turns out that many browsers over the years implemented deflate incorrectly: instead of expecting the zlib wrapper defined in RFC 1950, they simply expected the raw compressed payload. Various web servers made the same mistake.

    So, over the years, browsers started shipping a fuzzy deflate implementation: they try the zlib header and Adler-32 checksum first, and if that fails they fall back to treating the body as a raw payload.
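
    That fallback amounts to something like this (a sketch of the idea in Python, not any browser's actual code):

    import zlib

    def fuzzy_inflate(body: bytes) -> bytes:
        """Try zlib-wrapped deflate first, then fall back to raw deflate."""
        try:
            # Expect the RFC 1950 wrapper: header + DEFLATE + Adler-32.
            return zlib.decompress(body)
        except zlib.error:
            # Fall back to a bare RFC 1951 stream (negative wbits
            # means no wrapper and no checksum).
            return zlib.decompress(body, -zlib.MAX_WBITS)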

    The result of having complex logic like that is that it is often broken. Verve Studio has a user-contributed test section that shows how bad the situation is.

    For example, deflate works in Safari 4.0 but is broken in Safari 5.1, and it has persistent issues in IE.



    So the best thing to do is avoid deflate altogether; the minor speed boost (from Adler-32) is not worth the risk of broken payloads.

  3. I think there's no big difference between deflate and gzip, because gzip basically is just a header wrapped around deflate (see RFCs 1951 and 1952).

  4. The main reason is that deflate is faster to encode than gzip and on a busy server that might make a difference. With static pages it's a different question, since they can easily be pre-compressed once.
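
    For example, assets can be compressed once at deploy time. A minimal sketch (static/ is a hypothetical assets directory):

    import gzip
    from pathlib import Path

    # Pre-compress static assets once so the server never has to
    # compress them per request.
    for path in Path("static").rglob("*"):
        if path.is_file() and path.suffix in {".html", ".css", ".js", ".json"}:
            gz = path.with_name(path.name + ".gz")
            gz.write_bytes(gzip.compress(path.read_bytes(), compresslevel=9))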

  5. mod_deflate requires fewer resources on your server, although you may pay a small penalty in terms of the amount of compression.

    If you are serving many small files, I'd recommend benchmarking and load testing your compressed and uncompressed solutions - you may find some cases where enabling compression will not result in savings.

  6. There shouldn't be any difference between gzip and deflate for decompression. Gzip is just deflate with a few dozen bytes of header wrapped around it, including a checksum. The checksum is the reason for the slower compression. However, when you're precompressing zillions of files you want those checksums as a sanity check in your filesystem, and you can use command-line tools to get stats on the files.

    For our site we precompress a ton of static data (the entire Open Directory, 13,000 games, autocomplete for millions of keywords, etc.) and we are ranked 95% faster than all websites by Alexa (Faxo Search). However, we use a home-grown proprietary web server; Apache/mod_deflate just didn't cut it.

    When those files are compressed into the filesystem, you not only take a hit for each file from the minimum filesystem block size, but you also carry all the unnecessary overhead of managing the file in a filesystem that the web server couldn't care less about. Your concerns should be total disk footprint and access/decompression time, and secondarily the speed of getting the data precompressed. The footprint matters because even though disk space is cheap, you want as much as possible to fit in the cache.

  7. If I remember correctly:

    gzip will compress a little more than deflate;
    deflate is more efficient.

  8. Just for future reference:

    Under a Debian-based system (I'm on Ubuntu) with Apache2 and the deflate module already installed (which it is by default), you can enable deflate compression in two easy steps:

    a2enmod deflate
    /etc/init.d/apache2 force-reload


    And you're away! I found that pages served over my ADSL connection loaded MUCH faster!
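
    To check that responses really are compressed, a quick sketch (substitute your own URL; note that despite the module's name, mod_deflate sends gzip-encoded responses by default):

    import urllib.request

    # urllib does not decompress automatically, so the Content-Encoding
    # header reflects exactly what the server sent.
    req = urllib.request.Request(
        "http://localhost/",  # hypothetical URL; point at your server
        headers={"Accept-Encoding": "gzip"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.headers.get("Content-Encoding"))  # expect 'gzip'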
