An Introduction to JBIG2

JBIG2: The Compression Connection

Essential compression issues

Storage, speed and keeping up with Moore and Parkinson

In the big picture, what makes file compression so important? The discussion hinges on two "laws" of the digital media world. The first is Moore's Law, which observes that computing power roughly doubles every 18 months. The second is Parkinson's Law of Data, which holds that computer data expands to fill the storage space available.

The need for compression is demonstrated at the intersection of these two laws. Ever-faster computers generate ever more data, and that data fills whatever storage space is available. Compression is the answer to a digital world that creates ever-increasing amounts of data.

Compression answers the need for more efficient storage of digital information. It lets digital media files keep pace with the increasing speed of computer systems, reduces the time it takes to back up crucial corporate data, and maximizes the potential of the Internet, internal business networks, and wireless devices for individuals and businesses alike. Without compression, digital media would simply fill up the available space for storage - and bandwidth for transmission - of electronic files far too quickly.

How does compression work? What role does JBIG2 play in compression? A ground-level introduction to these issues follows.

How compression happens: a very simple explanation

Simply stated, compression is the process of representing a set of data with a smaller set of data. Whenever a data file requires N bits of digital information to represent it, and it is somehow represented with fewer than N bits, compression has taken place.

In a compression system, the input data is the original file. It enters an encoder, which compresses the data into a much smaller bitstream. The bitstream can then be stored or transmitted to another location. In either case, the bitstream of compressed data must be decoded before its content can be used at the point of output.
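To make this concrete, here is a minimal sketch of that round trip using run-length encoding, one of the simplest compression schemes. (The function names are ours, for illustration only; JBIG2's actual coding methods are far more sophisticated.)

    def rle_encode(data: bytes) -> bytes:
        """Encode runs of repeated bytes as (count, value) pairs."""
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes([run, data[i]])
            i += run
        return bytes(out)

    def rle_decode(bitstream: bytes) -> bytes:
        """Invert the encoding: expand each (count, value) pair."""
        out = bytearray()
        for i in range(0, len(bitstream), 2):
            out += bytes([bitstream[i + 1]]) * bitstream[i]
        return bytes(out)

    original = b"\x00" * 900 + b"\xff" * 100   # a 1000-byte "scan line"
    compressed = rle_encode(original)          # 10 bytes: far fewer than N
    assert rle_decode(compressed) == original  # decoding restores the input exactly

Binary document images are full of long runs of identical pixels, which is why even this naive scheme compresses them well; JBIG2's coders exploit far richer structure in the image.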

JBIG2 and high-level compression

Because JBIG2 is a smart compression standard, it strictly specifies how to decode a file but not how to encode one. As noted earlier, this allows a sophisticated vendor to employ a variety of techniques to increase the compression ratio.

Because the encoding method is left open, a JBIG2 design and implementation team has considerable latitude to qualitatively differentiate one JBIG2 encoder from another. Among the differentiating factors in JBIG2 implementations are:

  • Compressed file size
  • Image quality
  • Supported JBIG2 modes
  • Speed of the encoder and decoder
  • Speed of displaying a page in a file
  • Speed of printing the file

The JBIG2 standard

Again, the JBIG2 standard does not specify how a JBIG2 encoder shall operate, but rather how a JBIG2 decoder interprets a JBIG2 bitstream. Because JBIG2 offers a number of encoding options for any image, many different JBIG2 bitstreams may result from the same input. The constraint on decoding simply means that decoding is deterministic: every valid JBIG2 bitstream decodes to exactly one raw image.

This versatility in encoder design allows encoders to offer a wide variety of features. That being said, JBIG2 encoding is not a free-for-all: every JBIG2 encoder must produce bitstreams compliant with the standard's bitstream syntax.

As noted earlier, the JBIG2 standard offers two main coding methods: arithmetic coding and Huffman coding. With few exceptions, arithmetic coding compresses better than Huffman coding - in part because it is not limited to a whole number of bits per symbol - and JBIG2 implementations have taken advantage of that fact.
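To illustrate the simpler of the two methods, the sketch below builds a Huffman code for a made-up symbol distribution. (This is generic Huffman construction, not anything specific to JBIG2; the standard also provides predefined Huffman tables that an encoder may select instead of deriving a code from the data.)

    import heapq
    from itertools import count
    from collections import Counter

    def huffman_code(freqs: dict) -> dict:
        """Build a prefix code: frequent symbols get shorter codewords."""
        tie = count()  # tiebreaker so the heap never compares dicts
        heap = [(f, next(tie), {sym: ""}) for sym, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tie), merged))
        return heap[0][2]

    code = huffman_code(Counter("aaaabbbccd"))  # skewed toy distribution
    # 'a', the most frequent symbol, gets the shortest codeword;
    # rare 'c' and 'd' get longer ones.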

JBIG2 segments

A JBIG2 bitstream (encoding) is an ordered collection of JBIG2 segments. Each segment begins with a segment header whose type identifier specifies how the data part of the segment, if present, is to be decoded. (Some simpler segment types need no data part, e.g., an end-of-page segment.)

Most segments have a data part of substantial size. Each data part begins with a data header that gives details about how the data was encoded. A JBIG2 decoder examines the segment header and data header of each segment to properly interpret and process the bitstream information.

The sequence of the segments is crucial. To accurately process the data in one segment, the decoder may need to have already processed the data in one or more previous segments. When this happens, the later segment is said to refer to the earlier ones.
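For readers curious about what this looks like at the byte level, here is a simplified sketch of segment header parsing, based on the header layout in the JBIG2 specification (ITU-T T.88). It handles only the common short-form case and ignores retention flags; a real decoder must also handle the long-form referred-to-segment encoding.

    import struct

    def parse_segment_header(buf: bytes, pos: int = 0):
        """Parse one JBIG2 segment header (short form only)."""
        number, flags = struct.unpack_from(">IB", buf, pos)
        pos += 5
        seg_type = flags & 0x3F             # bits 0-5: segment type
        page_assoc_4 = bool(flags & 0x40)   # bit 6: 4-byte page association

        count = buf[pos] >> 5               # top 3 bits: referred-to count
        pos += 1
        if count == 7:
            raise NotImplementedError("long-form referred-to segment count")

        # Each referred-to segment number is 1, 2, or 4 bytes wide,
        # depending on this segment's own number.
        width = 1 if number <= 256 else 2 if number <= 65536 else 4
        referred = [int.from_bytes(buf[pos + i * width:pos + (i + 1) * width], "big")
                    for i in range(count)]
        pos += count * width

        page_size = 4 if page_assoc_4 else 1
        page = int.from_bytes(buf[pos:pos + page_size], "big")
        pos += page_size

        data_length = struct.unpack_from(">I", buf, pos)[0]
        pos += 4
        return {"number": number, "type": seg_type, "referred": referred,
                "page": page, "data_length": data_length}, pos

The returned type and referred-to list are exactly the ordering information described above: the decoder uses them to ensure every referenced segment has been processed first.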

JBIG2 segments can be classified into three loose categories (a sketch mapping concrete segment type codes onto these categories follows the list):

  • Control segments give the decoder "broad boundary" information such as the page dimensions and end-of-page markers.
  • Region segments offer the needed information to produce an image on a page. This information may pertain to the entire page, or a specific rectangular region of the page. A single page may contain several region segments, which can overlap. There are four different types of region segments: generic regions, refinement regions, text regions, and halftone regions.
  • Support segments contain data that will be used by region segments, but are not region segments themselves. Two important support segment types are dictionary segments and pattern segments.
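As a concrete (and abbreviated) illustration, the segment type codes defined in ITU-T T.88 can be bucketed into these three loose categories. The grouping below is our informal reading, not terminology from the standard:

    # A selection of JBIG2 segment type codes, grouped into the three
    # loose categories above (abbreviated; not the spec's full table).
    SEGMENT_CATEGORY = {
        48: "control",   # page information
        49: "control",   # end of page
        50: "control",   # end of stripe
        51: "control",   # end of file
        6:  "region",    # immediate text region
        22: "region",    # immediate halftone region
        38: "region",    # immediate generic region
        40: "region",    # intermediate refinement region
        0:  "support",   # symbol dictionary
        16: "support",   # pattern dictionary
        53: "support",   # custom Huffman tables
    }

    def categorize(segment_type: int) -> str:
        return SEGMENT_CATEGORY.get(segment_type, "unknown")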

JBIG2 features

Unlike TIFF-based methods, JBIG2 can take advantage of image segmentation, i.e., separation of the image into text, picture, and graphical regions. The accuracy of that segmentation affects image quality. As indicated previously, JBIG2 also supports other modes, including those that allow for

  • Lossy and lossless compression
  • Compact addressing
  • Efficient font matching
  • MMR (Modified Modified READ) coding
  • Halftoning picture regions

JBIG2 offers a flexibility never before seen in compression codecs, and that flexibility extends to the most fundamental choice in compression: how much information to preserve. Our discussion continues with descriptions of the two fundamental types of data compression (lossless and lossy) and, offered by a very select group of vendors, a third category called perceptually lossless compression.

Lossless, lossy, and perceptually lossless compression

Lossless compression

A lossless JBIG2 encoding keeps the image bit-for-bit identical to the image at the time of scan. This JBIG2 mode, which typically yields files half the size of a TIFF G4 encoding or smaller, does not allow for any image transformations (e.g., deskewing, rescaling, font matching, and despeckling). A comparison of relative file sizes for TIFF G4, G4-coded PDF, lossless JBIG2, and lossless JBIG2-coded PDF is given in the table that follows.

Figure 3. The lossless JBIG2 images are less than half the size of the corresponding TIFF images. The additional cost of the PDF wrapper is negligible.

As can be seen from the table, there is virtually no file size difference between G4 and PDF-wrapped G4 or between JBIG2 and PDF-wrapped JBIG2. In general, there is minimal overhead for PDF-wrapping of a compression format, assuming the format is supported within the PDF specifications. For most users the additional functionality and accessibility of PDF-wrapping is well worth the slight increase in file size.

The advantage of lossless encoding is that the image quality is guaranteed to be identical to the original. If a user doesn't trust a JBIG2 vendor to make the critical decisions necessary for effective lossy or perceptually lossless compression, lossless is the safest option. Even lossless compression can be implemented in many ways, and the differences between competing vendors can be significant. In addition to differences in compression ratio, there can also be dramatic differences in the speed of encoding and in the latency of displaying and printing the compressed files.

Lossy compression

Lossy compression has been roughly defined as "any method of data compression that reconstructs the original data approximately, rather than exactly." Lossy JBIG2 image coding can result in dramatically reduced file sizes, and smaller files are, of course, the goal. The table below compares file sizes for lossy and lossless JBIG2.

Figure 4. The lossy JBIG2 images are less than 25% of the size of the lossless JBIG2 images.
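Chaining the two captions' ratios together shows why lossy JBIG2 is so attractive. A back-of-the-envelope calculation with a hypothetical starting size (the figures below are illustrative, not measurements):

    tiff_g4 = 100_000                      # hypothetical TIFF G4 scan, ~100 KB
    lossless_jbig2 = 0.5 * tiff_g4         # "less than half" of G4 (Figure 3)
    lossy_jbig2 = 0.25 * lossless_jbig2    # "less than 25%" of lossless (Figure 4)

    print(lossy_jbig2 / tiff_g4)           # 0.125, i.e., roughly 8x smaller than G4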

The problem with lossy JBIG2 is that some implementations are exactly that - lossy. Implemented naively by an unqualified vendor, lossy JBIG2 may significantly degrade image quality. In document management applications with strict record retention policies, such as mortgage banking and medical records, lossy JBIG2 coding is problematic and must be used with caution. A lossy JBIG2 encoding may introduce significant image artifacts and degrade text recognition rates, e.g., fewer word hits when the file is converted to text with an OCR program.

While lossy compression achieves higher compression ratios than lossless JBIG2, for many corporate and professional users the loss of document integrity is not worth the tradeoff. Yet when utilized properly, the techniques of lossy compression can actually improve image quality. With a proper JBIG2 implementation you can drastically reduce file size even as you create a cleaner, more readable document.

Perceptually lossless compression

When the compressed image appears indistinguishable from the original scanned document, the compression is called perceptually lossless.

It is crucial to distinguish between an implementation that sacrifices image quality to gain compression and one that gains compression by improving image quality. Perceptually lossless JBIG2 is where there appears to be significant ROI (return on investment) for the digital imaging industry. This is the mode where digital devices and document management systems can see real benefit from JBIG2 technology. It truly provides the best of both worlds: the file size is similar to what a naive lossy JBIG2 implementation produces, while the image quality of the original is maintained or even improved.

In review, the most effective compression rates for JBIG2 files are achieved using lossy methods, such as machine learning of image font classes and halftone patterns. These methods are called "lossy" because they allow the compressed file to differ in appearance from the original image, and they must be used judiciously, for they have the ability to severely degrade an image. A good JBIG2 implementation will ensure that all lossy methods adhere to the rigorous standard of being perceptually lossless. While the JBIG2 specifications do not require this, it is clearly a desired (if unstated) objective. When JBIG2 compression is done properly, any perceptual differences between the compressed file and the original will be enhancements, not degradations.
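To give a flavor of one such technique: JBIG2 text regions reference glyph bitmaps stored in a shared symbol dictionary, and a lossy encoder may treat two nearly identical bitmaps as the same symbol. The sketch below shows the kind of pixel-difference test an encoder might apply. The function and its threshold are our illustration only; production encoders use far more careful match criteria, since an over-eager match can silently substitute one character for another.

    def same_symbol(a, b, max_diff=2):
        """Decide whether two same-sized binary glyph bitmaps (lists of
        0/1 rows) are close enough to share one dictionary symbol."""
        if len(a) != len(b) or len(a[0]) != len(b[0]):
            return False
        mismatches = sum(pa != pb
                         for row_a, row_b in zip(a, b)
                         for pa, pb in zip(row_a, row_b))
        return mismatches <= max_diff

    glyph1 = [[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 1, 1],
              [1, 0, 0, 1]]                 # two noisy scans of the same letter
    glyph2 = [row[:] for row in glyph1]
    glyph2[0][0] = 1                        # one pixel of scanner noise
    assert same_symbol(glyph1, glyph2)      # store one symbol, reference it twice

A perceptually lossless encoder keeps this mismatch budget tight enough that merged symbols remain visually identical; careless thresholds are precisely how naive lossy implementations degrade text.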