Monochrome and Color Dithering


Dithering techniques are used to render images and graphics with more apparent colors than are actually available on a display. Dithering is sometimes called digital half-toning.

When our visual system is confronted with large regions of high-frequency color changes, it tends to blend the individual colors into a uniform color field. Dithering exploits this property of perception to represent colors that cannot be displayed directly.


When do we need dithering?

Under fixed lighting conditions the typical person can discern approximately 100 different brightness levels. This number varies slightly with hue (for instance, we can see more distinct shades of green than blue). The particular set of 100 levels we can distinguish also changes as a function of the ambient lighting conditions.

The 256 levels available for each primary on a true-color display are usually adequate for representing these 100 levels under normal indoor lighting (when the nonlinearities of the display are properly compensated for). Thus, there is usually no need to dither a true-color display.

A high-color display, however, allows only 32 shades of a given primary, and without dithering you will usually be able to see visible contours between two regions that differ by only one level. Our visual system happens to be particularly sensitive to such contours, and it even amplifies the variation so that it appears more pronounced than the small intensity difference would suggest. This apparent amplification of contours is called Mach banding, and it is named after the psychophysical researcher who first described it.

Dithering is frequently used to represent color images on indexed displays. Given a 256-entry color map, you can only represent approximately 6 levels for each of the red, green, and blue primaries (6x6x6 = 216). However, if just one or two hues are used, it is possible to allocate enough color-table entries (roughly 50 per hue) that dithering can be avoided.
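
To make the 6x6x6 arithmetic concrete, here is a small sketch of my own (not part of the demo code below) that maps an RGB pixel to the index of the nearest entry in a uniform 216-color palette; the class and method names are hypothetical:

// Map an 8-bit-per-channel RGB color to the index of the nearest entry
// in a uniform 6x6x6 (216-entry) color map. Hypothetical helper, for
// illustration only.
public class PaletteExample {
    static int paletteIndex(int r, int g, int b) {
        int ri = (r * 5 + 127) / 255;   // nearest of 6 levels, 0..5
        int gi = (g * 5 + 127) / 255;
        int bi = (b * 5 + 127) / 255;
        return 36*ri + 6*gi + bi;       // index into the 216-entry table
    }

    public static void main(String[] args) {
        System.out.println(paletteIndex(200, 100, 50));   // prints 157
    }
}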

By far the largest consumer of dithering is printed media. You have probably seen dithering yourself on newsprint, or in the printing of continuous-tone images on a laser printer. Most printing technologies offer very little control over the shade of ink that can be deposited at a particular point; instead, only the density of ink is controlled.


Terminology

There are three key ideas that come into play when converting continuous images to dithered images: quantization, thresholding, and noise.

The process of representing a continuous function with discrete values is called quantization (Where have we seen this before?). It can best be visualized with a drawing:

The input values that cause the quantization levels to change output values are called thresholds. The example above has four quantization levels and three thresholds. When the spacing between the thresholds and the quantization levels is constant the process is called uniform quantization. Today, I will only discuss dithering in the context of uniform quantization. However, the concepts are easily adapted to any quantization scheme.
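
As a concrete illustration (a sketch of mine, not the demo code), the following maps an 8-bit value onto a small number of evenly spaced quantization levels; with four levels the outputs are 0, 85, 170, and 255, and the thresholds fall at roughly 42.5, 127.5, and 212.5:

public class QuantizeExample {
    // Map a value in 0..255 to the nearest of `levels` evenly spaced
    // output values. The thresholds sit halfway between adjacent levels.
    static int quantize(int value, int levels) {
        long bin = Math.round(value * (levels - 1) / 255.0);  // which level the value falls in
        return (int) Math.round(255.0 * bin / (levels - 1));  // that level's 0..255 value
    }

    public static void main(String[] args) {
        int[] samples = {40, 43, 127, 129, 212, 214};
        for (int i = 0; i < samples.length; i++)
            System.out.println(samples[i] + " -> " + quantize(samples[i], 4));
    }
}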

Let's look at some examples:

When a signal, or color, is modified from its original value by the addition of some other value, we say that noise has been added. This variation can be regular (a repeated signal that is independent of either the input or output), correlated (a signal that is related to the input), random, or some combination of these.

Dithering can be neatly summarized as a quantization process in which noise has been introduced to the input. The character of the dither is determined entirely by the structure of the noise.


Dither noise patterns

Let's consider some dither noise patterns. One simple type of noise is called uniform or white noise. White noise generates values within some interval such that all values are equally likely. We will add zero-mean white noise to our image. The term zero-mean indicates that the average value of the noise is zero; therefore, the interval of values must be symmetric about zero, so that for every generated value there is an equally likely value of the opposite sign to cancel it. Since all values are equally likely, this assures that the result has an average value of zero.

The only thing left to specify is the range of noise values, called the noise's amplitude. The noise amplitude that makes the most sense when uniform quantization is used is the spacing between thresholds. This is not, however, a requirement. It is rare to specify a larger amplitude (Why?), but slightly smaller amplitudes are frequently used. Let's look back at our example to see what random-noise dithering looks like.
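
A minimal sketch of my own (not the demo code) of generating zero-mean white noise whose peak-to-peak range equals the spacing between thresholds; the RandomRaster class below generates its noise in essentially this way:

import java.util.Random;

// Generate zero-mean white noise with a peak-to-peak range equal to the
// spacing between thresholds (85 when there are four levels over 0..255).
// Every value in the range is equally likely, so the mean is essentially zero.
public class WhiteNoiseExample {
    public static void main(String[] args) {
        Random rng = new Random();
        int spacing = 85;                                         // threshold spacing for 4 levels
        for (int i = 0; i < 8; i++) {
            int noise = (rng.nextInt(256) - 128) * spacing / 256; // roughly -42 .. +42
            System.out.println(noise);
        }
    }
}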

The result is not as good as expected. The noise pattern tends to clump in different regions of the image. The unsettling aspect of this clumping is that it is unrelated to the image; thus the dithering process adds apparent detail that is not really in the image.

The next noise function uses a regular spatial pattern. This technique is called ordered dithering. Ordered dithering adds a noise pattern that repeats over a small tile of pixels, with a specific amplitude at each position in the tile.
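
The specific amplitudes usually come from a small matrix that tiles the image. As an illustration of my own (not the demo code), the classic 4x4 Bayer ordering matrix can be turned into zero-mean offsets, each an odd multiple of one thirty-second of the noise amplitude; this is the same form as the table built by the OrderedDither class below:

// Derive ordered-dither noise offsets from the classic 4x4 Bayer
// ordering matrix (entries 0..15). Each offset is an odd multiple of
// threshold/32, symmetric about zero, so the tile as a whole is zero-mean.
public class BayerExample {
    static final int[][] BAYER = {
        { 0,  8,  2, 10},
        {12,  4, 14,  6},
        { 3, 11,  1,  9},
        {15,  7, 13,  5},
    };

    public static void main(String[] args) {
        int threshold = 64;    // the amplitude setQuantLevels(4) would compute
        for (int y = 0; y < 4; y++) {
            for (int x = 0; x < 4; x++) {
                int offset = ((2 * BAYER[y][x] - 15) * threshold) / 32;
                System.out.print(offset + "\t");
            }
            System.out.println();
        }
    }
}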

The final technique, called error diffusion, uses correlated noise based on the input image.
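
The ErrorDiffusion class in the implementation section distributes each pixel's quantization error to its unprocessed neighbors using the Floyd-Steinberg weights (7/16 to the right, 3/16 below left, 5/16 below, 1/16 below right). Here is a minimal single-channel sketch of my own of the same idea, quantizing a grayscale image to black and white:

// Floyd-Steinberg error diffusion on a single grayscale channel,
// quantizing each pixel to black (0) or white (255). The quantization
// error is split 7/16 right, 3/16 below left, 5/16 below, 1/16 below
// right, the same weights used by the ErrorDiffusion class.
public class FSExample {
    static void dither(int[][] img) {
        int h = img.length, w = img[0].length;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int v = Math.max(0, Math.min(255, img[y][x]));   // clamp accumulated value
                int q = (v < 128) ? 0 : 255;                     // two-level quantization
                int err = v - q;
                img[y][x] = q;
                if (x+1 < w)             img[y][x+1]   += (7*err)/16;
                if (y+1 < h && x-1 >= 0) img[y+1][x-1] += (3*err)/16;
                if (y+1 < h)             img[y+1][x]   += (5*err)/16;
                if (y+1 < h && x+1 < w)  img[y+1][x+1] += (1*err)/16;
            }
        }
    }

    public static void main(String[] args) {
        int[][] img = { {100, 100, 100, 100}, {100, 100, 100, 100} };
        dither(img);
        for (int y = 0; y < img.length; y++) {
            for (int x = 0; x < img[y].length; x++)
                System.out.print(img[y][x] + " ");
            System.out.println();
        }
    }
}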


Implementation

The following is an abstract class that all of my dithering demos are derived from:


import java.awt.*;
import Raster;

public abstract class DitherRaster extends Raster { 
    protected int quantize[];   // maps each input value 0..255 to its quantization level
    protected int threshold;    // noise amplitude used by the subclasses, approximately 255/levels

    public DitherRaster(Image img)
    {
        super(img);
    }

    public DitherRaster(DitherRaster in)
    {
        width = in.width;
        height = in.height;
        pixel = in.pixel;
        quantize = in.quantize;
        threshold = in.threshold;
    }

    public void setQuantLevels(int levels)
    {
        if (quantize == null)
            quantize = new int[256];

        threshold = (510 + levels) / (2*levels);
        for (int i = 0; i < 256; i++) {
            // round i to the nearest of the evenly spaced output levels
            int q = (int) Math.round(255.0*Math.round(i*(levels-1)/255.0)/(levels-1));
            quantize[i] = q;
        }
    }

    public abstract int getNoiseyPixel(int x, int y);

    public void draw(Raster bgnd)
    {
        // center the image on the background and copy it one dithered
        // pixel at a time, skipping fully transparent pixels
        int x = (bgnd.width - width) / 2;
        int y = (bgnd.height - height) / 2;
        for (int j = 0; j < height; j++) {
            if ((y+j >= 0) && (y+j < bgnd.height)) {
                for (int i = 0; i < width; i++) {
                    if ((x+i >= 0) && (x+i < bgnd.width)) {
                        int pix = getNoiseyPixel(i, j);
                        if ((pix & 0xff000000) != 0)
                            bgnd.setPixel(pix, x+i, y+j);
                    }
                }
            }
        }
    }
}

The following implements the no dithering case:


import java.awt.*;
import DitherRaster;

public class NoDitherRaster extends DitherRaster { 

    public NoDitherRaster(Image img)
    {
        super(img);
    }

    public NoDitherRaster(DitherRaster in)
    {
        super(in);
    }

    public int getNoiseyPixel(int x, int y) {
        return getPixel(x,y);
    }
}

Then the random dithering raster.


import java.awt.*;
import java.util.*;
import DitherRaster;

public class RandomRaster extends DitherRaster { 
    Random whiteNoise;

    public RandomRaster(Image img)
    {
        super(img);
        whiteNoise = new Random();
    }

    public RandomRaster(DitherRaster in)
    {
        super(in);
        whiteNoise = new Random();
    }

    private int random;     // buffered random bits
    private int rcount;     // bytes of noise remaining in the buffer

    public int getNoiseyPixel(int x, int y) {
        int pix = getPixel(x,y);
        int a = pix & 0xff000000;
        int r = (pix >> 16) & 255;
        int g = (pix >> 8) & 255;
        int b = pix & 255;

        // refill the noise buffer every four pixels
        if (rcount == 0) {
            random = whiteNoise.nextInt();
            rcount = 4;
        }

        // take one byte of white noise, make it zero-mean (-128..127),
        // and scale it to the amplitude stored in threshold
        int noise = (random & 255) - 128;
        noise = (threshold * noise) >> 8;
        random >>= 8;
        rcount -= 1;

        // add the same noise to each primary, clamp to 0..255, then quantize
        r += noise;
        g += noise;
        b += noise;

        r = ((r & ~255) == 0) ? r : ((r < 0) ? 0 : 255);
        g = ((g & ~255) == 0) ? g : ((g < 0) ? 0 : 255);
        b = ((b & ~255) == 0) ? b : ((b < 0) ? 0 : 255);

        r = quantize[r] << 16;
        g = quantize[g] << 8;
        b = quantize[b];
        return (a|r|g|b);
    }
}

Then the ordered dithering raster.


import java.awt.*;
import java.util.*;
import DitherRaster;

public class OrderedDither extends DitherRaster {
    protected int pattern[];    // 4x4 tile of noise offsets, stored row by row

    public OrderedDither(Image img)
    {
        super(img);
        pattern = new int[16];
        initPattern();
    }

    public OrderedDither(DitherRaster in)
    {
        super(in);
        pattern = new int[16];
        initPattern();
    }

    public void setQuantLevels(int levels)
    {
        super.setQuantLevels(levels);
        initPattern();
    }

    // The offsets are odd multiples of threshold/32, symmetric about zero,
    // arranged so that neighboring positions get widely separated values.
    public void initPattern()
    {
        pattern[0]  = (15*threshold)/32;
        pattern[1]  = (-1*threshold)/32;
        pattern[2]  = (11*threshold)/32;
        pattern[3]  = (-5*threshold)/32;
        pattern[4]  = (-9*threshold)/32;
        pattern[5]  = (7*threshold)/32;
        pattern[6]  = (-13*threshold)/32;
        pattern[7]  = (3*threshold)/32;
        pattern[8]  = (9*threshold)/32;
        pattern[9]  = (-7*threshold)/32;
        pattern[10] = (13*threshold)/32;
        pattern[11] = (-3*threshold)/32;
        pattern[12] = (-15*threshold)/32;
        pattern[13] = (1*threshold)/32;
        pattern[14] = (-11*threshold)/32;
        pattern[15] = (5*threshold)/32;
    }

    public int getNoiseyPixel(int x, int y) {
        int pix = getPixel(x, y);
        int a = pix & 0xff000000;
        int r = (pix >> 16) & 255;
        int g = (pix >> 8) & 255;
        int b = pix & 255;

        // select the offset by the pixel's position within its 4x4 tile
        int i = 4*(y & 3) + (x & 3);
        r = r + pattern[i];
        g = g + pattern[i];
        b = b + pattern[i];

        // clamp to 0..255, then quantize each primary
        r = ((r & ~255) == 0) ? r : ((r < 0) ? 0 : 255);
        g = ((g & ~255) == 0) ? g : ((g < 0) ? 0 : 255);
        b = ((b & ~255) == 0) ? b : ((b < 0) ? 0 : 255);

        r = quantize[r] << 16;
        g = quantize[g] << 8;
        b = quantize[b];
        return (a|r|g|b);
    }
}

Then the error-diffusion dithering raster.


import java.awt.*;
import java.util.*;
import DitherRaster;

public class ErrorDiffusion extends DitherRaster {
    // accumulated quantization error for the current and next scanlines,
    // kept in two rows that are reused alternately
    protected int rerror[][];
    protected int gerror[][];
    protected int berror[][];

    public ErrorDiffusion(Image img)
    {
        super(img);
        rerror = new int[2][width];
        gerror = new int[2][width];
        berror = new int[2][width];
        zeroError();
    }

    public ErrorDiffusion(DitherRaster in)
    {
        super(in);
        rerror = new int[2][width];
        gerror = new int[2][width];
        berror = new int[2][width];
        zeroError();
    }

    public void setQuantLevels(int levels)
    {
        super.setQuantLevels(levels);
        zeroError();
    }

    private void zeroError()
    {
        for (int i = 0; i < width; i++) {
            rerror[0][i] = 0;   rerror[1][i] = 0;
            gerror[0][i] = 0;   gerror[1][i] = 0;
            berror[0][i] = 0;   berror[1][i] = 0;
        }
    }

    // Assumes pixels are visited in scanline order, as draw() does.
    public int getNoiseyPixel(int x, int y) {
        int pix = getPixel(x,y);
        int a = pix & 0xff000000;
        int r = (pix >> 16) & 255;
        int g = (pix >> 8) & 255;
        int b = pix & 255;

        int thisrow = y & 1;
        int nextrow = (y+1) & 1;

        // add the error diffused into this pixel from earlier pixels
        r = r + rerror[thisrow][x];
        g = g + gerror[thisrow][x];
        b = b + berror[thisrow][x];

        // clamp to 0..255, then quantize each primary
        r = ((r & ~255) == 0) ? r : ((r < 0) ? 0 : 255);
        g = ((g & ~255) == 0) ? g : ((g < 0) ? 0 : 255);
        b = ((b & ~255) == 0) ? b : ((b < 0) ? 0 : 255);

        int qr = quantize[r];
        int qg = quantize[g];
        int qb = quantize[b];

        // clear this slot so the row buffer can be reused for row y+2
        rerror[thisrow][x] = 0;
        gerror[thisrow][x] = 0;
        berror[thisrow][x] = 0;

        // distribute the quantization error to the unprocessed neighbors
        // with the Floyd-Steinberg weights: 7/16 right, 3/16 below left,
        // 5/16 below, 1/16 below right (the +8 rounds the division by 16)
        r -= qr;
        g -= qg;
        b -= qb;
        rerror[nextrow][x] += (5*r + 8) >> 4;
        gerror[nextrow][x] += (5*g + 8) >> 4;
        berror[nextrow][x] += (5*b + 8) >> 4;

        if (x-1 >= 0) {
            rerror[nextrow][x-1] += (3*r + 8) >> 4;
            gerror[nextrow][x-1] += (3*g + 8) >> 4;
            berror[nextrow][x-1] += (3*b + 8) >> 4;
        }

        if (x+1 < width) {
            rerror[thisrow][x+1] += (7*r + 8) >> 4;
            rerror[nextrow][x+1] += (r + 8) >> 4;
            gerror[thisrow][x+1] += (7*g + 8) >> 4;
            gerror[nextrow][x+1] += (g + 8) >> 4;
            berror[thisrow][x+1] += (7*b + 8) >> 4;
            berror[nextrow][x+1] += (b + 8) >> 4;
        }

        r = qr << 16;
        g = qg << 8;
        b = qb;
        return (a|r|g|b);
    }
}
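
Finally, a hypothetical usage sketch showing how these classes might fit together. The applet framing and image names are assumptions of mine; only the constructors, setQuantLevels, and draw methods shown above are relied on, and the background is itself a NoDitherRaster so that it can serve as the destination Raster:

import java.awt.*;

// Hypothetical driver showing how the dithering rasters might be used
// together. The applet framing and image names are assumptions; how the
// resulting background raster gets displayed is omitted.
public class DitherDemo extends java.applet.Applet {
    public void init() {
        Image src  = getImage(getDocumentBase(), "source.gif");
        Image back = getImage(getDocumentBase(), "background.gif");

        DitherRaster background = new NoDitherRaster(back);  // destination raster
        DitherRaster dithered   = new ErrorDiffusion(src);   // or RandomRaster, OrderedDither
        dithered.setQuantLevels(4);                          // 4 quantization levels per primary
        dithered.draw(background);                           // center and dither into the background
    }
}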

