by Maria Ingold
The use of computer images has increased dramatically in recent years. Simple charts and graphs clarify numbers in cost analysis applications. Advertising agencies create magazine layouts on computers rather than on paper. Photographs can be digitized with a scanner, or taken directly with a digital camera and stored on digital media for later viewing and processing. Computer images are composed of a series of mathematical objects, as in a graph, or are samples of a two-dimensional (2-D) or three-dimensional (3-D) scene from the real world, as with the digitization of a photograph. These two types of images are respectively referred to as vector images and raster images. The difference between these two types of images is how they are drawn and how they are stored.
X.3.1 Vector Images
A vector image might be a 2-D graphic from a simple drawing program, or an elaborate 3-D wireframe model from an architectural design package. Vector images are created from a set of objects and their characteristics such as size and color. The instructions and attributes for redrawing them are stored in an image file typically referred to as a metafile. To be precise, a metafile can store both vector information and raster data. The OS/2 Presentation Manager (PM) Metafile and Computer Graphics Metafile (CGM) are two types of metafile formats.
If a vector image contained an ellipse, the metafile would preserve information about the center, radii, line width, outline color, and fill color. Storing just these parameters rather than the entire image means that modifications, such as size and color, are simple to make. This also means that when an image is scaled, only the size value changes while the file size remains the same.
In drawing applications, line art (such as a happy face made from a circle, two dots, and an arc) and fonts are often created as vector images since they are frequently resized and modified. However, vector graphics are not appropriate for all images. Each movement or modification of a vector image forces it to be regenerated. Since it takes a relatively long time to draw a vector image on a raster device, such as a display, vector graphics are impractical for objects such as mouse cursors, icons, and fonts, which are constantly changing position on the screen. Paintings and photographs are difficult to reproduce realistically using vector objects because of the level of detail involved. Imagine trying to reproduce the quality of Leonardo Da Vinci’s Mona Lisa with a collection of circles and lines. These types of images are better represented as raster images.
X.3.2 Raster Images
Raster images, also known as bitmaps, are displayed on raster devices, such as a screen or a dot matrix printer. A raster device is composed of a rectangular matrix of picture elements referred to as pixels or pels. Each pel is assigned a color. When a photograph is scanned in, the color of the pel corresponds to the color used in the equivalent location of the original image. The computer records this image by internally mapping the pels to a series of bits stored in adjacent addresses of memory. These bits are contained in a graphics object called a bitmap. A bitmap file (.BMP, .PCX, etc.) also retains information such as the size, number of colors, and file format associated with the image.
The source for a raster image can be generated by using a paint package, by using equations to generate an image (e.g., a fractal landscape), or by digitizing a real-world scene. A digitized image can be created from a 2-D source, such as a photograph, or a 3-D source, such as a room monitored by a video camera. When a computer image is captured via a camera, the image source is analog. The digital image is generated by sampling the analog source at specified intervals; these samples produce the pixel values. The number of sampled locations per unit area, such as 300 DPI (dots per inch), defines the sampling rate. The output device for the sampled image determines what resolution should be used. High-resolution phototypesetter devices can achieve resolutions in the range of 3500 DPI, whereas screen resolution is only about 96 DPI.
A bitmap is output to a raster device one row of pels at a time. The bits for each row, or scan line, are packed together in memory. OS/2 pads the end of each scan line to ensure that every line begins on a 32-bit boundary, and displays the scan lines in bottom-to-top order. This means that the bottom scan line exists in the low memory addresses, with the bits used to represent the leftmost pel stored first. This is referred to as a bottom-up algorithm as opposed to a top-down method. The top-down method is used by Windows.
X.3.3 Storing Color Information
Color information for bitmaps is stored in either a multiple color plane format or a single plane format with a multiple bit count. Color information stored in an array of bits is called a color plane. In the multiple color plane format this information is stored in separate image arrays. For example, in the three-plane format one plane corresponds to red pels, a second to green pels, and a third to blue pels. The image is then displayed by reading the pels from all the planes in parallel, and combining them in real time. This combining is typically done in hardware and allows for various binary operations such as ANDing and ORing of the planes. This method is frequently used on expensive workstations. The VGA (Video Graphics Array) device and its predecessors EGA (Enhanced Graphics Adapter) and CGA (Color Graphics Adapter) also use this multiple plane format. Their four-bit color is stored in four planes, one bit per plane. Most bitmaps use the single plane format but can be converted internally to any multiplane format supported by a device. OS/2 stores the color information bits consecutively in a single plane. The number of adjacent bits needed to create each pel is referred to as the bit count.
On a monochrome display, black, or pel off, and another color, or pel on, represent the two color choices. Storing this choice requires only one bit per pel, or a bit count of one. The bit is either on or off. To create eight distinct colors, three bits are needed, one for red, one for green, and one for blue. As the color depth (number of colors) increases, a greater number of bits per pel is needed to represent all the possible colors. For example, eight bits per pel produces 256 colors, sixteen bits per pel maps to 65,536 colors, and twenty-four bits per pel yields 16.7 million colors. Twenty-four bit color is also referred to as RGB, from red, green, and blue. An RGB value uses eight bits, or a byte, to store the intensity of each color. This means that each color can range in intensity from 0 to 255, where 0 indicates that the color is absent, and 255 displays the color at full intensity.
Some bitmap formats use a color table, which is an array of RGB values. The number of entries in the color table corresponds to the number of bits per pel. Specifically, a color table for n bits per pel has 2^n entries.
The physical color table, or palette, designates the colors the device can currently create. This is frequently a subset of the total number of colors available to the system. There is often a tradeoff between the screen resolution and the number of colors that can be displayed. At lower resolutions more colors are available and at higher resolutions fewer colors can be displayed. This is because a display adapter has only enough memory or VRAM (Video Random Access Memory) to support certain combinations. Remembering that 256 colors are contained in 8 bits or one byte, 65,536 colors are represented by 16 bits or 2 bytes of information, and 16.7 million colors are stored in 24 bits or 3 bytes of information, take a look at the following relations in Table X.3.
|Color Depth in Bytes||Horizontal Resolution||Vertical Resolution||Total Bytes|
|1 (256 colors)||1024||768||786,432|
|2 (65,536 colors)||800||600||960,000|
|3 (16.7 million colors)||640||480||921,600|
Table X.3 Total video memory required for various resolution and color combinations.
A display adapter with one MB of VRAM supports resolutions of 1024 x 768 pels with 256 colors, up to a size of about 800 x 600 with 65,536 colors, and only a size of about 640 x 480 if the card supports 24-bit true color. Cards with two MB of VRAM can support more colors at larger resolutions.
If a certain color is needed in the physical color palette, the default color table can be overwritten, and a new palette can be loaded. By ensuring that a color is available, dithering, or a “checkerboard” approximation of a color, is prevented.
While the physical color table contains the RGB definitions that correspond to the device color, the logical color table defines colors that are specific to an application. A logical color table, also known as a color look-up table (CLUT), provides a color mapping from an index value used by the application to an RGB value. The OS/2 GPI and graphics drivers then map the RGB value to a device color in the physical color table.
X.3.4 Image Compression
The primary problem with bitmap images is the number of bits required to store them. A 640 x 480 image with 256 colors requires 307,200 bytes of storage. With 16.7 million colors, the same image needs 921,600 bytes. Since display adapters are increasing in both resolution and color capability, the memory requirements for bitmaps also continue to grow. Image compression is needed to minimize image storage sizes.
Image compression, like audio compression, can be either lossless or lossy. Lossless compression is also known as bit-preserving or reversible compression. In this case, when the compressed image is reconstructed, each pixel in the image is identical to each pixel in the original image. Only minimal compression can be achieved using this technique. Better compression ratios are achieved using lossy or irreversible compression. With lossy compression, the reconstructed image will contain degradations when compared to the original. When these degradations are not visually apparent to the human eye, it is referred to as visually lossless compression.
Image compression is generally achieved by eliminating redundancy within an image. The two main types of redundancy are spatial and spectral. In spatial compression, correlations or dependencies between neighboring pixel values are ascertained. Spectral compression finds correlations between different color planes or spectral bands. Redundancies can be found between single pixel values both horizontally and vertically, within or between pixel blocks, or between lines.
The implementation of compression algorithms differs between compression formats. Several image compression standards are in existence today. For bilevel images such as facsimiles the standard is the JBIG (Joint Bilevel Imaging Group) algorithm. Monochrome and color images use an algorithm proposed by JPEG (Joint Photographic Experts Group). This uses a lossy compression method called the DCT (discrete cosine transform) and operates on 8 x 8 blocks of pixels. The result is a compressed image roughly 10 to 20 times smaller than the original.
X.3.5 OS/2 Multimedia Image Formats
In addition to these compression standards, there are a number of other compressed and uncompressed image formats available. OS/2 Multimedia provides device-independent support for the image formats listed in Table X.4. Additional image format support can be added via I/O Procedures as we will see in section X.8.2 on Multimedia I/O.
|Image Format||File Extension|
|GIF (Graphics Interchange Format) Image Compressed||.GIF|
|IBM Audio Visual Connection (AVC) Still Video Image||._IM, .!IM, and .ID|
|IBM M-Motion Still Video Image||.VID|
|IBM OS/2 1.3 Bitmap||.BMP|
|IBM OS/2 2.0 Bitmap||.BMP|
|MS Windows Device-Independent Bitmap (DIB)||.DIB|
|PCX (PC Paintbrush) Image Compressed||.PCX|
|RIFF Device Independent Bitmap||.RDI|
|Targa Image Compressed||.TGA|
|Targa Image Uncompressed||.TGA|
|TIFF (Tagged Image File Format) FAX Compressed||.TIF|
|TIFF Intel Compressed||.TIF|
|TIFF Intel Uncompressed||.TIF|
|TIFF Motorola Compressed||.TIF|
|TIFF Motorola Uncompressed||.TIF|
Table X.4 OS/2 Multimedia Image Formats