jdk.awt.image.AffineTransformOp

This class uses an affine transform to perform a linear mapping from 2D coordinates in the source image or Raster to 2D coordinates in the destination image or Raster. The type of interpolation that is used is specified through a constructor, either by a RenderingHints object or by one of the integer interpolation types defined in this class.

If a RenderingHints object is specified in the constructor, the interpolation hint and the rendering quality hint are used to set the interpolation type for this operation. The color rendering hint and the dithering hint can be used when color conversion is required.

Note that the following constraints have to be met:

- The source and destination must be different.
- For Raster objects, the number of bands in the source must be equal to the number of bands in the destination.

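As a minimal Java sketch of typical usage (the image size and rotation angle are arbitrary): build an AffineTransform, wrap it in an AffineTransformOp with one of the integer interpolation types, and filter the source into a freshly allocated destination.

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class AffineExample {
    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(200, 100, BufferedImage.TYPE_INT_ARGB);

        // Rotate 45 degrees about the center of the source image.
        AffineTransform tx = AffineTransform.getRotateInstance(
                Math.toRadians(45), src.getWidth() / 2.0, src.getHeight() / 2.0);

        // Interpolation chosen via the integer constant rather than RenderingHints.
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);

        // Passing null lets the op allocate a destination; it must differ from src.
        BufferedImage dst = op.filter(src, null);
    }
}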

jdk.awt.image.AreaAveragingScaleFilter

An ImageFilter class for scaling images using a simple area averaging algorithm that produces smoother results than the nearest neighbor algorithm. This class extends the basic ImageFilter class to scale an existing image and provide a source for a new image containing the resampled image. The pixels in the source image are blended to produce pixels for an image of the specified size. The blending process is analogous to scaling up the source image to a multiple of the destination size using pixel replication and then scaling it back down to the destination size by simply averaging all the pixels in the supersized image that fall within a given pixel of the destination image. If the data from the source is not delivered in TopDownLeftRight order then the filter will back off to a simple pixel replication behavior and utilize the requestTopDownLeftRightResend() method to refilter the pixels in a better way at the end.

It is meant to be used in conjunction with a FilteredImageSource object to produce scaled versions of existing images. Due to implementation dependencies, there may be differences in pixel values of an image filtered on different platforms.

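A sketch of the FilteredImageSource pairing described above; the file name and target size are placeholders.

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.AreaAveragingScaleFilter;
import java.awt.image.FilteredImageSource;

public class ScaleExample {
    public static void main(String[] args) {
        // Placeholder source; any ImageProducer-backed Image works.
        Image src = Toolkit.getDefaultToolkit().getImage("photo.png");

        // Scale to 160x120 by area averaging.
        AreaAveragingScaleFilter filter = new AreaAveragingScaleFilter(160, 120);
        Image scaled = Toolkit.getDefaultToolkit()
                .createImage(new FilteredImageSource(src.getSource(), filter));
    }
}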

jdk.awt.image.BandCombineOp

This class performs an arbitrary linear combination of the bands in a Raster, using a specified matrix.

The width of the matrix must be equal to the number of bands in the source Raster, optionally plus one. If there is one more column in the matrix than the number of bands, there is an implied 1 at the end of the vector of band samples representing a pixel. The height of the matrix must be equal to the number of bands in the destination.

For example, a 3-banded Raster might have the following transformation applied to each pixel in order to invert the second band of the Raster.

  [ 1.0   0.0   0.0    0.0  ]     [ b1 ]
  [ 0.0  -1.0   0.0  255.0  ]  x  [ b2 ]
  [ 0.0   0.0   1.0    0.0  ]     [ b3 ]
                                  [ 1 ]

Note that the source and destination can be the same object.

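A minimal Java sketch of the inversion above (image size and type are arbitrary): each matrix row produces one destination band from the source vector (b1, b2, b3, 1), and the op is applied in place.

import java.awt.image.BandCombineOp;
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class BandCombineExample {
    public static void main(String[] args) {
        // Rows = destination bands; columns = source bands plus the implied 1.
        float[][] matrix = {
            { 1.0f,  0.0f, 0.0f,   0.0f },
            { 0.0f, -1.0f, 0.0f, 255.0f },
            { 0.0f,  0.0f, 1.0f,   0.0f },
        };
        BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_3BYTE_BGR);
        WritableRaster raster = img.getRaster();

        // Source and destination rasters may be the same object.
        new BandCombineOp(matrix, null).filter(raster, raster);
    }
}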

jdk.awt.image.BandedSampleModel

This class represents image data which is stored in a band interleaved fashion and for which each sample of a pixel occupies one data element of the DataBuffer. It subclasses ComponentSampleModel but provides a more efficient implementation for accessing band interleaved image data than is provided by ComponentSampleModel. This class should typically be used when working with images which store sample data for each band in a different bank of the DataBuffer. Accessor methods are provided so that image data can be manipulated directly.

Pixel stride is the number of data array elements between two samples for the same band on the same scanline. The pixel stride for a BandedSampleModel is one. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Band offsets denote the number of data array elements from the first data array element of the bank of the DataBuffer holding each band to the first sample of the band. The bands are numbered from 0 to N-1. Bank indices denote the correspondence between a bank of the data buffer and a band of image data. This class supports the TYPE_BYTE, TYPE_USHORT, TYPE_SHORT, TYPE_INT, TYPE_FLOAT, and TYPE_DOUBLE data types.

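A small sketch, with arbitrary dimensions, of building a raster on a BandedSampleModel and addressing a single sample directly.

import java.awt.image.BandedSampleModel;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class BandedExample {
    public static void main(String[] args) {
        // 3-band byte data, one bank per band.
        BandedSampleModel sm = new BandedSampleModel(DataBuffer.TYPE_BYTE, 640, 480, 3);

        // A null location places the raster origin at (0, 0).
        WritableRaster raster = Raster.createWritableRaster(sm, null);

        raster.setSample(10, 20, 1, 128);          // x=10, y=20, band 1
        int sample = raster.getSample(10, 20, 1);  // reads the value back
    }
}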

jdk.awt.image.BufferedImage

The BufferedImage subclass describes an Image with an accessible buffer of image data. A BufferedImage is comprised of a ColorModel and a Raster of image data. The number and types of bands in the SampleModel of the Raster must match the number and types required by the ColorModel to represent its color and alpha components. All BufferedImage objects have an upper left corner coordinate of (0, 0). Any Raster used to construct a BufferedImage must therefore have minX=0 and minY=0.

This class relies on the data fetching and setting methods of Raster, and on the color characterization methods of ColorModel.

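A minimal sketch (dimensions arbitrary) of creating a BufferedImage, rendering into it, and reading a pixel back.

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class BufferedImageExample {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(320, 240, BufferedImage.TYPE_INT_ARGB);

        Graphics2D g = img.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(0, 0, img.getWidth(), img.getHeight());
        g.dispose();

        // The upper-left corner is always (0, 0); getRGB returns a packed ARGB int.
        int argb = img.getRGB(0, 0);
    }
}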

jdk.awt.image.BufferedImageFilter

The BufferedImageFilter class subclasses an ImageFilter to provide a simple means of using a single-source/single-destination image operator (BufferedImageOp) to filter a BufferedImage in the Image Producer/Consumer/Observer paradigm. Examples of these image operators are: ConvolveOp, AffineTransformOp and LookupOp.

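As a sketch of that pairing, wrapping a ConvolveOp so it can run in the producer/consumer pipeline; the file name is a placeholder.

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.BufferedImageFilter;
import java.awt.image.ConvolveOp;
import java.awt.image.FilteredImageSource;
import java.awt.image.Kernel;
import java.util.Arrays;

public class BufferedImageFilterExample {
    public static void main(String[] args) {
        // A 3x3 box blur expressed as a BufferedImageOp.
        float[] weights = new float[9];
        Arrays.fill(weights, 1f / 9f);
        ConvolveOp blur = new ConvolveOp(new Kernel(3, 3, weights));

        Image src = Toolkit.getDefaultToolkit().getImage("photo.png"); // placeholder
        Image blurred = Toolkit.getDefaultToolkit().createImage(
                new FilteredImageSource(src.getSource(), new BufferedImageFilter(blur)));
    }
}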

jdk.awt.image.BufferedImageOp

This interface describes single-input/single-output operations performed on BufferedImage objects. It is implemented by AffineTransformOp, ConvolveOp, ColorConvertOp, RescaleOp, and LookupOp. These objects can be passed into a BufferedImageFilter to operate on a BufferedImage in the ImageProducer-ImageFilter-ImageConsumer paradigm.

Classes that implement this interface must specify whether or not they allow in-place filtering: filter operations where the source object is equal to the destination object.

This interface cannot be used to describe more sophisticated operations such as those that take multiple sources. Note that this restriction also means that the values of the destination pixels prior to the operation are not used as input to the filter operation.

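Because every implementation shares the same filter method, callers can be written against the interface; a sketch using RescaleOp (the scale factor is arbitrary).

import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.RescaleOp;

public class OpExample {
    public static void main(String[] args) {
        // Any BufferedImageOp is applied the same way; here, brighten by 20%.
        BufferedImageOp op = new RescaleOp(1.2f, 0f, null);

        BufferedImage src = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        BufferedImage dst = op.filter(src, null); // null: the op allocates the destination
    }
}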

jdk.awt.image.BufferStrategy

The BufferStrategy class represents the mechanism with which to organize complex memory on a particular Canvas or Window. Hardware and software limitations determine whether and how a particular buffer strategy can be implemented. These limitations are detectable through the capabilities of the GraphicsConfiguration used when creating the Canvas or Window.

It is worth noting that the terms buffer and surface are meant to be synonymous: an area of contiguous memory, either in video device memory or in system memory.

There are several types of complex buffer strategies, including sequential ring buffering and blit buffering. Sequential ring buffering (i.e., double or triple buffering) is the most common; an application draws to a single back buffer and then moves the contents to the front (display) in a single step, either by copying the data or moving the video pointer. Moving the video pointer exchanges the buffers so that the first buffer drawn becomes the front buffer, or what is currently displayed on the device; this is called page flipping.

Alternatively, the contents of the back buffer can be copied, or blitted forward in a chain instead of moving the video pointer.

Double buffering:

                   ***********         ***********
                   *         * ------> *         *
[To display] <---- * Front B *   Show  * Back B. * <---- Rendering
                   *         * <------ *         *
                   ***********         ***********

Triple buffering:

[To      ***********         ***********        ***********
display] *         * --------+---------+------> *         *
   <---- * Front B *   Show  * Mid. B. *        * Back B. * <---- Rendering
         *         * <------ *         * <----- *         *
         ***********         ***********        ***********

Here is an example of how buffer strategies can be created and used:

// Check the capabilities of the GraphicsConfiguration
// ...

// Create our component
Window w = new Window(gc);

// Show our window
w.setVisible(true);

// Create a general double-buffering strategy
w.createBufferStrategy(2);
BufferStrategy strategy = w.getBufferStrategy();

// Main loop
while (!done) {
    // Prepare for rendering the next frame
    // ...

    // Render single frame
    do {
        // The following loop ensures that the contents of the drawing buffer
        // are consistent in case the underlying surface was recreated
        do {
            // Get a new graphics context every time through the loop
            // to make sure the strategy is validated
            Graphics graphics = strategy.getDrawGraphics();

            // Render to graphics
            // ...

            // Dispose the graphics
            graphics.dispose();

            // Repeat the rendering if the drawing buffer contents
            // were restored
        } while (strategy.contentsRestored());

        // Display the buffer
        strategy.show();

        // Repeat the rendering if the drawing buffer was lost
    } while (strategy.contentsLost());
}

// Dispose the window
w.setVisible(false);
w.dispose();

jdk.awt.image.ByteLookupTable

This class defines a lookup table object. The output of a lookup operation using an object of this class is interpreted as an unsigned byte quantity. The lookup table contains byte data arrays for one or more bands (or components) of an image, and it contains an offset which will be subtracted from the input values before indexing the arrays. This allows an array smaller than the native data size to be provided for a constrained input. If there is only one array in the lookup table, it will be applied to all bands.

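A sketch pairing ByteLookupTable with LookupOp, which is how the table is normally applied (image size is arbitrary).

import java.awt.image.BufferedImage;
import java.awt.image.ByteLookupTable;
import java.awt.image.LookupOp;

public class LookupExample {
    public static void main(String[] args) {
        // A 256-entry table that inverts each unsigned byte value.
        byte[] invert = new byte[256];
        for (int i = 0; i < invert.length; i++) {
            invert[i] = (byte) (255 - i);
        }

        // Offset 0; a single array, so it applies to all bands.
        ByteLookupTable table = new ByteLookupTable(0, invert);

        BufferedImage src = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        BufferedImage dst = new LookupOp(table, null).filter(src, null);
    }
}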

jdk.awt.image.ColorConvertOp

This class performs a pixel-by-pixel color conversion of the data in the source image. The resulting color values are scaled to the precision of the destination image. Color conversion can be specified via an array of ColorSpace objects or an array of ICC_Profile objects.

If the source is a BufferedImage with premultiplied alpha, the color components are divided by the alpha component before color conversion. If the destination is a BufferedImage with premultiplied alpha, the color components are multiplied by the alpha component after conversion. Rasters are treated as having no alpha channel, i.e. all bands are color bands.

If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used to control color conversion.

Note that Source and Destination may be the same object.

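A minimal sketch converting an sRGB image to grayscale via a target ColorSpace (image size is arbitrary).

import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;

public class ConvertExample {
    public static void main(String[] args) {
        ColorSpace gray = ColorSpace.getInstance(ColorSpace.CS_GRAY);
        ColorConvertOp toGray = new ColorConvertOp(gray, null);

        BufferedImage src = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        BufferedImage dst = toGray.filter(src, null); // dst is created in the gray space
    }
}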

jdk.awt.image.ColorModel

The ColorModel abstract class encapsulates the methods for translating a pixel value to color components (for example, red, green, and blue) and an alpha component. In order to render an image to the screen, a printer, or another image, pixel values must be converted to color and alpha components. As arguments to or return values from methods of this class, pixels are represented as 32-bit ints or as arrays of primitive types. The number, order, and interpretation of color components for a ColorModel is specified by its ColorSpace. A ColorModel used with pixel data that does not include alpha information treats all pixels as opaque, which is an alpha value of 1.0.

This ColorModel class supports two representations of pixel values. A pixel value can be a single 32-bit int or an array of primitive types. The Java(tm) Platform 1.0 and 1.1 APIs represented pixels as single byte or single int values. For purposes of the ColorModel class, pixel value arguments were passed as ints. The Java(tm) 2 Platform API introduced additional classes for representing images. With BufferedImage or RenderedImage objects, based on Raster and SampleModel classes, pixel values might not be conveniently representable as a single int. Consequently, ColorModel now has methods that accept pixel values represented as arrays of primitive types. The primitive type used by a particular ColorModel object is called its transfer type.

ColorModel objects used with images for which pixel values are not conveniently representable as a single int throw an IllegalArgumentException when methods taking a single int pixel argument are called. Subclasses of ColorModel must specify the conditions under which this occurs. This does not occur with DirectColorModel or IndexColorModel objects.

Currently, the transfer types supported by the Java 2D(tm) API are DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, DataBuffer.TYPE_INT, DataBuffer.TYPE_SHORT, DataBuffer.TYPE_FLOAT, and DataBuffer.TYPE_DOUBLE. Most rendering operations will perform much faster when using ColorModels and images based on the first three of these types. In addition, some image filtering operations are not supported for ColorModels and images based on the latter three types. The transfer type for a particular ColorModel object is specified when the object is created, either explicitly or by default. All subclasses of ColorModel must specify what the possible transfer types are and how the number of elements in the primitive arrays representing pixels is determined.

For BufferedImages, the transfer type of its Raster and of the Raster object's SampleModel (available from the getTransferType methods of these classes) must match that of the ColorModel. The number of elements in an array representing a pixel for the Raster and SampleModel (available from the getNumDataElements methods of these classes) must match that of the ColorModel.

The algorithm used to convert from pixel values to color and alpha components varies by subclass. For example, there is not necessarily a one-to-one correspondence between samples obtained from the SampleModel of a BufferedImage object's Raster and color/alpha components. Even when there is such a correspondence, the number of bits in a sample is not necessarily the same as the number of bits in the corresponding color/alpha component. Each subclass must specify how the translation from pixel values to color/alpha components is done.

Methods in the ColorModel class use two different representations of color and alpha components - a normalized form and an unnormalized form. In the normalized form, each component is a float value between some minimum and maximum values. For the alpha component, the minimum is 0.0 and the maximum is 1.0. For color components the minimum and maximum values for each component can be obtained from the ColorSpace object. These values will often be 0.0 and 1.0 (e.g. normalized component values for the default sRGB color space range from 0.0 to 1.0), but some color spaces have component values with different upper and lower limits. These limits can be obtained using the getMinValue and getMaxValue methods of the ColorSpace class. Normalized color component values are not premultiplied. All ColorModels must support the normalized form.

In the unnormalized form, each component is an unsigned integral value between 0 and 2^n - 1, where n is the number of significant bits for a particular component. If pixel values for a particular ColorModel represent color samples premultiplied by the alpha sample, unnormalized color component values are also premultiplied. The unnormalized form is used only with instances of ColorModel whose ColorSpace has minimum component values of 0.0 for all components and maximum values of 1.0 for all components. The unnormalized form for color and alpha components can be a convenient representation for ColorModels whose normalized component values all lie between 0.0 and 1.0. In such cases the integral value 0 maps to 0.0 and the value 2^n - 1 maps to 1.0. In other cases, such as when the normalized component values can be either negative or positive, the unnormalized form is not convenient. Such ColorModel objects throw an IllegalArgumentException when methods involving an unnormalized argument are called. Subclasses of ColorModel must specify the conditions under which this occurs.

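A short sketch of the pixel-to-component translation for a model whose pixels fit in a single int (TYPE_INT_ARGB is one such case).

import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;

public class ColorModelExample {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_ARGB);
        ColorModel cm = img.getColorModel();

        // TYPE_INT_ARGB pixels are representable as a single int.
        int pixel = img.getRGB(3, 4);
        int red   = cm.getRed(pixel);   // translate pixel value to a color component
        int alpha = cm.getAlpha(pixel); // and to the alpha component
    }
}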

jdk.awt.image.ComponentColorModel

A ColorModel class that works with pixel values that represent color and alpha information as separate samples and that store each sample in a separate data element. This class can be used with an arbitrary ColorSpace. The number of color samples in the pixel values must be the same as the number of color components in the ColorSpace. There may be a single alpha sample.

For those methods that use a primitive array pixel representation of type transferType, the array length is the same as the number of color and alpha samples. Color samples are stored first in the array followed by the alpha sample, if present. The order of the color samples is specified by the ColorSpace. Typically, this order reflects the name of the color space type. For example, for TYPE_RGB, index 0 corresponds to red, index 1 to green, and index 2 to blue.

The translation from pixel sample values to color/alpha components for display or processing purposes is based on a one-to-one correspondence of samples to components. Depending on the transfer type used to create an instance of ComponentColorModel, the pixel sample values represented by that instance may be signed or unsigned and may be of integral type or float or double (see below for details). The translation from sample values to normalized color/alpha components must follow certain rules. For float and double samples, the translation is an identity, i.e. normalized component values are equal to the corresponding sample values. For integral samples, the translation should be only a simple scale and offset, where the scale and offset constants may be different for each component. The result of applying the scale and offset constants is a set of color/alpha component values, which are guaranteed to fall within a certain range. Typically, the range for a color component will be the range defined by the getMinValue and getMaxValue methods of the ColorSpace class. The range for an alpha component should be 0.0 to 1.0.

Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT have pixel sample values which are treated as unsigned integral values. The number of bits in a color or alpha sample of a pixel value might not be the same as the number of bits for the corresponding color or alpha sample passed to the ComponentColorModel(ColorSpace, int[], boolean, boolean, int, int) constructor. In that case, this class assumes that the least significant n bits of a sample value hold the component value, where n is the number of significant bits for the component passed to the constructor. It also assumes that any higher-order bits in a sample value are zero. Thus, sample values range from 0 to 2^n - 1. This class maps these sample values to normalized color component values such that 0 maps to the value obtained from the ColorSpace's getMinValue method for each component and 2^n - 1 maps to the value obtained from getMaxValue. To create a ComponentColorModel with a different color sample mapping requires subclassing this class and overriding the getNormalizedComponents(Object, float[], int) method. The mapping for an alpha sample always maps 0 to 0.0 and 2^n - 1 to 1.0.

For instances with unsigned sample values, the unnormalized color/alpha component representation is only supported if two conditions hold. First, sample value 0 must map to normalized component value 0.0 and sample value 2^n - 1 to 1.0. Second, the min/max range of all color components of the ColorSpace must be 0.0 to 1.0. In this case, the component representation is the n least significant bits of the corresponding sample. Thus each component is an unsigned integral value between 0 and 2^n - 1, where n is the number of significant bits for a particular component. If these conditions are not met, any method taking an unnormalized component argument will throw an IllegalArgumentException.

Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_SHORT, DataBuffer.TYPE_FLOAT, and DataBuffer.TYPE_DOUBLE have pixel sample values which are treated as signed short, float, or double values. Such instances do not support the unnormalized color/alpha component representation, so any methods taking such a representation as an argument will throw an IllegalArgumentException when called on one of these instances. The normalized component values of instances of this class have a range which depends on the transfer type as follows: for float samples, the full range of the float data type; for double samples, the full range of the float data type (resulting from casting double to float); for short samples, from approximately -maxVal to maxVal, where maxVal is the per component maximum value for the ColorSpace (-32767 maps to -maxVal, 0 maps to 0.0, and 32767 maps to maxVal). A subclass may override the scaling for short sample values to normalized component values by overriding the getNormalizedComponents(Object, float[], int) method. For float and double samples, the normalized component values are taken to be equal to the corresponding sample values, and subclasses should not attempt to add any non-identity scaling for these transfer types.

Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_SHORT, DataBuffer.TYPE_FLOAT, and DataBuffer.TYPE_DOUBLE use all the bits of all sample values. Thus all color/alpha components have 16 bits when using DataBuffer.TYPE_SHORT, 32 bits when using DataBuffer.TYPE_FLOAT, and 64 bits when using DataBuffer.TYPE_DOUBLE. When the ComponentColorModel(ColorSpace, int[], boolean, boolean, int, int) form of constructor is used with one of these transfer types, the bits array argument is ignored.

It is possible to have color/alpha sample values which cannot be reasonably interpreted as component values for rendering. This can happen when ComponentColorModel is subclassed to override the mapping of unsigned sample values to normalized color component values or when signed sample values outside a certain range are used. (As an example, specifying an alpha component as a signed short value outside the range 0 to 32767, normalized range 0.0 to 1.0, can lead to unexpected results.) It is the responsibility of applications to appropriately scale pixel data before rendering such that color components fall within the normalized range of the ColorSpace (obtained using the getMinValue and getMaxValue methods of the ColorSpace class) and the alpha component is between 0.0 and 1.0. If color or alpha component values fall outside these ranges, rendering results are indeterminate.

Methods that use a single int pixel representation throw an IllegalArgumentException, unless the number of components for the ComponentColorModel is one and the component value is unsigned; in other words, a single color component using a transfer type of DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, or DataBuffer.TYPE_INT, and no alpha.

A ComponentColorModel can be used in conjunction with a ComponentSampleModel, a BandedSampleModel, or a PixelInterleavedSampleModel to construct a BufferedImage.

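A sketch of constructing an 8-bit-per-sample RGB ComponentColorModel and pairing it with a compatible raster to build a BufferedImage (dimensions are arbitrary).

import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.WritableRaster;

public class ComponentCMExample {
    public static void main(String[] args) {
        ColorSpace srgb = ColorSpace.getInstance(ColorSpace.CS_sRGB);

        ComponentColorModel cm = new ComponentColorModel(
                srgb,
                new int[] { 8, 8, 8 },   // significant bits per sample
                false,                   // no alpha sample
                false,                   // not premultiplied
                Transparency.OPAQUE,
                DataBuffer.TYPE_BYTE);

        WritableRaster raster = cm.createCompatibleWritableRaster(64, 64);
        BufferedImage img = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
    }
}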

jdk.awt.image.ComponentSampleModel

This class represents image data which is stored such that each sample of a pixel occupies one data element of the DataBuffer. It stores the N samples which make up a pixel in N separate data array elements. Different bands may be in different banks of the DataBuffer. Accessor methods are provided so that image data can be manipulated directly. This class can support different kinds of interleaving, e.g. band interleaving, scanline interleaving, and pixel interleaving.

Pixel stride is the number of data array elements between two samples for the same band on the same scanline. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Band offsets denote the number of data array elements from the first data array element of the bank of the DataBuffer holding each band to the first sample of the band. The bands are numbered from 0 to N-1. This class can represent image data for which each sample is an unsigned integral number which can be stored in 8, 16, or 32 bits (using DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, or DataBuffer.TYPE_INT, respectively), data for which each sample is a signed integral number which can be stored in 16 bits (using DataBuffer.TYPE_SHORT), or data for which each sample is a signed float or double quantity (using DataBuffer.TYPE_FLOAT or DataBuffer.TYPE_DOUBLE, respectively). All samples of a given ComponentSampleModel are stored with the same precision. All strides and offsets must be non-negative. This class supports the TYPE_BYTE, TYPE_USHORT, TYPE_SHORT, TYPE_INT, TYPE_FLOAT, and TYPE_DOUBLE data types.

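A sketch of the stride arithmetic for pixel-interleaved RGB bytes (dimensions are arbitrary): consecutive samples of one pixel are adjacent, so the pixel stride is 3 and the scanline stride is width * 3.

import java.awt.image.ComponentSampleModel;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class CSMExample {
    public static void main(String[] args) {
        int w = 320, h = 240;
        ComponentSampleModel sm = new ComponentSampleModel(
                DataBuffer.TYPE_BYTE, w, h,
                3,                       // pixel stride
                w * 3,                   // scanline stride
                new int[] { 0, 1, 2 });  // band offsets within a pixel

        WritableRaster raster = Raster.createWritableRaster(sm, null);
        raster.setSample(0, 0, 2, 255);  // band 2 of the pixel at (0, 0)
    }
}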

jdk.awt.image.ConvolveOp

This class implements a convolution from the source to the destination. Convolution using a convolution kernel is a spatial operation that computes the output pixel from an input pixel by multiplying the kernel with the surround of the input pixel. This allows the output pixel to be affected by the immediate neighborhood in a way that can be mathematically specified with a kernel.

This class operates with BufferedImage data in which color components are premultiplied with the alpha component. If the Source BufferedImage has an alpha component, and the color components are not premultiplied with the alpha component, then the data are premultiplied before being convolved. If the Destination has color components which are not premultiplied, then alpha is divided out before storing into the Destination (if alpha is 0, the color components are set to 0). If the Destination has no alpha component, then the resulting alpha is discarded after first dividing it out of the color components.

Rasters are treated as having no alpha channel. If the above treatment of the alpha channel in BufferedImages is not desired, it may be avoided by getting the Raster of a source BufferedImage and using the filter method of this class which works with Rasters.

If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used when color conversion is required.

Note that the Source and the Destination may not be the same object.

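A minimal sketch: a 3x3 box blur, in which each output pixel is the average of its neighborhood (the kernel size and edge policy are arbitrary choices).

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.util.Arrays;

public class ConvolveExample {
    public static void main(String[] args) {
        float[] weights = new float[9];
        Arrays.fill(weights, 1f / 9f); // uniform weights that sum to 1

        Kernel kernel = new Kernel(3, 3, weights);
        ConvolveOp blur = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);

        BufferedImage src = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        BufferedImage dst = blur.filter(src, null); // destination must not be src
    }
}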

jdk.awt.image.core

No vars found in this namespace.

jdk.awt.image.CropImageFilter

An ImageFilter class for cropping images. This class extends the basic ImageFilter class to extract a given rectangular region of an existing Image and provide a source for a new image containing just the extracted region. It is meant to be used in conjunction with a FilteredImageSource object to produce cropped versions of existing images.

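A sketch of that FilteredImageSource pairing; the file name and crop rectangle are placeholders.

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.CropImageFilter;
import java.awt.image.FilteredImageSource;

public class CropExample {
    public static void main(String[] args) {
        Image src = Toolkit.getDefaultToolkit().getImage("photo.png"); // placeholder

        // Extract the 100x80 region whose upper-left corner is at (20, 30).
        Image cropped = Toolkit.getDefaultToolkit().createImage(
                new FilteredImageSource(src.getSource(),
                                        new CropImageFilter(20, 30, 100, 80)));
    }
}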

jdk.awt.image.DataBuffer

This class exists to wrap one or more data arrays. Each data array in the DataBuffer is referred to as a bank. Accessor methods for getting and setting elements of the DataBuffer's banks exist with and without a bank specifier. The methods without a bank specifier use the default 0th bank. The DataBuffer can optionally take an offset per bank, so that data in an existing array can be used even if the interesting data doesn't start at array location zero. Getting or setting the 0th element of a bank uses the (0+offset)th element of the array. The size field specifies how much of the data array is available for use. Size + offset for a given bank should never be greater than the length of the associated data array. The data type of a data buffer indicates the type of the data array(s) and may also indicate additional semantics, e.g. storing unsigned 8-bit data in elements of a byte array. The data type may be TYPE_UNDEFINED or one of the types defined below. Other types may be added in the future. Generally, an object of class DataBuffer will be cast down to one of its data type specific subclasses to access data type specific methods for improved performance. Currently, the Java 2D(tm) API image classes use TYPE_BYTE, TYPE_USHORT, TYPE_INT, TYPE_SHORT, TYPE_FLOAT, and TYPE_DOUBLE DataBuffers to store image data.

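A sketch of the bank/offset accessors using the byte subclass (the array size is arbitrary).

import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;

public class DataBufferExample {
    public static void main(String[] args) {
        // Wrap an existing array as a single-bank buffer.
        byte[] samples = new byte[640 * 480];
        DataBuffer db = new DataBufferByte(samples, samples.length);

        db.setElem(0, 255);     // no bank specifier: uses the default bank 0
        int v = db.getElem(0);  // byte data is treated as unsigned, so v == 255
    }
}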

jdk.awt.image.DataBufferByte

This class extends DataBuffer and stores data internally as bytes. Values stored in the byte array(s) of this DataBuffer are treated as unsigned values.

Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array, as noted below in the documentation for those methods.

jdk.awt.image.DataBufferDouble

This class extends DataBuffer and stores data internally in double form.

Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array, as noted below in the documentation for those methods.


jdk.awt.image.DataBufferFloat

This class extends DataBuffer and stores data internally in float form.

Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array, as noted below in the documentation for those methods.


jdk.awt.image.DataBufferInt

This class extends DataBuffer and stores data internally as integers.

Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array, as noted below in the documentation for those methods.


jdk.awt.image.DataBufferShort

This class extends DataBuffer and stores data internally as shorts.

Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array, as noted below in the documentation for those methods.


jdk.awt.image.DataBufferUShort

This class extends DataBuffer and stores data internally as shorts. Values stored in the short array(s) of this DataBuffer are treated as unsigned values.

Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array, as noted below in the documentation for those methods.


jdk.awt.image.DirectColorModel

The DirectColorModel class is a ColorModel class that works with pixel values that represent RGB color and alpha information as separate samples and that pack all samples for a single pixel into a single int, short, or byte quantity. This class can be used only with ColorSpaces of type ColorSpace.TYPE_RGB. In addition, for each component of the ColorSpace, the minimum normalized component value obtained via the getMinValue() method of ColorSpace must be 0.0, and the maximum value obtained via the getMaxValue() method must be 1.0 (these min/max values are typical for RGB spaces). There must be three color samples in the pixel values and there can be a single alpha sample. For those methods that use a primitive array pixel representation of type transferType, the array length is always one. The transfer types supported are DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT. Color and alpha samples are stored in the single element of the array in bits indicated by bit masks. Each bit mask must be contiguous and masks must not overlap. The same masks apply to the single int pixel representation used by other methods. The correspondence of masks and color/alpha samples is as follows:

Masks are identified by indices running from 0 through 2 if no alpha is present, or 3 if an alpha is present. The first three indices refer to color samples; index 0 corresponds to red, index 1 to green, and index 2 to blue. Index 3 corresponds to the alpha sample, if present.

The translation from pixel values to color/alpha components for display or processing purposes is a one-to-one correspondence of samples to components. A DirectColorModel is typically used with image data which uses masks to define packed samples. For example, a DirectColorModel can be used in conjunction with a SinglePixelPackedSampleModel to construct a BufferedImage. Normally the masks used by the SampleModel and the ColorModel would be the same. However, if they are different, the color interpretation of pixel data will be done according to the masks of the ColorModel.

A single int pixel representation is valid for all objects of this class, since it is always possible to represent pixel values used with this class in a single int. Therefore, methods which use this representation will not throw an IllegalArgumentException due to an invalid pixel value.

This color model is similar to an X11 TrueColor visual. The default RGB ColorModel specified by the getRGBdefault method is a DirectColorModel with the following parameters:

 Number of bits:        32
 Red mask:              0x00ff0000
 Green mask:            0x0000ff00
 Blue mask:             0x000000ff
 Alpha mask:            0xff000000
 Color space:           sRGB
 isAlphaPremultiplied:  False
 Transparency:          Transparency.TRANSLUCENT
 transferType:          DataBuffer.TYPE_INT

Many of the methods in this class are final. This is because the underlying native graphics code makes assumptions about the layout and operation of this class and those assumptions are reflected in the implementations of the methods here that are marked final. You can subclass this class for other reasons, but you cannot override or modify the behavior of those methods.
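
For example, a model equivalent to the default RGB model above can be constructed directly from those masks; its accessors then unpack the samples from a packed int pixel (a minimal sketch, with an arbitrary pixel value):

 DirectColorModel cm = new DirectColorModel(32,
         0x00ff0000,   // red mask
         0x0000ff00,   // green mask
         0x000000ff,   // blue mask
         0xff000000);  // alpha mask

 int pixel = 0x80ff4020;  // packed ARGB sample values
 cm.getAlpha(pixel);      // 128 (0x80)
 cm.getRed(pixel);        // 255 (0xff)
 cm.getGreen(pixel);      // 64  (0x40)
 cm.getBlue(pixel);       // 32  (0x20)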


jdk.awt.image.FilteredImageSource

This class is an implementation of the ImageProducer interface which takes an existing image and a filter object and uses them to produce image data for a new filtered version of the original image. Here is an example which filters an image by swapping the red and blue components:

 Image src = getImage("doc:///demo/images/duke/T1.gif");
 ImageFilter colorfilter = new RedBlueSwapFilter();
 Image img = createImage(new FilteredImageSource(src.getSource(),
                                                 colorfilter));
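
The RedBlueSwapFilter above is not a JDK class; a minimal sketch of it, built on the RGBImageFilter convenience subclass of ImageFilter, might look like this:

 class RedBlueSwapFilter extends RGBImageFilter {
     public RedBlueSwapFilter() {
         // The filtering does not depend on pixel position, so colormap
         // entries of an IndexColorModel can be filtered directly.
         canFilterIndexColorModel = true;
     }

     public int filterRGB(int x, int y, int rgb) {
         // Keep alpha and green; swap the red and blue bytes.
         return (rgb & 0xff00ff00)
                 | ((rgb & 0x00ff0000) >> 16)
                 | ((rgb & 0x000000ff) << 16);
     }
 }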

jdk.awt.image.ImageConsumer

The interface for objects expressing interest in image data through the ImageProducer interfaces. When a consumer is added to an image producer, the producer delivers all of the data about the image using the method calls defined in this interface.


jdk.awt.image.ImageFilter

This class implements a filter for the set of interface methods that are used to deliver data from an ImageProducer to an ImageConsumer. It is meant to be used in conjunction with a FilteredImageSource object to produce filtered versions of existing images. It is a base class that provides the calls needed to implement a "Null filter" which has no effect on the data being passed through. Filters should subclass this class and override the methods which deal with the data that needs to be filtered and modify it as necessary.


jdk.awt.image.ImageObserver

An asynchronous update interface for receiving notifications about Image information as the Image is constructed.


jdk.awt.image.ImageProducer

The interface for objects which can produce the image data for Images. Each image contains an ImageProducer which is used to reconstruct the image whenever it is needed, for example, when a scaled version of the Image is requested or when the width or height of the Image is requested.


jdk.awt.image.ImagingOpException

The ImagingOpException is thrown if one of the BufferedImageOp or RasterOp filter methods cannot process the image.


jdk.awt.image.IndexColorModel

The IndexColorModel class is a ColorModel class that works with pixel values consisting of a single sample that is an index into a fixed colormap in the default sRGB color space. The colormap specifies red, green, blue, and optional alpha components corresponding to each index. All components are represented in the colormap as 8-bit unsigned integral values. Some constructors allow the caller to specify "holes" in the colormap by indicating which colormap entries are valid and which represent unusable colors via the bits set in a BigInteger object. This color model is similar to an X11 PseudoColor visual.

Some constructors provide a means to specify an alpha component for each pixel in the colormap, while others either provide no such means or, in some cases, a flag to indicate whether the colormap data contains alpha values. If no alpha is supplied to the constructor, an opaque alpha component (alpha = 1.0) is assumed for each entry. An optional transparent pixel value can be supplied that indicates a pixel to be made completely transparent, regardless of any alpha component supplied or assumed for that pixel value. Note that the color components in the colormap of an IndexColorModel object are never pre-multiplied with the alpha components.

The transparency of an IndexColorModel object is determined by examining the alpha components of the colors in the colormap and choosing the most specific value after considering the optional alpha values and any transparent index specified. The transparency value is Transparency.OPAQUE only if all valid colors in the colormap are opaque and there is no valid transparent pixel. If all valid colors in the colormap are either completely opaque (alpha = 1.0) or completely transparent (alpha = 0.0), which typically occurs when a valid transparent pixel is specified, the value is Transparency.BITMASK. Otherwise, the value is Transparency.TRANSLUCENT, indicating that some valid color has an alpha component that is neither completely transparent nor completely opaque (0.0 < alpha < 1.0).

If an IndexColorModel object has a transparency value of Transparency.OPAQUE, then the hasAlpha and getNumComponents methods (both inherited from ColorModel) return false and 3, respectively. For any other transparency value, hasAlpha returns true and getNumComponents returns 4.

The values used to index into the colormap are taken from the least significant n bits of pixel representations where n is based on the pixel size specified in the constructor. For pixel sizes smaller than 8 bits, n is rounded up to a power of two (3 becomes 4 and 5, 6, 7 become 8). For pixel sizes between 8 and 16 bits, n is equal to the pixel size. Pixel sizes larger than 16 bits are not supported by this class. Higher-order bits beyond n are ignored in pixel representations. Index values greater than or equal to the map size, but less than 2^n, are undefined and return 0 for all color and alpha components.

For those methods that use a primitive array pixel representation of type transferType, the array length is always one. The transfer types supported are DataBuffer.TYPE_BYTE and DataBuffer.TYPE_USHORT. A single int pixel representation is valid for all objects of this class, since it is always possible to represent pixel values used with this class in a single int. Therefore, methods that use this representation do not throw an IllegalArgumentException due to an invalid pixel value.

Many of the methods in this class are final. The reason for this is that the underlying native graphics code makes assumptions about the layout and operation of this class and those assumptions are reflected in the implementations of the methods here that are marked final. You can subclass this class for other reasons, but you cannot override or modify the behavior of those methods.
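
A minimal sketch: a 2-bit model over a four-entry colormap, with entry 3 declared as the transparent pixel, which gives the model Transparency.BITMASK transparency:

 byte[] r = {(byte) 0x00, (byte) 0xff, (byte) 0x00, (byte) 0x00};
 byte[] g = {(byte) 0x00, (byte) 0x00, (byte) 0xff, (byte) 0x00};
 byte[] b = {(byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0xff};

 IndexColorModel icm = new IndexColorModel(2, 4, r, g, b, 3);

 icm.getRed(1);          // 255: entry 1 is pure red
 icm.getAlpha(3);        // 0: the transparent pixel
 icm.getTransparency();  // Transparency.BITMASK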


jdk.awt.image.Kernel

The Kernel class defines a matrix that describes how a specified pixel and its surrounding pixels affect the value computed for the pixel's position in the output image of a filtering operation. The X origin and Y origin indicate the kernel matrix element that corresponds to the pixel position for which an output value is being computed.
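
For example, a 3x3 box-blur kernel averages each pixel with its eight neighbors; for odd dimensions the origin defaults to the center element, here (1, 1). A minimal sketch, typically paired with a ConvolveOp:

 float ninth = 1.0f / 9.0f;
 float[] blur = {
     ninth, ninth, ninth,
     ninth, ninth, ninth,
     ninth, ninth, ninth
 };
 Kernel kernel = new Kernel(3, 3, blur);  // X origin == 1, Y origin == 1
 BufferedImageOp op = new ConvolveOp(kernel);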


jdk.awt.image.LookupOp

This class implements a lookup operation from the source to the destination. The LookupTable object may contain a single array or multiple arrays, subject to the restrictions below.

For Rasters, the lookup operates on bands. The number of lookup arrays may be one, in which case the same array is applied to all bands, or it must equal the number of Source Raster bands.

For BufferedImages, the lookup operates on color and alpha components. The number of lookup arrays may be one, in which case the same array is applied to all color (but not alpha) components. Otherwise, the number of lookup arrays may equal the number of Source color components, in which case no lookup of the alpha component (if present) is performed. If neither of these cases apply, the number of lookup arrays must equal the number of Source color components plus alpha components, in which case lookup is performed for all color and alpha components. This allows non-uniform rescaling of multi-band BufferedImages.

BufferedImage sources with premultiplied alpha data are treated in the same manner as non-premultiplied images for purposes of the lookup. That is, the lookup is done per band on the raw data of the BufferedImage source without regard to whether the data is premultiplied. If a color conversion is required to the destination ColorModel, the premultiplied state of both source and destination will be taken into account for this step.

Images with an IndexColorModel cannot be used.

If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used when color conversion is required.

This class allows the Source to be the same as the Destination.
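
As a sketch of the single-array case: given a source BufferedImage src that does not use an IndexColorModel, the following inverts all color bands while leaving any alpha component untouched:

 byte[] invert = new byte[256];
 for (int i = 0; i < 256; i++) {
     invert[i] = (byte) (255 - i);
 }
 LookupOp op = new LookupOp(new ByteLookupTable(0, invert), null);
 BufferedImage dst = op.filter(src, null);  // null destination: one is created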


jdk.awt.image.LookupTable

This abstract class defines a lookup table object. ByteLookupTable and ShortLookupTable are subclasses, which contain byte and short data, respectively. A lookup table contains data arrays for one or more bands (or components) of an image (for example, separate arrays for R, G, and B), and it contains an offset which will be subtracted from the input values before indexing into the arrays. This allows an array smaller than the native data size to be provided for a constrained input. If there is only one array in the lookup table, it will be applied to all bands. All arrays must be the same size.


jdk.awt.image.MemoryImageSource

This class is an implementation of the ImageProducer interface which uses an array to produce pixel values for an Image. Here is an example which calculates a 100x100 image representing a fade from black to blue along the X axis and a fade from black to red along the Y axis:

 int w = 100;
 int h = 100;
 int pix[] = new int[w * h];
 int index = 0;
 for (int y = 0; y < h; y++) {
     int red = (y * 255) / (h - 1);
     for (int x = 0; x < w; x++) {
         int blue = (x * 255) / (w - 1);
         pix[index++] = (255 << 24) | (red << 16) | blue;
     }
 }
 Image img = createImage(new MemoryImageSource(w, h, pix, 0, w));

The MemoryImageSource is also capable of managing a memory image which varies over time to allow animation or custom rendering. Here is an example showing how to set up the animation source and signal changes in the data (adapted from the MemoryAnimationSourceDemo by Garth Dickie):

 int pixels[];
 MemoryImageSource source;

 public void init() {
     int width = 50;
     int height = 50;
     int size = width * height;
     pixels = new int[size];

     int value = getBackground().getRGB();
     for (int i = 0; i < size; i++) {
         pixels[i] = value;
     }

     source = new MemoryImageSource(width, height, pixels, 0, width);
     source.setAnimated(true);
     image = createImage(source);
 }

 public void run() {
     Thread me = Thread.currentThread( );
     me.setPriority(Thread.MIN_PRIORITY);

     while (true) {
         try {
             Thread.sleep(10);
         } catch( InterruptedException e ) {
             return;
         }

         // Modify the values in the pixels array at (x, y, w, h)

         // Send the new data to the interested ImageConsumers
         source.newPixels(x, y, w, h);
     }
 }

jdk.awt.image.MultiPixelPackedSampleModel

The MultiPixelPackedSampleModel class represents one-banded images and can pack multiple one-sample pixels into one data element. Pixels are not allowed to span data elements. The data type can be DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, or DataBuffer.TYPE_INT. Each pixel must occupy a power-of-2 number of bits, and a power-of-2 number of pixels must fit exactly in one data element. Pixel bit stride is equal to the number of bits per pixel. Scanline stride is in data elements and the last several data elements might be padded with unused pixels. Data bit offset is the offset in bits from the beginning of the DataBuffer to the first pixel and must be a multiple of pixel bit stride.

The following code illustrates extracting the bits for pixel x, y from DataBuffer data and storing the pixel data in data elements of type dataType:

 int dataElementSize = DataBuffer.getDataTypeSize(dataType);
 int bitnum = dataBitOffset + x*pixelBitStride;
 int element = data.getElem(y*scanlineStride + bitnum/dataElementSize);
 int shift = dataElementSize - (bitnum & (dataElementSize-1))
             - pixelBitStride;
 int pixel = (element >> shift) & ((1 << pixelBitStride) - 1);

jdk.awt.image.PackedColorModel

The PackedColorModel class is an abstract ColorModel class that works with pixel values which represent color and alpha information as separate samples and which pack all samples for a single pixel into a single int, short, or byte quantity. This class can be used with an arbitrary ColorSpace. The number of color samples in the pixel values must be the same as the number of color components in the ColorSpace. There can be a single alpha sample. The array length is always 1 for those methods that use a primitive array pixel representation of type transferType. The transfer types supported are DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT. Color and alpha samples are stored in the single element of the array in bits indicated by bit masks. Each bit mask must be contiguous and masks must not overlap. The same masks apply to the single int pixel representation used by other methods. The correspondence of masks and color/alpha samples is as follows:

Masks are identified by indices running from 0 through getNumComponents - 1. The first getNumColorComponents indices refer to color samples. If an alpha sample is present, it corresponds to the last index. The order of the color indices is specified by the ColorSpace. Typically, this reflects the name of the color space type (for example, TYPE_RGB): index 0 corresponds to red, index 1 to green, and index 2 to blue.

The translation from pixel values to color/alpha components for display or processing purposes is a one-to-one correspondence of samples to components. A PackedColorModel is typically used with image data that uses masks to define packed samples. For example, a PackedColorModel can be used in conjunction with a SinglePixelPackedSampleModel to construct a BufferedImage. Normally the masks used by the SampleModel and the ColorModel would be the same. However, if they are different, the color interpretation of pixel data is done according to the masks of the ColorModel.

A single int pixel representation is valid for all objects of this class since it is always possible to represent pixel values used with this class in a single int. Therefore, methods that use this representation do not throw an IllegalArgumentException due to an invalid pixel value.

A subclass of PackedColorModel is DirectColorModel, which is similar to an X11 TrueColor visual.


jdk.awt.image.PixelGrabber

The PixelGrabber class implements an ImageConsumer which can be attached to an Image or ImageProducer object to retrieve a subset of the pixels in that image. Here is an example:

public void handlesinglepixel(int x, int y, int pixel) {
     int alpha = (pixel >> 24) & 0xff;
     int red   = (pixel >> 16) & 0xff;
     int green = (pixel >>  8) & 0xff;
     int blue  = (pixel      ) & 0xff;
     // Deal with the pixel as necessary...
}

public void handlepixels(Image img, int x, int y, int w, int h) {
     int[] pixels = new int[w * h];
     PixelGrabber pg = new PixelGrabber(img, x, y, w, h, pixels, 0, w);
     try {
         pg.grabPixels();
     } catch (InterruptedException e) {
         System.err.println("interrupted waiting for pixels!");
         return;
     }
     if ((pg.getStatus() & ImageObserver.ABORT) != 0) {
         System.err.println("image fetch aborted or errored");
         return;
     }
     for (int j = 0; j < h; j++) {
         for (int i = 0; i < w; i++) {
             handlesinglepixel(x+i, y+j, pixels[j * w + i]);
         }
     }
}


jdk.awt.image.PixelInterleavedSampleModel

This class represents image data which is stored in a pixel interleaved fashion and for which each sample of a pixel occupies one data element of the DataBuffer. It subclasses ComponentSampleModel but provides a more efficient implementation for accessing pixel interleaved image data than is provided by ComponentSampleModel. This class stores sample data for all bands in a single bank of the DataBuffer. Accessor methods are provided so that image data can be manipulated directly. Pixel stride is the number of data array elements between two samples for the same band on the same scanline. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Band offsets denote the number of data array elements from the first data array element of the bank of the DataBuffer holding each band to the first sample of the band. The bands are numbered from 0 to N-1. Bank indices denote the correspondence between a bank of the data buffer and a band of image data. This class supports TYPE_BYTE, TYPE_USHORT, TYPE_SHORT, TYPE_INT, TYPE_FLOAT and TYPE_DOUBLE datatypes.
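
For instance, a 4x3 RGB image stored as interleaved bytes (R, G, B, R, G, B, ...) could be described as follows (a minimal sketch):

 PixelInterleavedSampleModel sm = new PixelInterleavedSampleModel(
         DataBuffer.TYPE_BYTE,
         4,                     // width
         3,                     // height
         3,                     // pixel stride: 3 samples per pixel
         12,                    // scanline stride: width * pixel stride
         new int[] {0, 1, 2});  // band offsets for R, G, B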


jdk.awt.image.Raster

A class representing a rectangular array of pixels. A Raster encapsulates a DataBuffer that stores the sample values and a SampleModel that describes how to locate a given sample value in a DataBuffer.

A Raster defines values for pixels occupying a particular rectangular area of the plane, not necessarily including (0, 0). The rectangle, known as the Raster's bounding rectangle and available by means of the getBounds method, is defined by minX, minY, width, and height values. The minX and minY values define the coordinate of the upper left corner of the Raster. References to pixels outside of the bounding rectangle may result in an exception being thrown, or may result in references to unintended elements of the Raster's associated DataBuffer. It is the user's responsibility to avoid accessing such pixels.

A SampleModel describes how samples of a Raster are stored in the primitive array elements of a DataBuffer. Samples may be stored one per data element, as in a PixelInterleavedSampleModel or BandedSampleModel, or packed several to an element, as in a SinglePixelPackedSampleModel or MultiPixelPackedSampleModel. The SampleModel also controls whether samples are sign-extended, allowing unsigned data to be stored in signed Java data types such as byte, short, and int.

Although a Raster may live anywhere in the plane, a SampleModel makes use of a simple coordinate system that starts at (0, 0). A Raster therefore contains a translation factor that allows pixel locations to be mapped between the Raster's coordinate system and that of the SampleModel. The translation from the SampleModel coordinate system to that of the Raster may be obtained by the getSampleModelTranslateX and getSampleModelTranslateY methods.

A Raster may share a DataBuffer with another Raster either by explicit construction or by the use of the createChild and createTranslatedChild methods. Rasters created by these methods can return a reference to the Raster they were created from by means of the getParent method. For a Raster that was not constructed by means of a call to createTranslatedChild or createChild, getParent will return null.

The createTranslatedChild method returns a new Raster that shares all of the data of the current Raster, but occupies a bounding rectangle of the same width and height but with a different starting point. For example, if the parent Raster occupied the region (10, 10) to (100, 100), and the translated Raster was defined to start at (50, 50), then pixel (20, 20) of the parent and pixel (60, 60) of the child occupy the same location in the DataBuffer shared by the two Rasters. In the first case, (-10, -10) should be added to a pixel coordinate to obtain the corresponding SampleModel coordinate, and in the second case (-50, -50) should be added.

The translation between a parent and child Raster may be determined by subtracting the child's sampleModelTranslateX and sampleModelTranslateY values from those of the parent.

The createChild method may be used to create a new Raster occupying only a subset of its parent's bounding rectangle (with the same or a translated coordinate system) or with a subset of the bands of its parent.

All constructors are protected. The correct way to create a Raster is to use one of the static create methods defined in this class. These methods create instances of Raster that use the standard Interleaved, Banded, and Packed SampleModels and that may be processed more efficiently than a Raster created by combining an externally generated SampleModel and DataBuffer.
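
A minimal sketch of the factory methods and child semantics described above; the child shares the parent's DataBuffer, so child pixel (10, 10) and parent pixel (60, 60) refer to the same storage:

 WritableRaster parent = Raster.createInterleavedRaster(
         DataBuffer.TYPE_BYTE, 100, 100, 3, null);  // null location: (0, 0)

 parent.setSample(60, 60, 0, 200);  // band 0 of parent pixel (60, 60)

 // Child covering parent region (50, 50) to (100, 100), renumbered from (0, 0).
 Raster child = parent.createChild(50, 50, 50, 50, 0, 0, null);
 child.getSample(10, 10, 0);  // 200: same storage as parent pixel (60, 60)
 // child.getParent() == parent is true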


jdk.awt.image.RasterFormatException

The RasterFormatException is thrown if there is invalid layout information in the Raster.


jdk.awt.image.RasterOp

This interface describes single-input/single-output operations performed on Raster objects. It is implemented by such classes as AffineTransformOp, ConvolveOp, and LookupOp. The Source and Destination objects must contain the appropriate number of bands for the particular classes implementing this interface. Otherwise, an exception is thrown. This interface cannot be used to describe more sophisticated Ops such as ones that take multiple sources. Each class implementing this interface will specify whether or not it will allow an in-place filtering operation (i.e. source object equal to the destination object). Note that the restriction to single-input operations means that the values of destination pixels prior to the operation are not used as input to the filter operation.
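
For example, an AffineTransformOp can be driven through this interface. A minimal sketch, given a source Raster src; passing a null destination lets filter allocate one, which also satisfies that class's rule that the source and destination differ:

 RasterOp op = new AffineTransformOp(
         AffineTransform.getScaleInstance(2.0, 2.0),
         AffineTransformOp.TYPE_BILINEAR);
 WritableRaster dst = op.filter(src, null);  // src scaled 2x in each direction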


jdk.awt.image.renderable.ContextualRenderedImageFactory

ContextualRenderedImageFactory provides an interface for the functionality that may differ between instances of RenderableImageOp. Thus different operations on RenderableImages may be performed by a single class such as RenderedImageOp through the use of multiple instances of ContextualRenderedImageFactory. The name ContextualRenderedImageFactory is commonly shortened to "CRIF."

All operations that are to be used in a rendering-independent chain must implement ContextualRenderedImageFactory.

Classes that implement this interface must provide a constructor with no arguments.


jdk.awt.image.renderable.core

No vars found in this namespace.

jdk.awt.image.renderable.ParameterBlock

A ParameterBlock encapsulates all the information about sources and parameters (Objects) required by a RenderableImageOp, or other classes that process images.

Although it is possible to place arbitrary objects in the source Vector, users of this class may impose semantic constraints such as requiring all sources to be RenderedImages or RenderableImages. ParameterBlock itself is merely a container and performs no checking on source or parameter types.

All parameters in a ParameterBlock are objects; convenience add and set methods are available that take arguments of base type and construct the appropriate subclass of Number (such as Integer or Float). Corresponding get methods perform a downward cast and have return values of base type; an exception will be thrown if the stored values do not have the correct type. There is no way to distinguish between the results of "short s; add(s)" and "add(new Short(s))".
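
For instance, a short hedged illustration of the base-type convenience methods (the values are arbitrary):

import java.awt.image.renderable.ParameterBlock;

public class ParameterBlockExample {
    public static void main(String[] args) {
        // add(int) wraps the value in an Integer; getIntParameter casts
        // it back down and throws if the stored type does not match.
        ParameterBlock pb = new ParameterBlock();
        pb.add(640).add(480).add(2.5f);
        int width   = pb.getIntParameter(0);    // 640
        int height  = pb.getIntParameter(1);    // 480
        float scale = pb.getFloatParameter(2);  // 2.5f
        System.out.println(width + "x" + height + " @ " + scale);
    }
}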

Note that the get and set methods operate on references. Therefore, one must be careful not to share references between ParameterBlocks when this is inappropriate. For example, to create a new ParameterBlock that is equal to an old one except for an added source, one might be tempted to write:

ParameterBlock addSource(ParameterBlock pb, RenderableImage im) {
    ParameterBlock pb1 = new ParameterBlock(pb.getSources());
    pb1.addSource(im);
    return pb1;
}

This code will have the side effect of altering the original ParameterBlock, since the getSources operation returned a reference to its source Vector. Both pb and pb1 share their source Vector, and a change in either is visible to both.

A correct way to write the addSource function is to clone the source Vector:

ParameterBlock addSource(ParameterBlock pb, RenderableImage im) {
    // clone() returns Object, so a cast back to Vector is needed to compile
    ParameterBlock pb1 =
        new ParameterBlock((Vector<Object>) pb.getSources().clone());
    pb1.addSource(im);
    return pb1;
}

The clone method of ParameterBlock has been defined to perform a clone of both the source and parameter Vectors for this reason. A standard, shallow clone is available as shallowClone.

The addSource, setSource, add, and set methods are defined to return 'this' after adding their argument. This allows use of syntax like:

ParameterBlock pb = new ParameterBlock();
op = new RenderableImageOp("operation", pb.add(arg1).add(arg2));


jdk.awt.image.renderable.RenderableImage

A RenderableImage is a common interface for rendering-independent images (a notion which subsumes resolution independence). That is, images which are described and have operations applied to them independent of any specific rendering of the image. For example, a RenderableImage can be rotated and cropped in resolution-independent terms. Then, it can be rendered for various specific contexts, such as a draft preview, a high-quality screen display, or a printer, each in an optimal fashion.

A RenderedImage is returned from a RenderableImage via the createRendering() method, which takes a RenderContext. The RenderContext specifies how the RenderedImage should be constructed. Note that it is not possible to extract pixels directly from a RenderableImage.

The createDefaultRendering() and createScaledRendering() methods are convenience methods that construct an appropriate RenderContext internally. All of the rendering methods may return a reference to a previously produced rendering.

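A short sketch, assuming a caller-supplied RenderableImage (the renderTwice helper and the chosen sizes are illustrative):

import java.awt.image.RenderedImage;
import java.awt.image.renderable.RenderableImage;

public class RenderableImageExample {
    // The same rendering-independent description rendered at two sizes.
    static void renderTwice(RenderableImage source) {
        RenderedImage preview = source.createScaledRendering(160, 120, null);
        RenderedImage print   = source.createScaledRendering(3200, 2400, null);
        // Pixels can be read from the renderings, never from 'source' itself.
        System.out.println(preview.getWidth() + " vs " + print.getWidth());
    }
}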

jdk.awt.image.renderable.RenderableImageOp

This class handles the renderable aspects of an operation with help from its associated instance of a ContextualRenderedImageFactory.


jdk.awt.image.renderable.RenderableImageProducer

An adapter class that implements ImageProducer to allow the asynchronous production of a RenderableImage. The size of the ImageConsumer is determined by the scale factor of the usr2dev transform in the RenderContext. If the RenderContext is null, the default rendering of the RenderableImage is used. This class implements an asynchronous production that produces the image in one thread at one resolution. This class may be subclassed to implement versions that will render the image using several threads. These threads could render either the same image at progressively better quality, or different sections of the image at a single resolution.


jdk.awt.image.renderable.RenderContext

A RenderContext encapsulates the information needed to produce a specific rendering from a RenderableImage. It contains the area to be rendered specified in rendering-independent terms, the resolution at which the rendering is to be performed, and hints used to control the rendering process.

Users create RenderContexts and pass them to the RenderableImage via the createRendering method. Most of the methods of RenderContexts are not meant to be used directly by applications, but by the RenderableImage and operator classes to which it is passed.

The AffineTransform parameters passed into and out of this class are cloned. The RenderingHints and Shape parameters are not necessarily cloneable and are therefore only reference-copied. Altering RenderingHints or Shape instances that are in use by instances of RenderContext may have undesired side effects.

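A sketch of constructing a RenderContext and handing it to createRendering (the 300-dpi scale, the area of interest, and the hint choice are illustrative assumptions):

import java.awt.RenderingHints;
import java.awt.geom.AffineTransform;
import java.awt.geom.Rectangle2D;
import java.awt.image.RenderedImage;
import java.awt.image.renderable.RenderContext;
import java.awt.image.renderable.RenderableImage;

public class RenderContextExample {
    static RenderedImage renderAt300Dpi(RenderableImage source) {
        // usr2dev maps user space to device space: 300 device pixels per unit.
        AffineTransform usr2dev = AffineTransform.getScaleInstance(300.0, 300.0);
        Rectangle2D aoi = new Rectangle2D.Double(0, 0, 1, 1); // user-space area of interest
        RenderingHints hints = new RenderingHints(
                RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        RenderContext ctx = new RenderContext(usr2dev, aoi, hints);
        return source.createRendering(ctx);
    }
}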

jdk.awt.image.renderable.RenderedImageFactory

The RenderedImageFactory interface (often abbreviated RIF) is intended to be implemented by classes that wish to act as factories to produce different renderings, for example by executing a series of BufferedImageOps on a set of sources, depending on a specific set of parameters, properties, and rendering hints.

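A toy implementation, purely as a hedged sketch (GrayRIF is a hypothetical name, and a real factory would consult its sources and hints):

import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.awt.image.RenderedImage;
import java.awt.image.renderable.ParameterBlock;
import java.awt.image.renderable.RenderedImageFactory;

public class GrayRIF implements RenderedImageFactory {
    // Renders a blank grayscale image of the size given by two int parameters.
    public RenderedImage create(ParameterBlock pb, RenderingHints hints) {
        int w = pb.getIntParameter(0);
        int h = pb.getIntParameter(1);
        return new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    }
}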

jdk.awt.image.RenderedImage

RenderedImage is a common interface for objects which contain or can produce image data in the form of Rasters. The image data may be stored/produced as a single tile or a regular array of tiles.

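A small sketch of walking the tile grid (the walkTiles name is illustrative; a single-tile image such as a BufferedImage loops exactly once):

import java.awt.image.Raster;
import java.awt.image.RenderedImage;

public class TileWalkExample {
    static void walkTiles(RenderedImage img) {
        for (int ty = 0; ty < img.getNumYTiles(); ty++) {
            for (int tx = 0; tx < img.getNumXTiles(); tx++) {
                Raster tile = img.getTile(img.getMinTileX() + tx,
                                          img.getMinTileY() + ty);
                System.out.println("tile origin: " + tile.getMinX()
                        + "," + tile.getMinY());
            }
        }
    }
}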

jdk.awt.image.ReplicateScaleFilter

An ImageFilter class for scaling images using the simplest algorithm. This class extends the basic ImageFilter Class to scale an existing image and provide a source for a new image containing the resampled image. The pixels in the source image are sampled to produce pixels for an image of the specified size by replicating rows and columns of pixels to scale up or omitting rows and columns of pixels to scale down. It is meant to be used in conjunction with a FilteredImageSource object to produce scaled versions of existing images. Due to implementation dependencies, there may be differences in pixel values of an image filtered on different platforms.

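A sketch of the FilteredImageSource pairing described above (the 100x100 target size is arbitrary, and any Component can serve as the image creator):

import java.awt.Component;
import java.awt.Image;
import java.awt.image.FilteredImageSource;
import java.awt.image.ImageFilter;
import java.awt.image.ReplicateScaleFilter;

public class ScaleFilterExample {
    // Produces a 100x100 version of 'src' by row/column replication.
    static Image scaleTo100(Component c, Image src) {
        ImageFilter filter = new ReplicateScaleFilter(100, 100);
        return c.createImage(new FilteredImageSource(src.getSource(), filter));
    }
}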

jdk.awt.image.RescaleOp

This class performs a pixel-by-pixel rescaling of the data in the source image by multiplying the sample values for each pixel by a scale factor and then adding an offset. The scaled sample values are clipped to the minimum/maximum representable in the destination image.

The pseudo code for the rescaling operation is as follows:

for each pixel from Source object {
    for each band/component of the pixel {
        dstElement = (srcElement*scaleFactor) + offset
    }
}

For Rasters, rescaling operates on bands. The number of sets of scaling constants may be one, in which case the same constants are applied to all bands, or it must equal the number of Source Raster bands.

For BufferedImages, rescaling operates on color and alpha components. The number of sets of scaling constants may be one, in which case the same constants are applied to all color (but not alpha) components. Otherwise, the number of sets of scaling constants may equal the number of Source color components, in which case no rescaling of the alpha component (if present) is performed. If neither of these cases apply, the number of sets of scaling constants must equal the number of Source color components plus alpha components, in which case all color and alpha components are rescaled.

BufferedImage sources with premultiplied alpha data are treated in the same manner as non-premultiplied images for purposes of rescaling. That is, the rescaling is done per band on the raw data of the BufferedImage source without regard to whether the data is premultiplied. If a color conversion is required to the destination ColorModel, the premultiplied state of both source and destination will be taken into account for this step.

Images with an IndexColorModel cannot be rescaled.

If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used when color conversion is required.

Note that in-place operation is allowed (i.e. the source and destination can be the same object).

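For example, a brightening pass might look like this (the 1.2 scale and 10 offset are arbitrary, and the image is assumed not to use an IndexColorModel):

import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class RescaleExample {
    // Scales every sample by 1.2 and adds 10, clipping to the valid range.
    static BufferedImage brighten(BufferedImage img) {
        RescaleOp op = new RescaleOp(1.2f, 10f, null);
        return op.filter(img, img);  // in-place operation is allowed
    }
}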

jdk.awt.image.RGBImageFilter

This class provides an easy way to create an ImageFilter which modifies the pixels of an image in the default RGB ColorModel. It is meant to be used in conjunction with a FilteredImageSource object to produce filtered versions of existing images. It is an abstract class that provides the calls needed to channel all of the pixel data through a single method which converts pixels one at a time in the default RGB ColorModel regardless of the ColorModel being used by the ImageProducer. The only method which needs to be defined to create a usable image filter is the filterRGB method. Here is an example of a definition of a filter which swaps the red and blue components of an image:

 class RedBlueSwapFilter extends RGBImageFilter {
     public RedBlueSwapFilter() {
         // The filter's operation does not depend on the
         // pixel's location, so IndexColorModels can be
         // filtered directly.
         canFilterIndexColorModel = true;
     }

     public int filterRGB(int x, int y, int rgb) {
         return ((rgb & 0xff00ff00)
                 | ((rgb & 0xff0000) >> 16)
                 | ((rgb & 0xff) << 16));
     }
 }

jdk.awt.image.SampleModel

This abstract class defines an interface for extracting samples of pixels in an image. All image data is expressed as a collection of pixels. Each pixel consists of a number of samples. A sample is a datum for one band of an image and a band consists of all samples of a particular type in an image. For example, a pixel might contain three samples representing its red, green and blue components. There are three bands in the image containing this pixel. One band consists of all the red samples from all pixels in the image. The second band consists of all the green samples and the remaining band consists of all of the blue samples. The pixel can be stored in various formats. For example, all samples from a particular band can be stored contiguously or all samples from a single pixel can be stored contiguously.

Subclasses of SampleModel specify the types of samples they can represent (e.g. unsigned 8-bit byte, signed 16-bit short, etc.) and may specify how the samples are organized in memory. In the Java 2D(tm) API, built-in image processing operators may not operate on all possible sample types, but generally will work for unsigned integral samples of 16 bits or less. Some operators support a wider variety of sample types.

A collection of pixels is represented as a Raster, which consists of a DataBuffer and a SampleModel. The SampleModel allows access to samples in the DataBuffer and may provide low-level information that a programmer can use to directly manipulate samples and pixels in the DataBuffer.

This class is generally a fallback for dealing with images. More efficient code will cast the SampleModel to the appropriate subclass and extract the information needed to directly manipulate pixels in the DataBuffer.

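A sketch of that generic (fallback) access path, summing one band through the SampleModel interface; the sumBandZero helper is illustrative, and faster code would cast to a subclass instead:

import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.awt.image.SampleModel;

public class SampleModelExample {
    static long sumBandZero(BufferedImage img) {
        Raster raster = img.getRaster();
        SampleModel sm = raster.getSampleModel();
        long sum = 0;
        for (int y = 0; y < sm.getHeight(); y++) {
            for (int x = 0; x < sm.getWidth(); x++) {
                // One sample of band 0, fetched generically.
                sum += sm.getSample(x, y, 0, raster.getDataBuffer());
            }
        }
        return sum;
    }
}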

jdk.awt.image.ShortLookupTable

This class defines a lookup table object. The output of a lookup operation using an object of this class is interpreted as an unsigned short quantity. The lookup table contains short data arrays for one or more bands (or components) of an image, and it contains an offset which will be subtracted from the input values before indexing the arrays. This allows an array smaller than the native data size to be provided for a constrained input. If there is only one array in the lookup table, it will be applied to all bands.

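A sketch of how the offset behaves (the offset of 1000 and the inverting table are arbitrary; the resulting op would then be applied to an image whose samples fall in the constrained range):

import java.awt.image.LookupOp;
import java.awt.image.ShortLookupTable;

public class ShortLookupExample {
    static LookupOp makeOp() {
        // Inputs 1000..1255 index positions 0..255 once the offset of
        // 1000 has been subtracted.
        short[] table = new short[256];
        for (int i = 0; i < table.length; i++) {
            table[i] = (short) (255 - i);  // invert the constrained range
        }
        // A single array is applied to every band of the source.
        return new LookupOp(new ShortLookupTable(1000, table), null);
    }
}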

jdk.awt.image.SinglePixelPackedSampleModel

This class represents pixel data packed such that the N samples which make up a single pixel are stored in a single data array element, and each data array element holds samples for only one pixel. This class supports TYPE_BYTE, TYPE_USHORT, TYPE_INT data types. All data array elements reside in the first bank of a DataBuffer. Accessor methods are provided so that the image data can be manipulated directly. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Bit masks are the masks required to extract the samples representing the bands of the pixel. Bit offsets are the offsets in bits into the data array element of the samples representing the bands of the pixel.

The following code illustrates extracting the bits of the sample representing band b for pixel x,y from DataBuffer data:

int sample = data.getElem(y * scanlineStride + x);
sample = (sample & bitMasks[b]) >>> bitOffsets[b];
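
A complete sketch that builds such a layout directly (the 4x4 size and the RGB-style masks are illustrative):

import java.awt.image.DataBuffer;
import java.awt.image.DataBufferInt;
import java.awt.image.SinglePixelPackedSampleModel;

public class PackedSampleExample {
    public static void main(String[] args) {
        int w = 4, h = 4;
        // Three 8-bit bands (R, G, B) packed into each int element.
        int[] masks = {0x00ff0000, 0x0000ff00, 0x000000ff};
        SinglePixelPackedSampleModel sm =
                new SinglePixelPackedSampleModel(DataBuffer.TYPE_INT, w, h, masks);
        DataBufferInt data = new DataBufferInt(w * h);
        sm.setSample(1, 2, 0, 200, data);                 // write band 0 at (1,2)
        System.out.println(sm.getSample(1, 2, 0, data));  // 200
    }
}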

jdk.awt.image.TileObserver

An interface for objects that wish to be informed when tiles of a WritableRenderedImage become modifiable by some writer via a call to getWritableTile, and when they become unmodifiable via the last call to releaseWritableTile.

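A minimal observer sketch (LoggingTileObserver is a hypothetical name; it would be registered with a WritableRenderedImage via addTileObserver):

import java.awt.image.TileObserver;
import java.awt.image.WritableRenderedImage;

public class LoggingTileObserver implements TileObserver {
    public void tileUpdate(WritableRenderedImage source,
                           int tileX, int tileY, boolean willBeWritable) {
        // Called when a tile is first checked out and when it is finally released.
        System.out.println("tile (" + tileX + "," + tileY + ") is now "
                + (willBeWritable ? "writable" : "read-only"));
    }
}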

jdk.awt.image.VolatileImage

VolatileImage is an image which can lose its contents at any time due to circumstances beyond the control of the application (e.g., situations caused by the operating system or by other applications). Because of the potential for hardware acceleration, a VolatileImage object can have significant performance benefits on some platforms.

The drawing surface of an image (the memory where the image contents actually reside) can be lost or invalidated, causing the contents of that memory to go away. The drawing surface thus needs to be restored or recreated and the contents of that surface need to be re-rendered. VolatileImage provides an interface for allowing the user to detect these problems and fix them when they occur.

When a VolatileImage object is created, limited system resources such as video memory (VRAM) may be allocated in order to support the image. When a VolatileImage object is no longer used, it may be garbage-collected and those system resources will be returned, but this process does not happen at guaranteed times. Applications that create many VolatileImage objects (for example, a resizing window may force recreation of its back buffer as the size changes) may run out of optimal system resources for new VolatileImage objects simply because the old objects have not yet been removed from the system. (New VolatileImage objects may still be created, but they may not perform as well as those created in accelerated memory). The flush method may be called at any time to proactively release the resources used by a VolatileImage so that it does not prevent subsequent VolatileImage objects from being accelerated. In this way, applications can have more control over the state of the resources taken up by obsolete VolatileImage objects.

This image should not be subclassed directly but should be created by using the Component.createVolatileImage or GraphicsConfiguration.createCompatibleVolatileImage(int, int) methods.

An example of using a VolatileImage object follows:

// image creation
VolatileImage vImg = createVolatileImage(w, h);

// rendering to the image
void renderOffscreen() {
    do {
        if (vImg.validate(getGraphicsConfiguration()) ==
            VolatileImage.IMAGE_INCOMPATIBLE)
        {
            // old vImg doesn't work with new GraphicsConfig; re-create it
            vImg = createVolatileImage(w, h);
        }
        Graphics2D g = vImg.createGraphics();
        //
        // miscellaneous rendering commands...
        //
        g.dispose();
    } while (vImg.contentsLost());
}

// copying from the image (here, gScreen is the Graphics
// object for the onscreen window)
do {
    int returnCode = vImg.validate(getGraphicsConfiguration());
    if (returnCode == VolatileImage.IMAGE_RESTORED) {
        // Contents need to be restored
        renderOffscreen();      // restore contents
    } else if (returnCode == VolatileImage.IMAGE_INCOMPATIBLE) {
        // old vImg doesn't work with new GraphicsConfig; re-create it
        vImg = createVolatileImage(w, h);
        renderOffscreen();
    }
    gScreen.drawImage(vImg, 0, 0, this);
} while (vImg.contentsLost());

Note that this class subclasses from the Image class, which includes methods that take an ImageObserver parameter for asynchronous notifications as information is received from a potential ImageProducer. Since this VolatileImage is not loaded from an asynchronous source, the various methods that take an ImageObserver parameter will behave as if the data has already been obtained from the ImageProducer. Specifically, this means that the return values from such methods will never indicate that the information is not yet available and the ImageObserver used in such methods will never need to be recorded for an asynchronous callback notification.


jdk.awt.image.WritableRaster

This class extends Raster to provide pixel writing capabilities. Refer to the class comment for Raster for descriptions of how a Raster stores pixels.

The constructors of this class are protected. To instantiate a WritableRaster, use one of the createWritableRaster factory methods in the Raster class.

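For instance, one of the Raster factory methods can be used like this (the raster size, band count, and pixel values are arbitrary):

import java.awt.Point;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class WritableRasterExample {
    public static void main(String[] args) {
        // A 16x16, 3-band interleaved byte raster with its origin at (0,0).
        WritableRaster wr = Raster.createInterleavedRaster(
                DataBuffer.TYPE_BYTE, 16, 16, 3, new Point(0, 0));
        wr.setPixel(4, 4, new int[] {255, 128, 0});  // write one pixel
        int[] px = wr.getPixel(4, 4, (int[]) null);
        System.out.println(px[0] + "," + px[1] + "," + px[2]);
    }
}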

jdk.awt.image.WritableRenderedImage

WritableRenderedImage is a common interface for objects which contain or can produce image data in the form of Rasters and which can be modified and/or written over. The image data may be stored/produced as a single tile or a regular array of tiles.

WritableRenderedImage provides notification to other interested objects when a tile is checked out for writing (via the getWritableTile method) and when the last writer of a particular tile relinquishes its access (via a call to releaseWritableTile). Additionally, it allows any caller to determine whether any tiles are currently checked out (via hasTileWriters), and to obtain a list of such tiles (via getWritableTileIndices, in the form of a Vector of Point objects).

Objects wishing to be notified of changes in tile writability must implement the TileObserver interface, and are added by a call to addTileObserver. Multiple calls to addTileObserver for the same object will result in multiple notifications. An existing observer may reduce its notifications by calling removeTileObserver; if the observer had no notifications the operation is a no-op.

It is necessary for a WritableRenderedImage to ensure that notifications occur only when the first writer acquires a tile and the last writer releases it.

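A sketch of the check-out/release protocol, using the fact that BufferedImage implements this interface with a single tile at index (0,0); the touchTile helper is illustrative:

import java.awt.image.WritableRaster;
import java.awt.image.WritableRenderedImage;

public class TileCheckoutExample {
    static void touchTile(WritableRenderedImage img) {
        WritableRaster tile = img.getWritableTile(0, 0);  // observers notified
        try {
            tile.setSample(0, 0, 0, 42);  // write through the checked-out tile
        } finally {
            img.releaseWritableTile(0, 0);  // last release notifies observers
        }
    }
}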
