An interface for events that know how to dispatch themselves. By implementing this interface an event can be placed upon the event queue and its dispatch() method will be called when the event is dispatched, using the EventDispatchThread.
This is a very useful mechanism for avoiding deadlocks. If a thread is executing in a critical section (i.e., it has entered one or more monitors), calling other synchronized code may cause deadlocks. To avoid the potential deadlocks, an ActiveEvent can be created to run the second section of code at a later time. If there is contention on the monitor, the second thread will simply block until the first thread has finished its work and exited its monitors.
For security reasons, it is often desirable to use an ActiveEvent to avoid calling untrusted code from a critical thread. For instance, peer implementations can use this facility to avoid making calls into user code from a system thread. Doing so avoids potential deadlocks and denial-of-service attacks.
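As a sketch of this pattern (the event class, its ID choice, and its payload below are illustrative, not part of AWT), an AWTEvent subclass implements ActiveEvent, is posted to the system EventQueue, and its dispatch() method later runs on the event dispatch thread:

import java.awt.AWTEvent;
import java.awt.ActiveEvent;
import java.awt.Toolkit;

// A hypothetical self-dispatching event: posted now, executed later on the
// event dispatch thread, outside any monitors the posting thread holds.
class DeferredWorkEvent extends AWTEvent implements ActiveEvent {
    private final Runnable work;  // illustrative payload

    DeferredWorkEvent(Object source, Runnable work) {
        super(source, AWTEvent.RESERVED_ID_MAX + 1);  // custom IDs go above RESERVED_ID_MAX
        this.work = work;
    }

    public void dispatch() {
        work.run();
    }
}

Posting is then a single call: Toolkit.getDefaultToolkit().getSystemEventQueue().postEvent(new DeferredWorkEvent(source, work)).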
The interface for objects which have an adjustable numeric value contained within a bounded range of values.
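For instance (a minimal sketch), Scrollbar is a concrete Adjustable whose value is constrained to the configured range:

import java.awt.Adjustable;
import java.awt.Scrollbar;

// Scrollbar implements Adjustable: its value stays within [minimum, maximum]
Adjustable sb = new Scrollbar(Scrollbar.HORIZONTAL,
        50,     // initial value
        10,     // visible amount
        0,      // minimum
        100);   // maximum
sb.setValue(75);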
The AlphaComposite class implements basic alpha compositing rules for combining source and destination colors to achieve blending and transparency effects with graphics and images. The specific rules implemented by this class are the basic set of 12 rules described in T. Porter and T. Duff, "Compositing Digital Images", SIGGRAPH 84, 253-259. The rest of this documentation assumes some familiarity with the definitions and concepts outlined in that paper.
This class extends the standard equations defined by Porter and Duff to include one additional factor. An instance of the AlphaComposite class can contain an alpha value that is used to modify the opacity or coverage of every source pixel before it is used in the blending equations.
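For example (a minimal usage sketch), the extra alpha is supplied through the getInstance factory and affects every subsequent drawing operation on the Graphics2D:

import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Blend a half-transparent red square over the destination: the 0.5f extra
// alpha scales the opacity of every source pixel before the SRC_OVER rule runs.
BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2d = img.createGraphics();
g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
g2d.setColor(Color.RED);
g2d.fillRect(10, 10, 50, 50);
g2d.dispose();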
It is important to note that the equations defined by the Porter and Duff paper are all defined to operate on color components that are premultiplied by their corresponding alpha components. Since the ColorModel and Raster classes allow the storage of pixel data in either premultiplied or non-premultiplied form, all input data must be normalized into premultiplied form before applying the equations and all results might need to be adjusted back to the form required by the destination before the pixel values are stored.
Also note that this class defines only the equations for combining color and alpha values in a purely mathematical sense. The accurate application of its equations depends on the way the data is retrieved from its sources and stored in its destinations. See Implementation Caveats for further information.
The following factors are used in the description of the blending equation in the Porter and Duff paper:
Factor  Definition
As      the alpha component of the source pixel
Cs      a color component of the source pixel in premultiplied form
Ad      the alpha component of the destination pixel
Cd      a color component of the destination pixel in premultiplied form
Fs      the fraction of the source pixel that contributes to the output
Fd      the fraction of the destination pixel that contributes to the output
Ar      the alpha component of the result
Cr      a color component of the result in premultiplied form
Using these factors, Porter and Duff define 12 ways of choosing the blending factors Fs and Fd to produce each of 12 desirable visual effects. The equations for determining Fs and Fd are given in the descriptions of the 12 static fields that specify visual effects. For example, the description for SRC_OVER specifies that Fs = 1 and Fd = (1-As). Once a set of equations for determining the blending factors is known they can then be applied to each pixel to produce a result using the following set of equations:
Fs = f(Ad)
Fd = f(As)
Ar = As*Fs + Ad*Fd
Cr = Cs*Fs + Cd*Fd
The following factors will be used to discuss our extensions to the blending equation in the Porter and Duff paper:
Factor  Definition
Csr     one of the raw color components of the source pixel
Cdr     one of the raw color components of the destination pixel
Aac     the "extra" alpha component from the AlphaComposite instance
Asr     the raw alpha component of the source pixel
Adr     the raw alpha component of the destination pixel
Adf     the final alpha component stored in the destination
Cdf     the final raw color component stored in the destination
Preparing Inputs
The AlphaComposite class defines an additional alpha value that is applied to the source alpha. This value is applied as if an implicit SRC_IN rule were first applied to the source pixel against a pixel with the indicated alpha by multiplying both the raw source alpha and the raw source colors by the alpha in the AlphaComposite. This leads to the following equation for producing the alpha used in the Porter and Duff blending equation:
As = Asr * Aac
All of the raw source color components need to be multiplied by the alpha in the AlphaComposite instance. Additionally, if the source was not in premultiplied form then the color components also need to be multiplied by the source alpha. Thus, the equation for producing the source color components for the Porter and Duff equation depends on whether the source pixels are premultiplied or not:
Cs = Csr * Asr * Aac (if source is not premultiplied)
Cs = Csr * Aac (if source is premultiplied)
No adjustment needs to be made to the destination alpha:
Ad = Adr
The destination color components need to be adjusted only if they are not in premultiplied form:
Cd = Cdr * Ad (if destination is not premultiplied)
Cd = Cdr (if destination is premultiplied)
Applying the Blending Equation
The adjusted As, Ad, Cs, and Cd are used in the standard Porter and Duff equations to calculate the blending factors Fs and Fd and then the resulting premultiplied components Ar and Cr.
Preparing Results
The results only need to be adjusted if they are to be stored back into a destination buffer that holds data that is not premultiplied, using the following equations:
Adf = Ar
Cdf = Cr (if dest is premultiplied)
Cdf = Cr / Ar (if dest is not premultiplied)
Note that since the division is undefined if the resulting alpha is zero, the division in that case is omitted to avoid the "divide by zero" and the color components are left as all zeros.
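The three stages above can be read as one small function. The following is a minimal floating-point sketch (the method name and the non-premultiplied assumptions are ours, not part of the API), specialized to the SRC_OVER rule where Fs = 1 and Fd = (1-As):

static float[] srcOverPixel(float asr, float csr,  // raw source alpha and color
                            float adr, float cdr,  // raw destination alpha and color
                            float aac) {           // "extra" alpha from the AlphaComposite
    // Preparing inputs (source and destination assumed non-premultiplied)
    float as = asr * aac;
    float cs = csr * asr * aac;
    float ad = adr;
    float cd = cdr * ad;

    // Applying the blending equation: SRC_OVER has Fs = 1, Fd = (1 - As)
    float ar = as + ad * (1.0f - as);
    float cr = cs + cd * (1.0f - as);

    // Preparing results for a non-premultiplied destination; the divide is
    // skipped when the resulting alpha is zero and the color is left at zero
    float adf = ar;
    float cdf = (ar == 0.0f) ? 0.0f : cr / ar;
    return new float[] { adf, cdf };
}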
Performance Considerations
For performance reasons, it is preferable that Raster objects passed to the compose method of a CompositeContext object created by the AlphaComposite class have premultiplied data. If either the source Raster or the destination Raster is not premultiplied, however, appropriate conversions are performed before and after the compositing operation.
Implementation Caveats
Many sources, such as some of the opaque image types listed in the BufferedImage class, do not store alpha values for their pixels. Such sources supply an alpha of 1.0 for all of their pixels.
Many destinations also have no place to store the alpha values that result from the blending calculations performed by this class. Such destinations thus implicitly discard the resulting alpha values that this class produces. It is recommended that such destinations treat their stored color values as non-premultiplied and divide the resulting color values by the resulting alpha value before storing the color values and discarding the alpha value.
The accuracy of the results depends on the manner in which pixels are stored in the destination. An image format that provides at least 8 bits of storage per color and alpha component is at least adequate for use as a destination for a sequence of a few to a dozen compositing operations. An image format with fewer than 8 bits of storage per component is of limited use for just one or two compositing operations before the rounding errors dominate the results. An image format that does not separately store color components is not a good candidate for any type of translucent blending. For example, BufferedImage.TYPE_BYTE_INDEXED should not be used as a destination for a blending operation because every operation can introduce large errors, due to the need to choose a pixel from a limited palette to match the results of the blending equations.
Nearly all formats store pixels as discrete integers rather than the floating point values used in the reference equations above. The implementation can either scale the integer pixel values into floating point values in the range 0.0 to 1.0 or use slightly modified versions of the equations that operate entirely in the integer domain and yet produce analogous results to the reference equations.
Typically the integer values are related to the floating point values in such a way that the integer 0 is equated to the floating point value 0.0 and the integer 2^n-1 (where n is the number of bits in the representation) is equated to 1.0. For 8-bit representations, this means that 0x00 represents 0.0 and 0xff represents 1.0.
The internal implementation can approximate some of the equations and it can also eliminate some steps to avoid unnecessary operations. For example, consider a discrete integer image with non-premultiplied alpha values that uses 8 bits per component for storage. The stored values for a nearly transparent darkened red might be:
(A, R, G, B) = (0x01, 0xb0, 0x00, 0x00)
If integer math were being used and this value were being composited in SRC mode with no extra alpha, then the math would indicate that the results were (in integer format):
(A, R, G, B) = (0x01, 0x01, 0x00, 0x00)
Note that the intermediate values, which are always in premultiplied form, would only allow the integer red component to be either 0x00 or 0x01. When we try to store this result back into a destination that is not premultiplied, dividing out the alpha will give us very few choices for the non-premultiplied red value. In this case an implementation that performs the math in integer space without shortcuts is likely to end up with the final pixel values of:
(A, R, G, B) = (0x01, 0xff, 0x00, 0x00)
(Note that 0x01 divided by 0x01 gives you 1.0, which is equivalent to the value 0xff in an 8-bit storage format.)
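That loss is easy to reproduce. The following minimal sketch performs the integer round-trip just described, premultiplying and then un-premultiplying the stored 8-bit values:

int a = 0x01, r = 0xb0;

// Normalize into premultiplied form (SRC copies the source pixel through)
int rPre = Math.round(r * a / 255.0f);  // 0xb0 * (0x01/0xff) rounds to 0x01

// Divide the alpha back out for the non-premultiplied destination
int rOut = (a == 0) ? 0 : Math.min(255, Math.round(rPre * 255.0f / a));  // 0xff

System.out.printf("(A, R) = (0x%02x, 0x%02x)%n", a, rOut);  // prints (0x01, 0xff)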
Alternately, an implementation that uses floating point math might produce more accurate results and end up returning to the original pixel value with little, if any, roundoff error. Or, an implementation using integer math might decide that since the equations boil down to a virtual NOP on the color values if performed in a floating point space, it can transfer the pixel untouched to the destination and avoid all the math entirely.
These implementations all attempt to honor the same equations, but use different tradeoffs of integer and floating point math and reduced or full equations. To account for such differences, it is probably best to expect only the premultiplied form of the results to match between implementations and image formats. In this case, both answers, expressed in premultiplied form, would equate to:
(A, R, G, B) = (0x01, 0x01, 0x00, 0x00)
and thus they would all match.
Because of the technique of simplifying the equations for calculation efficiency, some implementations might perform differently when encountering result alpha values of 0.0 on a non-premultiplied destination. Note that the simplification of removing the divide by alpha in the case of the SRC rule is technically not valid if the denominator (alpha) is 0. But, since the results should only be expected to be accurate when viewed in premultiplied form, a resulting alpha of 0 essentially renders the resulting color components irrelevant and so exact behavior in this case should not be expected.
Thrown when a serious Abstract Window Toolkit error has occurred.
The root event class for all AWT events. This class and its subclasses supersede the original java.awt.Event class. Subclasses of this root AWTEvent class defined outside of the java.awt.event package should define event ID values greater than the value defined by RESERVED_ID_MAX.
The event masks defined in this class are needed by Component subclasses which are using Component.enableEvents() to select for event types not selected by registered listeners. If a listener is registered on a component, the appropriate event mask is already set internally by the component.
The masks are also used to specify to which types of events an AWTEventListener should listen. The masks are bitwise-ORed together and passed to Toolkit.addAWTEventListener.
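A minimal sketch of the second use, ORing masks together for a global listener (the listener body is illustrative only):

import java.awt.AWTEvent;
import java.awt.Toolkit;

// Receive every key and mouse event dispatched in this VM (requires the
// "listenToAllAWTEvents" AWTPermission when a security manager is installed)
Toolkit.getDefaultToolkit().addAWTEventListener(
        event -> System.out.println("AWT event: " + event),
        AWTEvent.KEY_EVENT_MASK | AWTEvent.MOUSE_EVENT_MASK);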
AWTEventMulticaster implements efficient and thread-safe multi-cast event dispatching for the AWT events defined in the java.awt.event package.
The following example illustrates how to use this class:
public class MyComponent extends Component {
    ActionListener actionListener = null;

    public synchronized void addActionListener(ActionListener l) {
        actionListener = AWTEventMulticaster.add(actionListener, l);
    }
    public synchronized void removeActionListener(ActionListener l) {
        actionListener = AWTEventMulticaster.remove(actionListener, l);
    }
    public void processEvent(AWTEvent e) {
        // when an event occurs which causes the "action" semantic
        ActionListener listener = actionListener;
        if (listener != null) {
            listener.actionPerformed(
                new ActionEvent(this, ActionEvent.ACTION_PERFORMED, "action"));
        }
    }
}

The important point to note is that the first argument to the add and remove methods is the field maintaining the listeners. In addition, you must assign the result of the add and remove methods to the field maintaining the listeners.
AWTEventMulticaster is implemented as a pair of EventListeners that are set at construction time. AWTEventMulticaster is immutable. The add and remove methods do not alter the AWTEventMulticaster in any way; if necessary, a new AWTEventMulticaster is created. In this way it is safe to add and remove listeners while an event is being dispatched. However, event listeners added during the dispatch of an event are not notified of the event currently being dispatched.
All of the add methods allow null arguments. If the first argument is null, the second argument is returned. If the first argument is not null and the second argument is null, the first argument is returned. If both arguments are non-null, a new AWTEventMulticaster is created using the two arguments and returned.
For the remove methods that take two arguments, the following is returned:
- null, if the first argument is null, or the arguments are equal, by way of ==.
- the first argument, if the first argument is not an instance of AWTEventMulticaster.
- the result of invoking remove(EventListener) on the first argument, supplying the second argument to the remove(EventListener) method.
Swing makes use of EventListenerList for similar logic. Refer to it for details.
Signals that an Abstract Window Toolkit exception has occurred.
An AWTKeyStroke represents a key action on the keyboard, or equivalent input device. AWTKeyStrokes can correspond to only a press or release of a particular key, just as KEY_PRESSED and KEY_RELEASED KeyEvents do; alternately, they can correspond to typing a specific Java character, just as KEY_TYPED KeyEvents do. In all cases, AWTKeyStrokes can specify modifiers (alt, shift, control, meta, altGraph, or a combination thereof) which must be present during the action for an exact match.
AWTKeyStrokes are immutable, and are intended to be unique. Client code should never create an AWTKeyStroke on its own, but should instead use a variant of getAWTKeyStroke. Client use of these factory methods allows the AWTKeyStroke implementation to cache and share instances efficiently.
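For example (a minimal sketch of the factory methods):

import java.awt.AWTKeyStroke;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

// A Ctrl+C key press and a typed 'a', both obtained from the shared cache
AWTKeyStroke ctrlC = AWTKeyStroke.getAWTKeyStroke(
        KeyEvent.VK_C, InputEvent.CTRL_DOWN_MASK);
AWTKeyStroke typedA = AWTKeyStroke.getAWTKeyStroke('a');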
This class is for AWT permissions. An AWTPermission contains a target name but no actions list; you either have the named permission or you don't.
The target name is the name of the AWT permission (see below). The naming convention follows the hierarchical property naming convention. Also, an asterisk could be used to represent all AWT permissions.
The following list gives all the possible AWTPermission target names; for each it describes what the permission allows and discusses the risks of granting code the permission.

accessClipboard
    Allows: posting and retrieval of information to and from the AWT clipboard.
    Risks: This would allow malfeasant code to share potentially sensitive or confidential information.

accessEventQueue
    Allows: access to the AWT event queue.
    Risks: After retrieving the AWT event queue, malicious code may peek at and even remove existing events from its event queue, as well as post bogus events which may purposefully cause the application or applet to misbehave in an insecure manner.

accessSystemTray
    Allows: access to the AWT SystemTray instance.
    Risks: This would allow malicious code to add tray icons to the system tray. First, such an icon may look like the icon of some known application (such as a firewall or anti-virus) and order a user to do something unsafe (with help of balloon messages). Second, the system tray may be glutted with tray icons so that no one could add a tray icon anymore.

createRobot
    Allows: creation of java.awt.Robot objects.
    Risks: The java.awt.Robot object allows code to generate native-level mouse and keyboard events as well as read the screen. It could allow malicious code to control the system, run other programs, read the display, and deny mouse and keyboard access to the user.

fullScreenExclusive
    Allows: entering full-screen exclusive mode.
    Risks: Entering full-screen exclusive mode allows direct access to low-level graphics card memory. This could be used to spoof the system, since the program is in direct control of rendering. Depending on the implementation, the security warning may not be shown for the windows used to enter the full-screen exclusive mode (assuming that the fullScreenExclusive permission has been granted to this application). Note that this behavior does not mean that the showWindowWithoutWarningBanner permission will be automatically granted to the application which has the fullScreenExclusive permission: non-full-screen windows will continue to be shown with the security warning.

listenToAllAWTEvents
    Allows: listening to all AWT events, system-wide.
    Risks: After adding an AWT event listener, malicious code may scan all AWT events dispatched in the system, allowing it to read all user input (such as passwords). Each AWT event listener is called from within the context of that event queue's EventDispatchThread, so if the accessEventQueue permission is also enabled, malicious code could modify the contents of AWT event queues system-wide, causing the application or applet to misbehave in an insecure manner.

readDisplayPixels
    Allows: readback of pixels from the display screen.
    Risks: Interfaces such as the java.awt.Composite interface or the java.awt.Robot class, which allow arbitrary code to examine pixels on the display, could enable malicious code to snoop on the activities of the user.

replaceKeyboardFocusManager
    Allows: setting the KeyboardFocusManager for a particular thread.
    Risks: When a SecurityManager is installed, the invoking thread must be granted this permission in order to replace the current KeyboardFocusManager. If the permission is not granted, a SecurityException will be thrown.

setAppletStub
    Allows: setting the stub which implements Applet container services.
    Risks: Malicious code could set an applet's stub and result in unexpected behavior or denial of service to an applet.

setWindowAlwaysOnTop
    Allows: setting the always-on-top property of a window: Window.setAlwaysOnTop(boolean).
    Risks: The malicious window might make itself look and behave like a real full desktop, so that information entered by the unsuspecting user is captured and subsequently misused.

showWindowWithoutWarningBanner
    Allows: display of a window without also displaying a banner warning that the window was created by an applet.
    Risks: Without this warning, an applet may pop up windows without the user knowing that they belong to an applet. Since users may make security-sensitive decisions based on whether or not the window belongs to an applet (entering a username and password into a dialog box, for example), disabling this warning banner may allow applets to trick the user into entering such information.

toolkitModality
    Allows: creating TOOLKIT_MODAL dialogs and setting the TOOLKIT_EXCLUDE window property.
    Risks: When a toolkit-modal dialog is shown from an applet, it blocks all other applets in the browser. When launching applications from Java Web Start, its windows (such as the security dialog) may also be blocked by toolkit-modal dialogs shown from these applications.

watchMousePointer
    Allows: getting information about the mouse pointer position at any time.
    Risks: By constantly watching the mouse pointer, an applet can make guesses about what the user is doing; for example, moving the mouse to the lower left corner of the screen most likely means that the user is about to launch an application. If a virtual keypad is used so that the keyboard is emulated using the mouse, an applet may guess what is being typed.
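A minimal sketch of how such a permission is consulted at run time (granting it is done separately, for example in a security policy file):

// Privileged AWT code performs a check like this before touching the
// clipboard; a SecurityException is thrown if "accessClipboard" was not granted
SecurityManager sm = System.getSecurityManager();
if (sm != null) {
    sm.checkPermission(new java.awt.AWTPermission("accessClipboard"));
}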
The BasicStroke class defines a basic set of rendering attributes for the outlines of graphics primitives, which are rendered with a Graphics2D object that has its Stroke attribute set to this BasicStroke. The rendering attributes defined by BasicStroke describe the shape of the mark made by a pen drawn along the outline of a Shape and the decorations applied at the ends and joins of path segments of the Shape. These rendering attributes include:
width
    The pen width, measured perpendicularly to the pen trajectory.
end caps
    The decoration applied to the ends of unclosed subpaths and dash segments. Subpaths that start and end on the same point are still considered unclosed if they do not have a CLOSE segment. See SEG_CLOSE for more information on the CLOSE segment. The three different decorations are: CAP_BUTT, CAP_ROUND, and CAP_SQUARE.
line joins
    The decoration applied at the intersection of two path segments and at the intersection of the endpoints of a subpath that is closed using SEG_CLOSE. The three different decorations are: JOIN_BEVEL, JOIN_MITER, and JOIN_ROUND.
miter limit
    The limit to trim a line join that has a JOIN_MITER decoration. A line join is trimmed when the ratio of miter length to stroke width is greater than the miterlimit value. The miter length is the diagonal length of the miter, which is the distance between the inside corner and the outside corner of the intersection. The smaller the angle formed by two line segments, the longer the miter length and the sharper the angle of intersection. The default miterlimit value of 10.0f causes all angles less than 11 degrees to be trimmed. Trimming miters converts the decoration of the line join to bevel.
dash attributes
    The definition of how to make a dash pattern by alternating between opaque and transparent sections.
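All of these attributes come together in the most general constructor. A minimal sketch of a dashed stroke:

import java.awt.BasicStroke;

// A 4-unit-wide pen with round caps and joins and a 9-on/3-off dash pattern
BasicStroke dashed = new BasicStroke(
        4.0f,                        // width
        BasicStroke.CAP_ROUND,       // end caps
        BasicStroke.JOIN_ROUND,      // line joins
        10.0f,                       // miter limit
        new float[] { 9.0f, 3.0f },  // dash attributes: opaque, transparent lengths
        0.0f);                       // dash phase (offset into the pattern)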
All attributes that specify measurements and distances controlling the shape of the returned outline are measured in the same coordinate system as the original unstroked Shape argument. When a Graphics2D object uses a Stroke object to redefine a path during the execution of one of its draw methods, the geometry is supplied in its original form before the Graphics2D transform attribute is applied. Therefore, attributes such as the pen width are interpreted in the user space coordinate system of the Graphics2D object and are subject to the scaling and shearing effects of the user-space-to-device-space transform in that particular Graphics2D. For example, the width of a rendered shape's outline is determined not only by the width attribute of this BasicStroke, but also by the transform attribute of the Graphics2D object. Consider this code:
// sets the Graphics2D object's Transform attribute
g2d.scale(10, 10);
// sets the Graphics2D object's Stroke attribute
g2d.setStroke(new BasicStroke(1.5f));
Assuming there are no other scaling transforms added to the Graphics2D object, the resulting line will be approximately 15 pixels wide. As the example code demonstrates, a floating-point line offers better precision, especially when large transforms are used with a Graphics2D object. When a line is diagonal, the exact width depends on how the rendering pipeline chooses which pixels to fill as it traces the theoretical widened outline. The choice of which pixels to turn on is affected by the antialiasing attribute because the antialiasing rendering pipeline can choose to color partially-covered pixels.
For more information on the user space coordinate system and the rendering process, see the Graphics2D class comments.
A border layout lays out a container, arranging and resizing its components to fit in five regions: north, south, east, west, and center. Each region may contain no more than one component, and is identified by a corresponding constant: NORTH, SOUTH, EAST, WEST, and CENTER. When adding a component to a container with a border layout, use one of these five constants, for example:
Panel p = new Panel();
p.setLayout(new BorderLayout());
p.add(new Button("Okay"), BorderLayout.SOUTH);

As a convenience, BorderLayout interprets the absence of a string specification the same as the constant CENTER:

Panel p2 = new Panel();
p2.setLayout(new BorderLayout());
p2.add(new TextArea());  // Same as p2.add(new TextArea(), BorderLayout.CENTER);
In addition, BorderLayout supports the relative positioning constants, PAGE_START, PAGE_END, LINE_START, and LINE_END. In a container whose ComponentOrientation is set to ComponentOrientation.LEFT_TO_RIGHT, these constants map to NORTH, SOUTH, WEST, and EAST, respectively.
For compatibility with previous releases, BorderLayout also includes the relative positioning constants BEFORE_FIRST_LINE, AFTER_LAST_LINE, BEFORE_LINE_BEGINS and AFTER_LINE_ENDS. These are equivalent to PAGE_START, PAGE_END, LINE_START and LINE_END respectively. For consistency with the relative positioning constants used by other components, the latter constants are preferred.
Mixing both absolute and relative positioning constants can lead to unpredictable results. If you use both types, the relative constants will take precedence. For example, if you add components using both the NORTH and PAGE_START constants in a container whose orientation is LEFT_TO_RIGHT, only the PAGE_START component will be laid out.
NOTE: Currently (in the Java 2 platform v1.2), BorderLayout does not support vertical orientations. The isVertical setting on the container's ComponentOrientation is not respected.
The components are laid out according to their preferred sizes and the constraints of the container's size. The NORTH and SOUTH components may be stretched horizontally; the EAST and WEST components may be stretched vertically; the CENTER component may stretch both horizontally and vertically to fill any space left over.
Here is an example of five buttons in an applet laid out using the BorderLayout layout manager:
The code for this applet is as follows:
import java.awt.*;
import java.applet.Applet;

public class buttonDir extends Applet {
    public void init() {
        setLayout(new BorderLayout());
        add(new Button("North"), BorderLayout.NORTH);
        add(new Button("South"), BorderLayout.SOUTH);
        add(new Button("East"), BorderLayout.EAST);
        add(new Button("West"), BorderLayout.WEST);
        add(new Button("Center"), BorderLayout.CENTER);
    }
}
Capabilities and properties of buffers.
A type-safe enumeration of the possible back buffer contents after page-flipping.
This class creates a labeled button. The application can cause some action to happen when the button is pushed. This image depicts three views of a "Quit" button as it appears under the Solaris operating system:
The first view shows the button as it appears normally. The second view shows the button when it has input focus. Its outline is darkened to let the user know that it is an active object. The third view shows the button when the user clicks the mouse over the button, and thus requests that an action be performed.
The gesture of clicking on a button with the mouse is associated with one instance of ActionEvent, which is sent out when the mouse is both pressed and released over the button. If an application is interested in knowing when the button has been pressed but not released, as a separate gesture, it can specialize processMouseEvent, or it can register itself as a listener for mouse events by calling addMouseListener. Both of these methods are defined by Component, the abstract superclass of all components.
When a button is pressed and released, AWT sends an instance of ActionEvent to the button, by calling processEvent on the button. The button's processEvent method receives all events for the button; it passes an action event along by calling its own processActionEvent method. The latter method passes the action event on to any action listeners that have registered an interest in action events generated by this button.
If an application wants to perform some action based on a button being pressed and released, it should implement ActionListener and register the new listener to receive events from this button, by calling the button's addActionListener method. The application can make use of the button's action command as a messaging protocol.
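Putting those pieces together, a minimal sketch of a listener keyed on the action command (the command string and the action taken are illustrative):

import java.awt.Button;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

Button quit = new Button("Quit");
quit.setActionCommand("quit");  // the action command as a messaging protocol
quit.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        if ("quit".equals(e.getActionCommand())) {
            System.exit(0);     // illustrative action only
        }
    }
});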
A Canvas component represents a blank rectangular area of the screen onto which the application can draw or from which the application can trap input events from the user.
An application must subclass the Canvas class in order to get useful functionality such as creating a custom component. The paint method must be overridden in order to perform custom graphics on the canvas.
A CardLayout object is a layout manager for a container. It treats each component in the container as a card. Only one card is visible at a time, and the container acts as a stack of cards. The first component added to a CardLayout object is the visible component when the container is first displayed.
The ordering of cards is determined by the container's own internal ordering of its component objects. CardLayout defines a set of methods that allow an application to flip through these cards sequentially, or to show a specified card. The addLayoutComponent(java.awt.Component, java.lang.Object) method can be used to associate a string identifier with a given card for fast random access.
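A brief sketch of flipping cards by identifier and sequentially (the panel contents and identifiers are illustrative):

Panel cards = new Panel(new CardLayout());
cards.add(new Button("First"), "first");     // associate a string identifier
cards.add(new Button("Second"), "second");   // with each card

CardLayout layout = (CardLayout) cards.getLayout();
layout.show(cards, "second");                // random access by identifier
layout.next(cards);                          // sequential flipping (wraps around)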
A check box is a graphical component that can be in either an "on" (true) or "off" (false) state. Clicking on a check box changes its state from "on" to "off," or from "off" to "on."
The following code example creates a set of check boxes in a grid layout:
setLayout(new GridLayout(3, 1));
add(new Checkbox("one", null, true));
add(new Checkbox("two"));
add(new Checkbox("three"));
This image depicts the check boxes and grid layout created by this code example:
The button labeled one is in the "on" state, and the other two are in the "off" state. In this example, which uses the GridLayout class, the states of the three check boxes are set independently.
Alternatively, several check boxes can be grouped together under the control of a single object, using the CheckboxGroup class. In a check box group, at most one button can be in the "on" state at any given time. Clicking on a check box to turn it on forces any other check box in the same group that is on into the "off" state.
The CheckboxGroup class is used to group together a set of Checkbox buttons.
Exactly one check box button in a CheckboxGroup can be in the "on" state at any given time. Pushing any button sets its state to "on" and forces any other button that is in the "on" state into the "off" state.
The following code example produces a new check box group, with three check boxes:
setLayout(new GridLayout(3, 1));
CheckboxGroup cbg = new CheckboxGroup();
add(new Checkbox("one", cbg, true));
add(new Checkbox("two", cbg, false));
add(new Checkbox("three", cbg, false));
This image depicts the check box group created by this example:
This class represents a check box that can be included in a menu. Selecting the check box in the menu changes its state from "on" to "off" or from "off" to "on."
The following picture depicts a menu which contains an instance of CheckBoxMenuItem:
The item labeled Check shows a check box menu item in its "off" state.
When a check box menu item is selected, AWT sends an item event to the item. Since the event is an instance of ItemEvent, the processEvent method examines the event and passes it along to processItemEvent. The latter method redirects the event to any ItemListener objects that have registered an interest in item events generated by this menu item.
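A short sketch of registering an ItemListener for such an item (the menu variable and the printed message are illustrative):

CheckboxMenuItem check = new CheckboxMenuItem("Check", false);  // starts "off"
check.addItemListener(new ItemListener() {
    public void itemStateChanged(ItemEvent e) {
        boolean on = (e.getStateChange() == ItemEvent.SELECTED);
        System.out.println("Check is now " + (on ? "on" : "off"));
    }
});
menu.add(check);  // menu is an existing java.awt.Menu (assumed)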
The Choice class presents a pop-up menu of choices. The current choice is displayed as the title of the menu.
The following code example produces a pop-up menu:
Choice ColorChooser = new Choice();
ColorChooser.add("Green");
ColorChooser.add("Red");
ColorChooser.add("Blue");
After this choice menu has been added to a panel, it appears as follows in its normal state:
In the picture, "Green" is the current choice. Pushing the mouse button down on the object causes a menu to appear with the current choice highlighted.
Some native platforms do not support arbitrary resizing of Choice components, and the behavior of setSize()/getSize() is bound by such limitations. The size of a native GUI Choice component is often constrained by attributes such as the font size and the length of the items the Choice contains.
The Color class is used to encapsulate colors in the default sRGB color space or colors in arbitrary color spaces identified by a ColorSpace. Every color has an implicit alpha value of 1.0 or an explicit one provided in the constructor. The alpha value defines the transparency of a color and can be represented by a float value in the range 0.0 - 1.0 or 0 - 255. An alpha value of 1.0 or 255 means that the color is completely opaque and an alpha value of 0 or 0.0 means that the color is completely transparent. When constructing a Color with an explicit alpha or getting the color/alpha components of a Color, the color components are never premultiplied by the alpha component.
The default color space for the Java 2D(tm) API is sRGB, a proposed standard RGB color space. For further information on sRGB, see http://www.w3.org/pub/WWW/Graphics/Color/sRGB.html.
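For example (the particular component values are arbitrary):

Color opaqueRed = new Color(1.0f, 0.0f, 0.0f);        // implicit alpha of 1.0
Color halfRed   = new Color(1.0f, 0.0f, 0.0f, 0.5f);  // explicit float alpha
Color faintBlue = new Color(0, 0, 255, 64);           // explicit int alpha (0 - 255)
int a = halfRed.getAlpha();                           // 128; components are not premultiplied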
This exception is thrown if the native CMM returns an error.
This abstract class is used to serve as a color space tag to identify the specific color space of a Color object or, via a ColorModel object, of an Image, a BufferedImage, or a GraphicsDevice. It contains methods that transform colors in a specific color space to/from sRGB and to/from a well-defined CIEXYZ color space.
For purposes of the methods in this class, colors are represented as arrays of color components represented as floats in a normalized range defined by each ColorSpace. For many ColorSpaces (e.g. sRGB), this range is 0.0 to 1.0. However, some ColorSpaces have components whose values have a different range. Methods are provided to inquire per component minimum and maximum normalized values.
Several variables are defined for purposes of referring to color space types (e.g. TYPE_RGB, TYPE_XYZ, etc.) and to refer to specific color spaces (e.g. CS_sRGB and CS_CIEXYZ). sRGB is a proposed standard RGB color space. For more information, see http://www.w3.org/pub/WWW/Graphics/Color/sRGB.html.
The purpose of the methods to transform to/from the well-defined CIEXYZ color space is to support conversions between any two color spaces at a reasonably high degree of accuracy. It is expected that particular implementations of subclasses of ColorSpace (e.g. ICC_ColorSpace) will support high performance conversion based on underlying platform color management systems.
The CS_CIEXYZ space used by the toCIEXYZ/fromCIEXYZ methods can be described as follows:
CIEXYZ
viewing illuminance: 200 lux
viewing white point: CIE D50
media white point: "that of a perfectly reflecting diffuser" -- D50
media black point: 0 lux or 0 Reflectance
flare: 1 percent
surround: 20 percent of the media white point
media description: reflection print (i.e., RLAB, Hunt viewing media)
note: For developers creating an ICC profile for this conversion space, the following is applicable. Use a simple Von Kries white point adaptation folded into the 3X3 matrix parameters and fold the flare and surround effects into the three one-dimensional lookup tables (assuming one uses the minimal model for monitors).
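A minimal sketch of converting between sRGB and the CS_CIEXYZ conversion space (the sample component values are arbitrary):

ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
float[] rgb  = { 0.2f, 0.4f, 0.6f };     // normalized sRGB components
float[] xyz  = sRGB.toCIEXYZ(rgb);       // into the CS_CIEXYZ space described above
float[] back = sRGB.fromCIEXYZ(xyz);     // and back again
float min = sRGB.getMinValue(0);         // per-component normalized range (0.0 for sRGB)
float max = sRGB.getMaxValue(0);         // (1.0 for sRGB)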
The ICC_ColorSpace class is an implementation of the abstract ColorSpace class. This representation of device independent and device dependent color spaces is based on the International Color Consortium Specification ICC.1:2001-12, File Format for Color Profiles (see http://www.color.org).
Typically, a Color or ColorModel would be associated with an ICC Profile which is either an input, display, or output profile (see the ICC specification). There are other types of ICC Profiles, e.g. abstract profiles, device link profiles, and named color profiles, which do not contain information appropriate for representing the color space of a color, image, or device (see ICC_Profile). Attempting to create an ICC_ColorSpace object from an inappropriate ICC Profile is an error.
ICC Profiles represent transformations from the color space of the profile (e.g. a monitor) to a Profile Connection Space (PCS). Profiles of interest for tagging images or colors have a PCS which is one of the device independent spaces (one CIEXYZ space and two CIELab spaces) defined in the ICC Profile Format Specification. Most profiles of interest either have invertible transformations or explicitly specify transformations going both directions. Should an ICC_ColorSpace object be used in a way requiring a conversion from PCS to the profile's native space and there is inadequate data to correctly perform the conversion, the ICC_ColorSpace object will produce output in the specified type of color space (e.g. TYPE_RGB, TYPE_CMYK, etc.), but the specific color values of the output data will be undefined.
The details of this class are not important for simple applets, which draw in a default color space or manipulate and display imported images with a known color space. At most, such applets would need to get one of the default color spaces via ColorSpace.getInstance().
A representation of color profile data for device independent and device dependent color spaces based on the International Color Consortium Specification ICC.1:2001-12, File Format for Color Profiles, (see http://www.color.org).
An ICC_ColorSpace object can be constructed from an appropriate ICC_Profile. Typically, an ICC_ColorSpace would be associated with an ICC Profile which is either an input, display, or output profile (see the ICC specification). There are also device link, abstract, color space conversion, and named color profiles. These are less useful for tagging a color or image, but are useful for other purposes (in particular device link profiles can provide improved performance for converting from one device's color space to another's).
ICC Profiles represent transformations from the color space of the profile (e.g. a monitor) to a Profile Connection Space (PCS). Profiles of interest for tagging images or colors have a PCS which is one of the two specific device independent spaces (one CIEXYZ space and one CIELab space) defined in the ICC Profile Format Specification. Most profiles of interest either have invertible transformations or explicitly specify transformations going both directions.
A subclass of the ICC_Profile class which represents profiles which meet the following criteria: the color space type of the profile is TYPE_GRAY and the profile includes the grayTRCTag and mediaWhitePointTag tags. Examples of this kind of profile are monochrome input profiles, monochrome display profiles, and monochrome output profiles. The getInstance methods in the ICC_Profile class will return an ICC_ProfileGray object when the above conditions are met. The advantage of this class is that it provides a lookup table that Java or native methods may be able to use directly to optimize color conversion in some cases.
To transform from a GRAY device profile color space to the CIEXYZ Profile Connection Space, the device gray component is transformed by a lookup through the tone reproduction curve (TRC). The result is treated as the achromatic component of the PCS.
PCSY = grayTRC[deviceGray]
The inverse transform is done by converting the PCS Y components to device Gray via the inverse of the grayTRC.
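A hedged sketch of obtaining such a profile and its TRC data. It assumes the built-in CS_GRAY profile meets the criteria above; note that getTRC throws ProfileDataException when the TRC is specified as a gamma value rather than a table:

ICC_ProfileGray gray =
    (ICC_ProfileGray) ICC_Profile.getInstance(ColorSpace.CS_GRAY);
float[] whitePoint = gray.getMediaWhitePoint();  // from the mediaWhitePointTag
try {
    short[] trc = gray.getTRC();    // lookup table used for PCSY = grayTRC[deviceGray]
} catch (ProfileDataException e) {
    float gamma = gray.getGamma();  // the TRC is a simple gamma curve instead
}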
The ICC_ProfileRGB class is a subclass of the ICC_Profile class that represents profiles which meet the following criteria:
The profile's color space type is RGB. The profile includes the redColorantTag, greenColorantTag, blueColorantTag, redTRCTag, greenTRCTag, blueTRCTag, and mediaWhitePointTag tags.
The ICC_Profile getInstance method will return an ICC_ProfileRGB object when these conditions are met. Three-component, matrix-based input profiles and RGB display profiles are examples of this type of profile.
This profile class provides color transform matrices and lookup tables that Java or native methods can use directly to optimize color conversion in some cases.
To transform from a device profile color space to the CIEXYZ Profile Connection Space, each device color component is first linearized by a lookup through the corresponding tone reproduction curve (TRC). The resulting linear RGB components are converted to the CIEXYZ PCS using a 3x3 matrix constructed from the RGB colorants.
linearR = redTRC[deviceR]
linearG = greenTRC[deviceG]
linearB = blueTRC[deviceB]
[ PCSX ]   [ redColorantX  greenColorantX  blueColorantX ]   [ linearR ]
[ PCSY ] = [ redColorantY  greenColorantY  blueColorantY ]   [ linearG ]
[ PCSZ ]   [ redColorantZ  greenColorantZ  blueColorantZ ]   [ linearB ]

The inverse transform is performed by converting PCS XYZ components to linear RGB components through the inverse of the above 3x3 matrix, and then converting linear RGB to device RGB through inverses of the TRCs.
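A hedged sketch of reading the matrix and TRC data. It assumes the built-in sRGB profile qualifies as an ICC_ProfileRGB; getTRC likewise throws ProfileDataException if a component's curve is specified as a gamma value rather than a table:

ICC_ProfileRGB rgbProfile =
    (ICC_ProfileRGB) ICC_Profile.getInstance(ColorSpace.CS_sRGB);
float[][] matrix = rgbProfile.getMatrix();   // the 3x3 colorant matrix shown above
short[] redTRC = rgbProfile.getTRC(ICC_ProfileRGB.REDCOMPONENT);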
This exception is thrown when an error occurs in accessing or processing an ICC_Profile object.
A component is an object having a graphical representation that can be displayed on the screen and that can interact with the user. Examples of components are the buttons, checkboxes, and scrollbars of a typical graphical user interface. The Component class is the abstract superclass of the nonmenu-related Abstract Window Toolkit components. Class Component can also be extended directly to create a lightweight component. A lightweight component is a component that is not associated with a native window; in contrast, a heavyweight component is associated with a native window. The isLightweight() method may be used to distinguish between the two kinds of components.
Lightweight and heavyweight components may be mixed in a single component hierarchy. However, for such a mixed hierarchy to operate correctly, the whole hierarchy must be valid. When the hierarchy is invalidated, for example after changing the bounds of components or adding/removing components to/from containers, the whole hierarchy must be validated afterwards by means of the Container.validate() method invoked on the top-most invalid container of the hierarchy.
Serialization

It is important to note that only AWT listeners which conform to the Serializable protocol will be saved when the object is stored. If an AWT object has listeners that aren't marked serializable, they will be dropped at writeObject time. Developers will need, as always, to consider the implications of making an object serializable. One situation to watch out for is this:
import java.awt.*;
import java.awt.event.*;
import java.io.Serializable;

class MyApp implements ActionListener, Serializable
{
    BigObjectThatShouldNotBeSerializedWithAButton bigOne;
    Button aButton = new Button();

    MyApp()
    {
        // Oops, now aButton has a listener with a reference
        // to bigOne!
        aButton.addActionListener(this);
    }

    public void actionPerformed(ActionEvent e)
    {
        System.out.println("Hello There");
    }
}

In this example, serializing aButton by itself will cause MyApp and everything it refers to to be serialized as well. The problem is that the listener is serializable by coincidence, not by design. To separate the decisions about MyApp and the ActionListener being serializable one can use a nested class, as in the following example:
import java.awt.*;
import java.awt.event.*;
import java.io.Serializable;

class MyApp implements java.io.Serializable
{
    BigObjectThatShouldNotBeSerializedWithAButton bigOne;
    Button aButton = new Button();

    static class MyActionListener implements ActionListener
    {
        public void actionPerformed(ActionEvent e)
        {
            System.out.println("Hello There");
        }
    }

    MyApp()
    {
        aButton.addActionListener(new MyActionListener());
    }
}
Note: For more information on the paint mechanisms utilized by AWT and Swing, including information on how to write the most efficient painting code, see Painting in AWT and Swing.
For details on the focus subsystem, see How to Use the Focus Subsystem, a section in The Java Tutorial, and the Focus Specification.
The ComponentOrientation class encapsulates the language-sensitive orientation that is to be used to order the elements of a component or of text. It is used to reflect the differences in this ordering between Western alphabets, Middle Eastern (such as Hebrew), and Far Eastern (such as Japanese).
Fundamentally, this governs items (such as characters) which are laid out in lines, with the lines then laid out in a block. This also applies to items in a widget: for example, in a check box where the box is positioned relative to the text.
There are four different orientations used in modern languages as in the following table.
LT          RT          TL          TR

A B C       C B A       A D G       G D A
D E F       F E D       B E H       H E B
G H I       I H G       C F I       I F C

(In the header, the two-letter abbreviation represents the item direction in the first letter, and the line direction in the second. For example, LT means "items left-to-right, lines top-to-bottom", TL means "items top-to-bottom, lines left-to-right", and so on.)
The orientations are:
LT - Western Europe (optional for Japanese, Chinese, Korean)
RT - Middle East (Arabic, Hebrew)
TR - Japanese, Chinese, Korean
TL - Mongolian
Components whose view and controller code depends on orientation should use the isLeftToRight() and isHorizontal() methods to determine their behavior. They should not include switch-like code that keys off of the constants, such as:
if (orientation == LEFT_TO_RIGHT) {
    ...
} else if (orientation == RIGHT_TO_LEFT) {
    ...
} else {
    // Oops
}

This is unsafe, since more constants may be added in the future and since it is not guaranteed that orientation objects will be unique.
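A small sketch of the recommended, constant-free approach (the locale lookup is illustrative):

ComponentOrientation o =
    ComponentOrientation.getOrientation(Locale.getDefault());
if (o.isHorizontal()) {
    if (o.isLeftToRight()) {
        // items run left-to-right (LT)
    } else {
        // items run right-to-left (RT)
    }
} else {
    // items run top-to-bottom (TL or TR)
}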
The Composite interface, along with CompositeContext, defines the methods to compose a draw primitive with the underlying graphics area. After the Composite is set in the Graphics2D context, it combines a shape, text, or an image being rendered with the colors that have already been rendered according to pre-defined rules. The classes implementing this interface provide the rules and a method to create the context for a particular operation. CompositeContext is an environment used by the compositing operation, which is created by the Graphics2D prior to the start of the operation. CompositeContext contains private information and resources needed for a compositing operation. When the CompositeContext is no longer needed, the Graphics2D object disposes of it in order to reclaim resources allocated for the operation.
Instances of classes implementing Composite must be immutable because the Graphics2D does not clone these objects when they are set as an attribute with the setComposite method or when the Graphics2D object is cloned. This is to avoid undefined rendering behavior of Graphics2D, resulting from the modification of the Composite object after it has been set in the Graphics2D context.
Since this interface must expose the contents of pixels on the target device or image to potentially arbitrary code, the use of custom objects which implement this interface when rendering directly to a screen device is governed by the readDisplayPixels AWTPermission. The permission check will occur when such a custom object is passed to the setComposite method of a Graphics2D retrieved from a Component.
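For example, a minimal sketch of installing a Composite in a Graphics2D context; it uses the standard AlphaComposite rather than a custom implementation, and the rectangle is illustrative:

public void paint(Graphics g) {
    Graphics2D g2d = (Graphics2D) g;
    // Blend subsequent rendering at 50% opacity with the SRC_OVER rule;
    // a custom Composite would be set the same way (subject to the
    // readDisplayPixels permission check described above).
    g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
    g2d.fillRect(10, 10, 100, 100);
}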
The CompositeContext interface defines the encapsulated and optimized environment for a compositing operation. CompositeContext objects maintain state for compositing operations. In a multi-threaded environment, several contexts can exist simultaneously for a single Composite object.
A generic Abstract Window Toolkit(AWT) container object is a component that can contain other AWT components.
Components added to a container are tracked in a list. The order of the list will define the components' front-to-back stacking order within the container. If no index is specified when adding a component to a container, it will be added to the end of the list (and hence to the bottom of the stacking order).
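For example (the components are illustrative):

Panel panel = new Panel();
panel.add(new Button("Back"));      // no index: appended to the end of the list,
                                    // i.e. the bottom of the stacking order
panel.add(new Button("Front"), 0);  // index 0: the front of the stacking order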
Note: For details on the focus subsystem, see How to Use the Focus Subsystem, a section in The Java Tutorial, and the Focus Specification.
A FocusTraversalPolicy that determines traversal order based on the order of child Components in a Container. From a particular focus cycle root, the policy makes a pre-order traversal of the Component hierarchy, and traverses a Container's children according to the ordering of the array returned by Container.getComponents(). Portions of the hierarchy that are not visible and displayable will not be searched.
By default, ContainerOrderFocusTraversalPolicy implicitly transfers focus down-cycle. That is, during normal forward focus traversal, the Component traversed after a focus cycle root will be the focus-cycle-root's default Component to focus. This behavior can be disabled using the setImplicitDownCycleTraversal method.
By default, methods of this class will return a Component only if it is visible, displayable, enabled, and focusable. Subclasses can modify this behavior by overriding the accept method.
This policy takes into account focus traversal policy providers. When searching for first/last/next/previous Component, if a focus traversal policy provider is encountered, its focus traversal policy is used to perform the search operation.
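A brief sketch of installing such a policy on a focus cycle root (frame is assumed to be an existing Container, e.g. a Frame):

ContainerOrderFocusTraversalPolicy policy =
    new ContainerOrderFocusTraversalPolicy();
policy.setImplicitDownCycleTraversal(false);  // disable implicit down-cycle transfer
frame.setFocusCycleRoot(true);
frame.setFocusTraversalPolicy(policy);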
A class to encapsulate the bitmap representation of the mouse cursor.
A class that implements a mechanism to transfer data using cut/copy/paste operations.
FlavorListeners may be registered on an instance of the Clipboard class to be notified about changes to the set of DataFlavors available on this clipboard (see addFlavorListener(java.awt.datatransfer.FlavorListener)).
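A minimal sketch of using the system clipboard with a FlavorListener (the string content is illustrative; the multi-catch requires Java 7 or later):

Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();
clipboard.addFlavorListener(new FlavorListener() {
    public void flavorsChanged(FlavorEvent e) {
        System.out.println("Available DataFlavors changed");
    }
});
clipboard.setContents(new StringSelection("Hello"), null);  // null: no ClipboardOwner

Transferable t = clipboard.getContents(null);
if (t.isDataFlavorSupported(DataFlavor.stringFlavor)) {
    try {
        String text = (String) t.getTransferData(DataFlavor.stringFlavor);
    } catch (UnsupportedFlavorException | IOException e) {
        // the contents changed between the check and the read
    }
}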
Defines the interface for classes that will provide data to a clipboard. An instance of this interface becomes the owner of the contents of a clipboard (clipboard owner) if it is passed as an argument to Clipboard.setContents(java.awt.datatransfer.Transferable, java.awt.datatransfer.ClipboardOwner) method of the clipboard and this method returns successfully. The instance remains the clipboard owner until another application or another object within this application asserts ownership of this clipboard.
A DataFlavor provides meta information about data. DataFlavor is typically used to access data on the clipboard, or during a drag and drop operation.
An instance of DataFlavor encapsulates a content type as defined in RFC 2045 and RFC 2046. A content type is typically referred to as a MIME type.
A content type consists of a media type (referred to as the primary type), a subtype, and optional parameters. See RFC 2045 for details on the syntax of a MIME type.
The JRE data transfer implementation interprets the parameter "class" of a MIME type as a representation class. The representation class reflects the class of the object being transferred. In other words, the representation class is the type of object returned by Transferable.getTransferData(java.awt.datatransfer.DataFlavor). For example, the MIME type of imageFlavor is "image/x-java-image;class=java.awt.Image", the primary type is image, the subtype is x-java-image, and the representation class is java.awt.Image. When getTransferData is invoked with a DataFlavor of imageFlavor, an instance of java.awt.Image is returned. It's important to note that DataFlavor does no error checking against the representation class. It is up to consumers of DataFlavor, such as Transferable, to honor the representation class.
Note: if you do not specify a representation class when creating a DataFlavor, the default representation class is used. See the documentation for DataFlavor's constructors.
Also, DataFlavor instances with the "text" primary MIME type may have a "charset" parameter. Refer to RFC 2046 and selectBestTextFlavor(java.awt.datatransfer.DataFlavor[]) for details on "text" MIME types and the "charset" parameter.
Equality of DataFlavors is determined by the primary type, subtype, and representation class. Refer to equals(DataFlavor) for details. When determining equality, any optional parameters are ignored. For example, the following produces two DataFlavors that are considered identical:
DataFlavor flavor1 = new DataFlavor(Object.class, "X-test/test; class=<java.lang.Object>; foo=bar");
DataFlavor flavor2 = new DataFlavor(Object.class, "X-test/test; class=<java.lang.Object>; x=y");
// The following returns true.
flavor1.equals(flavor2);

As mentioned, flavor1 and flavor2 are considered identical. As such, asking a Transferable for either DataFlavor returns the same results.
For more information on using data transfer with Swing, see How to Use Drag and Drop and Data Transfer, a section in The Java Tutorial.
FlavorEvent is used to notify interested parties that available DataFlavors have changed in the Clipboard (the event source).
Defines an object which listens for FlavorEvents.
A two-way Map between "natives" (Strings), which correspond to platform- specific data formats, and "flavors" (DataFlavors), which correspond to platform-independent MIME types. FlavorMaps need not be symmetric, but typically are.
A FlavorMap which relaxes the traditional 1-to-1 restriction of a Map. A flavor is permitted to map to any number of natives, and likewise a native is permitted to map to any number of flavors. FlavorTables need not be symmetric, but typically are.
A class to encapsulate MimeType parsing-related exceptions.
A Transferable which implements the capability required to transfer a String.
This Transferable properly supports DataFlavor.stringFlavor and all equivalent flavors. Support for DataFlavor.plainTextFlavor and all equivalent flavors is deprecated. No other DataFlavors are supported.
The SystemFlavorMap is a configurable map between "natives" (Strings), which correspond to platform-specific data formats, and "flavors" (DataFlavors), which correspond to platform-independent MIME types. This mapping is used by the data transfer subsystem to transfer data between Java and native applications, and between Java applications in separate VMs.
Defines the interface for classes that can be used to provide data for a transfer operation.
For information on using data transfer with Swing, see How to Use Drag and Drop and Data Transfer, a section in The Java Tutorial.
Signals that the requested data is not supported in this flavor.
A FocusTraversalPolicy that determines traversal order based on the order of child Components in a Container. From a particular focus cycle root, the policy makes a pre-order traversal of the Component hierarchy, and traverses a Container's children according to the ordering of the array returned by Container.getComponents(). Portions of the hierarchy that are not visible and displayable will not be searched.
If client code has explicitly set the focusability of a Component by either overriding Component.isFocusTraversable() or Component.isFocusable(), or by calling Component.setFocusable(), then a DefaultFocusTraversalPolicy behaves exactly like a ContainerOrderFocusTraversalPolicy. If, however, the Component is relying on default focusability, then a DefaultFocusTraversalPolicy will reject all Components with non-focusable peers. This is the default FocusTraversalPolicy for all AWT Containers.
The focusability of a peer is implementation-dependent. Sun recommends that all implementations for a particular native platform construct peers with the same focusability. The recommendations for Windows and Unix are that Canvases, Labels, Panels, Scrollbars, ScrollPanes, Windows, and lightweight Components have non-focusable peers, and all other Components have focusable peers. These recommendations are used in the Sun AWT implementations. Note that the focusability of a Component's peer is different from, and does not impact, the focusability of the Component itself.
Please see How to Use the Focus Subsystem, a section in The Java Tutorial, and the Focus Specification for more information.
The default KeyboardFocusManager for AWT applications. Focus traversal is done in response to a Component's focus traversal keys, and using a Container's FocusTraversalPolicy.
Please see How to Use the Focus Subsystem, a section in The Java Tutorial, and the Focus Specification for more information.
The Desktop class allows a Java application to launch associated applications registered on the native desktop to handle a URI or a file.
Supported operations include:
launching the user-default browser to show a specified URI;
launching the user-default mail client with an optional mailto URI;
launching a registered application to open, edit or print a specified file.
This class provides methods corresponding to these operations. The methods look for the associated application registered on the current platform, and launch it to handle a URI or file. If there is no associated application or the associated application fails to be launched, an exception is thrown.
An application is registered to a URI or file type; for example, the "sxi" file extension is typically registered to StarOffice. The mechanism of registering, accessing, and launching the associated application is platform-dependent.
Each operation is an action type represented by the Desktop.Action class.
Note: when an action is invoked and the associated application is executed, it is executed on the same system as the one on which the Java application was launched.
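As an illustration, a minimal sketch of these operations using the Desktop API; the URI and file name are hypothetical, and each action is checked with isSupported() first, since availability is platform-dependent:

    import java.awt.Desktop;
    import java.io.File;
    import java.net.URI;

    public class DesktopDemo {
        public static void main(String[] args) throws Exception {
            if (!Desktop.isDesktopSupported()) {
                return; // the current platform offers no Desktop support
            }
            Desktop desktop = Desktop.getDesktop();
            // Launch the user-default browser for a URI.
            if (desktop.isSupported(Desktop.Action.BROWSE)) {
                desktop.browse(new URI("https://example.com"));
            }
            // Launch the user-default mail client with a mailto URI.
            if (desktop.isSupported(Desktop.Action.MAIL)) {
                desktop.mail(new URI("mailto:user@example.com"));
            }
            // Open a file with the application registered for its type.
            if (desktop.isSupported(Desktop.Action.OPEN)) {
                desktop.open(new File("report.pdf")); // hypothetical file
            }
        }
    }

If no application is registered for the URI or file, these calls throw an exception, as described above.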
A Dialog is a top-level window with a title and a border that is typically used to take some form of input from the user.
The size of the dialog includes any area designated for the border. The dimensions of the border area can be obtained using the getInsets method; however, since these dimensions are platform-dependent, a valid insets value cannot be obtained until the dialog is made displayable by calling either pack or show. Since the border area is included in the overall size of the dialog, the border effectively obscures a portion of the dialog, constraining the area available for rendering and/or displaying subcomponents to the rectangle which has an upper-left corner location of (insets.left, insets.top), and has a size of width - (insets.left + insets.right) by height - (insets.top + insets.bottom).
The default layout for a dialog is BorderLayout.
A dialog may have its native decorations (i.e., frame and title bar) turned off with setUndecorated. This can only be done while the dialog is not displayable.
A dialog may have another window as its owner when it's constructed. When the owner window of a visible dialog is minimized, the dialog will automatically be hidden from the user. When the owner window is subsequently restored, the dialog is made visible to the user again.
In a multi-screen environment, you can create a Dialog on a different screen device than its owner. See Frame for more information.
A dialog can be either modeless (the default) or modal. A modal dialog is one which blocks input to some other top-level windows in the application, except for any windows created with the dialog as their owner. See AWT Modality specification for details.
Dialogs are capable of generating the following WindowEvents: WindowOpened, WindowClosing, WindowClosed, WindowActivated, WindowDeactivated, WindowGainedFocus, WindowLostFocus.
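A minimal sketch, assuming a parent Frame: the dialog is made displayable with pack() (so getInsets() returns valid values) and is then shown modally, which blocks input to the owner until the dialog is dismissed:

    import java.awt.BorderLayout;
    import java.awt.Dialog;
    import java.awt.Frame;
    import java.awt.Label;

    public class DialogDemo {
        public static void main(String[] args) {
            Frame owner = new Frame("Owner");
            owner.setSize(300, 200);
            owner.setVisible(true);

            // true = modal: blocks input to other top-level windows
            // of the application, except windows owned by the dialog.
            Dialog dialog = new Dialog(owner, "Enter Input", true);
            dialog.add(new Label("Some prompt"), BorderLayout.CENTER);
            dialog.pack(); // now displayable; insets are valid
            System.out.println("insets: " + dialog.getInsets());
            dialog.setVisible(true); // returns when the dialog is hidden
        }
    }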
The Dimension class encapsulates the width and height of a component (in integer precision) in a single object. The class is associated with certain properties of components. Several methods defined by the Component class and the LayoutManager interface return a Dimension object.
Normally the values of width and height are non-negative integers. The constructors that allow you to create a dimension do not prevent you from setting a negative value for these properties. If the value of width or height is negative, the behavior of some methods defined by other objects is undefined.
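For example (a trivial sketch): the constructor accepts negative values without complaint, so callers must validate if downstream code requires non-negative dimensions:

    import java.awt.Dimension;

    public class DimensionDemo {
        public static void main(String[] args) {
            Dimension d = new Dimension(200, 100); // width, height
            System.out.println(d.width + " x " + d.height);

            // Not rejected by the constructor, but methods consuming
            // this Dimension have undefined behavior:
            Dimension suspect = new Dimension(-1, 50);
            System.out.println(suspect);
        }
    }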
The DisplayMode class encapsulates the bit depth, height, width, and refresh rate of a GraphicsDevice. The ability to change a graphics device's display mode is platform- and configuration-dependent and may not always be available (see GraphicsDevice.isDisplayChangeSupported()).
For more information on full-screen exclusive mode API, see the Full-Screen Exclusive Mode API Tutorial.
During DnD operations it is possible that a user may wish to drop the subject of the operation on a region of a scrollable GUI control that is not currently visible to the user.
In such situations it is desirable that the GUI control detect this and institute a scroll operation in order to make obscured region(s) visible to the user. This feature is known as autoscrolling.
If a GUI control is both an active DropTarget and is also scrollable, it can receive notifications of autoscrolling gestures by the user from the DnD system by implementing this interface.
An autoscrolling gesture is initiated by the user by keeping the drag cursor motionless within a border region of the Component, referred to as the "autoscrolling region", for a predefined period of time; this results in repeated scroll requests to the Component until the drag Cursor resumes its motion.
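A minimal sketch of a component implementing this interface; the scrollBy() helper is hypothetical and stands in for whatever viewport or scrollbar adjustment the real control performs:

    import java.awt.Insets;
    import java.awt.Panel;
    import java.awt.Point;
    import java.awt.dnd.Autoscroll;

    public class AutoscrollPanel extends Panel implements Autoscroll {

        private static final int MARGIN = 16; // width of the autoscrolling region

        @Override
        public Insets getAutoscrollInsets() {
            // The border region in which a motionless drag cursor
            // triggers repeated autoscroll() calls.
            return new Insets(MARGIN, MARGIN, MARGIN, MARGIN);
        }

        @Override
        public void autoscroll(Point cursorLocation) {
            // Scroll toward whichever edge the cursor is near.
            if (cursorLocation.y < MARGIN) {
                scrollBy(0, -10);
            } else if (cursorLocation.y > getHeight() - MARGIN) {
                scrollBy(0, 10);
            }
        }

        private void scrollBy(int dx, int dy) {
            // Hypothetical: adjust this control's scroll position here.
        }
    }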
This class contains constant values representing the type of action(s) to be performed by a Drag and Drop operation.
A DragGestureEvent is passed to DragGestureListener's dragGestureRecognized() method when a particular DragGestureRecognizer detects that a platform dependent drag initiating gesture has occurred on the Component that it is tracking.
The action field of any DragGestureEvent instance should take one of the following values:
DnDConstants.ACTION_COPY
DnDConstants.ACTION_MOVE
DnDConstants.ACTION_LINK
Assigning a value other than those listed above will cause unspecified behavior.
The listener interface for receiving drag gesture events. This interface is intended for a drag gesture recognition implementation. See a specification for DragGestureRecognizer for details on how to register the listener interface. Upon recognition of a drag gesture the DragGestureRecognizer calls this interface's dragGestureRecognized() method and passes a DragGestureEvent.
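A minimal sketch of such a listener; it starts the drag with a StringSelection as the Transferable (the class name and payload are illustrative):

    import java.awt.datatransfer.StringSelection;
    import java.awt.dnd.DragGestureEvent;
    import java.awt.dnd.DragGestureListener;
    import java.awt.dnd.DragSource;

    public class TextDragGestureListener implements DragGestureListener {
        @Override
        public void dragGestureRecognized(DragGestureEvent dge) {
            // Start the drag with a Transferable carrying the data.
            dge.startDrag(DragSource.DefaultCopyDrop,
                          new StringSelection("dragged text"));
        }
    }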
The DragGestureRecognizer is an abstract base class for the specification of a platform-dependent listener that can be associated with a particular Component in order to identify platform-dependent drag initiating gestures.
The appropriate DragGestureRecognizer subclass instance is obtained from the DragSource associated with a particular Component, or from the Toolkit object via its createDragGestureRecognizer() method.
Once the DragGestureRecognizer is associated with a particular Component it will register the appropriate listener interfaces on that Component in order to track the input events delivered to the Component.
Once the DragGestureRecognizer identifies a sequence of events on the Component as a drag initiating gesture, it will notify its unicast DragGestureListener by invoking its dragGestureRecognized() method.
When a concrete DragGestureRecognizer instance detects a drag initiating gesture on the Component it is associated with, it fires a DragGestureEvent to the DragGestureListener registered on its unicast event source for DragGestureListener events. This DragGestureListener is responsible for causing the associated DragSource to start the Drag and Drop operation (if appropriate).
The DragSource is the entity responsible for the initiation of the Drag and Drop operation, and may be used in a number of scenarios:
1 default instance per JVM for the lifetime of that JVM.
1 instance per class of potential Drag Initiator object (e.g., TextField). [implementation dependent]
1 per instance of a particular Component, or application-specific object associated with a Component instance in the GUI. [implementation dependent]
Some other arbitrary association. [implementation dependent]
Once the DragSource is obtained, a DragGestureRecognizer should also be obtained to associate the DragSource with a particular Component.
The initial interpretation of the user's gesture, and the subsequent starting of the drag operation are the responsibility of the implementing Component, which is usually implemented by a DragGestureRecognizer.
When a drag gesture occurs, the DragSource's startDrag() method shall be invoked in order to cause processing of the user's navigational gestures and delivery of Drag and Drop protocol notifications. A DragSource shall only permit a single Drag and Drop operation to be current at any one time, and shall reject any further startDrag() requests by throwing an IllegalDnDOperationException until such time as the extant operation is complete.
The startDrag() method invokes the createDragSourceContext() method to instantiate an appropriate DragSourceContext and to associate the DragSourceContextPeer with it.
If the Drag and Drop System is unable to initiate a drag operation for some reason, the startDrag() method throws a java.awt.dnd.InvalidDnDOperationException to signal such a condition. Typically this exception is thrown when the underlying platform system is either not in a state to initiate a drag, or the parameters specified are invalid.
Note that during the drag, the set of operations exposed by the source at the start of the drag operation may not change until the operation is complete. The operation(s) are constant for the duration of the operation with respect to the DragSource.
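A sketch of the typical wiring, reusing the TextDragGestureListener sketched earlier: the shared default DragSource associates a recognizer with the component, and the listener starts the drag once the gesture is recognized:

    import java.awt.Label;
    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DragSource;

    public class DragSourceWiring {
        public static void main(String[] args) {
            Label source = new Label("Drag me");
            DragSource dragSource = DragSource.getDefaultDragSource();
            // The returned recognizer registers input listeners on the
            // component and fires dragGestureRecognized() on a gesture.
            dragSource.createDefaultDragGestureRecognizer(
                    source, DnDConstants.ACTION_COPY,
                    new TextDragGestureListener());
        }
    }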
An abstract adapter class for receiving drag source events. The methods in this class are empty. This class exists only as a convenience for creating listener objects.
Extend this class to create a DragSourceEvent listener and override the methods for the events of interest. (If you implement the DragSourceListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for events you care about.)
Create a listener object using the extended class and then register it with a DragSource. When the drag enters, moves over, or exits a drop site, when the drop action changes, and when the drag ends, the relevant method in the listener object is invoked, and the DragSourceEvent is passed to it.
The drop site is associated with the previous dragEnter() invocation if the latest invocation of dragEnter() on this adapter corresponds to that drop site and is not followed by a dragExit() invocation on this adapter.
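For instance, a sketch of an adapter subclass that reacts only when the drag ends; the class name is illustrative, and registration with the shared DragSource is shown in the comment:

    import java.awt.dnd.DragSourceAdapter;
    import java.awt.dnd.DragSourceDropEvent;

    // Register with, e.g.:
    // DragSource.getDefaultDragSource().addDragSourceListener(new DropEndLogger());
    public class DropEndLogger extends DragSourceAdapter {
        @Override
        public void dragDropEnd(DragSourceDropEvent dsde) {
            // All other DragSourceListener methods keep their empty bodies.
            System.out.println("drop succeeded: " + dsde.getDropSuccess());
        }
    }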
The DragSourceContext class is responsible for managing the initiator side of the Drag and Drop protocol. In particular, it is responsible for managing drag event notifications to the java.awt.dnd.DragSourceListeners and java.awt.dnd.DragSourceMotionListeners, and providing the Transferable representing the source data for the drag operation.
Note that the DragSourceContext itself implements the DragSourceListener and DragSourceMotionListener interfaces. This is to allow the platform peer (the DragSourceContextPeer instance) created by the DragSource to notify the DragSourceContext of state changes in the ongoing operation. This allows the DragSourceContext object to interpose itself between the platform and the listeners provided by the initiator of the drag operation.
By default, DragSourceContext sets the cursor as appropriate for the current state of the drag and drop operation. For example, if the user has chosen the move action, and the pointer is over a target that accepts the move action, the default move cursor is shown. When the pointer is over an area that does not accept the transfer, the default "no drop" cursor is shown.
This default handling mechanism is disabled when a custom cursor is set by the setCursor(java.awt.Cursor) method. When the default handling is disabled, it becomes the responsibility of the developer to keep the cursor up to date, by listening to the DragSource events and calling the setCursor() method. Alternatively, you can provide custom cursor behavior by providing custom implementations of the DragSource and the DragSourceContext classes.
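A sketch of the listener-based approach: setting a custom cursor from dragEnter() disables the default handling, after which the listener is responsible for any further cursor updates:

    import java.awt.Cursor;
    import java.awt.dnd.DragSourceAdapter;
    import java.awt.dnd.DragSourceDragEvent;

    public class CustomCursorListener extends DragSourceAdapter {
        @Override
        public void dragEnter(DragSourceDragEvent dsde) {
            // From here on, default cursor handling is disabled.
            dsde.getDragSourceContext()
                .setCursor(Cursor.getPredefinedCursor(Cursor.HAND_CURSOR));
        }
    }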
The DragSourceDragEvent is delivered from the DragSourceContextPeer, via the DragSourceContext, to the DragSourceListener registered with that DragSourceContext and with its associated DragSource.
The DragSourceDragEvent reports the target drop action and the user drop action that reflect the current state of the drag operation.
Target drop action is one of DnDConstants that represents the drop action selected by the current drop target if this drop action is supported by the drag source, or DnDConstants.ACTION_NONE if this drop action is not supported by the drag source.
User drop action depends on the drop actions supported by the drag source and the drop action selected by the user. The user can select a drop action by pressing modifier keys during the drag operation:
Ctrl + Shift -> ACTION_LINK
Ctrl -> ACTION_COPY
Shift -> ACTION_MOVE
If the user selects a drop action, the user drop action is one of DnDConstants that represents the selected drop action if this drop action is supported by the drag source, or DnDConstants.ACTION_NONE if this drop action is not supported by the drag source.
If the user doesn't select a drop action, the set of DnDConstants that represents the set of drop actions supported by the drag source is searched for DnDConstants.ACTION_MOVE, then for DnDConstants.ACTION_COPY, then for DnDConstants.ACTION_LINK and the user drop action is the first constant found. If no constant is found the user drop action is DnDConstants.ACTION_NONE.
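As a small illustration, a listener can observe these values during the drag; getUserAction() reflects the modifier-key selection described above, while getTargetActions() reflects what the current drop target reports. The class name is illustrative:

    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DragSourceAdapter;
    import java.awt.dnd.DragSourceDragEvent;

    public class ActionInspector extends DragSourceAdapter {
        @Override
        public void dragOver(DragSourceDragEvent dsde) {
            if (dsde.getUserAction() == DnDConstants.ACTION_MOVE) {
                System.out.println("user selected a move");
            }
        }
    }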
The DragSourceDropEvent is delivered from the DragSourceContextPeer, via the DragSourceContext, to the dragDropEnd method of DragSourceListeners registered with that DragSourceContext and with its associated DragSource. It contains sufficient information for the originator of the operation to provide appropriate feedback to the end user when the operation completes.
This class is the base class for DragSourceDragEvent and DragSourceDropEvent.
DragSourceEvents are generated whenever the drag enters, moves over, or exits a drop site, when the drop action changes, and when the drag ends. The location for the generated DragSourceEvent specifies the mouse cursor location in screen coordinates at the moment this event occurred.
In a multi-screen environment without a virtual device, the cursor location is specified in the coordinate system of the initiator GraphicsConfiguration. The initiator GraphicsConfiguration is the GraphicsConfiguration of the Component on which the drag gesture for the current drag operation was recognized. If the cursor location is outside the bounds of the initiator GraphicsConfiguration, the reported coordinates are clipped to fit within the bounds of that GraphicsConfiguration.
In a multi-screen environment with a virtual device, the location is specified in the corresponding virtual coordinate system. If the cursor location is outside the bounds of the virtual device the reported coordinates are clipped to fit within the bounds of the virtual device.
The DragSourceListener defines the event interface for originators of Drag and Drop operations to track the state of the user's gesture, and to provide appropriate "drag over" feedback to the user throughout the Drag and Drop operation.
The drop site is associated with the previous dragEnter() invocation if the latest invocation of dragEnter() on this listener corresponds to that drop site and is not followed by a dragExit() invocation on this listener.
A listener interface for receiving mouse motion events during a drag operation.
The class that is interested in processing mouse motion events during a drag operation either implements this interface or extends the abstract DragSourceAdapter class (overriding only the methods of interest).
Create a listener object using that class and then register it with a DragSource. Whenever the mouse moves during a drag operation initiated with this DragSource, that object's dragMouseMoved method is invoked, and the DragSourceDragEvent is passed to it.
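A minimal sketch; DragSourceMotionListener declares a single method, so a lambda suffices:

    import java.awt.dnd.DragSource;

    public class MotionTracker {
        public static void main(String[] args) {
            // Invoked for every mouse move during drags started
            // with the default DragSource.
            DragSource.getDefaultDragSource().addDragSourceMotionListener(
                    dsde -> System.out.println("drag at " + dsde.getLocation()));
        }
    }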
The DropTarget is associated with a Component when that Component wishes to accept drops during Drag and Drop operations.
Each DropTarget is associated with a FlavorMap. The default FlavorMap hereafter designates the FlavorMap returned by SystemFlavorMap.getDefaultFlavorMap().
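A sketch of associating a Component with a DropTarget; StringDropHandler is the hypothetical listener sketched after the DropTargetAdapter description below:

    import java.awt.Label;
    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DropTarget;

    public class DropTargetWiring {
        public static void main(String[] args) {
            Label target = new Label("Drop here");
            // The constructor registers itself with the component;
            // ops = accepted actions, true = active (accepting drops).
            new DropTarget(target, DnDConstants.ACTION_COPY,
                           new StringDropHandler(), true);
        }
    }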
This protected nested class implements autoscrolling.
An abstract adapter class for receiving drop target events. The methods in this class are empty. This class exists only as a convenience for creating listener objects.
Extend this class to create a DropTargetEvent listener and override the methods for the events of interest. (If you implement the DropTargetListener interface, you have to define all of the methods in it. This abstract class defines a null implementation for every method except drop(DropTargetDropEvent), so you only have to define methods for events you care about.) You must provide an implementation for at least drop(DropTargetDropEvent). This method cannot have a null implementation because its specification requires that you either accept or reject the drop, and, if accepted, indicate whether the drop was successful.
Create a listener object using the extended class and then register it with a DropTarget. When the drag enters, moves over, or exits the operable part of the drop site for that DropTarget, when the drop action changes, and when the drop occurs, the relevant method in the listener object is invoked, and the DropTargetEvent is passed to it.
The operable part of the drop site for the DropTarget is the part of the associated Component's geometry that is not obscured by an overlapping top-level window or by another Component higher in the Z-order that has an associated active DropTarget.
During the drag, the data associated with the current drag operation can be retrieved by calling getTransferable() on DropTargetDragEvent instances passed to the listener's methods.
Note that getTransferable() on the DropTargetDragEvent instance should only be called within the respective listener's method and all the necessary data should be retrieved from the returned Transferable before that method returns.
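A minimal sketch of such an adapter; only drop() is overridden, and the data is read inside the method, before it returns, as required:

    import java.awt.datatransfer.DataFlavor;
    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DropTargetAdapter;
    import java.awt.dnd.DropTargetDropEvent;

    public class StringDropHandler extends DropTargetAdapter {
        @Override
        public void drop(DropTargetDropEvent dtde) {
            try {
                // Accept (or reject) the drop before reading the data.
                dtde.acceptDrop(DnDConstants.ACTION_COPY);
                String data = (String) dtde.getTransferable()
                        .getTransferData(DataFlavor.stringFlavor);
                System.out.println("dropped: " + data);
                dtde.dropComplete(true);  // report success to the source
            } catch (Exception e) {
                dtde.dropComplete(false); // report failure to the source
            }
        }
    }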
A DropTargetContext is created whenever the logical cursor associated with a Drag and Drop operation coincides with the visible geometry of a Component associated with a DropTarget. The DropTargetContext provides the mechanism for a potential receiver of a drop operation both to provide the end user with the appropriate drag under feedback and to effect the subsequent data transfer, if appropriate.
The DropTargetDragEvent is delivered to a DropTargetListener via its dragEnter() and dragOver() methods.
The DropTargetDragEvent reports the source drop actions and the user drop action that reflect the current state of the drag operation.
Source drop actions is a bitwise mask of DnDConstants that represents the set of drop actions supported by the drag source for this drag operation.
User drop action depends on the drop actions supported by the drag source and the drop action selected by the user. The user can select a drop action by pressing modifier keys during the drag operation:
Ctrl + Shift -> ACTION_LINK
Ctrl -> ACTION_COPY
Shift -> ACTION_MOVE
If the user selects a drop action, the user drop action is one of DnDConstants that represents the selected drop action if this drop action is supported by the drag source, or DnDConstants.ACTION_NONE if this drop action is not supported by the drag source.
If the user doesn't select a drop action, the set of DnDConstants that represents the set of drop actions supported by the drag source is searched for DnDConstants.ACTION_MOVE, then for DnDConstants.ACTION_COPY, then for DnDConstants.ACTION_LINK and the user drop action is the first constant found. If no constant is found the user drop action is DnDConstants.ACTION_NONE.
The DropTargetDropEvent is delivered via the DropTargetListener drop() method.
The DropTargetDropEvent reports the source drop actions and the user drop action that reflect the current state of the drag-and-drop operation.
Source drop actions is a bitwise mask of DnDConstants that represents the set of drop actions supported by the drag source for this drag-and-drop operation.
User drop action depends on the drop actions supported by the drag source and the drop action selected by the user. The user can select a drop action by pressing modifier keys during the drag operation:
Ctrl + Shift -> ACTION_LINK
Ctrl -> ACTION_COPY
Shift -> ACTION_MOVE
If the user selects a drop action, the user drop action is one of DnDConstants that represents the selected drop action if this drop action is supported by the drag source, or DnDConstants.ACTION_NONE if this drop action is not supported by the drag source.
If the user doesn't select a drop action, the set of DnDConstants that represents the set of drop actions supported by the drag source is searched for DnDConstants.ACTION_MOVE, then for DnDConstants.ACTION_COPY, then for DnDConstants.ACTION_LINK and the user drop action is the first constant found. If no constant is found the user drop action is DnDConstants.ACTION_NONE.
The DropTargetEvent is the base class for both the DropTargetDragEvent and the DropTargetDropEvent. It encapsulates the current state of the Drag and Drop operations, in particular the current DropTargetContext.
The DropTargetListener interface is the callback interface used by the DropTarget class to provide notification of DnD operations that involve the subject DropTarget. Methods of this interface may be implemented to provide "drag under" visual feedback to the user throughout the Drag and Drop operation.
Create a listener object by implementing the interface and then register it with a DropTarget. When the drag enters, moves over, or exits the operable part of the drop site for that DropTarget, when the drop action changes, and when the drop occurs, the relevant method in the listener object is invoked, and the DropTargetEvent is passed to it.
The operable part of the drop site for the DropTarget is the part of the associated Component's geometry that is not obscured by an overlapping top-level window or by another Component higher in the Z-order that has an associated active DropTarget.
During the drag, the data associated with the current drag operation can be retrieved by calling getTransferable() on DropTargetDragEvent instances passed to the listener's methods.
Note that getTransferable() on the DropTargetDragEvent instance should only be called within the respective listener's method and all the necessary data should be retrieved from the returned Transferable before that method returns.
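For example, a sketch of "drag under" feedback that accepts only drags carrying text; the class name is illustrative, and it extends DropTargetAdapter rather than implementing every method of this interface directly:

    import java.awt.datatransfer.DataFlavor;
    import java.awt.dnd.DnDConstants;
    import java.awt.dnd.DropTargetAdapter;
    import java.awt.dnd.DropTargetDragEvent;
    import java.awt.dnd.DropTargetDropEvent;

    public class FlavorFilteringHandler extends DropTargetAdapter {
        @Override
        public void dragOver(DropTargetDragEvent dtde) {
            // Accept or reject continuously as the cursor moves.
            if (dtde.isDataFlavorSupported(DataFlavor.stringFlavor)) {
                dtde.acceptDrag(DnDConstants.ACTION_COPY);
            } else {
                dtde.rejectDrag();
            }
        }

        @Override
        public void drop(DropTargetDropEvent dtde) {
            dtde.rejectDrop(); // a real handler would transfer the data here
        }
    }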
This exception is thrown by various methods in the java.awt.dnd package. It is usually thrown to indicate that the target in question is unable to undertake the requested operation at the present time, since the underlying DnD system is not in the appropriate state.
This abstract subclass of DragGestureRecognizer defines a DragGestureRecognizer for mouse-based gestures.
Each platform implements its own concrete subclass of this class, available via the Toolkit.createDragGestureRecognizer() method, to encapsulate the recognition of the platform dependent mouse gesture(s) that initiate a Drag and Drop operation.
Mouse drag gesture recognizers should honor the drag gesture motion threshold, available through DragSource.getDragThreshold(). A drag gesture should be recognized only when the distance in either the horizontal or vertical direction between the location of the latest mouse dragged event and the location of the corresponding mouse button pressed event is greater than the drag gesture motion threshold.
Drag gesture recognizers created with DragSource.createDefaultDragGestureRecognizer(java.awt.Component, int, java.awt.dnd.DragGestureListener) follow this convention.
NOTE: The Event class is obsolete and is available only for backwards compatibility. It has been replaced by the AWTEvent class and its subclasses.
Event is a platform-independent class that encapsulates events from the platform's Graphical User Interface in the Java 1.0 event model. In Java 1.1 and later versions, the Event class is maintained only for backwards compatibility. The information in this class description is provided to assist programmers in converting Java 1.0 programs to the new event model.
In the Java 1.0 event model, an event contains an id field that indicates what type of event it is and which other Event variables are relevant for the event.
For keyboard events, key contains a value indicating which key was activated, and modifiers contains the modifiers for that event. For the KEY_PRESS and KEY_RELEASE event ids, the value of key is the Unicode character code for the key. For KEY_ACTION and KEY_ACTION_RELEASE, the value of key is one of the defined action-key identifiers in the Event class (PGUP, PGDN, F1, F2, etc.).
A semantic event which indicates that a component-defined action occurred. This high-level event is generated by a component (such as a Button) when the component-specific action occurs (such as being pressed). The event is passed to every ActionListener object that registered to receive such events using the component's addActionListener method.
Note: To invoke an ActionEvent on a Button using the keyboard, use the Space bar.
The object that implements the ActionListener interface gets this ActionEvent when the event occurs. The listener is therefore spared the details of processing individual mouse movements and mouse clicks, and can instead process a "meaningful" (semantic) event like "button pressed".
Unspecified behavior results if the id parameter of any particular ActionEvent instance is not in the range from ACTION_FIRST to ACTION_LAST.
The listener interface for receiving action events. The class that is interested in processing an action event implements this interface, and the object created with that class is registered with a component, using the component's addActionListener method. When the action event occurs, that object's actionPerformed method is invoked.
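A minimal sketch; ActionListener declares a single method, so a lambda can be registered directly:

    import java.awt.Button;
    import java.awt.event.ActionEvent;

    public class ButtonDemo {
        public static void main(String[] args) {
            Button button = new Button("Press");
            // actionPerformed is invoked when the button is pressed.
            button.addActionListener((ActionEvent e) ->
                    System.out.println("pressed: " + e.getActionCommand()));
        }
    }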
The adjustment event emitted by Adjustable objects like Scrollbar and ScrollPane. When the user changes the value of the scrolling component, it receives an instance of AdjustmentEvent.
Unspecified behavior results if the id parameter of any particular AdjustmentEvent instance is not in the range from ADJUSTMENT_FIRST to ADJUSTMENT_LAST.
The type of any AdjustmentEvent instance takes one of the following values:
UNIT_INCREMENT
UNIT_DECREMENT
BLOCK_INCREMENT
BLOCK_DECREMENT
TRACK
Assigning a value other than those listed above will cause unspecified behavior.
The listener interface for receiving adjustment events.
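A minimal sketch, registering a lambda with a Scrollbar:

    import java.awt.Scrollbar;
    import java.awt.event.AdjustmentEvent;

    public class ScrollbarDemo {
        public static void main(String[] args) {
            Scrollbar bar = new Scrollbar(Scrollbar.HORIZONTAL);
            // adjustmentValueChanged fires as the user scrolls.
            bar.addAdjustmentListener((AdjustmentEvent e) ->
                    System.out.println("value: " + e.getValue()
                            + ", type: " + e.getAdjustmentType()));
        }
    }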
The listener interface for receiving notification of events dispatched to objects that are instances of Component or MenuComponent or their subclasses. Unlike the other EventListeners in this package, AWTEventListeners passively observe events being dispatched in the AWT, system-wide. Most applications should never use this class; applications which might use AWTEventListeners include event recorders for automated testing, and facilities such as the Java Accessibility package.
The class that is interested in monitoring AWT events implements this interface, and the object created with that class is registered with the Toolkit, using the Toolkit's addAWTEventListener method. When an event is dispatched anywhere in the AWT, that object's eventDispatched method is invoked.
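A minimal sketch that passively logs every mouse event dispatched in the AWT; note that under a security manager this registration may require the AWTPermission "listenToAllAWTEvents":

    import java.awt.AWTEvent;
    import java.awt.Toolkit;

    public class GlobalMouseLogger {
        public static void main(String[] args) {
            // The mask selects which event classes are observed.
            Toolkit.getDefaultToolkit().addAWTEventListener(
                    event -> System.out.println(event),
                    AWTEvent.MOUSE_EVENT_MASK);
        }
    }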
A class which extends the EventListenerProxy specifically for adding an AWTEventListener for a specific event mask. Instances of this class can be added as AWTEventListeners to a Toolkit object.
The getAWTEventListeners method of Toolkit can return a mixture of AWTEventListener and AWTEventListenerProxy objects.
An abstract adapter class for receiving component events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Extend this class to create a ComponentEvent listener and override the methods for the events of interest. (If you implement the ComponentListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for events you care about.)
Create a listener object using your class and then register it with a component using the component's addComponentListener method. When the component's size, location, or visibility changes, the relevant method in the listener object is invoked, and the ComponentEvent is passed to it.
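For example, a sketch that logs only resize events:

    import java.awt.Frame;
    import java.awt.event.ComponentAdapter;
    import java.awt.event.ComponentEvent;

    public class ResizeLogger {
        public static void main(String[] args) {
            Frame frame = new Frame("Demo");
            frame.addComponentListener(new ComponentAdapter() {
                @Override
                public void componentResized(ComponentEvent e) {
                    // Only the event of interest is overridden.
                    System.out.println("new size: " + e.getComponent().getSize());
                }
            });
            frame.setSize(300, 200);
            frame.setVisible(true);
        }
    }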
A low-level event which indicates that a component moved, changed size, or changed visibility (also, the root class for the other component-level events).
Component events are provided for notification purposes ONLY; the AWT will automatically handle component moves and resizes internally so that GUI layout works properly regardless of whether a program is receiving these events or not.
In addition to serving as the base class for other component-related events (InputEvent, FocusEvent, WindowEvent, ContainerEvent), this class defines the events that indicate changes in a component's size, position, or visibility.
This low-level event is generated by a component object (such as a List) when the component is moved, resized, rendered invisible, or made visible again. The event is passed to every ComponentListener or ComponentAdapter object which registered to receive such events using the component's addComponentListener method. (ComponentAdapter objects implement the ComponentListener interface.) Each such listener object gets this ComponentEvent when the event occurs.
Unspecified behavior will result if the id parameter of any particular ComponentEvent instance is not in the range from COMPONENT_FIRST to COMPONENT_LAST.
The listener interface for receiving component events. The class that is interested in processing a component event either implements this interface (and all the methods it contains) or extends the abstract ComponentAdapter class (overriding only the methods of interest). The listener object created from that class is then registered with a component using the component's addComponentListener method. When the component's size, location, or visibility changes, the relevant method in the listener object is invoked, and the ComponentEvent is passed to it.
Component events are provided for notification purposes ONLY. The AWT will automatically handle component moves and resizes internally so that GUI layout works properly regardless of whether a program registers a ComponentListener or not.
An abstract adapter class for receiving container events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Extend this class to create a ContainerEvent listener and override the methods for the events of interest. (If you implement the ContainerListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for events you care about.)
Create a listener object using the extended class and then register it with a component using the component's addContainerListener method. When the container's contents change because a component has been added or removed, the relevant method in the listener object is invoked, and the ContainerEvent is passed to it.
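A minimal sketch, assuming a Swing JPanel as the container (the class name is illustrative):

    import java.awt.event.ContainerAdapter;
    import java.awt.event.ContainerEvent;
    import javax.swing.JButton;
    import javax.swing.JPanel;

    public class ContainerWatcher {
        public static void main(String[] args) {
            JPanel panel = new JPanel();
            // Override only componentAdded; componentRemoved keeps its empty body.
            panel.addContainerListener(new ContainerAdapter() {
                @Override
                public void componentAdded(ContainerEvent e) {
                    System.out.println("Added: " + e.getChild());
                }
            });
            panel.add(new JButton("OK")); // triggers componentAdded
        }
    }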
A low-level event which indicates that a container's contents changed because a component was added or removed.
Container events are provided for notification purposes ONLY. The AWT will automatically handle changes to the container's contents internally so that the program works properly regardless of whether the program is receiving these events or not.
This low-level event is generated by a container object (such as a Panel) when a component is added to it or removed from it. The event is passed to every ContainerListener or ContainerAdapter object which registered to receive such events using the component's addContainerListener method. (ContainerAdapter objects implement the ContainerListener interface.) Each such listener object gets this ContainerEvent when the event occurs.
Unspecified behavior will result if the id parameter of any particular ContainerEvent instance is not in the range from CONTAINER_FIRST to CONTAINER_LAST.
The listener interface for receiving container events. The class that is interested in processing a container event either implements this interface (and all the methods it contains) or extends the abstract ContainerAdapter class (overriding only the methods of interest). The listener object created from that class is then registered with a component using the component's addContainerListener method. When the container's contents change because a component has been added or removed, the relevant method in the listener object is invoked, and the ContainerEvent is passed to it.
Container events are provided for notification purposes ONLY. The AWT will automatically handle add and remove operations internally so the program works properly regardless of whether the program registers a ContainerListener or not.
An abstract adapter class for receiving keyboard focus events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Extend this class to create a FocusEvent listener and override the methods for the events of interest. (If you implement the FocusListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for events you care about.)
Create a listener object using the extended class and then register it with a component using the component's addFocusListener method. When the component gains or loses the keyboard focus, the relevant method in the listener object is invoked, and the FocusEvent is passed to it.
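A minimal sketch, assuming a Swing JTextField (the class name is illustrative). It also shows isTemporary(), described under FocusEvent below:

    import java.awt.event.FocusAdapter;
    import java.awt.event.FocusEvent;
    import javax.swing.JTextField;

    public class FocusWatcher {
        public static void main(String[] args) {
            JTextField field = new JTextField(20);
            field.addFocusListener(new FocusAdapter() {
                @Override
                public void focusLost(FocusEvent e) {
                    // isTemporary() distinguishes, e.g., window deactivation
                    // from a permanent focus transfer to another component.
                    System.out.println("Focus lost (temporary=" + e.isTemporary() + ")");
                }
            });
            // ... add the field to a visible frame to receive focus events.
        }
    }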
A low-level event which indicates that a Component has gained or lost the input focus. This low-level event is generated by a Component (such as a TextField). The event is passed to every FocusListener or FocusAdapter object which registered to receive such events using the Component's addFocusListener method. (FocusAdapter objects implement the FocusListener interface.) Each such listener object gets this FocusEvent when the event occurs.
There are two levels of focus events: permanent and temporary. Permanent focus change events occur when focus is directly moved from one Component to another, such as through a call to requestFocus() or as the user uses the TAB key to traverse Components. Temporary focus change events occur when focus is temporarily lost for a Component as the indirect result of another operation, such as Window deactivation or a Scrollbar drag. In this case, the original focus state will automatically be restored once that operation is finished, or, for the case of Window deactivation, when the Window is reactivated. Both permanent and temporary focus events are delivered using the FOCUS_GAINED and FOCUS_LOST event ids; the level may be distinguished in the event using the isTemporary() method.
Unspecified behavior will result if the id parameter of any particular FocusEvent instance is not in the range from FOCUS_FIRST to FOCUS_LAST.
The listener interface for receiving keyboard focus events on a component. The class that is interested in processing a focus event either implements this interface (and all the methods it contains) or extends the abstract FocusAdapter class (overriding only the methods of interest). The listener object created from that class is then registered with a component using the component's addFocusListener method. When the component gains or loses the keyboard focus, the relevant method in the listener object is invoked, and the FocusEvent is passed to it.
An abstract adapter class for receiving ancestor moved and resized events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Extend this class and override the method for the event of interest. (If you implement the HierarchyBoundsListener interface, you have to define both methods in it. This abstract class defines null methods for them both, so you only have to define the method for the event you care about.)
Create a listener object using your class and then register it with a Component using the Component's addHierarchyBoundsListener method. When the hierarchy to which the Component belongs changes by resize or movement of an ancestor, the relevant method in the listener object is invoked, and the HierarchyEvent is passed to it.
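A minimal sketch, assuming a Swing JLabel as the watched Component (the class name is illustrative):

    import java.awt.event.HierarchyBoundsAdapter;
    import java.awt.event.HierarchyEvent;
    import javax.swing.JLabel;

    public class AncestorWatcher {
        public static void main(String[] args) {
            JLabel label = new JLabel("watched");
            // Override only ancestorResized; ancestorMoved keeps its empty body.
            label.addHierarchyBoundsListener(new HierarchyBoundsAdapter() {
                @Override
                public void ancestorResized(HierarchyEvent e) {
                    System.out.println("Ancestor resized: " + e.getChanged());
                }
            });
            // ... place the label in a frame and resize the frame to see events.
        }
    }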
The listener interface for receiving ancestor moved and resized events. The class that is interested in processing these events either implements this interface (and all the methods it contains) or extends the abstract HierarchyBoundsAdapter class (overriding only the method of interest). The listener object created from that class is then registered with a Component using the Component's addHierarchyBoundsListener method. When the hierarchy to which the Component belongs changes by the resizing or movement of an ancestor, the relevant method in the listener object is invoked, and the HierarchyEvent is passed to it.
Hierarchy events are provided for notification purposes ONLY. The AWT will automatically handle changes to the hierarchy internally so that GUI layout works properly regardless of whether a program registers a HierarchyBoundsListener or not.
An event which indicates a change to the Component hierarchy to which a Component belongs.
Hierarchy Change Events (HierarchyListener)
addition of an ancestor
removal of an ancestor
hierarchy made displayable
hierarchy made undisplayable
hierarchy shown on the screen (both visible and displayable)
hierarchy hidden on the screen (either invisible or undisplayable)
Ancestor Reshape Events (HierarchyBoundsListener)
an ancestor was resized
an ancestor was moved
Hierarchy events are provided for notification purposes ONLY. The AWT will automatically handle changes to the hierarchy internally so that GUI layout and displayability work properly regardless of whether a program is receiving these events or not.
This event is generated by a Container object (such as a Panel) when the Container is added, removed, moved, or resized, and passed down the hierarchy. It is also generated by a Component object when that object's addNotify, removeNotify, show, or hide method is called. The ANCESTOR_MOVED and ANCESTOR_RESIZED events are dispatched to every HierarchyBoundsListener or HierarchyBoundsAdapter object which registered to receive such events using the Component's addHierarchyBoundsListener method. (HierarchyBoundsAdapter objects implement the HierarchyBoundsListener interface.) The HIERARCHY_CHANGED events are dispatched to every HierarchyListener object which registered to receive such events using the Component's addHierarchyListener method. Each such listener object gets this HierarchyEvent when the event occurs.
Unspecified behavior will result if the id parameter of any particular HierarchyEvent instance is not in the range from HIERARCHY_FIRST to HIERARCHY_LAST.
The changeFlags parameter of any HierarchyEvent instance takes one of the following values:
HierarchyEvent.PARENT_CHANGED
HierarchyEvent.DISPLAYABILITY_CHANGED
HierarchyEvent.SHOWING_CHANGED
Assigning a value other than those listed above will cause unspecified behavior.
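Because changeFlags is a bit mask, a listener typically tests it with a bitwise AND. A minimal sketch, assuming a Swing JPanel (the class name is illustrative):

    import java.awt.event.HierarchyEvent;
    import javax.swing.JPanel;

    public class ShowingWatcher {
        public static void main(String[] args) {
            JPanel panel = new JPanel();
            panel.addHierarchyListener(e -> {
                // React only when the showing state changed.
                if ((e.getChangeFlags() & HierarchyEvent.SHOWING_CHANGED) != 0) {
                    System.out.println("Now showing: " + e.getComponent().isShowing());
                }
            });
            // ... add the panel to a frame; showing or hiding the frame fires the event.
        }
    }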
The listener interface for receiving hierarchy changed events. The class that is interested in processing a hierarchy changed event should implement this interface. The listener object created from that class is then registered with a Component using the Component's addHierarchyListener method. When the hierarchy to which the Component belongs changes, the hierarchyChanged method in the listener object is invoked, and the HierarchyEvent is passed to it.
Hierarchy events are provided for notification purposes ONLY. The AWT will automatically handle changes to the hierarchy internally so that GUI layout, displayability, and visibility work properly regardless of whether a program registers a HierarchyListener or not.
The root event class for all component-level input events.
Input events are delivered to listeners before they are processed normally by the source where they originated. This allows listeners and component subclasses to "consume" the event so that the source will not process it in its default manner. For example, consuming mousePressed events on a Button component will prevent the Button from being activated.
Input method events contain information about text that is being composed using an input method. Whenever the text changes, the input method sends an event. If the text component that's currently using the input method is an active client, the event is dispatched to that component. Otherwise, it is dispatched to a separate composition window.
The text included with the input method event consists of two parts: committed text and composed text. Either part may be empty. The two parts together replace any uncommitted composed text sent in previous events, or the currently selected committed text. Committed text should be integrated into the text component's persistent data; it will not be sent again. Composed text may be sent repeatedly, with changes to reflect the user's editing operations. Committed text always precedes composed text.
The listener interface for receiving input method events. A text editing component has to install an input method event listener in order to work with input methods.
The text editing component also has to provide an instance of InputMethodRequests.
An event which executes the run() method on a Runnable when dispatched by the AWT event dispatcher thread. This class can be used as a reference implementation of ActiveEvent rather than declaring a new class and defining dispatch().
Instances of this class are placed on the EventQueue by calls to invokeLater and invokeAndWait. Client code can use this fact to write replacement functions for invokeLater and invokeAndWait without writing special-case code in any AWTEventListener objects.
Unspecified behavior will result if the id parameter of any particular InvocationEvent instance is not in the range from INVOCATION_FIRST to INVOCATION_LAST.
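As a sketch of the mechanism (not how application code would normally do this; EventQueue.invokeLater is the usual entry point), a Runnable can be wrapped in an InvocationEvent and posted to the system event queue directly. The class name is illustrative:

    import java.awt.EventQueue;
    import java.awt.Toolkit;
    import java.awt.event.InvocationEvent;

    public class PostInvocation {
        public static void main(String[] args) {
            // Wrap a Runnable in an InvocationEvent and post it; its run()
            // method executes later on the event dispatch thread.
            EventQueue queue = Toolkit.getDefaultToolkit().getSystemEventQueue();
            queue.postEvent(new InvocationEvent(
                    Toolkit.getDefaultToolkit(),
                    () -> System.out.println(
                            "ran on: " + Thread.currentThread().getName())));
        }
    }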
A semantic event which indicates that an item was selected or deselected. This high-level event is generated by an ItemSelectable object (such as a List) when an item is selected or deselected by the user. The event is passed to every ItemListener object which registered to receive such events using the component's addItemListener method.
The object that implements the ItemListener interface gets this ItemEvent when the event occurs. The listener is spared the details of processing individual mouse movements and mouse clicks, and can instead process a "meaningful" (semantic) event like "item selected" or "item deselected".
Unspecified behavior will result if the id parameter of any particular ItemEvent instance is not in the range from ITEM_FIRST to ITEM_LAST.
The stateChange of any ItemEvent instance takes one of the following values:
ItemEvent.SELECTED
ItemEvent.DESELECTED
Assigning a value other than those listed above will cause unspecified behavior.
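A minimal sketch, assuming a Swing JCheckBox as the ItemSelectable (the class name is illustrative):

    import java.awt.event.ItemEvent;
    import javax.swing.JCheckBox;

    public class ItemWatcher {
        public static void main(String[] args) {
            JCheckBox box = new JCheckBox("Enable");
            box.addItemListener(e -> {
                // stateChange is either ItemEvent.SELECTED or ItemEvent.DESELECTED.
                boolean selected = e.getStateChange() == ItemEvent.SELECTED;
                System.out.println("Enable is now " + selected);
            });
            box.setSelected(true); // fires itemStateChanged with SELECTED
        }
    }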
The listener interface for receiving item events. The class that is interested in processing an item event implements this interface. The object created with that class is then registered with a component using the component's addItemListener method. When an item-selection event occurs, the listener object's itemStateChanged method is invoked.
An abstract adapter class for receiving keyboard events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Extend this class to create a KeyEvent listener and override the methods for the events of interest. (If you implement the KeyListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for events you care about.)
Create a listener object using the extended class and then register it with a component using the component's addKeyListener method. When a key is pressed, released, or typed, the relevant method in the listener object is invoked, and the KeyEvent is passed to it.
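A minimal sketch, assuming a Swing JTextField (the class name is illustrative). It also shows the getKeyCode/getKeyChar distinction described under KeyEvent below:

    import java.awt.event.KeyAdapter;
    import java.awt.event.KeyEvent;
    import javax.swing.JTextField;

    public class KeyWatcher {
        public static void main(String[] args) {
            JTextField field = new JTextField(20);
            field.addKeyListener(new KeyAdapter() {
                @Override
                public void keyPressed(KeyEvent e) {
                    // Virtual key codes are meaningful for pressed/released events.
                    if (e.getKeyCode() == KeyEvent.VK_ENTER) {
                        System.out.println("Enter pressed");
                    }
                }
                @Override
                public void keyTyped(KeyEvent e) {
                    // The character is meaningful only for KEY_TYPED events.
                    System.out.println("Typed: " + e.getKeyChar());
                }
            });
            // ... add the field to a visible frame to receive key events.
        }
    }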
An event which indicates that a keystroke occurred in a component.
This low-level event is generated by a component object (such as a text field) when a key is pressed, released, or typed. The event is passed to every KeyListener or KeyAdapter object which registered to receive such events using the component's addKeyListener method. (KeyAdapter objects implement the KeyListener interface.) Each such listener object gets this KeyEvent when the event occurs.
"Key typed" events are higher-level and generally do not depend on the platform or keyboard layout. They are generated when a Unicode character is entered, and are the preferred way to find out about character input. In the simplest case, a key typed event is produced by a single key press (e.g., 'a'). Often, however, characters are produced by series of key presses (e.g., 'shift' 'a'), and the mapping from key pressed events to key typed events may be many-to-one or many-to-many. Key releases are not usually necessary to generate a key typed event, but there are some cases where the key typed event is not generated until a key is released (e.g., entering ASCII sequences via the Alt-Numpad method in Windows). No key typed events are generated for keys that don't generate Unicode characters (e.g., action keys, modifier keys, etc.).
The getKeyChar method always returns a valid Unicode character or CHAR_UNDEFINED. Character input is reported by KEY_TYPED events: KEY_PRESSED and KEY_RELEASED events are not necessarily associated with character input. Therefore, the result of the getKeyChar method is guaranteed to be meaningful only for KEY_TYPED events.
For key pressed and key released events, the getKeyCode method returns the event's keyCode. For key typed events, the getKeyCode method always returns VK_UNDEFINED. The getExtendedKeyCode method may also be used with many international keyboard layouts.
"Key pressed" and "key released" events are lower-level and depend on the platform and keyboard layout. They are generated whenever a key is pressed or released, and are the only way to find out about keys that don't generate character input (e.g., action keys, modifier keys, etc.). The key being pressed or released is indicated by the getKeyCode and getExtendedKeyCode methods, which return a virtual key code.
Virtual key codes are used to report which keyboard key has been pressed, rather than a character generated by the combination of one or more keystrokes (such as "A", which comes from shift and "a").
For example, pressing the Shift key will cause a KEY_PRESSED event with a VK_SHIFT keyCode, while pressing the 'a' key will result in a VK_A keyCode. After the 'a' key is released, a KEY_RELEASED event will be fired with VK_A. Separately, a KEY_TYPED event with a keyChar value of 'A' is generated.
Pressing and releasing a key on the keyboard results in the generation of the following key events (in order):
KEY_PRESSED
KEY_TYPED (generated only if a valid Unicode character could be produced)
KEY_RELEASED
In some cases, however (e.g., when auto-repeat is in effect or an input method is activated), the order may be different (and platform dependent).
Notes:
Key combinations which do not result in Unicode characters, such as action keys like F1 and the HELP key, do not generate KEY_TYPED events.
Not all keyboards or systems are capable of generating all virtual key codes. No attempt is made in Java to generate these keys artificially.
Virtual key codes do not identify a physical key: they depend on the platform and keyboard layout. For example, the key that generates VK_Q when using a U.S. keyboard layout will generate VK_A when using a French keyboard layout. The key that generates VK_Q when using a U.S. keyboard layout also generates a unique code for Russian or Hebrew layouts. There is no VK_ constant for these and many other codes in various layouts. These codes may be obtained by using getExtendedKeyCode and are used wherever a VK_ constant would be used.
Not all characters have a keycode associated with them. For example, there is no keycode for the question mark because there is no keyboard for which it appears on the primary layer.
In order to support the platform-independent handling of action keys, the Java platform uses a few additional virtual key constants for functions that would otherwise have to be recognized by interpreting virtual key codes and modifiers. For example, for Japanese Windows keyboards, VK_ALL_CANDIDATES is returned instead of VK_CONVERT with the ALT modifier.
As specified in the Focus Specification, key events are dispatched to the focus owner by default.
WARNING: Aside from those keys that are defined by the Java language (VK_ENTER, VK_BACK_SPACE, and VK_TAB), do not rely on the values of the VK_ constants. Sun reserves the right to change these values as needed to accommodate a wider range of keyboards in the future.
Unspecified behavior will result if the id parameter of any particular KeyEvent instance is not in the range from KEY_FIRST to KEY_LAST.
The listener interface for receiving keyboard events (keystrokes). The class that is interested in processing a keyboard event either implements this interface (and all the methods it contains) or extends the abstract KeyAdapter class (overriding only the methods of interest).
The listener object created from that class is then registered with a component using the component's addKeyListener method. A keyboard event is generated when a key is pressed, released, or typed. The relevant method in the listener object is then invoked, and the KeyEvent is passed to it.
An abstract adapter class for receiving mouse events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Mouse events let you track when a mouse is pressed, released, clicked, moved, dragged, when it enters a component, when it exits and when a mouse wheel is moved.
Extend this class to create a MouseEvent (including drag and motion events) and/or MouseWheelEvent listener and override the methods for the events of interest. (If you implement the MouseListener or MouseMotionListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for events you care about.)
Create a listener object using the extended class and then register it with a component using the component's addMouseListener, addMouseMotionListener, or addMouseWheelListener methods, as sketched in the example below. The relevant method in the listener object is invoked and the MouseEvent or MouseWheelEvent is passed to it in the following cases:
when a mouse button is pressed, released, or clicked (pressed and released)
when the mouse cursor enters or exits the component
when the mouse wheel is rotated, or the mouse is moved or dragged
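A minimal sketch of one adapter serving all three listener roles, assuming a Swing JPanel (the class name is illustrative):

    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import java.awt.event.MouseWheelEvent;
    import javax.swing.JPanel;

    public class MouseWatcher {
        public static void main(String[] args) {
            JPanel panel = new JPanel();
            // One MouseAdapter instance implements MouseListener,
            // MouseMotionListener, and MouseWheelListener.
            MouseAdapter adapter = new MouseAdapter() {
                @Override
                public void mouseClicked(MouseEvent e) {
                    System.out.println("Clicked at " + e.getPoint());
                }
                @Override
                public void mouseDragged(MouseEvent e) {
                    System.out.println("Dragged to " + e.getPoint());
                }
                @Override
                public void mouseWheelMoved(MouseWheelEvent e) {
                    System.out.println("Wheel: " + e.getWheelRotation());
                }
            };
            panel.addMouseListener(adapter);
            panel.addMouseMotionListener(adapter);
            panel.addMouseWheelListener(adapter);
        }
    }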
An event which indicates that a mouse action occurred in a component. A mouse action is considered to occur in a particular component if and only if the mouse cursor is over the unobscured part of the component's bounds when the action happens. For lightweight components, such as Swing's components, mouse events are only dispatched to the component if the mouse event type has been enabled on the component. A mouse event type is enabled by adding the appropriate mouse-based EventListener to the component (MouseListener or MouseMotionListener), or by invoking Component.enableEvents(long) with the appropriate mask parameter (AWTEvent.MOUSE_EVENT_MASK or AWTEvent.MOUSE_MOTION_EVENT_MASK). If the mouse event type has not been enabled on the component, the corresponding mouse events are dispatched to the first ancestor that has enabled the mouse event type.
For example, if a MouseListener has been added to a component, or enableEvents(AWTEvent.MOUSE_EVENT_MASK) has been invoked, then all the events defined by MouseListener are dispatched to the component. On the other hand, if a MouseMotionListener has not been added and enableEvents has not been invoked with AWTEvent.MOUSE_MOTION_EVENT_MASK, then mouse motion events are not dispatched to the component. Instead the mouse motion events are dispatched to the first ancestor that has enabled mouse motion events.
This low-level event is generated by a component object for:
Mouse Events
a mouse button is pressed
a mouse button is released
a mouse button is clicked (pressed and released)
the mouse cursor enters the unobscured part of component's geometry
the mouse cursor exits the unobscured part of component's geometry
Mouse Motion Events
the mouse is moved
the mouse is dragged
A MouseEvent object is passed to every MouseListener or MouseAdapter object which is registered to receive the "interesting" mouse events using the component's addMouseListener method. (MouseAdapter objects implement the MouseListener interface.) Each such listener object gets a MouseEvent containing the mouse event.
A MouseEvent object is also passed to every MouseMotionListener or MouseMotionAdapter object which is registered to receive mouse motion events using the component's addMouseMotionListener method. (MouseMotionAdapter objects implement the MouseMotionListener interface.) Each such listener object gets a MouseEvent containing the mouse motion event.
When a mouse button is clicked, events are generated and sent to the registered MouseListeners. The state of modifier keys can be retrieved using InputEvent.getModifiers() and InputEvent.getModifiersEx(). The button mask returned by InputEvent.getModifiers() reflects only the button that changed state, not the current state of all buttons. (Note: Due to overlap in the values of ALT_MASK/BUTTON2_MASK and META_MASK/BUTTON3_MASK, this is not always true for mouse events involving modifier keys.) To get the state of all buttons and modifier keys, use InputEvent.getModifiersEx(). The button which has changed state is returned by getButton().
For example, if the first mouse button is pressed, events are sent in the following order:
id              modifiers      button
MOUSE_PRESSED   BUTTON1_MASK   BUTTON1
MOUSE_RELEASED  BUTTON1_MASK   BUTTON1
MOUSE_CLICKED   BUTTON1_MASK   BUTTON1
When multiple mouse buttons are pressed, each press, release, and click results in a separate event.
For example, if the user presses button 1 followed by button 2, and then releases them in the same order, the following sequence of events is generated:
id              modifiers      button
MOUSE_PRESSED   BUTTON1_MASK   BUTTON1
MOUSE_PRESSED   BUTTON2_MASK   BUTTON2
MOUSE_RELEASED  BUTTON1_MASK   BUTTON1
MOUSE_CLICKED   BUTTON1_MASK   BUTTON1
MOUSE_RELEASED  BUTTON2_MASK   BUTTON2
MOUSE_CLICKED   BUTTON2_MASK   BUTTON2
If button 2 is released first, the MOUSE_RELEASED/MOUSE_CLICKED pair for BUTTON2_MASK arrives first, followed by the pair for BUTTON1_MASK.
Some extra mouse buttons are added to extend the standard set of buttons represented by the following constants: BUTTON1, BUTTON2, and BUTTON3. Extra buttons have no assigned BUTTONx constants, and their button masks have no assigned BUTTONx_DOWN_MASK constants. Nevertheless, ordinal numbers starting from 4 may be used as button numbers (button ids), and values obtained by the getMaskForButton(button) method may be used as button masks.
MOUSE_DRAGGED events are delivered to the Component in which the mouse button was pressed until the mouse button is released (regardless of whether the mouse position is within the bounds of the Component). Due to platform-dependent Drag&Drop implementations, MOUSE_DRAGGED events may not be delivered during a native Drag&Drop operation.
In a multi-screen environment mouse drag events are delivered to the Component even if the mouse position is outside the bounds of the GraphicsConfiguration associated with that Component. However, the reported position for mouse drag events in this case may differ from the actual mouse position:
In a multi-screen environment without a virtual device: the reported coordinates for mouse drag events are clipped to fit within the bounds of the GraphicsConfiguration associated with the Component.
In a multi-screen environment with a virtual device: the reported coordinates for mouse drag events are clipped to fit within the bounds of the virtual device associated with the Component.
Unspecified behavior will result if the id parameter of any particular MouseEvent instance is not in the range from MOUSE_FIRST to MOUSE_LAST-1 (MOUSE_WHEEL is not acceptable).
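A minimal sketch of the getButton()/getModifiersEx() distinction described above, assuming a Swing JPanel (the class name is illustrative):

    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import javax.swing.JPanel;

    public class ButtonStateWatcher {
        public static void main(String[] args) {
            JPanel panel = new JPanel();
            panel.addMouseListener(new MouseAdapter() {
                @Override
                public void mousePressed(MouseEvent e) {
                    // getButton() reports only the button that changed state.
                    System.out.println("Changed: button " + e.getButton());
                    // getModifiersEx() reports the full current button/modifier state.
                    boolean button1Down =
                            (e.getModifiersEx() & MouseEvent.BUTTON1_DOWN_MASK) != 0;
                    System.out.println("Button 1 currently down: " + button1Down);
                }
            });
        }
    }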
The listener interface for receiving "interesting" mouse events (press, release, click, enter, and exit) on a component. (To track mouse moves and mouse drags, use the MouseMotionListener.)
The class that is interested in processing a mouse event either implements this interface (and all the methods it contains) or extends the abstract MouseAdapter class (overriding only the methods of interest).
The listener object created from that class is then registered with a component using the component's addMouseListener method. A mouse event is generated when the mouse is pressed, released, or clicked (pressed and released). A mouse event is also generated when the mouse cursor enters or leaves a component. When a mouse event occurs, the relevant method in the listener object is invoked, and the MouseEvent is passed to it.
The listener interface for receiving "interesting" mouse events (press, release, click, enter, and exit) on a component. (To track mouse moves and mouse drags, use the MouseMotionListener.) The class that is interested in processing a mouse event either implements this interface (and all the methods it contains) or extends the abstract MouseAdapter class (overriding only the methods of interest). The listener object created from that class is then registered with a component using the component's addMouseListener method. A mouse event is generated when the mouse is pressed, released clicked (pressed and released). A mouse event is also generated when the mouse cursor enters or leaves a component. When a mouse event occurs, the relevant method in the listener object is invoked, and the MouseEvent is passed to it.
An abstract adapter class for receiving mouse motion events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Mouse motion events occur when a mouse is moved or dragged. (Many such events will be generated in a normal program. To track clicks and other mouse events, use the MouseAdapter.)
Extend this class to create a MouseEvent listener and override the methods for the events of interest. (If you implement the MouseMotionListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for the events you care about.)
Create a listener object using the extended class and then register it with a component using the component's addMouseMotionListener method. When the mouse is moved or dragged, the relevant method in the listener object is invoked and the MouseEvent is passed to it.
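A minimal sketch of that pattern (the class name MotionDemo and the label text are illustrative), overriding only mouseMoved and leaving mouseDragged as the empty default:

import java.awt.Frame;
import java.awt.Label;
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;

public class MotionDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("MouseMotionAdapter demo");
        Label label = new Label("Move the mouse here");
        // Only mouseMoved is overridden; mouseDragged keeps its empty default.
        label.addMouseMotionListener(new MouseMotionAdapter() {
            @Override
            public void mouseMoved(MouseEvent e) {
                label.setText("(" + e.getX() + ", " + e.getY() + ")");
            }
        });
        frame.add(label);
        frame.setSize(300, 100);
        frame.setVisible(true);
    }
}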
The listener interface for receiving mouse motion events on a component. (For clicks and other mouse events, use the MouseListener.)
The class that is interested in processing a mouse motion event either implements this interface (and all the methods it contains) or extends the abstract MouseMotionAdapter class (overriding only the methods of interest).
The listener object created from that class is then registered with a component using the component's addMouseMotionListener method. A mouse motion event is generated when the mouse is moved or dragged. (Many such events will be generated). When a mouse motion event occurs, the relevant method in the listener object is invoked, and the MouseEvent is passed to it.
An event which indicates that the mouse wheel was rotated in a component.
A wheel mouse is a mouse which has a wheel in place of the middle button. This wheel can be rotated towards or away from the user. Mouse wheels are most often used for scrolling, though other uses are possible.
A MouseWheelEvent object is passed to every MouseWheelListener object which registered to receive the "interesting" mouse events using the component's addMouseWheelListener method. Each such listener object receives this MouseWheelEvent when the event occurs.
Due to the mouse wheel's special relationship to scrolling Components, MouseWheelEvents are delivered somewhat differently than other MouseEvents. This is because while other MouseEvents usually affect a change on the Component directly under the mouse cursor (for instance, when clicking a button), MouseWheelEvents often have an effect away from the mouse cursor (moving the wheel while over a Component inside a ScrollPane should scroll one of the Scrollbars on the ScrollPane).
MouseWheelEvents start delivery from the Component underneath the mouse cursor. If MouseWheelEvents are not enabled on the Component, the event is delivered to the first ancestor Container with MouseWheelEvents enabled. This will usually be a ScrollPane with wheel scrolling enabled. The source Component and x,y coordinates will be relative to the event's final destination (the ScrollPane). This allows a complex GUI to be installed without modification into a ScrollPane, and for all MouseWheelEvents to be delivered to the ScrollPane for scrolling.
Some AWT Components are implemented using native widgets which display their own scrollbars and handle their own scrolling. The particular Components for which this is true will vary from platform to platform. When the mouse wheel is moved over one of these Components, the event is delivered straight to the native widget, and not propagated to ancestors.
Platforms offer customization of the amount of scrolling that should take place when the mouse wheel is moved. The two most common settings are to scroll a certain number of "units" (commonly lines of text in a text-based component) or an entire "block" (similar to page-up/page-down). The MouseWheelEvent offers methods for conforming to the underlying platform settings. These platform settings can be changed at any time by the user. MouseWheelEvents reflect the most recent settings.
The MouseWheelEvent class includes methods for getting the number of "clicks" by which the mouse wheel is rotated. The getWheelRotation() method returns the integer number of "clicks" corresponding to the number of notches by which the wheel was rotated. In addition to this method, the MouseWheelEvent class provides the getPreciseWheelRotation() method which returns a double number of "clicks" in case a partial rotation occurred. The getPreciseWheelRotation() method is useful if a mouse supports a high-resolution wheel, such as a freely rotating wheel with no notches. Applications can benefit by using this method to process mouse wheel events more precisely, and thus, making visual perception smoother.
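The following sketch (the class name WheelDemo is hypothetical) shows a listener reading both values; on a notched wheel the two typically agree, while a high-resolution wheel can report fractional precise rotations:

import java.awt.Frame;
import java.awt.event.MouseWheelEvent;
import java.awt.event.MouseWheelListener;

public class WheelDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("MouseWheelEvent demo");
        frame.addMouseWheelListener(new MouseWheelListener() {
            @Override
            public void mouseWheelMoved(MouseWheelEvent e) {
                // Whole "clicks" (negative means rotated up/away from the user) ...
                int clicks = e.getWheelRotation();
                // ... and the high-resolution value for free-spinning wheels.
                double precise = e.getPreciseWheelRotation();
                System.out.println("clicks=" + clicks + " precise=" + precise);
            }
        });
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}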
The listener interface for receiving mouse wheel events on a component. (For clicks and other mouse events, use the MouseListener. For mouse movement and drags, use the MouseMotionListener.)
The class that is interested in processing a mouse wheel event implements this interface (and all the methods it contains).
The listener object created from that class is then registered with a component using the component's addMouseWheelListener method. A mouse wheel event is generated when the mouse wheel is rotated. When a mouse wheel event occurs, that object's mouseWheelMoved method is invoked.
For information on how mouse wheel events are dispatched, see the class description for MouseWheelEvent.
The component-level paint event. This event is a special type which is used to ensure that paint/update method calls are serialized along with the other events delivered from the event queue. This event is not designed to be used with the Event Listener model; programs should continue to override paint/update methods in order to render themselves properly.
Unspecified behavior will result if the id parameter of any particular PaintEvent instance is not in the range from PAINT_FIRST to PAINT_LAST.
A semantic event which indicates that an object's text changed. This high-level event is generated by an object (such as a TextComponent) when its text changes. The event is passed to every TextListener object which registered to receive such events using the component's addTextListener method.
The object that implements the TextListener interface gets this TextEvent when the event occurs. The listener is spared the details of processing individual mouse movements and key strokes. Instead, it can process a "meaningful" (semantic) event like "text changed".
Unspecified behavior will result if the id parameter of any particular TextEvent instance is not in the range from TEXT_FIRST to TEXT_LAST.
The listener interface for receiving text events.
The class that is interested in processing a text event implements this interface. The object created with that class is then registered with a component using the component's addTextListener method. When the component's text changes, the listener object's textValueChanged method is invoked.
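As a brief illustration (the class and field names are hypothetical), a TextField with a listener that reacts to every text change:

import java.awt.Frame;
import java.awt.TextField;
import java.awt.event.TextEvent;
import java.awt.event.TextListener;

public class TextListenerDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("TextListener demo");
        TextField field = new TextField(20);
        // textValueChanged fires on every change to the component's text.
        field.addTextListener(new TextListener() {
            @Override
            public void textValueChanged(TextEvent e) {
                System.out.println("Text is now: " + field.getText());
            }
        });
        frame.add(field);
        frame.pack();
        frame.setVisible(true);
    }
}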
An abstract adapter class for receiving window events. The methods in this class are empty. This class exists as a convenience for creating listener objects.
Extend this class to create a WindowEvent listener and override the methods for the events of interest. (If you implement the WindowListener interface, you have to define all of the methods in it. This abstract class defines null methods for them all, so you only have to define methods for the events you care about.)
Create a listener object using the extended class and then register it with a Window using the window's addWindowListener method. When the window's status changes by virtue of being opened, closed, activated or deactivated, iconified or deiconified, the relevant method in the listener object is invoked, and the WindowEvent is passed to it.
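A minimal sketch of the common close-on-exit idiom (the class name WindowAdapterDemo is hypothetical), overriding only windowClosing:

import java.awt.Frame;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;

public class WindowAdapterDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("WindowAdapter demo");
        // Override only windowClosing; the other methods stay empty.
        frame.addWindowListener(new WindowAdapter() {
            @Override
            public void windowClosing(WindowEvent e) {
                e.getWindow().dispose();
            }
        });
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}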
A low-level event that indicates that a window has changed its status. This low-level event is generated by a Window object when it is opened, closed, activated, deactivated, iconified, or deiconified, or when focus is transferred into or out of the Window.
The event is passed to every WindowListener or WindowAdapter object which registered to receive such events using the window's addWindowListener method. (WindowAdapter objects implement the WindowListener interface.) Each such listener object gets this WindowEvent when the event occurs.
Unspecified behavior will result if the id parameter of any particular WindowEvent instance is not in the range from WINDOW_FIRST to WINDOW_LAST.
The listener interface for receiving WindowEvents, including WINDOW_GAINED_FOCUS and WINDOW_LOST_FOCUS events. The class that is interested in processing a WindowEvent either implements this interface (and all the methods it contains) or extends the abstract WindowAdapter class (overriding only the methods of interest). The listener object created from that class is then registered with a Window using the Window's addWindowFocusListener method. When the Window's status changes by virtue of it being opened, closed, activated, deactivated, iconified, or deiconified, or by focus being transferred into or out of the Window, the relevant method in the listener object is invoked, and the WindowEvent is passed to it.
The listener interface for receiving window events. The class that is interested in processing a window event either implements this interface (and all the methods it contains) or extends the abstract WindowAdapter class (overriding only the methods of interest). The listener object created from that class is then registered with a Window using the window's addWindowListener method. When the window's status changes by virtue of being opened, closed, activated or deactivated, iconified or deiconified, the relevant method in the listener object is invoked, and the WindowEvent is passed to it.
The listener interface for receiving window state events.
The class that is interested in processing a window state event either implements this interface (and all the methods it contains) or extends the abstract WindowAdapter class (overriding only the methods of interest).
The listener object created from that class is then registered with a window using the Window's addWindowStateListener method. When the window's state changes by virtue of being iconified, maximized etc., the windowStateChanged method in the listener object is invoked, and the WindowEvent is passed to it.
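For example, a sketch (the class name is hypothetical) that detects maximization by masking the new extended state:

import java.awt.Frame;
import java.awt.event.WindowEvent;
import java.awt.event.WindowStateListener;

public class WindowStateDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("WindowStateListener demo");
        frame.addWindowStateListener(new WindowStateListener() {
            @Override
            public void windowStateChanged(WindowEvent e) {
                // Inspect the new extended state, e.g. to detect maximization.
                boolean maximized =
                    (e.getNewState() & Frame.MAXIMIZED_BOTH) == Frame.MAXIMIZED_BOTH;
                System.out.println("Maximized: " + maximized);
            }
        });
        frame.setSize(300, 200);
        frame.setVisible(true);
    }
}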
EventQueue is a platform-independent class that queues events, both from the underlying peer classes and from trusted application classes.
It encapsulates asynchronous event dispatch machinery which extracts events from the queue and dispatches them by calling the dispatchEvent(AWTEvent) method on this EventQueue with the event to be dispatched as an argument. The particular behavior of this machinery is implementation-dependent. The only requirements are that events which were actually enqueued to this queue (note that events being posted to the EventQueue can be coalesced) are dispatched:
Sequentially. That is, it is not permitted that several events from this queue are dispatched simultaneously.
In the same order as they are enqueued. That is, if AWTEvent A is enqueued to the EventQueue before AWTEvent B, then event B will not be dispatched before event A.
Some browsers partition applets in different code bases into separate contexts, and establish walls between these contexts. In such a scenario, there will be one EventQueue per context. Other browsers place all applets into the same context, implying that there will be only a single, global EventQueue for all applets. This behavior is implementation-dependent. Consult your browser's documentation for more information.
For information on the threading issues of the event dispatch machinery, see AWT Threading Issues.
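A small sketch of interacting with this machinery from application code (the class name is hypothetical): EventQueue.invokeLater enqueues a Runnable that is dispatched in order with all other events, and EventQueue.isDispatchThread reports whether the current thread is the dispatch thread:

import java.awt.EventQueue;

public class EventQueueDemo {
    public static void main(String[] args) {
        // Enqueue a Runnable; it is dispatched sequentially with all
        // other events, on the event dispatch thread.
        EventQueue.invokeLater(new Runnable() {
            @Override
            public void run() {
                System.out.println("On EDT: " + EventQueue.isDispatchThread());
            }
        });
    }
}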
The FileDialog class displays a dialog window from which the user can select a file.
Since it is a modal dialog, when the application calls its show method to display the dialog, it blocks the rest of the application until the user has chosen a file.
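A minimal usage sketch (the class name is hypothetical; setVisible(true) is the modern equivalent of the show method):

import java.awt.FileDialog;
import java.awt.Frame;

public class FileDialogDemo {
    public static void main(String[] args) {
        Frame frame = new Frame();
        FileDialog dialog = new FileDialog(frame, "Choose a file", FileDialog.LOAD);
        dialog.setVisible(true); // blocks until the user picks a file or cancels
        // getFile() returns null if the dialog was cancelled.
        if (dialog.getFile() != null) {
            System.out.println("Chose: " + dialog.getDirectory() + dialog.getFile());
        }
        frame.dispose();
    }
}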
A flow layout arranges components in a directional flow, much like lines of text in a paragraph. The flow direction is determined by the container's componentOrientation property and may be one of two values:
ComponentOrientation.LEFT_TO_RIGHT
ComponentOrientation.RIGHT_TO_LEFT
Flow layouts are typically used to arrange buttons in a panel. A flow layout arranges buttons horizontally until no more buttons fit on the same line. The line alignment is determined by the align property. The possible values are:
LEFT
RIGHT
CENTER
LEADING
TRAILING
For example, the following picture shows an applet using the flow layout manager (its default layout manager) to position three buttons:
Here is the code for this applet:
import java.awt.*;
import java.applet.Applet;

public class myButtons extends Applet {
    Button button1, button2, button3;

    public void init() {
        button1 = new Button("Ok");
        button2 = new Button("Open");
        button3 = new Button("Close");
        add(button1);
        add(button2);
        add(button3);
    }
}
A flow layout lets each component assume its natural (preferred) size.
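As a hedged sketch of typical usage (the class name and button labels are illustrative), a panel configured with right alignment and explicit gaps:

import java.awt.Button;
import java.awt.FlowLayout;
import java.awt.Frame;
import java.awt.Panel;

public class FlowDemo {
    public static void main(String[] args) {
        Panel panel = new Panel();
        // Right-aligned rows with 10px horizontal and 5px vertical gaps.
        panel.setLayout(new FlowLayout(FlowLayout.RIGHT, 10, 5));
        panel.add(new Button("Ok"));
        panel.add(new Button("Open"));
        panel.add(new Button("Close"));
        Frame frame = new Frame("FlowLayout demo");
        frame.add(panel);
        frame.pack();
        frame.setVisible(true);
    }
}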
A FocusTraversalPolicy defines the order in which Components with a particular focus cycle root are traversed. Instances can apply the policy to arbitrary focus cycle roots, allowing themselves to be shared across Containers. They do not need to be reinitialized when the focus cycle roots of a Component hierarchy change.
The core responsibility of a FocusTraversalPolicy is to provide algorithms determining the next and previous Components to focus when traversing forward or backward in a UI. Each FocusTraversalPolicy must also provide algorithms for determining the first, last, and default Components in a traversal cycle. First and last Components are used when normal forward and backward traversal, respectively, wraps. The default Component is the first to receive focus when traversing down into a new focus traversal cycle. A FocusTraversalPolicy can optionally provide an algorithm for determining a Window's initial Component. The initial Component is the first to receive focus when a Window is first made visible.
FocusTraversalPolicy takes into account focus traversal policy providers. When searching for first/last/next/previous Component, if a focus traversal policy provider is encountered, its focus traversal policy is used to perform the search operation.
Please see How to Use the Focus Subsystem, a section in The Java Tutorial, and the Focus Specification for more information.
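As a brief illustration (the class name is hypothetical), installing a concrete policy on a Window, which is always a focus cycle root; ContainerOrderFocusTraversalPolicy traverses children in the order they were added:

import java.awt.Button;
import java.awt.ContainerOrderFocusTraversalPolicy;
import java.awt.FlowLayout;
import java.awt.Frame;

public class FocusPolicyDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("Focus policy demo");
        frame.setLayout(new FlowLayout());
        frame.add(new Button("First"));
        frame.add(new Button("Second"));
        frame.add(new Button("Third"));
        // A Window is a focus cycle root; this policy traverses its
        // children in the order they were added to the container.
        frame.setFocusTraversalPolicy(new ContainerOrderFocusTraversalPolicy());
        frame.pack();
        frame.setVisible(true);
    }
}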
The Font class represents fonts, which are used to render text in a visible way. A font provides the information needed to map sequences of characters to sequences of glyphs and to render sequences of glyphs on Graphics and Component objects.
Characters and Glyphs
A character is a symbol that represents an item such as a letter, a digit, or punctuation in an abstract way. For example, 'g', LATIN SMALL LETTER G, is a character.
A glyph is a shape used to render a character or a sequence of characters. In simple writing systems, such as Latin, typically one glyph represents one character. In general, however, characters and glyphs do not have one-to-one correspondence. For example, the character 'á' LATIN SMALL LETTER A WITH ACUTE, can be represented by two glyphs: one for 'a' and one for '´'. On the other hand, the two-character string "fi" can be represented by a single glyph, an "fi" ligature. In complex writing systems, such as Arabic or the South and South-East Asian writing systems, the relationship between characters and glyphs can be more complicated and involve context-dependent selection of glyphs as well as glyph reordering.
A font encapsulates the collection of glyphs needed to render a selected set of characters as well as the tables needed to map sequences of characters to corresponding sequences of glyphs.
Physical and Logical Fonts
The Java Platform distinguishes between two kinds of fonts: physical fonts and logical fonts.
Physical fonts are the actual font libraries containing glyph data and tables to map from character sequences to glyph sequences, using a font technology such as TrueType or PostScript Type 1. All implementations of the Java Platform must support TrueType fonts; support for other font technologies is implementation dependent. Physical fonts may use names such as Helvetica, Palatino, HonMincho, or any number of other font names. Typically, each physical font supports only a limited set of writing systems, for example, only Latin characters or only Japanese and Basic Latin. The set of available physical fonts varies between configurations. Applications that require specific fonts can bundle them and instantiate them using the createFont method.
Logical fonts are the five font families defined by the Java platform which must be supported by any Java runtime environment: Serif, SansSerif, Monospaced, Dialog, and DialogInput. These logical fonts are not actual font libraries. Instead, the logical font names are mapped to physical fonts by the Java runtime environment. The mapping is implementation and usually locale dependent, so the look and the metrics provided by them vary. Typically, each logical font name maps to several physical fonts in order to cover a large range of characters.
Peered AWT components, such as Label and TextField, can only use logical fonts.
For a discussion of the relative advantages and disadvantages of using physical or logical fonts, see the Internationalization FAQ document.
Font Faces and Names
A Font can have many faces, such as heavy, medium, oblique, gothic and regular. All of these faces have similar typographic design.
There are three different names that you can get from a Font object. The logical font name is simply the name that was used to construct the font. The font face name, or just font name for short, is the name of a particular font face, like Helvetica Bold. The family name is the name of the font family that determines the typographic design across several faces, like Helvetica.
The Font class represents an instance of a font face from a collection of font faces that are present in the system resources of the host system. As examples, Arial Bold and Courier Bold Italic are font faces. There can be several Font objects associated with a font face, each differing in size, style, transform and font features.
The getAllFonts method of the GraphicsEnvironment class returns an array of all font faces available in the system. These font faces are returned as Font objects with a size of 1, identity transform and default font features. These base fonts can then be used to derive new Font objects with varying sizes, styles, transforms and font features via the deriveFont methods in this class.
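A sketch of that flow (the class name is hypothetical; the choice of face, style, and size is arbitrary):

import java.awt.Font;
import java.awt.GraphicsEnvironment;

public class DeriveFontDemo {
    public static void main(String[] args) {
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        // Base fonts come back at size 1 with the identity transform.
        Font[] faces = ge.getAllFonts();
        if (faces.length > 0) {
            // Derive a usable instance: bold, 14pt.
            Font bold14 = faces[0].deriveFont(Font.BOLD, 14f);
            System.out.println(bold14.getFontName());
        }
    }
}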
Font and TextAttribute
Font supports most TextAttributes. This makes some operations, such as rendering underlined text, convenient since it is not necessary to explicitly construct a TextLayout object. Attributes can be set on a Font by constructing or deriving it using a Map of TextAttribute values.
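For example, a hedged sketch (the class name, family, and size are illustrative) that builds an underlined font from a map of TextAttribute values:

import java.awt.Font;
import java.awt.font.TextAttribute;
import java.util.HashMap;
import java.util.Map;

public class AttributedFontDemo {
    public static void main(String[] args) {
        Map<TextAttribute, Object> attributes = new HashMap<>();
        attributes.put(TextAttribute.FAMILY, "Serif");
        attributes.put(TextAttribute.SIZE, 16f);
        // Rendering text with this font draws it underlined, with no
        // explicit TextLayout needed.
        attributes.put(TextAttribute.UNDERLINE, TextAttribute.UNDERLINE_ON);
        Font underlined = new Font(attributes);
        System.out.println(underlined.getFontName());
    }
}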
The values of some TextAttributes are not serializable, and therefore attempting to serialize an instance of Font that has such values will not serialize them. This means a Font deserialized from such a stream will not compare equal to the original Font that contained the non-serializable attributes. This should very rarely pose a problem since these attributes are typically used only in special circumstances and are unlikely to be serialized.
FOREGROUND and BACKGROUND use Paint values. The subclass Color is serializable, while GradientPaint and TexturePaint are not.
CHAR_REPLACEMENT uses GraphicAttribute values. The subclasses ShapeGraphicAttribute and ImageGraphicAttribute are not serializable.
INPUT_METHOD_HIGHLIGHT uses InputMethodHighlight values, which are not serializable. See InputMethodHighlight.
Clients who create custom subclasses of Paint and GraphicAttribute can make them serializable and avoid this problem. Clients who use input method highlights can convert these to the platform-specific attributes for that highlight on the current platform and set them on the Font as a workaround.
The Map-based constructor and deriveFont APIs ignore the FONT attribute, and it is not retained by the Font; the static getFont(java.util.Map<? extends java.text.AttributedCharacterIterator.Attribute, ?>) method should be used if the FONT attribute might be present. See TextAttribute.FONT for more information.
Several attributes will cause additional rendering overhead and potentially invoke layout. If a Font has such attributes, the hasLayoutAttributes() method will return true.
Note: Font rotations can cause text baselines to be rotated. In order to account for this (rare) possibility, font APIs are specified to return metrics and take parameters 'in baseline-relative coordinates'. This maps the 'x' coordinate to the advance along the baseline, (positive x is forward along the baseline), and the 'y' coordinate to a distance along the perpendicular to the baseline at 'x' (positive y is 90 degrees clockwise from the baseline vector). APIs for which this is especially important are called out as having 'baseline-relative coordinates.'
The FontRenderContext class is a container for the information needed to correctly measure text. The measurement of text can vary because of rules that map outlines to pixels, and rendering hints provided by an application.
One such piece of information is a transform that scales typographical points to pixels. (A point is defined to be exactly 1/72 of an inch, which is slightly different than the traditional mechanical measurement of a point.) A character that is rendered at 12pt on a 600dpi device might have a different size than the same character rendered at 12pt on a 72dpi device because of such factors as rounding to pixel boundaries and hints that the font designer may have specified.
Anti-aliasing and Fractional-metrics specified by an application can also affect the size of a character because of rounding to pixel boundaries.
Typically, instances of FontRenderContext are obtained from a Graphics2D object. A FontRenderContext which is directly constructed will most likely not represent any actual graphics device, and may lead to unexpected or incorrect results.
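A minimal sketch (the class name, font, and sample string are illustrative) that obtains the context from a Graphics2D backed by a BufferedImage and measures a string:

import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.font.FontRenderContext;
import java.awt.geom.Rectangle2D;
import java.awt.image.BufferedImage;

public class MeasureDemo {
    public static void main(String[] args) {
        // Obtain a FontRenderContext from a real Graphics2D rather than
        // constructing one directly.
        BufferedImage image = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = image.createGraphics();
        FontRenderContext frc = g2d.getFontRenderContext();
        Font font = new Font("SansSerif", Font.PLAIN, 12);
        Rectangle2D bounds = font.getStringBounds("Hello", frc);
        System.out.println("Advance width: " + bounds.getWidth());
        g2d.dispose();
    }
}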
The GlyphJustificationInfo class represents information about the justification properties of a glyph. A glyph is the visual representation of one or more characters. Many different glyphs can be used to represent a single character or combination of characters. The four justification properties represented by GlyphJustificationInfo are weight, priority, absorb and limit.
Weight is the overall 'weight' of the glyph in the line. Generally it is proportional to the size of the font. Glyphs with larger weight are allocated a correspondingly larger amount of the change in space.
Priority determines the justification phase in which this glyph is used. All glyphs of the same priority are examined before glyphs of the next priority. If all the change in space can be allocated to these glyphs without exceeding their limits, then glyphs of the next priority are not examined. There are four priorities: kashida, whitespace, interchar, and none. KASHIDA is the first priority examined. NONE is the last priority examined.
Absorb determines whether a glyph absorbs all change in space. Within a given priority, some glyphs may absorb all the change in space. If any of these glyphs are present, no glyphs of later priority are examined.
Limit determines the maximum or minimum amount by which the glyph can change. Left and right sides of the glyph can have different limits.
Each GlyphJustificationInfo represents two sets of metrics, which are growing and shrinking. Growing metrics are used when the glyphs on a line are to be spread apart to fit a larger width. Shrinking metrics are used when the glyphs are to be moved together to fit a smaller width.
The GlyphMetrics class represents information for a single glyph. A glyph is the visual representation of one or more characters. Many different glyphs can be used to represent a single character or combination of characters. GlyphMetrics instances are produced by Font and are applicable to a specific glyph in a particular Font.
Glyphs are either STANDARD, LIGATURE, COMBINING, or COMPONENT.
STANDARD glyphs are commonly used to represent single characters. LIGATURE glyphs are used to represent sequences of characters. COMPONENT glyphs in a GlyphVector do not correspond to a particular character in a text model. Instead, COMPONENT glyphs are added for typographical reasons, such as Arabic justification. COMBINING glyphs embellish STANDARD or LIGATURE glyphs, such as accent marks. Carets do not appear before COMBINING glyphs.
Other metrics available through GlyphMetrics are the components of the advance, the visual bounds, and the left and right side bearings.
Glyphs for a rotated font, or obtained from a GlyphVector which has applied a rotation to the glyph, can have advances that contain both X and Y components. Usually the advance only has one component.
The advance of a glyph is the distance from the glyph's origin to the origin of the next glyph along the baseline, which is either vertical or horizontal. Note that, in a GlyphVector, the distance from a glyph to its following glyph might not be the glyph's advance, because of kerning or other positioning adjustments.
The bounds is the smallest rectangle that completely contains the outline of the glyph. The bounds rectangle is relative to the glyph's origin. The left-side bearing is the distance from the glyph origin to the left of its bounds rectangle. If the left-side bearing is negative, part of the glyph is drawn to the left of its origin. The right-side bearing is the distance from the right side of the bounds rectangle to the next glyph origin (the origin plus the advance). If negative, part of the glyph is drawn to the right of the next glyph's origin. Note that the bounds does not necessarily enclose all the pixels affected when rendering the glyph, because of rasterization and pixel adjustment effects.
Although instances of GlyphMetrics can be directly constructed, they are almost always obtained from a GlyphVector. Once constructed, GlyphMetrics objects are immutable.
Example: Querying a Font for glyph information
Font font = ...;
int glyphIndex = ...;
GlyphVector glyphVector = ...; // e.g. obtained from the font; getGlyphMetrics is an instance method of GlyphVector
GlyphMetrics metrics = glyphVector.getGlyphMetrics(glyphIndex);
boolean isStandard = metrics.isStandard(); // isStandard() returns a boolean
float glyphAdvance = metrics.getAdvance();
A GlyphVector object is a collection of glyphs containing geometric information for the placement of each glyph in a transformed coordinate space which corresponds to the device on which the GlyphVector is ultimately displayed.
The GlyphVector does not attempt any interpretation of the sequence of glyphs it contains. Relationships between adjacent glyphs in sequence are solely used to determine the placement of the glyphs in the visual coordinate space.
Instances of GlyphVector are created by a Font.
In a text processing application that can cache intermediate representations of text, creation and subsequent caching of a GlyphVector for use during rendering is the fastest method to present the visual representation of characters to a user.
A GlyphVector is associated with exactly one Font, and can provide data useful only in relation to this Font. In addition, metrics obtained from a GlyphVector are not generally geometrically scalable since the pixelization and spacing are dependent on grid-fitting algorithms within a Font. To facilitate accurate measurement of a GlyphVector and its component glyphs, you must specify a scaling transform, anti-alias mode, and fractional metrics mode when creating the GlyphVector. These characteristics can be derived from the destination device.
For each glyph in the GlyphVector, you can obtain:
the position of the glyph
the transform associated with the glyph
the metrics of the glyph in the context of the GlyphVector

The metrics of the glyph may be different under different transforms, application-specified rendering hints, and the specific instance of the glyph within the GlyphVector.
Altering the data used to create the GlyphVector does not alter the state of the GlyphVector.
Methods are provided to adjust the positions of the glyphs within the GlyphVector. These methods are most appropriate for applications that are performing justification operations for the presentation of the glyphs.
Methods are provided to transform individual glyphs within the GlyphVector. These methods are primarily useful for special effects.
Methods are provided to return the visual, logical, and pixel bounds of the entire GlyphVector or of individual glyphs within the GlyphVector.
Methods are provided to return a Shape for the GlyphVector, and for individual glyphs within the GlyphVector.
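A brief sketch tying these pieces together (the class name, font, and text are illustrative): a Font creates the GlyphVector, per-glyph data can be queried, and the cached vector can be rendered repeatedly:

import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;
import java.awt.geom.Point2D;
import java.awt.image.BufferedImage;

public class GlyphVectorDemo {
    public static void main(String[] args) {
        BufferedImage image = new BufferedImage(200, 50, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = image.createGraphics();
        FontRenderContext frc = g2d.getFontRenderContext();
        Font font = new Font("Serif", Font.PLAIN, 24);
        // The Font creates the GlyphVector for this rendering context.
        GlyphVector gv = font.createGlyphVector(frc, "Hello");
        Point2D firstGlyph = gv.getGlyphPosition(0);
        System.out.println("First glyph at: " + firstGlyph);
        // The cached GlyphVector can be rendered repeatedly.
        g2d.drawGlyphVector(gv, 10f, 30f);
        g2d.dispose();
    }
}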
This class is used with the CHAR_REPLACEMENT attribute.
The GraphicAttribute class represents a graphic embedded in text. Clients subclass this class to implement their own char replacement graphics. Clients wishing to embed shapes and images in text need not subclass this class. Instead, clients can use the ShapeGraphicAttribute and ImageGraphicAttribute classes.
Subclasses must ensure that their objects are immutable once they are constructed. Mutating a GraphicAttribute that is used in a TextLayout results in undefined behavior from the TextLayout.
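As a hedged sketch of the subclassing contract described above, the following immutable GraphicAttribute draws a filled square in place of a character; the square's size is an illustrative assumption.
import java.awt.Graphics2D;
import java.awt.font.GraphicAttribute;

final class SquareGraphicAttribute extends GraphicAttribute {
    private final float size; // immutable, per the contract above

    SquareGraphicAttribute(float size) {
        super(GraphicAttribute.ROMAN_BASELINE); // align to the roman baseline
        this.size = size;
    }

    public float getAscent()  { return size; } // extends size above the baseline
    public float getDescent() { return 0f; }   // nothing below the baseline
    public float getAdvance() { return size; } // advance equals the width

    public void draw(Graphics2D graphics, float x, float y) {
        graphics.fillRect((int) x, (int) (y - size), (int) size, (int) size);
    }
}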
The ImageGraphicAttribute class is an implementation of GraphicAttribute which draws images in a TextLayout.
LayoutPath provides a mapping between locations relative to the baseline and points in user space. Locations consist of an advance along the baseline, and an offset perpendicular to the baseline at the advance. Positive values along the perpendicular are in the direction that is 90 degrees clockwise from the baseline vector. Locations are represented as a Point2D, where x is the advance and y is the offset.
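A small sketch of the mapping, assuming a TextLayout named layout; getLayoutPath() may return null for an ordinary horizontal layout, in which case the default mapping applies.
LayoutPath path = layout.getLayoutPath();
if (path != null) {
    // advance 25 along the baseline, offset 4 above it (negative y)
    Point2D.Double location = new Point2D.Double(25.0, -4.0);
    Point2D.Double point = new Point2D.Double();
    path.pathToPoint(location, false, point); // fills in the user-space point
}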
The LineBreakMeasurer class allows styled text to be broken into lines (or segments) that fit within a particular visual advance. This is useful for clients who wish to display a paragraph of text that fits within a specific width, called the wrapping width.
LineBreakMeasurer is constructed with an iterator over styled text. The iterator's range should be a single paragraph in the text. LineBreakMeasurer maintains a position in the text for the start of the next text segment. Initially, this position is the start of text. Paragraphs are assigned an overall direction (either left-to-right or right-to-left) according to the bidirectional formatting rules. All segments obtained from a paragraph have the same direction as the paragraph.
Segments of text are obtained by calling the method nextLayout, which returns a TextLayout representing the text that fits within the wrapping width. The nextLayout method moves the current position to the end of the layout returned from nextLayout.
LineBreakMeasurer implements the most commonly used line-breaking policy: Every word that fits within the wrapping width is placed on the line. If the first word does not fit, then all of the characters that fit within the wrapping width are placed on the line. At least one character is placed on each line.
The TextLayout instances returned by LineBreakMeasurer treat tabs like 0-width spaces. Clients who wish to obtain tab-delimited segments for positioning should use the overload of nextLayout which takes a limiting offset in the text. The limiting offset should be the first character after the tab. The TextLayout objects returned from this method end at the limit provided (or before, if the text between the current position and the limit won't fit entirely within the wrapping width).
Clients who are laying out tab-delimited text need a slightly different line-breaking policy after the first segment has been placed on a line. Instead of fitting partial words in the remaining space, they should place words which don't fit in the remaining space entirely on the next line. This change of policy can be requested in the overload of nextLayout which takes a boolean parameter. If this parameter is true, nextLayout returns null if the first word won't fit in the given space. See the tab sample below.
In general, if the text used to construct the LineBreakMeasurer changes, a new LineBreakMeasurer must be constructed to reflect the change. (The old LineBreakMeasurer continues to function properly, but it won't be aware of the text change.) Nevertheless, if the text change is the insertion or deletion of a single character, an existing LineBreakMeasurer can be 'updated' by calling insertChar or deleteChar. Updating an existing LineBreakMeasurer is much faster than creating a new one. Clients who modify text based on user typing should take advantage of these methods.
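A hedged sketch of such an update, assuming a hypothetical client text model named storage that can produce a fresh AttributedCharacterIterator after the edit:
// one character was inserted into the client's storage at insertionOffset
AttributedCharacterIterator updatedParagraph = storage.getIterator();
measurer.insertChar(updatedParagraph, insertionOffset);
// a single-character deletion is symmetric:
// measurer.deleteChar(updatedParagraph, deletionOffset);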
Examples: Rendering a paragraph in a component
public void paint(Graphics graphics) {
    Point2D.Float pen = new Point2D.Float(10, 20);
    Graphics2D g2d = (Graphics2D) graphics;
    FontRenderContext frc = g2d.getFontRenderContext();
    // let styledText be an AttributedCharacterIterator containing at least
    // one character
    LineBreakMeasurer measurer = new LineBreakMeasurer(styledText, frc);
    float wrappingWidth = getSize().width - 15;
    while (measurer.getPosition() < styledText.getEndIndex()) {
        TextLayout layout = measurer.nextLayout(wrappingWidth);
        pen.y += layout.getAscent();
        float dx = layout.isLeftToRight() ?
            0 : (wrappingWidth - layout.getAdvance());
        layout.draw(g2d, pen.x + dx, pen.y);
        pen.y += layout.getDescent() + layout.getLeading();
    }
}
Rendering text with tabs. For simplicity, the overall text direction is assumed to be left-to-right
public void paint(Graphics graphics) {
float leftMargin = 10, rightMargin = 310;
float[] tabStops = { 100, 250 };
// assume styledText is an AttributedCharacterIterator, and the number
// of tabs in styledText is tabCount
int[] tabLocations = new int[tabCount+1];
int i = 0;
for (char c = styledText.first(); c != CharacterIterator.DONE; c = styledText.next()) {
if (c == '\t') {
tabLocations[i++] = styledText.getIndex();
}
}
tabLocations[tabCount] = styledText.getEndIndex() - 1;
// Now tabLocations has an entry for every tab's offset in
// the text. For convenience, the last entry in tabLocations
// is the offset of the last character in the text.
// LineBreakMeasurer requires a FontRenderContext
FontRenderContext frc = ((Graphics2D) graphics).getFontRenderContext();
LineBreakMeasurer measurer = new LineBreakMeasurer(styledText, frc);
int currentTab = 0;
float verticalPos = 20;
while (measurer.getPosition() < styledText.getEndIndex()) {
// Lay out and draw each line. All segments on a line
// must be computed before any drawing can occur, since
// we must know the largest ascent on the line.
// TextLayouts are computed and stored in a Vector;
// their horizontal positions are stored in a parallel
// Vector.
// lineContainsText is true after first segment is drawn
boolean lineContainsText = false;
boolean lineComplete = false;
float maxAscent = 0, maxDescent = 0;
float horizontalPos = leftMargin;
Vector layouts = new Vector(1);
Vector penPositions = new Vector(1);
while (!lineComplete) {
float wrappingWidth = rightMargin - horizontalPos;
TextLayout layout =
measurer.nextLayout(wrappingWidth,
tabLocations[currentTab]+1,
lineContainsText);
// layout can be null if lineContainsText is true
if (layout != null) {
layouts.addElement(layout);
penPositions.addElement(new Float(horizontalPos));
horizontalPos += layout.getAdvance();
maxAscent = Math.max(maxAscent, layout.getAscent());
maxDescent = Math.max(maxDescent,
    layout.getDescent() + layout.getLeading());
} else {
lineComplete = true;
}
lineContainsText = true;
if (measurer.getPosition() == tabLocations[currentTab]+1) {
currentTab++;
}
if (measurer.getPosition() == styledText.getEndIndex())
lineComplete = true;
else if (horizontalPos >= tabStops[tabStops.length-1])
lineComplete = true;
if (!lineComplete) {
// move to next tab stop
int j;
for (j=0; horizontalPos >= tabStops[j]; j++) {}
horizontalPos = tabStops[j];
}
}
verticalPos += maxAscent;
Enumeration layoutEnum = layouts.elements();
Enumeration positionEnum = penPositions.elements();
// now iterate through layouts and draw them
while (layoutEnum.hasMoreElements()) {
TextLayout nextLayout = (TextLayout) layoutEnum.nextElement();
Float nextPosition = (Float) positionEnum.nextElement();
nextLayout.draw(graphics, nextPosition.floatValue(), verticalPos);
}
verticalPos += maxDescent;
}
}
The LineMetrics class allows access to the metrics needed to lay out characters along a line and to lay out a set of lines. A LineMetrics object encapsulates the measurement information associated with a run of text.
Fonts can have different metrics for different ranges of characters. The getLineMetrics methods of Font take some text as an argument and return a LineMetrics object describing the metrics of the initial number of characters in that text, as returned by getNumChars().
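A minimal sketch of obtaining LineMetrics; the font and sample string are illustrative assumptions.
Font font = new Font("SansSerif", Font.PLAIN, 12);
FontRenderContext frc = new FontRenderContext(null, true, true);
LineMetrics lm = font.getLineMetrics("Sample text", frc);
// getNumChars() reports how many characters these metrics cover
float lineHeight = lm.getAscent() + lm.getDescent() + lm.getLeading();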
The MultipleMaster interface represents Type 1 Multiple Master fonts. A particular Font object can implement this interface.
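A hedged sketch of probing a Font for this interface; the chosen axis value is illustrative, and a real caller would supply one value per design axis.
if (font instanceof MultipleMaster) {
    MultipleMaster mm = (MultipleMaster) font;
    float[] ranges = mm.getDesignAxisRanges(); // min/max pairs, one per axis
    // derive a new instance, assuming a single design axis for simplicity
    Font derived = mm.deriveMMFont(new float[] { ranges[0] });
}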
The NumericShaper class is used to convert Latin-1 (European) digits to other Unicode decimal digits. Users of this class will primarily be people who wish to present data using national digit shapes, but find it more convenient to represent the data internally using Latin-1 (European) digits. This does not interpret the deprecated numeric shape selector character (U+206E).
Instances of NumericShaper are typically applied as attributes to text with the NUMERIC_SHAPING attribute of the TextAttribute class. For example, this code snippet causes a TextLayout to shape European digits to Arabic in an Arabic context:
Map map = new HashMap();
map.put(TextAttribute.NUMERIC_SHAPING,
    NumericShaper.getContextualShaper(NumericShaper.ARABIC));
FontRenderContext frc = ...;
TextLayout layout = new TextLayout(text, map, frc);
layout.draw(g2d, x, y);
It is also possible to perform numeric shaping explicitly using instances of NumericShaper, as this code snippet demonstrates:
char[] text = ...;
// shape all EUROPEAN digits (except zero) to ARABIC digits
NumericShaper shaper = NumericShaper.getShaper(NumericShaper.ARABIC);
shaper.shape(text, start, count);
// shape European digits to ARABIC digits if preceding text is Arabic, or
// shape European digits to TAMIL digits if preceding text is Tamil, or
// leave European digits alone if there is no preceding text, or
// preceding text is neither Arabic nor Tamil
NumericShaper shaper =
    NumericShaper.getContextualShaper(NumericShaper.ARABIC |
                                      NumericShaper.TAMIL,
                                      NumericShaper.EUROPEAN);
shaper.shape(text, start, count);
Bit mask- and enum-based Unicode ranges
This class supports two different programming interfaces to represent Unicode ranges for script-specific digits: bit mask-based ones, such as NumericShaper.ARABIC, and enum-based ones, such as NumericShaper.Range.ARABIC. Multiple ranges can be specified by ORing bit mask-based constants, such as:
NumericShaper.ARABIC | NumericShaper.TAMIL
or creating a Set with the NumericShaper.Range constants, such as:
EnumSet.of(NumericShaper.Range.ARABIC, NumericShaper.Range.TAMIL)
The enum-based ranges are a superset of the bit mask-based ones.
If the two interfaces are mixed (including serialization), Unicode range values are mapped to their counterparts where such mapping is possible, such as NumericShaper.Range.ARABIC from/to NumericShaper.ARABIC. If any unmappable range values are specified, such as NumericShaper.Range.BALINESE, those ranges are ignored.
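For the enum-based interface, a contextual shaper can be requested with a Set of Range constants; this sketch assumes text, start, and count as in the snippets above.
NumericShaper shaper = NumericShaper.getContextualShaper(
        EnumSet.of(NumericShaper.Range.ARABIC, NumericShaper.Range.TAMIL),
        NumericShaper.Range.EUROPEAN);
shaper.shape(text, start, count);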
Decimal Digits Precedence
A Unicode range may have more than one set of decimal digits. If multiple decimal digit sets are specified for the same Unicode range, one of the sets takes precedence as follows.
Unicode Range | NumericShaper Constants | Precedence
Arabic | NumericShaper.ARABIC, NumericShaper.EASTERN_ARABIC | NumericShaper.EASTERN_ARABIC
Arabic | NumericShaper.Range.ARABIC, NumericShaper.Range.EASTERN_ARABIC | NumericShaper.Range.EASTERN_ARABIC
Tai Tham | NumericShaper.Range.TAI_THAM_HORA, NumericShaper.Range.TAI_THAM_THAM | NumericShaper.Range.TAI_THAM_THAM
The OpenType interface represents OpenType and TrueType fonts. This interface makes it possible to obtain sfnt tables from the font. A particular Font object can implement this interface.
For more information on TrueType and OpenType fonts, see the OpenType specification (http://www.microsoft.com/typography/otspec/).
The ShapeGraphicAttribute class is an implementation of GraphicAttribute that draws shapes in a TextLayout.
The TextAttribute class defines attribute keys and attribute values used for text rendering.
TextAttribute instances are used as attribute keys to identify attributes in Font, TextLayout, AttributedCharacterIterator, and other classes handling text attributes. Other constants defined in this class can be used as attribute values.
For each text attribute, the documentation provides:
the type of its value; the relevant predefined constants, if any; the default effect if the attribute is absent; the valid values, if there are limitations; and a description of the effect.
Values
The values of attributes must always be immutable. Where value limitations are given, any value outside of that set is reserved for future use; the value will be treated as the default. The value null is treated the same as the default value and results in the default behavior. If the value is not of the proper type, the attribute will be ignored. The identity of the value does not matter, only the actual value. For example, TextAttribute.WEIGHT_BOLD and new Float(2.0) indicate the same WEIGHT.
Attribute values of type Number (used for WEIGHT, WIDTH, POSTURE, SIZE, JUSTIFICATION, and TRACKING) can vary along their natural range and are not restricted to the predefined constants. Number.floatValue() is used to get the actual value from the Number.
The values for WEIGHT, WIDTH, and POSTURE are interpolated by the system, which can select the 'nearest available' font or use other techniques to approximate the user's request.
Summary of attributes
Key | Value Type | Principal Constants | Default Value
FAMILY | String | See Font DIALOG, DIALOG_INPUT, SERIF, SANS_SERIF, and MONOSPACED. | "Default" (use platform default)
WEIGHT | Number | WEIGHT_REGULAR, WEIGHT_BOLD | WEIGHT_REGULAR
WIDTH | Number | WIDTH_CONDENSED, WIDTH_REGULAR, WIDTH_EXTENDED | WIDTH_REGULAR
POSTURE | Number | POSTURE_REGULAR, POSTURE_OBLIQUE | POSTURE_REGULAR
SIZE | Number | none | 12.0
TRANSFORM | TransformAttribute | See TransformAttribute IDENTITY | TransformAttribute.IDENTITY
SUPERSCRIPT | Integer | SUPERSCRIPT_SUPER, SUPERSCRIPT_SUB | 0 (use the standard glyphs and metrics)
FONT | Font | none | null (do not override font resolution)
CHAR_REPLACEMENT | GraphicAttribute | none | null (draw text using font glyphs)
FOREGROUND | Paint | none | null (use current graphics paint)
BACKGROUND | Paint | none | null (do not render background)
UNDERLINE | Integer | UNDERLINE_ON | -1 (do not render underline)
STRIKETHROUGH | Boolean | STRIKETHROUGH_ON | false (do not render strikethrough)
RUN_DIRECTION | Boolean | RUN_DIRECTION_LTR, RUN_DIRECTION_RTL | null (use Bidi standard default)
BIDI_EMBEDDING | Integer | none | 0 (use base line direction)
JUSTIFICATION | Number | JUSTIFICATION_FULL | JUSTIFICATION_FULL
INPUT_METHOD_HIGHLIGHT | InputMethodHighlight, Annotation | (see class) | null (do not apply input highlighting)
INPUT_METHOD_UNDERLINE | Integer | UNDERLINE_LOW_ONE_PIXEL, UNDERLINE_LOW_TWO_PIXEL | -1 (do not render underline)
SWAP_COLORS | Boolean | SWAP_COLORS_ON | false (do not swap colors)
NUMERIC_SHAPING | NumericShaper | none | null (do not shape digits)
KERNING | Integer | KERNING_ON | 0 (do not request kerning)
LIGATURES | Integer | LIGATURES_ON | 0 (do not form optional ligatures)
TRACKING | Number | TRACKING_LOOSE, TRACKING_TIGHT | 0 (do not add tracking)
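A minimal sketch of applying these keys to styled text; the string and character ranges are illustrative assumptions.
AttributedString as = new AttributedString("A bold word");
as.addAttribute(TextAttribute.WEIGHT, TextAttribute.WEIGHT_BOLD, 2, 6);
as.addAttribute(TextAttribute.UNDERLINE, TextAttribute.UNDERLINE_ON, 7, 11);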
The TextHitInfo class represents a character position in a text model, and a bias, or "side," of the character. Biases are either leading (the left edge, for a left-to-right character) or trailing (the right edge, for a left-to-right character). Instances of TextHitInfo are used to specify caret and insertion positions within text.
For example, consider the text "abc". TextHitInfo.trailing(1) corresponds to the right side of the 'b' in the text.
TextHitInfo is used primarily by TextLayout and clients of TextLayout. Clients of TextLayout query TextHitInfo instances for an insertion offset, where new text is inserted into the text model. The insertion offset is equal to the character position in the TextHitInfo if the bias is leading, and one character after if the bias is trailing. The insertion offset for TextHitInfo.trailing(1) is 2.
Sometimes it is convenient to construct a TextHitInfo with the same insertion offset as an existing one, but on the opposite character. The getOtherHit method constructs a new TextHitInfo with the same insertion offset as an existing one, with a hit on the character on the other side of the insertion offset. Calling getOtherHit on trailing(1) would return leading(2). In general, getOtherHit for trailing(n) returns leading(n+1) and getOtherHit for leading(n) returns trailing(n-1).
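A small sketch of the relationships just described:
TextHitInfo hit = TextHitInfo.trailing(1); // right side of 'b' in "abc"
int ins = hit.getInsertionIndex();         // 2
TextHitInfo other = hit.getOtherHit();     // leading(2), same insertion offset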
Example: Converting a graphical point to an insertion point within a text model
TextLayout layout = ...;
Point2D.Float hitPoint = ...;
TextHitInfo hitInfo = layout.hitTestChar(hitPoint.x, hitPoint.y);
int insPoint = hitInfo.getInsertionIndex();
// insPoint is relative to layout; may need to adjust for use
// in a text model
TextLayout is an immutable graphical representation of styled character data.
It provides the following capabilities:
implicit bidirectional analysis and reordering,
cursor positioning and movement, including split cursors for mixed directional text,
highlighting, including both logical and visual highlighting for mixed directional text,
multiple baselines (roman, hanging, and centered),
hit testing,
justification,
default font substitution,
metric information such as ascent, descent, and advance, and
rendering
A TextLayout object can be rendered using its draw method.
TextLayout can be constructed either directly or through the use of a LineBreakMeasurer. When constructed directly, the source text represents a single paragraph. LineBreakMeasurer allows styled text to be broken into lines that fit within a particular width. See the LineBreakMeasurer documentation for more information.
TextLayout construction logically proceeds as follows:
paragraph attributes are extracted and examined,
text is analyzed for bidirectional reordering, and reordering information is computed if needed,
text is segmented into style runs,
fonts are chosen for style runs, first by using a font if the attribute TextAttribute.FONT is present, otherwise by computing a default font using the attributes that have been defined,
if text is on multiple baselines, the runs or subruns are further broken into subruns sharing a common baseline,
glyphvectors are generated for each run using the chosen font, and
final bidirectional reordering is performed on the glyphvectors.
All graphical information returned from a TextLayout object's methods is relative to the origin of the TextLayout, which is the intersection of the TextLayout object's baseline with its left edge. Also, coordinates passed into a TextLayout object's methods are assumed to be relative to the TextLayout object's origin. Clients usually need to translate between a TextLayout object's coordinate system and the coordinate system in another object (such as a Graphics object).
TextLayout objects are constructed from styled text, but they do not retain a reference to their source text. Thus, changes in the text previously used to generate a TextLayout do not affect the TextLayout.
Three methods on a TextLayout object (getNextRightHit, getNextLeftHit, and hitTestChar) return instances of TextHitInfo. The offsets contained in these TextHitInfo objects are relative to the start of the TextLayout, not to the text used to create the TextLayout. Similarly, TextLayout methods that accept TextHitInfo instances as parameters expect the TextHitInfo object's offsets to be relative to the TextLayout, not to any underlying text storage model.
Examples: Constructing and drawing a TextLayout and its bounding rectangle:
Graphics2D g = ...;
Point2D loc = ...;
Font font = Font.getFont("Helvetica-bold-italic");
FontRenderContext frc = g.getFontRenderContext();
TextLayout layout = new TextLayout("This is a string", font, frc);
layout.draw(g, (float) loc.getX(), (float) loc.getY());

Rectangle2D bounds = layout.getBounds();
bounds.setRect(bounds.getX() + loc.getX(),
               bounds.getY() + loc.getY(),
               bounds.getWidth(), bounds.getHeight());
g.draw(bounds);
Hit-testing a TextLayout (determining which character is at a particular graphical location):
Point2D click = ...;
TextHitInfo hit = layout.hitTestChar(
        (float) (click.getX() - loc.getX()),
        (float) (click.getY() - loc.getY()));
Responding to a right-arrow key press:
int insertionIndex = ...;
TextHitInfo next = layout.getNextRightHit(insertionIndex);
if (next != null) {
    // translate graphics to origin of layout on screen
    g.translate(loc.getX(), loc.getY());
    Shape[] carets = layout.getCaretShapes(next.getInsertionIndex());
    g.draw(carets[0]);
    if (carets[1] != null) {
        g.draw(carets[1]);
    }
}
Drawing a selection range corresponding to a substring in the source text. The selected area may not be visually contiguous:
// selStart, selLimit should be relative to the layout, // not to the source text
int selStart = ..., selLimit = ...;
Color selectionColor = ...;
Shape selection = layout.getLogicalHighlightShape(selStart, selLimit);
// selection may consist of disjoint areas
// graphics is assumed to be translated to origin of layout
g.setColor(selectionColor);
g.fill(selection);
Drawing a visually contiguous selection range. The selection range may correspond to more than one substring in the source text. The ranges of the corresponding source text substrings can be obtained with getLogicalRangesForVisualSelection():
TextHitInfo selStart = ..., selLimit = ...;
Shape selection = layout.getVisualHighlightShape(selStart, selLimit);
g.setColor(selectionColor);
g.fill(selection);
int[] ranges = layout.getLogicalRangesForVisualSelection(selStart, selLimit);
// ranges[0], ranges[1] is the first selection range,
// ranges[2], ranges[3] is the second selection range, etc.
Note: Font rotations can cause text baselines to be rotated, and multiple runs with different rotations can cause the baseline to bend or zig-zag. In order to account for this (rare) possibility, some APIs are specified to return metrics and take parameters 'in baseline-relative coordinates' (e.g. ascent, advance), while others are specified 'in standard coordinates' (e.g. getBounds). Values in baseline-relative coordinates map the 'x' coordinate to the distance along the baseline (positive x is forward along the baseline), and the 'y' coordinate to a distance along the perpendicular to the baseline at 'x' (positive y is 90 degrees clockwise from the baseline vector). Values in standard coordinates are measured along the x and y axes, with 0,0 at the origin of the TextLayout. Documentation for each relevant API indicates what values are in what coordinate system. In general, measurement-related APIs are in baseline-relative coordinates, while display-related APIs are in standard coordinates.
Defines a policy for determining the strong caret location. This class contains one method, getStrongCaret, which is used to specify the policy that determines the strong caret in dual-caret text. The strong caret is used to move the caret to the left or right. Instances of this class can be passed to getCaretShapes, getNextLeftHit and getNextRightHit to customize strong caret selection.
To specify alternate caret policies, subclass CaretPolicy and override getStrongCaret. getStrongCaret should inspect the two TextHitInfo arguments and choose one of them as the strong caret.
Most clients do not need to use this class.
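A hedged sketch of a custom policy, assuming a layout and a caret offset; a real policy would inspect both hits rather than choosing unconditionally.
TextLayout.CaretPolicy policy = new TextLayout.CaretPolicy() {
    public TextHitInfo getStrongCaret(TextHitInfo hit1,
                                      TextHitInfo hit2,
                                      TextLayout layout) {
        return hit1; // always prefer the first hit
    }
};
Shape[] carets = layout.getCaretShapes(offset, layout.getBounds(), policy);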
The TextMeasurer class provides the primitive operations needed for line break: measuring up to a given advance, determining the advance of a range of characters, and generating a TextLayout for a range of characters. It also provides methods for incremental editing of paragraphs.
A TextMeasurer object is constructed with an AttributedCharacterIterator representing a single paragraph of text. The value returned by the getBeginIndex method of AttributedCharacterIterator defines the absolute index of the first character. The value returned by the getEndIndex method of AttributedCharacterIterator defines the index past the last character. These values define the range of indexes to use in calls to the TextMeasurer. For example, calls to get the advance of a range of text or the line break of a range of text must use indexes between the beginning and end index values. Calls to insertChar and deleteChar reset the TextMeasurer to use the beginning index and end index of the AttributedCharacterIterator passed in those calls.
Most clients will use the more convenient LineBreakMeasurer, which implements the standard line break policy (placing as many words as will fit on each line).
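A minimal sketch of the primitive operations, assuming a single-paragraph iterator named paragraph and a FontRenderContext named frc; the 200-pixel advance is illustrative.
TextMeasurer tm = new TextMeasurer(paragraph, frc);
int start = paragraph.getBeginIndex();
int limit = tm.getLineBreakIndex(start, 200f); // characters that fit in 200 px
float advance = tm.getAdvanceBetween(start, limit);
TextLayout layout = tm.getLayout(start, limit);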
The TransformAttribute class provides an immutable wrapper for a transform so that it is safe to use as an attribute.
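A minimal sketch; the rotation angle and text are illustrative assumptions.
AttributedString as = new AttributedString("tilted");
AffineTransform rotate = AffineTransform.getRotateInstance(Math.PI / 8);
as.addAttribute(TextAttribute.TRANSFORM, new TransformAttribute(rotate));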
Thrown by method createFont in the Font class to indicate that the specified font is bad.
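A hedged sketch of the call that can throw this exception; the file name is an illustrative assumption.
try {
    Font font = Font.createFont(Font.TRUETYPE_FONT, new File("myfont.ttf"));
} catch (FontFormatException e) {
    // the file was found but does not contain valid font data
} catch (IOException e) {
    // the file could not be read
}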
The FontMetrics class defines a font metrics object, which encapsulates information about the rendering of a particular font on a particular screen.
Note to subclassers: Since many of these methods form closed, mutually recursive loops, you must take care that you implement at least one of the methods in each such loop to prevent infinite recursion when your subclass is used. In particular, the following is the minimal suggested set of methods to override in order to ensure correctness and prevent infinite recursion (though other subsets are equally feasible):
getAscent(), getLeading(), getMaxAdvance(), charWidth(char), and charsWidth(char[], int, int)
Note that the implementations of these methods are inefficient, so they are usually overridden with more efficient toolkit-specific implementations.
When an application asks to place a character at the position (x, y), the character is placed so that its reference point is put at that position. The reference point specifies a horizontal line called the baseline of the character. In normal printing, the baselines of characters should align.
In addition, every character in a font has an ascent, a descent, and an advance width. The ascent is the amount by which the character ascends above the baseline. The descent is the amount by which the character descends below the baseline. The advance width indicates the position at which AWT should place the next character.
An array of characters or a string can also have an ascent, a descent, and an advance width. The ascent of the array is the maximum ascent of any character in the array. The descent is the maximum descent of any character in the array. The advance width is the sum of the advance widths of each of the characters in the character array. The advance of a String is the distance along the baseline of the String. This distance is the width that should be used for centering or right-aligning the String.
Note that the advance of a String is not necessarily the sum of the advances of its characters measured in isolation because the width of a character can vary depending on its context. For example, in Arabic text, the shape of a character can change in order to connect to other characters. Also, in some scripts, certain character sequences can be represented by a single shape, called a ligature. Measuring characters individually does not account for these transformations.
Font metrics are baseline-relative, meaning that they are generally independent of the rotation applied to the font (modulo possible grid hinting effects). See Font.
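A minimal sketch of using these metrics to center a string, assuming a Graphics g, a String s, and a component width named width:
FontMetrics fm = g.getFontMetrics();
int x = (width - fm.stringWidth(s)) / 2; // advance-based horizontal centering
int y = fm.getAscent();                  // baseline placed so the ascent fits
g.drawString(s, x, y);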
A Frame is a top-level window with a title and a border.
The size of the frame includes any area designated for the border. The dimensions of the border area may be obtained using the getInsets method; however, since these dimensions are platform-dependent, a valid insets value cannot be obtained until the frame is made displayable by either calling pack or show. Since the border area is included in the overall size of the frame, the border effectively obscures a portion of the frame, constraining the area available for rendering and/or displaying subcomponents to the rectangle which has an upper-left corner location of (insets.left, insets.top), and has a size of width - (insets.left + insets.right) by height - (insets.top + insets.bottom).
The default layout for a frame is BorderLayout.
A frame may have its native decorations (i.e. Frame and Titlebar) turned off with setUndecorated. This can only be done while the frame is not displayable.
In a multi-screen environment, you can create a Frame on a different screen device by constructing the Frame with Frame(GraphicsConfiguration) or Frame(String title, GraphicsConfiguration). The GraphicsConfiguration object is one of the GraphicsConfiguration objects of the target screen device.
In a virtual device multi-screen environment in which the desktop area could span multiple physical screen devices, the bounds of all configurations are relative to the virtual-coordinate system. The origin of the virtual-coordinate system is at the upper left-hand corner of the primary physical screen. Depending on the location of the primary screen in the virtual device, negative coordinates are possible.
In such an environment, when calling setLocation, you must pass a virtual coordinate to this method. Similarly, calling getLocationOnScreen on a Frame returns virtual device coordinates. Call the getBounds method of a GraphicsConfiguration to find its origin in the virtual coordinate system.
The following code sets the location of the Frame at (10, 10) relative to the origin of the physical screen of the corresponding GraphicsConfiguration. If the bounds of the GraphicsConfiguration are not taken into account, the Frame location would be set at (10, 10) relative to the virtual-coordinate system and would appear on the primary physical screen, which might be different from the physical screen of the specified GraphicsConfiguration.
Frame f = new Frame(gc); // gc is a GraphicsConfiguration
Rectangle bounds = gc.getBounds();
f.setLocation(10 + bounds.x, 10 + bounds.y);
Frames are capable of generating the following types of WindowEvents:
WINDOW_OPENED
WINDOW_CLOSING: If the program doesn't explicitly hide or dispose the window while processing this event, the window close operation is canceled.
WINDOW_CLOSED
WINDOW_ICONIFIED
WINDOW_DEICONIFIED
WINDOW_ACTIVATED
WINDOW_DEACTIVATED
WINDOW_GAINED_FOCUS
WINDOW_LOST_FOCUS
WINDOW_STATE_CHANGED
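A hedged sketch of reacting to WINDOW_CLOSING as described in the list above: the window must be explicitly hidden or disposed, otherwise the close operation is canceled.
Frame f = new Frame("Example");
f.addWindowListener(new WindowAdapter() {
    public void windowClosing(WindowEvent e) {
        e.getWindow().dispose(); // without this, the close is canceled
    }
});
f.setSize(300, 200);
f.setVisible(true);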
The AffineTransform class represents a 2D affine transform that performs a linear mapping from 2D coordinates to other 2D coordinates that preserves the "straightness" and "parallelness" of lines. Affine transformations can be constructed using sequences of translations, scales, flips, rotations, and shears.
Such a coordinate transformation can be represented by a 3 row by 3 column matrix with an implied last row of [ 0 0 1 ]. This matrix transforms source coordinates (x,y) into destination coordinates (x',y') by considering them to be a column vector and multiplying the coordinate vector by the matrix according to the following process:
[ x']   [ m00 m01 m02 ] [ x ]   [ m00x + m01y + m02 ]
[ y'] = [ m10 m11 m12 ] [ y ] = [ m10x + m11y + m12 ]
[ 1 ]   [  0   0   1  ] [ 1 ]   [         1         ]
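As a brief illustration of how these entries map onto the API (the class name MatrixDemo is invented for this sketch), note that the AffineTransform constructor takes the entries column by column:

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class MatrixDemo {
    public static void main(String[] args) {
        // Entries are supplied column by column: m00, m10, m01, m11, m02, m12.
        // This transform scales x by 2 and y by 3, then translates by (5, 7).
        AffineTransform at = new AffineTransform(2, 0, 0, 3, 5, 7);

        // Applies x' = m00*x + m01*y + m02 and y' = m10*x + m11*y + m12.
        Point2D p = at.transform(new Point2D.Double(1, 1), null);
        System.out.println(p); // expected: Point2D.Double[7.0, 10.0]
    }
}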
Handling 90-Degree Rotations
In some variations of the rotate methods in the AffineTransform class, a double-precision argument specifies the angle of rotation in radians. These methods have special handling for rotations of approximately 90 degrees (including multiples such as 180, 270, and 360 degrees), so that the common case of quadrant rotation is handled more efficiently. This special handling can cause angles very close to multiples of 90 degrees to be treated as if they were exact multiples of 90 degrees. For small multiples of 90 degrees the range of angles treated as a quadrant rotation is approximately 0.00000121 degrees wide. This section explains why such special care is needed and how it is implemented.
Since 90 degrees is represented as PI/2 in radians, and since PI is a transcendental (and therefore irrational) number, it is not possible to exactly represent a multiple of 90 degrees as an exact double precision value measured in radians. As a result it is theoretically impossible to describe quadrant rotations (90, 180, 270 or 360 degrees) using these values. Double precision floating point values can get very close to non-zero multiples of PI/2 but never close enough for the sine or cosine to be exactly 0.0, 1.0 or -1.0. The implementations of Math.sin() and Math.cos() correspondingly never return 0.0 for any case other than Math.sin(0.0). These same implementations do, however, return exactly 1.0 and -1.0 for some range of numbers around each multiple of 90 degrees since the correct answer is so close to 1.0 or -1.0 that the double precision significand cannot represent the difference as accurately as it can for numbers that are near 0.0.
The net result of these issues is that if the Math.sin() and Math.cos() methods are used to directly generate the values for the matrix modifications during these radian-based rotation operations then the resulting transform is never strictly classifiable as a quadrant rotation even for a simple case like rotate(Math.PI/2.0), due to minor variations in the matrix caused by the non-0.0 values obtained for the sine and cosine. If these transforms are not classified as quadrant rotations then subsequent code which attempts to optimize further operations based upon the type of the transform will be relegated to its most general implementation.
Because quadrant rotations are fairly common, this class should handle these cases reasonably quickly, both in applying the rotations to the transform and in applying the resulting transform to the coordinates. To facilitate this optimal handling, the methods which take an angle of rotation measured in radians attempt to detect angles that are intended to be quadrant rotations and treat them as such. These methods therefore treat an angle theta as a quadrant rotation if either Math.sin(theta) or Math.cos(theta) returns exactly 1.0 or -1.0. As a rule of thumb, this property holds true for a range of approximately 0.0000000211 radians (or 0.00000121 degrees) around small multiples of Math.PI/2.0.
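A minimal sketch of this detection in action (the class name is invented; the bit test below uses only documented AffineTransform constants, and the expected classification follows the rule just described):

import java.awt.geom.AffineTransform;

public class QuadrantRotationDemo {
    public static void main(String[] args) {
        AffineTransform at = new AffineTransform();
        at.rotate(Math.PI / 2.0); // Math.sin(Math.PI / 2.0) returns exactly 1.0
        // Because the sine is exactly 1.0, the angle is treated as a true
        // 90-degree rotation rather than a general rotation.
        boolean quadrant =
            (at.getType() & AffineTransform.TYPE_QUADRANT_ROTATION) != 0;
        System.out.println(quadrant); // expected: true
    }
}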
Arc2D is the abstract superclass for all objects that store a 2D arc defined by a framing rectangle, start angle, angular extent (length of the arc), and a closure type (OPEN, CHORD, or PIE).
The arc is a partial section of a full ellipse which inscribes the framing rectangle of its parent RectangularShape.
The angles are specified relative to the non-square framing rectangle such that 45 degrees always falls on the line from the center of the ellipse to the upper right corner of the framing rectangle. As a result, if the framing rectangle is noticeably longer along one axis than the other, the angles to the start and end of the arc segment will be skewed farther along the longer axis of the frame.
The actual storage representation of the coordinates is left to the subclass.
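As an illustrative sketch of these definitions (the class name ArcDemo and the chosen geometry are not part of the API docs), the following builds a 90-degree PIE wedge on a non-square frame, where the angle skewing described above applies:

import java.awt.geom.Arc2D;

public class ArcDemo {
    public static void main(String[] args) {
        // A 90-degree PIE wedge of the ellipse inscribed in the 100 x 50
        // framing rectangle at (10, 10). With this non-square frame, the
        // 45-degree start angle points at the frame's upper-right corner
        // rather than at a geometric 45-degree bearing.
        Arc2D arc = new Arc2D.Double(10, 10, 100, 50, 45, 90, Arc2D.PIE);
        System.out.println(arc.getStartPoint());
        System.out.println(arc.getEndPoint());
    }
}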
This class defines an arc specified in double precision.
This class defines an arc specified in float precision.
An Area object stores and manipulates a resolution-independent description of an enclosed area of 2-dimensional space. Area objects can be transformed and can perform various Constructive Area Geometry (CAG) operations when combined with other Area objects. The CAG operations include area addition, subtraction, intersection, and exclusive or. See the linked method documentation for examples of the various operations.
The Area class implements the Shape interface and provides full support for all of its hit-testing and path iteration facilities, but an Area is more specific than a generalized path in a number of ways:
Only closed paths and sub-paths are stored. Area objects constructed from unclosed paths are implicitly closed during construction as if those paths had been filled by the Graphics2D.fill method.
The interiors of the individual stored sub-paths are all non-empty and non-overlapping. Paths are decomposed during construction into separate component non-overlapping parts, empty pieces of the path are discarded, and then these non-empty and non-overlapping properties are maintained through all subsequent CAG operations. Outlines of different component sub-paths may touch each other, as long as they do not cross so that their enclosed areas overlap.
The geometry of the path describing the outline of the Area resembles the path from which it was constructed only in that it describes the same enclosed 2-dimensional area, but may use entirely different types and ordering of the path segments to do so.
Interesting issues which are not always obvious when using the Area include:
Creating an Area from an unclosed (open) Shape results in a closed outline in the Area object.
Creating an Area from a Shape which encloses no area (even when "closed") produces an empty Area. A common example of this issue is that producing an Area from a line will be empty since the line encloses no area. An empty Area will iterate no geometry in its PathIterator objects.
A self-intersecting Shape may be split into two (or more) sub-paths each enclosing one of the non-intersecting portions of the original path.
An Area may take more path segments to describe the same geometry even when the original outline is simple and obvious. The analysis that the Area class must perform on the path may not reflect the same concepts of "simple and obvious" as a human being perceives.
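A minimal sketch exercising the four CAG operations mentioned above (the class name CagDemo and the particular shapes are invented for illustration):

import java.awt.geom.Area;
import java.awt.geom.Ellipse2D;
import java.awt.geom.Rectangle2D;

public class CagDemo {
    public static void main(String[] args) {
        Area square = new Area(new Rectangle2D.Double(0, 0, 100, 100));
        Area disc   = new Area(new Ellipse2D.Double(50, 50, 100, 100));

        Area union = new Area(square);
        union.add(disc);              // addition: covered by either shape

        Area bite = new Area(square);
        bite.subtract(disc);          // subtraction: square minus the disc

        Area overlap = new Area(square);
        overlap.intersect(disc);      // intersection: the shared region only

        Area xor = new Area(square);
        xor.exclusiveOr(disc);        // XOR: covered by exactly one shape

        // Bounds of the overlap, roughly x and y from 50 to 100.
        System.out.println(overlap.getBounds2D());
    }
}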
The CubicCurve2D class defines a cubic parametric curve segment in (x,y) coordinate space.
This class is only the abstract superclass for all objects which store a 2D cubic curve segment. The actual storage representation of the coordinates is left to the subclass.
A cubic parametric curve segment specified with double coordinates.
A cubic parametric curve segment specified with float coordinates.
The Dimension2D class encapsulates a width and a height dimension.
This class is only the abstract superclass for all objects that store a 2D dimension. The actual storage representation of the sizes is left to the subclass.
The Ellipse2D class describes an ellipse that is defined by a framing rectangle.
This class is only the abstract superclass for all objects which store a 2D ellipse. The actual storage representation of the coordinates is left to the subclass.
The Double class defines an ellipse specified in double precision.
The Float class defines an ellipse specified in float precision.
The FlatteningPathIterator class returns a flattened view of another PathIterator object. Other Shape classes can use this class to provide flattening behavior for their paths without having to perform the interpolation calculations themselves.
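A small sketch of the wrapping pattern (the class name and the flatness value of 0.5 are chosen for illustration):

import java.awt.geom.Ellipse2D;
import java.awt.geom.FlatteningPathIterator;
import java.awt.geom.PathIterator;

public class FlattenDemo {
    public static void main(String[] args) {
        PathIterator curved =
            new Ellipse2D.Double(0, 0, 100, 100).getPathIterator(null);
        // Wrap the curved iterator; every segment it now reports is a
        // SEG_MOVETO, SEG_LINETO, or SEG_CLOSE, with the line segments
        // staying within 0.5 units of the true curve.
        PathIterator flat = new FlatteningPathIterator(curved, 0.5);
        double[] coords = new double[6];
        int lines = 0;
        for (; !flat.isDone(); flat.next()) {
            if (flat.currentSegment(coords) == PathIterator.SEG_LINETO) {
                lines++;
            }
        }
        System.out.println(lines + " line segments approximate the ellipse");
    }
}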
The GeneralPath class represents a geometric path constructed from straight lines, and quadratic and cubic (Bézier) curves. It can contain multiple subpaths.
GeneralPath is a legacy final class which exactly implements the behavior of its superclass Path2D.Float. Together with Path2D.Double, the Path2D classes provide full implementations of a general geometric path that support all of the functionality of the Shape and PathIterator interfaces with the ability to explicitly select different levels of internal coordinate precision.
Use Path2D.Float (or this legacy GeneralPath subclass) when dealing with data that can be represented and used with floating point precision. Use Path2D.Double for data that requires the accuracy or range of double precision.
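For example, a path mixing straight and curved segments might be built as follows (a sketch; the class name PathDemo and the coordinates are invented):

import java.awt.geom.GeneralPath;

public class PathDemo {
    public static void main(String[] args) {
        GeneralPath path = new GeneralPath();
        path.moveTo(0, 0);               // start a new subpath
        path.lineTo(100, 0);             // straight edge
        path.quadTo(120, 50, 100, 100);  // quadratic Bezier edge
        path.closePath();                // line back to the starting (0, 0)
        System.out.println(path.contains(50, 20)); // expected: true
    }
}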
The IllegalPathStateException represents an exception that is thrown if an operation is performed on a path that is in an illegal state with respect to the particular operation being performed, such as appending a path segment to a GeneralPath without an initial moveto.
This Line2D represents a line segment in (x,y) coordinate space. This class, like all of the Java 2D API, uses a default coordinate system called user space in which the y-axis values increase downward and x-axis values increase to the right. For more information on the user space coordinate system, see the Coordinate Systems section of the Java 2D Programmer's Guide.
This class is only the abstract superclass for all objects that store a 2D line segment. The actual storage representation of the coordinates is left to the subclass.
A line segment specified with double coordinates.
A line segment specified with float coordinates.
The NoninvertibleTransformException class represents an exception that is thrown if an operation is performed requiring the inverse of an AffineTransform object but the AffineTransform is in a non-invertible state.
The Path2D class provides a simple, yet flexible shape which represents an arbitrary geometric path. It can fully represent any path which can be iterated by the PathIterator interface including all of its segment types and winding rules and it implements all of the basic hit testing methods of the Shape interface.
Use Path2D.Float when dealing with data that can be represented and used with floating point precision. Use Path2D.Double for data that requires the accuracy or range of double precision.
Path2D provides exactly those facilities required for basic construction and management of a geometric path and implementation of the above interfaces with little added interpretation. If it is useful to manipulate the interiors of closed geometric shapes beyond simple hit testing then the Area class provides additional capabilities specifically targeted at closed figures. While both classes nominally implement the Shape interface, they differ in purpose and together they provide two useful views of a geometric shape where Path2D deals primarily with a trajectory formed by path segments and Area deals more with interpretation and manipulation of enclosed regions of 2D geometric space.
The PathIterator interface has more detailed descriptions of the types of segments that make up a path and the winding rules that control how to determine which regions are inside or outside the path.
The Double class defines a geometric path with coordinates stored in double precision floating point.
The Float class defines a geometric path with coordinates stored in single precision floating point.
The PathIterator interface provides the mechanism for objects that implement the Shape interface to return the geometry of their boundary by allowing a caller to retrieve the path of that boundary a segment at a time. This interface allows these objects to retrieve the path of their boundary a segment at a time by using 1st through 3rd order Bézier curves, which are lines and quadratic or cubic Bézier splines.
Multiple subpaths can be expressed by using a "MOVETO" segment to create a discontinuity in the geometry to move from the end of one subpath to the beginning of the next.
Each subpath can be closed manually by ending the last segment in the subpath on the same coordinate as the beginning "MOVETO" segment for that subpath or by using a "CLOSE" segment to append a line segment from the last point back to the first. Be aware that manually closing an outline as opposed to using a "CLOSE" segment to close the path might result in different line style decorations being used at the end points of the subpath. For example, the BasicStroke object uses a line "JOIN" decoration to connect the first and last points if a "CLOSE" segment is encountered, whereas simply ending the path on the same coordinate as the beginning coordinate results in line "CAP" decorations being used at the ends.
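A sketch of the consuming side (names are illustrative); each currentSegment call fills a coordinate array and reports the segment type:

import java.awt.geom.PathIterator;
import java.awt.geom.Rectangle2D;

public class IterDemo {
    public static void main(String[] args) {
        Rectangle2D rect = new Rectangle2D.Double(0, 0, 10, 10);
        double[] coords = new double[6]; // room for up to three points
        for (PathIterator pi = rect.getPathIterator(null);
                !pi.isDone(); pi.next()) {
            switch (pi.currentSegment(coords)) {
                case PathIterator.SEG_MOVETO:
                    System.out.println("moveto " + coords[0] + "," + coords[1]);
                    break;
                case PathIterator.SEG_LINETO:
                    System.out.println("lineto " + coords[0] + "," + coords[1]);
                    break;
                case PathIterator.SEG_CLOSE:
                    System.out.println("close"); // closes back to the moveto point
                    break;
            }
        }
    }
}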
The Point2D class defines a point representing a location in (x,y) coordinate space.
This class is only the abstract superclass for all objects that store a 2D coordinate. The actual storage representation of the coordinates is left to the subclass.
The Double class defines a point specified in double precision.
The Float class defines a point specified in float precision.
The QuadCurve2D class defines a quadratic parametric curve segment in (x,y) coordinate space.
This class is only the abstract superclass for all objects that store a 2D quadratic curve segment. The actual storage representation of the coordinates is left to the subclass.
A quadratic parametric curve segment specified with double coordinates.
A quadratic parametric curve segment specified with float coordinates.
The Rectangle2D class describes a rectangle defined by a location (x,y) and dimension (w x h).
This class is only the abstract superclass for all objects that store a 2D rectangle. The actual storage representation of the coordinates is left to the subclass.
The Double class defines a rectangle specified in double coordinates.
The Float class defines a rectangle specified in float coordinates.
RectangularShape is the base class for a number of Shape objects whose geometry is defined by a rectangular frame. This class does not directly specify any specific geometry by itself, but merely provides manipulation methods inherited by a whole category of Shape objects. The manipulation methods provided by this class can be used to query and modify the rectangular frame, which provides a reference for the subclasses to define their geometry.
The RoundRectangle2D class defines a rectangle with rounded corners defined by a location (x,y), a dimension (w x h), and the width and height of an arc with which to round the corners.
This class is the abstract superclass for all objects that store a 2D rounded rectangle. The actual storage representation of the coordinates is left to the subclass.
The Double class defines a rectangle with rounded corners all specified in double coordinates.
The Float class defines a rectangle with rounded corners all specified in float coordinates.
The GradientPaint class provides a way to fill a Shape with a linear color gradient pattern. If Point P1 with Color C1 and Point P2 with Color C2 are specified in user space, the Color on the P1, P2 connecting line is proportionally changed from C1 to C2. Any point P not on the extended P1, P2 connecting line has the color of the point P' that is the perpendicular projection of P on the extended P1, P2 connecting line. Points on the extended line outside of the P1, P2 segment can be colored in one of two ways.
If the gradient is cyclic then the points on the extended P1, P2 connecting line cycle back and forth between the colors C1 and C2.
If the gradient is acyclic then points on the P1 side of the segment have the constant Color C1 while points on the P2 side have the constant Color C2.
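A brief sketch contrasting the two modes (the class name, target image, and geometry are invented for illustration):

import java.awt.Color;
import java.awt.GradientPaint;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class GradientDemo {
    public static void main(String[] args) {
        BufferedImage img =
            new BufferedImage(200, 50, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();

        // Acyclic: red at (0,0) blending to blue at (50,0); beyond the
        // segment the colors stay constant red and constant blue.
        g2.setPaint(new GradientPaint(0, 0, Color.RED, 50, 0, Color.BLUE));
        g2.fillRect(0, 0, 200, 25);

        // Cyclic: the same gradient, but the colors cycle back and forth
        // between red and blue along the extended line.
        g2.setPaint(new GradientPaint(0, 0, Color.RED, 50, 0, Color.BLUE, true));
        g2.fillRect(0, 25, 200, 25);

        g2.dispose();
    }
}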
The Graphics class is the abstract base class for all graphics contexts that allow an application to draw onto components that are realized on various devices, as well as onto off-screen images.
A Graphics object encapsulates state information needed for the basic rendering operations that Java supports. This state information includes the following properties:
The Component object on which to draw.
A translation origin for rendering and clipping coordinates.
The current clip.
The current color.
The current font.
The current logical pixel operation function (XOR or Paint).
The current XOR alternation color (see setXORMode(java.awt.Color)).
Coordinates are infinitely thin and lie between the pixels of the output device. Operations that draw the outline of a figure operate by traversing an infinitely thin path between pixels with a pixel-sized pen that hangs down and to the right of the anchor point on the path. Operations that fill a figure operate by filling the interior of that infinitely thin path. Operations that render horizontal text render the ascending portion of character glyphs entirely above the baseline coordinate.
The graphics pen hangs down and to the right from the path it traverses. This has the following implications:
If you draw a figure that covers a given rectangle, that figure occupies one extra row of pixels on the right and bottom edges as compared to filling a figure that is bounded by that same rectangle. If you draw a horizontal line along the same y coordinate as the baseline of a line of text, that line is drawn entirely below the text, except for any descenders.
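A headless sketch of the one-extra-pixel effect (the class name PenDemo is invented; it renders into an offscreen image rather than a Component):

import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

public class PenDemo {
    public static void main(String[] args) {
        BufferedImage img =
            new BufferedImage(40, 20, BufferedImage.TYPE_INT_RGB);
        Graphics g = img.getGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 40, 20);
        g.setColor(Color.BLACK);
        // The outline's right and bottom edges land on column 10 and row 10,
        // so the drawn rectangle touches an 11 x 11 pixel area...
        g.drawRect(0, 0, 10, 10);
        // ...while the filled rectangle covers exactly 10 x 10 pixels.
        g.fillRect(20, 0, 10, 10);
        g.dispose();
    }
}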
All coordinates that appear as arguments to the methods of this Graphics object are considered relative to the translation origin of this Graphics object prior to the invocation of the method.
All rendering operations modify only pixels which lie within the area bounded by the current clip, which is specified by a Shape in user space and is controlled by the program using the Graphics object. This user clip is transformed into device space and combined with the device clip, which is defined by the visibility of windows and device extents. The combination of the user clip and device clip defines the composite clip, which determines the final clipping region. The user clip cannot be modified by the rendering system to reflect the resulting composite clip. The user clip can only be changed through the setClip or clipRect methods. All drawing or writing is done in the current color, using the current paint mode, and in the current font.
This Graphics2D class extends the Graphics class to provide more sophisticated control over geometry, coordinate transformations, color management, and text layout. This is the fundamental class for rendering 2-dimensional shapes, text and images on the Java(tm) platform.
Coordinate Spaces All coordinates passed to a Graphics2D object are specified in a device-independent coordinate system called User Space, which is used by applications. The Graphics2D object contains an AffineTransform object as part of its rendering state that defines how to convert coordinates from user space to device-dependent coordinates in Device Space.
Coordinates in device space usually refer to individual device pixels and are aligned on the infinitely thin gaps between these pixels. Some Graphics2D objects can be used to capture rendering operations for storage into a graphics metafile for playback on a concrete device of unknown physical resolution at a later time. Since the resolution might not be known when the rendering operations are captured, the Graphics2D Transform is set up to transform user coordinates to a virtual device space that approximates the expected resolution of the target device. Further transformations might need to be applied at playback time if the estimate is incorrect.
Some of the operations performed by the rendering attribute objects occur in the device space, but all Graphics2D methods take user space coordinates.
Every Graphics2D object is associated with a target that defines where rendering takes place. A GraphicsConfiguration object defines the characteristics of the rendering target, such as pixel format and resolution. The same rendering target is used throughout the life of a Graphics2D object.
When creating a Graphics2D object, the GraphicsConfiguration specifies the default transform for the target of the Graphics2D (a Component or Image). This default transform maps the user space coordinate system to screen and printer device coordinates such that the origin maps to the upper left hand corner of the target region of the device with increasing X coordinates extending to the right and increasing Y coordinates extending downward. The scaling of the default transform is set to identity for those devices that are close to 72 dpi, such as screen devices. The scaling of the default transform is set to approximately 72 user space coordinates per square inch for high resolution devices, such as printers. For image buffers, the default transform is the Identity transform.
Rendering Process The Rendering Process can be broken down into four phases that are controlled by the Graphics2D rendering attributes. The renderer can optimize many of these steps, either by caching the results for future calls, by collapsing multiple virtual steps into a single operation, or by recognizing various attributes as common simple cases that can be eliminated by modifying other parts of the operation.
The steps in the rendering process are:
Determine what to render.
Constrain the rendering operation to the current Clip. The Clip is specified by a Shape in user space and is controlled by the program using the various clip manipulation methods of Graphics and Graphics2D. This user clip is transformed into device space by the current Transform and combined with the device clip, which is defined by the visibility of windows and device extents. The combination of the user clip and device clip defines the composite clip, which determines the final clipping region. The user clip is not modified by the rendering system to reflect the resulting composite clip.
Determine what colors to render.
Apply the colors to the destination drawing surface using the current Composite attribute in the Graphics2D context.
The three types of rendering operations, along with details of each of their particular rendering processes are:
Shape operations
If the operation is a draw(Shape) operation, then the createStrokedShape method on the current Stroke attribute in the Graphics2D context is used to construct a new Shape object that contains the outline of the specified Shape.
The Shape is transformed from user space to device space using the current Transform in the Graphics2D context.
The outline of the Shape is extracted using the getPathIterator method of Shape, which returns a PathIterator object that iterates along the boundary of the Shape.
If the Graphics2D object cannot handle the curved segments that the PathIterator object returns then it can call the alternate getPathIterator method of Shape, which flattens the Shape.
The current Paint in the Graphics2D context is queried for a PaintContext, which specifies the colors to render in device space.
Text operations
The following steps are used to determine the set of glyphs required to render the indicated String:
If the argument is a String, then the current Font in the Graphics2D context is asked to convert the Unicode characters in the String into a set of glyphs for presentation with whatever basic layout and shaping algorithms the font implements.
If the argument is an AttributedCharacterIterator, the iterator is asked to convert itself to a TextLayout using its embedded font attributes. The TextLayout implements more sophisticated glyph layout algorithms that perform Unicode bi-directional layout adjustments automatically for multiple fonts of differing writing directions.
If the argument is a GlyphVector, then the GlyphVector object already contains the appropriate font-specific glyph codes with explicit coordinates for the position of each glyph.
The current Font is queried to obtain outlines for the indicated glyphs. These outlines are treated as shapes in user space relative to the position of each glyph that was determined in step 1.
The character outlines are filled as indicated above under Shape operations.
The current Paint is queried for a PaintContext, which specifies the colors to render in device space.
Image operations
The region of interest is defined by the bounding box of the source Image. This bounding box is specified in Image Space, which is the Image object's local coordinate system.
If an AffineTransform is passed to drawImage(Image, AffineTransform, ImageObserver), the AffineTransform is used to transform the bounding box from image space to user space. If no AffineTransform is supplied, the bounding box is treated as if it is already in user space.
The bounding box of the source Image is transformed from user space into device space using the current Transform. Note that the result of transforming the bounding box does not necessarily result in a rectangular region in device space.
The Image object determines what colors to render, sampled according to the source to destination coordinate mapping specified by the current Transform and the optional image transform.
Default Rendering Attributes The default values for the Graphics2D rendering attributes are:
Paint: the color of the Component.
Font: the Font of the Component.
Stroke: a square pen with a linewidth of 1, no dashing, miter segment joins, and square end caps.
Transform: the getDefaultTransform for the GraphicsConfiguration of the Component.
Composite: the AlphaComposite.SRC_OVER rule.
Clip: no rendering Clip; the output is clipped to the Component.
Rendering Compatibility Issues The JDK(tm) 1.1 rendering model is based on a pixelization model that specifies that coordinates are infinitely thin, lying between the pixels. Drawing operations are performed using a one-pixel wide pen that fills the pixel below and to the right of the anchor point on the path. The JDK 1.1 rendering model is consistent with the capabilities of most of the existing class of platform renderers that need to resolve integer coordinates to a discrete pen that must fall completely on a specified number of pixels.
The Java 2D(tm) (Java(tm) 2 platform) API supports antialiasing renderers. A pen with a width of one pixel does not need to fall completely on pixel N as opposed to pixel N+1. The pen can fall partially on both pixels. It is not necessary to choose a bias direction for a wide pen since the blending that occurs along the pen traversal edges makes the sub-pixel position of the pen visible to the user. On the other hand, when antialiasing is turned off by setting the KEY_ANTIALIASING hint key to the VALUE_ANTIALIAS_OFF hint value, the renderer might need to apply a bias to determine which pixel to modify when the pen is straddling a pixel boundary, such as when it is drawn along an integer coordinate in device space. While the capabilities of an antialiasing renderer make it no longer necessary for the rendering model to specify a bias for the pen, it is desirable for the antialiasing and non-antialiasing renderers to perform similarly for the common cases of drawing one-pixel wide horizontal and vertical lines on the screen. To ensure that turning on antialiasing by setting the KEY_ANTIALIASING hint key to VALUE_ANTIALIAS_ON does not cause such lines to suddenly become twice as wide and half as opaque, it is desirable to have the model specify a path for such lines so that they completely cover a particular set of pixels to help increase their crispness.
Java 2D API maintains compatibility with JDK 1.1 rendering behavior, such that legacy operations and existing renderer behavior are unchanged under Java 2D API. Legacy methods that map onto general draw and fill methods are defined, which clearly indicates how Graphics2D extends Graphics based on settings of Stroke and Transform attributes and rendering hints. The definition performs identically under default attribute settings. For example, the default Stroke is a BasicStroke with a width of 1 and no dashing and the default Transform for screen drawing is an Identity transform.
The following two rules provide predictable rendering behavior whether aliasing or antialiasing is being used.
Device coordinates are defined to be between device pixels, which avoids any inconsistent results between aliased and antialiased rendering. If coordinates were defined to be at a pixel's center, some of the pixels covered by a shape, such as a rectangle, would only be half covered. With aliased rendering, the half covered pixels would either be rendered inside the shape or outside the shape. With anti-aliased rendering, the pixels on the entire edge of the shape would be half covered. On the other hand, since coordinates are defined to be between pixels, a shape like a rectangle would have no half covered pixels, whether or not it is rendered using antialiasing.
Lines and paths stroked using the BasicStroke object may be "normalized" to provide consistent rendering of the outlines when positioned at various points on the drawable and whether drawn with aliased or antialiased rendering. This normalization process is controlled by the KEY_STROKE_CONTROL hint. The exact normalization algorithm is not specified, but the goals of this normalization are to ensure that lines are rendered with consistent visual appearance regardless of how they fall on the pixel grid and to promote more solid horizontal and vertical lines in antialiased mode so that they resemble their non-antialiased counterparts more closely. A typical normalization step might promote antialiased line endpoints to pixel centers to reduce the amount of blending or adjust the subpixel positioning of non-antialiased lines so that the floating point line widths round to even or odd pixel counts with equal likelihood. This process can move endpoints by up to half a pixel (usually towards positive infinity along both axes) to promote these consistent results.
The following definitions of general legacy methods perform identically to previously specified behavior under default attribute settings:
For fill operations, including fillRect, fillRoundRect, fillOval, fillArc, fillPolygon, and clearRect, fill can now be called with the desired Shape. For example, when filling a rectangle:
fill(new Rectangle(x, y, w, h)); is called.
Similarly, for draw operations, including drawLine, drawRect, drawRoundRect, drawOval, drawArc, drawPolyline, and drawPolygon, draw can now be called with the desired Shape. For example, when drawing a rectangle:
draw(new Rectangle(x, y, w, h)); is called.
The draw3DRect and fill3DRect methods were implemented in terms of the drawLine and fillRect methods in the Graphics class, which would predicate their behavior upon the current Stroke and Paint objects in a Graphics2D context. This class overrides those implementations with versions that use the current Color exclusively, overriding the current Paint, and that use fillRect to reproduce the exact behavior of the preexisting methods regardless of the setting of the current Stroke.
The Graphics class defines only the setColor method to control the color to be painted. Since the Java 2D API extends the Color object to implement the new Paint interface, the existing setColor method is now a convenience method for setting the current Paint attribute to a Color object. setColor(c) is equivalent to setPaint(c).
The Graphics class defines two methods for controlling how colors are applied to the destination.
The setPaintMode method is implemented as a convenience method to set the default Composite, equivalent to setComposite(AlphaComposite.SrcOver).
The setXORMode(Color xorcolor) method is implemented as a convenience method to set a special Composite object that ignores the Alpha components of source colors and sets the destination color to the value:
dstpixel = (PixelOf(srccolor) ^ PixelOf(xorcolor) ^ dstpixel);
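A small sketch of the resulting behavior (the class name XorDemo is invented); because XOR is its own inverse, drawing the same figure twice restores the original pixels, the classic trick behind rubber-band selection outlines:

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class XorDemo {
    public static void main(String[] args) {
        BufferedImage img =
            new BufferedImage(20, 20, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();
        g2.setColor(Color.BLACK);
        g2.setXORMode(Color.WHITE);
        g2.fillRect(0, 0, 20, 20); // each pixel becomes src ^ xorcolor ^ dst
        g2.fillRect(0, 0, 20, 20); // second pass XORs again, restoring dst
        g2.dispose();
        // Expected: the original black pixel value, printed as ff000000.
        System.out.println(Integer.toHexString(img.getRGB(10, 10)));
    }
}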
This Graphics2D class extends the Graphics class to provide more sophisticated control over geometry, coordinate transformations, color management, and text layout. This is the fundamental class for rendering 2-dimensional shapes, text and images on the Java(tm) platform.

Coordinate Spaces

All coordinates passed to a Graphics2D object are specified in a device-independent coordinate system called User Space, which is used by applications. The Graphics2D object contains an AffineTransform object as part of its rendering state that defines how to convert coordinates from user space to device-dependent coordinates in Device Space.

Coordinates in device space usually refer to individual device pixels and are aligned on the infinitely thin gaps between these pixels. Some Graphics2D objects can be used to capture rendering operations for storage into a graphics metafile for playback on a concrete device of unknown physical resolution at a later time. Since the resolution might not be known when the rendering operations are captured, the Graphics2D Transform is set up to transform user coordinates to a virtual device space that approximates the expected resolution of the target device. Further transformations might need to be applied at playback time if the estimate is incorrect. Some of the operations performed by the rendering attribute objects occur in the device space, but all Graphics2D methods take user space coordinates.

Every Graphics2D object is associated with a target that defines where rendering takes place. A GraphicsConfiguration object defines the characteristics of the rendering target, such as pixel format and resolution. The same rendering target is used throughout the life of a Graphics2D object.

When creating a Graphics2D object, the GraphicsConfiguration specifies the default transform for the target of the Graphics2D (a Component or Image). This default transform maps the user space coordinate system to screen and printer device coordinates such that the origin maps to the upper left hand corner of the target region of the device with increasing X coordinates extending to the right and increasing Y coordinates extending downward. The scaling of the default transform is set to identity for those devices that are close to 72 dpi, such as screen devices. The scaling of the default transform is set to approximately 72 user space coordinates per square inch for high resolution devices, such as printers. For image buffers, the default transform is the Identity transform.

Rendering Process

The rendering process can be broken down into four phases that are controlled by the Graphics2D rendering attributes. The renderer can optimize many of these steps, either by caching the results for future calls, by collapsing multiple virtual steps into a single operation, or by recognizing various attributes as common simple cases that can be eliminated by modifying other parts of the operation. The steps in the rendering process are:

1. Determine what to render.
2. Constrain the rendering operation to the current Clip. The Clip is specified by a Shape in user space and is controlled by the program using the various clip manipulation methods of Graphics and Graphics2D. This user clip is transformed into device space by the current Transform and combined with the device clip, which is defined by the visibility of windows and device extents. The combination of the user clip and device clip defines the composite clip, which determines the final clipping region. The user clip is not modified by the rendering system to reflect the resulting composite clip.
3. Determine what colors to render.
4. Apply the colors to the destination drawing surface using the current Composite attribute in the Graphics2D context.

The three types of rendering operations, along with details of each of their particular rendering processes, are:

Shape operations
1. If the operation is a draw(Shape) operation, then the createStrokedShape method on the current Stroke attribute in the Graphics2D context is used to construct a new Shape object that contains the outline of the specified Shape.
2. The Shape is transformed from user space to device space using the current Transform in the Graphics2D context.
3. The outline of the Shape is extracted using the getPathIterator method of Shape, which returns a PathIterator object that iterates along the boundary of the Shape. If the Graphics2D object cannot handle the curved segments that the PathIterator object returns then it can call the alternate getPathIterator method of Shape, which flattens the Shape.
4. The current Paint in the Graphics2D context is queried for a PaintContext, which specifies the colors to render in device space.

Text operations
1. The following steps are used to determine the set of glyphs required to render the indicated String: If the argument is a String, then the current Font in the Graphics2D context is asked to convert the Unicode characters in the String into a set of glyphs for presentation with whatever basic layout and shaping algorithms the font implements. If the argument is an AttributedCharacterIterator, the iterator is asked to convert itself to a TextLayout using its embedded font attributes. The TextLayout implements more sophisticated glyph layout algorithms that perform Unicode bi-directional layout adjustments automatically for multiple fonts of differing writing directions. If the argument is a GlyphVector, then the GlyphVector object already contains the appropriate font-specific glyph codes with explicit coordinates for the position of each glyph.
2. The current Font is queried to obtain outlines for the indicated glyphs. These outlines are treated as shapes in user space relative to the position of each glyph that was determined in step 1.
3. The character outlines are filled as indicated above under Shape operations.
4. The current Paint is queried for a PaintContext, which specifies the colors to render in device space.

Image operations
1. The region of interest is defined by the bounding box of the source Image. This bounding box is specified in Image Space, which is the Image object's local coordinate system.
2. If an AffineTransform is passed to drawImage(Image, AffineTransform, ImageObserver), the AffineTransform is used to transform the bounding box from image space to user space. If no AffineTransform is supplied, the bounding box is treated as if it is already in user space.
3. The bounding box of the source Image is transformed from user space into device space using the current Transform. Note that the result of transforming the bounding box does not necessarily result in a rectangular region in device space.
4. The Image object determines what colors to render, sampled according to the source to destination coordinate mapping specified by the current Transform and the optional image transform.

Default Rendering Attributes

The default values for the Graphics2D rendering attributes are:
Paint: The color of the Component.
Font: The Font of the Component.
Stroke: A square pen with a linewidth of 1, no dashing, miter segment joins and square end caps.
Transform: The getDefaultTransform for the GraphicsConfiguration of the Component.
Composite: The AlphaComposite.SRC_OVER rule.
Clip: No rendering Clip, the output is clipped to the Component.

Rendering Compatibility Issues

The JDK(tm) 1.1 rendering model is based on a pixelization model that specifies that coordinates are infinitely thin, lying between the pixels. Drawing operations are performed using a one-pixel wide pen that fills the pixel below and to the right of the anchor point on the path. The JDK 1.1 rendering model is consistent with the capabilities of most of the existing class of platform renderers that need to resolve integer coordinates to a discrete pen that must fall completely on a specified number of pixels.

The Java 2D(tm) (Java(tm) 2 platform) API supports antialiasing renderers. A pen with a width of one pixel does not need to fall completely on pixel N as opposed to pixel N+1. The pen can fall partially on both pixels. It is not necessary to choose a bias direction for a wide pen since the blending that occurs along the pen traversal edges makes the sub-pixel position of the pen visible to the user. On the other hand, when antialiasing is turned off by setting the KEY_ANTIALIASING hint key to the VALUE_ANTIALIAS_OFF hint value, the renderer might need to apply a bias to determine which pixel to modify when the pen is straddling a pixel boundary, such as when it is drawn along an integer coordinate in device space. While the capabilities of an antialiasing renderer make it no longer necessary for the rendering model to specify a bias for the pen, it is desirable for the antialiasing and non-antialiasing renderers to perform similarly for the common cases of drawing one-pixel wide horizontal and vertical lines on the screen. To ensure that turning on antialiasing by setting the KEY_ANTIALIASING hint key to VALUE_ANTIALIAS_ON does not cause such lines to suddenly become twice as wide and half as opaque, it is desirable to have the model specify a path for such lines so that they completely cover a particular set of pixels to help increase their crispness.

The Java 2D API maintains compatibility with JDK 1.1 rendering behavior, such that legacy operations and existing renderer behavior are unchanged under the Java 2D API. Legacy methods that map onto general draw and fill methods are defined, which clearly indicates how Graphics2D extends Graphics based on settings of Stroke and Transform attributes and rendering hints. The definition performs identically under default attribute settings. For example, the default Stroke is a BasicStroke with a width of 1 and no dashing, and the default Transform for screen drawing is an Identity transform.

The following two rules provide predictable rendering behavior whether aliasing or antialiasing is being used.
- Device coordinates are defined to be between device pixels, which avoids any inconsistent results between aliased and antialiased rendering. If coordinates were defined to be at a pixel's center, some of the pixels covered by a shape, such as a rectangle, would only be half covered. With aliased rendering, the half covered pixels would either be rendered inside the shape or outside the shape. With anti-aliased rendering, the pixels on the entire edge of the shape would be half covered. On the other hand, since coordinates are defined to be between pixels, a shape like a rectangle would have no half covered pixels, whether or not it is rendered using antialiasing.
- Lines and paths stroked using the BasicStroke object may be "normalized" to provide consistent rendering of the outlines when positioned at various points on the drawable and whether drawn with aliased or antialiased rendering. This normalization process is controlled by the KEY_STROKE_CONTROL hint. The exact normalization algorithm is not specified, but the goals of this normalization are to ensure that lines are rendered with consistent visual appearance regardless of how they fall on the pixel grid and to promote more solid horizontal and vertical lines in antialiased mode so that they resemble their non-antialiased counterparts more closely. A typical normalization step might promote antialiased line endpoints to pixel centers to reduce the amount of blending or adjust the subpixel positioning of non-antialiased lines so that the floating point line widths round to even or odd pixel counts with equal likelihood. This process can move endpoints by up to half a pixel (usually towards positive infinity along both axes) to promote these consistent results.

The following definitions of general legacy methods perform identically to previously specified behavior under default attribute settings:
- For fill operations, including fillRect, fillRoundRect, fillOval, fillArc, fillPolygon, and clearRect, fill can now be called with the desired Shape. For example, when filling a rectangle, fill(new Rectangle(x, y, w, h)); is called.
- Similarly, for draw operations, including drawLine, drawRect, drawRoundRect, drawOval, drawArc, drawPolyline, and drawPolygon, draw can now be called with the desired Shape. For example, when drawing a rectangle, draw(new Rectangle(x, y, w, h)); is called.
- The draw3DRect and fill3DRect methods were implemented in terms of the drawLine and fillRect methods in the Graphics class, which would predicate their behavior upon the current Stroke and Paint objects in a Graphics2D context. This class overrides those implementations with versions that use the current Color exclusively, overriding the current Paint, and which use fillRect to describe the exact same behavior as the preexisting methods regardless of the setting of the current Stroke.

The Graphics class defines only the setColor method to control the color to be painted. Since the Java 2D API extends the Color object to implement the new Paint interface, the existing setColor method is now a convenience method for setting the current Paint attribute to a Color object. setColor(c) is equivalent to setPaint(c).

The Graphics class defines two methods for controlling how colors are applied to the destination.
- The setPaintMode method is implemented as a convenience method to set the default Composite, equivalent to setComposite(new AlphaComposite.SrcOver).
- The setXORMode(Color xorcolor) method is implemented as a convenience method to set a special Composite object that ignores the Alpha components of source colors and sets the destination color to the value:
dstpixel = (PixelOf(srccolor) ^ PixelOf(xorcolor) ^ dstpixel);
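As a brief illustration of these attributes working together (the class name, colors, and sizes below are illustrative, not taken from the specification), a paint method might set a rendering hint, a Paint, and a Stroke before calling the general fill and draw methods:

import java.awt.*;
import java.awt.geom.*;

// Minimal sketch: a Canvas whose paint() exercises the Graphics2D
// attributes described above. All colors and sizes are illustrative.
public class Graphics2DSketch extends Canvas {
    @Override
    public void paint(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;  // on current JDKs this cast succeeds

        // Rendering hints influence the antialiasing behavior discussed above.
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                            RenderingHints.VALUE_ANTIALIAS_ON);

        // setColor(c) is equivalent to setPaint(c); a GradientPaint shows
        // that any Paint, not just a Color, drives the fill.
        g2.setPaint(new GradientPaint(0, 0, Color.BLUE, 100, 100, Color.WHITE));
        g2.fill(new Rectangle(10, 10, 100, 100));  // legacy fillRect maps onto this

        // The Stroke attribute controls how draw(Shape) outlines are penned.
        g2.setStroke(new BasicStroke(3.0f));
        g2.setPaint(Color.RED);
        g2.draw(new Ellipse2D.Double(130, 10, 100, 100));
    }

    public static void main(String[] args) {
        Frame f = new Frame("Graphics2D sketch");
        f.add(new Graphics2DSketch());
        f.setSize(260, 150);
        f.setVisible(true);
    }
}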
The GraphicsConfigTemplate class is used to obtain a valid GraphicsConfiguration. A user instantiates one of these objects and then sets all non-default attributes as desired. The GraphicsDevice.getBestConfiguration(java.awt.GraphicsConfigTemplate) method found in the GraphicsDevice class is then called with this GraphicsConfigTemplate. A valid GraphicsConfiguration is returned that meets or exceeds what was requested in the GraphicsConfigTemplate.
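A minimal sketch of how this might look in code follows. GraphicsConfigTemplate is abstract, so the subclass and its first-match selection policy here are purely illustrative:

import java.awt.*;

// Illustrative GraphicsConfigTemplate subclass: the first supported
// configuration wins. A real template would test pixel format,
// transparency, and so on in isGraphicsConfigSupported.
class FirstMatchTemplate extends GraphicsConfigTemplate {
    @Override
    public GraphicsConfiguration getBestConfiguration(GraphicsConfiguration[] gc) {
        for (GraphicsConfiguration candidate : gc) {
            if (isGraphicsConfigSupported(candidate)) {
                return candidate;
            }
        }
        return null;
    }

    @Override
    public boolean isGraphicsConfigSupported(GraphicsConfiguration gc) {
        return gc != null;
    }
}

public class TemplateDemo {
    public static void main(String[] args) {
        GraphicsDevice screen = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice();
        // The device applies the template to its own configurations.
        GraphicsConfiguration best =
                screen.getBestConfiguration(new FirstMatchTemplate());
        System.out.println("Best configuration: " + best);
    }
}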
The GraphicsConfiguration class describes the characteristics of a graphics destination such as a printer or monitor. There can be many GraphicsConfiguration objects associated with a single graphics device, representing different drawing modes or capabilities. The corresponding native structure will vary from platform to platform. For example, on X11 windowing systems, each visual is a different GraphicsConfiguration. On Microsoft Windows, GraphicsConfigurations represent PixelFormats available in the current resolution and color depth.
In a virtual device multi-screen environment in which the desktop area could span multiple physical screen devices, the bounds of the GraphicsConfiguration objects are relative to the virtual coordinate system. When setting the location of a component, use getBounds to get the bounds of the desired GraphicsConfiguration and offset the location with the coordinates of the GraphicsConfiguration, as the following code sample illustrates:
Frame f = new Frame(gc); // where gc is a GraphicsConfiguration
Rectangle bounds = gc.getBounds();
f.setLocation(10 + bounds.x, 10 + bounds.y);
To determine if your environment is a virtual device environment, call getBounds on all of the GraphicsConfiguration objects in your system. If any of the origins of the returned bounds is not (0, 0), your environment is a virtual device environment.
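A small sketch of that check (the class name is illustrative):

import java.awt.*;

// Sketch of the virtual-device test described above: any screen
// configuration whose bounds do not start at (0, 0) implies a
// virtual device spanning multiple physical screens.
public class VirtualDeviceCheck {
    public static void main(String[] args) {
        boolean isVirtual = false;
        GraphicsEnvironment ge =
                GraphicsEnvironment.getLocalGraphicsEnvironment();
        for (GraphicsDevice gd : ge.getScreenDevices()) {
            for (GraphicsConfiguration gc : gd.getConfigurations()) {
                Rectangle b = gc.getBounds();
                if (b.x != 0 || b.y != 0) {
                    isVirtual = true;
                }
            }
        }
        System.out.println("Virtual device environment: " + isVirtual);
    }
}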
You can also use getBounds to determine the bounds of the virtual device. To do this, first call getBounds on all of the GraphicsConfiguration objects in your system. Then calculate the union of all of the bounds returned from the calls to getBounds. The union is the bounds of the virtual device. The following code sample calculates the bounds of the virtual device.
Rectangle virtualBounds = new Rectangle();
GraphicsEnvironment ge =
    GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice[] gs = ge.getScreenDevices();
for (int j = 0; j < gs.length; j++) {
    GraphicsDevice gd = gs[j];
    GraphicsConfiguration[] gc = gd.getConfigurations();
    for (int i = 0; i < gc.length; i++) {
        virtualBounds = virtualBounds.union(gc[i].getBounds());
    }
}
The GraphicsDevice class describes the graphics devices that might be available in a particular graphics environment. These include screen and printer devices. Note that there can be many screens and many printers in an instance of GraphicsEnvironment. Each graphics device has one or more GraphicsConfiguration objects associated with it. These objects specify the different configurations in which the GraphicsDevice can be used.
In a multi-screen environment, the GraphicsConfiguration objects can be used to render components on multiple screens. The following code sample demonstrates how to create a JFrame object for each GraphicsConfiguration on each screen device in the GraphicsEnvironment:
GraphicsEnvironment ge =
    GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice[] gs = ge.getScreenDevices();
for (int j = 0; j < gs.length; j++) {
    GraphicsDevice gd = gs[j];
    GraphicsConfiguration[] gc = gd.getConfigurations();
    for (int i = 0; i < gc.length; i++) {
        JFrame f = new JFrame(gs[j].getDefaultConfiguration());
        Canvas c = new Canvas(gc[i]);
        Rectangle gcBounds = gc[i].getBounds();
        int xoffs = gcBounds.x;
        int yoffs = gcBounds.y;
        f.getContentPane().add(c);
        f.setLocation((i * 50) + xoffs, (i * 60) + yoffs);
        f.show();
    }
}
For more information on the full-screen exclusive mode API, see the Full-Screen Exclusive Mode API Tutorial.
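A minimal sketch of entering and releasing full-screen exclusive mode on the default device might look like this; the timing and fallback behavior are illustrative:

import java.awt.*;

// Minimal sketch of full-screen exclusive mode on the default screen
// device; rendering and event handling are omitted for brevity.
public class FullScreenSketch {
    public static void main(String[] args) throws InterruptedException {
        GraphicsDevice device = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice();
        Frame frame = new Frame(device.getDefaultConfiguration());
        frame.setUndecorated(true);
        try {
            if (device.isFullScreenSupported()) {
                device.setFullScreenWindow(frame);   // enter exclusive mode
            } else {
                frame.setSize(640, 480);             // fall back to a window
                frame.setVisible(true);
            }
            Thread.sleep(2000);                      // ... render here ...
        } finally {
            device.setFullScreenWindow(null);        // always release the device
            frame.dispose();
        }
    }
}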
The GraphicsEnvironment class describes the collection of GraphicsDevice objects and Font objects available to a Java(tm) application on a particular platform. The resources in this GraphicsEnvironment might be local or on a remote machine. GraphicsDevice objects can be screens, printers or image buffers and are the destination of Graphics2D drawing methods. Each GraphicsDevice has a number of GraphicsConfiguration objects associated with it. These objects specify the different configurations in which the GraphicsDevice can be used.
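For illustration, a short sketch that enumerates what a local GraphicsEnvironment exposes (the output will vary by platform):

import java.awt.*;

// Sketch: enumerating the devices and fonts a GraphicsEnvironment exposes.
public class EnvironmentSketch {
    public static void main(String[] args) {
        GraphicsEnvironment ge =
                GraphicsEnvironment.getLocalGraphicsEnvironment();

        // Each screen device reports its configurations.
        for (GraphicsDevice gd : ge.getScreenDevices()) {
            System.out.println(gd.getIDstring() + ": "
                    + gd.getConfigurations().length + " configuration(s)");
        }

        // The environment also knows the available font families.
        System.out.println(ge.getAvailableFontFamilyNames().length
                + " font families available");
    }
}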
The GridBagConstraints class specifies constraints for components that are laid out using the GridBagLayout class.
The GridBagLayout class is a flexible layout manager that aligns components vertically, horizontally or along their baseline without requiring that the components be of the same size. Each GridBagLayout object maintains a dynamic, rectangular grid of cells, with each component occupying one or more cells, called its display area.
Each component managed by a GridBagLayout is associated with an instance of GridBagConstraints. The constraints object specifies where a component's display area should be located on the grid and how the component should be positioned within its display area. In addition to its constraints object, the GridBagLayout also considers each component's minimum and preferred sizes in order to determine a component's size.
The overall orientation of the grid depends on the container's ComponentOrientation property. For horizontal left-to-right orientations, grid coordinate (0,0) is in the upper left corner of the container with x increasing to the right and y increasing downward. For horizontal right-to-left orientations, grid coordinate (0,0) is in the upper right corner of the container with x increasing to the left and y increasing downward.
To use a grid bag layout effectively, you must customize one or more of the GridBagConstraints objects that are associated with its components. You customize a GridBagConstraints object by setting one or more of its instance variables:
GridBagConstraints.gridx, GridBagConstraints.gridy
Specifies the cell containing the leading corner of the component's display area, where the cell at the origin of the grid has address gridx = 0, gridy = 0. For horizontal left-to-right layout, a component's leading corner is its upper left. For horizontal right-to-left layout, a component's leading corner is its upper right. Use GridBagConstraints.RELATIVE (the default value) to specify that the component be placed immediately following (along the x axis for gridx or the y axis for gridy) the component that was added to the container just before this component was added.

GridBagConstraints.gridwidth, GridBagConstraints.gridheight
Specifies the number of cells in a row (for gridwidth) or column (for gridheight) in the component's display area. The default value is 1. Use GridBagConstraints.REMAINDER to specify that the component's display area will be from gridx to the last cell in the row (for gridwidth) or from gridy to the last cell in the column (for gridheight).
Use GridBagConstraints.RELATIVE to specify that the component's display area will be from gridx to the next to the last cell in its row (for gridwidth) or from gridy to the next to the last cell in its column (for gridheight).
GridBagConstraints.fill
Used when the component's display area is larger than the component's requested size to determine whether (and how) to resize the component. Possible values are GridBagConstraints.NONE (the default), GridBagConstraints.HORIZONTAL (make the component wide enough to fill its display area horizontally, but don't change its height), GridBagConstraints.VERTICAL (make the component tall enough to fill its display area vertically, but don't change its width), and GridBagConstraints.BOTH (make the component fill its display area entirely).

GridBagConstraints.ipadx, GridBagConstraints.ipady
Specifies the component's internal padding within the layout, that is, how much to add to the minimum size of the component. The width of the component will be at least its minimum width plus ipadx pixels. Similarly, the height of the component will be at least the minimum height plus ipady pixels.

GridBagConstraints.insets
Specifies the component's external padding, the minimum amount of space between the component and the edges of its display area.

GridBagConstraints.anchor
Specifies where the component should be positioned in its display area. There are three kinds of possible values: absolute, orientation-relative, and baseline-relative. Orientation relative values are interpreted relative to the container's ComponentOrientation property while absolute values are not. Baseline relative values are calculated relative to the baseline. Valid values are:
Absolute Values: GridBagConstraints.NORTH, GridBagConstraints.SOUTH, GridBagConstraints.WEST, GridBagConstraints.EAST, GridBagConstraints.NORTHWEST, GridBagConstraints.NORTHEAST, GridBagConstraints.SOUTHWEST, GridBagConstraints.SOUTHEAST, GridBagConstraints.CENTER (the default)

Orientation Relative Values: GridBagConstraints.PAGE_START, GridBagConstraints.PAGE_END, GridBagConstraints.LINE_START, GridBagConstraints.LINE_END, GridBagConstraints.FIRST_LINE_START, GridBagConstraints.FIRST_LINE_END, GridBagConstraints.LAST_LINE_START, GridBagConstraints.LAST_LINE_END

Baseline Relative Values: GridBagConstraints.BASELINE, GridBagConstraints.BASELINE_LEADING, GridBagConstraints.BASELINE_TRAILING, GridBagConstraints.ABOVE_BASELINE, GridBagConstraints.ABOVE_BASELINE_LEADING, GridBagConstraints.ABOVE_BASELINE_TRAILING, GridBagConstraints.BELOW_BASELINE, GridBagConstraints.BELOW_BASELINE_LEADING, GridBagConstraints.BELOW_BASELINE_TRAILING
GridBagConstraints.weightx, GridBagConstraints.weighty
Used to determine how to distribute space, which is important for specifying resizing behavior. Unless you specify a weight for at least one component in a row (weightx) and column (weighty), all the components clump together in the center of their container. This is because when the weight is zero (the default), the GridBagLayout object puts any extra space between its grid of cells and the edges of the container.
Each row may have a baseline; the baseline is determined by the components in that row that have a valid baseline and are aligned along the baseline (the component's anchor value is one of BASELINE, BASELINE_LEADING or BASELINE_TRAILING). If none of the components in the row has a valid baseline, the row does not have a baseline.
If a component spans rows it is aligned either to the baseline of the start row (if the baseline-resize behavior is CONSTANT_ASCENT) or the end row (if the baseline-resize behavior is CONSTANT_DESCENT). The row that the component is aligned to is called the prevailing row.
The following figure shows a baseline layout and includes a component that spans rows:
This layout consists of three components:
- A panel that starts in row 0 and ends in row 1. The panel has a baseline-resize behavior of CONSTANT_DESCENT and an anchor of BASELINE. As the baseline-resize behavior is CONSTANT_DESCENT, the prevailing row for the panel is row 1.
- Two buttons, each with a baseline-resize behavior of CENTER_OFFSET and an anchor of BASELINE.
Because the second button and the panel share the same prevailing row, they are both aligned along their baseline.
Components positioned using one of the baseline-relative values resize differently than when positioned using an absolute or orientation-relative value. How components change is dictated by how the baseline of the prevailing row changes. The baseline is anchored to the bottom of the display area if any components with the same prevailing row have a baseline-resize behavior of CONSTANT_DESCENT, otherwise the baseline is anchored to the top of the display area. The following rules dictate the resize behavior:
- Resizable components positioned above the baseline can only grow as tall as the baseline. For example, if the baseline is at 100 and anchored at the top, a resizable component positioned above the baseline can never grow more than 100 units.
- Similarly, resizable components positioned below the baseline can only grow as high as the difference between the display height and the baseline.
- Resizable components positioned on the baseline with a baseline-resize behavior of OTHER are only resized if the baseline at the resized size fits within the display area. If the baseline is such that it does not fit within the display area the component is not resized.
- Components positioned on the baseline that do not have a baseline-resize behavior of OTHER can only grow as tall as display height - baseline + baseline of component.
If you position a component along the baseline, but the component does not have a valid baseline, it will be vertically centered in its space. Similarly if you have positioned a component relative to the baseline and none of the components in the row have a valid baseline the component is vertically centered.
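A small sketch of baseline anchoring, using Swing buttons because they report a valid baseline (the component choice and font size are illustrative):

import java.awt.*;
import javax.swing.*;

// Sketch: aligning two differently sized buttons along the row baseline.
public class BaselineSketch {
    public static void main(String[] args) {
        JFrame f = new JFrame("Baseline anchor");
        f.setLayout(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.anchor = GridBagConstraints.BASELINE;   // baseline-relative anchor

        f.add(new JButton("Small"), c);
        JButton big = new JButton("Large");
        big.setFont(big.getFont().deriveFont(32f));
        f.add(big, c);                            // shares the row baseline

        f.pack();
        f.setVisible(true);
    }
}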
The following figures show ten components (all buttons) managed by a grid bag layout. Figure 2 shows the layout for a horizontal, left-to-right container and Figure 3 shows the layout for a horizontal, right-to-left container.
Figure 2: Horizontal, Left-to-Right
Figure 3: Horizontal, Right-to-Left
Each of the ten components has the fill field of its associated GridBagConstraints object set to GridBagConstraints.BOTH. In addition, the components have the following non-default constraints:
Button1, Button2, Button3: weightx = 1.0
Button4: weightx = 1.0, gridwidth = GridBagConstraints.REMAINDER
Button5: gridwidth = GridBagConstraints.REMAINDER
Button6: gridwidth = GridBagConstraints.RELATIVE
Button7: gridwidth = GridBagConstraints.REMAINDER
Button8: gridheight = 2, weighty = 1.0
Button9, Button10: gridwidth = GridBagConstraints.REMAINDER
Here is the code that implements the example shown above:
import java.awt.*;
import java.util.*;
import java.applet.Applet;
public class GridBagEx1 extends Applet {
protected void makebutton(String name,
GridBagLayout gridbag,
GridBagConstraints c) {
Button button = new Button(name);
gridbag.setConstraints(button, c);
add(button);
}
public void init() {
GridBagLayout gridbag = new GridBagLayout();
GridBagConstraints c = new GridBagConstraints();
setFont(new Font("SansSerif", Font.PLAIN, 14));
setLayout(gridbag);
c.fill = GridBagConstraints.BOTH;
c.weightx = 1.0;
makebutton("Button1", gridbag, c);
makebutton("Button2", gridbag, c);
makebutton("Button3", gridbag, c);
c.gridwidth = GridBagConstraints.REMAINDER; //end row
makebutton("Button4", gridbag, c);
c.weightx = 0.0; //reset to the default
makebutton("Button5", gridbag, c); //another row
c.gridwidth = GridBagConstraints.RELATIVE; //next-to-last in row
makebutton("Button6", gridbag, c);
c.gridwidth = GridBagConstraints.REMAINDER; //end row
makebutton("Button7", gridbag, c);
c.gridwidth = 1; //reset to the default
c.gridheight = 2;
c.weighty = 1.0;
makebutton("Button8", gridbag, c);
c.weighty = 0.0; //reset to the default
c.gridwidth = GridBagConstraints.REMAINDER; //end row
c.gridheight = 1; //reset to the default
makebutton("Button9", gridbag, c);
makebutton("Button10", gridbag, c);
setSize(300, 100);
}
public static void main(String args[]) {
Frame f = new Frame("GridBag Layout Example");
GridBagEx1 ex1 = new GridBagEx1();
ex1.init();
f.add("Center", ex1);
f.pack();
f.setSize(f.getPreferredSize());
f.show();
}
}
GridBagLayoutInfo is a utility class for the GridBagLayout layout manager. It stores alignment, size, and baseline parameters for every component within a container.
The GridLayout class is a layout manager that lays out a container's components in a rectangular grid. The container is divided into equal-sized rectangles, and one component is placed in each rectangle. For example, the following is an applet that lays out six buttons into three rows and two columns:
import java.awt.*;
import java.applet.Applet;

public class ButtonGrid extends Applet {
    public void init() {
        setLayout(new GridLayout(3, 2));
        add(new Button("1"));
        add(new Button("2"));
        add(new Button("3"));
        add(new Button("4"));
        add(new Button("5"));
        add(new Button("6"));
    }
}
If the container's ComponentOrientation property is horizontal and left-to-right, the above example produces the output shown in Figure 1. If the container's ComponentOrientation property is horizontal and right-to-left, the example produces the output shown in Figure 2.
Figure 1: Horizontal, Left-to-Right
Figure 2: Horizontal, Right-to-Left
When both the number of rows and the number of columns have been set to non-zero values, either by a constructor or by the setRows and setColumns methods, the number of columns specified is ignored. Instead, the number of columns is determined from the specified number of rows and the total number of components in the layout. So, for example, if three rows and two columns have been specified and nine components are added to the layout, they will be displayed as three rows of three columns. Specifying the number of columns affects the layout only when the number of rows is set to zero.
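A short sketch of this rule (the class name is illustrative): requesting three rows and two columns but adding nine components yields a 3 x 3 grid:

import java.awt.*;

// Sketch of the rows-take-precedence rule described above.
public class RowsWinDemo extends Frame {
    public RowsWinDemo() {
        super("GridLayout: rows win");
        setLayout(new GridLayout(3, 2));   // the columns value will be ignored
        for (int i = 1; i <= 9; i++) {
            add(new Button(String.valueOf(i)));  // nine components -> 3 x 3
        }
        pack();
        setVisible(true);
    }

    public static void main(String[] args) {
        new RowsWinDemo();
    }
}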
Thrown when code that is dependent on a keyboard, display, or mouse is called in an environment that does not support a keyboard, display, or mouse.
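Rather than catching this exception, code can test for a headless environment up front; a minimal sketch:

import java.awt.*;

// Sketch: guarding against HeadlessException by asking the environment
// first, rather than catching the exception after the fact.
public class HeadlessGuard {
    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("No display available; skipping UI setup.");
            return;
        }
        Frame f = new Frame("UI available"); // would throw HeadlessException
        f.dispose();                         // if no display were present
    }
}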
Signals that an AWT component is not in an appropriate state for the requested operation.
Provides methods to control text input facilities such as input methods and keyboard layouts. Two methods handle both input methods and keyboard layouts: selectInputMethod lets a client component select an input method or keyboard layout by locale, getLocale lets a client component obtain the locale of the current input method or keyboard layout. The other methods more specifically support interaction with input methods: They let client components control the behavior of input methods, and dispatch events from the client component to the input method.
By default, one InputContext instance is created per Window instance, and this input context is shared by all components within the window's container hierarchy. However, this means that only one text input operation is possible at any one time within a window, and that the text needs to be committed when moving the focus from one text component to another. If this is not desired, text components can create their own input context instances.
The Java Platform supports input methods that have been developed in the Java programming language, using the interfaces in the java.awt.im.spi package, and installed into a Java SE Runtime Environment as extensions. Implementations may also support using the native input methods of the platforms they run on; however, not all platforms and locales provide input methods. Keyboard layouts are provided by the host platform.
Input methods are unavailable if (a) no input method written in the Java programming language has been installed and (b) the Java Platform implementation or the underlying platform does not support native input methods. In this case, input contexts can still be created and used; their behavior is specified with the individual methods below.
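For illustration, a sketch that obtains a component's shared input context and requests a Japanese input method (whether one is installed depends on the platform):

import java.awt.*;
import java.awt.im.InputContext;
import java.util.Locale;

// Sketch: asking a component's shared InputContext to switch to a
// Japanese input method or keyboard layout, if one is available.
public class InputContextSketch {
    public static void main(String[] args) {
        TextField field = new TextField(20);
        Frame f = new Frame("Input context");
        f.add(field);
        f.pack();
        f.setVisible(true);

        InputContext ic = field.getInputContext();  // shared per Window by default
        boolean switched = ic.selectInputMethod(Locale.JAPANESE);
        System.out.println("Japanese input available: " + switched);
        System.out.println("Current input locale: " + ic.getLocale());
    }
}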
An InputMethodHighlight is used to describe the highlight attributes of text being composed. The description can be at two levels: at the abstract level it specifies the conversion state and whether the text is selected; at the concrete level it specifies style attributes used to render the highlight. An InputMethodHighlight must provide the description at the abstract level; it may or may not provide the description at the concrete level. If no concrete style is provided, a renderer should use Toolkit.mapInputMethodHighlight(java.awt.im.InputMethodHighlight) to map to a concrete style.
The abstract description consists of three fields: selected, state, and variation. selected indicates whether the text range is the one that the input method is currently working on, for example, the segment for which conversion candidates are currently shown in a menu. state represents the conversion state. State values are defined by the input method framework and should be distinguished in all mappings from abstract to concrete styles. Currently defined state values are raw (unconverted) and converted. These state values are recommended for use before and after the main conversion step of text composition, say, before and after kana->kanji or pinyin->hanzi conversion. The variation field allows input methods to express additional information about the conversion results.
InputMethodHighlight instances are typically used as attribute values returned from AttributedCharacterIterator for the INPUT_METHOD_HIGHLIGHT attribute. They may be wrapped into Annotation instances to indicate separate text segments.
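A small sketch of that usage (the text and range are illustrative):

import java.awt.im.InputMethodHighlight;
import java.awt.font.TextAttribute;
import java.text.AttributedString;

// Sketch: tagging a span of composed text with a predefined abstract
// highlight; a renderer would map it to a concrete style via the Toolkit.
public class HighlightSketch {
    public static void main(String[] args) {
        AttributedString composed = new AttributedString("kanji");
        composed.addAttribute(
                TextAttribute.INPUT_METHOD_HIGHLIGHT,
                InputMethodHighlight.SELECTED_CONVERTED_TEXT_HIGHLIGHT,
                0, 5);  // mark the whole (selected, converted) segment
        System.out.println("Attributes attached: "
                + composed.getIterator().getAttributes());
    }
}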
InputMethodRequests defines the requests that a text editing component has to handle in order to work with input methods. The component can implement this interface itself or use a separate object that implements it. The object implementing this interface must be returned from the component's getInputMethodRequests method.
The text editing component also has to provide an input method event listener.
The interface is designed to support one of two input user interfaces:
- on-the-spot input, where the composed text is displayed as part of the text component's text body.
- below-the-spot input, where the composed text is displayed in a separate composition window just below the insertion point where the text will be inserted when it is committed.

Note that, if text is selected within the component's text body, this text will be replaced by the committed text upon commitment; therefore it is not considered part of the context that the text is input into.
Defines additional Unicode subsets for use by input methods. Unlike the UnicodeBlock subsets defined in the Character.UnicodeBlock class, these constants do not directly correspond to Unicode code blocks.
Defines the interface for an input method that supports complex text input. Input methods traditionally support text input for languages that have more characters than can be represented on a standard-size keyboard, such as Chinese, Japanese, and Korean. However, they may also be used to support phonetic text input for English or character reordering for Thai.
Subclasses of InputMethod can be loaded by the input method framework; they can then be selected either through the API (InputContext.selectInputMethod) or the user interface (the input method selection menu).
Provides methods that input methods can use to communicate with their client components or to request other services. This interface is implemented by the input method framework, and input methods call its methods on the instance they receive through InputMethod.setInputMethodContext(java.awt.im.spi.InputMethodContext). There should be no other implementors or callers.
Defines methods that provide sufficient information about an input method to enable selection and loading of that input method. The input method itself is only loaded when it is actually used.
The abstract class Image is the superclass of all classes that represent graphical images. The image must be obtained in a platform-specific manner.
This class uses an affine transform to perform a linear mapping from 2D coordinates in the source image or Raster to 2D coordinates in the destination image or Raster. The type of interpolation that is used is specified through a constructor, either by a RenderingHints object or by one of the integer interpolation types defined in this class.
If a RenderingHints object is specified in the constructor, the interpolation hint and the rendering quality hint are used to set the interpolation type for this operation. The color rendering hint and the dithering hint can be used when color conversion is required.
Note that the following constraints have to be met:
The source and destination must be different. For Raster objects, the number of bands in the source must be equal to the number of bands in the destination.
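A minimal sketch of the common case, scaling a BufferedImage with bilinear interpolation (the image contents and sizes are illustrative):

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

// Sketch: doubling an image's size with bilinear interpolation.
public class AffineOpSketch {
    public static void main(String[] args) {
        BufferedImage src =
                new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);

        AffineTransform scale = AffineTransform.getScaleInstance(2.0, 2.0);
        AffineTransformOp op =
                new AffineTransformOp(scale, AffineTransformOp.TYPE_BILINEAR);

        // Destination must differ from the source; null lets the op create it.
        BufferedImage dst = op.filter(src, null);
        System.out.println(dst.getWidth() + " x " + dst.getHeight()); // 128 x 128
    }
}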
An ImageFilter class for scaling images using a simple area averaging algorithm that produces smoother results than the nearest neighbor algorithm. This class extends the basic ImageFilter Class to scale an existing image and provide a source for a new image containing the resampled image. The pixels in the source image are blended to produce pixels for an image of the specified size. The blending process is analogous to scaling up the source image to a multiple of the destination size using pixel replication and then scaling it back down to the destination size by simply averaging all the pixels in the supersized image that fall within a given pixel of the destination image. If the data from the source is not delivered in TopDownLeftRight order then the filter will back off to a simple pixel replication behavior and utilize the requestTopDownLeftRightResend() method to refilter the pixels in a better way at the end. It is meant to be used in conjunction with a FilteredImageSource object to produce scaled versions of existing images. Due to implementation dependencies, there may be differences in pixel values of an image filtered on different platforms.
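A short sketch of that pairing (the source image and target size are illustrative):

import java.awt.*;
import java.awt.image.AreaAveragingScaleFilter;
import java.awt.image.BufferedImage;
import java.awt.image.FilteredImageSource;

// Sketch: combining the filter with a FilteredImageSource, as described
// above, to request a smoothly downscaled version of an existing image.
public class AreaAverageSketch {
    public static void main(String[] args) {
        Image src = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);

        FilteredImageSource producer = new FilteredImageSource(
                src.getSource(), new AreaAveragingScaleFilter(50, 50));
        Image scaled = Toolkit.getDefaultToolkit().createImage(producer);

        // The result is produced asynchronously; a MediaTracker or
        // ImageObserver would normally wait for it to complete.
        System.out.println("Scaled image requested: " + scaled);
    }
}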
This class performs an arbitrary linear combination of the bands in a Raster, using a specified matrix.
The width of the matrix must be equal to the number of bands in the source Raster, optionally plus one. If there is one more column in the matrix than the number of bands, there is an implied 1 at the end of the vector of band samples representing a pixel. The height of the matrix must be equal to the number of bands in the destination.
For example, a 3-banded Raster might have the following transformation applied to each pixel in order to invert the second band of the Raster.
  [ 1.0   0.0   0.0    0.0 ]     [ b1 ]
  [ 0.0  -1.0   0.0  255.0 ]  x  [ b2 ]
  [ 0.0   0.0   1.0    0.0 ]     [ b3 ]
                                 [ 1  ]
Note that the source and destination can be the same object.
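A short sketch of the inversion above, assuming "src" is an existing 3-banded Raster:

// Matrix from the example: identity on bands 1 and 3, invert band 2.
float[][] matrix = {
    { 1.0f,  0.0f, 0.0f,   0.0f },
    { 0.0f, -1.0f, 0.0f, 255.0f },
    { 0.0f,  0.0f, 1.0f,   0.0f }
};
BandCombineOp op = new BandCombineOp(matrix, null);
WritableRaster dst = op.filter(src, null); // null: allocate a compatible raster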
This class represents image data which is stored in a band interleaved fashion and for which each sample of a pixel occupies one data element of the DataBuffer. It subclasses ComponentSampleModel but provides a more efficient implementation for accessing band interleaved image data than is provided by ComponentSampleModel. This class should typically be used when working with images which store sample data for each band in a different bank of the DataBuffer. Accessor methods are provided so that image data can be manipulated directly.
Pixel stride is the number of data array elements between two samples for the same band on the same scanline. The pixel stride for a BandedSampleModel is one. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Band offsets denote the number of data array elements from the first data array element of the bank of the DataBuffer holding each band to the first sample of the band. The bands are numbered from 0 to N-1. Bank indices denote the correspondence between a bank of the data buffer and a band of image data.
This class supports TYPE_BYTE, TYPE_USHORT, TYPE_SHORT, TYPE_INT, TYPE_FLOAT, and TYPE_DOUBLE datatypes.
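A minimal sketch of setting up such a layout (the dimensions are arbitrary assumptions for illustration):

// A banded layout for a 3-band, 640x480 byte image: one bank per band.
BandedSampleModel sm = new BandedSampleModel(DataBuffer.TYPE_BYTE, 640, 480, 3);
DataBuffer db = sm.createDataBuffer();
WritableRaster raster = Raster.createWritableRaster(sm, db, null);
raster.setSample(0, 0, 1, 128); // write one sample of band 1 directly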
The BufferedImage subclass describes an Image with an accessible buffer of image data. A BufferedImage is comprised of a ColorModel and a Raster of image data. The number and types of bands in the SampleModel of the Raster must match the number and types required by the ColorModel to represent its color and alpha components. All BufferedImage objects have an upper left corner coordinate of (0, 0). Any Raster used to construct a BufferedImage must therefore have minX=0 and minY=0.
This class relies on the data fetching and setting methods of Raster, and on the color characterization methods of ColorModel.
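For illustration only, a minimal sketch of creating an image, rendering into it, and reading pixel data back:

// Create an ARGB image, draw into it, and read a packed pixel value.
BufferedImage image = new BufferedImage(200, 100, BufferedImage.TYPE_INT_ARGB);
Graphics2D g = image.createGraphics();
g.setColor(Color.RED);
g.fillRect(10, 10, 50, 50);
g.dispose();
int argb = image.getRGB(20, 20); // packed ARGB value at (20, 20)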
The BufferedImageFilter class subclasses an ImageFilter to provide a simple means of using a single-source/single-destination image operator (BufferedImageOp) to filter a BufferedImage in the Image Producer/Consumer/Observer paradigm. Examples of these image operators are: ConvolveOp, AffineTransformOp and LookupOp.
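A hedged sketch of this bridging role, using a 3x3 box-blur ConvolveOp and an already-loaded Image "img" as assumed inputs:

// Wrap a BufferedImageOp in a BufferedImageFilter for use with a
// FilteredImageSource.
float ninth = 1.0f / 9.0f;
float[] blur = {
    ninth, ninth, ninth,
    ninth, ninth, ninth,
    ninth, ninth, ninth
};
BufferedImageFilter filter = new BufferedImageFilter(
        new ConvolveOp(new Kernel(3, 3, blur)));
Image blurred = Toolkit.getDefaultToolkit().createImage(
        new FilteredImageSource(img.getSource(), filter));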
This interface describes single-input/single-output operations performed on BufferedImage objects. It is implemented by AffineTransformOp, ConvolveOp, ColorConvertOp, RescaleOp, and LookupOp. These objects can be passed into a BufferedImageFilter to operate on a BufferedImage in the ImageProducer-ImageFilter-ImageConsumer paradigm.
Classes that implement this interface must specify whether or not they allow in-place filtering: filter operations where the source object is equal to the destination object.
This interface cannot be used to describe more sophisticated operations such as those that take multiple sources. Note that this restriction also means that the values of the destination pixels prior to the operation are not used as input to the filter operation.
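To make the in-place distinction concrete, a small sketch; RescaleOp is one implementation that permits the source and destination to be the same object, while AffineTransformOp requires them to differ:

// In-place filtering with an operator that allows it.
BufferedImage image = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
RescaleOp brighten = new RescaleOp(1.2f, 0.0f, null);
brighten.filter(image, image); // source == destination: allowed for RescaleOp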
The BufferStrategy class represents the mechanism with which to organize complex memory on a particular Canvas or Window. Hardware and software limitations determine whether and how a particular buffer strategy can be implemented. These limitations are detectable through the capabilities of the GraphicsConfiguration used when creating the Canvas or Window.
It is worth noting that the terms buffer and surface are meant to be synonymous: an area of contiguous memory, either in video device memory or in system memory.
There are several types of complex buffer strategies, including sequential ring buffering and blit buffering. Sequential ring buffering (i.e., double or triple buffering) is the most common; an application draws to a single back buffer and then moves the contents to the front (display) in a single step, either by copying the data or moving the video pointer. Moving the video pointer exchanges the buffers so that the first buffer drawn becomes the front buffer, or what is currently displayed on the device; this is called page flipping.
Alternatively, the contents of the back buffer can be copied, or blitted forward in a chain instead of moving the video pointer.
Double buffering:

                    ***********         ***********
                    *         * ------> *         *
[To display] <---- * Front B *   Show  * Back B. * <---- Rendering
                    *         * <------ *         *
                    ***********         ***********

Triple buffering:

[To      ***********         ***********        ***********
display] *         * --------+---------+------> *         *
<----    * Front B *   Show  * Mid. B. *        * Back B. * <---- Rendering
         *         * <------ *         * <----- *         *
         ***********         ***********        ***********
Here is an example of how buffer strategies can be created and used:
// Check the capabilities of the GraphicsConfiguration
// ...

// Create our component
Window w = new Window(gc);

// Show our window
w.setVisible(true);

// Create a general double-buffering strategy
w.createBufferStrategy(2);
BufferStrategy strategy = w.getBufferStrategy();

// Main loop
while (!done) {
    // Prepare for rendering the next frame
    // ...

    // Render single frame
    do {
        // The following loop ensures that the contents of the drawing buffer
        // are consistent in case the underlying surface was recreated
        do {
            // Get a new graphics context every time through the loop
            // to make sure the strategy is validated
            Graphics graphics = strategy.getDrawGraphics();

            // Render to graphics
            // ...

            // Dispose the graphics
            graphics.dispose();

            // Repeat the rendering if the drawing buffer contents
            // were restored
        } while (strategy.contentsRestored());

        // Display the buffer
        strategy.show();

        // Repeat the rendering if the drawing buffer was lost
    } while (strategy.contentsLost());
}

// Dispose the window
w.setVisible(false);
w.dispose();
This class defines a lookup table object. The output of a lookup operation using an object of this class is interpreted as an unsigned byte quantity. The lookup table contains byte data arrays for one or more bands (or components) of an image, and it contains an offset which will be subtracted from the input values before indexing the arrays. This allows an array smaller than the native data size to be provided for a constrained input. If there is only one array in the lookup table, it will be applied to all bands.
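As a minimal sketch, a table that inverts 8-bit samples, applied through a LookupOp; "src" is assumed to be an existing BufferedImage:

// Build an inversion table; the single array is applied to all bands.
byte[] invert = new byte[256];
for (int i = 0; i < 256; i++) {
    invert[i] = (byte) (255 - i);
}
ByteLookupTable table = new ByteLookupTable(0, invert); // offset 0
LookupOp op = new LookupOp(table, null);
BufferedImage dst = op.filter(src, null);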
This class performs a pixel-by-pixel color conversion of the data in the source image. The resulting color values are scaled to the precision of the destination image. Color conversion can be specified via an array of ColorSpace objects or an array of ICC_Profile objects.
If the source is a BufferedImage with premultiplied alpha, the color components are divided by the alpha component before color conversion. If the destination is a BufferedImage with premultiplied alpha, the color components are multiplied by the alpha component after conversion. Rasters are treated as having no alpha channel, i.e. all bands are color bands.
If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used to control color conversion.
Note that Source and Destination may be the same object.
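A minimal sketch of one common use, converting to the built-in grayscale color space; "src" is assumed to be an existing BufferedImage:

// Convert a source image to grayscale via a ColorSpace-based conversion.
ColorSpace gray = ColorSpace.getInstance(ColorSpace.CS_GRAY);
ColorConvertOp op = new ColorConvertOp(gray, null);
BufferedImage grayImage = op.filter(src, null);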
The ColorModel abstract class encapsulates the methods for translating a pixel value to color components (for example, red, green, and blue) and an alpha component. In order to render an image to the screen, a printer, or another image, pixel values must be converted to color and alpha components. As arguments to or return values from methods of this class, pixels are represented as 32-bit ints or as arrays of primitive types. The number, order, and interpretation of color components for a ColorModel is specified by its ColorSpace. A ColorModel used with pixel data that does not include alpha information treats all pixels as opaque, which is an alpha value of 1.0.
This ColorModel class supports two representations of pixel values. A pixel value can be a single 32-bit int or an array of primitive types. The Java(tm) Platform 1.0 and 1.1 APIs represented pixels as single byte or single int values. For purposes of the ColorModel class, pixel value arguments were passed as ints. The Java(tm) 2 Platform API introduced additional classes for representing images. With BufferedImage or RenderedImage objects, based on Raster and SampleModel classes, pixel values might not be conveniently representable as a single int. Consequently, ColorModel now has methods that accept pixel values represented as arrays of primitive types. The primitive type used by a particular ColorModel object is called its transfer type.
ColorModel objects used with images for which pixel values are not conveniently representable as a single int throw an IllegalArgumentException when methods taking a single int pixel argument are called. Subclasses of ColorModel must specify the conditions under which this occurs. This does not occur with DirectColorModel or IndexColorModel objects.
Currently, the transfer types supported by the Java 2D(tm) API are DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, DataBuffer.TYPE_INT, DataBuffer.TYPE_SHORT, DataBuffer.TYPE_FLOAT, and DataBuffer.TYPE_DOUBLE. Most rendering operations will perform much faster when using ColorModels and images based on the first three of these types. In addition, some image filtering operations are not supported for ColorModels and images based on the latter three types. The transfer type for a particular ColorModel object is specified when the object is created, either explicitly or by default. All subclasses of ColorModel must specify what the possible transfer types are and how the number of elements in the primitive arrays representing pixels is determined.
For BufferedImages, the transfer type of its Raster and of the Raster object's SampleModel (available from the getTransferType methods of these classes) must match that of the ColorModel. The number of elements in an array representing a pixel for the Raster and SampleModel (available from the getNumDataElements methods of these classes) must match that of the ColorModel.
The algorithm used to convert from pixel values to color and alpha components varies by subclass. For example, there is not necessarily a one-to-one correspondence between samples obtained from the SampleModel of a BufferedImage object's Raster and color/alpha components. Even when there is such a correspondence, the number of bits in a sample is not necessarily the same as the number of bits in the corresponding color/alpha component. Each subclass must specify how the translation from pixel values to color/alpha components is done.
Methods in the ColorModel class use two different representations of color and alpha components - a normalized form and an unnormalized form. In the normalized form, each component is a float value between some minimum and maximum values. For the alpha component, the minimum is 0.0 and the maximum is 1.0. For color components the minimum and maximum values for each component can be obtained from the ColorSpace object. These values will often be 0.0 and 1.0 (e.g. normalized component values for the default sRGB color space range from 0.0 to 1.0), but some color spaces have component values with different upper and lower limits. These limits can be obtained using the getMinValue and getMaxValue methods of the ColorSpace class. Normalized color component values are not premultiplied. All ColorModels must support the normalized form.
In the unnormalized form, each component is an unsigned integral value between 0 and 2^n - 1, where n is the number of significant bits for a particular component. If pixel values for a particular ColorModel represent color samples premultiplied by the alpha sample, unnormalized color component values are also premultiplied. The unnormalized form is used only with instances of ColorModel whose ColorSpace has minimum component values of 0.0 for all components and maximum values of 1.0 for all components. The unnormalized form for color and alpha components can be a convenient representation for ColorModels whose normalized component values all lie between 0.0 and 1.0. In such cases the integral value 0 maps to 0.0 and the value 2^n - 1 maps to 1.0. In other cases, such as when the normalized component values can be either negative or positive, the unnormalized form is not convenient. Such ColorModel objects throw an IllegalArgumentException when methods involving an unnormalized argument are called. Subclasses of ColorModel must specify the conditions under which this occurs.
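For illustration, a small sketch of querying components through a ColorModel; the pixel value here is an arbitrary assumption:

// Query a packed ARGB pixel through the default RGB ColorModel.
ColorModel cm = ColorModel.getRGBdefault();
int pixel = 0x80FF4000;            // alpha 0x80, red 0xFF, green 0x40, blue 0x00
int red   = cm.getRed(pixel);      // 255
int alpha = cm.getAlpha(pixel);    // 128
int[] comps = cm.getComponents(pixel, null, 0); // unnormalized {R, G, B, A}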
A ColorModel class that works with pixel values that represent color and alpha information as separate samples and that store each sample in a separate data element. This class can be used with an arbitrary ColorSpace. The number of color samples in the pixel values must be the same as the number of color components in the ColorSpace. There may be a single alpha sample.
For those methods that use a primitive array pixel representation of type transferType, the array length is the same as the number of color and alpha samples. Color samples are stored first in the array followed by the alpha sample, if present. The order of the color samples is specified by the ColorSpace. Typically, this order reflects the name of the color space type. For example, for TYPE_RGB, index 0 corresponds to red, index 1 to green, and index 2 to blue.
The translation from pixel sample values to color/alpha components for display or processing purposes is based on a one-to-one correspondence of samples to components. Depending on the transfer type used to create an instance of ComponentColorModel, the pixel sample values represented by that instance may be signed or unsigned and may be of integral type or float or double (see below for details). The translation from sample values to normalized color/alpha components must follow certain rules. For float and double samples, the translation is an identity, i.e. normalized component values are equal to the corresponding sample values. For integral samples, the translation should be only a simple scale and offset, where the scale and offset constants may be different for each component. The result of applying the scale and offset constants is a set of color/alpha component values, which are guaranteed to fall within a certain range. Typically, the range for a color component will be the range defined by the getMinValue and getMaxValue methods of the ColorSpace class. The range for an alpha component should be 0.0 to 1.0.
Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT have pixel sample values which are treated as unsigned integral values. The number of bits in a color or alpha sample of a pixel value might not be the same as the number of bits for the corresponding color or alpha sample passed to the ComponentColorModel(ColorSpace, int[], boolean, boolean, int, int) constructor. In that case, this class assumes that the least significant n bits of a sample value hold the component value, where n is the number of significant bits for the component passed to the constructor. It also assumes that any higher-order bits in a sample value are zero. Thus, sample values range from 0 to 2^n - 1. This class maps these sample values to normalized color component values such that 0 maps to the value obtained from the ColorSpace's getMinValue method for each component and 2^n - 1 maps to the value obtained from getMaxValue. To create a ComponentColorModel with a different color sample mapping requires subclassing this class and overriding the getNormalizedComponents(Object, float[], int) method. The mapping for an alpha sample always maps 0 to 0.0 and 2^n - 1 to 1.0.
For instances with unsigned sample values, the unnormalized color/alpha component representation is only supported if two conditions hold. First, sample value 0 must map to normalized component value 0.0 and sample value 2^n - 1 to 1.0. Second, the min/max range of all color components of the ColorSpace must be 0.0 to 1.0. In this case, the component representation is the n least significant bits of the corresponding sample. Thus each component is an unsigned integral value between 0 and 2^n - 1, where n is the number of significant bits for a particular component. If these conditions are not met, any method taking an unnormalized component argument will throw an IllegalArgumentException.
Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_SHORT, DataBuffer.TYPE_FLOAT, and DataBuffer.TYPE_DOUBLE have pixel sample values which are treated as signed short, float, or double values. Such instances do not support the unnormalized color/alpha component representation, so any methods taking such a representation as an argument will throw an IllegalArgumentException when called on one of these instances. The normalized component values of instances of this class have a range which depends on the transfer type as follows: for float samples, the full range of the float data type; for double samples, the full range of the float data type (resulting from casting double to float); for short samples, from approximately -maxVal to maxVal, where maxVal is the per component maximum value for the ColorSpace (-32767 maps to -maxVal, 0 maps to 0.0, and 32767 maps to maxVal). A subclass may override the scaling for short sample values to normalized component values by overriding the getNormalizedComponents(Object, float[], int) method. For float and double samples, the normalized component values are taken to be equal to the corresponding sample values, and subclasses should not attempt to add any non-identity scaling for these transfer types.
Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_SHORT, DataBuffer.TYPE_FLOAT, and DataBuffer.TYPE_DOUBLE use all the bits of all sample values. Thus all color/alpha components have 16 bits when using DataBuffer.TYPE_SHORT, 32 bits when using DataBuffer.TYPE_FLOAT, and 64 bits when using DataBuffer.TYPE_DOUBLE. When the ComponentColorModel(ColorSpace, int[], boolean, boolean, int, int) form of constructor is used with one of these transfer types, the bits array argument is ignored.
It is possible to have color/alpha sample values which cannot be reasonably interpreted as component values for rendering. This can happen when ComponentColorModel is subclassed to override the mapping of unsigned sample values to normalized color component values or when signed sample values outside a certain range are used. (As an example, specifying an alpha component as a signed short value outside the range 0 to 32767, normalized range 0.0 to 1.0, can lead to unexpected results.) It is the responsibility of applications to appropriately scale pixel data before rendering such that color components fall within the normalized range of the ColorSpace (obtained using the getMinValue and getMaxValue methods of the ColorSpace class) and the alpha component is between 0.0 and 1.0. If color or alpha component values fall outside these ranges, rendering results are indeterminate.
Methods that use a single int pixel representation throw an IllegalArgumentException, unless the number of components for the ComponentColorModel is one and the component value is unsigned -- in other words, a single color component using a transfer type of DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, or DataBuffer.TYPE_INT and no alpha.
A ComponentColorModel can be used in conjunction with a ComponentSampleModel, a BandedSampleModel, or a PixelInterleavedSampleModel to construct a BufferedImage.
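A hedged sketch of that combination, with arbitrary dimensions assumed for illustration:

// An 8-bit-per-band sRGB ComponentColorModel paired with a
// pixel-interleaved raster to build a BufferedImage.
ColorSpace srgb = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ComponentColorModel cm = new ComponentColorModel(
        srgb, new int[] {8, 8, 8}, false, false,
        Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
WritableRaster raster = Raster.createInterleavedRaster(
        DataBuffer.TYPE_BYTE, 640, 480, 3, null);
BufferedImage image = new BufferedImage(cm, raster, false, null);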
This class represents image data which is stored such that each sample of a pixel occupies one data element of the DataBuffer. It stores the N samples which make up a pixel in N separate data array elements. Different bands may be in different banks of the DataBuffer. Accessor methods are provided so that image data can be manipulated directly. This class can support different kinds of interleaving, e.g. band interleaving, scanline interleaving, and pixel interleaving.
Pixel stride is the number of data array elements between two samples for the same band on the same scanline. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Band offsets denote the number of data array elements from the first data array element of the bank of the DataBuffer holding each band to the first sample of the band. The bands are numbered from 0 to N-1.
This class can represent image data for which each sample is an unsigned integral number which can be stored in 8, 16, or 32 bits (using DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, or DataBuffer.TYPE_INT, respectively), data for which each sample is a signed integral number which can be stored in 16 bits (using DataBuffer.TYPE_SHORT), or data for which each sample is a signed float or double quantity (using DataBuffer.TYPE_FLOAT or DataBuffer.TYPE_DOUBLE, respectively). All samples of a given ComponentSampleModel are stored with the same precision. All strides and offsets must be non-negative. This class supports TYPE_BYTE, TYPE_USHORT, TYPE_SHORT, TYPE_INT, TYPE_FLOAT, and TYPE_DOUBLE datatypes.
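To make the stride and offset terms concrete, a minimal sketch of a pixel-interleaved layout (the dimensions are arbitrary assumptions):

// A pixel-interleaved RGB layout for a 4x4 byte image in a single bank.
int w = 4, h = 4, bands = 3;
ComponentSampleModel sm = new ComponentSampleModel(
        DataBuffer.TYPE_BYTE, w, h,
        bands,                  // pixel stride: samples per pixel
        w * bands,              // scanline stride: samples per row
        new int[] {0, 1, 2});   // band offsets within each pixel
DataBuffer db = sm.createDataBuffer();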
This class implements a convolution from the source to the destination. Convolution using a convolution kernel is a spatial operation that computes the output pixel from an input pixel by multiplying the kernel with the surround of the input pixel. This allows the output pixel to be affected by the immediate neighborhood in a way that can be mathematically specified with a kernel.
This class operates with BufferedImage data in which color components are premultiplied with the alpha component. If the Source BufferedImage has an alpha component, and the color components are not premultiplied with the alpha component, then the data are premultiplied before being convolved. If the Destination has color components which are not premultiplied, then alpha is divided out before storing into the Destination (if alpha is 0, the color components are set to 0). If the Destination has no alpha component, then the resulting alpha is discarded after first dividing it out of the color components.
Rasters are treated as having no alpha channel. If the above treatment of the alpha channel in BufferedImages is not desired, it may be avoided by getting the Raster of a source BufferedImage and using the filter method of this class which works with Rasters.
If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used when color conversion is required.
Note that the Source and the Destination may not be the same object.
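As an illustrative sketch, a 3x3 sharpening convolution; "src" is assumed to be an existing BufferedImage:

// Sharpen with a 3x3 kernel; edge pixels are zero-filled.
float[] sharpen = {
     0.0f, -1.0f,  0.0f,
    -1.0f,  5.0f, -1.0f,
     0.0f, -1.0f,  0.0f
};
ConvolveOp op = new ConvolveOp(new Kernel(3, 3, sharpen),
                               ConvolveOp.EDGE_ZERO_FILL, null);
BufferedImage dst = op.filter(src, null); // source and destination must differ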
An ImageFilter class for cropping images. This class extends the basic ImageFilter Class to extract a given rectangular region of an existing Image and provide a source for a new image containing just the extracted region. It is meant to be used in conjunction with a FilteredImageSource object to produce cropped versions of existing images.
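A minimal sketch of that pairing, with "img" standing in for any already-loaded Image:

// Extract the 100x80 region whose upper-left corner is at (20, 10).
ImageFilter crop = new CropImageFilter(20, 10, 100, 80);
Image cropped = Toolkit.getDefaultToolkit().createImage(
        new FilteredImageSource(img.getSource(), crop));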
This class exists to wrap one or more data arrays. Each data array in the DataBuffer is referred to as a bank. Accessor methods for getting and setting elements of the DataBuffer's banks exist with and without a bank specifier. The methods without a bank specifier use the default 0th bank. The DataBuffer can optionally take an offset per bank, so that data in an existing array can be used even if the interesting data doesn't start at array location zero. Getting or setting the 0th element of a bank uses the (0+offset)th element of the array. The size field specifies how much of the data array is available for use. Size plus offset for a given bank should never be greater than the length of the associated data array. The data type of a data buffer indicates the type of the data array(s) and may also indicate additional semantics, e.g. storing unsigned 8-bit data in elements of a byte array. The data type may be TYPE_UNDEFINED or one of the types defined below. Other types may be added in the future. Generally, an object of class DataBuffer will be cast down to one of its data type specific subclasses to access data type specific methods for improved performance. Currently, the Java 2D(tm) API image classes use TYPE_BYTE, TYPE_USHORT, TYPE_INT, TYPE_SHORT, TYPE_FLOAT, and TYPE_DOUBLE DataBuffers to store image data.
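A minimal sketch using the byte subclass described below to wrap an existing array in a single-bank buffer:

// Wrap an existing byte array and use the accessor methods.
byte[] data = new byte[640 * 480];
DataBufferByte db = new DataBufferByte(data, data.length);
db.setElem(0, 255);     // elements are treated as unsigned bytes
int v = db.getElem(0);  // 255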
This class extends DataBuffer and stores data internally as bytes. Values stored in the byte array(s) of this DataBuffer are treated as unsigned values.
Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array, as noted below in the documentation for those methods.
This class extends DataBuffer and stores data internally in double form.
Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array as noted below in the documentation for those methods.
This class extends DataBuffer and stores data internally in float form.
Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array as noted below in the documentation for those methods.
This class extends DataBuffer and stores data internally as integers.
Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array as noted below in the documentation for those methods.
This class extends DataBuffer and stores data internally as shorts.
Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array as noted below in the documentation for those methods.
This class extends DataBuffer and stores data internally as shorts. Values stored in the short array(s) of this DataBuffer are treated as unsigned values.
Note that some implementations may function more efficiently if they can maintain control over how the data for an image is stored. For example, optimizations such as caching an image in video memory require that the implementation track all modifications to that data. Other implementations may operate better if they can store the data in locations other than a Java array. To maintain optimum compatibility with various optimizations it is best to avoid constructors and methods which expose the underlying storage as a Java array as noted below in the documentation for those methods.
The DirectColorModel class is a ColorModel class that works with pixel values that represent RGB color and alpha information as separate samples and that pack all samples for a single pixel into a single int, short, or byte quantity. This class can be used only with ColorSpaces of type ColorSpace.TYPE_RGB. In addition, for each component of the ColorSpace, the minimum normalized component value obtained via the getMinValue() method of ColorSpace must be 0.0, and the maximum value obtained via the getMaxValue() method must be 1.0 (these min/max values are typical for RGB spaces). There must be three color samples in the pixel values and there can be a single alpha sample. For those methods that use a primitive array pixel representation of type transferType, the array length is always one. The transfer types supported are DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT. Color and alpha samples are stored in the single element of the array in bits indicated by bit masks. Each bit mask must be contiguous and masks must not overlap. The same masks apply to the single int pixel representation used by other methods. The correspondence of masks and color/alpha samples is as follows:
Masks are identified by indices running from 0 through 2 if no alpha is present, or 3 if an alpha is present. The first three indices refer to color samples; index 0 corresponds to red, index 1 to green, and index 2 to blue. Index 3 corresponds to the alpha sample, if present.
The translation from pixel values to color/alpha components for display or processing purposes is a one-to-one correspondence of samples to components. A DirectColorModel is typically used with image data which uses masks to define packed samples. For example, a DirectColorModel can be used in conjunction with a SinglePixelPackedSampleModel to construct a BufferedImage. Normally the masks used by the SampleModel and the ColorModel would be the same. However, if they are different, the color interpretation of pixel data will be done according to the masks of the ColorModel.
A single int pixel representation is valid for all objects of this class, since it is always possible to represent pixel values used with this class in a single int. Therefore, methods which use this representation will not throw an IllegalArgumentException due to an invalid pixel value.
This color model is similar to an X11 TrueColor visual. The default RGB ColorModel specified by the getRGBdefault method is a DirectColorModel with the following parameters:
Number of bits:       32
Red mask:             0x00ff0000
Green mask:           0x0000ff00
Blue mask:            0x000000ff
Alpha mask:           0xff000000
Color space:          sRGB
isAlphaPremultiplied: false
Transparency:         Transparency.TRANSLUCENT
transferType:         DataBuffer.TYPE_INT
Many of the methods in this class are final. This is because the underlying native graphics code makes assumptions about the layout and operation of this class and those assumptions are reflected in the implementations of the methods here that are marked final. You can subclass this class for other reasons, but you cannot override or modify the behavior of those methods.
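As a short sketch (the demo class name is ours), the default model returned by getRGBdefault can be used to unpack the samples of a packed int pixel:

import java.awt.image.ColorModel;
import java.awt.image.DirectColorModel;

public class DirectColorModelDemo {
    public static void main(String[] args) {
        // The default RGB model described above is a DirectColorModel.
        DirectColorModel cm = (DirectColorModel) ColorModel.getRGBdefault();

        int pixel = 0x80ff4020; // 0xAARRGGBB layout, per the masks above
        System.out.println(cm.getRed(pixel));   // 255
        System.out.println(cm.getGreen(pixel)); // 64
        System.out.println(cm.getBlue(pixel));  // 32
        System.out.println(cm.getAlpha(pixel)); // 128
    }
}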
This class is an implementation of the ImageProducer interface which takes an existing image and a filter object and uses them to produce image data for a new filtered version of the original image. Here is an example which filters an image by swapping the red and blue components:
Image src = getImage("doc:///demo/images/duke/T1.gif");
ImageFilter colorfilter = new RedBlueSwapFilter();
Image img = createImage(new FilteredImageSource(src.getSource(), colorfilter));
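The RedBlueSwapFilter used above is not itself part of the API; a plausible sketch, built on the RGBImageFilter base class, might look like this:

import java.awt.image.RGBImageFilter;

class RedBlueSwapFilter extends RGBImageFilter {
    public RedBlueSwapFilter() {
        // The filtering is independent of pixel location, so the
        // filter can be applied once per colormap entry.
        canFilterIndexColorModel = true;
    }

    public int filterRGB(int x, int y, int rgb) {
        return ((rgb & 0xff00ff00)             // keep alpha and green
                | ((rgb & 0x00ff0000) >> 16)   // move red into the blue slot
                | ((rgb & 0x000000ff) << 16)); // move blue into the red slot
    }
}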
The interface for objects expressing interest in image data through the ImageProducer interfaces. When a consumer is added to an image producer, the producer delivers all of the data about the image using the method calls defined in this interface.
This class implements a filter for the set of interface methods that are used to deliver data from an ImageProducer to an ImageConsumer. It is meant to be used in conjunction with a FilteredImageSource object to produce filtered versions of existing images. It is a base class that provides the calls needed to implement a "Null filter" which has no effect on the data being passed through. Filters should subclass this class and override the methods which deal with the data that needs to be filtered and modify it as necessary.
An asynchronous update interface for receiving notifications about Image information as the Image is constructed.
The interface for objects which can produce the image data for Images. Each image contains an ImageProducer which is used to reconstruct the image whenever it is needed, for example, when a new size of the Image is scaled, or when the width or height of the Image is being requested.
The ImagingOpException is thrown if one of the BufferedImageOp or RasterOp filter methods cannot process the image.
The IndexColorModel class is a ColorModel class that works with pixel values consisting of a single sample that is an index into a fixed colormap in the default sRGB color space. The colormap specifies red, green, blue, and optional alpha components corresponding to each index. All components are represented in the colormap as 8-bit unsigned integral values. Some constructors allow the caller to specify "holes" in the colormap by indicating which colormap entries are valid and which represent unusable colors via the bits set in a BigInteger object. This color model is similar to an X11 PseudoColor visual.
Some constructors provide a means to specify an alpha component for each pixel in the colormap, while others either provide no such means or, in some cases, a flag to indicate whether the colormap data contains alpha values. If no alpha is supplied to the constructor, an opaque alpha component (alpha = 1.0) is assumed for each entry. An optional transparent pixel value can be supplied that indicates a pixel to be made completely transparent, regardless of any alpha component supplied or assumed for that pixel value. Note that the color components in the colormap of an IndexColorModel object are never pre-multiplied with the alpha components.
The transparency of an IndexColorModel object is determined by examining the alpha components of the colors in the colormap and choosing the most specific value after considering the optional alpha values and any transparent index specified. The transparency value is Transparency.OPAQUE only if all valid colors in the colormap are opaque and there is no valid transparent pixel. If all valid colors in the colormap are either completely opaque (alpha = 1.0) or completely transparent (alpha = 0.0), which typically occurs when a valid transparent pixel is specified, the value is Transparency.BITMASK. Otherwise, the value is Transparency.TRANSLUCENT, indicating that some valid color has an alpha component that is neither completely transparent nor completely opaque (0.0 < alpha < 1.0).
If an IndexColorModel object has a transparency value of Transparency.OPAQUE, then the hasAlpha and getNumComponents methods (both inherited from ColorModel) return false and 3, respectively. For any other transparency value, hasAlpha returns true and getNumComponents returns 4.
The values used to index into the colormap are taken from the least significant n bits of pixel representations where n is based on the pixel size specified in the constructor. For pixel sizes smaller than 8 bits, n is rounded up to a power of two (3 becomes 4 and 5, 6, 7 become 8). For pixel sizes between 8 and 16 bits, n is equal to the pixel size. Pixel sizes larger than 16 bits are not supported by this class. Higher order bits beyond n are ignored in pixel representations. Index values greater than or equal to the map size, but less than 2^n, are undefined and return 0 for all color and alpha components.
For those methods that use a primitive array pixel representation of type transferType, the array length is always one. The transfer types supported are DataBuffer.TYPE_BYTE and DataBuffer.TYPE_USHORT. A single int pixel representation is valid for all objects of this class, since it is always possible to represent pixel values used with this class in a single int. Therefore, methods that use this representation do not throw an IllegalArgumentException due to an invalid pixel value.
Many of the methods in this class are final. This is because the underlying native graphics code makes assumptions about the layout and operation of this class and those assumptions are reflected in the implementations of the methods here that are marked final. You can subclass this class for other reasons, but you cannot override or modify the behavior of those methods.
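A minimal construction sketch (the demo class name is ours) showing how a transparent pixel affects the reported transparency:

import java.awt.Transparency;
import java.awt.image.IndexColorModel;

public class IndexColorModelDemo {
    public static void main(String[] args) {
        // A 2-bit, 4-entry colormap: black, red, green, blue.
        byte[] r = {(byte) 0x00, (byte) 0xff, (byte) 0x00, (byte) 0x00};
        byte[] g = {(byte) 0x00, (byte) 0x00, (byte) 0xff, (byte) 0x00};
        byte[] b = {(byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0xff};

        // Entry 0 is declared the transparent pixel.
        IndexColorModel icm = new IndexColorModel(2, 4, r, g, b, 0);

        System.out.println(icm.getTransparency() == Transparency.BITMASK); // true
        System.out.println(icm.getAlpha(0)); // 0: fully transparent
        System.out.println(icm.getRed(1));   // 255
    }
}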
The Kernel class defines a matrix that describes how a specified pixel and its surrounding pixels affect the value computed for the pixel's position in the output image of a filtering operation. The X origin and Y origin indicate the kernel matrix element that corresponds to the pixel position for which an output value is being computed.
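For example, a 3x3 box-blur kernel can be built and applied with ConvolveOp; the kernel origin defaults to the center element (1, 1), so each output pixel is computed from the 3x3 neighborhood around the corresponding input pixel (a sketch; the demo class name is ours):

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class KernelDemo {
    public static void main(String[] args) {
        float ninth = 1.0f / 9.0f;
        float[] weights = {
            ninth, ninth, ninth,
            ninth, ninth, ninth,
            ninth, ninth, ninth
        };
        Kernel kernel = new Kernel(3, 3, weights);
        System.out.println(kernel.getXOrigin() + "," + kernel.getYOrigin()); // 1,1

        BufferedImage src = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        ConvolveOp blur = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
        BufferedImage dst = blur.filter(src, null);
    }
}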
This class implements a lookup operation from the source to the destination. The LookupTable object may contain a single array or multiple arrays, subject to the restrictions below.
For Rasters, the lookup operates on bands. The number of lookup arrays may be one, in which case the same array is applied to all bands, or it must equal the number of Source Raster bands.
For BufferedImages, the lookup operates on color and alpha components. The number of lookup arrays may be one, in which case the same array is applied to all color (but not alpha) components. Otherwise, the number of lookup arrays may equal the number of Source color components, in which case no lookup of the alpha component (if present) is performed. If neither of these cases applies, the number of lookup arrays must equal the number of Source color components plus alpha components, in which case lookup is performed for all color and alpha components. This allows non-uniform rescaling of multi-band BufferedImages.
BufferedImage sources with premultiplied alpha data are treated in the same manner as non-premultiplied images for purposes of the lookup. That is, the lookup is done per band on the raw data of the BufferedImage source without regard to whether the data is premultiplied. If a color conversion is required to the destination ColorModel, the premultiplied state of both source and destination will be taken into account for this step.
Images with an IndexColorModel cannot be used.
If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used when color conversion is required.
This class allows the Source to be the same as the Destination.
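A brief sketch of the single-array case (the demo class name is ours): one 256-entry inversion table is applied to every color band of an RGB image, leaving alpha, if any, untouched:

import java.awt.image.BufferedImage;
import java.awt.image.ByteLookupTable;
import java.awt.image.LookupOp;

public class LookupOpDemo {
    public static void main(String[] args) {
        byte[] invert = new byte[256];
        for (int i = 0; i < 256; i++) {
            invert[i] = (byte) (255 - i);
        }
        ByteLookupTable table = new ByteLookupTable(0, invert);

        BufferedImage src = new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB);
        LookupOp op = new LookupOp(table, null);
        BufferedImage dst = op.filter(src, null); // op.filter(src, src) is also legal
    }
}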
This abstract class defines a lookup table object. ByteLookupTable and ShortLookupTable are subclasses, which contain byte and short data, respectively. A lookup table contains data arrays for one or more bands (or components) of an image (for example, separate arrays for R, G, and B), and it contains an offset which will be subtracted from the input values before indexing into the arrays. This allows an array smaller than the native data size to be provided for a constrained input. If there is only one array in the lookup table, it will be applied to all bands. All arrays must be the same size.
This class is an implementation of the ImageProducer interface which uses an array to produce pixel values for an Image. Here is an example which calculates a 100x100 image representing a fade from black to blue along the X axis and a fade from black to red along the Y axis:
int w = 100;
int h = 100;
int pix[] = new int[w * h];
int index = 0;
for (int y = 0; y < h; y++) {
    int red = (y * 255) / (h - 1);
    for (int x = 0; x < w; x++) {
        int blue = (x * 255) / (w - 1);
        pix[index++] = (255 << 24) | (red << 16) | blue;
    }
}
Image img = createImage(new MemoryImageSource(w, h, pix, 0, w));
The MemoryImageSource is also capable of managing a memory image which varies over time to allow animation or custom rendering. Here is an example showing how to set up the animation source and signal changes in the data (adapted from the MemoryAnimationSourceDemo by Garth Dickie):
int pixels[];
Image image;   // assigned in init() below
MemoryImageSource source;
public void init() {
    int width = 50;
    int height = 50;
    int size = width * height;
    pixels = new int[size];

    int value = getBackground().getRGB();
    for (int i = 0; i < size; i++) {
        pixels[i] = value;
    }

    source = new MemoryImageSource(width, height, pixels, 0, width);
    source.setAnimated(true);
    image = createImage(source);
}

public void run() {
    Thread me = Thread.currentThread();
    me.setPriority(Thread.MIN_PRIORITY);

    while (true) {
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            return;
        }

        // Modify the values in the pixels array at (x, y, w, h)

        // Send the new data to the interested ImageConsumers
        source.newPixels(x, y, w, h);
    }
}
The MultiPixelPackedSampleModel class represents one-banded images and can pack multiple one-sample pixels into one data element. Pixels are not allowed to span data elements. The data type can be DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, or DataBuffer.TYPE_INT. Each pixel must be a power of 2 number of bits and a power of 2 number of pixels must fit exactly in one data element. Pixel bit stride is equal to the number of bits per pixel. Scanline stride is in data elements and the last several data elements might be padded with unused pixels. Data bit offset is the offset in bits from the beginning of the DataBuffer to the first pixel and must be a multiple of pixel bit stride.
The following code illustrates extracting the bits for pixel x, y from DataBuffer data and storing the pixel data in data elements of type dataType:
int dataElementSize = DataBuffer.getDataTypeSize(dataType);
int bitnum = dataBitOffset + x * pixelBitStride;
int element = data.getElem(y * scanlineStride + bitnum / dataElementSize);
int shift = dataElementSize - (bitnum & (dataElementSize - 1))
            - pixelBitStride;
int pixel = (element >> shift) & ((1 << pixelBitStride) - 1);
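In practice this layout is usually set up through the Raster factory methods rather than by hand; here is a short sketch (the demo class name is ours) of a 1-bit-per-pixel raster backed by this sample model:

import java.awt.image.DataBuffer;
import java.awt.image.MultiPixelPackedSampleModel;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class PackedRasterDemo {
    public static void main(String[] args) {
        // A 16x16 bitmap packed 8 pixels per byte (1 band, 1 bit per pixel).
        WritableRaster raster = Raster.createPackedRaster(
                DataBuffer.TYPE_BYTE, 16, 16, 1, 1, null);
        MultiPixelPackedSampleModel sm =
                (MultiPixelPackedSampleModel) raster.getSampleModel();

        raster.setSample(3, 0, 0, 1); // set pixel (3, 0) in band 0
        System.out.println(raster.getSample(3, 0, 0)); // 1
        System.out.println(sm.getPixelBitStride());    // 1
        System.out.println(sm.getScanlineStride());    // 2 data elements per row
    }
}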
The PackedColorModel class is an abstract ColorModel class that works with pixel values which represent color and alpha information as separate samples and which pack all samples for a single pixel into a single int, short, or byte quantity. This class can be used with an arbitrary ColorSpace. The number of color samples in the pixel values must be the same as the number of color components in the ColorSpace. There can be a single alpha sample. The array length is always 1 for those methods that use a primitive array pixel representation of type transferType. The transfer types supported are DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT. Color and alpha samples are stored in the single element of the array in bits indicated by bit masks. Each bit mask must be contiguous and masks must not overlap. The same masks apply to the single int pixel representation used by other methods. The correspondence of masks and color/alpha samples is as follows:
Masks are identified by indices running from 0 through getNumComponents - 1. The first getNumColorComponents indices refer to color samples. If an alpha sample is present, it corresponds to the last index. The order of the color indices is specified by the ColorSpace and typically reflects the name of the color space type; for TYPE_RGB, for example, index 0 corresponds to red, index 1 to green, and index 2 to blue.
The translation from pixel values to color/alpha components for display or processing purposes is a one-to-one correspondence of samples to components. A PackedColorModel is typically used with image data that uses masks to define packed samples. For example, a PackedColorModel can be used in conjunction with a SinglePixelPackedSampleModel to construct a BufferedImage. Normally the masks used by the SampleModel and the ColorModel would be the same. However, if they are different, the color interpretation of pixel data is done according to the masks of the ColorModel.
A single int pixel representation is valid for all objects of this class since it is always possible to represent pixel values used with this class in a single int. Therefore, methods that use this representation do not throw an IllegalArgumentException due to an invalid pixel value.
A subclass of PackedColorModel is DirectColorModel, which is similar to an X11 TrueColor visual.
The PixelGrabber class implements an ImageConsumer which can be attached to an Image or ImageProducer object to retrieve a subset of the pixels in that image. Here is an example:
public void handlesinglepixel(int x, int y, int pixel) {
    int alpha = (pixel >> 24) & 0xff;
    int red   = (pixel >> 16) & 0xff;
    int green = (pixel >>  8) & 0xff;
    int blue  = (pixel      ) & 0xff;
    // Deal with the pixel as necessary...
}
public void handlepixels(Image img, int x, int y, int w, int h) {
    int[] pixels = new int[w * h];
    PixelGrabber pg = new PixelGrabber(img, x, y, w, h, pixels, 0, w);
    try {
        pg.grabPixels();
    } catch (InterruptedException e) {
        System.err.println("interrupted waiting for pixels!");
        return;
    }
    if ((pg.getStatus() & ImageObserver.ABORT) != 0) {
        System.err.println("image fetch aborted or errored");
        return;
    }
    for (int j = 0; j < h; j++) {
        for (int i = 0; i < w; i++) {
            handlesinglepixel(x + i, y + j, pixels[j * w + i]);
        }
    }
}
This class represents image data which is stored in a pixel interleaved fashion and for which each sample of a pixel occupies one data element of the DataBuffer. It subclasses ComponentSampleModel but provides a more efficient implementation for accessing pixel interleaved image data than is provided by ComponentSampleModel. This class stores sample data for all bands in a single bank of the DataBuffer. Accessor methods are provided so that image data can be manipulated directly. Pixel stride is the number of data array elements between two samples for the same band on the same scanline. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Band offsets denote the number of data array elements from the first data array element of the bank of the DataBuffer holding each band to the first sample of the band. The bands are numbered from 0 to N-1. Bank indices denote the correspondence between a bank of the data buffer and a band of image data. This class supports TYPE_BYTE, TYPE_USHORT, TYPE_SHORT, TYPE_INT, TYPE_FLOAT and TYPE_DOUBLE datatypes.
A class representing a rectangular array of pixels. A Raster encapsulates a DataBuffer that stores the sample values and a SampleModel that describes how to locate a given sample value in a DataBuffer.
A Raster defines values for pixels occupying a particular rectangular area of the plane, not necessarily including (0, 0). The rectangle, known as the Raster's bounding rectangle and available by means of the getBounds method, is defined by minX, minY, width, and height values. The minX and minY values define the coordinate of the upper left corner of the Raster. References to pixels outside of the bounding rectangle may result in an exception being thrown, or may result in references to unintended elements of the Raster's associated DataBuffer. It is the user's responsibility to avoid accessing such pixels.
A SampleModel describes how samples of a Raster are stored in the primitive array elements of a DataBuffer. Samples may be stored one per data element, as in a PixelInterleavedSampleModel or BandedSampleModel, or packed several to an element, as in a SinglePixelPackedSampleModel or MultiPixelPackedSampleModel. The SampleModel also controls whether samples are sign extended, allowing unsigned data to be stored in signed Java data types such as byte, short, and int.
Although a Raster may live anywhere in the plane, a SampleModel makes use of a simple coordinate system that starts at (0, 0). A Raster therefore contains a translation factor that allows pixel locations to be mapped between the Raster's coordinate system and that of the SampleModel. The translation from the SampleModel coordinate system to that of the Raster may be obtained by the getSampleModelTranslateX and getSampleModelTranslateY methods.
A Raster may share a DataBuffer with another Raster either by explicit construction or by the use of the createChild and createTranslatedChild methods. Rasters created by these methods can return a reference to the Raster they were created from by means of the getParent method. For a Raster that was not constructed by means of a call to createTranslatedChild or createChild, getParent will return null.
The createTranslatedChild method returns a new Raster that shares all of the data of the current Raster, but occupies a bounding rectangle of the same width and height but with a different starting point. For example, if the parent Raster occupied the region (10, 10) to (100, 100), and the translated Raster was defined to start at (50, 50), then pixel (20, 20) of the parent and pixel (60, 60) of the child occupy the same location in the DataBuffer shared by the two Rasters. In the first case, (-10, -10) should be added to a pixel coordinate to obtain the corresponding SampleModel coordinate, and in the second case (-50, -50) should be added.
The translation between a parent and child Raster may be determined by subtracting the child's sampleModelTranslateX and sampleModelTranslateY values from those of the parent.
The createChild method may be used to create a new Raster occupying only a subset of its parent's bounding rectangle (with the same or a translated coordinate system) or with a subset of the bands of its parent.
All constructors are protected. The correct way to create a Raster is to use one of the static create methods defined in this class. These methods create instances of Raster that use the standard Interleaved, Banded, and Packed SampleModels and that may be processed more efficiently than a Raster created by combining an externally generated SampleModel and DataBuffer.
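The shared-data behavior described above can be seen in a short sketch (the demo class name is ours), mirroring the (10, 10)/(50, 50) example:

import java.awt.Point;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class RasterChildDemo {
    public static void main(String[] args) {
        // Parent occupies (10, 10) to (110, 110): 3 interleaved byte bands.
        WritableRaster parent = Raster.createInterleavedRaster(
                DataBuffer.TYPE_BYTE, 100, 100, 3, new Point(10, 10));

        // A translated child shares the parent's DataBuffer but starts at (50, 50).
        WritableRaster child = parent.createWritableTranslatedChild(50, 50);

        parent.setSample(20, 20, 0, 200);
        System.out.println(child.getSample(60, 60, 0));  // 200: same storage
        System.out.println(child.getParent() == parent); // true
    }
}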
The RasterFormatException is thrown if there is invalid layout information in the Raster.
This interface describes single-input/single-output operations performed on Raster objects. It is implemented by such classes as AffineTransformOp, ConvolveOp, and LookupOp. The Source and Destination objects must contain the appropriate number of bands for the particular classes implementing this interface. Otherwise, an exception is thrown. This interface cannot be used to describe more sophisticated Ops such as ones that take multiple sources. Each class implementing this interface will specify whether or not it will allow an in-place filtering operation (i.e. source object equal to the destination object). Note that the restriction to single-input operations means that the values of destination pixels prior to the operation are not used as input to the filter operation.
ContextualRenderedImageFactory provides an interface for the functionality that may differ between instances of RenderableImageOp. Thus different operations on RenderableImages may be performed by a single class such as RenderedImageOp through the use of multiple instances of ContextualRenderedImageFactory. The name ContextualRenderedImageFactory is commonly shortened to "CRIF."
All operations that are to be used in a rendering-independent chain must implement ContextualRenderedImageFactory.
Classes that implement this interface must provide a constructor with no arguments.
A ParameterBlock encapsulates all the information about sources and parameters (Objects) required by a RenderableImageOp, or other classes that process images.
Although it is possible to place arbitrary objects in the source Vector, users of this class may impose semantic constraints such as requiring all sources to be RenderedImages or RenderableImages. ParameterBlock itself is merely a container and performs no checking on source or parameter types.
All parameters in a ParameterBlock are objects; convenience add and set methods are available that take arguments of base type and construct the appropriate subclass of Number (such as Integer or Float). Corresponding get methods perform a downward cast and have return values of base type; an exception will be thrown if the stored values do not have the correct type. There is no way to distinguish between the results of "short s; add(s)" and "add(new Short(s))".
Note that the get and set methods operate on references. Therefore, one must be careful not to share references between ParameterBlocks when this is inappropriate. For example, to create a new ParameterBlock that is equal to an old one except for an added source, one might be tempted to write:
ParameterBlock addSource(ParameterBlock pb, RenderableImage im) {
    ParameterBlock pb1 = new ParameterBlock(pb.getSources());
    pb1.addSource(im);
    return pb1;
}
This code will have the side effect of altering the original ParameterBlock, since the getSources operation returned a reference to its source Vector. Both pb and pb1 share their source Vector, and a change in either is visible to both.
A correct way to write the addSource function is to clone the source Vector:
ParameterBlock addSource(ParameterBlock pb, RenderableImage im) {
    ParameterBlock pb1 =
        new ParameterBlock((Vector) pb.getSources().clone());
    pb1.addSource(im);
    return pb1;
}
The clone method of ParameterBlock has been defined to perform a clone of both the source and parameter Vectors for this reason. A standard, shallow clone is available as shallowClone.
The addSource, setSource, add, and set methods are defined to return 'this' after adding their argument. This allows use of syntax like:
ParameterBlock pb = new ParameterBlock();
op = new RenderableImageOp("operation", pb.add(arg1).add(arg2));
A RenderableImage is a common interface for rendering-independent images (a notion which subsumes resolution independence). That is, images which are described and have operations applied to them independent of any specific rendering of the image. For example, a RenderableImage can be rotated and cropped in resolution-independent terms. Then, it can be rendered for various specific contexts, such as a draft preview, a high-quality screen display, or a printer, each in an optimal fashion.
A RenderedImage is returned from a RenderableImage via the createRendering() method, which takes a RenderContext. The RenderContext specifies how the RenderedImage should be constructed. Note that it is not possible to extract pixels directly from a RenderableImage.
The createDefaultRendering() and createScaledRendering() methods are convenience methods that construct an appropriate RenderContext internally. All of the rendering methods may return a reference to a previously produced rendering.
This class handles the renderable aspects of an operation with help from its associated instance of a ContextualRenderedImageFactory.
An adapter class that implements ImageProducer to allow the asynchronous production of a RenderableImage. The size of the ImageConsumer is determined by the scale factor of the usr2dev transform in the RenderContext. If the RenderContext is null, the default rendering of the RenderableImage is used. This class implements an asynchronous production that produces the image in one thread at one resolution. This class may be subclassed to implement versions that will render the image using several threads. These threads could render either the same image at progressively better quality, or different sections of the image at a single resolution.
A RenderContext encapsulates the information needed to produce a specific rendering from a RenderableImage. It contains the area to be rendered specified in rendering-independent terms, the resolution at which the rendering is to be performed, and hints used to control the rendering process.
Users create RenderContexts and pass them to the RenderableImage via the createRendering method. Most of the methods of RenderContexts are not meant to be used directly by applications, but by the RenderableImage and operator classes to which it is passed.
The AffineTransform parameters passed into and out of this class are cloned. The RenderingHints and Shape parameters are not necessarily cloneable and are therefore only copied by reference. Altering RenderingHints or Shape instances that are in use by instances of RenderContext may have undesired side effects.
The RenderedImageFactory interface (often abbreviated RIF) is intended to be implemented by classes that wish to act as factories to produce different renderings, for example by executing a series of BufferedImageOps on a set of sources, depending on a specific set of parameters, properties, and rendering hints.
RenderedImage is a common interface for objects which contain or can produce image data in the form of Rasters. The image data may be stored/produced as a single tile or a regular array of tiles.
An ImageFilter class for scaling images using the simplest algorithm. This class extends the basic ImageFilter class to scale an existing image and provide a source for a new image containing the resampled image. The pixels in the source image are sampled to produce pixels for an image of the specified size by replicating rows and columns of pixels to scale up or omitting rows and columns of pixels to scale down. It is meant to be used in conjunction with a FilteredImageSource object to produce scaled versions of existing images. Due to implementation dependencies, there may be differences in pixel values of an image filtered on different platforms.
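A minimal sketch of that usage (the Component c and Image img are assumed to come from elsewhere):

import java.awt.Component;
import java.awt.Image;
import java.awt.image.FilteredImageSource;
import java.awt.image.ReplicateScaleFilter;

public class ReplicateScaleSketch {
    // Produce a 100x100 resampled version of img.
    static Image scaleTo100(Component c, Image img) {
        ReplicateScaleFilter filter = new ReplicateScaleFilter(100, 100);
        return c.createImage(
                new FilteredImageSource(img.getSource(), filter));
    }
}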
This class performs a pixel-by-pixel rescaling of the data in the source image by multiplying the sample values for each pixel by a scale factor and then adding an offset. The scaled sample values are clipped to the minimum/maximum representable in the destination image.
The pseudo code for the rescaling operation is as follows:
for each pixel from Source object {
    for each band/component of the pixel {
        dstElement = (srcElement * scaleFactor) + offset
    }
}
For Rasters, rescaling operates on bands. The number of sets of scaling constants may be one, in which case the same constants are applied to all bands, or it must equal the number of Source Raster bands.
For BufferedImages, rescaling operates on color and alpha components. The number of sets of scaling constants may be one, in which case the same constants are applied to all color (but not alpha) components. Otherwise, the number of sets of scaling constants may equal the number of Source color components, in which case no rescaling of the alpha component (if present) is performed. If neither of these cases apply, the number of sets of scaling constants must equal the number of Source color components plus alpha components, in which case all color and alpha components are rescaled.
BufferedImage sources with premultiplied alpha data are treated in the same manner as non-premultiplied images for purposes of rescaling. That is, the rescaling is done per band on the raw data of the BufferedImage source without regard to whether the data is premultiplied. If a color conversion is required to the destination ColorModel, the premultiplied state of both source and destination will be taken into account for this step.
Images with an IndexColorModel cannot be rescaled.
If a RenderingHints object is specified in the constructor, the color rendering hint and the dithering hint may be used when color conversion is required.
Note that in-place operation is allowed (i.e. the source and destination can be the same object).
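For instance, here is a sketch that brightens a BufferedImage with a single scale/offset pair, so the same constants apply to all color components and alpha is left alone; the factor and offset values are arbitrary:

import java.awt.image.BufferedImage;
import java.awt.image.RescaleOp;

public class RescaleSketch {
    static BufferedImage brighten(BufferedImage src) {
        // One set of constants: dst = src * 1.2 + 15, clipped to range.
        RescaleOp op = new RescaleOp(1.2f, 15.0f, null);
        // Passing src as the destination would rescale in place;
        // passing null lets the op allocate a compatible destination.
        return op.filter(src, null);
    }
}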
This class provides an easy way to create an ImageFilter which modifies the pixels of an image in the default RGB ColorModel. It is meant to be used in conjunction with a FilteredImageSource object to produce filtered versions of existing images. It is an abstract class that provides the calls needed to channel all of the pixel data through a single method which converts pixels one at a time in the default RGB ColorModel regardless of the ColorModel being used by the ImageProducer. The only method which needs to be defined to create a usable image filter is the filterRGB method. Here is an example of a definition of a filter which swaps the red and blue components of an image:
class RedBlueSwapFilter extends RGBImageFilter {
    public RedBlueSwapFilter() {
        // The filter's operation does not depend on the
        // pixel's location, so IndexColorModels can be
        // filtered directly.
        canFilterIndexColorModel = true;
    }

    public int filterRGB(int x, int y, int rgb) {
        return ((rgb & 0xff00ff00)
                | ((rgb & 0xff0000) >> 16)
                | ((rgb & 0xff) << 16));
    }
}
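To put the filter to work, it is wired to an image source through FilteredImageSource, roughly as sketched below; the Component used to create the resulting Image is an assumption:

import java.awt.Component;
import java.awt.Image;
import java.awt.image.FilteredImageSource;
import java.awt.image.ImageProducer;

public class RedBlueSwapUsage {
    // Produce a red/blue-swapped version of an existing image,
    // using the RedBlueSwapFilter defined above.
    static Image swapRedBlue(Component c, Image img) {
        ImageProducer producer = new FilteredImageSource(
                img.getSource(), new RedBlueSwapFilter());
        return c.createImage(producer);
    }
}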
This abstract class defines an interface for extracting samples of pixels in an image. All image data is expressed as a collection of pixels. Each pixel consists of a number of samples. A sample is a datum for one band of an image and a band consists of all samples of a particular type in an image. For example, a pixel might contain three samples representing its red, green and blue components. There are three bands in the image containing this pixel. One band consists of all the red samples from all pixels in the image. The second band consists of all the green samples and the remaining band consists of all of the blue samples. The pixel can be stored in various formats. For example, all samples from a particular band can be stored contiguously or all samples from a single pixel can be stored contiguously.
Subclasses of SampleModel specify the types of samples they can represent (e.g. unsigned 8-bit byte, signed 16-bit short, etc.) and may specify how the samples are organized in memory. In the Java 2D(tm) API, built-in image processing operators may not operate on all possible sample types, but generally will work for unsigned integral samples of 16 bits or less. Some operators support a wider variety of sample types.
A collection of pixels is represented as a Raster, which consists of a DataBuffer and a SampleModel. The SampleModel allows access to samples in the DataBuffer and may provide low-level information that a programmer can use to directly manipulate samples and pixels in the DataBuffer.
This class is generally a fallback for dealing with images. More efficient code will cast the SampleModel to the appropriate subclass and extract the information needed to directly manipulate pixels in the DataBuffer.
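As a small sketch of the general, subclass-independent access path, this reads one sample of a Raster through its SampleModel and DataBuffer:

import java.awt.image.Raster;

public class SampleModelSketch {
    // Read the sample for band b of the pixel at (x, y), expressed in
    // the Raster's coordinate system. The translation converts Raster
    // coordinates to SampleModel coordinates.
    static int sampleAt(Raster raster, int x, int y, int b) {
        return raster.getSampleModel().getSample(
                x - raster.getSampleModelTranslateX(),
                y - raster.getSampleModelTranslateY(),
                b,
                raster.getDataBuffer());
    }
}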
This class defines a lookup table object. The output of a lookup operation using an object of this class is interpreted as an unsigned short quantity. The lookup table contains short data arrays for one or more bands (or components) of an image, and it contains an offset which will be subtracted from the input values before indexing the arrays. This allows an array smaller than the native data size to be provided for a constrained input. If there is only one array in the lookup table, it will be applied to all bands.
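As a sketch, a single-array table (applied to all bands) that inverts 8-bit samples might be built like this:

import java.awt.image.ShortLookupTable;

public class LookupTableSketch {
    static ShortLookupTable invertingTable() {
        short[] invert = new short[256];
        for (int i = 0; i < 256; i++) {
            invert[i] = (short) (255 - i);
        }
        // Offset 0: input values index the array directly.
        return new ShortLookupTable(0, invert);
    }
}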
This class represents pixel data packed such that the N samples which make up a single pixel are stored in a single data array element, and each data array element holds samples for only one pixel. This class supports the TYPE_BYTE, TYPE_USHORT, and TYPE_INT data types. All data array elements reside in the first bank of a DataBuffer. Accessor methods are provided so that the image data can be manipulated directly. Scanline stride is the number of data array elements between a given sample and the corresponding sample in the same column of the next scanline. Bit masks are the masks required to extract the samples representing the bands of the pixel. Bit offsets are the offsets in bits into the data array element of the samples representing the bands of the pixel.
The following code illustrates extracting the bits of the sample representing band b for pixel x,y from DataBuffer data:
int sample = data.getElem(y * scanlineStride + x);
sample = (sample & bitMasks[b]) >>> bitOffsets[b];
An interface for objects that wish to be informed when tiles of a WritableRenderedImage become modifiable by some writer via a call to getWritableTile, and when they become unmodifiable via the last call to releaseWritableTile.
VolatileImage is an image which can lose its contents at any time due to circumstances beyond the control of the application (e.g., situations caused by the operating system or by other applications). Because of the potential for hardware acceleration, a VolatileImage object can have significant performance benefits on some platforms.
The drawing surface of an image (the memory where the image contents actually reside) can be lost or invalidated, causing the contents of that memory to go away. The drawing surface thus needs to be restored or recreated and the contents of that surface need to be re-rendered. VolatileImage provides an interface for allowing the user to detect these problems and fix them when they occur.
When a VolatileImage object is created, limited system resources such as video memory (VRAM) may be allocated in order to support the image. When a VolatileImage object is no longer used, it may be garbage-collected and those system resources will be returned, but this process does not happen at guaranteed times. Applications that create many VolatileImage objects (for example, a resizing window may force recreation of its back buffer as the size changes) may run out of optimal system resources for new VolatileImage objects simply because the old objects have not yet been removed from the system. (New VolatileImage objects may still be created, but they may not perform as well as those created in accelerated memory). The flush method may be called at any time to proactively release the resources used by a VolatileImage so that it does not prevent subsequent VolatileImage objects from being accelerated. In this way, applications can have more control over the state of the resources taken up by obsolete VolatileImage objects.
This image should not be subclassed directly but should be created by using the Component.createVolatileImage or GraphicsConfiguration.createCompatibleVolatileImage(int, int) methods.
An example of using a VolatileImage object follows:
// image creation
VolatileImage vImg = createVolatileImage(w, h);

// rendering to the image
void renderOffscreen() {
    do {
        if (vImg.validate(getGraphicsConfiguration()) ==
                VolatileImage.IMAGE_INCOMPATIBLE) {
            // old vImg doesn't work with new GraphicsConfig; re-create it
            vImg = createVolatileImage(w, h);
        }
        Graphics2D g = vImg.createGraphics();
        //
        // miscellaneous rendering commands...
        //
        g.dispose();
    } while (vImg.contentsLost());
}

// copying from the image (here, gScreen is the Graphics
// object for the onscreen window)
do {
    int returnCode = vImg.validate(getGraphicsConfiguration());
    if (returnCode == VolatileImage.IMAGE_RESTORED) {
        // Contents need to be restored
        renderOffscreen();      // restore contents
    } else if (returnCode == VolatileImage.IMAGE_INCOMPATIBLE) {
        // old vImg doesn't work with new GraphicsConfig; re-create it
        vImg = createVolatileImage(w, h);
        renderOffscreen();
    }
    gScreen.drawImage(vImg, 0, 0, this);
} while (vImg.contentsLost());
Note that this class subclasses from the Image class, which includes methods that take an ImageObserver parameter for asynchronous notifications as information is received from a potential ImageProducer. Since this VolatileImage is not loaded from an asynchronous source, the various methods that take an ImageObserver parameter will behave as if the data has already been obtained from the ImageProducer. Specifically, this means that the return values from such methods will never indicate that the information is not yet available and the ImageObserver used in such methods will never need to be recorded for an asynchronous callback notification.
This class extends Raster to provide pixel writing capabilities. Refer to the class comment for Raster for descriptions of how a Raster stores pixels.
The constructors of this class are protected. To instantiate a WritableRaster, use one of the createWritableRaster factory methods in the Raster class.
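A minimal sketch of such a factory call, creating a 3-band interleaved raster and writing one pixel; the dimensions and pixel values are arbitrary:

import java.awt.Point;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class WritableRasterSketch {
    static WritableRaster makeRaster() {
        WritableRaster wr = Raster.createInterleavedRaster(
                DataBuffer.TYPE_BYTE, 100, 100, 3, new Point(0, 0));
        wr.setPixel(0, 0, new int[] { 255, 128, 0 });  // write one pixel
        return wr;
    }
}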
WritableRenderedImage is a common interface for objects which contain or can produce image data in the form of Rasters and which can be modified and/or written over. The image data may be stored/produced as a single tile or a regular array of tiles.
WritableRenderedImage provides notification to other interested objects when a tile is checked out for writing (via the getWritableTile method) and when the last writer of a particular tile relinquishes its access (via a call to releaseWritableTile). Additionally, it allows any caller to determine whether any tiles are currently checked out (via hasTileWriters), and to obtain a list of such tiles (via getWritableTileIndices, in the form of a Vector of Point objects).
Objects wishing to be notified of changes in tile writability must implement the TileObserver interface, and are added by a call to addTileObserver. Multiple calls to addTileObserver for the same object will result in multiple notifications. An existing observer may reduce its notifications by calling removeTileObserver; if the observer had no notifications the operation is a no-op.
It is necessary for a WritableRenderedImage to ensure that notifications occur only when the first writer acquires a tile and the last writer releases it.
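A sketch of the checkout/release protocol from a writer's point of view; the sample written here is arbitrary:

import java.awt.image.WritableRaster;
import java.awt.image.WritableRenderedImage;

public class TileWriterSketch {
    static void clearOneSample(WritableRenderedImage img, int tx, int ty) {
        // Observers are notified if this is the first checkout of the tile.
        WritableRaster tile = img.getWritableTile(tx, ty);
        try {
            tile.setSample(tile.getMinX(), tile.getMinY(), 0, 0);
        } finally {
            // Observers are notified again when the last writer releases.
            img.releaseWritableTile(tx, ty);
        }
    }
}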
Capabilities and properties of images.
An Insets object is a representation of the borders of a container. It specifies the space that a container must leave at each of its edges. The space can be a border, a blank space, or a title.
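For example (a sketch), the insets are typically consulted when computing the space actually available to a container's children:

import java.awt.Container;
import java.awt.Insets;

public class InsetsSketch {
    // The width left over after subtracting the container's borders.
    static int usableWidth(Container c) {
        Insets in = c.getInsets();
        return c.getWidth() - in.left - in.right;
    }
}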
The interface for objects which contain a set of items for which zero or more can be selected.
A set of attributes which control a print job.
Instances of this class control the number of copies, default selection, destination, print dialog, file and printer names, page ranges, multiple document handling (including collation), and multi-page imposition (such as duplex) of every print job which uses the instance. Attribute names are compliant with the Internet Printing Protocol (IPP) 1.1 where possible. Attribute values are partially compliant where possible.
To use a method which takes an inner class type, pass a reference to one of the constant fields of the inner class. Client code cannot create new instances of the inner class types because none of those classes has a public constructor. For example, to set the print dialog type to the cross-platform, pure Java print dialog, use the following code:
import java.awt.JobAttributes;

public class PureJavaPrintDialogExample {
    public void setPureJavaPrintDialog(JobAttributes jobAttributes) {
        jobAttributes.setDialog(JobAttributes.DialogType.COMMON);
    }
}
Every IPP attribute which supports an attributeName-default value has a corresponding set<attributeName>ToDefault method. Default value fields are not provided.
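For example, since the copies attribute supports a copies-default value, the class provides setCopiesToDefault(); a small sketch:

import java.awt.JobAttributes;

public class JobAttributesDefaults {
    static void resetCopies(JobAttributes attributes) {
        // Restore the number of copies to its default value.
        attributes.setCopiesToDefault();
    }
}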
A type-safe enumeration of possible default selection states.
A type-safe enumeration of possible job destinations.
A type-safe enumeration of possible dialogs to display to the user.
A type-safe enumeration of possible multiple copy handling states. It is used to control how the sheets of multiple copies of a single document are collated.
A type-safe enumeration of possible multi-page impositions. These impositions are in compliance with IPP 1.1.
The KeyboardFocusManager is responsible for managing the active and focused Windows, and the current focus owner. The focus owner is defined as the Component in an application that will typically receive all KeyEvents generated by the user. The focused Window is the Window that is, or contains, the focus owner. Only a Frame or a Dialog can be the active Window. The native windowing system may denote the active Window or its children with special decorations, such as a highlighted title bar. The active Window is always either the focused Window, or the first Frame or Dialog that is an owner of the focused Window.
The KeyboardFocusManager is both a centralized location for client code to query for the focus owner and initiate focus changes, and an event dispatcher for all FocusEvents, WindowEvents related to focus, and KeyEvents.
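A sketch of the query side of that role:

import java.awt.Component;
import java.awt.KeyboardFocusManager;
import java.awt.Window;

public class FocusQuerySketch {
    static void printFocusState() {
        KeyboardFocusManager kfm =
            KeyboardFocusManager.getCurrentKeyboardFocusManager();
        Component owner = kfm.getFocusOwner();
        Window focused = kfm.getFocusedWindow();
        Window active = kfm.getActiveWindow();
        System.out.println("owner=" + owner
                + ", focused=" + focused + ", active=" + active);
    }
}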
Some browsers partition applets in different code bases into separate contexts, and establish walls between these contexts. In such a scenario, there will be one KeyboardFocusManager per context. Other browsers place all applets into the same context, implying that there will be only a single, global KeyboardFocusManager for all applets. This behavior is implementation-dependent. Consult your browser's documentation for more information. No matter how many contexts there may be, however, there can never be more than one focus owner, focused Window, or active Window, per ClassLoader.
Please see How to Use the Focus Subsystem, a section in The Java Tutorial, and the Focus Specification for more information.
A KeyEventDispatcher cooperates with the current KeyboardFocusManager in the targeting and dispatching of all KeyEvents. KeyEventDispatchers registered with the current KeyboardFocusManager will receive KeyEvents before they are dispatched to their targets, allowing each KeyEventDispatcher to retarget the event, consume it, dispatch the event itself, or make other changes.
Note that KeyboardFocusManager itself implements KeyEventDispatcher. By default, the current KeyboardFocusManager will be the sink for all KeyEvents not dispatched by the registered KeyEventDispatchers. The current KeyboardFocusManager cannot be completely deregistered as a KeyEventDispatcher. However, if a KeyEventDispatcher reports that it dispatched the KeyEvent, regardless of whether it actually did so, the KeyboardFocusManager will take no further action with regard to the KeyEvent. (While it is possible for client code to register the current KeyboardFocusManager as a KeyEventDispatcher one or more times, this is usually unnecessary and not recommended.)
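As a sketch, a dispatcher that consumes every F1 key event before it reaches its target; the choice of key is arbitrary:

import java.awt.KeyEventDispatcher;
import java.awt.KeyboardFocusManager;
import java.awt.event.KeyEvent;

public class DispatcherSketch {
    static void install() {
        KeyboardFocusManager.getCurrentKeyboardFocusManager()
            .addKeyEventDispatcher(new KeyEventDispatcher() {
                public boolean dispatchKeyEvent(KeyEvent e) {
                    if (e.getKeyCode() == KeyEvent.VK_F1) {
                        e.consume();
                        return true;  // handled; stop further dispatching
                    }
                    return false;     // continue normal dispatching
                }
            });
    }
}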
A KeyEventPostProcessor cooperates with the current KeyboardFocusManager in the final resolution of all unconsumed KeyEvents. KeyEventPostProcessors registered with the current KeyboardFocusManager will receive KeyEvents after the KeyEvents have been dispatched to and handled by their targets. KeyEvents that would have been otherwise discarded because no Component in the application currently owns the focus will also be forwarded to registered KeyEventPostProcessors. This will allow applications to implement features that require global KeyEvent post-handling, such as menu shortcuts.
Note that the KeyboardFocusManager itself implements KeyEventPostProcessor. By default, the current KeyboardFocusManager will be the final KeyEventPostProcessor in the chain. The current KeyboardFocusManager cannot be completely deregistered as a KeyEventPostProcessor. However, if a KeyEventPostProcessor reports that no further post-processing of the KeyEvent should take place, the AWT will consider the event fully handled and will take no additional action with regard to the event. (While it is possible for client code to register the current KeyboardFocusManager as a KeyEventPostProcessor one or more times, this is usually unnecessary and not recommended.)
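A sketch of a post-processor implementing a global shortcut; the key combination and the reaction are placeholders:

import java.awt.KeyEventPostProcessor;
import java.awt.KeyboardFocusManager;
import java.awt.event.KeyEvent;

public class PostProcessorSketch {
    static void install() {
        KeyboardFocusManager.getCurrentKeyboardFocusManager()
            .addKeyEventPostProcessor(new KeyEventPostProcessor() {
                public boolean postProcessKeyEvent(KeyEvent e) {
                    if (!e.isConsumed()
                            && e.getID() == KeyEvent.KEY_PRESSED
                            && e.isControlDown()
                            && e.getKeyCode() == KeyEvent.VK_M) {
                        // react to the global shortcut here
                        e.consume();
                        return true;  // no further post-processing needed
                    }
                    return false;
                }
            });
    }
}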
A Label object is a component for placing text in a container. A label displays a single line of read-only text. The text can be changed by the application, but a user cannot edit it directly.
For example, the code . . .
setLayout(new FlowLayout(FlowLayout.CENTER, 10, 10));
add(new Label("Hi There!"));
add(new Label("Another Label"));
produces two labels, "Hi There!" and "Another Label", centered in the container.
Defines the interface for classes that know how to lay out Containers.
Swing's painting architecture assumes the children of a JComponent do not overlap. If a JComponent's LayoutManager allows children to overlap, the JComponent must override isOptimizedDrawingEnabled to return false.
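A sketch of that override for a component whose layout allows overlap:

import javax.swing.JComponent;

public class OverlappingPanel extends JComponent {
    // Children managed by this component may overlap, so Swing must
    // not assume optimized (non-overlapping) drawing.
    public boolean isOptimizedDrawingEnabled() {
        return false;
    }
}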
Defines an interface for classes that know how to lay out Containers based on a layout constraints object.
This interface extends the LayoutManager interface to deal with layouts explicitly in terms of constraint objects that specify how and where components should be added to the layout.
This minimal extension to LayoutManager is intended for tool providers who wish to support the creation of constraint-based layouts. It does not yet provide full, general support for custom constraint-based layout managers.
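BorderLayout is a familiar LayoutManager2; as a sketch, the constraint object is supplied when the component is added:

import java.awt.BorderLayout;
import java.awt.Button;
import java.awt.Frame;

public class ConstraintSketch {
    static void layout(Frame f) {
        f.setLayout(new BorderLayout());
        // The constraint object (BorderLayout.SOUTH) tells the layout
        // manager how and where to place the component.
        f.add(new Button("OK"), BorderLayout.SOUTH);
    }
}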
The LinearGradientPaint class provides a way to fill a Shape with a linear color gradient pattern. The user may specify two or more gradient colors, and this paint will provide an interpolation between each color. The user also specifies start and end points which define where in user space the color gradient should begin and end.
The user must provide an array of floats specifying how to distribute the colors along the gradient. These values should range from 0.0 to 1.0 and act like keyframes along the gradient (they mark where the gradient should be exactly a particular color).
In the event that the user does not set the first keyframe value equal to 0 and/or the last keyframe value equal to 1, keyframes will be created at these positions and the first and last colors will be replicated there. So, if a user specifies the following arrays to construct a gradient:
{Color.BLUE, Color.RED}, {.3f, .7f}
this will be converted to a gradient with the following keyframes:
{Color.BLUE, Color.BLUE, Color.RED, Color.RED}, {0f, .3f, .7f, 1f}
The user may also select what action the LinearGradientPaint object takes when it is filling the space outside the start and end points by setting CycleMethod to either REFLECTION or REPEAT. The distances between any two colors in any of the reflected or repeated copies of the gradient are the same as the distance between those same two colors between the start and end points. Note that some minor variations in distances may occur due to sampling at the granularity of a pixel. If no cycle method is specified, NO_CYCLE will be chosen by default, which means the endpoint colors will be used to fill the remaining area.
The colorSpace parameter allows the user to specify in which colorspace the interpolation should be performed, default sRGB or linearized RGB.
The following code demonstrates typical usage of LinearGradientPaint:
Point2D start = new Point2D.Float(0, 0);
Point2D end = new Point2D.Float(50, 50);
float[] dist = {0.0f, 0.2f, 1.0f};
Color[] colors = {Color.RED, Color.WHITE, Color.BLUE};
LinearGradientPaint p =
new LinearGradientPaint(start, end, dist, colors);
This code will create a LinearGradientPaint which interpolates between red and white for the first 20% of the gradient and between white and blue for the remaining 80%.
When rendered with each of the three cycle methods, the example gradient fills the area beyond the start and end points with the solid endpoint colors (no cycle), with mirrored copies of the gradient (reflection), or with repeated copies of it (repeat).
The List component presents the user with a scrolling list of text items. The list can be set up so that the user can choose either one item or multiple items.
For example, the code . . .
List lst = new List(4, false);
lst.add("Mercury");
lst.add("Venus");
lst.add("Earth");
lst.add("JavaSoft");
lst.add("Mars");
lst.add("Jupiter");
lst.add("Saturn");
lst.add("Uranus");
lst.add("Neptune");
lst.add("Pluto");
cnt.add(lst);
where cnt is a container, produces a scrolling list that displays four of the items at a time.
If the List allows multiple selections, then clicking on an item that is already selected deselects it. In the preceding example, only one item from the scrolling list can be selected at a time, since the second argument when creating the new scrolling list is false. If the List does not allow multiple selections, selecting an item causes any other selected item to be deselected.
Note that the list in the example shown was created with four visible rows. Once the list has been created, the number of visible rows cannot be changed. A default List is created with four rows, so that lst = new List() is equivalent to lst = new List(4, false).
Beginning with Java 1.1, the Abstract Window Toolkit sends the List object all mouse, keyboard, and focus events that occur over it. (The old AWT event model is being maintained only for backwards compatibility, and its use is discouraged.)
When an item is selected or deselected by the user, AWT sends an instance of ItemEvent to the list. When the user double-clicks on an item in a scrolling list, AWT sends an instance of ActionEvent to the list following the item event. AWT also generates an action event when the user presses the return key while an item in the list is selected.
If an application wants to perform some action based on an item in this list being selected or activated by the user, it should implement ItemListener or ActionListener as appropriate and register the new listener to receive events from this list.
For multiple-selection scrolling lists, it is considered a better user interface to use an external gesture (such as clicking on a button) to trigger the action.
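A sketch of registering both listeners on a List; the reactions shown are placeholders:

import java.awt.List;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;

public class ListListenerSketch {
    static void listen(List lst) {
        // Fired when an item is selected or deselected.
        lst.addItemListener(new ItemListener() {
            public void itemStateChanged(ItemEvent e) {
                System.out.println("selection changed: " + e.getItem());
            }
        });
        // Fired on double-click, or on return while an item is selected.
        lst.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                System.out.println("activated: " + e.getActionCommand());
            }
        });
    }
}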
The MediaTracker class is a utility class to track the status of a number of media objects. Media objects could include audio clips as well as images, though currently only images are supported.
To use a media tracker, create an instance of MediaTracker and call its addImage method for each image to be tracked. In addition, each image can be assigned a unique identifier. This identifier controls the priority order in which the images are fetched. It can also be used to identify unique subsets of the images that can be waited on independently. Images with a lower ID are loaded in preference to those with a higher ID number.
Tracking an animated image might not always be useful due to the multi-part nature of animated image loading and painting, but it is supported. MediaTracker treats an animated image as completely loaded when the first frame is completely loaded. At that point, the MediaTracker signals any waiters that the image is completely loaded. If no ImageObservers are observing the image when the first frame has finished loading, the image might flush itself to conserve resources (see Image.flush()).
Here is an example of using MediaTracker:
import java.applet.Applet;
import java.awt.Color;
import java.awt.Image;
import java.awt.Graphics;
import java.awt.MediaTracker;

public class ImageBlaster extends Applet implements Runnable {
    MediaTracker tracker;
    Image bg;
    Image anim[] = new Image[5];
    int index;
    Thread animator;

    // Get the images for the background (id == 0)
    // and the animation frames (id == 1)
    // and add them to the MediaTracker
    public void init() {
        tracker = new MediaTracker(this);
        bg = getImage(getDocumentBase(),
                      "images/background.gif");
        tracker.addImage(bg, 0);
        for (int i = 0; i < 5; i++) {
            anim[i] = getImage(getDocumentBase(),
                               "images/anim"+i+".gif");
            tracker.addImage(anim[i], 1);
        }
    }

    // Start the animation thread.
    public void start() {
        animator = new Thread(this);
        animator.start();
    }

    // Stop the animation thread.
    public void stop() {
        animator = null;
    }

    // Run the animation thread.
    // First wait for the background image to fully load
    // and paint. Then wait for all of the animation
    // frames to finish loading. Finally, loop and
    // increment the animation frame index.
    public void run() {
        try {
            tracker.waitForID(0);
            tracker.waitForID(1);
        } catch (InterruptedException e) {
            return;
        }
        Thread me = Thread.currentThread();
        while (animator == me) {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                break;
            }
            synchronized (this) {
                index++;
                if (index >= anim.length) {
                    index = 0;
                }
            }
            repaint();
        }
    }

    // The background image fills the frame so we
    // don't need to clear the applet on repaints.
    // Just call the paint method.
    public void update(Graphics g) {
        paint(g);
    }

    // Paint a large red rectangle if there are any errors
    // loading the images. Otherwise always paint the
    // background so that it appears incrementally as it
    // is loading. Finally, only paint the current animation
    // frame if all of the frames (id == 1) are done loading,
    // so that we don't get partial animations.
    public void paint(Graphics g) {
        if ((tracker.statusAll(false) & MediaTracker.ERRORED) != 0) {
            g.setColor(Color.red);
            g.fillRect(0, 0, size().width, size().height);
            return;
        }
        g.drawImage(bg, 0, 0, this);
        if (tracker.statusID(1, false) == MediaTracker.COMPLETE) {
            g.drawImage(anim[index], 10, 10, this);
        }
    }
}
A Menu object is a pull-down menu component that is deployed from a menu bar.
A menu can optionally be a tear-off menu. A tear-off menu can be opened and dragged away from its parent menu bar or menu. It remains on the screen after the mouse button has been released. The mechanism for tearing off a menu is platform dependent, since the look and feel of the tear-off menu is determined by its peer. On platforms that do not support tear-off menus, the tear-off property is ignored.
Each item in a menu must belong to the MenuItem class. It can be an instance of MenuItem, a submenu (an instance of Menu), or a check box (an instance of CheckboxMenuItem).
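For illustration, here is a minimal sketch of assembling a menu from these item types (the labels are invented):

Menu fileMenu = new Menu("File", true);          // true requests a tear-off menu where supported
fileMenu.add(new MenuItem("Open..."));           // a simple labeled item
fileMenu.add(new CheckboxMenuItem("Read Only")); // a check box item
fileMenu.addSeparator();
fileMenu.add(new Menu("Recent Files"));          // a submenu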
The MenuBar class encapsulates the platform's concept of a menu bar bound to a frame. In order to associate the menu bar with a Frame object, call the frame's setMenuBar method.
This is what a menu bar might look like:
A menu bar handles keyboard shortcuts for menu items, passing them along to its child menus. (Keyboard shortcuts, which are optional, provide the user with an alternative to the mouse for invoking a menu item and the action that is associated with it.) Each menu item can maintain an instance of MenuShortcut. The MenuBar class defines several methods, shortcuts() and getShortcutMenuItem(java.awt.MenuShortcut), that retrieve information about the shortcuts a given menu bar is managing.
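For example, a menu bar holding one shortcut-enabled item might be assembled as follows (the frame and labels are illustrative; MenuShortcut and KeyEvent come from java.awt and java.awt.event):

Frame frame = new Frame("Example");
MenuBar menuBar = new MenuBar();
Menu fileMenu = new Menu("File");
// The menu bar passes the shortcut (platform-dependent accelerator + S) to this item.
fileMenu.add(new MenuItem("Save", new MenuShortcut(KeyEvent.VK_S)));
menuBar.add(fileMenu);
frame.setMenuBar(menuBar);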
The abstract class MenuComponent is the superclass of all menu-related components. In this respect, the class MenuComponent is analogous to the abstract superclass Component for AWT components.
Menu components receive and process AWT events, just as components do, through the method processEvent.
The superclass of all menu-related containers.
All items in a menu must belong to the class MenuItem, or one of its subclasses.
The default MenuItem object embodies a simple labeled menu item.
This picture of a menu bar shows five menu items:
The first two items are simple menu items, labeled "Basic" and "Simple". Following these two items is a separator, which is itself a menu item, created with the label "-". Next is an instance of CheckboxMenuItem labeled "Check". The final menu item is a submenu labeled "More Examples", and this submenu is an instance of Menu.
When a menu item is selected, AWT sends an action event to the menu item. Since the event is an instance of ActionEvent, the processEvent method examines the event and passes it along to processActionEvent. The latter method redirects the event to any ActionListener objects that have registered an interest in action events generated by this menu item.
Note that the subclass Menu overrides this behavior and does not send any event to the frame until one of its subitems is selected.
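For example, a listener might be registered like this (the label and handler are illustrative):

MenuItem saveItem = new MenuItem("Save");
// The action command defaults to the item's label.
saveItem.addActionListener(e -> System.out.println("Selected: " + e.getActionCommand()));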
The MenuShortcut class represents a keyboard accelerator for a MenuItem.
Menu shortcuts are created using virtual keycodes, not characters. For example, a menu shortcut for Ctrl-a (assuming that Control is the accelerator key) would be created with code like the following:
MenuShortcut ms = new MenuShortcut(KeyEvent.VK_A, false);
or alternatively
MenuShortcut ms = new MenuShortcut(KeyEvent.getExtendedKeyCodeForChar('A'), false);
Menu shortcuts may also be constructed for a wider set of keycodes using the java.awt.event.KeyEvent.getExtendedKeyCodeForChar call. For example, a menu shortcut for "Ctrl+cyrillic ef" is created by
MenuShortcut ms = new MenuShortcut(KeyEvent.getExtendedKeyCodeForChar('ф'), false);
Note that shortcuts created with a keycode or an extended keycode defined as a constant in KeyEvent work regardless of the current keyboard layout. However, a shortcut made of an extended keycode not listed in KeyEvent only works if the current keyboard layout produces a corresponding letter.
The accelerator key is platform-dependent and may be obtained via Toolkit.getMenuShortcutKeyMask().
MouseInfo provides methods for getting information about the mouse, such as mouse pointer location and the number of mouse buttons.
This is the superclass for Paints which use a multiple color gradient to fill in their raster. It provides storage for variables and enumerated values common to LinearGradientPaint and RadialGradientPaint.
A set of attributes which control the output of a printed page.
Instances of this class control the color state, paper size (media type), orientation, logical origin, print quality, and resolution of every page which uses the instance. Attribute names are compliant with the Internet Printing Protocol (IPP) 1.1 where possible. Attribute values are partially compliant where possible.
To use a method which takes an inner class type, pass a reference to one of the constant fields of the inner class. Client code cannot create new instances of the inner class types because none of those classes has a public constructor. For example, to set the color state to monochrome, use the following code:
import java.awt.PageAttributes;

public class MonochromeExample {
    public void setMonochrome(PageAttributes pageAttributes) {
        pageAttributes.setColor(PageAttributes.ColorType.MONOCHROME);
    }
}
Every IPP attribute which supports an <attributeName>-default value has a corresponding set<attributeName>ToDefault method. Default value fields are not provided.
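For instance, given an existing PageAttributes instance, selected attributes could be reset to their defaults like this:

pageAttributes.setMediaToDefault();
pageAttributes.setOrientationRequestedToDefault();
pageAttributes.setPrintQualityToDefault();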
A type-safe enumeration of possible color states.
A type-safe enumeration of possible paper sizes. These sizes are in compliance with IPP 1.1.
A type-safe enumeration of possible orientations. These orientations are in partial compliance with IPP 1.1.
A type-safe enumeration of possible origins.
A type-safe enumeration of possible print qualities. These print qualities are in compliance with IPP 1.1.
This Paint interface defines how color patterns can be generated for Graphics2D operations. A class implementing the Paint interface is added to the Graphics2D context in order to define the color pattern used by the draw and fill methods.
Instances of classes implementing Paint must be read-only because the Graphics2D does not clone these objects when they are set as an attribute with the setPaint method or when the Graphics2D object is itself cloned.
The PaintContext interface defines the encapsulated and optimized environment to generate color patterns in device space for fill or stroke operations on a Graphics2D. The PaintContext provides the necessary colors for Graphics2D operations in the form of a Raster associated with a ColorModel. The PaintContext maintains state for a particular paint operation. In a multi-threaded environment, several contexts can exist simultaneously for a single Paint object.
Panel is the simplest container class. A panel provides space in which an application can attach any other component, including other panels.
The default layout manager for a panel is the FlowLayout layout manager.
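A minimal sketch (the button labels are arbitrary):

Panel buttonPanel = new Panel();          // uses FlowLayout by default
buttonPanel.add(new Button("OK"));
buttonPanel.add(new Button("Cancel"));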
A point representing a location in (x,y) coordinate space, specified in integer precision.
A class that describes the pointer position. It provides the GraphicsDevice where the pointer is and the Point that represents the coordinates of the pointer.
Instances of this class should be obtained via MouseInfo.getPointerInfo(). The PointerInfo instance is not updated dynamically as the mouse moves. To get the updated location, you must call MouseInfo.getPointerInfo() again.
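For example, polling the current pointer position might look like this:

PointerInfo info = MouseInfo.getPointerInfo();   // a snapshot; call again for fresh data
Point location = info.getLocation();
GraphicsDevice device = info.getDevice();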
The Polygon class encapsulates a description of a closed, two-dimensional region within a coordinate space. This region is bounded by an arbitrary number of line segments, each of which is one side of the polygon. Internally, a polygon comprises a list of (x,y) coordinate pairs, where each pair defines a vertex of the polygon, and two successive pairs are the endpoints of a line that is a side of the polygon. The first and final pairs of (x,y) points are joined by a line segment that closes the polygon. This Polygon is defined with an even-odd winding rule. See WIND_EVEN_ODD for a definition of the even-odd winding rule. This class's hit-testing methods, which include the contains, intersects and inside methods, use the insideness definition described in the Shape class comments.
A class that implements a menu which can be dynamically popped up at a specified position within a component.
As the inheritance hierarchy implies, a PopupMenu can be used anywhere a Menu can be used. However, if you use a PopupMenu like a Menu (e.g., you add it to a MenuBar), then you cannot call show on that PopupMenu.
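A typical usage sketch, where parent is some visible component (hypothetical) and the coordinates are in its coordinate space:

PopupMenu popup = new PopupMenu();
popup.add(new MenuItem("Copy"));
popup.add(new MenuItem("Paste"));
parent.add(popup);            // the popup must be parented to a component
popup.show(parent, 10, 10);   // pops up at (10, 10) in parent's space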
The Book class provides a representation of a document in which pages may have different page formats and page painters. This class uses the Pageable interface to interact with a PrinterJob.
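As a hedged sketch, a Book mixing page formats could be assembled as follows (the painter here is a placeholder that draws nothing):

Book book = new Book();
PageFormat portrait = new PageFormat();
PageFormat landscape = new PageFormat();
landscape.setOrientation(PageFormat.LANDSCAPE);
Printable painter = (graphics, format, index) -> Printable.PAGE_EXISTS;  // placeholder painter
book.append(painter, portrait);          // page 0
book.append(painter, landscape, 2);      // pages 1 and 2 share one painter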
The Pageable implementation represents a set of pages to be printed. The Pageable object returns the total number of pages in the set as well as the PageFormat and Printable for a specified page.
The PageFormat class describes the size and orientation of a page to be printed.
The Paper class describes the physical characteristics of a piece of paper.
When creating a Paper object, it is the application's responsibility to ensure that the paper size and the imageable area are compatible. For example, if the paper size is changed from 11 x 17 to 8.5 x 11, the application might need to reduce the imageable area so that whatever is printed fits on the page.
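For example, an application might shrink the imageable area to leave a half-inch margin on all sides (dimensions are in 1/72 inch units):

Paper paper = new Paper();               // defaults to letter-sized paper
double margin = 36;                      // half an inch
paper.setImageableArea(margin, margin,
                       paper.getWidth() - 2 * margin,
                       paper.getHeight() - 2 * margin);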
The Printable interface is implemented by the print methods of the current page painter, which is called by the printing system to render a page. When building a Pageable, pairs of PageFormat instances and instances that implement this interface are used to describe each page. The instance implementing Printable is called to print the page's graphics.
A Printable(..) may be set on a PrinterJob. When the client subsequently initiates printing by calling PrinterJob.print(..), control is handed to the printing system until all pages have been printed. It does this by calling Printable.print(..) until all pages in the document have been printed. In using the Printable interface, the application commits to image the contents of a page whenever requested by the printing system.
The parameters to Printable.print(..) include a PageFormat which describes the printable area of the page, needed for calculating the contents that will fit the page, and the page index, which specifies the zero-based print stream index of the requested page.
For correct printing behaviour, the following points should be observed:
The printing system may request a page index more than once. On each occasion equal PageFormat parameters will be supplied.
The printing system will call Printable.print(..) with page indexes which increase monotonically, although, as noted above, the Printable should expect multiple calls for a page index, and page indexes may be skipped when page ranges are specified by the client or by a user through a print dialog.
If multiple collated copies of a document are requested, and the printer cannot natively support this, then the document may be imaged multiple times. Each copy will be printed starting from the page with the lowest print stream page index.
With the exception of re-imaging an entire document for multiple collated copies, the increasing page index order means that when page N is requested, a client that needs to calculate page break positions may safely discard any state related to pages < N and make the state for page N current. "State" usually is just the calculated position in the document that corresponds to the start of the page.
When called by the printing system the Printable must inspect and honour the supplied PageFormat parameter as well as the page index. The format of the page to be drawn is specified by the supplied PageFormat. The size, orientation and imageable area of the page are therefore already determined and rendering must be within this imageable area. This is key to correct printing behaviour, and it has the implication that the client has the responsibility of tracking what content belongs on the specified page.
When the Printable is obtained from a client-supplied Pageable then the client may provide different PageFormats for each page index. Calculations of page breaks must account for this.
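Putting these points together, a minimal one-page Printable might look like the following sketch (the class name and the drawn content are invented for illustration):

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.print.PageFormat;
import java.awt.print.Printable;
import java.awt.print.PrinterException;

public class HelloPrintable implements Printable {
    public int print(Graphics graphics, PageFormat pageFormat, int pageIndex)
            throws PrinterException {
        if (pageIndex > 0) {
            return NO_SUCH_PAGE;                 // only one page in this document
        }
        Graphics2D g2d = (Graphics2D) graphics;
        // Honour the supplied PageFormat: render within the imageable area only.
        g2d.translate(pageFormat.getImageableX(), pageFormat.getImageableY());
        g2d.drawString("Hello, printer", 72, 72);
        return PAGE_EXISTS;
    }
}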
The PrinterAbortException class is a subclass of PrinterException and is used to indicate that a user or application has terminated the print job while it was in the process of printing.
The PrinterException class and its subclasses are used to indicate that an exceptional condition has occurred in the print system.
The PrinterGraphics interface is implemented by Graphics objects that are passed to Printable objects to render a page. It allows an application to find the PrinterJob object that is controlling the printing.
The PrinterIOException class is a subclass of PrinterException and is used to indicate that an IO error of some sort has occurred while printing.
As of release 1.4, this exception has been retrofitted to conform to the general purpose exception-chaining mechanism. The "IOException that terminated the print job" that is provided at construction time and accessed via the getIOException() method is now known as the cause, and may be accessed via the Throwable.getCause() method, as well as the aforementioned "legacy method."
The PrinterJob class is the principal class that controls printing. An application calls methods in this class to set up a job, optionally to invoke a print dialog with the user, and then to print the pages of the job.
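A typical sequence, reusing the hypothetical HelloPrintable sketch from the Printable discussion above:

PrinterJob job = PrinterJob.getPrinterJob();
job.setPrintable(new HelloPrintable());
if (job.printDialog()) {              // showing the dialog is optional
    try {
        job.print();
    } catch (PrinterException e) {
        // the job did not successfully complete
    }
}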
An abstract class which provides a print graphics context for a page.
An abstract class which initiates and executes a print job. It provides access to a print graphics object which renders to an appropriate print device.
The RadialGradientPaint class provides a way to fill a shape with a circular radial color gradient pattern. The user may specify 2 or more gradient colors, and this paint will provide an interpolation between each color.
The user must specify the circle controlling the gradient pattern, which is described by a center point and a radius. The user can also specify a separate focus point within that circle, which controls the location of the first color of the gradient. By default the focus is set to be the center of the circle.
This paint will map the first color of the gradient to the focus point, and the last color to the perimeter of the circle, interpolating smoothly for any in-between colors specified by the user. Any line drawn from the focus point to the circumference will thus span all the gradient colors.
Specifying a focus point outside of the radius of the circle will cause the rings of the gradient pattern to be centered on the point just inside the edge of the circle in the direction of the focus point. The rendering will internally use this modified location as if it were the specified focus point.
The user must provide an array of floats specifying how to distribute the colors along the gradient. These values should range from 0.0 to 1.0 and act like keyframes along the gradient (they mark where the gradient should be exactly a particular color).
In the event that the user does not set the first keyframe value equal to 0 and/or the last keyframe value equal to 1, keyframes will be created at these positions and the first and last colors will be replicated there. So, if a user specifies the following arrays to construct a gradient:
{Color.BLUE, Color.RED}, {.3f, .7f}
this will be converted to a gradient with the following keyframes:
{Color.BLUE, Color.BLUE, Color.RED, Color.RED}, {0f, .3f, .7f, 1f}
The user may also select what action the RadialGradientPaint object takes when it is filling the space outside the circle's radius by setting CycleMethod to either REFLECTION or REPEAT. The gradient color proportions are equal for any particular line drawn from the focus point. The following figure shows that the distance AB is equal to the distance BC, and the distance AD is equal to the distance DE.
If the gradient and graphics rendering transforms are uniformly scaled and the user sets the focus so that it coincides with the center of the circle, the gradient color proportions are equal for any line drawn from the center. The following figure shows the distances AB, BC, AD, and DE. They are all equal.
Note that some minor variations in distances may occur due to sampling at the granularity of a pixel. If no cycle method is specified, NO_CYCLE will be chosen by default, which means the last keyframe color will be used to fill the remaining area.
The colorSpace parameter allows the user to specify in which colorspace the interpolation should be performed, default sRGB or linearized RGB.
The following code demonstrates typical usage of RadialGradientPaint, where the center and focus points are the same:
Point2D center = new Point2D.Float(50, 50);
float radius = 25;
float[] dist = {0.0f, 0.2f, 1.0f};
Color[] colors = {Color.RED, Color.WHITE, Color.BLUE};
RadialGradientPaint p =
new RadialGradientPaint(center, radius, dist, colors);
This image demonstrates the example code above, with default (centered) focus for each of the three cycle methods:
It is also possible to specify a non-centered focus point, as in the following code:
Point2D center = new Point2D.Float(50, 50);
float radius = 25;
Point2D focus = new Point2D.Float(40, 40);
float[] dist = {0.0f, 0.2f, 1.0f};
Color[] colors = {Color.RED, Color.WHITE, Color.BLUE};
RadialGradientPaint p =
new RadialGradientPaint(center, radius, focus,
dist, colors,
CycleMethod.NO_CYCLE);
This image demonstrates the previous example code, with non-centered focus for each of the three cycle methods:
A Rectangle specifies an area in a coordinate space that is enclosed by the Rectangle object's upper-left point (x,y) in the coordinate space, its width, and its height.
A Rectangle object's width and height are public fields. The constructors that create a Rectangle, and the methods that can modify one, do not prevent setting a negative value for width or height.
A Rectangle whose width or height is exactly zero has location along those axes with zero dimension, but is otherwise considered empty. The isEmpty() method will return true for such a Rectangle. Methods which test if an empty Rectangle contains or intersects a point or rectangle will always return false if either dimension is zero. Methods which combine such a Rectangle with a point or rectangle will include the location of the Rectangle on that axis in the result as if the add(Point) method were being called.
A Rectangle whose width or height is negative has neither location nor dimension along those axes with negative dimensions. Such a Rectangle is treated as non-existent along those axes. Such a Rectangle is also empty with respect to containment calculations and methods which test if it contains or intersects a point or rectangle will always return false. Methods which combine such a Rectangle with a point or rectangle will ignore the Rectangle entirely in generating the result. If two Rectangle objects are combined and each has a negative dimension, the result will have at least one negative dimension.
Methods which affect only the location of a Rectangle will operate on its location regardless of whether or not it has a negative or zero dimension along either axis.
Note that a Rectangle constructed with the default no-argument constructor will have dimensions of 0x0 and therefore be empty. That Rectangle will still have a location of (0,0) and will contribute that location to the union and add operations. Code attempting to accumulate the bounds of a set of points should therefore initially construct the Rectangle with a specifically negative width and height or it should use the first point in the set to construct the Rectangle. For example:
Rectangle bounds = new Rectangle(0, 0, -1, -1);
for (int i = 0; i < points.length; i++) {
bounds.add(points[i]);
}
or if we know that the points array contains at least one point:
Rectangle bounds = new Rectangle(points[0]);
for (int i = 1; i < points.length; i++) {
bounds.add(points[i]);
}
This class uses 32-bit integers to store its location and dimensions. Frequently operations may produce a result that exceeds the range of a 32-bit integer. The methods will calculate their results in a way that avoids any 32-bit overflow for intermediate results and then choose the best representation to store the final results back into the 32-bit fields which hold the location and dimensions. The location of the result will be stored into the x and y fields by clipping the true result to the nearest 32-bit value. The values stored into the width and height dimension fields will be chosen as the 32-bit values that encompass as much of the true result as possible. Generally this means that the dimension will be clipped independently to the range of 32-bit integers except that if the location had to be moved to store it into its pair of 32-bit fields then the dimensions will be adjusted relative to the "best representation" of the location. If the true result had a negative dimension and was therefore non-existent along one or both axes, the stored dimensions will be negative numbers in those axes. If the true result had a location that could be represented within the range of 32-bit integers, but zero dimension along one or both axes, then the stored dimensions will be zero in those axes.
The RenderingHints class defines and manages collections of keys and associated values which allow an application to provide input into the choice of algorithms used by other classes which perform rendering and image manipulation services. The Graphics2D class, and classes that implement BufferedImageOp and RasterOp all provide methods to get and possibly to set individual or groups of RenderingHints keys and their associated values. When those implementations perform any rendering or image manipulation operations they should examine the values of any RenderingHints that were requested by the caller and tailor the algorithms used accordingly and to the best of their ability.
Note that since these keys and values are hints, there is no requirement that a given implementation supports all possible choices indicated below or that it can respond to requests to modify its choice of algorithm. The values of the various hint keys may also interact such that while all variants of a given key are supported in one situation, the implementation may be more restricted when the values associated with other keys are modified. For example, some implementations may be able to provide several types of dithering when the antialiasing hint is turned off, but have little control over dithering when antialiasing is on. The full set of supported keys and hints may also vary by destination since runtimes may use different underlying modules to render to the screen, or to BufferedImage objects, or while printing.
Implementations are free to ignore the hints completely, but should try to use an implementation algorithm that is as close as possible to the request. If an implementation supports a given algorithm when any value is used for an associated hint key, then minimally it must do so when the value for that key is the exact value that specifies the algorithm.
The keys used to control the hints are all special values that subclass the associated RenderingHints.Key class. Many common hints are expressed below as static constants in this class, but the list is not meant to be exhaustive. Other hints may be created by other packages by defining new objects which subclass the Key class and defining the associated values.
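For example, an application might request antialiasing and quality rendering like this (whether the request is honoured is up to the implementation; the variable g is assumed to come from a paint callback):

Graphics2D g2d = (Graphics2D) g;
RenderingHints hints = new RenderingHints(RenderingHints.KEY_ANTIALIASING,
                                          RenderingHints.VALUE_ANTIALIAS_ON);
hints.put(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY);
g2d.setRenderingHints(hints);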
Defines the base type of all keys used along with the RenderingHints class to control various algorithm choices in the rendering and imaging pipelines. Instances of this class are immutable and unique which means that tests for matches can be made using the == operator instead of the more expensive equals() method.
This class is used to generate native system input events for the purposes of test automation, self-running demos, and other applications where control of the mouse and keyboard is needed. The primary purpose of Robot is to facilitate automated testing of Java platform implementations.
Using the class to generate input events differs from posting events to the AWT event queue or AWT components in that the events are generated in the platform's native input queue. For example, Robot.mouseMove will actually move the mouse cursor instead of just generating mouse move events.
Note that some platforms require special privileges or extensions to access low-level input control. If the current platform configuration does not allow input control, an AWTException will be thrown when trying to construct Robot objects. For example, X-Window systems will throw the exception if the XTEST 2.2 standard extension is not supported (or not enabled) by the X server.
Applications that use Robot for purposes other than self-testing should handle these error conditions gracefully.
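A minimal sketch of such graceful handling:

try {
    Robot robot = new Robot();
    robot.mouseMove(100, 100);               // moves the actual mouse cursor
    robot.keyPress(KeyEvent.VK_A);
    robot.keyRelease(KeyEvent.VK_A);
} catch (AWTException e) {
    // the platform configuration does not permit low-level input control
}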
The Scrollbar class embodies a scroll bar, a familiar user-interface object. A scroll bar provides a convenient means for allowing a user to select from a range of values. The following three vertical scroll bars could be used as slider controls to pick the red, green, and blue components of a color:
Each scroll bar in this example could be created with code similar to the following:
redSlider = new Scrollbar(Scrollbar.VERTICAL, 0, 1, 0, 255);
add(redSlider);
Alternatively, a scroll bar can represent a range of values. For example, if a scroll bar is used for scrolling through text, the width of the "bubble" (also called the "thumb" or "scroll box") can be used to represent the amount of text that is visible. Here is an example of a scroll bar that represents a range:
The value range represented by the bubble in this example is the visible amount. The horizontal scroll bar in this example could be created with code like the following:
ranger = new Scrollbar(Scrollbar.HORIZONTAL, 0, 60, 0, 300);
add(ranger);
Note that the actual maximum value of the scroll bar is the maximum minus the visible amount. In the previous example, because the maximum is 300 and the visible amount is 60, the actual maximum value is 240. The range of the scrollbar track is 0 - 300. The left side of the bubble indicates the value of the scroll bar.
Normally, the user changes the value of the scroll bar by making a gesture with the mouse. For example, the user can drag the scroll bar's bubble up and down, or click in the scroll bar's unit increment or block increment areas. Keyboard gestures can also be mapped to the scroll bar. By convention, the Page Up and Page Down keys are equivalent to clicking in the scroll bar's block increment and block decrement areas.
When the user changes the value of the scroll bar, the scroll bar receives an instance of AdjustmentEvent. The scroll bar processes this event, passing it along to any registered listeners.
Any object that wishes to be notified of changes to the scroll bar's value should implement AdjustmentListener, an interface defined in the package java.awt.event. Listeners can be added and removed dynamically by calling the methods addAdjustmentListener and removeAdjustmentListener.
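For example (reusing the redSlider created earlier):

redSlider.addAdjustmentListener(
    e -> System.out.println("red component is now " + e.getValue()));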
The AdjustmentEvent class defines five types of adjustment event, listed here:
AdjustmentEvent.TRACK is sent out when the user drags the scroll bar's bubble.
AdjustmentEvent.UNIT_INCREMENT is sent out when the user clicks in the left arrow of a horizontal scroll bar, or the top arrow of a vertical scroll bar, or makes the equivalent gesture from the keyboard.
AdjustmentEvent.UNIT_DECREMENT is sent out when the user clicks in the right arrow of a horizontal scroll bar, or the bottom arrow of a vertical scroll bar, or makes the equivalent gesture from the keyboard.
AdjustmentEvent.BLOCK_INCREMENT is sent out when the user clicks in the track, to the left of the bubble on a horizontal scroll bar, or above the bubble on a vertical scroll bar. By convention, the Page Up key is equivalent, if the user is using a keyboard that defines a Page Up key.
AdjustmentEvent.BLOCK_DECREMENT is sent out when the user clicks in the track, to the right of the bubble on a horizontal scroll bar, or below the bubble on a vertical scroll bar. By convention, the Page Down key is equivalent, if the user is using a keyboard that defines a Page Down key.
The JDK 1.0 event system is supported for backwards compatibility, but its use with newer versions of the platform is discouraged. The five types of adjustment events introduced with JDK 1.1 correspond to the five event types that are associated with scroll bars in previous platform versions. The following list gives the adjustment event type, and the corresponding JDK 1.0 event type it replaces.
AdjustmentEvent.TRACK replaces Event.SCROLL_ABSOLUTE
AdjustmentEvent.UNIT_INCREMENT replaces Event.SCROLL_LINE_UP
AdjustmentEvent.UNIT_DECREMENT replaces Event.SCROLL_LINE_DOWN
AdjustmentEvent.BLOCK_INCREMENT replaces Event.SCROLL_PAGE_UP
AdjustmentEvent.BLOCK_DECREMENT replaces Event.SCROLL_PAGE_DOWN
Note: We recommend using a Scrollbar for value selection only. If you want to implement a scrollable component inside a container, we recommend you use a ScrollPane. If you use a Scrollbar for this purpose, you are likely to encounter issues with painting, key handling, sizing and positioning.
The Scrollbar class embodies a scroll bar, a familiar user-interface object. A scroll bar provides a convenient means for allowing a user to select from a range of values. The following three vertical scroll bars could be used as slider controls to pick the red, green, and blue components of a color: Each scroll bar in this example could be created with code similar to the following: redSlider=new Scrollbar(Scrollbar.VERTICAL, 0, 1, 0, 255); add(redSlider); Alternatively, a scroll bar can represent a range of values. For example, if a scroll bar is used for scrolling through text, the width of the "bubble" (also called the "thumb" or "scroll box") can be used to represent the amount of text that is visible. Here is an example of a scroll bar that represents a range: The value range represented by the bubble in this example is the visible amount. The horizontal scroll bar in this example could be created with code like the following: ranger = new Scrollbar(Scrollbar.HORIZONTAL, 0, 60, 0, 300); add(ranger); Note that the actual maximum value of the scroll bar is the maximum minus the visible amount. In the previous example, because the maximum is 300 and the visible amount is 60, the actual maximum value is 240. The range of the scrollbar track is 0 - 300. The left side of the bubble indicates the value of the scroll bar. Normally, the user changes the value of the scroll bar by making a gesture with the mouse. For example, the user can drag the scroll bar's bubble up and down, or click in the scroll bar's unit increment or block increment areas. Keyboard gestures can also be mapped to the scroll bar. By convention, the Page Up and Page Down keys are equivalent to clicking in the scroll bar's block increment and block decrement areas. When the user changes the value of the scroll bar, the scroll bar receives an instance of AdjustmentEvent. The scroll bar processes this event, passing it along to any registered listeners. Any object that wishes to be notified of changes to the scroll bar's value should implement AdjustmentListener, an interface defined in the package java.awt.event. Listeners can be added and removed dynamically by calling the methods addAdjustmentListener and removeAdjustmentListener. The AdjustmentEvent class defines five types of adjustment event, listed here: AdjustmentEvent.TRACK is sent out when the user drags the scroll bar's bubble. AdjustmentEvent.UNIT_INCREMENT is sent out when the user clicks in the left arrow of a horizontal scroll bar, or the top arrow of a vertical scroll bar, or makes the equivalent gesture from the keyboard. AdjustmentEvent.UNIT_DECREMENT is sent out when the user clicks in the right arrow of a horizontal scroll bar, or the bottom arrow of a vertical scroll bar, or makes the equivalent gesture from the keyboard. AdjustmentEvent.BLOCK_INCREMENT is sent out when the user clicks in the track, to the left of the bubble on a horizontal scroll bar, or above the bubble on a vertical scroll bar. By convention, the Page Up key is equivalent, if the user is using a keyboard that defines a Page Up key. AdjustmentEvent.BLOCK_DECREMENT is sent out when the user clicks in the track, to the right of the bubble on a horizontal scroll bar, or below the bubble on a vertical scroll bar. By convention, the Page Down key is equivalent, if the user is using a keyboard that defines a Page Down key. The JDK 1.0 event system is supported for backwards compatibility, but its use with newer versions of the platform is discouraged. 
The five types of adjustment events introduced with JDK 1.1 correspond to the five event types that are associated with scroll bars in previous platform versions. The following list gives the adjustment event type and the corresponding JDK 1.0 event type it replaces:

AdjustmentEvent.TRACK replaces Event.SCROLL_ABSOLUTE
AdjustmentEvent.UNIT_INCREMENT replaces Event.SCROLL_LINE_UP
AdjustmentEvent.UNIT_DECREMENT replaces Event.SCROLL_LINE_DOWN
AdjustmentEvent.BLOCK_INCREMENT replaces Event.SCROLL_PAGE_UP
AdjustmentEvent.BLOCK_DECREMENT replaces Event.SCROLL_PAGE_DOWN
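For illustration, a listener for the red slider from the example above might look like the following minimal sketch; the field updated in the listener body is hypothetical:

    redSlider.addAdjustmentListener(new AdjustmentListener() {
        public void adjustmentValueChanged(AdjustmentEvent e) {
            // e.getValue() is the scroll bar's new value
            red = e.getValue();   // hypothetical field holding the red component
        }
    });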
A container class which implements automatic horizontal and/or vertical scrolling for a single child component. The display policy for the scrollbars can be set to:
as needed: scrollbars created and shown only when needed by scrollpane
always: scrollbars created and always shown by the scrollpane
never: scrollbars never created or shown by the scrollpane
The state of the horizontal and vertical scrollbars is represented by two ScrollPaneAdjustable objects (one for each dimension) which implement the Adjustable interface. The API provides methods to access those objects such that the attributes on the Adjustable object (such as unitIncrement, value, etc.) can be manipulated.
Certain adjustable properties (minimum, maximum, blockIncrement, and visibleAmount) are set internally by the scrollpane in accordance with the geometry of the scrollpane and its child and these should not be set by programs using the scrollpane.
If the scrollbar display policy is defined as "never", then the scrollpane can still be programmatically scrolled using the setScrollPosition() method and the scrollpane will move and clip the child's contents appropriately. This policy is useful if the program needs to create and manage its own adjustable controls.
The placement of the scrollbars is controlled by platform-specific properties set by the user outside of the program.
The initial size of this container is set to 100x100, but can be reset using setSize().
Scrolling with the wheel on a wheel-equipped mouse is enabled by default. This can be disabled using setWheelScrollingEnabled. Wheel scrolling can be customized by setting the block and unit increment of the horizontal and vertical Adjustables. For information on how mouse wheel events are dispatched, see the class description for MouseWheelEvent.
Insets are used to define any space used by scrollbars and any borders created by the scroll pane. getInsets() can be used to get the current value for the insets. If the value of scrollbarsAlwaysVisible is false, then the value of the insets will change dynamically depending on whether the scrollbars are currently visible or not.
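As a minimal sketch (the sizes and the child component are illustrative), a scrollpane with the "never" policy can be scrolled programmatically like this:

    ScrollPane pane = new ScrollPane(ScrollPane.SCROLLBARS_NEVER);
    pane.add(new Canvas());                  // the single child component
    pane.setSize(200, 200);
    // no scrollbars are shown; scroll and clip the child programmatically
    pane.setScrollPosition(new Point(0, 50));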
This class represents the state of a horizontal or vertical scrollbar of a ScrollPane. Objects of this class are returned by ScrollPane methods.
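For example, a brief sketch (the increment value is illustrative) of tuning the vertical scrollbar's unit increment through its Adjustable:

    ScrollPane pane = new ScrollPane(ScrollPane.SCROLLBARS_AS_NEEDED);
    Adjustable vertical = pane.getVAdjustable();   // a ScrollPaneAdjustable
    vertical.setUnitIncrement(8);                  // scroll eight pixels per unit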
A helper interface to run the nested event loop.
Objects that implement this interface are created with the EventQueue.createSecondaryLoop() method. The interface provides two methods, enter() and exit(), which can be used to start and stop the event loop.
When the enter() method is called, the current thread is blocked until the loop is terminated by the exit() method. Also, a new event loop is started on the event dispatch thread, which may or may not be the current thread. The loop can be terminated on any thread by calling its exit() method. After the loop is terminated, the SecondaryLoop object can be reused to run a new nested event loop.
A typical use case of applying this interface is AWT and Swing modal dialogs. When a modal dialog is shown on the event dispatch thread, it enters a new secondary loop. Later, when the dialog is hidden or disposed, it exits the loop, and the thread continues its execution.
The following example illustrates a simple use case of secondary loops:
SecondaryLoop loop;

JButton jButton = new JButton("Button");
jButton.addActionListener(new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        Toolkit tk = Toolkit.getDefaultToolkit();
        EventQueue eq = tk.getSystemEventQueue();
        loop = eq.createSecondaryLoop();

        // Spawn a new thread to do the work
        Thread worker = new WorkerThread();
        worker.start();

        // Enter the loop to block the current event
        // handler, but leave UI responsive
        if (!loop.enter()) {
            // Report an error
        }
    }
});

class WorkerThread extends Thread {
    @Override
    public void run() {
        // Perform calculations
        doSomethingUseful();

        // Exit the loop
        loop.exit();
    }
}
The Shape interface provides definitions for objects that represent some form of geometric shape. The Shape is described by a PathIterator object, which can express the outline of the Shape as well as a rule for determining how the outline divides the 2D plane into interior and exterior points. Each Shape object provides callbacks to get the bounding box of the geometry, determine whether points or rectangles lie partly or entirely within the interior of the Shape, and retrieve a PathIterator object that describes the trajectory path of the Shape outline.
Definition of insideness: A point is considered to lie inside a Shape if and only if:
it lies completely inside the Shape boundary or
it lies exactly on the Shape boundary and the space immediately adjacent to the point in the increasing X direction is entirely inside the boundary or
it lies exactly on a horizontal boundary segment and the space immediately adjacent to the point in the increasing Y direction is inside the boundary.
The contains and intersects methods consider the interior of a Shape to be the area it encloses as if it were filled. This means that these methods consider unclosed shapes to be implicitly closed for the purpose of determining if a shape contains or intersects a rectangle or if a shape contains a point.
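As a small sketch of the implicit-closure rule (the coordinates are illustrative), an open triangular path still reports interior points as contained:

    Path2D.Double triangle = new Path2D.Double();
    triangle.moveTo(0, 0);
    triangle.lineTo(100, 0);
    triangle.lineTo(0, 100);
    // No closePath() call: contains() still treats the outline as closed
    System.out.println(triangle.contains(10, 10));           // prints true
    System.out.println(triangle.intersects(10, 10, 5, 5));   // prints true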
The splash screen can be displayed at application startup, before the Java Virtual Machine (JVM) starts. The splash screen is displayed as an undecorated window containing an image. You can use GIF, JPEG, or PNG files for the image. Animation is supported for the GIF format, while transparency is supported both for GIF and PNG. The window is positioned at the center of the screen. The position on multi-monitor systems is not specified; it is platform and implementation dependent. The splash screen window is closed automatically as soon as the first window is displayed by Swing/AWT (it may also be closed manually using the Java API, see below).
If your application is packaged in a jar file, you can use the "SplashScreen-Image" option in a manifest file to show a splash screen. Place the image in the jar archive and specify the path in the option. The path should not have a leading slash.
For example, in the manifest.mf file:
Manifest-Version: 1.0
Main-Class: Test
SplashScreen-Image: filename.gif
If the Java implementation provides the command-line interface and you run your application by using the command line or a shortcut, use the Java application launcher option to show a splash screen. The Oracle reference implementation allows you to specify the splash screen image location with the -splash: option.
For example:
java -splash:filename.gif Test

The command-line option takes precedence over the manifest setting.
The splash screen will be displayed as faithfully as possible to present the whole splash screen image given the limitations of the target platform and display.
The specified image is presented on the screen "as is", i.e. preserving the exact color values given in the image file. Under certain circumstances, though, the presented image may differ, e.g. when color dithering is applied to present a 32 bits per pixel (bpp) image on a 16 or 8 bpp screen. The native platform display configuration (color profiles, etc.) may also affect the colors of the displayed image.
The SplashScreen class provides the API for controlling the splash screen. This class may be used to close the splash screen, change the splash screen image, get the splash screen native window position/size, and paint in the splash screen. It cannot be used to create the splash screen. You should use the options provided by the Java implementation for that.
This class cannot be instantiated. Only a single instance of this class can exist, and it may be obtained by using the getSplashScreen() static method. If the splash screen has not been created at application startup (via the command line or manifest file option), the getSplashScreen method returns null.
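As a hedged sketch (the drawn text and coordinates are illustrative), an application might annotate and then close the splash screen like this:

    SplashScreen splash = SplashScreen.getSplashScreen();
    if (splash != null) {
        Graphics2D g = splash.createGraphics();
        g.drawString("Loading...", 20, 20);
        splash.update();    // push the changes to the visible splash window
        // ... perform application initialization ...
        splash.close();     // close the splash screen explicitly
    }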
The Stroke interface allows a Graphics2D object to obtain a Shape that is the decorated outline, or stylistic representation of the outline, of the specified Shape. Stroking a Shape is like tracing its outline with a marking pen of the appropriate size and shape. The area where the pen would place ink is the area enclosed by the outline Shape.
The methods of the Graphics2D interface that use the outline Shape returned by a Stroke object include draw and any other methods that are implemented in terms of that method, such as drawLine, drawRect, drawRoundRect, drawOval, drawArc, drawPolyline, and drawPolygon.
The objects of the classes implementing Stroke must be read-only because Graphics2D does not clone these objects either when they are set as an attribute with the setStroke method or when the Graphics2D object is itself cloned. If a Stroke object is modified after it is set in the Graphics2D context then the behavior of subsequent rendering would be undefined.
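A short sketch (the stroke parameters are illustrative) using the immutable BasicStroke implementation, which satisfies the read-only requirement; a Graphics object g from a paint callback is assumed:

    Graphics2D g2 = (Graphics2D) g;
    // BasicStroke is immutable, so it is safe to set as a rendering attribute
    g2.setStroke(new BasicStroke(4f, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND));
    g2.draw(new Line2D.Double(10, 10, 90, 90));   // drawn with the 4-pixel round stroke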
A class to encapsulate symbolic colors representing the color of native GUI objects on a system. For systems which support the dynamic update of the system colors (when the user changes the colors) the actual RGB values of these symbolic colors will also change dynamically. In order to compare the "current" RGB value of a SystemColor object with a non-symbolic Color object, getRGB should be used rather than equals.
Note that the way in which these system colors are applied to GUI objects may vary slightly from platform to platform since GUI objects may be rendered differently on each platform.
System color values may also be available through the getDesktopProperty method on java.awt.Toolkit.
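A brief sketch of the comparison rule described above:

    // take a fixed snapshot of the symbolic color's current RGB value
    Color snapshot = new Color(SystemColor.window.getRGB());
    // later: compare current values via getRGB, as recommended above
    boolean unchanged = SystemColor.window.getRGB() == snapshot.getRGB();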
The SystemTray class represents the system tray for a desktop. On Microsoft Windows it is referred to as the "Taskbar Status Area", on Gnome it is referred to as the "Notification Area", on KDE it is referred to as the "System Tray". The system tray is shared by all applications running on the desktop.
On some platforms the system tray may not be present or may not be supported; in this case, getSystemTray() throws UnsupportedOperationException. To detect whether the system tray is supported, use isSupported().
The SystemTray may contain one or more TrayIcons, which are added to the tray using the add(java.awt.TrayIcon) method and removed when no longer needed using the remove(java.awt.TrayIcon) method. A TrayIcon consists of an image, a popup menu, and a set of associated listeners. Please see the TrayIcon class for details.
Every Java application has a single SystemTray instance that allows the app to interface with the system tray of the desktop while the app is running. The SystemTray instance can be obtained from the getSystemTray() method. An application may not create its own instance of SystemTray.
The following code snippet demonstrates how to access and customize the system tray:
TrayIcon trayIcon = null;
if (SystemTray.isSupported()) {
    // get the SystemTray instance
    SystemTray tray = SystemTray.getSystemTray();
    // load an image
    Image image = Toolkit.getDefaultToolkit().getImage(...);
    // create an action listener to listen for the default action executed on the tray icon
    ActionListener listener = new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            // execute default action of the application
            // ...
        }
    };
    // create a popup menu
    PopupMenu popup = new PopupMenu();
    // create menu item for the default action
    MenuItem defaultItem = new MenuItem(...);
    defaultItem.addActionListener(listener);
    popup.add(defaultItem);
    // ... add other items
    // construct a TrayIcon
    trayIcon = new TrayIcon(image, "Tray Demo", popup);
    // set the TrayIcon properties
    trayIcon.addActionListener(listener);
    // ...
    // add the tray image
    try {
        tray.add(trayIcon);
    } catch (AWTException e) {
        System.err.println(e);
    }
    // ...
} else {
    // disable tray option in your application or
    // perform other actions
    ...
}
// ...
// some time later
// the application state has changed - update the image
if (trayIcon != null) {
    trayIcon.setImage(updatedImage);
}
// ...
A TextArea object is a multi-line region that displays text. It can be set to allow editing or to be read-only.
The following image shows the appearance of a text area:
This text area could be created by the following line of code:
new TextArea("Hello", 5, 40);
The TextComponent class is the superclass of any component that allows the editing of some text.
A text component embodies a string of text. The TextComponent class defines a set of methods that determine whether or not this text is editable. If the component is editable, it defines another set of methods that supports a text insertion caret.
In addition, the class defines methods that are used to maintain a current selection from the text. The text selection, a substring of the component's text, is the target of editing operations. It is also referred to as the selected text.
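A minimal sketch (the text and indices are illustrative) exercising the editability and selection methods on a concrete subclass:

    TextField field = new TextField("Hello, world");
    field.setEditable(true);                    // editable text gets an insertion caret
    field.select(0, 5);                         // make "Hello" the selected text
    String selected = field.getSelectedText();  // "Hello"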
A TextField object is a text component that allows for the editing of a single line of text.
For example, the following image depicts a frame with four text fields of varying widths. Two of these text fields display the predefined text "Hello".
Here is the code that produces these four text fields:
TextField tf1, tf2, tf3, tf4;
// a blank text field
tf1 = new TextField();
// blank field of 20 columns
tf2 = new TextField("", 20);
// predefined text displayed
tf3 = new TextField("Hello!");
// predefined text in 30 columns
tf4 = new TextField("Hello", 30);
Every time the user types a key in the text field, one or more key events are sent to the text field. A KeyEvent may be one of three types: keyPressed, keyReleased, or keyTyped. The properties of a key event indicate which of these types it is, as well as additional information about the event, such as what modifiers are applied to the key event and the time at which the event occurred.
The key event is passed to every KeyListener or KeyAdapter object which registered to receive such events using the component's addKeyListener method. (KeyAdapter objects implement the KeyListener interface.)
It is also possible to fire an ActionEvent. If action events are enabled for the text field, they may be fired by pressing the Return key.
The TextField class's processEvent method examines the action event and passes it along to processActionEvent. The latter method redirects the event to any ActionListener objects that have registered to receive action events generated by this text field.
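For instance, a minimal sketch (the listener body is illustrative) that reacts to the Return key:

    TextField tf = new TextField("", 20);
    // fired when the user presses Return while the field has focus
    tf.addActionListener(e -> System.out.println("Entered: " + tf.getText()));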
The TexturePaint class provides a way to fill a Shape with a texture that is specified as a BufferedImage. The size of the BufferedImage object should be small because the BufferedImage data is copied by the TexturePaint object. At construction time, the texture is anchored to the upper left corner of a Rectangle2D that is specified in user space. Texture is computed for locations in the device space by conceptually replicating the specified Rectangle2D infinitely in all directions in user space and mapping the BufferedImage to each replicated Rectangle2D.
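A short sketch (the tile size and colors are illustrative, and a Graphics2D named g2 is assumed) of building a small checker tile and filling a rectangle with it:

    BufferedImage tile = new BufferedImage(16, 16, BufferedImage.TYPE_INT_RGB);
    Graphics2D tg = tile.createGraphics();
    tg.setColor(Color.LIGHT_GRAY);
    tg.fillRect(0, 0, 16, 16);
    tg.setColor(Color.DARK_GRAY);
    tg.fillRect(0, 0, 8, 8);
    tg.dispose();
    // anchor the tile to a 16x16 rectangle at the user-space origin; it replicates in all directions
    g2.setPaint(new TexturePaint(tile, new Rectangle2D.Double(0, 0, 16, 16)));
    g2.fill(new Rectangle2D.Double(0, 0, 200, 200));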
This class is the abstract superclass of all actual implementations of the Abstract Window Toolkit. Subclasses of the Toolkit class are used to bind the various components to particular native toolkit implementations.
Unless the opposite is specified explicitly, many GUI events may be delivered to the user asynchronously, and many GUI operations may likewise be performed asynchronously. This means that if the state of a component is set and then immediately queried, the returned value may not yet reflect the requested change. This behavior includes, but is not limited to:
Scrolling to a specified position. For example, calling ScrollPane.setScrollPosition and then getScrollPosition may return an incorrect value if the original request has not yet been processed.
Moving the focus from one component to another. For more information, see Timing Focus Transfers, a section in The Swing Tutorial.
Making a top-level container visible. Calling setVisible(true) on a Window, Frame or Dialog may occur asynchronously.
Setting the size or location of a top-level container. Calls to setSize, setBounds or setLocation on a Window, Frame or Dialog are forwarded to the underlying window management system and may be ignored or modified. See Window for more information.
Most applications should not call any of the methods in this class directly. The methods defined by Toolkit are the "glue" that joins the platform-independent classes in the java.awt package with their counterparts in java.awt.peer. Some methods defined by Toolkit query the native operating system directly.
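For example, a brief sketch of native queries through the default toolkit (the desktop property name shown is one documented example):

    Toolkit tk = Toolkit.getDefaultToolkit();
    Dimension screen = tk.getScreenSize();   // queries the native windowing system
    Object buttons = tk.getDesktopProperty("awt.mouse.numButtons");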
The Transparency interface defines the common transparency modes for implementing classes.
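For instance, Color implements Transparency, so a minimal sketch of the query it defines looks like this:

    // a half-transparent red reports the TRANSLUCENT mode
    int mode = new Color(255, 0, 0, 128).getTransparency();   // Transparency.TRANSLUCENT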
A TrayIcon object represents a tray icon that can be added to the system tray. A TrayIcon can have a tooltip (text), an image, a popup menu, and a set of listeners associated with it.
A TrayIcon can generate various MouseEvents and supports adding corresponding listeners to receive notification of these events. TrayIcon processes some of the events by itself. For example, by default, a right-click on the TrayIcon displays the specified popup menu, and when the mouse hovers over the TrayIcon the tooltip is displayed.
Note: When the MouseEvent is dispatched to its registered listeners its component property will be set to null. (See ComponentEvent.getComponent()) The source property will be set to this TrayIcon. (See EventObject.getSource())
Note: A well-behaved TrayIcon implementation will assign different gestures to showing a popup menu and selecting a tray icon.
A TrayIcon can generate an ActionEvent. On some platforms, this occurs when the user selects the tray icon using either the mouse or keyboard.
If a SecurityManager is installed, the AWTPermission accessSystemTray must be granted in order to create a TrayIcon. Otherwise the constructor will throw a SecurityException.
See the SystemTray class overview for an example on how to use the TrayIcon API.
A Window object is a top-level window with no borders and no menubar. The default layout for a window is BorderLayout.
A window must have either a frame, dialog, or another window defined as its owner when it's constructed.
In a multi-screen environment, you can create a Window on a different screen device by constructing the Window with Window(Window, GraphicsConfiguration). The GraphicsConfiguration object is one of the GraphicsConfiguration objects of the target screen device.
In a virtual device multi-screen environment in which the desktop area could span multiple physical screen devices, the bounds of all configurations are relative to the virtual device coordinate system. The origin of the virtual-coordinate system is at the upper left-hand corner of the primary physical screen. Depending on the location of the primary screen in the virtual device, negative coordinates are possible, as shown in the following figure.
In such an environment, when calling setLocation, you must pass a virtual coordinate to this method. Similarly, calling getLocationOnScreen on a Window returns virtual device coordinates. Call the getBounds method of a GraphicsConfiguration to find its origin in the virtual coordinate system.
The following code sets the location of a Window at (10, 10) relative to the origin of the physical screen of the corresponding GraphicsConfiguration. If the bounds of the GraphicsConfiguration is not taken into account, the Window location would be set at (10, 10) relative to the virtual-coordinate system and would appear on the primary physical screen, which might be different from the physical screen of the specified GraphicsConfiguration.
Window w = new Window(owner, gc);
Rectangle bounds = gc.getBounds();
w.setLocation(10 + bounds.x, 10 + bounds.y);
Note: the location and size of top-level windows (including Windows, Frames, and Dialogs) are under the control of the desktop's window management system. Calls to setLocation, setSize, and setBounds are requests (not directives) which are forwarded to the window management system. Every effort will be made to honor such requests. However, in some cases the window management system may ignore such requests, or modify the requested geometry in order to place and size the Window in a way that more closely matches the desktop settings.
Due to the asynchronous nature of native event handling, the results returned by getBounds, getLocation, getLocationOnScreen, and getSize might not reflect the actual geometry of the Window on screen until the last request has been processed. During the processing of subsequent requests these values might change accordingly while the window management system fulfills the requests.
An application may set the size and location of an invisible Window arbitrarily, but the window management system may subsequently change its size and/or location when the Window is made visible. One or more ComponentEvents will be generated to indicate the new geometry.
Windows are capable of generating the following WindowEvents: WindowOpened, WindowClosed, WindowGainedFocus, WindowLostFocus.