An instance of the AudioFileFormat class describes an audio file, including the file type, the file's length in bytes, the length in sample frames of the audio data contained in the file, and the format of the audio data.
The AudioSystem class includes methods for determining the format of an audio file, obtaining an audio input stream from an audio file, and writing an audio file from an audio input stream.
An AudioFileFormat object can include a set of properties. A property is a key-value pair: the key is of type String, and the associated value is an arbitrary object. Properties specify additional informational metadata (such as the author, copyright, or file duration). Properties are optional, and file reader and file writer implementations are not required to provide or recognize them.
The following table lists some common properties that should be used in implementations:
Audio File Format Properties

| Property key | Value type | Description |
|--------------|------------|-------------|
| "duration"   | Long       | playback duration of the file in microseconds |
| "author"     | String     | name of the author of this file |
| "title"      | String     | title of this file |
| "copyright"  | String     | copyright message |
| "date"       | Date       | date of the recording or release |
| "comment"    | String     | an arbitrary text |
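As a quick sketch of how this fits together, the following reads an AudioFileFormat from a file and checks for the optional "duration" property. The file name is a placeholder, and the property may be absent, since readers are not required to supply it:

```java
import java.io.File;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioSystem;

public class FileFormatInfo {
    public static void main(String[] args) throws Exception {
        // "clip.wav" is a placeholder; any file a reader understands will do.
        AudioFileFormat fileFormat = AudioSystem.getAudioFileFormat(new File("clip.wav"));

        System.out.println("Type:          " + fileFormat.getType());
        System.out.println("Bytes:         " + fileFormat.getByteLength());
        System.out.println("Sample frames: " + fileFormat.getFrameLength());
        System.out.println("Data format:   " + fileFormat.getFormat());

        // Properties are optional; readers are not required to supply them.
        Object duration = fileFormat.properties().get("duration");
        if (duration instanceof Long) {
            System.out.println("Duration (us): " + duration);
        }
    }
}
```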
An instance of the Type class represents one of the standard types of audio file. Static instances are provided for the common types.
AudioFormat is the class that specifies a particular arrangement of data in a sound stream. By examining the information stored in the audio format, you can discover how to interpret the bits in the binary sound data.
Every data line has an audio format associated with its data stream. The audio format of a source (playback) data line indicates what kind of data the data line expects to receive for output. For a target (capture) data line, the audio format specifies the kind of the data that can be read from the line. Sound files also have audio formats, of course. The AudioFileFormat class encapsulates an AudioFormat in addition to other, file-specific information. Similarly, an AudioInputStream has an AudioFormat.
The AudioFormat class accommodates a number of common sound-file encoding techniques, including pulse-code modulation (PCM), mu-law encoding, and a-law encoding. These encoding techniques are predefined, but service providers can create new encoding types. The encoding that a specific format uses is named by its encoding field.
In addition to the encoding, the audio format includes other properties that further specify the exact arrangement of the data. These include the number of channels, sample rate, sample size, byte order, frame rate, and frame size. Sounds may have different numbers of audio channels: one for mono, two for stereo. The sample rate measures how many "snapshots" (samples) of the sound pressure are taken per second, per channel. (If the sound is stereo rather than mono, two samples are actually measured at each instant of time: one for the left channel, and another for the right channel; however, the sample rate still measures the number per channel, so the rate is the same regardless of the number of channels. This is the standard use of the term.)

The sample size indicates how many bits are used to store each snapshot; 8 and 16 are typical values. For 16-bit samples (or any other sample size larger than a byte), byte order is important; the bytes in each sample are arranged in either the "little-endian" or "big-endian" style.

For encodings like PCM, a frame consists of the set of samples for all channels at a given point in time, and so the size of a frame (in bytes) is always equal to the size of a sample (in bytes) times the number of channels. However, with some other sorts of encodings a frame can contain a bundle of compressed data for a whole series of samples, as well as additional, non-sample data. For such encodings, the sample rate and sample size refer to the data after it is decoded into PCM, and so they are completely different from the frame rate and frame size.
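For example, a 16-bit stereo PCM format illustrates the frame-size arithmetic described above (a minimal sketch):

```java
import javax.sound.sampled.AudioFormat;

public class PcmFormatExample {
    public static void main(String[] args) {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian linear PCM.
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);

        // For PCM, frame size = sample size in bytes * number of channels.
        System.out.println("Frame size (bytes): " + format.getFrameSize()); // 4
        System.out.println("Frame rate:         " + format.getFrameRate()); // 44100.0
        System.out.println("Encoding:           " + format.getEncoding());  // PCM_SIGNED
    }
}
```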
An AudioFormat object can include a set of properties. A property is a key-value pair: the key is of type String, and the associated value is an arbitrary object. Properties specify additional format details, such as the bit rate for compressed formats, and are mainly used to transport additional information about the audio format to and from service providers. Consequently, properties are ignored by the matches(AudioFormat) method. However, methods that rely on the installed service providers, such as isConversionSupported(AudioFormat, AudioFormat), may consider properties, depending on the respective service provider implementation.
The following table lists some common properties which service providers should use, if applicable:
Audio Format Properties

| Property key | Value type | Description |
|--------------|------------|-------------|
| "bitrate"    | Integer    | average bit rate in bits per second |
| "vbr"        | Boolean    | true, if the file is encoded in variable bit rate (VBR) |
| "quality"    | Integer    | encoding/conversion quality, 1..100 |
Vendors of service providers (plugins) are encouraged to seek out properties already established by third-party plugins and to follow the same conventions.
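A hedged sketch of attaching such properties when constructing an AudioFormat; the encoding name "MPEG1L3" and the bit-rate value are illustrative assumptions, not values guaranteed by any installed provider:

```java
import java.util.HashMap;
import java.util.Map;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;

public class FormatProperties {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("bitrate", 128000);      // average bit rate in bits per second
        props.put("vbr", Boolean.FALSE);

        // Hypothetical compressed format; sample size, frame size, and frame
        // rate are left unspecified, as is common for compressed encodings.
        AudioFormat format = new AudioFormat(
                new AudioFormat.Encoding("MPEG1L3"),   // assumed encoding name
                44100f, AudioSystem.NOT_SPECIFIED, 2,
                AudioSystem.NOT_SPECIFIED, AudioSystem.NOT_SPECIFIED,
                false, props);

        System.out.println("bitrate: " + format.properties().get("bitrate"));
    }
}
```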
The Encoding class names the specific type of data representation used for an audio stream. The encoding includes aspects of the sound format other than the number of channels, sample rate, sample size, frame rate, frame size, and byte order.
One ubiquitous type of audio encoding is pulse-code modulation (PCM), which is simply a linear (proportional) representation of the sound waveform. With PCM, the number stored in each sample is proportional to the instantaneous amplitude of the sound pressure at that point in time. The numbers may be signed or unsigned integers or floats. Besides PCM, other encodings include mu-law and a-law, which are nonlinear mappings of the sound amplitude that are often used for recording speech.
You can use a predefined encoding by referring to one of the static objects created by this class, such as PCM_SIGNED or PCM_UNSIGNED. Service providers can create new encodings, such as compressed audio formats, and make these available through the AudioSystem class.
The Encoding class is static, so that all AudioFormat objects that have the same encoding will refer to the same object (rather than different instances of the same class). This allows matches to be made by checking that two formats' encodings are equal.
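Because encodings are shared static instances, a format's encoding can be tested by equality, as in this small sketch:

```java
import javax.sound.sampled.AudioFormat;

public class EncodingCheck {
    // Encodings are compared by equality against the static instances,
    // such as PCM_SIGNED, PCM_UNSIGNED, ULAW, and ALAW.
    static boolean isSignedPcm(AudioFormat format) {
        return format.getEncoding().equals(AudioFormat.Encoding.PCM_SIGNED);
    }
}
```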
An audio input stream is an input stream with a specified audio format and length. The length is expressed in sample frames, not bytes. Several methods are provided for reading a certain number of bytes from the stream, or an unspecified number of bytes. The audio input stream keeps track of the last byte that was read. You can skip over an arbitrary number of bytes to get to a later position for reading. An audio input stream may support marks. When you set a mark, the current position is remembered so that you can return to it later.
The AudioSystem class includes many methods that manipulate AudioInputStream objects. For example, the methods let you:
- obtain an audio input stream from an external audio file, stream, or URL
- write an external file from an audio input stream
- convert an audio input stream to a different audio format
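A minimal sketch combining these operations: it obtains an audio input stream from a file and writes it back out as a WAVE file (the file names are placeholders):

```java
import java.io.File;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class CopyAsWave {
    public static void main(String[] args) throws Exception {
        // The input may be any format an installed reader understands.
        try (AudioInputStream in = AudioSystem.getAudioInputStream(new File("in.au"))) {
            System.out.println("Length in sample frames: " + in.getFrameLength());
            // Write the stream back out as a WAVE file.
            AudioSystem.write(in, AudioFileFormat.Type.WAVE, new File("out.wav"));
        }
    }
}
```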
The AudioPermission class represents access rights to the audio system resources. An AudioPermission contains a target name but no actions list; you either have the named permission or you don't.
The target name is the name of the audio permission (see the table below). The names follow the hierarchical property-naming convention. Also, an asterisk can be used to represent all the audio permissions.
The following table lists the possible AudioPermission target names. For each name, the table provides a description of exactly what that permission allows, as well as a discussion of the risks of granting code the permission.
| Permission Target Name | What the Permission Allows | Risks of Allowing this Permission |
|------------------------|----------------------------|-----------------------------------|
| play | Audio playback through the audio device or devices on the system. Allows the application to obtain and manipulate lines and mixers for audio playback (rendering). | In some cases use of this permission may affect other applications, because the audio from one line may be mixed with other audio being played on the system, or because manipulation of a mixer affects the audio for all lines using that mixer. |
| record | Audio recording through the audio device or devices on the system. Allows the application to obtain and manipulate lines and mixers for audio recording (capture). | In some cases use of this permission may affect other applications, because manipulation of a mixer affects the audio for all lines using that mixer. This permission can enable an applet or application to eavesdrop on a user. |
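For illustration, the permission objects themselves can be constructed and compared in code; this sketch assumes nothing beyond the target names listed above:

```java
import javax.sound.sampled.AudioPermission;

public class PermissionCheck {
    public static void main(String[] args) {
        // An AudioPermission has a target name only; there is no actions list.
        AudioPermission play = new AudioPermission("play");
        AudioPermission all = new AudioPermission("*");  // all audio permissions

        // implies() reflects the hierarchy: "*" covers both "play" and "record".
        System.out.println(all.implies(play));  // true
    }
}
```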
The AudioSystem class acts as the entry point to the sampled-audio system resources. This class lets you query and access the mixers that are installed on the system. AudioSystem includes a number of methods for converting audio data between different formats, and for translating between audio files and streams. It also provides a method for obtaining a Line directly from the AudioSystem without dealing explicitly with mixers.
Properties can be used to specify the default mixer for specific line types. Both system properties and a properties file are considered. The sound.properties properties file is read from an implementation-specific location (typically it is the lib directory in the Java installation directory). If a property exists both as a system property and in the properties file, the system property takes precedence. If none is specified, a suitable default is chosen among the available devices. The syntax of the properties file is specified in Properties.load. The following table lists the available property keys and which methods consider them:
Audio System Property Keys

| Property Key | Interface | Affected Method(s) |
|--------------|-----------|--------------------|
| javax.sound.sampled.Clip | Clip | getLine(javax.sound.sampled.Line.Info), getClip() |
| javax.sound.sampled.Port | Port | getLine(javax.sound.sampled.Line.Info) |
| javax.sound.sampled.SourceDataLine | SourceDataLine | getLine(javax.sound.sampled.Line.Info), getSourceDataLine(javax.sound.sampled.AudioFormat) |
| javax.sound.sampled.TargetDataLine | TargetDataLine | getLine(javax.sound.sampled.Line.Info), getTargetDataLine(javax.sound.sampled.AudioFormat) |
The property value consists of the provider class name and the mixer name, separated by the hash mark ("#"). The provider class name is the fully qualified name of a concrete mixer provider class. The mixer name is matched against the String returned by the getName method of Mixer.Info. Either the class name or the mixer name may be omitted. If only the class name is specified, the trailing hash mark is optional.
If the provider class is specified and can be successfully retrieved from the installed providers, the list of Mixer.Info objects is retrieved from that provider. Otherwise, or if those mixers do not yield a match, the list of all available Mixer.Info objects is retrieved from getMixerInfo().
If a mixer name is specified, the resulting list of Mixer.Info objects is searched: the first one with a matching name, and whose Mixer provides the respective line interface, will be returned. If no matching Mixer.Info object is found, or the mixer name is not specified, the first mixer from the resulting list, which provides the respective line interface, will be returned.
For example, the property javax.sound.sampled.Clip with the value "com.sun.media.sound.MixerProvider#SunClip" will have the following consequences when getLine is called requesting a Clip instance: if the class com.sun.media.sound.MixerProvider exists in the list of installed mixer providers, the first Clip from the first mixer with the name "SunClip" will be returned. If it cannot be found, the first Clip from the first mixer of the specified provider will be returned, regardless of name. If there is none, the first Clip from the first mixer with the name "SunClip" in the list of all mixers (as returned by getMixerInfo) will be returned, or, if not found, the first Clip of the first Mixer that can be found in the list of all mixers is returned. If that fails too, an IllegalArgumentException is thrown.
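A sketch of selecting a default mixer through a system property; the provider class and mixer name below are the illustrative values from the example above and should be replaced with values reported by AudioSystem.getMixerInfo() on the target system:

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class DefaultMixerSelection {
    public static void main(String[] args) throws Exception {
        // Illustrative values only; substitute a real provider class and
        // mixer name from the target system.
        System.setProperty("javax.sound.sampled.Clip",
                "com.sun.media.sound.MixerProvider#SunClip");

        Clip clip = AudioSystem.getClip();  // resolved using the property above
        System.out.println("Obtained clip: " + clip.getLineInfo());
    }
}
```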
A BooleanControl provides the ability to switch between two possible settings that affect a line's audio. The settings are boolean values (true and false). A graphical user interface might represent the control by a two-state button, an on/off switch, two mutually exclusive buttons, or a checkbox (among other possibilities). For example, depressing a button might activate a MUTE control to silence the line's audio.
As with other Control subclasses, a method is provided that returns string labels for the values, suitable for display in the user interface.
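A minimal sketch of toggling a MUTE control; it assumes an open line whose mixer actually exposes that control:

```java
import javax.sound.sampled.BooleanControl;
import javax.sound.sampled.Line;

public class MuteExample {
    // 'line' is assumed to be an open line that supports a MUTE control.
    static void toggleMute(Line line) {
        if (line.isControlSupported(BooleanControl.Type.MUTE)) {
            BooleanControl mute = (BooleanControl) line.getControl(BooleanControl.Type.MUTE);
            mute.setValue(!mute.getValue());
            // getStateLabel returns a display string for the given state.
            System.out.println("Mute: " + mute.getStateLabel(mute.getValue()));
        }
    }
}
```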
An instance of the BooleanControl.Type class identifies one kind of boolean control. Static instances are provided for the common types.
The Clip interface represents a special kind of data line whose audio data can be loaded prior to playback, instead of being streamed in real time.
Because the data is pre-loaded and has a known length, you can set a clip to start playing at any position in its audio data. You can also create a loop, so that when the clip is played it will cycle repeatedly. Loops are specified with a starting and ending sample frame, along with the number of times that the loop should be played.
Clips may be obtained from a Mixer that supports lines of this type. Data is loaded into a clip when it is opened.
Playback of an audio clip may be started and stopped using the start and stop methods. These methods do not reset the media position; start causes playback to continue from the position where playback was last stopped. To restart playback from the beginning of the clip's audio data, simply follow the invocation of stop with setFramePosition(0), which rewinds the media to the beginning of the clip.
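A minimal playback sketch using these methods; "beep.wav" is a placeholder for any file an installed reader understands:

```java
import java.io.File;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class ClipPlayback {
    public static void main(String[] args) throws Exception {
        Clip clip = AudioSystem.getClip();
        // The data is loaded when the clip is opened.
        clip.open(AudioSystem.getAudioInputStream(new File("beep.wav")));

        clip.setFramePosition(0);   // rewind to the first sample frame
        clip.start();               // returns immediately; playback is asynchronous
        clip.drain();               // block until the queued data has been played
        clip.close();
    }
}
```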
A CompoundControl, such as a graphic equalizer, provides control over two or more related properties, each of which is itself represented as a Control.
An instance of the CompoundControl.Type inner class identifies one kind of compound control. Static instances are provided for the common types.
Lines often have a set of controls, such as gain and pan, that affect the audio signal passing through the line. Java Sound's Line objects let you obtain a particular control object by passing its class as the argument to a getControl method.
Because the various types of controls have different purposes and features, all of their functionality is accessed from the subclasses that define each kind of control.
An instance of the Type class represents the type of the control. Static instances are provided for the common types.
DataLine adds media-related functionality to its superinterface, Line. This functionality includes transport-control methods that start, stop, drain, and flush the audio data that passes through the line. A data line can also report the current position, volume, and audio format of the media. Data lines are used for output of audio by means of the subinterfaces SourceDataLine or Clip, which allow an application program to write data. Similarly, audio input is handled by the subinterface TargetDataLine, which allows data to be read.
A data line has an internal buffer in which the incoming or outgoing audio data is queued. The drain() method blocks until this internal buffer becomes empty, usually because all queued data has been processed. The flush() method discards any available queued data from the internal buffer.
A data line produces START and STOP events whenever it begins or ceases active presentation or capture of data. These events can be generated in response to specific requests, or as a result of less direct state changes. For example, if start() is called on an inactive data line, and data is available for capture or playback, a START event will be generated shortly, when data playback or capture actually begins. Or, if the flow of data to an active data line is constricted so that a gap occurs in the presentation of data, a STOP event is generated.
Mixers often support synchronized control of multiple data lines. Synchronization can be established through the Mixer interface's synchronize method. See the description of the Mixer interface for a more complete description.
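A small sketch of observing the START and STOP events described above through a LineListener; the data line is assumed to be obtained and opened elsewhere:

```java
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineEvent;

public class StartStopMonitor {
    // 'line' is assumed to be an open data line obtained elsewhere.
    static void monitor(DataLine line) {
        line.addLineListener(event -> {
            if (event.getType() == LineEvent.Type.START) {
                System.out.println("started at frame " + event.getFramePosition());
            } else if (event.getType() == LineEvent.Type.STOP) {
                System.out.println("stopped at frame " + event.getFramePosition());
            }
        });
    }
}
```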
Besides the class information inherited from its superclass, DataLine.Info provides additional information specific to data lines. This information includes:
- the audio formats supported by the data line
- the minimum and maximum sizes of its internal buffer
Because a Line.Info knows the class of the line it describes, a DataLine.Info object can describe DataLine subinterfaces such as SourceDataLine, TargetDataLine, and Clip. You can query a mixer for lines of any of these types, passing an appropriate instance of DataLine.Info as the argument to a method such as Mixer.getLine(Line.Info).
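For example, a DataLine.Info describing a SourceDataLine for a given format can be used to query for and obtain a line (a minimal sketch):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class LineQuery {
    public static void main(String[] args) throws Exception {
        AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);

        if (AudioSystem.isLineSupported(info)) {
            SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
            System.out.println("Obtained: " + line.getLineInfo());
        }
    }
}
```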
An EnumControl provides control over a set of discrete possible values, each represented by an object. In a graphical user interface, such a control might be represented by a set of buttons, each of which chooses one value or setting. For example, a reverb control might provide several preset reverberation settings, instead of providing continuously adjustable parameters of the sort that would be represented by FloatControl objects.
Controls that provide a choice between only two settings can often be implemented instead as a BooleanControl, and controls that provide a set of values along some quantifiable dimension might be implemented instead as a FloatControl with a coarse resolution. However, a key feature of EnumControl is that the returned values are arbitrary objects, rather than numerical or boolean values. This means that each returned object can provide further information. As an example, the settings of a REVERB control are instances of ReverbType that can be queried for the parameter values used for each setting.
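A sketch of enumerating the values of a REVERB control; it assumes the line actually supports that control, which many mixers do not:

```java
import javax.sound.sampled.EnumControl;
import javax.sound.sampled.Line;
import javax.sound.sampled.ReverbType;

public class ReverbSettings {
    // 'line' is assumed to be a line that may expose a REVERB control.
    static void listReverbs(Line line) {
        if (line.isControlSupported(EnumControl.Type.REVERB)) {
            EnumControl reverb = (EnumControl) line.getControl(EnumControl.Type.REVERB);
            // Each value is an arbitrary object; for REVERB it is a ReverbType.
            for (Object value : reverb.getValues()) {
                ReverbType type = (ReverbType) value;
                System.out.println(type.getName() + ": decay "
                        + type.getDecayTime() + " us");
            }
        }
    }
}
```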
An instance of the EnumControl.Type inner class identifies one kind of enumerated control. Static instances are provided for the common types.
A FloatControl object provides control over a range of floating-point values. Float controls are often represented in graphical user interfaces by continuously adjustable objects such as sliders or rotary knobs. Concrete subclasses of FloatControl implement controls, such as gain and pan, that affect a line's audio signal in some way that an application can manipulate. The FloatControl.Type inner class provides static instances of types that are used to identify some common kinds of float control.
The FloatControl abstract class provides methods to set and get the control's current floating-point value. Other methods obtain the possible range of values and the control's resolution (the smallest increment between returned values). Some float controls allow ramping to a new value over a specified period of time. FloatControl also includes methods that return string labels for the minimum, maximum, and midpoint positions of the control.
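A minimal sketch of adjusting a MASTER_GAIN control, clamping the requested value to the control's range; the line is assumed to be open and to expose that control:

```java
import javax.sound.sampled.FloatControl;
import javax.sound.sampled.Line;

public class GainExample {
    // 'line' is assumed to be open and to support MASTER_GAIN.
    static void setGain(Line line, float decibels) {
        FloatControl gain = (FloatControl) line.getControl(FloatControl.Type.MASTER_GAIN);
        // Clamp to the control's supported range before applying.
        float value = Math.max(gain.getMinimum(), Math.min(gain.getMaximum(), decibels));
        gain.setValue(value);
        // Some controls support ramping to a value over a period of time:
        // gain.shift(gain.getValue(), value, 500_000); // duration in microseconds
    }
}
```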
An instance of the FloatControl.Type inner class identifies one kind of float control. Static instances are provided for the common types.
The Line interface represents a mono or multi-channel audio feed. A line is an element of the digital audio "pipeline," such as a mixer, an input or output port, or a data path into or out of a mixer.
A line can have controls, such as gain, pan, and reverb. The controls themselves are instances of classes that extend the base Control class. The Line interface provides two accessor methods for obtaining the line's controls: getControls returns the entire set, and getControl returns a single control of specified type.
Lines exist in various states at different times. When a line opens, it reserves system resources for itself, and when it closes, these resources are freed for other objects or applications. The isOpen() method lets you discover whether a line is open or closed. An open line need not be processing data, however. Such processing is typically initiated by subinterface methods such as SourceDataLine.write and TargetDataLine.read.
You can register an object to receive notifications whenever the line's state changes. The object must implement the LineListener interface, which consists of the single method update. This method will be invoked when a line opens and closes (and, if it's a DataLine, when it starts and stops).
An object can be registered to listen to multiple lines. The event it receives in its update method will specify which line created the event, what type of event it was (OPEN, CLOSE, START, or STOP), and how many sample frames the line had processed at the time the event occurred.
Certain line operations, such as open and close, can generate security exceptions if invoked by unprivileged code when the line is a shared audio resource.
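A small sketch of inspecting a line's state and controls using the accessors described above; the line is assumed to be obtained elsewhere:

```java
import javax.sound.sampled.Control;
import javax.sound.sampled.Line;

public class LineInspector {
    // Print the open state and available controls of a line.
    static void inspect(Line line) {
        System.out.println("open: " + line.isOpen());
        for (Control control : line.getControls()) {
            System.out.println(control.getType() + " -> " + control);
        }
    }
}
```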
A Line.Info object contains information about a line. The only information provided by Line.Info itself is the Java class of the line. A subclass of Line.Info adds other kinds of information about the line. This additional information depends on which Line subinterface is implemented by the kind of line that the Line.Info subclass describes.
A Line.Info can be retrieved using various methods of the Line, Mixer, and AudioSystem interfaces. Other such methods let you pass a Line.Info as an argument, to learn whether lines matching the specified configuration are available and to obtain them.
The LineEvent class encapsulates information that a line sends its listeners whenever the line opens, closes, starts, or stops. Each of these four state changes is represented by a corresponding type of event. A listener receives the event as a parameter to its update method. By querying the event, the listener can learn the type of event, the line responsible for the event, and how much data the line had processed when the event occurred.
Although this class implements Serializable, attempts to serialize a LineEvent object will fail.
The LineEvent.Type inner class identifies what kind of event occurred on a line. Static instances are provided for the common types (OPEN, CLOSE, START, and STOP).
Instances of classes that implement the LineListener interface can register to receive events when a line's status changes.
A LineUnavailableException is an exception indicating that a line cannot be opened because it is unavailable. This situation arises most commonly when a requested line is already in use by another application.
A mixer is an audio device with one or more lines. It need not be designed for mixing audio signals. A mixer that actually mixes audio has multiple input (source) lines and at least one output (target) line. The former are often instances of classes that implement SourceDataLine, and the latter, TargetDataLine. Port objects, too, are either source lines or target lines. A mixer can accept prerecorded, loopable sound as input, by having some of its source lines be instances of objects that implement the Clip interface.
Through methods of the Line interface, which Mixer extends, a mixer might provide a set of controls that are global to the mixer. For example, the mixer can have a master gain control. These global controls are distinct from the controls belonging to each of the mixer's individual lines.
Some mixers, especially those with internal digital mixing capabilities, may provide additional capabilities by implementing the DataLine interface.
A mixer can support synchronization of its lines. When one line in a synchronized group is started or stopped, the other lines in the group automatically start or stop simultaneously with the explicitly affected one.
The Mixer.Info class represents information about an audio mixer, including the product's name, version, and vendor, along with a textual description. This information may be retrieved through the getMixerInfo method of the Mixer interface.
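A minimal sketch that enumerates the installed mixers and their descriptive information:

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Mixer;

public class ListMixers {
    public static void main(String[] args) {
        for (Mixer.Info info : AudioSystem.getMixerInfo()) {
            System.out.println(info.getName() + " | " + info.getVendor()
                    + " | " + info.getVersion());
            Mixer mixer = AudioSystem.getMixer(info);
            System.out.println("  source line kinds: " + mixer.getSourceLineInfo().length);
            System.out.println("  target line kinds: " + mixer.getTargetLineInfo().length);
        }
    }
}
```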
Ports are simple lines for input or output of audio to or from audio devices. Common examples of ports that act as source lines (mixer inputs) include the microphone, line input, and CD-ROM drive. Ports that act as target lines (mixer outputs) include the speaker, headphone, and line output. You can access a port using a Port.Info object.
The Port.Info class extends Line.Info with additional information specific to ports, including the port's name and whether it is a source or a target for its mixer. By definition, a port acts as either a source or a target to its mixer, but not both. (Audio input ports are sources; audio output ports are targets.)
To learn what ports are available, you can retrieve port info objects through the getSourceLineInfo and getTargetLineInfo methods of the Mixer interface. Instances of the Port.Info class may also be constructed and used to obtain lines matching the parameters specified in the Port.Info object.
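For example, the predefined Port.Info.MICROPHONE instance can be used to test for and open a microphone port (a minimal sketch; the port may not exist on a given system):

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Port;

public class MicrophonePort {
    public static void main(String[] args) throws Exception {
        if (AudioSystem.isLineSupported(Port.Info.MICROPHONE)) {
            try (Port mic = (Port) AudioSystem.getLine(Port.Info.MICROPHONE)) {
                mic.open();
                System.out.println("Opened: " + mic.getLineInfo());
            }
        }
    }
}
```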
The ReverbType class provides methods for accessing various reverberation settings to be applied to an audio signal.
Reverberation simulates the reflection of sound off of the walls, ceiling, and floor of a room. Depending on the size of the room, and how absorbent or reflective the materials in the room's surfaces are, the sound might bounce around for a long time before dying away.
The reverberation parameters provided by ReverbType consist of the delay time and intensity of early reflections, the delay time and intensity of late reflections, and an overall decay time. Early reflections are the initial individual low-order reflections of the direct signal off the surfaces in the room. The late reflections are the dense, high-order reflections that characterize the room's reverberation. The delay times for the start of these two reflection types give the listener a sense of the overall size and complexity of the room's shape and contents. The larger the room, the longer the reflection delay times. The early and late reflections' intensities define the gain (in decibels) of the reflected signals as compared to the direct signal. These intensities give the listener an impression of the absorptive nature of the surfaces and objects in the room. The decay time defines how long the reverberation takes to exponentially decay until it is no longer perceptible ("effective zero"). The larger and less absorbent the surfaces, the longer the decay time.
The set of parameters defined here may not include all aspects of reverberation as specified by some systems. For example, the MIDI Manufacturers Association (MMA) has an Interactive Audio Special Interest Group (IASIG), whose 3-D Working Group has defined a Level 2 Spec (I3DL2). I3DL2 supports filtering of reverberation and control of reverb density. These properties are not included in the Java Sound 1.0 definition of a reverb control. In such a case, the implementing system should either extend the defined reverb control to include additional parameters, or else interpret the system's additional capabilities in a way that fits the model described here.
If implementing Java Sound on an I3DL2-compliant device:

- Filtering is disabled (high-frequency attenuations are set to 0.0 dB)
- Density parameters are set to midway between minimum and maximum
The following table shows what parameter values an implementation might use for a representative set of reverberation settings.
Reverberation Types and Parameters

| Type | Decay Time (ms) | Late Intensity (dB) | Late Delay (ms) | Early Intensity (dB) | Early Delay (ms) |
|------|-----------------|---------------------|-----------------|----------------------|------------------|
| Cavern       | 2250 | -2.0  | 41.3 | -1.4 | 10.3 |
| Dungeon      | 1600 | -1.0  | 10.3 | -0.7 | 2.6  |
| Garage       | 900  | -6.0  | 14.7 | -4.0 | 3.9  |
| Acoustic Lab | 280  | -3.0  | 8.0  | -2.0 | 2.0  |
| Closet       | 150  | -10.0 | 2.5  | -7.0 | 0.6  |
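A small sketch of printing a ReverbType's parameters; note that the accessors report delay and decay times in microseconds, whereas the table above uses milliseconds:

```java
import javax.sound.sampled.ReverbType;

public class PrintReverbParameters {
    // Print the five parameters of any ReverbType instance.
    static void describe(ReverbType type) {
        System.out.println(type.getName());
        System.out.println("  decay time:      " + type.getDecayTime() + " us");
        System.out.println("  early delay:     " + type.getEarlyReflectionDelay() + " us");
        System.out.println("  early intensity: " + type.getEarlyReflectionIntensity() + " dB");
        System.out.println("  late delay:      " + type.getLateReflectionDelay() + " us");
        System.out.println("  late intensity:  " + type.getLateReflectionIntensity() + " dB");
    }
}
```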
A source data line is a data line to which data may be written. It acts as a source to its mixer. An application writes audio bytes to a source data line, which handles the buffering of the bytes and delivers them to the mixer. The mixer may mix the samples with those from other sources and then deliver the mix to a target such as an output port (which may represent an audio output device on a sound card).
Note that the naming convention for this interface reflects the relationship between the line and its mixer. From the perspective of an application, a source data line may act as a target for audio data.
A source data line can be obtained from a mixer by invoking the getLine method of Mixer with an appropriate DataLine.Info object.
The SourceDataLine interface provides a method for writing audio data to the data line's buffer. Applications that play or mix audio should write data to the source data line quickly enough to keep the buffer from underflowing (emptying), which could cause discontinuities in the audio that are perceived as clicks. Applications can use the available method defined in the DataLine interface to determine the amount of data currently queued in the data line's buffer. The amount of data which can be written to the buffer without blocking is the difference between the buffer size and the amount of queued data. If the delivery of audio output stops due to underflow, a STOP event is generated. A START event is generated when the audio output resumes.
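A minimal playback loop along these lines; "music.wav" is a placeholder, and the buffer is sized to a whole number of frames so each write contains integral frames:

```java
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class StreamPlayer {
    public static void main(String[] args) throws Exception {
        try (AudioInputStream in = AudioSystem.getAudioInputStream(new File("music.wav"))) {
            AudioFormat format = in.getFormat();
            try (SourceDataLine line = AudioSystem.getSourceDataLine(format)) {
                line.open(format);
                line.start();

                // Frame-aligned buffer; write() requires an integral number of frames.
                byte[] buffer = new byte[format.getFrameSize() * 1024];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    line.write(buffer, 0, n);   // blocks when the line's buffer is full
                }
                line.drain();                   // wait until the queued data has played
            }
        }
    }
}
```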
Provider for audio file reading services. Classes providing concrete implementations can parse the format information from one or more types of audio file, and can produce audio input streams from files of these types.
Provider for audio file writing services. Classes providing concrete implementations can write one or more types of audio file from an audio stream.
A format conversion provider provides format conversion services from one or more input formats to one or more output formats. Converters include codecs, which encode and/or decode audio data, as well as transcoders, etc. Format converters provide methods for determining what conversions are supported and for obtaining an audio stream from which converted data can be read.
The source format represents the format of the incoming audio data, which will be converted.
The target format represents the format of the processed, converted audio data. This is the format of the data that can be read from the stream returned by one of the getAudioInputStream methods.
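From the application's side, these services are reached through AudioSystem; this sketch asks for a signed 16-bit PCM rendering of a stream, assuming an installed provider supports the conversion and the source's sample rate and channel count are specified:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class ConvertToPcm {
    // Decode 'in' to signed 16-bit little-endian PCM.
    static AudioInputStream toPcm(AudioInputStream in) {
        AudioFormat src = in.getFormat();
        AudioFormat target = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                src.getSampleRate(), 16, src.getChannels(),
                src.getChannels() * 2,          // frame size for 16-bit PCM
                src.getSampleRate(), false);

        if (!AudioSystem.isConversionSupported(target, src)) {
            throw new IllegalArgumentException("conversion not supported");
        }
        return AudioSystem.getAudioInputStream(target, in);
    }
}
```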
A provider or factory for a particular mixer type. This mechanism allows the implementation to determine how resources are managed when mixers are created and managed.
A target data line is a type of DataLine from which audio data can be read. The most common example is a data line that gets its data from an audio capture device. (The device is implemented as a mixer that writes to the target data line.)
Note that the naming convention for this interface reflects the relationship between the line and its mixer. From the perspective of an application, a target data line may act as a source for audio data.
The target data line can be obtained from a mixer by invoking the getLine method of Mixer with an appropriate DataLine.Info object.
The TargetDataLine interface provides a method for reading the captured data from the target data line's buffer. Applications that record audio should read data from the target data line quickly enough to keep the buffer from overflowing, which could cause discontinuities in the captured data that are perceived as clicks. Applications can use the available method defined in the DataLine interface to determine the amount of data currently queued in the data line's buffer. If the buffer does overflow, the oldest queued data is discarded and replaced by new data.
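A minimal capture sketch that reads promptly to avoid overflow; the fixed iteration count stands in for a real stop condition:

```java
import java.io.ByteArrayOutputStream;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.TargetDataLine;

public class Recorder {
    public static void main(String[] args) throws Exception {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        try (TargetDataLine line = AudioSystem.getTargetDataLine(format)) {
            line.open(format);
            line.start();

            ByteArrayOutputStream captured = new ByteArrayOutputStream();
            // Frame-aligned buffer; read() requires an integral number of frames.
            byte[] buffer = new byte[format.getFrameSize() * 1024];
            for (int i = 0; i < 100; i++) {
                int n = line.read(buffer, 0, buffer.length);
                captured.write(buffer, 0, n);
            }
            line.stop();
            System.out.println("Captured " + captured.size() + " bytes");
        }
    }
}
```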
An UnsupportedAudioFileException is an exception indicating that an operation failed because a file did not contain valid data of a recognized file type and format.