javax.sound.midi.ControllerEventListener

The ControllerEventListener interface should be implemented by classes whose instances need to be notified when a Sequencer has processed a requested type of MIDI control-change event. To register a ControllerEventListener object to receive such notifications, invoke the addControllerEventListener method of Sequencer, specifying the types of MIDI controllers about which you are interested in getting control-change notifications.
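
As an illustrative sketch of this registration flow (here and in the sketches below, javax.sound.midi.* is assumed to be imported and checked exceptions such as MidiUnavailableException are assumed to be handled by the caller; controller 7, channel volume, is an arbitrary choice):

Sequencer sequencer = MidiSystem.getSequencer();
int[] accepted = sequencer.addControllerEventListener(
        new ControllerEventListener() {
            public void controlChange(ShortMessage event) {
                // getData1() is the controller number, getData2() the new value.
                System.out.println("controller " + event.getData1()
                        + " -> " + event.getData2());
            }
        },
        new int[] {7});
// The returned array lists the controllers the sequencer will actually report.
System.out.println("notifications requested for " + accepted.length + " controller(s)");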

javax.sound.midi.core

No vars found in this namespace.

javax.sound.midi.Instrument

An instrument is a sound-synthesis algorithm with certain parameter settings, usually designed to emulate a specific real-world musical instrument or to achieve a specific sort of sound effect. Instruments are typically stored in collections called soundbanks. Before the instrument can be used to play notes, it must first be loaded onto a synthesizer, and then it must be selected for use on one or more channels, via a program-change command. MIDI notes that are subsequently received on those channels will be played using the sound of the selected instrument.
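
A rough sketch of that sequence (load the instrument, then select it via a program change), assuming the synthesizer ships a built-in soundbank:

Synthesizer synth = MidiSystem.getSynthesizer();
synth.open();
// getDefaultSoundbank() can return null if the synthesizer has no built-in soundbank.
Instrument inst = synth.getDefaultSoundbank().getInstruments()[0];
synth.loadInstrument(inst);
Patch patch = inst.getPatch();
MidiChannel channel = synth.getChannels()[0];
channel.programChange(patch.getBank(), patch.getProgram());
channel.noteOn(60, 93);   // subsequent notes on this channel use the loaded instrument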

javax.sound.midi.InvalidMidiDataException

An InvalidMidiDataException indicates that inappropriate MIDI data was encountered. This often means that the data is invalid in and of itself, from the perspective of the MIDI specification. An example would be an undefined status byte. However, the exception might simply mean that the data was invalid in the context it was used, or that the object to which the data was given was unable to parse or use it. For example, a file reader might not be able to parse a Type 2 MIDI file, even though that format is defined in the MIDI specification.
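
For example, attempting to build a message from an undefined status byte raises this exception (0xF4 is undefined in the MIDI specification):

ShortMessage message = new ShortMessage();
try {
    message.setMessage(0xF4);   // 0xF4 is an undefined status byte
} catch (InvalidMidiDataException e) {
    System.err.println("rejected: " + e.getMessage());
}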

javax.sound.midi.MetaEventListener

The MetaEventListener interface should be implemented by classes whose instances need to be notified when a Sequencer has processed a MetaMessage. To register a MetaEventListener object to receive such notifications, pass it as the argument to the addMetaEventListener method of Sequencer.
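
A small sketch, assuming a default sequencer is available; type 47 (0x2F) is the standard end-of-track meta-event:

Sequencer sequencer = MidiSystem.getSequencer();
sequencer.addMetaEventListener(new MetaEventListener() {
    public void meta(MetaMessage message) {
        if (message.getType() == 0x2F) {   // end of track
            System.out.println("playback finished");
        }
    }
});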

javax.sound.midi.MetaMessage

A MetaMessage is a MidiMessage that is not meaningful to synthesizers, but that can be stored in a MIDI file and interpreted by a sequencer program. (See the discussion in the MidiMessage class description.) The Standard MIDI Files specification defines various types of meta-events, such as sequence number, lyric, cue point, and set tempo. There are also meta-events for such information as lyrics, copyrights, tempo indications, time and key signatures, markers, etc. For more information, see the Standard MIDI Files 1.0 specification, which is part of the Complete MIDI 1.0 Detailed Specification published by the MIDI Manufacturer's Association (http://www.midi.org).

When data is being transported using MIDI wire protocol, a ShortMessage with the status value 0xFF represents a system reset message. In MIDI files, this same status value denotes a MetaMessage. The types of meta-message are distinguished from each other by the first byte that follows the status byte 0xFF. The subsequent bytes are data bytes. As with system exclusive messages, there are an arbitrary number of data bytes, depending on the type of MetaMessage.
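
For instance, a set-tempo meta-event (type 0x51) carries three data bytes holding microseconds per quarter note; 0x07A120 is 500000 µs, i.e. 120 BPM:

MetaMessage tempo = new MetaMessage();
byte[] data = {0x07, (byte)0xA1, 0x20};    // 500000 microseconds per quarter note
tempo.setMessage(0x51, data, data.length); // status 0xFF, type 0x51, then data bytes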

javax.sound.midi.MidiChannel

A MidiChannel object represents a single MIDI channel. Generally, each MidiChannel method processes a like-named MIDI "channel voice" or "channel mode" message as defined by the MIDI specification. However, MidiChannel adds some "get" methods that retrieve the value most recently set by one of the standard MIDI channel messages. Similarly, methods for per-channel solo and mute have been added.

A Synthesizer object has a collection of MidiChannels, usually one for each of the 16 channels prescribed by the MIDI 1.0 specification. The Synthesizer generates sound when its MidiChannels receive noteOn messages.
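
A minimal sketch of that interaction, assuming a default synthesizer is available (the sleep merely sustains the note):

Synthesizer synth = MidiSystem.getSynthesizer();
synth.open();
MidiChannel channel = synth.getChannels()[0];  // first of the 16 channels
channel.noteOn(60, 93);    // key number 60 (middle C), key velocity 93
Thread.sleep(500);
channel.noteOff(60);
synth.close();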

See the MIDI 1.0 Specification for more information about the prescribed behavior of the MIDI channel messages, which are not exhaustively documented here. The specification is titled MIDI Reference: The Complete MIDI 1.0 Detailed Specification, and is published by the MIDI Manufacturer's Association (http://www.midi.org).

MIDI was originally a protocol for reporting the gestures of a keyboard musician. This genesis is visible in the MidiChannel API, which preserves such MIDI concepts as key number, key velocity, and key pressure. It should be understood that the MIDI data does not necessarily originate with a keyboard player (the source could be a different kind of musician, or software). Some devices might generate constant values for velocity and pressure, regardless of how the note was performed. Also, the MIDI specification often leaves it up to the synthesizer to use the data in the way the implementor sees fit. For example, velocity data need not always be mapped to volume and/or brightness.

javax.sound.midi.MidiDevice

MidiDevice is the base interface for all MIDI devices. Common devices include synthesizers, sequencers, MIDI input ports, and MIDI output ports.

A MidiDevice can be a transmitter or a receiver of MIDI events, or both. Therefore, it can provide Transmitter or Receiver instances (or both). Typically, MIDI IN ports provide transmitters, MIDI OUT ports and synthesizers provide receivers. A Sequencer typically provides transmitters for playback and receivers for recording.

A MidiDevice can be opened and closed explicitly as well as implicitly. Explicit opening is accomplished by calling open(); explicit closing is done by calling close() on the MidiDevice instance. If an application opens a MidiDevice explicitly, it has to close it explicitly to free system resources and enable the application to exit cleanly.

Implicit opening is done by calling MidiSystem.getReceiver and MidiSystem.getTransmitter. The MidiDevice used by MidiSystem.getReceiver and MidiSystem.getTransmitter is implementation-dependent unless the properties javax.sound.midi.Receiver and javax.sound.midi.Transmitter are used (see the description of properties to select default providers in MidiSystem). A MidiDevice that was opened implicitly is closed implicitly by closing the Receiver or Transmitter that resulted in opening it. If more than one implicitly opening Receiver or Transmitter was obtained by the application, the device is closed after the last Receiver or Transmitter has been closed.

On the other hand, calling getReceiver or getTransmitter on the device instance directly does not open the device implicitly, and closing these Transmitters and Receivers does not close the device implicitly. To use a device with Receivers or Transmitters obtained this way, the device has to be opened and closed explicitly.
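
A sketch of the explicit open/close pattern (picking the first installed device is arbitrary; real code would select a device by its Info):

MidiDevice.Info[] infos = MidiSystem.getMidiDeviceInfo();
MidiDevice device = MidiSystem.getMidiDevice(infos[0]);
device.open();                 // explicit open
try {
    Receiver receiver = device.getReceiver();  // does NOT open the device implicitly
    // ... use the receiver ...
} finally {
    device.close();            // explicit close frees the system resource
}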

If implicit and explicit opening and closing are mixed on the same MidiDevice instance, the following rules apply:

After an explicit open (either before or after implicit opens), the device will not be closed by implicit closing. The only way to close an explicitly opened device is an explicit close.

An explicit close always closes the device, even if it also has been opened implicitly. A subsequent implicit close has no further effect.

To detect if a MidiDevice represents a hardware MIDI port, the following programming technique can be used:

MidiDevice device = ...;
if ( ! (device instanceof Sequencer) && ! (device instanceof Synthesizer)) {
  // we're now sure that device represents a MIDI port
  // ...
}

A MidiDevice includes a MidiDevice.Info object to provide manufacturer information and so on.

javax.sound.midi.MidiDevice$Info

A MidiDevice.Info object contains assorted data about a MidiDevice, including its name, the company who created it, and descriptive text.
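
For example, listing the installed devices and their Info fields:

for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
    System.out.println(info.getName() + " | " + info.getVendor()
            + " | " + info.getVersion() + " | " + info.getDescription());
}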

javax.sound.midi.MidiDeviceReceiver

MidiDeviceReceiver is a Receiver which represents a MIDI input connector of a MidiDevice (see MidiDevice.getReceiver()).

javax.sound.midi.MidiDeviceTransmitter

MidiDeviceTransmitter is a Transmitter which represents a MIDI output connector of a MidiDevice (see MidiDevice.getTransmitter()).

javax.sound.midi.MidiEvent

MIDI events contain a MIDI message and a corresponding time-stamp expressed in ticks, and can represent the MIDI event information stored in a MIDI file or a Sequence object. The duration of a tick is specified by the timing information contained in the MIDI file or Sequence object.

In Java Sound, MidiEvent objects are typically contained in a Track, and Tracks are likewise contained in a Sequence.
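
A sketch of pairing a message with a tick time stamp:

ShortMessage noteOn = new ShortMessage();
noteOn.setMessage(ShortMessage.NOTE_ON, 0, 60, 93);
MidiEvent event = new MidiEvent(noteOn, 0L);   // fire at tick 0
System.out.println("tick " + event.getTick());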

javax.sound.midi.MidiFileFormat

A MidiFileFormat object encapsulates a MIDI file's type, as well as its length and timing information.

A MidiFileFormat object can include a set of properties. A property is a pair of key and value: the key is of type String, the associated property value is an arbitrary object. Properties specify additional informational metadata (such as author or copyright). Properties are optional information, and file reader and file writer implementations are not required to provide or recognize properties.

The following table lists some common properties that should be used in implementations:

MIDI File Format Properties

  Property key   Value type   Description
  "author"       String       name of the author of this file
  "title"        String       title of this file
  "copyright"    String       copyright message
  "date"         Date         date of the recording or release
  "comment"      String       an arbitrary text

javax.sound.midi.MidiMessage

MidiMessage is the base class for MIDI messages. They include not only the standard MIDI messages that a synthesizer can respond to, but also "meta-events" that can be used by sequencer programs. There are meta-events for such information as lyrics, copyrights, tempo indications, time and key signatures, markers, etc. For more information, see the Standard MIDI Files 1.0 specification, which is part of the Complete MIDI 1.0 Detailed Specification published by the MIDI Manufacturer's Association (http://www.midi.org).

The base MidiMessage class provides access to three types of information about a MIDI message:

  - The message's status byte
  - The total length of the message in bytes (the status byte plus any data bytes)
  - A byte array containing the complete message

MidiMessage includes methods to get, but not set, these values. Setting them is a subclass responsibility.

The MIDI standard expresses MIDI data in bytes. However, because Java™ uses signed bytes, the Java Sound API uses integers instead of bytes when expressing MIDI data. For example, the getStatus() method of MidiMessage returns MIDI status bytes as integers. If you are processing MIDI data that originated outside Java Sound and now is encoded as signed bytes, the bytes can be converted to integers using this conversion, where midiByte is the signed byte: int i = (int)(midiByte & 0xFF)

If you simply need to pass a known MIDI byte value as a method parameter, it can be expressed directly as an integer, using (for example) decimal or hexadecimal notation. For instance, to pass the "active sensing" status byte as the first argument to ShortMessage's setMessage(int) method, you can express it as 254 or 0xFE.

javax.sound.midi.MidiSystem

The MidiSystem class provides access to the installed MIDI system resources, including devices such as synthesizers, sequencers, and MIDI input and output ports. A typical simple MIDI application might begin by invoking one or more MidiSystem methods to learn what devices are installed and to obtain the ones needed in that application.

The class also has methods for reading files, streams, and URLs that contain standard MIDI file data or soundbanks. You can query the MidiSystem for the format of a specified MIDI file.

You cannot instantiate a MidiSystem; all the methods are static.

Properties can be used to specify default MIDI devices. Both system properties and a properties file are considered. The sound.properties properties file is read from an implementation-specific location (typically it is the lib directory in the Java installation directory). If a property exists both as a system property and in the properties file, the system property takes precedence. If none is specified, a suitable default is chosen among the available devices. The syntax of the properties file is specified in Properties.load. The following table lists the available property keys and which methods consider them:

MIDI System Property Keys

  Property Key                   Interface     Affected Method
  javax.sound.midi.Receiver      Receiver      getReceiver()
  javax.sound.midi.Sequencer     Sequencer     getSequencer()
  javax.sound.midi.Synthesizer   Synthesizer   getSynthesizer()
  javax.sound.midi.Transmitter   Transmitter   getTransmitter()

The property value consists of the provider class name and the device name, separated by the hash mark ("#"). The provider class name is the fully-qualified name of a concrete MIDI device provider class. The device name is matched against the String returned by the getName method of MidiDevice.Info. Either the class name or the device name may be omitted. If only the class name is specified, the trailing hash mark is optional.

If the provider class is specified, and it can be successfully retrieved from the installed providers, the list of MidiDevice.Info objects is retrieved from the provider. Otherwise, or when these devices do not provide a subsequent match, the list is retrieved from getMidiDeviceInfo() to contain all available MidiDevice.Info objects.

If a device name is specified, the resulting list of MidiDevice.Info objects is searched: the first one with a matching name, and whose MidiDevice implements the respective interface, will be returned. If no matching MidiDevice.Info object is found, or the device name is not specified, the first suitable device from the resulting list will be returned. For Sequencer and Synthesizer, a device is suitable if it implements the respective interface; whereas for Receiver and Transmitter, a device is suitable if it implements neither Sequencer nor Synthesizer and provides at least one Receiver or Transmitter, respectively.

For example, the property javax.sound.midi.Receiver with a value "com.sun.media.sound.MidiProvider#SunMIDI1" will have the following consequences when getReceiver is called: if the class com.sun.media.sound.MidiProvider exists in the list of installed MIDI device providers, the first Receiver device with name "SunMIDI1" will be returned. If it cannot be found, the first Receiver from that provider will be returned, regardless of name. If there is none, the first Receiver with name "SunMIDI1" in the list of all devices (as returned by getMidiDeviceInfo) will be returned, or, if not found, the first Receiver that can be found in the list of all devices is returned. If that fails, too, a MidiUnavailableException is thrown.
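
As a sketch of the property mechanism, setting the sequencer preference programmatically before the first lookup (the device name below is purely illustrative):

// Class name omitted, so the leading hash mark selects by device name only.
System.setProperty("javax.sound.midi.Sequencer", "#Real Time Sequencer");
Sequencer sequencer = MidiSystem.getSequencer();  // may throw MidiUnavailableException
System.out.println(sequencer.getDeviceInfo().getName());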

javax.sound.midi.MidiUnavailableException

A MidiUnavailableException is thrown when a requested MIDI component cannot be opened or created because it is unavailable. This often occurs when a device is in use by another application. More generally, it can occur when there is a finite number of a certain kind of resource that can be used for some purpose, and all of them are already in use (perhaps all by this application). For an example of the latter case, see the setReceiver method of Transmitter.

javax.sound.midi.Patch

A Patch object represents a location, on a MIDI synthesizer, into which a single instrument is stored (loaded). Every Instrument object has its own Patch object that specifies the memory location into which that instrument should be loaded. The location is specified abstractly by a bank index and a program number (not by any scheme that directly refers to a specific address or offset in RAM). This is a hierarchical indexing scheme: MIDI provides for up to 16384 banks, each of which contains up to 128 program locations. For example, a minimal sort of synthesizer might have only one bank of instruments, and only 32 instruments (programs) in that bank.

To select what instrument should play the notes on a particular MIDI channel, two kinds of MIDI message are used that specify a patch location: a bank-select command, and a program-change channel command. The Java Sound equivalent is the programChange(int, int) method of MidiChannel.
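
A sketch that inspects the patch locations of a synthesizer's available instruments:

Synthesizer synth = MidiSystem.getSynthesizer();
synth.open();
for (Instrument inst : synth.getAvailableInstruments()) {
    Patch patch = inst.getPatch();
    System.out.println(inst.getName() + " -> bank " + patch.getBank()
            + ", program " + patch.getProgram());
}
synth.close();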

javax.sound.midi.Receiver

A Receiver receives MidiEvent objects and typically does something useful in response, such as interpreting them to generate sound or raw MIDI output. Common MIDI receivers include synthesizers and MIDI Out ports.
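
A minimal sketch, assuming a default receiver is available (a time stamp of -1 means the message carries no time stamp):

Receiver receiver = MidiSystem.getReceiver();
ShortMessage noteOn = new ShortMessage();
noteOn.setMessage(ShortMessage.NOTE_ON, 0, 60, 93);
receiver.send(noteOn, -1);   // deliver immediately, no time stamp
receiver.close();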

javax.sound.midi.Sequence

A Sequence is a data structure containing musical information (often an entire song or composition) that can be played back by a Sequencer object. Specifically, the Sequence contains timing information and one or more tracks. Each track consists of a series of MIDI events (such as note-ons, note-offs, program changes, and meta-events). The sequence's timing information specifies the type of unit that is used to time-stamp the events in the sequence.

A Sequence can be created from a MIDI file by reading the file into an input stream and invoking one of the getSequence methods of MidiSystem. A sequence can also be built from scratch by adding new Tracks to an empty Sequence, and adding MidiEvent objects to these Tracks.
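
For example, reading a sequence from a file (hypothetical name) and inspecting it:

Sequence sequence = MidiSystem.getSequence(new java.io.File("song.mid"));
System.out.println(sequence.getTracks().length + " track(s), "
        + sequence.getTickLength() + " ticks long");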

javax.sound.midi.Sequencer

A hardware or software device that plays back a MIDI sequence is known as a sequencer. A MIDI sequence contains lists of time-stamped MIDI data, such as might be read from a standard MIDI file. Most sequencers also provide functions for creating and editing sequences.

The Sequencer interface includes methods for the following basic MIDI sequencer operations:

  - obtaining a sequence from MIDI file data
  - starting and stopping playback
  - moving to an arbitrary position in the sequence
  - changing the tempo (speed) of playback
  - synchronizing playback to an internal clock or to received MIDI messages
  - controlling the timing of another device

In addition, the following operations are supported, either directly, or indirectly through objects that the Sequencer has access to:

  - editing the data by adding or deleting individual MIDI events or entire tracks
  - muting or soloing individual tracks in the sequence
  - notifying listener objects about any meta-events or control-change events encountered while playing back the sequence
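
A sketch of basic playback covering several of the operations above (the file name is hypothetical):

Sequencer sequencer = MidiSystem.getSequencer();
sequencer.open();
sequencer.setSequence(MidiSystem.getSequence(new java.io.File("song.mid")));
sequencer.setTickPosition(0);    // move to an arbitrary position
sequencer.setTempoFactor(1.5f);  // change the playback speed
sequencer.start();
// ... later ...
sequencer.stop();
sequencer.close();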

javax.sound.midi.Sequencer$SyncMode

A SyncMode object represents one of the ways in which a MIDI sequencer's notion of time can be synchronized with a master or slave device. If the sequencer is being synchronized to a master, the sequencer revises its current time in response to messages from the master. If the sequencer has a slave, the sequencer similarly sends messages to control the slave's timing.

There are three predefined modes that specify possible masters for a sequencer: INTERNAL_CLOCK, MIDI_SYNC, and MIDI_TIME_CODE. The latter two work if the sequencer receives MIDI messages from another device. In these two modes, the sequencer's time gets reset based on system real-time timing clock messages or MIDI time code (MTC) messages, respectively. These two modes can also be used as slave modes, in which case the sequencer sends the corresponding types of MIDI messages to its receiver (whether or not the sequencer is also receiving them from a master). A fourth mode, NO_SYNC, is used to indicate that the sequencer should not control its receiver's timing.
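
A sketch of choosing master and slave modes on a sequencer:

Sequencer sequencer = MidiSystem.getSequencer();
sequencer.setMasterSyncMode(Sequencer.SyncMode.INTERNAL_CLOCK); // follow its own clock
sequencer.setSlaveSyncMode(Sequencer.SyncMode.NO_SYNC);         // don't drive a slave's timing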

javax.sound.midi.ShortMessage

A ShortMessage contains a MIDI message that has at most two data bytes following its status byte. The types of MIDI message that satisfy this criterion are channel voice, channel mode, system common, and system real-time--in other words, everything except system exclusive and meta-events. The ShortMessage class provides methods for getting and setting the contents of the MIDI message.

A number of ShortMessage methods have integer parameters by which you specify a MIDI status or data byte. If you know the numeric value, you can express it directly. For system common and system real-time messages, you can often use the corresponding fields of ShortMessage, such as SYSTEM_RESET. For channel messages, the upper four bits of the status byte are specified by a command value and the lower four bits are specified by a MIDI channel number. To convert incoming MIDI data bytes that are in the form of Java's signed bytes, you can use the conversion code given in the MidiMessage class description.
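
A sketch showing how a command value and a channel number combine into the status byte:

ShortMessage message = new ShortMessage();
// command CONTROL_CHANGE (0xB0), channel index 3, controller 7, value 100
message.setMessage(ShortMessage.CONTROL_CHANGE, 3, 7, 100);
System.out.println(Integer.toHexString(message.getStatus())); // "b3" = 0xB0 | 3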

javax.sound.midi.Soundbank

A Soundbank contains a set of Instruments that can be loaded into a Synthesizer. Note that a Java Sound Soundbank is different from a MIDI bank. MIDI permits up to 16384 banks, each containing up to 128 instruments (also sometimes called programs, patches, or timbres). However, a Soundbank can contain 16384 times 128 instruments, because the instruments within a Soundbank are indexed by both a MIDI program number and a MIDI bank number (via a Patch object). Thus, a Soundbank can be thought of as a collection of MIDI banks.

Soundbank includes methods that return String objects containing the sound bank's name, manufacturer, version number, and description. The precise content and format of these strings is left to the implementor.

Different synthesizers use a variety of synthesis techniques. A common one is wavetable synthesis, in which a segment of recorded sound is played back, often with looping and pitch change. The Downloadable Sound (DLS) format uses segments of recorded sound, as does the Headspace Engine. Soundbanks and Instruments that are based on wavetable synthesis (or other uses of stored sound recordings) should typically implement the getResources() method to provide access to these recorded segments. This is optional, however; the method can return a zero-length array if the synthesis technique doesn't use sampled sound (FM synthesis and physical modeling are examples of such techniques), or if it does but the implementor chooses not to make the samples accessible.
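
A sketch of loading a soundbank, assuming the synthesizer ships a default one (getDefaultSoundbank may return null):

Synthesizer synth = MidiSystem.getSynthesizer();
synth.open();
Soundbank bank = synth.getDefaultSoundbank();
if (bank != null && synth.isSoundbankSupported(bank)) {
    System.out.println(bank.getName() + ", " + bank.getVendor());
    synth.loadAllInstruments(bank);
}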

javax.sound.midi.SoundbankResource

A SoundbankResource represents any audio resource stored in a Soundbank. Common soundbank resources include:

  - Instruments. An instrument may be specified in a variety of ways. However, all soundbanks have some mechanism for defining instruments. In doing so, they may reference other resources stored in the soundbank. Each instrument has a Patch which specifies the MIDI program and bank by which it may be referenced in MIDI messages. Instrument information may be stored in Instrument objects.
  - Audio samples. A sample typically is a sampled audio waveform which contains a short sound recording whose duration is a fraction of a second, or at most a few seconds. These audio samples may be used by a Synthesizer to synthesize sound in response to MIDI commands, or extracted for use by an application. (The terminology reflects musicians' use of the word "sample" to refer collectively to a series of contiguous audio samples or frames, rather than to a single, instantaneous sample.) The data class for an audio sample will be an object that encapsulates the audio sample data itself and information about how to interpret it (the format of the audio data), such as an AudioInputStream.
  - Embedded sequences. A sound bank may contain built-in song data stored in a data object such as a Sequence.

Synthesizers that use wavetable synthesis or related techniques play back the audio in a sample when synthesizing notes, often when emulating the real-world instrument that was originally recorded. However, there is not necessarily a one-to-one correspondence between the Instruments and samples in a Soundbank. A single Instrument can use multiple SoundbankResources (typically for notes of dissimilar pitch or brightness). Also, more than one Instrument can use the same sample.

javax.sound.midi.spi.core

No vars found in this namespace.

javax.sound.midi.spi.MidiDeviceProvider

A MidiDeviceProvider is a factory or provider for a particular type of MIDI device. This mechanism allows the implementation to determine how resources are managed in the creation and management of a device.

javax.sound.midi.spi.MidiFileReader

A MidiFileReader supplies MIDI file-reading services. Classes implementing this interface can parse the format information from one or more types of MIDI file, and can produce a Sequence object from files of these types.

javax.sound.midi.spi.MidiFileWriter

A MidiFileWriter supplies MIDI file-writing services. Classes that implement this interface can write one or more types of MIDI file from a Sequence object.

javax.sound.midi.spi.SoundbankReader

A SoundbankReader supplies soundbank file-reading services. Concrete subclasses of SoundbankReader parse a given soundbank file, producing a Soundbank object that can be loaded into a Synthesizer.

javax.sound.midi.Synthesizer

A Synthesizer generates sound. This usually happens when one of the Synthesizer's MidiChannel objects receives a noteOn message, either directly or via the Synthesizer object. Many Synthesizers support Receivers, through which MIDI events can be delivered to the Synthesizer. In such cases, the Synthesizer typically responds by sending a corresponding message to the appropriate MidiChannel, or by processing the event itself if the event isn't one of the MIDI channel messages.

The Synthesizer interface includes methods for loading and unloading instruments from soundbanks. An instrument is a specification for synthesizing a certain type of sound, whether that sound emulates a traditional instrument or is some kind of sound effect or other imaginary sound. A soundbank is a collection of instruments, organized by bank and program number (via the instrument's Patch object). Different Synthesizer classes might implement different sound-synthesis techniques, meaning that some instruments and not others might be compatible with a given synthesizer. Also, synthesizers may have a limited amount of memory for instruments, meaning that not every soundbank and instrument can be used by every synthesizer, even if the synthesis technique is compatible. To see whether the instruments from a certain soundbank can be played by a given synthesizer, invoke the isSoundbankSupported method of Synthesizer.

"Loading" an instrument means that that instrument becomes available for synthesizing notes. The instrument is loaded into the bank and program location specified by its Patch object. Loading does not necessarily mean that subsequently played notes will immediately have the sound of this newly loaded instrument. For the instrument to play notes, one of the synthesizer's MidiChannel objects must receive (or have received) a program-change message that causes that particular instrument's bank and program number to be selected.

javax.sound.midi.SysexMessage

A SysexMessage object represents a MIDI system exclusive message.

When a system exclusive message is read from a MIDI file, it always has a defined length. Data from a system exclusive message from a MIDI file should be stored in the data array of a SysexMessage as follows: the system exclusive message status byte (0xF0 or 0xF7), all message data bytes, and finally the end-of-exclusive flag (0xF7). The length reported by the SysexMessage object is therefore the length of the system exclusive data plus two: one byte for the status byte and one for the end-of-exclusive flag.

As dictated by the Standard MIDI Files specification, two status byte values are legal for a SysexMessage read from a MIDI file:

  - 0xF0: System Exclusive message (same as in MIDI wire protocol)
  - 0xF7: Special System Exclusive message

When Java Sound is used to handle system exclusive data that is being received using MIDI wire protocol, it should place the data in one or more SysexMessages. In this case, the length of the system exclusive data is not known in advance; the end of the system exclusive data is marked by an end-of-exclusive flag (0xF7) in the MIDI wire byte stream.

  - 0xF0: System Exclusive message (same as in MIDI wire protocol)
  - 0xF7: End of Exclusive (EOX)

The first SysexMessage object containing data for a particular system exclusive message should have the status value 0xF0. If this message contains all the system exclusive data for the message, it should end with the status byte 0xF7 (EOX). Otherwise, additional system exclusive data should be sent in one or more SysexMessages with a status value of 0xF7. The SysexMessage containing the last of the data for the system exclusive message should end with the value 0xF7 (EOX) to mark the end of the system exclusive message.

If system exclusive data from SysexMessages objects is being transmitted using MIDI wire protocol, only the initial 0xF0 status byte, the system exclusive data itself, and the final 0xF7 (EOX) byte should be propagated; any 0xF7 status bytes used to indicate that a SysexMessage contains continuing system exclusive data should not be propagated via MIDI wire protocol.
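
As a sketch, building the standard General MIDI "GM System On" message, which starts with 0xF0 and ends with the EOX byte 0xF7:

byte[] data = {(byte)0xF0, 0x7E, 0x7F, 0x09, 0x01, (byte)0xF7};
SysexMessage sysex = new SysexMessage();
sysex.setMessage(data, data.length);   // length = sysex data plus status and EOX
System.out.println(sysex.getLength()); // 6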

javax.sound.midi.Track

A MIDI track is an independent stream of MIDI events (time-stamped MIDI data) that can be stored along with other tracks in a standard MIDI file. The MIDI specification allows only 16 channels of MIDI data, but tracks are a way to get around this limitation. A MIDI file can contain any number of tracks, each containing its own stream of up to 16 channels of MIDI data.

A Track occupies a middle level in the hierarchy of data played by a Sequencer: sequencers play sequences, which contain tracks, which contain MIDI events. A sequencer may provide controls that mute or solo individual tracks.

The timing information and resolution for a track is controlled by and stored in the sequence containing the track. A given Track is considered to belong to the particular Sequence that maintains its timing. For this reason, a new (empty) track is created by calling the Sequence.createTrack() method, rather than by directly invoking a Track constructor.

The Track class provides methods to edit the track by adding or removing MidiEvent objects from it. These operations keep the event list in the correct time order. Methods are also included to obtain the track's size, in terms of either the number of events it contains or its duration in ticks.
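
A short sketch of this workflow (tick values and note numbers are chosen arbitrarily):

    import javax.sound.midi.*;

    public class TrackEditing {
        public static void main(String[] args) throws InvalidMidiDataException {
            // Tracks are created through their owning Sequence, not via a constructor.
            Sequence sequence = new Sequence(Sequence.PPQ, 480);  // 480 ticks per quarter note
            Track track = sequence.createTrack();

            // Add a middle-C note-on at tick 0 and the matching note-off at tick 480;
            // Track keeps the event list sorted in time order.
            track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_ON, 0, 60, 93), 0));
            track.add(new MidiEvent(new ShortMessage(ShortMessage.NOTE_OFF, 0, 60, 0), 480));

            System.out.println(track.size());   // event count, including the end-of-track meta event
            System.out.println(track.ticks());  // duration in ticks
        }
    }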

A MIDI track is an independent stream of MIDI events (time-stamped MIDI
data) that can be stored along with other tracks in a standard MIDI file.
The MIDI specification allows only 16 channels of MIDI data, but tracks
are a way to get around this limitation.  A MIDI file can contain any number
of tracks, each containing its own stream of up to 16 channels of MIDI data.

A Track occupies a middle level in the hierarchy of data played
by a Sequencer: sequencers play sequences, which contain tracks,
which contain MIDI events.  A sequencer may provide controls that mute
or solo individual tracks.

The timing information and resolution for a track is controlled by and stored
in the sequence containing the track. A given Track
is considered to belong to the particular Sequence that
maintains its timing. For this reason, a new (empty) track is created by calling the
Sequence.createTrack() method, rather than by directly invoking a
Track constructor.

The Track class provides methods to edit the track by adding
or removing MidiEvent objects from it.  These operations keep
the event list in the correct time order.  Methods are also
included to obtain the track's size, in terms of either the number of events
it contains or its duration in ticks.
raw docstring

javax.sound.midi.Transmitter

A Transmitter sends MidiEvent objects to one or more Receivers. Common MIDI transmitters include sequencers and MIDI input ports.
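
A minimal sketch that wires the default transmitter to the default receiver; what these defaults resolve to is system-dependent:

    import javax.sound.midi.*;

    public class WireInput {
        public static void main(String[] args) throws MidiUnavailableException {
            Transmitter transmitter = MidiSystem.getTransmitter();  // e.g. a MIDI input port
            Receiver receiver = MidiSystem.getReceiver();           // e.g. the default synthesizer
            transmitter.setReceiver(receiver);  // MIDI messages now flow from source to sink
        }
    }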

A Transmitter sends MidiEvent objects to one or more
Receivers. Common MIDI transmitters include sequencers
and MIDI input ports.
raw docstring

javax.sound.midi.VoiceStatus

A VoiceStatus object contains information about the current status of one of the voices produced by a Synthesizer.

MIDI synthesizers are generally capable of producing some maximum number of simultaneous notes, also referred to as voices. A voice is a stream of successive single notes, and the process of assigning incoming MIDI notes to specific voices is known as voice allocation. However, the voice-allocation algorithm and the contents of each voice are normally internal to a MIDI synthesizer and hidden from outside view. One can, of course, learn from MIDI messages which notes the synthesizer is playing, and one might be able to deduce something about the assignment of notes to voices. But MIDI itself does not provide a means to report which notes a synthesizer has assigned to which voice, nor even to report how many voices the synthesizer is capable of synthesizing.

In Java Sound, however, a Synthesizer class can expose the contents of its voices through its getVoiceStatus() method. This behavior is recommended but optional; synthesizers that don't expose their voice allocation simply return a zero-length array. A Synthesizer that does report its voice status should maintain this information at all times for all of its voices, whether they are currently sounding or not. In other words, a given type of Synthesizer always has a fixed number of voices, equal to the maximum number of simultaneous notes it is capable of sounding.

If the voice is not currently processing a MIDI note, it is considered inactive. A voice is inactive when it has been given no note-on commands, or when every note-on command received has been terminated by a corresponding note-off (or by an "all notes off" message). For example, this happens when a synthesizer capable of playing 16 simultaneous notes is told to play a four-note chord; only four voices are active in this case (assuming no earlier notes are still playing). Usually, a voice whose status is reported as active is producing audible sound, but this is not always true; it depends on the details of the instrument (that is, the synthesis algorithm) and how long the note has been going on. For example, a voice may be synthesizing the sound of a single hand-clap. Because this sound dies away so quickly, it may become inaudible before a note-off message is received. In such a situation, the voice is still considered active even though no sound is currently being produced.

Besides its active or inactive status, the VoiceStatus class provides fields that reveal the voice's current MIDI channel, bank and program number, MIDI note number, and MIDI volume. All of these can change during the course of a voice. While the voice is inactive, each of these fields has an unspecified value, so you should check the active field first.
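
A small sketch of polling voice status; whether the array is non-empty depends on the installed synthesizer:

    import javax.sound.midi.*;

    public class PollVoices {
        public static void main(String[] args) throws MidiUnavailableException {
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();
            VoiceStatus[] voices = synth.getVoiceStatus();  // zero-length if unsupported
            for (VoiceStatus v : voices) {
                if (v.active) {  // check active before trusting the other fields
                    System.out.println("channel=" + v.channel + " note=" + v.note
                            + " volume=" + v.volume);
                }
            }
            synth.close();
        }
    }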

A VoiceStatus object contains information about the current
status of one of the voices produced by a Synthesizer.

MIDI synthesizers are generally capable of producing some maximum number of
simultaneous notes, also referred to as voices.  A voice is a stream
of successive single notes, and the process of assigning incoming MIDI notes to
specific voices is known as voice allocation.
However, the voice-allocation algorithm and the contents of each voice are
normally internal to a MIDI synthesizer and hidden from outside view.  One can, of
course, learn from MIDI messages which notes the synthesizer is playing, and
one might be able to deduce something about the assignment of notes to voices.
But MIDI itself does not provide a means to report which notes a
synthesizer has assigned to which voice, nor even to report how many voices
the synthesizer is capable of synthesizing.

In Java Sound, however, a
Synthesizer class can expose the contents of its voices through its
getVoiceStatus() method.
This behavior is recommended but optional;
synthesizers that don't expose their voice allocation simply return a
zero-length array. A Synthesizer that does report its voice status
should maintain this information at
all times for all of its voices, whether they are currently sounding or
not.  In other words, a given type of Synthesizer always has a fixed
number of voices, equal to the maximum number of simultaneous notes it is
capable of sounding.


If the voice is not currently processing a MIDI note, it
is considered inactive.  A voice is inactive when it has
been given no note-on commands, or when every note-on command received has
been terminated by a corresponding note-off (or by an "all notes off"
message).  For example, this happens when a synthesizer capable of playing 16
simultaneous notes is told to play a four-note chord; only
four voices are active in this case (assuming no earlier notes are still playing).
Usually, a voice whose status is reported as active is producing audible sound, but this
is not always true; it depends on the details of the instrument (that
is, the synthesis algorithm) and how long the note has been going on.
For example, a voice may be synthesizing the sound of a single hand-clap.  Because
this sound dies away so quickly, it may become inaudible before a note-off
message is received.  In such a situation, the voice is still considered active
even though no sound is currently being produced.

Besides its active or inactive status, the VoiceStatus class
provides fields that reveal the voice's current MIDI channel, bank and
program number, MIDI note number, and MIDI volume.  All of these can
change during the course of a voice.  While the voice is inactive, each
of these fields has an unspecified value, so you should check the active
field first.
raw docstring

javax.sound.sampled.AudioFileFormat

An instance of the AudioFileFormat class describes an audio file, including the file type, the file's length in bytes, the length in sample frames of the audio data contained in the file, and the format of the audio data.

The AudioSystem class includes methods for determining the format of an audio file, obtaining an audio input stream from an audio file, and writing an audio file from an audio input stream.

An AudioFileFormat object can include a set of properties. A property is a pair of key and value: the key is of type String, the associated property value is an arbitrary object. Properties specify additional informational metadata (such as the author, copyright, or file duration). Properties are optional information, and file reader and file writer implementations are not required to provide or recognize properties.

The following table lists some common properties that should be used in implementations:

Audio File Format Properties

Property key   Value type   Description
"duration"     Long         playback duration of the file in microseconds
"author"       String       name of the author of this file
"title"        String       title of this file
"copyright"    String       copyright message
"date"         Date         date of the recording or release
"comment"      String       an arbitrary text

An instance of the AudioFileFormat class describes
an audio file, including the file type, the file's length in bytes,
the length in sample frames of the audio data contained in the file,
and the format of the audio data.

The AudioSystem class includes methods for determining the format
of an audio file, obtaining an audio input stream from an audio file, and
writing an audio file from an audio input stream.

An AudioFileFormat object can
include a set of properties. A property is a pair of key and value:
the key is of type String, the associated property
value is an arbitrary object.
Properties specify additional informational
meta data (like an author, copyright, or file duration).
Properties are optional information, and file reader and file
writer implementations are not required to provide or
recognize properties.

The following table lists some common properties that should
be used in implementations:


 Audio File Format Properties

  Property key
  Value type
  Description


  "duration"
  Long
  playback duration of the file in microseconds


  "author"
  String
  name of the author of this file


  "title"
  String
  title of this file


  "copyright"
  String
  copyright message


  "date"
  Date
  date of the recording or release


  "comment"
  String
  an arbitrary text
raw docstring

javax.sound.sampled.AudioFileFormat$Type

An instance of the Type class represents one of the standard types of audio file. Static instances are provided for the common types.

An instance of the Type class represents one of the
standard types of audio file.  Static instances are provided for the
common types.
raw docstring

javax.sound.sampled.AudioFormat

AudioFormat is the class that specifies a particular arrangement of data in a sound stream. By examining the information stored in the audio format, you can discover how to interpret the bits in the binary sound data.

Every data line has an audio format associated with its data stream. The audio format of a source (playback) data line indicates what kind of data the data line expects to receive for output. For a target (capture) data line, the audio format specifies the kind of the data that can be read from the line. Sound files also have audio formats, of course. The AudioFileFormat class encapsulates an AudioFormat in addition to other, file-specific information. Similarly, an AudioInputStream has an AudioFormat.

The AudioFormat class accommodates a number of common sound-file encoding techniques, including pulse-code modulation (PCM), mu-law encoding, and a-law encoding. These encoding techniques are predefined, but service providers can create new encoding types. The encoding that a specific format uses is named by its encoding field.

In addition to the encoding, the audio format includes other properties that further specify the exact arrangement of the data. These include the number of channels, sample rate, sample size, byte order, frame rate, and frame size. Sounds may have different numbers of audio channels: one for mono, two for stereo. The sample rate measures how many "snapshots" (samples) of the sound pressure are taken per second, per channel. (If the sound is stereo rather than mono, two samples are actually measured at each instant of time: one for the left channel, and another for the right channel; however, the sample rate still measures the number per channel, so the rate is the same regardless of the number of channels. This is the standard use of the term.) The sample size indicates how many bits are used to store each snapshot; 8 and 16 are typical values. For 16-bit samples (or any other sample size larger than a byte), byte order is important; the bytes in each sample are arranged in either the "little-endian" or "big-endian" style. For encodings like PCM, a frame consists of the set of samples for all channels at a given point in time, and so the size of a frame (in bytes) is always equal to the size of a sample (in bytes) times the number of channels. However, with some other sorts of encodings a frame can contain a bundle of compressed data for a whole series of samples, as well as additional, non-sample data. For such encodings, the sample rate and sample size refer to the data after it is decoded into PCM, and so they are completely different from the frame rate and frame size.
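
As a concrete check of the PCM relationships just described, a minimal sketch (CD-quality parameters chosen arbitrarily):

    import javax.sound.sampled.AudioFormat;

    public class PcmFrames {
        public static void main(String[] args) {
            // 44100 Hz, 16-bit, stereo, signed, little-endian linear PCM.
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            // For PCM: frame size = sample bytes * channels; frame rate = sample rate.
            System.out.println(format.getFrameSize());  // (16 / 8) * 2 = 4 bytes
            System.out.println(format.getFrameRate());  // 44100.0
            System.out.println(format.getEncoding());   // PCM_SIGNED
        }
    }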

An AudioFormat object can include a set of properties. A property is a pair of key and value: the key is of type String, the associated property value is an arbitrary object. Properties specify additional format specifications, like the bit rate for compressed formats. Properties are mainly used as a means to transport additional information of the audio format to and from the service providers. Therefore, properties are ignored in the matches(AudioFormat) method. However, methods which rely on the installed service providers, like isConversionSupported(AudioFormat, AudioFormat), may consider properties, depending on the respective service provider implementation.

The following table lists some common properties which service providers should use, if applicable:

Audio Format Properties

Property key   Value type   Description
"bitrate"      Integer      average bit rate in bits per second
"vbr"          Boolean      true, if the file is encoded in variable bit rate (VBR)
"quality"      Integer      encoding/conversion quality, 1..100

Vendors of service providers (plugins) are encouraged to seek information about other already established properties in third party plugins, and follow the same conventions.

AudioFormat is the class that specifies a particular arrangement of data in a sound stream.
By examining the information stored in the audio format, you can discover how to interpret the bits in the
binary sound data.

Every data line has an audio format associated with its data stream. The audio format of a source (playback) data line indicates
what kind of data the data line expects to receive for output.  For a target (capture) data line, the audio format specifies the kind
of the data that can be read from the line.
Sound files also have audio formats, of course.  The AudioFileFormat
class encapsulates an AudioFormat in addition to other,
file-specific information.  Similarly, an AudioInputStream has an
AudioFormat.

The AudioFormat class accommodates a number of common sound-file encoding techniques, including
pulse-code modulation (PCM), mu-law encoding, and a-law encoding.  These encoding techniques are predefined,
but service providers can create new encoding types.
The encoding that a specific format uses is named by its encoding field.

In addition to the encoding, the audio format includes other properties that further specify the exact
arrangement of the data.
These include the number of channels, sample rate, sample size, byte order, frame rate, and frame size.
Sounds may have different numbers of audio channels: one for mono, two for stereo.
The sample rate measures how many "snapshots" (samples) of the sound pressure are taken per second, per channel.
(If the sound is stereo rather than mono, two samples are actually measured at each instant of time: one for the left channel,
and another for the right channel; however, the sample rate still measures the number per channel, so the rate is the same
regardless of the number of channels.   This is the standard use of the term.)
The sample size indicates how many bits are used to store each snapshot; 8 and 16 are typical values.
For 16-bit samples (or any other sample size larger than a byte),
byte order is important; the bytes in each sample are arranged in
either the "little-endian" or "big-endian" style.
For encodings like PCM, a frame consists of the set of samples for all channels at a given
point in time, and so the size of a frame (in bytes) is always equal to the size of a sample (in bytes) times
the number of channels.  However, with some other sorts of encodings a frame can contain
a bundle of compressed data for a whole series of samples, as well as additional, non-sample
data.  For such encodings, the sample rate and sample size refer to the data after it is decoded into PCM,
and so they are completely different from the frame rate and frame size.

An AudioFormat object can include a set of
properties. A property is a pair of key and value: the key
is of type String, the associated property
value is an arbitrary object. Properties specify
additional format specifications, like the bit rate for
compressed formats. Properties are mainly used as a means
to transport additional information of the audio format
to and from the service providers. Therefore, properties
are ignored in the matches(AudioFormat) method.
However, methods which rely on the installed service
providers, like isConversionSupported(AudioFormat, AudioFormat), may consider
properties, depending on the respective service provider
implementation.

The following table lists some common properties which
service providers should use, if applicable:


 Audio Format Properties

  Property key
  Value type
  Description


  "bitrate"
  Integer
  average bit rate in bits per second


  "vbr"
  Boolean
  true, if the file is encoded in variable bit
      rate (VBR)


  "quality"
  Integer
  encoding/conversion quality, 1..100



Vendors of service providers (plugins) are encouraged
to seek information about other already established
properties in third party plugins, and follow the same
conventions.
raw docstring

javax.sound.sampled.AudioFormat$Encoding

The Encoding class names the specific type of data representation used for an audio stream. The encoding includes aspects of the sound format other than the number of channels, sample rate, sample size, frame rate, frame size, and byte order.

One ubiquitous type of audio encoding is pulse-code modulation (PCM), which is simply a linear (proportional) representation of the sound waveform. With PCM, the number stored in each sample is proportional to the instantaneous amplitude of the sound pressure at that point in time. The numbers may be signed or unsigned integers or floats. Besides PCM, other encodings include mu-law and a-law, which are nonlinear mappings of the sound amplitude that are often used for recording speech.

You can use a predefined encoding by referring to one of the static objects created by this class, such as PCM_SIGNED or PCM_UNSIGNED. Service providers can create new encodings, such as compressed audio formats, and make these available through the AudioSystem class.

The Encoding class is static, so that all AudioFormat objects that have the same encoding will refer to the same object (rather than different instances of the same class). This allows matches to be made by checking that two formats' encodings are equal.
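
A small sketch of that equality check (the format parameters are arbitrary):

    import javax.sound.sampled.AudioFormat;

    public class EncodingCheck {
        public static void main(String[] args) {
            // 8 kHz, 8-bit, mono, unsigned PCM.
            AudioFormat format = new AudioFormat(8000f, 8, 1, false, false);
            // Encodings are shared static instances, so equality identifies the representation.
            System.out.println(
                    format.getEncoding().equals(AudioFormat.Encoding.PCM_UNSIGNED));  // true
        }
    }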

The Encoding class  names the  specific type of data representation
used for an audio stream.   The encoding includes aspects of the
sound format other than the number of channels, sample rate, sample size,
frame rate, frame size, and byte order.

One ubiquitous type of audio encoding is pulse-code modulation (PCM),
which is simply a linear (proportional) representation of the sound
waveform.  With PCM, the number stored in each sample is proportional
to the instantaneous amplitude of the sound pressure at that point in
time.  The numbers may be signed or unsigned integers or floats.
Besides PCM, other encodings include mu-law and a-law, which are nonlinear
mappings of the sound amplitude that are often used for recording speech.

You can use a predefined encoding by referring to one of the static
objects created by this class, such as PCM_SIGNED or
PCM_UNSIGNED.  Service providers can create new encodings, such as
compressed audio formats, and make
these available through the AudioSystem class.

The Encoding class is static, so that all
AudioFormat objects that have the same encoding will refer
to the same object (rather than different instances of the same class).
This allows matches to be made by checking that two formats' encodings
are equal.
raw docstring

javax.sound.sampled.AudioInputStream

An audio input stream is an input stream with a specified audio format and length. The length is expressed in sample frames, not bytes. Several methods are provided for reading a certain number of bytes from the stream, or an unspecified number of bytes. The audio input stream keeps track of the last byte that was read. You can skip over an arbitrary number of bytes to get to a later position for reading. An audio input stream may support marks. When you set a mark, the current position is remembered so that you can return to it later.

The AudioSystem class includes many methods that manipulate AudioInputStream objects. For example, the methods let you:

obtain an audio input stream from an external audio file, stream, or URL
write an external file from an audio input stream
convert an audio input stream to a different audio format
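
A sketch combining all three; the file names are placeholders, and whether the requested conversion succeeds depends on the installed format converters:

    import java.io.File;
    import javax.sound.sampled.*;

    public class ConvertAudio {
        public static void main(String[] args) throws Exception {
            AudioInputStream in = AudioSystem.getAudioInputStream(new File("in.wav"));
            // Request 16-bit signed mono at the source's own sample rate.
            AudioFormat target = new AudioFormat(
                    in.getFormat().getSampleRate(), 16, 1, true, false);
            AudioInputStream converted = AudioSystem.getAudioInputStream(target, in);
            AudioSystem.write(converted, AudioFileFormat.Type.WAVE, new File("out.wav"));
        }
    }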

An audio input stream is an input stream with a specified audio format and
length.  The length is expressed in sample frames, not bytes.
Several methods are provided for reading a certain number of bytes from
the stream, or an unspecified number of bytes.
The audio input stream keeps track  of the last byte that was read.
You can skip over an arbitrary number of bytes to get to a later position
for reading. An audio input stream may support marks.  When you set a mark,
the current position is remembered so that you can return to it later.

The AudioSystem class includes many methods that manipulate
AudioInputStream objects.
For example, the methods let you:

 obtain an
audio input stream from an external audio file, stream, or URL
 write an external file from an audio input stream
 convert an audio input stream to a different audio format
raw docstring

javax.sound.sampled.AudioPermission

The AudioPermission class represents access rights to the audio system resources. An AudioPermission contains a target name but no actions list; you either have the named permission or you don't.

The target name is the name of the audio permission (see the table below). The names follow the hierarchical property-naming convention. Also, an asterisk can be used to represent all the audio permissions.

The following table lists the possible AudioPermission target names. For each name, the table provides a description of exactly what that permission allows, as well as a discussion of the risks of granting code the permission.

play: Audio playback through the audio device or devices on the system. Allows the application to obtain and manipulate lines and mixers for audio playback (rendering). Risks: in some cases use of this permission may affect other applications, because the audio from one line may be mixed with other audio being played on the system, or because manipulation of a mixer affects the audio for all lines using that mixer.

record: Audio recording through the audio device or devices on the system. Allows the application to obtain and manipulate lines and mixers for audio recording (capture). Risks: in some cases use of this permission may affect other applications, because manipulation of a mixer affects the audio for all lines using that mixer. This permission can enable an applet or application to eavesdrop on a user.

The AudioPermission class represents access rights to the audio
system resources.  An AudioPermission contains a target name
but no actions list; you either have the named permission or you don't.

The target name is the name of the audio permission (see the table below).
The names follow the hierarchical property-naming convention. Also, an asterisk
can be used to represent all the audio permissions.

The following table lists the possible AudioPermission target names.
For each name, the table provides a description of exactly what that permission
allows, as well as a discussion of the risks of granting code the permission.




Permission Target Name
What the Permission Allows
Risks of Allowing this Permission



play
Audio playback through the audio device or devices on the system.
Allows the application to obtain and manipulate lines and mixers for
audio playback (rendering).
In some cases use of this permission may affect other
applications because the audio from one line may be mixed with other audio
being played on the system, or because manipulation of a mixer affects the
audio for all lines using that mixer.



record
Audio recording through the audio device or devices on the system.
Allows the application to obtain and manipulate lines and mixers for
audio recording (capture).
In some cases use of this permission may affect other
applications because manipulation of a mixer affects the audio for all lines
using that mixer.
This permission can enable an applet or application to eavesdrop on a user.
raw docstring

javax.sound.sampled.AudioSystem

The AudioSystem class acts as the entry point to the sampled-audio system resources. This class lets you query and access the mixers that are installed on the system. AudioSystem includes a number of methods for converting audio data between different formats, and for translating between audio files and streams. It also provides a method for obtaining a Line directly from the AudioSystem without dealing explicitly with mixers.

Properties can be used to specify the default mixer for specific line types. Both system properties and a properties file are considered. The sound.properties properties file is read from an implementation-specific location (typically it is the lib directory in the Java installation directory). If a property exists both as a system property and in the properties file, the system property takes precedence. If none is specified, a suitable default is chosen among the available devices. The syntax of the properties file is specified in Properties.load. The following table lists the available property keys and which methods consider them:

Audio System Property Keys

Property Key                         Interface        Affected Method(s)
javax.sound.sampled.Clip             Clip             getLine(javax.sound.sampled.Line.Info), getClip()
javax.sound.sampled.Port             Port             getLine(javax.sound.sampled.Line.Info)
javax.sound.sampled.SourceDataLine   SourceDataLine   getLine(javax.sound.sampled.Line.Info), getSourceDataLine(javax.sound.sampled.AudioFormat)
javax.sound.sampled.TargetDataLine   TargetDataLine   getLine(javax.sound.sampled.Line.Info), getTargetDataLine(javax.sound.sampled.AudioFormat)

The property value consists of the provider class name and the mixer name, separated by the hash mark ("#"). The provider class name is the fully-qualified name of a concrete mixer provider class. The mixer name is matched against the String returned by the getName method of Mixer.Info. Either the class name, or the mixer name may be omitted. If only the class name is specified, the trailing hash mark is optional.

If the provider class is specified, and it can be successfully retrieved from the installed providers, the list of Mixer.Info objects is retrieved from the provider. Otherwise, or when these mixers do not provide a subsequent match, the list is retrieved from getMixerInfo() to contain all available Mixer.Info objects.

If a mixer name is specified, the resulting list of Mixer.Info objects is searched: the first one with a matching name, and whose Mixer provides the respective line interface, will be returned. If no matching Mixer.Info object is found, or the mixer name is not specified, the first mixer from the resulting list, which provides the respective line interface, will be returned.

For example, the property javax.sound.sampled.Clip with a value "com.sun.media.sound.MixerProvider#SunClip" will have the following consequences when getLine is called requesting a Clip instance: if the class com.sun.media.sound.MixerProvider exists in the list of installed mixer providers, the first Clip from the first mixer with name "SunClip" will be returned. If it cannot be found, the first Clip from the first mixer of the specified provider will be returned, regardless of name. If there is none, the first Clip from the first Mixer with name "SunClip" in the list of all mixers (as returned by getMixerInfo) will be returned, or, if not found, the first Clip of the first Mixer that can be found in the list of all mixers is returned. If that fails, too, an IllegalArgumentException is thrown.
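
A sketch of that lookup; the provider and mixer names are the example values from the text, not names guaranteed to exist on any given system:

    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Clip;

    public class DefaultClipMixer {
        public static void main(String[] args) throws Exception {
            // Values as in the example above; check AudioSystem.getMixerInfo() for real names.
            System.setProperty("javax.sound.sampled.Clip",
                    "com.sun.media.sound.MixerProvider#SunClip");
            Clip clip = AudioSystem.getClip();  // resolved using the property above
            System.out.println(clip.getLineInfo());
        }
    }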

The AudioSystem class acts as the entry point to the
sampled-audio system resources. This class lets you query and
access the mixers that are installed on the system.
AudioSystem includes a number of
methods for converting audio data between different formats, and for
translating between audio files and streams. It also provides a method
for obtaining a Line directly from the
AudioSystem without dealing explicitly
with mixers.

Properties can be used to specify the default mixer
for specific line types.
Both system properties and a properties file are considered.
The sound.properties properties file is read from
an implementation-specific location (typically it is the lib
directory in the Java installation directory).
If a property exists both as a system property and in the
properties file, the system property takes precedence. If none is
specified, a suitable default is chosen among the available devices.
The syntax of the properties file is specified in
Properties.load. The
following table lists the available property keys and which methods
consider them:


 Audio System Property Keys

  Property Key
  Interface
  Affected Method(s)


  javax.sound.sampled.Clip
  Clip
  getLine(javax.sound.sampled.Line.Info), getClip()


  javax.sound.sampled.Port
  Port
  getLine(javax.sound.sampled.Line.Info)


  javax.sound.sampled.SourceDataLine
  SourceDataLine
  getLine(javax.sound.sampled.Line.Info), getSourceDataLine(javax.sound.sampled.AudioFormat)


  javax.sound.sampled.TargetDataLine
  TargetDataLine
  getLine(javax.sound.sampled.Line.Info), getTargetDataLine(javax.sound.sampled.AudioFormat)



The property value consists of the provider class name
and the mixer name, separated by the hash mark ("#").
The provider class name is the fully-qualified
name of a concrete mixer provider class. The mixer name is matched against
the String returned by the getName
method of Mixer.Info.
Either the class name, or the mixer name may be omitted.
If only the class name is specified, the trailing hash mark
is optional.

If the provider class is specified, and it can be
successfully retrieved from the installed providers, the list of
Mixer.Info objects is retrieved
from the provider. Otherwise, or when these mixers
do not provide a subsequent match, the list is retrieved
from getMixerInfo() to contain
all available Mixer.Info objects.

If a mixer name is specified, the resulting list of
Mixer.Info objects is searched:
the first one with a matching name, and whose
Mixer provides the
respective line interface, will be returned.
If no matching Mixer.Info object
is found, or the mixer name is not specified,
the first mixer from the resulting
list, which provides the respective line
interface, will be returned.

For example, the property javax.sound.sampled.Clip
with a value
"com.sun.media.sound.MixerProvider#SunClip"
will have the following consequences when
getLine is called requesting a Clip
instance:
if the class com.sun.media.sound.MixerProvider exists
in the list of installed mixer providers,
the first Clip from the first mixer with name
"SunClip" will be returned. If it cannot
be found, the first Clip from the first mixer
of the specified provider will be returned, regardless of name.
If there is none, the first Clip from the first
Mixer with name
"SunClip" in the list of all mixers
(as returned by getMixerInfo) will be returned,
or, if not found, the first Clip of the first
Mixer that can be found in the list of all
mixers is returned.
If that fails, too, an IllegalArgumentException
is thrown.
raw docstring

javax.sound.sampled.BooleanControl

A BooleanControl provides the ability to switch between two possible settings that affect a line's audio. The settings are boolean values (true and false). A graphical user interface might represent the control by a two-state button, an on/off switch, two mutually exclusive buttons, or a checkbox (among other possibilities). For example, depressing a button might activate a MUTE control to silence the line's audio.

As with other Control subclasses, a method is provided that returns string labels for the values, suitable for display in the user interface.
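
For example, a minimal sketch of toggling a MUTE control; not every line exposes one, hence the support check:

    import javax.sound.sampled.*;

    public class MuteLine {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            if (line.isControlSupported(BooleanControl.Type.MUTE)) {
                BooleanControl mute =
                        (BooleanControl) line.getControl(BooleanControl.Type.MUTE);
                mute.setValue(true);                           // silence the line's audio
                System.out.println(mute.getStateLabel(true));  // UI label for the "true" state
            }
            line.close();
        }
    }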

A BooleanControl provides the ability to switch between
two possible settings that affect a line's audio.  The settings are boolean
values (true and false).  A graphical user interface
might represent the control by a two-state button, an on/off switch, two
mutually exclusive buttons, or a checkbox (among other possibilities).
For example, depressing a button might activate a
MUTE control to silence
the line's audio.

As with other Control subclasses, a method is
provided that returns string labels for the values, suitable for
display in the user interface.
raw docstring

javax.sound.sampled.BooleanControl$Type

An instance of the BooleanControl.Type class identifies one kind of boolean control. Static instances are provided for the common types.

An instance of the BooleanControl.Type class identifies one kind of
boolean control.  Static instances are provided for the
common types.
raw docstring

javax.sound.sampled.Clip

The Clip interface represents a special kind of data line whose audio data can be loaded prior to playback, instead of being streamed in real time.

Because the data is pre-loaded and has a known length, you can set a clip to start playing at any position in its audio data. You can also create a loop, so that when the clip is played it will cycle repeatedly. Loops are specified with a starting and ending sample frame, along with the number of times that the loop should be played.

Clips may be obtained from a Mixer that supports lines of this type. Data is loaded into a clip when it is opened.

Playback of an audio clip may be started and stopped using the start and stop methods. These methods do not reset the media position; start causes playback to continue from the position where playback was last stopped. To restart playback from the beginning of the clip's audio data, simply follow the invocation of stop with setFramePosition(0), which rewinds the media to the beginning of the clip.
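
A minimal sketch of that sequence (the file name is a placeholder):

    import java.io.File;
    import javax.sound.sampled.*;

    public class PlayClip {
        public static void main(String[] args) throws Exception {
            Clip clip = AudioSystem.getClip();
            clip.open(AudioSystem.getAudioInputStream(new File("loop.wav")));
            clip.start();               // play from the current position
            // ... later ...
            clip.stop();
            clip.setFramePosition(0);   // rewind so the next start() replays from the top
            clip.start();
        }
    }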

The Clip interface represents a special kind of data line whose
audio data can be loaded prior to playback, instead of being streamed in
real time.

Because the data is pre-loaded and has a known length, you can set a clip
to start playing at any position in its audio data.  You can also create a
loop, so that when the clip is played it will cycle repeatedly.  Loops are
specified with a starting and ending sample frame, along with the number of
times that the loop should be played.

Clips may be obtained from a Mixer that supports lines
of this type.  Data is loaded into a clip when it is opened.

Playback of an audio clip may be started and stopped using the start
and stop methods.  These methods do not reset the media position;
start causes playback to continue from the position where playback
was last stopped.  To restart playback from the beginning of the clip's audio
data, simply follow the invocation of stop
with setFramePosition(0), which rewinds the media to the beginning
of the clip.
raw docstring

javax.sound.sampled.CompoundControl

A CompoundControl, such as a graphic equalizer, provides control over two or more related properties, each of which is itself represented as a Control.

A CompoundControl, such as a graphic equalizer, provides control
over two or more related properties, each of which is itself represented as
a Control.
raw docstring

javax.sound.sampled.CompoundControl$Type

An instance of the CompoundControl.Type inner class identifies one kind of compound control. Static instances are provided for the common types.

An instance of the CompoundControl.Type inner class identifies one kind of
compound control.  Static instances are provided for the
common types.
raw docstring


javax.sound.sampled.Control

Lines often have a set of controls, such as gain and pan, that affect the audio signal passing through the line. Java Sound's Line objects let you obtain a particular control object by passing its class as the argument to a getControl method.

Because the various types of controls have different purposes and features, all of their functionality is accessed from the subclasses that define each kind of control.

Lines often have a set of controls, such as gain and pan, that affect
the audio signal passing through the line.  Java Sound's Line objects
let you obtain a particular control object by passing its class as the
argument to a getControl method.

Because the various types of controls have different purposes and features,
all of their functionality is accessed from the subclasses that define
each kind of control.
raw docstring

javax.sound.sampled.Control$Type

An instance of the Type class represents the type of the control. Static instances are provided for the common types.

An instance of the Type class represents the type of
the control.  Static instances are provided for the
common types.
raw docstring

javax.sound.sampled.core

No vars found in this namespace.

javax.sound.sampled.DataLine

DataLine adds media-related functionality to its superinterface, Line. This functionality includes transport-control methods that start, stop, drain, and flush the audio data that passes through the line. A data line can also report the current position, volume, and audio format of the media. Data lines are used for output of audio by means of the subinterfaces SourceDataLine or Clip, which allow an application program to write data. Similarly, audio input is handled by the subinterface TargetDataLine, which allows data to be read.

A data line has an internal buffer in which the incoming or outgoing audio data is queued. The drain() method blocks until this internal buffer becomes empty, usually because all queued data has been processed. The flush() method discards any available queued data from the internal buffer.

A data line produces START and STOP events whenever it begins or ceases active presentation or capture of data. These events can be generated in response to specific requests, or as a result of less direct state changes. For example, if start() is called on an inactive data line, and data is available for capture or playback, a START event will be generated shortly, when data playback or capture actually begins. Or, if the flow of data to an active data line is constricted so that a gap occurs in the presentation of data, a STOP event is generated.

Mixers often support synchronized control of multiple data lines. Synchronization can be established through the Mixer interface's synchronize method. See the description of the Mixer interface for a more complete description.
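
A sketch of the transport methods on a SourceDataLine; one second of silence stands in for real audio data:

    import javax.sound.sampled.*;

    public class Transport {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            line.start();                           // a START event follows once data flows
            byte[] second = new byte[44100 * 2];    // mono 16-bit: 2 bytes per frame
            line.write(second, 0, second.length);
            line.drain();                           // block until the internal buffer empties
            line.stop();                            // generates a STOP event
            line.close();
        }
    }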

DataLine adds media-related functionality to its
superinterface, Line.  This functionality includes
transport-control methods that start, stop, drain, and flush
the audio data that passes through the line.  A data line can also
report the current position, volume, and audio format of the media.
Data lines are used for output of audio by means of the
subinterfaces SourceDataLine or
Clip, which allow an application program to write data.  Similarly,
audio input is handled by the subinterface TargetDataLine,
which allows data to be read.

A data line has an internal buffer in which
the incoming or outgoing audio data is queued.  The
drain() method blocks until this internal buffer
becomes empty, usually because all queued data has been processed.  The
flush() method discards any available queued data
from the internal buffer.

A data line produces START and
STOP events whenever
it begins or ceases active presentation or capture of data.  These events
can be generated in response to specific requests, or as a result of
less direct state changes.  For example, if start() is called
on an inactive data line, and data is available for capture or playback, a
START event will be generated shortly, when data playback
or capture actually begins.  Or, if the flow of data to an active data
line is constricted so that a gap occurs in the presentation of data,
a STOP event is generated.

Mixers often support synchronized control of multiple data lines.
Synchronization can be established through the Mixer interface's
synchronize method.
See the description of the Mixer interface
for a more complete description.
raw docstring

javax.sound.sampled.DataLine$Info

Besides the class information inherited from its superclass, DataLine.Info provides additional information specific to data lines. This information includes:

the audio formats supported by the data line
the minimum and maximum sizes of its internal buffer

Because a Line.Info knows the class of the line it describes, a DataLine.Info object can describe DataLine subinterfaces such as SourceDataLine, TargetDataLine, and Clip. You can query a mixer for lines of any of these types, passing an appropriate instance of DataLine.Info as the argument to a method such as Mixer.getLine(Line.Info).
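
For example, a small sketch querying for a playback line (the format parameters are arbitrary):

    import javax.sound.sampled.*;

    public class QueryLine {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            if (AudioSystem.isLineSupported(info)) {
                SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
                System.out.println(line.getLineInfo());
            }
        }
    }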

Besides the class information inherited from its superclass,
DataLine.Info provides additional information specific to data lines.
This information includes:

 the audio formats supported by the data line
 the minimum and maximum sizes of its internal buffer

Because a Line.Info knows the class of the line it describes, a
DataLine.Info object can describe DataLine
subinterfaces such as SourceDataLine,
TargetDataLine, and Clip.
You can query a mixer for lines of any of these types, passing an appropriate
instance of DataLine.Info as the argument to a method such as
Mixer.getLine(Line.Info).
raw docstring

javax.sound.sampled.EnumControl

An EnumControl provides control over a set of discrete possible values, each represented by an object. In a graphical user interface, such a control might be represented by a set of buttons, each of which chooses one value or setting. For example, a reverb control might provide several preset reverberation settings, instead of providing continuously adjustable parameters of the sort that would be represented by FloatControl objects.

Controls that provide a choice between only two settings can often be implemented instead as a BooleanControl, and controls that provide a set of values along some quantifiable dimension might be implemented instead as a FloatControl with a coarse resolution. However, a key feature of EnumControl is that the returned values are arbitrary objects, rather than numerical or boolean values. This means that each returned object can provide further information. As an example, the settings of a REVERB control are instances of ReverbType that can be queried for the parameter values used for each setting.
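
A sketch of listing REVERB settings; few lines actually expose this control, hence the guard:

    import javax.sound.sampled.*;

    public class ListReverbs {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            if (line.isControlSupported(EnumControl.Type.REVERB)) {
                EnumControl reverb = (EnumControl) line.getControl(EnumControl.Type.REVERB);
                for (Object setting : reverb.getValues()) {
                    System.out.println(setting);  // ReverbType instances describing each preset
                }
            }
            line.close();
        }
    }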

An EnumControl provides control over a set of
discrete possible values, each represented by an object.  In a
graphical user interface, such a control might be represented
by a set of buttons, each of which chooses one value or setting.  For
example, a reverb control might provide several preset reverberation
settings, instead of providing continuously adjustable parameters
of the sort that would be represented by FloatControl
objects.

Controls that provide a choice between only two settings can often be implemented
instead as a BooleanControl, and controls that provide
a set of values along some quantifiable dimension might be implemented
instead as a FloatControl with a coarse resolution.
However, a key feature of EnumControl is that the returned values
are arbitrary objects, rather than numerical or boolean values.  This means that each
returned object can provide further information.  As an example, the settings
of a REVERB control are instances of
ReverbType that can be queried for the parameter values
used for each setting.
raw docstring

javax.sound.sampled.EnumControl$Type

An instance of the EnumControl.Type inner class identifies one kind of enumerated control. Static instances are provided for the common types.

An instance of the EnumControl.Type inner class identifies one kind of
enumerated control.  Static instances are provided for the
common types.
raw docstring

javax.sound.sampled.FloatControl

A FloatControl object provides control over a range of floating-point values. Float controls are often represented in graphical user interfaces by continuously adjustable objects such as sliders or rotary knobs. Concrete subclasses of FloatControl implement controls, such as gain and pan, that affect a line's audio signal in some way that an application can manipulate. The FloatControl.Type inner class provides static instances of types that are used to identify some common kinds of float control.

The FloatControl abstract class provides methods to set and get the control's current floating-point value. Other methods obtain the possible range of values and the control's resolution (the smallest increment between returned values). Some float controls allow ramping to a new value over a specified period of time. FloatControl also includes methods that return string labels for the minimum, maximum, and midpoint positions of the control.
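
For illustration, a sketch adjusting and ramping master gain; the decibel values are arbitrary but within the usual range:

    import javax.sound.sampled.*;

    public class AdjustGain {
        public static void main(String[] args) throws Exception {
            AudioFormat format = new AudioFormat(44100f, 16, 2, true, false);
            SourceDataLine line = AudioSystem.getSourceDataLine(format);
            line.open(format);
            if (line.isControlSupported(FloatControl.Type.MASTER_GAIN)) {
                FloatControl gain =
                        (FloatControl) line.getControl(FloatControl.Type.MASTER_GAIN);
                System.out.println(gain.getMinimum() + " .. " + gain.getMaximum());
                gain.setValue(-6.0f);              // attenuate by 6 dB
                gain.shift(-6.0f, 0.0f, 500000);   // ramp back to 0 dB over half a second
            }
            line.close();
        }
    }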

A FloatControl object provides control over a range of
floating-point values.  Float controls are often
represented in graphical user interfaces by continuously
adjustable objects such as sliders or rotary knobs.  Concrete subclasses
of FloatControl implement controls, such as gain and pan, that
affect a line's audio signal in some way that an application can manipulate.
The FloatControl.Type
inner class provides static instances of types that are used to
identify some common kinds of float control.

The FloatControl abstract class provides methods to set and get
the control's current floating-point value.  Other methods obtain the possible
range of values and the control's resolution (the smallest increment between
returned values).  Some float controls allow ramping to a
new value over a specified period of time.  FloatControl also
includes methods that return string labels for the minimum, maximum, and midpoint
positions of the control.
raw docstring

javax.sound.sampled.FloatControl$Type

An instance of the FloatControl.Type inner class identifies one kind of float control. Static instances are provided for the common types.

An instance of the FloatControl.Type inner class identifies one kind of
float control.  Static instances are provided for the
common types.
raw docstring

javax.sound.sampled.Line

The Line interface represents a mono or multi-channel audio feed. A line is an element of the digital audio "pipeline," such as a mixer, an input or output port, or a data path into or out of a mixer.

A line can have controls, such as gain, pan, and reverb. The controls themselves are instances of classes that extend the base Control class. The Line interface provides two accessor methods for obtaining the line's controls: getControls returns the entire set, and getControl returns a single control of specified type.

Lines exist in various states at different times. When a line opens, it reserves system resources for itself, and when it closes, these resources are freed for other objects or applications. The isOpen() method lets you discover whether a line is open or closed. An open line need not be processing data, however. Such processing is typically initiated by subinterface methods such as SourceDataLine.write and TargetDataLine.read.

You can register an object to receive notifications whenever the line's state changes. The object must implement the LineListener interface, which consists of the single method update. This method will be invoked when a line opens and closes (and, if it's a DataLine, when it starts and stops).

An object can be registered to listen to multiple lines. The event it receives in its update method will specify which line created the event, what type of event it was (OPEN, CLOSE, START, or STOP), and how many sample frames the line had processed at the time the event occurred.
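
A minimal sketch of registering such a listener; LineListener has a single method, so a lambda suffices on Java 8 and later:

    import javax.sound.sampled.*;

    public class WatchLine {
        public static void main(String[] args) throws Exception {
            Clip clip = AudioSystem.getClip();
            clip.addLineListener(event ->
                    System.out.println(event.getType()  // OPEN, CLOSE, START, or STOP
                            + " at frame " + event.getFramePosition()));
            // Opening, starting, stopping, and closing the clip now report through the listener.
        }
    }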

Certain line operations, such as open and close, can generate security exceptions if invoked by unprivileged code when the line is a shared audio resource.

The Line interface represents a mono or multi-channel
audio feed. A line is an element of the digital audio
"pipeline," such as a mixer, an input or output port,
or a data path into or out of a mixer.

A line can have controls, such as gain, pan, and reverb.
The controls themselves are instances of classes that extend the
base Control class.
The Line interface provides two accessor methods for
obtaining the line's controls: getControls returns the
entire set, and getControl returns a single control of
specified type.

Lines exist in various states at different times.  When a line opens, it reserves system
resources for itself, and when it closes, these resources are freed for
other objects or applications. The isOpen() method lets
you discover whether a line is open or closed.
An open line need not be processing data, however.  Such processing is
typically initiated by subinterface methods such as
SourceDataLine.write and
TargetDataLine.read.

You can register an object to receive notifications whenever the line's
state changes.  The object must implement the LineListener
interface, which consists of the single method
update.
This method will be invoked when a line opens and closes (and, if it's a
DataLine, when it starts and stops).

An object can be registered to listen to multiple lines.  The event it
receives in its update method will specify which line created
the event, what type of event it was
(OPEN, CLOSE, START, or STOP),
and how many sample frames the line had processed at the time the event occurred.

Certain line operations, such as open and close, can generate security
exceptions if invoked by unprivileged code when the line is a shared audio
resource.
raw docstring

javax.sound.sampled.Line$Info

A Line.Info object contains information about a line. The only information provided by Line.Info itself is the Java class of the line. A subclass of Line.Info adds other kinds of information about the line. This additional information depends on which Line subinterface is implemented by the kind of line that the Line.Info subclass describes.

A Line.Info can be retrieved using various methods of the Line, Mixer, and AudioSystem interfaces. Other such methods let you pass a Line.Info as an argument, to learn whether lines matching the specified configuration are available and to obtain them.

A Line.Info object contains information about a line.
The only information provided by Line.Info itself
is the Java class of the line.
A subclass of Line.Info adds other kinds of information
about the line.  This additional information depends on which Line
subinterface is implemented by the kind of line that the Line.Info
subclass describes.

A Line.Info can be retrieved using various methods of
the Line, Mixer, and AudioSystem
interfaces.  Other such methods let you pass a Line.Info as
an argument, to learn whether lines matching the specified configuration
are available and to obtain them.
raw docstring

javax.sound.sampled.LineEvent

The LineEvent class encapsulates information that a line sends its listeners whenever the line opens, closes, starts, or stops. Each of these four state changes is represented by a corresponding type of event. A listener receives the event as a parameter to its update method. By querying the event, the listener can learn the type of event, the line responsible for the event, and how much data the line had processed when the event occurred.

Although this class implements Serializable, attempts to serialize a LineEvent object will fail.

The LineEvent class encapsulates information that a line
sends its listeners whenever the line opens, closes, starts, or stops.
Each of these four state changes is represented by a corresponding
type of event.  A listener receives the event as a parameter to its
update method.  By querying the event,
the listener can learn the type of event, the line responsible for
the event, and how much data the line had processed when the event occurred.

Although this class implements Serializable, attempts to
serialize a LineEvent object will fail.
raw docstring

javax.sound.sampled.LineEvent$Type

The LineEvent.Type inner class identifies what kind of event occurred on a line. Static instances are provided for the common types (OPEN, CLOSE, START, and STOP).

The LineEvent.Type inner class identifies what kind of event occurred on a line.
Static instances are provided for the common types (OPEN, CLOSE, START, and STOP).
raw docstring

javax.sound.sampled.LineListener

Instances of classes that implement the LineListener interface can register to receive events when a line's status changes.

Instances of classes that implement the LineListener interface can register to
receive events when a line's status changes.
raw docstring

javax.sound.sampled.LineUnavailableException

A LineUnavailableException is an exception indicating that a line cannot be opened because it is unavailable. This situation arises most commonly when a requested line is already in use by another application.

A LineUnavailableException is an exception indicating that a
line cannot be opened because it is unavailable.  This situation
arises most commonly when a requested line is already in use
by another application.
raw docstring

javax.sound.sampled.Mixer

A mixer is an audio device with one or more lines. It need not be designed for mixing audio signals. A mixer that actually mixes audio has multiple input (source) lines and at least one output (target) line. The former are often instances of classes that implement SourceDataLine, and the latter, TargetDataLine. Port objects, too, are either source lines or target lines. A mixer can accept prerecorded, loopable sound as input, by having some of its source lines be instances of objects that implement the Clip interface.

Through methods of the Line interface, which Mixer extends, a mixer might provide a set of controls that are global to the mixer. For example, the mixer can have a master gain control. These global controls are distinct from the controls belonging to each of the mixer's individual lines.

Some mixers, especially those with internal digital mixing capabilities, may provide additional capabilities by implementing the DataLine interface.

A mixer can support synchronization of its lines. When one line in a synchronized group is started or stopped, the other lines in the group automatically start or stop simultaneously with the explicitly affected one.
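
A small sketch enumerating the installed mixers and their line types:

    import javax.sound.sampled.*;

    public class ListMixers {
        public static void main(String[] args) {
            for (Mixer.Info info : AudioSystem.getMixerInfo()) {
                Mixer mixer = AudioSystem.getMixer(info);
                System.out.println(info.getName() + " (" + info.getVendor() + "): "
                        + mixer.getSourceLineInfo().length + " source line types, "
                        + mixer.getTargetLineInfo().length + " target line types");
            }
        }
    }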

A mixer is an audio device with one or more lines.  It need not be
designed for mixing audio signals.  A mixer that actually mixes audio
has multiple input (source) lines and at least one output (target) line.
The former are often instances of classes that implement
SourceDataLine,
and the latter, TargetDataLine.  Port
objects, too, are either source lines or target lines.
A mixer can accept prerecorded, loopable sound as input, by having
some of its source lines be instances of objects that implement the
Clip interface.

Through methods of the Line interface, which Mixer extends,
a mixer might provide a set of controls that are global to the mixer.  For example,
the mixer can have a master gain control.  These global controls are distinct
from the controls belonging to each of the mixer's individual lines.

Some mixers, especially
those with internal digital mixing capabilities, may provide
additional capabilities by implementing the DataLine interface.

A mixer can support synchronization of its lines.  When one line in
a synchronized group is started or stopped, the other lines in the group
automatically start or stop simultaneously with the explicitly affected one.
raw docstring

javax.sound.sampled.Mixer$Info

The Mixer.Info class represents information about an audio mixer, including the product's name, version, and vendor, along with a textual description. This information may be retrieved through the getMixerInfo method of the Mixer interface.

The Mixer.Info class represents information about an audio mixer,
including the product's name, version, and vendor, along with a textual
description.  This information may be retrieved through the
getMixerInfo
method of the Mixer interface.
raw docstring
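
A short sketch exercising both Mixer and Mixer.Info: walk the installed mixers, printing each info object and the source/target line infos the mixer reports:

(import '(javax.sound.sampled AudioSystem))

(doseq [info (AudioSystem/getMixerInfo)]
  ;; Mixer.Info carries the product name, vendor, and version
  (println (.getName info) "|" (.getVendor info) (.getVersion info))
  (let [mixer (AudioSystem/getMixer info)]
    (doseq [line-info (.getSourceLineInfo mixer)]
      (println "  source:" (str line-info)))
    (doseq [line-info (.getTargetLineInfo mixer)]
      (println "  target:" (str line-info)))))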

javax.sound.sampled.Port

Ports are simple lines for input or output of audio to or from audio devices. Common examples of ports that act as source lines (mixer inputs) include the microphone, line input, and CD-ROM drive. Ports that act as target lines (mixer outputs) include the speaker, headphone, and line output. You can access a port using a Port.Info object.

Ports are simple lines for input or output of audio to or from audio devices.
Common examples of ports that act as source lines (mixer inputs) include the microphone,
line input, and CD-ROM drive.  Ports that act as target lines (mixer outputs) include the
speaker, headphone, and line output.  You can access a port using a Port.Info
object.
raw docstring

No vars found in this namespace.

javax.sound.sampled.Port$Info

The Port.Info class extends Line.Info with additional information specific to ports, including the port's name and whether it is a source or a target for its mixer. By definition, a port acts as either a source or a target to its mixer, but not both. (Audio input ports are sources; audio output ports are targets.)

To learn what ports are available, you can retrieve port info objects through the getSourceLineInfo and getTargetLineInfo methods of the Mixer interface. Instances of the Port.Info class may also be constructed and used to obtain lines matching the parameters specified in the Port.Info object.

The Port.Info class extends Line.Info
with additional information specific to ports, including the port's name
and whether it is a source or a target for its mixer.
By definition, a port acts as either a source or a target to its mixer,
but not both.  (Audio input ports are sources; audio output ports are targets.)

To learn what ports are available, you can retrieve port info objects through the
getSourceLineInfo and
getTargetLineInfo
methods of the Mixer interface.  Instances of the
Port.Info class may also be constructed and used to obtain
lines matching the parameters specified in the Port.Info object.
raw docstring
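
For instance, a microphone port can be requested through the predefined Port$Info/MICROPHONE instance, when the system provides one; a hedged sketch:

(import '(javax.sound.sampled AudioSystem Port Port$Info))

(when (AudioSystem/isLineSupported Port$Info/MICROPHONE)
  (let [^Port port (AudioSystem/getLine Port$Info/MICROPHONE)]
    (.open port)
    (println "Opened:" (str Port$Info/MICROPHONE))
    (.close port)))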

javax.sound.sampled.ReverbType

The ReverbType class provides methods for accessing various reverberation settings to be applied to an audio signal.

Reverberation simulates the reflection of sound off of the walls, ceiling, and floor of a room. Depending on the size of the room, and how absorbent or reflective the materials in the room's surfaces are, the sound might bounce around for a long time before dying away.

The reverberation parameters provided by ReverbType consist of the delay time and intensity of early reflections, the delay time and intensity of late reflections, and an overall decay time. Early reflections are the initial individual low-order reflections of the direct signal off the surfaces in the room. The late reflections are the dense, high-order reflections that characterize the room's reverberation. The delay times for the start of these two reflection types give the listener a sense of the overall size and complexity of the room's shape and contents. The larger the room, the longer the reflection delay times. The early and late reflections' intensities define the gain (in decibels) of the reflected signals as compared to the direct signal. These intensities give the listener an impression of the absorptive nature of the surfaces and objects in the room. The decay time defines how long the reverberation takes to exponentially decay until it is no longer perceptible ("effective zero"). The larger and less absorbent the surfaces, the longer the decay time.

The set of parameters defined here may not include all aspects of reverberation as specified by some systems. For example, the MIDI Manufacturers Association (MMA) has an Interactive Audio Special Interest Group (IASIG), which has a 3-D Working Group that has defined a Level 2 Spec (I3DL2). I3DL2 supports filtering of reverberation and control of reverb density. These properties are not included in the JavaSound 1.0 definition of a reverb control. In such a case, the implementing system should either extend the defined reverb control to include additional parameters, or else interpret the system's additional capabilities in a way that fits the model described here.

If implementing JavaSound on an I3DL2-compliant device:

Filtering is disabled (high-frequency attenuations are set to 0.0 dB)
Density parameters are set to midway between minimum and maximum

The following table shows what parameter values an implementation might use for a representative set of reverberation settings.

Reverberation Types and Parameters

Type          Decay Time (ms)   Late Intensity (dB)   Late Delay (ms)   Early Intensity (dB)   Early Delay (ms)
Cavern        2250              -2.0                  41.3              -1.4                   10.3
Dungeon       1600              -1.0                  10.3              -0.7                   2.6
Garage        900               -6.0                  14.7              -4.0                   3.9
Acoustic Lab  280               -3.0                  8.0               -2.0                   2.0
Closet        150               -10.0                 2.5               -7.0                   0.6

The ReverbType class provides methods for
accessing various reverberation settings to be applied to
an audio signal.

Reverberation simulates the reflection of sound off of
the walls, ceiling, and floor of a room.  Depending on
the size of the room, and how absorbent or reflective the materials in the
room's surfaces are, the sound might bounce around for a
long time before dying away.

The reverberation parameters provided by ReverbType consist
of the delay time and intensity of early reflections, the delay time and
intensity of late reflections, and an overall decay time.
Early reflections are the initial individual low-order reflections of the
direct signal off the surfaces in the room.
The late reflections are the dense, high-order reflections that characterize
the room's reverberation.
The delay times for the start of these two reflection types give the listener
a sense of the overall size and complexity of the room's shape and contents.
The larger the room, the longer the reflection delay times.
The early and late reflections' intensities define the gain (in decibels) of the reflected
signals as compared to the direct signal.  These intensities give the
listener an impression of the absorptive nature of the surfaces and objects
in the room.
The decay time defines how long the reverberation takes to exponentially
decay until it is no longer perceptible ("effective zero").
The larger and less absorbent the surfaces, the longer the decay time.

The set of parameters defined here may not include all aspects of reverberation
as specified by some systems.  For example, the MIDI Manufacturers Association
(MMA) has an Interactive Audio Special Interest Group (IASIG), which has a
3-D Working Group that has defined a Level 2 Spec (I3DL2).  I3DL2
supports filtering of reverberation and
control of reverb density.  These properties are not included in the JavaSound 1.0
definition of a reverb control.  In such a case, the implementing system
should either extend the defined reverb control to include additional
parameters, or else interpret the system's additional capabilities in a way that fits
the model described here.

If implementing JavaSound on an I3DL2-compliant device:

Filtering is disabled (high-frequency attenuations are set to 0.0 dB)
Density parameters are set to midway between minimum and maximum


The following table shows what parameter values an implementation might use for a
representative set of reverberation settings.


Reverberation Types and Parameters

Type          Decay Time (ms)   Late Intensity (dB)   Late Delay (ms)   Early Intensity (dB)   Early Delay (ms)
Cavern        2250              -2.0                  41.3              -1.4                   10.3
Dungeon       1600              -1.0                  10.3              -0.7                   2.6
Garage        900               -6.0                  14.7              -4.0                   3.9
Acoustic Lab  280               -3.0                  8.0               -2.0                   2.0
Closet        150               -10.0                 2.5               -7.0                   0.6
raw docstring
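
A sketch of reading these parameters, assuming the line at hand exposes an EnumControl of type REVERB whose values are ReverbType instances (many implementations do not, and controls are often available only once the line is open):

(import '(javax.sound.sampled AudioSystem EnumControl EnumControl$Type
                              ReverbType))

(let [clip (AudioSystem/getClip)]
  (when (.isControlSupported clip EnumControl$Type/REVERB)
    (let [^EnumControl ctl (.getControl clip EnumControl$Type/REVERB)]
      (doseq [^ReverbType rt (.getValues ctl)]
        ;; intensities are gains in decibels relative to the direct signal
        (println (str rt)
                 "decay:" (.getDecayTime rt)
                 "late intensity (dB):" (.getLateReflectionIntensity rt)
                 "early intensity (dB):" (.getEarlyReflectionIntensity rt))))))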

javax.sound.sampled.SourceDataLine

A source data line is a data line to which data may be written. It acts as a source to its mixer. An application writes audio bytes to a source data line, which handles the buffering of the bytes and delivers them to the mixer. The mixer may mix the samples with those from other sources and then deliver the mix to a target such as an output port (which may represent an audio output device on a sound card).

Note that the naming convention for this interface reflects the relationship between the line and its mixer. From the perspective of an application, a source data line may act as a target for audio data.

A source data line can be obtained from a mixer by invoking the getLine method of Mixer with an appropriate DataLine.Info object.

The SourceDataLine interface provides a method for writing audio data to the data line's buffer. Applications that play or mix audio should write data to the source data line quickly enough to keep the buffer from underflowing (emptying), which could cause discontinuities in the audio that are perceived as clicks. Applications can use the available method defined in the DataLine interface to determine the amount of data currently queued in the data line's buffer. The amount of data which can be written to the buffer without blocking is the difference between the buffer size and the amount of queued data. If the delivery of audio output stops due to underflow, a STOP event is generated. A START event is generated when the audio output resumes.

A source data line is a data line to which data may be written.  It acts as
a source to its mixer. An application writes audio bytes to a source data line,
which handles the buffering of the bytes and delivers them to the mixer.
The mixer may mix the samples with those from other sources and then deliver
the mix to a target such as an output port (which may represent an audio output
device on a sound card).

Note that the naming convention for this interface reflects the relationship
between the line and its mixer.  From the perspective of an application,
a source data line may act as a target for audio data.

A source data line can be obtained from a mixer by invoking the
getLine method of Mixer with
an appropriate DataLine.Info object.

The SourceDataLine interface provides a method for writing
audio data to the data line's buffer. Applications that play or mix
audio should write data to the source data line quickly enough to keep the
buffer from underflowing (emptying), which could cause discontinuities in
the audio that are perceived as clicks.  Applications can use the
available method defined in the
DataLine interface to determine the amount of data currently
queued in the data line's buffer.  The amount of data which can be written
to the buffer without blocking is the difference between the buffer size
and the amount of queued data.  If the delivery of audio output
stops due to underflow, a STOP event is
generated.  A START event is generated
when the audio output resumes.
raw docstring
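
A self-contained sketch of this write loop: one second of a 440 Hz sine tone, rendered as 16-bit signed little-endian mono PCM and written to a source data line:

(import '(javax.sound.sampled AudioSystem AudioFormat SourceDataLine))

(let [rate 44100.0
      fmt  (AudioFormat. rate 16 1 true false)
      ^SourceDataLine line (AudioSystem/getSourceDataLine fmt)
      buf  (byte-array (* 2 44100))]
  ;; fill the buffer with 16-bit signed little-endian samples
  (dotimes [i 44100]
    (let [sample (int (* 32767 (Math/sin (/ (* 2 Math/PI 440 i) rate))))]
      (aset-byte buf (* 2 i) (unchecked-byte (bit-and sample 0xff)))
      (aset-byte buf (inc (* 2 i)) (unchecked-byte (bit-shift-right sample 8)))))
  (.open line fmt)
  (.start line)
  ;; write blocks until all bytes have been queued in the line's buffer
  (.write line buf 0 (alength buf))
  (.drain line)   ; wait for the buffer to empty before closing
  (.close line))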

javax.sound.sampled.spi.AudioFileReader

Provider for audio file reading services. Classes providing concrete implementations can parse the format information from one or more types of audio file, and can produce audio input streams from files of these types.

Provider for audio file reading services.  Classes providing concrete
implementations can parse the format information from one or more types of
audio file, and can produce audio input streams from files of these types.
raw docstring

javax.sound.sampled.spi.AudioFileWriter

Provider for audio file writing services. Classes providing concrete implementations can write one or more types of audio file from an audio stream.

Provider for audio file writing services.  Classes providing concrete
implementations can write one or more types of audio file from an audio
stream.
raw docstring
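
Both services are normally reached through AudioSystem rather than through the providers directly; a sketch that parses one file and rewrites it as WAVE (the file names are placeholders):

(import '(javax.sound.sampled AudioSystem AudioFileFormat$Type
                              UnsupportedAudioFileException)
        '(java.io File))

(try
  ;; reading consults the installed AudioFileReader providers,
  ;; writing the installed AudioFileWriter providers
  (let [in (AudioSystem/getAudioInputStream (File. "in.au"))]
    (AudioSystem/write in AudioFileFormat$Type/WAVE (File. "out.wav")))
  (catch UnsupportedAudioFileException e
    (println "No installed reader recognizes this file:" (.getMessage e))))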

javax.sound.sampled.spi.core

No vars found in this namespace.

javax.sound.sampled.spi.FormatConversionProvider

A format conversion provider provides format conversion services from one or more input formats to one or more output formats. Converters include codecs, which encode and/or decode audio data, as well as transcoders, etc. Format converters provide methods for determining what conversions are supported and for obtaining an audio stream from which converted data can be read.

The source format represents the format of the incoming audio data, which will be converted.

The target format represents the format of the processed, converted audio data. This is the format of the data that can be read from the stream returned by one of the getAudioInputStream methods.

A format conversion provider provides format conversion services
from one or more input formats to one or more output formats.
Converters include codecs, which encode and/or decode audio data,
as well as transcoders, etc.  Format converters provide methods for
determining what conversions are supported and for obtaining an audio
stream from which converted data can be read.

The source format represents the format of the incoming
audio data, which will be converted.

The target format represents the format of the processed, converted
audio data.  This is the format of the data that can be read from
the stream returned by one of the getAudioInputStream methods.
raw docstring
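
A sketch of requesting such a conversion through AudioSystem, which consults the installed providers; here the target is signed 16-bit PCM at the source's sample rate and channel count, and in.wav is a placeholder:

(import '(javax.sound.sampled AudioSystem AudioFormat
                              AudioFormat$Encoding)
        '(java.io File))

(let [src     (AudioSystem/getAudioInputStream (File. "in.wav"))
      src-fmt (.getFormat src)
      target  (AudioFormat. AudioFormat$Encoding/PCM_SIGNED
                            (.getSampleRate src-fmt) 16
                            (.getChannels src-fmt)
                            (* 2 (.getChannels src-fmt))
                            (.getSampleRate src-fmt) false)]
  (when (AudioSystem/isConversionSupported target src-fmt)
    ;; converted data can then be read from the returned stream
    (AudioSystem/getAudioInputStream target src)))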

javax.sound.sampled.spi.MixerProvider

A provider or factory for a particular mixer type. This mechanism allows the implementation to determine how resources are managed in the creation and management of a mixer.

A provider or factory for a particular mixer type.
This mechanism allows the implementation to determine
how resources are managed in creation / management of
a mixer.
raw docstring
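
A sketch of the shape of such a provider; this toy version merely re-exposes the platform's default mixers, and a real provider would additionally be registered as a named class under META-INF/services/javax.sound.sampled.spi.MixerProvider:

(import '(javax.sound.sampled AudioSystem)
        '(javax.sound.sampled.spi MixerProvider))

;; proxy produces a synthetically named class, so this is only a sketch;
;; an installable provider would use gen-class or Java instead
(def delegating-provider
  (proxy [MixerProvider] []
    (getMixerInfo []
      (AudioSystem/getMixerInfo))
    (getMixer [info]
      (AudioSystem/getMixer info))))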

javax.sound.sampled.TargetDataLine

A target data line is a type of DataLine from which audio data can be read. The most common example is a data line that gets its data from an audio capture device. (The device is implemented as a mixer that writes to the target data line.)

Note that the naming convention for this interface reflects the relationship between the line and its mixer. From the perspective of an application, a target data line may act as a source for audio data.

The target data line can be obtained from a mixer by invoking the getLine method of Mixer with an appropriate DataLine.Info object.

The TargetDataLine interface provides a method for reading the captured data from the target data line's buffer. Applications that record audio should read data from the target data line quickly enough to keep the buffer from overflowing, which could cause discontinuities in the captured data that are perceived as clicks. Applications can use the available method defined in the DataLine interface to determine the amount of data currently queued in the data line's buffer. If the buffer does overflow, the oldest queued data is discarded and replaced by new data.

A target data line is a type of DataLine from which
audio data can be read.  The most common example is a data line that gets
its data from an audio capture device.  (The device is implemented as a
mixer that writes to the target data line.)

Note that the naming convention for this interface reflects the relationship
between the line and its mixer.  From the perspective of an application,
a target data line may act as a source for audio data.

The target data line can be obtained from a mixer by invoking the
getLine
method of Mixer with an appropriate
DataLine.Info object.

The TargetDataLine interface provides a method for reading the
captured data from the target data line's buffer.  Applications
that record audio should read data from the target data line quickly enough
to keep the buffer from overflowing, which could cause discontinuities in
the captured data that are perceived as clicks.  Applications can use the
available method defined in the
DataLine interface to determine the amount of data currently
queued in the data line's buffer.  If the buffer does overflow,
the oldest queued data is discarded and replaced by new data.
raw docstring
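
A sketch of that read loop, capturing roughly two seconds of 16-bit mono audio from the default capture device:

(import '(javax.sound.sampled AudioSystem AudioFormat TargetDataLine
                              DataLine$Info)
        '(java.io ByteArrayOutputStream))

(let [fmt  (AudioFormat. 44100.0 16 1 true false)
      info (DataLine$Info. TargetDataLine fmt)
      ^TargetDataLine line (AudioSystem/getLine info)
      buf  (byte-array 4096)
      out  (ByteArrayOutputStream.)]
  (.open line fmt)
  (.start line)
  ;; read promptly so the line's internal buffer does not overflow
  (while (< (.size out) (* 2 44100 2))        ; ~2 s of 16-bit mono
    (let [n (.read line buf 0 (alength buf))]
      (.write out buf 0 n)))
  (.stop line)
  (.close line)
  (.toByteArray out))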

javax.sound.sampled.UnsupportedAudioFileException

An UnsupportedAudioFileException is an exception indicating that an operation failed because a file did not contain valid data of a recognized file type and format.

An UnsupportedAudioFileException is an exception indicating that an
operation failed because a file did not contain valid data of a recognized file
type and format.
raw docstring
