The AnalyserNode interface represents a node able to provide real-time frequency and time-domain analysis information. It is a `web.audio.AudioNode` that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations.
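For illustration, here is a minimal ClojureScript sketch using plain JS interop (the standard browser API, not any wrappers this namespace may provide) that polls frequency data for a visualization:

```clojure
(defonce ctx (js/AudioContext.))

(def analyser (.createAnalyser ctx))
(set! (.-fftSize analyser) 256)

;; The analyser passes audio through unchanged, so it can sit anywhere
;; in the graph, e.g. source -> analyser -> destination.
(def data (js/Uint8Array. (.-frequencyBinCount analyser)))

(defn sample-spectrum!
  "Copies the current frequency data (one 0-255 byte per bin) into `data`."
  []
  (.getByteFrequencyData analyser data)
  data)
```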
The AudioBuffer interface represents a short audio asset residing in memory, created from an audio file using the `AudioContext.decodeAudioData()` method, or from raw data using `AudioContext.createBuffer()`. Once put into an AudioBuffer, the audio can then be played by being passed into a `web.audio.AudioBufferSourceNode`.
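A hedged sketch of the raw-data path, again via plain interop: one second of white noise built with `createBuffer`:

```clojure
;; One-second mono buffer at the context's sample rate.
(def sr     (.-sampleRate ctx))
(def buffer (.createBuffer ctx 1 sr sr))

;; getChannelData returns a Float32Array of samples in [-1, 1].
(let [ch (.getChannelData buffer 0)]
  (dotimes [i (.-length ch)]
    (aset ch i (dec (* 2 (js/Math.random))))))
```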
The AudioBufferSourceNode interface is a `web.audio.AudioScheduledSourceNode` which represents an audio source consisting of in-memory audio data, stored in a `web.audio.AudioBuffer`. It's especially useful for playing back audio which has particularly stringent timing accuracy requirements, such as for sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network.
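Playing such a buffer might look like the following sketch; note that source nodes are one-shot, so a fresh node is created per playback:

```clojure
(defn play-buffer!
  "Starts `buffer` playing immediately and returns the source node."
  [ctx buffer]
  (let [src (.createBufferSource ctx)]
    (set! (.-buffer src) buffer)
    (.connect src (.-destination ctx))
    (.start src)
    src))
```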
The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by a `web.audio.AudioNode`.
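A sketch reusing the `ctx` created above; browsers typically keep a context suspended until a user gesture, hence the `resume` call:

```clojure
(defn on-user-click []
  ;; Contexts often start in the "suspended" state until a gesture.
  (.resume ctx)
  (js/console.log "state:" (.-state ctx)
                  "sample rate:" (.-sampleRate ctx)))
```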
The AudioContextOptions dictionary is used to specify configuration options when constructing a new `web.audio.AudioContext` object to represent a graph of web audio nodes.
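For example (a sketch; both keys are optional):

```clojure
;; latencyHint trades latency for power use; sampleRate fixes the
;; context's rate instead of using the device default.
(def custom-ctx
  (js/AudioContext. #js {:latencyHint "playback"
                         :sampleRate  44100}))
```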
AudioDestinationNode has no output (as it is the output, no more nodes can be linked after it in the audio graph) and one input. The number of channels in the input must be between 0 and the `maxChannelCount` value, or an exception is raised.
The AudioListener interface represents the position and orientation of the unique person listening to the audio scene, and is used in audio spatialization. All `web.audio.PannerNode`s spatialize in relation to the AudioListener stored in the `BaseAudioContext.listener` attribute.
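A sketch of moving the listener; newer browsers expose the position as `AudioParam`s (older ones used a `setPosition` method instead):

```clojure
(let [listener (.-listener ctx)]
  ;; Same right-handed coordinate space the PannerNodes use.
  (set! (.. listener -positionX -value) 0)
  (set! (.. listener -positionY -value) 0)
  (set! (.. listener -positionZ -value) 5))
```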
The AudioNode interface is a generic interface for representing an audio processing module. Examples include: an audio source (e.g. an HTML `<audio>` or `<video>` element, or an OscillatorNode), the audio destination, and intermediate processing modules (e.g. a filter like BiquadFilterNode, or volume control like GainNode).
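Whatever the concrete node type, the graph is wired with the same `connect` API; a sketch:

```clojure
;; source -> gain -> destination: every AudioNode connects the same way.
(def osc  (.createOscillator ctx))
(def gain (.createGain ctx))

(.connect osc gain)
(.connect gain (.-destination ctx))
```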
The Web Audio API's AudioParam interface represents an audio-related parameter, usually a parameter of a `web.audio.AudioNode` (such as `GainNode.gain`).
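Parameters can be set directly or automated against the context's timeline; a sketch using the `gain` node from above:

```clojure
(let [now (.-currentTime ctx)
      g   (.-gain gain)]
  ;; Jump to full volume now, then fade to silence over two seconds.
  (.setValueAtTime g 1.0 now)
  (.linearRampToValueAtTime g 0.0 (+ now 2)))
```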
The AudioParamDescriptor dictionary of the Web Audio API specifies properties for `web.audio.AudioParam` objects.
The Web Audio API interface AudioParamMap represents a set of multiple audio parameters, each described as a mapping of a `web.DOMString` identifying the parameter to the `web.audio.AudioParam` object representing its value.
The AudioScheduledSourceNode interface—part of the Web Audio API—is a parent interface for several types of audio source node interfaces which share the ability to be started and stopped, optionally at specified times. Specifically, this interface defines the `start()` and `stop()` methods, as well as the `onended` event handler.
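A sketch of the scheduling contract shared by all these source nodes:

```clojure
(let [osc (.createOscillator ctx)
      now (.-currentTime ctx)]
  (.connect osc (.-destination ctx))
  (set! (.-onended osc) (fn [_] (js/console.log "done")))
  (.start osc now)            ; begin immediately
  (.stop  osc (+ now 0.5)))   ; schedule the stop half a second later
```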
AudioScheduledSourceNode Events.
The AudioWorkletGlobalScope interface of the Web Audio API represents a global execution context for user-supplied code, which defines custom `web.audio.AudioWorkletProcessor`-derived classes. Each `web.audio.BaseAudioContext` has a single `web.audio.AudioWorklet` available under the `audioWorklet` property, which runs its code in a single AudioWorkletGlobalScope.
The AudioWorkletNode interface of the Web Audio API represents a base class for a user-defined `web.audio.AudioNode`, which can be connected to an audio routing graph along with other nodes. It has an associated `web.audio.AudioWorkletProcessor`, which does the actual audio processing in a Web Audio rendering thread.
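A sketch of wiring one up; `"white-noise.js"` and the `"white-noise"` processor name are hypothetical, standing for a module that calls `registerProcessor` with a matching `AudioWorkletProcessor` subclass:

```clojure
;; The module loads into the AudioWorkletGlobalScope; the node is then
;; constructed on the main thread under the registered name.
(-> (.addModule (.-audioWorklet ctx) "white-noise.js")
    (.then (fn []
             (let [node (js/AudioWorkletNode. ctx "white-noise")]
               (.connect node (.-destination ctx))))))
```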
The AudioWorkletNodeOptions dictionary of the Web Audio API is used to specify configuration options when constructing a new `web.audio.AudioWorkletNode` for custom audio processing.
The AudioWorkletProcessor interface of the Web Audio API represents the audio processing code behind a custom `web.audio.AudioWorkletNode`. It lives in the `web.audio.AudioWorkletGlobalScope` and runs on the Web Audio rendering thread. In turn, a `web.audio.AudioWorkletNode` based on it runs on the main thread.
The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by `web.audio.AudioContext` and `web.audio.OfflineAudioContext` respectively.
The BiquadFilterNode interface represents a simple low-order filter, and is created using the `AudioContext.createBiquadFilter()` method. It is a `web.audio.AudioNode` that can represent different kinds of filters, tone control devices, and graphic equalizers.
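For instance, a low-pass configuration (a sketch; `src` stands for any source node):

```clojure
(def lowpass (.createBiquadFilter ctx))
(set! (.-type lowpass) "lowpass")
(set! (.. lowpass -frequency -value) 1000)  ; cut above ~1 kHz
(set! (.. lowpass -Q -value) 1)

(.connect src lowpass)
(.connect lowpass (.-destination ctx))
```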
The ConstantSourceNode interface—part of the Web Audio API—represents an audio source (based upon `web.audio.AudioScheduledSourceNode`) whose output is a single unchanging value. This makes it useful for cases in which you need a constant value coming in from an audio source. In addition, it can be used like a constructible `web.audio.AudioParam` by automating the value of its `offset` or by connecting another node to it; see Controlling multiple parameters with ConstantSourceNode.
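A sketch of the multi-parameter use mentioned above: one node's `offset` drives the gain of two branches at once:

```clojure
(let [const (.createConstantSource ctx)
      g1    (.createGain ctx)
      g2    (.createGain ctx)]
  ;; Connecting a node to an AudioParam adds its output to the param.
  (.connect const (.-gain g1))
  (.connect const (.-gain g2))
  (set! (.. const -offset -value) 0.5)
  (.start const))
```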
The ConvolverNode interface is a `web.audio.AudioNode` that performs a Linear Convolution on a given `web.audio.AudioBuffer`, and is often used to achieve a reverb effect. A ConvolverNode always has exactly one input and one output.
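A sketch; `impulse-buffer` is a placeholder for an `AudioBuffer` holding a recorded impulse response, and `src` for any source node:

```clojure
(def reverb (.createConvolver ctx))
(set! (.-buffer reverb) impulse-buffer)  ; impulse-buffer: placeholder

(.connect src reverb)
(.connect reverb (.-destination ctx))
```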
web.audio interfaces.
The DelayNode interface represents a delay-line; a `web.audio.AudioNode` audio-processing module that causes a delay between the arrival of an input data and its propagation to the output.
Inherits properties from its parent, `web.audio.AudioNode`.
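A sketch: half a second of delay, with the maximum fixed at creation time:

```clojure
(def delay-node (.createDelay ctx 2.0))       ; max delay: 2 s
(set! (.. delay-node -delayTime -value) 0.5)  ; current delay: 0.5 s
```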
The GainNode interface represents a change in volume. It is a `web.audio.AudioNode` audio-processing module that causes a given gain to be applied to the input data before its propagation to the output. A GainNode always has exactly one input and one output, both with the same number of channels.
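In the simplest case (a sketch):

```clojure
;; Halve the volume of whatever flows through the node.
(def volume (.createGain ctx))
(set! (.. volume -gain -value) 0.5)
```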
The IIRFilterNode interface of the Web Audio API is a `web.audio.AudioNode` processor which implements a general infinite impulse response (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers as well. It lets the parameters of the filter response be specified, so that it can be tuned as needed.
Inherits properties from its parent, `web.audio.AudioNode`.
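A sketch of a one-pole low-pass, y[n] = 0.1·x[n] + 0.9·y[n-1]; unlike a BiquadFilterNode, the coefficients are fixed for the node's lifetime:

```clojure
(def iir (.createIIRFilter ctx
                           #js [0.1]       ; feedforward (b) coefficients
                           #js [1 -0.9]))  ; feedback (a) coefficients
```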
The MediaStreamAudioSourceNode interface is a type of `web.audio.AudioNode` which operates as an audio source whose media is received from a `web.streams.MediaStream` obtained using the WebRTC or Media Capture and Streams APIs.
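A sketch feeding microphone input into the graph (reusing the `analyser` from the first sketch):

```clojure
(-> (.getUserMedia (.-mediaDevices js/navigator) #js {:audio true})
    (.then (fn [stream]
             (let [mic (.createMediaStreamSource ctx stream)]
               (.connect mic analyser)))))
```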
The MediaStreamAudioSourceOptions dictionary provides configuration options used when creating a `web.audio.MediaStreamAudioSourceNode` using its constructor.
The MediaStreamTrackAudioSourceOptions dictionary is used when specifying options to the `MediaStreamTrackAudioSourceNode()` constructor.
The Web Audio API OfflineAudioCompletionEvent interface represents events that occur when the processing of a `web.audio.OfflineAudioContext` is terminated. The complete event implements this interface.
The OfflineAudioContext interface is a `web.audio.AudioContext` interface representing an audio-processing graph built from audio nodes linked together. In contrast with a standard `web.audio.AudioContext`, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to a `web.audio.AudioBuffer`.
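A sketch rendering one second of a 440 Hz tone into a buffer, faster than real time:

```clojure
;; Constructor arguments: channel count, length in frames, sample rate.
(let [off (js/OfflineAudioContext. 1 44100 44100)
      osc (.createOscillator off)]
  (.connect osc (.-destination off))
  (.start osc)
  (-> (.startRendering off)
      (.then (fn [buffer]
               (js/console.log "rendered frames:" (.-length buffer))))))
```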
OfflineAudioContext Events.
The OscillatorNode interface represents a periodic waveform, such as a sine wave. It is a `web.audio.AudioScheduledSourceNode` audio-processing module that causes a specified frequency of a given wave to be created; in effect, a constant tone.
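A sketch: a 440 Hz sine tone that plays for one second:

```clojure
(let [osc (.createOscillator ctx)
      now (.-currentTime ctx)]
  (set! (.-type osc) "sine")
  (set! (.. osc -frequency -value) 440)
  (.connect osc (.-destination ctx))
  (.start osc now)
  (.stop osc (+ now 1)))
```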
A PannerNode always has exactly one input and one output: the input can be mono or stereo but the output is always stereo (2 channels); you can't have panning effects without at least two audio channels!
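A sketch placing a source to the listener's right; `src` again stands for any source node:

```clojure
(def panner (.createPanner ctx))
(set! (.. panner -positionX -value) 3)  ; 3 units to the right

(.connect src panner)
(.connect panner (.-destination ctx))   ; output is always stereo
```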
The `pan` property takes a unitless value between -1 (full left pan) and 1 (full right pan). This interface (StereoPannerNode) was introduced as a much simpler way to apply a simple panning effect than having to use a full PannerNode.
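A sketch:

```clojure
(def spanner (.createStereoPanner ctx))
(set! (.. spanner -pan -value) -1)  ; full left
(set! (.. spanner -pan -value)  1)  ; full right
(set! (.. spanner -pan -value)  0)  ; back to centre
```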
A WaveShaperNode always has exactly one input and one output.