AudioContext
The AudioContext
interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode.
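For example, a context is typically created up front and the rest of the graph is built from it (a minimal sketch; the variable name audioCtx is illustrative and is reused in the sketches below):
  // Create an audio-processing graph context; the nodes below are created from it.
  const audioCtx = new AudioContext();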
Properties
The audioWorklet
read-only property of the BaseAudioContext interface returns an instance of AudioWorklet that can be used for adding AudioWorkletProcessor-derived classes which implement custom audio processing. Available only in secure contexts.
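A sketch of loading a worklet module, assuming a hypothetical file noise-processor.js that registers a processor named "noise-processor":
  // Load the module, then instantiate a node backed by the registered processor.
  await audioCtx.audioWorklet.addModule("noise-processor.js");
  const noiseNode = new AudioWorkletNode(audioCtx, "noise-processor");
  noiseNode.connect(audioCtx.destination);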
The baseLatency
read-only property of the AudioContext interface returns a double that represents the number of seconds of processing latency incurred by the AudioContext
passing an audio buffer from the AudioDestinationNode — i.e., the end of the audio graph — into the host system's audio subsystem ready for playing.
The currentTime
read-only property of the BaseAudioContext interface returns a double representing an ever-increasing hardware timestamp in seconds that can be used for scheduling audio playback, visualizing timelines, etc.
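For instance, currentTime can be used to schedule playback relative to the context clock (sketch, reusing the audioCtx created earlier):
  const osc = audioCtx.createOscillator();
  osc.connect(audioCtx.destination);
  osc.start(audioCtx.currentTime + 1); // start one second from now
  osc.stop(audioCtx.currentTime + 2);  // play for one second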
The destination
property of the BaseAudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.
The listener
property of the BaseAudioContext interface returns an AudioListener object that can then be used for implementing 3D audio spatialization.
The outputLatency
read-only property of the AudioContext interface provides an estimation of the output latency of the current audio context.
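Both latency figures can simply be read off the context, for example:
  console.log(`Base latency:   ${audioCtx.baseLatency} s`);
  console.log(`Output latency: ${audioCtx.outputLatency} s`);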
The sampleRate
property of the BaseAudioContext interface returns a floating point number representing the sample rate, in samples per second, used by all nodes in this audio context.
The state
read-only property of the BaseAudioContext interface returns the current state of the AudioContext.
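A sketch of reacting to state changes (the statechange event fires whenever the value changes):
  audioCtx.addEventListener("statechange", () => {
    console.log(`AudioContext is now ${audioCtx.state}`); // "suspended", "running", or "closed"
  });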
Functions
The createAnalyser()
method of the BaseAudioContext interface creates an AnalyserNode, which can be used to expose audio time and frequency data and create data visualizations.
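A minimal sketch; sourceNode stands in for any audio source node you have already created:
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  sourceNode.connect(analyser);
  const data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data); // call once per animation frame to drive a visualization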
The createBiquadFilter()
method of the BaseAudioContext interface creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter types.
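For example, a low-pass filter inserted between a source and the destination (sketch; sourceNode is assumed):
  const filter = audioCtx.createBiquadFilter();
  filter.type = "lowpass";
  filter.frequency.value = 1000; // cutoff frequency in Hz
  filter.Q.value = 1;
  sourceNode.connect(filter).connect(audioCtx.destination);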
The createBuffer()
method of the BaseAudioContext interface is used to create a new, empty AudioBuffer object, which can then be populated by data, and played via an AudioBufferSourceNode.
The createBufferSource()
method of the BaseAudioContext interface is used to create a new AudioBufferSourceNode, which can be used to play audio data contained within an AudioBuffer object.
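A sketch combining the two previous methods: create a one-second mono buffer of white noise, then play it:
  const buffer = audioCtx.createBuffer(1, audioCtx.sampleRate, audioCtx.sampleRate);
  const channel = buffer.getChannelData(0);
  for (let i = 0; i < channel.length; i++) {
    channel[i] = Math.random() * 2 - 1; // random samples in [-1, 1]
  }
  const noiseSource = audioCtx.createBufferSource();
  noiseSource.buffer = buffer;
  noiseSource.connect(audioCtx.destination);
  noiseSource.start();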
The createChannelMerger()
method of the BaseAudioContext interface creates a ChannelMergerNode, which combines channels from multiple audio streams into a single audio stream.
The createChannelSplitter()
method of the BaseAudioContext interface is used to create a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
The createConstantSource()
method of the BaseAudioContext interface creates a ConstantSourceNode object, which is an audio source that continuously outputs a monaural (one-channel) sound signal whose samples all have the same value.
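A sketch; gainA and gainB are assumed GainNodes whose gain parameters are driven from one shared value:
  const constant = audioCtx.createConstantSource();
  constant.offset.value = 0.5;  // every output sample is 0.5
  constant.connect(gainA.gain); // drive both gain parameters from the same source
  constant.connect(gainB.gain);
  constant.start();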
The createConvolver()
method of the BaseAudioContext interface creates a ConvolverNode, which is commonly used to apply reverb effects to your audio.
The createDelay()
method of the BaseAudioContext interface is used to create a DelayNode, which is used to delay the incoming audio signal by a certain amount of time.
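For example (sketch; sourceNode is assumed):
  const delay = audioCtx.createDelay(5); // maximum delay time in seconds
  delay.delayTime.value = 0.25;          // delay the signal by 250 ms
  sourceNode.connect(delay).connect(audioCtx.destination);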
The createDynamicsCompressor()
method of the BaseAudioContext interface is used to create a DynamicsCompressorNode, which can be used to apply compression to an audio signal.
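A sketch with two of the compressor's parameters set explicitly (sourceNode assumed):
  const compressor = audioCtx.createDynamicsCompressor();
  compressor.threshold.value = -50; // dB level above which compression kicks in
  compressor.ratio.value = 12;      // amount of gain reduction applied above the threshold
  sourceNode.connect(compressor).connect(audioCtx.destination);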
The createGain()
method of the BaseAudioContext interface creates a GainNode, which can be used to control the overall gain (or volume) of the audio graph.
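For example, to halve the volume of a source (sketch; sourceNode assumed):
  const gainNode = audioCtx.createGain();
  gainNode.gain.value = 0.5;
  sourceNode.connect(gainNode).connect(audioCtx.destination);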
The createIIRFilter()
method of the BaseAudioContext interface creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter which can be configured to serve as various types of filter.
The createMediaElementSource()
method of the AudioContext interface is used to create a new MediaElementAudioSourceNode object, given an existing HTML audio or video element, the audio from which can then be played and manipulated.
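A sketch assuming the page contains an <audio> element:
  const audioElement = document.querySelector("audio");
  const track = audioCtx.createMediaElementSource(audioElement);
  track.connect(audioCtx.destination); // the element's audio now flows through the graph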
The createMediaStreamDestination()
method of the AudioContext interface is used to create a new MediaStreamAudioDestinationNode object associated with a WebRTC MediaStream representing an audio stream, which may be stored in a local file or sent to another computer.
The createMediaStreamSource()
method of the AudioContext interface is used to create a new MediaStreamAudioSourceNode object, given a media stream (say, from a MediaDevices.getUserMedia instance), the audio from which can then be played and manipulated.
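A sketch that routes microphone input into the graph (must run inside an async function, and in most browsers after a user gesture):
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const micSource = audioCtx.createMediaStreamSource(stream);
  micSource.connect(audioCtx.destination);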
The createOscillator()
method of the BaseAudioContext interface creates an OscillatorNode, a source representing a periodic waveform.
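For example, a one-second 440 Hz sine tone (sketch):
  const oscillator = audioCtx.createOscillator();
  oscillator.type = "sine";
  oscillator.frequency.value = 440; // A4
  oscillator.connect(audioCtx.destination);
  oscillator.start();
  oscillator.stop(audioCtx.currentTime + 1);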
The createPanner()
method of the BaseAudioContext interface is used to create a new PannerNode, which is used to spatialize an incoming audio stream in 3D space.
The createPeriodicWave()
method of the BaseAudioContext interface is used to create a PeriodicWave.
The createStereoPanner()
method of the BaseAudioContext interface creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
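For example (sketch; sourceNode assumed):
  const panner = audioCtx.createStereoPanner();
  panner.pan.value = -1; // -1 is full left, 0 is centre, +1 is full right
  sourceNode.connect(panner).connect(audioCtx.destination);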
The createWaveShaper()
method of the BaseAudioContext interface creates a WaveShaperNode, which represents a non-linear distortion.
The decodeAudioData()
method of the BaseAudioContext interface is used to asynchronously decode audio file data contained in an ArrayBuffer. The decoded AudioBuffer is resampled to the AudioContext's sampling rate, then passed to a callback or promise.
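A promise-based sketch; the URL is illustrative and the code belongs inside an async function:
  const response = await fetch("sound.ogg");
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);
  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start();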
The getOutputTimestamp()
method of the AudioContext interface returns a new AudioTimestamp object containing two audio timestamp values relating to the current audio context.
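For example:
  const ts = audioCtx.getOutputTimestamp();
  console.log(ts.contextTime, ts.performanceTime); // context-clock and performance-clock values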
The suspend()
method of the AudioContext Interface suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process — this is useful if you want an application to power down the audio hardware when it will not be using an audio context for a while.
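A sketch pairing suspend() with its counterpart resume() (both return promises, so this belongs in an async function):
  await audioCtx.suspend(); // audio hardware is released; currentTime stops advancing
  // ...later...
  await audioCtx.resume();  // playback picks up where it left off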