class OfflineAudioContext extends AudioContext
The OfflineAudioContext interface is an AudioContext representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.
It is important to note that, whereas you can create a new AudioContext using the new AudioContext() constructor with no arguments, the new OfflineAudioContext() constructor requires three arguments:
- Annotations
- @JSType() @native() @JSGlobal()
new OfflineAudioContext(numOfChannels, length, sampleRate)
This works in exactly the same way as when you create a new AudioBuffer with the AudioContext.createBuffer method. For more detail, read Audio buffers: frames, samples and channels from our Basic concepts guide.
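The flow above can be sketched in Scala.js. This is a minimal sketch, assuming the scala-js-dom facade is on the classpath; the exact package of `OfflineAudioContext` may differ between facade versions, and the 440 Hz oscillator is purely illustrative:

```scala
import org.scalajs.dom
import scala.concurrent.ExecutionContext.Implicits.global
import scala.scalajs.js.Thenable.Implicits._ // js.Promise -> Future conversion

// Two seconds of stereo audio at 44.1 kHz: length = duration (s) * sampleRate.
val sampleRate   = 44100
val lengthFrames = 2 * sampleRate

val offlineCtx = new dom.OfflineAudioContext(2, lengthFrames, sampleRate)

// Build a graph exactly as with a live AudioContext.
val osc = offlineCtx.createOscillator()
osc.connect(offlineCtx.destination)
osc.start(0)

// Render as fast as possible; the result arrives as an AudioBuffer.
offlineCtx.startRendering().foreach { rendered =>
  println(s"Rendered ${rendered.length} frames in ${rendered.numberOfChannels} channels")
}
```

Note that nothing is heard while rendering: the graph feeds `offlineCtx.destination`, which collects samples into the buffer rather than the audio hardware.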
- By Inheritance
- OfflineAudioContext
- AudioContext
- EventTarget
- Object
- Any
- AnyRef
- Any
Instance Constructors
-
new
OfflineAudioContext(numOfChannels: Int, length: Int, sampleRate: Int)
- numOfChannels
An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.
- length
An integer representing the size of the buffer in sample-frames.
- sampleRate
The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000, with 44100 being the most commonly used.
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
addEventListener[T <: Event](type: String, listener: Function1[T, _], options: EventListenerOptions): Unit
The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).
This implementation accepts a settings object of type EventListenerOptions.
MDN
- Definition Classes
- EventTarget
-
def
addEventListener[T <: Event](type: String, listener: Function1[T, _], useCapture: Boolean = js.native): Unit
The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).
MDN
- Definition Classes
- EventTarget
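As a sketch of the inherited listener API: a Scala lambda converts implicitly to the expected `js.Function1`, so observing the context's state transitions might look like this (the "statechange" event name comes from the Web Audio specification, not from this facade):

```scala
import org.scalajs.dom

val ctx = new dom.OfflineAudioContext(2, 88200, 44100)

// Log every state transition, e.g. "running" -> "closed".
ctx.addEventListener[dom.Event]("statechange", (_: dom.Event) => {
  println(s"Context state is now: ${ctx.state}")
})
```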
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
def
clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
def
close(): Promise[Unit]
Closes the audio context, releasing any system audio resources that it uses.
- Definition Classes
- AudioContext
-
def
createAnalyser(): AnalyserNode
Creates an AnalyserNode, which can be used to expose audio time and frequency data and for example to create data visualisations.
- Definition Classes
- AudioContext
-
def
createBiquadFilter(): BiquadFilterNode
Creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.
- Definition Classes
- AudioContext
-
def
createBuffer(numOfChannels: Int, length: Int, sampleRate: Int): AudioBuffer
Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.
- numOfChannels
An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.
- length
An integer representing the size of the buffer in sample-frames.
- sampleRate
The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.
- Definition Classes
- AudioContext
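A freshly created buffer is silent until populated. As a hedged sketch, one way to fill it is through the `Float32Array` returned by `AudioBuffer.getChannelData` (the 440 Hz tone is an arbitrary illustration):

```scala
import org.scalajs.dom
import scala.math.{Pi, sin}

val ctx = new dom.OfflineAudioContext(1, 44100, 44100)

// One second of mono audio at 44.1 kHz.
val buffer  = ctx.createBuffer(1, 44100, 44100)
val samples = buffer.getChannelData(0) // a Float32Array, one entry per frame

// Write a 440 Hz sine wave directly into the channel data.
for (i <- 0 until samples.length)
  samples(i) = sin(2 * Pi * 440 * i / 44100).toFloat
```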
-
def
createBufferSource(): AudioBufferSourceNode
Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.
- Definition Classes
- AudioContext
-
def
createChannelMerger(numberOfInputs: Int = 6): ChannelMergerNode
Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.
- numberOfInputs
The number of channels in the input audio streams, which the output stream will contain; the default is 6 if this parameter is not specified.
- Definition Classes
- AudioContext
-
def
createChannelSplitter(numberOfOutputs: Int = 6): ChannelSplitterNode
Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
- numberOfOutputs
The number of channels in the input audio stream that you want to output separately; the default is 6 if this parameter is not specified.
- Definition Classes
- AudioContext
-
def
createConvolver(): ConvolverNode
Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.
- Definition Classes
- AudioContext
-
def
createDelay(maxDelayTime: Int): DelayNode
Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.
- maxDelayTime
The maximum amount of time, in seconds, that the audio signal can be delayed by. The default value is 1 if this parameter is not specified.
- Definition Classes
- AudioContext
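The feedback-loop use mentioned above can be sketched as a simple echo: the delay's output is attenuated by a gain node and fed back into the delay. The specific delay and gain values below are illustrative assumptions, not values from this facade:

```scala
import org.scalajs.dom

val ctx = new dom.OfflineAudioContext(2, 5 * 44100, 44100)

// A classic echo: delay -> attenuating gain -> back into the delay.
val delay    = ctx.createDelay(1) // allow delays of up to 1 second
val feedback = ctx.createGain()

delay.delayTime.value = 0.3 // 300 ms between echoes
feedback.gain.value   = 0.5 // each echo at half the previous level

delay.connect(feedback)
feedback.connect(delay)     // close the feedback loop
delay.connect(ctx.destination)
```

Because the gain is below 1, each pass around the loop is quieter than the last, so the echo decays instead of building up without bound.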
-
def
createDynamicsCompressor(): DynamicsCompressorNode
Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.
- Definition Classes
- AudioContext
-
def
createGain(): GainNode
Creates a GainNode, which can be used to control the overall volume of the audio graph.
- Definition Classes
- AudioContext
-
def
createMediaElementSource(myMediaElement: HTMLMediaElement): MediaElementAudioSourceNode
Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.
- myMediaElement
An HTMLMediaElement object that you want to feed into an audio processing graph to manipulate.
- Definition Classes
- AudioContext
-
def
createMediaStreamDestination(): MediaStreamAudioDestinationNode
Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.
- Definition Classes
- AudioContext
-
def
createMediaStreamSource(stream: MediaStream): MediaStreamAudioSourceNode
Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.
- stream
A MediaStream object that you want to feed into an audio processing graph to manipulate.
- Definition Classes
- AudioContext
-
def
createOscillator(): OscillatorNode
Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.
- Definition Classes
- AudioContext
-
def
createPanner(): PannerNode
Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.
- Definition Classes
- AudioContext
-
def
createPeriodicWave(real: Float32Array, imag: Float32Array): PeriodicWave
Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.
- Definition Classes
- AudioContext
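A sketch of defining a custom waveform: the `real` and `imag` arrays hold cosine and sine Fourier coefficients, with index 0 being the DC offset. The particular harmonic amplitudes here are arbitrary illustrative values:

```scala
import org.scalajs.dom
import scala.scalajs.js.typedarray.Float32Array

val ctx = new dom.OfflineAudioContext(1, 44100, 44100)

// Three sine partials at decreasing amplitude; index 0 (DC) stays zero.
val real = new Float32Array(4)
val imag = new Float32Array(4)
imag(1) = 1.0f
imag(2) = 0.5f
imag(3) = 0.25f

val wave = ctx.createPeriodicWave(real, imag)
val osc  = ctx.createOscillator()
osc.setPeriodicWave(wave) // replaces the built-in "sine"/"square"/... types
```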
-
def
createStereoPanner(): StereoPannerNode
Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.
- Definition Classes
- AudioContext
-
def
createWaveShaper(): WaveShaperNode
Creates a WaveShaperNode, which is used to implement non-linear distortion effects.
- Definition Classes
- AudioContext
-
def
currentTime: Double
Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0 and cannot be stopped, paused or reset.
- Definition Classes
- AudioContext
-
def
decodeAudioData(audioData: ArrayBuffer, successCallback: Function1[AudioBuffer, _] = js.native, errorCallback: Function0[_] = js.native): Promise[AudioBuffer]
Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.
- audioData
An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer.
- successCallback
A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated how you want.
- errorCallback
An optional error callback, to be invoked if an error occurs when the audio data is being decoded.
- Definition Classes
- AudioContext
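The XMLHttpRequest pattern described above can be sketched as follows; the file name "sound.ogg" and the callback bodies are placeholder assumptions:

```scala
import org.scalajs.dom
import scala.scalajs.js.typedarray.ArrayBuffer

val ctx = new dom.OfflineAudioContext(2, 44100, 44100)

// Load a complete audio file as raw bytes, then decode it.
val xhr = new dom.XMLHttpRequest()
xhr.open("GET", "sound.ogg")    // placeholder URL
xhr.responseType = "arraybuffer"
xhr.onload = (_: dom.Event) => {
  val data = xhr.response.asInstanceOf[ArrayBuffer]
  ctx.decodeAudioData(data, (decoded: dom.AudioBuffer) => {
    println(s"Decoded ${decoded.duration} seconds of audio")
  })
}
xhr.send()
```

Since the method also returns a Promise[AudioBuffer], the result can equally be consumed through the promise instead of the success callback.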
-
val
destination: AudioDestinationNode
Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.
- Definition Classes
- AudioContext
-
def
dispatchEvent(evt: Event): Boolean
Dispatches an Event at the specified EventTarget, invoking the affected EventListeners in the appropriate order. The normal event processing rules (including the capturing and optional bubbling phase) apply to events dispatched manually with dispatchEvent().
MDN
- Definition Classes
- EventTarget
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
hasOwnProperty(v: String): Boolean
- Definition Classes
- Object
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
def
isPrototypeOf(v: Object): Boolean
- Definition Classes
- Object
-
val
listener: AudioListener
Returns the AudioListener object, used for 3D spatialization.
- Definition Classes
- AudioContext
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
def
propertyIsEnumerable(v: String): Boolean
- Definition Classes
- Object
-
def
removeEventListener[T <: Event](type: String, listener: Function1[T, _], options: EventListenerOptions): Unit
Removes the event listener previously registered with EventTarget.addEventListener.
This implementation accepts a settings object of type EventListenerOptions.
MDN
- Definition Classes
- EventTarget
-
def
removeEventListener[T <: Event](type: String, listener: Function1[T, _], useCapture: Boolean = js.native): Unit
Removes the event listener previously registered with EventTarget.addEventListener.
MDN
- Definition Classes
- EventTarget
-
def
resume(): Promise[Unit]
Resumes the progression of time in an audio context that has previously been suspended.
- Definition Classes
- AudioContext
-
val
sampleRate: Double
Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The sample-rate of an AudioContext cannot be changed.
- Definition Classes
- AudioContext
-
def
startRendering(): Promise[AudioBuffer]
The promise-based startRendering() method of the OfflineAudioContext interface starts rendering the audio graph, taking into account the current connections and the current scheduled changes.
When the method is invoked, the rendering is started and a promise is raised. When the rendering is completed, the promise resolves with an AudioBuffer containing the rendered audio.
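A common pattern is to hand the resolved buffer to a live context for playback. This is a hedged sketch; the `js.Promise`-to-`Future` conversion via `Thenable.Implicits` is an assumption about the surrounding Scala.js setup:

```scala
import org.scalajs.dom
import scala.concurrent.ExecutionContext.Implicits.global
import scala.scalajs.js.Thenable.Implicits._

val offline = new dom.OfflineAudioContext(2, 10 * 44100, 44100)
// ... build and schedule the offline graph here ...

// The promise resolves once the whole graph has been rendered.
offline.startRendering().foreach { buffer =>
  // Play the pre-rendered result through a live context.
  val live   = new dom.AudioContext()
  val source = live.createBufferSource()
  source.buffer = buffer
  source.connect(live.destination)
  source.start(0)
}
```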
-
def
state: String
Returns the current state of the AudioContext.
- Definition Classes
- AudioContext
-
def
suspend(): Promise[Unit]
Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
- Definition Classes
- AudioContext
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
def
toLocaleString(): String
- Definition Classes
- Object
-
def
toString(): String
- Definition Classes
- AnyRef → Any
-
def
valueOf(): Any
- Definition Classes
- Object
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()