In previous recipes, we acknowledged that JavaScript timers do not have the fidelity required for scripting audio. Web Audio circumvents this limitation through automation support. Automation allows applications to schedule predefined audio behaviors ahead of time, making audio events independent of code execution timing.
As previously mentioned, the AudioParam interface supports automation, which allows applications to build sophisticated automated behaviors. Let's take a look at the AudioParam automation methods:
The setValueAtTime() method sets the audio parameter value to value at the time startTime:
The following is a diagram illustrating its behavior:
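As a rough sketch, the step behavior can be modeled in plain JavaScript. The event-list model and the names below are illustrative, not part of the Web Audio API itself:

```javascript
// Minimal model of setValueAtTime()'s step behavior: the parameter holds
// the most recent scheduled value at or before time t.
// `events` is assumed to be sorted by ascending time.
function stepValueAt(events, t, initialValue) {
  let v = initialValue;
  for (const e of events) {
    if (e.time > t) break;
    v = e.value;
  }
  return v;
}

// Scheduling the real thing on a gain parameter (browser only):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  // Jump the gain to 0.5 one second from now; it holds that value until
  // the next automation event.
  gainNode.gain.setValueAtTime(0.5, ctx.currentTime + 1);
}
```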
The linearRampToValueAtTime() method linearly ramps the audio parameter value from the previously set value to the given value, value, at the time endTime:
The following diagrams illustrate the behavior when ramping up or down to the target value respectively:
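During the ramp, the parameter value is a straight-line interpolation between the previous event's value and the target. A small sketch follows; the helper function is illustrative, not a Web Audio API:

```javascript
// Value of a linear ramp at time t, given the previous event (t0, v0) and
// the ramp's end event (t1, v1).
function linearRampValue(t, t0, v0, t1, v1) {
  if (t <= t0) return v0;
  if (t >= t1) return v1;
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

// Scheduling a two-second fade-out (browser only):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  const now = ctx.currentTime;
  gainNode.gain.setValueAtTime(1, now);               // anchor the start value
  gainNode.gain.linearRampToValueAtTime(0, now + 2);  // ramp to silence by now + 2
}
```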
The exponentialRampToValueAtTime() method exponentially ramps the audio parameter value from the previously set value to the given value, value, at the time endTime:
The following are the diagrams illustrating its behavior:
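The exponential ramp follows a geometric interpolation between the two values. Note that both the start and end values must be nonzero and share the same sign (so you cannot exponentially ramp to exactly 0). The helper below is an illustrative sketch:

```javascript
// Value of an exponential ramp at time t, given the previous event (t0, v0)
// and the ramp's end event (t1, v1). v0 and v1 must be nonzero, same sign.
function exponentialRampValue(t, t0, v0, t1, v1) {
  if (t <= t0) return v0;
  if (t >= t1) return v1;
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}

// Scheduling a perceptually smooth fade toward near-silence (browser only):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  const now = ctx.currentTime;
  gainNode.gain.setValueAtTime(1, now);
  gainNode.gain.exponentialRampToValueAtTime(0.001, now + 2); // can't ramp to 0
}
```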
The setTargetAtTime() method ramps the audio parameter so that it approaches the target value, value, starting at the time startTime. The timeConstant parameter controls how quickly the value approaches the target:
The following are the diagrams illustrating its behavior:
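The approach is an exponential decay toward the target: after one time constant the value has covered roughly 63% of the gap, and it never exactly reaches the target. A sketch of the decay formula (the helper name is illustrative):

```javascript
// Value at time t after setTargetAtTime(target, t0, timeConstant), where v0
// is the parameter value at t0: target + (v0 - target) * e^(-(t - t0) / tau).
function targetApproachValue(t, t0, v0, target, timeConstant) {
  if (t <= t0) return v0;
  return target + (v0 - target) * Math.exp(-(t - t0) / timeConstant);
}

// A quick, click-free fade to silence (browser only):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  // Approach 0 starting now; ~0.1s time constant gives a fast, smooth decay.
  gainNode.gain.setTargetAtTime(0, ctx.currentTime, 0.1);
}
```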
The setValueCurveAtTime() method applies an array of arbitrary values to the audio parameter. The array values are distributed evenly throughout the automation duration, and the applied value is calculated using linear interpolation:
The following is the diagram illustrating its behavior:
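A sketch of how the curve values map onto the automation duration. The sampling helper is illustrative; only the setValueCurveAtTime() call is the actual API:

```javascript
// Sample a value curve at time t: curve[0] lands at startTime, the last
// entry at startTime + duration, with linear interpolation in between.
function curveValueAt(curve, t, startTime, duration) {
  if (t <= startTime) return curve[0];
  if (t >= startTime + duration) return curve[curve.length - 1];
  const pos = ((t - startTime) / duration) * (curve.length - 1);
  const i = Math.floor(pos);
  return curve[i] + (curve[i + 1] - curve[i]) * (pos - i);
}

// Applying a swell-and-fade shape over two seconds (browser only):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  const curve = new Float32Array([0, 1, 0]); // silence -> full -> silence
  gainNode.gain.setValueCurveAtTime(curve, ctx.currentTime, 2);
}
```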
The cancelScheduledValues() method cancels all the scheduled parameter changes occurring at the time startTime or later:
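Conceptually, the method drops every pending event whose time is at or after startTime, as the toy model below shows. The filtering helper is illustrative; only the cancelScheduledValues() call is the actual API:

```javascript
// Toy model: keep only the events scheduled strictly before startTime.
function cancelFrom(events, startTime) {
  return events.filter((e) => e.time < startTime);
}

// Wiping all pending automation from now onward (browser only):
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  gainNode.gain.cancelScheduledValues(ctx.currentTime);
}
```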
Like the playback automation methods we discussed in the previous recipe, all time parameters are in seconds and are relative to the audio context's time coordinate system.
Note
Wondering how to specify the start time for some automation methods, such as linearRampToValueAtTime() and exponentialRampToValueAtTime()?
When an automation method does not have a start time parameter, its behavior starts at the nearest previous automation event or the audio context current time, whichever is later.
There are several key rules in regards to scheduling the automation events:
If an event is added at a time when there is already an event of the exact same type, the new event replaces the old one.
If an event is added at a time when there is already an event of a different type, the new event is scheduled to occur immediately after the existing one.
Events may not overlap. Some events, such as the linearRampToValueAtTime() automation behavior, occur over a span of time. No event may be scheduled while such an event is active; otherwise, Web Audio throws a runtime exception.
We leverage the AudioParam automation support to implement ducking. The following is an overview of the ducking logic implemented in the AudioLayer class:
We add a GainNode instance into the node graph as the duck controller.
When a sound effect is played, we script the duck controller's gain audio parameter to reduce the audio output gain level for the duration of the sound effect.
If ducking is reactivated while it is still active, we revise the scheduled ducking events so that they end at the appropriate time.
The following is the node graph diagram produced by the code:
Tip
Why use two GainNode instances instead of one?
It's a good idea to split independent scripted audio gain behaviors into separate GainNode instances. Because each behavior schedules events on its own gain audio parameter, the behaviors never conflict with each other, and their effects combine naturally as the signal passes through both gain stages.
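A minimal sketch of building such a graph follows. The function and variable names are illustrative, not the book's exact code:

```javascript
// Build the ducking node graph: volume gain -> duck gain -> destination.
// `ctx` is an AudioContext (or anything exposing the same interface).
// Returns both gain nodes so each can be scripted independently.
function createDuckingGraph(ctx) {
  const volumeGain = ctx.createGain(); // scripted for ordinary volume control
  const duckGain = ctx.createGain();   // scripted only by the duck behavior
  volumeGain.connect(duckGain);
  duckGain.connect(ctx.destination);
  return { volumeGain, duckGain };
}
```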
Now, let's take a look at AudioLayer.setDuck(), which implements the ducking behavior:
The AudioLayer.setDuck() method takes a duration (in seconds) indicating how long the duck behavior should be applied:
We cache the duck controller's gain audio parameter in duckGain:
We cancel any existing leftover scheduled duck behaviors, thereby allowing us to start with a clean slate:
We employ the linearRampToValueAtTime() automation behavior to schedule the transition in: the audio parameter is scripted to linearly ramp from the existing volume to the duck volume, DUCK_VOLUME, over TRANSITIONIN_SECS seconds. Because there are no future events scheduled, the behavior starts at the current audio context time:
Note
If the volume is already at DUCK_VOLUME, the transition leaves it unchanged, effectively extending the ducking behavior.
We add an automation event to mark the start of the TRANSITIONOUT section. We do this by scheduling a setValueAtTime() automation behavior:
Finally, we set up the TRANSITIONOUT section using a linearRampToValueAtTime() automation behavior. We arrange the transition to occur over TRANSITIONOUT_SECS seconds by scheduling its end time TRANSITIONOUT_SECS after the previous setValueAtTime() automation event:
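Putting the steps above together, a minimal sketch of the duck schedule might look like the following. The constants and the scheduleDuck() name are illustrative, not the book's exact code:

```javascript
// Illustrative constants; the recipe's actual values may differ.
const DUCK_VOLUME = 0.3;        // gain level while ducked
const TRANSITIONIN_SECS = 0.2;  // fade down over 0.2 seconds
const TRANSITIONOUT_SECS = 0.5; // fade back up over 0.5 seconds

// `duckGain` is the duck controller's gain AudioParam; `now` is the audio
// context's current time; `durationSecs` is how long to stay ducked.
function scheduleDuck(duckGain, now, durationSecs) {
  // Start from a clean slate: drop any leftover scheduled duck events.
  duckGain.cancelScheduledValues(now);
  // Transition in: ramp down to the duck volume, starting immediately.
  duckGain.linearRampToValueAtTime(DUCK_VOLUME, now + TRANSITIONIN_SECS);
  // Hold the duck volume; this event marks the start of the transition out.
  duckGain.setValueAtTime(DUCK_VOLUME, now + durationSecs);
  // Transition out: ramp back to full volume.
  duckGain.linearRampToValueAtTime(1, now + durationSecs + TRANSITIONOUT_SECS);
}
```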
The following is a graph illustrating the automation we've applied to duckGain, the duck controller's gain audio parameter:
In order to have sound effect playback duck the music volume, the sound effects and the music have to be played on separate audio layers. That's why this recipe instantiates two AudioLayer instances: one for music playback and the other for sound effect playback.
The dedicated music AudioLayer instance is cached in the WebAudioApp attribute musicLayer, and the dedicated sound effects AudioLayer instance is cached in the WebAudioApp attribute sfxLayer:
Whenever a sound effect button is clicked, we play the sound and simultaneously activate the duck behavior on the music layer. This logic is implemented as part of the sound effect's click event handler in WebAudioApp.initSfx():
We activate ducking on webAudioApp.musicLayer, the music's AudioLayer instance. The ducking duration is set to the sound effect's duration (we read the sound effect's sample duration from its AudioBuffer instance).
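A sketch of that click handler logic follows. The playSound() and setDuck() method names follow this recipe's AudioLayer description; treat them as assumptions if your implementation differs:

```javascript
// On a sound effect button click: play the effect on the dedicated sound
// effects layer, and duck the music layer for the effect's full duration.
// `sfxBuffer` is the effect's decoded AudioBuffer.
function onSfxButtonClick(webAudioApp, sfxBuffer) {
  webAudioApp.sfxLayer.playSound(sfxBuffer);
  webAudioApp.musicLayer.setDuck(sfxBuffer.duration);
}
```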
The ducking behavior is just one demonstration of the power of automation. The possibilities are endless given the breadth of automation-friendly audio parameters available in Web Audio. Other possible effects that are achievable through automation include fades, tempo matching, and cyclic panning effects.
Please refer to the latest online W3C Web Audio documentation at http://www.w3.org/TR/webaudio/ for a complete list of available audio parameters.
Advanced automation techniques
Web Audio allows the output from an AudioNode instance to drive an audio parameter. This is accomplished by connecting an AudioNode instance to an AudioParam instance:
The previous code connects an AudioNode instance to a target AudioParam instance: destinationNode is the target AudioParam instance, and outputIndex identifies which of the AudioNode's outputs to connect to it.
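As a small sketch, the connection can be wrapped in a helper (the helper name is illustrative). Note that the connected node's output is summed with the parameter's own value, so a low-frequency oscillator wired into a gain parameter makes the gain wobble around its intrinsic value, producing a tremolo effect:

```javascript
// connect() accepts an AudioParam as the destination; the node's output
// samples are summed with the parameter's scheduled value.
function modulateParam(sourceNode, targetParam, outputIndex = 0) {
  sourceNode.connect(targetParam, outputIndex);
}

// Browser usage: a 2 Hz oscillator modulates the gain. Because the LFO's
// -1..1 output is summed with the gain's intrinsic value of 1, the gain
// sweeps between 0 and 2.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  const lfo = ctx.createOscillator();
  lfo.frequency.value = 2;
  modulateParam(lfo, gainNode.gain);
  lfo.start();
}
```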
This functionality allows applications to automate audio parameters using controller data from data files: the controller data is loaded into an AudioBuffer instance and injected into the node graph using an AudioBufferSourceNode instance.
The following node graph illustrates this approach for controlling the output volume using controller data from a file:
The automation data can even be generated at runtime using JavaScript. The following node graph employs this method to automate a sound sample's output volume:
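A minimal sketch of the runtime-generation approach follows: we fill a Float32Array with a fade curve, wrap it in an AudioBuffer, and feed it into the gain parameter through an AudioBufferSourceNode. The makeFadeCurve() helper is illustrative:

```javascript
// Generate controller data at runtime: a linear fade from 1 down to 0.
function makeFadeCurve(length) {
  const data = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    data[i] = 1 - i / (length - 1);
  }
  return data;
}

// Browser-only: drive a gain parameter from the generated samples.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const gainNode = ctx.createGain();
  // A node's output is summed with the parameter's own value, so zero the
  // intrinsic value and let the controller data supply the gain entirely.
  gainNode.gain.value = 0;
  const buffer = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate); // 1 second
  buffer.copyToChannel(makeFadeCurve(ctx.sampleRate), 0);
  const controller = ctx.createBufferSource();
  controller.buffer = buffer;
  controller.connect(gainNode.gain); // buffer samples become the gain value
  controller.start();
}
```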
Unfortunately, the implementation details for accomplishing these effects are beyond the scope of this book. Therefore, I leave the task of producing working examples of these cases to you, the readers.