An AudioContext is a simple alias for a Json.Decode.Value. By making clever use of a context's computed properties we can decode information such as the current time or the context's state whenever we need it.
const context = new AudioContext()

const App = Elm.Main.init({
  node: document.querySelector('#app'),
  flags: context
})
By passing an AudioContext to the Elm app through flags, we can pass it on to the property accessors below whenever we need to query the state of the context.
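On the Elm side the flag arrives as a plain Json.Decode.Value and can be checked with `from`. A minimal sketch; the field names and the use of a `Maybe AudioContext` in the model are assumptions so the `Nothing` branch has somewhere sensible to go:

```elm
init : Json.Decode.Value -> ( Model, Cmd Msg )
init flags =
    case from flags of
        Just context ->
            ( { time = 0, context = Just context, freq = 440 }, Cmd.none )

        Nothing ->
            -- The flag wasn't an AudioContext after all: carry on
            -- without one rather than crashing.
            ( { time = 0, context = Nothing, freq = 440 }, Cmd.none )
```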
The state of an AudioContext encoded as a nice Elm union type. Mostly handy to prevent unnecessary calculations of audio graphs if the context is suspended or closed.
from : Json.Decode.Value -> Maybe AudioContext
currentTime : AudioContext -> Basics.Float
Get the time since an AudioContext was started. This is necessary if you want to use scheduled audio properties to update values in the future (like an amplitude envelope perhaps).
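For instance, an attack envelope can be scheduled relative to the current time. This is only a sketch: `gain`, `linearRampToValueAtTime`, `volume`, and the `Node` type are assumed helpers in the same style as the `osc` and `setValueAtTime` functions used elsewhere in these docs:

```elm
envelope : AudioContext -> Node
envelope context =
    let
        now =
            currentTime context
    in
    -- Ramp the gain from silent to full volume over the
    -- 100ms following the context's current time.
    gain
        [ setValueAtTime (volume 0) now
        , linearRampToValueAtTime (volume 1) (now + 0.1)
        ]
        [ dac ]
```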
sampleRate : AudioContext -> Basics.Float
Find out what sample rate an AudioContext is running at, in samples per second.
state : AudioContext -> State
Find out what state an AudioContext is currently in. An AudioContext can either be Suspended, Running, or Closed.
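A sketch of that kind of guard, reusing the `osc` example from elsewhere in these docs; the `List Node` return type is an assumption about how your audio graph is represented:

```elm
audio : Model -> List Node
audio model =
    case state model.context of
        Running ->
            [ osc [ setValueAtTime (freq model.freq) model.time ]
                [ dac ]
            ]

        -- Suspended or Closed: don't bother building the graph.
        _ ->
            []
```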
It is common for an AudioContext to start in a Suspended state, and it must be resumed after some user interaction event. By using a port we can resume an AudioContext after a user interacts with our Elm app.
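A sketch of that wiring, continuing the `context` and `App` setup above. The port name `resumeContext` is hypothetical and would need a matching `port resumeContext : () -> Cmd msg` declaration on the Elm side:

```javascript
// Elm sends through the port after some user interaction
// (a click, a keypress, ...), and we resume the context here.
App.ports.resumeContext.subscribe(() => {
  // resume() returns a Promise that resolves once the
  // AudioContext is running again.
  context.resume()
})
```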
baseLatency : AudioContext -> Basics.Float
The base latency of an AudioContext is the number of seconds of processing latency incurred by the AudioContext passing the audio from the AudioDestinationNode to the audio subsystem.
outputLatency : AudioContext -> Basics.Float
The output latency of an AudioContext is the time, in seconds, between the browser requesting the host system to play a buffer and the time at which the first sample in the buffer is actually processed by the audio output device.
every : Basics.Float -> Basics.Float -> msg -> (Basics.Float -> msg) -> AudioContext -> Platform.Sub.Sub msg
This function works like Time.every, and allows us to get an AudioContext's current time according to some interval. There are some important differences between this and Time.every, however.
In JavaScript land setInterval can be hugely inconsistent, making musical timing difficult as the interval drifts over time. To combat this we can combine setInterval with a short interval and an AudioContext to look ahead in time, making it possible to schedule sample-accurate updates.
Because of this, the AudioContext time returned by every will usually be a few milliseconds in the future. This works great when combined with scheduled parameter updates!
type alias Model =
    { time : Float
    , context : AudioContext
    , freq : Float
    , ...
    }

type Msg
    = NoOp
    | NextStep Float
    | ...

audio model =
    osc [ setValueAtTime (freq model.freq) model.time ]
        [ dac ]

...

-- Every 250ms move to the next step in a sequencer.
subscriptions model =
    every 0.25 model.time NoOp NextStep model.context
Because we poll rapidly with Time.every, we provide a NoOp msg to return whenever we're not at the next time interval. This is a necessary evil because of how Elm handles time subscriptions.
at : Basics.Float -> msg -> (Basics.Float -> msg) -> AudioContext -> Platform.Sub.Sub msg