Thursday, March 28, 2019

Resource Pools of Game Audio



I gave a presentation a while back at the 2017 Austin Game Conference on the different resource pools available to an audio engine and how to balance their usage during development.

The presentation slides are available here:

https://docs.google.com/presentation/d/1q3qWDH3rpmw_T0smNRI9VS6h0eaTKWPujtwrjAbYnmg/

...and PDF:


Here are the presentation notes for posterity...these read better in conjunction with the slides, but you can probably get the gist:

The goal today is to shed some light on a few of the technical considerations that can have a profound effect on the resulting sound of any game, but especially those facing limitations in any of the three major areas: storage, memory, and runtime processing.

If you’re currently in the process of creating a game, app, or experience and haven’t faced these challenges, I hope this talk will surface some considerations that can help increase the quality of audio through an understanding of these valuable resource pools.

Resources, pipeline, and workflow are all fundamental components of game development, and the constraints they place on game audio can lead to creativity or disaster, depending on whether you can get your head around the resource limitations you’ll be faced with.

Well-understood constraints can enable creative decisions that work within the confines of development, whereas a failure to understand the confines of your capacity can lead to last-minute, hasty choices and increases the potential for overlooking the obvious.

In the multi-platform marketplace, there is rarely a single cross-platform solution to service all scenarios. This is especially true when discussing these fundamental resources.

The important thing is to understand how the three resource pools interact and how the unique qualities of each can be leveraged to create a great audio experience for the player.

I wanted to get some terminology out of the way up-front in the hope that when I start throwing around verbiage it all locks into place for you.

If you’re used to hearing these terms, please bear with me as we bring everyone up to the same level of understanding.

Media - AKA audio files.
Seek Time - how long it takes to find the data, either in RAM or storage.
Voices/ Instances - a single sound file being rendered.
Render - audio being played back.
Synthesize - the act of combining one or more sounds.
(Audio) Event - the currency of communication between the audio engine and the game engine.
Encoding - the process of converting/ compressing PCM audio to reduce its size.
Decoding - the process of converting a compressed format back to PCM for playback.
DSP - Digital Signal Processing; commonly used as part of a plug-in, it can modify sound either in realtime or as part of encoding.
Streaming - allows for the playback of large files using a small portion of RAM.
Buffer - a reserved portion of RAM for processing audio streams or the final render.

Before a game or app can be played or executed, it must first arrive on your local disk.

Whether by disc or download, the total file size for audio is a consideration that will spring up time and again across every platform.

The size of your final audio media deliverable for a title is a difficult number to pull out of thin air at the beginning of a project.

I’ve seen some great interactive spreadsheets auto-sum huge collections of theoretical data into precise forecasts of storage needs, rendered moot in the final moments of development by a late-breaking scope change or a decision made higher up the food chain.

That doesn’t mean that the exercise lacked merit; in fact, it helped establish the audio team’s commitment to being solutions-driven, with a great depth of understanding of their (potential) content footprint.

It’s in your best interest to begin discussions about resources for audio as soon as possible. Messaging any findings and re-engaging the wider development team on a regular basis can help remind people about audio’s contribution to resources.

Anyone who has ever shipped a title will tell you that storage space is often at a premium for audio, whether that’s due to the limitations of physical media or the over-the-air data concerns of mobile development.

Think of the cost of size over cellular, or thousands of WAVs on a Blu-ray.

Sample rate, compression, and variation management play a significant role in the final audio size and often mean trade-offs and sacrifices in order to get things in shape.

And even when you’ve got the initial storage size in place for distribution, there’s the additional step of getting the stored data onto the device and ready to be accessed by the app.

There is the question of whether the data now needs to be copied from the physical media or unpacked from a packed format in preparation for launch.
The speed at which this process can be executed is tied directly to the speed of the HDD, SSD, or DVD.

All of this requires communication to make it work.

So again, some storage considerations that are important to keep in mind (a rough sizing sketch follows the lists below):

  • Download Size
  • Unpacked Size
  • Final Data Size
  • “Seek” Speed (pretty fast)

Additionally,
  • Amount of time to download over cellular
  • Amount of time to download over WiFi
  • Amount of time to download over broadband
  • Amount of time to copy from physical media to HDD/SSD
  • Amount of time to unpack to HDD/SSD
  • Seek speed of storage media
  • Available storage on physical disk
  • Available storage on device
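To make the math concrete, here’s a minimal back-of-the-envelope sizing sketch in Python. The durations, compression ratio, and link speeds are invented placeholders rather than figures from the talk:

```python
# Rough storage/download estimator for an audio build.
# All numbers are hypothetical placeholders for illustration.

BITS_PER_BYTE = 8

def pcm_size_bytes(seconds, sample_rate=48000, bit_depth=16, channels=2):
    """Uncompressed PCM size for a given duration."""
    return int(seconds * sample_rate * (bit_depth // BITS_PER_BYTE) * channels)

def download_seconds(size_bytes, mbits_per_second):
    """Time to transfer size_bytes over a link of the given speed."""
    return (size_bytes * BITS_PER_BYTE) / (mbits_per_second * 1_000_000)

total_audio_seconds = 2 * 60 * 60          # e.g. two hours of source audio
raw = pcm_size_bytes(total_audio_seconds)  # unpacked size on device
compressed = raw // 10                     # assume ~10:1 lossy compression

for label, mbps in [("cellular", 5), ("WiFi", 50), ("broadband", 100)]:
    secs = download_seconds(compressed, mbps)
    print(f"{label:>9}: {secs:6.0f} s to download {compressed / 1e6:.0f} MB")
```

Even a toy model like this makes the trade-offs visible: halving the sample rate or doubling the compression ratio immediately halves every number in the table.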
Keep in mind that on mobile (and potentially on console as well) it’s pretty common behavior to delete the largest app in order to create space for the new hotness.

When the project directors start looking for ways to increase the potential for the app to stick around, whether it’s on device or console, you can bet they’ll go looking for the largest contributors to the media footprint.

Any audio folks in the room will know that…

These are some of the largest contributors to the audio content footprint.

As one of the largest contributors to size on disk, the question of storage, file size, and the resulting quality is one of the foremost concerns for most content creators.

While each generation edges closer towards the ability to leave this concern behind, there will always be the need to optimize size considerations for new platforms and playback mechanisms.

Storage size is a concern

Things are better, but there are always new restricted platforms to consider

R.andom A.ccess M.emory

RAM is the interim location of all sound media and also reserves memory for use by the audio engine.

RAM is often the most valuable resource due to its speed and fluidity. It allows for the storage of media that can be played back and processed on-demand with low latency during gameplay.

In addition to storing instances of sound files, RAM is also used to store audio variables as well as some decoding & DSP processing resources that need to be available to the audio engine.

Some of the benefits & uses of RAM include:
  • Faster Seek
  • Temporarily Store Media
  • Streaming Buffers
    ○ Size
    ○ Seek Speed
  • Audio Engine
    ○ Sound playback
    ○ Audio variables
    ○ Simultaneous voices
    ○ Decoding
    ○ Effects processing (DSP)
The speed of access makes this pool a valuable resource & fundamental to the eventual sound that is rendered by the game. 

The amount of RAM allocated for audio also ultimately determines the maximum number of voices that can be played back by the audio engine.

In short, RAM comprises (a toy budget sketch follows the list):

  • MEDIA - Instances of audio files
  • VOICES - Maximum number of physical voices
  • DECODING - Compressed audio being decompressed at runtime
  • DSP - Processing executed on audio being rendered
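As a toy illustration of how a fixed audio RAM budget might be carved into those pools (the budget and pool sizes below are invented for illustration, not recommendations):

```python
# Hypothetical carve-up of a fixed audio RAM budget into the pools
# described above. Sizes are invented for illustration only.

AUDIO_RAM_BUDGET_MB = 32.0

pools = {
    "media (loaded audio files)": 20.0,
    "voices (playback headroom)": 4.0,
    "streaming buffers":          2.0,
    "decoding workspace":         3.0,
    "DSP / effects processing":   3.0,
}

used = sum(pools.values())
for name, mb in pools.items():
    print(f"{name:<28} {mb:5.1f} MB ({mb / AUDIO_RAM_BUDGET_MB:5.1%})")
print(f"{'total':<28} {used:5.1f} MB of {AUDIO_RAM_BUDGET_MB:.1f} MB")
assert used <= AUDIO_RAM_BUDGET_MB, "audio is over its RAM budget"
```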
As the interim memory location for both audio files and data concerning the playback of audio, RAM is another critical component of the resources used by audio during gameplay.

While RAM allocation on console has increased to match low-spec PCs, mobile continues to face restrictions that require thoughtful use of what is available.

The Central Processing Unit is the brains of the computer where most calculations take place.

It is the powerhouse of runtime execution responsible for synthesizing the audio and rendering it to the output of a device.

This means everything from applying DSP to calculating variables, such as the following (a simplified per-voice update sketch follows the list):

  • Volume, pitch, position for each voice
  • Keeping virtual voice information available so they can be smoothly returned to physical voices if necessary
  • Applying DSP across voices
  • Decoding of every compressed audio file from RAM in preparation for rendering the final output
  • Streaming of media from storage, through buffers, and synthesized along with the rest of the audio
  • as well as the manipulation of all data moving in & out of the other resource pools.
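Here’s a simplified sketch of that per-voice bookkeeping. The Voice type, linear distance attenuation, and virtualization threshold are invented stand-ins; a real engine folds in attenuation curves, sends, occlusion, and much more:

```python
# Per-frame voice update: compute volume for each voice, demote
# inaudible voices to "virtual" (no decode/DSP cost), and keep their
# state current so they can return to physical voices smoothly.
import math
from dataclasses import dataclass

VIRTUAL_THRESHOLD = 0.01   # below this volume, a voice goes virtual

@dataclass
class Voice:
    position: tuple        # (x, y, z) in world space
    base_volume: float
    pitch: float = 1.0
    is_virtual: bool = False

def update_voice(voice, listener_pos, max_distance=50.0):
    dx, dy, dz = (p - l for p, l in zip(voice.position, listener_pos))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    attenuation = max(0.0, 1.0 - distance / max_distance)
    volume = voice.base_volume * attenuation
    # Updated every frame, even while virtual, for smooth returns.
    voice.is_virtual = volume < VIRTUAL_THRESHOLD
    return volume

voices = [Voice((3.0, 0.0, 4.0), 1.0), Voice((120.0, 0.0, 0.0), 1.0)]
for v in voices:
    vol = update_voice(v, (0.0, 0.0, 0.0))
    print(f"volume={vol:.3f} virtual={v.is_virtual}")
```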
The total number of physical voices playing simultaneously is the greatest contributor to an increase in CPU usage, and it multiplies other aspects that also affect CPU, such as the DSP and decompression required to render a voice at runtime.

The fastest way to reduce CPU usage is often the management and limiting of the physical voices being requested for playback by the application, along with the behaviors applied when those limits are reached.

It is imperative that a robust ability to creatively and comprehensively control the number and types of sounds, situationally, be provided early, allowing for good decision-making in advance of optimization later in the project.

DSP at runtime provides flexibility and malleability, allowing sound to be extended and manipulated during the playback of linear audio (media or sequenced synthesis).

Hardware vs. Software Decompression

Sound files can be huge

Until we have the data throughput to push around gigabytes of audio data there will continue to be a quality compromise between size and fidelity.

This challenge mirrors the fight for fidelity over MP4, MP3, Vorbis and other “lossy” compressed formats across other media.

The fidelity of a codec should be evaluated in context; i.e. most sounds aren’t played alone, therefore their compression artifacts (if any) may well be masked by other sounds (psycho-acoustics and all that).

This opens an opportunity for cranking up the compression a notch to either save more space or CPU (sometimes both). However, this may be too specific for the level of this presentation and its targeted audience?

In addition to the loading of media into RAM for playback, sounds can also be streamed from physical media (Blu-ray) or across a network.

Streaming is done by allocating a “buffer”, a portion of RAM that is used to pass through, or stream, sequential audio data while rendering the sound to the output.

Like the tape-head on a cassette deck, the data is streamed through the buffer and played back by the audio engine.
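A toy version of that tape-head idea (written synchronously for clarity; a real engine refills the buffer asynchronously, ahead of the playback position, and the buffer and chunk sizes here are invented):

```python
# A small ring of chunks "passes through" a large file the way tape
# moves past a tape head: only BUFFER_CHUNKS * CHUNK_SIZE bytes of RAM
# are ever held, no matter how large the file is.
from collections import deque

BUFFER_CHUNKS = 4      # RAM reserved: four chunks, never the whole file
CHUNK_SIZE = 4096      # bytes per chunk

def render(chunk):
    pass               # hand the chunk to the mixer / output device

def stream(path):
    buffer = deque()
    with open(path, "rb") as f:
        # Pre-roll: fill the buffer before playback begins.
        for _ in range(BUFFER_CHUNKS):
            chunk = f.read(CHUNK_SIZE)
            if chunk:
                buffer.append(chunk)
        # Steady state: render one chunk, refill one chunk.
        while buffer:
            render(buffer.popleft())      # "read head": buffer -> output
            chunk = f.read(CHUNK_SIZE)    # "write head": disk -> buffer
            if chunk:
                buffer.append(chunk)
```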

While the increase in CPU performance has slowed over the past few years, the need to optimize audio’s use is greater than ever. As the centerpiece of every platform the needs of the CPU and processing for audio continue to be fundamental to the execution of data and the rendering of audio.

Now that you have a clearer idea of the role these three resource pools play in the rendering and playback of sound, it’s important to understand how they can be optimized before it becomes a problem for your development.

Here are a few suggestions for ways that you can rein in your resource budgets.

The first area that can have the greatest impact on CPU & RAM is the optimization of voices across the entire project.

Voices can be optimized globally, per game object, based on mixer bus associations, or at the Event level.

Additionally, voices can be removed from the processing queue based on their volume or based on the quality of the device/ platform. 

Voices should be optimized throughout production. Limit early/ limit often. (Mix early/ mix often.)

Old-school NES example, as well as non-verbal communication: voices were limited globally due to hardware restrictions, but it illustrates the point.

It’s easy to imagine quickly filling up the number of voices available, unconditionally, when working on a game with waves of NPCs, mass destruction, and complex interactions.

But what if your experience needs to play and communicate using audio across both high-end as well as low-end devices?

By detecting the quality of the device and using that variable to scale the maximum number of voices, then coupling those values with a way to prioritize the voices that are heard on low-end devices, you can create a system that allows the right voices through in order to communicate their intention.

In this example, we’ll first hear the high-end device version of a music piece, with a huge number of voices being utilized.

Second, we’ll hear the low-end device version: what the music would sound like using a very limited number of voices.
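A minimal sketch of that kind of tier-based scaling; the tier names, voice caps, and priority scheme are invented for illustration:

```python
# Scale the maximum voice count from a detected device tier, then keep
# only the highest-priority voice requests under that cap.

VOICE_CAPS = {"low": 8, "mid": 24, "high": 64}   # hypothetical caps

def audible_voices(requests, device_tier):
    """requests: list of (priority, sound) tuples; higher priority wins."""
    cap = VOICE_CAPS[device_tier]
    ranked = sorted(requests, key=lambda r: r[0], reverse=True)
    return ranked[:cap]    # everything past the cap is culled/virtualized

requests = [(10, "melody"), (9, "bass"), (5, "pad"), (2, "shaker")] * 4
print(len(audible_voices(requests, "high")))   # all 16 requests play
print(len(audible_voices(requests, "low")))    # only the top 8 survive
```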

Additionally, voices can usually be limited by game object, with behaviors applied when limits are reached in order to achieve the correct effect or sound (as sketched below):

Discard oldest instance to stop the oldest playing instance with the lowest priority.
Discard newest instance to stop the newest playing instance with the lowest priority.
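A sketch of per-object instance limiting with both of those discard behaviors; the Instance shape and the limit are invented for illustration:

```python
# Enforce a per-object instance limit using either discard policy:
# among the lowest-priority instances, stop the oldest or the newest.
from dataclasses import dataclass, field
from itertools import count

_start_order = count()   # monotonically increasing start stamp

@dataclass
class Instance:
    sound: str
    priority: int
    started: int = field(default_factory=lambda: next(_start_order))

def enforce_limit(instances, max_instances, policy="discard_oldest"):
    while len(instances) > max_instances:
        lowest = min(i.priority for i in instances)
        candidates = [i for i in instances if i.priority == lowest]
        pick = min if policy == "discard_oldest" else max
        victim = pick(candidates, key=lambda i: i.started)
        instances.remove(victim)   # stop this playing instance
    return instances

footsteps = [Instance("footstep", priority=1) for _ in range(5)]
enforce_limit(footsteps, max_instances=3)   # stops the two oldest
print([i.started for i in footsteps])       # the three newest remain
```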

Ultimately, the limiting of voices can be used creatively in-order to shape the sound of the game in an appropriate way.

One technique that proved to be invaluable on mobile (PvZ2) was the loading of compressed audio from storage and decoding it directly into RAM for low-latency (uncompressed) playback.

While sound quality was maintained between the compressed and uncompressed versions, this allowed sounds that were played back frequently to pay the cost of decoding only once, when the content was loaded/ copied into memory (instead of each time the sound was requested to play).

For commonly played sounds, this had a direct effect on the amount of CPU used at runtime (lower) while we were able to deliver a (smaller) compressed audio footprint on device.

When decompressed, we did expand the audio footprint roughly 10x into RAM, but the trade-off between CPU and storage/ download made this an acceptable compromise.
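A sketch of the decode-on-load idea; this is not the actual implementation from that project, and the decoder and cache here are invented stand-ins:

```python
# Pay the decode cost once at load time instead of on every playback.
# decode_compressed() and submit_to_mixer() are stand-ins.

_pcm_cache = {}   # sound name -> decoded PCM bytes held in RAM

def decode_compressed(data):
    # Stand-in for a real decoder; assume roughly 10x expansion to PCM.
    return data * 10

def submit_to_mixer(pcm):
    pass              # hand the uncompressed buffer to the audio engine

def load_sound(name, compressed_bytes):
    # One-time cost: decompression happens here, not at play time.
    _pcm_cache[name] = decode_compressed(compressed_bytes)

def play_sound(name):
    submit_to_mixer(_pcm_cache[name])   # low latency: no decode here

load_sound("ui_click", b"fake-compressed-bytes")
play_sound("ui_click")   # cheap now, and on every later trigger
```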

It was once common to reserve the streaming of audio for large sound files that would take up too much space in RAM.

As resources have become more plentiful in the way of multiple cores, giant hard drives, and copious amounts of RAM, streaming ALL audio files, or optimizing the way that sound files are accessed by the game at runtime, is evolving to help solve some of the problems associated with manually managing your own soundbanks.

Several AAA titles have had success with streaming their audio into RAM on-demand, keeping it around until it’s no longer needed, and then unloading it when it makes sense.

This helps to keep storage low by only ever having a single version of a sound on disk.

It also helps keep RAM usage low at runtime because only the sound files that are still in-use will be loaded.

I remember hearing about the idea of “loose-loading audio files” in 2009, right here in Austin, in a presentation given by David Thall, where he described achieving this media loading strategy at Insomniac Games. Since then, audio middleware manufacturers have added development tools to help solve the file-duplication problem that can arise from manually managing soundbank associations for media, and to leverage the increasing speed of CPUs in order to manage data more efficiently.
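A sketch of that on-demand load/unload lifecycle using simple reference counting; the API and cache shape are invented for illustration:

```python
# On-demand media loading: read a file into RAM the first time it is
# needed, keep it while anything references it, free it on last release.

_loaded = {}   # path -> [refcount, data]

def acquire(path):
    """Load on first use; afterwards, share the single RAM copy."""
    if path not in _loaded:
        with open(path, "rb") as f:   # one copy on disk, loaded on demand
            _loaded[path] = [0, f.read()]
    _loaded[path][0] += 1
    return _loaded[path][1]

def release(path):
    """Drop a reference; free the RAM copy when no sound still uses it."""
    entry = _loaded[path]
    entry[0] -= 1
    if entry[0] == 0:
        del _loaded[path]

# Usage: data = acquire("explosion.wav") ... release("explosion.wav")
```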

Limiting the number of audio files in your app doesn’t have to mean reducing the number of variations for a given sound.

The ability to perform sound design directly within an audio toolset allows for the combination & dynamic recombination of sound elements.

This “granular” or element-based approach, where elements _of_ a sound are used as a library within the audio engine, can be creatively combined at runtime and net big savings in storage.

Whether it’s creating a library of material sounds that can be dynamically combined depending on surface type, or the creation of instrument soundbanks that can be played back via MIDI files, the creation of sound components that can be dynamically combined by the audio engine at runtime can offset the need to create large, linear sound files and instead leverage the capabilities of today’s full-featured audio engines and toolsets.
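A minimal sketch of that kind of runtime recombination; the element names, layer scheme, and pitch range are invented for illustration:

```python
# Recombine small sound elements at runtime instead of shipping every
# pre-rendered variation: 3 * 2 * 4 = 24 combinations from 9 files.
import random

ELEMENTS = {
    "impact": ["impact_a", "impact_b", "impact_c"],
    "debris": ["debris_a", "debris_b"],
    "tail":   ["tail_a", "tail_b", "tail_c", "tail_d"],
}

def spawn_variation(surface="stone"):
    """Pick one element per layer and randomize pitch for extra variety."""
    layers = [random.choice(files) for files in ELEMENTS.values()]
    pitch = random.uniform(0.95, 1.05)
    return {"surface": surface, "layers": layers, "pitch": pitch}

print(spawn_variation())
```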

Additionally, the coming procedural & synthesis explosion is soon to be upon us, and in some places the “runtime funtime” (™ Jaclyn Shumate) style of sound design is already leading the charge.

With multiple companies pushing towards accessible authoring of modeled and synthesized sound with incredible results, it’s only a matter of time before we’re offsetting our storage needs with realistic sounding approximations for little to no file size.

Replacing media with procedural models or synthesis not only gives you the flexibility of parameterizing aspects of the sound dynamically but also reduces the storage footprint.

As the authoring and quality of these techniques continue to grow, there will be less and less dependency on rendered audio and more focus on generating sound at runtime.
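As a toy illustration of sound from parameters alone (pure Python, no source media; the envelope and frequencies are invented):

```python
# Generate a parameterized sine "beep" procedurally: zero bytes of
# stored media, and every aspect is tweakable at runtime.
import math, struct, wave

def render_beep(path, freq_hz=440.0, seconds=0.5, sample_rate=44100):
    frames = bytearray()
    n = int(seconds * sample_rate)
    for i in range(n):
        env = 1.0 - i / n                     # simple linear fade-out
        sample = env * math.sin(2 * math.pi * freq_hz * i / sample_rate)
        frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sample_rate)
        w.writeframes(bytes(frames))

render_beep("beep_a.wav", freq_hz=660.0)   # a new variation per call,
render_beep("beep_b.wav", freq_hz=330.0)   # no source files required
```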

We’ve now gone over the functions of, and interdependencies between, the three main resource pools that affect the audio for your game, application, or experience.

Additionally, we looked at some opportunities to optimize each pool towards maximizing the resources available.

But the hidden message throughout all of this is audio’s dependency on these resources in order to make great audio, the way that dependency is in service to the development team, and how it relies on understanding, communicating, and advocating for the resources needed.

Hopefully this has helped give you a deeper appreciation for the challenge & equips you for the discussions yet to come.

Here are some additional resources to help you go further into optimization:

https://www.audiokinetic.com/library/edge/?source=SDK&id=goingfurther__optimizingmempools__reducing__memory.html

https://blog.audiokinetic.com/how-to-get-a-hold-on-your-voices-optimizing-for-cpu-part-1/

https://blog.audiokinetic.com/wwise-cpu-optimizations-general-guidelines/

https://www.audiokinetic.com/library/edge/?source=Help&id=managing_priority

http://www.theappguruz.com/blog/optimize-game-sounds-in-unity

https://www.gdcvault.com/play/1021642/Optimizing-Audio-for-Mobile

https://www.gdcvault.com/play/1023058/Mobile-Audio-Design-Optimization

Thursday, September 06, 2018

Lost Landscapes | Pedalboards & Processing




Part 2 of a three-article series discussing the creation of the playback mechanism that was used to create the album Lost Landscapes.





The processing capabilities available to me during improvisation allow for the spontaneous dynamic ebb and flow of effervescent guitar clouds and cascading waves of delay. What begins as a single note, a chord, or physical interaction can then be modified through a series of effects pedal accoutrements and realtime property manipulation. As a specialist in interactive audio, I’m all too familiar with the opportunity to parameterize the properties of DSP at runtime in games and virtual reality, which has fed back into building an effects processing rig that allows me to leverage the dynamic possibilities of improvisation in the making of the album Lost Landscapes.

Simply put: a pedalboard is a group of effect pedals that takes in a signal and modifies it in any number of ways. The two-pedalboard configuration for Lost Landscapes was arranged in a mono/ stereo signal chain that branches and weaves in and out of various effect possibilities. During the spontaneous composition of the resulting album’s worth of material, different effects were turned on and off, properties were changed and/ or modified while simultaneously playing the guitar, and sound was borne in response. This first part will be an overview of the signal chain from guitar to speaker cabinets that aims to illustrate the options that were available when recording.

From the output of a golden Vox SDC 55 guitar into the mono pedalboard, the electronic dream of a thousand falling stars then branches: one path leads to a 60-watt Silvertone 1484 plugged into a Marshall 4X12, and the other goes to a series of stereo effects pedals and rack-mount processors that are output to a customized stereo Silvertone 1474 running 50 watts per channel, then branched into two Silvertone 2X12 cabinets. The goal of this amplifier configuration is to allow control over the following aspects of the final output: the blend between the mono and stereo outputs, discrete stereo effect processing, and the width of the stereo signals towards a broad soundstage.

While the mono pedalboard holds down most of the overdrive, distortion, fuzz, wah, chorus, phaser, and octave-dividing duties, the stereo branch is focused on reverbs, delays, and stereo panning. The mix between the mono and the stereo pedalboard helps retain the definition and edge of the guitar from the mono rig, while letting the stereo rig reverberate and support the clarity of guitar playing with fluffy clouds of noise. It’s simply an attempt at the layering of multiple amplifier outputs towards a fully balanced representation of the guitar’s expression.

Mono Pedalboard and Vox SDC-55

Mono Pedalboard Alternate Configuration

Mono)))

The guitar signal starts out on the lower shelf of the two-tiered mono pedalboard with the Boss OC-2 Octaver, which doubles the guitar signal one and two octaves below the original played note. It’s used in the middle section on Contents/ Weightless to add low end during a solo section and fills in the low frequencies like a pillow of butter; supportive, yet smooth. From there the signal flows into an Electro Harmonix Big Muff from the Russian/ Sovtek era. This distortion is at the heart of my pedalboard and can be traced back to early days of gear acquisition; it sounds like a pack of angry bees stuffed into the mouth of a limestone cave opening, exhaling the cool dank from within. There are a few more distortions further down the line I can use to push things further towards maximum saturation, but the Big Muff is usually in there somewhere. I like to keep the Dunlop Jimi Hendrix Wah early in the flow, after the Big Muff but before other distortions, to allow its expression to get picked up by effects downstream from it, like at the end of ‘Squall’ and towards the beginning of ‘This Could Be The End’. Things jump back into a triple-threat of distortion-stacking opportunity with the Boss DF-2 Super Distortion & Feedbacker (modded by Quint Hankel), ProCo Rat, and re-housed DOD FX56 American Metal. Different colors or flavors: the DF-2 is a full-frequency blanket, the Rat tears the roof off of the high end, and the American Metal decimates everything in its path with a scooped-mid, low-frequency impact that pushes your hair back and can be heard front-and-center on ‘Everything Is Heavy’ and in the searing background lead of ‘Squall’.

Boss Octaver OC-2
Re-Housed DOD FX56 American Metal

Things get pretty modulated from here. The Boss CE-2B Bass Chorus is buried in the dirt section between the ProCo RAT and American Metal. The old argument of whether to put modulation before or after distortion is solved by allowing for both options! After dirt, we head into a re-housed Ibanez CS5 Soundtank Chorus. What’s cool about this box is that it was one of my first re-houses, where I discovered a couple of extra trim-pots on the circuit board that I externalized with knobs accessible from outside of the enclosure. These trims were meant to be a set-it-and-forget-it tuning for the chorus but allowed for a wider range of tonality: from crystalline to syrupy. You can hear its almost Leslie-like tonality at the end of ‘Let’s Fold Space’ and, while I’ve since abandoned both chorus and flange as part of my new rig, it always remains within arm’s reach as a magic box of glowing vibration. The Mu-tron Phasor II that follows is one of the best around, with a character and sound that cannot be beat, and is usually set to a slow phase with plenty of feedback, undulating into a roiling sea of lysergic syrup. Something about the extremes that it reaches, or the subtle distortion and boost it lends to certain frequencies, makes it indispensable and coveted.

The upper shelf picks up with the Boss TR-2 Tremolo, and the vibrations it lends come in handy at this point in the signal flow in order to apply a tremolo to the entire signal, all the way down through the signal chain. Things get kicked sideways from here into the Boss AW-2 Auto Wah, usually set to a fast and bubbling deep-wobble used for lead runs. At this point, just in case things need a high-frequency boost, the AW-2 is followed by an Ibanez TS9 Tube Screamer set to a barely-there crunch that brings a lift that pushes things into clarity. For all of the delay that will follow, the Ibanez DML Digital Modulation Delay is the first stop for elevating things into the far-out reaches of outer space. Nothing fancy here, just a nicely disorienting (max) 512ms of delay repeats w/ the occasional modulation for extra weirdness. The time knob also holds up well for realtime tweaking and can set off some deep-dive swells that hit the spot. The DOD FX75-B Stereo Flanger provides all of the darksider-influenced jet-engine takeoff modulation and, because you can never have enough phasers, the Boss PH-2 Super Phaser comes next. With the controls set to a fast and warbly syncopation, the signal quickly becomes bathed in even more bubbly goodness. Remember when I said that Green Russian Big Muff was one of my favorites? Well, why not have two?! The second Big Muff in the signal chain means that I can choose where the blanket of mud is applied in the chain, which is especially nice when choosing to run it in either a before-wah or after-wah configuration.

From here things start to get (more) interesting. The mono pedalboard branches at this point using a Morley ABY Selector/ Combiner, with one output routed to the stereo pedalboard and the other routed into a Line 6 DL4. The DL4 allows me to throw some kind of wild delay or modulation on the signal routed ONLY to the 60W Silvertone 1484 running in mono to the Marshall 4X12. Whether that’s with a long delay or by whipping things in reverse on the fly, this flexibility to modify only the mono branch of the rig comes in super handy when I just want to push sound from the mono amplifier into someplace weird. The DL4 has had its spring-loaded PCB-mounted switches replaced with real soft switches, and the loop mode mod has made the looping functionality accessible with a footswitch.





All of the mono and stereo pedalboard effect pedals are powered by a trio of Juice Box Pedalboard Power Supplies: one for each tier of the mono pedalboard and one for the stereo pedalboard. The isolated outputs are used to feed a junction box housed in an old wooden box that used to hold dominoes. Wiring from the domino boxes runs to each individual pedal. Each of the ends has been fitted with a trimmed-down 3.5mm plug oriented at a 90-degree angle, ensuring the tightest fit possible. There have been a ton of advancements with solderless plugs that would be fun to investigate, but for now this DIY solution is serving me pretty well.

Stereo Pedalboard


Living in Stereo

After the signal leaves the mono pedalboard, it gets folded out into stereo using a Boss CH-1 Super Chorus and then into a couple of One Control Black Loop dual effect loop pedals for a total of two stereo effect loops. During the recording of Lost Landscapes these boxes were separate and required 4 footswitches in order to switch the stereo effect loops on and off. I’ve since frankenstein-ed an interface that allows for the switching of a stereo effects loop with a single footswitch. The first processor in the stereo effects loop chain is the Digitech DSP128. This is a processor that goes back to my teenage years kicking out the jams in my parents’ garage and throughout my time with the Minneapolis band February (Carrot Top/ Saint Marie Records). This is a standard multi-effects unit from the late ’80s with all of the algorithms you expect: long & multi-tap delays, reverse reverb, and the densest wall of gigantic unrealistic reverberation that can be imagined. For the kinds of sounds I’m reaching for, the concept of reality is heavily malleable. I’m happy to say that Digitech hadn’t quite sorted out representing the real world digitally in the best possible way back when this box was in production. It’s become one of the foundations of my sound and (as you’ll see) I found a way to love it even more later in the signal chain.

Rack Processors
Dual-Stereo Effect Loops


The next piece of the stereo effects loop chain is a pair of Ibanez DM1000 Digital Delay processors....which is partially incorrect, as I’ve replaced the delay board in each of them with delay boards from the fancier Ibanez DM1100 Digital Delay and further extended the delay range capability. With up to 3600ms of delay at normal operation, these boxes come packed with modulation and feedback controls that can lift off the spaceship with no problem...but then I got to snooping around inside. I’d had a DM1000 since leveraging it to tremendous effect on the lead track off Lost Chocolate Lab | The Butterfly’s View, and found myself inside the unit for some reason when I spotted a couple of small trim-pots (potentiometers)...basically knobs....on the circuit board for setting the feedback range and clock speed of the DSP chip. The clock speed can be used to tune the balance between delay time and “fidelity”, which is to say, you can push the delay time much further than 3600ms if you’re willing to sacrifice the fidelity of the repeats. This was all I needed to embark on a serious adventure of modification.

Ibanez DM1000 Internal Trim Pots

I started by externalizing these controls on the front panel of the unit by removing the duplicate output jacks and replacing them with knobs for the controls. This escalated quickly to the point where I added a second DM1000, swapped in the delay boards from a couple of DM1100s, and then wired those clock speed knobs up to a couple of hijacked Dunlop Cry Baby GCB 95s so I could control each side independently at my feet. This allows for the bending of time and space in realtime, as heard at the beginning of the track “Let’s Fold Space”. When this ability is coupled with another unique feature of the DM series, the Hold button, things can get pretty wild. The Hold button essentially grabs whatever audio is in the delay line and loops it. Not quite the same as all those fancy looper pedals out there, this is a little more like the sample-and-hold functionality often found on old analog synthesizers. When coupled with a momentary and latching footswitch on the ground, this technique results in the time-bending illustrated on the track ‘Let’s Fold Space’. One thing to note is that the delay outputs of the two Ibanez DM units are removed from the stereo signal path at this point and output to one of the stereo channels on the Silvertone 1474. The clean channel is passed back to the rest of the stereo path that follows, while the delay output is removed from further travels through the stereo pedalboard.

Ibanez DM1000 w/ Externalized Clock Speed and Feedback Controls
Modified Dunlop Crybaby (Dual Expression Pedal)


After leaving the dual-stereo effects loops, the signal winds its way into the TC Electronic Flashback X4 for long delays and reverse-playback trickery. This is yet another flavor and opportunity to make choices about delay and special effects at a later stage in the processing chain. One thing that stands out about the Flashback X4 is its TonePrint technology. Initially, TonePrints could be seen as a gimmicky way to capitalize on the star power of custom presets and the adoption of a sound-alike mindset but, once you peek under the hood and see the possibilities the technology unlocks, there’s no doubt that it enables some incredibly creative control. I’ve swapped out the circuit-board-mounted footswitches for sturdier soft-switches on both the Flashback X4 and Line 6 DL4, and there are a couple of other mods I’d like to undertake someday as well.

TC Electronic Flashback X4 Soft-Switches Before Replacement 


From there it goes back into the rack for a final helping of gigantic reverbs courtesy of the Digitech DSP 128+. While I’ve had the DSP128 for years, the DSP 128+ makes up for losing some of the output flexibility of its predecessor with the addition of what they call ‘Ultimate Reverb’, an algorithm that takes the insanity of the previous iteration and drives it even further into another dimension of spaced-out reverberation. Additionally, certain settings allow for a new footswitchable ‘Hold’ function which (you guessed it) grabs what’s in the delay buffer and repeats it infinitely. From the outputs of the DSP 128+, the signal winds its way back into a Boss PN-2 Stereo Tremolo/ Pan, where the resulting cloud of reverb and delay can be bounced between left and right channels in a mind-melting illustration of stereophonics (illustrated to great effect during the song “Shall We Start Over”). The final leg of the journey takes us back to the rack, into a Symetrix 525 Dual Gated Compressor/ Limiter, to rein in the cloud of atmospherics and bring some additional volume and sustain to the reverb tails. The compressor outputs to a 60W Silvertone 1474 modified to run in stereo.



Stereo Silvertone Sidebar

I used to work in a rock and roll repair shop in St. Paul, Minnesota called The Good Guys (no affiliation with the California electronics franchise). I held the non-glamorous job of interfacing with high-strung gigging musicians and their precious cargo while shepherding them through the proposed repair process; soldering jammed input jacks, intermittent connections shaken loose by rock and roll vibrations, etc. On one of the many trips back and forth between the cramped intake room, through the shelved hallway, and into the back room where the electronics repair magic happened, there were pieces of gear that went unclaimed, unfixed, or otherwise neglected and forgotten (aka the stuff that dreams are made of). Among the dead Oberheim OBXa, the parted-out Yamaha DX7, and a sad-looking Peavey was a tuxedo-black, mint-condition Silvertone 1474 (this really is a story that dreams are made of!). It’s hard to remember exactly what happened next, or how the conversation evolved (it surely had something to do with the Silvertone 1484 I was already gigging with in the Minneapolis band February), but over the next years I would unleash that beautiful amplifier to rock the wide-open dimensional plains with a maelstrom of guitar noise. But before that could happen, it had to be resurrected from its slumber.

By this point in my guitar playing, I had already decided upon a stereo setup leveraging the Digitech DSP 128, even going so far as to employ a Samson power amp and a couple of giant 15” & horn PA speakers for maximum stereo separation. Somewhere in the course of discussing the resurrection of the 1474, technician Quint Hankel mentioned that the amp could easily be converted to operate as a stereo amp, and set to work making this dream come true. With the four 6L6 tubes now splitting duty across a left and right output, each routed to one of the internal speakers in the 2X12 cabinet, this customized monster of tone became part of a stack of Silvertones that was the backbone of my live rig in February and in the early days of Lost Chocolate Lab.

Stereo Silvertone 1474 and Mono Silvertone 1484
Stereo Silvertone 1474

After years of use, Don Mills of Seattle’s Golden Phi Amplification Engineering took a second pass at the amp, swapped out two of the output transformers, and cleaned up some parasitic noise that had crept into one of the channels. Meanwhile, in order to keep things flexible, I built a speaker junction breakout box that allows me to route sound in stereo either to the two internal speakers, or to use both speakers in the cabinet as one side of the stereo amplification while leveraging another cabinet for the other side. With the flip of a switch and some cable manipulation, the amplifier can run wide in a two-cabinet configuration or stay compact as a stereo combo amp. There are 2 channels of inputs allowing for a mix of two individual stereo signals, used to great effect during the recording of Lost Landscapes to handle the stereo output of the Digitech DSP128 as well as both of the Ibanez DM1100s. The resulting tone is heavily colored, very forward sounding, with more than a little bit of that milkshake-thick grit that Silvertones are known for.

Stereo Operation (2 Cabinet)
Mono Operation (Internal Cabinet)

For the recording of Lost Landscapes, there are a total of 3 distinct outputs:


  1. Mono Pedalboard to Silvertone 1484
  2. Stereo output of the Ibanez Delays into Channel 1 of the modified Stereo Silvertone 1474
  3. Stereo Output of the Symetrix Compressor into Channel 2 of the modified Stereo Silvertone 1474


These 3 signals are then blended to taste in the room with an ear towards allowing the mono pedalboard to give definition to the cloud-formations emanating from the stereo Silvertone amplifier.

Master Control Program

Recording Rundown

Armed with a trusty Goldtop Vox SDC-55 through the aforementioned pedalboards, the mono branch runs into a 60W Silvertone 1484 connected to a Marshall 4X12 with 2 mics on it (SM58 & Sennheiser MKH 416). The stereo pedalboard runs into a Silvertone 1474 modified to run stereo into 2X 2X12 (the 1474 & 1484) cabinets, mic’d with a matched pair of AKG 414s running directly into an RME Fireface connected to a Mac laptop and then into the Reaper DAW.

The recording took place during the week of December 26th, 2016, in 3 sessions. The first was a 2-hour fully improvised set. The second session, after some tuning and recording confirmation, was spontaneously composed the same day. These two takes comprised the basic tracks for Lost Landscapes. The next day I performed live overdubs across a handful of sections that had a vibe or semblance that resonated when I listened back. I added some markers to the timeline for rough in/out transitions, but otherwise improvised alongside the first pass wearing headphones.

Rough mixes were bounced for the sections that held together across repeated listening, and the long process of sculpting and mixing began. With a total of 4 tracks for each of the 2 passes, I was juggling the balance between 8 simultaneous tracks for each song. Working with sound as my day job, and already having applied a healthy dose of DSP during recording via effect pedals, I didn’t want to spend time doing a lot of post-processing on the final tracks with additional plugins. Ultimately, I did some noise reduction using iZotope RX and employed plenty of equalization and dynamic EQ with just a touch of compression. Mixing was done in Ableton Live, with the final sequencing ending up back in Reaper.

While I did a fair amount of auditioning across multiple studio environments, I knew that I didn’t have the best frequency balance after working the EQ of each track aggressively over the year of micro-tweaking. I called on the artistic services of Heba Kadry at Timeless Mastering in New York to help restore balance to the frequencies across the entire 80 minute album. I saw Heba speak at the 2017 Audio Engineering Society New York Conference on a panel entitled “Mastering 201: Beyond the Basics” where she discussed the art of “sonic sculpting”. The words she used to describe her process resonated and when the time came, months later, to master the project it seemed like a good fit. I learned a lot through the process and her work helped clear away some sound-cobwebs and reduce some fatiguing frequencies that had built-up as a part of the equalization that I’d done.



Video Landscapes

The other piece of the puzzle that came together during the mixing of the album was the 80 minutes of videos that would accompany each track. Hours of landscape footage recorded during my commute to work between the University District of Seattle and South Bellevue became the source material forming the basis for the Lost Landscapes video companion. The hour-plus commute served as the perfect backdrop for the sound of instrumental-soundscape guitar atmospherics. Staring out the window reviewing tracks and watching the Washington landscapes pass by in an endless succession of (equal parts) monotony and beauty seemed like the perfect way to frame the visual aspects, or at least establish a baseline of content that would serve as the raw materials to create visuals that I felt fit the expression of sound.

Video footage was captured using an old Sony handheld digital video camera and then transferred onto a Microsoft Surface for processing and editing in Adobe After Effects. New to the whole AE suite of tools, I felt my way along with different effects and editing techniques, like mirror, opacity, saturation, and echo, until something interesting started rendering. Sometimes I was happy to let the content speak for itself; other times I happily pushed it into a level of abstraction that felt fitting for the music.

The aerial footage for Squall was provided by friend and fellow game audio colleague Jesse Rope. I knew he went pretty deep into nature and found creative ways to capture his adventures. I asked if he had anything that might go well as a visual accompaniment with some weird noise and searing guitar sonics. The video edits he supplied were inspiring and easily comprise one of the most epic visual unfoldings of time and space set to the tone of caterwauling guitars.



The videos were teased over several months on Instagram in advance of the first single release. Small one-minute edits of each song and video were cropped out of the final videos and bumpered with transparent text that allowed a peek into the world behind the black screen overlay. The first single off of Lost Landscapes, Squall (Radio Edit), was launched and became the preview track for the album pre-release on Bandcamp, corresponding with the availability of Lost Chocolate Lab logo stickers and patches. The album is set to be released on August 31st, with a live set of solo guitar atmospheric improvisations weaving threads of the album, set against a projected backdrop of visuals created to accompany the release and performance.



Lost Chocolate Lab plays Lost Landscapes in Seattle at The Chapel on September 14th 2018 at 8pm













Wednesday, March 14, 2018

Flashback - Arcade Auction


It happened in an instant: the morning arrived and somehow I was there to greet it. Like most childhood memories it’s hard to remember exactly “how”, but the “why” was clear: I was totally into video games. Which was what led to that morning in the pre-adolescent hours before dawn in the abandoned Minnesota State Fairgrounds on that precious weekend. The cold-chill wind and desolation were everywhere, except for inside the warehouse-sized arcade, whose garage doors stood open to the fall colors licked by the sun outside.

I knew what to expect; I had been through this a few years earlier. There would be a few hours of frantic scrabbling through a maze of dormant obelisks; frantically plugging, switching, cajoling, and wiggling the selection of arcade cabinets, jukeboxes, and pinball machines assembled for auction. When luck was on your side, the flip of a power switch would cause an eruption of light and sound and break the early morning silence, signaling an opportunity to test out one of the fine machines soon to be auctioned off to the highest bidder.


My dad had rolled up to our first arcade auction three years earlier; that day feels as random in my memory as any. Lured by the potential for free video games, my young, eager mind was baited hook, line and sinker with the promise of unbridled play-time while serious spenders entertained potential amusements for their own home. Fuzzy memories of games like Battlezone, Tapper, Mr. Do, and a million more crowd this first experience as I slowly learned the tricks to making machines sing the song of free-play. Find the power switch, look for the coin-box key, jiggle the wire where quarters flow, and cross your fingers that it would all work out.

We somehow left that first auction with a Kiss pinball machine. It felt like a fluke, definitely not in my expectation to leave that day with a personal plaything (likely more a result of my dad’s love of music coupled with my love of video games). From there on it was rock and roll all night and pinball every day! Friends and neighbors piled into the basement on occasion to chase the silver ball and trade high scores for the next few years, until the table lost its local lustre and fell into disuse. The story of how it ended up sold at three times the purchase price one summer wasn’t mine, but here I was again, lined up at the crack of dawn for another auction...this time with some liquid cash in the family and an agenda to replenish the home arcade.

The garage door lifted on the warehouse space, the fluorescent lights flickered to life, and the portable heaters began blowing, quickly heating up the space. The race was on to assess every machine and weigh them in a prioritized, if not overly nostalgic, list of potential. There was Donkey Kong Jr., a favorite and quarter-hog if ever there was one; my dad was focused on Pinbot, and lucky for him there was more than one that day; a three-screen Darius held an obscene fascination for me, and it was there too.

Every machine that could be played was assessed on its merits and prioritized in expectation of the auction itself, which was a whole different ball game when you’ve got a stake in the game. Imagine a young tweener scrambling between machines and executing the aforementioned steps towards free-play, going from one to the next in hopes that a) it would power on, b) the coin-box would open, and c) credits could be racked up and the “assessment” could begin. Through this process, an exercise in the attempt/ reward loop that so underlies video games, I was living a real-life simulation, hurtling towards the potential of acquiring my own personal amusement device.



I remember the auction moving perilously fast. The Pinbots went for more than we could bid, first-tier arcade machines went fast and pricey, and the reality of acquiring any machine was swiftly slipping away. There were some dark horses in the running, though; my dad bid on and won a pinball machine called Paragon. All swords and sworcery, this table came straight from the 70’s and was the perfect fit for a bearded bard like my dad. With graphics that looked like they should be airbrushed on the side of a van with a heart-shaped window and wall-to-wall shag carpeting, the table felt slow and languorous to my 1980s-addled mind. The rest of the auction passed by in a flash of heat and bidding that resulted in the acquisition of two machines that were underdogs on our list.


The Adventures of Major Havoc from Atari. I knew Tempest, boy...did I know that machine. The brilliant red, green, and yellow of its vector-graphics display still stands out in any arcade ensemble. Not to mention the spinning wheel mechanism of play. However, Major Havoc was another thing entirely. Where Tempest sees you expunging scourge and hurtling ever forward in difficulty and speed, Major Havoc had some diversity. An introductory screen where you can play Breakout and use warp codes to jump ahead? A 3D Space Invaders-like shoot ‘em up? A Lunar Lander-like section? A maze shooter? Then top it off with a space base infiltration and explosion? An incredibly deep playing experience that has withstood the test of time and continues to be a challenge even all these years later. Owen Rubin was the chief designer of Major Havoc (see comments, thanks Jason!), with tuning and some level design by Mark Cerny, soon to become a visionary in the video game industry.


Then there was Orbitor1 by Stern. If you’ve ever played this table, you would know. Its playfield: concave plastic stretched over a molded moonscape, backlit with flickering lights. Its bumpers: spinning, magnetic, centrifugal forces that attract and propel the shiny pinball to greater reaches and wilder trajectories. With a synthesized robot voice that implores you to “Shoot Pinball Again” or announces that “You Got Double” when multi-ball is unlocked, the sound of Orbitor1 can be called a formative experience for my game audio career. If you’ve ever happened across this one, you remember! There is nothing quite like the feeling of having the ball flung behind the flippers and coming out the other side with your game still in progress. Orbitor1 is a one-of-a-kind experience that demands to be played.

How we ended up with three machines, two of which stand as strange anomalies in their genre, at the end of that fateful day still escapes my understanding of history. Here I stand, 30-ish years later, at their parting, and it’s impossible to tell you just how deep their influence on me has been. Even through their years of dormancy and disuse, their bond and legacy in me has been a continuum that runs through my story. The year Orbitor1 spent in a loft overlooking downtown St. Paul. When the kids were young, in the garage entertaining on sunny days. The thread of video games runs through my history to this day as a Technical Sound Designer in the gaming industry. These formative experiences shaped my view of the future from those early days and are somehow responsible for my place in time.

It is with a hope for the future space explorer, pinball wizard, or arcade archaeologist that these machines will find a new home with the Seattle Pinball Museum and their legacy and story will be long told.



Damian Kastbauer

Tuesday, March 14, 2017

Tales of a Technical Sound Designer




I haven't been writing much lately. An article here or there, but nothing too crazy. It dawned on me last year that I used to do quite a bit of writing. Writing has been one of the ways that I use to process my experiences and sharing those experiences with others has been a fundamental part of my growth. It turns out that after 10-odd years of writing and processing, I was looking back on a body of work that represented my formative years in game audio. Between officially published articles, interviews, and a couple of series, there was more than enough to pull together everything into some semblance of a form. So much everything, that I decided to self-publish a two-volume collection that is now available for purchase digitally (PDF) or printed on-demand (B&W or Colour) as: Game Audio: Tales of a Technical Sound Designer

Order here: eBook & Paperback Editions & Amazon Kindle Editions

The articles contained within continue to be available online: Game Developer Magazine in the GDC Vault, Audio Implementation Greats at DesigningSound.org, Lost Chocolate Blog in the very same place it’s always been, with a few articles and interviews strewn across the net-scape. There is something to pulling all of these together and the strength-in-presentation they acquire by doing so. A bit of history, and hopefully some timeless insights into game audio and the process of discovering one’s passion, comes into focus through the 500 total pages across the two volumes. (It also highlights my curious relationship with words, phrasing, and my struggle to frame these ideas in a way that communicates passion and complexity.) A worthy endeavor for those who are interested in charting a path through time and possibly picking up some game audio nuggets of wisdom along the way.

What a way it has been!





My first few articles writing for Game Developer Magazine, reviewing audio middleware tools, or the Audio Implementation Greats series, attempting to highlight the unique position of audio implementation and elevate it into an art in its own right, feel like the first steps on a journey that I’ve been on ever since. The inappropriate grammar, the run-on sentences, the oblique references, the terrible 1980s song quotes, all align in what I hope is an enjoyable expedition into the mind of a technical sound designer. Wild pontification on everything interactive audio: from the now-past to potential futures and beyond!

Meanwhile, somewhere between the lines of various interviews, a loose definition of Technical Sound Designer can be found. A sticky wicket to nail down, the nomenclature once quothed by Rob Bridgett, Technical Sound Designer has grown to encompass many things to many people and potentially surpasses the narrow restriction of language in doing so. For an industry that has continued to fan out in specializations (see "Knowing a Thing Or Two" in Volume 01), is there room for a "Technical Music Designer"? What about a "Technical Audio Director"? Where does that leave "Audio Implementor"...does that imply entry-level experience?

The exciting part is that this is all being discussed TODAY and will likely continue to be a nebulous blob of uncertainty for a while. Maybe you'll come across some wild terminology within these two volumes that has settled into a kind of standardization. When I first jumped in, nobody could decide what to call an Event. Since then, with the help of audio middleware, we seem to be equipped with much of the vocabulary we need to discuss our craft. Then along came VR Audio and things are just getting started again.

But before I get ahead of myself, I just wanted to take the time to thank everyone for their support and inspiration over the years. A project (or career) of this scope does not happen by itself or without the help and understanding of many people along the way. Thank you.

If you have a chance to read through these fluorescent tomes of game audio, feel free to drop a line and let me know how it went. I’d be fascinated to hear about your epiphanies or frustrations with these writings.