It’s no secret that over the years game audio has evolved from chip-generated blips and beeps with simple melodies to three-dimensional SFX with epic soundtracks. In the past, game audio was viewed as a backdrop to the game’s visuals, but more recent advances demonstrate that sound is no longer a subtle component of the game experience. In fact, sound is one of the key factors in total player immersion in the 3D virtual environment. Interactive soundscapes are the vehicle that forms the interactive partnership between the player and the game. Game audio designers are tasked with immersing the player in these 3D worlds and keeping them there by creating a unified soundscape made up of diegetic (actual) and non-diegetic (commentary) sounds for both linear and interactive segments of the game.
As emergent game play becomes the more desirable approach in game design, the outcome is a globally designed game system composed of rules and boundaries for player interaction rather than scripted paths and events. Players can use basic elements of the game, such as the story or strategic moves, to play it in ways that were not specifically designed or implemented by the game designer. What does this mean for game audio designers? Simply put, they need to develop techniques that adapt to this emergent, highly interactive medium.
Emergent game design encourages replayability: each time the player plays the game they make different decisions, which changes the game as a whole and opens up different possibilities for action and endings. Additional sounds and music are needed for the action brought on by these different choices. For example, in the game Scribblenauts the player can choose from over 10,000 pre-programmed words, simply by typing or writing them out, to create the objects needed to complete tasks. If a player wants a saw, they type or write “saw” and it appears in the game; the player is then free to use and move the saw in any strategic manner possible. What’s interesting about this from an audio perspective is how many unique sound effects are associated with the multitude of objects the player can create. A similar volume of sounds can be found in larger, more open-ended game worlds, but the draw in Scribblenauts is the ability both to create objects and to see and hear them in action on screen.
As game play choices expand and games become more filmic and realistic in nature, game audio designers adapt film audio techniques to achieve an epic, adaptive sound for their game environments. In the past, one of the biggest problems facing game audio was the endless repetition of constantly triggered SFX, along with short pieces of music that looped endlessly through the game without changing.
With the increased memory available on modern consoles, game audio designers are filling their games’ soundscapes with a variety of sounds that greatly improve the quality of this new generation of games:
- Crisp dialog that cuts through the mix: For Uncharted 2: Among Thieves, all dialog was recorded while capturing the actors’ physical performances on the mo-cap stage. Having all of the actors read their lines together on a single stage added a spontaneity to the scenes that you just can’t capture when recording lines separately in a sound booth.
- Minimizing repetition using alternate SFX: Sounds tied to game animations such as footsteps, effort noises, or armor movement can become very repetitive if only a couple of samples are constantly triggered for each of those animations. Adding subtle variety to footsteps in a game breaks up that repetition quickly and makes walking around the game world more enjoyable for the player (a minimal randomization sketch appears after this list). Good Foley, ambient loops that change subtly with the environment and flow consistently between in-game play and cut scenes, and attention to the little details enhance the game’s audio and immerse the player fully in the world. They can also give the player information about their surroundings that they might not otherwise have noticed.
- Mixing the soundscape with enough dynamics: God of War 3 is a great example of a dynamic soundscape. The carefully mastered mix allows the sound elements to change subtly as Kratos moves through the world; action in the distance grows louder and closer as it moves into the foreground. Ambient tracks are mixed so that foreground action takes precedence, while the music track adds emotional depth to the scene (a simple ducking sketch follows this list).
- Adaptive musical score: Adaptive musical scores “adapt” as the game play changes and evolves based on the player’s choices. Branching music systems, or stems, allow musical or percussive layers to be flown in over a core layer to add tension or a positive vibe to the overall mood. Layers are used to build intensity when the player needs to be directed forward, and additional layers with more tension and perhaps more rhythm are added as enemies approach and surround the player’s character. Layers of music can also build intensity as the character’s health runs low, and real-time DSP effects can be used in conjunction with these musical layers to filter the sound and give the effect of a dizzying loss of life. Layers can then be stripped away to lighten the mood, removing some of the rhythmic tracks and pulling back on the intensity as the player regains health or kills off the surrounding enemies and gets back to exploring the game and moving on to the next stage (see the layering sketch after this list). L.A. Noire is a great example of musical cues that adapt to changing game play or lead the player to the next action in the game. During the first mission, or tutorial phase, the player is instructed to search for clues. On-screen text informs the player that the music will fade down when all clues in that location have been discovered and that musical chimes will indicate objects that can be examined; a small chime indicates objects that need to be inspected further. Once the tutorial phase is over, it is up to the player to follow the musical cues through the game.
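To make the footstep idea concrete, here is a minimal, engine-agnostic sketch in C++ of how alternate samples might be cycled with slight pitch and volume offsets so no two consecutive steps sound identical. The class name, sample pool, and variation ranges are illustrative assumptions, not taken from any particular title or engine.

```cpp
#include <random>
#include <string>
#include <utility>
#include <vector>

// Picks an alternate footstep sample each step, avoiding an immediate repeat,
// and applies a small random pitch/volume offset to further break up repetition.
// Assumes the sample list is non-empty.
class FootstepRandomizer {
public:
    explicit FootstepRandomizer(std::vector<std::string> samples)
        : samples_(std::move(samples)), last_(-1), rng_(std::random_device{}()) {}

    struct Step {
        std::string sample;  // which recording to trigger
        float pitch;         // multiplier around 1.0 (e.g. 0.95 - 1.05)
        float volume;        // multiplier around 1.0 (e.g. 0.9 - 1.0)
    };

    Step next() {
        std::uniform_int_distribution<int> pick(0, static_cast<int>(samples_.size()) - 1);
        int index = pick(rng_);
        if (index == last_ && samples_.size() > 1) {
            index = (index + 1) % static_cast<int>(samples_.size());  // skip the last sample used
        }
        last_ = index;

        std::uniform_real_distribution<float> pitch(0.95f, 1.05f);
        std::uniform_real_distribution<float> volume(0.9f, 1.0f);
        return { samples_[index], pitch(rng_), volume(rng_) };
    }

private:
    std::vector<std::string> samples_;
    int last_;
    std::mt19937 rng_;
};
```

In practice the returned pitch and volume would be passed to the engine or middleware call that actually triggers the sample; even a pool of four or five recordings varied this way reads as far less repetitive than a single looping footstep.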
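The kind of foreground-first mixing described above is often handled by ducking: pulling the ambient bus down while loud action or dialog plays, then easing it back up. The sketch below is a generic illustration of that idea, not a description of how God of War 3 is mixed; the attack and release times and the amount of ducking are assumed values.

```cpp
#include <algorithm>

// Simple ducker: when foreground action or dialog is active, the ambient bus
// gain is pulled down quickly and released back up slowly afterwards.
class AmbienceDucker {
public:
    // foregroundLevel: 0..1 estimate of how much foreground action/dialog is playing.
    // Returns the gain to apply to the ambient bus this frame.
    float update(float foregroundLevel, float dtSeconds) {
        float target = 1.0f - 0.6f * std::clamp(foregroundLevel, 0.0f, 1.0f);  // duck down to ~-8 dB
        // Fast attack (0.1 s) so the duck is immediate, slow release (1.5 s) so it recovers gently.
        float rate = (target < ambientGain_) ? dtSeconds / 0.1f : dtSeconds / 1.5f;
        ambientGain_ += std::clamp(target - ambientGain_, -rate, rate);
        return ambientGain_;
    }

private:
    float ambientGain_ = 1.0f;
};
```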
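The layering described in the adaptive score bullet can also be sketched in code. This is a hypothetical model, not any shipping title’s system: it assumes a small set of stems whose gains are driven by a single intensity value (0 = calm exploration, 1 = surrounded by enemies), plus a low-pass cutoff that closes as the character’s health drops, standing in for the real-time DSP treatment mentioned above.

```cpp
#include <algorithm>
#include <vector>

// One stem of the adaptive score: it fades in once "intensity" passes its threshold.
struct MusicLayer {
    const char* name;    // e.g. "core", "percussion", "tension strings"
    float threshold;     // intensity at which this layer starts to fade in
    float gain = 0.0f;   // current mix level, 0..1
};

// Maps game state (intensity, health) to per-layer gains and a filter cutoff.
// A real implementation would hand these values to the engine or middleware each frame.
class AdaptiveScore {
public:
    AdaptiveScore()
        : layers_{ {"core", 0.0f}, {"percussion", 0.4f}, {"tension strings", 0.7f} } {}

    void update(float intensity, float health, float dtSeconds) {
        intensity = std::clamp(intensity, 0.0f, 1.0f);
        for (MusicLayer& layer : layers_) {
            // Target gain: fully on once intensity is well past the layer's threshold.
            float target = std::clamp((intensity - layer.threshold) / 0.2f, 0.0f, 1.0f);
            // Move toward the target gradually so layers fade in and out instead of popping.
            float step = dtSeconds / 2.0f;  // roughly two-second fades
            layer.gain += std::clamp(target - layer.gain, -step, step);
        }
        // Low health closes a low-pass filter for the "dizzying loss of life" effect.
        lowpassCutoffHz_ = 400.0f + 19600.0f * std::clamp(health, 0.0f, 1.0f);
    }

    const std::vector<MusicLayer>& layers() const { return layers_; }
    float lowpassCutoffHz() const { return lowpassCutoffHz_; }

private:
    std::vector<MusicLayer> layers_;
    float lowpassCutoffHz_ = 20000.0f;
};
```

As enemies close in, the percussion and tension stems fade up over the core layer; as the player clears the area or regains health, they fade back out and the filter reopens, which mirrors the build-and-strip behavior described above.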
As game audio evolves, so does the technical side of sound design in games. Audio designers can now handle implementation themselves using middleware such as FMOD or Wwise, with little to no programmer involvement, giving them more control over the soundscape. Creating audio for interactive game segments can be a challenge because the player’s actions can alter the course of the game at any moment, and the sound needs to evolve along with those changes. Audio middleware such as FMOD helps the audio designer overcome the problem of repetition by enabling the creation of a dynamic sound environment while optimizing the resources of the game’s platform. It also lets the audio implementer see what is happening to the layers of music as situations develop in the game, a behind-the-scenes look at the process from the middleware’s point of view. This allows the implementer to be sure the music flows seamlessly from simple to percussive and complex and back to simple again without any hiccups or breaks in the sonic soundscape, which would quickly pull a deeply immersed player out of the game world.
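As a rough illustration of how code on the game side drives content the designer authors in the middleware tool, the outline below uses the FMOD Studio C++ API. The bank files, the "event:/Music" path, and the "Intensity" parameter are placeholder names standing in for whatever the audio designer builds in FMOD Studio; the parameter call shown (setParameterByName) is the one exposed by recent FMOD Studio versions, and error checking is omitted, so treat this as a sketch under those assumptions rather than drop-in code.

```cpp
#include <fmod_studio.hpp>

// Minimal outline: load designer-built banks, start a music event, and drive
// a game parameter ("Intensity") that the designer has mapped to music layers.
int main() {
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(64, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Placeholder bank names for content exported from the FMOD Studio tool.
    FMOD::Studio::Bank* bank = nullptr;
    FMOD::Studio::Bank* strings = nullptr;
    system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &bank);
    system->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &strings);

    FMOD::Studio::EventDescription* description = nullptr;
    FMOD::Studio::EventInstance* music = nullptr;
    system->getEvent("event:/Music", &description);
    description->createInstance(&music);
    music->start();

    // Stand-in for the game loop: as combat ramps up, raising the parameter lets
    // the designer's layered stems fade in; lowering it fades them back out.
    // A real game would call update() once per frame.
    for (float intensity = 0.0f; intensity <= 1.0f; intensity += 0.1f) {
        music->setParameterByName("Intensity", intensity);
        system->update();
    }

    system->release();
    return 0;
}
```

The important point is that the transitions themselves live in the middleware project, where the audio designer authored them; the game code only reports state, which is what lets the designer keep control of the soundscape without programmer involvement.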
Gina Zdanowicz is the Founder of Seriallab Studios, Lead Audio Designer at Mini Monster Media, LLC and a Game Audio Instructor at Berkleemusic. Seriallab Studios is a full service audio content provider supplying custom music and sound effects to the video game industry. Seriallab Studios has been involved in the audio development of 40+ titles in the last few years.