WWDC11 • Session 404

Audio Development for Games

Graphics, Media, and Games • iOS, OS X • 45:33

iOS and Mac OS X provide an impressive array of audio technologies tailored for use in games. See how to apply the high-level classes for audio in AV Foundation for simple playback and understand the recommended practices for looping sounds and effects. Then dive deeper into game audio and learn how easy it is to integrate OpenAL into your immersive games and create a thrilling 3D audio experience.

Speakers: Kapil Krishnamurthy, James McCartney

Unlisted on Apple Developer site

Downloads from Apple

HD Video (137.9 MB)

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good morning. My name is Kapil Krishnamurthy, and I work in Core Audio Engineering. Today we're going to be talking about audio development for games. So let's get started. Now the first thing on the agenda for today is this notion of a simple game. Under that, we're going to be taking a look at AV Audio Player.

After that, we're going to move and take a look at how you can create and manage your audio assets. We're going to talk about a few things that you can do here to prevent computational overhead. We'll then move on to a more complex game. We'll take a look at spatial or 3D audio and the OpenAL API. Finally, we're going to take a look at audio session and how that ties in with games. So let's get started.

All right, so what is a simple game? If I were to think of a simple game, I would think of one with maybe a background score and some sound effects triggered by objects in the game or characters in the game. Thinking of examples, maybe a board game, like Scrabble, Monopoly, or maybe a classic game, like Tetris. So the control that you want here is control over all of these sounds. You'd want to be able to change the volume, pan the sounds left or right, maybe loop some sounds. The API that we recommend for all of this is AVAudioPlayer.

So AV Audio Player is an audio API that can play both compressed and uncompressed formats. It has four basic playback options: play, pause, seek, and stop. These are quite similar to what you would find on a traditional tape deck. If you want to play multiple sounds in your game, you can use multiple AV Audio Player objects. You also have the control that I just talked about, that is volume, panning the sounds, and looping the sounds.

Let's take a look at how you would create a player. So the first thing you do here is you create a URL pointing to the file in your app bundle. What's convenient here is that you don't need to worry about whether the file is compressed or uncompressed. AV Audio Player will handle this for you. So once you create your URL, you use that to create an instance of AV Audio Player.

Let's take a look at some of the properties that AV Audio Player has to offer. You can set the volume on a scale of 0 to 1, and you can pan the sound left or right; in this example, the sound is panned entirely to the left. You can set the number of loops, or as I like to think of it, the number of repeats.

So, if it's set to 3, the sound plays through once and then loops three times. You can set the current time, which is the offset from the beginning of the file, and you can also set the delegate. AV Audio Player has a number of read-only properties as well, such as the duration, number of channels, and the play state.
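As a rough sketch of those properties in code (the file name and the specific values here are placeholders, not from the session):

```objc
#import <AVFoundation/AVFoundation.h>

// Inside some game controller class (hypothetical); error handling kept minimal.
- (AVAudioPlayer *)makeBackgroundMusicPlayer {
    // A URL pointing to a sound file in the app bundle; AVAudioPlayer handles
    // compressed and uncompressed formats alike.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"backgroundMusic"
                                         withExtension:@"m4a"];
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];

    player.volume = 0.8f;        // 0.0 to 1.0
    player.pan = -1.0f;          // -1.0 is hard left, +1.0 is hard right
    player.numberOfLoops = 3;    // plays once, then repeats three more times
    player.currentTime = 0.0;    // offset in seconds from the beginning of the file

    // A few of the read-only properties.
    NSLog(@"duration %.2f s, %lu channel(s), playing: %d",
          player.duration, (unsigned long)player.numberOfChannels, player.playing);
    return player;
}
```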

Now let's look at the playback controls. Prepare to Play basically allocates the resources that the player needs and also performs priming. This improves the responsiveness of play, and you would generally call Prepare to Play during the initialization phase. Play plays the sound, and it resumes playing if the sound is paused or stopped.

What's important here is that if your sound has been stopped, play will resume playback exactly from where you left off. And if you wanted to start playback from the beginning, you need to set the current time to zero. Pause will pause playback. What's important with respect to pause is that the player's resources are still allocated. And this is the basic difference between pause and stop. When you stop the player, the resources are deallocated.

Now I once again want to go over current time. Current time is the property that sets the time in seconds. If you pause or stop a player while it is playing and you want to reset the playback from the beginning, you need to explicitly set current time to zero.

AV Audio Player also has delegate methods. What are delegate methods? Delegate methods are methods that get called when a certain event happens. For instance, if your audio player finishes playing, if an interruption begins, an interruption ends, or if there's a decode error. Now let's take a look at some code as to what you need to do to prepare your player. The first thing I do here is create an instance of the player. Once I do that, I set the delegate. And then just prepare the player to play the sound. Then during game play time, I go ahead and I call play and the sound plays instantly.
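The slide code isn't reproduced in the transcript; a minimal sketch of the pattern being described might look like this (the effectPlayer property and the file name are assumptions):

```objc
// In a class that adopts <AVAudioPlayerDelegate> and declares
// @property (nonatomic, strong) AVAudioPlayer *effectPlayer;

- (void)setUpEffectPlayer {
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"laserBlast" withExtension:@"caf"];
    NSError *error = nil;
    self.effectPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    self.effectPlayer.delegate = self;
    [self.effectPlayer prepareToPlay];   // allocate resources and prime during initialization
}

// Later, during gameplay:
- (void)fireLaser {
    [self.effectPlayer play];            // responds quickly because the player is already primed
}

// Delegate callbacks for the events mentioned above.
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    // e.g. release the player or cue the next sound
}

- (void)audioPlayerDecodeErrorDidOccur:(AVAudioPlayer *)player error:(NSError *)error {
    NSLog(@"Decode error: %@", error);
}
```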

All right, so that was an overview of AV Audio Player. Let's move on to the next section. That's tuning your assets, your audio assets for your game. We're going to look at a few things here that you can do to prevent additional computational overhead. The first point here that I'd like to make is that it is important to create game assets at the same sampling rate.

Now, sample rate conversions can be expensive, and you'd like to avoid this as much as possible.

[Transcript missing]

Now let's talk about sample rate conversions. I mentioned in my previous slide that it's preferable-- or what we recommend is to have all of your assets at the same sampling rate.

Now consider an example here where I have several AV Audio Players, each playing a source file at a different sampling rate. I have my output hardware rendering at a particular sampling rate. In this example, it's 44K. Now, what needs to happen here is that all of this input material needs to get rate converted and then mixed to get to the output rate.

So I need to use four sample rate converters here. How I could avoid this is to have all of my assets at the same sampling rate. When I do that, all of the source material can be mixed and just has to go through one sample rate converter. And so by doing this one simple step, you eliminate a lot of computational overhead.

Now, how do you pick a sampling rate? Well, this is a question that you can answer, because it really depends on the types of sounds that you're playing in your game. Are you playing sounds with a lot of low-frequency content, like maybe explosions, or sounds with a lot of high-frequency content, maybe glass shattering? Well, what you should do here is you pick a sample rate, something that captures the fidelity of all of these sounds, and you stick with that sampling rate for all of your assets.

Now, if you're using uncompressed files, and you pick a sampling rate of 44K, then what could happen is your overall asset size could get really large. And this brings me to my next point, which is AAC decoding is inexpensive. So what we recommend doing in such a case is encode all of your source material to AAC, keep the quality high, and the asset size small.

Now, to help you create and manage your assets, we have some tools on Mac OS X. One of these tools is called afconvert. It's a command-line tool, and you can use it to convert between data formats, file formats, and sampling rates. Here's an example of me using afconvert, where I have a source file that has PCM data. I convert it to the MPEG-4 audio file format with AAC data.

Accompanying afconvert, we have another tool called afinfo that displays file metadata. Running afinfo on this file that I just created, I can see a whole lot of information. I have information like the file format. I can look at the data format, which tells me that I have one channel of audio. It's at 44K sampling rate, and the data is AAC. So we recommend that you use these tools to create and manage your audio assets. Now to talk about the next section, I'd like to invite James McCartney from Core Audio.
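The exact commands aren't captured in the transcript; an invocation matching that description might look like this (file names are placeholders):

```sh
# Convert linear PCM in a CAF file to AAC data in an MPEG-4 audio file
afconvert -f m4af -d aac explosion.caf explosion.m4a

# Inspect the result: file format, data format, channel count, sample rate, etc.
afinfo explosion.m4a
```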

Hello. I'm going to talk about working with compressed audio in your game. So first I'm going to define a few terms that we use in Core Audio, samples, frames, and packets, so you'll know what we mean when we say these things. So a sample is the value of the amplitude of a waveform at a particular moment in time.

The samples that you get when you sample a waveform are called linear pulse code modulation, or linear PCM. So linear PCM means uncompressed audio. A frame is a sample per channel at a particular time. And so when you have one frame of audio, that's all the channels at a particular moment. Here you can see a left and right sample from a stream of interleaved stereo.

Then a packet is the smallest cohesive unit of data for a format. For linear PCM, one packet equals one frame. And for compressed formats, a packet is some number of bytes that when you decode them, you get a bunch of frames of linear PCM. So this graphic shows the relationships between a packet of, in this case, AAC or any compressed format, and a bunch of frames of linear PCM. When you decode the packet of AAC, you get a bunch of frames of PCM, and then you can reverse that and encode back to AAC.

Now, when you're working with compressed audio, there's one aspect you need to be aware of, and that's leading and trailing frames, which are also known as priming and remainder. Leading frames express the processing latency of the codec. It's the amount of time that the codec needs to build up a context of the audio. And then trailing frames are the excess frames within the last packet that are not part of the audio you are encoding. They're just extra space in the last packet.

So here's an example of what that looks like. In this example, I'm compressing 2,205 frames of audio, which is 50 milliseconds of audio. Now that essentially takes up a little more than two packets' worth, but we get five packets of audio out due to the leading frames, and then there's extra space in the last packet, and that becomes trailing frames.

So AAC has 2,112 leading frames. Then you have your 2,205 frames of the audio data you encoded, and then you have 803 frames left over, and that total is five times 1,024 frames per packet, or 5,120 frames. So now when you're decoding this, this is what the process looks like.

You put in your first packet, you get 1,024 leading frames out. You put in your second packet, you get more leading frames out. And then you put in your third packet, and you get the last of the leading frames out, and now you're starting to get your audio out.

And when you put in the fourth packet, you get more of your audio. Then when you put the last packet in, you get the rest of your audio plus whatever extra space there was in the last packet. Now all these leading and trailing frames are to be thrown away, and then you're left with the audio you encoded, which is what you're interested in.

And this also illustrates why you can't butt splice together packets of compressed data and expect it to sound okay. Because those two packets at the beginning are required to build up the context for the decode process. And if you don't have them or if you've chopped them off and just stuck some other audio there, you're going to get like a little bleep or something when you play it back. So don't butt splice compressed audio.

Now this is kind of a lot of bother, and you can get rid of having to deal with it at all by using AV Audio Player. It handles all the decoding and getting rid of the leading and trailing frames for you. Another API you can use is Extended Audio File, which allows you to ignore whatever format the file is in and just deal with it as if it were uncompressed data.

If you were to run afinfo on the file from that previous example, you would see that there are five packets in the file. And afinfo will tell you the number of valid frames, the number of priming frames, and the number of remainder frames in the file, and then the total number of frames in the file.

So Extended Audio File combines an audio file and an audio converter within a single object. It lets you treat the file as if it were uncompressed, even though it might be in some compressed format, and specify your reading and writing file positions in sample frames.

Okay, now I'm going to talk about spatial audio. Kapil went over what a simple game might be like, where you're playing incidental sounds and a background soundtrack, and that might have some spatial ambience baked into the sounds you are playing. But if you wanted a more dynamic spatial environment, you might want to have a listener and multiple sources, and you might want panning, directional cues, reverberation, and obstruction and occlusion filtering, which simulates objects in the environment.

And you'd like it to be low latency. So what are some spatial cues? When you hear a sound in an environment, there are several things your cognitive faculties use; you have some built-in, hardwired signal processing that allows you to figure out where things are. And one of them is that when you hear something from one of the speakers, you can hear the sound coming from that speaker.

And when you hear something from one side, you hear an interaural intensity difference. That just means that a sound is louder in one ear than the other, and your brain will tell you it's over there. And there's an interaural time delay, so you'll hear it first in one ear and then in the other. You can also use that cue to determine where something is.

Then there's filtering due to the head: your brain knows, when it hears certain spectral relationships between your ears, that something may be in a certain direction. Then there's also filtering due to distance: as things get farther away, there's some high-frequency attenuation.

There's also reverberation, which simulates sound reflections in the environment. These will depend on the size of the room and what the walls are made out of, which affects the decay time and the high-frequency damping. Then there's obstruction and occlusion, which simulate filtering of sound due to objects in the environment that block the propagation of the sound.

Obstruction is when the direct path of the sound is blocked by a column like this one in the middle of the room. If the monster is behind the column and he's screaming at you, then you're not going to hear his direct scream perfectly clearly, but you'll hear the reverberation unfiltered.

Now in occlusion, the sound source is in an adjacent space, and you're hearing the reverberation and the direct sound both being filtered by the occluding wall. So in Core Audio, we implement all these spatial cues and effects by using a 3D mixer. The 3D mixer supports several spatialization modes, and it supports reverb and filters for occlusion and obstruction.

So on iOS, the 3D mixer supports two spatialization modes. One is equal power panning, which is just intensity panning between speakers. And then there's also a spherical head spatialization mode, which simulates the interaural intensity difference, the interaural time delay cues, filtering due to the head, and distance filtering.

Now on the Mac OS X 3D mixer, there are a few more spatialization modes. There's head-related transfer function, which does spectral filtering of the sound based on a measured head. And then, for multi-channel output hardware, you have sound field and vector-based panning, which can give you spatialization over multiple speakers.

Now, new for iOS 5, the 3D mixer has reverb, occlusion, and obstruction support. And so this is the signal path of the 3D mixer. The input signal goes through the occlusion filter, which is going to filter both the reverb and the direct path. Then there's a reverb send, which has a blend control that allows you to give a wet-dry mix to every input source.

Then there's an obstruction filter, which is just operating on the direct sound; the reverb is on its own path now. Then there's the panner, which implements the spatialization mode for the input source, and then there's a gain on the input source, and then all the inputs and the reverb are mixed together.

So the 3D mixer uses a listener-centered coordinate system. You specify the location of sound sources with azimuth, elevation, and distance. Azimuth is an angle where zero is directly in front of the listener. Positive angles are to the right and negative angles are to the left. 180 degrees is directly behind the listener. And distance is in meters from the listener.

Then you have elevation, which is an angle above the horizon for positive numbers or below it for negative numbers. And you also have a distance attenuation curve you can set on the mixer, which specifies how sound attenuates over distance. There's a reference distance below which the audio is unattenuated, and then it gets attenuated out to the maximum distance, above which there's no further attenuation.

So we hope that you use this stuff. It will give depth to your game and make it more engaging. And the more you use it and the more feedback we get, the better we can make it. So please use it. And now I'm going to bring Kapil back up to talk about OpenAL.

All right. So how do you get access to all of this stuff? Well, the answer is OpenAL. What is OpenAL? OpenAL is an open standard audio API for 3D or spatial audio. OpenAL was designed to complement OpenGL, and they share the same coordinate system. We have implementations of OpenAL on both Mac OS X and iOS.

So let me run you through the OpenAL coordinate system. OpenAL has a right-handed Cartesian coordinate system. So if I hold my right hand up with my thumb pointing to the right, that's the positive x-axis. My index finger pointing upwards, that's the positive y-axis. And my middle finger pointing towards me, that's the positive z-axis. The intersection of all three of them is the origin.

Now, in OpenAL, you can not only set source coordinates and listener coordinates in this three-dimensional space, but you can also orient the listener, which is like the direction and rotation of the listener. What OpenAL does is handle the panning of the sources when you reorient the listener. Now, take a look at this example here, where the listener is looking straight ahead, and there are two sources to the right. So if you were to listen to that audio, it would sound like there are two sources to the right.

Now if you reorient the listener like so, one of the objects is straight on and the other object is to the left. So again, if you were to listen to the audio, it should sound like one of the objects is straight ahead of you and the other one is to the left. OpenAL handles the translation of the source coordinates under the hood and pans these objects correctly.

Now let's take a look at OpenAL's API objects. There are four basic API objects, and it starts off with the context. The context is basically the environment that has the listener. The listener is implicit to the context, and it also has sources. Now, the OpenAL context is built on top of the 3D mixer that James had talked about. The 3D mixer has all of these spatial audio capabilities.

The next API objects are OpenAL sources, which are tied to a context. Each source has certain properties; for instance, you can set a 3D coordinate on a source, and sources have sounds associated with them. Now, where are these sounds stored? They're stored in OpenAL buffer objects.

What's important here is that you can share a buffer with multiple sources. So if you had a particular sound, like maybe a gunshot, and you wanted to play it with multiple sources, you can associate that buffer with multiple sources. The fourth API object is the OpenAL device, which is the output hardware. The context renders to a device, and that's how that fits in.

So let's take a look at some code as to how we can get started playing a sound with OpenAL. The first thing I need to do here is create a device. That's my output device. I then take the device ID and I create a context because my context is going to render to the device. Only one context can be rendering at any given time. So that's why I take this context and I make it the current context. Next, I need to have a source that's tied to my context. So I generate a source.

I need to have a sound that I associate with my source. So to do that, I then generate a buffer. Now, I need to get some audio data and stick it into my OpenAL buffer. And I'm going to do that with the help of the Extended Audio File API that James had mentioned. Once I do that, I get this audio data and I stick it into my OpenAL buffer using the alBufferDataStatic call. Let's take a look at how we can get this audio data.
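A minimal sketch of that setup, up to the point where we need audio data (error checking omitted):

```objc
#include <OpenAL/al.h>
#include <OpenAL/alc.h>

// Inside your audio setup routine:
// Open the default output device and create a context that renders to it.
ALCdevice  *device  = alcOpenDevice(NULL);
ALCcontext *context = alcCreateContext(device, NULL);
alcMakeContextCurrent(context);      // only one context renders at a time

// A source tied to the current context, and a buffer to hold the sound.
ALuint source, buffer;
alGenSources(1, &source);
alGenBuffers(1, &buffer);
```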

So the first thing I do is once again create a URL pointing to the file in my app bundle. This file can be compressed or uncompressed. It doesn't matter; ExtAudioFile will handle this for you. Once you open your file, you now set up a client format. What is the client format? The client format is the format that you want your data in.

And why is that important? It's important because OpenAL only accepts data in certain formats. It doesn't accept compressed data; it requires the data that you give it to be uncompressed. So I need to set up my client format. The first thing I do is set up some flags: I want my data to be signed integer. And the format that I want in this particular example is 22K linear PCM data with the flags that I've set up, mono (one channel of audio), and 16-bit.

So once I set up this format, I just set that property on the ExtAudioFile. Now I've defined my client format, so I am going to get data in a certain format. What do I need to do next? I need to stick it into a buffer. For that, I have to allocate some space, some chunk of memory. How much space do I allocate? Well, it's the number of frames times the size of the data type that I set in my client format, which is 16-bit signed integer.

So once I have that data size, I allocate that much space, I create an audio buffer list, because that's what ExtAudioFileRead requires, and I read this data into that buffer list. Now, in this particular example, I'm reading all of my data into one buffer. But that's not what you would do if you had a larger sound. What you should do there is have multiple buffers, allocate space appropriately, and load chunks of your sound into each one of these buffers.
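Here's a sketch of that ExtAudioFile sequence (the file name and error handling are placeholders; for a long sound you'd read in chunks rather than all at once):

```objc
#import <Foundation/Foundation.h>
#include <AudioToolbox/ExtendedAudioFile.h>

NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"monsterGrowl" withExtension:@"m4a"];
ExtAudioFileRef extFile = NULL;
ExtAudioFileOpenURL((__bridge CFURLRef)fileURL, &extFile);   // plain cast if not using ARC

// Client format: what we want handed back, regardless of what's in the file:
// 22.05 kHz, mono, 16-bit signed integer linear PCM.
AudioStreamBasicDescription clientFormat = {0};
clientFormat.mSampleRate       = 22050.0;
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mChannelsPerFrame = 1;
clientFormat.mBitsPerChannel   = 16;
clientFormat.mFramesPerPacket  = 1;
clientFormat.mBytesPerFrame    = 2;    // 16 bits x 1 channel
clientFormat.mBytesPerPacket   = 2;
ExtAudioFileSetProperty(extFile, kExtAudioFileProperty_ClientDataFormat,
                        sizeof(clientFormat), &clientFormat);

// Number of frames in the file (in the file's own format; close enough for sizing here).
SInt64 fileFrames = 0;
UInt32 propSize = sizeof(fileFrames);
ExtAudioFileGetProperty(extFile, kExtAudioFileProperty_FileLengthFrames, &propSize, &fileFrames);

// Frames times bytes-per-frame of the client format gives the allocation size.
UInt32 dataSize = (UInt32)fileFrames * clientFormat.mBytesPerFrame;
void *audioData = malloc(dataSize);

AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize   = dataSize;
bufferList.mBuffers[0].mData           = audioData;

UInt32 framesToRead = (UInt32)fileFrames;
ExtAudioFileRead(extFile, &framesToRead, &bufferList);   // decodes/converts to the client format
ExtAudioFileDispose(extFile);
```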

So now that I have the data in my buffer, I go back to OpenAL and I stick that data into my OpenAL buffer. You can see now how all of this ties in. The format is mono, 16-bit signed integer. The data pointer points to the location in memory that contains my data, and the data size is how much space it takes.

Once I've done that, I now attach this OpenAL buffer to my source, and the source now has a sound associated with it. That was the hard part. Now all we have to do is set some source and listener attributes. Here I set my source position, I set a reference distance, which deals with distance attenuation, and I want my source to be looping that sound. So if you loaded data from a compressed file, notice how easy it is now to loop this sound. You didn't have to deal with any of the complexities like the leading or trailing frames. You can just loop this sound, and it's that easy.

I can then go ahead and set some listener attributes like the position, the orientation, and I'm ready to start playing my sound. During gameplay, I can go ahead and change the source position or the listener position. And that's an overview of how you use OpenAL. This is how OpenAL natively works. Now we're going to take a look at OpenAL extensions.
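Continuing the sketch: handing the decoded data to OpenAL, setting the attributes just described, and starting playback. The session uses the alBufferDataStatic extension, which avoids copying; the standard alBufferData call shown here copies the data, and the coordinates are arbitrary examples.

```objc
// Put the PCM data into the OpenAL buffer: mono, 16-bit signed integer at 22.05 kHz.
alBufferData(buffer, AL_FORMAT_MONO16, audioData, dataSize, 22050);

// Attach the buffer to the source and set source attributes.
alSourcei(source, AL_BUFFER, buffer);
alSource3f(source, AL_POSITION, 2.0f, 0.0f, -4.0f);
alSourcef(source, AL_REFERENCE_DISTANCE, 5.0f);   // distance attenuation reference
alSourcei(source, AL_LOOPING, AL_TRUE);           // looping works even though the file was AAC

// Listener attributes: position, then orientation as an "at" vector and an "up" vector.
alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
ALfloat orientation[6] = { 0.0f, 0.0f, -1.0f,     // facing down the negative z-axis
                           0.0f, 1.0f,  0.0f };   // up is the positive y-axis
alListenerfv(AL_ORIENTATION, orientation);

alSourcePlay(source);

// During gameplay, update positions as things move:
// alSource3f(source, AL_POSITION, newX, newY, newZ);
// alListener3f(AL_POSITION, listenerX, listenerY, listenerZ);
```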

What are OpenAL extensions? OpenAL extensions are a mechanism for augmenting the existing API set. What you need to do is determine if an extension is present at runtime, and then you get pointers for the extension's functions, and you use those functions. So we're going to talk about two extensions today.

One of them is the Apple Spatial Audio extension, or the ASA extension, that has the reverb, occlusion, and obstruction effects. This is new to iOS 5 and is already available on Mac OS X. The second extension that we're going to talk about is the Source Notifications extension, which is basically a callback mechanism for source state. This is new to both iOS 5 and Mac OS X.

So first, the ASA extension. This comprises three spatial effects: reverb, occlusion, and obstruction. I'm just going to briefly summarize what James had talked about. Reverb is a simulation of a room environment. Occlusion is an effect that simulates a source being in a different space than the listener. Obstruction is an effect that simulates an obstacle between the source and the listener.

So there are some listener properties that you can set. You can turn the reverb on or off. It's off by default. And you can set a global level from minus 40 dB to plus 40 dB. We also have a number of preset room types that you can use in your game.

For instance, if your game is a first-person shooter and you go into a particular environment like a medium chamber, then you could use the medium chamber preset and have things sound the way they should. If you're looking for a particular sound, maybe something we don't have as a preset, that's something you can tweak.

For instance, if your game involves an airport and you think an airport sounds brighter than a large room, what you can do is set the large room preset and then use the EQ that we've included after that to change the sound or tweak it in the way that you want. This EQ is a standard parametric EQ with gain, bandwidth, and center frequency controls.

Now you can also set certain source properties, and these are properties you would set on each source tied to the context. You can set a send level that goes from 0 to 1. 0 will be completely dry, 1 is completely wet. You can set an occlusion level, which is basically a low-pass filter effect. Minus 100 dB is the maximum amount of attenuation you can set. 0 dB would be no attenuation. Similar control for the obstruction. Again, a low-pass filter effect. You can set an attenuation level from 0 dB to minus 100 dB.

Here's an example of me using this extension. I'm setting a listener property here, which is just turning the reverb on, and setting a source property, which is setting a certain amount of send level for a source. Let's take a look at a quick demo of the ASA extension.

So what I have here is a particular environment that has my source that's moving up here and a listener that's moving down here. Now just to make things interesting, I'm going to assume that my source is a demon or a monster or something like that. And this is what it sounds like, dry.

[Transcript missing]

Now, let's take the monster and the listener and put them in a medium-sized chamber. That's what the first preset is, and let's hear what that sounds like.

I can move the listener around as well. Now you get an idea of what that sounds like. Let's say that this is just a little too close for comfort. So we're going to move all of this to a bigger environment. Let's assume that I have a basement below my house that's really massive and has no furniture. For an echoey environment simulation, here's what that would sound like. Now, when you use the reverb, you want it to be a subtle effect. It's not a very in-your-face effect. But this kind of gives you an idea of what it sounds like.

Now, let's assume that I get out of the basement and I slam the door shut. And this is what it sounds like. So you can hear the source continue to growl away in the basement. And as I move or walk up the stairs away from the basement, this is what it sounds like.

So what you're hearing here is the occlusion filter. And what this simulates is the listener being in a different space than the source. So that's a quick example of what the ASA extension sounds like. And this is just scratching the surface. There's so much more you can do with it. So go ahead, try it, and give us feedback.
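As a rough sketch of the listener and source property calls Kapil describes, using the runtime extension mechanism. The extension name, function names, and constants below follow Apple's OpenAL extensions header (MacOSX_OALExtensions.h) as best I recall, so treat them as assumptions to verify against your SDK; device and source are the objects created earlier.

```objc
// Function-pointer types matching the ASA setter declarations as I recall them.
typedef ALenum (*alcASASetListenerProcPtr)(ALuint property, ALvoid *data, ALuint dataSize);
typedef ALenum (*alcASASetSourceProcPtr)(ALuint property, ALuint source, ALvoid *data, ALuint dataSize);

if (alcIsExtensionPresent(device, "ALC_EXT_ASA")) {
    alcASASetListenerProcPtr setListener =
        (alcASASetListenerProcPtr)alcGetProcAddress(device, "alcASASetListener");
    alcASASetSourceProcPtr setSource =
        (alcASASetSourceProcPtr)alcGetProcAddress(device, "alcASASetSource");

    // Listener property: turn the reverb on (it's off by default).
    ALuint reverbOn = 1;
    setListener(ALC_ASA_REVERB_ON, &reverbOn, sizeof(reverbOn));

    // Source property: send half of this source's signal to the reverb (0 = dry, 1 = wet).
    ALfloat sendLevel = 0.5f;
    setSource(ALC_ASA_REVERB_SEND_LEVEL, source, &sendLevel, sizeof(sendLevel));
}
```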

So let's move on to the next extension. The second extension that we have here is the Source Notifications extension. What this extension achieves is that it eliminates the need for polling to check for a change in source state or a change in the number of buffers processed. The user is notified via a callback mechanism.

Now, what are the notification types that you can sign up for? One is the source state. There are four basic source state types in OpenAL. They are initial, playing, paused, and stopped. If you have a change in source state, then you will be notified with the source ID and the new state. You can also be notified if a new buffer is processed. Or if you've queued up several buffers and that entire queue of buffers has looped over.

Now here's how you do things natively in OpenAL. It is a common practice to queue up a number of buffers on a source. So let's assume that your sound is divided into chunks, you put them into different buffers, and then you queue that up on the source. As each buffer gets processed one by one, you take that out, and then you load more data into it, and then you queue that up once again.

So natively in OpenAL, what you are required to do is poll continuously to check the number of buffers processed. But this is expensive, and we don't recommend this anymore. Instead, what you should do today is use the extension that we've provided. You register for a notification with a source ID: I want to be notified when a new buffer is processed, and call me at my method, which is handleNotification.

And what happens is that every time a new buffer gets processed, you get called there with your source ID and the notification ID, and you do something based on them. You can sign up for any of the notifications that I talked about: source state, buffers processed, or the entire queue having looped.
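A sketch of registering for that callback. Again, the extension, function, and callback names follow my recollection of Apple's header and should be treated as assumptions to check:

```objc
// Callback invoked by OpenAL when the notification fires.
static ALvoid handleNotification(ALuint sid, ALuint notificationID, ALvoid *userData) {
    if (notificationID == AL_BUFFERS_PROCESSED) {
        // Unqueue the processed buffer, refill it with the next chunk of audio,
        // and queue it on the source again.
    }
}

// Registration, typically done once at setup time.
typedef ALenum (*alSourceAddNotificationProcPtr)(ALuint sid, ALuint notificationID,
                                                 ALvoid (*proc)(ALuint, ALuint, ALvoid *),
                                                 ALvoid *userData);
if (alIsExtensionPresent("AL_EXT_SOURCE_NOTIFICATIONS")) {
    alSourceAddNotificationProcPtr addNotification =
        (alSourceAddNotificationProcPtr)alGetProcAddress("alSourceAddNotification");
    addNotification(source, AL_BUFFERS_PROCESSED, handleNotification, NULL);
}
```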

So just to quickly summarize OpenAL, we first talked about the coordinate system and the listener's orientation. I talked about the API objects, which is the context, the sources, the buffers, and the device. We took a look at some code, and then we moved on to OpenAL extensions, where we talked about the Apple Spatial Audio extension and the Source Notifications extension.

Let's move on to the last section in our talk today, which is audio session and games. Now audio session is an API that deals with the application's interaction with this complex audio environment. But I'm just going to talk about a section of it today. We have an entire talk on audio session that's going to be held tomorrow morning, so I encourage you to go and watch that one. But what I'm going to talk about is certain things that are directly relevant to games. I'm going to talk about what audio session category you pick, how you detect if background audio is running, and how you respond to interruptions.

So I'm going to start off by talking about the ambient category. In the ambient category, audio obeys the ringer switch and audio obeys the screen lock. What that means is that if a user is playing your game and he's sitting in a public environment where he doesn't want the game sounds to disturb other people, he goes and he turns off the ringer switch and he continues to see the graphics and all the videos, but there's no audio.

How do you pick between this ambient category or solo ambient? That brings us to our next topic, which is: is your game's audio primary? Now, you could have a user who's playing background audio like the iPod or Pandora, and then he opens your game. So you're left with a decision to make here, which is: do you want to interrupt the background audio and play your game's soundtrack, or do you not want to do that?

What you could do here is present the user with a prompt in your game that says, "Play game sounds?" If the user picks yes, then what he wants you to do is interrupt the background audio, so you pick the solo ambient category and you play your game soundtrack. If the user picks no, you pick the ambient category, don't play your soundtrack, and continue to let Pandora or the iPod play.

Now, why this doesn't always work is because in most cases users don't know what you're talking about when you present them with a generic prompt like that. So we don't recommend doing things this way. How you should approach this problem is use the audio session property to check if other audio is playing. If other audio is playing, you pick the ambient category. If other audio isn't playing, you pick the solo ambient category.
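A sketch of that check using the iOS 5-era audio session property (on later systems AVAudioSession exposes this directly):

```objc
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

AVAudioSession *session = [AVAudioSession sharedInstance];   // also initializes the session

// Ask the audio session whether other audio (iPod, Pandora, etc.) is already playing.
UInt32 otherAudioIsPlaying = 0;
UInt32 propertySize = sizeof(otherAudioIsPlaying);
AudioSessionGetProperty(kAudioSessionProperty_OtherAudioIsPlaying,
                        &propertySize, &otherAudioIsPlaying);

NSError *error = nil;
[session setCategory:(otherAudioIsPlaying ? AVAudioSessionCategoryAmbient
                                          : AVAudioSessionCategorySoloAmbient)
               error:&error];
```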

Now, what are interruptions? Interruptions can be higher-priority audio, like a phone call or the clock's alarm going off. And what this does to your application is that it makes your session inactive and your currently playing audio is stopped. How do you recover from an interruption? Well, this really depends on the API that you're using. But in general, you reactivate some state.

Now, we've talked about two APIs today, AV Audio Player and OpenAL. In AV Audio Player's case, when you have an interruption, the AV Audio Players are all stopped. You need to override the AV Audio Player's delegate methods if you want to show this in your UI or if you want to restart playback once the interruption has gone away.

So taking a look at that, here's what it would look like. Begin interruption, playback automatically stops. You can show it in the UI if you want to. And when the interruption goes away, you can resume or restart playback of sounds if that's what you desire in your game. And once again, you can update the UI.
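In code, those AVAudioPlayerDelegate callbacks might look like this (the UI helper methods are hypothetical):

```objc
- (void)audioPlayerBeginInterruption:(AVAudioPlayer *)player {
    // Playback has already been stopped by the system; just reflect it in the UI.
    [self showAudioPausedUI];     // hypothetical UI helper
}

- (void)audioPlayerEndInterruption:(AVAudioPlayer *)player {
    [player play];                // resumes from where the interruption left off, if desired
    [self showAudioPlayingUI];    // hypothetical UI helper
}
```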

In the case of OpenAL, you need to use the AV audio session delegate methods. And what you have to do here is first invalidate the context when you're interrupted and make the context current once again when the interruption ends. So taking a look at the AV audio session delegate methods, when the interruption comes in or begins, I make my current context equal null, or I'm invalidating the context.

When the interruption goes away, I set my session active once again, and I make my context the current context. So to summarize that, when you're using audio session and you're dealing with games, you have to decide on what audio session category you want to use. And that's dependent on whether background audio is playing or not. And you have to handle interruptions in your application.
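For the OpenAL case, the AVAudioSessionDelegate callbacks described above might look roughly like this (openALContext is a hypothetical property holding the context created earlier):

```objc
- (void)beginInterruption {
    alcMakeContextCurrent(NULL);                 // invalidate the OpenAL context
}

- (void)endInterruption {
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setActive:YES error:&error];   // reactivate the session
    alcMakeContextCurrent(self.openALContext);   // make our context current again
}
```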

So to sum all of this up, we started off by talking about a simple game, an AV audio player. We moved on to audio assets. We talked about sample rate conversions, and we also talked about compressed files. We then moved on to a complex game where we took a look at spatial audio and how you can use OpenAL and its extensions. And finally, we talked about audio session and games.

So a related session is Audio Session Management for iOS, which is tomorrow morning at 11:30 a.m. Alan and Eric are the evangelists at Apple, and you can contact them if you have questions. There's a wealth of information online at developer.apple.com. We also have the Apple Developer Forums, where you can post questions. So I hope all of this will help you in your game development, and we wish you good luck. Thank you.