Graphics • 51:23
Mac OS X features a state-of-the-art audio engine in Core Audio, enabling the most powerful music and audio applications available today. Learn how you can harness these capabilities for your own development. This session brings you up to speed on the latest innovations including OpenAL for gaming audio, as well as cover basics such as codecs, APIs, audio file formats, and more.
Speakers: James McCartney, Doug Wyatt, Bill Stewart, Bob Aron
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Good morning. Welcome to session 200, Core Audio in Depth. Please welcome your digital audio plumber, James McCartney. Test? OK. OK, I'm going to talk about new features for Tiger in the audio toolbox. First, the audio converter. First of all, I guess I'll introduce what's going to happen. I'm going to speak first, then Doug Wyatt will speak about some other new features for Tiger, and then Bill Stewart will speak.
So the audio converter is a-- what is it? It's an API for Core Audio that allows you to convert different audio formats from one to another. It does floating point integer conversion, interleaving and deinterleaving channels, sample rate conversion, channel reordering, and encoding and decoding compressed formats. And any codec that's installed on the system can be used by the audio converter to convert data. The API looks like this. These are the calls. You create a new audio converter using Audio Converter New.
And you set properties, like setting the bit rate for encoding. And then you use Audio Converter Fill Complex Buffer to fill your buffers with the converted data, and you pass an input data proc for getting the input data to the converter. It's a pull model. And then if you want to stop a stream and restart the converter, you'd use Audio Converter Reset to flush out all the buffers and start over again. And then Audio Converter Dispose will dispose the converter.
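In code, that whole sequence looks roughly like the minimal sketch below. This isn't from the session; it assumes the source data is already sitting in memory, the input proc hands over one preloaded buffer, and error handling is left out.

```c
#include <AudioToolbox/AudioToolbox.h>

// The converter "pulls": FillComplexBuffer calls our input proc whenever it
// needs more source data. Here the source is a single preloaded buffer.
typedef struct {
    void   *data;
    UInt32  bytes;
    UInt32  bytesPerPacket;   // of the *source* format
    Boolean consumed;
} MySource;

static OSStatus MyInputProc(AudioConverterRef converter,
                            UInt32 *ioNumberDataPackets,
                            AudioBufferList *ioData,
                            AudioStreamPacketDescription **outPacketDesc,
                            void *inUserData)
{
    MySource *src = (MySource *)inUserData;
    if (src->consumed) {
        *ioNumberDataPackets = 0;        // no more input to give
        return noErr;
    }
    ioData->mBuffers[0].mData         = src->data;
    ioData->mBuffers[0].mDataByteSize = src->bytes;
    *ioNumberDataPackets = src->bytes / src->bytesPerPacket;
    src->consumed = true;
    return noErr;
}

static void ConvertOnce(const AudioStreamBasicDescription *srcFormat,
                        const AudioStreamBasicDescription *dstFormat,
                        MySource *src,
                        AudioBufferList *outBuffers, UInt32 outPackets)
{
    AudioConverterRef converter = NULL;
    AudioConverterNew(srcFormat, dstFormat, &converter);

    // Example property: ask an encoder for a bit rate (ignored for PCM-to-PCM).
    UInt32 bitRate = 128000;
    AudioConverterSetProperty(converter, kAudioConverterEncodeBitRate,
                              sizeof(bitRate), &bitRate);

    UInt32 ioPackets = outPackets;
    AudioConverterFillComplexBuffer(converter, MyInputProc, src,
                                    &ioPackets, outBuffers, NULL);

    AudioConverterReset(converter);      // e.g. before restarting a stream
    AudioConverterDispose(converter);
}
```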
So new in Tiger is a new call called Audio Converter New Specific, and that allows you to choose a specific manufacturer's codec. By default, when you create a new audio converter, you get the default codec for a certain format. So this allows you to specifically choose a certain manufacturer's codec.
You can see the difference between Audio Converter New Specific and Audio Converter New is that there's an array of audio class descriptions that's passed in. You pass in a list of audio class descriptions, and it will choose the earliest matching codec in the list. So you would put the codec manufacturers in the order that you prefer they match. The audio class description looks like this. There are three fields that are four-character codes. The first is the type, which is either ADEC or AENC for decoder and encoder.
And then the second one, subtype, should match the format ID code for the format. So like 'aac ' (AAC plus a space) for AAC, or '.mp3' for MP3, or whatever the format type is, IMA4. And then the last four-character code is the manufacturer code. So for Apple's AAC decoder, it would look as shown here. The type is ADEC for decoder, the subtype is 'aac ' for AAC, and then APPL for the Apple manufacturer. Okay, another new feature for the Audio Converter is the Audio Converter property settings.
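As a rough illustration of Audio Converter New Specific, here's a hedged sketch that asks for Apple's AAC decoder. The constant names used (kAudioDecoderComponentType, kAudioFormatMPEG4AAC, kAudioUnitManufacturer_Apple) are the spellings I'd expect from the headers; check AudioCodec.h and related headers for the exact names.

```c
#include <AudioToolbox/AudioToolbox.h>

// Prefer Apple's AAC decoder; the converter picks the earliest match in the
// array, so you list codecs in order of preference.
static AudioConverterRef NewAACDecoderPreferringApple(
        const AudioStreamBasicDescription *srcAAC,
        const AudioStreamBasicDescription *dstPCM)
{
    AudioClassDescription preferred[] = {
        { kAudioDecoderComponentType,       // 'adec': a decoder
          kAudioFormatMPEG4AAC,             // 'aac ': the format ID
          kAudioUnitManufacturer_Apple },   // Apple's implementation
    };
    AudioConverterRef converter = NULL;
    OSStatus err = AudioConverterNewSpecific(
        srcAAC, dstPCM,
        sizeof(preferred) / sizeof(preferred[0]), preferred,
        &converter);
    return (err == noErr) ? converter : NULL;
}
```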
This is a CFArray that you get back from the audio converter that contains the parameter settings for the converter. And you can use that for building a user interface for the audio converter, or for configuring another audio converter similarly to the current one without having to run through and do all the parameter settings again.
So the format of the CF object is a CFArray of CFDictionaries, one for each converter that's in the audio converter chain. An audio converter, when it converts audio, may have several components inside of it, like a float-to-integer conversion, or sample rate converters, or interleavers and deinterleavers, or encoders or decoders. And so each subconverter in the chain that has property settings will have a CFDictionary that contains all the property settings for that converter.
So each converter dictionary is a CFDictionary, and it has several keys. The first key is converter, and the value for that key is the non-localized name of the converter. This is a value that you would use for matching. Like if you want to find the sample rate converter in the audio converter chain, you would go through and look for "sample rate converter", and that will always be the same.
And then for the key name, the value is the localized name of the converter. So that would change depending on the local language. And then parameters is a key whose value is an array of dictionaries, one for each parameter. So each parameter of the audio converter will have a dictionary that describes it.
And so the parameter dictionaries look like this. There's a key, the key key is the non-localized name of the parameter, which you use for matching again. And then name is the localized name of the parameter for displaying to the user. And then available values is all the possible values for that parameter. So if it's sample rates, it would be a list of the sample rates that are supported. Or if it's bit rates, it would be the list of the bit rates that are supported. That would be all the bit rates supported for a codec.
And then limited values would be the ones that would be enabled at the current time for the current settings of the converter or codec. So if you have the codec set up for, like in AAC's case, for a large number of channels, then you can't choose lower bit rates.
So those values would not appear in the limited values list. And then current value contains the current value. It's an index into the list of available values. And then hint is a Boolean which tells you if it's a normal or expert parameter. And then summary is a little help string for the parameter. So if someone moused over it on the user interface, you could show that to the user to describe the parameter.
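Putting those keys together, a sketch of fetching and walking the settings array might look like this. Note the assumptions: the property constant is taken to be kAudioConverterPropertySettings from AudioConverter.h, the literal key strings ("converter", "parameters") are the ones named in the talk, and the caller is assumed to own the returned CFArray.

```c
#include <AudioToolbox/AudioToolbox.h>

static void DumpConverterSettings(AudioConverterRef converter)
{
    CFArrayRef settings = NULL;
    UInt32 size = sizeof(settings);
    if (AudioConverterGetProperty(converter, kAudioConverterPropertySettings,
                                  &size, &settings) != noErr || !settings)
        return;

    CFIndex count = CFArrayGetCount(settings);
    for (CFIndex i = 0; i < count; ++i) {
        CFDictionaryRef sub =
            (CFDictionaryRef)CFArrayGetValueAtIndex(settings, i);
        // "converter" is the non-localized name you'd match against, e.g.
        // to find the sample rate converter in the chain.
        CFShow(CFDictionaryGetValue(sub, CFSTR("converter")));
        // "parameters" is the array of per-parameter dictionaries (key,
        // name, available values, limited values, current value, hint...).
        CFShow(CFDictionaryGetValue(sub, CFSTR("parameters")));
    }
    CFRelease(settings);   // assuming the caller owns the returned array
}
```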
Okay, so this is what a dictionary looks like. This would be for an audio converter that just has a sample rate converter in it. So there's, at the top level it's a CFArray, there's one entry in it, it's a CFDictionary that contains the parameters for the sample rate converter, or contains the keys for the sample rate converter. So the first key is converter, which is sample rate converter, that's what you could use for matching. You can find it. The name is sample rate converter, that will change depending on the local language.
And then you have a parameter array. So there's two parameters for the sample rate converter. The first parameter is called quality, and this next parameter is called priming method. So then you have the localized name for those parameters. And available values for quality would be minimum, low, medium, high, maximum.
And then for priming method it would be pre, normal, or none. And then you have limited values, which is basically the same in this case; that doesn't change. And current value would be the index into the available values array. And summary is the help string you show to the user. And then the hint field tells you whether it's normal or expert.
The priming method is an expert parameter. So, okay, that's the audio converter. That's just the new features, basically, and a short review of what it is. So Audio File is Core Audio's interface for audio file parsing. And the newest feature for Tiger is the Audio File Component. So if you have an audio file format and you say, gee, I wish Core Audio could read this file format, well, you can write your own format parsing component and install it on the system. And then everybody who uses the Audio File API will be able to read the format that you wrote your component for. So these are component plug-ins for the Audio File API.
And we're going to provide SDK code so that you can write your own component. And there will be a sample component implemented there. And now if your file container contains a compressed format, then you'll want to write an audio codec so that people will be able to decode and encode to that data format and then put it in your file format or some other file format. Okay.
So this is the audio file component API. It's basically a mirror of the audio file API. The audio file API will just call down to these. And the most important thing about the audio file component API is that you don't call this API. It's called from the audio toolbox framework.
Your audio file component implements the component selectors that this API calls into. So this API is basically glue code. You can access it through the audio toolbox if you wanted to test your component directly. For normal use, it's meant to be called from the audio file API, calling your component for the client. And the SDK has classes that-- that implement those calls. So you would only have to subclass one of the classes we provide in the SDK to create your audio file parsing component.
So, as an aside, the Audio Codec API is another similar, it's a plug-in API for the audio converter. And you don't call the Audio Codec API either. It's meant to be called from the toolbox by the audio converter. So if you write an audio codec, or if you want to convert data, you shouldn't use the audio codec directly. You should use the audio converter.
The audio converter instantiates the codecs when someone requests to do a certain conversion. And there's a sample codec implementation in developer examples as well. That brings us to what's the philosophical split between a container for data and the data itself. Audio File API is meant to deal with the container for the data.
When you open an audio file and you say, I want PCM samples out of this, the audio file provides you access to the raw data as it is in the file. So if you want to convert that from whatever format it's in to what you're interested in, maybe PCM, then you would need to use the audio converter to do that.
So now, for extending the Audio File API, you have audio file components to support new file types; and to extend what new data formats you can handle, you have the Audio Codec API for writing components that convert compressed formats. And then to find out what capabilities are available in the system: for Audio File, you use Audio File Get Global Info, which will allow you to find out what file types are supported on the system. And for the audio converter, you use the Audio Format API, which will tell you what audio formats have codecs installed on the system, and you can get information about those formats.
So, okay, well, I just blew through those. Okay. So here's an example of using the audio format API to find out what encoders are installed in the system. Basically, you have a get property info to find out what's the size of the list you need to allocate to get the format IDs back.
And then you call audio format get property, and that will give you a list of all the four-character codes that would go in the subtype field of the audio class description or the format ID field of an audio stream basic description. And so you have an array there of the format IDs, and then I have a loop here that goes through and asks the audio format API to get the format name as a CFString, and then I print it out there. So that's how you do it. That's how you could print out a list of all the encoder-- available formats for encoding on the system.
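Here's a sketch of that pattern, assuming the encoder list comes back from kAudioFormatProperty_EncodeFormatIDs and the display name from kAudioFormatProperty_FormatName; error handling is minimal.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

static void PrintAvailableEncoders(void)
{
    UInt32 size = 0;
    if (AudioFormatGetPropertyInfo(kAudioFormatProperty_EncodeFormatIDs,
                                   0, NULL, &size) != noErr)
        return;

    UInt32 count = size / sizeof(UInt32);
    UInt32 *formatIDs = (UInt32 *)malloc(size);
    AudioFormatGetProperty(kAudioFormatProperty_EncodeFormatIDs,
                           0, NULL, &size, formatIDs);

    for (UInt32 i = 0; i < count; ++i) {
        AudioStreamBasicDescription asbd = { 0 };
        asbd.mFormatID = formatIDs[i];       // only the format ID is needed

        CFStringRef name = NULL;
        UInt32 nameSize = sizeof(name);
        if (AudioFormatGetProperty(kAudioFormatProperty_FormatName,
                                   sizeof(asbd), &asbd,
                                   &nameSize, &name) == noErr && name) {
            CFShow(name);                    // the human-readable format name
            CFRelease(name);
        }
    }
    free(formatIDs);
}
```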
And similarly, here's a code example for printing out all the file types that are writable on the system. It's the same setup. Basically, you call Audio File Get Global Info Size to find out how big of a list you need to allocate, and Audio File Get Global Info for getting the list of file types. Those are also four-character codes. And then you can get the name, the CFString name, for each file type, and that's printed out in the loop there as well.
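And a matching sketch for the file-type side, assuming kAudioFileGlobalInfo_WritableTypes and kAudioFileGlobalInfo_FileTypeName are the global-info selectors involved.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

static void PrintWritableFileTypes(void)
{
    UInt32 size = 0;
    if (AudioFileGetGlobalInfoSize(kAudioFileGlobalInfo_WritableTypes,
                                   0, NULL, &size) != noErr)
        return;

    UInt32 count = size / sizeof(UInt32);
    UInt32 *types = (UInt32 *)malloc(size);
    AudioFileGetGlobalInfo(kAudioFileGlobalInfo_WritableTypes,
                           0, NULL, &size, types);

    for (UInt32 i = 0; i < count; ++i) {
        UInt32 fileType = types[i];          // a four-character file type code
        CFStringRef name = NULL;
        UInt32 nameSize = sizeof(name);
        if (AudioFileGetGlobalInfo(kAudioFileGlobalInfo_FileTypeName,
                                   sizeof(fileType), &fileType,
                                   &nameSize, &name) == noErr && name) {
            CFShow(name);
            CFRelease(name);
        }
    }
    free(types);
}
```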
So, okay, another new feature in Tiger is to be able to get and set chunks in chunky file formats like AIFF and WAV files. So, and the data that you get back or set should be in the raw native format of the file. There's no parsing done on what the contents of the chunk are.
So, there's an audio file property, chunk IDs, which allows you to get the list of four-character codes for the chunks that are in the file. And then Audio File Count User Data is for when you have a certain chunk ID that you're interested in and you want to find out how many of that chunk are in the file. You can call this and get the number of occurrences of that chunk in the file.
And then Get User Data Size will tell you the size of a specific chunk. You pass in the file and the chunk ID and the index of that chunk ID in the file. So, if there are, like, four and you want the second one, you can pass one for the index; it's zero-based. And then that will tell you the size of that chunk.
So, then you could allocate a buffer and use Audio File Get User Data to get the data; you can get the data for that chunk out of the file. And so, if you're writing an audio file component to support your own file format and it's a chunky file format, these calls go down into selectors in the audio file component API as well. So, you could pass back your own chunks from your file format. All right.
So, then Get User Data actually gets the data out of the file. There's a flags field for this that's basically reserved for future use. Then you pass in the size and the buffer for the user data. And then Audio File Set User Data is where you pass in the size and the data, and it sets that on the file. All right.
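A rough sketch of those chunk calls, under the assumption that the shipped signatures match what's described here; the flags mentioned above and error handling are left out.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// List the chunk IDs in an open file, then read the raw bytes of the first
// occurrence of one particular chunk.
static void DumpFirstChunk(AudioFileID file, UInt32 chunkID)
{
    // Which chunk IDs does the file contain?
    UInt32 size = 0;
    AudioFileGetPropertyInfo(file, kAudioFilePropertyChunkIDs, &size, NULL);
    UInt32 *ids = (UInt32 *)malloc(size);
    AudioFileGetProperty(file, kAudioFilePropertyChunkIDs, &size, ids);
    free(ids);   // (just demonstrating the enumeration)

    // How many occurrences of this particular chunk are there?
    UInt32 count = 0;
    AudioFileCountUserData(file, chunkID, &count);
    if (count == 0) return;

    // Size of occurrence 0 (indices are zero-based), then fetch its raw bytes.
    UInt32 chunkSize = 0;
    AudioFileGetUserDataSize(file, chunkID, 0, &chunkSize);
    void *chunkData = malloc(chunkSize);
    AudioFileGetUserData(file, chunkID, 0, &chunkSize, chunkData);

    // ... inspect or modify the raw chunk, then write it back if desired:
    AudioFileSetUserData(file, chunkID, 0, chunkSize, chunkData);
    free(chunkData);
}
```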
[Transcript missing]
And so the file marker is a structure and it contains the name, position, and other information about the marker. Okay. And with that, it's time for Doug Wyatt. Thanks, James.
So I'd like to introduce a few more new features in the audio toolbox for Tiger. We have new features for high-level access to audio files. This builds on top of the audio file APIs. We have some new audio units to support scheduled playback of audio files and scheduled playback of arbitrary audio buffers. And we have a new UI component, which is in Cocoa, which you can use in your applications to present the user with lists of audio formats and file formats, as James was just describing. And we also have an audio unit that supports rendering on a separate thread.
So first, the Extended Audio File API. It provides an even higher level of access to an audio file compared to the Audio File API. As James was mentioning, we have good reasons for separating the audio file and converter APIs at the lower levels, but it becomes really useful at a higher level to just be able to sequentially access an audio file in an arbitrary PCM format. Those of you who have our SDK, you've seen the CAAudioFile C++ class in there, and this basically replaces it.
So diagrammatically, how this fits into the picture, extended audio file contains an audio converter and an audio file. It uses the audio file API to communicate with the file on disk in its native file format. But through the embedded audio converter, it then presents your application with the ability to just do I/O in the PCM format that you prefer.
This is a very simple API by comparison to the lower level ones. Pretty much all you have to do is open or create a file, configure the converter if you're doing encoding, and then you can sequentially read from the file or write to it if you're encoding or just writing PCM. You can seek around within the file down to sample accuracy even within an encoded file. And you can also tell, which is the logical thing to have if you can seek. And so this API is described in this header file here.
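A minimal sketch of that usage, reading any file as interleaved 32-bit float stereo; it uses the later URL-based open call for brevity (the Tiger-era header opened files by FSRef), and error handling is mostly omitted.

```c
#include <AudioToolbox/AudioToolbox.h>

static void ReadAsFloatPCM(CFURLRef url)
{
    ExtAudioFileRef file = NULL;
    if (ExtAudioFileOpenURL(url, &file) != noErr) return;

    // Ask for float stereo regardless of what's actually in the file; the
    // embedded audio converter does the decoding and format conversion.
    AudioStreamBasicDescription client = {
        .mSampleRate       = 44100.0,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mBytesPerPacket   = 8,   // 2 channels * 4 bytes, interleaved
        .mFramesPerPacket  = 1,
        .mBytesPerFrame    = 8,
        .mChannelsPerFrame = 2,
        .mBitsPerChannel   = 32,
    };
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(client), &client);

    enum { kFrames = 4096 };
    float samples[kFrames * 2];
    AudioBufferList buffers;
    buffers.mNumberBuffers              = 1;
    buffers.mBuffers[0].mNumberChannels = 2;
    buffers.mBuffers[0].mDataByteSize   = sizeof(samples);
    buffers.mBuffers[0].mData           = samples;

    UInt32 frames = kFrames;
    while (ExtAudioFileRead(file, &frames, &buffers) == noErr && frames > 0) {
        // ... process `frames` frames of interleaved float stereo ...
        frames = kFrames;
        buffers.mBuffers[0].mDataByteSize = sizeof(samples);
    }
    ExtAudioFileDispose(file);
}
```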
So the second new Tiger feature I'd like to mention is the audio format selection view, which is a Cocoa UI. It's a subclass of NSView, so you can do things like just embed it in a save panel as the accessory view.
It's subclassable, meaning that it's got hooks so you can decide things like, well, I know I always want to write this particular file format, so don't show that menu. And it will, as it says there, let you select an audio file type, data format that's appropriate to that file type, any encoding parameters, sample rate, and the channel layout if it's more than stereo. I'll just give you a quick peek at that. In the demo, please.
And so here I've got it embedded in the save panel of an application. And right now I've got AIFF selected, so my available audio data formats are just integer formats. But if I select AIFC, then suddenly I've got access to the encoded formats which can be embedded in an AIFC file. And if I were to select an encoded format, I'd have a button here where I could configure the encoder, but that's not hooked up yet. So this isn't in your seed, but it will be in future Tiger seeds. Back to the slides, please.
Okay, so the way it works, you just give the view object the source data format that you're encoding from, and its channel layout if it's got more than two channels. And what you get back is an audio file type ID like AIFF or WAV. You get back the output data format, like float or 24-bit int or 16-bit int, big or little endian, whatever you're encoding to or just writing to. And you get audio converter properties back, again if you're encoding.
Okay, I just did that. Okay, so next. We have several new audio units in Tiger, and there's a couple of them which are of a new class called generators, meaning that they don't take audio or MIDI in. You either interact with them programmatically by setting properties on them, or by bringing up their user interface and letting the user do something that tells them to make noise of some sort, for example, a test tone generator.
This one, the AU Audio File Player, falls into the programmatic category for the most part. It's designed to be a nice building block for playing back little slices of audio files or large slices with sample-accurate precision. It's built on top of the Extended Audio File API, which gives it the ability to play back encoded formats.
So it's particularly appropriate for situations where you want to stream a long file off of disk. It's sample accurate, so you could use it as a building block in even a digital audio workstation kind of application. And it's also usable in simpler situations where you just want to play a long file asynchronously. The one thing it's not good for is playing short sounds.
If all your application is going to do is play a few short sounds here and there, it's not worth the overhead because this audio unit will create a separate thread for reading the audio off of disk. And if you've got a 200K audio file that's an alert sound or something, you don't need that thread overhead. You might as well just load the file into memory and play it. You can use NSSound to do that, and you can also use the system sound APIs, which are really the best way to be playing alert sounds on the system now.
So conceptually, the way that the AU Audio File Player works, there's this hardware timeline. If you've ever looked at the timestamps going by on an audio unit, you'll see that they're typically these giant numbers, and it's the number of samples since the hardware was started. So there's that timeline that's going along.
And then to play audio files, in this example, I've got two events, two chunks of audio files that I want to play at some point along this timeline. And instead of having to know where on the timeline I am, I just construct my sequence of events to be relative to time zero, which is when I'm going to start playing this sequence of events.
So the one on the left, I'm playing at time zero. It's file A.AIF, and I'm playing the first 100,000 samples out of it. And I've got another chunk out of audio file B that I want to play a little bit later. So that's how the events look that I schedule when I want to have them played relative to time zero. And after that, I just need to set a start time. And I can do this in one of two ways.
If I'm in a situation where I need to synchronize very accurately, and I know exactly where on that audio device's timeline I want to start playing those events, I can specify the start time in sample numbers. The other alternative, in a simpler situation, is I can just say start now by passing a start time of minus one.
So just to give you a little more detail about how this API or this audio unit is used, you open the audio files in your application externally to the audio unit. Your application owns those references to the audio file, not the unit. So your application keeps it open. You pass an array of audio file IDs to the unit.
So when it comes time for it to read off disk, it will have the files open. You schedule the events that you initially want to have play. Then you prime the unit, which is just a set property call on it. And what that does is goes and fills up the disk buffers enough so that when you do say start, it will have the data off the disk already. Then you set the start time.
And as the events play, you get callbacks saying this one's been done, this one's been done. And in response to those callbacks, or other events in your application, you can always schedule additional events onto the file player unit. And at any time, you can just call AudioUnitReset and it will stop playing. And there's about eight paragraphs of documentation on the tiger seed in AudioUnitProperties.h describing how to use this unit programmatically, and that is in your seed.
Just to zoom in one more level, here's the structure that you use for scheduling regions to the audio file. It's got an audio timestamp. You can actually schedule using sample numbers or host times, which can be useful sometimes. And the audio timestamp structure, as you may know, contains both a sample number and a host time.
You have an optional completion callback proc that gets called when the region has been successfully loaded from disk. Well, actually, it's always called when the region has been read, and it's also used to tell you if it didn't get off the disk and played in time.
The structure also contains a reference to an audio file object. You just say where in the file, in terms of a starting sample frame and a number of frames, you want to play. This can even be in an encoded format. For example, say it's an MP3 file and you say I want sample 512. Well, that's a bit of the way into the first packet. It will actually seek into the middle of the packet and begin playback there.
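Pulling the last few slides together, a sketch of the programmatic sequence might look like this, using the scheduled-file properties from AudioUnitProperties.h; the file player unit is assumed to already be created and connected to an output, and error handling is omitted.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

static void PlayFileRegion(AudioUnit filePlayer, AudioFileID file)
{
    // 1. The application owns the open file; hand the unit its ID.
    AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduledFileIDs,
                         kAudioUnitScope_Global, 0, &file, sizeof(file));

    // 2. Schedule a region: the first 100,000 frames, at region time zero.
    ScheduledAudioFileRegion region;
    memset(&region, 0, sizeof(region));
    region.mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;      // relative to the start time below
    region.mAudioFile      = file;
    region.mStartFrame     = 0;
    region.mFramesToPlay   = 100000;
    region.mCompletionProc = NULL;          // or a proc for "this one's done"
    AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0, &region, sizeof(region));

    // 3. Prime: pre-fill the disk buffers so data is ready at start (0 = default).
    UInt32 defaultFrames = 0;
    AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduledFilePrime,
                         kAudioUnitScope_Global, 0,
                         &defaultFrames, sizeof(defaultFrames));

    // 4. Start "now" by passing a sample time of -1.
    AudioTimeStamp start;
    memset(&start, 0, sizeof(start));
    start.mFlags      = kAudioTimeStampSampleTimeValid;
    start.mSampleTime = -1;
    AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &start, sizeof(start));

    // To stop everything scheduled so far:
    // AudioUnitReset(filePlayer, kAudioUnitScope_Global, 0);
}
```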
And I've got a little demo application which illustrates the AU Audio File Player. So what this does is let me generate a random playlist out of a bunch of audio files on my hard disk. I can say how long I want the playlist to be, what the minimum slice size is, and what the longest one is. Click that. And this is a little slow because it's actually parsing through all of the frames of the MP3s. And so it's generated a playlist here. Oh, they're all the same length. That's no fun. Oh, it's OK.
So it's my little musique concrète player. So that's dynamically reading and decoding the MP3s on the I/O, or rather on the disk read thread, and preparing them to be played back. So it wasn't cheating and generating the playlist ahead of time into an audio buffer. It was actually playing those back as we went.
Okay, so one related audio unit, in fact, this one is a subclass, or actually the base class, I'm sorry, of AU Audio File Player, is the AU Scheduled Sound Player. And it shares a lot of the same semantics, like how to set the start time on it in particular.
The difference is instead of scheduling chunks of audio files to it, you give it audio buffers and memory to play. And one internal client for this is the speech synthesizer. And if your application dynamically generates audio into buffers, you may find this a useful way to pump it out into a chain of audio units. And this unit's also documented in AudioUnitProperties.h and is in the seed.
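A hedged sketch of scheduling one in-memory slice, assuming the ScheduledAudioSlice struct and kAudioUnitProperty_ScheduleAudioSlice property as documented in AudioUnitProperties.h; the PCM buffer list and frame count are assumed to come from your own code.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// The slice struct must stay alive until the unit is done with it, so it's
// static here just for illustration.
static ScheduledAudioSlice gSlice;

static void ScheduleBuffer(AudioUnit scheduledPlayer,
                           AudioBufferList *pcm, UInt32 numberFrames)
{
    memset(&gSlice, 0, sizeof(gSlice));
    gSlice.mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    gSlice.mTimeStamp.mSampleTime = 0;     // relative to the unit's start time
    gSlice.mNumberFrames = numberFrames;
    gSlice.mBufferList   = pcm;            // PCM in the unit's input format

    AudioUnitSetProperty(scheduledPlayer, kAudioUnitProperty_ScheduleAudioSlice,
                         kAudioUnitScope_Global, 0, &gSlice, sizeof(gSlice));

    // Same start-time semantics as the file player: -1 means "start now".
    AudioTimeStamp start;
    memset(&start, 0, sizeof(start));
    start.mFlags      = kAudioTimeStampSampleTimeValid;
    start.mSampleTime = -1;
    AudioUnitSetProperty(scheduledPlayer, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &start, sizeof(start));
}
```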
The last of the three new audio units I'd like to talk about is the AU Deferred Renderer. So this one is a converter audio unit. And the idea here is to have it pull its input on a separate thread from the one on which it's being pulled for output.
And this lets you put a processor-intensive task on a separate thread instead of on the real-time audio thread. So you can process your audio in larger chunks at a slightly lower priority. There's opportunities for taking advantage of multiprocessing. The API is documented in detail in AudioUnitProperties.h And I'll just give you an illustration of when you might want to use it. So here's an example.
It could be anything that I'm using as my source here, but it's an AU Audio file player, and I've got a CPU-intensive audio unit going to an audio output unit. And I'm running the hardware at 128 frames, which is around 3 milliseconds at 44.1 kilohertz. And the fullness of those green IO cycle rectangles is how much of each render cycle is being occupied by this CPU-intensive audio unit. And so we're kind of running on the edge here. We're probably consuming around 85% of the CPU just rendering that audio.
So what you can do if latency isn't so important and you want to actually get some more CPU power back is use the deferred renderer. So now on the upper line here we have the source, the CPU intensive audio unit. Those are running on a separate thread owned by the deferred renderer, which is on the lower line here, and it's still being driven by the audio output unit.
So what this does is it periodically just wakes up the lower-priority thread to do work and pulls it at a larger grain, which you can specify. So in this example, we still have our 128-frame I/O cycles, but we're only doing our rendering in 512-frame cycles. And depending on the algorithm, of course, your mileage may vary, but in many algorithms you can get a good performance gain by processing more samples at a time.
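As a rough sketch of that arrangement in graph form: a matrix reverb stands in here for the CPU-intensive unit, error checks are omitted, and the deferred renderer's pull size and extra latency are set through the properties documented in AudioUnitProperties.h.

```c
#include <AudioToolbox/AudioToolbox.h>

// source -> CPU-intensive unit -> deferred renderer -> output
static AUGraph BuildDeferredGraph(void)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription playerDesc   = { kAudioUnitType_Generator,
        kAudioUnitSubType_AudioFilePlayer, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription effectDesc   = { kAudioUnitType_Effect,
        kAudioUnitSubType_MatrixReverb, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription deferredDesc = { kAudioUnitType_FormatConverter,
        kAudioUnitSubType_DeferredRenderer, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outputDesc   = { kAudioUnitType_Output,
        kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode player, effect, deferred, output;
    AUGraphAddNode(graph, &playerDesc,   &player);
    AUGraphAddNode(graph, &effectDesc,   &effect);
    AUGraphAddNode(graph, &deferredDesc, &deferred);
    AUGraphAddNode(graph, &outputDesc,   &output);

    // Everything upstream of the deferred renderer is pulled on its worker
    // thread, in larger chunks; only the deferred renderer itself runs on
    // the real-time I/O thread driven by the output unit.
    AUGraphConnectNodeInput(graph, player,   0, effect,   0);
    AUGraphConnectNodeInput(graph, effect,   0, deferred, 0);
    AUGraphConnectNodeInput(graph, deferred, 0, output,   0);

    AUGraphOpen(graph);
    AUGraphInitialize(graph);
    return graph;   // call AUGraphStart() when ready to render
}
```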
So just to review, I mentioned the Extended Audio File API, which combines an audio file with an audio converter. And all of these are in the Tiger seed, by the way, except the audio format selection view, excuse me; that will be in the new Core Audio Kit framework. We have three new programmable audio units: the AU Audio File Player, the AU Scheduled Sound Player, and the AU Deferred Renderer. And with that, I'll bring up Bill Stewart, who's going to talk about Mac OS X and OpenAL.
[Transcript missing]
Okay, so OpenAL and Mac OS X. We had some conversations about this last year in the games developer session, and I wanted to give you an update of where we are. So OpenAL is, just to give you some background if people don't know what it is, it's an engine that's designed for games. It's designed to be a complement to OpenGL, so it follows similar API conventions, coordinate systems, and so forth.
It was originally developed at Creative Labs. There's now a specific website devoted to it. It supports multiple platforms and has for a number of years; I think it's been available for about four years now. So on the PC side, there are implementations for the original Mac OS, Mac OS X, Windows and Linux, and you'll also see some support that was recently announced at the Games Developer Conference for game consoles: PlayStation,
GameCube and Xbox. So it's a pretty broad API and it was one of the things that attracted us to support this because of its broad reach. To give you some example of the games that are supporting it in brackets, there's a list of the platforms that are doing it. So there's quite a good collection of games. The OpenAL website actually has a full list of all of the different games and apps. And companies that are supporting it. So if you're interested to see the sort of coverage, that's a good website to look at.
And in the Games Developer Conference of 2004, we announced that we were going to support this in the API, provide an implementation for it. The implementation that we did was based on pre-existing Core Audio services, and I'll give you a bit of an overview of how we've done this.
We also made the source available for the implementation of the Core Audio API for it on the website. OpenAL has both hardware and software implementations. You've got implementations that are implemented in sound cards like Creative Sound Blaster and so forth, and there's also implementations that run on CPUs.
Ours, of course, is running on the CPU. And one of the things we're announcing in this conference is that we're going to be pre-installing the framework in Tiger. It's actually on your TigerSeed disks at the moment. OpenAL framework, that's essentially what's available on the OpenAL website. As we continue to develop and enhance the OpenAL to Core Audio bridge, we'll be putting the source back into the OpenAL organization. So it's also a good reference if you want to see how to use Core Audio in different ways as well, and we fully support the open source. So that's good news, I think.
One of the things that's fairly unique about OpenAL on Mac OS X is that it uses system sound services and automatically will configure to the system that the user has currently installed. So if you have a surround system and the user has gone and set up the surround in Audio/Midi setup, it will be automatically configured and will just play through the surround system.
If you've just got stereo, then the OpenAL logic that's in the implementation will find that it's a stereo device, and then the mixes that it will do will be for stereo. And you can have a look at how to configure systems, which you would need to do if you're doing any kind of multi-channel surround playback in applications, utilities.
There's some abilities there to go in, say, where your speakers are and what channels, and so forth. And having done that, then you're just ready to go, and that's a system-wide service. So other applications that are doing surround will just work in the same way that this does.
This is a diagram of the implementation that we do for it. It uses two audio units. It uses the output unit. Output unit AUHAL is an audio unit that interfaces to the device. We recommend very strongly that unless you're doing very, very particular things with devices that you use the output unit rather than the audio device API. It provides a lot of management of just device state that's very handy for you. The output unit is just going to get data and output it from the device, and it's getting data from the 3D mixer audio unit. This audio unit can take any number of inputs.
It has different parameters that you can set on the inputs like a pitch parameter, distant parameters that can be particular for each input. You can have mono or stereo inputs. And the 3D mixer will just take these. It will take panning coordinates for each input, place them in a sound field. You can describe with the 3D mixer different panning algorithms. One is called vector panning.
Vector panning will just look at the location of a sound. Let's say it's there, and it will look to the closest two speakers, and it will just place the sound in those two speakers. The one we use in OpenAL, and I think it's the more pleasing algorithm for this sort of usage, is a sound field approach.
What that will do is, in each speaker, it creates a representation of the sound field at that location. So if you have a sound over there, that sound will be in all of the speakers, but it will be very quiet. And then as you get closer to the location of the sound, it's louder.
And so they're the kind of things that the 3D Mixer does, and this is also the sort of things that they do on the creative cards and on other hardware implementations. Then the inputs to the 3D Mixer are OAL sources, and a source object is something in the OpenAL API that you can manipulate, that you can set properties on, and so forth. And one of the main things the OpenAL source has is a list of buffers that you can play.
And you can be sharing buffers between different sources. The buffer can have different format characteristics, mono, stereo, etc., different sample rates. And the game engine just basically queues the buffers up to be played. Now in the line between the two buffers, we're using the audio converter API that James went through.
And that'll take the buffers that are on the source and convert them. We've spent a couple of years on this, and we've spent some time doing optimizations to the blitters we use in that path, so the format conversions are all there.
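For flavor, here is a minimal OpenAL sketch of that source-and-buffer model, not taken from the session; the PCM data, its size, and its sample rate are assumed to come from your own loading code, and the device and context setup would normally happen once at startup.

```c
#include <OpenAL/al.h>
#include <OpenAL/alc.h>

// One source positioned in the 3D field, playing a queue of one mono buffer.
static ALuint PlayPositionedSound(const void *pcm, ALsizei bytes, ALsizei sampleRate)
{
    // Open the default output device and make a context current.
    ALCdevice  *device  = alcOpenDevice(NULL);
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    // A buffer holds the sample data; a source is the playable 3D object.
    ALuint buffer, source;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, pcm, bytes, sampleRate);

    alGenSources(1, &source);
    alSourceQueueBuffers(source, 1, &buffer);            // sources play a queue of buffers
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, -1.0f);  // place it in the sound field
    alSourcef(source, AL_PITCH, 1.0f);                   // per-source pitch parameter
    alSourcePlay(source);
    return source;
}
```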
And then we also provide a utility call in the OpenAL library that uses the audio file API to read...
[Transcript missing]
So we spent a lot of time over the last couple of months just looking at the components of this rendering process, seeing where the problems were and where we could optimize the performance, and making sure that we were looking at things like the distance processing.
One of the things that distance will do is attenuate frequencies, so you'll hear fewer high frequencies coming through, and there's also a volume attenuation. In OpenAL you can specify a very nice sort of volume curve, and we've supported a similar thing in the 3D mixer for quite some time, where you can say, "well, within this distance the sound doesn't get any louder or quieter." So you can imagine, like, you're walking up to something, and within a foot of it, it will sound the same; then as you move away the sound should get quieter, and then at some point you can't hear the sound at all. You can specify those conditions when you're doing a game so that your sounds have characteristics of how they're going to be mixed, how they're going to be played.
And you can imagine, if you've got an engine sitting in the corner over here, then in this room we're never going to get far enough away from it. These are some of the things we went through when we were looking at doing this, because this enhances the game experience.
As we went through all of this optimization work, we managed to gain about a 50% improvement in the 3D mixer audio unit as a part of this work. We also profiled a couple of games. Blizzard uses our APIs directly, so we had a look at some of their uses of our system.
Overall, I would say we got about a 50% improvement in rendering from what we had in Panther. And then to give you some idea of the difference between the implementation we provide now and the implementation that was previously available for Mac OS X: on a 1 GHz PowerBook G4, we take about a 5% CPU load to mix 64 sources into stereo.
So that's 5%, and that's not doing some of the fancier stuff; you'll see a bit more of a load if you're doing other kinds of things. But if you think of a 1 GHz PowerBook G4, that's not a high-end machine by today's standards. We're not saying 5% on a 2.5 GHz G5. This is a fairly representative system.
And the kinds of improvements we saw, the next best implementation that was previously available, was about 20% for that same test. So overall, with the improvements to the 3D mixer plus more optimal blitters in the audio converter and so forth, we're down at about 25% of what was previously available.
So that should be really good news for you when you're looking at your game. And I think you've heard enough of all of us talking, so we're going to kill some aliens. Thank you. This is Unreal Tournament, and maybe if Bob walks around, we've got sound coming from all the speakers here.
Have we got the front speakers on? Useless. Left and right. Are you guys hearing sound from all? It's hard to tell from up here. As Bob's sort of moving around, you can see probably the people kind of around here are getting the best listening experience. I'm not. You can see how the sound is moving around. It should be coming to the front of you. I don't know why that is. Find the quiet sound.
You can hear just sort of how the sounds are kind of phasing in and out. You can hear some of the distance work. And that's the things we were talking about earlier.
[Transcript missing]