Graphics and Media • 1:07:33
The powerful capabilities of Core Audio, Mac OS X's world-class audio architecture, are easily extended through the creation of Audio Units. Audio Units can generate, modify, or amplify audio data to perform basic audio processing tasks or create amazing aural environments. Bring your laptop to this hands-on session and learn the steps to create and validate Audio Units and integrate them into your media application.
Speakers: Chris Rogers, Michael Hopkins
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
My name is Chris Rogers, and Michael Hopkins and I are going to show you how to make an effect Audio Unit and a really nice Cocoa view to go along with it. But first, for those of you who don't really know what Audio Units are about, and for those who do, it won't take long, I'm going to take a brief tour through what an Audio Unit is. So, an Audio Unit is a plug-in specification for OS X. It's packaged as a component, and as such, an Audio Unit host uses the Component Manager to discover the Audio Units available on the system and load them up.
Why should you write an Audio Unit? Well, first of all, it's a specification that's supported by all the top audio applications, or most of them, and quite a large number of other interesting applications. So if your application isn't listed here, your favorite application, then don't be offended. There are quite a lot of really cool ones out there.
And it adds value to your technology. If you have a look at any of the audio forums out there on the web, you'll see that musicians who use OS X are really, really hungry for new plug-ins to run in their Audio Unit hosts. So they know all about Audio Units and they want people to write them. So it's a good money-making possibility, too.
So there are a variety of different types of Audio Units, and the type that we're going to concentrate on today is the effect type. But there are a number of other types, and we'll take a quick walk through those. The effect type, first of all, just takes audio input and processes it in some way. Examples are a reverb for simulating a concert hall, or a graphic EQ for boosting and cutting frequency bands such as bass and treble. Instrument units are software synthesizers.
A couple of different examples there. A new type of Audio Unit in Tiger is the generator, which takes no input but just produces output. And we ship a couple of those in Tiger: the file player and AUNetReceive, which, along with AUNetSend, is able to stream high-quality audio across the network.
There's another type, the mixer, which, as its name implies, just takes a bunch of audio streams and mixes them in some way. And there are a couple of different types that we support on Tiger. The 3D Mixer, for example, is used as the basis of our OpenAL implementation, which Bob Aron is going to talk about in a later presentation.
Output Unit is used to talk to audio hardware, so it would be the final stage in a graph of these Audio Units that are chained together. Format Converter can do things like sample rate conversion, bit depth conversion, or time stretching, that type of operation. And finally, Offline Unit will process audio in a non-linear manner, perhaps even backwards, like an Audio Unit that would take a file and reverse it.
So, why don't we actually see what an Audio Unit looks like running in a demo here. So, could we switch to the demo machine, please? I'm going to be using the AU Lab hosting application, which actually ships on your system. You should have it installed there. I'm going to launch a document that I made--oops. Wrong icon. This is the AU Lab document.
Okay. So what we see here is three separate windows. There's a mixer window here. Here I have a window which is the user interface to the file player, which I mentioned before. And right here, we have a generic view on a filter effect, which is actually what we're going to be taking you through today, showing you how to build this resonant low-pass filter. This right here is just a basic user interface; it's called a generic view.
And the generic view is able to work with any effect Audio Unit. So even if you don't write a custom UI, the system is capable of bringing up a generic view. So why don't I play a loop here, and I'll just play with the filter for a little while, sweep the cutoff frequency.
Okay. Also, we can add another effect. If you click over here, it brings up a pop-up menu of all of the different effect Audio Units that are installed on the system. So I'm going to add a reverb here just really quickly. That's also a generic view. You see that the reverb has more parameters than the filter does.
So, that's pretty simple. The user interface for our filter, though, it's a little bit boring, so maybe we can do something more interesting than that. We'll see a little bit later. Mike will show us how to make a custom view. Can we go back to slides, please? So, I'd like to take a little bit more of a step back, look more abstractly at how hosts deal with Audio Units. So, off to the left we see the AU Lab mixer window, but this could represent any host application.
When I chose the reverb, I added a reverb effect from a list of Audio Units in a pop-up menu. What the host does at that time is open the component, the Audio Unit component, using the Component Manager, and then call AudioUnitInitialize on it in order to initialize the Audio Unit.
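For readers following along in code, here is a minimal sketch of that host-side sequence, using the Tiger-era Component Manager API; the OpenEffect wrapper name is hypothetical:

```cpp
#include <AudioUnit/AudioUnit.h>

// Hypothetical helper: find, open, and initialize an effect Audio Unit.
AudioUnit OpenEffect(OSType subType, OSType manufacturer)
{
    ComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Effect;    // 'aufx'
    desc.componentSubType      = subType;
    desc.componentManufacturer = manufacturer;

    // Discover the component via the Component Manager.
    Component comp = FindNextComponent(NULL, &desc);
    if (comp == NULL) return NULL;

    // Load it, then initialize it before use.
    AudioUnit unit = NULL;
    if (OpenAComponent(comp, &unit) != noErr) return NULL;
    if (AudioUnitInitialize(unit) != noErr) {
        CloseComponent(unit);
        return NULL;
    }
    return unit;
}
```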
It will then typically, in a hosting application, bring up a window for the user interface. And in this case, it's showing the generic view that we just saw. But if the Audio Unit supports a custom view, then the host will instantiate the custom view in the window instead. And the way that the host knows if the Audio Unit supports a custom view is by asking it.
And we'll see exactly how it does that in a little while. The host gives the view a reference to the Audio Unit, so that the view is able to communicate with the Audio Unit to display its parameters and manipulate the Audio Unit in various ways.
Another type of Audio Unit that I talked about briefly was the Instrument Software Synthesizer. So if you're a musician, you could be playing on a MIDI keyboard and those MIDI events that you're playing on the keyboard would go through the Core MIDI system. The host would receive those Core MIDI events and dispatch them to the instrument.
A mixer audio unit could be used as a basis of a mixing engine in a host, which it is, in fact, in AU Lab. And then the output of that mixer is sent to the output unit and then on to the Core Audio HAL, down through the drivers and to the audio device itself.
So a component is packaged as a bundle on OS X. An Audio Unit is a component, so it has a .component extension. In the Finder, it just appears as a simple file, but it's actually a directory. And inside of there, there are several files which are kind of interesting to look at.
And many of these files are the same for any type of bundle. Some are specific to audio units. The Info.plist file contains the bundle identifier and version number of the bundle, some other information. Inside of the Mac OS directory is the executable, where the code of the audio unit lives.
Inside of the Resources folder, if there's a custom view, the view can go there, and we see the CocoaFilterView bundle there. There are some localized resources. And in the .rsrc file, there's some component information, such as the component type, subtype, manufacturer, component name, and that type of thing.
Once the Audio Unit is built, it must be installed in a particular location. There are two main places it can go: either ~/Library/Audio/Plug-Ins/Components or /Library/Audio/Plug-Ins/Components. The only difference is that in the ~/Library location, the Audio Unit will only be visible to that particular user. So if you have multiple users on the machine and somebody else logs in, they won't see the Audio Unit. If you install it in the /Library location, then it will be visible system-wide no matter which user is logged in.
So as a step in developing your Audio Unit, it's really important to validate that it works correctly. In the past, digital audio workstation applications have had tremendous problems with plugins crashing the system, really unstable plugins. And if you load up 10 or 15 various plugins into a hosting application and one of them crashes, it's difficult to even know which one is crashing the system sometimes. It can be quite difficult.
So we've put some effort into developing a command-line tool called auval, which will put your Audio Unit through a kind of torture test and really make sure that it will work robustly in your host application. And we'll be using this command-line tool in a little while to see how we can validate the filter Audio Unit that I demoed.
So, we've covered a bit of background about Audio Units in general. Why don't we see if we can move on to actual code, and look at methods and that type of thing. Before we do that, we have to set up a project in Xcode. The easiest way to do that is just to say "New Project"; there's an Audio Unit template available, so you can choose that your new project is going to be an Audio Unit.
Or an Audio Unit with a view, either Carbon or Cocoa. It will set up your project with all of the relevant source files, so you can essentially just build and you'll have a working Audio Unit. It doesn't do very much, but it works. Before you build, though, there are a couple of things that you'll need to configure. In the resource file, you'll need to set your component type, subtype, manufacturer, and name. And I'll show you how to do that in Xcode in a few minutes. You'll also need to configure the Info.plist file so that the bundle identifier has your company name in there.
And some of you may be wondering, well, what are the implications of building an Audio Unit on the Intel platform? Essentially, it just works. If you use our SDK base classes and support files, it will do all the right things and build for you. At Apple, when we were working on porting our Audio Units over to the Intel platform, we had to change, I think, one line in a resource file or something like that to make this all work. So you'll be able to get this up and running pretty quickly.
So what kind of custom code do you need to write? The base classes do most of the work, but you're still going to have to implement some custom code, of course. And the custom code mainly revolves around dealing with parameters, properties, and signal processing. And I'll talk about each of those in some detail.
This is what the base class hierarchy looks like, and we're mainly interested in subclassing two of these classes: AUKernelBase and AUEffectBase. So getting into more detail with AUEffectBase: when you subclass it, you're overriding methods for parameters, properties, and, potentially, factory presets. And what AUEffectBase does is instantiate several instances of your AUKernelBase subclass.
AUEffectBase assumes that it's doing N-to-N channel processing: for example, two channels in, two channels out (stereo in, stereo out), or one channel in, one channel out (mono in, mono out). If your Audio Unit is not like that, such as a stereoizer effect that's mono in, stereo out, then you wouldn't want to use AUEffectBase; you would subclass AUBase instead. AUKernelBase is where you put your actual DSP, your signal processing code. And as I said, AUEffectBase instantiates N different instances of your AUKernelBase subclass, one per channel.
So I mentioned parameters in passing. What parameters do is provide real-time control of the processing. They're floating-point values, and they have a name, unit type, minimum, maximum, default value, and flags. One example is filter cutoff, like we saw in the low-pass filter that I demoed. Since that's a cutoff frequency, it would have a unit type of hertz. If you had a delay effect, the delay time parameter would probably have a unit type of milliseconds or seconds.
To define a parameter, you override the GetParameterInfo method in your AUEffectBase subclass. In there, you define the name, the unit type, the minimum, maximum, default value, and flags. In addition to that, in the constructor of your AUEffectBase subclass, you'll need to call SetParameter for each of your parameters. This has a dual purpose: first, it defines the parameter, and second, it gives it its initial value.
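As a rough sketch, this is what that looks like for our filter's cutoff parameter; the parameter IDs, name constant, and ranges here are stand-ins for whatever your Audio Unit defines:

```cpp
enum { kFilterParam_CutoffFrequency = 0, kFilterParam_Resonance = 1 };
static CFStringRef kCutoffFreqName = CFSTR("cutoff frequency");

ComponentResult Filter::GetParameterInfo(AudioUnitScope inScope,
                                         AudioUnitParameterID inParameterID,
                                         AudioUnitParameterInfo &outParameterInfo)
{
    if (inScope != kAudioUnitScope_Global)
        return kAudioUnitErr_InvalidParameter;

    outParameterInfo.flags = kAudioUnitParameterFlag_IsWritable
                           | kAudioUnitParameterFlag_IsReadable;

    switch (inParameterID) {
    case kFilterParam_CutoffFrequency:
        AUBase::FillInParameterName(outParameterInfo, kCutoffFreqName, false);
        outParameterInfo.unit         = kAudioUnitParameterUnit_Hertz;
        outParameterInfo.minValue     = 12.0;      // illustrative range
        outParameterInfo.maxValue     = 20000.0;
        outParameterInfo.defaultValue = 1000.0;
        // Published in hertz, so ask the generic view for a log-scale slider.
        outParameterInfo.flags |= kAudioUnitParameterFlag_DisplayLogarithmic;
        return noErr;
    // ... kFilterParam_Resonance is handled the same way, in decibels ...
    default:
        return kAudioUnitErr_InvalidParameter;
    }
}

// In the constructor, SetParameter both defines each parameter
// and gives it its initial value.
Filter::Filter(AudioUnit component) : AUEffectBase(component)
{
    SetParameter(kFilterParam_CutoffFrequency, 1000.0);
    SetParameter(kFilterParam_Resonance, 0.0);
}
```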
Properties are a very versatile way of getting information back and forth between the Audio Unit and a host or view. There are a bunch of required properties that are implemented by the base classes, so you don't have to worry too much about implementing those. They're handled in AUBase and AUEffectBase.
But you're able to define custom properties for passing arbitrary information back and forth between your Audio Unit and your custom view. And we'll do exactly that in a minute. To define a property, there are two methods of AUEffectBase that you'll need to override: GetPropertyInfo and GetProperty.
So I want to talk a little bit about rendering, because it's actually at the crux of what Audio Units do. In a typical scenario, rendering occurs on a continuous, uninterrupted stream of audio. And what the host does is call AudioUnitRender successively, time after time, to process slices of that audio stream.
So when the host calls AudioUnitRender, it provides a number of sample frames to process. It may ask the Audio Unit to render 512 sample frames, and then it will call the Audio Unit again and ask it to render the next 512, and so on. But you shouldn't assume that it's going to be 512 or any particular value, or that the number is going to be the same from call to call. That's for the host to decide, and the Audio Unit is required to render exactly the number of frames the host asks for.
Also, the host is going to be calling AudioUnitRender in a particular thread context. That may be a real-time thread talking to an audio device, or a secondary feeder thread, or an offline context just processing a file. As an Audio Unit, you are completely unaware of which thread context you're running in, and you shouldn't make any assumptions about it.
Your job really is just to render the audio and put the results into the audio buffer list that the host gives you. And for effects, what happens is the Audio Unit first pulls its input, processes it, and then writes the output into the audio buffer list.
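A sketch of the host's side of that contract, assuming unit is an initialized Audio Unit and ioData is a caller-owned buffer list sized for the slice:

```cpp
#include <AudioUnit/AudioUnit.h>

// Drive an Audio Unit in successive slices. The slice size is the
// host's choice; an Audio Unit must never assume a particular value.
void RenderSlices(AudioUnit unit, AudioBufferList *ioData, UInt32 numSlices)
{
    const UInt32 kFramesPerSlice = 512;
    AudioTimeStamp ts = {0};
    ts.mFlags = kAudioTimeStampSampleTimeValid;

    for (UInt32 i = 0; i < numSlices; ++i) {
        AudioUnitRenderActionFlags flags = 0;
        AudioUnitRender(unit, &flags, &ts, 0 /* output bus */,
                        kFramesPerSlice, ioData);
        ts.mSampleTime += kFramesPerSlice;  // keep the stream continuous
    }
}
```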
Now, I said that processing occurs on a continuous stream of audio in a typical scenario. But what happens if the stream is interrupted for some reason? A case where that might happen is in a digital audio workstation, on a timeline change. You're playing along, and all of a sudden the user moves the timeline to, say, the start of the track. That's an interruption in the continuity of the stream, so in that case the host is expected to call AudioUnitReset on the Audio Unit. Muting or unmuting a track in a digital audio workstation is another example.
So when the host calls AudioUnitRender, your Process method is called. Process is a method of your AUKernelBase subclass, and that's where you do the interesting signal processing, your specialized code that makes your Audio Unit special and gives it that really distinctive sound. Your Reset method gets called when the host calls AudioUnitReset.
That's when the continuity of the audio stream is being broken, and you're expected to reset your filter state. If you're a reverb, clear out all of your delay buffers and that type of stuff, so you don't have a giant reverb tail that keeps playing even though you're now at a different part of the timeline.
Factory presets are something that you can decide to put in your Audio Unit if you want. And it's basically a set of parameter values that are useful to the user. So for example, in a reverb, there might be a set of parameters that make up a cathedral setting or a large room or something like that. And we're going to make a couple factory presets in the filter.
In order to do that, there are basically two methods of AUEffectBase that you're going to need to override. The GetPresets method is what gets called when the host is interested in determining what factory presets are available; you give it a list of those as a CFArray. And there's the NewFactoryPresetSet method, which gets called when the host decides to set a particular preset on you.
So I'd like to go to the demo machine, please, and actually show you some of this in Xcode. Now, even though you may have this code on your machine and be looking at it, it probably isn't very practical for you to follow along as I move quickly through the different files here.
But we can look at all of this in more detail in the lab tomorrow, if any of you have additional questions or have some particularly difficult things that you need to do with Audio Units that we don't cover today. The first thing that you should do when you create a new project using the Audio Unit template is configure some basic information about your component.
Now, this is FilterVersion.h. We need to give the component a unique identifier so that it can be distinguished from other Audio Units on the system, and the way the Component Manager does that is through component type, subtype, and manufacturer. The type is already determined for an effect: it's a four-character code, 'aufx'. The subtype we're free to choose, and for the purposes of this project we'll call the subtype FILT, all capital letters.
And you need to choose a four-character code that has at least one uppercase character, because all-lowercase four-character codes are reserved for Apple's use. After choosing the subtype, you should define a four-character code for your company. It has to be unique, and it's recommended that you register your company's four-character code with Apple if you don't already have one.
Aside from the subtype and manufacturer, the Audio Unit has a name. So when we were in AU Lab and we looked in the pop-up menu of available effects, it gave a list of names. So you should, first of all, put your company name here, a colon, and then make up a name for your Audio Unit. We're going to call this Filter. Pretty boring name, but I'm sure that you guys have some much more interesting effects that you can show us.
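Pulled together, the identifiers in FilterVersion.h end up looking something like this; the macro names follow the template's conventions and should be treated as illustrative:

```cpp
#define kFilterVersion       0x00010000
#define Filter_COMP_TYPE     'aufx'  // effect; fixed by the AU specification
#define Filter_COMP_SUBTYPE  'FILT'  // our choice; needs an uppercase letter
#define Filter_COMP_MANF     'APPL'  // your registered manufacturer code
```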
After that, in the targets, if you look at the information for the target and go to the properties settings, there's an identifier here that says com.apple.demo.audiounit.FilterDemo. What you need to do is make that say com.yourcompanyname.audiounit.YourAudioUnitName.
[Transcript missing]
So instead of FilterDemo, it would be MyGreatReverb or whatever. This is GetParameterInfo, a method of AUEffectBase, and Filter is our subclass of that. And we're defining two parameters, cutoff frequency and resonance. We give it a name, and this is a constant here, a CFStringRef, so you can localize it to a particular language if you want. We do the same for resonance.
We give it a unit type of hertz, a minimum, maximum, and default value, and some flags. Since we're publishing this parameter as having units of hertz, we also give it the DisplayLogarithmic flag. So in the generic view, when it creates a slider for changing this parameter, it will be on a log scale, which is how we want to control frequency.
So similarly, we set up all this information for the resonance parameter. It's just a single method that we have to worry about for parameters. And also, in the constructor, as I mentioned, you need to call SetParameter initially, just to define the parameter. So we call it for the two parameters, and that defines each parameter and gives it its initial value.
Okay, that takes care of parameters. Why don't we go and look at the signal processing code? That's in a different class, our FilterKernel class, where the actual signal processing goes on. The Process method is probably the most important method there; that's where we do the interesting filtering. We're given a source buffer, a destination buffer, and a number of frames to process.
So, the first thing we do is get the two parameter values, and we do some bounds checking here. Then, given those parameters, the cutoff frequency and resonance, we need to calculate some internal coefficients for our filter, which is a biquad filter. You don't have to understand how that works, but this is the method that we call to calculate our internal coefficients given the frequency and the resonance.
Once again, the math isn't really that important unless you're interested in it. Finally, we've got the source buffer, the destination buffer, and the number of frames to process. All we're doing is going through a loop here, getting the input sample, doing the filtering operation, and writing the output to the destination buffer. I mentioned AudioUnitReset: when the host calls AudioUnitReset, our Reset method is called, and that's where we clear out our filter state. And as I said, for a reverb or a delay effect, you would clear out your delay buffers at that point.
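A condensed sketch of those two kernel methods; the coefficient helper and the mB*/mX*/mY* state names are stand-ins for the sample's internals:

```cpp
void FilterKernel::Process(const Float32 *inSourceP, Float32 *inDestP,
                           UInt32 inFramesToProcess, UInt32 inNumChannels,
                           bool &ioSilence)
{
    // Read the current parameter values (real-time controls).
    Float32 cutoff    = GetParameter(kFilterParam_CutoffFrequency);
    Float32 resonance = GetParameter(kFilterParam_Resonance);

    // Recompute the biquad coefficients for these settings.
    CalculateLopassParams(cutoff, resonance);

    for (UInt32 i = 0; i < inFramesToProcess; ++i) {
        Float32 x = inSourceP[i];
        // Standard biquad difference equation.
        Float32 y = mB0 * x + mB1 * mX1 + mB2 * mX2
                  - mA1 * mY1 - mA2 * mY2;
        mX2 = mX1;  mX1 = x;
        mY2 = mY1;  mY1 = y;
        inDestP[i] = y;
    }
}

void FilterKernel::Reset()
{
    // The stream's continuity was broken: clear all filter state.
    mX1 = mX2 = mY1 = mY2 = 0.0;
}
```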
Finally, in the code, I'd like to quickly go over how factory presets work. So, we have the two methods that I mentioned. GetPresets is what the host calls to get a list of the available factory presets. We just create a CFArray here, and we go through a loop and append a structure to the array.
The structure is called AUPreset, and it's defined in AudioUnitProperties.h. It has only two member variables: the preset number, which is just an integer, and the name, which is a CFStringRef, so it can be localized to a particular language.
Okay, so here we're just defining two factory presets, and we're going to call the first one Preset 1 and the second one Preset 2. Of course, if you had a real, interesting effect, like a reverb or something, you could give them names like Concert Hall, Small Room, and so on.
So we're just returning a CFArray with our list of factory presets. That's the discovery mechanism. When the host decides to choose a particular preset, NewFactoryPresetSet gets called with an AUPreset structure. We get the preset number first of all, and we go through a loop and find the one that matches the preset number they want selected.
The way I've done it here is with a simple switch statement: I set the parameters a particular way for preset one, a cutoff frequency of 200 and a resonance of minus 5. And for the second preset, preset two, I give it a cutoff frequency of 1,000 and a resonance of 10. You don't have to implement this with a switch statement; you could use the chosen preset number as an index into an array.
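In outline, the two overrides look roughly like this; kNumberPresets, kPresets, and the kPreset_* constants are stand-ins for the Audio Unit's own preset table:

```cpp
ComponentResult Filter::GetPresets(CFArrayRef *outData) const
{
    if (outData == NULL) return noErr;  // host is only probing for support

    CFMutableArrayRef presets = CFArrayCreateMutable(NULL, kNumberPresets, NULL);
    for (int i = 0; i < kNumberPresets; ++i)
        CFArrayAppendValue(presets, &kPresets[i]);  // AUPreset {number, name}
    *outData = (CFArrayRef)presets;
    return noErr;
}

OSStatus Filter::NewFactoryPresetSet(const AUPreset &inNewFactoryPreset)
{
    switch (inNewFactoryPreset.presetNumber) {
    case kPreset_One:
        SetParameter(kFilterParam_CutoffFrequency, 200.0);
        SetParameter(kFilterParam_Resonance, -5.0);
        break;
    case kPreset_Two:
        SetParameter(kFilterParam_CutoffFrequency, 1000.0);
        SetParameter(kFilterParam_Resonance, 10.0);
        break;
    default:
        return kAudioUnitErr_InvalidPropertyValue;
    }
    SetAFactoryPresetAsCurrent(inNewFactoryPreset);  // remember the selection
    return noErr;
}
```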
So that's pretty much all the code that we're going to look at. I want to show you how to install the Audio Unit. I mentioned that there are two locations where you can install it. The Audio Unit is built in the build directory of our project, Filter Demo; it's called FilterDemo.component. And I'm going to put it in the user's home directory, Bob Aron's in this case, under Library/Audio/Plug-Ins/Components. That's where it goes.
Copy it there. Well, it's already there, but I'll say replace. Okay, so we've installed it. Now let's see if it shows up in the system. I'm going to call up the Terminal here, and we're going to use auval to see if the Audio Unit is on the system.
The way to do that is to use the -a option with auval. And lo and behold, among all of these built-in Apple Audio Units, we see that ours shows up right there with the name that we gave it. Now I want to put this Audio Unit through its paces. I mentioned auval being a validation tool that puts the Audio Unit through a kind of torture test, so let's actually do that.
We use the -v option for validation. The component type is aufx, the subtype is FILT, which is what we gave it, and APPL is the manufacturer code. Oh, it printed out quite a lot of stuff there. Let's go up and see what it did. Okay, the first thing it does is give you the name of the Audio Unit and the version number, and tell you how long it takes to open your Audio Unit. Then it gives you some information about the stream format for the input and output, which is two channels, 44.1 kilohertz, linear PCM, 32-bit.
It validates some of the built-in properties, which are mostly taken care of for you in the base classes, and AUEffectBase also takes care of some of this other stuff. Actually, if you look more closely in the Filter.cpp file, you'll see that we support the latency and tail time properties.
So here it's telling us about the custom views that the Audio Unit supports: zero Carbon views, so there are no custom Carbon views, but one Cocoa view. Here it tells us about the presets that we have, so we see our Preset 1 and Preset 2.
Further on, it talks about the parameters, where it gives us information about the unit type, minimum, maximum, default value, and so on, and the flags that we gave it. So auval is actually a useful tool to see if we've published our parameters correctly. With all this information, we can tell practically everything about the Audio Unit, so we can tell if we're doing things correctly in our code just by using auval, before we even run it in a host.
After the parameters, it checks the channel handling. Being a subclass of AUEffectBase, it handles N-to-N processing, as I mentioned. So it should be able to handle 1-1, that's mono; 2-2, stereo processing; 4-4 channels; 5-5; 6-6; 8-8. auval checks all of those. It also tests rendering at various sample rates and processing various numbers of frames.
At the very bottom here, it says "AU Validation Succeeded," and that's the message that you want to see for your Audio Unit, because some hosts actually require that validation succeeds, or the Audio Unit won't even be allowed to run. So that's pretty much it for building the Audio Unit, validating it, and so on. I'd like to call Michael Hopkins up, and he's going to show you how to make a nice Cocoa view for the Audio Unit.
Thanks, Michael. Thank you very much, Chris. Could we go back to slides, please? As Chris showed you, when you create an Audio Unit and load it in a host such as AU Lab, you're already provided with a generic view. But there are many reasons to go one step beyond that and create your own custom view.
For example, there may be some unwanted detail that you want to hide from your user, or you may just want to customize the order in which your parameters appear. Or you have the flexibility to choose the user interface element used to represent a parameter: for example, a knob instead of just the slider and text field that you get with the generic view.
Additionally, you have the flexibility to provide the appropriate amount of eye candy. And you can also provide a branding experience for your user, so that if you are creating many different types of Audio Units, you can use common interface elements and graphics to give a uniform look to all of your plugins. But perhaps the best way to make this point is to look at what you get with our filter demo in the generic view, which, as Chris said, is a little bit boring.
Compare that to the custom view that we're going to be developing here today. As you can see, the custom view shows the same information as the generic view, but we're also adding this beautiful display of the response curve. And instead of just showing you a picture of this, let me go ahead and run a demo to show you how it works in real time. So if I could switch to demo station two, please.
I'm going to launch the same AU Lab document that Chris showed you earlier. And as you recall, there was a generic view for the Audio Unit. I'm going to open a second view on the same Audio Unit, which is our custom view. And this resizes in the same manner that the generic view does, except that the curve updates in real time. I can change the values of these parameters by typing in a text field directly, which updates the graph, as you can see. Or I can click and drag in that view to move the response curve in real time.
You'll also notice that if I go to the generic view and I change one of the parameters in that view, that is updating my custom view at the same time. And this is occurring because our generic view is changing the value of that parameter in the Audio Unit and then sending a notification that that parameter value has changed. Our custom view listens for that notification and in response to that, updates its curve. And that also will occur with the resonance parameter too.
And of course, it's our responsibility when we write our custom view to do the same thing, so that when we update the curve in our custom UI, we send a notification that those parameters are changing, and other views can update their representation of that data as well.
And you'll notice one other thing occurring here: the cutoff frequency and resonance parameters are highlighted in blue. This is because our custom view is changing the value of those parameters and sending what's known as a begin gesture. Hosts use this for automation, but the generic view is using it to highlight which parameters are changing.
And you'll notice both parameters are highlighted, and that's because we have a two-dimensional view that's capable of updating both of those at the same time. And when we release the mouse, we send what's known as an end gesture notification, indicating that we're no longer editing that parameter. Now let's go ahead and listen to that.
As I change the cutoff frequency, we can hear the change in the audio, but we also have this beautiful curve that is really giving us a visual representation of what that sound is going to be like. So this gives the user a lot more information than just the generic view.
OK, could I switch back to slides, please? So now let's take a look at how the view that we are writing fits into the Audio Unit structure. First of all, views can be written using either the Carbon or the Cocoa API. The interface can be created programmatically or loaded via a nib file. In either case, the view is retrieved directly from the Audio Unit via a get-property call, and you see the two identifiers there on the slide, whether using Carbon or Cocoa.
Regardless of which API you use, the host will always provide the window context for you. If you're a Carbon view, you're a ControlRef, and you'll be embedded in the host window's root control. If you're a Cocoa view, your view is an NSView-derived class, and you'll be placed in the content view of that window.
Drawing is up to you. You have an open canvas to do whatever you want, in the API that's most appropriate for your type of view. And the same goes for event handling: if you're a Carbon view, you'll use the Carbon Event Manager, and if you're a Cocoa view, you'll use the methods in NSView that are inherited from the NSResponder class. So I'm not going to cover any of these items further in the remainder of my presentation, since they're purely API-specific. Instead, I'd like to concentrate on the unique pieces that you'll need to write in order to interface with the Audio Unit.
So let's take a look at that now. As you'll recall, Chris mentioned that the Audio Unit is contained in a component file. And inside that component is the filter bundle. And that is where the Cocoa View resides. So let's focus on that right now. Inside that bundle, similar to the component, is an Info.plist that defines the main class name, the name of the nib, and the name of the bundle, and also provides a bundle identifier. There's a Mac OS directory where your Cocoa View executable code lives, and a resources directory where you'll find all the localized resources used by your view, which in this case is a nib file. We also have added a TIFF file that we use as a background image.
Okay, now let's look at what the host does when it loads a Cocoa view. In this case, it's going to call AudioUnitGetProperty with the CocoaUI property identifier. This call returns a structure that specifies, first, a URL to the location of the bundle and, second, a string which is the name of the main class. The bundle location is a URL to a location that's within the component.
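From the host's side, that exchange might look like this sketch; the LoadCocoaViewInfo wrapper is hypothetical, while the property and struct are the real kAudioUnitProperty_CocoaUI and AudioUnitCocoaViewInfo:

```cpp
#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

void LoadCocoaViewInfo(AudioUnit unit)
{
    UInt32 size = 0;
    Boolean writable = false;
    if (AudioUnitGetPropertyInfo(unit, kAudioUnitProperty_CocoaUI,
                                 kAudioUnitScope_Global, 0,
                                 &size, &writable) != noErr)
        return;  // no Cocoa view published; fall back to the generic view

    AudioUnitCocoaViewInfo *info = (AudioUnitCocoaViewInfo *)malloc(size);
    if (AudioUnitGetProperty(unit, kAudioUnitProperty_CocoaUI,
                             kAudioUnitScope_Global, 0, info, &size) == noErr) {
        CFURLRef    bundleURL = info->mCocoaAUViewBundleLocation; // inside the component
        CFStringRef className = info->mCocoaAUViewClass[0];       // the factory class
        // ... load the bundle, instantiate className, ask it for the NSView ...
    }
    free(info);
}
```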
Now we're going to go ahead and look at the source code in the filter class that's required to support that property. And for the remainder of my presentation, I'm going to be showing that code in slides. But you'll notice that there's an icon at the bottom of the slide indicating which source file that code is contained in.
And I encourage you to, at your leisure later on, take a look and go through that code in more detail, because some of it's pretty interesting. To implement the GetPropertyInfo call, we specify that the property is not writable, and then we also specify the size of that structure and return it.
To implement the GetProperty call, we first need to provide the location, or URL, of that bundle, which we do by getting the bundle with the component identifier. Then, further down in the code, we find the Cocoa view bundle within that component and get the URL for it. Once we have that URL, we copy the name of the class and the URL into the structure and return that information.
Okay, when the host is loading that bundle, it's going to load the main class first to instantiate it. And there are a few rules about the way that main class needs to be structured. The first is that the bundle's principal class needs to implement the AUCocoaUIBase protocol.
This protocol specifies that the view factory class implements a factory method that returns an NSView subclass: the uiViewForAudioUnit:withSize: method you see on the slide. That method takes an Audio Unit parameter, stores it in the view along with a suggested size, and then returns the view. If you're using a nib file, like we are in the example, you need to make sure that the owner of that nib file is this factory class.
Okay, now I'd like to talk a little bit about communication between the Audio Unit and its view. These two objects are really separate entities and as such you need to be really careful because they may not necessarily even be living in the same address space. So you want to make sure that you don't do things like pass pointers back and forth or other skankiness. Instead, we recommend that you use a discrete way of communicating with the Audio Unit that Chris previously discussed, which is via the parameters and the properties.
When the view is first loaded, it needs to know what to display, and it does that simply by getting the current parameter values via AudioUnitGetParameter. Then, if the user changes the user interface, it needs to notify the Audio Unit that the value has changed, which it does simply by setting the value of that same parameter.
And the same goes if it's using any custom property. Now we also saw in the demo earlier that our custom view needs to update its state if another object, such as another view, changes the value of that property. And there's a notification mechanism built into Core Audio called the AU Event Listener.
This mechanism will inform our view of any parameter or property changes. It's thread-safe, it allows us to do things like control the granularity of the notifications, and it's used both by hosts and by views.
Let's look at the API now in a little more detail. When we create a listener to register that we're interested in a specific parameter, we use the AUEventListenerCreate function. And similarly, when we're no longer interested, we dispose of that event listener.
Once we've created that listener, we need to specify all the types of events that our view is interested in receiving for each parameter. So if we're interested in the value change, that's a specific event type that we register interest in by using AUEventListenerAddEventType. And similarly, we can remove interest in events as well.
When a particular parameter changes, we get notified, and that occurs via the notification mechanism I previously talked about. It's our responsibility as a view, however, to also make sure that we notify listeners whenever we change our values, and that's done by calling AUEventListenerNotify.
There's also a convenience function provided specifically for parameters. It will not only set the value of the parameter in the Audio Unit, but also perform the same notification. And we'll see this in the code that I'm about to go over in a little bit.
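A sketch of that view-side setup using the C API; MyEventDispatcher is our hypothetical funnel callback, and the interval and granularity values are illustrative:

```cpp
#include <AudioToolbox/AudioUnitUtilities.h>

// All notifications funnel through one callback.
static void MyEventDispatcher(void *inUserData, void *inObject,
                              const AudioUnitEvent *inEvent,
                              UInt64 inHostTime, Float32 inValue)
{
    // Inspect inEvent->mEventType and update the UI accordingly.
}

void AddCutoffListener(AudioUnit au, AUEventListenerRef *outListener)
{
    // Deliver notifications on this run loop, so no threading worries.
    AUEventListenerCreate(MyEventDispatcher, NULL /* user data */,
                          CFRunLoopGetCurrent(), kCFRunLoopDefaultMode,
                          0.05 /* notification interval, seconds */,
                          0.05 /* value-change granularity, seconds */,
                          outListener);

    AudioUnitParameter param = { au, kFilterParam_CutoffFrequency,
                                 kAudioUnitScope_Global, 0 };

    // Register for the three event types the view cares about.
    AudioUnitEvent event;
    event.mArgument.mParameter = param;
    event.mEventType = kAudioUnitEvent_ParameterValueChange;
    AUEventListenerAddEventType(*outListener, NULL, &event);
    event.mEventType = kAudioUnitEvent_BeginParameterChangeGesture;
    AUEventListenerAddEventType(*outListener, NULL, &event);
    event.mEventType = kAudioUnitEvent_EndParameterChangeGesture;
    AUEventListenerAddEventType(*outListener, NULL, &event);
}

// When the user edits the UI, one call sets the value on the Audio Unit
// and notifies every other listener:
//     AUParameterSet(listener, NULL, &param, newValue, 0);
```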
Now I'd like to take a look at the view code that's in our filter demo project. And this filter demo project was created with the Audio Unit template. And so there were a couple of minor housekeeping details that we had to do in order to prepare that project before we started our development. The first thing we did is we went through all the source files and we renamed them to unique names to make sure that we didn't have any namespace collisions with other classes that were based on this template.
We additionally updated the CocoaUI Xcode target to change the identifier of the bundle, the nib name, and the principal class. And then we added that image resource to our project as well. We've done all this for you in the project that's on your desk. When we updated the CocoaUI target, we went into Xcode, clicked on that target, went to the properties tab, and then specified the identifier, class name, and nib file in that dialog.
Once that was completed and we renamed our class, we needed to make sure that we updated our nib file so that the classes in Interface Builder matched as well. I'm going to show an example of how we changed the principal class, which, as you recall, is our factory class. First, I went into that nib file, clicked on the File's Owner object, then opened the inspector, went to the custom class tab, and changed that class to our view factory class. For the demo, we did the same thing for the view class as well.
The template provides us with a default user interface. That's not really useful for our particular Audio Unit, so we removed it and replaced it with a user interface that is more appropriate, which in this case was a parameter text field for the cutoff frequency parameter and a resonance text field. Additionally, we created a custom view in the center for displaying our frequency curve.
Now I'd like to take a look at the steps that occur when the host loads our view for the first time. The host initially gets the view by calling AudioUnitGetProperty. It then gets back the bundle URL and the main class name, loads the bundle, and instantiates that class.
Once that class is instantiated, the host calls the method in the protocol, uiViewForAudioUnit:withSize:, which loads the nib file, creates and initializes the view, and returns it. In the process of creating that view, it will call the setAU method in our code, which caches the Audio Unit, creates and adds any listeners that we need for parameters, and then returns.
So now let's go ahead and take a look directly in the code. As I mentioned earlier, our first step is to create a listener for those parameters that we're interested in knowing about. This is done directly from the setAU method I just showed you. And here we're registering a specific dispatcher method that will be the main funnel for all event notifications. So anytime a parameter changes, that method will be called. We specify the run loop and the mode so we don't have to worry about threading issues. And then set up some granularity and interval values.
Our first step is to add a listener for the cutoff frequency parameter. So we set up that parameter, specify the begin-parameter-change-gesture event type, and then call AUEventListenerAddEventType to register that we're interested in that specific event. And similarly, we do the same thing for the end-change gesture and the value-change event.
Our event dispatcher callback, which we registered previously, simply calls through to a Cocoa method, which then handles those events directly. That method first looks at the event type, and if the type is a value change, we need to make sure that we update our UI.
And we do that by, if it's a cutoff frequency parameter, setting the float value of that field to the parameter value that we get. Additionally, we need to tell the graph to update its view as well for the frequency change. Resonance is handled in exactly the same way, except we need to update a different text field and notify the graph that the resonance parameter has changed, passing the value in there.
Handling the begin gesture is done in exactly the same spot. As you recall from the demonstration, we need to make sure that we highlight our crosshair to indicate that that parameter value is being changed in a view other than our custom view. And we do that simply by telling the graph view to update its state. And we handle the end parameter change gesture the same way.
Okay, that was quite a bit of code, but we only have a little bit more to do. Primarily, now we need to make sure that we can update the Audio Unit when the user has changed our user interface, as opposed to changing our user interface to reflect external value changes.
So the user can interact with our interface in two main ways: directly changing a text field value, as I showed in my demo, or clicking and dragging in our graphs. So to handle the first type of interaction, we've registered a method that gets called via an IB action in Interface Builder.
And here we simply need to get the text value that's in our interface and call AUParameterSet, to not only set the value of that parameter on the Audio Unit, but also notify any listeners that that's occurred. For our custom view, we're doing the same exact thing, but we have the capability of handling two different parameters at a time, so we need to call AUParameterSet on both. Otherwise, the code's the same.
When the user has clicked in our view, we need to make sure that we notify any other listeners that a gesture has occurred. We do that by first creating a parameter, then setting the event type, begin parameter change gesture, and then notifying other listeners that that has occurred. We do the same thing for the resonance parameter. The end gesture is handled the same way, except the event type is different.
Okay, right now we're almost finished. Our UI really can handle a lot of stuff. We can handle parameter changes that are coming from other notification sources, as well as changes that are done directly in our user interface via a text field change or via interaction with our graph view.
But we're really not quite there yet. We don't have the capacity to draw that beautiful curve, because we don't have access to the data; that data is generated in the Audio Unit. So how do we go from this view to one that does have that information? Well, Chris, fortunately, is going to come up and get us out of our dilemma.
So Chris? So what we have here basically is a curve that depends on both the cutoff frequency and the resonance. So if you drag that control point around, of course, the shape of the curve will change. So we need to find out what the curve is for a specific value of cutoff frequency and resonance. And the Audio Unit is the one that knows what that curve looks like. And the view has to ask the Audio Unit about that.
Basically, the curve is just a function of the frequency. It's a response value. Here it's plotted on a decibel scale. And the way that the view is going to get that information from the Audio Unit is by using a custom property. Now, properties are basically just a general way of passing information back and forth between an Audio Unit and a host or a view.
The information is passed with a void* pointer, so it can point to any type of arbitrary data, and a length parameter, so it can be arbitrarily long or short. Inside of the Audio Unit, there are two basic methods that we're going to be interested in today: GetPropertyInfo and GetProperty. It's probably easier just to look at the code next, so if we could go to the demo machine, please.
Okay. Let's have a look at the property stuff here in our Filter class. First of all, GetPropertyInfo. We're giving information about a custom property that we're going to define, called the filter frequency response property. The property ID is just an integer, typed as an AudioUnitPropertyID. We've given it a value of 65536; custom properties must have a value of 64000 or greater, because values below that are reserved for Apple's use.
So, we've defined a property ID, and while we're in this file, why don't we have a look at this structure here. I've defined a structure called FrequencyResponse, which has two members: the frequency, and the magnitude response that corresponds to that frequency. What's going to happen is that the view is going to pass an array of these structures, filling out all of the frequency values that it's interested in, from the low frequencies up to the high frequencies. Then the Audio Unit is going to calculate the magnitude response at each of those frequency values. And we're going to pass an array of 512 of these structures.
And by the way, this header file that we're looking at is shared by the UI and the Audio Unit; they both build off of it. There's no code being shared, just definitions. Okay, so we return the size of the property here. It's not a writable property, just a readable one. And the size of the property is the number of frequency responses, which we saw was 512, times the size of the structure.
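That shared header boils down to something like this; the names follow the sample as described and should be treated as illustrative:

```cpp
// Shared between the Audio Unit and its view: definitions only, no code.
enum {
    // Custom property IDs must be 64000 or greater.
    kAudioUnitCustomProperty_FilterFrequencyResponse = 65536
};

enum { kNumberOfResponseFrequencies = 512 };

typedef struct FrequencyResponse {
    Float64 mFrequency;  // filled in by the view: the frequency to sample
    Float64 mMagnitude;  // filled in by the AU: the response at that frequency
} FrequencyResponse;
```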
The second method, to actually get the property's value, is the GetProperty method. Right here we have the CocoaUI property, which Michael ran you through; that returns the information about the Cocoa view. But the one we really care about is the new custom property that we've defined, the filter frequency response property.
One of the first things that we do is check that the Audio Unit is initialized, because in this particular case, we require that the AUKernelBase subclass, the FilterKernel, be available, and that happens at initialization time. So if somebody tries to ask for this property when the unit is not initialized, we return an error. The custom view checks for that, and if it gets an error back, it won't attempt to display a curve. So, as I said, the information for a property is passed through a void* pointer, and that's one of the arguments to our function right here.
What we do is cast that to a pointer to our FrequencyResponse structure, which we saw has two member variables. Basically, we're just going to go through a loop 512 times, because that's the size of the array, and get the frequency that Michael has asked about and wants to display.
Then we ask the signal processing code, which lives inside the FilterKernel, calling its GetFrequencyResponse method, what the response is for that particular frequency, and we put the result in the magnitude member variable. Let's have a really quick look at the GetFrequencyResponse method to see what the signal processing code does there. It's given the frequency, it gets the sample rate, and then it does some mathematics that's particular to a biquad filter, which you don't have to understand. The point is that it calculates a response value and returns it.
After this loop has done that 512 times, the method returns, and Michael now knows what the curve should look like. So, lastly, Michael is going to take you through a little bit of how that's displayed.
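Assembled, the custom-property branch of GetProperty reads roughly like this; mKernelList and GetFrequencyResponse reflect the sample's internals as described above:

```cpp
ComponentResult Filter::GetProperty(AudioUnitPropertyID inID,
                                    AudioUnitScope inScope,
                                    AudioUnitElement inElement,
                                    void *outData)
{
    if (inScope == kAudioUnitScope_Global &&
        inID == kAudioUnitCustomProperty_FilterFrequencyResponse)
    {
        // The kernels only exist once the unit is initialized.
        if (!IsInitialized()) return kAudioUnitErr_Uninitialized;

        // The void* is really an array of 512 FrequencyResponse structs.
        FrequencyResponse *response = (FrequencyResponse *)outData;
        FilterKernel *kernel = (FilterKernel *)mKernelList[0];

        for (UInt32 i = 0; i < kNumberOfResponseFrequencies; ++i)
            response[i].mMagnitude =
                kernel->GetFrequencyResponse(response[i].mFrequency);
        return noErr;
    }
    return AUEffectBase::GetProperty(inID, inScope, inElement, outData);
}
```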
[Michael Hopkins]
Thank you, Chris. Can we go back to slides, please? So the user interface is responsible for a couple of things. Firstly, it actually holds the frequency response data and therefore needs to allocate the memory, which we do in the source code by calling malloc. Once it's been allocated, in the setAU method that we saw earlier, we initialize the data with the frequency value for each pixel location where we need to be able to draw a corresponding decibel value.
Once we have initialized that data, we can retrieve the frequency response curve from the Audio Unit by calling AudioUnitGetProperty. And then finally, we can draw that curve. So let's take a look at the code there. Again, that's all on your CD, so I encourage you to take a look at it later.
The initialization occurs exactly once, when the Audio Unit is initialized. It also can occur when the window changes size, but primarily that data is static, and we don't need to update it every time we redraw. So we do a little bit of math to calculate those pixels and the values that we need to display, and then we have a loop, very similar to what you just saw in Chris's code, where we compute the frequency value for each of our pixel locations and stick it in the structure.
Then we need to retrieve that data every time we're ready to plot our curve, and that occurs right in our event listener funnel, where our notifications are handled. That's because every time we get a parameter value change, we need to say, hey, Audio Unit, please give me the new curve data, because I need to redraw. We do that by calling AudioUnitGetProperty. Once that information has been filled out, we can ask the graph view to plot the curve. And that's pretty much all there is to it.
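The view side condenses to a sketch like this; mAU, mData, and FrequencyForPixel are hypothetical members of the view:

```cpp
// Allocate once: one FrequencyResponse per pixel column of the graph.
mData = (FrequencyResponse *)
    malloc(kNumberOfResponseFrequencies * sizeof(FrequencyResponse));

// Once (or on resize): fill in the frequency to sample at each pixel.
for (UInt32 i = 0; i < kNumberOfResponseFrequencies; ++i)
    mData[i].mFrequency = FrequencyForPixel(i);

// On every parameter-change notification: refresh the magnitudes and redraw.
UInt32 dataSize = kNumberOfResponseFrequencies * sizeof(FrequencyResponse);
if (AudioUnitGetProperty(mAU, kAudioUnitCustomProperty_FilterFrequencyResponse,
                         kAudioUnitScope_Global, 0, mData, &dataSize) == noErr) {
    // hand mData to the graph view to plot the curve
}
```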
So there are a couple of other sessions that are occurring here at the conference that I encourage you to go to if you have a chance. There's an excellent session immediately following this one on 3D audio, and there's a bunch of really great demos there, so I encourage that. I also would like to stress our lab that's occurring tomorrow at noon.
And any of you that have any questions about Audio Units or if you've had a chance to go through that sample code and would like to get more information, we'll help you out with that. Also, during the beer bash, we have a special event, which is our Plug Fest, and that occurs at 6:00 on Thursday, so please come to that.
And there are a lot of excellent resources out there, and people such as Bill Stewart, who will be coming up in a second to do the Q&A. But I also have a couple of quick tips for you when you're developing your own Audio Unit. First, make sure that you use the Xcode Audio Unit templates that we ship.
You can download those from the ADC site. Make sure you register your manufacturer code with us. Please use auval to validate your Audio Unit and make sure that you catch all your problems. And also make sure that you make use of AU Lab, which is in /Developer/Applications/Audio, to test your Audio Unit view.