Mac • 43:24
Designing USB audio class devices that take full advantage of Mac OS X is a critical skill for those developing USB Audio Class 1.0 and 2.0 devices. Discover key details of the Snow Leopard USB audio driver and learn about high-speed streaming, clock domains, latency reporting, and device status. See how to optimally configure descriptors to ensure that your device accurately publishes audio controls and device names.
Speaker: Alison Hughes
Transcript
My name is Alison Hughes. I work in the Multimedia group here at Apple, and today we're going to be talking about designing USB audio devices for the Mac platform. All right, so I'm going to start out giving you a high level summary of what I'm going to be going over today. I'm going to start with an introduction, give you an idea of what our class driver does and where it sits in the system architecture.
Then I'm going to talk about class compliance. Now I'm sure you've heard this message before, but for those of you that are still on the fence about whether or not you should go class compliant with your device, I'm going to give you a lot of good reasons why you should go that way.
Then, for the bulk of my talk, I'm going to give you some tips and tricks so you can design your device to work seamlessly with Snow Leopard. Then I'm going to cover the new 2.0 USB audio features that we've included in Snow Leopard, for all of you who are interested in designing a high speed USB audio device, and then we'll finish up with the conclusion, and I'll open it up to questions. So first: what does the driver do?
Well, our driver is called AppleUSBAudio; we like to call it AUA for short. In summary, it represents your device and its features to the application layer. So if you plug in your device and you launch AMS, that's Audio MIDI Setup, you'll see it. Here is an example: it's our Apple display, which is a USB audio device.
And you can see the volume controls and the mute controls. Now, we've always supported USB Audio 1.0 class compliant devices. And recently, we've greatly enhanced our support of USB Audio 2.0 devices. Now when I say USB Audio 2.0, I'm referring to the USB Audio class device specification. I'm not talking about the USB 2.0 bus specification. And I just want to say that up front, because even among some Apple folks we have some confusion sometimes when we're talking.
So what does our driver not support? It does not support what we call Frankenstein devices. So if you've been to a previous conference, you've heard my predecessor refer to high speed devices with 1.0 USB Audio descriptors as Frankenstein. We don't support these. You might plug in your Frankenstein device, and it might work now, but that's just a coincidence, okay? So for the long term you want to make sure that you go ahead and write 2.0 descriptors for your high speed devices.
So, those are the basics of what our driver does. Now how does AUA fit into the larger system architecture? I've provided a diagram for you here of what the audio stack looks like in Mac OS X. Our driver is an I/O Kit device driver, it sits in the kernel, and it talks to your USB audio device over USB via the facilities provided by the USB family.
Also, the high level generic audio device features are provided by the I/O audio family. Those features include support for maintaining your sample and mix buffers. It also includes user clients for communication between user space and the kernel. Above that, you have the Audio HAL, that's Core Audio, and it represents the device, its controls, and the streams to the application layer.
Now I have another picture here for you to go into more detail about what those entities are. You have your audio device, and it may contain one or more engines. And each of these engines contains one or more streams. Now these are actually not containers, they're more like associations.
So just to clarify, the way Core Audio works, it kind of represents the engines more like devices. So when I get into details about naming, I'll let you know that you'll name your engine, and then in the UI, you'll see the engine name as the device. That can be a little bit confusing.
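To make that picture concrete, here is a minimal sketch, using only the public Core Audio API, of how those engine-backed devices look from user space: it just lists the audio devices that Core Audio publishes (build with -framework CoreAudio -framework CoreFoundation).

```c
#include <CoreAudio/CoreAudio.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

int main(void)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    /* Ask the HAL how many devices it currently publishes. */
    UInt32 size = 0;
    if (AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size) != noErr)
        return 1;

    UInt32 count = size / sizeof(AudioDeviceID);
    if (count == 0)
        return 0;

    AudioDeviceID devices[count];
    if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices) != noErr)
        return 1;

    /* Print each device's name; for a USB audio device this is where the
     * engine name (or the fallback names discussed later) shows up. */
    for (UInt32 i = 0; i < count; i++) {
        CFStringRef name = NULL;
        UInt32 nameSize = sizeof(name);
        addr.mSelector = kAudioObjectPropertyName;
        if (AudioObjectGetPropertyData(devices[i], &addr, 0, NULL, &nameSize, &name) == noErr && name) {
            char buf[256];
            if (CFStringGetCString(name, buf, sizeof(buf), kCFStringEncodingUTF8))
                printf("Device %u: %s\n", (unsigned)devices[i], buf);
            CFRelease(name);
        }
    }
    return 0;
}
```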
So why would you want to go class compliant? Now you've heard the message before. I know some of you developers out there, you're worried, you know: if I rely on the class driver, what's going to happen when I find a bug and my device isn't functioning the way it should? You also may be concerned about the overhead in designing the 2.0 descriptors.
Well, I'm going to address your concerns. First of all, I think the most compelling reason to go class compliant is the reduction in your development costs. So consider: if you do not go class compliant, then you're going to be maintaining two code bases. One code base is your firmware code base, the other one will be the driver. Now you're going to write a driver for the current operating system. And then when we do a software update, you're going to have to test that driver, and it may have some issues, you'll have to fix them, and then redistribute your driver.
Now every time we make a change and release a new OS you'll have to go through the same procedure. So as you know, we have a very aggressive release schedule, so you may be doing this over and over and over. And we really think that you should just leave that to us, and then all you have to worry about is your device firmware. And anyway, this is where you should be focusing, because this is where your expertise is.
Now what I also would like you to consider is your total development cost. Not just the cost incurred up to the point where you ship your product, but the costs that are incurred after you ship your product. Because your product will still be out on the market, you'll have customers using it, and you will still need to support them.
So you need to factor that in when you make those decisions. Additionally, as you know, we had 1.0 devices that were working back in the PowerPC days. Well, when we switched to Intel, those devices continued to work. And now we're making our transition from 32-bit to 64-bit in Snow Leopard. Those companies have to do nothing. Their devices continue to work.
So I think that's a pretty compelling argument right there. Secondly, if you have a class compliant device you get to take advantage of plug and play. That means when your customers buy your device they just plug it into the Mac and it just works. Now many people who own a Mac, the reason why they use one is because of this ease of use, this very high quality user experience.
Well, plug and play fits right into that. Thirdly, free driver testing. So when we maintain the driver, this is what I do every day: we fix bugs and we test our driver. And every time we do a software update, at least once we'll do a device sweep. Now a small device sweep could be anywhere from 20 to 30 devices, and I think in our last sweep, we did up to 80 devices.
So that's a lot of testing. We have an excellent test team in my group, and you know, that's a lot of work to test all those features, but we take a very wide sampling of devices that cover all the features, including the more exotic ones, so we get good coverage. So I want to take this opportunity to invite you, especially those of you that are chip vendors: if you have a new device, please send it to us. We'd like to take a look at it.
I'm going to be introducing Michael Wong at the end of the talk. He is our audio relations person, and you know, just get in contact with us and we'll take a look. Now I can't guarantee if you send us the device that we will test it. But I can guarantee you this. We are more likely to test it if you send it to us.
So please do that. And also, when you go to write your class compliant descriptors we have support for you. We have a lot of resources. First off, USB Prober. This is in the developer package. The first thing I do when I get a device that's been behaving badly is plug it in and launch USB Prober. We have audio-specific parsing built in, so you can check out your descriptors in Prober and see how your descriptors are parsed. And that's going to be your first clue.
The second thing I do is I build a logging version of our driver. Now you can download it, it's open source, and you can build a USB logging version or possibly a FireLog version and look at log messages. This is a great way to debug descriptors and other problems. Another great resource we have is Apple Developer Connection, and I know that most of you are already members, so that's great. But I highly encourage you to consider upgrading your membership to the Select level or higher.
Now if you do this then you'll have access to Seeds. Very important. Because if you have your device and you test it with the Seed before we actually release the software then you can find out if something is going wrong. And if there are problems then you can let us know using Bug Reporter.
Now internally, we refer to this as radar, and this is our primary communication tool when it comes to fixing problems with the driver. So this is your direct line, use it, and make sure that you include as much information as possible. And there's a very good help document on the ADC web site to explain what are good things to put into a bug, and the better the bug is written, the more likely we are to fix it.
So help us out with that. One last note for ADC members: if you're a Select member you do get access to our compatibility lab, and this is a resource that I feel is underused. What it is, is a lab that has many, many hardware configurations that we've shipped.
You can also call ahead and have specific software loaded on those machines, then you bring your device and you can test it on all of those hardware and software configurations. Now if you try to build a test lab on your own it would be very expensive. So it's a steal, so please, take advantage of that.
And lastly, if you're still saying to yourself I can't use your class driver because I have these advanced features that you do not support, and I have this very fancy device, and I need to write my own driver. Well, what I say to you is why don't you rely on the class driver for streaming, and then you can enhance that driver by adding your own custom plug-ins.
So if you want to do some special DSP on the input or output you can do that, you can add on, you can make a USB Audio plug-in. Or if you are worried that your controls aren't exposed the way you would like in AMS you can write your own custom control panel application, which can be launched straight from AMS with the click of a button.
Now currently our sample code that we have on the web is a little outdated. It only supports doing DSP on output streams at the engine level. We have recently revamped that sample code and expanded it to include doing DSP on both input and output streams independently, and we're in the review process right now and we're going to post that shortly. Now when it's posted I will let you know; I'm going to post to the USB mailing list, the Apple mailing list, so we'll keep you updated, so stay tuned.
Now in this section I'm going to give you four ways to improve your device design so that it works seamlessly in Snow Leopard. I'm going to talk about controls and I'm going to talk about how to achieve artifact-free streaming. And I'm going to specifically focus on areas that we have recently enhanced. So I'm going to focus on the advances that we've made over the past year.
So if you're looking for what's new, pay attention to this section. The first tip: I want to talk about controls. So when your user is looking at your device in the operating system, the main point of interaction is around the controls. So I'm going to show you our parsing algorithm, how we examine your audio topology and how we find controls for input streams, output streams, and also for play through. For those of you that are not familiar, we're talking about hardware play through, where for instance you can route audio coming in an input port out an output port directly in the hardware. So it's essentially zero latency.
So in the audio world this is something that's highly desirable. And the last thing I want to talk about is, if there's a change in a control on a device, how you can send an interrupt to the host, and our driver will take notice and update the user interface. So first, a review for those of you that are not fresh on the USB Audio spec: I want to go over audio topology. Now here is a diagram showing you an example that I came up with, a very simple topology.
It has S/PDIF In, Line In, and speaker output. On the outer edges you'll see the Terminals. On the left-hand side you have the input Terminals: S/PDIF In, Line In, and USB streaming. Audio is coming into the device on this side. And on the right-hand side we have the output Terminals.
That's where audio is leaving the device. In the center are various units. The ones you're probably most familiar with are the feature units. This is where usually you'll call out your volume and mute controls. We have a selector unit. This unit switches between one or more input pins and routes that to the output pin unaltered. And then finally we have a mixer unit, and this unit takes one or more input pins and routes them to the output pin, but unlike the selector unit, it actually combines the channels and mixes them together.
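For reference, a feature unit like the ones just described is published to the host as a small class-specific interface descriptor. Here is a hedged illustration of what one might look like on the wire for a stereo path, per the 1.0 class spec; the unit and source IDs are made-up example values.

```c
/* UAC 1.0 Feature Unit descriptor, byte by byte (example values). */
static const unsigned char kFeatureUnitDescriptor[] = {
    0x0A,   /* bLength: 7 + (1 master + 2 channels) * bControlSize       */
    0x24,   /* bDescriptorType: CS_INTERFACE                             */
    0x06,   /* bDescriptorSubtype: FEATURE_UNIT                          */
    0x05,   /* bUnitID: this unit's ID (example value)                   */
    0x02,   /* bSourceID: the unit or terminal feeding this one          */
    0x01,   /* bControlSize: one byte of controls per channel            */
    0x01,   /* bmaControls[0], master channel: bit 0 = mute              */
    0x02,   /* bmaControls[1], left channel:   bit 1 = volume            */
    0x02,   /* bmaControls[2], right channel:  bit 1 = volume            */
    0x00    /* iFeature: no name string                                  */
};
```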
Going back to the mixer: here is a diagram I took straight from the audio spec showing you what a mixer unit could do. And as you see, there's this matrix in the center, and at every intersection point between the incoming channels and the outgoing channels, you can control the volume. So it's a very flexible unit. We don't see a lot of devices implementing it, but I think it's a great tool. So you should definitely consider that. So first I want to talk about input paths.
How do we find the volume and mute controls for your input path? Let's look at Line In. That's my first example. The algorithm we use is we start at the selector unit and we find the first feature unit on the input side. So in this case, it's this feature unit. Now this is a very simple diagram, but it sometimes gets more complicated.
If you don't have a selector unit -- you know, some devices don't have this complicated a configuration -- well, we'll just start at the output side and move toward the input side. The first feature unit we find with volume controls, those are the ones we're going to use. Now output is similar.
Let's look at the speaker path. Our algorithm works by starting at the output side, and the first feature unit we find with volume and mute controls, that's the one we use. It's very simple. I mean, our algorithm is just, you know, what the user would expect: we want to control the volume and mute at the last moment possible, before the audio leaves the device.
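As a rough sketch of the kind of walk just described (this is not AppleUSBAudio's actual code, just the idea): start at an output terminal, follow the source links back toward the input side, and take the first feature unit that publishes volume or mute controls.

```c
#include <stddef.h>

/* A unit or terminal in the parsed topology; source mirrors bSourceID. */
typedef struct Unit {
    int          unitID;
    int          isFeatureUnit;    /* non-zero for feature units              */
    int          hasVolumeOrMute;  /* non-zero if bmaControls has mute/volume */
    struct Unit *source;           /* the entity feeding this one             */
} Unit;

static Unit *FindOutputPathControls(Unit *outputTerminal)
{
    for (Unit *u = outputTerminal->source; u != NULL; u = u->source) {
        if (u->isFeatureUnit && u->hasVolumeOrMute)
            return u;              /* last controls before audio leaves       */
    }
    return NULL;                   /* no usable feature unit on this path     */
}
```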
Now how about play through? It's a little more complicated here. Let's consider the play through path going from line input to the speaker. Now as you can see, there are a few feature units on this path.
We are going to start from the input side, and the first feature unit we find with volume and mute controls, we're going to use. Now the one caveat here is that those controls cannot be shared with any other audio path. It's very important that they are independent. Now if we find a mixer unit, we stop looking. It gets too complicated.
So note, you have to have independent controls. And one thing to note here, so you don't get confused -- this confused me at first -- when we publish those controls, we do publish volume and mute for play through, it's just that the volume controls are not exposed in Audio MIDI Setup, okay? Only the mute control. The mute control shows up as the On/Off toggle for play through, all right?
So if you really want to see your volume controls you're going to have to go into another application like HALLab, and then you can see them. So your typical user isn't going to see those, so my suggestion to you is to set your volume levels to something reasonable, i.e., on, because otherwise it will seem like the play through doesn't work.
So what I think is helpful, since I've had a lot of vendors get confused about this, is to show you what not to do for play through, okay? So look at this -- I have a different diagram up here. You can see I've removed one of the feature units. And here's our play through path again, coming from the Line In, going to the speaker.
And as you can see, that feature unit that we used before is now shared with the Line In record path. And that will be a problem. And let me give you an example as to why this is a problem. So let's say that I'm recording through Line In.
And I'm using the play through to monitor, okay? So I've got that going, I'm listening to my Line In, and I'm recording, and now I say, hey, I don't want to listen to that monitored audio any more. So I turn off the play through. What happens? Your Line In record path is now muted, which isn't what the user expects. So be careful, don't share controls, and you will be fine and you will be able to expose your play through path. Lastly, in the area of controls, I want to talk about status interrupt endpoints.
Now this is a 1.0 feature, it's also in 2.0. We finally implemented this, and it's really cool. Let me give you an example. So if you have a device and there's a volume knob, okay, so you turn the volume up, and via an interrupt, the device can let us know about it, and we will update the controls in the application layer. So they're informed.
So now your volume sliders are updated. So it's very seamless: the user turns the volume knob and instantly in AMS you can see the volume slider going back and forth. It's great. So we really encourage you to take advantage of this feature, because I think that it really provides for a great user experience. By the way, we implemented that exactly to the spec, so you can use the spec as your documentation.
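For the curious, the message a 1.0 device sends on that interrupt endpoint is just a two-byte status word. A hedged sketch of it, per the class spec; the unit ID below is an example value.

```c
#include <stdint.h>

typedef struct {
    uint8_t bStatusType;  /* D7: interrupt pending, D6: memory contents
                             changed, D3..D0: originator type
                             (0 = AudioControl interface)                 */
    uint8_t bOriginator;  /* ID of the terminal, unit, interface, or
                             endpoint reporting the change                */
} AudioStatusWord;

/* Example: a feature unit with ID 5 reports that one of its controls,
 * say the volume, just changed. */
static const AudioStatusWord kVolumeChanged = { 0x80, 0x05 };
```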
Tip two is descriptive naming. We have now given you a lot of control in the way your devices are represented name-wise in the OS. And we've also added support for channel names. And you can either use the predefined channel names that are called out in the spec, or you can come up with your own custom names. So I'll show you how to do that. First, engine and device names. Now I mentioned earlier that sometimes there's a little bit of confusion in the OS, what's the device, what's an engine, where do I get the device names. Well, this is how it works.
If there is an engine name, found using the algorithm that I've shown here, then that will be used for the device as it's represented in the application layer. If there is no engine name, then we'll go ahead and use the device name, okay? So that's how it works. And here is the algorithm we use to figure out what the name is. Now it's a little boring, but I'll go through it with you really briefly. For the engine, we're going to see if you have a stream interface name. If you have one, we'll use that. If not, then we'll look to see if there's a control interface name.
And we'll use that. For the device, we start with the control interface name; if there's no name there, then we move on to the USB product name. And if there's no name there, then we're just going to give you "Unknown USB Audio Device". You don't want this to be you. That doesn't look very good. So try not to have that happen. That's very rare; we rarely see that.
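A simplified sketch of that fallback order (again, not the driver's actual code; the helper names here are hypothetical):

```c
#include <stddef.h>

/* Engine name: stream interface name, else control interface name,
 * else fall back to the device name. */
static const char *PickEngineName(const char *streamInterfaceName,
                                  const char *controlInterfaceName)
{
    if (streamInterfaceName && streamInterfaceName[0]) return streamInterfaceName;
    if (controlInterfaceName && controlInterfaceName[0]) return controlInterfaceName;
    return NULL;
}

/* Device name: control interface name, else USB product name,
 * else the generic string you don't want your users to see. */
static const char *PickDeviceName(const char *controlInterfaceName,
                                  const char *usbProductName)
{
    if (controlInterfaceName && controlInterfaceName[0]) return controlInterfaceName;
    if (usbProductName && usbProductName[0]) return usbProductName;
    return "Unknown USB Audio Device";
}
```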
Now, channel names. You have a choice. You can use the predefined channel names, like I said before -- front left, front right, front center, low frequency effects, all of the standard names. Or you can call out your own names for your channels to really customize your device. Now this is how it might look in an application, to give you an example. This is Logic I have up here. And I'm going to zoom in here. And you can see that input channel one is called External Mic. So this actually came from the device descriptors.
This did not require installing any software. Now here's a little bit of code for you. This is the struct that we use to call out an audio cluster descriptor. Now an audio cluster descriptor is where you're going to specify your channel names. The first element is the number of channels in your audio cluster, bNrChannels. The second element is wChannelConfig. This is actually a bit mask, and when a bit is set it means that you are using the predefined channel name for that channel.
If the bit is not set it means that we're going to use a custom name. Now iChannelNames is an index into the string descriptor array. It calls out the first custom name, and then the rest go sequentially. So if you want to know how many non-predefined channel names you have, you just take the total number of channels and subtract the number of bits that are set in wChannelConfig.
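As a hedged reconstruction of the three fields just described (the field names follow the 1.0 class spec):

```c
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  bNrChannels;     /* number of channels in the cluster            */
    uint16_t wChannelConfig;  /* bitmask; a set bit means that channel uses a
                                 predefined spatial name (front left, ...)    */
    uint8_t  iChannelNames;   /* string index of the first custom name; the
                                 rest follow sequentially                     */
} AudioClusterDescriptor;
#pragma pack(pop)

/* Example: 2 channels, neither predefined, custom names start at string
 * index 4, so custom-name count = bNrChannels - popcount(wChannelConfig) = 2. */
static const AudioClusterDescriptor kExampleCluster = { 2, 0x0000, 4 };
```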
So where do we find your audio cluster descriptor? Well, this is the algorithm that we use. Let's take a look at the speaker path again. The audio cluster descriptor, according to the specification, can only be defined in four entities: the input terminal, the mixer unit, the processing unit, or the extension unit. So we start looking from the output terminal side, and we find the first unit that contains the descriptor. In this case, it's the mixer. That's pretty common. But it could also be called out in your input terminal, the USB streaming one. So, on to the third tip. This is a big one.
How can you design your device such that you have artifact-free streaming? Now I don't know about you, but to me, streaming audio flawlessly is the primary purpose of your device. That's what the customer is expecting. And you would be shocked at how many devices actually don't stream free of artifacts.
So I'm going to give you some tips and areas to focus on to prevent artifacts. Number one, the sync types that you use for your endpoints are crucial. I'm going to give you some guidance on how to pick these appropriately to guarantee that your device is going to work right.
Secondly, I want to talk about max packet size. Many times descriptors are incorrect in this area. And I'm going to show you how we determine how much bandwidth we're going to reserve for you, which will help you to decide what your max packet size should be. All right, endpoint synchronization types. I'm going to give you a quick overview; this is straight from the USB spec. Now I'm talking about plain vanilla USB.
But I'm going to give it a little bit of an audio slant. So first, synchronous. This is the most basic. It's very easy to implement. This means that your endpoint is synced to the host using start-of-frame tokens. The second type we see is asynchronous. This means that the endpoint is not synced to the host.
Possibly it's synced to an external source, or the crystal on the device, perhaps. The third type is adaptive. Now I'm not going to talk about input adaptive endpoints, because we just don't see those. We mostly see output adaptive endpoints. And this would mean that the endpoint is synchronized to the host using the output stream rate.
So that's how they're going to sync the clock. Now I'm going to give you some typical combinations we see between input and output endpoint synchronization types. These work very well. And they're very common. So if you stick to these you're going to be fine. So let's start out. I have input on the left, output on the right. First, synchronous and synchronous. This is the easiest, this is the most straightforward. We get it: your clock is synced to the host using start-of-frame tokens. There's no guesswork here.
We know exactly how your synchronization works. The second type we see a lot is asynchronous input combined with adaptive output. Again, very easy for us from the driver perspective to know how your clocks are synchronized, we know that your clock is being derived from the output rate coming back from the host.
And on the input side you don't have a feedback endpoint, so you're asynchronous. The last combination we see, especially in pro audio devices where you need a very accurate clock, is asynchronous on the way in and asynchronous on the way out. And you know, typically you'll see them using a very accurate crystal on the device, or possibly they're syncing to S/PDIF In, like an external source. So we see that a lot. Stick to these and you're in business. We highly recommend that you do.
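For reference, the sync type being discussed lives in bits 3..2 of an isochronous endpoint's bmAttributes in its endpoint descriptor, per the USB spec. A small sketch of the encoding, using the "asynchronous in, adaptive out" combination as the example:

```c
#include <stdint.h>

enum {
    kSyncTypeNone        = (0x0 << 2),
    kSyncTypeAsync       = (0x1 << 2),  /* clocked by the device or external  */
    kSyncTypeAdaptive    = (0x2 << 2),  /* clock derived from the data rate   */
    kSyncTypeSynchronous = (0x3 << 2)   /* locked to USB start-of-frame       */
};

/* bmAttributes for isochronous endpoints (bits 1..0 = 01 = isochronous): */
static const uint8_t kInputEndpointAttributes  = 0x01 | kSyncTypeAsync;    /* 0x05 */
static const uint8_t kOutputEndpointAttributes = 0x01 | kSyncTypeAdaptive; /* 0x09 */
```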
Now I want to talk about max packet size. It's an area of a lot of confusion. So I want to take this opportunity to clarify. First off, we follow the spec exactly. The spec reads you cannot vary your packet size by more than plus or minus one sample frame.
It's very clear. Secondly, the way our driver works, we're only going to reserve just enough bandwidth as you need on the bus, because we want to leave room for other things, right? So we follow the spec. We're only going to give you the average USB frame size plus one sample frame. Now this is best illustrated with an example for you to understand it clearly. I'm going to take a full speed example: this stream is at 44.1 kilohertz, two channels, 24-bit. Very basic.
So we know that if we're a full speed device, there are a thousand USB frames a second. Now if your sample rate is 44,100 sample frames a second, that gives you an average of 44.1 sample frames per USB frame. Now we know that we cannot straddle two USB frames with one audio sample frame, so we have to use this cadence. So let me give you an example of the exact packets and how they're constructed. So here we've got USB frames in terms of sample frames.
So the first USB packet will have 44 sample frames. The next one, 44 sample frames. We do that over and over till we get to the tenth frame, and then we send 45 sample frames. That's how we get the 0.1 without straddling two USB frames. Now we know that for each sample frame we've got two channels, 3 bytes per sample since we're 24-bit, so that's 6 bytes per sample frame. So we do the math, and now we can see the USB frames in terms of bytes.
So 44 times 6 is 264. So that's the average size. So we send 264 bytes, 264 bytes, all the way up to the tenth frame, then we add on that extra sample frame, which makes 270. So that's what gets reserved for you: 270. Getting this right is very important on input.
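Here is that arithmetic as a small worked sketch, using the same assumed stream format (full speed, 44.1 kHz, two channels, 24-bit):

```c
#include <stdio.h>

int main(void)
{
    const unsigned sampleRate     = 44100;  /* sample frames per second      */
    const unsigned usbFramesPerS  = 1000;   /* full speed: one frame per ms  */
    const unsigned channels       = 2;
    const unsigned bytesPerSample = 3;      /* 24-bit audio                  */

    unsigned bytesPerSampleFrame = channels * bytesPerSample;   /* 6  */
    unsigned avgSampleFrames     = sampleRate / usbFramesPerS;  /* 44 */
    unsigned maxSampleFrames     = avgSampleFrames + 1;         /* 45, the +1 cadence frame */

    printf("average packet: %u bytes\n", avgSampleFrames * bytesPerSampleFrame); /* 264 */
    printf("wMaxPacketSize: %u bytes\n", maxSampleFrames * bytesPerSampleFrame); /* 270 */
    return 0;
}
```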
Make sure that you set these appropriately. If you specify this huge max packet size, it doesn't matter. We're going to reserve for you what you need. So be cognizant of that. And don't make your max packet size too small. Because we've seen that, where you don't allow for that cadence, that extra sample frame occasionally.
So go ahead and, you know, set that properly, and you won't have artifacts. Now the reason why there are artifacts is because we need to use this to generate the time stamps for Core Audio. And if this gets messed up, then we don't generate accurate time stamps, and that's where the problem happens. Also, we can lose data, because if we're not expecting the packet size that you send, it will get dropped on the floor.
So there's your artifact. So the fourth tip I want to talk about today is this great new feature that we've added to the driver. We added this in 10.5.7. The feature is that you can have multiple streams on a single engine. Now up to this point we always put streams on their own engine. So every stream had an engine. Well, we've changed that, and we now give you the ability to combine those streams.
And what's good about it is that since the streams are on the same engine, they're on the same I/O clock cycles, which means they are synchronized. And of course they'll have the same sample rate. So this is a great benefit to the user, and in addition, the user will also experience consistent latency between hot-plugs. Every time they plug in that device they're going to see the same kind of latency. And that's great, because now you can actually measure latency.
That works out great for us too, so we can test it. So you're probably asking yourself, well, how can I take advantage of this? Well, most devices that we've tested already kind of work this way -- many of them do. But let me give you the criteria that we use to figure out if streams can indeed be put on the same engine.
First, we have to check to see that the sync type is the same for all of the alternate settings on the interface. Now if we didn't have this rule it would get very confusing very fast. The second thing we look for is we need to know that the endpoint sync types are compatible. So remember back a few slides when I was talking about combos that work well together, use those and you're in business. It will work.
The third thing we look for is matching sample rates. So like I said, if the streams are on the same engine and we're using the same clock cycles, then the sample rates have to match. So if you change the sample rate on input, we have to change the output to match.
So they need to have the same set of sample rates. And lastly, for those of you designing 2.0 USB Audio devices, the streams need to be in the same clock domain. And I will go into more detail about 2.0 in a moment -- and that moment is right now: 2.0 USB Audio in Snow Leopard.
We've been working really hard on this portion of the driver. So a lot of vendors have been asking us, well, what features do you support? Well the most obvious one is high speed streaming. A lot of you want to do high channel count devices. Well, you can now.
Previously, with full speed at 96 kHz/24-bit, you could only do three channels. But with high speed, you can do 24 channels of 96 kHz/24-bit. So that's a huge improvement. Basically, we can now do 1,024 bytes per microframe. There are 8 microframes per millisecond. So that's a lot of bandwidth. One of the things we don't support at the moment is high bandwidth endpoints. Now this part of the spec is a little tricky.
So let me clarify. We do high speed -- that's 1,024 bytes every microframe. But we can't do multiple transactions per microframe at the moment. The spec allows you to do up to three transactions per microframe. So that's 3K. So we haven't gotten to that point yet, but we are highly interested in this. We have not yet seen a device that does anything like this, so if you are considering that, please come and talk to me after the session today, because we're really looking for devices that do this.
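As a rough back-of-the-envelope check of those numbers, here is the 24-channel example worked out (assumed format; 8 microframes per millisecond at high speed):

```c
#include <stdio.h>

int main(void)
{
    const unsigned sampleRate      = 96000;
    const unsigned channels        = 24;
    const unsigned bytesPerSample  = 3;     /* 24-bit audio       */
    const unsigned microframesPerS = 8000;  /* 8 per millisecond  */

    unsigned bytesPerSecond     = sampleRate * channels * bytesPerSample; /* 6,912,000 */
    unsigned bytesPerMicroframe = bytesPerSecond / microframesPerS;       /* 864       */

    printf("%u bytes per microframe (the limit without high bandwidth endpoints is 1024)\n",
           bytesPerMicroframe);
    return 0;
}
```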
The second big improvement with USB Audio 2.0 is the clock domain support. So now you can use the descriptors to really describe your clock domains accurately. You know the guessing game I was talking about before with the different endpoint combinations? Well, with 2.0 we don't have to guess. It's clearly called out in the descriptors. We really like this part of 2.0. So back to the diagram. Here's your audio topology again.
Now we have some new descriptors. They're the ones in green. We have implemented clock sources. So we have two in this example. We have an external source and an internal programmable clock source. We have also implemented clock selectors. And a good illustration of this is how it might be exposed in Audio MIDI Setup. So as you can see with this example, this device has a selector and you can choose between external clock or device.
So that's really nice. And thirdly, we implemented the interface association descriptor. And this is there to describe an audio interface collection. And you have to make sure that you put that descriptor in, and then also another requirement is that your interfaces come in a specific order. So you start with your control interface, then your audio streaming interfaces, and then finally your MIDI streaming interfaces.
And the interface numbers need to be contiguous. And lastly, we again implemented the status interrupt endpoint for 2.0. And this allows us to do some fancy things, like maintaining a valid clock source. Let me show you how that might work. So let's say you have your S/PDIF In coming in, and you're syncing your clock to that.
As you can see in the diagram, I have that selected. But if you're like me and you kind of have a messy studio, sometimes it happens that I accidentally pull a cable. It's dark, it's late, and I accidentally, oops, pulled the cable. So if you have this interrupt endpoint there, then you can let the driver know and we will switch to a valid clock source. So that's a nice feature that you can do in 2.0.
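As a hedged illustration of the clock source descriptors mentioned above, here is what one might look like on the wire per the 2.0 class spec; the clock ID and control bits are example values for an internal programmable clock whose validity can be read back, which is what makes the cable-pull recovery above possible.

```c
/* UAC 2.0 Clock Source descriptor, byte by byte (example values). */
static const unsigned char kClockSourceDescriptor[] = {
    0x08,   /* bLength                                                     */
    0x24,   /* bDescriptorType: CS_INTERFACE                               */
    0x0A,   /* bDescriptorSubtype: CLOCK_SOURCE                            */
    0x10,   /* bClockID: referenced by clock selectors and terminals       */
    0x03,   /* bmAttributes: D1..0 = 11, internal programmable clock       */
    0x07,   /* bmControls: frequency read/write, validity read-only        */
    0x00,   /* bAssocTerminal: none                                        */
    0x00    /* iClockSource: no name string                                */
};
```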
So to wrap up, the main message I would like you to take home today is that class compliant devices work best on Mac OS X. This is really the best choice for so many reasons. It's been a good choice for a long time, but it's a really good choice now, because you really take advantage of the best that the Mac OS X platform has to offer.
And that is plug and play, ease of use, seamlessness. You know, when your device is class compliant, you get that for free. And I feel like now we have a lot of resources available for you and a lot of support and a way, you know, to get you there and help you with your descriptors.
So it really is the best choice. Another thing I want to add here is that because we're on the inside, we have a unique perspective to take advantage of low-level improvements in the operating system. You know, we have that inside information, so we optimize our driver to work the best.
And this is information that you on the outside don't have access to. So it makes sense to leave it to us, and let us, you know, make streaming work the best it can work, even when things change underneath. I just think it's a great opportunity.
So just take advantage of it. And you'll save yourself some money. Secondly, if I haven't convinced you already, we really feel that the 2.0 USB Audio specification is superior to 1.0 in many regards. Now this isn't just for high channel count, high bandwidth, okay, that's a very compelling reason to go 2.0. But there are other reasons to be 2.0 class compliant.
For one, you get the benefit of those clock descriptors. And I can't tell you how many times we've tripped over this in the driver when we're trying to figure out how devices synchronize; it's such a problem. People get so confused with those endpoints.
It's just that when we have those clock descriptors it's really clear. And then that opens it up for us in the future -- now I'm not making any promises here, but there are a lot of features that we could potentially implement here. Like, let's say, having multiple USB audio devices synchronize together. That would be pretty cool.
Much, much cleaner; we can only do this in 2.0. Really. So if you want to take advantage of those future features that we may add, you know, we're concentrating on 2.0. So let's all move forward together and embrace that new specification. And also, again, I want to remind you: if you are designing a 2.0 or a 1.0 device, feel free to contact us. Talk to me after the session, come to the lab, or send an e-mail to Michael Wong; I'm going to have his e-mail address up here in a moment.
Send us the device. Let us know what you're doing. Tell us what features you want. You know, we're open to listening to what you need. We want to make the most devices work the best on our platform. So we're really interested in what you're looking to do, what your plans are. So talk to us. So that's all I have today.
If you want more information you can talk to Craig Keithley. He's an I/O technology evangelist here at Apple. You've probably seen his name on quite a few slides. Again, Michael Wong, he's going to be coming up in a moment during Q and A, so you can see his face.
He is our audio partnership manager, and definitely send him an e-mail if you have a device or questions. And you can always contact DTS, of course. And then lastly, our documentation is the same as your documentation: the spec. I know it seems kind of heavy and boring and long, but it actually isn't that bad.