WWDC03 • Session 503

USB Update

Hardware • 55:36

Learn about the latest USB APIs, tools, and debugging techniques. We'll show you how to get maximum performance from the IOUSBFamily, and how to make your drivers work across the widest range of Mac OS X releases.

Speakers: Craig Keithley, Barry Twycross, Rhoads Hollowell, Fernando Urbina

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good afternoon. This is a general introduction to USB 2. I have been asked when the harsh introduction to USB 2 is, and I'm really hoping that this is the one you don't get. So, which is the right button? Okay. That's the right button. So, USB 2.

What I'm going to talk about is what USB 2 is, as well as what USB 2 is not, and also what it means to all of you, and a little bit about how it's done, and finally, I'll finish up with a little bit about high-speed ISOC APIs. So, what is USB 2? It is USB, but it adds high speed.

It's compatible with the USB 1.1 specification. It includes almost everything in the USB 1.1 specification. It's almost exactly like the USB that you all know and love, but it adds the high speed. This is high speed, yeah. It will transfer data at a bit rate of 480 megabits a second.

How fast is 480 megabits, you ask? Well, it's 40 times faster than USB 1.1 full speed. 40 times faster. So... How fast is that? Imagine, I presume you're all going to be going to our plug fest on the campus on Thursday evening. So you're going to go outside, you're going to get on the bus, and it'll take you about 40 minutes to get to Cupertino. If that was a high speed bus, though, it would take you one minute. So, you can see it's quite a lot faster.

Okay, what USB 2 is not? USB 2 is not the same thing as high speed, and also USB 2 is not... inefficient. There's a lot of confusion out there in the marketplace, particularly between USB 2 and high speed. USB 2 is not a synonym for high speed. A USB 2 device can be high speed, it can be full speed, it can be low speed. However, recently there has been some confusion about USB 2 hosts. If a computer says that it's USB 2, you can plug in a high speed peripheral and you can get 480 megabit per second communication going.

Okay, so USB 2 includes the full and low speeds that you're already familiar with. These can be called classic speed, although as we already use classic for other things, it's not really a word we'd like you to use. Also, the USB Implementers Forum, the governing body of all this, frowns on you confusing USB 2 and high speed. You should not say if you have a full speed USB 2 peripheral that it's USB 2. You should say that it's just USB.

If you were to say that, say you had a printer and you said on the box that it is USB 2, the user will have an expectation that they plug it in and it will run faster than a previous USB printer. It won't. The user will be upset. You'll get support calls. So, don't do that sort of thing. That's the sort of marketing trick we'd really like you not to do.

Also, USB 2 is not inefficient. There's also some confusion about this out there. In USB 2, the high speed and the full and low speed communications are segregated. They never share the same bus. So, the full and low speed communications don't get in the way of the high speed communications.

There is a theory out there that if you plug a single USB 1.1 device into a USB 2 system, every high-speed device on the system slows down. This is not true. It is absolutely not true at all. We try and dispel myths like that. There are also some performance tweaks in USB 2 to make it work even better than it did before. The bulk packets, all of them, are now 512 bytes instead of the 64 bytes or less that they were previously. This reduces the ratio of protocol overhead to data, so it will run faster. And then there are also things like the NYET handshake and the ping protocol.

Previously in USB 1, if you wanted to send data out to a device and it couldn't accept it, the host would send the OUT token and the data, and the device would receive the data and say, sorry, I couldn't actually deal with that, oh well. And then you'd have to keep sending all this data until it could, and this could use up an awful lot of bandwidth doing nothing. Now, you send the data out, and the device will say, yes, thank you, I got that data, but I don't have any space for any more. It will say NYET as a handshake.

Now, NYET actually just means not yet; it has nothing to do with a Russian word which looks similar. So in this state, the host will not speculatively send data to the device. It will instead send pings. "Ping! Do you have any space for data?" And the device will either say NAK, "no I don't," or ACK, "yes I do," and then you can carry on sending data. So you're not constantly using up bandwidth doing nothing.

Okay, what does it mean to you? If you're a device developer and you have a classic speed, full or low speed device, the answer is very little. If you want to make your device a USB 2 device for some reason, all you need to do, usually, is to change the version number in the device descriptor. Instantly, you're a USB 2 device.

However, you should be sure to conform to the 1.1 spec, because you should be sure to stall requests that you don't know about. In particular, USB 2 defines a new descriptor, the device qualifier descriptor. This is used by the system to ask the device, "What can you do if you're plugged in at the other speed?" If you're a full-speed device, you don't support this, because you don't support high speed. There is no other speed.

So if the system asks this and you gave the wrong reply, the system may think, hey, this device is capable of working at the other speed; I should tell the user to plug it into a different port, it will work better. And the user is confused. So be sure to stall requests that you don't know about. You should be doing this already, though we know not everyone does.
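(For illustration, here is a minimal sketch of the two descriptors being discussed, laid out as the USB 2.0 specification defines them in sections 9.6.1 and 9.6.2; the comments restate the advice above, and none of this is Apple-specific code.)

```c
#include <stdint.h>

/* Standard device descriptor (USB 2.0 spec, section 9.6.1). Making a
   full- or low-speed device "USB 2" is usually just a matter of
   reporting bcdUSB = 0x0200 here. Multi-byte fields are little-endian
   on the wire. */
typedef struct __attribute__((packed)) {
    uint8_t  bLength;             /* 18 */
    uint8_t  bDescriptorType;     /* 1 = DEVICE */
    uint16_t bcdUSB;              /* 0x0110 = USB 1.1, 0x0200 = USB 2.0 */
    uint8_t  bDeviceClass;
    uint8_t  bDeviceSubClass;
    uint8_t  bDeviceProtocol;
    uint8_t  bMaxPacketSize0;
    uint16_t idVendor;
    uint16_t idProduct;
    uint16_t bcdDevice;
    uint8_t  iManufacturer;
    uint8_t  iProduct;
    uint8_t  iSerialNumber;
    uint8_t  bNumConfigurations;
} usb_device_descriptor_t;

/* Device qualifier descriptor (section 9.6.2): "what can you do at the
   other speed?" Only a device that supports both full and high speed
   returns this. A full-speed-only device has no "other speed", so it
   should stall a GET_DESCRIPTOR request for it, just as it should stall
   any request it does not recognize. */
typedef struct __attribute__((packed)) {
    uint8_t  bLength;             /* 10 */
    uint8_t  bDescriptorType;     /* 6 = DEVICE_QUALIFIER */
    uint16_t bcdUSB;              /* at least 0x0200 */
    uint8_t  bDeviceClass;
    uint8_t  bDeviceSubClass;
    uint8_t  bDeviceProtocol;
    uint8_t  bMaxPacketSize0;     /* ep0 max packet size at the other speed */
    uint8_t  bNumConfigurations;  /* number of other-speed configurations */
    uint8_t  bReserved;           /* must be zero */
} usb_device_qualifier_descriptor_t;
```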

Oh, sorry. Okay, and a slight modification is that ISOC endpoints now must use no bandwidth by default. Previously, this was a very good idea and a common practice: in the unconfigured state, or in the initially configured state, all the endpoints in an ISOC interface would have zero bandwidth. You would then configure it to use more bandwidth as you go along.

This stops a device which is actually plugged in, but idle, from taking up bandwidth. Otherwise the user will plug in a device and not use it, then plug in the video camera he wants to use, and he gets "Oh, there's no bandwidth available." The other device is using it, even though the user isn't. So, be careful with bandwidth, and in particular, the specification now says that you should not use any bandwidth by default.

There are also some changes to the low-speed cabling requirements. If what's written on the slide actually makes any sense to you, you should be sure to go read Chapter 6 of the spec. In particular, you should be sure to read the second half of Chapter 6 in the spec, because the relevant parts in the first half don't mention this, but the ones in the second half do. Oh, well.

Okay, what does it mean to device developers? You have a huge opportunity. You can make your device work much faster. Users like this. And by much faster, I mean that, say, for isochronous or interrupt devices, they can now have 24 megabytes a second. The obvious candidate for this sort of thing is video.

Video usually uses megabits, so I should convert to megabits temporarily. This gives you 183 megabits a second to use for your video. And if you consider that DV is standard 25 megabits, you have plenty of space for DV. Uncompressed DV is 125 megabits. You can use uncompressed DV. You can use your imagination as to what else you want to put in all this bandwidth that you have to use.

Also, bulk runs much faster now. 20 to 24 megabytes a second is a pretty typical transfer rate for bulk. So, typically now, hard drives go a lot faster, and users are very happy about this. The transfer rate for bulk is mainly constrained by the host controller. So, different host controller implementations in future may go faster. We think that 35 to 40 megabytes a second is not unreasonable for future controllers. In fact, I've seen an early version of one recently, which actually looks like it could manage 39 megabytes a second if I could find a hard drive fast enough to keep up.

What does it mean to driver developers? I presume there's a lot of driver developers out there. Hopefully, very little. We're here to make your life easy. And for most of you, you should do nothing, and it will just work. This is USB. Once you've got over the fact that it's running high speed, if it even is running high speed, all the rest of the protocols are more or less the same as they were before, with the exception of packet sizes and whatever. So in general, you should not care how fast your device is running. Just transfer the data as fast as it's going, and be thankful that it comes in that fast. Don't worry about it.

The exception to this, of course, is ISOC. If you're doing ISOC, everything has changed. All the timings are different. Now, instead of the one millisecond frames that you had before, you have 125 microsecond microframes. And you put together eight of these, and you have a good old traditional frame at one millisecond. So all your timings now have to be in terms of microframes, not frames, if you're doing high speed, that is, of course. And in each one of these microframes, you can transfer up to 3K of data. That adds up to the 24 megabytes I'd mentioned earlier.
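(For reference, the arithmetic behind that figure, taking 3K as 3 × 1024 bytes: 3,072 bytes per microframe × 8 microframes per millisecond × 1,000 milliseconds per second = 24,576,000 bytes per second, or roughly 24 megabytes a second.)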

So, how is all this done? On the bus, we use a new controller standard, the enhanced host controller interface. This is a companion to the open host controller interface, which runs full speed and low speed, as we did previously. There is support for this in Panther, a new driver, the EHCI driver.

The EHCI only works with high speed. As I mentioned, the speeds are segregated. There will be more on this later. So, the EHCI is dealing there solely with high speed transactions. It needs something to help it with the full and low speed. It actually has companion controllers, which in our case will be OHCI controllers, like you're used to.

And also, how is this done? You have all these speeds going on at once, and they need to be segregated. The answer is magic hubs. And there's going to be more, Rhoads is going to be telling you a lot more about those later, but these are really complicated beasties. They've got most of a host controller in them, because they have to translate between high speed and full and low speed.

So, how are you going to do it? First of all, you need some hardware which supports USB 2.0 high speed. You can either buy one of our lovely new Macintosh G5s, or you can get yourself a PCI card or a PC card and plug it into your existing hardware. Then you need some software.

You need the version of the USB family we have in Panther. This will enable high speed. The version number of the software is now 2.0. We reserved the number 2.0 a long time ago so that no one would ever get confused between USB spec 2.0 and USB software 2.0.

There's been plenty of that sort of thing in the past, and we didn't want to do it. As you notice, we've almost run out of version numbers. We're up to 1.99 already. So, it's good. Good job we did it now. Okay. When writing your driver, I suggest that you first of all do it full speed like you're used to, and then plug it into a high speed port, and it will just work, and you'll be happy. And like I said, we make it easy for you.

However, if it doesn't work, this may be where the harsh introduction to USB 2.0 is. So, be careful. Okay. Tools you might use. I may have mentioned before that I think a USB analyzer... ...is a really useful thing to do USB software with. And it still is a really useful thing to do USB software with.

Now, if you're doing high speed development, you should really get yourself a high speed analyzer. The usual suspects make high speed analyzers. CATC has the Advisor. Catalyst has the SBA20. Data Transit has the pod for their machine. They have pods for every sort of bus imaginable. And I'm sure there's more out there, and I didn't mean to leave them off the list. Okay, now I want to tell you a little bit about high-speed ISOC. As I mentioned before, high-speed ISOC is different.

The driver has to know that it's different. The device has to know that it's different. The timings are different. You have 125 microsecond microframes instead of the 1 millisecond frames. And the amount of data you can transfer in any microframe is increased to 3K. There is one small wrinkle in this, which is that now the frequency of an ISOC transaction does not have to be one. You don't have to do one every microframe or one every frame as you used to.

You can now specify 2 or 4 or 16 or whatever. However, we haven't seen any devices which do this, so we have not implemented that yet. If you have one, we'd love to hear from you. Okay, having said that high-speed ISOC is different, high-speed ISOC is also the same, in that we recycle the old APIs, use exactly the same APIs, we just treat the parameters you send us a little differently.

As I said, high-speed ISOC is different from the device's perspective. The packets on the bus are different, the timings are different, you have 125 microsecond microframes. Here are diagrams of what the device sees. If you have a small transaction, less than 1K, and note you can actually go up to 1024 bytes, where you could only previously go up to 1023, it's still just in one data zero packet, like it always used to be.

However, if you need more data than that in a microframe, you send multiple packets of a suitable size to add up to the size that you want. It will say, if you're sending data out, the host will say, here's a packet, there's more to come, by saying data one, data zero, or data two, data one, data zero, it will count down like that. Similarly, if the device wants to send the host multiple data packets.

If you're 1K or less, it will just send the single data packet like it always did. If it's more than 1K, you send it in multiple data packets, the device will say, there's more to come, there's more to come, oh, that was it, and you got one or two extra packets. That's encoded in the packet identifiers sent with these. Interrupt will also allow you to do 3K of data per microframe. But it uses the same old data zero, data one, alternating PIDs that it always did.

Okay, the high-speed ISOC APIs. As I said, the APIs used are the same as they were previously. The interpretation of some of the parameters is different. Here you see a typical ISOC API, ReadIsochPipeAsync. The things to note are the parts highlighted in orange, and the first to note is the frame start.

Now, frame start has not changed, even if it... Oh, sorry, I missed something out there. Whereas previously you had frames, you should now consider these to be transfer opportunities. Where an API said frame, you should interpret that as a transfer opportunity, whether it's a transfer opportunity once per millisecond or once per microframe. So as I was saying, the frame start has not changed. A high-speed ISOC transaction starts on a frame boundary, not on a microframe boundary.

Next, the numFrames is now the number of transfers you want to happen: either a number of frames for full speed, or a number of microframes for high speed. And finally, the frame list now specifies the list of transfers that you want to happen, either in frames or in microframes for high speed. And there is also a new API, GetFrameListTime, which actually tells you how long your transfer opportunity lasts, either 125 microseconds or 1,000 microseconds, so you know what the timing should be, or in effect, whether you're running full speed or high speed.
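(For illustration, here is a rough user-space sketch of that call through IOUSBLib. It assumes intf is an already-opened isochronous interface obtained as an IOUSBInterfaceInterface197, that pipeRef refers to an isochronous IN pipe, and that the buffer size, frame count, and start offset are placeholder values; completions only arrive once the interface's async event source has been added to your run loop.)

```c
#include <IOKit/usb/IOUSBLib.h>

enum { kNumFrames = 64 };                       /* placeholder transfer count */
static IOUSBIsocFrame gFrameList[kNumFrames];
static UInt8          gBuffer[kNumFrames * 1024];

/* Completion routine: frameList[i].frActCount reports how many bytes
   actually arrived in transfer opportunity i. */
static void IsochComplete(void *refcon, IOReturn result, void *arg0)
{
    IOUSBIsocFrame *frames = (IOUSBIsocFrame *)arg0;
    /* ...inspect frames[0 .. kNumFrames-1] here... */
}

static IOReturn StartIsochRead(IOUSBInterfaceInterface197 **intf, UInt8 pipeRef)
{
    UInt32       usecPerOpportunity = 0;
    UInt64       busFrame = 0;
    AbsoluteTime atTime;

    /* New with the USB 2.0 support: 125 means high speed (microframes),
       1000 means full speed (frames). */
    (*intf)->GetFrameListTime(intf, &usecPerOpportunity);

    /* frameStart is still a frame number, even at high speed; start a
       little in the future. */
    (*intf)->GetBusFrameNumber(intf, &busFrame, &atTime);

    for (int i = 0; i < kNumFrames; i++) {
        gFrameList[i].frStatus   = kIOReturnInvalid;   /* "not done yet" placeholder */
        gFrameList[i].frReqCount = 1024;               /* bytes requested per opportunity */
        gFrameList[i].frActCount = 0;
    }

    return (*intf)->ReadIsochPipeAsync(intf, pipeRef, gBuffer,
                                       busFrame + 10,     /* frame start */
                                       kNumFrames,        /* transfer opportunities */
                                       gFrameList,
                                       IsochComplete, NULL);
}
```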

Okay? So, in summary, USB 2 is USB, but it will run faster if it's high speed, if there's a high speed host, and if there's a high speed device. But most of the time, it looks just like the USB you always knew and loved. As I say, USB 2.0 is just like full speed if you're not a high speed device.

And don't worry about the speed if you do not have to. Just transfer the data and don't worry about it. You do need to worry about the speed if you're doing ISOC there. And the high-speed ISOC APIs are different. We use the same APIs, but we reinterpret some of the parameters. And with that, I shall turn it over to Rhoads. Thank you very much, Barry.

Okay, I'm going to talk a little bit about USB 2.0 hubs. The USB Implementers Forum worked very hard to make sure that USB devices just worked no matter whether you were on an old USB 1.1 bus or on a USB 2.0 bus. And as Barry said, however, the data traffic is actually segregated. You do not have full speed and low speed data on the same wire as high speed.

[Transcript missing]

Now most of this is transparent to driver writers because the drivers don't really care too much about which kind of host they're on. The exception would be the hub drivers, which do need to know whether the hub is attached to a high speed bus running in high speed mode, or to a full speed bus running in full speed mode. Luckily Apple produces the hub driver for Mac OS X, and so you wouldn't necessarily need to worry about that.

Now, having this magic hub can cause some problems if it's not configured properly. For example, you may have a high-speed hub, and if you plug it into the root hub of your computer, it runs in high-speed mode. If, however, you plug it into a keyboard hub or a display hub that is a 1.1 hub, your high-speed hub has now become a full-speed hub. And this can cause some confusion if you have high-speed devices attached to it, and you want them to work in high-speed mode.

Because once a hub is running in full-speed mode or low-speed mode, well, full-speed mode, then all downstream devices of that hub have now become full- or low-speed mode devices. Now the segregation that happens between the high speed bus and full speed and low speed devices happens in something inside the hub called a transaction translator.

Now these transaction translators, there is at least one of these in every high speed hub. And in most high speed hubs that I've seen on the market today, there is exactly one transaction translator. However, it's possible to have a transaction translator on every port of a high speed hub. And this produces a much more complex hub, but at the same time allows some benefits.

You can have one full speed controller's worth of bandwidth for isochronous and interrupt endpoints on each transaction translator. Which means if you have one transaction translator in a high speed hub, you have one full speed

[Transcript missing]

So multi-TT hubs can have more full-speed ISOC devices, for example, attached than a single TT hub would be able to have.

Again, the root hub, being special, does not use transaction translators at all. What happens is when you plug in a full-speed device to a root hub port, the high-speed controller sees that it's a full-speed device and electrically disconnects itself from the port and passes the connection over to the companion controller, and that device is now running on an OHCI controller. These transaction translators in the hub are therefore store and forward units for what are called split transactions.

Now, a split transaction essentially is a transaction where when the host wants to talk to a full-speed device or a low-speed device, it transfers information to the hub to which that device is attached in high-speed mode. The hub stores that information, buffers it up, and then transfers the information to or from the full-speed or low-speed device, separate from any activity happening on the high-speed bus.

This transfer occurs in two parts called start split and complete split. And in between a start split and a complete split, which is the information going to and from the high-speed hub, there can be more high-speed activity happening while the hub is doing full-speed or low-speed communication with the downstream device. Now, this works very nicely. And allows you to plug in devices into a hub without really paying much attention to it, but it can have some significant performance issues. And you probably should be aware of these.

So here, let me show you graphically why that occurs, why you can have some performance issues. Let's say that I have a host, and I have a full speed hard drive attached to a high speed hub. And the host needs to send out a data packet to this hard drive, and it's an out packet from the host to the hard drive.

So what happens is first the host sends a start split command to the hub, saying I have a packet, a data out packet destined for this hard drive. It sends the out PID to specify that it is an out packet. Sends the data, a 64 byte data packet for example, and receives an acknowledgement from the high speed hub that that data was successfully received. However, that acknowledgement is that it was received by the hub, not that it was received by the device.

Then the host and the other high speed devices can continue to talk while the hub starts its own transfer to the device. It sends the same out PID to the device. It sends the 64 byte packet to the device, which takes significantly longer at this point.

And it receives an acknowledgement from the device that everything was received successfully. Now the hub remembers that acknowledgement and says, okay, at some point the host is going to ask me whether the device got the data or not. So when the host is able to do so, it sends a complete split to the hub, and the hub is then able to acknowledge that transaction. So that completes what used to be an old fashioned out, data, ACK transaction, but it takes a little bit longer.

The time between when the full-speed device acknowledges the receipt of the data and the beginning of the complete split transaction can depend on what other activity is happening on the high-speed bus. And because of this, it can cause significant delay in this data. And the upshot is that a full-speed hard drive, for example, attached to a high-speed hub can end up with actual throughput to the drive that is half what it is if it's attached to a full-speed bus.

Now, some hubs, as I mentioned earlier, high speed hubs, have a single TT and some have multiple TTs. And as I mentioned, each transaction translator essentially is a full speed bus. So this is especially important for people doing ISOC work with these hubs because, let's take for an example, if you have a camera that transfers 980 bytes per millisecond frame, it's a full speed camera.

Well, if you have two of these cameras and you try to plug them into two ports of a single transaction translator hub, only one of the cameras is going to work because that's going to saturate the full speed bandwidth available to that hub. If the hub has multi TTs and you plug a camera into each of two ports, each port has its own full speed isochronous bandwidth and both cameras can work.
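(Rough numbers, ignoring protocol overhead: a full-speed bus carries 12 megabits a second, which is about 1,500 bytes per one-millisecond frame, and only part of that can be reserved for isochronous traffic. Two cameras at 980 bytes per frame want 1,960 bytes per frame, more than one transaction translator's full-speed budget; with a transaction translator per port, each camera has its own full-speed frame to draw from.)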

A more likely example might be a camera with a microphone for input and a pair of USB speakers for output, all of which use isochronous bandwidth. And so with a single TT hub, you may have a difficult time, or have reduced quality of your video input or output, because you're trying to share the bandwidth amongst all the devices, whereas with a multi TT hub, you'd be able to plug the camera and the speakers into two separate ports and they each have their own bandwidth, so it's fine. Multi TT hubs also allow for better throughput with bulk devices as well.

Now, I have mentioned before that the root hub is different because we have these companion controllers, so there's no need for the transaction translator. Again, the EHCI driver, if it exists in the system, determines that, oh, this is a full-speed device, I need to switch it over to the companion controller, and it disconnects itself from that particular port. So, no split transactions are necessary. What ends up happening is you have a completely separate OHCI controller running your full speed and low speed devices, and you get the same performance as you do today on a full speed OHCI controller.

So in summary, split transactions allow for a seamless transition from USB 1.1 to USB 2.0. Your full speed, low speed devices just work. You plug them into a hub and everything works great. However, you can end up with some performance issues which you might want to be aware of.

And any performance critical full speed devices may want to consider somehow telling the user to put the device on an OHCI bus. One way to do this, for example, would be, let's say you have two EHCI ports on a computer, or three or four from a PCI card or whatnot. You might plug a high speed hub into one port and a full speed hub into another port and have all the high speed devices living on the high speed hub and all the full speed devices living on the full speed hub.

Because a full speed hub will provide data transfer on the full speed OHCI bus, and a high speed hub will provide these transfers on the high speed bus. So with that, I'm going to turn it over to Fernando Urbina, and he's going to talk about the new APIs.

Thank you, Rhoads. We're going to leave USB 2.0 aside for a little bit now and talk about what we've been doing since the last time we met with respect to adding APIs to our family. We've added both kernel and IOUSB library updates, and on request from multiple developers, we finally have a way to get the version number of the family and the USB library programmatically, instead of having to go by hand to get the CFBundleVersion and all that stuff. So that is available to you. We did add, like Barry mentioned, a couple of USB 2.0 related APIs.

One is to get the frame list time, whether it's 125 microseconds or 1,000 microseconds. And the other one is to get the bus microframe number. This 64-bit value encodes both the frame number and the microframe number at the time that the call is made. And just like the regular bus frame number call, it is time stamped so that you can know exactly when we got that frame number.
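(A minimal sketch of that call, assuming intf is an already-opened IOUSBInterfaceInterface197; the printed format is just for illustration.)

```c
#include <stdio.h>
#include <IOKit/usb/IOUSBLib.h>

static void ShowMicroFrame(IOUSBInterfaceInterface197 **intf)
{
    UInt64       microFrame = 0;
    AbsoluteTime atTime;     /* when the number was sampled */

    if ((*intf)->GetBusMicroFrameNumber(intf, &microFrame, &atTime) == kIOReturnSuccess)
        printf("bus microframe number: %llu\n", (unsigned long long)microFrame);
}
```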

One request that we've had for a long time as well was to have a more straightforward API to recover from a stall. Presently, when you get a stall, you have to clear the data toggle both on the controller and on the device, and this involves having to issue a device request to your device with a clear feature, endpoint halt, blah, blah, blah. Now you have ClearPipeStallBothEnds, which will do that for you, and so that will make your life easier.
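(For illustration, here is a sketch of both recovery paths, assuming intf is an IOUSBInterfaceInterface190 or later and that pipeRef and endpointAddress identify the stalled endpoint; error handling is minimal.)

```c
#include <IOKit/usb/IOUSBLib.h>

/* The old way: clear the host-side state, then send
   CLEAR_FEATURE(ENDPOINT_HALT) to the device yourself. */
static IOReturn RecoverStallOldWay(IOUSBInterfaceInterface190 **intf,
                                   UInt8 pipeRef, UInt8 endpointAddress)
{
    IOUSBDevRequest req;
    IOReturn        kr = (*intf)->ClearPipeStall(intf, pipeRef);
    if (kr != kIOReturnSuccess)
        return kr;

    req.bmRequestType = USBmakebmRequestType(kUSBOut, kUSBStandard, kUSBEndpoint);
    req.bRequest      = kUSBRqClearFeature;
    req.wValue        = kUSBFeatureEndpointStall;
    req.wIndex        = endpointAddress;   /* endpoint address, direction bit included */
    req.wLength       = 0;
    req.pData         = NULL;
    return (*intf)->ControlRequest(intf, 0, &req);   /* default control pipe */
}

/* The new way: one call clears the stall on both the host and the device. */
static IOReturn RecoverStallNewWay(IOUSBInterfaceInterface190 **intf, UInt8 pipeRef)
{
    return (*intf)->ClearPipeStallBothEnds(intf, pipeRef);
}
```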

We also augmented the isochronous APIs to help you and us manage the bandwidth better. One of the big differences, however, since we introduced this API, is that when you create an interface, which in turn creates the pipe objects for your endpoints, previously we would fail that call if there was not enough bandwidth to create the pipes. With these new APIs, we will now create the pipes even if there is not enough bandwidth. Your pipes will then have zero bandwidth allocated to them, so you have to be able to recognize this and do something appropriate, like allocating bandwidth for them.

These are the new APIs. The first two, GetBandwidthAvailable and GetEndpointProperties, are unique in the sense that you don't have to have the USB interface open in order to make these calls. GetBandwidthAvailable is pretty straightforward. It just tells you how many bytes are available on the bus.

The GetEndpointProperties will allow you to inquire of the different alternate settings of a particular interface to see how much bandwidth they would use. So typically you would get the bandwidth available, iterate through all your alternate settings, pick one that has a lower amount of bandwidth, and then call SetAlternateInterface, and this call is not really new, it's just here for completeness.

That call creates the interface and allocates the pipe objects. Then, very importantly, you should call GetPipeProperties on that isochronous pipe object and make sure that the max packet size for that endpoint is not zero. If it is zero, it means that even though we had told you earlier that there was bandwidth available, somebody might have come in and grabbed that bandwidth away from you, and it's not there anymore. So you should really make sure that once your pipes are created, there is enough bandwidth for them.
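(Here is a rough sketch of that flow, assuming intf is an already-created IOUSBInterfaceInterface190 or later for an interface whose alternate settings each carry one isochronous IN endpoint; the endpoint number, pipe number, and alternate-setting count are placeholders you would take from the descriptors.)

```c
#include <IOKit/usb/IOUSBLib.h>

static IOReturn PickIsochAltSetting(IOUSBInterfaceInterface190 **intf,
                                    UInt8 numAltSettings, UInt8 endpointNumber)
{
    UInt32   available = 0;
    IOReturn kr;

    /* Bytes per frame still unclaimed on this bus. Neither this call nor
       GetEndpointProperties requires the interface to be open. */
    kr = (*intf)->GetBandwidthAvailable(intf, &available);
    if (kr != kIOReturnSuccess)
        return kr;

    for (UInt8 alt = 0; alt < numAltSettings; alt++) {
        UInt8  transferType = 0, interval = 0;
        UInt16 maxPacketSize = 0;

        kr = (*intf)->GetEndpointProperties(intf, alt, endpointNumber, kUSBIn,
                                            &transferType, &maxPacketSize, &interval);
        if (kr != kIOReturnSuccess || maxPacketSize > available)
            continue;

        /* Creates the pipe objects, but they may come up with zero
           bandwidth if someone else grabbed it in the meantime. */
        kr = (*intf)->SetAlternateInterface(intf, alt);
        if (kr != kIOReturnSuccess)
            return kr;

        UInt8  dir, num, type, ival;
        UInt16 actualMaxPacket = 0;
        UInt8  pipeRef = 1;              /* placeholder: first pipe of this interface */

        kr = (*intf)->GetPipeProperties(intf, pipeRef, &dir, &num, &type,
                                        &actualMaxPacket, &ival);
        if (kr == kIOReturnSuccess && actualMaxPacket != 0)
            return kIOReturnSuccess;     /* the bandwidth really is ours */
    }
    return kIOReturnNoBandwidth;
}
```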

Finally, we're back to par with our Mac OS 9 implementation in the sense that we have a SetPipePolicy API. And this essentially allows you to return bandwidth to the system if you know that you're never going to use the amount of bandwidth that was specified in the alternate settings.

The granularity of the alternate settings in an interface depends on the whim of the device manufacturer. And you might know that you're not going to use all that bandwidth because there's never going to be a case where the device is going to send that much data. So be a good citizen and call SetPipePolicy and return that bandwidth back to us.
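(A short sketch of that call, continuing the previous one; the 1024-byte figure is a placeholder for whatever the device will actually send per transfer opportunity.)

```c
#include <IOKit/usb/IOUSBLib.h>

static IOReturn ReturnUnusedBandwidth(IOUSBInterfaceInterface190 **intf, UInt8 pipeRef)
{
    UInt8    dir, num, type, interval;
    UInt16   maxPacketSize;
    IOReturn kr;

    kr = (*intf)->GetPipeProperties(intf, pipeRef, &dir, &num, &type,
                                    &maxPacketSize, &interval);
    if (kr != kIOReturnSuccess)
        return kr;

    /* Keep the endpoint's polling interval, but only reserve the bytes we
       will actually use instead of the full amount in the descriptor. */
    return (*intf)->SetPipePolicy(intf, pipeRef, 1024, interval);
}
```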

For 10.2.3, we added some new APIs that I'm going to talk about now. These were trying to solve what I call the latency problem. What is the latency? Most of you are probably aware that on Mac OS X, the time between when the hardware transaction completes on the bus and your callback is called varies. Sometimes it comes right away, but sometimes it can be delayed for 80 milliseconds or more.

The callback happens on the IOUSB work loop, and there are other threads that run at a higher priority than that work loop, and if they're doing work, your callback will be delayed. This presents a problem in some cases, but you can work around it like you have done in Mac OS 9 and in previous versions by keeping the ISOC endpoint busy and having multiple transactions, so that even if your callback is delayed, it doesn't matter because you have already scheduled a transaction to start prior to your callback being called. So that's great and that works very well in most cases. However, sometimes there is a problem.

[Transcript missing]

So, we thought about it, and we came up with the following solution. We realized that this being USB, and isochronous USB especially, the data from the device was in the user's buffer as soon as the USB controller completed that frame. However, we lacked one critical piece of information that was not in the buffer.

In fact, it's in the USB controller, and that is how many bytes were actually transferred. Isochronous data varies in amount: even though you ask for a certain amount of data, it can transfer more or less. And so you need to actually know how much data was there.

However, realizing this, we knew that the client could go and peek into the buffer that they gave us and get that data at the expected time, if only they had the appropriate number of bytes that were transferred in that frame. So we augmented the frame list structure to actually have a timestamp.

We decided to update the actual count, the field called frActCount, at primary interrupt time. Now, doing processing at primary interrupt time is sort of dicey, because you are preventing any other thread from running at that time. So we really have to do minimal processing at that time, and what we do is we look through our structures, get the number of bytes from the controller, update the frame list, and get out.

Your callback will still happen at the same time that it's happened in the past. However, since you know when the data was going to be there, you can go and look at your data buffer and at your frame list buffer and know how many bytes were actually transferred.

So there are four new calls for these low latency transfers. The first two are just for user-land drivers, and the last two are the read and write APIs for both user and kernel drivers. We thought it was a good idea to have the IOUSB library manage your buffers.

So as a client in user space, you have to call in and give us the information for how big to make the buffer, essentially. You don't have to have just one whole buffer for your whole data. You can have multiple buffers for each transfer if you want. You can manage that yourself. However, when you do finish the transfers, you need to call us back so we can release our buffers and inform the kernel entities that those memory descriptors are no longer being used.

Again, a typical API call for the low latency read ISOC is as follows. You can see in the highlight there that there are two changes. One is we have added this update frequency parameter. That tells us how often you want your frame list to be updated; the granularity, of course, is one millisecond. If you wanted it to be updated every millisecond, you would pass in a one. If you wanted it every eight milliseconds, you would pass in an eight.

If you have a 64 frame transfer, and you're only going to be looking at it from user space or from kernel space every eight milliseconds, you should really pass an eight and not a one. Because, again, we're taking processing time at filter interrupt time, and that is preventing any other threads from running on that processor at that time.

The low latency ISOC frame list now has, as I mentioned earlier, an absolute time parameter that is timestamped at the time that we process the data and update your number of bytes transferred. If we are updating more than one frame at that time, all those frames are going to have the same timestamp.
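(For illustration, here is a rough sketch of the low latency read path, assuming intf is an IOUSBInterfaceInterface192 or later and pipeRef is an isochronous IN pipe; buffer sizes, frame counts, and the start offset are placeholders. Both the data buffer and the frame list must come from LowLatencyCreateBuffer, and should eventually be released with LowLatencyDestroyBuffer.)

```c
#include <IOKit/usb/IOUSBLib.h>

enum { kLLNumFrames = 64 };                     /* placeholder transfer count */

static void LowLatencyComplete(void *refcon, IOReturn result, void *arg0)
{
    IOUSBLowLatencyIsocFrame *frames = (IOUSBLowLatencyIsocFrame *)arg0;
    /* frames[i].frActCount and frames[i].frTimeStamp were filled in at
       primary interrupt time; this callback itself may still arrive late. */
}

static IOReturn StartLowLatencyRead(IOUSBInterfaceInterface192 **intf, UInt8 pipeRef)
{
    void        *dataBuffer = NULL;
    void        *frameListBuffer = NULL;
    UInt64       busFrame = 0;
    AbsoluteTime atTime;
    IOReturn     kr;

    kr = (*intf)->LowLatencyCreateBuffer(intf, &dataBuffer,
                                         kLLNumFrames * 1024,
                                         kUSBLowLatencyReadBuffer);
    if (kr != kIOReturnSuccess)
        return kr;

    kr = (*intf)->LowLatencyCreateBuffer(intf, &frameListBuffer,
                                         kLLNumFrames * sizeof(IOUSBLowLatencyIsocFrame),
                                         kUSBLowLatencyFrameListBuffer);
    if (kr != kIOReturnSuccess)
        return kr;

    IOUSBLowLatencyIsocFrame *frameList = (IOUSBLowLatencyIsocFrame *)frameListBuffer;
    for (int i = 0; i < kLLNumFrames; i++) {
        frameList[i].frReqCount = 1024;         /* bytes requested per opportunity */
        frameList[i].frActCount = 0;
    }

    (*intf)->GetBusFrameNumber(intf, &busFrame, &atTime);

    /* updateFrequency of 8: we only look at the frame list every eight
       milliseconds, so don't make the filter-interrupt routine update it
       any more often than that. */
    return (*intf)->LowLatencyReadIsochPipeAsync(intf, pipeRef, dataBuffer,
                                                 busFrame + 10, kLLNumFrames,
                                                 8,            /* updateFrequency */
                                                 frameList,
                                                 LowLatencyComplete, NULL);
}
```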

This is used by the audio drivers, for example, to synchronize all their stuff and do their magic. These APIs are detailed in HeaderDoc. If you have any questions, feel free to ask them on the USB list, and we'll answer them as soon as we can. And again, don't abuse the low latency APIs; I can't stress that enough. We got some pushback from the kernel guys when we were adding these APIs, because they don't want anything to happen at filter interrupt time. And if you abuse it, then performance for the whole system is going to go down.

Just a quick note on the USB Implementers Forum. One of the big things going on right now is that a video device class specification is nearing release. In fact, I think this week it's going to the 30-day comment period, at the end of which it will become a 1.0 specification and will be released.

You should go to the Implementers Forum website, it's public for now, and take a look at it. It provides support for a wide range of video devices, everything video related that you can think of, all sorts of different payloads. We plan to provide a class driver for some of those devices. I am the one working on it.

The specification, as I mentioned, is all-encompassing. It does not specify whether you need to be a high-speed device or not. Of course, for some of the payload formats, like DV, it does not make sense. It cannot work on a full-speed device. Some chipsets right now only do bulk transfers, and there is support for just bulk transfer for video. Of course, you have the pros and cons about that.

There's still image support. It's not limited to receiving video on the host; it also has support for sending video out to a device. There are plenty of manufacturers in the working group that are gearing up to produce these devices, so there's something exciting that we're going to see in the near future.

Another change from the Implementers Forum is that they've now defined an interface association descriptor as a change notice to the USB 2.0 spec. And all this does is allow classes like the audio class and the video class, for which this descriptor is now mandatory, to relate different interfaces to each other, so that the system will know that when you change something on the control interface, something also needs to happen on a streaming interface. We are looking at the ramifications of this and expect to support this descriptor in the future.

A quick debugging tools update. This year we gave up on giving a demo of two-machine debugging. It didn't work for the last two years. We decided, you know, we'll save face this time. We do provide logging versions of the IOUSBFamily on our website at developer.apple.com.

The latest versions I actually produced so that you can install as a package the shipping version as well as the logging version, so that you don't have to set the other one aside and copy it over later. A quick caveat: if you do try to download the sources and build your own IOUSBFamily, do not try to boot with an unstripped version of the USB family. When you use Project Builder and build the project using the development build style, it will not strip the binary, and you will get an unhappy Mac, or whatever the equivalent is now. You won't boot, and you'll need another partition to boot from. Caveat emptor.

So, this is the new tools line. It seems to change with every release of Project Builder. This is for the December 2002 developer tools; this is the line that you use to produce an unstripped version. It's simpler now. I have no idea whether this will work with Xcode, but for the December tools, use this. Of course, if you're going to do machine to machine debugging, you have to change your default boot arguments to what's on the screen so that you can actually connect to it.

If you don't, you'll just get the multilingual panic message. Finally, in solving some bugs, I was able to use the CHUD tools, which stands for Computer Hardware Understanding Development tools. There is actually a session tomorrow, and it was very handy. You wouldn't think that GetBusFrameNumber could lock up your machine.

Just a tiny bit of administrivia. The repository, as most of you who accessed it before now know, is not live anymore. However, we are still open source. Once Panther is released, we are going to release the EHCI driver for the EHCI controller. You've noticed that releases are better now, and very soon after an update is released, the tarballs with all the sources are released on the web, so I still encourage you to go and build it if you need to build the family to debug things.

So this is just a brief summary of the APIs that we've changed. Again, if you have any suggestions for new APIs that would make your life easier for some reason or another, we're always on the list. You know who we are. Just send us some email, and the only way we're going to consider them is if we know about them. So feel free to suggest that. We're looking forward to the video class spec, and the repository is not live anymore, but we are, and we're still working, and we're enjoying ourselves.

These references are pretty much unchanged from last year, even though some of the documents have been updated. It's the usual suspects. There are technical notes on debugging kernel panics, if you feel like doing that for some reason. We still get people just coming to development asking "my driver doesn't load." I have that one bookmarked and always fire it off on the list. Again, some sessions that you might want to attend. Just use your time transport and go to Godfrey's session yesterday.

As an aside, of course, this is for the DVD, blah, blah, blah. There's, as I mentioned, the CHUD performance optimization session tomorrow. Of course, we are expecting you guys to come to the feedback forum tomorrow as well, and, you know, let us have it. Finally, on Friday, we have a session on the HID Manager and our force feedback support. So if you want to listen to me again, bookmark your Friday morning session.

And you know our email addresses. Again, the USB list is at lists.apple.com. I also monitor one or several of the Darwin lists, and the first thing I say when somebody posts a USB question there is: go ahead and join the USB list, and your question will be answered faster that way.