
WWDC01 • Session 402

QuickTime for Professional Video

Digital Media • 57:31

This session shows new features of QuickTime for use in developing professional video applications and components. Topics include improved movie track editing using media sharing, gamma correction APIs, support of nested effects for real-time hardware, implementing multiprocessing support, and strategies for developing hardware components for Mac OS X.

Speakers: Tim Cherna, Jean-Michel Berthoud, Sam Bushell, Tom Dowdy

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it may contain transcription errors.

My name's Tim Cherna. I'm the manager of the QuickTime Pro Video team at Apple. So we're talking about pro video. Why pro video? Well, it's good to have a team for pro video. QuickTime has a lot of customers, as you've probably seen either by seeing Tim's session this morning, or seeing the interactivity session that was just before mine, or seeing the broadcasting session that follows. So QuickTime has a lot of customers, but it's also the foundation of a lot of core high-end video technology, shipping in apps from both Apple and other companies such as Adobe and Media 100. So it has unique demands in that space.

The demands tend to be: it must have high quality, it must have high performance, and it has to work well with the hardware that you get. We have hardware now from companies like Matrox and Pinnacle and Digital Voodoo and Aurora, which works with QuickTime in the apps we talked about. And you have to have consistent results, because you're going to take this material and maybe go on air or make it into a movie. So it's very, very important. So that's why we're specializing in pro video; that's why we have a Pro Video team. So what about you guys? This session is targeted towards developers who are writing video editing or video processing applications. They can be from the high end, or they can be simple ones which make the experience of using things like iMovie easier. It's also targeted towards codec writers who are writing either software codecs or hardware codecs.

This will basically give you some extra information. So, the things we're going to talk about today: we're going to talk about some improvements we've done in QuickTime 5 which make your rendering experience better; we're going to talk about improvements for supporting hardware cards such as the ones I talked about; ways that you can take advantage of asynchronous operations and multiprocessing on our high-end Macintosh systems; things we're going to do with our effects architecture; and finally, some help in migrating your video hardware towards OS X as a platform.

So I'm going to talk about rendering. And when I talk about rendering, I'm really talking about taking some compressed material, decompressing it, typically applying some sort of video effect to it, let's say a blur, and then recompressing it back to the original format. That's a typical workflow inside a video editing app where you, let's say, wanted to do a cross-dissolve between two streams of DV and you would render it: you decompress the two streams of DV, you combine them, you recompress back to DV, and then you can send it out over FireWire.

So the improvements we've done in that space: we've improved some gamma processing, we've added a new pixel format called R408, and of course we've improved the DV codec, which is something that Tim talked about. And I'll show you some of the results we got there.

So my favorite first topic is gamma. I'm going to give a little overview of gamma. Basically, when I talk about gamma, I'm talking about the non-linearity of intensity reproduction: you have an input value and you have an output intensity, and there's a relationship between the input value and the output intensity. It's not always linear like the diagram at the right; it has a power curve. This relationship could apply to a camera, a video camera; it could apply to a CRT monitor or maybe an LCD monitor; it can also apply to a system as a whole, for example the Macintosh or your television.

So the issue is that there are different gammas for the different systems that QuickTime deals with. We have video, where the gamma is established at 2.2. On the Macintosh, the gamma is established at 1.8. And Windows is basically using the native value of the CRT, which is 2.5. Why is that a problem for video rendering? Because QuickTime knows that video such as DV is at 2.2, and it wants to make it look correct when we display it on the Macintosh for applications such as iMovie when you're previewing on the desktop, so we do a gamma correction stage to bring the image closer to what it would look like if you had an NTSC monitor next to your Macintosh monitor. And that works really, really well, except it makes some problems for video rendering. I'll show you the gamma correction I was talking about afterwards in my demo.

So the solution that we've come up with is that we now allow applications to specify the gamma that they want. They can say: give me what the source gamma was, or make it gamma 2.0, or make it gamma 2.2. So not only can you specify what the gamma should be via some gamma APIs we've added, you can also find out what the gamma actually was, which is really useful. And a codec can specify the gamma that it prefers. It can say: my private or custom compression format has a gamma of, let's say, 2.2, and therefore when you decompress it, if you ask for the source gamma, it'll be 2.2. So there's no more guessing. You can basically choose the gamma that you want to process your video rendering in, and you get what you want.

So, pixel formats. The key thing about rendering is you go from a compressed format to some sort of pixel format you're going to do the rendering in. The typical choices out there would be RGB or YUV. RGB has its advantage in that it's native for graphics; a lot of people have used it for many years. And it also has an optional alpha channel, so you can use it to do compositing fairly easily.

YUV has the advantage that it's native for video, and typically it's stored in a 4:2:2 format, which means that there are two samples of luma for every chroma pair. The U and V refer to the chroma, and the Y refers to luma. So it's subsampled, which means it's good for storing; it kind of represents what your eye can see. In other words, your eye is more sensitive to luma than to chroma.
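The "two lumas per chroma pair" idea can be sketched as a packing function. The byte order shown is one common 4:2:2 arrangement, and the simple chroma average is a deliberately crude stand-in for the filtering a real codec would do:

```c
/* Pack two horizontally adjacent pixels into one 4:2:2 macropixel:
   two luma samples share a single (Cb, Cr) chroma pair.  The chroma
   here is simply averaged; real codecs filter more carefully. */
typedef struct { unsigned char y, cb, cr; } YCbCr;

static void pack422(const YCbCr *left, const YCbCr *right, unsigned char out[4])
{
    out[0] = (unsigned char)((left->cb + right->cb + 1) / 2); /* shared Cb */
    out[1] = left->y;                                         /* Y0 */
    out[2] = (unsigned char)((left->cr + right->cr + 1) / 2); /* shared Cr */
    out[3] = right->y;                                        /* Y1 */
}
```

Two pixels become four bytes instead of six, which is the storage win; the cost, discussed below, is that the chroma can no longer be moved independently of the luma.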

So the problems with RGB being widely used: RGB has the extra color space conversion to go back and forth from YUV, since the data is typically natively YUV for video. It also can clamp the video, because the RGB color space is smaller than the YUV space.

With YUV, the problem is that there are no alpha channels, so if you want to composite, it's not really friendly for that. It's also pretty hard to process because it's subsampled: if you just want to move your YUV image over by one pixel, all of a sudden you have a problem, because you have to move the chroma around, since it was subsampled. And also, the standard black value for YUV video is 16, so every time you do an operation on YUV, you're basically adding and subtracting 16, and we didn't quite like that.

So QuickTime came up with R408. And R408 is really nice: it's video friendly, meaning it's YUV based, and it's not subsampled. That means it's 4:4:4:4. It has an alpha channel as well, so it's alpha, Y, U, V. And the other key thing about it is that the Y value is offset so that black is 0, so if you want to do a dissolve between two R408 values, you just average out the Y values and you've got your halfway point. And the positions of the components in R408 basically match up with the components in ARGB, so if you're just migrating your blit loops, it's pretty easy. So it's a good format, and you can use it to improve your quality.

And that's what we did with the DV codec for rendering. We improved it by using the gamma and R408 support that I talked about. We also went in and improved the quality significantly. The main things we wanted to achieve were consistent results between a G3 and a G4 (we figured if you've got a faster machine, it should look as good as it would on a G3), improved color fidelity, and reduced losses when you do multi-generational rendering, so when you render a clip in an editing application, you don't have the rendered material look worse than the original source.
We also improved the performance by both improving the code path and also making it accelerated on MP Macintoshes. So now I get to show you a little demo of this stuff working. Let me switch to demo four, please.
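The dissolve arithmetic that the zero-black R408 layout enables can be illustrated as follows. The struct layout and function are hypothetical sketches, not QuickTime's actual blit code:

```c
/* A 50% dissolve between two pixels in an R408-style layout
   (alpha, Y, Cb, Cr with black at Y = 0): because there is no +16
   black offset, the halfway point is a plain average of each
   component.  Illustrative sketch, not QuickTime internals. */
typedef struct { unsigned char a, y, u, v; } AYUV;

static AYUV dissolve_half(AYUV p, AYUV q)
{
    AYUV r;
    r.a = (unsigned char)((p.a + q.a) / 2);
    r.y = (unsigned char)((p.y + q.y) / 2);  /* no offset to subtract */
    r.u = (unsigned char)((p.u + q.u) / 2);
    r.v = (unsigned char)((p.v + q.v) / 2);
    return r;
}
```

With a black-at-16 format the same loop would have to subtract 16 from each Y, average, and add 16 back; that is exactly the per-operation overhead the offset layout removes.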

Right. So what I'm showing is a little application. And what I'm doing in this application is I'm taking a DV clip and decompressing it to an offscreen buffer in a pixel format that I can choose. Then I'm taking the resulting decompressed image and recompressing it back to DV. And I take that DV frame, re-decompress it, and keep doing that; in my tests, I do it 60 times. So I can basically see the results of a multi-generational render, and I can see the losses that we had, or have, with the DV codec. So let me just open up a file.
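The test he describes can be modeled with a toy "codec" whose lossy step is a simple quantizer. An idempotent round trip like this degrades the image once and then holds steady, which is the behavior the real DV rendering path is being measured against:

```c
/* Toy model of the multi-generation test: the "codec" round trip
   quantizes each sample to a multiple of 8 (the lossy step).
   An idempotent round trip loses information only on the first
   generation; later generations reproduce the same values. */
static unsigned char roundtrip(unsigned char v)
{
    return (unsigned char)((v / 8) * 8);
}

static unsigned char generations(unsigned char v, int n)
{
    while (n-- > 0)
        v = roundtrip(v);   /* decompress + recompress, n times */
    return v;
}
```

A real codec's round trip is not perfectly idempotent, which is why the demo runs 60 generations and inspects the last frame for accumulated artifacts.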

This is Andrew, Kevin Marks's son, and he's holding a ball which he's moving. So you can see over here there's a lot of motion, and that basically has interesting effects on the DV compression and decompression that we're testing. And you can see that there's a lot of detail in his hair. So he's our test clip for today. The first thing I wanted to show is why we're using gamma, why we gamma correct DV. This source clip is DV, and right now we're gamma correcting it so that it looks good on a Macintosh monitor. And I can turn that off; now it's off, and it looks a lot brighter. What I'm doing is actually using the gamma APIs to specify what I want the gamma of the display to look like; actually not the display, but the PixMap, the port's PixMap. So I can switch it to video gamma, and it's not gamma correcting, so it looks too bright. I can set it back to default gamma and it looks dark. Natively it's going to use the default gamma, and we're going to show how you can use the gamma APIs to fix up your rendering.

So the first test I'm going to do is render the clip through 2vuy. And when I do that, you'll see the resulting clip degrades quite a lot at every step, because I didn't actually set that I wanted to use 2.2 as my gamma: it's converting on every decompression, but it's not properly converting back on the compression. So now I can do the same test through 2vuy with the gamma specified.

And now you'll see that it looks basically perfect. I can play it and it looks perfect; I can scrub through it. I used the gamma API to say: please decompress this at 2.2, so that when I recompress it, there's no gamma shift. So I've avoided any gamma processing at all in that rendering cycle.

I want to talk a little bit about the pixel format that I've chosen to use. You're going to see two impacts of choosing R408 over RGB: the first is performance, and the second is quality. So I have two seconds of video, 60 frames, that I've just run this multi-generational test on. And my two seconds of video took 2.4 seconds to process on this machine through RGB, so that's just a little under real time. And you can see that if I go to the last frame, some artifacts appear because of the losses going through RGB. So that's not good.

So we did the same test going through R408, which is the YUV format. The first thing that's pretty impressive is that it takes 1.4 seconds to do, so that's faster than real time for the decompression and recompression. The other thing that's really notable is the quality: I can't see the image change, and in fact it really doesn't.

It doesn't change at all. So just as a comparison, I ran the same test on QuickTime 4.1.2. The first thing, which I can read and I guess you can read too, is that it took 5.8 seconds to do the same test, versus 1.3 seconds on QuickTime 5, so you can see the improvement in performance. The other thing is that as it plays, you can see there are a fair lot of artifacts that we used to have; QuickTime 5 is much, much better. So let me just put these things away so that the next demos are good, and that's pretty much all. Can we go back to the slides, please? So now I'm going to ask Jean-Michel Berthoud to come up and talk about some improvements in hardware support.

Hi, my name is Jean-Michel Berthoud, and I work in the QuickTime Pro video group. And I'm going to talk about a couple of features that we have added in QuickTime 5.0 in order to improve hardware support. So the first thing that needs to work is the remote control. OK.

Okay, that's called hardware improvement, right? Okay. So the first thing is that before 5.0, QuickTime used to assume that all codecs could decompress right away. For a software implementation, it's pretty easy to understand that you can start decompressing whenever you want; when you have to deal with a piece of hardware, it's much more difficult. Usually third-party developers have been able to manage this issue, because the time it took to set up their first decompression was not that much, actually.

But during the development of QuickTime 5.0, we ran into some third parties who were trying to bring up their hardware and support QuickTime, and their time to decompress the first frame was quite huge. What was happening is that QuickTime was getting upset, because it was totally unable to understand that the first frame was going to take a while to show up on screen but the next ones after that would be fine. That's a concept that we didn't have before 5.0. I'm trying again.

OK, that didn't work. So the solution was to make QuickTime aware of this latency. And the way you report this latency is by using a new API that we have put on the codec side, which is called ImageCodecGetDecompressLatency. Basically, your codec reports its internal pipeline duration and makes QuickTime aware that it's going to take a long time to start decompressing the first frame; the next ones, which come after that, will be fine.

So as soon as QuickTime uses a codec which reports latency, what we're going to do internally is start your video track earlier. The movie will start when your hardware pipeline is totally full, so you have a chance to decompress the first frame at the right time.

So that's the new latency support in QuickTime 5.0. We also extended this latency mechanism to audio tracks. The concept is identical; QuickTime talks to audio devices through the sound output component, so we've added a new selector there, siOutputLatency, used with the component's get-info call. Same thing: if your audio device has an internal pipeline, you just need to report that to QuickTime, and we'll offset the audio track as well. So QuickTime can deal with different latencies between the audio track and the video track. The one assumption that we still have is that all the video codecs in a video track need to report the same latency.
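The scheduling idea behind both latency mechanisms can be sketched as follows; the names are illustrative, not QuickTime internals. Given each track's reported pipeline latency, the deepest pipeline starts feeding first, so that every pipeline is full at the same instant, the moment the movie actually starts:

```c
/* Given each track's reported pipeline latency in milliseconds,
   compute when each track must start being fed (relative to the
   earliest feed) so that all pipelines fill at the same instant. */
static void schedule_feeds(const long *latencyMs, long *feedAtMs, int n)
{
    long maxL = 0;
    int i;
    for (i = 0; i < n; i++)
        if (latencyMs[i] > maxL)
            maxL = latencyMs[i];
    for (i = 0; i < n; i++)
        feedAtMs[i] = maxL - latencyMs[i];  /* deepest pipeline feeds first */
}
```

A hardware video codec reporting 120 ms and an audio device reporting 30 ms would have the video track fed 90 ms ahead of the audio, and playback begins once both pipelines are full.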

Here's another assumption that QuickTime made before 5.0. When you have a system which has multiple codecs able to decompress the same kind of data, we need to choose one. The one we chose, of course, was the fastest one, because every codec is supposed to report its speed. If you had two DV codecs installed on your system, we'd get the speed for each of these codecs and use the one claiming to be the fastest. Of course, this scheme assumed that codecs don't lie. Right? Well, they do.

It's too bad, right? But they didn't have much other option. Basically, what's happening is that you pay for this piece of hardware and you stick it in your system, and the vendor wants to be the one that QuickTime is going to use by default. And the only way for them to make that happen was to claim that they are faster than, for instance, the software implementation. But it was getting worse, because when you start having two pieces of hardware in the same machine, everybody was trying to look at the other guy's codec, figure out its speed, and claim that they were faster than the other one. Of course, that's not really a viable solution. We used to call that the codec speed war internally: everyone trying to claim that they were faster than everyone else. So what we did in QuickTime 5.0 was to finally let the application decide which codec it wants to use, and when. So we did end this codec war.

At least we hope so. The way an application can specify its preferred codecs is to use this new API, MediaSetPreferredCodec. By using that, you provide QuickTime a list of codecs you prefer to use. Internally, what QuickTime is going to do is still sort all of them by speed; then, at the end of the sort, we put the codecs you've given us in that list at the top.
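The selection order he describes (sort by claimed speed, then promote the application's preferred list to the front) might look like this. The structs and function are illustrative sketches, not the real QuickTime internals:

```c
#include <string.h>

/* Order candidate codecs: fastest-claimed first, then stably promote
   any codec on the application's preferred list to the front,
   preserving the preferred list's own order. */
typedef struct { int id; int speed; } Codec;

static void order_codecs(Codec *c, int n, const int *preferred, int nPref)
{
    int i, j, front = 0;
    /* insertion sort by claimed speed, fastest first */
    for (i = 1; i < n; i++) {
        Codec key = c[i];
        for (j = i - 1; j >= 0 && c[j].speed < key.speed; j--)
            c[j + 1] = c[j];
        c[j + 1] = key;
    }
    /* stable promotion of the preferred codecs to the top */
    for (i = 0; i < nPref; i++)
        for (j = front; j < n; j++)
            if (c[j].id == preferred[i]) {
                Codec key = c[j];
                memmove(c + front + 1, c + front, (size_t)(j - front) * sizeof(Codec));
                c[front++] = key;
                break;
            }
}
```

The point of the design is that a lying speed value no longer matters: even if a hardware codec claims to be the fastest, an application that prefers the software implementation gets it first.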

So it's definitely a much better solution than the speed information, which was the only information we had before in QuickTime, and it makes application setup much easier when they set up a user project and try to work out which piece of hardware or software to use. You might have your system set up, for instance, doing FireWire DV input, and have another piece of hardware which is capable of sending DV data to an analog output, and you really want to let the user and the application be able to select which one they want to use at any point. So, just one more thing about hardware codecs. If your hardware has implemented a custom compression type, then when you create content with this codec in your movie, if your end user has the hardware installed in their system, you're fine; you can play back this movie.

But if you try to have this content play on a system which doesn't have your hardware, then you need to provide a software implementation of your hardware codec, right? The problem is that the user has no idea what to look for when running into a movie like that: they don't know how the content was authored, and they don't know which company makes which codec. So it's quite a bad user experience.

So the solution is to use a new mechanism in QuickTime 5 that does automatic component download. All you have to do, if you have a custom hardware codec, is register your software implementation with Apple, and your end user will get it directly from our server as soon as they run into your content. So that's pretty much it about hardware and QuickTime 5.0. Let's talk about MP and QuickTime on the Mac. Thank you.

Thank you, Jean-Michel. My name is Sam Bushell, and I'd like to take a little time to talk to you about QuickTime on multiprocessor Macintoshes. Multiprocessor Macintoshes are great, right? And they're great because they have more processors; if you have more processors than the other guy, then you win. Well, maybe. In practice, people want to buy a machine with two processors because they'd like everything to run twice as fast. It turns out, if you're an engineer, you probably have some idea of why it doesn't quite work out so simply.

And so as engineers, we have to do a little bit of work to make this hope satisfiable. Now, sometimes the user is running more than one application at the same time, and maybe several of those applications are doing compute-bound tasks. In that case, on Mac OS X, we automatically get symmetric multiprocessing that will schedule and run all of the applications that have work to do. So that side of the problem is pretty much sorted out for us now on X. But sometimes only one application is doing any work.

In that case, we have to do a bit more work to divide that work up across the available processors. Now, in the QuickTime case, there are a bunch of different bits of work to be done on the system, but the majority of them tend to be done by codecs. So the work that we've done in QuickTime to support multiprocessor computers and multiprocessing is primarily focused on making the codecs run faster.

So it's a team effort. In QuickTime generally, if you have an application that uses QuickTime and there are some codecs involved, the application calls QuickTime, QuickTime calls some component, the codec runs for a while doing some work, and when it's done, it returns to the application. So let's look at how this team effort might be made faster to take advantage of a dual processor computer.

If you're lucky, you might be able to take the work that the codec is doing and divide it evenly across two processors. If you're not so lucky, that might not be applicable, but it might be possible to run that decompression or codec work asynchronously and let the application do some other work, maybe some other decompression for the next frame, at the same time.

In more detail, this is the first approach. If you can split up your work across a bunch of multiprocessor tasks, then they can all run at the same time, and when they're all done, they all return. So this is still a synchronous API: the application asks you to do the work, and when you're done, you return, having taken up all of the CPUs available in the meantime. This is the best situation, because the applications don't need to be revised; they can keep using those synchronous APIs. And there are high performance gains possible, as we've demonstrated with the DV codec.
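Approach A can be sketched with POSIX threads standing in for the Mac OS MP task APIs: the "codec" splits one frame's rows across two workers and waits for both before returning, so the caller still sees an ordinary synchronous call. Everything here is an illustrative stand-in:

```c
#include <pthread.h>

/* Split one frame's rows across two workers; the caller blocks until
   both halves are done, so the API stays synchronous. */
typedef struct { int *rows; int first, count; } Slice;

static void *brighten_rows(void *arg)
{
    Slice *s = (Slice *)arg;
    int i;
    for (i = s->first; i < s->first + s->count; i++)
        s->rows[i] += 10;            /* stand-in for per-row codec work */
    return 0;
}

static void brighten_frame_mp(int *rows, int nRows)
{
    pthread_t t;
    Slice top = { rows, 0, nRows / 2 };
    Slice bottom = { rows, nRows / 2, nRows - nRows / 2 };
    pthread_create(&t, 0, brighten_rows, &top);  /* half on another CPU */
    brighten_rows(&bottom);                      /* other half right here */
    pthread_join(t, 0);                          /* synchronous to caller */
}
```

Because each worker touches a disjoint row range, no locking is needed inside the frame, which is what makes this kind of split so effective for row-oriented codec work.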

The trouble is, it's harder, and it's not something that you can easily do with all algorithms. Sometimes step one has to be done before step two, and step two before step three, so you can't do steps one, two, and three all at the same time. In those situations you need to take a step back and reevaluate how you'd like to go, and maybe it's okay to run the entire job in a single MP task, asynchronously from what the rest of the application is doing. This is a smaller change to the codec, and in fact it can be a really small change if QuickTime can help you out.

The trouble is, it doesn't actually make that task any faster; it takes just as long. But maybe if the application has something else to do, then it's a win overall. The corollary is that in order to take advantage of this, applications do need to be revised, and maybe restructured, to use asynchronous APIs. So in QuickTime 5, we've used both approaches to take advantage of multiprocessor computers.
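Approach B can be sketched the same way: the whole job runs on one worker task, and a completion proc with a refCon fires when it's done, mirroring the completion-routine pattern of the asynchronous calls discussed later. All names here are illustrative, not QuickTime API:

```c
#include <pthread.h>

/* The entire job runs on one worker; the caller gets control back
   immediately and learns of completion through a callback. */
typedef void (*CompletionProc)(void *refCon);

typedef struct {
    int *result;
    CompletionProc done;
    void *refCon;
} Job;

static void *run_job(void *arg)
{
    Job *j = (Job *)arg;
    *j->result = 42;        /* stand-in for the compression work */
    j->done(j->refCon);     /* tell the caller we finished */
    return 0;
}

/* Returns immediately; completion is signaled via the proc. */
static pthread_t start_async(Job *j)
{
    pthread_t t;
    pthread_create(&t, 0, run_job, j);
    return t;
}

static void mark_done(void *refCon) { *(int *)refCon = 1; }

/* Driver for the sketch: start the job, then wait for the demo. */
static int demo_async(void)
{
    int result = 0, finished = 0;
    Job j = { &result, mark_done, &finished };
    pthread_t t = start_async(&j);
    pthread_join(t, 0);
    return finished ? result : -1;
}
```

A real application would not join immediately as the driver does; it would go do other work (such as issuing the next frame) and react when the completion proc fires.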

We have revised the DV compressors and decompressors, as you've probably heard a number of times by now, to split up their work across the available processors in the computer. And we've revised some of the other compressors and decompressors in QuickTime to be able to run asynchronously. And I have a little demonstration of this that I'd like to show you, which is over here on demo four.

Now, this is an application that I wrote for debugging and analysis purposes, but I'd like to use it here as a technology demonstration, to give you an idea of something that might be an applicable use of both of these kinds of technologies, both method A and method B for splitting up work on a dual processor computer. What I have back here is a dual processor 500 MHz G4, and in my application, you probably can't see all of the text on the screen; it doesn't matter, it's not very interesting apart from the fact that it has a bunch of different bit rates listed. And we have a DV camera here, and it's pointed at you, and this is what you look like. You can wave; hello, see, look, there are people waving.

So what we're doing is taking input from the camera coming in over FireWire, and we're decompressing frames to a YUV buffer. We're then scaling those YUV frames to three different sizes at four different frame rates, and recompressing those using H.263 at a bunch of different bit rates, simultaneously, on the same machine. Different pieces of this work are being accelerated in different ways: the decompressor is automatically splitting up its work into work that can be done by several processors, and the H.263 compression is being done asynchronously, so although each compression activity can only be running on one processor at a time, you might be compressing several different frames at these different sizes. The bit rates we're trying to hit correspond more or less to a bunch of different modem rates: you might have 12 kilobits per second of video for a 28.8K modem, 24 kilobits per second of video for a 56K modem, something towards 80 kilobits per second for dual ISDN or some other 100-odd-kilobit connection, and something higher as well at full 30 frames per second. If you're close enough, you can probably read that we're not currently achieving 25 frames per second, which makes this look like a foolish demo, except that I can point out that the reason it's lagging behind is that it's showing you the answers. If I turn off the preview, so that you don't get to see yourself on the screen, then we do reach 30 frames per second pretty efficiently, and there's actually quite a bit of CPU left available on the machine. So what this demonstrates is that you could take QuickTime 5.0.1, prepare multiple compressed video streams, and broadcast them to streaming reflectors, which would go out to a wide range of people, all on one machine. This would be a useful product. And that's my demo.

So, among the developers here, some of you are probably writing applications. For those of you who are, the thing you can do on a multiprocessor Macintosh is call QuickTime using the asynchronous compression APIs instead of the synchronous ones. If you're a codec author, then you might want to accelerate your codec to take advantage of multiprocessor machines, using either approach A or approach B; it's up to you. Let's have a look at these asynchronous compression APIs first. QuickTime has always had an asynchronous mode for the Image Compression Manager's compression API. There are lots of parameters that describe what you want to compress and how you want to compress it, and the very last parameter is a completion routine. You can pass nil, in which case it won't return until it's done, or you can pass a completion proc and refCon, in which case it is allowed to return immediately, and when it's actually done with the compression activity, it will call your callback routine and you'll know.

This is safe to use even if a codec doesn't actually support asynchronous compression; in that case, it will return after calling your callback, after everything is done. The trouble was that there was a missing piece here, because a lot of people who want to write compression applications want to use a higher-level service called the standard compression component. This is the component that provides the nice, friendly dialog box that you've probably seen a hundred times, and that component only had a synchronous API. So in QuickTime 5 we've added an API analogous to the asynchronous compression in the ICM. I've said the word compression a lot of times on that slide.

To read more about that, I recommend looking at the QuickTime documentation; there's good stuff there. If you're a codec author, then as I said, you have two choices. You can accelerate your codec by calling the multiprocessor APIs yourself: creating some tasks, splitting the work up across your tasks when we call you to do some work, and then waiting until they're all done before returning, or calling the completion routines. That's great; if you do that, you're pretty much on your own.

Although there are some pitfalls I'm going to warn you about in a second that you should be careful to avoid. If you take approach B, running the entire activity asynchronously on an MP task and calling the completion routine when you're done, and you're writing a decompressor based on the base codec, then QuickTime can help. All you need to do is write a little bit of code that promises that your DrawBand call is MP safe. Since DrawBand calls for video codecs generally have to be interrupt safe, because they might be called at deferred task time, this isn't a big leap. You really shouldn't be calling any APIs besides functions defined in your own sources in your DrawBand call.

Generally those things, as long as they're PowerPC native, are also safe to run in MP tasks. So if you write a little bit of code that promises that everything's cool there, then we will run you in an MP task when applicable, that is, when we're on a multiprocessor machine and trying to do asynchronous decompression.

So there are three pitfalls I'd like to point out, so you have them in mind and can avoid them going forward. I've said that you shouldn't call anything in DrawBand. Some of you who have read the documentation on Apple's multiprocessing APIs will say, "No, wait, I've read about this: you're allowed to call all of these memory allocation routines, and in fact in Mac OS 9.1 and later you're allowed to call all sorts of other things, like the file system." Well, you're not allowed to do that from a codec's DrawBand routine. The reason is that the remote procedure call that implements some of these allocations and other calls can only be serviced when someone in the Blue task calls WaitNextEvent or one of its friends. If that doesn't happen, then you can deadlock, and sometimes codecs are called in situations where we can't let anyone have a chance to call WaitNextEvent. So if you need to allocate memory, do it, as before, before doing anything in DrawBand.
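That pitfall suggests a simple pattern: grab any scratch memory once, at a time when allocation is safe, and make the band-drawing code touch only the preallocated buffer. A hypothetical sketch; the struct and function names are illustrative, not the real codec glue:

```c
#include <stdlib.h>

/* Allocate all scratch memory up front, where allocation is safe;
   the band-drawing path only reads and writes the preallocated
   buffer and never allocates. */
typedef struct { unsigned char *scratch; size_t size; } DecompressState;

static int state_init(DecompressState *s, size_t size)  /* safe time */
{
    s->scratch = (unsigned char *)malloc(size);
    s->size = size;
    return s->scratch ? 0 : -1;
}

static void draw_band(DecompressState *s)  /* MP-task / deferred time */
{
    size_t i;
    for (i = 0; i < s->size; i++)   /* no allocation, no other APIs */
        s->scratch[i] = (unsigned char)i;
}

static void state_free(DecompressState *s) { free(s->scratch); }
```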

A second pitfall to avoid: avoid making your own calls to the MP APIs in a codec entirely unless you're running on Mac OS 9.1 or later. The reason is that, if you're unlucky, any page faults you hit can only be serviced in one of those WaitNextEvent calls, and as I said, they might not happen. This is fixed quite nicely in Mac OS 9.1, and it's not a problem at all on Mac OS X.

Finally, if you're writing a decompressor and you divide your work up into MP tasks, you should be careful that you don't have one of them write to one part of the screen and the other write to another part of the screen, because you could see unpleasant tearing artifacts, which can be annoying. Well, if you do that, you'll see them.
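The usual way to avoid that tearing is to let the tasks decode into an offscreen buffer and present the whole frame in one step. A small self-contained pthreads sketch (standing in for MP tasks, with made-up sizes and pixel values) of that idea:

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

/* Two worker threads each decode half of an offscreen buffer; the
   frame is copied to the "screen" only after both finish, so the user
   never sees a half-updated image. */
#define W 64
#define H 32

static unsigned char offscreen[H][W];
static unsigned char screen_buf[H][W];

typedef struct { int firstRow, lastRow; } BandJob;

static void *decode_band(void *arg)
{
    BandJob *job = (BandJob *)arg;
    for (int y = job->firstRow; y < job->lastRow; y++)
        memset(offscreen[y], 0xAB, W);   /* pretend decode */
    return NULL;
}

int decode_frame(void)
{
    pthread_t t1, t2;
    BandJob top = {0, H / 2}, bottom = {H / 2, H};
    pthread_create(&t1, NULL, decode_band, &top);
    pthread_create(&t2, NULL, decode_band, &bottom);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* present the whole frame at once, after both bands are done */
    memcpy(screen_buf, offscreen, sizeof offscreen);
    return screen_buf[0][0] == 0xAB && screen_buf[H - 1][W - 1] == 0xAB;
}
```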

That's all for me. I'd like to hand over now to Tom to talk about effects. Thank you. So I'm yet another member of the professional video team. And what I'm going to talk to you about today is some upcoming changes we've got to the QuickTime effects architecture. Now up to this point, we've been talking about things that are available in QuickTime 5 that you can take advantage of today. We're going to be showing off some stuff that's going to be coming in future versions of QuickTime. But the reason we're going to talk about this today is that some of these features you'll be able to get ready for in advance of the code actually being available. This is of interest to you either if you're a developer that takes advantage of the built-in QuickTime effects, or if you're a developer who creates QuickTime effects yourself.

We're going to be talking about two new optional specifications that effects can provide or applications can take advantage of to allow the grouping of effects into classifications or groupings that make sense either for the user or for you in the application. We're also going to talk about a feature called effect presets which allows an effect that has a large complicated set of parameters to provide the user with a very simple user interface for getting to them. And that's enough of that. Let's do a demo right away.

Can I have demo four please? So there's no point in showing you something new without showing you what was there before. So this is the existing effects dialog that's been provided since QuickTime 3. It's a standard way for applications to get to parameters and features of the QuickTime effects. As you can see, effects can have a large number of various parameters, and it's nice to have a standard way to provide a user interface for this. But another thing you might notice is that we do tend to have an awful lot of effects here. And a big, long scrolling list is no fun for anybody, particularly if you can't make it any bigger because you can't resize the dialog or anything else. Well, let's get rid of that and solve the problem.

So one of the first things you'll notice is that the list on the left-hand side here has been grouped into classes of effects. Another thing you'll notice is that we've got plenty of screen real estate now, so let's just make that dialog nice and big and widen that on out. We've got plenty of room now. Yay! Okay. We're in the 90s. So what the user can do is they can choose particular effects they might be wanting to do. So for example, filtering, they can see the effects that are classified as filters. Let's just go ahead and pop these all open.

I'll just populate the whole dialog there. This is information that's provided by the effects and is available both to this particular dialog and also to your application. One of the other things I should point out is that as we're going along here, you might see some other features that aren't exactly in the list that I originally had -- for example, the resizing and the growing of the split bars. You can pay attention to those and decide whether or not we're actually going to do anything with them. We talked about presets, so here's an example of an effect that's using the new preset features. This is the slide effect, and it provides two presets for a slide from the top or a slide up from the bottom. The user can simply choose the preset that they want. You've got a picture that more or less shows them what's going to happen, and a name that probably helps them as well. What's actually behind each of the presets is the full list of parameters that are available to the effect.

The user can go see that as well, and see that this particular slide is going with an angle from zero to zero, which is a slide from the top. Users can still go to this optional custom parameter section here and make a change -- for example, set the starting angle here and then set the ending angle to something nice and big. And now we've got a slide that's kind of spiraling around there.

Another example of an effect that uses the presets is a new channel composite effect. Who knows whether we'll ship this or not? I'm just showing you. This is an effect that combines channels from multiple sources to produce a new source. It's often done when you have mattes that have been pulled from video, particularly in the professional video or film market, and you want to combine them together to obtain an alpha track that you're then going to use to composite with other tracks. When you do this, the mattes are sometimes pulled positive, sometimes pulled negative. The actual alpha value may already be in the alpha channel, or it might not -- it might be down in the RGB values.

So we provide mechanisms for selecting these basic options that users would need. But if they want, hidden behind the presets are all the actual parameters that are used. So, for example, you can pull the values from different channels and different things, and you get very strange combinations. Once again, there's no reason the effect had to be written with only the limited set of presets that we see here. But those presets are the most commonly used, so that users who don't need the crazier or optional features of the effect don't need to be concerned with them. And I think that's all I have to show here. That's it. So let's go back to slides.

So we talked about the major and minor classes of effects or groupings. So what are they? Well, the major class is used by applications that need to filter the list of effects that are presented to the user. In other words, limit those effects to only those that make sense for a particular application's market segment. The minor class is used for grouping effects together. In the demo you saw when we had the effects grouped into twist down triangles, those were the groupings and we were using minor classes to define that. As I said, the effect major class is used for filtering. For example, if you're an application that provides a slideshow type service to the user, you want to probably only show the user transitions that make sense for transitioning from one slide to the other. Now, up until this point, there's been no way to tell the difference between a two source effect that performs a transition operation and a two source effect that's performing a compositing operation, like a chroma key. You just had to either limit the user to two source effects or show them all the effects and hope the user figured it out.

With the major class, effects have been divided or classified as to what is their function in this sense. And you can limit the scope of what the user has to choose between. Like most of the options that are in effects, the major class is defined by an atom that's placed in the effect description container. You see here listed the class type and ID for this and the values that we provide.

Now, in the case of the major class, because applications are essentially going to be hard-coding themselves to certain classifications for the major class, this list is rigid and has to be defined by Apple and agreed upon by both the components that are being implemented and the applications that want to use it. So we've defined this list here, which you can make use of. Now if you, in your effect, don't define what your major class is, you're going to be grouped into miscellaneous, which may or may not be what you want.
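Conceptually, classifying an effect is just adding one atom (a type, an ID, and a four-character class value) to the effect description container. Here is a toy, self-contained sketch of that: the 'xmaj' and 'fltr' codes are placeholders, since the real constants are the ones from the QuickTime headers and slides, and a real effect would use the QT atom container APIs rather than this toy struct.

```c
#include <assert.h>

typedef unsigned long OSType;

#define FOUR_CHAR_CODE(a,b,c,d) \
    ((OSType)(a) << 24 | (OSType)(b) << 16 | (OSType)(c) << 8 | (OSType)(d))

typedef struct {        /* toy stand-in for a QT atom container entry */
    OSType type;
    long   id;
    OSType value;
} ToyAtom;

typedef struct { ToyAtom atoms[8]; int count; } ToyContainer;

/* Add a "major class" atom to the effect description container. */
int add_major_class(ToyContainer *c, OSType majorClass)
{
    if (c->count >= 8) return -1;
    c->atoms[c->count].type  = FOUR_CHAR_CODE('x','m','a','j'); /* placeholder type */
    c->atoms[c->count].id    = 1;
    c->atoms[c->count].value = majorClass;    /* e.g. a "filter" class code */
    c->count++;
    return 0;
}
```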

So if you're an effect developer creating effects, you're going to want to add this atom. If you're an application developer who needs to limit the effects the user sees to make the user's experience easier, you can take advantage of this to do so. The effect minor class is used for the UI grouping, like I said -- just to reiterate, that was used to make the twist-down triangles in the dialog you saw there. Once again, no surprise, it's an atom that's placed in the effect description that describes the minor class. You see here the atom types and IDs that are used for that, and the contents is one of a number of values that Apple has defined here. We welcome input from third-party developers -- we've already solicited some as to what types of groupings you would like to see. And for any of the standard ones you see here, Apple will automatically interpret them and provide the strings to display to the user within the standardized dialog. So here are some, and here are some more. And here are some more. And once again, if you don't tell us what kind of minor class you are, then you're miscellaneous.

If you aren't happy with these, unlike the major class, the minor classes can be extended by the effect. You can supply a custom string that corresponds to your minor class's name. The types and IDs of that atom are specified here. And that string will then be used in place of your OS type for your minor class, which you probably wouldn't want to see in a scrolling list. If, however, you specify a minor class that's one of the standardized ones, we'll be supplying a string for that, so it's not something you need to worry about.

Finally, effect presets. Effect presets are atoms, once again, in the parameter description container that is defined by the effect. If you're an effect that wants to take advantage of these things, you can place these preset atoms in your effect parameter description, and the standardized UI that you just saw will make use of them, as will any applications that have been revised to take advantage of them. One of the nice things about everything we've covered so far is that you can place these atoms in your effect descriptions today. Since no one's looking at them, they're ignored.

And when software has been revved to take advantage of them, then they can appear. In the case of the preset, there are three things you need to place within the atom. You need to place the name -- obviously, that's the name that's displayed inside the dialog when I showed my demonstration. There's a preview PICT that you need to place there -- that's the picture that's displayed when the user has selected your particular preset. The important thing about the picture: it is a picture, and it does have to be at least 86 by 64 pixels in size. If you make it smaller than that, we are going to scale it, but it's going to look pretty chunky. It can be bigger, in which case it's going to be scaled down. And finally, you have to have contained within the effect preset all of the parameters that are necessary to cause that preset to become active. If you have three parameters, you need to supply the three parameter values that go along with that preset. If you have 106 parameters, you need to supply the 106 parameters that go along with that particular preset. You can't just leave out parameters that "don't care" for this particular preset, because there needs to be a correspondence between the values that are present when the user has selected the preset, so that when they go into the custom effect portion of the dialog, they see those values populated. And with that, I'm going to turn back over to Jean-Mi, who's going to talk a little bit about hardware and OS X.
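A toy sketch of that completeness rule: a preset carries a name, a preview picture, and a value for every one of the effect's parameters, with none left out. The struct layout and the parameter names are invented for illustration; a real preset is a nested atom structure in the parameter description container, not a C struct.

```c
#include <assert.h>
#include <stddef.h>

#define kNumParams 3   /* suppose this effect has exactly 3 parameters */

typedef struct {
    const char *name;                  /* shown in the preset list */
    const unsigned char *previewPict;  /* at least 86x64, or it scales up chunky */
    long params[kNumParams];           /* ALL parameter values, none omitted */
} ToyPreset;

static const unsigned char kPict[1] = {0};   /* placeholder preview data */

/* Build a hypothetical "Slide From Top" preset: every one of the
   effect's parameters gets a value, even ones this preset doesn't
   care about, so the custom dialog can populate them later. */
int make_slide_from_top(ToyPreset *p)
{
    p->name = "Slide From Top";
    p->previewPict = kPict;
    p->params[0] = 0;    /* hypothetical start angle */
    p->params[1] = 0;    /* hypothetical end angle   */
    p->params[2] = 100;  /* hypothetical percentage  */
    return 0;
}
```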

I'm back again. So apparently somebody figured that with my first part, all slides, without any graphics and without any demo, I couldn't make the whole room fall asleep, so they gave me another chance this time. So we're going to talk about hardware again, but this time related to Mac OS X. And what I'm going to try to do is explain to you, if you are coming from the Mac OS 9 world only and you have been doing stuff that was totally legitimate to do -- at least nobody was preventing you from doing it -- how you should move your hardware component to be able to run on Mac OS X.

So first, let's get the bad news out of the way. You can no longer do direct access from your component to your own hardware. That's definitely something you could do on 9, and you can no longer do as soon as you run on OS X. You need a driver layer abstraction. Before -- when I say before, I mean mostly on 9 -- you could make all this access from your own component, because 9 basically had a flat address space where anybody could access anything anywhere in the entire system.

So a driver is no longer optional on 10. And the last bad news is that bringing up your own hardware and debugging it is a little bit more complicated on 10, because basically you're going to have to live with two different pieces. One is going to be in the kernel space, which is your driver, and you're going to need some kind of debugging tool for that. And the other one is the component itself, which lives in the user space, and you have a different kind of tool to debug that. So bringing up your stuff is going to be a little bit more complicated than before. So let's talk about the good news now.

Well, if you already have an existing driver layer on 9, or at least if you have a library that wraps all your hardware access, I'd say you're pretty much all set. All you're going to have to do is move this piece to 10, and your component should be up and running. If you don't have one today, even on 9, we do recommend that you go through the exercise on 9 before you move to 10, because you will be able to make sure that you've isolated all your hardware access. And on top of that, moving forward, you'll be able to maintain the same version for 9 and 10, which is something your customers will appreciate. And the last thing is that when you're done, you will probably never want to come back to 9. I mean, the memory protection scheme built into 10 is definitely going to help you figure out all the issues you have been fighting all over the years.

So what's the Mac OS X driver model? Well, as I said before, the driver belongs to the kernel space, and unfortunately components live in the user space. If your hardware is based on an existing family, like USB or FireWire, what you should do is provide a driver using this family, so the stuff you have to write in your driver is much smaller than bringing up a full driver implementation.

So if you deal with this kind of device, you should take advantage of what OS X provides in terms of families; they help you with a bunch of stuff. If you have a very high-end system that OS X has not been able to model -- we don't really know what you are doing there; it's way too complicated to put that on the OS side -- then you don't have that many choices. You have to implement your own library on top of I/O Kit, or a CFPlugIn, and the driver API you come up with is your full responsibility. I mean, nobody is going to decide what kind of API goes through this driver; it's up to you. And that's why I was talking about 9: if you can validate all this stuff running on 9, your life will be much easier when you move to 10.

So the big no-nos on Mac OS X: as I say, you cannot read or write registers anymore, or even physical PCI memory, from within your component. It doesn't belong to your space, and the OS will not let you do that. The other point, which is less obvious when you're trying to bring up your hardware on X, is that QuickTime passes completion procs to some components -- mostly codecs, video digitizers, and all this stuff -- and these completion procs belong to the user space. They are within QuickTime, which is in the user space as well. So if you were calling these completion procs from your own interrupt handler on 9, everything was fine. You can no longer do that on 10. You're going to have to come up with a new mechanism that we'll talk about later.

And the last thing is that you cannot hold the CPU anymore. This used to be something that you could do from a component, on OS 9 at least. But this OS is a fully preemptive OS, so a thread will preempt you -- not only to call another component, but your own component will be called from another thread. So if you expect to hold the CPU to execute a couple of actions on your hardware, this is not doable from within your component anymore.

So how do you do all this stuff on 10? Well, only access your hardware in your driver, as I said. And you are going to have to create a thread, at least in your component, to call all these QuickTime completion procs, in order to stay in the same user space. How you do that: basically there are services from the OS which allow you to send messages via a Mach port, so your kernel driver will be able to wake up your user-space thread, and you will be able to call the QuickTime completion procs cleanly. So, a couple of things to know about this stuff: crossing kernel space is not totally free. I mean, it doesn't take that long, but it takes some time.
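A sketch of that wakeup pattern, using a pthread condition variable in place of the Mach port message: the user-space thread blocks until the (simulated) driver signals, then calls the completion proc from user space, where it is allowed to run. All names here are invented for the example.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  msg  = PTHREAD_COND_INITIALIZER;
static int frameDone = 0;        /* the "message" from the driver */
static int completionCalls = 0;

static void MyCompletionProc(void) { completionCalls++; }  /* user space */

static void *completion_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!frameDone)                   /* block waiting for the driver */
        pthread_cond_wait(&msg, &lock);
    pthread_mutex_unlock(&lock);
    MyCompletionProc();                  /* safe: we're in user space */
    return NULL;
}

/* What the (simulated) driver does when the hardware finishes:
   just post a message to wake the user-space thread. */
static void driver_signals_frame_done(void)
{
    pthread_mutex_lock(&lock);
    frameDone = 1;
    pthread_cond_signal(&msg);
    pthread_mutex_unlock(&lock);
}

int run_completion_demo(void)
{
    pthread_t t;
    pthread_create(&t, NULL, completion_thread, NULL);
    driver_signals_frame_done();
    pthread_join(t, NULL);
    return completionCalls;   /* 1 when the proc ran exactly once */
}
```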

So when you are bringing up your hardware, make sure that you minimize the number of calls that you make to your own driver, especially during the initialization process. I mean, don't come up with an API where you're going to set one bit per call; it's going to take forever to have your hardware ready.
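A toy illustration of why batching matters: counting simulated user/kernel crossings for a 16-register init sequence done one call per register versus one batched call. The driver entry points and the counter are invented for the example.

```c
#include <assert.h>

static int kernelCrossings = 0;

typedef struct { int reg; unsigned val; } RegWrite;

/* One crossing, many register writes. */
static void driver_write_batch(const RegWrite *w, int n)
{
    kernelCrossings++;           /* one trap for the whole batch */
    (void)w; (void)n;            /* a real driver would program hw here */
}

/* One crossing per write: the pattern to avoid at init time. */
static void driver_write_one(RegWrite w)
{
    kernelCrossings++;
    (void)w;
}

int compare_crossings(void)
{
    RegWrite initSeq[16];
    for (int i = 0; i < 16; i++) { initSeq[i].reg = i; initSeq[i].val = 1u; }

    kernelCrossings = 0;
    for (int i = 0; i < 16; i++) driver_write_one(initSeq[i]);
    int slow = kernelCrossings;  /* 16 crossings */

    kernelCrossings = 0;
    driver_write_batch(initSeq, 16);
    int fast = kernelCrossings;  /* 1 crossing */

    return slow - fast;          /* the saving from batching */
}
```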

The last thing is that threads are very cool as long as you understand what you are doing. But you should never forget that as soon as you have a thread on your component side, you're going to add some load on the OS side. So before going ahead and creating a new thread, think about trying to share threads between all the pieces of your component that you expose to QuickTime.
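A sketch of that sharing idea: reference-count a single shared worker across all the component instances you expose, instead of creating one OS thread per instance. The names and counters are illustrative only.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static int g_refCount = 0;        /* component instances using the worker */
static int g_threadsCreated = 0;  /* how many OS threads we really made  */

void component_open(void)
{
    pthread_mutex_lock(&g_lock);
    if (g_refCount++ == 0)
        g_threadsCreated++;       /* first instance: spin up the worker */
    pthread_mutex_unlock(&g_lock);
}

void component_close(void)
{
    pthread_mutex_lock(&g_lock);
    if (--g_refCount == 0)
        ;                         /* last instance: tear the worker down */
    pthread_mutex_unlock(&g_lock);
}

/* Open N instances and report how many worker threads were created. */
int threads_for(int instances)
{
    for (int i = 0; i < instances; i++) component_open();
    int made = g_threadsCreated;  /* stays 1, no matter how many instances */
    for (int i = 0; i < instances; i++) component_close();
    return made;
}
```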

Well, I guess that's pretty much it for our QuickTime for professional video session. We have a couple of other sessions running today and tomorrow, and the one you should definitely go to is the QuickTime Feedback Forum, which is usually pretty packed. So if you have any questions that you want us to answer, you should definitely be there, and be there early, because you might not be able to get into the room.

So if you have any questions about all this stuff we have been talking about in this professional video session, you should contact the address on the slide, and he will be able to get in touch with us. And if you need more detailed information about all the stuff that we've been talking about, you should definitely go to developer.apple.com/quicktime.

We have full documentation about what's new in QuickTime 5, and you will find in there all the stuff we were specifically talking about. There is also another place on this website, called the Ice Floes, where you have some explanation about all this rendering, the pixel formats, and the effects stuff that Tom has been talking about. If you didn't quite understand all the stuff that Tim was talking about -- YUV space, gamma, and all the issues behind that -- we do recommend reading the book from Charles Poynton called A Technical Introduction to Digital Video. There is a lot of information in there to understand why it's so painful to go through all this rendering process.

And the last thing is about QuickTime Live, an event this October in Beverly Hills, California. So if you do all this stuff with QuickTime, you should definitely go there. It's a chance for you to meet other third-party developers, it's a chance to meet all the QuickTime engineering crew, which usually goes over there, and it's a chance to hear about what's new in QuickTime. And that's pretty much it. Thank you very much. Thank you.