Digital Media • 57:31
This session shows new features of QuickTime for use in developing professional video applications and components. Topics include improved movie track editing using media sharing, gamma correction APIs, support of nested effects for real-time hardware, implementing multiprocessing support, and strategies for developing hardware components for Mac OS X.
Speakers: Tim Cherna, Jean-Michel Berthoud, Sam Bushell, Tom Dowdy
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper, it has known transcription errors. We are working on an improved version.
My name is Tim Cherna. I'm manager of the QuickTime Pro Video team at Apple. So we're going to talk about Pro Video. So why Pro Video? Well, it's good to have a team for Pro Video. QuickTime has a lot of customers, as you've probably seen either by seeing Tim's session this morning, or the interactivity session that was just before mine, or the broadcasting session that follows. So QuickTime has a lot of customers, but it's also the foundation of a lot of core high-end video technology, shipping in apps from both Apple and other companies such as Adobe and Media 100. So it has unique demands in that space.
The demands tend to be: it must have high quality, it must have high performance, and it has to work well with the hardware that you get. We have hardware now from companies like Matrox and Pinnacle and Digital Voodoo and Aurora, which work with QuickTime in the apps we talked about. And you have to have consistent results, because you're going to take this material and maybe go on air or make it into a movie. So it's very, very important. So that's why we're specializing in pro video, that's why we have a pro video team.
So what about you guys? This session is targeted towards developers who are writing video editing or video processing applications. It can be from the high end, it can be from just simple ones which can make the experience of using things like iMovie easier. It's also targeted towards codec writers who are writing either software codecs or hardware codecs.
and this will basically give you some extra information. So the things we're going to talk about today. We're going to talk about some improvements we've done in QuickTime 5, which makes your rendering experience better. We're going to talk about improvements for supporting hardware cards, such as the ones I talked about, and ways that you can take advantage of asynchronous operations and multiprocessing on our high-end Macintosh systems. And some future things we're going to do with our effects architecture. And finally, some help in migrating your video hardware towards OS X as a platform.
So, I'm going to talk about rendering. And when I talk about rendering, I'm really talking about taking some compressed material, decompressing it, typically applying some sort of effect to it, a video effect, let's say a blur, and then recompressing it back to the original format.
That's a typical workflow, or the experience inside a video editing app where you, let's say, wanted to do a cross-dissolve between two streams of DV and you would render it. So you decompress the two streams of DV, you combine them, you recompress the result back to DV, and then you can send it out over FireWire.
So the improvements we've done in that space are we've improved some gamma processing. We've also added a new pixel format called R408. And of course we've improved the DV codec, which is something that Tim talked about, and I'll show you some of the results we got there. So my favorite first topic is gamma.
And I'm going to give sort of a little overview of gamma. Basically, when I talk about gamma, I'm talking about the non-linearity of intensity reproduction, which is basically that you have an input value and you have an output intensity, and there's a relationship between the input value and the output intensity. And it's not always linear like the diagram at the right.
It has a curve. It has a power curve. And this relationship could apply to either the camera, a video camera, or it could apply to just a CRT monitor or maybe a LCD monitor. It can also apply to the system as a whole, for example, the Macintosh or your television.
So, the issue is that there's different gamma for the different systems that QuickTime deals with. We have video, which has a gamma which is established at 2.2, and the Macintosh, the gamma is established at 1.8, and Windows is basically using the native values of the CRT, which is 2.5.
So why is that a problem for video rendering? Because QuickTime knows that video such as DV is at 2.2, and it wants to make it look correct when we display it on the Macintosh, for applications such as iMovie when you're doing the preview on the desktop. So we do a gamma correction stage to bring the image closer to what it would look like if you had an NTSC monitor next to your Macintosh monitor.
And so that works really, really well, except it makes it a little bit more difficult to see what the image is going to look like. So this creates some problems for video rendering. I'm going to show you the gamma correction that I was talking about afterwards in my demo.
So, the solution that we've come up with is that we allow applications now to specify the gamma that they want. They can say, "I want you to give me what the source gamma was," or, "I want you to make it gamma 2.0," or, "I want you to make it gamma 2.2." So, not only can you specify what the gamma would be via some gamma APIs we've done, you can also find out what the gamma actually was, which is really useful. So, let's see how that works.
And codecs can specify the gamma that they prefer. They can say, "My private or custom compression format has a gamma of, let's say, 2.2," and therefore, when you decompress it, if you ask for the source gamma, it'll be 2.2. So there's no more guessing. You can basically choose the gamma that you want to process your video rendering in, and you get what you want.
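As a rough sketch of what the application side of that might look like: this is a minimal example assuming the QuickTime 5 pixmap gamma calls and constants as I recall them from ImageCompression.h (QTSetPixMapHandleRequestedGammaLevel, QTGetPixMapHandleGammaLevel, kQTCCIR601VideoGammaLevel); MyRequestVideoGamma is just an illustrative helper, so verify the exact names against your headers.

#include <ImageCompression.h>
#include <QDOffscreen.h>
#include <FixMath.h>

// Ask QuickTime to deliver decompressed pixels at video gamma (2.2) so a
// decompress/recompress cycle involves no gamma conversion at all, then
// check what we actually got. kQTCCIR601VideoGammaLevel is 2.2 as a Fixed;
// if the constant isn't in your headers, X2Fix(2.2) expresses the same
// thing, and kQTUseSourceGammaLevel asks for whatever the source was.
static OSErr MyRequestVideoGamma(GWorldPtr offscreen)
{
    PixMapHandle pm  = GetGWorldPixMap(offscreen);
    OSErr        err = QTSetPixMapHandleRequestedGammaLevel(pm, kQTCCIR601VideoGammaLevel);

    if (err == noErr) {
        // After decompressing into this pixmap, you can ask what gamma
        // the pixels actually ended up in.
        Fixed actual = QTGetPixMapHandleGammaLevel(pm);
        if (actual != kQTCCIR601VideoGammaLevel) {
            // The codec couldn't honor the request; decide whether to
            // gamma-convert yourself or accept the shift.
        }
    }
    return err;
}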
So, pixel formats. A key thing about rendering is you go from a compressed format to some sort of pixel format you're going to do the rendering in. And the typical choices out there would be RGB or YUV. RGB has its advantage in that it's native for graphics; a lot of people have used it for many years. It also has an optional alpha channel, so you can use it to do compositing fairly easily.
YUV has the advantage that it's native for video, and typically it's stored in a 4:2:2 format, which means that there's two samples of Luma for every chroma pair. The U and V refer to the chroma and the Y refers to Luma. So it's sub-sampled, which means it's good for storing. It kind of represents what your eye can see. In other words, your eye is more sensitive to luma compared to chroma.
But it's basically, well, it's hard for rendering, which is kind of my next slide. And of course, there's no alpha channel. So the problems with RGB and YUV: RGB has the extra color space conversion to go back and forth from YUV, since the data would typically be native YUV for video. It also can clamp the video, because the RGB color space is smaller than the YUV space.
So with YUV, the problem is that there's no alpha channel, so if you wanted to do compositing, you're kind of out of luck. It's not really friendly for that. And it's also hard to process because it's sub-sampled, so if you just wanted to move your YUV image by one pixel, all of a sudden you have this problem because you have to move the chroma and luma around, or the chroma around, because it was sub-sampled.
And also, the standard black value for YUV video is 16, so every time you do an operation on YUV, you're busy adding and subtracting 16, and we didn't quite like that. So QuickTime came up with R408, and R408 is really nice. It's video-friendly, which means it's YUV-based. It's not sub-sampled.
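For reference, allocating an offscreen in this format is just a QTNewGWorld call. A small sketch, assuming the 'r408' four-character code is exposed as k4444YpCbCrA8RPixelFormat in your headers; if the symbol differs, the four-char code itself is what matters.

#include <ImageCompression.h>
#include <QDOffscreen.h>

// Allocate an offscreen buffer in the R408 pixel format ('r408': full-range
// 4:4:4:4 Y'CbCr plus alpha) to use as the rendering intermediate.
static OSErr MyCreateR408Offscreen(const Rect *bounds, GWorldPtr *outGWorld)
{
    return QTNewGWorld(outGWorld,
                       k4444YpCbCrA8RPixelFormat,   // 'r408' (constant name assumed)
                       bounds,
                       NULL,                        // no color table
                       NULL,                        // any GDevice
                       0);                          // default flags
}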
[Transcript missing]
All right. So what I'm going to show is a little application And what I'm doing in this application is I'm taking a DV clip and I'm decompressing it to an off-screen in a pixel format that I can choose. And I'm taking the resulting decompressed image and recompressing it back to DV.
And I take that DV frame and I re-decompress it and keep doing that. In my tests I do it 60 times. So I can basically see the results of a multi-generational render. So I can see the losses that we have with the DV codec.
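The shape of that loop, in code, is roughly the following. This is only a sketch of the kind of thing the demo app does (error handling and buffer management elided), using the classic Image Compression Manager calls; kDVCNTSCCodecType is the standard 'dvc ' DV codec type.

#include <ImageCompression.h>
#include <QDOffscreen.h>

// Decompress the DV frame into the chosen offscreen, recompress it back to
// DV, and feed the result in again for the next generation.
static void MyMultiGenerationTest(ImageDescriptionHandle dvDesc, Ptr dvData,
                                  GWorldPtr offscreen, int generations)
{
    PixMapHandle pm = GetGWorldPixMap(offscreen);
    Rect   bounds  = (**pm).bounds;
    long   maxSize = 0;
    Handle recompressed;
    int    i;

    GetMaxCompressionSize(pm, &bounds, 0 /*best depth*/, codecNormalQuality,
                          kDVCNTSCCodecType, anyCodec, &maxSize);
    recompressed = NewHandle(maxSize);
    HLock(recompressed);

    for (i = 0; i < generations; i++) {
        LockPixels(pm);
        // Generation N: decompress into the offscreen (R408, 2vuy, RGB...)
        DecompressImage(dvData, dvDesc, pm, NULL, &bounds, srcCopy, NULL);
        // ...then recompress the offscreen back to DV.
        CompressImage(pm, &bounds, codecNormalQuality, kDVCNTSCCodecType,
                      dvDesc, *recompressed);
        UnlockPixels(pm);
        dvData = *recompressed;   // the next generation starts from this frame
    }
    HUnlock(recompressed);
    DisposeHandle(recompressed);
}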
So let me just open up a file. This is Andrew, Kevin Marks' son, and he's holding a ball which he's moving, so you can see over here there's a lot of motion, and that has interesting effects on the DV compression and decompression that we were testing. And you can see that there's a lot of detail in his hair, so he's our test clip for today.
And the first thing I wanted to show is why we're using gamma, why we gamma correct DV. So this is a source clip, it's DV, and right now we're gamma correcting it so that it looks good on a Macintosh monitor, and I can turn that off, and now it's off and it looks a lot brighter. And now what I'm doing is I'm actually using the gamma APIs to specify what I want the gamma of the display to look like. Actually not the gamma of the display, but of the pixmap, the port's pixmap.
So I can switch it back to video gamma, and it's not gamma correcting, so it looks brighter, too bright. I can set it back to default gamma, it looks dark. Natively, it's going to use the default gamma, and we're going to show how you can use the gamma APIs to fix up your rendering.
So the first test I'm going to do is a test where I render the clip through 2vuy. And when I did that, you'll see the resulting clip degrades every step quite a lot, because I didn't actually set that I wanted to use 2.2 as my gamma. It's converting every step on the decompression, but it's not properly converting on the compression. So now I can do the same test on 2vuy.
And now you'll see that it looks basically perfect. I can play this and it looks perfect. I can scrub through it. And you can see that I've used the gamma API to say, please decompress this at 2.2, so that when I recompress it, there's no gamma shift. So I've avoided any gamma processing at all in that rendering cycle.
I want to talk a little bit about the pixel format that I've chosen to use. So you're going to see two impacts of me choosing R408 over RGB. And the first impact is performance and the second one is quality. So I have two seconds of video that I've just rendered, basically done this multigenerational test to. One frame and I've done 60 frames.
And so my two seconds of video took 2.4 seconds to process on this machine through RGB. And so that's just a little under real time. And you can see that if I go to the last frame you can see some artifacts appear because of the losses going through RGB. And so that's not good.
So we did the same test going through R408, which is the YUV format. And the first thing that is pretty impressive is it takes 1.4 seconds to do. So that's faster than real time to do the decompression and recompression. And the other thing that's really notable about it is the quality. So I can't see that image change. In fact, it really doesn't.
So that was all shown on QuickTime 5. Now with QuickTime 4.1.2, the first thing, I can read it, I guess you can't read it, but it took 5.8 seconds to do the same test, versus 1.3 seconds to do the test on QuickTime 5. So you can see the improvements in performance. The other thing is, as we play it, you can see that there are a fair lot of artifacts; anyway, QuickTime 5 is much, much better.
So, let me just quit these things so that the next demos are good. And that's pretty much all I'm going to talk about. Can we go back to the slides please? So now I'm going to ask Jean-Michel Berthoud to come up and talk about some improvements in hardware support.
Hi, my name is Jean-Michel Berthoud and I work in the QuickTime Pro Video Group. And I'm going to talk about a couple of features that we have added in QuickTime 5.0 in order to improve hardware support. So the first thing that needs to work is the remote control.
Okay, that's called hardware improvement, right? Okay, so the first thing that QuickTime used to do is to assume that all codecs can decompress right away. And for a software implementation, it's pretty easy to understand that you can start decompressing whenever you want. When you have to deal with a piece of hardware, it's much more difficult.
Usually, I mean, third-party developers have managed to deal with this issue, because the time it was taking to set up their first decompression was not that much, actually. But during the development of QuickTime 5.0, we ran into some third parties who were trying to bring up their hardware and support QuickTime.
And this time to decompress the first frame was quite huge. And what was happening is that QuickTime was getting upset, because it was totally unable to understand that the first frame was going to take a while to show up on screen, but the next ones after that would be fine. That's a concept that we didn't have before 5.0. I'm trying again.
Okay, that didn't work. So the solution was to make QuickTime aware of this latency. And the way you report this latency is by using a new API that we have put on the codec side, which is called ImageCodecGetDecompressLatency. So basically your codec reports its internal pipeline duration and makes QuickTime aware that it's going to take you a long time to start decompressing the first frame. The next ones, which are coming after that, will be fine.
So as soon as QuickTime uses a codec which reports latency, what internally we're going to do is start your video track earlier, and the movie will start when your hardware pipeline is totally full, so you have a chance to decompress the first frame at the right time.
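On the codec side, reporting that latency might look something like the sketch below. I'm assuming the TimeRecord-based signature that ImageCodecGetDecompressLatency has in the QuickTime 5 headers, and the globals parameter is a placeholder for your component's own storage; check ImageCodec.h for the exact declaration.

#include <ImageCodec.h>

// Handle the GetDecompressLatency request in a hardware codec component.
// Here we claim a 4-frame pipeline at 29.97 fps, expressed as a duration
// of 4 * 1001 units in a 30000-units-per-second time scale.
pascal ComponentResult MyCodec_GetDecompressLatency(void *glob, TimeRecord *latency)
{
    (void)glob;                     // component globals unused in this sketch
    latency->value.hi = 0;
    latency->value.lo = 4 * 1001;   // four frames' worth of units
    latency->scale    = 30000;      // 30000/1001 == 29.97 fps
    latency->base     = NULL;       // a plain duration, not tied to a time base
    return noErr;
}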
So that's the new latency support in QuickTime 5.0. And we also extended this latency mechanism to audio tracks. The concept is identical, and QuickTime talks to audio devices through the sound output component. So we've added a new selector called siOutputLatency, used via SoundComponentGetInfo, and it's the same thing there: if your audio device has some internal pipeline, you just need to report that to QuickTime and it will offset the audio track as well. So QuickTime can deal with different latency between an audio track and a video track. The only assumption that we still have is that all the video codecs in a video track need to report the same latency. So, another assumption that QuickTime had before 5.0:
When you have a system which has multiple codecs able to decompress the same kind of data, we needed to choose one. Right? So the one we chose, of course, was the one which was the fastest, because every codec is supposed to report its speed. I mean, internally, if you had two DV codecs installed on your system, we were getting the speed for each of these codecs, and the one we used was the one claiming to be the fastest. Of course, this scheme assumes that these codecs don't lie, right? Well, they do.
It's too bad, right? But they don't have that many other options. Basically what's happening is that you pay for this piece of hardware and you stick it in your system, and they want to be the one that QuickTime is going to use by default. And the only way for them to make that happen was to claim that they are faster than, for instance, the software-only implementation.
But it was getting worse, because when you start having two pieces of hardware in the same machine, everybody was trying to look at the other guy's codec, figure out its speed, and claim that they were faster than the other one, right? Of course, that's not really a viable solution, and we used to call that the codec speed war internally, where everyone was trying to claim that they were faster than the other one. So what we did in 5.0 was to finally let the application decide which codec they want to use when. So we did end this codec war.
At least we hope so. So the way an application can specify this preferred codec is to use this new API called MediaSetPreferredCodec. By using that, you provide QuickTime a list of codecs you prefer to use. Internally, what QuickTime is going to do is still sort all of them by speed. At the end of the sort, what we're going to do is put the codecs you've given us in this list at the top of the list.
So it's definitely a much better solution than the speed information, which was the only information we had before in QuickTime. And it makes application setup much easier when they decide to set up a user project, trying to understand which piece of hardware or software they want to use.
You might have your system set up, for instance, doing FireWire DV input, and have another piece of hardware which is capable of sending DV data to an analog output, and you really want to let the user and the application be able to select which one they want to use at any point.
So just one more thing about hardware codecs. If your hardware has implemented a custom compression type, what's happening is that when you create your content with this codec in your movie, if your end user has the hardware installed in their system, you're fine. They can play back this movie.
If you try to have this content play on a system which doesn't have your hardware, then you need to provide a software implementation of your hardware codec, right? Well, the problem is that the user has no idea what they're looking for when they run into a movie like that. They don't know where the content came from, they don't know which company makes which codec, so it's quite a bad user experience for them.
So the solution is to use our new mechanism in QuickTime 5 to do this automatic component download. And all that you have to do, if you have a custom hardware codec, is register your software implementation with Apple, and the user will get it directly from our own server as soon as they run into your content. So that's pretty much it about hardware and QuickTime 5.0. Let's talk about MP and QuickTime on Mac. Thank you.
Thank you Jean-Michel. My name is Sam Bushell and I'd like to take a little time to talk to you about QuickTime on multiprocessor Macintoshes. Multiprocessor Macintoshes are great, right? And they're great because they have more processors. If you have more processors than the other guy then you win! Well, maybe. In practice, people want to buy a machine with two processors because they'd like everything to run twice as fast. It turns out, if you're an engineer, you probably have some idea of why it doesn't quite work so well.
And so as engineers, we have to do a little bit of work to make this hope satisfiable. Now, sometimes the user is running more than one application at the same time, and maybe several of those applications are doing compute-bound tasks. In that case, on Mac OS X, we automatically get symmetric multiprocessing that'll schedule and run all of the applications that have work to do. And so that side of the problem is pretty much sorted out for us now on 10. But sometimes only one application is doing any work.
In that case, we have to do a bit more work to divide that work up across the available processors. Now, in the QuickTime case, there are a bunch of different bits of work to be done on the system, but the majority of them tend to be done by codecs. And so the work that we've done with QuickTime to support multiprocessor computers and multiprocessing is primarily focused on making the codecs run faster.
So it's a team effort. In QuickTime generally, if you have an application that uses QuickTime and there are some codecs involved, the application calls QuickTime, QuickTime calls some component, the codec runs for a while doing some work, and when it's done it returns to the application. So let's look at how this team effort might be made faster to take advantage of a dual-processor computer.
If you're lucky, you might be able to take the work that the codec is doing and divide it evenly across two processors. If you're not so lucky, that might not be applicable, but it might be possible to run that decompression or that codec work asynchronously and let the application do some other work, maybe some other decompression for the next frame, at the same time.
In more detail, this is the first approach. If you can split up your work across a bunch of multiprocessor tasks, then they can be run all at the same time. And when they're all done, they all return. So this is still a synchronous API. The application asks you to do the work, and when you're done, you return. And you've taken up all of the CPUs available in the meantime.
This is the best situation because the applications don't need to be revised. They can keep using those synchronous APIs. And there are high performance gains possible, as we've demonstrated with the DV codec. The trouble is it's harder. And it's not something you can easily do with all algorithms. Sometimes step one has to be done before step two, and step two has to be done before step three. And so you can't do steps one, two, and three all at the same time. In those situations you need to take a step back.
Re-evaluate how you'd like to go, and maybe it's okay to run the entire job that you want to do in a single MP task asynchronously from what the rest of the application is doing. This is a smaller change to the codec, and in fact it can be a really small change if QuickTime can help you out.
The trouble is, it doesn't actually make that task any faster. It just takes as long. Maybe if the application has something else to do, then it's a win overall. But the corollary of this is that in order to take advantage of this situation, the applications do need to be revised and maybe restructured to take advantage of this using asynchronous APIs. So in QuickTime 5, we've used both approaches to taking advantage of multiprocessor computers.
We have revised the DV compressor and decompressor, as you've probably heard a number of times by now, to split up their work across the available processors in the computer. We've revised some of the other compressors and decompressors in QuickTime to be able to run asynchronously. And I have a little demonstration of this that I'd like to show you, which is over here on demo four.
Now this is an application that I wrote for debugging and analysis purposes, but I'd like to use it here as a technology demonstration to give you an idea of something that might be an applicable use of both of these kinds of technologies, both the method A and method B for splitting up work on a dual processor computer.
What we have back here is a dual-processor 500 megahertz G4, and my application (you probably can't see all of the text on the screen, it doesn't matter, it's not very interesting) has a bunch of different bit rates listed. And we have a DV camera here, and it's pointed at you, and this is what you look like.
[Transcript missing]
Modem rates. So you might have the first, the 12 kilobits per second of video for a 28k modem, 24 kilobits per second of video for a 56k modem, something towards 80 kilobits per second for a dual ISDN or some other hundred-odd kilobit connection, and something higher as well at full 30 frames per second. But if you're close enough, then you can probably read that we're not currently achieving 25 frames per second. Which makes this look like a foolish demo, except that I can point out that the reason it's lagging behind is because it's showing you the answers.
And if I turn off the preview so that you don't get to see yourself on the screen, then we do reach 30 frames per second pretty efficiently. And there's actually quite a bit of CPU left available on the machine. So what this demonstrates is that you could take QuickTime 5.0.1 and a dual-processor 500 megahertz G4, prepare a few of these multiple compressed video streams, and then broadcast them to a streaming reflector, which will go out to a wide range of people. All on one machine. That would be a useful product. And that's my demo.
So among the developers here, some of you are probably writing applications. Those of you who are, the thing you can do on multiprocessor Macintoshes is you can call QuickTime using the asynchronous compression APIs instead of synchronous ones. If you're a codec author, then you might want to accelerate your codec to take advantage of multiprocessor machines, either using approach A or approach B. It's up to you.
Let's have a look at these asynchronous compression APIs first. QuickTime has always had an asynchronous mode for the Image Compression Manager's compression API. Basically, there are lots of parameters that describe what you want to compress and how you want to compress it and so forth, and the very last parameter is a completion routine.
And this, you can pass nil, in which case it won't return until it's done, or you can pass a completion proc and refcon, in which case it is allowed to return immediately, and when it's actually done with the compression activity, it will call your callback routine, and you'll know. This is safe to use even if a codec doesn't actually support asynchronous compression; in that case, it will return after calling your callback, after everything is done.
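A sketch of what that pattern looks like from the application side, using CompressSequenceFrame and an ICMCompletionProcRecord (MyFrameInfo and MyHandleFinishedFrame are placeholders of mine, not QuickTime API):

#include <ImageCompression.h>

typedef struct MyFrameInfo MyFrameInfo;                               // placeholder
extern void MyHandleFinishedFrame(MyFrameInfo *frame, OSErr result);  // placeholder

// QuickTime calls this (possibly at deferred-task time) when the
// asynchronous compression of a frame has actually finished.
static pascal void MyCompressionDone(OSErr result, short flags, long refcon)
{
    if (flags & codecCompletionDest)
        MyHandleFinishedFrame((MyFrameInfo *)refcon, result);
}

static OSErr MyQueueFrameForCompression(ImageSequence seq, PixMapHandle src,
                                        const Rect *srcRect, Ptr outData,
                                        MyFrameInfo *frame)
{
    static ICMCompletionUPP  sDoneUPP = NULL;
    ICMCompletionProcRecord  completion;
    long   dataSize   = 0;   // note: not valid until completion for an async codec
    UInt8  similarity = 0;

    if (sDoneUPP == NULL)
        sDoneUPP = NewICMCompletionUPP(MyCompressionDone);   // create the UPP once

    completion.completionProc   = sDoneUPP;
    completion.completionRefCon = (long)frame;

    // Passing a completion record lets the codec return right away and call
    // MyCompressionDone later, when the frame is really done.
    return CompressSequenceFrame(seq, src, srcRect, 0 /*flags*/, outData,
                                 &dataSize, &similarity, &completion);
}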
The trouble was that there was a missing piece here, because a lot of people who want to write compression applications want to use a higher-level service called the standard compression component. This is the component that provides a nice friendly dialog box that you've probably seen a hundred times. And that component only had a synchronous API. So in QuickTime 5 we've added an API analogous to the asynchronous compression in the ICM. I've said the word compression a lot of times in that slide.
So if you want to read more about that, I recommend looking at the QuickTime 5 documentation. There's lots of good stuff there. If you're a codec author, then as I said, you have the two choices. You can accelerate your codec by calling the multiprocessing APIs yourself: creating some tasks, and when we call you to do some work, splitting that work up across your tasks, then waiting until they're all done before returning, or calling the completion routines. That's great. If you do that, you're pretty much on your own.
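If you go that route, the pattern with Multiprocessing Services looks roughly like the sketch below: slice a frame by rows across the processors and stay synchronous to the caller. MyProcessRows stands in for your actual per-row kernel, and a production codec would keep its tasks alive between frames rather than creating them per call.

#include <Multiprocessing.h>

// Placeholder for your actual per-row kernel (filter, codec slice, etc.).
extern void MyProcessRows(void *rowBase, long rowBytes, long rowCount);

typedef struct {
    void      *rowBase;     // first scanline of this slice
    long       rowBytes;
    long       rowCount;
    MPQueueID  doneQueue;   // the task posts here when its slice is done
} MySliceWork;

// Entry point for each MP task: process the slice, then notify the queue.
static OSStatus MySliceTask(void *param)
{
    MySliceWork *work = (MySliceWork *)param;
    MyProcessRows(work->rowBase, work->rowBytes, work->rowCount);
    MPNotifyQueue(work->doneQueue, work, NULL, NULL);
    return noErr;
}

// Split one frame across the available processors and wait for every slice
// to finish, so the caller still sees a plain synchronous call.
static void MyProcessFrameMP(void *baseAddr, long rowBytes, long height)
{
    long        nTasks = MPProcessorsScheduled();
    MPQueueID   doneQueue;
    MySliceWork work[8];
    long        i, rowsPerTask;

    if (nTasks > 8) nTasks = 8;
    if (nTasks < 1) nTasks = 1;
    rowsPerTask = height / nTasks;
    MPCreateQueue(&doneQueue);

    for (i = 0; i < nTasks; i++) {
        MPTaskID task;
        work[i].rowBase   = (char *)baseAddr + i * rowsPerTask * rowBytes;
        work[i].rowBytes  = rowBytes;
        work[i].rowCount  = (i == nTasks - 1) ? height - i * rowsPerTask
                                              : rowsPerTask;
        work[i].doneQueue = doneQueue;
        // Creating tasks per frame keeps the sketch short; real code would
        // create long-lived tasks once and feed them work each frame.
        MPCreateTask(MySliceTask, &work[i], 64 * 1024, kInvalidID,
                     NULL, NULL, 0, &task);
    }
    for (i = 0; i < nTasks; i++) {
        void *p1, *p2, *p3;
        MPWaitOnQueue(doneQueue, &p1, &p2, &p3, kDurationForever);
    }
    MPDeleteQueue(doneQueue);
}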
Although there are some pitfalls I'm going to warn you about in a second that you should be careful to avoid. If you take approach B, of running the entire activity asynchronously on an MP task and then calling the completion routine when you're done, and you're writing a decompressor that's based on the base codec, then QuickTime can help.
All you need to do is write a little bit of code that promises that your DrawBand call is MP safe. Since DrawBand calls for video codecs generally have to be interrupt safe, because they might be called at deferred task time, this isn't a big leap. You really shouldn't be calling any other APIs besides other functions defined in your own sources in your DrawBand call. So generally those things, as long as they're PowerPC native, are also safe to run in MP tasks.
So if you write a little bit of code that promises that everything's cool there, then we will run you in an MP task, if applicable, if we're on a multiprocessor machine and trying to do an asynchronous decompression. So there are three pitfalls I'd like to point out, so you have them in mind and can avoid them moving forward.
I've said that you shouldn't call anything in DrawBand. Some of you who have read the documentation on Apple's multiprocessing APIs will say, no wait, I've read about this: you're allowed to call all of these memory allocation routines, and in fact in Mac OS 9.1 and later you're allowed to call all sorts of other things like the file system. Well, you're not allowed to do that from a codec's DrawBand routine.
The reason is that the remote procedure call that implements some of these allocations and other calls can only be serviced when someone in the blue task calls WaitNextEvent or one of its friends. If that doesn't happen, then you can deadlock. And sometimes codecs can be called in situations where we can't let someone have a chance to call WaitNextEvent. So if you need to allocate memory, do it as before, before doing anything in DrawBand.
A second pitfall to avoid: avoid making your own calls to the MP APIs entirely in a codec, unless you're running on Mac OS 9.1 or later. The reason for this is that, if you're unlucky, any page faults you hit can only be serviced in one of those WaitNextEvent calls. And as I said, they might not happen. This is fixed quite nicely in Mac OS 9.1, and it's not a problem at all on Mac OS X.
Finally, if you're writing a decompressor and you divide your work up into MP tasks, you should be careful that you don't have one of them write to one part of the screen and the other write to another part of the screen, because you could see unpleasant tearing artifacts, which can be annoying. Well, if you do that, you'll see. That's all for me. I'd like to hand over now to Tom Dowdy. Thank you.
Thank you. So I'm yet another member of the professional video team, and what I'm going to talk to you today about is some upcoming changes we've got with QuickTime effects architecture. Now up to this point we've been talking about things that are available in QuickTime 5, you could take advantage of today.
We're going to be showing off some stuff that's going to be coming in future versions of QuickTime, but the reason we're going to talk about this today is that some of these features you'll be able to get ready for in advance of the code actually being available. This is of interest to you either if you're a developer that takes advantage of the built-in QuickTime effects, or if you're a developer who creates QuickTime effects yourself.
We're going to be talking about two new optional specifications that effects can provide or applications can take advantage of to allow the grouping of effects into classifications or groupings that make sense either for the user or for you in the application. We're also going to talk about a feature called effect presets, which allows an effect that has a large, complicated set of parameters to provide the user with a very simple user interface for getting to them. And that's enough of that. Let's do a demo right away.
Can I have demo 4 please? So there's no point in showing you something new without showing you what was there before. So this is the existing effects dialog that's been provided since QuickTime 3. It's a standard way for applications to get to parameters and features of the QuickTime effects.
As you can see, effects can have a large number of various parameters, and it's nice to have a standard way to provide a user interface for this. But another thing you might notice is that we do tend to have an awful lot of effects here, and a big, long scrolling list is no fun for anybody, particularly if you can't make it any bigger because you can't resize the dialog or anything else. Well, let's get rid of that and solve the problem.
So one of the first things you'll notice is that the list on the left-hand side here has been grouped into classes of effects. Another thing you'll notice is that we've got plenty of screen real estate now. So let's just make that dialogue nice and big and widen that on out. We've got plenty of room now. Yay.
We're in the 90s. So what a user can do is choose the particular kinds of effects they might be wanting to do. So for example, filtering: they can see the effects that are classified as filters. Let's just go ahead and pop these all open. That will populate the whole dialog there.
This is information that's provided by the effects and is available both to this particular dialog and also to your application. One of the other things I should point out, as we go along here, is that you might see some other features that aren't exactly in the list I originally had, for example the resizing and the regrowing of the split bars. You can pay attention to those and decide for yourself whether or not we're actually going to do anything with them. We talked about presets.
So here's an example of an effect that's using the new preset features. This is the slide effect, and it provides two presets for a slide from the top or a slide up from the bottom. The user can simply choose the preset that they want. They've got a picture that more or less shows them what's going to happen and a name that probably helps them as well. What's actually behind each of the presets is the full list of parameters that are available to the effect.
The user can go see that as well, and see that this particular slide is going with an angle from zero to zero, which is a slide from the top. Users can still go to this optional parameter section here, with the custom setting, and make a change, for example set the starting angle here and then set the ending angle to something nice and big. And now we've got a slide that's kind of spiraling around there.
Another example of an effect that uses the presets is a new channel composite effect. Who knows whether we'll ship this or not? I'm just showing you. This is an effect that performs a combining of channels from multiple sources to produce a new source. It's often done when you have mattes that have been pulled from video, particularly in the professional video or film market, and you want to combine them together to obtain an alpha track that you're then going to alpha or composite with other tracks. When you do this, the mattes are sometimes pulled positive, sometimes pulled negative. The actual alpha value may already be in the alpha channel. It might not. It might be down in the RGB values. So we provide mechanisms for selecting these basic options that users would need.
But if they want to, hidden behind the presets are all the actual parameters that are used. So, for example, you can pull the values from different channels and different things, and you get very strange combinations. Once again, there's no reason for the effect to have been written, with this limited set of presets that we see here. But those presets are most commonly used, so that users who don't need the more crazy features or optional features of the effect don't need to be concerned with them. And I think that's all I have to show here. That's it. So let's go back to slides.
[Transcript missing]
With the major class, effects have been divided or classified as to what their function is, in this sense, and you can limit the scope of what the user has to choose between. Like most of the options that are in effects, the major class is defined by an atom that's placed in the effect description container. Now you see here listed the class type and ID for this and the values that we provide.
Now in the case of the major class, because applications are essentially going to be hard-coding themselves to certain classifications for the major class, this list is rigid and has to be defined by Apple and agreed upon both by the components that implement it and the applications that want to use it. So we've defined this list here, which you can make use of. Now if your effect doesn't define what its major class is, it's going to be grouped into miscellaneous, which may or may not be what you want.
So, if you're an effect developer creating effects, you're going to be wanting to add this atom. If you're an application developer that needs to limit the effects the user sees to make the user's experience easier, you can take advantage of this to do so. Effect Minor Class is used for the UI grouping, like I said, and that was, just to reiterate, that was used to make the twist-down triangles in the dialogue you saw there. Once again, no surprise, it's an atom that's placed in the effect description that describes the minor class.
You see here the atom types and IDs that are used for that, and the contents is one of a number of values that Apple has defined here. We welcome input from third-party developers; we've already solicited some as to what types of groupings you would like to see.
And for any of the standard ones you see here, Apple will automatically interpret those and provide the strings to display to the user within the standardized dialog. So here's some, and here's some more. And here's some more. And once again, if you don't tell us what kind of minor class you are, then you're miscellaneous.
If you aren't happy with these, unlike the major class, the minor classes can be extended by the effect. You can supply a custom string that corresponds to your minor class's name. The types and IDs of that atom are specified here. And that string will then be used in place of the OS type for your minor class, which you probably wouldn't want to see in a scrolling list.
If you, however, specify a minor class that's one of the standardized ones, we'll be supplying a string for that, so it's not something you need to worry about.
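Mechanically, adding these classification atoms from an effect component is just a standard QT atom container edit. Since the final atom types, IDs, and class values weren't nailed down in this preview, every four-character code below is a placeholder; only the QTInsertChild mechanics are existing API.

#include <Movies.h>

// 'mjcl', 'mncl', 'filt', and 'blur' are placeholders for whatever Apple
// defines when this feature ships. Remember that atom container data is
// stored big-endian; on a little-endian build you'd EndianU32_NtoB the
// OSType values before inserting them.
static OSErr MyAddEffectClassification(QTAtomContainer effectDescription)
{
    OSType majorClass = FOUR_CHAR_CODE('filt');   // e.g. "filter" (placeholder value)
    OSType minorClass = FOUR_CHAR_CODE('blur');   // e.g. "blurs"  (placeholder value)
    OSErr  err;

    err = QTInsertChild(effectDescription, kParentAtomIsContainer,
                        FOUR_CHAR_CODE('mjcl') /*placeholder type*/, 1, 0,
                        sizeof(majorClass), &majorClass, NULL);
    if (err == noErr)
        err = QTInsertChild(effectDescription, kParentAtomIsContainer,
                            FOUR_CHAR_CODE('mncl') /*placeholder type*/, 1, 0,
                            sizeof(minorClass), &minorClass, NULL);
    return err;
}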
[Transcript missing]
And with that, I'm going to turn back over to Jean-Mi, who's going to talk a little bit about hardware and OS X.
I'm back again. So apparently somebody figured out that with my first run of the slide, without any graphics and without any demo, I couldn't make the whole room fall asleep, so they gave me another chance this time. So we're going to talk about hardware again, but this time related to Mac OS X.
And what I'm going to try to do is explain to you, if you are coming from the Mac OS 9 world only, and you have been doing stuff that was totally legitimate to do (at least nobody was preventing you from doing it), how you should move your hardware component to be able to run on Mac OS X.
So first, let's get the bad news out of the way. You can no longer do direct access from your component to your own hardware. That's definitely something you could do on 9 and can no longer do as soon as you run on OS X. You need a driver layer abstraction. Before, and when I say before it's mostly on 9, I mean you could make all these accesses from your own component, because 9 basically had a flat address space where everybody could access anything, anywhere in the entire system.
So a driver is no longer optional on 10. And the last bad news is that bringing up your own hardware and debugging it is a little bit more complicated on 10, because basically you're going to have to live with two different pieces. One is going to be in the kernel space, which is your driver, and you're going to need some kind of debugging tool for it. And the other one is the component itself, which lives in the user space, and you have a different kind of tool to debug that. So bringing up your stuff is going to be a little bit more complicated. So let's talk about the good news now.
Well, if you already have an existing driver layer on 9, or at least if you have a library that isolates all your hardware access, I'd say you're pretty much all set. All that you're going to have to do is move this piece to 10, and your component should be up and running.
If you don't have one today, even on 9, we do recommend that you go through the exercise on 9 before you move to 10, because not only will you be able to make sure that you've isolated all your hardware access, but on top of that, moving forward, you'll be able to maintain the same version for 9 and 10, which is something your customers will appreciate. And the last thing is that when you're done, you will probably never want to come back to 9. I mean, the memory protection scheme built into 10 is definitely going to help you figure out all the issues you have been fighting over the years.
So what's the Mac OS X driver model? Well, as I said before, this driver belongs to the kernel space, and unfortunately components live in the user space. If your hardware is based on an existing family like USB or FireWire, what you should do is provide a driver using this family, so the stuff you have to write in your driver is much smaller than bringing up a full driver implementation.
So if you deal with this kind of device, you should take advantage of what OS X provides in terms of families. They help you with a bunch of stuff. If you have a very high-end system that OS X has not been able to model, right? We don't really know what you are doing there.
It's way too complicated to put that on the OS side. Then you don't have that many choices. You have to implement an IOKit library or a CFPlugIn, and the driver API you come up with is your full responsibility. I mean, nobody is going to decide what kind of API goes through this driver; it's up to you. And that's why I was talking about 9: if you can validate all this stuff running on 9, your life will be much easier when you move to 10.
So the big no-nos on Mac OS X, right? As I said, you cannot read or write registers anymore, or even physical PCI memory, from within your component. That doesn't belong to your space, and the OS will not let you do it. The other point, which is less obvious when you're trying to bring up your hardware on X, is that QuickTime passes some completion procs to some components, mostly codecs, video digitizers, and all this stuff. And these completion procs belong to the user space.
They are within QuickTime, which is in the user space as well. So if you were calling these completion procs from your own interrupt handler on 9, everything was fine. You can no longer do that on 10. You're going to have to come up with some new mechanism, which we'll talk about later.
And the last thing is that you cannot hold the CPU anymore. This used to be something that you could do from a component, on OS 9 at least. But this OS is a fully preemptive OS, so a thread will preempt you, not only to call another component, but your own component will be called from another thread. So if you expect to hold the CPU to execute a compound action on your hardware, this is not doable from within your component anymore.
So how do you do all this stuff on 10? Well, only access your hardware in your driver, as I said. And you are going to have to create a thread, at least in your component, to call all these QuickTime completion procs, in order to manage to live in the same user space. How you do that: basically there are some services from the OS which allow you to send messages via Mach ports. So your kernel side will be able to wake up your user-space thread, and you will be able to call the QuickTime completion procs safely in there.
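Here is a rough sketch of that shape. MyDriverConnection and MyWaitForDriverCompletion are placeholders for whatever your user client exposes, and it assumes the codec fires its completions through ICMDecompressComplete with the completion record QuickTime passed in its decompress parameters.

#include <pthread.h>
#include <ImageCompression.h>

typedef struct {
    ImageSequence            sequenceID;        // sequence we're decompressing for
    ICMCompletionProcRecord  completionRecord;  // handed to us by QuickTime
    /* ... plus whatever handles your user client needs ... */
} MyDriverConnection;

// Placeholder: blocks until the kernel driver signals a completed frame,
// for example by receiving on a Mach port or an IOKit notification the
// user client set up. The details depend entirely on your driver.
extern OSErr MyWaitForDriverCompletion(MyDriverConnection *conn);

// User-space worker thread: it lives in the same address space as
// QuickTime, so it is the safe place to fire the completion procs that
// QuickTime handed to the codec.
static void *MyCompletionThread(void *arg)
{
    MyDriverConnection *conn = (MyDriverConnection *)arg;
    while (MyWaitForDriverCompletion(conn) == noErr) {
        ICMDecompressComplete(conn->sequenceID, noErr,
                              codecCompletionSource | codecCompletionDest,
                              &conn->completionRecord);
    }
    return NULL;
}

// Created once when the codec opens its connection to the driver.
static OSErr MyStartCompletionThread(MyDriverConnection *conn)
{
    pthread_t thread;
    return (pthread_create(&thread, NULL, MyCompletionThread, conn) == 0)
               ? noErr : memFullErr;
}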
So, a couple of things to know about this stuff. I mean, crossing kernel space is not totally free. It doesn't take that long, but it takes some time. So when you're bringing up your hardware, make sure that you minimize the number of calls that you make to your own driver, especially during the initialization process. I mean, don't come up with an API where you're going to set one bit per call, or it's going to take forever to have your hardware ready.
The last thing is that threads are very cool as long as you understand what you are doing, but you should never forget that as soon as you add a thread on your component side, you're going to add some load on the OS side. So before going ahead and creating a new thread, think about trying to share threads between all your components and the different pieces of component that you expose to QuickTime. Thank you.
Well, I guess that's pretty much it for our QuickTime for Professional Video session. We have a couple of other sessions running today and tomorrow. And the one you should definitely go to is the QuickTime feedback forum, which is usually pretty packed. So if you have any questions that you want us to answer, you should definitely be there, and be there early, because you might not be able to get into the room.
So if you have any questions about all the stuff we have been talking about in this professional video session, you should contact Jeff Lowe at [email protected] and he will be able to get in touch with us. So if you need more detailed information about all the stuff that we've been talking about, you should definitely go to developer.apple.com/quicktime. We have a full documentation about what's new in QuickTime 5, and you will find in there all the stuff we were specifically talking about.
There is also another place on this website, called the Ice Floes, where you have some explanation about all this rendering, the pixel formats, and the effects stuff that Tom has been talking about. If you didn't quite understand all the stuff that Tim was talking about, about YUV space, gamma, and all the issues behind that, we do recommend reading this book from Charles Poynton, which is called "A Technical Introduction to Digital Video." There is a lot of information in there to understand why it's so painful to go through all this rendering process.
And the last thing is about our QuickTime event this October in Beverly Hills, California. So if you do all this stuff with QuickTime, you should definitely go there. It's a chance for you to meet other third-party developers, it's a chance to meet all the QuickTime engineering crew, which usually goes over there, and it's a chance to hear about what's new in QuickTime. And that's pretty much it. Thank you very much.