
WWDC06 • Session 212

Developing Modern QuickTime Applications

Graphics and Media • 1:04:01

The QuickTime architecture provides state-of-the-art multimedia technologies that enable high-definition audio and video playback. Discover how QuickTime integrates with the industry-leading array of 2D and 3D frameworks available in Mac OS X. Learn how to keep your application on the cutting edge by taking full advantage of these features.

Speakers: Vince Uttley, Tim Monroe, Brad Ford, Ken Greenebaum

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it may contain transcription errors.

Good morning. I'm Vince Uttley, and I manage the QuickTime engineering team. And I wanted to thank you all for attending this session, especially, I know it's very early for some of you. I also want to thank you for your continued commitment to QuickTime technologies. Last year, we had an overview session. It wasn't this well attended. So we restructured it, and I'd be very interested in your feedback afterwards. We want to make sure that there's enough substance for you. So part of what we're doing is introducing you to the other sessions we have. We want to tease you and motivate you to attend them, but we want to make sure there's enough substance in this presentation.

The fact is, last year was at a much more reasonable hour, but this is much more heavily attended, and we appreciate that you're here. So I'm going to give a short introduction to the sessions that are available to you at the conference and talk briefly about what you'll be hearing this morning from the engineering team. And I'll speak briefly to these points: we are advancing QuickTime, even though the code base is pretty old.

And there are some challenges in doing so. And I'll speak briefly to that. We'll talk about some of the new features we're adding in audio and video plumbing. And then we'll talk about some-- we'll do our annual provision of best practices to guide the new developers to the right set of APIs and to enhance the skill set of the experienced developers. So I don't have any iPod giveaways, but can anyone tell me how old QuickTime is? QuickTime framework, not the logo.

End of '91, yeah, December '91 when we introduced it. So it's showing its age. In terms of Silicon Valley technologies, 15 years is an eternity. And thank you, Brad, for finding this image. So it is old, but part of what we're trying to do with this image is to debunk the notion that QuickTime is no longer usable or useful. So let me speak a little bit to what we've been doing to advance the technology, while at the same time acknowledging its age, because a big percentage of the APIs are a bit dated.

Carbonisms and classicisms abound, and you've experienced it firsthand with our challenge to move to full 64-bit support. There's a lot of legacy pointer arithmetic in the code base. And because of our historical commitment to backward compatibility, we depend on technologies like QuickDraw that aren't appropriate to move into the 64-bit world and, in fact, aren't even appropriate to retain in our long-term plans. So we're doing our best, but in certain areas we've reached the limit of how far we can extend QuickTime.

However, having said that, the last four years, the team has spent an incredible amount of effort redoing large portions of the AV plumbing. And we continue to make enhancements, which we'll speak about today in detail. And in addition, we want to be able to help you integrate QuickTime into the more modern frameworks that are available on the platform. Okay.

And the APIs we've introduced in these renovations carry with them modern idioms that are based on modern programming practices. One important message we want to give you today and for the rest of the conference is to move forward with us. Please adopt these new APIs. It's only through these APIs that you're going to be able to exploit the introduction of new technologies as they become available on the platform. So whether it be codec acceleration through the GPU, if it's available, or the lower-level facilities (at the really low level there's fragment programming; layered on top of that are OpenGL and Core Image), QuickTime rests on those to take advantage of more powerful media processing capabilities. And that's the only way you're going to be able to get to those resources.

And if you have to change your code, it's work to get there. We understand that. But it's work in the right direction. Even if the APIs change in the future, what we're quite confident of is that the design and the algorithms are going to be consistent with the programming model that we're endorsing on the platform.

It's consistent with all the frameworks. This is a picture that Tim Monroe is going to deconstruct for you in more detail later, but it's a picture you should become familiar with because it's one of the icons of our new programming model. And part of this is to assure you that you're in safe hands with QTKit.

So the bottom portion of the diagram shows how we can modify the underlying pieces, underlying components of the technology, but have an abstraction layer on top of it to protect you from the API turmoil underneath it. And it helps us suss out and vet some of the issues in these low-level components before we give you exposure to the low-level C APIs.

QTKit is not new, so some of this information may not be new to you either. As I mentioned, it's a high-level programming interface, and it's currently the only way you can get access to any kind of 64-bit support. We've also added new functionality, AV capture, that we're exposing first through QTKit, and you'll see some demos of that later. QTKit is being adopted rapidly, even internal to the company; other applications within Apple are using it. Michael Jones has been one of our biggest evangelists. He spoke, I think, maybe yesterday, and he spoke in previous years. Not only is he using it, he's even using the new AV capture capability, and they built a very sophisticated animation production tool at Pixar. So we know it's capable of doing quite a bit. In the area of audio, we did a complete renovation of Sound Media, re-hosting it on Core Audio, and that was a significant effort. If you have adopted that, then you're familiar with the audio context, and Brad's going to talk about some enhancements we've made to it.

I think Apple is actually very generous about deprecating: we don't really deprecate, we don't throw things away; we allow you to keep building your apps. But we are encouraging you: if you're using Sound Manager, if you're using Sequence Grabber, please stop. We have newer, better things for you to use.

The same with video. In QuickTime 7, we added B-frame support, and the ability to do that in a code base that was never designed to accommodate modern codecs was huge. It almost killed us. But we can now deliver to you this very sophisticated codec, the H.264 codec.

Modern compression and decompression techniques require a whole different kind of data flow model, and what we did, using again a new metaphor, a new construct called the visual context, was allow you to have buffers in flight. So moving off of the old shallow ICM buffer model was significant. I suspect that in changing your applications to adopt these new technologies, you're also going to have to build a more sophisticated memory management model. So again, this is consistent with modern practices. We're asking you to move forward with us.

Another significant factor of the modifications we made is to allow you to integrate into these frameworks, and Ken's going to talk to you about what we did to allow you to integrate into the Core Animation and Quartz Composer frameworks. In the area of aperture modes, we've been littered with bugs about this, and what we attempted to do here was help users across the entire spectrum: the person who just wants to open up their movie and have it look consistent no matter what application they use, and the video production expert who wants to be able to tag their content correctly and deal with dirty video.

So, a couple things we'd like you to do. Please do what you can with QTKit while you're here, especially the 64-bit piece of it, and test it. Come see us, come to the labs, and give us feedback as soon as you can so that we have time to make changes where appropriate before we ship Leopard.

And find a QuickTime engineer, ask all the questions you can, and give us good feedback. The last comment is about the session. I'm pointing you to the feedback session. In the past, it's been a place where people come to vent, and I understand your need for that relief.

But what I want to encourage you to do is bring us some ideas. We're trying to move forward. We want to know what is missing in media services on the platform for you. What is inhibiting you from innovating? If you come talk to us about a Radar bug that someone filed 10 years ago, we're not going to fix it.

Took me a while to dig that up. And if you don't make it to any of the labs, but you do make it to the beer bus tomorrow, find somebody wearing this t-shirt and give him or her your feedback or ask questions. So I just wanted to thank the team, the QuickTime team, who made-- who's driving the sessions and did all the work for the code that we're delivering-- the demos that we're doing here at the show. And I'll hand it off first to Tim.

Thank you, Vince. The number one feature request for QTKit from the very day it shipped was for us to give you a way to grab audio and video data from devices attached to your computer. So we went ahead and added some new AV capture classes to QTKit. The important thing for you to understand is that this is not just a wrapper on the Sequence Grabber. On the diagram that Vince showed earlier, you saw there was a new capture engine sort of at the bottom. That is a pro-grade capture engine that we are now exposing through QTKit. There is no direct C API to that engine, so if you want the capabilities that we have implemented, you need to use QTKit. So what does it give us that, say, the Sequence Grabber didn't? Well, maybe most important, it gives us accurate audio-visual synchronization. This was sometimes very difficult to do with the Sequence Grabber, and here we just got it right. Another feature it gives us is frame-accurate capture. So you can say, I want to start capturing at this particular time code, and I want exactly 245 frames, and we'll write that into the file for you. Another thing it gives us is transport controls. So if you're hooked up to a camcorder that can fast-forward or rewind, we can do that through our API. And there are more features that I will talk about in a second. So let's just get straight to a demo of these new classes. And could I have the demo machine, please?
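As an aside, the frame-accurate capture request described above ("start at this time code, grab exactly 245 frames") boils down to simple frame arithmetic. The sketch below is not the QTKit capture API; it's a hypothetical Python illustration of the bookkeeping for non-drop-frame timecode (drop-frame NTSC rates like 29.97 need an extra correction that this ignores).

```python
def timecode_to_frame(tc, fps):
    """Absolute frame index for a non-drop-frame HH:MM:SS:FF timecode."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    if ff >= fps:
        raise ValueError("frame field exceeds frame rate")
    return (hh * 3600 + mm * 60 + ss) * fps + ff

def capture_span(start_tc, frame_count, fps):
    """First and last frame indices for 'start here, grab exactly N frames'."""
    first = timecode_to_frame(start_tc, fps)
    return first, first + frame_count - 1

# e.g. start at 01:00:10:05 on 25 fps material and grab exactly 245 frames
first, last = capture_span("01:00:10:05", 245, 25)
```

The point of the engine doing this for you is that it guarantees the file contains exactly the requested span, rather than "roughly when you pressed stop."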

It looks like we've gone to sleep here. There we go. This is a bit tricky for technical reasons: we're all having to look down at this monitor here to do our work. So if I look like I'm sort of not paying attention to this monitor, well, that's because I'm not. I'm just going to launch this application called QT Recorder. The complete source code for this is on your DVD, so you can build and run it yourself now with the seed you have. There's one restriction for the seed you have: this little device selection pop-up will not work, but if you come to the lab and play with it there, it should work fine. And what do we have there? I can't read that. What does that top thing say?

All right, so I can select various flavors of built-in audio and the iSight with its audio. And let's just go ahead and do a capture here. So I'll come down and hit this button. Testing, testing, one, two, three. And I'll stop it. It'll ask me where I want to save that movie.

I'll save it. It'll take a second to write that out to disk. And... did the movie show up down there? Is that it there? And let's see. Okay. All right, well, it didn't capture my audio. So that's great. We've got a real nice set of classes that make this very easy for you to drop into your application. We've got this capture view, so you can preview what's being captured. And one more nice thing we've added: it turns out that I've got hooked up to my machine this Sony HC1, which is an HDV camcorder. So let me turn that on, and if I'm real lucky, it may actually show up in my list here.

Is that there? And it automatically switched, so we have on-the-fly device selection and device detection. So I'm just going to go ahead, and the lighting here isn't that good, but I'll see what I can capture with HD. Let's capture maybe my monitor here. That's not going to look like much fun. Not much light there. How about this? All right. So I've stopped that. We'll save it.

Again, that'll take a second to write itself out to disk. And somewhere down there, is this the... And it's high def, so it's big. So let me just try and fit that on the monitor here. And of course, I didn't take anything very interesting, but if I had focused on something wonderful, we'd have nice high definition video capture there. Okay, so let's go back to slides, please.

So you're probably wondering what devices we support. We support the internal and external iSight, and the verbiage there is UVC over USB or IIDC over FireWire. As you just saw, we support HDV devices. The caveat there is that you need to have certain codecs installed in order to get the high-def video, and right now one way to get those codecs is to have Final Cut Pro installed on your system. The yellow ones are what's in the seed you have. And actually, this list is incorrect: the Core Audio HAL devices are also in the seed that you have, so that should be yellow in this slide. We will support DV devices, and again, if you want to do the pro DV formats, you need Final Cut Pro installed. And finally, something I think will be interesting to some of you: we're going to grandfather in Sequence Grabber video devices.

So if you have existing devices that you work with, you'll be able to use those too, but again, not in the seed that you have in your hands. So let me point you to the main QTKit session, which is tomorrow afternoon: Building Multimedia Applications with QTKit. And then we have a couple of labs, which are actually going on contemporaneously Friday morning in the Graphics and Media Lab. So with that, let me bring up Brad Ford to talk to us about modern QuickTime audio programming techniques. Thank you. Thank you.

Thanks, Tim, and thank all of you for getting up as early as I had to get up. I'm going to give you an overview of the audio subsystem of QuickTime, and I'd like to start by approaching it from a question that we get very often on the QuickTime API list and on the Core Audio list. Which do I use? There is this set of Core Audio APIs, and there are the QuickTime APIs. I'm an audio programmer, or I'm interested in audio in my app. Which set of APIs are appropriate for me to use? Let's talk about why you might want to choose QuickTime audio for doing your import, decode, export, et cetera. In the area of import and decode, it's all about the file formats. When you use QuickTime, you get a host of file formats, some of which fall into the purview of Core Audio and the Audio File API. Some of them do not. So what you find when you import using the QuickTime audio APIs is a superset of what you would get if you used the Core Audio Audio File APIs alone. For instance, the .mov file format, many other mixed video/audio formats, as well as third-party importers, like those for DivX and WMV. In the area of data formats, well, we can decode a lot of different formats with QuickTime, including all of the ones that you might get with the Core Audio decoders and sound decompressors. We also have some third-party decoders that only work with QuickTime.

In the area of playback, well, probably the biggest reason to use QuickTime for audio is because you have video that you want to play at the same time as the audio. If you want to do that, you can write an engine yourself that does synchronization, startup sync, drift sync, or you can use QuickTime. We tend to do that pretty well. Also, we take care of media time scaling for you.

For instance, if you have some media that has an edit in it, so one section of the media is supposed to play at a different rate than the rest, say 1.5x, then when you play that media using QuickTime, it will automatically take care of that scaled edit for you. We also give you these last three on the slide: pitch and rate control, volume and spectral level metering, and sample-accurate multi-track mixing. None of that is specific to QuickTime. You can get all of it using Core Audio APIs. It's just that when you use QuickTime, typically there's an easier, higher-level interface, so you write fewer lines of code.
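The scaled-edit handling mentioned above can be pictured as a mapping from movie time to media time through an edit list. This is a simplified Python model of the idea, not QuickTime's actual edit-list API; the tuple layout and function name are illustrative assumptions.

```python
# Each edit maps a span of movie time onto media time at some rate:
# (movie_start, movie_duration, media_start, rate)
def movie_time_to_media_time(edits, t):
    """Find the edit containing movie time t and apply its rate."""
    for movie_start, movie_dur, media_start, rate in edits:
        if movie_start <= t < movie_start + movie_dur:
            return media_start + (t - movie_start) * rate
    raise ValueError("time falls outside the edit list")

edits = [
    (0.0, 10.0, 0.0, 1.0),   # first 10 s plays at normal rate
    (10.0, 4.0, 10.0, 1.5),  # 4 s of movie time covers 6 s of media (1.5x)
    (14.0, 4.0, 16.0, 1.0),  # back to normal rate
]
```

When you play through QuickTime, this rate-aware mapping (and the corresponding audio time-stretching) happens for you; doing it by hand means applying a transform like this to every sample request.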

For instance, if you're going to do pitch and rate control using Core Audio, you would perhaps build an AUGraph into which you inserted an AUVarispeed unit, and then do all the setup necessary to configure it. Whereas with the QuickTime audio interfaces, you just open a movie and play it at some rate that's not one, and you automatically get the pitch and rate controls.

We introduced in QuickTime 7 movie audio extraction, which is a great set of APIs for getting mixed PCM audio out of a movie. This is something that people had been doing with varying success by dipping down to the media sample level and decompressing samples themselves. We've had a lot of good feedback about this API, movie audio extraction, so we know that people are using it.

It gives you access to the decoded and mixed movie audio samples. So that means if you have some source media that has multiple tracks, what you get out is a summary mix of all of that. You don't have to dip down to each track individually and then perform the mix yourself. We do it for you. And assuming that all of the audio decoder components involved in the playback chain are thread-safe for the media that you care about, we support multiple threads, which means you can extract movie audio on a different thread than the main thread, or the thread on which you opened the movie. It also means that you don't have to work with the audio converter directly. That might not scare some of you, but some of you it might really scare.

In the area of encode and export, well, it should come as no surprise that we encode to a lot of formats, including all of the formats that Core Audio supports. There is a caveat that I want to bring up here, because it comes up a lot. If you're interested in AAC and AMR, which is not listed here, and you want to encode to them on Windows, you do need to pay a licensing fee. We cover that blanket license for you on the Mac. If you're a developer on the Mac, you can encode to AAC for free. If you are doing so on Windows, you need to acquire the license and pay some money.

We also give you the standard audio compression component, or SCAudio, to do compression cross-platform. I'll talk about that in a minute; it's a brand new feature in QuickTime 7.1, so you might not be familiar with it. As for file formats, we give you the file formats that Core Audio does, plus .mov, .mp4, et cetera. So really, what using the QuickTime-level audio APIs gives you is a uniform code path for opening, playback, import/export, and encode.

Now, I've said a lot of nice things about QuickTime. I hopefully didn't detract from Core Audio, because there are definitely times when Core Audio is the more appropriate API to use. So when would you want to use Core Audio? If you're an audio-only app, an app that does a lot of signal processing with audio, or you want to tap in directly to audio unit features, then QuickTime might just get in your way. It might be a level of ease that you don't want to pay for. You'd rather go down to the lower level and get the finer level of control. Thank you. So let's talk about, very briefly, QuickTime Audio API best practices.

Vince already talked about, in general, what the best practices are and the strategies for going forward. And he showed you this slide. I just wanted to bring this back before you, because this picture is a perception of QuickTime that will never go away unless we perform our due diligence and deprecate when necessary, and also make the API level as modern as the internals are. So we need to do our job and deprecate when the system changes enough that it no longer makes sense to have that set of APIs around. One of these is Sound Manager. If you came yesterday to the Core Audio sessions, you know that it is officially deprecated in Leopard, and let's talk about what that means to you. So Sound Manager is deprecated in Leopard. Let's have a moment of silence for Sound Manager.

If you are a codec writer, this means you should not be writing sound compressors or decompressors. Here's the biggie. Most people holding on to Sound Manager are doing so because they just want that sound converter. Well, that's deprecated too. If you're using a sound mixer, that's deprecated. In the area of capture, sound input components. In the area of playback, SDEVs.

And this one's a little bit more nebulous: the use of legacy QuickTime audio interfaces is discouraged. Let me tell you what that means. SGNewChannel, for instance, if you're using the Sequence Grabber, is the API you use to create a new SG channel to capture from. You pass into it the kind of channel that you want to make: a video channel, an SGAudioMediaType channel, or this SoundMediaType one.

That one, the SoundMediaType channel, happens to use a sound input component underneath, which is deprecated. We can't deprecate the API itself because it serves multiple purposes. But know that that particular code path, creating an SG channel of SoundMediaType, should be considered deprecated, because we're no longer maintaining it.

I've told you a lot about what you should not use, but I'm not going to tell you what you should use instead. You'll have to come to session 223 later today at 5 p.m. to find out. I hope you're really scared, so that you'll come. Now let's talk about the SCAudio compression APIs. They're new in QuickTime 7.1, which was released about four or five months ago; the SDK just became available this week. The 7.1 SDK for Windows is now live, and the Mac OS X one is live and also in the Xcode 2.4 installation. So you have these new SDKs available. SCAudio compression is available on Mac and Windows; look in QuickTimeComponents.h. And there's sample code that you can Google for called SCAudioCompress.

What is it? It's the modern replacement for Sound Converter that's cross-platform. So when you say to us on the QuickTime API list, "I need to know how to do such and such with Sound Converter," and we reply, "Oh, don't use Sound Converter. Use Audio Converter, you fool," and you say, "Well, I need it on Windows," we used to have no answer for that. Well, finally, we do have an answer: use SCAudio compression instead. It can do everything Audio Converter can do, plus mixing. So if you want to go from a 5.1 to a stereo mix, and additionally perform an encode, you can do all of that with a single API call. But enough talk, let's do a demo. Can we switch to the demo machine, please? I guess we went to sleep again.
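For a feel of what the 5.1-to-stereo mixing step involves, here is a toy Python sketch of a per-frame downmix. The coefficients are conventional ITU-style values, an assumption for illustration; the talk doesn't say what QuickTime's mixer actually uses, and the real API does this (plus the encode) internally in one call.

```python
import math

# -3 dB coefficient for folding center and surround channels into L/R
K = 1.0 / math.sqrt(2.0)

def downmix_51_to_stereo(frame):
    """frame = (L, R, C, LFE, Ls, Rs) -> (left, right); LFE is dropped."""
    L, R, C, LFE, Ls, Rs = frame
    return (L + K * C + K * Ls, R + K * C + K * Rs)
```

Doing this yourself per sample, then encoding, is exactly the two-step dance that a single SCAudio call is meant to replace.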

And now we're back. OK. So I'm going to bring up a version of QuickTime Player in Leopard. SCAudio compression is a QuickTime 7.1 feature, not a Leopard-only feature; it is shipping now, and you can use it now. But what I'm about to show you is a Leopard feature that happens to make use of these new SCAudio compression APIs. I'm going to close Tim's app. And up here I have an audio-only file. This is of a concert that I gave two weeks ago. Could we pump up the volume?

So here's a common scenario for me. I play a concert and my mom says I want to listen to it, so I need to send her a version of what I played. And it's too big to send her over the internet, so I need to compress it. So I'm going to go into my sound settings dialog. Any of you that are familiar with QuickTime Pro probably have used this before. That's the sound settings dialog. It lets you configure an output and export to a new file or just encode to a different format. And to those of you who have used this a lot, you'll notice that these two buttons down here are new.

We found, in feedback given to us, that these advanced feature dialogs are great for those of you who know what you're doing with audio or video. But for those of you who just want it to sound right, this might be overwhelming, because once you get into this dialog, all of a sudden you've got eight or nine different choices for codecs, and lots of sample rates, and lots of channel choices. So you wind up iterating over this several times, perhaps exporting your 20-minute content and then listening to it and finding it's not exactly what you wanted, it's not the right quality setting. Using the SCAudio compression APIs, we already know the format that you want to go to. We just take a 10-second segment of your source, loop it over and over, and apply the settings that you currently have selected, so you can hear it as you're making live updates to the settings and hear how that will affect the sound. So I'm going to go ahead and play this. So what if I take the sample rate down to eight?

Maybe I'll choose AAC and listen to how it sounds if I select a different bit rate. And at any time I can compare that to the source by hitting play source. So there you go. That's a nice-- Back to slides, please. Hopefully this will bring the frustration level down a level.

How did we do that? Well, like I just showed you, there's that dialog up at the top; that's what you see. Underneath, there's that new call, SCAudioFillBuffer. It knows about the source movie, performs an extraction from it, and then creates a new destination movie using AddMediaSample2 and plays it. For a better explanation of what we're doing, including what your code should look like, you'll have to come to audio session 223.

And another great feature that I'd like to spend a few minutes on is the Audio Context Insert API. Movie audio extraction got us 75% of the way to what people wanted; this should hopefully cover the rest of you. It gives your application access to our audio rendering path. That is, you can insert your effects or visualizations or whatever into the audio signal path while QuickTime is playing. You don't have to take care of the synchronization like you had to if you were using movie audio extraction.

It does real-time rendering: when you register an audio context insert, you get called back on the real-time I/O proc thread. You can also use it with movie audio extraction, if you want to insert some effects while extracting PCM audio out of a movie.

It's designed for compatibility with audio units. That said, it is a callback interface, so you don't have to be an audio unit in order to plug yourself in; you just have to register a callback to do the processing, whether with an audio unit or your own custom code. If you're on Windows, you can use DirectShow filters or your own custom code, whatever you would like.

Let's take a look at what the audio context is and how the audio data flow works in QuickTime's audio architecture. We have an abstraction called a device context that is a wrapper for the audio device that you're playing to. You can get one of these device contexts by taking the UID of an audio device and creating a context for it. There's also a movie audio context, and that is the context in which all of the mixing takes place for the movie.

When you perform a movie audio extraction, there's also an extraction context, to which you are rendering when you extract PCM audio. You don't see this extraction context; it's just made for you under the covers automatically. Inside the movie audio context, you'll see that if there are multiple tracks, they are mixed down before they're sent to the device context. If you have a more complicated movie, for instance one with three tracks with varying channel counts and varying sample rates, we mix them down to a movie summary mix. That is, we pick the highest of all the sample rates, and all of the like channels mix to a common channel. So, for instance, there are two rights: one in track two and one in track one. Those mix into a single right in the movie summary mix.
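The summary-mix rules just described (highest sample rate wins; like channels collapse into one) can be sketched in a few lines of Python. This is a hypothetical model of the format negotiation only, not the actual mixing code or any QuickTime API.

```python
def summary_mix_format(tracks):
    """tracks: list of (sample_rate, set_of_channel_labels).
    Pick the highest sample rate; like-labeled channels across tracks
    each collapse into one channel of the summary mix."""
    rate = max(sr for sr, _ in tracks)
    channels = set()
    for _, labels in tracks:
        channels |= labels
    return rate, sorted(channels)

tracks = [
    (44100, {"L", "R"}),       # stereo track
    (48000, {"L", "C", "R"}),  # three-channel track
    (22050, {"C"}),            # mono center track
]
```

Here the two "L" channels and the two "C" channels each mix into a single output channel, and everything is resampled to the 48 kHz track's rate.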

And if you register an audio context insert, you plug in right there after the movie audio summary mix. Now you can ask for a different channel layout than we've provided in the movie summary mix. So for instance, if you have some processing that must be done in stereo, like the audio unit that you're going to use only accepts stereo input, you can tell QuickTime, I want you to give me stereo, and what I'm going to give back to you is stereo or mono, what have you.

You'll see that that mix is performed. The samples are provided to your callback to the client application. You perform your signal processing. Then you hand the samples back. The only caveat there is that you must not change the sample rate, because that would just be mean. Let's go back to the demo machine. I'm going to show you a sample app that was written by an engineer on the QuickTime Audio team named Siley, who's great.
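The callback contract just described (you're handed buffers in your negotiated layout, you process them, and you hand back the same number of samples at the same sample rate) can be modeled with a toy Python class. This is not the real Audio Context Insert C API; the class and function names are illustrative assumptions.

```python
class GainInsert:
    """Stand-in for a registered insert: receives buffers in the negotiated
    layout (stereo here) and must return the same sample count and rate."""
    def __init__(self, gain):
        self.gain = gain

    def process(self, left, right):
        # the "signal processing" step: just scale each channel
        return [s * self.gain for s in left], [s * self.gain for s in right]

def render_through_insert(insert, left, right):
    # host side: mix to the insert's layout, call back, take the samples back
    return insert.process(left, right)
```

In the real API the host calls you on the real-time I/O thread, which is why the one hard rule, not changing the sample rate, matters: the host has already sized its downstream buffers around it.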

This is a QT Audio Context insert app. Not surprisingly, it plays QuickTime movies, like, for instance, Four Brads. So what can I do with this? Well, I can bring up the-- oh, whoops. That brought it up in QuickTime Player, didn't it? Let's try that again, bringing it up in the Audio Context Insert app.

And we have an audio insert panel over here, where we can visualize what the context insert looks like. First we have the channel layout; I'll go ahead and select stereo. So I'm going to provide the insert with stereo, and I'm going to get stereo back out. And then I can choose a Core Audio effects filter to apply to the audio as we're listening to it. So I'll go ahead and play around with it. Thank you.

Can I make myself sound like I'm in a concert hall? Or bypass it? Here's my kids' context insert. Back to slides, please. How did we do that? I'm not going to tell you at all; you have to come to audio session 223 to find out more. Hopefully we've whetted your appetite for good things to come. Next, let me bring up Ken to talk to you about the video subsystem in QuickTime.

Thank you, Brad. So for the next 20 minutes or so, we're going to talk about new features in QuickTime video for Leopard. Some of the things we're going to talk about: Core Animation and Quartz Composer integration, and how to use QuickTime with Quartz Composer. We're going to talk about aperture modes, which are new features in QuickTime 7.1. And we're going to talk about best practices; actually, that's just going to be a brief overview of what's coming in session 220.

So first, Core Animation. So new to Leopard, you saw it in the keynote, very exciting stuff, and we're going to talk about how to integrate QuickTime and Core Animation together. If you're already using QuickTime's visual context, then it should be very easy for you and provide some nice features that have been difficult to get otherwise. So let's start with a demo before the demo machine goes to sleep. Could we switch to the demo machine, please? Thank you.

So once again, I have to look at this other monitor. OK, so what you see here is Core Animation. We're playing a QuickTime video in a layer, and we can do all sorts of interesting things that have been traditionally difficult. So if you notice in the upper right-hand corner, there's a control panel. I can do things like control the alpha. Oh, I should mention that control panel is actually a Cocoa UI, and traditionally it's been very difficult to overlay Cocoa and QuickTime together. And then I can't really make out the settings, but I'm going to be turning on and off some Core Image filters. So that was a filter that's applied. This is a secondary filter that's applied on top of that. And then finally, we have a spectrum analyzer that we're running. So this is something that's put together very easily. Traditionally, it would be kind of complicated to do this, and Core Animation makes it much easier. And can we switch back to the slides, please? Thank you.

So the applause should go to the Core Animation team. They've done a great job. So moving on, we're going to show you how you can integrate QuickTime and Quartz Composer. So Quartz Composer is another technology that you all have been exposed to for some time already. It shipped with Tiger, and you've all seen the sexy demos. Today we're going to talk about how we can integrate Quartz Composer and QuickTime together. And there are basically two things that make that possible. First of all, there's a QuickTime node that's available in Quartz Composer. That allows you to take QuickTime movies and bring them into a composition. We'll talk about that. The other thing is the Quartz Composer QuickTime component, which is really a mouthful.

That's actually what allows you to take a Quartz Composer composition and bring it into a movie as a track. And we're going to talk about how you can actually make these things work together. And it's really important to remember that a Quartz composition is really a first-class media type in QuickTime. Also, you can actually take a Quartz Composer composition, that's a QTZ file, and that can actually be dragged into QuickTime, and it'll just play. It'll do the right thing.

So Quartz Composer is actually computer graphics. And computer graphics are wonderful. They're very flexible. And Quartz Composer provides us with a very flexible video source. So it's interesting from a video standpoint in a number of ways. So first of all, Quartz Composer, being computer graphics, is resolution independent, both in terms of time and in terms of space. Also, it supports alpha channels, which are very nice because that helps us do compositing. It's Core Image accelerated, so we can do all these wonderful effects in real time.

And also very nice is it has a filter graph model. Those of you who've seen the demos or played with it yourself know that, even without programming, you can create new compositions and edit existing ones. But with flexibility comes challenges. So right now, Quartz Composer doesn't have any real concept of video. It supports square pixels only, and it doesn't have interlacing support. So you have to do a little bit more work to use it.

In terms of integration, there's this circular relationship between Quartz Composer and QuickTime. So basically, Quartz Composer can bring in QuickTime movies. That's the first kind of integration. The second kind is that you can take a Quartz Composer composition and bring it into a movie. And then, I think most interesting of all, there's a compound situation. And that's where things get really fun. And we'll show you demos of most of these.

So this is the case where we're not showing you an actual demo. There's a screenshot of what this looks like, and this is actually one of the typical Quartz Composer compositions just dragged into a QuickTime movie. The Quartz Composer component and importer allow it to just play back and work correctly, which is pretty cool. Now, going on behind the scenes, remember I mentioned that Quartz Composer is resolution independent. It's actually QuickTime that's imposing the actual dimensions of the frame, the frame rate, and also the sampling times of those frames. So that's something else that's good to keep in mind. Another really nice feature is that you can take a composition, bring it into QuickTime, and it becomes a QuickTime movie. And then if you use an exporter, you can export it, and it's not computer graphics anymore; it becomes video. So it's a very nice way to save these things back out again.
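The point that QuickTime imposes the dimensions, the frame rate, and the sampling times on a time-continuous composition can be sketched like this. This is plain Python with hypothetical names, not the Quartz Composer component's actual interface; `render_composition` stands in for evaluating a resolution-independent composition at an arbitrary size and time.

```python
# Sketch: a composition is continuous in time and space; the host picks
# the sampling grid. Hypothetical names, not the real component API.

def sample_times(duration_s, fps):
    """Frame sampling times that the host imposes on a continuous source."""
    n_frames = int(duration_s * fps)
    return [i / fps for i in range(n_frames)]

def render_composition(width, height, t):
    """A stand-in 'composition': any (width, height, t) is legal to ask for."""
    return {"size": (width, height), "t": t}

def export_as_video(duration_s, fps, width, height):
    """What an export amounts to: fix dimensions and rate, then sample."""
    return [render_composition(width, height, t)
            for t in sample_times(duration_s, fps)]
```

Exporting the same composition at 60 fps or at 1920 by 1080 would just mean calling `export_as_video` with different arguments; nothing about the composition itself changes.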

So the first more complicated mechanism we'll talk about involves an alpha channel. Using an alpha channel with Quartz Composer and QuickTime is very, very valuable. I'm going to show you a demo that puts a little title up, but you can use it for crawls and other kinds of mechanisms. One very nice aspect of a Quartz Composer composition, which is really just an animation, is that it's pretty much a fixed size. That size is largely dominated, in most cases, by the media that Quartz Composer uses, and the composition is the same size no matter how long it plays for. That kind of makes sense in terms of computer graphics, but it's something that's kind of unusual in terms of movies and video. We tend to think that a longer movie is also a larger movie. But that's not necessarily the case. Let's do a demo. Could we switch back to the demo machine, please? So this is going to be challenging to pick the right movie.

I have to drag it to the right place. OK, so you'll notice in the lower right hand corner, there's the QuickTime logo. So I think you get the idea. You probably noticed that the QuickTime logo is strobing away. And that logo is actually a composition. And we'll tell you how this was actually constructed if we move back to the slides, please.

Can we move back to the slides, please? Thank you. So we had a Quartz Composer composition, and it's basically added as a track to the movie. And this is how the pieces came together. So this is the composition. Very quickly: there was an image, pretty much that Q. That image was transformed in terms of size based on a timing block. And in the result you can see the checkerboard blocks in the background. That's because that area is transparent.

So once you have the composition, you can drag that into a movie. This is QuickTime Player's property page. And you can see you do have a new track, which is the Quartz Composer track. You can see that we've specified the video resolution. So what was resolution independent now becomes video.

And what we did was we put it in the very first layer, in this case, layer negative 1. So it's ahead of the video track that was already there. And we selected alpha mode. That allows it to be composited transparently on top of the video, and that's all there is to it.
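For the curious, the per-pixel math behind this kind of alpha overlay is just the standard "over" composite. Here's a minimal sketch in Python, under the assumption of a straight (non-premultiplied) alpha overlay sitting on top of an opaque video frame; QuickTime's actual graphics modes have more variants than this.

```python
# Porter-Duff "over" for one pixel: straight-alpha overlay onto an
# opaque background. Components are floats in 0..1. Illustrative only.

def over(src_rgba, dst_rgb):
    """Composite an overlay pixel (r, g, b, a) over an opaque video pixel."""
    r, g, b, a = src_rgba
    dr, dg, db = dst_rgb
    # Where the overlay is transparent (a == 0) the video shows through;
    # where it is opaque (a == 1) the overlay replaces the video.
    return (r * a + dr * (1 - a),
            g * a + dg * (1 - a),
            b * a + db * (1 - a))
```

Everywhere the composition's alpha is zero, like that checkerboard region in the slide, the underlying video track comes through untouched.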

Now for the compound scenarios. This is where things get more interesting. So there are all sorts of things possible. We're going to talk about a video theme that I'll show you. And I think it's really important to sort of embrace these technologies. Users are really used to production standards they see on television and movies. And this allows you to get close to those things. So this is what's going on behind the scenes. We have a Quartz Composer composition with multiple QuickTime nodes. So basically, it's bringing in more than one video source.

The QuickTime movie imports the Quartz Composer composition as a track, and there's an application that basically can control the Quartz Composer composition, and that's how you trigger the effects. In our case, we're just going to play it back in a movie, so we won't be triggering or doing anything fancy. You can do those things in your own application. So again, there are some special concerns that we mentioned earlier, and that's pretty much that Quartz Composer doesn't know about video. So what you may have to do is take your video sources and convert them into, pretty much, computer graphics, which may mean you have to deinterlace the movie, render it through Quartz Composer, and then, if you want to match the rest of your video content, you might have to reinterlace the video.
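The interlace round trip mentioned here is, at its simplest, field bookkeeping: pull the two fields apart, process frames, and weave fields back together. A toy sketch, with each field represented as a list of scan lines; real deinterlacers do far more than this (line interpolation, motion adaptation), so treat this as the bookkeeping skeleton only.

```python
# Toy interlace bookkeeping: fields are lists of scan lines.
# Real deinterlacing also interpolates; this only shows the weaving.

def weave(top_field, bottom_field):
    """Interleave two fields (lists of scan lines) into one full frame."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)
        frame.append(b)
    return frame

def split_fields(frame):
    """Re-interlace: pull the even and odd lines back out as two fields."""
    return frame[0::2], frame[1::2]
```

In the pipeline described above, you'd deinterlace before handing frames to Quartz Composer (which expects progressive computer graphics) and re-split afterward if the surrounding content is interlaced.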

So let's look at the demo. Can we switch to the demo machine, please? Another part of the same movie. And then we're going to be switching back and forth. And I apologize. I programmed it to switch pretty quickly. But you can get the idea that we can integrate computer graphics and video as video, which is very powerful. Can we switch back to the slides, please?

So next we're going to talk about aperture modes, and this is a QuickTime 7.1 feature. And as a bit of motivation for why we need these aperture modes: Vince sort of alluded to some of these things. There's a tremendous difference between video gear, especially traditional analog video gear, and computers. And in some ways, aperture modes sort of bridge the difference between those. So one concept we talk about is pixel aspect ratio, which we call PASP. So if you hear us talking about PASP, that's what we mean. So computers were designed to have square pixels. It makes a lot of sense to have square pixels; they're very easy to do the math with. When we say square pixels, we mean the ratio of the width and height of the pixel is one. On the other hand, video started out as an analog thing. There were just lines. There was no such thing as a pixel until we moved to digital video.

And when we moved to digital video, we sampled, we basically chopped those lines into pieces. And that was based on the bandwidth of the signal. It had very little to do with the pixel dimensions. So depending on the video source, your pixels could have an aspect ratio that's less than one or greater than one.

And basically, the pixel aspect ratio, if we set it, allows us to correct for that. And I'll show you some examples of that. So the next concept is the concept of clean aperture, or sometimes we call it CLAP. So computers were designed to display all the pixels. None of them are hidden. We sort of take that for granted. However, again, back in the analog TV days, televisions were designed to basically hide a fairly significant portion of the video, maybe 5 to 10%. That's actually behind the plastic bezel that's on the television. So we actually need to define what portion of a video signal is actually intended to be displayed. So that's the purpose of the CLAP.

So SMPTE 187 in 1995 defined these concepts-- pixel aspect ratio, clean aperture, production aperture, and edge processing region. And I'll show you examples of all of those. In terms of PASP and CLAP, in the QuickTime world, we identify those with the movie via tagging, and we'll show you how to tag those and examine those in the movie.

And there's also an Aperture Mode API, and pretty much that controls how you, as a developer, want QuickTime to interpret those values. Thank you. So there are four supported aperture modes. The first is classic mode. This is what you're all used to in QuickTime. Basically, all the pixels are displayed, and no aspect correction at all is made.

Next is clean aperture mode, and that's really what the end user expects and wants to see. This is where we perform all of our processing. So only the clean pixels are displayed, meaning those that the people who produced the video actually intended people to see, and aspect ratio correction is performed. The next two modes are really for professionals, and they basically involve cases where the professionals want to see all the pixels, and all the pixels unprocessed. So this is an example of a DV image, and it's probably familiar to those of you who have been working with video for any length of time, so much so that you may not notice the bars on the left and right edges anymore.

So to blow this up a little bit, on the left-hand side, you can see that there are no fewer than eight pixels that are actually wrong. And that's an artifact of the codec. And that's not a problem with the codec. That's the way the codec's defined. Pretty much the DV specification instructs you not to display those. But in classic mode, those are displayed.

You can choose to conform your aperture mode to encoded pixels, and you would see something like the image that we see up here. It's 720 by 480 resolution. What's important is that you see all the pixels, including the edge processing regions, which are the fuchsia bars on the left and right edges. And I think, depending on how it looks on the projector, you should see that the circle in the center is actually distorted. It should appear wider than it is tall. To make it a little easier, we can look at these dimensions. And you should see, based on this coming from a DV source, that there should be an 11 to 10 ratio, because those pixels weren't square to begin with.

If we choose to correct the PASP only, basically applying an aspect ratio correction, we can conform the aperture to production mode. That's one of those professional kinds of modes. Again, you see all the pixels. You see the fuchsia bands. But now, hopefully, I don't know how it appears to you in the audience, but the circle should appear to be round, and as illustrated here, you're still seeing all the same pixels. So most people want the clean aperture mode, and that's where we make the CLAP and PASP corrections. So the bars are removed. Hopefully this appears round to you. And the dimensions are back to the 640 by 480 dimensions that are somewhat familiar.
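For NTSC DV, the numbers in this walkthrough fall out of simple arithmetic, assuming the commonly used values: a 720 by 480 encoded frame, a 10:11 pixel aspect ratio, and a 704 by 480 clean aperture. Here's a hedged sketch in Python, not the Aperture Mode API; the mode names only mirror the four modes described above, and classic mode really uses whatever dimensions the track author set.

```python
from fractions import Fraction

# Display-dimension arithmetic for NTSC DV under each aperture mode.
# Assumed values: encoded 720x480, PASP 10:11, clean aperture 704x480.

PASP = Fraction(10, 11)   # width:height of one NTSC DV pixel
ENCODED = (720, 480)
CLAP = (704, 480)

def display_size(mode):
    if mode == "classic":          # all pixels, no aspect correction
        return ENCODED
    if mode == "encoded_pixels":   # all pixels, still no correction
        return ENCODED
    if mode == "production":       # all pixels, aspect-corrected
        return (round(ENCODED[0] * PASP), ENCODED[1])
    if mode == "clean":            # cropped to clean aperture, corrected
        return (round(CLAP[0] * PASP), CLAP[1])
    raise ValueError(mode)
```

So production mode comes out at roughly 655 by 480 (that 720 to 655 squeeze is the 11:10 ratio mentioned above), and clean aperture mode lands exactly on the familiar 640 by 480.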

So this is a way that you can actually adjust these mechanisms yourself to take a look at your own content. If you bring up the property panel in QuickTime Player, you can select the movie track, click on the Presentation tab, click on the Conform Aperture To checkbox, and then there's a pull-down for each of the four modes.

So, tagging. Tagging is really the number one source of image quality issues that we hear about from developers. And it's just imperative that you tag your content. Tagging content is what allows you, as the owners of your video, to basically indicate to QuickTime the nature of the video and how to correctly display it. If you don't provide tagging, then advanced features are not available to you.

We can't aspect ratio correct. We can't remove the edge processing regions and regions that aren't supposed to be displayed. We're not going to talk about it, but we also can't do color correction and other things. And basically, QuickTime will guess in terms of these modes, and there's a good chance that you're not going to get the intended results that you would like to have. So there's a programmatic interface for attaching this tagged information to your movie files, and you can view these settings with Dumpster, and we'll get into more details of that in session 220. So here are some things to take away.
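At the file level, this tagging boils down to small 'pasp' and 'clap' extensions attached to the image description. The sketch below shows roughly what those atoms look like on disk, following the usual QuickTime atom shape (big-endian 32-bit size, four-character type, payload); in practice you would attach these through the QuickTime APIs or inspect them with Dumpster rather than packing bytes by hand, so treat the exact field layout here as illustrative.

```python
import struct

# Illustrative packing of QuickTime image description extensions.
# 'pasp' carries the pixel aspect ratio as hSpacing:vSpacing;
# 'clap' carries the clean aperture as rational width/height/offsets.

def pasp_atom(h_spacing, v_spacing):
    """Pixel aspect ratio extension: two big-endian 32-bit integers."""
    payload = struct.pack(">2I", h_spacing, v_spacing)
    return struct.pack(">I4s", 8 + len(payload), b"pasp") + payload

def clap_atom(w_n, w_d, h_n, h_d, x_n=0, x_d=1, y_n=0, y_d=1):
    """Clean aperture extension: width, height, and center offsets,
    each expressed as a numerator/denominator pair."""
    payload = struct.pack(">8i", w_n, w_d, h_n, h_d, x_n, x_d, y_n, y_d)
    return struct.pack(">I4s", 8 + len(payload), b"clap") + payload
```

For NTSC DV you'd expect something like `pasp_atom(10, 11)` and `clap_atom(704, 1, 480, 1)`; session 220 covers the real tagging workflow.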

QuickTime and the pro codecs are actually PASP and CLAP aware. If you fail to tag, or if you have old content that's not tagged, then QuickTime will, by default, go into classic aperture mode, which means it won't provide any processing, which is good. That's the behavior that you expect, so there's no change there. As developers, we would really like you to remove any workarounds that you have that do PASP or other forms of correction, basically adopt our Aperture Mode APIs, and, perhaps most importantly, tag all your content.

So I'm going to give a very quick rundown of best practices, and these are going to be covered in detail in session 220. So perhaps the most important thing is that you should use modern examples. As Vince indicated, QuickTime's been around for a long while. The wonderful thing is that there are many, many demonstrations and examples out there on the Internet talking about how to use QuickTime, but many of them are dated and are using old principles and idioms. So please make sure that you're using new examples that use best practices. Please use the track and media APIs. So, again, as Vince mentioned, to support new codecs that support B-frames, you really have to use the new interfaces, or else you're going to have to do an awful lot of work and the results won't be correct. Also, please adopt the compression session APIs. Those replace the compression sequence APIs. And again, this is imperative for supporting new codecs with B-frames.

We want you to use the visual context. If you're using visual context, then you can use all of the OS X technologies and some of the new technologies that we've talked about in Leopard. Additionally, this is where you're going to get your best performance. And finally, please tag your content.

So there are lots of exciting new technologies. Please use the new QuickTime APIs. And for details, please join us here in this room tomorrow at 5 o'clock for session 220. And with that, I'm going to bring Tim back up. Thank you. So you heard in several of the keynotes that 64-bit computing is big for Leopard, all the way down from the Unix foundation up through the Cocoa and Carbon frameworks. So what we needed to do was bring QuickTime into the 64-bit world. And we actually managed to do this, after a fashion. And I think the main question you should have is: how in the world did we do that? To appreciate the issue, let's go back to that diagram that Vince showed us, indicating the sorts of things that QuickTime depends on outside of itself. And all of the technologies on the bottom there are available in 64-bit, except for that big blue box, which is QuickDraw, which, as you've heard several times, is not going to be available in 64-bit. Well, the bad news is that QuickTime depends heavily on QuickDraw. We've been moving toward the more modern Core Video and Core Image interfaces, but we're not all the way there yet. So we had a real problem when we had to bring some version of QuickTime into 64-bit.

So what did we do? Well, typically in a situation like this, you'll adopt a client-server solution. If you have an application that wants to run 64-bit, but you've got QuickTime stuck back in the 32-bit world, you will, say, write a client and a server module and have them communicate through some sort of inter-process communication. The server can then do whatever QuickTime work you want and send the results back to the client. That's pretty much what we did to bring QTKit into the 64-bit world. So QTKit exists as a 64-bit framework and as a 32-bit framework. And in fact, it's what we call four-way fat: there are Intel and PowerPC versions of each of those two frameworks. So when you have an application, in this case Cocoa, but it could also be a Carbon application, that links against the 64-bit version of the QTKit framework, what happens is that we take care of doing this client-server communication. In fact, the large majority of the code that's in the 64-bit QTKit framework is nothing more than a client that communicates with this special server that we call QTKit Server. We've got them communicating using MIG-based IPC, so we're sending Mach messages between these two modules. And then, of course, QTKit Server communicates with the 32-bit version of QTKit, which of course then uses QuickTime.
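The client-server split described here can be sketched generically. The toy Python below is not QTKit, and JSON strings merely stand in for the MIG/Mach message transport; the point is the pattern: the client holds only opaque identifiers, and every call round-trips through a server that owns the real objects.

```python
import json

# Toy proxy pattern: the "64-bit" client never touches the real movie,
# only an identifier. The "32-bit" server owns all real state. JSON
# strings stand in for serialized IPC messages.

class MovieServer:
    """Stand-in for the process that owns the real (32-bit-only) objects."""
    def __init__(self):
        self.movies = {}
        self.next_id = 1

    def handle(self, request):
        msg = json.loads(request)
        if msg["op"] == "open":
            movie_id = self.next_id
            self.next_id += 1
            self.movies[movie_id] = {"path": msg["path"], "time": 0.0}
            return json.dumps({"id": movie_id})
        if msg["op"] == "current_time":
            return json.dumps({"time": self.movies[msg["id"]]["time"]})
        return json.dumps({"error": "unknown op"})

class MovieClient:
    """Stand-in for what the 64-bit app links against: a thin forwarder."""
    def __init__(self, server):
        self.server = server

    def open(self, path):
        reply = self.server.handle(json.dumps({"op": "open", "path": path}))
        return json.loads(reply)["id"]

    def current_time(self, movie_id):
        reply = self.server.handle(
            json.dumps({"op": "current_time", "id": movie_id}))
        return json.loads(reply)["time"]
```

This also makes the limitation described below concrete: an identifier from the server is meaningless in the client process, which is exactly why handing 64-bit callers a raw QuickTime identifier wouldn't help them.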

So at this point, I would step up here and do a demo of a 64-bit QuickTime app. Unfortunately, it turns out that this machine, the way it's configured, doesn't let that happen. It'll come up and run, but there'll be some display issues. So I won't offend you with those display issues. Let me just talk briefly about what the limitations are in the seed that you have. First of all, the capture classes that I demoed earlier, and that I'll talk about more tomorrow, are not available in the seed you've got. We've just about got them all implemented in 64-bit, but they just didn't make the cutoff. If you use any QTMovie delegate methods, you won't be able to use those in this seed in the 64-bit world. Again, we'll work on that and solve that problem. Currently, we have drag and drop turned off for the QTMovieView in the 64-bit world. Once again, just a limitation of having to get it into your hands as quickly as we could.

And finally, there are some drawing glitches that you'll encounter. Even if you actually get something to draw into your QTMovieView, which we couldn't on this particular machine, you'll see some drawing glitches. And again, we'll address those certainly by the time Leopard ships. So maybe even in the next seed you get, you'll see this all cleaned up.

So, do you have 64-bit applications that need to access QuickTime functionality? If you do, and you're using QTKit, then you should make sure that QTKit is suitable for your needs. In particular, you should look for any uses of the quickTimeMovie method or the quickTimeMovieController method. Those give you back the QuickTime identifiers that, in the 32-bit world, you would pass to the C-level APIs. Well, as you know, there are no C-level APIs for QuickTime in 64-bit. So we could give you back an identifier, but you couldn't do anything with it. If you're not a QTKit app and you want to have QuickTime functionality in your 64-bit application, well, there's no choice: you're going to have to become a QTKit app. In particular, you should look at all the QuickTime APIs you're calling and make sure that there's a QTKit analog that you can rely upon. If there's not, then you should let us know right now, because we can still bulk up QTKit. If there's some particular C-level API that doesn't yet have an analog in QTKit, we can add it in order to make your life easier. So once again, let me just point you off to the main QTKit session tomorrow afternoon at 3:30 in this room. And then on Friday morning, there are two labs where I'm sure there'll be lots of questions about QTKit and what we've done to it this time around.