
WWDC06 • Session 220

High-Performance QuickTime Video Processing

Graphics and Media • 58:41

QuickTime leverages the latest graphics advancements of Mac OS X allowing your application to easily take advantage of key Mac OS X graphics technologies. Discover how to use the power of OpenGL, Core Video, and Core Image with your own rendering pipeline. Learn Visual Context best practices, modern ways to use QuickTime compression sessions, movie exporting to the iPod and much more. This is the ideal session for QuickTime developers looking to add next-generation capabilities to their applications.

Speakers: Ken Greenebaum, David Eldred, Frank Doepke

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Hi, welcome everybody. I'm Ken Greenebaum and I'm going to be joined in a little bit by two of my colleagues, David and Frank, and we're going to be talking about high-performance QuickTime video. Thank you all. I know it's late in the week and it's also late in the day and it's right before the beer bash. So we're going to try to be quick with this. We're not going to do a Q&A afterwards, to try to get you all onto campus. And on campus you can track us down, and you can also track us down tomorrow at the lab, which begins at 10 o'clock.

So this is what we're going to talk about today. Of course, you all came here to see the cool new stuff, and we'll show you some of the cool new things. You've seen some of it already, and we're going to talk about how that applies to QuickTime. So what's new with QuickTime and Leopard? 64-bit. We're going to tell you how QuickTime and 64-bit get along. We're going to show you some Quartz Composer integration. We're going to talk about some new features in QuickTime, most notably Aperture Mode. And we're going to talk about QuickTime and Core Animation integration.

For those of you who were just at the previous talk on QtKit, you've heard an awful lot about QtKit, but we're going to talk about it a little bit more. And maybe most importantly, we're going to talk about how as developers you want to position yourselves with QuickTime. So we're going to talk about QuickTime strategy and also best practices for developing for QuickTime.

First, strategy. So as you know, QuickTime's been around for a long while now. Because of that, some of the APIs reflect that age. The good news is that all the old code, or at least most of it, can continue to run. And that's a wonderful thing. It means that if you have old code bases, you can still maintain them. It's important to remember that while some of the APIs are old, the QuickTime internals are remarkably new, really. They're modern. Much of the video and audio machinery was redone for QuickTime 7.

So if you're using those new APIs, you really get the benefits of all the new technologies we've been delivering in OS X. And you also can take advantage of the new features that you've heard about this week in terms of Leopard. It's important to remember that if you use the new APIs, you're going to get the highest performance, you're going to get integration with these new features, and moving forward, it's going to be the safest place to be for your applications.

So first, let's talk about QuickTime in 64-bit mode. So again, if you were just here at the session, Tim told you a lot about it, but this will be a brief review. So we consider QuickTime to be a bridge between the 64-bit and 32-bit worlds. So maybe I should ask, how many people are planning to develop applications or maybe currently developing applications involving video and 64-bit? Okay, there are actually a surprising number of hands that came up.

I would love it if you folks came up, not afterwards, but tomorrow or at the beer bash, and told me what all you're planning to do with that. So for those who haven't really considered 64-bit yet, 64-bit is great. It's really great if you need it. It's especially nice: the LP64 model really means that longs and pointers are 64-bit values.

It means you can memory map just huge data files. It's really important for the sciences. But there's a question. Once you have these huge databases, how do you share them with your colleagues? It's simply impossible to copy and share those databases. Well, we think one thing that you can do is actually output analysis to QuickTime movie files and then trade the movie files.

So for those of you who were at Bertrand's State of the Union -- how many people were at the State of the Union? Okay, that's quite a few people. I wasn't. I was busy preparing for this. So I'm not sure. I know that he showed a demo. This is the demo. This is a still from the demo.

Pretty much it uses a very large database of molecular data, and it renders that to a QuickTime movie file. So this is sort of the kind of process that we're talking about. But I'm not sure if he showed you the QuickTime movie that was generated from it. We're going to show it to you. Or if this is not the actual one, this is one that's similar. So it's pretty cool.

And of course, it's easy to trade this around, impossible to trade that original data set, or at least very challenging to. So 64-bit and QuickTime. It's really important to remember that QTKit is the 64-bit interface for QuickTime. This is your entree into the 64-bit world. And there's not going to be 64-bit support for the present C APIs.

So in terms of QTKit, the 64-bit API is identical to the 32-bit API. Let's take a quick look. So new in Leopard, there's a new initToWritableData: method that allows you to instance a movie in memory. But the rest of this should be familiar to you folks. To add a frame of video to a movie, you call addImage:.
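(A minimal sketch of what that flow might look like in QTKit on Leopard; the image path, codec attribute, and error handling here are illustrative assumptions, not the session's demo code.)

#import <QTKit/QTKit.h>

// Sketch: build a movie in memory, append one frame, and play it.
NSError *error = nil;
QTMovie *movie = [[QTMovie alloc] initToWritableData:[NSMutableData data]
                                                error:&error];
// The movie must be editable before frames can be added.
[movie setAttribute:[NSNumber numberWithBool:YES]
             forKey:QTMovieEditableAttribute];

NSImage *frame = [[NSImage alloc] initWithContentsOfFile:@"/tmp/frame.png"];
NSDictionary *attrs = [NSDictionary dictionaryWithObject:@"jpeg"
                                                  forKey:QTAddImageCodecType];
// Add the image as one frame lasting 1/30 of a second.
[movie addImage:frame
    forDuration:QTMakeTime(1, 30)
 withAttributes:attrs];

[movie play];   // plays the in-memory movie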

To play the resulting movie, you call play on it. So very similar to what you'd expect if you were already using QTKit, and hopefully something easy to use if you begin using it. So next we're going to talk about Quartz Composer integration. So Quartz Composer is great. We've seen and heard a lot about it. We've seen the demos. Hopefully you attended some of the sessions even. It's been around for a while now. We released it with Tiger.

But in this talk, we're going to briefly talk about how to integrate Quartz Composer and QuickTime together. And there are two things that make that possible. The first is that Quartz Composer itself ships with a QuickTime movie node. And that's your way to get QuickTime movies into your composition. And you can have any number of those, as we'll demonstrate in a little bit. And then the second part maybe is a little more esoteric, that there's actually a Quartz Composer QuickTime component and importer.

So using those things is what allows us to do some of the really cool forms of integration that we're going to demonstrate. So Quartz Composer compositions are actually what we call first-class QuickTime media types, which means that we can bring those directly into the movies and we can treat them as if they're just another movie track. And that's pretty cool. Also, the Quartz Composer QuickTime importer is what allows us to take a Quartz Composer composition, a QTZ file, and basically drag it into QT Player and have it play. And that's also what allows us to play compositions within our slide sets.

So Quartz Composer makes for a very interesting video source. It's computer graphics, and computer graphics by nature are very flexible. So Quartz Composer is actually resolution independent, and it's resolution independent in terms of time as well as space. Quartz Composer supports alpha channels, which are really great because as you know, those really come in handy when it's time to composite things together.

Quartz Composer is Core Image accelerated, and that means you can do all sorts of wonderful things in real time. And of course, Quartz Composer has a very interesting model, a filter graph that allows you to create compositions and edit existing compositions without requiring programming. So that allows people who are maybe more artistic than programmers to be able to create new content and effects. And we're really happy about that. But with all this great flexibility come challenges. Right now, Quartz Composer only understands what we call square pixels. And we'll talk a lot more about pixel aspect ratios in a little bit.

But for now, remember that Quartz Composer doesn't really understand video right now. In a similar way, Quartz Composer doesn't understand what an interlaced movie format is or video. So right now, we have to do some things to work around those issues, and you also have to be aware of where those might pop up. And we'll talk about some of that.

So it's very interesting. There's this interesting circular kind of relationship in terms of who hosts whom in this case. You can actually take a QuickTime movie and put it inside a Quartz Composer composition via those nodes we were talking about. Or you can take a Quartz Composer track and put it inside a QuickTime movie. And we're going to show you a demonstration of that. And then I think the most exciting things are what I'm calling these compound situations that are a combination of both.

So first we're going to talk about a Quartz Composer track in a QuickTime movie. I didn't take the time out of the talk to give a demo of this, but this is what it would look like if we took a Quartz Composer composition and dragged it into the QuickTime player. So this works again because of the component importer that we already talked about. And the next points are very important.

That it's actually QuickTime that's imposing the frame dimensions, the frame times, and the frame rates on the composition. So that's what's changing the nature of the composition from being this computer graphics run as fast as you can kind of world into something that's much more similar to what we expect from video. And a very nice feature of this is once you've brought it into QuickTime, then you can export the result into any QuickTime supported movie format. So you went from having this composition, computer graphics, to having something that's actually video. So this is your print to video function.

So we're going to talk about use of Quartz Composer as an overlay track. But first, let's talk a little bit about what's significant about using these Quartz Composer compositions. The composition is, really, a description of animation. And that description doesn't grow with the length of the animation. It stays the same size.

And most of that size tends to be the actual media that's included. So in a little bit we're going to show you a demo. It's going to have a bouncing Q. That's what I mean by media. That's something that's imported into Quartz Composer. And that takes up a whole lot of the space of that composition. But we could display that bouncing Q for one second or 100 hours, and the composition is the same size. So if you add that into your movie, you sort of get that almost for free. It averages out to nothing.

You can use this overlay technique for a lot of things: crawls, titles, all sorts of fancy things. I'm just going to show you a very, very simple example of it. Can we cut to the demo machine, please? So if you notice, there's a bouncing Q in the lower right-hand corner. And that's the composition.

And we could do anything that was much more fancy or much subtler, but this is what we did in this case. I think you get the idea. Can we go back to slides, please? So what exactly went on? First of all, there was no programming required. We created a composition, and we added the composition to the movie. This is how it works. This is the Quartz Composer interface, which you've probably seen before.

This is the image that we imported. In this case, this is our Q. Here are a couple of nodes that handle the spatial transformation that's happening. This is a node that handles timing. And you'll see in the lower left-hand corner the result. And if you notice, the Q is on the checkerboard background. And that's how we do the overlay.

In terms of the QuickTime movie this was brought into, we already had the movie in QuickTime Player. Then we brought up the property panel. You can see we selected, there's a new track, the Quartz Composer track. We specify the size of it. We put it on-- I don't know if you can see it, but this is the very first layer, layer negative 1. And then in terms of transparency, we choose alpha.

And that's as simple as it gets. Let's talk about the compound scenario. So this is where things get very, very interesting. Our users are used to seeing very flashy content. They see it produced on TV, and they really expect it from all of us in our applications. And we think that this type of integration is one way to produce it. So pretty much what we have is a composition. The composition has multiple QuickTime nodes in it.

The QuickTime movie basically brings that in as a track. Or if you're dealing with an application, it can integrate the Quartz Composer content via a visual context. That application can programmatically control the composition. So in this example coming up, we don't do this. But if we wanted to, we could programmatically change the video content, trigger the switching to go on, do these types of things. But along with this come the caveats, concerns that we have to keep in mind. And that is that you want your true video content to match the content that is being produced by Quartz Composer.

So just for example, you may have to de-interlace the movie content before it comes into Quartz Composer. That is, if your movie content started out being interlaced content. And you might have to re-interlace that as it comes out of Quartz Composer just so it looks the same as the native video. Let's see the demo. Please can we switch-- oh, you're ahead of me. Thank you.

So here we're switching entirely too fast back and forth between two pieces of video. And you can see that it has a very nice effect, 3D effect. So you can do these things and more. Let's go back to slides, please. So in terms of references, this is where you can go for more information.

There's the QCTV example, and there's a very nice tech note as well that has more information on this. Aperture Modes. This is a new feature in QuickTime 7.1. And it's a feature that requires a little bit of motivation. So let's talk about why aperture modes are necessary. Video and computer graphics are inherently different, or at least they have very different backgrounds.

Video started out in terms of an analog world. Computers, at least what we think of as modern computers, are digital. So the first concept we're going to talk about is this concept of pixel aspect ratio. And sometimes we call that PASP, and now you know what that means. So computers were designed to have square pixels. Why? It just makes sense mathematically. Things render more or less properly when your pixels are square. And when I say square, I mean they have a ratio of their height to width that's equal to one.

Video, as I mentioned, is an analog thing, or at least it started out that way. No concept of pixels at all. Pretty much you had analog lines. And you didn't start associating pixels with video until we started digitizing.

[Transcript missing]

Next, clean aperture. We also call that CLAP. So computers were designed to display all of their pixels. There are none that are hidden or invisible.

TVs, also being analog devices, were designed to hide pixels. Basically, 5% to 10% of the video image was purposely hidden behind the plastic bezel. That's all by design. So there has to be some way that we can describe the nature of our video content, which part was intended to be seen and which part wasn't intended to be seen. The part that's not intended to be seen contains all sorts of mistakes and bloopers and things that we really don't want to see.

So these aperture modes, these concepts were defined in SMPTE 187 back in 1995. These are the concepts that we're dealing with. Pixel aspect ratio, clean aperture, production aperture, edge processing region. We'll talk about those. What's important from the QuickTime perspective is that things like PASP and CLAP should be identified with your video content. And we do that by tagging the movies.

There's also Aperture Mode APIs that we've added to QuickTime. And pretty much that allows you as the developer to tell QuickTime how to deal with these values. And I'll give you examples of those as well. We have four supported aperture modes. The first is classic. This is really what you're used to. You get all the pixels and there's no aspect ratio correction.

The next is Clean, Clean Aperture Mode. This is really what your end user probably expects. We do all the processing, we only show what we call the clean pixels, and we do perform aspect ratio correction. The next two are really intended for probably professional video editors. In one case, we perform the aspect ratio correction and show all the pixels. In another case, we don't do any corrections at all. We don't alter the signal.

So this is an example of DV. It's what we all are used to seeing. And we're so used to seeing it that we probably don't even notice anymore that on the left and right edges there are these vertical bars. And if you look at this blow-up, you'll see that those vertical bars are more complicated, that there are no fewer than eight columns of pixels that are somehow bad or peculiar. They're off. And really this is because the codec doesn't define that area. We're not supposed to display these.

So these are the modes again. This is also from DV. It's a specially prepared signal. In this mode, where we're conforming the aperture to encoded pixels, we don't provide any pixel aspect ratio correction. So I believe you should be able to see that the circle in the center isn't so much a circle as it is an oval. It should appear to be very wide. Also, there are these fuchsia bars on either side.

To make the circle, or lack of circularity, a little more apparent, we can measure the dimensions. And we can see that it's 390 pixels wide and 354 tall. And interestingly enough, that's an 11 to 10 ratio. And we're going to see those numbers come back again. If we correct the pixel aspect ratio by conforming the aperture mode to production aperture, then we get back to something that hopefully appears round to you. And you can see that when we measured it, it's 354 by 354.

When we correct PASP and CLAP, we're actually conforming the aperture mode to clean aperture, and we're getting back to the 640 by 480 dimensions that we know and love. You can see that the vertical bars are gone, and hopefully things appear to be round as they should be.
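(A quick arithmetic check, assuming the nominal 10:11 NTSC DV pixel aspect ratio and the 704-pixel-wide clean aperture: 390 encoded pixels × 10/11 ≈ 354 display pixels, which is why the corrected circle measures 354 by 354; and 704 × 10/11 = 640, which is where the familiar 640 by 480 clean aperture dimensions come from.)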

So this is another way to look at it using an object we're all familiar with. So in this case, the edge processing region are those areas that didn't get chocolate coated and really that's just a manufacturing defect that none of you were supposed to be exposed to. The clean aperture, really the material that we all want to have is that nice chocolate area in the center.

When we set QuickTime to the clean aperture mode, you only get the chocolaty goodness, all those other things go away. So for those of you who are interested, these are the values that we are currently using in QuickTime. And we've been using these in terms of the QuickTime codecs for some time.

If you want to experiment with your own content, you can bring up the content in the QuickTime Player property panel. You can select the movie, go to the Presentation tab, click "Conform aperture mode to", and there's a drop-down for the four aperture modes. So you can see what these things actually look like. There's an available API that we're going to talk about in a moment. There's both a C API and AppleScript.

So next I'd like to briefly talk to you about tagging. Failure to tag your content is basically the number one source of image quality problems that are reported to us. Tagging really is used to allow you, the developer of content, to describe to QuickTime the nature of the content.

If those tags aren't provided, then QuickTime disables most of the advanced features. You don't get aspect ratio correction, no clean aperture, and other things that we're not going to talk about today, like color correction, are all disabled. This causes QuickTime to basically guess your intentions if it's not tagged.

So if it's guessing, that means you're likely not going to get the results that you want or intend. There's a programmatic interface for attaching your tags to your movies. And you can look at these tags using Dumpster, which we're going to do very quickly. So hopefully you all have been exposed to Dumpster before. If not, you can download it.

In this case, we're looking at the sample description. You can see there are values for PASP, and in this case, you can see that same 10 to 11 ratio as we saw before. You can also see that there's a clean aperture that's set, and the dimensions. On the second area in the track, you can see that there are these three values: the clean aperture dimensions, the production aperture dimensions, and the encoded pixels dimensions. And those are all computed based on those other values.

This is a very quick look at the API. The first two calls are optional; basically they allow you to add PASP and CLAP information to an image description. The next call, GenerateMovieApertureModeDimensions, will either use those values or query them from the codec. Codecs that are PASP and CLAP aware will provide those dimensions for you if you don't declare them yourself. And then finally, you can set the aperture mode using the QuickTime property API.
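(A rough sketch of that last step -- not the session's code. GenerateMovieApertureModeDimensions is the C call named above; QTMovieApertureModeAttribute and QTMovieApertureModeClean are the Leopard QTKit equivalents of setting the mode through the property API, and myQTMovie is assumed to be an already-opened QTMovie.)

#import <QTKit/QTKit.h>

// Have QuickTime compute the aperture dimensions from the PASP/CLAP tags
// (or from an aperture-aware codec), then ask for clean aperture display.
GenerateMovieApertureModeDimensions([myQTMovie quickTimeMovie]);
[myQTMovie setAttribute:QTMovieApertureModeClean
                 forKey:QTMovieApertureModeAttribute];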

So keep in mind the new codecs are Aperture aware. QuickTime will default to the classic behavior if your movie content's not tagged. And as developers, you really should remove any workarounds you might have to have corrected the aspect ratio yourself. Please adopt these modes, excuse me, Aperture Mode APIs, and be sure to tag all your content. So next, I turn over to David Eldred. Thank you, David. Thanks. Thanks a lot, Ken.

So an important part of any modern, any high-performance video pipeline is a modern codec. And modern codecs introduce a new set of concerns. So as new codecs are developed, they're always adding a bunch of new tricks in order to advance or increase the compression without decreasing the quality of your video.

And a huge number of these tricks are completely internal to the codec. They have no impact on you as an end user or as a developer or anything. But there are a few tricks that they use that actually do require some change in behavior in your application, depending on how you interact with QuickTime.

But it's important to note that if you stick to the higher level interfaces, things like QtKit, things like in Carbon using the movie and track level APIs, these will actually hide all of these complexities from you. You don't have to worry about a thing. And there's also some things like using the clipboard for cut, copy, and paste. It hides the complexities of movie editing from you. But not everyone has this luxury. And in order to understand these complexities I'm talking about a little bit more, we're going to just take a quick look at some aspects of video compression.

The simplest forms of compression tend to involve just key frames. Every single frame of video is compressed as its own image. There are no dependencies on other images. These are also called I-frames. And as you can see in this, it's very simple. Say you want to display frame number four in this sequence. All you'd have to do is decode and display frame number four. It's that easy. But codec developers have-- they're smart folks. They realize that there's a lot of information that's similar between these different frames. And so that's where this whole idea of difference frames comes in.

With difference frames, individual frames of video depend upon earlier displayed frames of video. So doing this, you get much better compression, and there are a few complexities that are added. And difference frames are also called P-frames. So looking at our same sequence, the first frame, we still want to compress that as an I-frame. It's completely self-contained.

But the next frame in the sequence, it's basically just that I-frame but with a tiny bit of car added. And so if you describe most of that frame in terms of the previous frame, you only have to add the small amount of information required for that car.

And similarly, frame number three is actually an awful lot like frame number two. So describing in terms of frame number two, but that bit of car that was added is shifted to the left and a new strip of car is added. And so on for the rest of the frames in the movie. They're all described in terms of the previous frame. And you can see we get much better compression this way.

But clever codec designers, they see that the first frame and the final frame of this sequence actually contain basically all of the information we want. And describing the frames in between those two points in terms of those two frames will make a huge difference in compression. So that means that the frames in the middle of the sequence will depend upon an earlier displayed frame and a later displayed frame. This is bidirectional prediction. And using this, you can get better compression.

Random access is a little bit complicated, and these bidirectionally predicted frames, they're also called B-frames. So looking at that same sequence, we once again would like to encode that first frame as an I-frame. But instead of going on to the second displayed frame, we want to go on to the final frame in the sequence.

And we'll encode that frame as a difference frame or a P frame based on that first frame. And then, using these two frames, we're able to describe each frame in between. So frame number two is basically the first frame plus the car that was added in the final frame shifted to the right. And same for frame number two. And frame number three-- well, sorry, off by one. And frame number five.

And you can see we get much better compression this way. So it's all very interesting, but what does it really mean inside of QuickTime? So this means that each frame of video, each video sample, could have distinct decode and display times. Previously, frames of video would just have a single time attached to them. That's the display time.

But now, things are a little more complicated. So in our previous example, I love this sequence, the first frame has a decode and display time of zero. And it's important to note that frames are stored in the movie in decode order. So we're going to look at them in that way. So the next frame in decode order is actually the final frame in the sequence. So it's got a decode time of 10 and a display time of 50.

And then all of our B-frames are in order, one after the other, display times 20, 30, 40-- sorry, decode times 20, 30, 40, and 50. And their display times are 10, 20, 30, and 40. Inside the QuickTime file, the display time is generally stored as an offset from the decode time.

So this offset for the first frame would be zero. The offset for the second frame would be 40. And for each following frame, all of these B frames, the offset is negative 10. And this is sort of how you figure out that frames are reordered. If you ever have negative offsets, there's frame reordering.

So that's what it means for QuickTime, but what it really means for you is we have a bunch of APIs that are a little bit ambiguous. So we've updated a ton of APIs at the media and the sample level in order to disambiguate things. Things like GetMediaNextInterestingTime -- very simple -- become GetMediaNextInterestingDecodeTime and GetMediaNextInterestingDisplayTime. Same for GetMediaDuration, both a decode and a display variation. AddMediaSample previously took a single timestamp for that media sample. Now you can have two timestamps, the decode and the display time. So we've had to add AddMediaSample2.

And this is just a small sampling of the APIs that have changed at the media and sample level in order to accommodate these frames with distinct decode and display times. It's very important to note here that the track and movie level APIs that are very similar to these are unchanged. They assume you're talking about display time, and there's really nothing else that makes sense at that level. So GetMovieNextInterestingTime is still the only way to query the movie for that sort of value.

And if you were to use one of these old APIs, GetMediaNextInterestingTime, on an H.264 movie or another movie that has reordered content, you would get an error back. So if you haven't switched to the new APIs, I hope you're ready to deal with errors.
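(To make the decode/display split concrete, here is a hedged sketch of adding one reordered frame with the newer sample-level call; the parameter order follows my reading of the QuickTime 7 Movies.h header, and myMedia, frameData, frameLen, and sampleDesc are stand-ins.)

// One B-frame from the example above: decode duration 10, display offset -10.
TimeValue64 sampleDecodeTime = 0;
OSErr err = AddMediaSample2(myMedia,
                            frameData,              // encoded frame bytes
                            frameLen,               // size in bytes
                            10,                     // decode duration per sample
                            -10,                    // display offset (negative = reordered)
                            (SampleDescriptionHandle)sampleDesc,
                            1,                      // number of samples
                            mediaSampleNotSync,     // not a key frame
                            &sampleDecodeTime);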

So that's a little talk about those APIs. We've introduced a bunch of other APIs in order to deal with modern codecs like this. There's a whole set of new compressor and decompressor interfaces. These have been around for a little while, but these allow you to write your own codecs that implement B-frames. You can get off of GWorlds, and you can do multi-pass compression. You can have a codec that supports multi-pass compression.

And we have some fantastic sample code. It's the ExampleIPBCodec sample. And we suggest all codec writers check it out. And please come and visit us in the lab tomorrow. Find us at the beer bash. Track us down if you're interested in codec writing and you want some help with this.

So we had these old decompression sequence APIs. Those didn't deal well with B-frames. They were very GWorld-based. And so we have the new decompression session APIs that replace them. They get you off of GWorlds, get you off of QuickDraw, and they're compatible with the old codecs as well as new codecs. So if you're currently using the decompression sequence APIs, it's a great idea to switch to decompression sessions. And we've got some sample code, the MovieVideoChart sample code, which I believe is linked to this session, which demonstrates the usage of decompression sessions.

And since we updated decompression sequences, we definitely had to update compression sequences too. Sequences are bad, sequences are old. Use sessions. We've got compression sessions now that allow you to do GWorld-free compression. You pack your frames into CVPixelBuffers and feed those in. I mean, the way these work is you open up a compression session and you're gonna feed it your frames in display order and it's gonna hold on to some amount of them.

And when it's ready to output frames, it'll call your output callback and it'll give you these encoded frames out in decode order. And since they're in decode order, they're ready to insert into your media or do whatever you want with. And it supports frame reordering, it supports multi-pass. Check it out.
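(A condensed sketch of that flow; the signatures should be checked against the QuickTime 7 ImageCompression headers, width/height/media and the pixel buffer are stand-ins, and real code would keep the session around and feed it many frames.)

#include <QuickTime/QuickTime.h>

// Called back in decode order with each encoded frame, ready for a Media.
static OSStatus MyEncodedFrameOutput(void *refCon, ICMCompressionSessionRef session,
                                     OSStatus error, ICMEncodedFrameRef frame, void *reserved)
{
    if (error == noErr)
        error = AddMediaSampleFromEncodedFrame((Media)refCon, frame, NULL);
    return error;
}

static OSStatus CompressOneFrame(Media media, CVPixelBufferRef pixelBuffer,
                                 int width, int height)
{
    ICMEncodedFrameOutputRecord output = { MyEncodedFrameOutput, media, NULL };
    ICMCompressionSessionRef session = NULL;
    OSStatus err = ICMCompressionSessionCreate(kCFAllocatorDefault, width, height,
                                               kH264CodecType,
                                               600,      // media time scale
                                               NULL,     // options: keyframes, multi-pass, ...
                                               NULL,     // source pixel buffer attributes
                                               &output, &session);
    if (err) return err;

    // Frames go in in display order; the callback gets them in decode order.
    err = ICMCompressionSessionEncodeFrame(session, pixelBuffer,
                                           0, 20,    // display time stamp and duration
                                           kICMValidTime_DisplayTimeStampIsValid |
                                           kICMValidTime_DisplayDurationIsValid,
                                           NULL, NULL, NULL);

    // Flush whatever the encoder is still holding on to, then tear down.
    ICMCompressionSessionCompleteFrames(session, true, 0, 0);
    ICMCompressionSessionRelease(session);
    return err;
}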

And with that, I'd actually like to go to a demo. And the demo is-- so we're going to look at how to use a movie export in order to export iPod-compatible video. And where's my mouse? There it is. I'm going to do iPod compatible export. And we're going to do it without presenting-- show you how to do that without presenting any UI to the end user. And so you can integrate this into your own application. So the way we're going to do this, and just as an example-- so we've got this beautiful app here. Just for fun, we're going to add a little Core Image effect.

You get the idea. The way that we do this iPod export is we're gonna-- you need to at least once invoke this-- We're using the MPEG-4 exporter. And you need to open this dialog. And we're going to choose iPod-compatible settings. If you're interested in getting the details of this, it can help you out. But you have to make sure we're doing H.264 video. To be conservative, we're going to choose 500 kilobits. We'll let it do automatic keyframing. The important thing is to make sure that we're baseline compatible. And we'll let it do multi-pass compression.

When I click OK here, it's not going to do the export. All I've done is invoke that dialog and save its settings off to an Atom container inside my application. And I'm not going to do an export with them. Then I've closed that exporter component. And you could do that in some other application, save these settings off, you know, inside your application and never-- and then use those settings as we're going to do over here, apply them without any interaction with the user to another export component.

So, I can't see that too well, but all I'm going to do is present a save dialog. I click Save, and it's going to go ahead and do some multi-pass export here. And it's rendering this to the screen as it's doing the export, doing Core Image effects on top of it and all that crazy stuff. And then this application reads that buffer back from the screen.

And there's my exported movie. Should work on your iPod. And I'm going to really quickly show you the code that's doing-- that's both invoking the first export dialog where I save off the settings, but that's the code that you actually probably wouldn't want in your final application. And then I'll show you the export that-- So, first thing you have to do is actually open an export component. Here I'm opening an MPEG-4 export component.

And so this is the call where I'm actually invoking the dialog. So I'm just going to do MovieExportDoUserDialog. That presents the UI. And then when the UI is dismissed, when the user clicks the OK button, the application continues here. And we do a MovieExportGetSettingsAsAtomContainer call. And in this case, in the application, we're just saving this off to a global variable, the export settings. But you could have this saved off inside your application and never have to do any of this.

And so this is the function that actually gets called when you click the export to iPod button. And just like before, we open up an MPEG-4 export component. But then I'm going to call MovieExportSetSettingsFromAtomContainer. Very simple. And that just sucks in those same settings that we had before, just as if the user had entered them themselves.
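(Roughly, that settings round trip looks like the sketch below -- my own variable names, not the demo's source. The export call shown here is the simpler whole-movie MovieExportToDataRef; the demo itself, as David describes next, uses MovieExportFromProceduresToDataRef, which additionally needs data-source callbacks that aren't shown.)

#include <QuickTime/QuickTime.h>

// Once, with UI: let the user pick iPod-friendly settings and keep them.
static QTAtomContainer gExportSettings = NULL;

static OSErr CaptureExportSettings(Movie movie)
{
    MovieExportComponent exporter = NULL;
    Boolean canceled = false;
    OSErr err = OpenADefaultComponent(MovieExportType, kQTFileTypeMP4, &exporter);
    if (err) return err;

    err = MovieExportDoUserDialog(exporter, movie, NULL, 0,
                                  GetMovieDuration(movie), &canceled);
    if (!err && !canceled)
        err = MovieExportGetSettingsAsAtomContainer(exporter, &gExportSettings);
    CloseComponent(exporter);
    return err;
}

// Later, no UI: reuse those settings on a fresh exporter instance.
static OSErr ExportWithSavedSettings(Movie movie, Handle dataRef, OSType dataRefType)
{
    MovieExportComponent exporter = NULL;
    OSErr err = OpenADefaultComponent(MovieExportType, kQTFileTypeMP4, &exporter);
    if (err) return err;

    err = MovieExportSetSettingsFromAtomContainer(exporter, gExportSettings);
    if (!err)
        err = MovieExportToDataRef(exporter, dataRef, dataRefType,
                                   movie, NULL, 0, GetMovieDuration(movie));
    CloseComponent(exporter);
    return err;
}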

And, yeah, then the important thing here is we're doing a MovieExportFromProceduresToDataRef. And the rest of the stuff going on in this demo is all about all of the complications involved with pulling the data back from the screen after it's been rendered. If you're interested in that, you can look at that some more.

But it's not really the critical part of the demo as far as the iPod is concerned. But there's that demo. Back to slides. And so with that, I think I'm done and I'd like to bring Frank Doepke up here to talk to you more about the video pipeline. Thank you, David.

Good afternoon. I know everybody wants to get to the beer bash, but be aware there will be plenty of water bottles and broccoli will also be available for you who are really hungry. So I'm here to talk a little bit about the video pipeline. You know that we-- oops, sorry. I pressed the wrong button.

There we go, video pipeline. With QuickTime 7, we opened up the pipeline for you so that you can actually access the frames before they actually go up to the display. You're not just rendering to a G world anymore. You can now go in and access the frames, draw them yourselves, do all kinds of crazy stuff with those frames.

To do that, we use a visual context. And there are two. And there's been some confusion. So let me talk a little bit about the difference between these two visual contexts. So first of all, we have the OpenGL visual context. The frames are uploaded as textures to the screen, and so you get a fast playback.

This is when you want to render to the screen or you want to preview a movie. That's when you use the OpenGL Visual Context. And then we have a pixel buffer visual context. This one is kind of like for your off-screen rendering, when you did in the past create an off-screen G world rendered into that. What this means actually is that all the pixels are in main memory. And so you can access them, do whatever you want with them in main memory. They are not uploaded to the graphics card.

And those are the two main contexts that you actually want to use. We do have some sample code also out on the website that shows you the differences between those two. But as those of you who were in the QTKit session heard, there's a new use of the visual context, and that is together with the Capture API.

With the Capture API, we want to replace the sequence grabber. So we have new ways of capturing video and actually accessing it through the visual context pipeline. So you don't have to deal with GWorlds anymore. It's a much more modern interface. Tim gave you all the goodies that we have in this part.

I want to just focus on the use together with the visual context. You can record to more than one destination and also work with more than one camera. So with that, I would like to give you a little demo of the Live Video Mixer 3 that uses the new visual context.

So you've seen the Live Video Mixer in the past already, and what we taught it as a new trick here is that we have video in, so I'll just grab some video material for the first two channels. And so I can start playing this now. And you can already see me there in the bottom of the corner, where I'm actually here. So I can make now the appropriate comments to this game of pool here in the background. So let me position myself a little bit correctly here. So you can move it around.

So I can go up into this corner and say, yes, the player is using that pointy stick to hit what the experts call the ball, I think. And let's also bring in, of course, the other frame, the other camera angle; we can bring this in. And this is just using the new QTKit capture API.

When you look into the sample code of it, you will actually see that for the rendering part of it, I simply have to subclass my rendering of the regular movie, and it looks very, very similar. The only thing I don't have to do is actually the play part. So everything else for the rendering is using the same OpenGL code to render these Core Video frames. With that, I would like to go back to slides, please.
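(For reference, the capture side of something like that boils down to a few QTKit Capture calls -- a sketch only; device selection and error handling are simplified, and myRenderer stands in for the delegate object that receives captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: and hands frames to the OpenGL path.)

#import <QTKit/QTKit.h>

// Sketch: open the default camera and have decompressed frames delivered
// to a delegate, which can reuse the same OpenGL rendering code.
NSError *error = nil;
QTCaptureSession *session = [[QTCaptureSession alloc] init];

QTCaptureDevice *camera =
    [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
[camera open:&error];

QTCaptureDeviceInput *input =
    [[QTCaptureDeviceInput alloc] initWithDevice:camera];
[session addInput:input error:&error];

QTCaptureDecompressedVideoOutput *output =
    [[QTCaptureDecompressedVideoOutput alloc] init];
[output setDelegate:myRenderer];   // receives each captured video frame
[session addOutput:output error:&error];

[session startRunning];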

Next thing, you heard a lot about Core Animation already. So Core Animation also works great with QuickTime. You can use this for compositing. So you heard Core Animation is new in Leopard, and it makes it really, really easy to do compositing and all kinds of effects in your UI. And it really gives your applications a new kick.

And one of the things, for instance, that was really, really tough in the past was when you had, for instance, your movies, you want to play them in the background of whatever you do in your application, it was very difficult to get like AppKit controls on top of it because you used an OpenGL context and AppKit did not work very well with that on top of it. So this has changed now when you use Core Animation. And let me give you a little demo of that part as well.

So I have a little sample app here that plays a movie. And what I can do now here-- so I can actually fade that movie a little bit into the background. You see regular app kit controls right here in the corner. We're going to play some very saddened movie music. And we can play some funky effects that makes us look really groovy.

Our boss is very happy about that music as well, so we can show some frequency levels and just composite them on top of it. Of course, we can make this look a little more spacey in general. Since it's already late in the day, the eyes are getting a little blurry.

I can also do some live resizing with this part. And this is all done through Core Animation. And this is almost shocking how little code was needed to do this kind of stuff. Enough of that music. Can we go back to the slides, please? What I would like to mention is that the Live Video Mixer 3 as well as this LayerKit, sorry, that was a little blooper there. It's actually called Core Animation now. We used to call it LayerKit, but you find it actually under the LK QuickTime demo. They are both associated with the session, so you can download their sample code and already play around with it.
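(The layer-backed movie trick really is only a few lines; a sketch under the assumption of a layer-hosting contentView and an already-opened QTMovie named movie, using the Leopard QTKit class QTMovieLayer.)

#import <QTKit/QTKit.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: host a movie in a Core Animation layer so ordinary AppKit
// controls can sit (and animate) on top of it.
[contentView setWantsLayer:YES];

QTMovieLayer *movieLayer = [QTMovieLayer layerWithMovie:movie];
movieLayer.frame = NSRectToCGRect([contentView bounds]);
movieLayer.opacity = 0.75;                       // fade the movie into the background
[[contentView layer] addSublayer:movieLayer];

// AppKit subviews added to contentView now composite above the movie layer.
[movie play];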

So next we want to talk a little bit about performance. So I'm not on commission here to sell you the fanciest API that you can use. We want to actually direct you towards using the correct API. Because there's been a lot of confusion, since we have different sets of APIs: what is the right thing for me to use? So if you simply want to play back a movie, you're fine just using QTKit. It works in all cases, and it will just play back your movie. You don't have access to the video pipeline, but well, you just said, I just want to play back that movie.

If you want to have access to the pipeline and do something a little bit more fancy with it, you have the choice of using OpenGL. And this gives you very broad hardware support. Everything that runs with Quartz Extreme can run with the QuickTime OpenGL visual context. And you can do, of course, all the 3D stuff that everybody does. But you can also use it just for compositing. That's what I've shown in the Live Video Mixer.

And you see you can also use masks. So you can get some really interesting even video compositing effects done just in OpenGL. And it's actually not really too hard. And then, of course, we have Core Image. As David already showed in his example, it's very simple to use some very easy effects.

And also together with Core Animation, you have some really nice effect machinery. There are plenty of plugins also available to extend this machinery, but you need, of course, Core Image capable hardware. And every product actually that Apple ships today has the support for Core Image right on the graphics card. You can also, of course, use Core Image on the CPU, but you might not get the performance that you actually expected.
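(As a feel for how little code a simple effect takes, here is a sketch of running one Core Image filter over a decoded or captured frame; pixelBuffer and the sepia filter choice are just placeholders.)

#import <QuartzCore/QuartzCore.h>

// Sketch: wrap a Core Video frame in a CIImage and apply one filter.
CIImage *frame = [CIImage imageWithCVImageBuffer:pixelBuffer];

CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setDefaults];
[sepia setValue:frame forKey:kCIInputImageKey];
[sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];

CIImage *result = [sepia valueForKey:kCIOutputImageKey];
// Draw 'result' into your CIContext (GPU-backed for best performance).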

Now, when we talk about performance, the important part is actually to look at how we can measure or look at the performance the moment we actually have a playback problem where we are dropping frames. Why? So there are, of course, the OpenGL tools. For those who deal with OpenGL, you're definitely familiar with using the profiler and the debugger. But we also have an old friend, Quartz Debug, that learned some new tricks.

Because Quartz Debug now allows you to look into what you can do with Core Image. It actually shows you what's going on in your CI context. You can actually see which filters are applied, so there are no surprises there. And you can actually judge how long a filter took. So that is important for you, because not every filter has the same complexity.

So when it runs on the graphics card, some of them take definitely more resources and take definitely longer, and that can cause you to drop frames. Of course, the image size has something to do with it. So you need to judge that. And just to show you actually how to use Quartz Debug, I would like to give you a short demo of that.

[Transcript missing]

Again, our CIVideoDemoGL. I changed it a little bit to use just a very simple filter here, and that is a sepia tone filter. And let me launch Quartz Debug now. What you don't see, because unfortunately we can't mirror here, is I go into the Tools menu and show the window list.

And then I get a list, which you see right here, of all the clients that are actually running. So now I have to actually glance a little bit here. So this is my CIVideoDemoGL, and I can actually see what's happening in that context. So let me just actually step through a few frames here.

And I can see actually what did render here. So now I can open this context and... On the bottom part I can see, okay, this was the filter that rendered. When I look at the performance side, I can actually see when I start sampling and render a few frames.

I should be able to see some time. Actually, there we go. You can see the accumulated time actually that was used by that filter during the time that you were sampling. You can see how many pixels ran through it. There seems to be a little problem here. I have no numbers, but technically it would work. And you can actually see also the throughput, which is important for you to know how many megapixels per second actually ran through this.

So this is new in Quartz Debug on Leopard, and this is a great tool for you to find out, when you use Core Image, what is really the performance hit that I take when I use some filters. And with that I would like to go back to the slides, please. Thank you.

So those are the tools. Somebody's falling over there. But there's no substitute for actual testing. If you target specific hardware, you need to test on it. So there are a lot of differences actually in the capabilities of the GPU, meaning the graphics cards. There are a lot of differences also in what the CPU can do. And now that we have both platforms, we definitely have to look into that. And those are the main components that definitely impact how quickly frames get decoded, how quickly they can get rendered, and which filters. You want to take that call? Okay, thanks.

Then of course there are memory considerations as well. When you have low memory, you will not be able to hold as many frames in memory, and that definitely drops your performance. And if you run multiple streams, the disk performance is also very important, because when your hard disk is slow, you will not be able to access that many frames. So you also have to look at sampling those parts of your application when you run into this.

Now, saying that you have to test on all these configurations -- I mean, of course, I encourage you to buy every single computer that you can get your hands on. But to make it a little bit easier for you, we also have the compatibility lab in Cupertino. And for all the ADC Premier and Select members who can go there, there are some free cookies there. And you can test on every configuration. And then you know at least, well, before I ship it to my customer, it is running. So come to Cupertino. It's actually much warmer than it is here in the city right now.

Let's summarize what we really want you to take away from this presentation. First of all, Quartz Composer loves QuickTime and QuickTime loves Quartz Composer. So do some creative things by mixing those together and create some new material that you have never done before. That really opens up new avenues and fancy movies.

Then of course look at the tags. So you saw we really want the chocolate of the chocolate bar. Leave the crumbs out. So make sure that you only get the clean aperture. And then of course circles deserve to be circles. Make sure that you get your aspect ratio right. Otherwise things look stretched, and we don't want that.

Then, as David said, there's a new order, so frames may not be in linear order in your files anymore. For that, make sure that you use the latest APIs. Use the compression session and decompression session APIs. Sequences are outdated, the decompression sequence API as well as the compression sequence API. And then of course, look at all the new things that we have in Leopard. Play around with the QTKit part. Play around also with Core Animation. And QuickTime is a great medium to mix in with those kinds of new technologies that we have there.

As Ken already mentioned in the beginning, QuickTime has already some years under its belt and therefore it has lots of APIs. Some of them are not really the ones that we want you to use in today's application. They will work, but they are not as good anymore. So use the modern interfaces and please move away from G-Worlds. QuickTime had used a lot of QuickDraw stuff in the past, but that is the past. QuickDraw is deprecated, so you really want to move away from G-Worlds wherever you can.

Now, where do you find modern sample code? You can go to our website, and Ed Agarberg was great in writing some very simple but down-to-the-point sample code that helps you to use all the new stuff. Then, of course, look at the Live Video Mixer. Look also at the CIVideoDemoGL that we have on your lab red disk, and you can play around with those. Those are really the best practices that we can suggest today.

Now, since there is so much in QuickTime, I know that a lot of you go out and find, okay, I need to do this and let me Google basically for some specific stuff. And, well, you find some sample code and that might not be really appropriate anymore. So, if you see some sample code that's been written and it starts with like 19-something, that might be a little bit outdated already.

If it's written in Pascal, I would definitely stay away from that. If it was targeted for something that we had before OS X, yeah, stay away from those. Those might not point you to the correct stuff anymore, well, because they simply didn't know about our new APIs.