Graphics • 55:26
Mac OS X "Tiger" features exciting new media services that leverage the GPU for real-time performance and high-quality effects. Find out how QuickTime takes advantage of these new services to provide high performance processing throughout the video pipeline. And, learn how you can use these new capabilities in QuickTime to easily provide features such as integration of video into OpenGL scenes, and application of real time effects to video.
Speakers: Tim Cherna, Sean Gies, Jim Batson, Ken Dyke, Frank Doepke
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Good afternoon. Happy Canada Day. Thank you for coming. I'm Tim Cherna, and I manage the QuickTime Video Foundation team, and together with my team and the QuartzFX team, we're going to talk about new directions in QuickTime performance, which is really about the integration of QuickTime and Core Video on Tiger. We're going to talk about a new video pipeline we've been working on.
So, this lets you take advantage of the GPU for video acceleration and video effects. We'll show you how to adopt Core Video for high-performance rendering and timing, and we'll show you how to move away from some of the QuickDraw data structures that QuickTime uses.
So, today we're going to start out talking about some simple ways to take advantage of the new pipeline, using HIMovieView for Carbon developers and QTMovieView for Cocoa developers. We'll also show you how to customize your pipeline using the visual context and Core Video. And then finally, how to integrate this with OpenGL and the new Core Image.
So before I start, I want to talk a little bit about how QuickTime did rendering before Tiger. Movies would render into GrafPorts and GWorlds. GWorlds would be off-screen memory, and GrafPorts would be on-screen windows. QuickTime could use a transfer codec to go from an intermediate pixel format to a final one.
And, for certain formats, we could accelerate the decoding in hardware. But it's important to note that if we were going to an off-screen GWorld, the movie would always play synchronously, so you wouldn't benefit from the asynchronous scheduling of QuickTime. So, the stack kind of looks like this.
You have the movie, and underneath that it uses the Image Compression Manager and, say, a primary codec like DV or MPEG-4 to either go to the destination directly or go through a transfer codec, and in some cases it would accelerate that by going directly to the hardware. But the one thing to note is that the entire stack, from the movie down to the final destination, is owned by QuickTime, and it wasn't really layered.
So, one of the limitations that we had in that architecture was that it was really hard to play a movie into GL. Well, it's easy to do a bad job, but it wasn't that efficient. And about two years ago at WWDC, we showed a solution: write your own custom transfer codec to get the drawing notifications from QuickTime so you could synchronize it with OpenGL.
But it only worked for certain movies. It worked for movies where there was a single video track and certain specific codecs, and it wasn't very efficient. So that didn't work so well. So we decided to build a new video pipeline in Tiger. And we had some goals. The first goal was to have good integration with OpenGL. We wanted to be able to support multiple buffers in flight.
We wanted to be able to separate the decode and presentation logic. And we wanted to layer this on top of Quartz. This is basically the diagram that we showed in the graphics and media overview, and you know that Quartz is basically Core Graphics, Core Image, and Core Video. I wanted to show you how we're going to build this pipeline, so we made a little bit more of a colored version of this.
So, how do we build a pipeline? We want this movie to get to the graphics hardware. And the first thing we're going to do is use QuickTime to source and decode the data like we normally have done. We've chosen to use OpenGL to do the rendering to the display.
And we've created these visual contexts as a structure to source textures from the movie for downstream rendering. And we use Core Video for timing and buffering services. And we can also use additional OpenGL functionality for transformations of the image as they go through. And of course we can add Core Image for image-based effects.
So these last two modules that I've shown are basically where your application would customize the effects pipeline. This is where you as developers can take advantage and do whatever you want to do, and we'll show you some of that coming up. So this is the architectural stack that we've built, and once again, this is the video pipeline. We're going to have the other people on the team talk about the various parts, but to start out, I'd like to bring up Sean to talk about some simple solutions.
Thank you. So you want to get video in your application. Well, what's the easiest thing you can do? Maybe you don't care about QuickTime or OpenGL and textures; you just want to get your video playing in some window. What's the easiest thing? Well, Cocoa developers can use QTKit. There was a whole session this morning on how to use QTKit.
And in that, there's QTMovie and QTMovieView. So if you use QTMovieView in your applications, you get all the acceleration and all the benefits of Core Video underneath without having to deal with any of this new stuff. Similarly, Carbon developers can use the HIMovieView. Now, this is a replacement for the Carbon Movie Control and also gives you all the benefits.
So, here you can see the QTMovieView takes care of the visual context, it manages the OpenGL rendering, and it deals with Core Video, doing all the timing and so forth. Same with HIMovieView. So, since the HIMovieView wasn't covered in depth in the other sessions, I'm going to go over it quickly here.
So, this is now the preferred way to put movies in your Carbon windows. It's a full-featured HIView, unlike the Carbon Movie Control, so it's going to work in your composited windows. You can also use Interface Builder and put this in your nib files. And, of course, it uses the visual context to get all the live resizing and the GPU acceleration. Again, use this one instead of the Carbon Movie Control.
So, here are some of the APIs. You create one of these things with HIMovieViewCreate. You can also use HIObjectCreate and pass it the class ID of the HIMovieView. Or, if you're using Interface Builder, you can create a custom HIView and set the class ID inside Interface Builder, and you'll get that in your nib file.
Once you have one of these, you can use the set-movie and change-attributes calls to change the movie in the HIMovieView. This is a slight difference from the Carbon Movie Control, where it was the same movie from the time you created it to the time you destroyed it; you couldn't change the movie, and there were some attributes you couldn't change at runtime. So here you can turn editing on and off, tell it to start listening for copy and paste, and so forth.
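A minimal sketch of that creation path, assuming the Tiger-era HIMovieView.h names (HIMovieViewCreate is named in the session; HIMovieViewSetMovie, HIMovieViewChangeAttributes, and the attribute constants are from memory and worth checking against the header):

```
#include <Carbon/Carbon.h>
#include <QuickTime/QuickTime.h>

// Create an HIMovieView for an already-opened movie and drop it into a
// composited window's content view.
static OSStatus AddMovieViewToWindow(WindowRef window, Movie movie, HIViewRef *outView)
{
    // kHIMovieViewStandardAttributes is my recollection of the default flag set.
    OSStatus err = HIMovieViewCreate(movie, kHIMovieViewStandardAttributes, outView);
    if (err != noErr)
        return err;

    HIViewRef contentView = NULL;
    HIViewFindByID(HIViewGetRoot(window), kHIViewWindowContentID, &contentView);
    HIViewAddSubview(contentView, *outView);
    HIViewSetVisible(*outView, true);

    // Unlike the old Carbon Movie Control, you can swap movies and toggle
    // attributes at runtime, e.g.:
    //   HIMovieViewSetMovie(*outView, someOtherMovie);
    //   HIMovieViewChangeAttributes(*outView, kHIMovieViewEditableAttribute, 0);
    return noErr;
}
```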
So that's how you use it. And since it's an HIView, it makes heavy use of Carbon events. So, for example, you can use the control-get-optimal-bounds event, and that will tell you how big this movie wants to be, taking into account the size of the controller bar if that's visible. And sometimes these movies may change size.
So you can listen for a new event, movie view optimal bounds changed. When you get that event, you know it's time to re-lay out the window, because the HIMovieView doesn't know how you want your window laid out. It's not going to do that for you. All right, with that, I hand it over to Jim Batson. He'll show us some of these views in action. Thanks, Sean.
So, can we have a demo machine? Great, thank you very much. I'm Jim, and I'm going to show you some simple applications demonstrating the simple solutions for showing accelerated video. We'll start off by looking at the Finder, the canonical example that already uses HIMovieView. I want to navigate in column view over to some place where I've got some movies.
And you'll see the preview over here, as you're used to seeing inside the Finder. And of course you can play the movie. I'm going to jump around so we can get someplace more interesting first. You can scrub. Now, one of the things to note here is that there's a lot more information in "Tiger" than in "Panther" in terms of metadata about the movie being displayed in this panel. You see the duration, the width and height, and even the codecs and media types that are being used in the movie. So, it's just a nice addition in "Tiger". You can also grow the view.
Even while we play, we can grow the movie. This is actually one of the HD movies, so it gets pretty large. And this is HIMovieView being used inside the Finder to handle this, and you can see it scrolling and updating and doing all the normal behavior you'd expect.
And, of course, this works for other movie types than just your normal audio/video. Bring up a VR movie. Let's see it here. Scroll over. And you can still control it; you have the normal VR controls, and all that works nicely. Another place in the Finder where you'll see HIMovieView being used is inside the preview pane itself.
And you'll notice over here you also have the extra information that I mentioned before in the general area, and you also have the movie being previewed here, and it plays well in there as well. Okay, next we're going to talk about the new HIMovieView player, which is a very simple Carbon player built on top of HIMovieView. First I'm going to go ahead and launch it and open a movie. We'll start off with Harry Potter. Familiar movie; you can just click around and see the controller come up.
"You can cook around and use it. Also, you can make the controller visible or invisible while it's running. And, you can make it editable if you want." I don't know if you can see, but the thumb for displaying where you are in the movie has changed to be the edit thumb.
I also added some other controls out here, just using the normal HIView technique of having a button and then sending a message to the view, to implement play and stop buttons. So, if you didn't want to have the controller, you can implement your own UI for controlling the movie.
Something else that the HIMovieView supports is focus and tabbing between fields. I don't know if you can see it or not, but when I tab between the text here (I'm going to blow it up a little bit, using the Command-Option-Plus key to zoom in, and Minus to zoom out), you can see that the text field is highlighted. I can tab over. If I tab over to the text field and start typing, the space doesn't do anything, but if I tab over to the movie view and hit space, it starts the movie as you'd expect. So, this supports focus. I'm going to Command-Option-Minus to un-zoom.
Now, you might have noticed that the controls up here were not being tabbed to. Just in case you try that and are curious why it doesn't work on your machine, I'm going to turn it on now and go back. So, I can just come up here and find where to turn on the tab keys.
Let's see. I want to be able to tab between all controls in windows. So, now I want to go back to the HIMovieView Player. If I tab over... It's hard to see, isn't it? So if I tab over to, let's say, the controller-visible checkbox, I can turn that on. Wow.
At least you can see the tabbing. Okay. There's something moving out there. It was a Dementor. Now, one thing to realize is that it's not just for movie applications whose main purpose is to play video content. QuickTime also gets used a lot for playing little animations. I'm going to bring up this alarm clock, and you might recognize it. It's the alarm clock from iCal, and that animation is just a QuickTime movie.
So now that you've got HIMovieView support, it's really easy to add little animations to your Carbon apps. And finally, I want to end this part of the demo with kind of a classic QuickTime movie, to show that there are other kinds of animations and they also still play. So it's not just the new H.264 or video-style movies.
Is there any audio? Okay. Well, I'm gonna quit it now, 'cause the punchline is not as good without audio. So I'm now going to move over to QuickTime Player. We talked before about HIMovieView, but QuickTime Player is built on top of QTKit. I'm just going to show you QuickTime Player briefly by bringing up a movie. We've seen this QuickTime Player several times this week already. I just want to show you it being able to animate up and down; it's taking the power of the visual context and exposing it through this. It can also go full screen. Full screen.
I'll go for a sec. "I've fought many wars in my time." "Some fault for that." Okay, that's enough. I'm sure you've seen enough this week. So, one trick here: if you want to know whether QTMovie or QTMovieView from QTKit is in use, you can Control-click and you get a little pop-up here. So, we can...
[Transcript missing]
Earlier this morning, you saw Adrian show a version of the QuickTime plug-in that he created very quickly, leveraging the new WebKit facility to add JavaScript bindings and the new Safari plug-in API and other capabilities that you can apply to your Objective-C class. He already went over the code, but what I want to show you here briefly is inside the Info.plist,
[Transcript missing]
And here we have the plugin being used. It supports these buttons down here, just through some JavaScript.
This is the stupid movie this year that was shown yesterday. There'll be a full showing, I understand, in the next session. Let me just skip around here a little bit. The idea behind being able to call QuickTime stuff directly inside a plug-in is that you can add additional features like full screen. So, directly from a plug-in you can take your movie full screen, which is kind of cool. So, oh, let's see.
The last thing I want to show you is, another reason I showed you the plug-in was that with Dashboard, which you saw Monday,
[Transcript missing]
And just to prove that it's using QTKit, here we've got... Look at that. All that good stuff works. And it also just happens to support full screen inside the gadget, you know, Dashboard.
So, another reason I showed you that gadget with Dashboard is that if you go back, take your SDK, and try to do this yourself, it won't work. Not with this version, anyway. The problem is that with the new QuickTime we're using OpenGL, and creating the GL surface with the window from Dashboard fails right now, so I had to kind of work around that. But of course, that'll be fixed. So, trying to save you some grief: don't try this at home.
Okay, with that, I'm done. Just as a final wrap-up, one of the good things with the new HIMovieView and QTMovieView is that they let you add QuickTime playback to the rest of the OS, where we're well integrated now with Carbon and the nice new classes in Cocoa. Back to Sean.
All right, enough of the easy stuff. So you want to customize your pipeline. Here's our diagram again of the pipeline. I'm going to be focusing on the QuickTime side of it, and later I'll hand it over to Ken and Frank to talk about the other pieces of the pipeline.
So, why do you want to customize it? Maybe you want to perform some image processing; you want to use that cool new Core Image stuff. Or you have some heads-up display: you want to draw an FPS counter, or you want to put some cropping boxes over it or something, I don't know. Maybe you have a game and you want to have a movie playing inside the overall scene.
Well, you can't use HIMovieView or QTMovieView to do these things. They're just designed to be a single rectangular window or a view in a larger window. So, you're going to need to use OpenGL, and you're going to need to use Core Video and the visual contexts. So, how does this pipeline work? Well, you're not going to have the view.
So you see the first thing in here is the movie. First, you're going to need a movie. So you may be thinking, well, I can just call NewMovie, or NewMovieFromFile, or NewMovieFromScrap, or NewMovieFromDataRef, or NewMovieFromDataFork, and so on. You have 10 choices. Well, now you have 11.
This is NewMovieFromProperties; think of it as "new movie from anything." All the parameters are passed in through a list of properties, and all the results are returned through properties. So, this is a superset of everything else. You can get the same functionality of all those other NewMovie calls with this new one here.
But the biggest difference is that you can now create a movie that does not inherit the current GWorld. This was a subtle semantic of all the other NewMovie calls: whatever port happened to be current (the last call to SetPort), your movie would get that, and that's where it would render.
So, if you just created a movie and started playing it, it might draw all over your screen or something. With this call, you can specify a visual context to use upon creation. You could set it to NULL and have it render nowhere. So, this is the recommended way to create all movies. And, in fact, if you pass all zeros to this function, it behaves exactly like calling NewMovie with zero.
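Here's a minimal sketch of what that call looks like, assuming the QuickTime 7 property-class constants (check Movies.h for the exact names in your SDK); the visual context can be NULL to render nowhere:

```
#include <QuickTime/QuickTime.h>

// Open a movie from a CFURL and point it at a visual context at creation
// time, so it never inherits the current port.
static OSStatus OpenMovieForContext(CFURLRef url, QTVisualContextRef visualContext,
                                    Movie *outMovie)
{
    Boolean active = true;
    QTNewMoviePropertyElement props[] = {
        { kQTPropertyClass_DataLocation, kQTDataLocationPropertyID_CFURL,
          sizeof(url), &url, 0 },
        { kQTPropertyClass_Context, kQTContextPropertyID_VisualContext,
          sizeof(visualContext), &visualContext, 0 },
        { kQTPropertyClass_NewMovieProperty, kQTNewMoviePropertyID_Active,
          sizeof(active), &active, 0 },
    };
    return NewMovieFromProperties(sizeof(props) / sizeof(props[0]), props,
                                  0, NULL, outMovie);
}
```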
So the next piece is the visual context. What is this thing? It's an abstraction of the video rendering destination for QuickTime movies. It's a replacement for the GWorld. This is where we get our new rendering performance from: we can take advantage of the GPU, we can have multiple buffers going at once, and we're not restricted to certain movies.
So one of the fundamental bottlenecks of the old QuickDraw rendering model was that everything went through a single buffer. The decompressor would decode into a buffer, and then the hardware would start pulling out of that onto the screen. And the decompressor couldn't start writing into that until the hardware was finished reading from it. So there was just a fundamental bottleneck. So now we have multiple buffers. The decompressor can start decoding into a completely different buffer while the hardware is using the other one.
So we got rid of that restriction. And we're not restricted to movies that have single video tracks with single codecs that support the right pixel formats and aren't transformed or something. You can use any movie with a visual context. And because we've decoupled the decoding from the presentation, you get more asynchronous behavior. Like our video media handler will actually be decoding ahead of time from what you're actually displaying.
So how do you create this thing? Well, in "Tiger" we're shipping an OpenGL texture context. Now the visual context is actually an abstraction, an abstract base class in object-oriented terms. There could be many different kinds of visual contexts, but for now we're shipping the OpenGL texture context, and it gives you a stream of OpenGL textures for your video frames.
So you're going to need to set up OpenGL first. And we use the Core OpenGL (CGL) objects to create this thing: a CGL context and a CGL pixel format. Most of you probably don't use CGL directly, so you'll be using Cocoa or Carbon to do that. Cocoa developers can use NSOpenGLContext and get the underlying CGL context; same with Carbon using AGL.
But the Carbon thing is new in "Tiger." So there's a new function to get a CGL context from an AGL context. Same with pixel format. And here's a little example code creating the texture context. Now, the textures that come out of this thing can only be used with this CGL context that you pass into it, unless you create a shared context, which is a more detailed OpenGL topic.
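A sketch of that creation code for the Cocoa case (the Carbon path would instead get the CGL objects through the new Tiger AGL calls Sean mentions); error handling is trimmed:

```
#import <QuickTime/QuickTime.h>
#import <AppKit/AppKit.h>

// Build a QuickTime OpenGL texture context from the Cocoa GL objects.
// Textures produced by this context are only valid in (or shared with)
// the CGL context passed in here.
static QTVisualContextRef CreateTextureContext(NSOpenGLContext *glContext,
                                               NSOpenGLPixelFormat *pixelFormat)
{
    QTVisualContextRef textureContext = NULL;
    OSStatus err = QTOpenGLTextureContextCreate(
        kCFAllocatorDefault,
        (CGLContextObj)[glContext CGLContextObj],
        (CGLPixelFormatObj)[pixelFormat CGLPixelFormatObj],
        NULL,                           // no attributes for this simple case
        &textureContext);
    return (err == noErr) ? textureContext : NULL;
}
```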
So now you have these two pieces and you want to connect them together. So you call SetMovieVisualContext. This is a replacement for SetMovieGWorld, and it's how you direct a movie to use your visual context. And note, you can't have two movies playing into the same visual context at the same time.
So this call will fail if another movie is already connected. If you want to play multiple movies, you're going to have to do the compositing yourself. You have two separate texture contexts, you'll get the textures and use OpenGL to composite them. Now this can also fail if your hardware does not support the size of video, or it's just not sufficient for this movie.
More on that later. Okay, now calling SetMovieVisualContext with NULL is also a little special. Unlike SetMovieGWorld with NULL, this will actually tell your movie not to draw. SetMovieGWorld, when you pass NULL, would tell the movie to get the current GWorld, whatever that happened to be, just like the NewMovie calls. But this is actually going to tell it not to render.
And one important note: when you have a movie that's targeted at a visual context, and for some reason you need to target it at a GWorld, first go through a NULL visual context to make it stop using the visual context before switching. An important little detail here.
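In code, that attach/detach dance looks roughly like this (a sketch, with the movie, texture context, and GWorld assumed to come from elsewhere):

```
#include <QuickTime/QuickTime.h>

// Attach a movie to the texture context (the SetMovieGWorld replacement).
static OSStatus AttachMovie(Movie movie, QTVisualContextRef textureContext)
{
    return SetMovieVisualContext(movie, textureContext);
}

// If you ever need to send the movie back to a GWorld, detach it first.
static void RetargetMovieToGWorld(Movie movie, CGrafPtr gworld)
{
    SetMovieVisualContext(movie, NULL);    // stop using the visual context first
    SetMovieGWorld(movie, gworld, NULL);   // then retarget to the GWorld
}
```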
Okay, the ICM can drive a visual context as well, not just a movie. So you can use ICMDecompressionSessionCreateForVisualContext, and this gives you access to the visual context at a lower level than the Movie Toolbox. There'll be more on this ICM topic tomorrow at 3:30 in Next-Generation Video Formats for QuickTime.
So how do you get frames out of the texture context? First you ask whether a new image is available for a given timestamp, with QTVisualContextIsNewImageAvailable, and then QTVisualContextCopyImageForTime will pull out that texture and give you a copy of it for that same timestamp that you used in the previous function.
Note that this uses the Core Foundation retain and release semantics, so be sure to release those textures when you're done with them; otherwise you'll chew through your video card's memory in no time. The last one here is QTVisualContextTask. This is used for giving the texture context a little time to do some housekeeping; it's got to check all those textures and see if they're ready to be reused. And it's not a thread-safe thing, so more on thread safety later.
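Putting those calls together, a typical per-frame poll from your rendering callback might look like this (a sketch; the texture context, the held texture, and the output timestamp are assumed to come from the surrounding code):

```
#include <QuickTime/QuickTime.h>
#include <CoreVideo/CoreVideo.h>

// If a frame is ready for the upcoming display time, swap it in for the
// one we were holding, then give the context its housekeeping time.
static void UpdateCurrentFrame(QTVisualContextRef textureContext,
                               const CVTimeStamp *outputTime,
                               CVOpenGLTextureRef *currentTexture)
{
    if (QTVisualContextIsNewImageAvailable(textureContext, outputTime)) {
        CVOpenGLTextureRef newTexture = NULL;
        if (QTVisualContextCopyImageForTime(textureContext, kCFAllocatorDefault,
                                            outputTime, &newTexture) == noErr) {
            CVOpenGLTextureRelease(*currentTexture);   // done with the old frame
            *currentTexture = newTexture;              // retained for us
        }
    }
    QTVisualContextTask(textureContext);   // let it recycle finished textures
}
```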
With that, I hand it over to Ken to talk about those textures that you get out of that thing. Thanks, Sean. All right, so now we get into Core Video a little bit and give you guys a quick overview of what that's all about. You've seen this diagram a couple of times before; I'm mainly going to go over the Core Video timing and buffering aspects.
So, there are two main pieces to Core Video today. There's buffer management: we wanted to have a common data interchange format between QuickTime and OpenGL. That was always really tricky before. GWorlds were very non-optimal. You could get the bits out of a GWorld and into a texture, but it was kind of a pain in the butt.
And, if you didn't write the drivers, knowing how to do that correctly is kind of tricky. The other part of it is display synchronization. In this whole scheme, you want to know when to go and ask QuickTime, "When should I ask for the next frame?" And, I'll talk about that in a minute.
So, first a little more detail on the buffering model. All the buffer objects that you get out of QuickTime are Core Foundation-based objects, so, as Sean said, you use CFRetain and CFRelease on them. They're sort of an abstract base class, if you will, that defines a couple of interesting behaviors.
There's this concept of buffer attachments for metadata, things like timestamps might go there, color space information, that sort of thing. There aren't any ones really defined yet, but we expect that there will be at some point. There's also then a couple of concrete buffer types you'll run into in the Tiger timeframe.
There's CVPixelBuffer and CVOpenGLTexture. And again, in the diagram you can sort of see how that works. So, just real briefly on CVPixelBuffer: this is how the ICM internally decodes memory-based data. For a codec, it needs to be able to put things into main memory, not right into VRAM, so it'll use a CVPixelBuffer for that.
I guess that's all I had to say there, sorry. Oh, and this is the foundation for the new ICM APIs as well. And again, tomorrow at 3:30, Sam will talk about that a little bit more. But here's the one that's interesting for you guys: the CVOpenGLTexture object. This basically represents a high-level abstraction, a wrapper if you will, around an OpenGL texture.
And its job is to deal internally with the details of how to get texture data into OpenGL: if it's YUV, is it 2vuy, is it yuvs, is it RGBA, is it ABGR or ARGB, whatever. It knows about all of our custom extensions that we use in Quartz to avoid memory copies, that whole thing. The API is pretty straightforward. You can basically get back the texture target; these days, it's pretty much always going to be texture rectangle.
It'll give you back the size, what the texture ID is, that sort of thing. And, you can use OpenGL to query more additional internal information if you care. The one thing I want to point out is you should really ask us for the texture coordinates. So, if you're using either of these two calls here, if it's something like DV video, the texture might be 720 by 480, but you might really only want to use like a sub-region of that, because there's undefined regions and you don't want to display garbage to your users.
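For example, the queries Ken mentions look roughly like this (a sketch, with the texture being the CVOpenGLTextureRef copied out of the visual context):

```
#include <OpenGL/gl.h>
#include <CoreVideo/CoreVideo.h>

// Fetch the target, the texture name, and the "clean" coordinates that
// exclude undefined edges (e.g. for DV's 720x480 frames), then bind it.
static void InspectVideoTexture(CVOpenGLTextureRef texture)
{
    GLenum target = CVOpenGLTextureGetTarget(texture);   // usually texture rectangle
    GLuint name   = CVOpenGLTextureGetName(texture);

    GLfloat lowerLeft[2], lowerRight[2], upperRight[2], upperLeft[2];
    CVOpenGLTextureGetCleanTexCoords(texture, lowerLeft, lowerRight,
                                     upperRight, upperLeft);

    glEnable(target);
    glBindTexture(target, name);   // texture your quad with the clean coordinates
}
```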
So, the next part of Core Video is the Display Link, and this is sort of responsible for driving the timing into the entire system. So, overall, with this new visual context thing, it's kind of a pull model, sort of like Core Audio. The idea is that every now and then you basically go and ask QuickTime, "Hey, is there a new frame available for some upcoming display time?" And, Display Link's job is to basically tell you when you should go and make that call.
It provides timestamp information as to when the next sort of vertical blanking period on the monitor is going to happen for whatever display you're on. Contrary to popular belief, this does apply to flat panels as well. They really do have VBL sort of timing idea, even if they don't really have a blanking period.
That timestamp that it will give you is a little data structure that's actually required by the QuickTime visual context, so it knows when the next display time is going to be. The interesting thing is that it's not just a display time. It's not like the time right now when the callback is made. We're actually trying to estimate when the next display period on your display is, so it's a little bit forward in time.
But that gives you time to do the OpenGL rendering, set up calls, Core Image, whatever you want to do before the display happens. There are separate render and display callbacks that sort of happen one right after the other. You can use that however you'd like. We might define some more behaviors there in the future, but it's pretty straightforward.
And all of these callbacks happen on a dedicated high-priority thread. It's kind of like Core Audio; we're not so high that we'll bump off the window server or Core Audio or screw anything like that up, so don't worry there. It's similar to the high-priority thread that we use in the DVD Player. And you can create one of these things from a CGDirectDisplayID, or multiple ones, and there are calls to switch back and forth if you move from one display to the other.
So just to give you a sort of graphical idea of what goes on here, the thread that's running basically picks up information from I/O Kit that says, hey, here's when the last VBL happened. Here's sort of the time span from the last one. So we can sort of guesstimate or estimate when the next one is going to happen and then feed that timestamp to you.
That triggers the callbacks, and then you can basically go all the way up through your custom pipeline, ask QuickTime, hey, what's the next frame, and then start pulling it back through all the other custom effects. With that, we'll bring Sean back up and he'll give you a demo of how this stuff works.
All right. So, now that you've seen all the pieces for the pipeline that are required, visual context, the buffers, the display link, I'll show you a little app that uses it. Some of you may have seen this in an earlier session in the week. This is using the display link to get its rendering callbacks, and is pulling the textures out of the visual context, you know, like we talked about, and rendering them with OpenGL.
So, since we're using OpenGL, it's pretty easy to do this kind of thing: move the texture around, rotate it around, and so forth. So, we thought, well, why render one texture when you can hold on to them? Because we have multiple buffers now, you don't have to use a buffer and then get rid of it; you can just hold on to it. So, we have an array of these frames, and so forth. "That is the way I love Helen." So, you can see, we're rendering many, many, many video frames every single pass.
So we can see here, here's the dimensions of this movie playing at full rate. And we're rendering 46 frames every single time. We're going to run that. Remember, these are high-definition frames. So you can see, once you have the video up on the graphics card, you can do a lot with it. And look at all the-- a lot of horsepower there. "I've asked you to fight my war. You already have." "The loser will burn before night falls." "Immortality! Take it! It's yours!" There's that.
So that was showing the SetMovieVisualContext API that we were using. We also added capabilities to use the ICM APIs. Hey. So this is using a sequence grabber and extracting the frames off the DV camera, feeding them directly into the ICM, which has been connected to the visual context.
So, here we can ignore the zero zero here. A few more. So look at that. Almost 300 frames here. This will be sample code. It's not on the DVD. It's not available yet, but in a week or two we'll have it to you. So, there's that. Alright, so, there you go.
Back to the slides. Okay, so now for Frank. He's going to show you how to use OpenGL to do more interesting effects. Thank you, Sean. Good afternoon. I will talk a little bit more now about what benefits we can add to this new pipeline by customizing it. And for that, we use OpenGL processing.
You've seen that diagram; I hope everybody has it in mind by now. I will talk a little bit more about the gray area here: what can we do with OpenGL when we use our new video pipeline? So, the first added benefit that we bring in by using OpenGL is blending. You can compose things on top of each other, and you can use the blending effects that you have in OpenGL. I will show that a little later in the demo.
And, the other part is geometry. So, if you think of like putting this into like some games or some scenes that you want to play the stuff, and you've seen it now nicely in Sean's demo, like how he can twist things around and do all the funky kind of stuff with it.
So, as an overview, what is this really about? For the first part, I want to take away the fear a little bit: you don't have to be an OpenGL guru to use our stuff. As Ken pointed out earlier, we take a lot of the pain away from you by handling all the hard stuff like texture management for you. So, this is really, really easy for you to use. The other really important thing to keep in mind is that we free up the CPU, so you can do additional processing on the graphics card.
The CPU is now more free to decompress video, so you have fewer frame drops and you can use more streams at the same time. On the other hand, as we've seen with the resizing, your UI is more fluid; you have better feedback because the CPU has more cycles to spend on your user interaction. So, that makes the live resizing and zooming really easy for you to do.
So for those who are new to OpenGL, let me go a little bit into the terminology; here's the five-second overview of this part. OpenGL normally draws primitives, like rectangles, and that's the basic foundation it draws with. What we do with images is what we call a texture (you've heard this terminology a couple of times already), and we essentially skin those drawing primitives with it; that's how they end up on the screen. You've probably missed all the OpenGL talks by now, but there is a lab session later on where you can get your fingers on what OpenGL is all about.
As Sean already pointed out, thread safety is an important thing. Since OpenGL is not reentrant, we have to make sure that we work thread-safely. For that, we can use pthread mutex locks, or an NSLock in some parts, or we can use a shared context for multiple threads. And Core Video, as we've already seen, uses a separate thread, so you will run into the thread-safety issues that you'd normally run into.
You have to use these locks also when you use our new API, the QTOpenGL texture context. But there's at least one call, the "is new texture available" one, that is thread-safe; you can do that outside the locking. And for those of you who want to use AppKit, I want to point out that you have to override the update call and wrap it with a lock, because AppKit will do some OpenGL calls of its own, and otherwise you will run into thread-safety issues.
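A minimal sketch of that AppKit point, assuming a pthread mutex that the display-link thread also takes around its own GL work:

```
#import <Cocoa/Cocoa.h>
#import <pthread.h>

// NSOpenGLView subclass whose context is also used from the display-link
// thread. AppKit calls -update (e.g. when the window moves) and touches
// the GL context, so wrap it with the same lock the render thread uses.
@interface VideoGLView : NSOpenGLView {
    pthread_mutex_t _glLock;   // also taken around rendering on the CV thread
}
@end

@implementation VideoGLView

- (id)initWithFrame:(NSRect)frame pixelFormat:(NSOpenGLPixelFormat *)format
{
    if ((self = [super initWithFrame:frame pixelFormat:format]))
        pthread_mutex_init(&_glLock, NULL);
    return self;
}

- (void)update
{
    pthread_mutex_lock(&_glLock);
    [super update];
    pthread_mutex_unlock(&_glLock);
}

@end
```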
So, getting a little deeper now, how does the whole thing work? You've seen all the pieces, and I want to show you in a quick overview how we really get the whole thing to the screen. In the first part, I'm setting up a display link: you can see I create one, I set up my two callbacks for the render and the display part, and after that, all I have to do is start it. Now I have my timing service running, and the two callbacks will keep getting called from our display link.
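Frank's seed code uses separate render and display callbacks; with the single output-callback form in the Core Video headers (CVDisplayLinkSetOutputCallback), the setup he describes looks roughly like this. MyAppState and RenderFrameForTime are placeholders for your own code:

```
#include <CoreVideo/CoreVideo.h>
#include <ApplicationServices/ApplicationServices.h>

typedef struct MyAppState MyAppState;   // your per-view state (placeholder)
void RenderFrameForTime(MyAppState *state, const CVTimeStamp *outputTime);

static CVReturn MyOutputCallback(CVDisplayLinkRef displayLink,
                                 const CVTimeStamp *inNow,
                                 const CVTimeStamp *inOutputTime,
                                 CVOptionFlags flagsIn,
                                 CVOptionFlags *flagsOut,
                                 void *userInfo)
{
    // Runs on the display link's high-priority thread, slightly ahead of
    // each refresh; this is where you poll the visual context and draw.
    RenderFrameForTime((MyAppState *)userInfo, inOutputTime);
    return kCVReturnSuccess;
}

static CVDisplayLinkRef StartDisplayLink(CGDirectDisplayID display, MyAppState *state)
{
    CVDisplayLinkRef link = NULL;
    CVDisplayLinkCreateWithCGDisplay(display, &link);          // one link per display
    CVDisplayLinkSetOutputCallback(link, MyOutputCallback, state);
    CVDisplayLinkStart(link);                                  // timing service running
    return link;
}
```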
On the rendering part, we see the pieces that come together. So this is my render callback, and all I'm doing here is check if there's a new texture available. If yes, then I can throw away my old one; I don't need to keep it around.
I don't need the fancy effect that Sean has shown. So I'll just throw the old texture away, get the new one, and then be ready in the next step to bring it up to the screen. So how do we bring it to the screen? It's a little bit more detailed here.
The first part: I have to make sure that I clear out whatever was on the screen. I'm just giving you an overview here; there are details that you might have to look into, depending on what you are doing. The second part is that I bind the texture. What that means is that OpenGL now knows this is the texture I want to draw with, the one I'm skinning my whole rectangle with.
And then I draw a rectangle. This is simply a quad, as you can see, with the coordinates, and I'm mapping the texture coordinates to my rectangle, and that's all I need. And the last step, et voilà, I flush it to the screen and we'll see the image on the screen. That's how simple it is for you to draw; the sketch below puts these steps together. And now I will show you the whole thing in a demo. Please, demo application. Okay.
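A sketch of that clear-bind-draw-flush sequence (assuming the GL context is already current, the texture was just copied out of the visual context, and coordinates are normalized to a simple -1 to 1 viewport):

```
#include <OpenGL/gl.h>
#include <CoreVideo/CoreVideo.h>

static void DrawVideoQuad(CVOpenGLTextureRef texture)
{
    // 1. Clear whatever was on the screen.
    glClear(GL_COLOR_BUFFER_BIT);

    // 2. Bind the texture so OpenGL knows what to skin the quad with.
    GLenum target = CVOpenGLTextureGetTarget(texture);
    glEnable(target);
    glBindTexture(target, CVOpenGLTextureGetName(texture));

    // Use the clean coordinates so undefined edges never show.
    GLfloat ll[2], lr[2], ur[2], ul[2];
    CVOpenGLTextureGetCleanTexCoords(texture, ll, lr, ur, ul);

    // 3. Draw one quad, mapping the texture coordinates onto it.
    glBegin(GL_QUADS);
        glTexCoord2fv(ll); glVertex2f(-1.0f, -1.0f);
        glTexCoord2fv(lr); glVertex2f( 1.0f, -1.0f);
        glTexCoord2fv(ur); glVertex2f( 1.0f,  1.0f);
        glTexCoord2fv(ul); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(target);

    // 4. Et voila: flush to the screen.
    glFlush();
}
```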
So this is a little sample application that I wrote. I call it Live Video Mixer. What I will bring in now is just three video files, of a little pool billiard game that we had, which we shot with three cameras at the same time. So I'm trying to imitate a studio here. What I can do now is play the movies, and I have the different camera angles at the bottom here. And since OpenGL allows me to do compositing, I can superimpose the close-up of that shot.
And say, well, actually, let me see: I want to see the other camera angle. See? Yeah, you see him struggling with this part a little bit. And I can do this fluidly; you see it's no problem to run this on any kind of CPU. And I can do funny stuff with what's called multi-texturing: I can use masks and put this video in some funky shapes.
I can use this channel, and I can actually say, well, now they're lying on top of each other. OK, let me take this one and move it up into this corner. You see how nicely and fluidly this runs while the movie is playing back. And I'm really playing back three streams here. I have a semi-transparent shape here, and let me position it into this corner.
And even for the background part, I can do this all the time and play this up and down. And with that, we would like to go back to the slides. Thank you. So to quickly summarize what we've seen here: I've used the display link to get precise timing, and that helped a lot. I've done these kinds of applications before, and I can tell you it used to take tons of code; here it's really just a few lines.
And then for the compositing I use GL blending. I'm just throwing a little bit of terminology at you so that you can find it later on in the books and see what it's all about. I showed you how to do masking by using multi-texturing, which makes this really easy; it's been a pain to do that before. For the resizing part, we can do it simply with the GL viewport.
I normalize the coordinates so that makes it very easy for me to work in different coordinate spaces. And with these little ingredients, I can create a really easy application that shows a little bit more fun in this video. And with that part, I will take the stage back to Ken.
Thanks Frank. Alright, so this is sort of my favorite part of this. So, yeah you can do all this fun OpenGL stuff, but you know earlier in this week there seemed to be a lot of interest in doing all this cool effects processing. So, how are we going to get this into this whole new pipeline? So, integrating Core Image is actually very straightforward.
I'm just going to briefly cover a little bit of the Core Image API here in case you guys missed the session. In this case, you just create a CIContext with your OpenGL context and pixel format. And then, once you've got that CVOpenGLTexture object out of QuickTime's visual context, you basically need to create a CIImage to represent it.
Core Image has a very nice API for creating a CIImage out of an arbitrary OpenGL texture. So, in this case, I basically fetch out the texture's name, its size, and whether it's flipped (that has to do with whether the origin is the upper or lower left-hand corner; most of the stuff coming out of QuickTime will be flipped), and then create the CIImage with all those parameters.
So, once you've got that, you can run it through a CIFilter just like any other CIImage. You basically set the CIImage that you created as the input image to the filter, and then you can pull the result right back out again as another CIImage.
Then you basically use Core Image like you would anywhere else, and you call, in this case, drawImage:atPoint:fromRect:, I think that's the method name on that one. And again, you'll note that I'm using the clean rect in this case to make sure that I'm only rendering the sub-region that's defined by the original CVOpenGLTexture.
So, a couple of notes on this. First, CIImages are immutable, so every time you get a new frame out of QuickTime, you're going to have to create a new CIImage for it. There's a little bit of trickiness with that, though. Both the CIImage and the CVOpenGLTexture object will have a reference, if you will (not in the CFRetain or Objective-C reference sense), to the underlying OpenGL texture object. So, they sort of need to come and go together.
So, if you're going to keep your CIImage around, make sure you keep the underlying texture that you've gotten from QuickTime around at the same time. One way you might simplify that for yourself, if you're a Cocoa programmer, is to subclass CIImage to do the CFRetain for you; then, whenever you release the CIImage, it can let the texture go with it.
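Put together, Ken's Core Image path looks roughly like this (a sketch; the CIContext would come from contextWithCGLContext:pixelFormat:options: as described above, and the filter and its keys are just examples):

```
#import <QuartzCore/QuartzCore.h>
#import <CoreVideo/CoreVideo.h>

// Wrap the texture from the visual context in a CIImage, run it through a
// filter, and draw only the clean sub-region. The CIImage borrows the GL
// texture, so keep the CVOpenGLTextureRef alive as long as the CIImage is.
static void DrawFilteredFrame(CIContext *ciContext, CIFilter *filter,
                              CVOpenGLTextureRef texture)
{
    CGSize textureSize = CVImageBufferGetEncodedSize(texture);  // full texture size
    CGRect cleanRect   = CVImageBufferGetCleanRect(texture);    // valid sub-region

    CIImage *frame = [CIImage imageWithTexture:CVOpenGLTextureGetName(texture)
                                          size:textureSize
                                       flipped:CVOpenGLTextureIsFlipped(texture)
                                    colorSpace:NULL];

    [filter setValue:frame forKey:@"inputImage"];
    CIImage *result = [filter valueForKey:@"outputImage"];

    // Render just the clean rect so no undefined edges show up.
    [ciContext drawImage:result atPoint:CGPointMake(0.0f, 0.0f) fromRect:cleanRect];
}
```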
So, we'll do a little demo of this. I was trying to think of interesting demo ideas, and an idea we tossed around a month ago was to do some underwater video color correction. So, I happened to be on vacation a couple weeks ago and shot some underwater video.
And this is like a couple of sample images that you can see out of it. These were taken, the first image on your left is at about, I think it's like 17 feet or so, and the one on the right is down around 50 feet or so. And you can see that there's a color shift there.
More and more of the red disappears as you go deeper. The other interesting thing is that I need some way to calibrate that: what depth am I at? How much correction am I going to do? So, for that... where is it? Here. Well, the red... oops. Go back to the slides real quick.
Slides please. Thanks. The color matrix is a good match for doing the color correction, but I still have to get the depth information, so here's my trusty little dive computer. The neat thing about it is that every 10 seconds it records what depth I'm at, and with a little serial cable I can pull that information back out and get a dive profile. Right. And I'll use OpenGL for a little gratuitous heads-up display in the demo. So, go to demo one please.
Alright, so I have this demo lovingly called "Coral Video." I think it was Tim's idea, so you can blame him. So what this shows is this is like a little video clip that I shot one of the days. I start out somewhat shallow, again about 20 feet or so, and just sort of swim along the coral, and then end up down here on the side a little bit.
So, you know, I can play that. You can hear me breathe. Everybody thought I should leave that in there, but I don't think it's that interesting. Anyway, all right, enough of that. So... So, down here on the right I've got sort of dive profile. Here's where I basically got in the water and then swam around a bit, up and down, all over the place.
And then right here, I put a couple of little bookmarks in the dive computer to show where I was going to shoot the video clip so I could find it again. So, I kind of need to do a little bit of a manual calibration step here and say, "Well, okay, this is about the start point of the clip." The dive computer's bookmarks are only accurate to 10 seconds, so there's a fudge factor in here for sure.
So, now that I've set the current time, you can see as I move around, the depth sort of in the dive profile display matches where I am on the clip. So, I can go to the beginning here and I have all these little color correction controls I can use.
[Transcript missing]
Back to slides, please. All right. So the summary on that is, as I've shown, it's pretty easy to get this stuff into Core Image once you've got it out of QuickTime. All I was basically doing was feeding the video and the color correction data into Core Image. Really, really straightforward.
And I used OpenGL in this case to do an additional heads-up display. Actually, there's one thing I didn't mention before, and it'll be in the sample code: I wrote a cheesy little deinterlace filter in Core Image as well that's in that app. Just a little side note I forgot to mention earlier. So, a couple of caveats with this whole thing, obviously. Here's the bad news.
So there are limitations to using all this new pipeline stuff. To use the visual context at all, you basically need to be on Quartz Extreme-class hardware, and that's mainly because, again, we need texture rectangle; very little video is 256 by 256 or whatever. There are also drawable and size limitations. If you're on an older piece of hardware, it might not have enough memory or might not be able to support a texture resolution as big as the video you're trying to pump through it.
So that's something else to watch out for. For doing the Core Image stuff, again, as has been shown earlier this week, you basically need a Radeon 9600 or higher or an NVIDIA GeForce FX or higher. One thing you can do, though, is if the video coming out of QuickTime is going to be too big for your VRAM,
[Transcript missing]
Wrong way.
I want to see my name again. So we basically built this. This is the architectural stack in Tiger, and we built this pipeline, which you've seen in this diagram. The important thing is that you can use the movie views for high-level access where you don't need to get into the details. But if you want to customize your application, if you want to make a special application, you can use the full pipeline to distinguish your app from another one and take advantage of OpenGL and Core Image.
Use the seed. Everything you've seen here is basically working in the seed. A couple of notes. The HIMovieView that's shipping, the one the Finder's using, the one that you can use, is functional, but it doesn't actually use the visual context yet. We had a couple of integration issues that we still needed to work through; we'll do that for Tiger.
You'll find that some movies, maybe if you rotate a movie using QuickTime, might not play exactly right, so that's an issue. And the call that you need to get the CGL pixel format returns null, and we need that pixel format, so that's currently not working. That's a little note.
So, for more information: you can get documentation on the visual context from the QuickTime documentation in the Tiger docs on your CD, and you'll be able to download the sample applications that you've seen. Some of it's already up there today in the 2.15 package, and some will be updated as the weeks go on. For the hands-on lab, which is basically the graphics and media lab in the back, tomorrow morning there's going to be a bunch of people from the GL team who can help you, now that you all want to know how to do GL.
And then, in the early afternoon, there are going to be people from the Core Video, Core Image, and QuickTime teams who've been doing the visual context. And at the end of the day, after the session on the new ICM APIs and IPB coding technologies, there'll be more people from QuickTime.
Upcoming sessions for you, if you stay in this room or come back into this room: you're going to hear the update on audio and more information about audio capture as well, sequence grabber changes, and also tomorrow afternoon, again, next-generation video formats. We'll be talking about H.264 and changes to the Movie Toolbox and the ICM to support IPB-coded video. If you're interested in being seeded with QuickTime as we start seeding for the next version, send your name, company, product, and technology interest to [email protected] at apple.com.