WWDC04 • Session 204

Graphics and Media State of the Union

Graphics • 1:09:56

Mac OS X contains an industry-leading array of 2D, 3D and multimedia technologies that will make your application excel. This session provides in-depth information on Mac OS X's graphics and audio architecture and provides the latest information on Quartz 2D, Quartz Extreme, OpenGL, and QuickTime. This session is the perfect kickoff for developers viewing sessions in the Graphics and Media track.

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Ladies and gentlemen, please welcome Vice President, Interactive Media Group, Tim Schaaf.

[Transcript missing]

So, as we've mentioned earlier today, there's a lot more happening inside that box than just what's going on in the microprocessor. The chips that we ship in our current systems, the Radeon systems, are already up at about 115 million transistors. This is about double the complexity of the microprocessor.

The system that Steve introduced this morning, the new NVIDIA GeForce 6800 Ultra processors, have 222 million transistors. Four times the complexity of the G5. That's an incredible opportunity for all of us to be able to enhance the functionality of our applications. But unfortunately, it doesn't happen automatically. And the question we're going to try to address today is how to unlock this stuff.

There are many, many obstacles. So, in 1984, this was the statement that Apple made about the importance of rich graphics interface, WYSIWYG, and interactive UI. This was really a dramatic statement. This brought into the mainstream concepts which had not been seen in a personal computer before. Spin forward 20 years to this year, Apple's introduced this product called Motion. And I think it's a very good example of the way the expectations have evolved over the 20 years.

Still rich graphics, still essentially a WYSIWYG kind of an experience. Very much an interactive UI. But obviously, the kind of software required to build this interface is incredibly more complicated than the software required to build the app of 1984. Let's look at the operating systems that you have to learn and understand. Apple has, you know, roughly speaking, every single year, we're introducing another thousand man years worth of software. Every single year.

And you can see this in this sort of very simplistic view of the size of the disks that we have to ship to you in order to deliver the operating system. We've moved to a DVD this year because, yes, it's growing again. How are you going to manage all of those toolboxes? It's just so much stuff to learn.

As we introduce more and more systems with more and more diversity of hardware with fancier and fancier architectures, the question becomes, how the heck do you get all this performance out of the box? It used to be that you could sit down and you'd learn the instruction set. You'd write some programs and you'd learn what was fast and then you'd write that in your code. And you'd have the fastest app on the planet. It doesn't work like that anymore. It's really, really complex.

And the performance decisions that you would make for a G3 will be different, very different than what you're going to do on a G5. How can you do that automatically? This is not a puzzle you probably want to spend all of your time trying to solve. When we sat down to think about what would be most profound in our contract, in our contribution to Tiger, in the graphics and media team that I run, we wanted to look at these issues and say, how can we unlock this power for you, protect your software investment, and give you tools to build beautiful UI? And this was our goal.

So we want to build the technologies that are going to allow you to build the next killer app. Because we believe, as cool as these demos that you've seen today are, we believe that you folks are the ones who are going to have the really, really exciting ideas. I mean, it's neat to see Core Image processing these images.

And we're going to show you some more of it today. But we are absolutely positive that your inventions are going to blow us away, amaze and delight your customers, and just baffle the world. How do they do that? So here's the architecture of the graphics and media layer in Panther. Basically, very familiar block diagram here.

We've got Quartz 2D, we've got QuickTime, we've got OpenGL, all layered on top of the graphics hardware. This has been a very, very powerful architecture for Apple. There's only one problem here. You could look at this another way as three stove-piped technology stacks. As many of you will know, if you start to try to combine these technologies, drawing the lines from left to right, right to left, more horizontally, you start to run into some real challenges. So in Tiger, one of the most important changes we are making is we're kind of turning this diagram on its side in order to give a much better, much more powerfully layered system for you to work with.

Now, the APIs aren't all different in Tiger. Your applications are still going to be compatible. But we're working down in the lower layers to reorient this architecture in order to maximize the kind of performance and the kind of data interoperability that we'll be able to provide through the APIs so that your apps will be able to combine all these technologies and still have a high performance result.

I've broken this up in a little bit more detail. You see the Quartz stack, you see the Core Audio stack. They've both got their hardware abstraction layer conceptually. And so what I want to do for the rest of this talk is we're going to dive in and look at what's going on in each of the individual technology areas and tell you some of the highlights of what's new and hopefully plant some seeds for how you might be able to take advantage of these things. Our first area is Core Audio. As we've expressed earlier today, this is a very profound technology for processing of audio. This is the most advanced audio subsystem built into any operating system anywhere.

When you look at other operating systems, if you want to achieve the kind of performance that we're able to achieve inside of Mac OS X, you always end up having to build special things into the OS. You have to add special extensions in order to achieve the kind of performance that you get with Mac OS X out of the box.

What does that look like? Well, it's the ultra-low latency that allows you to do demos like the guitar demo you saw earlier this morning. It's playing live into the system. It's processing the audio in real time, sending it back out the audio outputs. And it feels like he's playing it live as if it were a piece of hardware in a music studio rack.

We've got HiDef. This was the first media processing stack to build in a full floating point processing pipeline. And it's worked tremendously well for us. Not only does it allow us to support current state-of-the-art processing of 24-bit samples, we can go way beyond that as the industry evolves. We support a very, very wide range of sample rates and there are no constraints on the channels.

We have beautiful, robust plug-and-play compatibility across a wide range of connectivity protocols. We've got native, built-in MIDI support, very low latency just like the audio support. And then we have this very important architecture for extending the system. One of the most important parts of an audio subsystem is the set of customized audio plug-ins that you can bring to bear to give your audio a unique kind of a sound or a unique kind of processing.
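To make the render model behind that low latency concrete, here is a minimal sketch, not from the session, of the pattern Core Audio applications use: a render callback that feeds 32-bit float samples to the default output unit, opened with the Component Manager-style calls of the Tiger era. The sine tone and helper names are purely illustrative.

```objc
// Illustrative sketch (not from the session): feed a sine tone to the default
// output unit via a render callback, the low-latency pattern Core Audio uses.
// Uses the Component Manager-style setup calls of the Tiger era.
#include <CoreServices/CoreServices.h>
#include <AudioUnit/AudioUnit.h>
#include <math.h>

static double gPhase = 0.0;

static OSStatus RenderSineWave(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    double phaseStep = 2.0 * M_PI * 440.0 / 44100.0;   // 440 Hz tone, illustrative
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        Float32 *out = (Float32 *)ioData->mBuffers[b].mData;
        double phase = gPhase;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            out[i] = (Float32)(0.25 * sin(phase));      // same tone on each channel
            phase += phaseStep;
        }
    }
    gPhase += phaseStep * inNumberFrames;
    return noErr;
}

static void StartTone(void)
{
    // Find and open the default output audio unit.
    ComponentDescription desc = { 0 };
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioUnit unit;
    Component comp = FindNextComponent(NULL, &desc);
    OpenAComponent(comp, &unit);

    // Hook the render callback onto the unit's input and start pulling audio.
    AURenderCallbackStruct cb = { RenderSineWave, NULL };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));
    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
}
```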

Now, we've had a tremendous amount of success with our audio units and this is a small sampling of the developers who have gotten on board with audio units. Now, of course, the universe of developers who are using core audio is much, much larger. But here's a really important collection of developers who are focused on audio unit development. This success has brought some very interesting new challenges and some new opportunities.

As the community of developers who are centered around core audio continues to grow, one of the things that we've been very concerned to be able to provide is a very high level of compatibility between all these different kinds of plug-ins. And so earlier this year, we introduced a program and a piece of software that would help audio unit developers ensure that their audio unit plug-ins were going to be robust, compatible as they went from one app to the next and to the next.

We're introducing today a new tool called AU Lab, a part of the developer tool set, that's going to take this one step further, and I'd like to give you a chance to take a look at that in just a second.

And what this is going to allow you to do is it's going to help--help the audio unit developer develop an even more robust audio unit that can be plugged into the ever increasingly complex and sophisticated range of audio applications. So why don't we come over and take a quick look at this.

So we have this little application called AU Lab. And basically, the model of AU Lab is it's a mixer. And the first thing we're going to do is we're going to come in here and we're going to add a generator. And we've built into this tool a generator that's capable of bringing in audio from a file. Now, of course, it can also process audio in real time, coming in through the different kinds of inputs that you can support.

So the first thing we're going to do is we're going to open up the little configuration panel for this, bring in an audio file. And we can play it. It's just like an audio playback system. Great. Now, what you can do is you can bring in your audio unit plug-in and load it into this little mixer, and it's now been inserted into the processing chain.

And what we've got here is we've got a new audio unit that we're providing in Tiger called TimePitch. And what it has the ability to do, it's a very sophisticated DSP algorithm that allows you to alter the timing of a piece of audio without altering the pitch, or you can alter the pitch without altering the timing. And I'll give you a very quick example of what this might be useful for. So this is a piece of music by a couple of guitar players, John McLaughlin and Al Di Meola, two of the fastest guitar players I've ever seen in the universe.

And what you encounter with this kind of guitar playing is that they start playing really, really fast. My son is a guitar player and he's always asking, "How can I learn this music? It's going too fast." This little plug-in basically allows you to take these files and dramatically slow them down without altering the pitch. You can go faster.

And again, it sounds just like the original audio. So there's a very simple demo of how AU Lab can be used to validate and test out your audio units. And we think it's going to be a very important part of our overall strategy for ensuring broad adoption of audio units and being able to help facilitate a very powerful community in the audio world relying on Core Audio.

So here's another situation that's coming up more and more as these Core Audio applications are being enhanced. Typical kind of a studio setup, you've got multiple audio devices. Today in Core Audio, we can handle these devices just fine, but when it comes to managing those devices, all the burden of managing which channels are going where and the inter-device synchronization is on the application. We're going to be introducing a new technology that allows you to combine these devices into one logical unit, and we think it's going to make the process of developing applications for sophisticated studio setups much, much easier.

Last of all, I want to talk about OpenAL. OpenAL is an industry-standard API for managing spatialized audio. It's used all over the place in games. It's a technology developed by Creative Labs, and we have been working with them and other developers to build a highly optimized version of OpenAL. It's capable of supporting not only super high quality spatialized audio processing, but it's also got a variety of lower-complexity modes so you can get lots and lots of channels.
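As a small, hedged illustration of what spatialized audio looks like through OpenAL, here is a sketch that opens the default device, uploads a mono buffer, and positions one source relative to the listener; the sample data and positions are placeholders.

```objc
// Illustrative OpenAL sketch (not from the session): one listener, one
// positioned source. Assumes `samples` holds 16-bit mono PCM you loaded.
#include <OpenAL/al.h>
#include <OpenAL/alc.h>

void PlayPositionedSound(const short *samples, int byteCount, int sampleRate)
{
    // Open the default output device and make a context current.
    ALCdevice  *device  = alcOpenDevice(NULL);
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    // Upload the PCM data into an OpenAL buffer.
    ALuint buffer, source;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, samples, byteCount, sampleRate);

    // Create a source, attach the buffer, and place it to the listener's right.
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, buffer);
    alSource3f(source, AL_POSITION, 2.0f, 0.0f, 0.0f);

    // The listener sits at the origin; OpenAL spatializes the mix for us.
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
    alSourcePlay(source);
}
```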

We're going to build this into Tiger, and we think it's going to be a great addition to the OpenGL technology for a complete solution for game development. So this is the story about what's happening with Quartz Audio. What I want to do next is I want to ask Peter Graffagnino, our Quartz extremist, to come over and tell you about everything that's going on in the Quartz world. And it's a very large world.

I guess I can go to the podium. Oh, there we are. Okay. So I'm going to walk you through the technologies in the graphics stack. I'm going to basically walk you up the stack from the hardware, talk a little bit more about that, then OpenGL, and then Quartz. And then Tim's going to come back and talk about QuickTime.

So on the hardware side, you've seen this graph many times. This is pixels processed per second. And if you look at also memory fill rate, the current generation of graphics chips are about 35 gigabytes per second of memory bandwidth, which is pretty incredible. And as you've heard a couple of times already, this is kind of an inflection point in computer graphics. We now have programmability at the pixel level, it's a floating point, it's accurate enough for high-end work and high dynamic range, and enables a lot of applications beyond just the traditional gaming stuff.

So we're seeing a convergence of graphics and media processing together in the GPU. And there are some new programming paradigms. You heard in Bertrand's and my talk about the stream computing model that we're using for Core Image. And Tiger will advance the state of the art here and bring a lot of ways to bring the power of the GPU into your applications.

So there's a bunch of things you can do in Tiger as far as GPU programming. There's low-level access via OpenGL. There's accelerated 2D graphics with Core Graphics, which we'll talk about in a second. We have accelerated image processing with Core Image. We have Core Video, which we were also talking about. Basically, we've got a bunch of ways to leverage the power of the graphics processor.

And the trick for your apps is to try to use the highest level of abstraction you can, which allows us to do more of the heavy lifting. There's nothing wrong with going down to the OpenGL layer if you need to do that or want to do that. But we've got these higher-level services as well, which you can take advantage of and let the platform do the rest. So the Tiger Quartz layer is optimized to take advantage of programmable GPUs. Now, it's not necessarily required to run Tiger, obviously, on a programmable part, but it's optimized in that way.

And by programmable GPU, what do I mean? I mean on the ATI product line, any part that's an ATI Radeon 9600 or higher, or on the NVIDIA side, the GeForce FX or higher, or the new card that we just announced today. By technology, I mean ARB Fragment Program, if you're an OpenGL programmer, or on the Windows side, you sometimes hear it called DirectX 9 capable hardware. That's the basic class of hardware.

Now, interestingly enough, if you look at the GPU versus the CPU, the GPU is not necessarily always faster. So it's something to keep in mind that you really need to treat it as a kind of a co-processing environment. The high-end CPU, if you put dual gigahertz, 2.5 gigahertz G5s in a computer with a low-end but programmable graphics part, the CPUs will be able to easily beat the low-end GPUs.

So you always have to be aware of the trade-off and really view the GPU as kind of a co-processor and also understand if you're flexible about using GPU or CPU that your app will scale to a much greater level with the GPU scaling factors that we're seeing that are exceeding Moore's Law. So that's all I'm going to say about hardware. Just to motivate you guys to learn more about what's going on there, talk a little bit about OpenGL.

OpenGL is the foundation and sort of the hardware abstraction layer for our graphics hardware, as Tim mentioned. And there's a bunch of things going on in Tiger. There's the OpenGL shading language, which we'll be supporting. There's floating-point pixel support. Some of this made its way into Panther. There's major enhancements to the OpenGL profiler tool, which is a really popular developer tool for analyzing your OpenGL performance.

New resource management improvements. We're probably on our third or fourth generation of kind of treating the GPU as a full-fledged resource to be managed within the kernel. There's a lot of new stuff to support many of the things you've seen today and will see throughout the conference. And I did want to mention one session that was not in the show guide here, which is the introduction to OpenGL shading language, which is going to be on Friday. So be sure to check that out.
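As a taste of the shading language support just mentioned, here is an illustrative sketch that compiles a trivial GLSL fragment shader through the ARB extension entry points available in this time frame; the flat orange tint is just an example, and a current OpenGL context on ARB_fragment_shader-capable hardware is assumed.

```objc
// Illustrative sketch: compile and use a trivial GLSL fragment shader via the
// ARB_shader_objects entry points of this era. Assumes a current OpenGL
// context on hardware that supports ARB_fragment_shader.
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

static const GLcharARB *kTintShader =
    "void main() {"
    "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);"   // flat orange tint
    "}";

GLhandleARB InstallTintShader(void)
{
    GLhandleARB shader = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(shader, 1, &kTintShader, NULL);
    glCompileShaderARB(shader);

    GLhandleARB program = glCreateProgramObjectARB();
    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);

    // Subsequent drawing runs this fragment program on the GPU.
    glUseProgramObjectARB(program);
    return program;
}
```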

That's OpenGL. Let's move up to the Quartz layer. This year, we're dividing the Quartz layer into three things. There's Core Graphics, of course, which is our 2D graphics and windowing system. Then there's Core Image and Core Video, which are new this year. Core Image is our image processing engine, and Core Video is our video processing engine. On the Core Graphics side, we're not staying still here either. So let me talk about that.

The big news in core graphics land is that Quartz 2D goes extreme. So we talked a little bit last year about having Quartz 2D on OpenGL, and this year it's going to be the default in Tiger. So we have quality with the 2D rendering through OpenGL that's virtually identical to software quality.

And for any of you guys who know how GPUs make different trade-offs about 2D graphics, it's actually pretty tricky to do this in all cases; getting really high-quality text with OpenGL is kind of a pain. But we've taken care of all of that. We cache even LCD-quality, sub-pixel-positioned glyphs and can render them and blit them onto the screen with all the proper blending that requires.

So this acceleration of Quartz 2D, however, does require programmable hardware to be able to do the LCD blitting and all the programmable blend modes. The low-level benchmarks when you put Quartz 2D on top of GL increase by 2 to 100 times. So there's some real impressive performance gains to be had. But the key to getting those are to reuse your resources.

So if you have CGImageRefs or CGPatternRefs, things like that, and you're going to draw them more than once, just be sure to hold onto them and they will get cached up in video memory. The other thing: since we can't accelerate QuickDraw, if you use QuickDraw within a window and you're using Quartz 2D Extreme, everything falls back to software.
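The practical takeaway is "create once, draw many times." A minimal sketch of that pattern with a CGImageRef follows; the file path is a placeholder, and the caching behavior described above is what Quartz 2D Extreme is designed to exploit.

```objc
// Illustrative sketch: load a CGImageRef once and reuse it for every draw, so
// the accelerated renderer can keep a cached copy in video memory.
// The PNG path is a placeholder.
#include <ApplicationServices/ApplicationServices.h>

static CGImageRef sCachedImage = NULL;

static CGImageRef SharedArtwork(void)
{
    if (sCachedImage == NULL) {
        CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/artwork.png"),
                                                     kCFURLPOSIXPathStyle, false);
        CGDataProviderRef provider = CGDataProviderCreateWithURL(url);
        sCachedImage = CGImageCreateWithPNGDataProvider(provider, NULL, false,
                                                        kCGRenderingIntentDefault);
        CGDataProviderRelease(provider);
        CFRelease(url);
    }
    return sCachedImage;   // created once, drawn many times
}

void DrawArtwork(CGContextRef ctx, CGRect where)
{
    // Passing the same CGImageRef every time is what lets the renderer cache
    // it up in VRAM instead of re-uploading it on each draw.
    CGContextDrawImage(ctx, where, SharedArtwork());
}
```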

So the core primitive benchmarks with Quartz 2D, you can see the increases here, pretty striking. Obviously, GPUs are great at filling memory, so an 800 by 800 rectangle is 236 times faster. It could probably even go faster if we could figure out how to feed it. Line drawing, for example, eight times. Text strings. Our software text path is actually pretty tuned at 1 and 1/2 million glyphs per second, but we're getting almost 5 million glyphs per second with the hardware, which is really good.

So we didn't turn it on by default in the WWDC build, but for those of you who know about the Quartz Debug tool, you can go in and turn it on. It works pretty well. It works a little bit better on ATI hardware than NVIDIA hardware right now. But you can turn it on. We know that there are a few bugs in there, but if you want to try out your app and see if things get faster, if not, go cache your pattern refs and image refs, and you'll probably see quite a performance difference.

The other thing to look at in Quartz Debug is you'll notice a Show User Interface Resolution menu item. And what we're talking about this year at WWDC is getting ready for resolution independence in the user interface and the toolkit. And I think this is-- thanks.

[Transcript missing]

Just to show you Quartz Debug, if you don't know Quartz Debug, you can find it in the developer tools.

It's under the performance tools. You can see I've got Quartz 2D Extreme enabled here. I'm going to go up to that tools menu and bring up the user interface resolution. And let's crank up things a little bit and give us kind of a virtual scaling factor of about, let's do something big like 1.75.

And now any app I launch is going to get a 1.75 scale factor. So let's launch Safari. You can see I get a huge menu bar and I'm not connected to the Internet. Well, that's too bad. But you can see how the app is drawn much larger. The menus are high resolution rendered. You can see the rest of the-- if I switch to another app, for example, it'll make it clear.

Here's Quartz Debug's menus. They're still small, whereas Safari's menus are big. And so you can see that there's going to be some drawing bugs, like over here by the Google Search menu, which we're-- we're working through with the apps and the frameworks. But you should use this tool on the developer CD and test out your app with the resolution independence. So back to slides, please.

The next thing we're doing in Quartz 2D is floating point pixel support. So yes, thanks. What comes after millions? It's jillions, I don't know. But it's a full floating point pixel processing pipeline. Floating point both on the source and destination, so CGImageRefs can be floating point, as well as CGBitmapContexts, the destination for drawing. We use unclamped colors processed all throughout the pipeline. And we're going to have a floating point CMM based on ColorSync in the Tiger GM.

So that's pretty exciting. And to go with that, we have a new framework in Quartz graphics called ImageIO, which is a thread-safe image handling library for file I/O. And it includes new HDR, or high dynamic range, floating point formats, such as OpenEXR from ILM, and various flavors of floating point TIFF that are out there.
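Here is a minimal, illustrative sketch of the ImageIO pattern being described: read a high dynamic range OpenEXR file and write it back out as TIFF. The paths are placeholders and error handling is omitted.

```objc
// Illustrative ImageIO sketch: read an HDR OpenEXR file and re-save it as TIFF.
// Paths are placeholders; error handling is omitted for brevity.
#include <ApplicationServices/ApplicationServices.h>

void ConvertEXRToTIFF(CFURLRef exrURL, CFURLRef tiffURL)
{
    // ImageIO picks the right decoder (including floating point formats)
    // based on the file's contents.
    CGImageSourceRef source = CGImageSourceCreateWithURL(exrURL, NULL);
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);

    // Write the decoded image out through the same thread-safe framework.
    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL(tiffURL, CFSTR("public.tiff"), 1, NULL);
    CGImageDestinationAddImage(dest, image, NULL);
    CGImageDestinationFinalize(dest);

    CGImageRelease(image);
    CFRelease(dest);
    CFRelease(source);
}
```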

Next up is PDFKit. PDFKit is a high-level framework for dealing with PDFs. It's an Objective-C framework up at the AppKit layer. You can think of it as Preview on a palette. And much in the way that WebKit is sort of Safari on a palette, PDFKit is sort of the PDF half of Preview running on a palette, complete with link traversal and printing and fit to page and all that sort of stuff.

[Transcript missing]

Quartz 2D in Tiger will fill in the last of the big holes that we've definitely gotten feedback on. The headers are still available, but they'll be marked deprecated if you compile against the Tiger target. Binary compatibility, of course, will continue to exist, at least for now. And you should definitely budget time in your next revision of your app to move to Core Graphics. And we really want to make it work for you, so we're definitely open to feedback.

And we have a whole session about transitioning to Quartz 2D that's specifically targeted for QuickDraw developers. And we've got a lot of new stuff that should make it easier, and we want to hear about more things you might need. So to help motivate that a little bit, let me do a quick demo.

of line drawing performance. So this is one thing we got feedback on as being much faster in QuickDraw than it was in Quartz 2D. Quartz 2D would lovingly anti-alias all of your intersecting polygons with some N-squared algorithm and would just be really slow. So people would use QuickDraw for stuff like this. Well, we've improved Quartz 2D such that it's now 1.5 million lines per second, and this is actually the software renderer. And once we go on top of GL, we're actually up at 11 million lines per second.

Great. And this is still only immediate mode in OpenGL. If you know anything about OpenGL, there might be another 5, 10x on top of this as well. But clearly, GL is the way to draw lines. And so just to drive it home, we're looking at, you know, 41. We didn't draw the initial Quartz 2D software, which is somewhat less than 1.0, as some of you might know.

But getting on to Quartz 2D is clearly what you want to do here. So back to slides, please. So some bonus sessions in the 2D graphics path. The high dynamic range is going to be talked about together with the ImageIO framework. That session is going to be on Wednesday, and then we have a session on PDFKit tomorrow. So please attend those.

Let's talk a little bit about Core Image. You've heard about it in the keynotes and other talks. Again, it's a framework for GPU-focused image processing with a rich set of built-in filters and our plugin architecture. The great thing is that you don't have to know about OpenGL to use Core Image, which is a nice feature for those of you coming from the 2D world. Core Image actually is an Objective-C API. It's based on a stream-based processing model.

There are very few abstractions. There are images, and there are kernels, which are the processing units. And then there are samplers, which kernels use to access data from within images. Kernels are described in a runtime-compiled, high-level C-like language. The parallelism is implicit. As I said before in the other talk, there are no explicit loops or threads. You just program the image unit to do what you want.

And the evaluation model, since it's implicit, is mappable to GPUs, symmetric multi-threaded CPUs, or other parallel architectures you might imagine. We have a software fallback that's actually quite good as well. It uses the Velocity Engine and a just-in-time compiler that's custom for Core Image. And it generates auto-vectorized code optimized either for G4 or G5, or even dual G5 with multi-threading if that's available.

And the evaluation engine does a number of sort of compiler techniques at the graph level as you construct image units together to minimize the number of passes and handle the temporary buffers, pbuffers and caching and things like that. So it does a lot of heavy lifting for you, and let me give you a demo of that.
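Before the demo, here is roughly what that model looks like in code: an illustrative Objective-C sketch that chains two built-in filters and lets a CIContext evaluate the graph lazily. The filter names are real built-ins; the particular chain, parameter values, and destination context are just for illustration.

```objc
// Illustrative Core Image sketch: a small non-destructive filter chain,
// evaluated only when a CIContext draws it. `cg` and `url` are assumed to
// be supplied by the caller.
#import <QuartzCore/QuartzCore.h>

void DrawPosterizedImage(CGContextRef cg, NSURL *url)
{
    CIImage *source = [CIImage imageWithContentsOfURL:url];

    // Posterize, then oversaturate, roughly in the spirit of the demo.
    CIFilter *posterize = [CIFilter filterWithName:@"CIColorPosterize"];
    [posterize setDefaults];
    [posterize setValue:source forKey:@"inputImage"];
    [posterize setValue:[NSNumber numberWithFloat:4.0f] forKey:@"inputLevels"];

    CIFilter *saturate = [CIFilter filterWithName:@"CIColorControls"];
    [saturate setDefaults];
    [saturate setValue:[posterize valueForKey:@"outputImage"] forKey:@"inputImage"];
    [saturate setValue:[NSNumber numberWithFloat:2.0f] forKey:@"inputSaturation"];

    CIImage *result = [saturate valueForKey:@"outputImage"];

    // Nothing runs until the context draws; Core Image then decides whether
    // the GPU or the vectorized CPU fallback evaluates the graph.
    CIContext *context = [CIContext contextWithCGContext:cg options:nil];
    [context drawImage:result atPoint:CGPointMake(0, 0) fromRect:[result extent]];
}
```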

So when we were preparing for the keynote, we did a lot of sort of experimentation with core image and some imagery that we didn't use in the keynote I thought it would be fun to show you guys here. So here's a couple of interesting filters. Here's a circular half-tone screen on this image. I can change the width. It's all getting computed on GPU. This is a Radeon 9800 XT card. Some other ones that are kind of fun. The surfer wave. Reset back and let's do a zoom blur on him. Oops, not white point adjust.

So that's kind of cool. You can change the amount. What else did I have? Let's do a layer effect. So I'm going to bring in a water droplet background image. Let's turn off Zoom Blur. Let me add another filter, which is a piece of water artwork. Let me add a rounded 3D effect on that, and you can see what that looks like. That's pretty neat.

The other thing I wanted to show was kind of a build of one of the stacking effects. Phil did the electric zebra this morning, and I'm going to show you a different one, which is also kind of in the 60s theme. So we're going to try to go for some kind of a silkscreen look, so we're going to color posterize the image a little bit. Then I'm going to oversaturate it using an oversaturate filter. Then I'm going to use a false color.

A false color just converts the image to gray scale and then maps it through a ramp. And I've got a couple of colors here that I'm going to use. So yeah, so now it's looking pretty psychedelic. Then the next thing I add in is every 60s poster needs a lenticular halo. There we go.

Which, this is doing a, it's a generator that generates kind of a sun striation sort of a thing. Let's do that. Okay, let's bring in some line art. Oops, well, that's okay. The 60s album, which is the type, and we'll insert a bump map in there to do our bump distortion.

And then, lastly, I think I have a crop in here. Yeah, so you can crop out the edges. So, but the great thing is the whole stack is live. I can, it's totally non-destructive. It's just sort of remembering the recipe to create the image, and you can kind of turn off the layers and see how each one is doing or go back.

I can change the posterization level on the fly. I can even go back and change the image on the fly. I could do it to, you know, the water image if I, you know, my band changed. Or I can even run video through the background. So, you know, you get your album cover and your video at the same time. And it's all still live. I can find the bump.

Let me add the bump back in, I lost where it was.

[Transcript missing]

So you definitely want to go to the core image session. Hopefully we've teased you enough by showing it in every single keynote session. And it's tomorrow afternoon at 5 o'clock, so you'll definitely want to see that.

Core Video. Core Video is, one way to think about Core Video is it takes the output of QuickTime and maps it into the GPU. And so it basically, new for QuickTime, is separating the decoding logic from the presentation logic. So it solves the video on a texture problem, which we know a lot of our internal developers, and I know a lot of you guys as well, have had. We've had like four or five different examples on the website of how to do video on a texture. None of them were actually optimal.

And we're just solving that in Tiger in a great way where you get total asynchronous behavior and data flow between the GPU and the CPU. And that's really what allows us to do some of these demos like 60p high definition H.264. It also means that once you get the video frames up in the GPU, you can use Core Image on them for video effects. So bonus sessions on that, of course: the Core Image talk, as well as New Directions for QuickTime Performance, where you're going to see a lot of this stuff in action.
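For the curious, here is a hedged sketch of the per-frame shape of that flow: QuickTime decodes into a visual context, and the drawing code pulls the newest frame out as an OpenGL texture. Creation of the movie and the texture context is omitted, and the calling environment (for example a display-link callback supplying the timestamp) is an assumption.

```objc
// Illustrative sketch of the "video on a texture" flow: QuickTime decodes into
// a QTVisualContext, and each refresh we pull the newest frame out as an
// OpenGL texture. Movie and texture-context setup is omitted; `videoContext`
// and `now` are assumed to come from the caller.
#import <QuickTime/QuickTime.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGL/gl.h>

void DrawLatestVideoFrame(QTVisualContextRef videoContext, const CVTimeStamp *now)
{
    if (QTVisualContextIsNewImageAvailable(videoContext, now)) {
        CVOpenGLTextureRef frame = NULL;
        QTVisualContextCopyImageForTime(videoContext, NULL, now, &frame);

        // The frame already lives on the GPU as a texture; just bind it.
        glBindTexture(CVOpenGLTextureGetTarget(frame), CVOpenGLTextureGetName(frame));
        /* ... draw textured geometry, or hand the frame to Core Image ... */

        CVOpenGLTextureRelease(frame);
    }
    QTVisualContextTask(videoContext);   // give QuickTime time for housekeeping
}
```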

So that's Core Video. So we have a whole stack of technologies we call Quartz at the core level of the operating system. And we think it's a great substrate for you guys to build apps on. And I'm going to show you Quartz Composer one more time. I had a quick demo of it before. I spent a little bit more time on it here.

Quartz Composer is an application that harnesses the power of the Quartz layer using Core Graphics, OpenGL, Core Video, and all of the technologies. Compositions can be saved in files, kind of like-- you can think of them as meta files. And they can have OpenGL info, Core Graphics info, et cetera. And there's a simple playback API that's a framework. The engine's built into the OS.

So you can load a composition and play it. And it creates a procedural animation. You can actually expose the variables outside of your animation to the key value coding system in Cocoa and set parameters like images or text strings into the animation. So let me give you a demo of that real quick here.
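In code, the playback path just described looks roughly like this; an illustrative sketch assuming a QCView in a nib, a composition file name, and a published input port called inputText, all of which are placeholders.

```objc
// Illustrative sketch of the composition playback API: load a composition into
// a QCView and push values into its published inputs. The "EngraveG5" file and
// the "inputText" port name are placeholders for whatever your composition
// publishes.
#import <Quartz/Quartz.h>

@interface EngraverController : NSObject
{
    IBOutlet QCView *compositionView;   // a Quartz Composer view in the nib
}
- (IBAction)takeLabelFrom:(id)sender;
@end

@implementation EngraverController

- (void)awakeFromNib
{
    [compositionView loadCompositionFromFile:
        [[NSBundle mainBundle] pathForResource:@"EngraveG5" ofType:@"qtz"]];
    [compositionView startRendering];
}

- (IBAction)takeLabelFrom:(id)sender
{
    // Drive the published input, the programmatic equivalent of the
    // Cocoa binding shown in the demo.
    [compositionView setValue:[sender stringValue] forInputKey:@"inputText"];
}

@end
```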

[Transcript missing]

And I'm going to show you a couple of things this time that I didn't show last time, like masks, slideshows, kind of neat. Basically, it takes some of the images from the screen saver and puts them through a mask. It generates a mask with OpenGL and then does a blit using the graphics hardware. I think this one pretty much is straight up OpenGL. It doesn't use any Core Image, just uses blending modes in the hardware. So that one's, that's pretty interesting. Let me switch back to Finder and show you Distortion FX.

Distortion FX is an interesting one. So here's something you hadn't seen before with the other composer demo. It says drag and drop an input file into the input parameters. And so what this composition does is it's all this wiring and in the details of the plumbing, but it exposes two variables, an image and a duration. And so it's waiting for input. And so let's drag Copenhagen, just an image of Copenhagen Harbor there. And let's look at that full screen. It's using some Core Image effects, the op tile and the bump distortion, to do that.

So you can see how if you had an application and you wanted to do that to an image, you could just actually create a Quartz Composer composition, load it up, and then just key value those couple of variables, and you've got it going. So to drive that home a little bit more, let me show you... Another one we have, which is a, we have engrave your iPods, well, we have engrave your G5, so. Let's look at the input. I'll show it to you down here, I can say, Peter's G5.

And you can see it now. It says Peter's G5. You can't really see it too well. So let's actually build an app based out of this composition. So I actually have a nib file. There's a nib file in the example directory there. You can see. And we have a palette, the Quartz Composer palette, which is one of these, I think. There it is.

Which has a controller, which is the green thing, and a view, which is the Quartz Composer view. And what's happening is I have a patch controller that I brought into my project down here, which is controlling the patch, the G5 engraving patch. And I have a view. And if we look at the parameters on the view, By bringing up the inspector, I'll look at the bindings.

You can see that the view is bound to the patch controller, which is this guy here who's controlling the patch. Now, if I look at this text field, its value-- The mouse is a little twitchy, sorry about that. The value of the text field is bound to the patch and to a variable in the patch called text.value, which is that thing that was exported in the composer.

So now, if I run this, I have a simple application, and I can just type in this text field and say Peter's G5. Let's see, I can label my own G5. So you see it's pretty easy to go into Quartz Composer and create a procedural definition of either an animation or a still thing like this and then just wire it up and drive data into it with Cocoa bindings. So we think that's kind of a pretty powerful integration technology there.

So back to slides. So we're going to have a session on the Quartz Composer called Discovering the Quartz Composer. It's 9:00 AM on Friday. But if you want to stop by the hands-on lab and get going with this stuff, tomorrow morning a bunch of us are going to be in there to help you with that, if you want to just look at the examples and get up and running on the tool.

So anyway, so that's our platform of all the new stuff we're doing in Quartz. And I just highlighted a lot of the new stuff. There's tons of sessions. I think there's about 15 sessions just on the graphics side of stuff that we want to talk about, and a bunch more on the QuickTime and audio stuff. So have a great conference. And we will see you at the sessions. And back to Tim.

Thanks a lot, Peter. OK, so we've covered audio, we've covered graphics, now it's time for the video stuff, and of course that means QuickTime. This year I'm not going to spend any time at all talking about marketing stuff. We've got a tremendous amount of good news to share with you, but tomorrow morning Frank Casanova and his team are going to put on a big extravaganza here talking about what's going on with the marketing side of QuickTime, the business side, how we're doing out in the marketplace.

Fantastic story. Going to tell you an awful lot about our 3G strategy, which is a tremendous success in the marketplace. We have by far the best system for supporting mobile media in the world. We support all the global standards at this point, and we've got just a fantastic end-to-end story and I think you'll like it. We're going to focus today on the technical side of QuickTime.

This is the architecture of QuickTime circa 1995, obviously very simplified, but the basic point here is that-- QuickTime is a very modular, very componentized architecture, but the components tend to be oriented towards supporting QuickTime, so they're kind of inwardly focused. We had video processing components, we had audio processing components, and all of that was layered on top of QuickDraw and on top of the Sound Manager.

If we spin forward to look at what we're doing in Tiger this year, you're going to see some dramatic changes taking place down in the core of QuickTime. And we're going to be layering QuickTime on top of Quartz, we're going to be putting it on top of Core Audio, and there's going to be some tremendous features here. We're going to take advantage of all these technologies, and I think you're going to really enjoy what starts to happen with QuickTime.

Today I'm going to talk about four very, very important happenings in the world of QuickTime that I think are going to have profound impact on your applications and on the ways that QuickTime starts to proliferate around the world. So the first thing I want to talk about is H.264. You saw some demos of this earlier. Let's look at what this is really about. So you all know what MPEG is. These are the guys who make DVDs, digital TV.

It's actually a collaborative effort between MPEG and a lesser-known organization called ITU, the International Telecommunication Union. These are the guys who, back in the 1860s, first set up an interoperability organization to ensure that people would be able to telegraph across national boundaries. They then applied that kind of mindset to radio, and more recently they do it with telecommunications. And they're really wonderful.

They're one of the premier organizations for advanced technology in the area of compression as it applies to communications. The goal is very simple. Build the best codec that anybody's ever seen. It supports a lot of different modes. We'll talk about that some more in a second. What you may not know is that Apple's been involved in this standard for a long time, and we've actually got a whole bunch of patented technology built into the core of the standard. And the format is built on the QuickTime file format, of course, so that's a great thing. Here's a little bit of a... a timeline chart showing you kind of how the video compression efficiency story has evolved.

The upper line shows you the story for MPEG-2, and the lower lines are showing you what's been happening with some of the more modern codecs. And the orange piece, obviously, is the H.264. A couple things to notice. We learned an awful lot about video compression over the last 10 years. And a lot of that was able to be played out in the MPEG-2 standard. So even though the standard was established in 1994, for the next 10 years, we saw dramatic improvements in the efficiencies of MPEG-2.

So you sometimes hear a story which says, "Well, your codec isn't good enough. You don't have the right standard." The thing you gotta separate out here is the politics from the reality of the technologies. And what we know is that with each successive generation of these technologies, there are an incredible number of tools that are built into these little algorithms that we all have to learn how to use.

And this shows you, in a very graphic way, what happened with MPEG over the last 10 years. I mean, this is dramatic. They went from almost 6 megabits down to just under 2 megabits to achieve the same level of quality. And what we're gonna see with MPEG-4 Part 10, H.264, AVC, it goes by a billion different names. We're gonna see, again, a continuing evolution as we learn how to use this codec. Let me give an example of the kinds of stuff that we're working through. So this is MPEG-4 Part 2. This is the version of MPEG-4 that we ship in QuickTime 6 today.

That we've distributed literally hundreds and hundreds of millions of copies of. This is sort of a summary of the various algorithms that are combined together to create the overall compression effect. H.264 offers this vocabulary of tools. So you might imagine that a software developer trying to build these kinds of technologies is gonna take quite a while to learn how to use them optimally. We've been working on this for a long time. We think we have a fantastic implementation already. But it's gonna get a whole lot better as the years go on. So we're really, really excited about it. What I want to show you now... Oops.

Nope. There we go. I'll show you a little demo of something we've never been able to do in QuickTime before. And I'm gonna do it in H.264, which you would think would be the hardest place to do it. We've been talking about high-def content a lot today. And as you can tell, Apple's very, very interested in high def.

We have... We've been able to support high def in the standard for a long time. We can do it at frame rates--30 frames, 24 frames per second. We can do amazing things with large frame sizes. But there's another flavor of high def that's often referred to as 60p. And this is actually 60 frames per second. I'm gonna show you a little demo of QuickTime playing this content... at 60 frames per second.

First thing you notice, I pulled down the menu and it didn't stutter. Pretty good. Pretty good. Now, so, what's interesting here, this is, you know, the video itself's not terribly dramatic. You saw a different clip of it this morning, but it's playing at 60 full frames per second.

Now, you would think, well, this is, yeah, this is a computational problem. You've got a really fast computer. Actually, it's not that heavy duty of a compute problem. We're not using that much of the CPU, but what the problem has always been, has been that, you know, you can't get the frames to the card fast enough to be able to keep up.

Because of the fact that we're now layering QuickTime on top of Core Video, we're able to do this perfectly. So, Core Video combined with QuickTime is not only providing us access to these amazing image processing capabilities that are built into the GPU, it's also providing some absolutely outrageous abilities to transfer data over to the card, which is obviously the other thing that we're able to do.

So, that's another half of the problem. If you can't get the stuff to the card fast enough, you're not going to be able to do anything with it. We're working with the Core Video guys as well to come up with very highly optimal methods for pulling the data back.

If you can't pull it back out, then you're not really going to be able to get it out, FireWire it to your cameras, or wherever else you want to go very effectively. And so, we're working on a lot of problems in this area, and it's very exciting that we're relayering this stuff the way we are. So, that's QuickTime playing 60p content, something we've never been able to do before.

Peter referred to this session earlier. There's a whole bunch of information available about what we're doing in the area of 264. In particular, the underlying format is outrageously complex, and we're introducing some new capabilities in QuickTime to be able to manipulate these formats in the ways you're accustomed to being able to manipulate video today in QuickTime, even though when you learn the facts, you would think it would be impossible to edit this stuff ever. And we're going to make it very, very easy so that your applications have no changes in the vast majority of cases, and if you do extremely sophisticated stuff, you'll have to make a few changes, and it's going to be very exciting.

Okay, so the next thing I want to talk about is some of the specifics about what happens when you put QuickTime on top of these Quartz technologies. I showed you some data transfer optimizations that are pretty exciting for a bunch of people who are focused on video. Let me show you some other things that we can do now that are very cool. Now, I'm not going to, I don't have slides with words. I think the demo will say it all.

Here we go. This is a little test app that we developed in the last couple of days. That's another very important point. This stuff has gotten so easy to program because we're really working all in Cocoa now. And it's just, it's amazing how fast these things come together. And you're not really making, you're not making performance compromises by going down the easy path. It's incredibly important. So here we have a video, the I, Robot trailer.

And the first thing you'll see-- OK, I'll turn the volume down just a little bit. The first thing that you can see is it's just like a lot of the apps that we're making now. Live resize. Very smooth, moves like butter. But there's a lot more going on here than meets the eye.

So I'm going to zoom back from the video a little bit, and what you can see is I can just grab this video and move it around. Again, I'm not losing frames or anything like that. That's kind of neat. What you can also see is that actually what we're doing is we're playing video onto a surface. Okay, well that's... That's pretty cool.

This is sort of the infamous, gee, if I could only play to a GL surface, then I could do all these wild and crazy things. But that's always been virtually impossible. We've made that really, really simple now. So the next thing we're going to do is we'll just do a little fade back here to normal. And I'm going to show you, we built some presets into the application to allow me to talk and make things change at the same time. And let me show you a couple of them.

So this first one is just going to take the video and we're going to just slide it down into the corner. Now, We have all these frames flying through Quartz Video and we want to be able to kind of visualize what that might look like. So we built this groovy... Oops, groovy little feature to let me spit the frames out.

Now, let me show you. This is not, like, rendered. It's not faked up. I mean, this is all live. This is live. I can do all the same kind of 3D stuff that we were doing when it was just on the single surface. Let me go back to normal. So we do an animated-- We can do a little slow-mo thing, 'cause it's just the way these things have to work.

So we've got another one here. This one will take it and put it over in the other corner. So now the frames are coming back out this way. You know, that's kind of fun to go back and forth. We've got another little effect we can do here where we just start taking the frames and twirling them. That's kind of fun. This is one I always wanted to see. It's sort of our matrix effect here. We'll take the video. I'll put it back in the straight-ahead mode. So now we're looking into the frames. We can zoom back it out.

It's kind of cool. And then we can go back. Go back to normal. So this is obviously a very small example of the kinds of things that you're going to be able to do with QuickTime layered on top of the Quartz technology. It's really fun. It's really fun. So that's QuickTime on Quartz. Whoops, I'm going the wrong way in the world.

Thank you. Okay, QuickTime and Core Audio. So we've been talking a lot about QuickTime on the image stack side. Let me talk a little bit more about what's happening with QuickTime on the audio side. So we've basically re-layered QuickTime entirely on top of Core Audio. It has some very... very obvious benefits. We're going to be able to support all the same high-def audio formats that you can support in Core Audio today, including surround audio formats.

And when you go to some of the other QuickTime sessions where they'll be talking about this in detail, you'll see some amazing demos. This also has an incredibly important but perhaps lesser known effect, which is that the quality of the synchronization across tracks is going to be rock solid. We're going to be down to sample level accuracy in the synchronization part of the system, which is fantastically important. And you'll be able to leverage the DSP capabilities that are present in audio units.

So I think I have a little demo of that as well. Yeah, okay, so I showed you a very simple time-scaling demo in the AU Lab application. I want to show you what you can do with this applied to more of a real-world application. So we have a little movie here, the Harry Potter trailer. This is what happens when you normally go fast forward.

So you get the kind of chipmunk thing. Okay, well, that's cute, but not a lot of fun. Let me change the modes that QuickTime's running in. Let me start the movie up again. This time, let's get it going. Now listen to what happens. - On page 394. - Is that working? You almost can't tell that it's been sped up because you can actually understand what's happening now.

So we've taken this one step further, and we actually added a little menu in here. To be able to access the controls a little bit more directly since the fast forward is sort of more constrained and I can play it along. So now I've got the ability to speed it up.

[Transcript missing]

Anyway, so you get the idea. You can modify the pitch and the tempo, and we're going to try and come up with some clever ways to be able to integrate this into more of the products that we build, but we think it's a very interesting technology, and I just wanted you to get a quick glimpse of what would be possible as we start to leverage these inside of the whole stack. So, the last thing I want to talk about is something we call QtKit.

We have gotten a lot of feedback about what it's like to program QuickTime on Mac OS X, and I've heard loud and clear, "QuickTime's very hard to learn. There's 2,400 APIs. How am I ever supposed to figure this out? You use data types that I don't like. I don't understand what they are. I don't want to have to create them. You're messing up my life. Why do I have to do this?" So, I heard it.

I've heard it. So I'm very, very happy to report that we're introducing a whole new framework to be able to work with QuickTime. And it's going to give you a very rich set of services, but like a lot of the other services that you find in the system these days, it's going to be really, really straightforward to use, fantastic impedance match with Cocoa, of course, and it's going to be able to take advantage of all the other advancements we've been talking about today.

It'll be able to be combined with all the other toolkits, and you're going to be able to do amazing things. To help me illustrate some of the very basic elements of this, I want to ask... Oh, yep, there we go. Ask Tim Monroe to come up and show you a very simple demo. Tim's one of the designers of QtKit, and it's very cool. Hi, Tim. TIM MONROE: Hi. Thank you.

So one of the first things people ask when they get a new framework in Cocoa is, what can I do for free? Show me the zero lines of code demo. And that's what I want to show you here now. So what I'm going to do is to show, to emphasize that there are zero lines of code. Actually, first I'm going to clean up here. Okay.

I'm going to do everything inside of Interface Builder. So you can't write any code in Interface Builder. So this will prove it. So here's a new palette for the QuickTime movie view, the QTMovieView, which is the central view class of the new kit. What I'm going to do is ask for a new Cocoa application. So now I've just got a window here.

And I will drag the palette item into the window. And there you can see it already looks a little bit like a movie. Let me resize this to fit the window. And then I want to set a few attributes for this. First, I want to set the size so that when I resize the movie window, the QTMovieView will resize automatically.

And then let's look back at the Attributes pane. You can see I've got sort of a handful of things I can control here. I can show or hide the controller bar. I can make the movie editable or not editable. And let's just stick with that. The last thing I can do here is attach a movie to this QTMovieView. And I'm just going to grab the Harry Potter.

And now I can go ahead and run this inside of Interface Builder. I'll test Interface. And I've got a QuickTime movie that I can run with zero lines of code. This is using the accelerated visual context. I've got all the goodies in Cocoa. Let me just illustrate that briefly.

One thing you like to get with a view is if I hold down the control key and click, I get a contextual menu. So we've got a nice contextual menu here that allows me to control the movie. Another thing I can do here, totally with zero lines of code, is make some selections.

I can come up here and cut, perhaps go over here and paste that. That was such a good section, I'll paste it over here. Maybe I'll come in here and cut this out. And because we're built on top of Cocoa, I can undo all of this up to the very top. I can undo the cut, undo the second paste, undo the first paste, undo the original cut, and now I'm back to where I can't undo anymore because I'm back to the movie.

Let me show you one more thing. I'll start it playing. And again, Tim sort of talked a little bit about this, but I can resize this and you'll notice that I'm getting full frame playback while I'm resizing it. So that's your basic live resizing handled automatically for you by our new QtKit framework. And that's what I've got to show.
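The same setup done programmatically, rather than in Interface Builder, is only a few lines. Here is an illustrative sketch; the controller class, outlet, and file path are placeholders.

```objc
// Illustrative QTKit sketch: the programmatic version of the zero-code demo.
// Assumes a QTMovieView outlet wired up in the nib; the path is a placeholder.
#import <QTKit/QTKit.h>

@interface PlayerController : NSObject
{
    IBOutlet QTMovieView *movieView;
}
@end

@implementation PlayerController

- (void)awakeFromNib
{
    NSError *error = nil;
    QTMovie *movie = [QTMovie movieWithFile:@"/tmp/Trailer.mov" error:&error];
    if (movie != nil) {
        [movieView setMovie:movie];
        [movieView setControllerVisible:YES];   // show the controller bar
        [movie play];
    }
}

@end
```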

Thank you. So that's really exciting. You know, this stuff is very powerful. We're actually building QuickTime Player on top of QtKit now. So the QuickTime Player that's in your seeds, in the developer seed that we've handed out today, is actually built on top of QtKit. It's a very powerful toolkit.

There's a session that will cover this in great detail coming up on Thursday and you should check it out. It's very cool. So here are four really big developments that we're making, advancements we're making in QuickTime. This is probably the most profound set of changes we've made to QuickTime since it was introduced back in 1991.

It's very, very exciting. We've got a whole new architecture. We're working very closely across all the different layers within the team to be able to build highly optimized implementations.

And so that's QuickTime. So we're just about at the end here. I wanted to tell you one reminder. There is this Graphics and Media Lab. It's a hands-on lab running all week long. I'm not sure if you know where it is, but it's downstairs in the back. And there's a lot of engineers hanging out down there waiting to help you with your applications, help you tune them, help you figure out how to do things, tell you about the new stuff. And I hope you have a chance to go down and see this.

I told you at the beginning of the talk, we wanted to focus on some very valuable technologies that would help you improve the quality of your user interface, would help you unlock some amazing performance, and would help you build applications and tools that were going to stand the test of time. And I hope you can see that with this stack, QuickTime, Quartz, OpenGL, on top of the great Apple hardware, we think we've got a great platform for innovation here. And I just want to say thanks. I hope you have a great time with it. Thank you.