
WWDC04 • Session 222

Discovering Quartz Composer

Graphics • 1:05:34

Quartz Composer is a development tool provided with Mac OS X v10.4 – for processing and rendering graphical data. It allows developers to use Quartz 2D, Core Image, Core Video, OpenGL, and QuickTime technologies through a visual programming environment. Developers can use Quartz Composer as an exploratory tool to learn the tasks each visual technology performs without having to learn the application programming interface (API) for that technology. View this session to discover Mac OS X's incredible new graphics technologies.

Speaker: Pierre-Olivier Latour

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good morning, everybody. 9:00 a.m. after the party – not too bad. I'm sure we'll get some stragglers coming in. Anyway, I want to make one program note. If you had to choose between this session and the GL shading language session – which, unfortunately, is at the same time because we had a last-minute scramble – after both sessions, this one and the GL one upstairs, everyone's going to be back in the Graphics and Media Lab, in case you had questions and had to pick.

Anyway, I'm really excited about what we have for you this morning. We're going to be talking about Quartz Composer, which is a new tool on the developer DVD – a pretty exciting way to play around with all the technology we have. So without further ado, I'll bring up Pierre-Olivier Latour.

Good morning everyone. Welcome to the Quartz Composer WWDC session. So the first thing we are going to look at is obviously, what is Quartz Composer? So it's a brand new tool we're introducing in Tiger, a developer tool, an exciting visual programming tool. And what you do with it is you create compositions, which basically are little programs that somehow process and render graphical data.

Quartz Composer is able to package a bunch of technologies into one single environment. You have technologies like OpenGL and Quartz 2D, or brand new technologies we're introducing in Tiger, like Core Video and Core Image – everything in real time, from editing and preview to debugging. And the compositions you create are really easy to integrate into your applications: we provide an IB palette, and the system is completely compatible with the Cocoa bindings.

The first thing I'm going to show you are some simple examples of compositions. Can we see demo machine number two, please? Thank you. So here we have a simple but typical composition. It contains a mix of 2D and 3D elements: in that case, we have a 3D cube that is rotating with some text on it, and in the background we have some simple text that is being blurred in real time using some Core Image filters. Okay, I'm going to let you read to the end of the text, if you are able to.

At the foundation of this composition is a simple 2D image to which were applied a bunch of filters, and at the end you obtain a completely different result from the original image, animated with time. And it's interesting to see that the quality of what you generate with Quartz Composer – using the underlying graphics technologies we have in the operating system – is very, very good. And it's running very fast. You might not see it, but at the bottom left corner we've got a frame rate counter, and we're running at 60 frames per second right now because we're syncing to the display. But it would actually run even faster.

This composition is interesting because we don't have any image as its source. It's basically generated from scratch using Core Image filters that are assembled and animated with time. And once again, you can see that the quality is very good, and it's completely per-pixel computed. This release of Quartz Composer also allows you to do some simple 3D animations which could be interactive and respond to the mouse. I'm using the mouse right now to actually move around in the 3D world.

This other composition is actually a slideshow where we have a folder full of images and the composition is responsible for putting them on screen and computing the transition between the images. The last composition I want to show you is a composition that gets data from the outside world, namely the Internet. So this composition is getting the RSS feed from the Apple website and showing the hot news with a simple text animation. Back to slides, please.

So what are we going to learn today? Basically five points. We're going to have some theory at the beginning, basically the Quartz Composer concepts. Then we're going to look at the application itself. We're going to go through a tutorial so that you can see a typical example of building a composition. And obviously we'll look at how to play back compositions in your applications.

At the base of the Quartz Composer concept, we have what we call patches. Patches are basic, simple processing units. Their role in life is simply to execute and produce some results. This result can be sent to their output ports, which we are simply going to call outputs, or to some rendering destinations.

Then, to control these patches, you pass parameters – input parameters, actually – through the input ports of the patches, which we're going to call simply inputs. So we can say that patches are little functions that produce some result according to a bunch of parameters and the current time.

The ports, the input ports and output ports of the patches are actually typed, which means the kind of data you pass does matter. We can pass some values like simple numbers, Boolean values, or strings, even colors. And we can also pass around complex objects like bitmaps, OpenGL textures, or Core Image images. Let's look at some example patches.

The first one we're going to look at is the LFO patch, which stands for low frequency oscillator. The goal of this patch is to produce a wave on its output, which is determined by the current time, the period of the wave, the wave type – that would be, for example, a square wave or a sinusoidal wave – and its amplitude.
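For reference (my notation, not the session's), a sinusoidal LFO can be written as

    value(t) = offset + amplitude * sin(2 * pi * t / period + phase)

so the inputs described here are just the parameters of that function, evaluated at the current time t.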

This other patch we have here does not have any input parameter. It doesn't need any. Its simple role is to produce on its outputs the current mouse position, the X and Y coordinates. And the last patch, the sprite patch, does not have any output ports because it is rendering to a destination. And in that case, this patch is actually drawing a quad at a given X and Y position and with a given color and texture on it.

So how do we put everything together? Well, let's say we have a kind of workspace, and we put on it all the patches we want to use. These patches are going to have some inputs and outputs, and we can build connections between the inputs and outputs so that, for example, the patch in the middle retrieves its input data from the other patches on the left.

So we can say that the patch on the far right is actually pulling the data from the patches on the left. What we build here is a kind of simple data flow model, and this is at the core of the Quartz Composer concept. We're going to put these patches into a kind of macro patch.

And where it becomes interesting is that, for example, the patch we have in the middle of the macro patch, the big one, could itself be another macro patch made of several patches. And some of these patches may themselves be other macro patches, and so on. So what you end up with is a hierarchical patch tree.

Now, if we call the macro patch at the top of the tree the root patch, then the entire patch tree, and the entire data flow it describes, is what we call a Quartz Composer composition. Now I'm going to show you the application itself, because that's enough theory for now.

You will find the application in the Developer folder, under Applications, Graphics Tools, Quartz Composer. It's a standard-looking Cocoa application where you can work on multiple compositions at the same time – one document per composition. So let's look in detail at this document window. What we have here is the workspace, where we actually put the patches and interconnect them.

On the right, we have the list of patches we can actually use. They are pre-sorted by categories so that it's easier to find them. For example, some of them are going to be controlling the objects by sending data for their position, for example. Some other patches are going to be used to import data, like images from files or from a video camera. Then we have several numeric patches that are used to perform some mathematical operations.

And then we also have rendering patches, because we want to render something eventually. At the bottom of the list you will find several categories that start with a dot. And these are all the Core Image filters which are natively supported by Quartz Composer. So you can find here all the compositing filters, distortion effects, generators, and so on.

[Transcript missing]

But you will notice that I don't see my sprite anymore. The reason is that since we're performing drawing operations, we need to define the order in which the operations are performed. And you will notice, on the top right corner of these kinds of patches, a number indicating the order in which they are executed.

We have number one here and number two there, and obviously we want the inverse: clearing the rendering area first, and then drawing the sprite. For that, you display the contextual menu on the sprite you're interested in, and you can change its rendering layer, which basically defines the order. And now it's working fine. Next, I'm going to use the third patch we introduced earlier, the LFO patch.

So the LFO is outputting a wave according to the time, and I'm going to drive the width of the sprite with that wave. Now we can see something that is animated. I might want to also drive the height of the sprite with that wave, but perform some other operations on it first. The way you would do that, for example, is to use a Math patch.

And I'm going to take the value from the output of the LFO, send it through the Math patch, and connect the result to the height of the sprite. Now, we've seen so far how you connect patches and have values transmitted dynamically between patches. But, of course, you might not always want to do that, and instead have some values defined statically.

So how do you do that? Well, you can simply double-click on an input port, which is going to bring up an editor, and then you can set the value on the input. In that case, it's a color input, so we get a color well, and I can pick some green color, for example.

Another way to edit the parameters is to show the inspector and go to the input parameters pane, where you can see all the parameters at once. You will notice that parameters that are connected to some outputs are not editable, obviously.

And what I might want to do there is simply, I don't know, force the sprite – for its height, I mean – not to go below 0.5, and to stay above it. And now we've got something like this. Let's go back to the slides, please.

Okay, we still have some theory to look at. Can we go back to the slides, please? Thanks. So we've seen earlier that we have what we call a hierarchical patch tree. Now we're going to take a rough look at how exactly it is evaluated by the system. At the top of the tree, we have a macro patch, which is basically the root patch. During evaluation, this patch is traversed, and the system executes the sub-patches contained in this macro patch.

And each time one of these sub-patches is a macro patch itself, it is traversed too, and all its sub-patches are executed, and so on and so on – the tree is traversed from the top down. Now, what exactly happens when we are inside a macro patch and we need to execute the sub-patches? The first thing to know is that not all patches are born equal. We have consumer patches. These are the essential, most important patches: they are the ones which render something.

For right now, they're all rendering to the destination area, which is going to be a screen or a preview rendering window. Each time you render a frame of the composition, they're going to be executed. They're executed in a defined order, as we've seen earlier, and you can see that order by looking at the number in the top right corner.

And they are actually the ones which pull the data from the other patches. What kinds of other patches do we have? Well, we have the processor patches, which are kind of slave patches. They run on demand: they simply execute when their inputs have changed and their outputs need to be updated. The system only executes them in a kind of lazy mode.

The third type of patch is the provider, whose role is to get data from outside sources into the system – in that example, it would be a mouse patch. Providers also run on demand, which means only when their outputs are needed.
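Before walking through the example, here is the whole model in illustrative pseudocode – my sketch of the pull model described above, not Quartz Composer's actual implementation:

    // One frame of evaluation: consumers drive everything.
    for each consumer, in rendering-layer order {        // e.g. Clear, then Sprite
        pullInputs(consumer)
        execute(consumer)                                // renders to the destination
    }

    pullInputs(patch) {
        for each connected input of patch {
            upstream = the patch feeding this input      // a provider or a processor
            if upstream's inputs changed since it last ran {   // lazy mode
                pullInputs(upstream)                     // recurse: Mouse, LFO, Math, ...
                execute(upstream)
            }
            copy the upstream output value to this input
        }
    }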

Okay, so now, how is that simple example actually going to execute? As we said earlier, the consumer patches are the most important, and they're the ones driving everything. So they get executed first: we have the Clear patch that executes, and then the Sprite patch. But this one is interesting, because some of its inputs are connected to outputs of other patches. So before running the Sprite patch, the system must make sure that all the connected inputs have up-to-date values.

So first, it's going to execute the mouse patch, so that it updates its outputs, which get copied to the inputs of the Sprite patch. Then we have the LFO patch that executes, and eventually the Math patch. Now all the values on the sprite are up to date, and we can execute the Sprite patch. Let's go back to the demo, please. I'm going to show you a little more about the evaluation system and how you can look at it inside the application.

Here we have a composition that is far more advanced than the ones we looked at earlier. Let's look at it for a second full screen. It's a kind of DVD menu simulation, if you want, that was built using Quartz Composer. We have live video playing, being masked in real time, some real-time Core Image filters, and some flying hearts and everything. And everything is mostly completely editable. For example, if I want to change the title, I just double-click on a string input and I say, um, now, WWDC 2004. Okay, now it's very good.

As we said earlier, the entire composition is a patch tree. The way you can look at it is by displaying the browser. Here you can see we have the root patch, and I can look at the background patch, the flying hearts patch, the title and the menu.

Some of them contain other sub-patches, and so on. Back to the root patch. Let's look at the background patch, which is a macro that was created to simplify things: so that you don't end up with thousands of patches at the root level, you can create macros to clean up your workspace. We're going to see that later in the tutorial.

So let's look at the exact evaluation of the system. In that case, we have a movie coming in – standard progressive DV at 24 frames per second. It's going through this patch here, "movie positioning" – that's actually not its real name, I renamed it – simply to put the movie at the correct position on screen. Then it goes through a series of filters to correct the gamma and adjust the color controls.

[Transcript missing]

And on top of that, we add a kind of white mask that is going to be used to generate a halo inside the ring. Eventually, when we combine everything, we end up with our final background image. Now, you can see that the tooltip system is very powerful, because it allows you to see exactly what's happening inside the data flow.

But we can do even better. If you press the debug button here, it's going to colorize the patches depending on how they are currently being executed. We have three colors. Green means the patch is currently activated and running. Red means a patch is not activated or running at all – for example, if I just drag and drop a patch here, nobody's using it, so it's just red. It's useless.

Then the orange patches are the ones that are activated but not running. Why is that? Because the data here is never changing: the system is obviously smart enough to only re-execute the parts of the data flow that are changing. Which is the case here, because the movie image that arrives at the entry of the pipe is changing every frame, so this entire pipe here needs to be re-evaluated. Now, let's look at what happens if I change the color here.

[Transcript missing]

So, on to the tutorial. This simple tutorial is going to take us through building a composition from scratch. What we're going to build is a composition that renders a simple real-time glow effect. Through that, we're going to learn how to build such a glow effect using Core Image filters, and also how to render a simple animated cube that we're going to feed through that glow effect – because we're going to render it and create a texture out of it.

And we're going to obtain the result you see at the bottom of the screen: the original cube, and then the cube with the glow effect on top of it. So how would we do a simple glow effect? It's only a two-step process: we have the original image, and on top of it, we add a blurred version. So we obtain this nice glowing effect. Let's get started.
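As an aside, the same two-step glow can be written directly against Core Image, which is what these patches wrap. A minimal Objective-C sketch, assuming sourceImage is a CIImage you already have (CIGaussianBlur and CIAdditionCompositing are the standard Core Image filter names):

    // Step 1: make a blurred copy of the original image.
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:sourceImage forKey:@"inputImage"];
    [blur setValue:[NSNumber numberWithFloat:10.0f] forKey:@"inputRadius"];

    // Step 2: add the blurred copy back on top of the original.
    CIFilter *add = [CIFilter filterWithName:@"CIAdditionCompositing"];
    [add setValue:[blur valueForKey:@"outputImage"] forKey:@"inputImage"];
    [add setValue:sourceImage forKey:@"inputBackgroundImage"];

    CIImage *glow = [add valueForKey:@"outputImage"];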

So I'm going to launch Quartz Composer, which starts up with a brand new composition. The first thing I'm going to do is import an image to work with: I go to the generators and use Bitmap With File. There, in the inspector, if I go to the settings pane, I can click on the Import File button and pick an image file.

In that case, this one. Here we go. The next thing I want to do is actually render this image on screen. For that, I'm going to use the image renderer, which gives us extensive control over rendering a simple image on screen. And I'm simply going to connect the bitmap output to the image input of the image renderer.

The connection is red, meaning there is some conversion going on: the Bitmap With File patch generates a kind of bitmap object, while the image renderer accepts a Core Image image. But the system is smart enough to do conversions between the various types you may be transmitting, whenever that's possible.

So the red is just there to indicate that a conversion is happening; otherwise, you would get a green connection. We're not displaying the entire image, because the image renderer gives us specific control over which part of the image we want to display. The reason is that it renders Core Image images, and Core Image images may be infinite – you might often end up with infinite images when you apply some filters. So in that case, you really need to specify: I want to render this area of the image.

So we can specify manually here, in the input parameters, that our original image is 1024 by 768 – that will do for now. Then, to achieve our glow effect, we said we needed a blur, added on top of the original rendering. For that, I'm going to use the Gaussian Blur patch, which is among the Core Image filters, and the Addition Compositing patch. All I have to do is connect the bitmap to the input of the Gaussian Blur.

[Transcript missing]

Okay, and once I'm done with my blur, I want to add it to the original image. Now we've got our nice glow effect. But if we look at it full screen, what I notice is that it's very bright, and one way to fix that is simply to insert a color adjustment and cheat by playing with the gamma before we apply the blur.

So I'm going to insert a Gamma Adjust before the blur, and then I can play with the power of the gamma correction to fix the brightness of the glow effect. Now it's much better. That will be the first part of this tutorial. The second part is going to be to create an animated background, because it's obviously more interesting to have our glow effect applied to something that is animating rather than something completely static.

So in a brand new composition, the first thing we want to do is, as usual, clear the screen, and then render a cube on top of it. The cube has several parameters: you can control the position, the orientation, the dimensions on every axis, and so on. First, we're simply going to change the dimensions to make it a little smaller.

"So, 0.75, we do the trick, and then we want to have this cube simply animate with time. One way to do that is, would be to use the, let's see, interpolation patch, which simply interpolates between two values on a given duration. And I'm going to control with that the X and Y rotation.

Okay. I want the interpolation to start at zero – in that case, we're in degrees – finish at 360 degrees over, let's say, 10 seconds, and loop. Now we've got our nice rotating cube. The next step is obviously to add a texture on the faces of the cube.

So I'm going to go back to generators and use bitmap with file and import an image. So I should have one for this. Here we go. Nice brick texture. And to display this kind of brand new image on the faces of the cube, I simply have to connect the bitmap to the various faces of the cube.

[Transcript missing]

So it's not very realistic. So let's look at the environment patches we have here. And one of them is the lighting. So we're going to drag and drop it there. We don't see any difference yet. The reason is that you need to specify how the lighting is going to influence the patch tree. Which patches on the workspace here are going to be influenced by the lighting.

You might not want all the patches to be affected, so the way we do it uses the fact that the lighting patch is a macro patch itself: it's only going to affect the patches that are inside it. One way to navigate through the patch tree, as we've seen earlier, is to use the browser. Another way is simply to double-click on the title bar of a patch.

Then you can go inside it, and if you want to go up one level, you just click on the Edit Parent button. So what we're going to do is just cut this, go into the lighting patch, and patch it – I mean, paste it. Here we go. So now we've got our cube, which is lit and rotating and everything. That's going to be part two.

All right, now how do we put these two compositions together? What we need to do is produce some kind of image with the rotating cube, which we can then feed through the glow effect. And it's convenient, because we have a patch specifically for that: the Rendering Texture patch, a generator patch which generates a texture from whatever is rendered inside itself.

[Transcript missing]

So it's important to notice that even if this tool attempts to abstract all the various technologies it encompasses, it's not hiding everything: you can still have access to the low-level settings most of the time. In that case, I'm going to use a rectangle texture, and I also want

[Transcript missing]

So now, onto this texture, I'm going to have my rotating cube. And the next step is obviously to feed that texture through the glow effect instead of the original image.

[Transcript missing]

You might want these values to be defined automatically. One way to do that is to use a tool we have around here – for example, the Texture Dimensions patch. As the name implies, you simply pass in a texture, and you get back the width and height in pixels of that texture.

So now it's set automatically, and we have our nice glowing effect applied on the cube. By playing with the power of the gamma, you can see the effect of the glow. If I go full screen, you're going to notice it's not running extremely smoothly. The reason is that, remember, our original texture, on which we apply the Gaussian blur, is the same size as the screen – so in that case, that's going to be 1280 by 960 or something.

And it's a lot of data to be processed by the GPU. Because computing a blur is a very expensive operation. So there are ways to optimize that. I'm not going to go into details for this example. I'm simply going to show you the end result of that optimization.

This is the version you're actually going to find in the example compositions provided on the Tiger DVD. So this one is the final result: it's very smooth, and we've got a nice glow effect. The difference from what we just built is that after the step where we generate the texture with the cube, we downsample that texture to one that is always 256 by 256, and we feed the downsampled version through the Gaussian blur and the glow effect. And because it's only 256 by 256, it's going to be very fast to compute. Back to the slides, please.

So now it's time to look at how we're going to play back compositions. There are three possibilities. You can use the QCView, which is a customized NSView we provide with Quartz Composer, and you can use it directly in Interface Builder. You can do more advanced playback, still in Interface Builder, using the QCPatchController and the Cocoa bindings. And the final way of playing back a composition is the hard way, where you have more control over the composition: using the low-level QCRenderer class programmatically. Back to the demo machine, please.

So let's launch Interface Builder. The first time you launch it, you have to add the Quartz Composer palette, which is not loaded by default. For that, you show the Interface Builder preferences, go to the palettes area, click Add, and you will find the Quartz Composer palette in Developer, Extras, Palettes, Quartz Composer. Okay, now we're up and running. So let's create an empty application.

And I'm simply going to drag and drop an instance of the Quartz Composer view onto my window and then display the interface builder inspector. In the attributes pane, you're going to find a load button which allows us to obviously specify the compositions we're going to be playing back.

So I'm going to take the optimized version of the glow effect we just created. And now I can just go directly to Test Interface, and it's up and running. So far, we haven't typed a single line of code to do all of this – it's worth noticing.
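For completeness: if you did want to do the same thing from code rather than in Interface Builder, the QCView class exposes equivalent calls. A minimal sketch, where qcView is assumed to be an outlet to the view and the path is a placeholder:

    // Load a composition into the view and start rendering it.
    if ([qcView loadCompositionFromFile:@"/path/to/composition.qtz"]) {
        [qcView startRendering];
    }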

Okay, that was very simple playback. Now we're going to do something a little more advanced and use the Cocoa bindings. Let's go back for a minute to the Quartz Composer application – I'm going to hide Interface Builder. Here we go. Let's open our optimized glow; that's this one. We said at the very beginning that the system is completely compliant with the Cocoa bindings – that means key-value coding and key-value observing. And to use bindings, you obviously need keys to specify the objects.

When you look at the tooltips, you will notice that you may not actually see it clearly on the screen, but the third item is the key. Because each patch in the system, in the patch tree, has a unique key to identify it. And then each input port or output port also has a key. And you can see these keys by simply displaying the tooltips.

So in that case, the key for the patch is "rendering texture underscore one," and the key for its output is "output texture." So we have a way to identify ports and patches in the tree. What we're going to do now, in order to use this composition with bindings, is transform it into a parameterized composition.

Okay, let's hide this. For example, we might want to parameterize the background color or the intensity of the glow effect. How would we do that? Well, let's go inside the rendering texture. The color we're interested in changing is defined by the Clear patch, so what we're going to do is bring that input up to its parent patch.

For that, I display the contextual menu on the patch, where I have a Publish Inputs option. I can select the inputs I'm interested in – in that case, Clear Color – and specify a name for the new input: simply "color." Now, if I go back to the parent patch, you will notice it has a color input, which corresponds to the Clear Color input of the Clear patch inside this macro patch. This is called publishing inputs and outputs. You know an input is published to its upper level because it's drawn as a filled dot instead of an empty dot.

But remember, we said at the beginning that we have a patch tree, and at the top of the patch tree we have a macro patch like any other macro patch – which means we can also publish inputs and outputs on this one. So let's do it. We're going to publish the new color input we just created to its parent patch, which is the root patch, keeping the same name. Now I have an input at the very top level – it's effectively an input of the composition itself: the color input.

So you can use the display pane here to actually see all the top level inputs of the compositions, which can be considered like the parameters of the composition itself. Let's publish another input like the intensity of the glow effect. Let's save the result as our part 4. Now I'm going to go back to IB. And this time we're going to do playback using bindings.

To use bindings, we need a controller, and we provide one: the QCPatchController, which you instantiate simply by dragging and dropping it onto the document window. Remember, the way bindings work is that you have a model – in that case, that's going to be the composition.

Then you have several views that display and interact with that model, and the interaction is mediated by the controller, which stands in the middle as the intermediary. On the controller, I can load a composition, and I'm going to load the one I just created.

Then I'm going to go back to my QCView, unload the composition I had set on it, and use bindings instead. So I display the bindings area of the attributes inspector, and there we have a "patch" property, which obviously determines the patch that is displayed by the view.

So I'm going to bind that to the patch controller I just created. I want to retrieve the root patch of the composition and display it in the view, and the way you do that is simply by using the controller's "patch" key, which is going to return the root patch object – remember, the patch at the top level of the composition.

So now I can test the interface, and it's the same result as before – nothing has changed, except we're going through bindings, which gives us a lot of flexibility. Because what I can do now is, for example, pick up a slider and use it to control the power input we just created on the composition. Remember, it's this one.

Okay, the way I would do that is simply by binding the value of the slider to the patch controller, retrieving the root patch object, and then specifying the key path to access that color input. If we look at Quartz Composer, we can see the color input's key in the tooltips, and also the key for the published version of the input. In that case, it is automatically computed from the name: because I used the name "color," the key is conveniently set to "color." That's always going to be the case, unless you already have a "color" input.

And then the system is going to have to pick up another name for you. So we know that that input is identified with color. So now that gives me the input. And what I'm interested in on the input is the value itself. Oh, I'm sorry. It's not actually color. It's power in that case.

So it's power. And what I'm interested in is the value on the power input. So I add .value. OK, now I can run the-- Oh, I need to fix the attributes of the slider so that it goes from zero to one only and that it is in continuous mode. Okay, that's better.

So now I've got live interaction between the composition we created and the control views I have in my window – still without typing a single line of code. Let's go a step further and drive the color now. Same principle: you go to the bindings area, bind the value to the patch controller – here we go – retrieve the patch object itself, and then say "color" to retrieve the color input, and ".value" to retrieve the value on that input.
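Incidentally, since this is all plain key-value coding, the exact same key path the binding uses could also be driven from code. A sketch, assuming patchController is an outlet to the QCPatchController instance:

    // Programmatic equivalent of the slider binding:
    // set the published "power" input of the root patch to 0.8.
    [patchController setValue:[NSNumber numberWithDouble:0.8]
                   forKeyPath:@"patch.power.value"];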

Okay. We can do better than that. If we go back to the composition – I can even – there you go, let's get it running. I might want to change the image I use as a texture on the cube. The way to do that would be to go back to the cube.

We would like to have the same texture on all the faces of the cube, and here we have six connections going on. You might want to simplify your life – this happens quite often when you want the same value set on several inputs at the same time, and when that value changes, you don't want to have to change six inputs each time.

For that, we provide a little tool – a little utility patch, I should say – which is the Input Splitter. If you display the inspector, you can select the type of data it transmits; in that case, we want to transmit a texture. The Input Splitter simply transmits the value that arrives on its input to its output.

So I'm going to connect the six faces to that single output, and now I can set that value from one single entry point instead of setting it for six. Obviously, I can do this, and I get the same result as before. But I'm going to publish that input and name it texture. Go back to the parent patch.

Get rid of that one, which is useless now. Go back to the parent patch – there it is, the texture input we just created. Publish it again to the upper level, and publish it one last time to the very top – oops, I published the output instead. Here we go.

"texture," keeping the same name. Now our composition has three input parameters: the background color, the power of the glow effect, and the texture on the cube. I'm going to save that and stop it. Go back to IB and reload the composition on the controller – the composition is actually stored inside the controller and saved with the nib file, so you need to reload it when you change it.

Now I've got my brand-new composition, still working the same. Okay. And in this case, I'm going to drag in an NSImageView.

[Transcript missing]

So now it's up and running, and what I can do is just drop any kind of image file there, you know, and we'll see our image with the glow effect. Back to the slides, please.

All right, so that was playing back a composition without typing a single line of code. But what if you really want to type some code? Well, you've got to use the QCRenderer class. Fortunately, it's still very simple to do – only three steps. You need to have an NSOpenGLContext around, because this entire Quartz Composer system is basically running on top of OpenGL as its primary backbone. Then we need a QCRenderer instance, and we simply render frames using the renderAtTime method.

Let's look at some sample code. What does it look like? Well, I'm going to assume I have an NSOpenGLView standing around, called myOpenGLView, and I retrieve its NSOpenGLContext from it. Then you create an instance of the QCRenderer using that context and the path to a composition file somewhere on your hard drive.

Now, if I want to render 10 seconds of that composition at, like, 25 frames per second, I can do it in a very ugly manner with this simple for loop. All you have to do is call the renderer, passing the appropriate time, and then flush the buffers on the OpenGL context to display what was just drawn on screen – more in a minute about why we need to do that. And eventually, when you're done, you just release the renderer, and all the cleanup is done automatically.
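Put together, a minimal sketch of that loop might look like this – assuming the Tiger-era QCRenderer initializer that takes an OpenGL context, a pixel format, and a file path, with myOpenGLView and the composition path as placeholders:

    // Create a renderer on an existing OpenGL context.
    NSOpenGLContext *context = [myOpenGLView openGLContext];
    QCRenderer *renderer = [[QCRenderer alloc]
            initWithOpenGLContext:context
                      pixelFormat:[myOpenGLView pixelFormat]
                             file:@"/path/to/composition.qtz"];

    // The "very ugly" way: render 10 seconds at 25 frames per second.
    for (unsigned i = 0; i < 250; ++i) {
        // (your own OpenGL drawing could go here: underlay...)
        [renderer renderAtTime:(NSTimeInterval)i / 25.0 arguments:nil];
        // (...and more here: overlay; see below for why the flush is manual)
        [context flushBuffer];
    }
    [renderer release];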

Now, why exactly doesn't the renderer flush the OpenGL buffers itself, so that just after a frame is rendered you immediately see it on screen? Well, the goal is to allow a lot of flexibility: you can interleave the composition rendering with your own OpenGL code. In that case, I can do some OpenGL drawing before, then render the composition, then do more OpenGL drawing afterwards, to create a kind of underlay and overlay.

So it's very easy to integrate the Quartz Composer system into your already existing OpenGL application. If you want to add, for example, some flying logos all around the screen or stuff like that, you can design them easily as compositions and then import them into your application.

We've seen that one way to communicate with a composition is by doing everything with bindings in Interface Builder. You can also do that programmatically, and the equivalent calls would be setValue:forInputKey: to pass data to an input of the composition, or valueForOutputKey: to retrieve data from an output of the composition.

The way you do that is simply to pass an object corresponding to the value, and specify the key corresponding to the input port or output port of the composition. You may have noticed earlier that renderAtTime: takes an optional dictionary of arguments. What we may pass here – because this is completely optional – is the current NSEvent being processed by the application.

You can also pass the mouse location in normalized coordinates. You'll find more details in the header for the QCRenderer class. You may wonder why you have to pass this NSEvent stuff and the mouse location yourself. Well, once again, it's optional. But the real reason is that you might want to have the QCRenderer system running in a command-line tool or some application where you don't have any UI.

So this data is simply not available there, and the system cannot retrieve it. Another reason might be that you're running it inside your application, but you don't want the QCRenderer to steal events from your system. That's the downside of using a low-level API like this: if you're playing back a composition that expects user events – and in that case, only the mouse patch, I think, actually uses them, obviously to detect mouse up and mouse down – you will need to pass these events manually.
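A sketch of what passing those optional arguments looks like – assuming the QCRendererEventKey and QCRendererMouseLocationKey constants from that header, and that event and mouseLocation come from your own event handling code, with the location already normalized:

    // Forward the current event and normalized mouse location to the composition.
    NSDictionary *arguments = [NSDictionary dictionaryWithObjectsAndKeys:
        event, QCRendererEventKey,
        [NSValue valueWithPoint:mouseLocation], QCRendererMouseLocationKey,
        nil];
    [renderer renderAtTime:time arguments:arguments];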

So what kind of value should you pass when you use setValue:forInputKey:? Well, for each type of port, there is a corresponding NS object you can pass. If it's a Boolean, index, or number port, you simply pass an NSNumber. You obviously pass an NSColor object for a color port, and so on. For the texture, bitmap, or image ports, you can pass NSImages for now – or, in the case of the image port, you can directly pass a CIImage you obtained from some other place.
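In code, that mapping might look like the following sketch, reusing the published input names from the composition we built earlier (color, power, texture); the outputImage key at the end is purely hypothetical, since our composition publishes no outputs:

    // One NS object per port type.
    [renderer setValue:[NSNumber numberWithDouble:0.5] forInputKey:@"power"];  // number port
    [renderer setValue:[NSColor greenColor] forInputKey:@"color"];             // color port
    [renderer setValue:[NSImage imageNamed:@"brick"] forInputKey:@"texture"];  // image port
    id result = [renderer valueForOutputKey:@"outputImage"];                   // reading an output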

So what if I want to control the power of my glow effect programmatically? Well, the way you would do it is simply by adding a call to setValue:forInputKey: on the renderer. The power input is a number input, so as we said earlier, we're going to use a simple NSNumber for that.

So we create an NSNumber with the value we're interested in, and we pass that to the renderer for the correct input key. The system is actually kind of smart: you don't even need to pass an NSNumber. Any Objective-C object that responds to floatValue, doubleValue, intValue, this kind of stuff, is going to work. So, back to the demo, please. I'm only going to show you a very simple application of the QCRenderer class: the simple playback application you saw at the very beginning of this presentation.

This simple application basically plays a composition full screen. It creates a full-screen NSOpenGLContext, creates a QCRenderer on it, loads the composition, and plays it continuously. It's like two pages of code, and 90% of it is actually setting up the OpenGL context, capturing the screen, detecting that the user is dragging and dropping a file on the application icon, and so on. The Quartz Composer part is like a couple of lines of code. Another example is this screen saver that is now provided with Tiger.

You might have seen it during Bertrand's keynote, I think. So what exactly is inside that screen saver? Well, let's have a look. I can show the original there, which is somewhere in the System folder. If you look at the package contents and go to Resources, you will find – guess what – a composition file. It's right there. So all that animation – oops, it changed the color or something – all of that complex-looking screen saver was done without typing a line of code, except for the simple playback part in the screen saver bundle. Back to the slides, please.

[Transcript missing]

The last thing I would like to show you opens some doors on what exactly you can do with Quartz Composer, and makes the point that it's really an open system. You can bring data in from many, many places. You've seen earlier that there were some interactive compositions retrieving data from the mouse, for example, or from an RSS feed. But we can also use MIDI controller devices, video cameras, and even more: the compositions we build can also respond somehow to their execution environment.

That would be, for example, the capabilities of the OpenGL renderer, or the dimensions of the rendering area. So, demo machine, please. For this last demo, I'm going to modify our now-famous glow effect composition once more and add more interactivity to it. Okay, so let's see – let's get a picture first. It's not there anymore.

So, for example, here I have a MIDI controller device, which simply has a bunch of sliders and knobs. Each of these knobs and sliders has a corresponding MIDI controller ID, and when you touch them, it simply sends the current value of that control to the MIDI system.

And we have here a MIDI controller patch I can drag and drop. By looking in the inspector, specifically at the settings pane, you can configure it completely. I want to listen to my MIDI input here. You can select several sources, filter the MIDI channels you want to listen to, and obviously select the controllers you want to observe. In that case, I want to observe controller 32, so I'm going to select 32 in the list. Now we have a new output that was generated for controller 32, and I'm going to delete the one for controller number 1.

It's outputting a value that is normalized between 0 and 1, because that's more convenient than the 0 to 127 values you usually get from MIDI equipment. So all I have to do is connect it there, and now I can use my slider here – as you can see, I control the glow effect simply with this slider.

All right, let's do a little more. Instead of using a static image or something generated from some other composition, let's get some video in. So I just use the Texture With Video patch. Here we go. And I'm going to feed that to, let's say – well, the faces of the cube.

So what I have to do is connect the texture output to the texture input of what is eventually going to end up on my cube itself. You can see there are several settings you can set for video capture. And I need to turn on the video camera, I guess. Let's try this. Okay, I might have to restart the composition so it detects the video input. Okay, so now I have my live video input there, controlled with the slider.

Now, the way to know whether the camera is connected or not is by the fact that some object is defined on the output. For that, we could use a tool called the multiplexer. The multiplexer simply has a number of inputs, and by using the source index input, you select which one of these inputs is going to be forwarded to the output.

So in that case, I'm going to set the multiplexer to manipulate texture objects, reduce the number of inputs to only two, and connect the output of the multiplexer to the input of my Rendering Texture macro.

[Transcript missing]

"So, I'm going to connect it there. Okay. And I can change the text, like no video. I have no video on my Q. How would I – so it's easy, you know, you just – if I turn on back the video camera there. Is it running? Okay.

And I restart. Now, by changing the input, I get either the video input or the text. We would obviously like that to be done automatically, and there is one little hack you can use to do that. It's simply using – let's see – the conditional or logic patches. Here we go.

Because you can actually connect an object, like a texture or something like that, to a Boolean input. And basically if there is an object, the Boolean input is going to be set to true and if there is no object, it's going to be set to false. So here I have a simple patch which is doing a logic comparison between two Booleans and I'm going to use that.

So I'm going to set it to an OR comparison, set the second input to true – so it always passes – and connect the texture object to the other input. So what we have now is: on that output, I'm going to have true if there is a texture coming in, or false if there is no texture.

And I can take that Boolean and connect it to an index input: true is simply going to translate to 1, and false to 0, which is exactly what I need. So now I've got a system that is defined automatically: if I have a video camera connected, I get the video camera image; if I don't, I get the "no video" text. Here we go.

Oops, I kind of accelerated the thing. Oh, I did it wrong. Oh, that's right – what did I do there? True, that is correct. And there we go. Oh, that's right, I didn't pay attention there. Here we go, that's better. So that was kind of an interactive composition. The last thing I'm going to show you is how to have the composition respond to the OpenGL renderer capabilities.

Because all this part here is basically using Core Image – and using it in hardware – it's only going to work if you have a video card that supports the proper extension set. So what happens if the composition runs on a card that doesn't? Well, it's not going to crash or anything, but all this part here is going to do strictly nothing. So you might want to display something else instead.

So let's go back to the tools. I have a convenient OpenGL Info patch, which returns, for example, the vendor and name of the current renderer and the version, and it can also check for the existence of various OpenGL extensions. The way you use it is by displaying the settings pane, where you can add some fancy extension names – for example, I don't know, "my fancy extension." Okay, add it there.

And now you get a Boolean output: if the extension is supported, you get true; if it's not supported, you get false. So obviously GL_ARB_vertex_program is supported on this computer. Fragment program is also supported. But "my fancy extension," well, is not supported, for some reason. So, okay.

Now what I'm going to do is a simple comparison on the renderer version. So that would be Numeric, and then Conditional. The renderer version is a number – in that case, I think it's OpenGL 1.4. And what I want to test is: do we have an OpenGL renderer that supports OpenGL 1.4 or later? So I just do "is greater than or equal to" 1.4.

And you may have noticed that all the consumer patches automatically have an Enable input added by the system – a simple Boolean – and you can turn the consumer patches on and off from that input; the Enable input is at the top of the patch. Now what I'm going to do is connect my result to this Enable input.

As you can see, it doesn't change anything, because this video card supports OpenGL 1.4. Now, what if for some reason I wanted to test for support of OpenGL 1.5? Well, I would simply set 1.5 – oops, I didn't even put the 1.4 there. Here we go. I would simply put 1.5 instead of 1.4.

"Greater or equal than." Okay, and I must have done something wrong here. Oh, that's right. That's pretty bad. Here we go. So now you can see that simply changing The version you want to detect for is going to turn off dynamically parts of the data flow. So it's important to – I'm going to conclude on that, that compositions you create are absolutely not something that is closed.

You can really interact with them. You can feed data in. You can retrieve data from them. They can be dynamically responding to the environment. And it's a great, great system where you can experiment with the brand new technologies we are adding in Tiger without typing a line of code. And you can add them easily to your applications. Back to the slides, please.

If you want to contact someone about this, there is Travis Brown, who is our graphics and imaging evangelist, or possibly myself. For more information, you may want to look at two documents we have. The first is Visual Computing with Quartz Composer, which is a simple introduction followed by a tutorial – pretty much the tutorial we just did right there.

The other document is Image Processing with the Core Image Framework, which contains an extensive description of all the filters provided by Core Image – the ones natively supported by the Quartz Composer system. You will also find the sample code for the Quartz Composer Player I demoed today on the ADC website, and some demo compositions located in /Developer/Examples/Quartz Composer. And you may obviously look at the screen saver.