
WWDC06 • Session 221

Advanced Quartz Composer Development

Graphics and Media • 1:12:59

Quartz Composer is a powerful visual programming tool for utilizing graphics and animation on the Macintosh. With Quartz Composer you can easily explore the graphics stack in Mac OS X. Mac OS X Leopard brings a number of new APIs as well as the ability to write custom "patches" for use in your compositions, allowing you to take Quartz Composer further than it's ever gone before. We'll demonstrate this new technology as well as advanced API techniques for integrating Quartz Composer technology into your application.

Speaker: Pierre-Olivier Latour

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it may contain transcription errors.

Good afternoon, everyone. Welcome to the third and final Quartz Composer session during this WWDC. My name is Pierre-Olivier Latour, and I'm the engineering manager for Quartz Composer at Apple. In this session, titled Advanced Quartz Composer Development, we are going to learn a number of things. First of all, as the title of the session implies, a number of advanced techniques using either the QC view or the QC renderer, and how to do low-level rendering of compositions. But then the biggest part of the session is going to be how to write your own patches. So we'll have a deep look at that.

Throughout this session, I'm going to assume a number of things: that you are already familiar with the Quartz Composer concepts, which would be the case if you have already played with it, or if you went to either the session this morning or the one yesterday. I'm also going to assume you have the basics of OpenGL, as well as the basics of Objective-C 2.0 and the changes that went into that language.

So let's start with advanced techniques. First of all, the QC view. Well, the QC view is the easiest way for you to display Quartz Composer contents inside your application. The way it works is it's pretty much an autonomous view that's driven by a timer that's running on the main thread, and every x number of times per second, it's going to render a new frame. So because the QC view is kind of living in its own world, the question becomes how to best interact with the rendering that's happening in a QC view.

In Leopard, we're introducing a new method that's called renderAtTime:arguments:. That method is only for subclasses of the QC view, so you don't want to call it directly. And when you subclass QCView and override that method, you do not want to modify the time or arguments parameters when passing them to the super implementation. So what's the point of that new method?

Well, it's to cover two usage cases when you want to interface tightly with the QC view. It's either to do synchronized communication when talking and retrieving results from the QC view, or if you want to even add your own OpenGL rendering as underlays or overlays. So let's look at the synchronized communication. What does that exactly mean?

Remember, in the QC view, the composition is rendering in its own world, at its own rate, and so on. So the problem becomes: what if you want to set some input parameters of the composition, and you want to set them for every frame? The way you would do it in Tiger is by probably having your own timer that's running a little faster than the one of the QC view, say 70 hertz or something like that. And then you would end up setting the parameters all the time, but it wouldn't be synchronous. So the idea here is that you can now know precisely when the QC view is about to render a frame, and therefore set your input parameters only when necessary and only at that time, and then it renders and you retrieve the result. So let's look at an example.

You subclass QCView and you override the renderAtTime:arguments: method. You're going to call super so that the QC view is not broken and actually performs its rendering. But before calling super, you can call setValue:forInputKey: and set your input parameters just before rendering happens; then rendering happens inside super, and afterwards you can retrieve the results. So it's pretty straightforward.
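The pattern just described might look like this in code. This is a minimal sketch of the Leopard-era API as presented in the session; the MyView class name and the "speed" and "result" composition keys are hypothetical:

```objc
// Hypothetical QCView subclass demonstrating synchronized communication.
@interface MyView : QCView
@end

@implementation MyView

- (BOOL)renderAtTime:(NSTimeInterval)time arguments:(NSDictionary *)arguments
{
    // Set input parameters right before this frame is rendered
    // ("speed" is an assumed input key on the composition).
    [self setValue:[NSNumber numberWithDouble:2.0] forInputKey:@"speed"];

    // Let the QCView perform the actual rendering.
    BOOL success = [super renderAtTime:time arguments:arguments];

    // Retrieve results, fully synchronized with that frame
    // ("result" is an assumed output key).
    id result = [self valueForOutputKey:@"result"];
    NSLog(@"frame at %f -> %@", time, result);

    return success;
}

@end
```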

A more interesting and, let's say, more powerful use of that is if you want to add some custom OpenGL overlays or underlays combined with the rendering of the QC view. There are a couple of restrictions there. First of all, we're going to do OpenGL drawing, and the current context does matter. Instead of changing the current context to the one that the QC view uses, it's simpler and more efficient to just use the CGL macros. There'll be more on that in a minute. A golden rule to follow when you work inside Quartz Composer and write OpenGL code, and we're going to repeat that a few times through this presentation, is to make sure you save and restore the states you change. Quartz Composer is tightly integrated with OpenGL, and OpenGL is a giant state machine. If you change some states and you don't restore them properly to what they were before, the whole system ends up in an inconsistent state and your rendering is not correct. And finally, you want to make sure you check for OpenGL errors.

So here's what it means. In that case, we subclass QCView and override renderAtTime:arguments: like in the previous example. You will notice at the top of that sample code that there is the inclusion of the CGLMacro.h file, which basically makes it so that whenever you call an OpenGL function, it will be targeted to a specific OpenGL context defined by the local variable cgl_ctx. It's basically a bunch of preprocessor macros that expand to that. This way, you don't have to touch the current context at all; you don't have to set it or restore it or anything like that. It's very straightforward: you just retrieve the context that the QC view uses, in that case by calling [self openGLContext] and then CGLContextObj to retrieve the underlying CGL context object, and every OpenGL call you put there from that point on will be sent to that context. So about the OpenGL code, here's an example that would just draw a red quad. Remember what I said that was very important: you want to save and restore the states. So the first step at the top is saving the current states that we're about to change, then changing them, doing the drawing, restoring the original states, and handling any errors.
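A sketch of that overlay pattern, following the steps just described (the red-quad drawing is illustrative; this assumes we're inside a QCView subclass):

```objc
#import <OpenGL/CGLMacro.h>

- (BOOL)renderAtTime:(NSTimeInterval)time arguments:(NSDictionary *)arguments
{
    // Let the composition render first; our drawing becomes an overlay.
    BOOL success = [super renderAtTime:time arguments:arguments];

    // All GL calls below this declaration go to this context,
    // thanks to the CGLMacro.h preprocessor macros.
    CGLContextObj cgl_ctx = [[self openGLContext] CGLContextObj];

    // Save the states we are about to change.
    glPushAttrib(GL_CURRENT_BIT);

    // Draw a red quad as an overlay.
    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    glBegin(GL_QUADS);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.5f,  0.5f);
    glVertex2f(-0.5f,  0.5f);
    glEnd();

    // Restore the original states and check for errors.
    glPopAttrib();
    if (glGetError() != GL_NO_ERROR)
        NSLog(@"OpenGL error after overlay drawing");

    return success;
}
```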

Another thing we added to the QC view in Leopard is the ability to retrieve directly the image that it rendered. It's very straightforward as well. All you have to do is call createSnapshotImageOfType: on the QC view you want to retrieve the rendering from. And what's great about that method is you can specify a number of types. You specify the type as a string, which can be for instance NSBitmapImageRep or CVPixelBuffer. That method will take care of downloading the image from the GPU and flipping it vertically if necessary, because OpenGL rendering on a GPU usually happens upside down, so you don't even have to deal with that. It will provide you directly an object ready to be used in the format you want.

Now, you've got to be careful, because that method starts with "create" for a reason. It means it's going to return you a brand-new object. It's not autoreleased; it's not on the autorelease pool. So make sure you call release on it, or the appropriate CF release function, whatever is appropriate for the image type.

And you will also want to make sure, for best performance, to ask for the type that is best suited for what you're going to do with it afterwards. So for example, if what you want to do is get the image and pass it to QuickTime, well, QuickTime APIs like the compression session APIs typically use CVPixelBuffer, so you ask for a CVPixelBuffer. If you were to pass it to Core Image, you would ask for a CIImage, and so on. One of those cases is: I want to pass the rendered image to another QC view or QC renderer or something like that. The best type you can use in that case is CVOpenGLBuffer, because it is a buffer that lives on the GPU, and therefore your content will stay on the GPU; there is no extra download and then upload. I'll show you a quick demo of some of those features.
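As a sketch, grabbing a snapshot in the type best suited for its destination might look like this (qcView and imageView are assumed outlets; the type strings follow the Leopard-era API described above):

```objc
// For an NSImageView or other AppKit consumer, ask for an NSImage:
NSImage *image = [qcView createSnapshotImageOfType:@"NSImage"];
[imageView setImage:image];
[image release]; // "create" method: we own the object, so release it.

// For QuickTime compression sessions, ask for a CVPixelBuffer instead:
CVPixelBufferRef pixelBuffer =
    (CVPixelBufferRef)[qcView createSnapshotImageOfType:@"CVPixelBuffer"];
/* ...hand the buffer to the compression session... */
CVPixelBufferRelease(pixelBuffer); // CF-style release for CF types.
```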

I just built a very simple app here that is showcasing those two features. You can see here I've got a basic composition with an Apple logo, and I can rotate it like that, with some inertia in the logo. Well, here, that red frame could be, for example, a safe frame area in the video world. And that is drawn by a few lines of OpenGL code that were inserted after the drawing of the QC view, so it's completely synchronous with the rendering of the QC view. Now, you can see here I also have a secondary window, which has a timer and simply an NSImageView inside. The timer fires 10 times per second, takes a snapshot of the main view, and puts it as an NSImage inside the NSImageView here. So it's really simple to do overlay or underlay rendering, or to grab snapshots from the QC view. Back to slides, please.

Let's go a bit deeper, to low-level rendering of compositions. We have a dedicated API in Quartz Composer for that. It was there in Tiger; it's still there in Leopard, obviously, and it's been improved with new methods. It's called the QCRenderer API. The way it works is that in that case, you're really in charge of rendering the composition. You specifically render the composition to a given NSOpenGLContext or CGLContextObj, which is the low-level OpenGL context in Mac OS X. You can also specify a custom output RGB color space, which is new for Leopard. So you can have your composition render in the device color space, or you can have it render to a specific color space if you want to do color matching later on. And it's, like the rest of QC, fairly easy to use. Only two steps. The first one: you create your QCRenderer.

And typically, there are numerous methods there, but typically the parameters are the OpenGL context, the pixel format, which you have to pass as well, it's required, and the path to the composition file. There are also variants of the method that accept QCComposition objects and so on. Then at that point, you have a QCRenderer. The composition is loaded. Everything's ready to render.

Whenever you want to render a frame, you simply call renderAtTime:arguments: on the QCRenderer. You pass the time at which you want to render that frame, for example 0 for the first frame. And if you want to render 10 frames over one second, you would pass 0, 0.1, 0.2, 0.3, and so on. The time is expressed in seconds. And you can pass optional arguments; we'll look at that in a minute. There is one point of interest regarding the QCRenderer. It's rendering to an OpenGL context that you define yourself. However, the QCRenderer will not take care of swapping the buffers if your OpenGL context is double-buffered. What that means is, remember, a double-buffered OpenGL context has a back buffer to which you draw. And when you're done drawing, you actually have to swap the back buffer with the front buffer so that it ends up visible on screen. The interest of double buffering is that this way you don't see the drawing happening as if you were drawing directly to screen; you see only the final image. In that case, the QCRenderer will not do that swapping for you, which is great because you actually have the opportunity to do post-drawing, the same way you could do drawing before the composition renders. Here's a pretty simple case. I'm assuming here, for the sake of that example, that I have an NSOpenGLView around. So I retrieve the OpenGL context, I retrieve the pixel format of that view, and I have a file somewhere on disk with a composition.

So we create the QC renderer with those parameters. Then we can have a simple for loop and render frame, render frame, render frame. Assuming the OpenGL context is double buffered, we flush it to make the result visible on screen. And when we're done, we just release the renderer. That's it. It's that simple.
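The two-step usage just walked through might be sketched like this (glView and the composition path are assumptions; the initializer is the Tiger-era one mentioned above):

```objc
// Create the renderer from a view's context, pixel format, and a file path.
NSOpenGLContext *context = [glView openGLContext];
NSOpenGLPixelFormat *format = [glView pixelFormat];

QCRenderer *renderer =
    [[QCRenderer alloc] initWithOpenGLContext:context
                                  pixelFormat:format
                                         file:@"/path/to/composition.qtz"];

// Render 10 frames over one second: t = 0.0, 0.1, 0.2, ...
NSUInteger i;
for (i = 0; i < 10; ++i) {
    [renderer renderAtTime:(NSTimeInterval)i * 0.1 arguments:nil];
    [context flushBuffer]; // QCRenderer never swaps a double-buffered context.
}

[renderer release];
```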

So like I said earlier, you can interleave the rendering of the QC renderer with your own OpenGL code, which allows interesting underlays and overlays as well. In that case, you can see that I took the same loop as before, but I added a bit of OpenGL code before and after, where you would draw some kind of background. Then you draw the composition. Obviously, we're assuming here that the composition is not covering the whole drawing area. Otherwise, it's pointless to draw something before. And then you can draw some overlay with some GL code.

If you want to communicate with the compositions through the QCRenderer, it's the same exact set of methods as on the QC view. So setValue:forInputKey: will set an input parameter on the composition, and valueForOutputKey: will retrieve an output result from the composition. Now, regarding the arguments you can pass when you call renderAtTime:arguments:, there are two of them, and they're completely optional. The first one is QCRendererEventKey. If you have an NSEvent around, because the result of your QCRenderer is being displayed in a window or something, and you have NSEvents flowing in, you should pass the current event to the QCRenderer when you call renderAtTime:arguments:. This will allow patches that depend on events, for example the keyboard or the mouse, to work properly for mouse clicks and such. The other optional argument is the current mouse location. The way we do that here is you put the mouse location as an NSPoint stored inside an NSValue. And the coordinate system for the mouse must be normalized from (0, 0) to (1, 1).
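Assembling those optional arguments might look like the following sketch (the view, event, renderer, and the "image"/"outputImage" keys are assumptions; the two dictionary keys are the ones named above):

```objc
// Normalize the mouse location from window coordinates to [0,1] x [0,1].
NSPoint p = [view convertPoint:[event locationInWindow] fromView:nil];
NSSize size = [view bounds].size;
NSPoint normalized = NSMakePoint(p.x / size.width, p.y / size.height);

NSDictionary *arguments = [NSDictionary dictionaryWithObjectsAndKeys:
    event, QCRendererEventKey,
    [NSValue valueWithPoint:normalized], QCRendererMouseLocationKey,
    nil];

// Same I/O methods as on QCView: set inputs, render, read outputs.
[renderer setValue:inputImage forInputKey:@"image"];
[renderer renderAtTime:time arguments:arguments];
id result = [renderer valueForOutputKey:@"outputImage"];
```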

A great thing about the QCRenderer is that you can do offline rendering. You could have a custom thread, separate from the main thread, that's rendering a composition in the background or something like that. Or you could just have a command-line tool that renders a composition directly to a bunch of TIFF files, things like that. Well, we made it easier in Leopard: there is a new method on QCRenderer called initOffScreenWithSize:colorSpace:composition:. What it does is take care completely of creating the OpenGL context and the pbuffer to render on the GPU; you don't even need to know how it works, you just use it. You specify a composition and a color space. You can even pass NULL instead of a color space to get the default one, and you're good to go. Now, that rendering still happens on the GPU, always on the GPU, which means you cannot render at a size that wouldn't be supported by the GPU, for example 16,000 by 10,000; that wouldn't work. Typically, the size limitation was 2K by 2K on older hardware, but most of the machines we're shipping today probably support more than 4K by 4K. And depending on the amount of VRAM you have around, the rendering will happen more or less fast.
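A sketch of offscreen rendering with that Leopard initializer (the size, path, and the idea of snapshotting each frame to an image are assumptions; the method is spoken in the session as taking a size, a color space, and a file):

```objc
// Load the composition and create a fully offscreen renderer.
QCComposition *composition =
    [QCComposition compositionWithFile:@"/path/to/composition.qtz"];
QCRenderer *renderer =
    [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(640.0, 480.0)
                                   colorSpace:NULL // NULL = default color space
                                  composition:composition];

// Render a frame, then grab it, e.g. to write TIFF files from a tool.
[renderer renderAtTime:0.0 arguments:nil];
NSImage *frame = [renderer createSnapshotImageOfType:@"NSImage"];
/* ...write the frame to disk... */
[frame release];
[renderer release];
```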

I would like to now look at a very interesting possibility we added in Mac OS X Leopard, which is the ability to do chaining of compositions. The idea behind that is that instead of having a big monolithic composition with a ton of patches inside, you divide it into a primary composition and then a bunch of sub-compositions that are well-defined and that you load dynamically into the first one. So that was not possible in Mac OS X Tiger, and you can now do that in Leopard using a new patch called the Composition Loader. So let's do a demo of that.

Let's go to a demo machine, please. Thanks. I have here two compositions that are autonomous. One of them is simply displaying an input image with this animation and a reflected gradient and so on. Another one is applying some kind of neon effect to an input image, and you can also pick the color of that neon effect. What I want to do now is build a new composition from scratch that puts these two sub-compositions into a master one. So let me organize that a bit.

OK, so what we need is that new patch called the Composition Loader. I'm going to create an instance of it. The first thing you expect from it is the ability to specify the composition to actually render. And there is an input for that, composition location, which will accept file paths, URLs, the usual things. So let's take that gradient composition.

It's loaded, and now it's rendering here. However, that would be pretty limited if we were only able to load compositions and render them. What we want to do as well is communicate with the inputs and outputs, and we can do that. If you display the inspector for the new Composition Loader patch and go to the Settings pane, you can configure a kind of template with the inputs and outputs of the composition. If I look again at the gradient composition, there is one input, and the key is image.

So here in the settings of that patch, I'm going to say I have an input of type image with the key image. Create that. You can see the image is now available as an input. And I can take my Apple logo and connect it there, and it just works. Now, what would happen if I were to load a composition that doesn't have an image input with that key? Well, nothing. I mean, you wouldn't get an error; it just wouldn't get connected. So you can see the inputs and outputs that you define on the Composition Loader as a kind of weak template of inputs and outputs. If the composition that's loaded has one or more of those inputs and outputs, they will automatically get connected, and you can talk to them through the inputs and outputs of the patch. If they don't exist, well, no big deal. You don't get errors; it doesn't crash or anything like that.

Now let's insert our effect, the neon effect. I'm going to create another Composition Loader. And in that case, we also have something interesting: we don't want that neon effect to render. We want it to be inserted between the Apple logo image and the image that's displayed in the second Composition Loader here.

So let's look at the settings. And you can see that the top setting is the type of patches. So remember, there are several types of patches in Quartz Composer, three to be precise, processor, consumer, and provider. And obviously, the system, when it loads a composition from disk, the type definitely matters. So in our case, we are going to use that neon composition as a processor with the input image and color inputs. And it also has an output, which we can see by looking at the composition, because right now we don't have a way to display the outputs on the viewer. We only display the inputs in that area. You can see it has a published output, which has the key output image.

So I can go back here, set the execution mode to processor, and now I can define inputs and outputs, because consumers have the particularity that they're not allowed to have outputs. So now the two sets are visible, and I can add an image input with the key input image, and an image output with the key output image. Let's specify the composition location.

and send our original image through that. And now I have my neon effect animated on the background. So as you can imagine, that opens the door to pretty powerful possibilities. And as a demonstration of that, for those of you who were present last year, there was this hands-on session that I gave where at the end, we were building kind of a fake TV system that was called the QC TV. And it's a sample code that you can get from the ADC website. The way it was implemented is that there were a number of compositions, one for each part of the TV setup: one composition that handles the crawler at the bottom, another one that renders the upper bar with the logo and such. Each of them was a different composition.

And then in the source code, there was a QCRenderer for each of the compositions, and everything was kind of put together in an OpenGL context. Well, now doing the same thing would be a lot simpler, because I have here a composition folder with all the sub-compositions, which is really convenient when you want to have artists or programmers work on separate compositions without having everyone touching that big master composition. So it gives you granularity.

It gives you an easy way to replace sub-compositions. And here, it's really a bunch of composition loaders that are configured with the proper inputs and outputs. And now, on the code side, you would simply have one QC renderer or one QC view to be able to render that instead of having a number of QC renderers and do all the logic to render them in a proper order and such. So that would simplify your life quite a bit. Let's go back to the slides, please.

Now it's time to get down and dirty with the real thing, which is writing your own patches. This has been the number one developer request, as you can imagine. The nice thing about it is that, yes, it does make sense for you to send all your requests and bugs to [email protected]; it actually works, and we listen to them. We wanted that ability to write custom patches to be simple and powerful, like the rest of Quartz Composer, which concretely means we wanted it easy to manage your set of inputs and outputs on the custom patch, easy to do image processing or OpenGL rendering, or even to provide a custom user interface to edit the internal settings of your patch. To achieve that goal, we built the custom patch mechanism completely on top of Objective-C 2.0, which as a side effect implies that those custom patches you write will only work on Mac OS X Leopard. Let me introduce you to that new class, called QCPlugIn.

This is the base class for writing your custom patches. It is itself a subclass of NSObject. And the way it's going to work is that you subclass the QCPlugIn class, you implement or override one or more methods, the ones you need, define the inputs and outputs, implement how exactly the patch executes, and so on. And then you end up with a plugin that you can put at the appropriate location on disk. It gets loaded inside Quartz Composer and turns into a custom patch.

There are a few requirements when you write custom patches. The first of them is that, obviously, you want to be a good citizen, and your QCPlugIn subclass must be able to generate multiple instances of those custom patches.

And that's really a requirement. The other one is that you've got to be able to write code that is going to work on any thread. We're not talking about reentrant code, but we're talking about code that doesn't matter if it's executed on the main thread or a background thread or those kind of things. In the same direction, you also need to make code that works if you don't have a run loop around. So for some system services, the way they work is they require a run loop to be able to post notifications and things like that. If you want to use those APIs inside one of your custom patches, you will likely need to spin off a custom thread, and then in that thread have a run loop, and then communicate with the patch.

Here are the basics of the QCPlugIn class. You subclass it, and then the first method you're likely to implement is attributes, which is called so that the system can retrieve the name, description, and so on for the UI. And if you need to, you can implement init and dealloc, which is the way the plugin instances are created and destroyed.

Now the important thing is: how do those custom patches execute? Well, there are a number of methods on the QCPlugIn class that you implement whenever needed. You will notice that there is a certain symmetry in those methods. So what happens is: when the engine starts, startExecution: is called on your plugin instance. When the engine shuts down, stopExecution: is called.

When your patch starts being used by the engine, because somebody's pulling data from it, pulling, let's say, results from it, then enableExecution: is called. And when nobody needs the data anymore, for example for several minutes before it's being used again, disableExecution: is called. And at the core of it, we have the execute method, which is called by the engine whenever results are needed from your patch.
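The symmetric lifecycle just described might be sketched as the following skeleton (MyPlugIn is a hypothetical subclass; the method signatures follow the Leopard QCPlugIn API as presented):

```objc
@implementation MyPlugIn

- (BOOL)startExecution:(id<QCPlugInContext>)context
{
    // Engine starts: allocate expensive resources here.
    return YES;
}

- (void)enableExecution:(id<QCPlugInContext>)context
{
    // Somebody started pulling results from the patch.
}

- (BOOL)execute:(id<QCPlugInContext>)context
         atTime:(NSTimeInterval)time
  withArguments:(NSDictionary *)arguments
{
    // Called whenever results are needed: read inputs, compute, write outputs.
    return YES;
}

- (void)disableExecution:(id<QCPlugInContext>)context
{
    // Nobody needs the results for now.
}

- (void)stopExecution:(id<QCPlugInContext>)context
{
    // Engine shuts down: release what startExecution: allocated.
}

@end
```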

You will have likely noticed that all those methods take a parameter that is an opaque object complying to the QCPlugInContext protocol. That protocol defines a number of methods that you can use to get information about the rendering destination, like obviously the bounds, the color space, and the OpenGL context to render to it, as well as a couple of utilities so that you can send a message to the Quartz Composer log or store some user information. The interest of that user information method is that this mutable dictionary is shared between all your instances that are running in the same Quartz Composer context. So if you need to do caching that is shared between your various instances in the same context, that's a great way: use the user info dictionary. You want to be careful: do not retain that QCPlugInContext object, and do not make calls to it outside of the scope of those execution methods. It's really a dedicated object for only being used inside the execution methods.

So the core execution, like I mentioned earlier, is all about that execute method. Remember, in Quartz Composer, we have patches. Those custom patches are going to execute in the environment of the QCPlugIn context. They get their input parameters, and you read the values, meaning reading the values from the input ports. Then you do computations in your execute method to build the result. You can also take the time into account if necessary. And finally, you output those results to the output ports of your patch, or you render to the destination.

There are two more things we need to define regarding execution. Like I mentioned earlier with the Composition Loader, those three types of patches, provider, processor, and consumer, are fairly important to the system. So you need to implement the +executionMode method on your class to define what the execution mode of your patch is. As a reminder: providers are the ones that are executed whenever their outputs are needed, but no more than once per frame. Processors are kind of the lazy patches; they're only executed when their inputs change and their outputs are needed. And the consumers are the ones pulling the data from the two others, and they're pretty much always executed.

You also need to specify the time dependency of your patch. There are three cases. Either your patch doesn't care about the time at all in its execute method, it just ignores the parameter, so the time mode is none. Or it can definitely take the time into account in its computations, so you would say the patch has a time base. Or there is kind of an intermediary case where you have some patches that just need some idle time: it could be a provider patch that's connected to some hardware, and you want to make sure it regularly pulls data from it for when it's needed by the engine. So you don't truly depend on time, but you need some idling going on. That's the special case defined by kQCPlugInTimeModeIdle, but it's pretty rare; you shouldn't use it very often.
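Declaring those two choices might look like this sketch, for a patch that processes inputs and ignores time (constant names follow the Leopard API as described):

```objc
// Execution mode: provider, processor, or consumer.
+ (QCPlugInExecutionMode)executionMode
{
    return kQCPlugInExecutionModeProcessor; // or ...Provider / ...Consumer
}

// Time dependency: none, time base, or the rare idle mode.
+ (QCPlugInTimeMode)timeMode
{
    return kQCPlugInTimeModeNone; // or ...TimeBase / ...Idle
}
```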

So we've defined how the patches execute, when they get executed, and so on. The last thing we need to look at is: how do I define my inputs and outputs? We're completely leveraging Objective-C 2.0 properties. The way it works is, if you define a property on your subclass whose name starts with input or output, and whose type is one of the supported types, then we automatically turn that into an input or output on the patch. So for instance, you can see here I have a dynamic property called inputValue1 of type double, and that will turn into an input port on your patch with the key inputValue1 and of type Number. Same thing for inputValue2, and it's pretty much the same thing for an outputResult property.

You still need to define, well, it's not mandatory, but it would be nice for the UI, some description for your inputs and outputs, so that when you're in the Quartz Composer editor you see a proper name instead of inputValue1. For that, you override the method attributesForPropertyPortWithKey:, which will get called by the system. It will pass you, for instance, inputValue1, and you can return a dictionary that contains the name, the default value, those kinds of things.
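Putting both ideas together, a sketch of the property-based ports plus their UI attributes could look like this (AdderPlugIn and the port names are illustrative):

```objc
@interface AdderPlugIn : QCPlugIn
// Name prefix + supported type = automatic port:
@property(assign) double inputValue1;  // Number input port "inputValue1"
@property(assign) double inputValue2;  // Number input port "inputValue2"
@property(assign) double outputResult; // Number output port "outputResult"
@end

@implementation AdderPlugIn

// @dynamic: Quartz Composer synthesizes the port accessors at runtime.
@dynamic inputValue1, inputValue2, outputResult;

+ (NSDictionary *)attributesForPropertyPortWithKey:(NSString *)key
{
    if ([key isEqualToString:@"inputValue1"])
        return [NSDictionary dictionaryWithObjectsAndKeys:
            @"First Value", QCPortAttributeNameKey,
            [NSNumber numberWithDouble:0.0], QCPortAttributeDefaultValueKey,
            nil];
    /* ...same idea for the other ports... */
    return nil;
}

@end
```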

How do you read and write to those property ports? Well, the beauty of Objective-C 2.0 is that it becomes completely transparent. If you want to read from one of those property ports, say inputValue1, you just access it as if it were an ivar of your class; it's that transparent. It will automatically make a round trip to the port, query the value, and return it, as if you were using an ivar. If you want to write to an output, it's the exact same mechanism.

There is a little thing to be aware of, though. It's definitely more expensive than reading and writing an ivar, because of that round trip to the input or output port. So if you have a loop, like a for loop or any kind of loop, it is definitely recommended to cache the value before the loop and use the cached value inside the loop. As a hint, what you may do in your code, instead of typing inputValue1 to read or write a property port, is self.inputValue1. It works exactly the same. But in that case, when you read your code again later, it will remind you: oh, I'm actually not reading or writing an ivar here, I'm accessing a property of myself. And therefore, if you see self.foo inside a loop, you know you should move that out of the loop.

Now, not all patches want to have a predefined number of inputs and outputs. So obviously, we provide a way for you to add custom inputs and outputs, and it's pretty straightforward. You call addInputPortWithType:forKey:withAttributes:. The type is going to be, for example, boolean, string, number, those kinds of things. Then there's the key to use for the port, as well as optional attributes, which are the same attributes as the ones returned for property ports, like name, default value, and so on. And it's the exact same mechanism to create output ports. Because those are not property ports, when you want to read and write to them, you need to go through an explicit method: it's called valueForInputKey: to read, or setValue:forOutputKey: to write to the output.
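A sketch of creating ports at runtime and talking to them explicitly (the "inputText"/"outputLength" keys are illustrative; the calls follow the API named above):

```objc
// Create a string input and a number output dynamically.
[self addInputPortWithType:QCPortTypeString
                    forKey:@"inputText"
            withAttributes:[NSDictionary dictionaryWithObject:@"Text"
                                forKey:QCPortAttributeNameKey]];
[self addOutputPortWithType:QCPortTypeNumber
                     forKey:@"outputLength"
             withAttributes:nil];

// Dynamic ports are not properties, so reads and writes are explicit:
NSString *text = [self valueForInputKey:@"inputText"];
[self setValue:[NSNumber numberWithUnsignedInteger:[text length]]
  forOutputKey:@"outputLength"];
```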

Here's a detailed list. You can see in the left column the type of port in the Quartz Composer world, for example color. You can see the type of the Objective-C 2.0 property that is required so that your property turns into a color port: in that case, that would be CGColorRef. If you create it directly by calling addInputPortWithType:, the type would be the QCPortTypeColor constant. And if you were to talk to it using setValue:forInputKey: or valueForOutputKey:, the type you would get is CGColorRef as well. You may notice here that we're taking a break from the usual Quartz Composer APIs, which are really flexible and would typically allow you to pass an NSColor, a CIColor, a CGColor, pretty much anything. Well, the idea here is that the code you write is executing inside the Quartz Composer engine. So for performance reasons, we need to make sure we minimize the number of conversions. If we allowed people to write one patch that takes an NSColor and the next one that takes a CIColor, we would need to convert all the time, which might still be OK for colors, but as you can imagine, for images it would be pretty expensive. So we have restricted it to one type of object per type of Quartz Composer port.

The last thing we need to do when we build our plug-in is package it somewhere, and QC plug-ins are simply packaged as standard Cocoa bundles. The only special thing they have is an entry in the Info.plist of the bundle that lists the subclasses of QCPlugIn contained in that bundle. Then you put that plug-in in the proper location, /Library/Graphics/Quartz Composer Plug-Ins. You cannot put them in ~/Library/Graphics; they won't get loaded from there, only from /Library/Graphics, because for security reasons we want only administrators to have the rights to install those plug-ins. Or you can also load the plug-ins directly from inside your application if, for whatever reason, you don't want to install the plug-in and make it available for everyone.
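As a sketch, that Info.plist entry might look something like this; the exact key spelling (QCPlugInClasses) and the class name are my assumptions here, not confirmed by the session:

```xml
<!-- Lists the QCPlugIn subclasses contained in this bundle -->
<key>QCPlugInClasses</key>
<array>
    <string>IPatchPlugIn</string>
</array>
```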

So you can either load a plug-in from an arbitrary path, or you can have the code mixed with your application code and just call registerPlugInClass:, the class being a subclass of QCPlugIn. One thing I did not mention here is that even if you install your custom plug-in at the proper location, that doesn't mean it's going to get loaded in all environments. In WebKit, for example, we have restrictions for security reasons as well, and we're not allowed to load any kind of patch in a web environment, so your custom plug-ins won't get loaded there. But for regular clients, from the editor to applications you write or anything else, they will get loaded.
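A sketch of the two loading options just described, using the QCPlugIn class methods named in the session (MyPlugIn and the path are placeholders):

```objc
// Option 1: load a built plug-in bundle from an arbitrary path.
[QCPlugIn loadPlugInAtPath:@"/path/to/MyPatch.plugin"];

// Option 2: the plug-in class is compiled into your application,
// so just register it directly.
[QCPlugIn registerPlugInClass:[MyPlugIn class]];
```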

So, enough theory for now, it's time to build our first patch. And what I'm going to build here is a very simple patch to demonstrate all the basic principles, something that I call the iPatch, which is kind of a marketing tool where you start with a regular name and you end up with an iName. So it's very simple. Input string, output string, pure string manipulation. OK, thank you.

So we start in Xcode and we create a new project. There are two templates for Quartz Composer plug-ins. The one we're going to use is the simplest one, called Quartz Composer Plug-In. Those templates are not in the Leopard seed that you currently have. We'll upload them on the ADC website or the WWDC website as soon as possible, likely later this week or the week after, so that you can download them and put them at the proper location so they appear in Xcode. So let's go ahead and create what we call the IPatch project.

And let's start by looking at the structure of the template. So we have on the left side the series of files that are used by default. Here are your two files that define the subclass of QCPlugIn. Then you have your usual prefix file so that it compiles faster, which includes by default the Quartz framework, which in turn includes all the Quartz Composer APIs. Then we only need two frameworks, the Quartz framework, obviously, and then the Cocoa framework. Now if we look more precisely at the IPatchPlugIn.h file, it's pretty straightforward: remember, a subclass of QCPlugIn, called IPatchPlugIn. First thing I'm going to do is add those input properties we talked about so much. So we need two properties to define the input string and the output string. The type would be NSString, and the first name would be inputString. And then let me create another one that's of type NSString as well, except it's called outputString. That's all we need to do so far. Then we can look at the .m implementation file. By default, it's pretty much a placeholder, and you just fill in your code at the proper places if you want to. You can see the structure here: every method is implemented, except it does nothing or returns the default value. So you can either delete the ones you don't need, or you can leave them as is. It won't make a difference. So let's start at the top of the file. First thing I'm going to do is change the description of the patch, which is conveniently defined as a preprocessor constant. So let's say "a patch that converts any name to an iName". OK. The attributes here are defined directly through the preprocessor constants we have there, so we don't need to touch that method. Then remember, it would be nice to define the UI attributes for the input and output ports. So let's look for the key outputString.

And if we have a match, then let's return a dictionary that defines a UI name for that output port. So the key would be QCPortAttributeNameKey, and the name, let's say, "iName". Now, if the key is inputString, we're going to do a little more than return a name. We're also going to define the default value to have on that port when your patch instance is created.

So let's see, dictionaryWithObjectsAndKeys:. And my first object is going to be "Name"; it's going to be the official name of our port in the UI. Then we're going to have some default value. And that's pretty much it. And the default value, I'm going to define that to be, conveniently, "Pod". All right.

Execution mode: that patch is a processor, just like data in, data out, nothing to change. This is what we have by default, processor. Time mode, nothing to change at all; our patch doesn't depend on the time. We don't have anything to do in init or dealloc. We don't need global resources or anything like that, so we can skip it. We don't care to know when the engine is starting, stopping, or enabling execution. So we don't touch any of those methods; we leave them as is, doing nothing. And the only one where we need to do something is the execute method. So remember, in execute, you take your inputs, process them, produce some result on the outputs. So our output is outputString, and we can just write to it. And we're going to have that "i" thing happen:

the string "i", stringByAppendingString: the input string. And to make it even better, we're going to capitalize the input string. OK. Now we are almost ready to build. Let me tell you first that by default, the active executable is the Quartz Composer editing environment. So that's pretty nice; you don't even need to add it. You can build in debug and release, that's all configured. The template also provides, as a convenience, a secondary target called Build & Copy. Typically, if you don't use that target, you build your patch, and then it's in the build directory. But it's not going to get loaded by Quartz Composer because it's not located in the appropriate Quartz Composer Plug-Ins folder. So at that point, you would either manually copy it there, or you would put a symlink, something like that. To avoid that annoying operation, that target simply calls the normal target and afterwards runs a script phase that copies the built result directly into the Quartz Composer Plug-Ins folder. So let's build.
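Putting the pieces together, the whole iPatch plug-in described above might be sketched like this (a minimal reconstruction from the talk, not the verbatim demo code):

```objc
#import <Quartz/Quartz.h>

@interface IPatchPlugIn : QCPlugIn
@property(assign) NSString *inputString;
@property(assign) NSString *outputString;
@end

@implementation IPatchPlugIn
// @dynamic tells the engine to synthesize these as ports
@dynamic inputString, outputString;

- (BOOL)execute:(id<QCPlugInContext>)context
         atTime:(NSTimeInterval)time
  withArguments:(NSDictionary *)arguments
{
    // "pod" becomes "iPod", "book" becomes "iBook", and so on
    self.outputString =
        [@"i" stringByAppendingString:[self.inputString capitalizedString]];
    return YES;
}
@end
```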

OK, now I can look into the Quartz Composer Plug-Ins folder, which is located in /Library/Graphics. And my IPatch is there. So let's launch-- well, actually, I need to quit it and relaunch. There we go. And I already have a composition to host it. Here we go, a new patch. Just zoom on that. What this composition does is simply use an Image With String patch to generate an image with some string on it and display it on a billboard. Very straightforward. And now I can look for my IPatch. Whoops, not too fast. Create an instance of it, connect it there. You can see the names are proper: iName, Name. The default value is-- let's wait for the tooltip-- "Pod". And so the result is iPod. It definitely works. So I can type "book", and you get iBook. And you can even invent completely new products, like "chair" or whatever, and get iChair. So it's very simple to implement that plug-in. And we have almost no code, thanks to Objective-C 2.0. Back to the slides, please.

All right, so that was a very basic demo, just to make sure you understand the principles there. Now we're going to go a bit further and do some interesting OpenGL rendering. So what are the conditions here? Well, first of all, your QCPlugIn subclass is going to have to be of type consumer, because consumers are the only ones which can render to the destination. If you have a processor or provider, that won't work. Now, the same OpenGL good-citizenship rules also apply when you write custom patches, which means use CGL macros instead of touching the current OpenGL context, and make sure you save and restore all your OpenGL state, except the state that is part of GL_CURRENT_BIT. That basically means the current vertex position, current vertex color, current vertex texture coordinates, because everyone is changing them anyway, so there is no point in saving or restoring them.

So here's what the execute method of your patch that does OpenGL rendering would look like. Remember, you include at the top OpenGL/CGLMacros.h to get the CGL macros. And then the first thing we do in execute is retrieve the OpenGL context to render with and assign it to a variable with the name cgl_ctx. We simply do that by calling the CGLContextObj method on the context parameter. For extra safety here, you can always check: it should never be NULL, but you can always do, if it's NULL, return NO and fail execution. Then you insert your GL code. Exact same thing as the example for the QCRenderer and the QCView: save the current state, change the state you want to the values you're interested in, then perform your OpenGL rendering, restore the state, check and handle errors. Really straightforward.
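A skeleton of that execute method might look like the following; the actual drawing and the logMessage: error handling are placeholders for your own code:

```objc
#import <OpenGL/CGLMacros.h>

- (BOOL)execute:(id<QCPlugInContext>)context
         atTime:(NSTimeInterval)time
  withArguments:(NSDictionary *)arguments
{
    // The CGL macros pick up this local variable instead of
    // the thread's current OpenGL context.
    CGLContextObj cgl_ctx = [context CGLContextObj];
    if (cgl_ctx == NULL)
        return NO;

    // Save the state you are about to change...
    glPushAttrib(GL_ENABLE_BIT);
    glEnable(GL_BLEND);

    /* ...your OpenGL rendering here... */

    // ...then restore it and check for errors.
    glPopAttrib();
    GLenum error = glGetError();
    if (error)
        [context logMessage:@"OpenGL error %04X", error];
    return YES;
}
```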

Well, the next question becomes: OK, I can do OpenGL rendering, but what about images? I would like to draw images, obviously, or get images from the system. So the way you use images within a QCPlugIn is by dealing with opaque objects. There are two opaque objects. The first one conforms to a protocol that is QCPlugInInputImageSource. And the second one is QCPlugInOutputImageProvider. The reason we use protocols there is because we have numerous image types in the OS, you know, NSImage, NSBitmapImageRep, CGImageRef, and so on. And each of them is really suited to a given usage. And Quartz Composer is a really generic system, so none of those was really fitting. By picking one of them, we would have a number of restrictions to deal with. Or by, on the contrary, allowing any of them, we would have to deal with, again, impedance mismatches, which can be really expensive. So we solved that problem with protocols. And the nice thing we get as a side effect is that we can defer, through those protocols, as you're going to see in a minute, all computation until it's really needed. So that improves performance. First thing, input images.

Like I said earlier, those are opaque objects that simply conform to QCPlugInInputImageSource. The way you would create input image ports is either through Objective-C 2.0 properties, where the type would be id, because it's a generic object, followed by the protocol, or through dynamic ports, where you can just create them by calling [self addInputPortWithType:] with the type QCPortTypeImage.

Now that you have those ports and you get images through them, how do you access the pixels to do something with them? Well, we use what we call representations. There are two types of representation. The first one is on the CPU. The idea here is that when you want to access the contents of the image, the actual pixels, on the CPU, you call lockBufferRepresentationWithPixelFormat: on the opaque object, and you specify one of the supported formats. Right now, that can be ARGB 8, RGBA float for 32-bit floats, or intensity 8, or intensity float. And in case of success, it will return YES, and it should theoretically always be successful, unless you already locked the image in a different format, obviously. From that point on, you have the pixels of the image accessible in a buffer on the CPU. So you can call regular methods like bufferPixelsWide and bufferPixelsHigh, and the usual base address and bytes per row to read from the pixels. Finally, when you're done, you simply call unlockBufferRepresentation to release that representation. You cannot have multiple representations at a time; you need to unlock the current one and lock a new one. The pixel buffer is read-only, so it's very important you don't touch that memory. Only for reading.
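A sketch of that CPU locking pattern; the colorSpace: and forBounds: parameters follow the shipping QCPlugIn API, which may differ slightly from the seed described in the talk, and "inputImage" is an assumed image input property:

```objc
id<QCPlugInInputImageSource> image = self.inputImage;
if ([image lockBufferRepresentationWithPixelFormat:QCPlugInPixelFormatARGB8
                                        colorSpace:[image imageColorSpace]
                                         forBounds:[image imageBounds]]) {
    NSUInteger width    = [image bufferPixelsWide];
    NSUInteger height   = [image bufferPixelsHigh];
    NSUInteger rowBytes = [image bufferBytesPerRow];
    const void *pixels  = [image bufferBaseAddress];   // read-only!

    /* ...read the pixels here... */

    [image unlockBufferRepresentation];
}
```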

The second case is accessing the input images on the GPU. It's pretty much a similar mechanism, except instead of locking a pixel buffer in CPU memory, you're going to lock a texture. And you can specify the target, which can be 2D or rectangle. If you specify 2D and the actual image does not have power-of-two dimensions, it will be automatically resized to fit. So you will get your 2D texture, which will allow you to do repeat and various operations that can only be done on 2D textures, at the expense of a potential quality loss because of the rescaling that happened to get a 2D texture out of a non-power-of-two original image. Once it is locked as a texture, it's the usual stuff. You can get the width, the height, the target again, as well as the name, obviously, and also whether the contents inside the texture are vertically flipped or not. As a convenience, we provide a texture matrix. The idea here is that if you get a rectangle texture, you have to express the texture coordinates in pixels, versus the regular, let's say, normalized units that you use for 1D, 2D, and 3D textures. Also, if the texture's contents are flipped, you would need to flip your texture coordinates to take that into account. Well, you can do all that transparently and always deal with unflipped, normalized texture coordinates just by loading a texture matrix on the OpenGL texture matrix stack. Then your coordinates are sent through that matrix, which will do the proper transform before producing the real coordinates to use for that texture. You don't have to use that mechanism, but it simplifies your life if you do. So we provide a call that will return a texture matrix that you can directly load into OpenGL. If there is no need for a texture matrix, this call will return NULL. And the same way you cannot modify the contents of the pixel buffer on the CPU, you cannot modify the contents of the texture.
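A sketch of the GPU path; it assumes a CGLContextObj local named cgl_ctx is in scope for the CGL macros, and that "inputImage" is an image input property:

```objc
id<QCPlugInInputImageSource> image = self.inputImage;
if ([image lockTextureRepresentationWithColorSpace:[image imageColorSpace]
                                         forBounds:[image imageBounds]]) {
    GLenum target = [image textureTarget];   // GL_TEXTURE_2D or rectangle
    glEnable(target);
    glBindTexture(target, [image textureName]);

    /* ...load the texture matrix if one is provided, then draw
       using normalized, unflipped texture coordinates... */

    glBindTexture(target, 0);
    glDisable(target);
    [image unlockTextureRepresentation];
}
```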

Then the third case to look at is producing images out of the plug-in. So once again, we have an opaque object, which is going to have to conform to QCPlugInOutputImageProvider. But you're the one creating that object, so you can return any object as long as it conforms to that protocol. The way you would create output ports for output images is the exact same thing as for input images: Objective-C 2.0 properties, or dynamic ports.

So let's look a bit closer at that protocol, the output image provider. Your opaque object that you build and return, which conforms to that protocol, is going to be queried by the engine whenever necessary. And it's going to be responsible for providing pixels for the image on the CPU or on the GPU. So it's the inverse of the mechanism for getting images into your plug-in. There are two methods that are generic on that protocol, which just give the dimensions of your image, pixels wide and pixels high.

Then once again, two cases, CPU and GPU. In the case of the CPU, your provider object is called, and the engine is going to ask: what are your supported buffer pixel formats? So if you're able to draw your image into, let's say, an ARGB 8 format, you will return ARGB 8. If you can support more than one format, you return more than one in an NSArray, and you can order them in order of preference. If you don't support rendering on the CPU, then just return an empty array or nil. If that condition passes, then when the engine actually needs the pixels, it is going to call renderToBuffer:, giving you a base address, a bytes-per-row value, and the actual pixel format of that buffer. And at that point, you just access the buffer, write to it, and put your pixels in there.
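A minimal sketch of a CPU-only provider, assuming a hypothetical object that just fills a fixed-size image with mid-gray (the class name and fill logic are invented for illustration):

```objc
@interface GrayProvider : NSObject <QCPlugInOutputImageProvider>
@end

@implementation GrayProvider

- (NSUInteger)imagePixelsWide { return 64; }
- (NSUInteger)imagePixelsHigh { return 64; }

// Tell the engine which CPU pixel formats we can render to,
// in order of preference.
- (NSArray *)supportedBufferPixelFormats
{
    return [NSArray arrayWithObject:QCPlugInPixelFormatARGB8];
}

- (BOOL)renderToBuffer:(void *)baseAddress
       withBytesPerRow:(NSUInteger)rowBytes
           pixelFormat:(NSString *)format
             forBounds:(NSRect)bounds
{
    // Fill every row with mid-gray ARGB pixels.
    for (NSUInteger y = 0; y < 64; ++y)
        memset((char *)baseAddress + y * rowBytes, 0x80, 64 * 4);
    return YES;
}

@end
```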

In the case of producing pixels for the image on a GPU, it's pretty similar. Except in that case, instead of drawing to a buffer in CPU, you're going to draw to a drawable, an OpenGL drawable, on the GPU indirectly through an OpenGL context. So the first thing is, what kind of drawable pixel formats do you support? So because it goes through OpenGL rendering, you can always render. It doesn't really actually matter what the drawable pixel format is, except for precision. Because if your image is, say, a floating point image originally-- that's floating point data you have-- well, you might as well say, I'd like to render into a floating point buffer on the GPU, if that's possible. So that's the point of that method, supported drawable pixel format.

Because we're in the OpenGL world, to be able to render, you might also require a number of extensions to be supported by the OpenGL context. So you can check for those and return YES if you can work with that context. And finally, if the above conditions are met, renderWithCGLContext: is called. The context is ready to be used with CGL macros, and you just draw: the viewport is already set, the projection and modelview matrices are set to identity, you just render. Remember, very important: always preserve the state you change, and at that point you're done. You don't have to implement both the CPU and GPU methods, but obviously, you need to implement at least one of them. So let's do a demo of OpenGL rendering.

In that case, I'm not going to build a patch, because there is a bit more code, especially GL code. So I'm going to show you a project that's already written. OK, so this is our usual plugin project. The difference here is the inputs and outputs and the execution method compared to the previous one we wrote.

So you can see here dynamic properties that define the inputs of our plug-in. I should mention first that our plug-in is just drawing a square with an image in it and colorizing the image. So the inputs we need are x and y for the position of that square on screen, as well as the color to modulate the image with. And finally, we need an image. Now let's look at the actual implementation file. By default, the Quartz Composer project template in Xcode already has OpenGL/CGLMacros.h at the top, so you don't need to bother with that in case you have forgotten. Same thing here: we define a nice name and a description that go into the attributes. attributesForPropertyPortWithKey:, exact same thing as we saw before: check for each of those keys and return an appropriate set of attributes. Execution mode, that's the difference: remember, we have to be a consumer to be able to render. We don't depend on time, so the time mode is still None. Then, to make the code clearer, I deleted all the methods that were not actually implemented. So we're only left with one, which is the execute method. So let's rapidly look at it. The first thing we do is retrieve the OpenGL context to draw with. And for extra safety, we check to make sure it's not NULL, because I should have mentioned before that if you use CGL macros and the CGL context is NULL, you will crash. So you want to make sure it's not NULL.

Here, for convenience, remember that when you access the image many, many times, you might as well cache it in a local variable rather than read from the input property. So that's what I'm doing here: I am copying the image on the input into a local image variable. And now it's time to get to the real thing. We have an image as an input, and we want to get an OpenGL texture out of it. If this image exists and if we can successfully get a texture representation of type rectangle texture, then we just put the texture name inside another local variable. Otherwise, we put zero.

Then we set up our ModelView to do the translation. And remember, we have to save the previous ModelView matrix. If a texture does exist, if we do have a texture, then we configure the texture unit in OpenGL to enable texturing, to use the texture corresponding to the texture name. And if we need a texture matrix, we just load it on the texture matrix stack.

We're almost there. One more configuration step is to set the current color. Remember, the input color will contain a CGColorRef object, so we can get the components out of it, and it's guaranteed to be RGBA components. So we retrieve them, and we can directly set the current color, which we don't need to save because it's one of the few pieces of state that are part of GL_CURRENT_BIT. Everything is set up. At that point, we're ready to draw. So we draw a quad, specifying the four vertices of the quad as well as the texture coordinates for each of those. Because we actually load the texture matrix that is provided to us, we don't need to deal with rectangle texture coordinates and specify the coordinates in pixels or anything. We just specify normalized coordinates for the texture, like 0 and 1, and it just works. So if you were to change the constant above here to be 2D instead of rectangle, you would just replace it there and it would still work; you wouldn't have to change that code here. Finally, our quad is drawn and we do the inverse operation, restoring the previous state. So if there was a texture name, we unload the texture matrix, we restore the original state of the texture unit by disabling texturing, we restore the modelview matrix, and finally we check for OpenGL errors. And don't forget to unlock the texture representation at the end. So it might seem a bit complex, but all of that is OpenGL code. The real QC code is very simple. Let's build the project.

OK, let's run. Yeah, should be installed. Just double check. Yes, it is. Create a new document. Look for our brand new square patch. Create an instance of it. As you can see, x, y color and image inputs. So let's take an image. Okay, and I can obviously change the color and the position as well.

So that was a simple demo of how you would implement OpenGL rendering, where obviously you would start from that and then replace it with your own OpenGL code, however complex it is. Let's go to the-- oh, before we go back to the slides, I wanted to show you a more complex plug-in that I wrote the other day. It's a height field plug-in that does much more complex GL operations.

But the QC side of things is still very simple. Let me open that composition here. Well, let's look at the composition first, and then I'll explain what the plug-in does. So we start with the video input. We get the image out of that. Then there is a custom Core Image kernel that computes the luminosity of that image. So you get basically a grayscale image representing the luminosity. Then we have another image kernel that is going to take the luminosity and build a new image where the RGBA components, instead of representing colors, represent vertices. So R maps to X, G maps to Y, and so on. So you get an image where the color components are actually x, y, z, w coordinates of vertices. So we're building a mesh of vertices stored in an image, in a way. And what happens is we take that special type of image, and it goes inside-- well, here I have a trackball so that I can rotate.

But inside, it's basically passed to the custom patch I was talking about, the height field. And inside, you have some fairly complex GL code that takes that image, copies it into an FBO, then goes from the FBO to a VBO, and basically processes a mesh of vertices. So the whole operation, from processing the captured video frames, to building the vertices in real time, to generating the mesh, is completely done on the GPU. It doesn't leave the GPU. So that's fairly advanced OpenGL. And that is to show you that you can really leverage the Quartz Composer environment to do complex GL code without having to deal with all the other stuff, like: now I need to write some code to capture the video input, I need to write some code to import images and create textures. You can leverage all those facilities provided by Quartz Composer to do fairly advanced GL. So you can kind of see it here; I need to rotate it, but you might see my face in the mesh. There we go.

So it's a mesh of about 16,000 vertices and a bunch of triangles as well. Here you see on the bottom left the result of the first core image kernel that is computing that luminosity. And here you can see that special image where RGBA are actually vertex positions kind of normalized in a 0 to 1, 0 to 1 space in x, y, and z. All right.

To learn about advanced OpenGL techniques like this one, I would definitely recommend you go tomorrow to the GLSL session. Now it's time to look at the final part of writing custom plug-ins, which is using internal settings. The idea behind that is that you cannot always have a plug-in where all the inputs and parameters needed by that plug-in are defined through input ports. For instance, you write a custom plug-in that is going to access some hardware device, and you might have a number of parameters for the configuration of that hardware device or the way you communicate with it. And you simply don't want to have those settings available on the inputs, because you don't want them to be animatable, or it might just not make any sense at all. In that case, you have what we call internal settings for the plug-in. And the way people are going to edit those is by going through a custom interface which appears in the inspector of the editor. So in the schematic drawing here, you can see your QCPlugIn, which gets some values from input ports and produces results, while the internal settings are set from the inspector, and the communication between the plug-in and the inspector happens through key-value coding.

So how exactly do we implement internal settings? Typically, you would define ivars in your class, or even better, Objective-C 2.0 properties. Then you make sure those are accessible through key-value coding, remember, setValue:forKey:. You also need to list the keys corresponding to those internal settings by implementing the method plugInKeys, which returns an array containing those keys. We need that list of keys because then we can do a number of automatic things. For now, we do automatic serialization of the values, but in the future we could also do things like automatic undo and redo.

Here's an example of internal settings. It's the same subclass that I had earlier in my slides, except I added two Objective-C 2.0 properties, which you can see at the bottom, to store my internal settings. In that case, I have a system color; it's completely arbitrary for the sake of the example. So an NSColor kind of thing. And another one that would be a completely opaque object; let's call that the system configuration. We need to declare the corresponding keys, systemColor and systemConfiguration, when implementing the plugInKeys method.

Now how does serialization work? Well, remember, you have your plug-in, you have your internal settings, and obviously when the user creates an instance of your plug-in and then saves the composition file, you want those internal settings to be saved, and you want them to be restored when the composition file is loaded. Well, we do that automatically for you as long as the object you return for that internal setting conforms to NSCoding. If it doesn't, you will have to override the serializedValueForKey: and setSerializedValue:forKey: methods, which basically are used to convert an arbitrary object into a value we can store in the composition file. And that has to be either nil or a plist class, like NSString, NSNumber, and so on.

Here's an example of serialization, still leveraging our previous class interface. Remember, the first internal setting we had was the system color. Because it's an NSColor type of object and it already conforms to NSCoding, there's nothing to do. However, for the second one, that special configuration object, completely opaque, well, it doesn't conform to NSCoding, so we have to handle the serialization ourselves. So we override serializedValueForKey:, check for the appropriate key, and for the sake of the example here, I'm assuming that this object can return some kind of binary data representation of itself. So we just return that. It's an NSData, so it's plist-compliant; we're fine. Then the other way around, when the composition file is loaded, setSerializedValue:forKey: is called. We need to override that, check for that custom key, and do the opposite operation: we get that data object, recreate a configuration object from it, and set it.
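A sketch of those two overrides; the SystemConfiguration class and its -data / -initWithData: methods are hypothetical placeholders for your own opaque settings object:

```objc
- (id)serializedValueForKey:(NSString *)key
{
    if ([key isEqualToString:@"systemConfiguration"])
        return [self.systemConfiguration data];   // NSData is plist-compliant
    return [super serializedValueForKey:key];
}

- (void)setSerializedValue:(id)value forKey:(NSString *)key
{
    if ([key isEqualToString:@"systemConfiguration"])
        // Recreate the opaque object from the stored NSData
        self.systemConfiguration =
            [[[SystemConfiguration alloc] initWithData:value] autorelease];
    else
        [super setSerializedValue:value forKey:key];
}
```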

Now the user interface, how does that work? Well, you are going to create an NSView with a number of controls inside to edit those internal settings. And it's displayed in the settings pane of the inspector. This NSView is going to be managed through a QCPlugInViewController instance. It's your usual model-view-controller mechanism in Cocoa: the plug-in instance is the model, your view full of controls is the view, and the QCPlugInViewController in the middle acts as the intermediary between the two. It also handles the loading of the nib file and those kinds of things.

The way you return the controller is by implementing a method which has to return a brand-new controller object for your plug-in instance, and it's called createViewController. Typically, you would not subclass QCPlugInViewController; you only need to do that if you want to do some fairly advanced things. For now, I'm assuming the simple case: just return an instance initialized with self, your own plug-in instance, as well as the name of the nib file to use. Now the nib file, how is that going to work?
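That method might be sketched like this, assuming a nib file named "SimpleTextSettings" in the plug-in bundle (the nib name is an illustrative assumption):

```objc
- (QCPlugInViewController *)createViewController
{
    // The controller loads the nib, becomes its file's owner, and
    // bridges the UI controls to the plug-in through KVC.
    return [[QCPlugInViewController alloc] initWithPlugIn:self
                                              viewNibName:@"SimpleTextSettings"];
}
```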

Well, we use Cocoa bindings to simplify our life. So whenever possible, your control is simply going to be Cocoa-bound to the plug-in settings, using plugin.xxx, where xxx is simply the key corresponding to the internal setting. If you want to use the more traditional target-action communication model, it's completely doable; you would just have to subclass QCPlugInViewController. So practically, how does that work? We start with our QCPlugIn.

It returns a QCPlugInViewController, which is going to communicate through KVC to set and read the internal settings. That plug-in view controller loads a nib file, and it makes itself the file's owner of the nib file. Then the plug-in view controller, as the file's owner, has a view outlet, which you need to connect to a view in your nib file that contains all the controls. And finally, all those controls use Cocoa bindings bound to the QCPlugInViewController. And that's pretty much it. So in this final demo, I'm going to quickly build some kind of simple text thing where you have the ability to enter a long text in the inspector of the Quartz Composer editor. And then it's just produced as an output.

OK. New project. In this case, I'm going to use the other template, which is the one that already provides support for all the UI, with the nib files and everything. And let's call that SimpleText. All right, let's set up the project. We only need one output property, which will be the string that we actually produce, so outputString. And my internal setting is going to be an ivar-- sorry, an Objective-C 2.0 property, so it's stored as an ivar automatically in the class. Let's call that text, and it's an NSString. Now I can go to the implementation file. So, SimpleText, I'm going to change that here. Usual attributes, so let's do: if the key...

All right, a default name, and I'm just going to retrieve-- I forgot to copy that. All right, NameKey. Execution mode: processor, nothing to do here. Time mode: still None. So remember, we have an internal variable, an internal setting that is storing our text. So let's give it some default value.

This internal setting is stored using an Objective-C 2.0 property, and when your instance is deallocated, you need to set it to nil. If you don't do that, then it will basically leak. And you cannot release the text directly because it's a property, so you just set it to nil, which will replace the property value with nil and release the previous string that was stored in there. plugInKeys: remember, we have to list our keys here. We have only one: array with object...

text. We don't need to change the serialization because NSString conforms to NSCoding, so nothing to do here. CreateViewController is already implemented for us, nothing to do. And the only method we actually care about is execute, where we set the output string. So we do it very simply: we take the text that we have internally and we copy it to the output string. We're almost done here. The last step is to edit the nib file. The template provides a nib file that's already set up with a view you can just put your controls on. The file's owner is already configured to be of class QCPlugInViewController, and its view outlet is already connected to the NSView. So everything's pretty much ready. Let's add an NSTextView.
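The execute step described above is a one-liner; a sketch:

```objc
// Each time the patch executes, copy the internal setting to the
// output port. Returning YES indicates execution succeeded.
- (BOOL)execute:(id<QCPlugInContext>)context
         atTime:(NSTimeInterval)time
  withArguments:(NSDictionary *)arguments
{
    self.outputString = self.text;
    return YES;
}
```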

Configure the NSTextView to not have multiple fonts. And let's see, small scrollers. There we go. And finally, we're going to bind the content of the NSTextView to the internal state of our plug-in. So we make sure we select the actual NSTextView inside the NSScrollView. Then we go to Bindings. The value here is going to be an NSString if the NSTextView does not have multiple fonts enabled, which is exactly what we want. So we bind to the file's owner, which turns out to be our QCPlugInViewController. Then the key path is going to be plugIn; at that point, we retrieve the plug-in instance. And remember, the internal setting is text. So we just bound the contents of that NSTextView to our internal variable. And the last thing we need to do here is make sure it updates continuously. So let's save that and build it.

All good. Make sure it's in there. It is: simple text. Let me hide that. Hide this. Quit this. OK. And now I have a patch that's conveniently pre-populated. So this patch, sorry, this composition, has an Image With String patch and applies some bloom effect on the text before displaying it on a billboard. So you get a nice kind of look. And let's create an instance of our simple text. Oops. Oh, I might have to quit it. Yeah, it's there. Simple text.

There we go. It was unloaded, I guess. Connect it to the string input. So now I have my initial value, which, if you remember, was Bonjour. Where it becomes interesting is that I can go show the inspector, go to the Settings pane, and now I have my text view with the text. And I can type some text here, and it's all real time, so it does a nice thing with the effects. OK.

I can save the composition, quit, and reopen it. And the text is there, which shows that serialization worked perfectly fine. Back to slides, please. Thanks. So that was all for the advanced Quartz Composer session, pretty intensive content. Remember that there are many advanced features you can leverage out of Quartz Composer. Quartz Composer is definitely more than just a tool to do pretty motion graphics. You can do pretty professional work with it, especially if you leverage the advanced features of the QC view or use the QC renderer to integrate into your own imaging pipelines. And when you reach the limits of Quartz Composer, the Composition Loader will allow you to build pretty complex compositions by dividing them into subcompositions and a master one. Or you can even write your own patches. And as we've seen, it's not that difficult. The basics of writing your own patches are really simple: you just have to put your code for the actual patch in the pre-populated templates, and you're good to go. Thanks for coming.

I don't think we have time for Q&A, obviously. The one thing worth noting is that we have a number of labs regarding Quartz Composer. There will be Quartz Composer folks at the lab tonight, starting at 6:00. Our official Quartz Composer lab is tomorrow at 3:30. So there will be a number of us there. And finally, I would like to remind you that we have a great developer mailing list called [email protected] to which you're welcome to subscribe. Thanks.