
WWDC07 • Session 421

Using Quartz Composer in Your Application

Graphics and Imaging • 1:10:17

Quartz Composer is a rich visual programming environment that allows you to rapidly utilize the best of Mac OS X graphics technologies. Learn how Quartz Composer compositions can be used directly within your application. See how to use Cocoa bindings to enhance your application. Find out how you can extend Quartz Composer to create powerful, interactive visualizations.

Speakers: Pierre-Olivier Latour, Kevin Quennesson

Unlisted on Apple Developer site

Transcript

This transcript has potential transcription errors. We are working on an improved version.

Good morning everyone, welcome to the last Quartz Composer session of WWDC 2007. My name's Pierre-Olivier Latour, I am the Engineering Manager for Quartz Composer, and today during this session we're going to look at using Quartz Composer in your applications through concrete integration case studies. So, what you will learn during this session: first of all we'll do a rapid overview of what's new for developers in Quartz Composer on Leopard, and then we'll have a look at those case studies I just mentioned.

So there's going to be one where we set up some simple data visualization directly on the desktop, another one where we look at a new way to visualize movies, and a final one where we look at how we built that Quartz Composer Visualizer which you can see in the graphics and imaging lab downstairs. And through those three examples we're obviously going to build some custom patches, which is the big new thing for you guys.

We're going to make some assumptions during this session, three of them basically: that you have already played with the Quartz Composer editor application, that you have also played a little with the basic Quartz Composer APIs, for example QCView or QCRenderer, and also that you are familiar with the basics of Objective-C 2.0 properties, because we rely on them for writing custom plug-ins, custom patches. So first of all, a quick tour of all the new features that affect developers.

What is Quartz Composer in Leopard? Well, it's truly a mature technology now. The first version of Quartz Composer was released in Tiger, and now, two years later, a lot of work has gone in. One of the first things you will notice is that we've done a lot of work to improve the image processing, which is usually at the heart of Quartz Composer. Concretely that means the ability to handle very large images and high-precision data, floating point for instance, very efficiently, as well as full color matching all the way through, correct on everything. So it's a big improvement over what you had in Tiger.

Then, like I mentioned, there's the ability for you to write custom patches. We also have a system-wide composition repository, which we mentioned during the first couple of Quartz Composer sessions, and a bunch of new APIs available, and we're only going to look at a few of them today. Now when it comes to the development environment itself, the editor has been redesigned, with lots of changes under the hood, and I'm sure you noticed that if you went to yesterday's session. When it comes to the patches themselves, we also have a lot of new patches and many changes to the already existing ones.

So, rapidly: in the new editor, templates help you get started faster; it has a lot of new features in it; you can now edit compositions right in the workspace, with the parameters inline. You can have timelines, and we improved the way of creating patches using that floating window there.

A lot of changes, so I highly encourage you to look at it. Something that wasn't mentioned so far is the ability to edit metadata inside the composition: you can now attach any kind of regular plist-like metadata (dictionaries, strings and so forth) in the file, and retrieve them from your application by using the attributes method on the QCView or QCRenderer and so on.

So it's pretty useful. And to integrate better into your workflow when you develop compositions, especially if you have source version control in your environment, where you wouldn't really be able to diff compositions directly because they are essentially binary files, we now have a nice composition comparator built inside the editor.

When it comes to programming inside the Quartz Composer environment, the Core Image Kernel patch and the JavaScript patch have been redesigned completely; they are a lot more powerful, and I invite you to watch yesterday's session if you want more information about those. Also, to mention on that slide, we have a new GLSL patch as well for programming.

An interesting new patch for developers is the Composition Loader, new in Leopard. What it does is allow you to load a composition from a file on disk into another composition that's currently running. So as you can imagine, it's pretty powerful, because instead of having one big monolithic composition with absolutely everything in it, you can split it into sub-pieces that are easier to manage or to edit, that you can easily modify without risking breaking the whole thing. So using the Composition Loader you can configure things completely the way you want, and it's really powerful.

I also wanted to talk about virtual patches, something new in Leopard. It's basically a way of writing custom patches without having to code in Xcode. So what exactly does that mean? You start by creating your regular composition file and you publish inputs and outputs at the top level, so nothing changes so far.

Then what you do with that file is drop it into /Library/Graphics/Quartz Composer Patches or into ~/Library/Graphics/Quartz Composer Patches. Then you relaunch the editor, or actually any application that uses Quartz Composer, so that the change is taken into account, and now your composition file is available as a standalone patch.

And you can just drag and drop it into the workspace like any other native patch and use it. It's not a macro patch: if you double-click on it you won't be able to edit it, because it's really a weak link between the file that's on disk and its inclusion inside the host composition. If you change the file on disk, the changes are reflected. And it behaves exactly like a plug-in: if you open a composition and this virtual patch is not in place, then you will get an error and you won't be able to properly load the composition.

Briefly about the composition repository: it's really this idea of Quartz Composer becoming the standard way to express motion graphics and visual effects on the platform, so now we have a nice area where we can put all those composition files and share them between applications. This relies on the fact that we define a number of protocols, which are predefined sets of inputs and outputs for compositions.

Any composition file you drop in there will automatically be picked up by compatible applications, and we ship a few out of the box in the system, but you can write a lot of them yourself. Some example protocols are filters, animations, transitions, those kinds of things. And it comes with an API, as you can guess.

Now let's look at the integration APIs in Quartz Composer, so that you can use Quartz Composer in your application. There are essentially four of them. The first one is new for Leopard and is a new concept, QCComposition; we're going to look at that in a minute.

We also have another new class, QCCompositionLayer, which allows you to seamlessly integrate composition content inside Core Animation layer trees. And finally, the two APIs that were already in Tiger, QCView and QCRenderer, the first one being for integrating nicely and easily inside Interface Builder, for instance with Cocoa bindings and so forth.

And the other one, QCRenderer, really being the lower-level API. So let's look at QCComposition: what exactly is that new beast? It's an abstract class that represents a composition, and we realized we needed to introduce that new concept because we move compositions around between the APIs and there was no abstraction for that prior to Leopard.

You were dealing with files on disk, and the introduction of the repository especially made that abstraction required. So how do you get or create those composition objects? Well, the first possibility is that you get them from the repository: you can get a QCComposition instance if you know the identifier that specifies the composition you want in the repository, or you can ask the repository for all the compositions that comply with a set of protocols and attributes, and it will return you an array of QCComposition objects.

The other possibility is to create them directly, for instance from a file on disk or from a data blob. The data case is interesting for developers who want to protect their compositions, because remember they're completely open files. So maybe you want to encrypt them because you don't want your users or competitors to look at how a composition is designed.

Well, you can just encrypt them, and at the end of the day what you need to do is decrypt them so that the data blob is exactly the content of the composition file as it would be on disk, and you can pass that to compositionWithData:. You can also use that to transmit compositions over the network, for instance. And finally, you have some querying abilities on the QCComposition object; you can get the protocols, the attributes, and the list of inputs and outputs.
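As a rough sketch of those creation paths, assuming the Leopard QuartzComposer framework discussed here (the identifier string and file paths are placeholders for illustration):

```objc
#import <Quartz/Quartz.h>

static void ExploreCompositions(void)
{
    // Getting compositions from the system-wide repository.
    QCCompositionRepository *repo = [QCCompositionRepository sharedCompositionRepository];

    // All compositions conforming to the transition protocol.
    NSArray *transitions = [repo compositionsWithProtocols:
                               [NSArray arrayWithObject:QCCompositionProtocolGraphicTransition]
                                             andAttributes:nil];

    // A specific composition, if you already know its identifier (hypothetical value).
    QCComposition *one = [repo compositionWithIdentifier:@"/some-identifier"];

    // Creating compositions directly, e.g. from a file or from a decrypted data blob.
    QCComposition *fromFile = [QCComposition compositionWithFile:@"/path/to/MyComposition.qtz"];
    NSData *decryptedData = [NSData dataWithContentsOfFile:@"/path/to/decrypted.qtz"]; // stand-in for your decryption step
    QCComposition *fromData = [QCComposition compositionWithData:decryptedData];

    // Querying a composition object.
    NSLog(@"protocols: %@ inputs: %@ outputs: %@",
          [one protocols], [one inputKeys], [one outputKeys]);

    (void)transitions; (void)fromFile; (void)fromData;
}
```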

The other change we did regarding the API is that we now have QCView, QCRenderer, and QCCompositionLayer, and they're all composition renderers. So in order to simplify the APIs, we factored out all the common methods between them, and they all conform to the QCCompositionRenderer protocol. You will find in there, as usual, the ability to retrieve the attributes of the composition that's currently loaded on the renderer, the ability to read and set the inputs, and the ability to retrieve the output results. And here I'd like to point out that we have a new API in Leopard, as you can see on the second line, valueForOutputKey:ofType:, so now you're not limited to the default object type that you get from the results of the composition.

For instance, if your composition outputs an image, you used to only get an NSImage out; now you can say, I want that output value of the composition, but give me a CGImage, a CIImage, whatever is convenient for you to integrate with the rest of your pipeline.

So that's really great, and it works with other types as well; for example, structures you can get as arrays or dictionaries, etc. And finally, two new APIs of interest on that protocol: the ability to save in one shot all the current values of the inputs of the composition that's currently loaded on the renderer, or to restore them. They are saved in a convenient plist-like format, so you can actually look into it, and you can also just put that directly in your preferences file or something like that.
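A minimal sketch of using those QCCompositionRenderer methods; the input and output key names here are hypothetical and would match whatever the composition publishes, and the type string follows the convention of passing the class name you want back:

```objc
#import <Quartz/Quartz.h>

static void ConfigureRenderer(id<QCCompositionRenderer> renderer)
{
    // Attributes of the loaded composition (including any plist-like metadata).
    NSDictionary *attributes = [renderer attributes];
    NSLog(@"composition attributes: %@", attributes);

    // Set an input (key name is hypothetical).
    [renderer setValue:@"http://example.com/report.xml" forInputKey:@"XML_Location"];

    // Read an output, asking for a specific type instead of the default one.
    CGImageRef image = (CGImageRef)[renderer valueForOutputKey:@"outputImage"
                                                        ofType:@"CGImage"];

    // Save all current input values in one shot, and restore them later.
    id savedInputs = [renderer propertyListFromInputValues];
    [renderer setInputValuesWithPropertyList:savedInputs];

    (void)image;
}
```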

Now, to talk about the Core Animation support: it's really nice because you can use all the power of Quartz Composer to create motion graphics or effects, those kinds of things, and bring them seamlessly inside Core Animation as a layer.

So really, it's one line of code, compositionLayerWithFile: or the variant with a QCComposition object, and there you go: you have your layer, you insert it into your tree and you're done. And because it's also a QCCompositionRenderer object, like I just said, you have all those regular methods you can use with it.
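A quick sketch of that one-liner, assuming you already have a Core Animation layer tree to attach it to (the file path and input key are placeholders):

```objc
#import <Quartz/Quartz.h>
#import <QuartzCore/QuartzCore.h>

static void AddCompositionLayer(CALayer *parentLayer)
{
    // One line to wrap a composition file in a Core Animation layer.
    QCCompositionLayer *qcLayer =
        [QCCompositionLayer compositionLayerWithFile:@"/path/to/Effect.qtz"];
    qcLayer.frame = parentLayer.bounds;
    [parentLayer addSublayer:qcLayer];

    // It is also a QCCompositionRenderer, so the usual methods apply.
    [qcLayer setValue:[NSNumber numberWithDouble:0.5] forInputKey:@"inputIntensity"];
}
```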

The QCView has many changes in its API; I'm just going to point out a few. You get more precise control over playback, so you can pause and resume rendering, for instance. A powerful new ability is that if you subclass QCView you can override renderAtTime:arguments:. It's not designed for you to call directly, do not do that.

It's only there for overriding; renderAtTime:arguments: is a primitive, and every time a frame is rendered it goes through it. So what's the point of overriding it? If you want to do precise, synchronized communication with a composition, for example setting the inputs just before it's about to render, or reading the outputs just after it has rendered.

And you can now also retrieve the OpenGL context and pixel format used to render. This is not so that you can draw with them, but so that you can share them with other OpenGL objects, for example Core Video or QuickTime visual contexts, those kinds of objects, and be able to create them properly.

For QCRenderer, I'm also just going to point out a few new methods. The first one is the ability to create a QCRenderer from a CGL context; it doesn't get any lower than that at the system level for OpenGL contexts, so you can create it from really any kind of OpenGL context now.

And you can also specify, which is a big thing, the output color space for rendering. What this means is you can now say, well, I want to render the composition's content in the proper color space for what I'm going to do with that content. For instance, if you do video processing, you'll likely want to have Quartz Composer output to the 709 color space, or the 170M color space, which is more like NTSC, those kinds of things.

If you wanted to do offscreen QC rendering on Tiger, you used to have to create the OpenGL pbuffers and contexts yourself and manage all that. Now it's completely abstracted: you just create an offscreen renderer with a size and a color space, and that's it. Then you can render frames and read the results from the frames, those kinds of things.

If you want to take a snapshot of the last frame that was rendered, we now have very convenient methods: you can just create a snapshot image, or pass the type you want in the typed variant; as usual, CGImage, CIImage, CVOpenGLBuffer, all those kinds of types are supported.
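Here is a minimal offscreen-rendering sketch along those lines, assuming the Leopard QCRenderer API (the size, time and file path are placeholders):

```objc
#import <Quartz/Quartz.h>

static CGImageRef RenderCompositionFrame(NSString *path, NSTimeInterval time)
{
    // Offscreen renderer: just a size and a color space, no manual OpenGL setup.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    QCComposition *composition = [QCComposition compositionWithFile:path];
    QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(640, 480)
                                                          colorSpace:colorSpace
                                                         composition:composition];
    CGColorSpaceRelease(colorSpace);

    // Render one frame at the requested composition time.
    [renderer renderAtTime:time arguments:nil];

    // Snapshot of the last rendered frame, in the type we want.
    CGImageRef image = (CGImageRef)[renderer createSnapshotImageOfType:@"CGImage"];

    [renderer release];
    return image;   // caller owns it, since it comes from a "create" method
}
```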

We have two UI elements that we added. The first one is the QCCompositionParameterView; you use it if you want to display, using a standard user interface in your application, all the controls to edit the input parameters of the composition that's currently loaded on a composition renderer. It's super easy to set up: setCompositionRenderer:, and you're done. It will take care of updating automatically in both directions.
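Something like this, assuming the parameter view and a QCView are already wired up as outlets in the nib:

```objc
#import <Quartz/Quartz.h>

@interface ParameterController : NSObject
{
    IBOutlet QCView *qcView;                              // renders the composition
    IBOutlet QCCompositionParameterView *parameterView;   // standard parameter UI
}
@end

@implementation ParameterController

- (void)awakeFromNib
{
    // One call: the view builds controls for every published input of the
    // composition loaded on the renderer and keeps them in sync both ways.
    [parameterView setCompositionRenderer:qcView];
}

@end
```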

Also, if you want to let the user pick from the composition repository, we have a nice picker built in, which works either as a standalone panel version, which you can see in the screenshot for instance, or as an embedded view version. The way you use it is you set it up to display some categories of compositions, and you get notifications when the user changes the selected composition.

Finally, like I said, I haven't covered everything, so I encourage you to look at the release notes, which you can easily access from the Help menu in the editor. So now onto our first integration case study. The problem we're going to look at is monitoring a data source that doesn't update very often, for example every hour, every couple of hours or something like that. To take a more concrete example, it's actually something we solved in our team: as you can imagine, we build Quartz Composer on a regular basis, continuously, several times a day, when we have check-ins into the repository and so forth.

We can go to a web page and look at the build results, and we also get emails when something happens, but we wanted some nice way to monitor that without physically having to go to the web page or check for emails: just have it on your desktop, hanging around; since we all have two or three screens, just put it somewhere and monitor it very easily. So the need is really simple.

So how would we build something like that? If you don't have Quartz Composer experience, you might go the usual route, which would be: "I'm going to do Cocoa, I'm going to create a window and an NSTableView, set up all the columns, do all that stuff, do the XML downloading and what not." But it's not a crucial project for your work, it's really an add-on for your productivity, so you don't want to spend too many hours on it either. So we're going to look at approaching that problem with Quartz Composer, how we can solve it really fast, and at the same time do something that looks nice and have some fun while doing it.

So what are the steps? The first one: get the data from the source you want to monitor. Then build a composition to display that data, and finally build the application, so pretty straightforward. For getting the data from the source, remember there's a web page in our concrete example, so we could have used the new XML downloader patch to download that web page and do a lot of heavy parsing, because HTML is basically XML, to try to extract the data.

That can be quite a bit of work and also fragile, so in our case it was simpler to just have the build system output a secondary file using basic XML, which we're going to call an XML report for the rest of this talk. And as you can see, it's really simple: there's just a name and a list of items.

Then each item contains a title, a subtitle, and a caption, and also an icon which symbolizes the status, where zero would be a success, one a warning, and two a failure. So in practice this is how it looks; for example, in our case you can see the builds, you can see the successes and the errors, etc. So now onto the demo machine, please.

OK, so the first thing is displaying that XML data we talked about. I have here a sample report, exactly like it was on the slide. I'm not going to build the composition with you, because that's not the purpose of this session; it's really to show the integration steps. However, I'm going to explain to you how it was built.

As usual, you will notice that the first thing we do is clear the background, with a constant gray color, just so that we clearly see the text in the viewer. Now onto the XML handling: we use the new XML downloader patch and we specify the URL or the file path to the location of that file. It gets downloaded, nicely parsed, and made available on the output as a structure which exactly mirrors the XML contents.

Now we do two things. We use a structure accessor patch to extract the name of that report, and we display it using the usual Image With String and Billboard patches, which you can see at the top of the viewer here. The second step is, as you can guess, to extract the list of items, which we do using another structure member patch. Then, now that we have the list, we use an Iterator to iterate over each item in the list, and that's this subpatch here: we extract the item corresponding to the current iteration; then, remember that each item contains several members, so the first one is the icon index, and we use a Multiplexer to choose the right image to display depending on that value; it's a bit of work to display it. Then we go on to constructing the title, doing some math just to position it properly next to the icon, and so forth.

We use the regular Image With String patch, with everything connected to a Billboard so that it renders on screen, and it's the same process for the subtitle and the caption. Now, as you can see, I can change that fake report to one that would be more realistic, and it looks how we want. Back to slides, please.

Now, remember the steps: first getting the data, second building the composition, and then building the little application. So what does that mean? Well, we're going to need a transparent window, we're going to need to put it on the desktop, and we're going to use a QCView, because it's the fastest way to put Quartz Composer playback into an application, and put that in the window.

That solves most of the problem, but we still have a few things to do. First of all, we need the QCView to support transparency, because remember we have that transparent window; if the QCView is completely opaque then it completely defeats the point. Then we also need to ensure the QCView does not render more often than necessary. We're only monitoring a data source that updates a few times an hour at most; there is no point in having the QCView running at sixty frames per second, which is the default behavior.

You don't want to slow down your system with a thing like that; you just want something that updates once in a while, kind of like all those status-monitoring things in the menu bar. And finally, we need to ensure that whenever that QCView re-renders, once every few minutes or so, it synchronously downloads the latest XML report so that we immediately see the latest one and not something that has been cached. So now back to the demonstration, please.

Let me show you first the concept for the transparency. If we go to Interface Builder, I'm just going to create a dummy window here and change its class to be a panel. Now, because our application is kind of a desktop application, it doesn't even have a menu bar or anything, it just sits on the desktop, we're going to take advantage of the new HUD style of the panel, so we'll get nice transparency. We don't have to do any fancy settings on the window itself, setting opacity, all those API calls on the window; it just works.

Then, as you can guess, we need to add a QCView to it, so we look for the QCView object, drag and drop, position it properly, size it properly, make sure it resizes with the window. Now if I run that, it's opaque, so how do we get the QCView to be transparent? Well, there is a little trick here which is officially supported but not many people actually know about, and it also works on Tiger. To make the QCView render with transparency and not draw over everything that's behind it, you want to set its erase color to have an alpha value that's not one.

It's that simple; it turns on the setting internally that says, OK, I need to render with transparency. In our case we're just going to go all the way and set it to zero. Now if I run that, you can see the QCView is still rendered, it's still there, but it's not visible. Now, how do we get the composition itself to render properly with transparency as well? Because you don't want the composition to be opaque. So I have a sample composition here I'm going to show you, and it's pretty easy; all you need to do is two things.

You need to clear the background of the composition using a Clear patch or something equivalent, and you need to paint it with a color that's completely clear, so especially the alpha, you want the alpha to be zero. Basically, that means the entire area here is actually transparent when it's going to be composited over some other background.

And then, for all the images you draw on top of that, you need to draw them with the blending mode set to Over, so that they composite nicely and you don't see a kind of black box around the image; everything is composited smoothly.

So remember, just clear the background completely. Now if I go back to my Interface Builder file, I can load that demo composition here, and now I'm going to run it, and as you can see the composition renders perfectly with transparency; there are absolutely no artifacts, nothing. So that's the basic concept we're going to use to build our application; now I need to show you what it looks like.
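For reference, the same transparency setup can also be done in code; here is a minimal sketch, assuming a borderless clear panel and a QCView created programmatically instead of the HUD panel built in Interface Builder (the composition path is a placeholder):

```objc
#import <Quartz/Quartz.h>

static NSPanel *MakeTransparentCompositionPanel(NSString *compositionPath)
{
    // Borderless, non-opaque panel with a clear background.
    NSPanel *panel = [[NSPanel alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                                styleMask:NSBorderlessWindowMask
                                                  backing:NSBackingStoreBuffered
                                                    defer:NO];
    [panel setOpaque:NO];
    [panel setBackgroundColor:[NSColor clearColor]];

    // QCView filling the panel; the erase-color trick makes it render with transparency.
    QCView *qcView = [[QCView alloc] initWithFrame:[[panel contentView] bounds]];
    [qcView setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];
    [qcView setEraseColor:[NSColor clearColor]];   // alpha of zero, as described above
    [qcView loadCompositionFromFile:compositionPath];
    [[panel contentView] addSubview:qcView];
    [qcView release];

    return panel;
}
```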

So here we have our little Xcode project. It's really basic, you know, there's little code in there, just some glue between the various elements. You can find here the nib file, obviously, which is very close to the one we've just built. The composition file is there too, and I'll just open it and show it to you, because there are a couple of changes: it's very close to the first one we built, except for a couple of things.

Instead of displaying the report name on screen, we publish it on a "report name" string output of the composition, so that we can retrieve the report name from the application and set the title of the window from it. We also changed the Clear, you will notice, to erase with a completely transparent color.

The XML downloader has a little change as well. If you show the inspector settings on it, you will notice that it has a new option in Leopard called synchronous download. By default the XML downloader will actually use a background thread so that it doesn't block the execution of the composition, and the data is going to be there at some point.

With the synchronous download option, you say you actually want to block during the execution of the graph: download, wait for the download to be completed and parsed, then output it. And that's what we need here, because remember we're not going to render that QCView very often, so we want the latest data to be downloaded, processed and visible immediately, not some stale cached data.

And finally, there is one thing left, which is that the XML downloader is kind of smart in the sense that it's not going to re-download the XML unless the URL changes, or unless you play with the update signal, for example making it go from false to true to retrigger the download, to tell it you want to re-download.

Remember, we want to re-download it every time we render, so one approach could be to have that signal alternate between zero and one and so forth; that would work perfectly fine, but I wanted to show you a different technique, which might be useful in other cases; it's a little bit more complex but it's more powerful as well. The trick is to just change the URL constantly, so that basically we fool the XML downloader into re-downloading. So how do you change a URL without really changing it, so that you don't end up downloading another file?

Well, it's really simple: you just add arguments to it. You can see here we have our input URL, report.xml, or it can be a full URL, and then I've written a few lines of JavaScript where we simply append a question mark and a dummy argument just to regenerate that URL, and the dummy argument is simply the current time, so it's guaranteed to change all the time. So with that, every time we render the composition it downloads the latest version of the report.

Now let's go back to Xcode. Regarding the Info.plist, there is a little thing you need to add: this application doesn't have a regular user interface, it doesn't have a menu bar or anything. This is called a background application; it still has a user interface, it's not a daemon. The way you tell the system that is by setting LSUIElement to true in your Info.plist. Now let's look at the implementation itself: you have your usual app controller, which is going to handle startup and the application configuration, those kinds of things, and we also have a custom subclass of QCView; more about that in a minute.

Now you can see the implementation of the app controller; it's pretty straightforward. When the app is about to finish launching, we set the main panel, which is our window, to be directly on the desktop, we load the composition, we set the default report URL, and, importantly, we set the maximum frame rate, in that case to one frame per minute, but we could set it lower or higher.

Finally, when the application has really finished launching, we put the window on screen and we start the rendering of the QCView. Now, the subclass of QCView: we use the subclass to do two things. First, we override renderAtTime:arguments: to know when the composition has rendered so that we can read from its output; remember we have that report name output we want to use, so we read the value just after it has rendered and we set the window's title. See, it's quite basic to do.
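A condensed sketch of those two pieces, with hypothetical class, outlet and key names; it assumes this controller is the application delegate and that the composition publishes an input called XML_Location and an output called Report_Name:

```objc
#import <Quartz/Quartz.h>

@interface ReportView : QCView
@end

@implementation ReportView

// Called every time the view renders a frame; read outputs right after rendering.
- (BOOL)renderAtTime:(NSTimeInterval)time arguments:(NSDictionary *)arguments
{
    BOOL success = [super renderAtTime:time arguments:arguments];
    NSString *name = [self valueForOutputKey:@"Report_Name"];
    if (name)
        [[self window] setTitle:name];
    return success;
}

@end

@interface AppController : NSObject
{
    IBOutlet NSPanel *mainPanel;
    IBOutlet ReportView *reportView;
}
@end

@implementation AppController

- (void)applicationWillFinishLaunching:(NSNotification *)notification
{
    // Keep the window directly on the desktop.
    [mainPanel setLevel:kCGDesktopWindowLevel];

    [reportView loadCompositionFromFile:
        [[NSBundle mainBundle] pathForResource:@"Report" ofType:@"qtz"]];
    [reportView setValue:@"http://example.com/report.xml" forInputKey:@"XML_Location"];

    // One frame per minute is plenty for a slowly changing data source.
    [reportView setMaxRenderingFrameRate:(1.0 / 60.0)];
}

- (void)applicationDidFinishLaunching:(NSNotification *)notification
{
    [mainPanel orderFront:nil];
    [reportView startRendering];
}

@end
```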

And the other thing we do is check for a mouse-down event with the Option key held down. It's not a very nice UI, but that's not the purpose of this demonstration; it's just so that we can edit the URL that we want to monitor. So if you click in the view with the Option key down, we show a text field and we can enter the URL. The rest of the code here just handles editing that text field and updating the value in the composition. So now let's build and run. You can see it's all live.

There we go. And the window itself is not visible because the transparency works correctly. So you can see the little application that was built using those techniques, pretty fast to build, and I can set it to a different URL, so let me point it at my example here, which was actually here.

And now we see what it looks like, and obviously it would work exactly the same if that URL were on a remote server instead of being on disk here. So let's quit that, and back to the slides please. Now, what if we want to monitor another type of data source, a kind of similar data source that you cannot monitor directly using Quartz Composer?

For instance, it doesn't output XML or an RSS feed. You will need to write a custom patch to handle it, and that means using the new QCPlugIn API, which is really powerful because you can write any type of patch with it, and it also takes advantage, like I mentioned earlier, of Objective-C 2.0. So let's look into that.

Here's a quick crash course on writing your own QCPlugIns. There's a QCPlugIn class, and the way you use it is you subclass it and override the required methods. You then define the inputs and outputs for your custom patch, and you have to implement the execution.

A few requirements when you subclass QCPlugIn: you need to ensure that it works correctly if there are multiple instances of your QCPlugIn. For every instance of the patch in a composition there will be an instance of your QCPlugIn subclass, so be careful with shared global variables and things like that. Also, it has to work correctly if it's not executed from the main thread, because Quartz Composer can run on any thread, and don't assume there is a run loop around; if you need one, for downloads or those kinds of things, we'll look at a technique later.

So, the basics of QCPlugIn subclassing: if you need to do any kind of initialization or destruction, just use init and dealloc. Now, the real thing happens in the execution methods of the plug-in, and you can see the primary set of methods there, which conveniently work in pairs. startExecution: and stopExecution: are called when the rendering engine starts and stops; enableExecution: and disableExecution: are called when your instance of the QCPlugIn starts and stops being used by the engine. For example, if there is no patch currently pulling data from it, then disableExecution: is going to be called, and when it starts again, enableExecution: is going to be called. So you can do some precise control of your resources.

And at the core we have execute:atTime:withArguments:, which is called every time execution needs to happen, so let's look at this one in a bit more detail. That method is responsible for, first, reading the values from the input ports of the patch, then performing all the computation with those values, also taking the time into account if necessary, which you can see is passed as an argument to that method; that's the current rendering time of the composition. And finally, writing the results of this computation to the output ports, or, if this patch does some other type of work, sending to an external device, rendering on screen, performing those actions.

Now let's look at how to create ports. We rely on dynamic properties, which were introduced in Objective-C 2.0. What it means is that if your subclass of QCPlugIn has dynamic properties whose names start with "input" or "output" and that have the proper type, then we automatically create ports that match them.

In that case you can see a dynamic property of type double called inputValue1, and that's going to map automatically to an input port of type Number, and the same for inputValue2. In the same way, if you wanted to create a String port, the type would be NSString, and so on for the other types. To read and write those ports you use the corresponding Objective-C 2.0 syntax, self.inputValue1 and so forth. Note that the input properties are read-only and the output properties are write-only.

Now, you might not necessarily know up front all the inputs and outputs that your plug-in needs; you might want to create them dynamically while the plug-in is running. We call those custom ports, and it's quite simple to create and destroy them, as you can see, using those APIs: you just specify the proper type, the key that identifies the port, and some attributes for the UI, and that's it. Now, because those do not have corresponding Objective-C 2.0 properties to read and write them, you have to use the dedicated APIs, valueForInputKey: and setValue:forOutputKey:.

You can see on that table the correspondence between the type of an Objective-C 2.0 property, the port type in the Quartz Composer world, and also the type of value you will get if you create those ports in a custom way and call valueForInputKey:.
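Putting those pieces together, here is a minimal processor-style QCPlugIn skeleton along these lines; the class name and port keys are made up for illustration:

```objc
#import <Quartz/Quartz.h>

@interface SimpleAdderPlugIn : QCPlugIn
// Dynamic properties whose names start with "input"/"output" become ports.
@property double inputValue1;
@property double inputValue2;
@property double outputSum;
@end

@implementation SimpleAdderPlugIn

@dynamic inputValue1, inputValue2, outputSum;

+ (NSDictionary *)attributes
{
    // Name and description shown in the Quartz Composer UI.
    return [NSDictionary dictionaryWithObjectsAndKeys:
            @"Simple Adder", QCPlugInAttributeNameKey,
            @"Adds its two number inputs.", QCPlugInAttributeDescriptionKey,
            nil];
}

+ (QCPlugInExecutionMode)executionMode
{
    return kQCPlugInExecutionModeProcessor;   // inputs in, outputs out, no rendering
}

+ (QCPlugInTimeMode)timeMode
{
    return kQCPlugInTimeModeNone;             // result does not depend on time
}

- (BOOL)execute:(id<QCPlugInContext>)context
         atTime:(NSTimeInterval)time
  withArguments:(NSDictionary *)arguments
{
    // Read the input ports, compute, and write the output port.
    self.outputSum = self.inputValue1 + self.inputValue2;
    return YES;
}

@end
```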

Finally, once you've written your QCPlugIn, the way you package it is basically as a Cocoa bundle; you just need a special entry in the Info.plist that lists the plug-in classes, and then you install it in the proper location and it just works. You can also put that plug-in inside your application and load it directly from there if you don't want to install it system-wide, and you can even leave the implementation of the plug-in mixed with the rest of your source code and just register the class directly.
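A sketch of those two in-application options, assuming the QCPlugIn loading and registration calls described here and reusing the SimpleAdderPlugIn class from the sketch above (the bundle name is hypothetical):

```objc
#import <Quartz/Quartz.h>

static void LoadCustomPatches(void)
{
    // Option 1: load a plug-in bundle shipped inside the application.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"SQLiteQuery"
                                                     ofType:@"plugin"];
    if (path)
        [QCPlugIn loadPlugInAtPath:path];

    // Option 2: the plug-in class is compiled into the application itself,
    // so just register it before any composition that needs it is loaded.
    [QCPlugIn registerPlugInClass:[SimpleAdderPlugIn class]];
}
```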

A couple more things here. We have some very nice Xcode templates as a starting point to create your own QCPlugIns. Also be aware that QCPlugIns will not load in restricted environments; WebKit, for instance, and QuickTime in some cases, run compositions with limitations on what is allowed to run inside Quartz Composer, and custom plug-ins are excluded there for security reasons, because we have no control over what they can do.

Now, QCPlugIn in practice. Remember we wanted to write a QCPlugIn to monitor our custom data source. For the sake of the example here, I picked an SQLite 3 database, which is not supported by default by Quartz Composer, so we'll write a plug-in that allows us to specify the path to the database and the query string, and that returns us a nicely parsed structure. So onto the demo machine, please.

Here I have a simple SQLite database that I created, which has the same schema as the XML report, and let me actually show it to you, it's going to be easier: there's the ID column, a column for the icon, and columns for the title, subtitle, and caption, so really basic. Now let's look at the plug-in itself.

As you can see, we subclass QCPlugIn and we declare our input and output properties: inputDatabasePath and inputQueryString are both string properties that are going to map to String ports, and outputResultsStructure, which we declare as an NSArray, is going to map to a Structure port in Quartz Composer. Now let's look at the implementation.

First of all, we have some UI attributes here so that it appears with a nice name in the Quartz Composer interface, for the plug-in itself as well as for the inputs and outputs. We also need to tell the system what type of plug-in we're building, what type of custom patch: in this case it's a processor, we want to execute whenever there is a change on the inputs and create something new on the outputs, and this plug-in does not depend on time, so we tell the system so.

Now let's go to the core of the implementation, which is the execute method. The first thing we do is read the database path that was given to us and make sure it's valid, and if it's not, then we just set the output to nil and continue execution. We don't want to abort the complete execution of the composition; it's kind of an acceptable user error in that case, so we just return YES and set the output to a proper value.

Then, to actually send a query to the database, I wanted to keep things simple, so instead of using the SQLite APIs I just use the sqlite3 command-line tool directly, call it with the proper arguments, and parse the output. What you can see here, and I'm not going to go into details, you can look at the sample code yourself, is essentially NSTask stuff: we run the NSTask synchronously, then read the output that comes from it and parse it. And here's the function that parses the output; it's essentially looking for return characters, those kinds of things, and building a nice array of results.
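A rough sketch of that NSTask approach, meant to live in a QCPlugIn subclass that declares matching dynamic properties like the skeleton earlier; the port keys are hypothetical, and a real version would need more careful error handling and parsing:

```objc
- (BOOL)execute:(id<QCPlugInContext>)context
         atTime:(NSTimeInterval)time
  withArguments:(NSDictionary *)arguments
{
    NSString *databasePath = self.inputDatabasePath;
    if (![[NSFileManager defaultManager] fileExistsAtPath:databasePath]) {
        self.outputResultsStructure = nil;   // acceptable user error: keep running
        return YES;
    }

    // Run the sqlite3 command-line tool synchronously and capture stdout.
    NSTask *task = [[NSTask alloc] init];
    NSPipe *pipe = [NSPipe pipe];
    [task setLaunchPath:@"/usr/bin/sqlite3"];
    [task setArguments:[NSArray arrayWithObjects:databasePath, self.inputQueryString, nil]];
    [task setStandardOutput:pipe];
    [task launch];
    NSData *data = [[pipe fileHandleForReading] readDataToEndOfFile];
    [task waitUntilExit];
    [task release];

    // Naive parsing: one row per line, columns separated by '|' (sqlite3's default).
    NSString *text = [[[NSString alloc] initWithData:data
                                            encoding:NSUTF8StringEncoding] autorelease];
    NSMutableArray *rows = [NSMutableArray array];
    for (NSString *line in [text componentsSeparatedByString:@"\n"]) {
        if ([line length])
            [rows addObject:[line componentsSeparatedByString:@"|"]];
    }
    self.outputResultsStructure = rows;
    return YES;
}
```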

Here we have the NSTask handling, which was mostly copied from another piece of sample code, and that's pretty much it. So let's build and run it. You will notice that our projects, if you create them from the template, have a Build & Copy target which conveniently builds and installs the plug-in at the proper location.

So that's done. Now let's look at the composition file; it's the same as before, just slightly modified so that it uses the new plug-in we just wrote: as you can see, we replaced the XML patch with the SQLite query plug-in. The query string has been published as an input port, which we can show here, along with the path to the database, and the plug-in returns the results nicely as a structure. Then the rest is exactly the same, because remember the structure has the same formatting. So this concludes my demo; I'd like to go back to the slides, please.

Now, a couple of things I didn't have a chance to talk about. If you need to access resources relative to where the composition file is, call compositionURL on the plug-in context that is passed to every execution method of the plug-in; however, this is not necessarily going to return a value, it may return nil, so you need to be prepared to handle that. And finally, if you need to do any kind of logging, instead of using printf or NSLog, those kinds of things, try to use logMessage: on the context so that we can redirect it properly.
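For instance, inside an execution method of a QCPlugIn subclass it might look like this (the resource name is hypothetical):

```objc
- (BOOL)startExecution:(id<QCPlugInContext>)context
{
    // May be nil, e.g. if the composition was created from data rather than a file.
    NSURL *compositionURL = [context compositionURL];
    if (compositionURL) {
        NSURL *resourceURL = [NSURL URLWithString:@"lookup.plist"
                                    relativeToURL:compositionURL];
        [context logMessage:@"Loading resource at %@", resourceURL];
    } else {
        [context logMessage:@"No composition URL available"];
    }
    return YES;
}
```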

And finally, when it comes to performing background processing, you're going to have to deal with that either if you want to do very lengthy operations and you care about not blocking the execution of Quartz Composer, or if you need a run loop. What you're going to have to do is create a background worker thread.

It's not that complex: you can use the nice NSThread API. What you want is to start the thread from startExecution:, terminate the thread in stopExecution:, and obviously communicate with it during the execute method. You do have to be aware of synchronization issues: use locks and mutexes to protect your shared data, obviously, and it's better if you actually block in startExecution: and stopExecution: until the thread has finished starting or finished terminating.
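Here is one way that pattern might look, using NSConditionLock to block until the worker thread has actually started and stopped; the names are illustrative, not taken from the session's sample code:

```objc
#import <Quartz/Quartz.h>

enum { kThreadStopped = 0, kThreadRunning = 1 };

@interface WorkerPlugIn : QCPlugIn
{
    NSConditionLock *_threadLock;   // tracks whether the worker thread is running
    volatile BOOL _shouldExit;
}
@end

@implementation WorkerPlugIn

- (void)_workerThread:(id)argument
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    [_threadLock lock];
    [_threadLock unlockWithCondition:kThreadRunning];    // signal: thread has started

    while (!_shouldExit) {
        // ... do the lengthy work, guarding any shared data with its own lock ...
        [NSThread sleepForTimeInterval:0.01];
    }

    [_threadLock lock];
    [_threadLock unlockWithCondition:kThreadStopped];    // signal: thread has terminated
    [pool release];
}

- (BOOL)startExecution:(id<QCPlugInContext>)context
{
    _shouldExit = NO;
    _threadLock = [[NSConditionLock alloc] initWithCondition:kThreadStopped];
    [NSThread detachNewThreadSelector:@selector(_workerThread:)
                             toTarget:self
                           withObject:nil];
    [_threadLock lockWhenCondition:kThreadRunning];       // block until started
    [_threadLock unlock];
    return YES;
}

- (void)stopExecution:(id<QCPlugInContext>)context
{
    _shouldExit = YES;
    [_threadLock lockWhenCondition:kThreadStopped];       // block until terminated
    [_threadLock unlock];
    [_threadLock release];
    _threadLock = nil;
}

@end
```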

When it comes to testing your QCPlugIn: Leopard now has several runtime environments, 32-bit, 64-bit, garbage collected, not garbage collected. So instead of writing your own test application to test your plug-in in all those environments, we have a convenient feature built into the editor: you can go to the File menu and select a test runtime, and then the composition that's open will launch under 32-bit or 64-bit, if your machine supports 64-bit, and you can see whether your plug-in behaves correctly or not.

And if you hold the Option key down when you select it, it will also run with garbage collection on, so you can really test the four cases. Now I'd like to invite Kevin on stage to take us through the second set of case studies. Thank you.

(applause)

Alright, hello everybody. So you've seen pretty clearly how to visualize an external source of data with Quartz Composer in your application. Now we're going to concentrate on another source of data, which is movies, and we're going to see a pretty unique, pretty cool way of visualizing movies.

So here's the problem we're trying to solve. You just had this awesome vacation, you've been at the beach in Hawaii or something, and you have all these movies you took with your digital camera, of fish, of children, everything. So you put these movies on your computer and there's this one perfect shot somewhere in there that you just can't find. You have the Finder, QuickTime, Quick Look, and you're looking for this precise shot, you know exactly what it looks like, but you keep scrolling, you scroll too fast, and so you miss it, and that's very annoying. There's really no application out there that gives you a different view of all the shots in a movie so that you can quickly get to the one you're looking for.

So we said, well, what about trying to do this in QC? And that's the idea, because pretty much, you know, a movie is nothing more than a 3D object: you have the frames, which are two-dimensional, and then you have time, which is a third dimension.

So what about showing these frames over time? We get the kind of view you see on the slide, where frames are spread out over time, and in that kind of view you see precisely when a shot starts and when it ends, you pretty quickly get an idea of its content, and you can scroll through it, select in and out points, and ideally export the shot to disk. So yes, that's what we did. How do we get there? First of all you need a composition, a Quartz composition, to display the frames, but loading these frames might take a while, especially for large movies.

You don't want your event thread to be blocked while the frames are being loaded. If you use the built-in patch in Quartz Composer, the Movie Loader patch, it's going to run on the main thread and it's going to block your UI, so it's not going to respond anymore.

So what you want to do is delegate the loading of frames to a background thread, so the UI is still going to be responsive while this is happening. And to do this, we're going to write a QCPlugIn that's going to do this background frame processing, and that's going to be the main and the hardest part of the project. Then, once this composition is ready, we're going to put it in a QCView, put this QCView in the application, and use Cocoa bindings, so it's pretty easy, just so that the UI actually communicates with the view, and that will finish the application.

That said, we also want to save the in and out points when we find the shot, and we have lots of techniques for doing this. What I chose to do is just a QCPlugIn that uses QTKit to export a QuickTime reference movie to disk, and it's really easy with QTKit.

That's the easy part, so I'm not going to go into details on it. I'm going to concentrate on the plug-in. Ideally, what you would like is a plug-in that takes an input path, which is the path of the movie on disk, and returns an array of frames, where each index in this array is going to be either null or the frame, once the frame has been loaded by the background thread. Then you return this array to QC, and QC is going to iterate over it and draw it, all that kind of stuff, with known techniques.

Now, this array of frames is not going to be able to hold all the frames, because VRAM is limited; we have 256 MB in the best case, so even at low resolution this is not going to work. So first we have to load the frames at a reduced resolution, and then, so that the whole array is going to fit in VRAM, we need to skip some frames. That's what we have to do with the array.

So how is the plug-in going to work internally? We'll have two arrays: an array of frame requests on one side and an array of available frames on the other side. The array of frame requests is going to be filled with all the requests for the frames that we want, and the available frames array is going to hold the frames that have been loaded. And as Pierre mentioned, the main part of a QCPlugIn is the execute method.

The execute method, which runs on the main thread, is going to fill up the array of requests, the frames that we want at each timestamp, and it's going to check the array of available frames, copy whatever is available, and return it to QC so that QC can do the drawing.

And in parallel you have the background thread that works persistently: it's going to take the requests, that's what it actually loads, process them using some QuickTime code, and then put those frames into the array of available frames. And of course, because these two arrays are going to be accessed by two threads in parallel, we need to lock them carefully. So let me show you a demo, and first of all how all of this code looks.
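Before looking at the real code, here is a stripped-down sketch of that two-array idea on the execute side; the instance variables and port keys are hypothetical, the QuickTime loading itself is left to the background thread, and a real version would avoid queuing duplicate requests:

```objc
#import <Quartz/Quartz.h>

// Inside a QCPlugIn subclass with, say, these ivars set up in startExecution::
//   NSMutableArray *_frameRequests;    // timestamps the renderer wants
//   NSMutableArray *_availableFrames;  // loaded frames (NSNull until ready)
//   NSLock         *_lock;             // guards both arrays

- (BOOL)execute:(id<QCPlugInContext>)context
         atTime:(NSTimeInterval)time
  withArguments:(NSDictionary *)arguments
{
    NSArray *snapshot = nil;

    [_lock lock];

    // 1. Queue requests for the frame slots that have not been loaded yet.
    NSUInteger slotCount = [_availableFrames count];
    for (NSUInteger i = 0; i < slotCount; ++i) {
        if ([[_availableFrames objectAtIndex:i] isEqual:[NSNull null]]) {
            [_frameRequests addObject:
                [NSNumber numberWithDouble:i * self.inputFrameInterval]];
        }
    }

    // 2. Copy whatever the background thread has loaded so far.
    snapshot = [[_availableFrames copy] autorelease];

    [_lock unlock];

    // 3. Hand the array back to QC; missing entries stay NSNull until loaded.
    self.outputFrames = snapshot;
    return YES;
}
```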

So here's Xcode; let me open the directory of plug-ins. All the source code is available, so you're welcome to go and check it out yourself. Let me go directly to the execute method, it's here, that's where everything happens. For those of you who want to go into details, there are some sections above with the QuickTime code, the threading, the internal stuff; we don't really want to go into details about that, I'm just going to show you the main architecture of the plug-in, the QCPlugIn part of it. So here's the code that runs whenever the input path changes: what we do here is set up the thread and set up the movie, so we call this little function, and we do some computation of the frame interval.

And here you see this thread-setup function, which is where the thread is going to be set up; I'll come back to that in one second. And this is the main core, where we call a function, and this function is simply building the request array, getting the array of available frames, and returning it.

Then we have other parameters that are going to be used for later improvements of the plug-in. So let me just go quickly into this thread setup: we go there, we create a new thread, that's here, the thread is detached over here using a selector, the processing-thread selector. Let me go to that selector, because it's a pretty interesting technique I want to mention really quickly.

We have a run loop, which is the run loop of that thread, and what we're going to do is attach a run loop source that is linked to a callback here, and that callback is going to do the frame processing using QuickTime; it's the callback that loads the frames and puts them in the array.
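In outline, the body of that detached thread and its callback might look like this sketch; the function names and info pointer are illustrative:

```objc
#import <Foundation/Foundation.h>

// Called on the worker thread whenever the source is signaled;
// this is where the QuickTime frame loading would happen.
static void ProcessFrameRequests(void *info)
{
    // MyPlugIn *plugIn = (MyPlugIn *)info;
    // ... load requested frames and store them in the available-frames array ...
}

static void RunWorkerThread(void *info)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Attach a custom source to this thread's run loop, then sleep in the run loop.
    CFRunLoopSourceContext context = {0};
    context.info = info;
    context.perform = ProcessFrameRequests;
    CFRunLoopSourceRef source = CFRunLoopSourceCreate(kCFAllocatorDefault, 0, &context);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopDefaultMode);

    CFRunLoopRun();   // sleeps here until the source is signaled or the loop is stopped

    CFRelease(source);
    [pool release];
}

// From the rendering thread, after queuing new frame requests:
//   CFRunLoopSourceSignal(source);
//   CFRunLoopWakeUp(workerRunLoop);
```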

That technique is pretty cool because, once you attach this callback to the run loop, you can pretty easily wake up your thread and put it back to sleep by signaling the source and waking up the run loop. So let me show you how it looks with a simple composition.

And you see that it's showing, and the UI stays pretty responsive, it's smooth, it's pretty nice. So here we go, we load this movie in, and it's pretty cool because here you can clearly see the different frames, the different shots over there. Let me look at the composition to see how we did this. I'm going to zoom in just a little bit: we have a trackball so we can rotate it around, then we have a 3D transformation that's just the camera position in space, with this nice perspective.

Then, again, there are other parameters for later, but here we have the path and the output frames array, so all the frames are here, and then we use an Iterator to draw these frames along the axis. Here we go, the images; then we do some math for the aspect ratio of the sprites and the position on the axis, simple math, and then we draw them using this patch. So it's not hard, and it's pretty cool, pretty nice. Alright, let me go back to the slides.

So that's our first example. Now I'm going to show you how you can leverage QC to enhance the visual impact of your composition and its interactivity. First of all, a pretty cool technique: how to navigate smoothly in the QCView when you have different camera positions. Let's say you have an overview camera position and a close-up camera position and you want to move smoothly from one camera position to the other; it's pretty easy to do in QC, and I'm going to show you how. Then, let's say you want to browse your movie and you want to define a focus point, which would be the focused frame of your movie.

Around this focus frame, let's say you want to get close to it: you would like to have full resolution on the frame itself, so a lot more detail, and full temporal resolution, so you would like to have more frames around it. To do this we need to improve the plug-in, in exactly the same manner: we're going to add a thread for the full-resolution frames, and it's going to work exactly the same way.

And then, because we want to make this focus frame really stand out, what I'm going to show you is how to do a pretty simple and pretty cool Cover Flow-like effect. And reflections are not only for the Dock, they're also for QC, so we're going to add a little reflection on the frames so that everything looks pretty good.

Alright, I need to go back to the demo machine to show you this new composition. So here we go, we have flowers, a pretty good movie of flowers. Here you have the 3D transformation, you see this rotation and translation: the position of the camera in space is actually coming out of, let me enlarge it a little bit, a JavaScript patch that is outputting these positions, and then the position and rotation go through Smooth patches. So that's the technique: the JavaScript patch stores the camera rotations and positions and returns, given an input index, the associated position and rotation from the array. And because these positions and rotations go through Smooth patches, we get this very nice smooth transition as we go from one to the other. So we have a very cool look at the first frame, and this side view is pretty nice.

So you can have different views on your movie. Then, as I said, let's define a focus frame: it's here, right here you see this little thing moving around, and you see the background thread loading frames below, in the gray area. Around this area every frame is loaded, and that's a thread doing that: we give it an input index, an input number which is a location in the movie, and around this location it's going to load every frame, and at high resolution too. So if you look from this view, you can see that around that point you have full resolution.

That's nice, but in this stack we don't really see anything, so it's not very useful as is. So let me go and add something: here, inside the composition, in the iterator, this is the function that simply computes the position of the frames on the z axis and lines them up.

So one gives one, two gives two, and so on, just like that. I'm going to replace this with a little math function that you can look at, since the composition is available, it's simple, not too hard. It's going to simply spread the frames out around the focus point, so let me connect this, turn this function on, and you see here, this looks pretty cool: you can already browse your movie and you already see this frame over here which is at full resolution.

And everything is still pretty smooth; we can go here and see. Now, another thing I'm going to add: here I have this JavaScript patch that makes the focus point move at what would be the playback speed of the movie. I'm going to insert it into my stream here, and it's going to be linked to the play button, and that gives this kind of very nice playing effect.

You can go and move around, and that's kind of nice. What's pretty cool is that we've recreated the movie at its initial resolution, because the images in the frames are static; it's the frames of the movie that are moving in the 3D environment, and from that we create the impression of motion; it's not the images that are changing.

So here it is. Last thing: add some reflection. Here we go; the reflection is pretty much flipping the sprite and adding a fade effect, so we have the reflection effect there. I'm going to turn that on, the key here, set it to true, and you see now, pretty quickly, we get something which is pretty nice and stays interactive. So now let me show you the final application; the code is pretty much all Cocoa bindings and stuff like that.

We're waiting for Xcode, compiling, it's a real-time demo. So we have a nice UI, I did a bunch of stuff, not too much, but here we go: we have a little library of movies and we're trying to locate our shot within that library, and you see I linked the focus point to the mouse position here, so you can scroll here with little indexes and stuff like that.

And so, camera positions here: I can go to camera two and we zoom in, we have frame-by-frame editing here, bound to the arrow keys, so we can go and look for something. And this third camera position, I kind of like it, it does this slow swing, which is a trippy thing; it's just a new function, so it's kind of cool.

(applause)

And, thanks, so you know, it's just another function on the rotation; we have one for the position and one for the rotation, pretty much the same thing.

So it's nice, it adds to the film effect, and you still have the reflection going under it. We can go and look at our close-up and see what we have here; it's still pretty smooth, and we have the full HD resolution, and we have a pretty cool caching system in QC, so all the textures and everything, we manage that for you in a very optimal manner.

So let me start playing. Let's say I want to find my shot, so I go there, to the fishes, because I was at the beach and I want to find my shot. Let me zoom in, and you can play and scrub, that's it, I found it. Now let me click on the start position, so we have an in point that's over here, and let me go to the end position, here we go. Let's go back, and we can go to the overview to see if it's exactly what I want, so, yeah.

Yeah, that's it, I recognize the colors and the shading, so that's it. That view we just exported to the desktop, so it's my... and we have my fish movie, and there it goes. So I got my shot pretty quickly, just as a reference QuickTime movie. So I used my application to find my shot from my awesome vacation at the beach.

(applause)

But you know, it's pretty easy to experiment and go even further; I couldn't stop there, so I kept going.

(laughter)

So you know you have Core Image and QC in there, so you can apply real-time parametric effects: you can adjust the brightness, the saturation, the contrast and everything, and let's put on a little black-and-white effect. You can do all the image effects you want; it's just an example, but that's pretty cool. So let's put it back to normal.

So I'm going to erase that stuff, but I could not just stop there, and you know we have lots of cores on those machines, so let's say let's have another thread to show image information, because we have a new way of seeing the movies, and you have all this space which is there. So let me go back to the flower movie because it has lots of color, and so what about using all this 3D space to show an image histogram, so information on the images.

(laughter)

So we have this, and it's pretty nice because you see this little guide, you know, the histogram going from one side to the other as you change the frames, so you can go check out the other camera position; if you go close, you have these sticks around, updating in real time. So... (applause) and there's not that much OpenGL in there, so let me go back to the slides please.

Alright, so just imagine the possibilities, because I had to stop, because I had to work on my slides (laughter) and, you know, the visualizer stuff is coming in a second, I also had to work on that, so imagine the possibilities. Let's say, maybe the next generation of editing, in 3D: you can imagine selecting shots and all this stuff translating around so you can clearly show where your shots, where your edit points start and end; you can export that to QuickTime, or use a Quartz Composer plug-in to a Final Cut format so that you can continue editing in Final Cut. You can also imagine showing the image difference to show where a shot starts and where a shot ends: where you have a difference you plot this curve in space, and of course with some thresholding you could make that automatic.

For the science people, you can also, and I thought it would be pretty nice to do, you can do motion or object tracking: you could track an object, visualize the curve in space, and see if it's continuous or not. So you know you have lots of possibilities, and with QC, all these new patches that are available, the new APIs, you're hardly going to be limited and you're going to be very efficient, so you can go very fast doing this. And again you have the source code, you can check it out; it's really not a couple of months of work, it's just a couple of weeks. So now let me talk to you about the Quartz Composer Visualizer.

So the Quartz Composer Visualizer is this wall of displays; it is the application that drives the Quartz Composer wall display that is in the graphics and media lab. This wall is made of nine Mac Pros with ATI X1900 XT cards connected to 18 30-inch displays, so each Mac Pro drives two monitors, and we need this card and this power because we have two 30-inch monitors per machine, so we need the Mac Pro; but if you have, say, plasma screens, you can work with all kinds of computers.

And so the Quartz Composer Visualizer is the application that makes this possible, spanning a composition over multiple displays or over the network, and it's a new developer tool which is in /Developer/Applications/Graphics Tools. And it's free, and even better than free, because the source code is provided.

So you can hack it yourself. So how does it look? Well, that's the local case, so it's about as simple as you can get: you can span your composition over multiple displays connected locally to your machine, you just drag your composition in, click on start, and boom, it's going to work as expected.

Then the network case, so that's the full-screen configuration that we have for the wall; there are a few more options for network synchronization, but that's pretty much it: you drag your composition in, you click start and it's going to span, and we have a little UI to position the screens so they basically correspond to their physical locations.

So now I'm going to walk you through the roots of this application design. First of all, locally, how does it work to span a composition over multiple displays; and then, how do you span it, just like the wall configuration, over the network? So locally you need a Mac and displays, let's say two. First you're going to set up one Quartz composition per display, then you set up a full-screen OpenGL context on each display, and then you create a QCRenderer out of this composition and full-screen OpenGL context object.

Then you need to tell this QCRenderer to render, so you can use different techniques: a timer, a timer in a separate thread per screen, but the technique we recommend you use is a Core Video display link, which is just like a timer in a separate thread per display but synchronized with the display's refresh rate, so it's better for performance.
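As a rough sketch of that per-display setup, not the actual Visualizer source, it could look something like this in Objective-C; the function name, the composition-path handling, and the time computation here are just assumptions:

    #import <Cocoa/Cocoa.h>
    #import <Quartz/Quartz.h>
    #import <CoreVideo/CoreVideo.h>

    // Display link callback: render one frame of the composition for this display.
    static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef link,
                                          const CVTimeStamp *inNow,
                                          const CVTimeStamp *inOutputTime,
                                          CVOptionFlags flagsIn,
                                          CVOptionFlags *flagsOut,
                                          void *context)
    {
        QCRenderer *renderer = (QCRenderer *)context;
        // Simplified: a real app would compute the time relative to a start time
        // and flush the context's buffer after rendering.
        NSTimeInterval time = (NSTimeInterval)inOutputTime->videoTime /
                              (NSTimeInterval)inOutputTime->videoTimeScale;
        [renderer renderAtTime:time arguments:nil];
        return kCVReturnSuccess;
    }

    // One full-screen context + QCRenderer per display, driven by a display link.
    void SetUpDisplay(CGDirectDisplayID displayID, NSString *compositionPath)
    {
        // Full-screen pixel format restricted to this display
        NSOpenGLPixelFormatAttribute attributes[] = {
            NSOpenGLPFAFullScreen,
            NSOpenGLPFAScreenMask,
            (NSOpenGLPixelFormatAttribute)CGDisplayIDToOpenGLDisplayMask(displayID),
            NSOpenGLPFAAccelerated,
            NSOpenGLPFADoubleBuffer,
            NSOpenGLPFADepthSize, 24,
            0
        };
        NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc]
                                            initWithAttributes:attributes];
        NSOpenGLContext *context = [[NSOpenGLContext alloc]
                                            initWithFormat:format shareContext:nil];
        [context setFullScreen];

        // One QCRenderer per display, built on that context
        QCRenderer *renderer = [[QCRenderer alloc] initWithOpenGLContext:context
                                                             pixelFormat:format
                                                                    file:compositionPath];

        // Drive rendering with a display link synchronized to this display's refresh rate
        CVDisplayLinkRef displayLink;
        CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
        CVDisplayLinkSetOutputCallback(displayLink, MyDisplayLinkCallback, renderer);
        CVDisplayLinkStart(displayLink);
    }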

Then you need to specify what subregion needs to be rendered on each display, and for this you just transform the OpenGL projection matrix, just before calling the renderAtTime: method on your QCRenderer. So we just do a translate and a scale so that the right subregion ends up on screen as expected, and then you revert that transform after you've called that method.
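Continuing the sketch above, the render call for one display could be wrapped like this; the tile math assumes a simple zero-indexed grid layout and an OpenGL context that is already current on this thread, so it only illustrates the translate-and-scale idea, it is not the Visualizer's actual code:

    #import <Quartz/Quartz.h>
    #import <OpenGL/gl.h>

    // Bias the projection matrix so this display renders only its own tile of the
    // overall canvas, then restore it afterwards.
    static void RenderTileAtTime(QCRenderer *renderer, NSTimeInterval time,
                                 int tileX, int tileY, int columns, int rows)
    {
        glMatrixMode(GL_PROJECTION);
        glPushMatrix();

        // Scale up to the size of the whole wall, then shift so that only this
        // display's tile (counted from the bottom-left) falls in the visible area.
        glScalef((GLfloat)columns, (GLfloat)rows, 1.0f);
        glTranslatef(1.0f - (2.0f * tileX + 1.0f) / columns,
                     1.0f - (2.0f * tileY + 1.0f) / rows,
                     0.0f);

        [renderer renderAtTime:time arguments:nil];

        // Revert the transform once the frame has been rendered.
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
    }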

So all the drawing is going to be perfect. This stuff is going to work in most cases, but let's say you have a game where you have some state, one big piece of state which is the lives of the player. Because you have one composition per screen, if you die on the left screen, that state is going to be reflected on the left screen, but you might still be alive on the right screen, because each copy keeps its own state.

So having a composition per screen, in that kind of particular case, is not going to work, so the solution we took is to split the composition into a processing composition and a rendering composition. The processing composition is going to do the computation only, the game logic for instance, and keep track of all the lives, and the rendering composition is going to do the drawing only.

So the computation only on one side and the drawing on the other side, and then we're going to have one processing composition per screen, sorry, one processing composition per computer, and one rendering composition per display. And the way this is going to work is that the processing composition is going to forward its output ports to the input ports of each rendering composition on the displays.
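A minimal sketch of that forwarding step, assuming the processing and rendering compositions expose matching port keys, and that processingRenderer and displayRenderers are instance variables you set up elsewhere:

    // Run the processing composition once per frame, then copy its outputs to the
    // inputs of each per-display renderer before drawing.
    - (void)renderFrameAtTime:(NSTimeInterval)time
    {
        // Advance the game logic; this composition draws nothing.
        [processingRenderer renderAtTime:time arguments:nil];

        for (QCRenderer *renderer in displayRenderers) {
            // Forward every output port of the processing composition to the
            // identically named input port of the rendering composition.
            for (NSString *key in [processingRenderer outputKeys]) {
                if ([[renderer inputKeys] containsObject:key])
                    [renderer setValue:[processingRenderer valueForOutputKey:key]
                           forInputKey:key];
            }
            [renderer renderAtTime:time arguments:nil];
        }
    }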

So that's going to solve that kind of problem. Now the network case: we have a host computer and we have a bunch of clients, and the host needs to know where the clients are, so they're going to use Bonjour so that the host can find them, and once that's done we're going to create a TCP connection between the host and each client. And the TCP connection is going to be used to send the screen configuration, the composition files, play and stop messages from the host to the clients, and application parameters, for instance the screen resolution and so on.
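Here is a small sketch of the Bonjour part only, using NSNetService and NSNetServiceBrowser; the service type, port, and class name are made up and not taken from the Visualizer source:

    #import <Foundation/Foundation.h>

    @interface VisualizerDiscovery : NSObject {
        NSNetService *service;          // published on each client
        NSNetServiceBrowser *browser;   // used on the host
    }
    @end

    @implementation VisualizerDiscovery

    - (void)publishClientServiceOnPort:(int)port
    {
        // Each client advertises itself on the local network.
        service = [[NSNetService alloc] initWithDomain:@"local."
                                                  type:@"_qcvisualizer._tcp."
                                                  name:@""   // defaults to the computer name
                                                  port:port];
        [service publish];
    }

    - (void)startBrowsingForClients
    {
        // The host looks for advertised clients.
        browser = [[NSNetServiceBrowser alloc] init];
        [browser setDelegate:self];
        [browser searchForServicesOfType:@"_qcvisualizer._tcp." inDomain:@"local."];
    }

    - (void)netServiceBrowser:(NSNetServiceBrowser *)aBrowser
               didFindService:(NSNetService *)aService
                   moreComing:(BOOL)moreComing
    {
        // Resolve the client's address; once resolved, the host would open its TCP
        // connection and start sending the configuration and composition files.
        [aService setDelegate:self];
        [aService resolveWithTimeout:5.0];
    }

    @end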

Something I would like to say here is that this Quartz Composer Visualizer application, and the technology behind it, Quartz Composer, is not like the X11 kind of thing. In X11 you send instructions over the network; here it's not that. We have the Quartz composition, a recipe for rendering graphics, so we transfer this composition once and for all to the clients, then it's going to be played locally, and only time is going to be synchronized. What that means is that in this case we're going to have full performance everywhere, so it's important to say that. In the case of the processing/rendering split, the processing composition is going to run on the host and forward the computed data to every client.

And yes, so that's pretty much it, so you can go and check out the source code yourself if you want some more details about that. We have first the application directory that deals with all the events and the UI in the interface. We have the network directory that deals with all the Bonjour and TCP connections.

Then we have the renderers directory, which has the processing and rendering renderers, so you can see where that's done, and then the extras, which is just a bunch of objects for describing screen configurations and stuff like that and sending them over the network. Alright, so go and come check out the wall; we have open hours when you can try your own composition, and if it doesn't look great, at least it looks big.

(laughter)

So let me go back here.

(applause)

Hello, OK, thanks. Thanks Kevin for all of those great demonstrations. So, rapidly, a few slides to finish; I'd like to talk about optimizing the API usage. We have a lot of new APIs in Quartz Composer and two big things you can do for performance. The first one is: be careful with color management. We now pay strict attention to that in the new image pipeline, so if your data is not properly tagged, or tagged wrongly, it will not look right, or it will just slow everything down because we have to do extra work for color matching.

So for instance, try to stay away from device color spaces because they're not always properly defined; you should use the generic color spaces instead. NSColor, NSBitmapImageRep, those kinds of objects, those classes do not necessarily have a well-defined color space either in all cases, so it's better to use CGColor or CGImage and so forth. And if you deal with video data, we have new profiles built into Leopard, into the system, so that you can tag it properly, with like Rec. 709 for HD content.
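For example, a minimal sketch of tagging bitmap data with a generic color space before handing it to Quartz Composer, assuming an 8-bit RGBA buffer (the function name and buffer layout are made up for the example):

    #import <ApplicationServices/ApplicationServices.h>

    CGImageRef CreateTaggedImage(void *pixels, size_t width, size_t height,
                                 size_t bytesPerRow)
    {
        // Generic RGB is well defined everywhere; device RGB depends on the display.
        CGColorSpaceRef colorSpace =
            CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

        CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                                     8, bytesPerRow, colorSpace,
                                                     kCGImageAlphaPremultipliedLast);
        CGImageRef image = CGBitmapContextCreateImage(context);

        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        return image;   // properly tagged, ready to pass to setValue:forInputKey:
    }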

And also I'd like to point out that to really increase performance in complex uses of Quartz Composer, you're likely going to have multiple QCViews or QCRenderers and pass data between them. You used to be limited to using NSImages, which are really not great for performance because it might imply downloading from the GPU, re-uploading, all those kinds of things.

So what we have now, through the new valueForOutputKey:ofType: method, is that you can use a special type called QCImage, which will return an opaque object you cannot do anything with, but you can use it as a token to pass to another QCRenderer, and that's basically an internal image representation for Quartz Composer, and it's super optimal, the best it will be.
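As a minimal sketch, assuming a source renderer with an "outputImage" port and a destination renderer with an "inputImage" port (those port names are made up):

    // Hand an image from one renderer to another without leaving Quartz Composer's
    // optimized internal representation.
    id token = [sourceRenderer valueForOutputKey:@"outputImage" ofType:@"QCImage"];
    [destinationRenderer setValue:token forInputKey:@"inputImage"];
    [destinationRenderer renderAtTime:time arguments:nil];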

Same thing for passing structures around: you can ask for a QCStructure opaque object. For more information, Allan Schaffer is our Graphics Evangelist, so please refer to him, and once again we have a great public mailing list you're more than welcome to join, and we're actively participating on that list.

We have our lab this afternoon; actually most of us are going to go there right now, during the lunch break, and we'll be there all afternoon in the graphics and imaging lab. The OpenGL and Core Image people will be there as well, so it's going to be a great lab, please drop by if you have any questions.

Now, what to remember from these sessions? Well, really, I'd like you to consider how Quartz Composer can help you in your next project, and the fact that it's not only for graphics, because even if it's mostly used for that today, you can definitely build a composition that does pure data processing and outputs other data without rendering a single thing on screen. Also, you can do pretty complex things if you want to integrate it into an already existing application, and QCRenderer might be your best friend for that because you really have precise control over the rendering.
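For instance, a minimal sketch of using an offscreen QCRenderer as a pure data processor, with a made-up composition path and port names:

    #import <Quartz/Quartz.h>

    static id ProcessValue(double input)
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        QCComposition *composition =
            [QCComposition compositionWithFile:@"/path/to/Processing.qtz"];
        QCRenderer *renderer =
            [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(64.0, 64.0)
                                           colorSpace:colorSpace
                                          composition:composition];
        CGColorSpaceRelease(colorSpace);

        // Feed an input, evaluate the composition once, read back the output;
        // nothing is ever drawn on screen.
        [renderer setValue:[NSNumber numberWithDouble:input] forInputKey:@"inputValue"];
        [renderer renderAtTime:0.0 arguments:nil];
        id result = [renderer valueForOutputKey:@"result"];

        [renderer release];
        return result;
    }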

And finally, you know, it's now completely possible to extend Quartz Composer by writing your own patches, and you can do that in a simpler way using the virtual patch technique I described at the beginning, or you can write your own plug-ins, and this time you have to use Xcode and Objective-C 2.0.