WWDC06 • Session 210

Developing with Core Image

Graphics and Media • 52:26

Core Image provides high-performance and GPU-assisted image processing. By harnessing the tremendous pixel processing power of the GPU or the vector execution unit of the CPU, Core Image performs complex per-pixel imaging operations at blistering speeds to create spectacular visual effects and transitions. In this session, you'll see how to add image processing to your own application using any of the 100 built-in effects. We'll also show you how to create your own custom algorithms and deploy them as Image Units. This is a must-experience session for developers of image enhancement software, video effects systems, color management solutions, and scientific visualization packages.

Speakers: Ralph Brunner, Frank Doepke

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Hello. Welcome to the Developing with Core Image session. Yeah, that's me. So here's the agenda for this presentation. I'm going to talk a bit about what happened with Core Image in essentially the last year, since I was up here last time. And for those people who actually missed last year's presentation, I'll give a brief overview of what Core Image is all about.

And then we're going to switch immediately to what's actually new in Leopard. We're talking about new APIs and features, the improvements we did to enable debugging and performance tuning of Core Image filter chains, how to build user interfaces for filters, and how to make your own Image Unit.

So, yeah, what happened so far? So Core Image was introduced two years ago at WWDC and then shipped last year in April. And by now, the entire hardware product line that Apple sells is able to support Core Image in GPU accelerated mode. And many clients have started to use the API. So here's a list of clients that are internal to Apple. So there's really big applications like Aperture. There's things like Dashboard, where Core Image is doing the ripple effect, and these kind of things. New in Leopard-- excuse me.

Resolution Independence uses Core Image, Time Machine uses Core Image, and there's plenty of new things in iChat and so on. But kind of the real stars of the adoption of Core Image is our third party applications. So here's a selection I couldn't really test all applications I could find. And these span the gamut from, well, doing something on video or on images, kind of the bread and butter thing that you would do with Core Image, to games that use flame effects that happen to be Core Image filters. And yeah, everybody likes OmniDazzle.

So here's an example of a custom third-party Image Unit. The Chocoflop Image Unit contains about five filters, and one of them is this leaf distortion effect. So an interesting anecdote here: how did I get that Image Unit to work in Keynote? What I did, I built a Quartz Composer composition which uses the third-party Image Unit.

And that does all the timing. And then exported that as a QuickTime movie. And the QuickTime movie exported from Quartz Composer is just the wrapper. So it's still the Quartz Composer composition that gets rendered in real time. It's just packaged in such a way that anything that can play QuickTime can play that back. And that's how this got into Keynote. So Keynote is actually playing a third-party image unit, even though their programmers probably never thought about that.

And it's a great way for you if you, you know, would like to have custom motion backgrounds in Keynote or so: you can write an Image Unit and put it in there.

or specular highlights, which are done with a filter. And there's another filter which takes the hard shadow underneath the card and turns it into a soft shadow. And to me, this is a really compelling example of what you can do if you combine vector art with filter effects, because this is still scalable. You can print that at 600 DPI and get a really beautiful rendition of this.

So, kind of a summary for people who have missed last year's presentation. Core Image is an image processing library. It enables GPU-accelerated image processing, and it has a full floating point, full color-managed pipeline. And we'll go over a slide a bit later about what that means. There's a base set of over 100 filters, and then there's a plug-in architecture, which we call Image Units, to extend that base set. So this is where third-party developers come in and can build their own.

Filter kernels are expressed in a subset of the OpenGL Shading Language. The pieces that don't really make sense in terms of image processing, like fog, these kinds of things that OpenGL does, we left out. Other than that, it's pretty complete.

And the nice thing about that is this is an architecture-independent way of describing image processing. So for those people who wrote Image Units and then moved to Intel, they essentially didn't have to do any work, because it was essentially just a C string with instructions about what the filter is supposed to do, and the runtime just compiles it on the new platform. And hopefully, the next time something wild like that happens, we can pull off the same trick.

There is also a just-in-time compiler under the hood that can take these kernels and compile them to the CPU if the GPU is not available or for some reason the GPU is busy and you kind of want to offload stuff onto the CPU. And that just-in-time compiler can compile to the Velocity Engine on the PowerPC chip and to SSE3 on Intel chips, and it works in 64-bit.
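As a rough illustration of what such a kernel looks like in practice, here is a minimal Swift sketch that compiles a tiny color kernel from a source string at runtime. The kernel name, the brighten logic, and the placeholder image are made up for the example, and today you would more likely write kernels in Metal.

```swift
import CoreImage

// A tiny color kernel written in the Core Image Kernel Language
// (the OpenGL Shading Language subset described above). The same
// source string gets compiled for whatever hardware is present.
let kernelSource = """
kernel vec4 brighten(__sample pixel, float amount)
{
    return vec4(pixel.rgb + amount, pixel.a);
}
"""

// Compile at runtime; returns nil if the source does not compile.
let brighten = CIColorKernel(source: kernelSource)

// Apply it to a placeholder image.
let input = CIImage(color: .gray).cropped(to: CGRect(x: 0, y: 0, width: 256, height: 256))
let brighter = brighten?.apply(extent: input.extent, arguments: [input, 0.2])
```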

So kind of the philosophy behind the API is that Core Image follows the lazy evaluation model, well, as far as it is reasonable. What that means is: so you have your object, which is a CIImage, and you pass that to a filter. Let's say you do something like undo a barrel distortion from your camera.

So you get a new image out, which has that distortion removed. Then you apply another filter, which is, say, a color correction filter, because the sunset isn't warm enough or whatever you like. And you get a third image out. And then you pass that through, I don't know, a sharpen filter to make things crisp, and you get your fourth image. All of these operations actually didn't work on the bits.

What really happened in the background is the original data was referenced, and there's a little sticky note that the system attaches to it that says to do, barrel distortion undo, what was it, color correct, and sharpen. The time when the image is actually evaluated is when you go and draw it. So that has a bunch of interesting benefits.

So the first benefit is you get higher performance in general, because the runtime can concatenate all these filter operations into a single pass over the image. And generally, you're not going back and forth to memory, which helps performance. This particularly helps performance in the CPU case. GPUs usually have a lot more memory bandwidth, so the problem still exists, but it's not as severe.

Similarly, when you actually draw your sub-rectangle of that image in the end, say it's actually in a scroll view and only a part of it is visible, then only that part needs to be evaluated. So there's a huge performance gain there. There's also a precision gain, because everything that happens in a filter kernel is expressed as full floating point operations. So if you don't go back to a buffer which then, say, clamps stuff down to 16-bit ints or so, then you get a precision gain.
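A minimal Swift sketch of that lazy-evaluation idea, using stand-in filters (color controls and sharpen) rather than the exact chain from the talk; the file path and the visible rectangle are assumptions:

```swift
import CoreImage

let imageURL = URL(fileURLWithPath: "/tmp/photo.jpg")   // assumed path
let original = CIImage(contentsOf: imageURL)!

// Each call only records work on a new CIImage; no pixels are touched yet.
let adjusted = original
    .applyingFilter("CIColorControls", parameters: [kCIInputSaturationKey: 1.2])
    .applyingFilter("CISharpenLuminance", parameters: [kCIInputSharpnessKey: 0.6])

// Evaluation happens here, and only for the sub-rectangle being drawn.
let context = CIContext()
let visibleRect = CGRect(x: 0, y: 0, width: 512, height: 512)
let rendered = context.createCGImage(adjusted, from: visibleRect)
```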

So I was mentioning full color managed pipeline before, and I'm trying to explain what that means. So essentially, images come in and they have color spaces attached to them. So one is Adobe RGB, one is sRGB, and so on. And before they enter the system and any processing happens, the data gets converted, which is really just another filter that gets inserted by the runtime.

The data gets converted to a working color space. And all the processing happens in that working color space. And at the end, before you draw, another color conversion, which in practice is just another filter that gets attached at the end, matches the colors into the target device's space, say your display color space.

So the first color matching, going from images into the working space, is necessary because you want to be able to composite several images from several color spaces together. So you kind of have to unify that. And the last color matching is necessary because you want all your filter operations to be device independent. A blur shouldn't look different on display A versus a printer. So that's why all operations happen in that canonical working space.
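For completeness, this is roughly where the working and output spaces live in today's Swift API; the option names are the current CIContextOption constants, not necessarily the Tiger-era ones, and in most cases you would leave the defaults alone:

```swift
import CoreImage

// Inputs are matched into a light-linear working space, processed there,
// and matched to the destination on the way out.
let linear = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)!
let display = CGColorSpace(name: CGColorSpace.sRGB)!

let context = CIContext(options: [
    .workingColorSpace: linear,   // canonical processing space
    .outputColorSpace: display    // match into the target device
])
```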

The default working space is generic RGB HDR, and in most cases, you will not need to change that. And generic RGB HDR has a bunch of properties which are important for image processing. So the first one is light linear, and that essentially means the values that you have in your color components have a linear relationship to the amount of photons that come out of your display at the end.

Or you can look at it the other way around. The amount of photons that hit the sensor in your camera, the number that you get out is proportional to that. And that allows you to do things like do exposure adjusts in software and actually get the same result as the exposure adjusts on your camera. These kind of things.

The second point is the infinite gamut. So for the filter writer, infinite gamut just means values for RGB can be outside the 0-1 range. So you can have values that are bigger than 1, so you have super bright pixels like specular highlights in images, you know, if there's a sunset in there, these kinds of things.

Values can also be negative, and that's typically for neon colors, colors that are outside the usual gamut, the color triangle, if you have seen those diagrams. Okay, so after that brief overview, I would like to invite Frank Doepke up to tell us all that's new in Leopard.

[Frank Doepke]

Thank you, Ralph. Also, welcome from my side to WWDC. My name is Frank Doepke, and I'll be talking about all the new features that we have for you in Core Image. So let's start by looking a little bit at our new APIs. We've done refinements all over the place based on your feedback, so we started actually, for instance, having more convenience functions for the common tasks. You will find those in our headers.

Then, for lots of the common keys in the filters, we now have constants. So this allows you, in Xcode, to use a constant for, like, input image or output image, so you can use code completion. So fewer typos in that code as well.

And then we have, for instance, now documentation for the filters. This means actually that the filter can give you a small description and even, like, an HTML page that shows you all the features of that filter. And for those of you who write Image Units, you can provide those also. So even your documentation can be seen and presented to the user. I'll talk a little bit more about this in some samples later.
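A small Swift sketch of what those constants and the per-filter documentation look like in practice; the Gaussian blur filter is just an example:

```swift
import CoreImage

let blur = CIFilter(name: "CIGaussianBlur")!

// Constants instead of hand-typed strings, so code completion helps.
blur.setValue(10.0, forKey: kCIInputRadiusKey)
let output = blur.value(forKey: kCIOutputImageKey)

// The filter can describe itself, and can point to reference documentation.
let summary = CIFilter.localizedDescription(forFilterName: "CIGaussianBlur")
let docsURL = blur.attributes[kCIAttributeReferenceDocumentation]
```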

And then: clip into your bindings. I'm not inviting you to go skiing at this moment, but we have something new, and that is that you can now actually observe your output image. So this is important, actually, because this allows you to automatically catch all the updates in a filter chain.

When you, in the past, tried to chain multiple filters and you change something-- let's say you pipe an image into filter A, then from there you take that result and put it into filter B, and that's what you want to draw on screen. Now you change a parameter on filter A.

That output image that you originally put into filter B has not changed, because your images are immutable. So what you had to do in your code was actually, well, something changed in filter A, now I have to propagate this all the way through the chain. Now that you can actually bind the output image from filter A to the input image of filter B, it will just happen automatically. And what you do at the end, on filter B, is you just register yourself as an observer.

You register yourself as an observer to the output image there, and that just triggers your drawing code. So this all happens now for you automatically. Somebody changes the filter on A, and you get the change on your result, and your drawing code even gets called automatically. So this really reduces the workload for you in terms of writing code to handle all your filter changes. And when you look at some of the sample apps that are available now, you will see that it's really much less code.
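A hedged Swift sketch of that pattern: bind filter B's input to filter A's output, then observe the end of the chain and redraw. The filter names and myView are placeholders, and whether key-value observation actually fires for outputImage depends on the behavior the session describes:

```swift
import AppKit
import CoreImage

let filterA = CIFilter(name: "CIGaussianBlur")!
let filterB = CIFilter(name: "CISepiaTone")!
let myView = NSView()   // stands in for your real drawing view

// Changing any parameter on A now propagates into B automatically.
filterB.bind(NSBindingName(rawValue: kCIInputImageKey),
             to: filterA,
             withKeyPath: kCIOutputImageKey,
             options: nil)

// Observe the end of the chain and trigger drawing when it changes.
// Keep a reference so the observation stays alive.
let observation = filterB.observe(\.outputImage, options: [.new]) { _, _ in
    myView.needsDisplay = true
}
```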

And then let me talk about QuartzGL. QuartzGL is new in Leopard. And with that part, what we actually allow you to do is that you can use now a regular NSView and create your CIContext from it and still get the full hardware acceleration. So going back to Tiger, what happened there was basically, when you wanted to get the full performance, you wanted to create an NSOpenGLView. So you needed to write a little bit of OpenGL code, or you could use our samples, the sample CI view or the sample CI scrollable view, to get the full performance running on the GPU.

And that is, of course, a little bit inconvenient. So in Leopard, you can actually use an NSView. Now, this is not 100% true in the seed that you have already now, but it will be true in the end when Leopard finally ships. So the full performance is not there. It is definitely faster already, but it's not as fast as it will be at the very end. So this will be something to make life a little bit easier for you.

Next step, as Ralph already mentioned, one of the big features of Core Image is that you don't just want to run one filter, you actually want to concatenate multiple filters, and thereby you create a little more complex effects. And for that we have something new for you, and that is the CIFilterGenerator.

So this allows you to take multiple filters, put them together and create a complex effect. And you can reuse those because they are now wrapped into one filter, kind of like a macro actually, that's how you can see those. And you can install them actually on disk and you can reuse them in your applications.

And we do have also an editor for it that I will be showing to you in a minute. It's not on your disk, so don't search for it right now. But it will actually show you how to visually-- you can create your graph with little magnets, and you see really what the results of your filters are.

And this is a great tool that we envision that later on artists can use to create effects for your applications. And you just have to load those. You don't even have to write any code to really take advantage of this. And with that, I would like to give you a demo of what the filter generator can do.

Okay, so this is our filter generator. What I can simply do now, I start with an image. That didn't work. Let's try it once more. OK, there we go. So I start with an image. And now I can use a filter here. Let's say I start with something-- and it's not very visually pleasing, but at least it can hopefully be seen also in the last row. So I start with a pointillize filter. And I want to compare it actually with something like pixelate.

Let's drag this in. So I can pipe my image in like this. And I see already in the right part, OK, this is how pointillize looks. Now I use pixelate. And now I use a transition actually at the very end. Let me say I use the copy machine.

"Let's make this in. And hook this up. So this is a very, very simple graph. Now what I want to do is actually, I want to compare those effects. So I can actually say, well, I'm interested in seeing the result of the input radius. I'm going to actually export that key to what the input scale can do.

And then in the copy machine, let me just adjust it a little bit so it looks a little bit better. And I know that image actually is a little bit bigger. I want to see the input time. So now I can save this out. If I can type.

Okay. And now I wrote a little sample application that uses these filter generators. So now I can open that one. This was the one that I just created. And now I can actually go in here and actually compare the stuff. So you see, this is just an image that I pipe in, and I can simply also say, OK, well, this is what happens if I use the pixelate part, and this is what happens when I use my pointillize filter.

So that is a very, very simple example of that. So let me just give you one additional part to this, actually how much code was involved in doing this. And that's it; the open panel takes up most of the space here. All I need to do is load the filter and use it. Okay, now this was a very, very simple filter. Let me try to create a little more compelling example for you here. So I actually start with some text that I created as an image. And now we'll start by inverting it.

And then I mask that, create a height field from it. So you see this is getting a little more complex now. And then I actually want to use some shaded material. And I pipe this in here. So you can already see here on the right-hand side that we get some nice effect with that text. But it's not quite right yet. What I want to achieve is actually that this text looks like chrome. So let me drag an image in for this.

And now I have a nice chrome effect where I can actually even change... Huh? A little glitch here on the graphics card. Okay, well, please ignore that. But you might say, chrome is normally not orange, so let me actually change that as well; I can actually break this connection and change this.

Make this a monochrome. And now this looks already much more like chrome, and I can actually pick whatever color I want here. Or I can go to my color wheel, and you see how nicely that actually changes the color of the text. So this is like a chrome effect. And let me even change this by taking this color invert out. And now, actually, I have something that looks almost like an oil kind of text. So this is our filter generator editor. And with that, I would like to go back to the slides.

Now think of what somebody who really has graphics skills can do with that. Okay, so you saw it's all about connecting things. So let's have a look at the API, how we connect stuff. So you want to connect a filter to another filter. What you have is: you connect it as a source object, and you say, okay, which key of that filter do I want to connect to the target object, and to which key there? One special thing that you can do with the filter generator is that you can actually use an NSString, which is the path of an image, and you can put that in there as a source object.

So you can say, like an environment map, where you simply take the path of that image and set it up as the source object that I pipe in, like the ball I used, for instance, here, and set this as my shaded material. That's my environment map. We will not save that whole image in the graph.

We just save that reference for you and then resolve that on the fly. So that makes it actually really convenient for these things. And then, of course, when I want to disconnect, as I've done earlier in the demo, I simply have, of course, a disconnect call for this as well.
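Sketching that in Swift, with the Swift spellings assumed from the Objective-C selectors connectObject:withKey:toObject:withKey: and disconnectObject:withKey:toObject:withKey:, and the environment-map path purely illustrative:

```swift
import CoreImage

let generator = CIFilterGenerator()
let pointillize = CIFilter(name: "CIPointillize")!
let shaded = CIFilter(name: "CIShadedMaterial")!

// Wire one filter's output into the next filter's input.
generator.connectObject(pointillize, withKey: kCIOutputImageKey,
                        to: shaded, withKey: kCIInputImageKey)

// A plain path string can also act as a source object, e.g. for an
// environment map; only the reference is stored in the graph.
generator.connectObject("/tmp/EnvironmentMap.jpg", withKey: nil,   // assumed path
                        to: shaded, withKey: kCIInputShadingImageKey)

// And the counterpart when rewiring things:
generator.disconnectObject(pointillize, withKey: kCIOutputImageKey,
                           to: shaded, withKey: kCIInputImageKey)
```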

You saw that I was able to take some parameters of that filter and export those. What does that actually mean? When you have a filter, you want to give your client the ability to change some parameters in it. So, in that form, we just export a key. This allows the client to change, as I showed you, the radius or, like, the time on the transition.

Now, there's one interesting part, as you saw: I used one input image. And I used it actually in two filters that I then, again, later on piped together. So what I did actually for this input image-- and the generator actually does this for you automatically in that case-- is that I used one input key and exported it on two objects. So whenever you later on set that one input image, it automatically gets set on both filters. So this is very neat for more complex graphs.

So that's the way how you can export the same key on multiple filters. One important thing that you don't want to forget is actually that at the very end, you need to export on your filter an output image key because otherwise, there's nothing that you can get out of your filter.

Of course, you can remove a key if needed if you did anything wrong. So that's simply the remove export key. And you can set default attributes. So normally, each of these keys takes the attributes from the filter. But if you say, for instance, well, I have this Gaussian blur, but the maximum value in my specific application would be way too big for what I want to do. So you can actually override the default attributes by saying, well, my maximum value can only be half of it.

So now we want to-- basically, after we hook everything up, we want to create a filter. And for that part, we have an API for that. Simply create a CI filter. And this gives you an instance of a CI filter that you can treat like every other filter that you already have in Core Image.

Now, if you want to use this filter in multiple places of your application, or you want to use it over and over again, just carrying around this one instance is not very convenient. For that, you would actually register your filter with a name. And that allows you then to use, again, the filter with name API on Core Image to create instances of that filter over and over again.

One note that you really need to keep in mind: before you create the filter, you have to set all the inputs of the filters that went into this generator. So let's say on this shaded material one, which takes an environment map: if I don't export that key or don't set it to an image beforehand, it will be nil the moment this filter gets generated. And when you then ask that filter for an output image, it will actually throw an exception, because we cannot deal with a nil image as a source. So make sure that you set all the parameters. That way, your filter later on will work correctly.
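Continuing the sketch above: export the keys a client may touch, optionally narrow their attributes, make sure everything else is already set, and then turn the graph into a filter you can register by name. Method spellings follow the Objective-C selectors, and the registered filter name is made up:

```swift
// Expose the knobs a client of the generated filter may change.
generator.exportKey(kCIInputRadiusKey, from: pointillize, withName: "inputRadius")
generator.exportKey(kCIOutputImageKey, from: shaded, withName: kCIOutputImageKey)

// Optionally override the exported key's default attributes.
generator.setAttributes([kCIAttributeSliderMax: 25.0], forExportedKey: "inputRadius")

// Every input that is not exported must already be set here, otherwise
// asking the generated filter for outputImage will raise for the nil source.
let combined = generator.filter()

// Or register it under a name, so CIFilter(name:) can make fresh copies.
generator.registerFilterName("MyPointillizeEffect")
let another = CIFilter(name: "MyPointillizeEffect")
```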

And of course, you want to save those filter generators out. And there's a simple API that writes it out into a plist document. So you simply write it to a URL. And you can use these descriptions in your applications or even distribute them to your friends if you want to. So the part that we really envision there is that if you have an art department or a graphic artist, they can create these effects for you. And then you can use them in your application again.

And then of course there's the counterpart, how to read it back in, and that is that you simply create a filter generator from that URL. And that was pretty much the only little code snippet that I showed in my sample app that uses it. That's all I had to do, and then ask for a filter.
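And the save/load round trip from the same sketch; the file path and extension are assumptions:

```swift
// Persist the graph as a property-list description on disk...
let effectURL = URL(fileURLWithPath: "/tmp/MyPointillizeEffect.plist")   // assumed path
_ = generator.write(to: effectURL, atomically: true)

// ...and in the client application, read it back and ask for a filter.
if let loaded = CIFilterGenerator(contentsOf: effectURL) {
    let effect = loaded.filter()
    // `effect` behaves like any other CIFilter from here on.
}
```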

So that was the filter generator. Next part, let's talk about something uncooked. We now support actually raw images in Core Image. So what that means is when you look at digital cameras today, it's getting more important that you can actually support the raw format. And raw is like-- yeah, the analogy of being uncooked is not so far off. Because this is really what comes from the sensor.

So when you look at all these little colorful buttons and labels that you have on the box of like this image processing that the camera can do and this fancy algorithm, this has not been applied. The advantage actually of using that raw data is that you can now, based on this real raw information, really tweak the image correctly. And there's ongoing research of how you can actually read the sensor information and make better images out of it.

So this is the way now, with the CIRAWFilter, that we allow you to use raw images directly in Core Image. You can adjust the image right there. So we give you some parameters there. And as I said, since we do continuously improve on this, I can guarantee you there will be future versions of Core Image that have improvements on raw import, so there will be different versions. That is important for you to keep in mind.

If you want to basically create an application that takes advantage of this, then for your clients you want to make sure that, well, even like three years down the line, they get the same image out of it. So you want to again select the same version of the raw implementation.

Or if you want to have the best results, go with the latest. And with that, let's have a look at the raw API. So the API to create a filter is either from a file-- so we go by a URL-- or if you go by data, if you have it already in memory.

And then there are some keys that allow you actually to manipulate this. So you see, this is a filter, this is not an image. That is important: being a filter, we can manipulate the stuff and then pipe it also further down into the Core Image pipeline when you do additional processing. So there are keys that allow you, for instance, the exposure setting and the neutral chromaticity, that was a hard word.

You can set the temperature and tint and also the location. Those kinds of parameters go a little bit hand in hand. Those are the most important ones for actually adjusting an image for the white balance and the correct exposure. And with that, I would like to give you a demo of this.
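A minimal Swift sketch of that RAW API: create the filter from a URL, set the adjustment keys by name, then hand the result to the rest of the pipeline. The path is made up, and the string keys are assumed to match the kCIInput exposure, neutral-temperature, and neutral-tint constants:

```swift
import CoreImage

let rawURL = URL(fileURLWithPath: "/tmp/shot.nef")        // assumed path
let raw: CIFilter? = CIFilter(imageURL: rawURL, options: [:])

// Exposure and white balance on the raw data itself.
raw?.setValue(0.5,  forKey: "inputEV")
raw?.setValue(6500, forKey: "inputNeutralTemperature")
raw?.setValue(0,    forKey: "inputNeutralTint")

// The output is a CIImage, so ordinary Core Image processing follows.
let developed = raw?.outputImage?
    .applyingFilter("CIGammaAdjust", parameters: ["inputPower": 0.8])
```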

So I'm opening up a raw image. And this is a pretty big image. That's why it took a little moment to load. And what you can see, it's a little bit overexposed. If you look in the sky, and you can barely make out the tail fin of that plane.

So I can actually adjust the exposure value a little bit down. And you see, wow, that looks already much, much better. So that gives me an easy way of already seeing, OK, this was an image that was not quite right. And I can tweak it into the correct way. And I didn't lose any information really that much here.

And then, of course, I can either use the temperature slider to make this a little bit cooler or warmer looking, depending on what I want. Or in this case, actually, I can just pick a white point in this image. And yeah, so this is NASA. This is going English versus metric white. So that is the easy way of how I can already adjust that image. Now, as I said, we can use CI filters with it. So let me take a filter here, and I want to do a gamma adjust.

So now what I can do is go a little bit stronger with this guy. Now you can actually start to see actually that he's pointing inside that engine. That was almost not visible before. So that is just putting it into a regular CI filter pipeline, so regular Core Image processing.

The sample was supposed to make it onto the disc that you have available. Unfortunately, it did not quite make it. But if you come later on to our lab, we can give you the sample code so you can actually work with this application. And with that, I would like to go back to the slides.

So one of the most requested features from Core Image in the past was, "OK, I have this filter. I need to show the parameters to our users. How do I create some UI?" So we had, of course, the Fun House demo application, which we showed you. Just write this amount of code, and you have basically a UI for your filters. Well, now we have an API for you that is slightly smaller.

So it allows you to automatically create a UI for filters, and for those of you who write filters, you can also provide your custom UI. So that way, as an Image Unit, you can have your own branding on it. And what you get is actually a view with all the controls needed to set up this filter and change all the parameters of it.

In addition, we threw something in that's a filter browser. And if you paid attention to what I've used before already in the samples, that's the filter browser. You can have it as a view. You can have it as a sheet or as a panel, depending on what you like.

So it's your choice. It allows you to browse through the filters. And you see a preview of the filter. And you see also a description of it. And very much like the Font panel, it allows you to collect favorites, so you can actually keep your favorite filters in one location.

So let's look at this API. As I said, it's really big. Well, to get a view that provides UI for a filter, all I have to call is actually view with UI configuration.

UI configuration is actually a dictionary which allows you to pick the size of the controls that you want. So you can get mini, small, or regular-sized controls. And also a set of controls, where we are actually still defining the parameter sets, from very basic to the more advanced feature set.

And you can also on your own say, OK, I want to exclude certain keys. So this sounds a little bit abstract. Why would I do that? If your application, like most of them, is a document application, you know your document is your image; that will be your input image that you want to pipe into that filter. So you most likely don't want to display this input image in your UI for that filter, because that's already right there in your document. So you can simply exclude it, and the UI for that key will not show up in the view.

And for filters, so this is now for the people who write Image Units, who want to provide their own UI, you simply have to implement the provider method, and you can bring up your own view. So that was all you needed to create this UI for a filter.
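Not the one-call ImageKit API the session is describing, but a by-hand Swift sketch of the same idea: a filter's attributes dictionary already carries enough metadata (slider ranges, types, keys to exclude) to drive a UI:

```swift
import CoreImage

let filter = CIFilter(name: "CISepiaTone")!

for (key, value) in filter.attributes {
    // Skip non-input attributes and the document's own image.
    guard key.hasPrefix("input"), key != kCIInputImageKey,
          let attrs = value as? [String: Any] else { continue }

    if let minValue = attrs[kCIAttributeSliderMin],
       let maxValue = attrs[kCIAttributeSliderMax] {
        // e.g. "inputIntensity: slider from 0 to 1" -> build an NSSlider here
        print("\(key): slider from \(minValue) to \(maxValue)")
    }
}
```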

On the filter browser side, we have a shared instance, so you get this shared instance and run it. In this case, I used run modal with options. But there are, of course, also the respective calls if you want to run it as a sheet or in a panel.

So again, there are some options which allow you to configure if the preview is originally visible. Or you can set your own images for the preview so that it matches what you see in your document. So that is all there in the options dictionary. And if you want to put this view right into your window, you can do that as well because you simply have to ask for the filter browser view and you stick this into your window.

So this little demo application you actually do find on your Leopard disc. It's in the developer examples under Quartz, Core Image. And it's a very simple application, just to show you a little bit the capabilities and also to test some parts out. So I can now open my filter browser here.

And you can see I can search for a filter, as I've done before. I can go for like sepia. There's my sepia tone filter. And it tells me what sepia does, shows me a little preview of it. Of course, I can turn this preview on and off. And then I just double click on it, and it gets added to my image. I can take a second filter. So let's say I want to actually just distort this a little bit. Let me cancel my search.

And yeah, let me see like the glass distortion. No, that does not quite look like-- yeah, circle splash. OK, let me use the circle splash in this case. And I add this as well. So now I have like some fancy effect that goes on top of my image. And I hope that it's visible for everybody. And I can still change also my sepia tone.

When you look at that sample code, this also takes advantage of a lot of the bindings part. So, it's actually very little code that I had to do to really chain up these filters in this moment. And then, of course, I can remove a filter and go back to something like this.

And when you play around with it, it allows you to test out the different sizes of the controls and which set of controls you want. So that is the Image Unit Demo application. And as I said, that is already available for you. I would like to go back to the slides.

So, writing Image Units is not necessarily the easiest task, I have to admit. A lot of the requests were like, "Give us a little bit better documentation," and we do have that now. It's the Image Unit tutorial, and with this Image Unit tutorial you can write filters much better. It has really a step-by-step guide for writing filters, how to write the kernels, and how to package them up. There's lots of sample code in it. It's 78 pages. And actually, I have a copy here.

This is how it looks. And you really see-- well, if you can see, even in the last row, there's lots of samples on how to do this stuff. So check it out. We have a bunch of copies also in the lab that you can get, and you can find it on your DVD. So it's available for you as a PDF. So who wants this copy? Here, first row. There we go.

So let's have a look at one of these examples. I picked the Lens Image Unit to actually demonstrate one thing that you might run into when you write your own Image Units. And that is the problem with the ROI. So what is the ROI? It's the region of interest.

When you write a filter, most likely you don't just sample, like, one pixel; you want to sample around it. And when an image actually gets uploaded to Core Image, most often it gets tiled. So this is for efficiency. It gets sliced and diced into smaller sections. Now when you try to access a pixel near, like, the edge of this tile and you want to go further out, that tile just ends there. So I can't access the pixel anymore, and I get some artifacts on screen.

And that can be that there's something missing, or you see like little vertical or horizontal lines, and you think, what do these guys in Cupertino do? There are bugs in it. It's actually not a bug. You need to tell Core Image how big that image range is that you actually need in your kernel to really access it.

So you say, well, my algorithm will normally sample, like, x amount of pixels around a center point, and you tell Core Image, OK, this rectangle is x amount of pixels bigger than what the original image was, so that it can really tile everything correctly. And with that, I would like to give you a little demo of how this looks.
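In today's Swift API the same ROI contract is expressed as a callback when the kernel is applied; this is a hedged sketch where the kernel itself and the sampling radius are assumed:

```swift
import CoreImage

// When a kernel samples up to `radius` pixels away from the center,
// every destination tile needs a source rectangle that much larger.
func applyLens(_ kernel: CIKernel, to image: CIImage, radius: CGFloat) -> CIImage? {
    return kernel.apply(
        extent: image.extent,
        roiCallback: { _, destRect in
            destRect.insetBy(dx: -radius, dy: -radius)   // grow the tile outward
        },
        arguments: [image, radius]   // must match the kernel's parameter list
    )
}
```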

So as I said, I'm using the Lens Image Unit, which is described in that Image Unit tutorial. So what I've done here is just to-- I don't want to go over the whole code, but in the part here where we set up the ROI, the region of interest calculation, I simply was so bad that I commented it out.

So when I build this filter now... and there's my lens. Now I can go in and move around and look-- oops, where's my ring? It just disappeared. And that is simply because the ROI function is missing, so I can't sample those pixels anymore. So let me fix that for you by simply going back here. Save this. Let's build it again. And I actually had it built directly into the Image Unit folder, so that just makes the demo a little bit easier for me. Here's the same image.

Talking about Image Units, there can also be a lot of things going wrong when packaging them.

So for that we have an Image Unit Analyzer. That tool was already available for those who participated in the Image Unit logo program earlier, but we now have it available in Leopard for you. There, we actually check and make sure that everything is in order with this Image Unit.

So we test, first of all, if the Image Unit is complete and if the bundle is correct. So there are some parameters that you have to set up, some information that you need to provide to create an Image Unit, and we check for this. Then we verify if the filter is set up correctly. So there are certain things that we expect from the filter, so we will check with this tool.

Then we test drive the filter. So we're not looking at the image, but we at least apply that filter to an image and see, will it barf? Now, one thing that's very important for Leopard is we will check, actually, if this Image Unit is built for all four architectures. So we check if it's running on PPC and Intel, 32 and 64-bit. So set up your project correctly in Xcode, and you'll get all four architectures.

So you find it in the developer tools. This is a command line tool. And this is actually how the result looks. So this is a little filter that I wrote, which, oh, what a surprise, passed the test. And you will see basically some similar results if your filter is correct.

So again, the call: build Image Units, create applications that also host Image Units. It's just one line of code that you have to do. You can take advantage of how Core Image can be extended. And there's the Image Unit logo program. This allows you, when your test with the Image Unit Analyzer was successful and you follow the license agreement, to put that logo on your box, and everybody can see, "Ooh, this is creamy and juicy." Next point, we have a widget for you.

This is the CI Filter Browser Dashboard widget. And this was available as a download for Tiger a little bit earlier, but we have improved it now for Leopard. So it is now already installed with your Leopard developer tools. So if you go into Dashboard on your machines, you will see it already.

And you get to know the filter. This means actually that you can have a look at what parameters are there and actually what is the parameter set that I can use with this specific key, so like the maximum, minimum values. And you can test drive the filter, so we allow you even to test live with your own images how this filter will actually affect your image.

Okay, so I'll open it up in Dashboard and I can see my preview of the filter. Now I can go here and have a look at different filters. As I said, I can search. I always like to search for sepia. And there it is. And I can see that filter. And as I said, now we have documentation. When I click here, this actually brings me to the full documentation. And there's my filter reference. It did not quite work here. We have a network problem. OK. But technically, it would work.

And what I can also do, as I said, besides just looking at these: I can see, okay, it has an intensity parameter. These are the values that I can use. It's a scalar. I can use my own images. There is a slight trick to do that. So, I grab, actually, an image, start dragging. Now, I invoke Dashboard, and I can drag it right in here, and there's my own image to test drive this filter.

And as I said, we can also copy the code. So let me take a little bit more complex filter here, like, for instance, the CMYK halftone. And now all I have to do, when I select the filter, I hit Command-C for copy. And then I go into Xcode. Let me just create an empty file.

And I can paste it. This is a very long one. But you see, we already create for you the filter-with-name call with the correct filter name. And all the keys are here, so you can set them directly if you want to. And we give you also the, OK, what is the type of it. So that makes coding much, much easier. So see this as kind of like an API browser for you. OK, so that was the widget. I'd like to go back to the slides, please.
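For reference, the pasted snippet for a filter like CICMYKHalftone boils down to something like this in Swift; the input image and the parameter values here are only placeholders:

```swift
import CoreImage

let inputImage = CIImage(color: .gray)
    .cropped(to: CGRect(x: 0, y: 0, width: 300, height: 300))   // placeholder image

let halftone = CIFilter(name: "CICMYKHalftone")!
halftone.setValue(inputImage, forKey: kCIInputImageKey)
halftone.setValue(CIVector(x: 150, y: 150), forKey: kCIInputCenterKey)
halftone.setValue(6.0, forKey: kCIInputWidthKey)
halftone.setValue(0.0, forKey: kCIInputAngleKey)
halftone.setValue(0.7, forKey: kCIInputSharpnessKey)
halftone.setValue(1.0, forKey: "inputGCR")   // no kCI constant for these two
halftone.setValue(0.5, forKey: "inputUCR")

let result = halftone.outputImage
```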

Well, I've already mentioned debugging. So debugging means, actually, that we want to have a look into the black box of what Core Image is doing. And for that, we reintroduce an old friend that has learned some new tricks, and that is Quartz Debug. Quartz Debug now can look into Core Image.

And that means it knows which filters get executed on the CI context and it knows how long a filter took. So this is important for you to really analyze at least what is happening with my Core Image rendering. And to make some mileage on the stage, I would like to give you a demo.

Can we-- yeah, thank you. So I'm going back to my own little sample code here.

[Transcript missing]

I'm adding a sepia filter. Oh, and you see now, actually, there are two renderings happening here. So why is that? I only have one image. Well, of course, in that first part here, when I look at it, this is actually what happened.

In my real application versus what actually happened in my little filter browser when I brought up that filter browser panel, because that preview needs to be rendered as well, and that is Core Image as well. So this shows me, when I look here in the bottom part, actually, OK, this is the sepia tone filter that rendered. I can see the input intensity. I can see the domain of definition that was used. It's all there.

So I at least understand which filters have been used. Now, have a look at the performance part. And I can actually open this part here. I need to start sampling. And now when I go in here and actually use the slider, you can actually see how the numbers on the right side actually increase. So this tells me actually, OK, something is rendering and using some time. This is actually the time spent in that filter.

And it will tell me how long it took, actually how many pixels were processed, and even how many pixels per second we actually got. So this is simply how you can look with Quartz Debug into what is happening with Core Image in your application. And with that, I would like to go back to the slides, please.

So this is the rendering log. And you can see in that application, I rendered quite a bit already. And on that bottom part, you see really what actually filters were executed. So I clicked on one of them, and I want to see what did happen during this render instance. This is actually using the sample code for the transition. So you can see there was a swipe transition that I used right here. And I can see all the images that went into it.

In the performance log, the important part for me is to see how long does it take. Now we see over 4,000 milliseconds. That's a long time. Well, this is the accumulated time as long as I was sampling. And as I said, I used the transition selector sample, which constantly renders. So over the time, that was like spending 4,000 milliseconds in this. It processed over 260,000 pixels, and that gives me a count of 61 megapixels per second. So that is actually the information that I can see, okay, how long this specific filter took.

So what does this all tell me? When you look at the filters, you first of all notice, OK, which one gets really executed and in which order. So when you have more complex applications, you build your chain from different points. You might not really know what's really in there. And there can be a crop before you actually do the next effect.

And you wonder, OK, why is that really not showing the effects to the outside as I wanted to see? Second of all, when you look at how much time is spent, not every filter has the same impact. So they have definitely different costs on the graphics card, or even if they run in CPU. So that depends on the complexity of the filter.

And of course, the image size has a great impact on your performance as well. A larger image, of course, needs more time. So if you have a filter graph with some environment map, and maybe you just accidentally picked a very big image for it although it's not really needed, your performance will go down. And with Quartz Debug, you can actually see these performance lags and fix them. So, with that, I would like to get Ralph back on the stage for the closing ceremony. Thank you.

The DTS guys told me to fire you all up, so you go off and do something with this stuff. So this is the slide.

[Transcript missing]

Take advantage of the iSight. So this, I think, is actually an important one for the coming years.

Essentially, the iSight is built into half of our product line today, and it's great. You can use it as an input device. For example, there is this application, Delicious Library, which uses the iSight to scan barcodes of your CDs, DVDs, books, and so on. And this is a great use of the iSight. Similarly, you saw in iChat, excuse me, in iChat, the background removal, the new feature that got added, and that is actually a Core Image filter that does the image diffing and then composites a new background in the back.

So one example that we experimented with a while ago was color tracking. So I could essentially wear an orange shirt, point the iSight at me, and have a Core Image filter, which first marks everything that is orange, then finds the center point of that object, and then does something with it. So I had a little duck following me around. But you can probably find a more viable application.

Also, I'd like to tell you to go and check out Core Animation, and it's kind of for two reasons. One is if you're doing animation, well, Core Animation is kind of what you should look at. And in Core Animation, the layer object can have Core Image filters attached to it. So animating filter effects is very easy. You can wire up an LK animation object directly to an input of a filter and make things pulse and glow and these kinds of things.

And the second thing I would like to point out is you can expand the effects vocabulary of Core Animation by building an Image Unit. So if there is a transition effect that Core Animation doesn't support, but you would really like to see it, well, write an Image Unit that does this, and then let Core Animation drive that Image Unit for you to do that particular transition effect.
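A short Swift sketch of that layer-plus-filter idea: attach a Core Image filter to a CALayer and let Core Animation animate one of its inputs through the filter's name. CIBloom and the values are stand-ins, and layer filters apply on macOS:

```swift
import QuartzCore
import CoreImage

let glow = CIFilter(name: "CIBloom")!
glow.name = "glow"                       // the name the key path refers to

let layer = CALayer()
layer.filters = [glow]

// Pulse the bloom intensity; any filter (or Image Unit) input can be
// driven the same way.
let pulse = CABasicAnimation(keyPath: "filters.glow.inputIntensity")
pulse.fromValue = 0.0
pulse.toValue = 1.5
pulse.duration = 1.0
pulse.autoreverses = true
pulse.repeatCount = .infinity
layer.add(pulse, forKey: "pulse")
```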

Okay, with that, I would like to point you where to go next. There is the late-night graphics and media lab starting at 6:00 tonight and running to 10:00. Come by and ask as many questions as you like. And then tomorrow, there is the Core Image lab.