Mac • 1:03:52
Create stunning visual effects and perform sophisticated imaging operations with Core Image. Discover how you can use Core Image to adjust still images, enhance video playback, and process digital camera RAW files. Learn how to create your own custom image-processing filters and see how to optimize their performance.
Speakers: David Hayward, Frank Doepke, Alexandre Naaman
Unlisted on Apple Developer site
Transcript
[David Hayward]
So my name is David Hayward, and we'll be talking about creating image processing effects and filters using Core Image. Here's what we'll be talking about today. We'll give a brief introduction to Core Image for those of you who may be unfamiliar with the technology, and show how easy it is to get started using Core Image in your application. After that, we'll get to the heart of our conversation today, which is to show you how to write CIFilters. We have several great examples to show you how to do this. And then we'll conclude with some final thoughts on debugging and other resources to consult.
So first, a quick introduction to Core Image. So what is Core Image? First and foremost, Core Image is a framework that allows you to do ultra-fast image processing, and to apply effects for fun or profit. It works in real time with other parts of the system, such as Core Video for video, or on the user interface using Core Animation.
It runs optimized on all supported Snow Leopard hardware, and it has several key features, not the least of which are floating-point accuracy and a fully color-managed pipeline. In addition to the image processing framework, Core Image also includes a base set of 125 filters to get you started.
And in addition to those 125 filters, you can extend that filter set with your own custom filters. We'll be talking about that considerably later on in the presentation today. So here's how we use Core Image today on our operating system. It's used throughout Mac OS X, our applications, and in third party applications as well. We use it for user interface candy, such as the Dashboard ripple effect, and we use it for video effects in iChat and in Photo Booth for fun.
We also use it for screen savers and iTunes visualizers. And then we also use it in our professional and consumer-type applications like iPhoto and Aperture to do image processing. So that's where we use it. Let me show you a little bit about how it works as part of our overall framework. At the top, the key parts of Core Image are its filters.
And these filters are written in an architecture-independent language that's C-like, that's been extended to have vector types, and is a subset of the OpenGL Shading Language. These kernels are then executed through Core Image, combined optimally, and then executed using either OpenGL or OpenCL on either the CPU or the GPU, depending on what's best for your application.
So here's a brief illustration of how Core Image works in the most trivial example. You start out with an original image, and you want to apply one of our filters to it. In this simple example we want to apply a sepia tone filter, and the end result of this filter is a new image which is output: a sepia-tone image. Obviously, this is a very simple example; we can start to contrive more complex examples by combining filters. And this is where the flexibility of Core Image really starts to grow.
In this simple example we now have 2 filters. An original image goes through a sepia tone filter, and then the output of that goes into a hue rotation filter. And that gives us an output which looks kind of like a blue-tone rather than a sepia-tone image. Obviously these are just 2 filters, but if you combine more you can see how you can get much more complex effects.
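As a rough preview of the API covered later in the session, here is a minimal Objective-C sketch of that 2-filter chain. It assumes an existing CIImage named original (the creation and drawing calls come later), and the hue angle value is just a placeholder.

```objc
// A minimal sketch of the sepia-tone + hue-adjust chain just described.
// "original" is assumed to be an existing CIImage.
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:original forKey:kCIInputImageKey];

CIFilter *hue = [CIFilter filterWithName:@"CIHueAdjust"];
[hue setValue:[sepia valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
[hue setValue:[NSNumber numberWithFloat:M_PI] forKey:kCIInputAngleKey]; // placeholder angle

// No pixels are processed yet; the output image is just a recipe until it is drawn.
CIImage *blueTone = [hue valueForKey:kCIOutputImageKey];
```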
One key aspect of Core Image is that it will concatenate filters whenever possible. The idea behind this is that even though we're applying 2 filters, we don't necessarily want to have an intermediate image in between them. And so by concatenating filters we're reducing the number of intermediate images, and that improves performance.
And of course, as I alluded to earlier, complex graphs of filters are supported. It doesn't have to be a linear chain. You can have tree-type graphs with multiple input images producing, in the end, a final output image. And this again allows for much more complicated filter effects, including spatial effects and distortion effects, and so forth. And no matter how complex your graph is, Core Image will work to optimize that graph to give the best possible performance.
So as I mentioned, Core Image includes 125 base filters. So let me just give you a quick demonstration of what some of those filters look like. We have an original image here. We include four different types of geometry filters for doing affine scales or rotations of an image. We have a bunch of distortion effects, such as this glass effect that we can apply to an image.
We have a whole bunch of different blur operations. We'll be talking about that a little bit more later on in the show. We typically have Gaussian blurs, but we also have something called a disc blur, which is used for a more photographic effect. We have several different types of sharpening for an image, several different color adjustments, and color effects to produce different distortions on an image. More fun, stylized filters.
These are some of the filters that you see in Photo Booth. Halftone effects, such as a CMYK halftone effect. Different tilings of images, which can be useful. We also have 7 generators. A generator is a special class of filter that produces an image but doesn't take an input image.
So for example, this one produces a starburst pattern. We also have nine different filters which are of a class of filters we call transition filters. And transition filters are very useful for video. They take 2 images and a transition time, which allows you to do wipes between a source and a destination. We have lots of composite operations, all the standard Porter-Duff composite modes and others. And then we have several reduction-type filters. These are filters that allow you to get information about an image and reduce it down. The most typical example of that is a histogram.
So we have histogram filters as well. So that's our filter set, or a brief tour of it at least. Let me talk to you a bit more about how Core Image optimizes filter graphs, because this is one of the key features that I think Core Image provides to your application.
So Core Image contains a runtime just-in-time compiler that is designed to optimize your filter graph. The optimization is deferred until the image is actually drawn. And this allows us to evaluate only the portion of your image that's needed to render. That means, for example, if you're zooming in on a very small portion of your image, Core Image will only execute the portion of the kernel that's needed to render that sub-rect.
Another thing about Core Image and its optimization is that it will concatenate and prune the tree to remove any redundant operations in order to improve performance. Once all this is concatenated, we do passes over the filter graph to do global optimizations, including traditional compiler optimizations such as peephole optimization and common subexpression elimination.
And then another really important aspect is that we perform several optimizations that are unique to the domain of image processing. Let me talk about that, because that's where we get some of our key benefits. But the great thing is that in Snow Leopard, because of these optimizations, we now have a 20% improvement on average in our filters.
This is great news. Some are even more. So now that you've got an introduction to what Core Image is capable of, let me give you an illustration of how easy it is to add Core Image to your application. So there are 3 key data types you need to know about when working with Core Image.
The first is a CIImage. A CIImage is an immutable object which represents either a pixel-based image that comes from disk or from memory, or the output of a filter, which we'll talk about next. A CIFilter is a mutable object that represents the recipe to produce a CIImage.
And this CIFilter has input parameters which are based on key-value coding, and these input parameters, such as the input image, will affect the output image. The last key data type is the CIContext. The CIContext is a destination where Core Image will render its results. And a CIContext is based on either an OpenGL context or a Core Graphics context. So to walk you through the process, we have five steps for how to add Core Image to your application. The first is to create a CIImage. And there are 3 easy ways to create a CIImage.
We can create a CIImage from a CGImageRef, we can create it from a URL that references a file, or we can create it from data that your application has produced using some other means. The next step is to create a CIFilter. CIFilters are represented, or instantiated, in Core Image by name. So typical code to create a CIFilter would be to call CIFilter filterWithName: with the name CISepiaTone, or filterWithName: with the name CIHueAdjust. If you add your own custom filters in your application, these too can be created by name.
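Here's a minimal sketch of those two steps; the cgImage, imageURL, and imageData variables are placeholders for whatever your application already has.

```objc
#import <QuartzCore/QuartzCore.h>   // Core Image lives in QuartzCore on Mac OS X

// Three ways to create a CIImage (inputs here are placeholders):
CIImage *fromCG   = [CIImage imageWithCGImage:cgImage];        // from a CGImageRef
CIImage *fromURL  = [CIImage imageWithContentsOfURL:imageURL]; // from a file URL
CIImage *fromData = [CIImage imageWithData:imageData];         // from NSData you produced

// Filters are instantiated by name:
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
CIFilter *hue   = [CIFilter filterWithName:@"CIHueAdjust"];
```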
So for example, if you have a custom filter you can reference that by name. So we have 125 of these filters. In order to know which one you want to put in your code, you may want to discover which ones we have available in more detail. So we have a utility that's part of the developer install package, a Dashboard widget, which allows you to explore and see the parameters and all the filters that we include.
So in order to get this, you go to your Dashboard, bring up the Core Image filter browser, and it brings up this blue pane which allows you to scroll through all the filters we install, search through them, and see which parameters they take as inputs. Another kind of secret feature of this browser is that it supports copy and paste, which allows you, once you find the filter you want, to paste the template for it into your application.
Once we have the CIFilter, the next thing we need to do is set the input parameters on it. So filter parameters are set via key-value coding. One of the most common input parameters on a filter is the input image. You set that by calling setValue:forKey: on the filter, where the value is the image we want to set as the input, and the key that we pass in is kCIInputImageKey.
Filters may have additional parameters. For example, the sepia tone filter has a parameter to set the intensity of the amount of sepia tone. So that's set again via key-value coding. This time the value we pass in is an NSNumber, and the key that we specify is kCIInputIntensityKey.
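Continuing the sketch above (the sepia and fromURL variables carry over from the previous snippet, and the intensity value is just an example):

```objc
// Input parameters are set with key-value coding.
[sepia setValue:fromURL forKey:kCIInputImageKey];
[sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:kCIInputIntensityKey]; // example value
```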
So now we have an image and its filter. The next thing we need is a context. We can create a CIContext from a CGContext, using contextWithCGContext:. Or we can create it from an OpenGL context, using contextWithCGLContext:. There's also a convenience for Cocoa programmers: if you have a custom drawing method in an NSView, within that draw method you can call AppKit to get the current CIContext for that view.
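A minimal sketch of two of those options, assuming we are inside an NSView's -drawRect: so AppKit's current NSGraphicsContext is available:

```objc
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

// Option 1: build a CIContext from the current Core Graphics context.
CGContextRef cg = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CIContext *ciContext = [CIContext contextWithCGContext:cg options:nil];

// Option 2 (Cocoa convenience): ask AppKit for the CIContext for this view's drawing context.
CIContext *viewContext = [[NSGraphicsContext currentContext] CIContext];
```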
One important thing that you might want to consider is that if you're using a Core Graphics-based context, to get the best performance out of that context you want to have your application opt into QuartzGL acceleration. And you can do that by adding the QuartzGLEnable key to your application's Info.plist. This greatly improves the performance of Core Image because it allows Core Image, in conjunction with Core Graphics, to be fully accelerated on the GPU. It avoids the read-back from the graphics card into the Core Graphics backing store.
So this is a great way to get optimal performance when using a CGContext. The next and final step is to draw the result. The first thing we want to do, now that we have our last filter, is to get the output image of that last filter. And we do that by again using key-value accessors.
In this case, we access the value for the key kCIOutputImageKey. And then we tell the context to draw that image, which is a very simple call telling your context to draw the image at a point, from a rect. So a common question we often get is, "Well, what if I want the results of a filter for other purposes?" And there are 2 calls you may want to consider if that's what your application needs; those are covered right after the drawing sketch below.
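First, here's a minimal sketch of the drawing step just described; the hue and ciContext variables carry over from the earlier sketches.

```objc
// Pull the output image off the last filter and ask the context to draw it.
CIImage *result = [hue valueForKey:kCIOutputImageKey];
[ciContext drawImage:result
             atPoint:CGPointMake(0, 0)
            fromRect:[result extent]];
```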
One is that once you have a context, you can create a CGImage from it; with that CGImage you can then call Image I/O to write the image to disk. So that's useful. Another option is you can ask the context to render to a bitmap, and that allows you to get the raw bits that came out of the filter. So just to summarize those steps, we can put everything we talked about on these preceding slides onto one screen here. And this screen shows us how little code it takes to do an image processing effect using Core Image.
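In that spirit, here is a rough, self-contained sketch that puts the preceding steps together in one place. The function name and parameter values are hypothetical, error handling is omitted, and the caller would release the returned CGImageRef when done.

```objc
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

// Hypothetical helper: apply CISepiaTone then CIHueAdjust to a file's image
// and hand back a CGImage rendered through the given CIContext.
static CGImageRef CreateBlueToneImage(NSURL *url, CIContext *context)
{
    CIImage *original = [CIImage imageWithContentsOfURL:url];

    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:original forKey:kCIInputImageKey];
    [sepia setValue:[NSNumber numberWithFloat:1.0f] forKey:kCIInputIntensityKey];

    CIFilter *hue = [CIFilter filterWithName:@"CIHueAdjust"];
    [hue setValue:[sepia valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
    [hue setValue:[NSNumber numberWithFloat:M_PI] forKey:kCIInputAngleKey];

    CIImage *result = [hue valueForKey:kCIOutputImageKey];

    // If you need the pixels for other purposes, ask the context for a CGImage
    // (you could instead ask the context to render into a bitmap you allocate).
    return [context createCGImage:result fromRect:[result extent]];
}
```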