
WWDC12 • Session 510

Getting Started with Core Image

Graphics, Media, and Games • iOS, OS X • 53:02

Core Image lets you create incredible visual effects in your photo and video apps on iOS and OS X. Get introduced to the capabilities of Core Image and the sophisticated effects you can build using built-in filters. Learn recommended practices for using Core Image efficiently and see how to harness its powerful features.

Speakers: David Hayward, Alexandre Naaman

Unlisted on Apple Developer site

Downloads from Apple

HD Video (485.7 MB)

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good morning, everyone. My name is David Hayward, and I work on the imaging team at Apple. I want to talk today about Core Image and how you can use it in your application, and give you a good introduction to this technology, which is available for both Mac OS and iOS. So today we'll give you an introduction to Core Image: what the key concepts of Core Image are, and what built-in filters it provides.

Then we'll go into a bit more detail about how to use the key classes in Core Image, with some details on the specifics of the two different platforms that we provide support for. And we'll also be talking about images and filters and contexts. After that, in the second half of our presentation, Alex will be talking about how to combine filters in some really exciting ways to produce some unusual effects.

So, first off, an introduction to Core Image. Core Image is an image processing framework, but it's not just useful for images; it can also be used for video and for games. The idea behind it is that you can use filters to apply per-pixel operations to an image. So, for example, in a simple case here, we have an original image, and we want to apply a sepia tone filter to it, and it produces a new image as a result.

One of the key flexibilities of Core Image is that it's very easy to chain together multiple filters. So, for example, in addition to applying sepia tone to an image, we can apply a hue rotation filter, which will turn it into kind of a blue-tone effect. And then we can apply another effect after that, a contrast-enhancing filter, to give a new, more complicated effect. And it's important to keep in mind that these unique combinations of filters can produce a myriad of different effects.

While conceptually you can think of there being an intermediate image between each of these filters, one of the key things that Core Image does to improve performance is concatenate filter chains so as to reduce intermediate buffers. Another key thing we do to improve performance is optimize our render graphs at the time that the actual render occurs. This allows us to know, at the time of rendering, exactly what optimizations we can make, and to apply several enhancements.

For example, the hue rotation and the contrast filters are internally represented as color matrices. And these color matrices, because they're matrices, can be concatenated. This further improves performance, and it improves precision as well, because we can do this matrix concatenation at high precision.

Another key power of Core Image is that it's a very easy-to-use Objective-C API that has a lot of different, flexible inputs. For example, those images can come from a variety of sources. They can come from your iPhoto library. They can come from a live video capture session. They can come from images that you've created in memory using other techniques. They can come from JPEG and PNG files that you may have on disk. They can even come in from OpenGL textures. And we'll be talking more about how to leverage OpenGL and Core Image together in our second session this afternoon.

We also have very flexible outputs, and this is also really important. One type of output you can get out of Core Image is a CG image ref. And from a CG image ref, you can go to a lot of other different places in the system: you can create a UI image from that, or you can go out to Image I/O to save these images to disk, or you can use the assets library to save it into the photo roll.

We can also render into an EAGL layer for OpenGL rendering. We can render into a CV pixel buffer, which is useful if you're going to be using Core Image as part of an AV Foundation application. You can also render to raw bits or to OpenGL textures. So again, it's a very easy-to-use Objective-C library that has very flexible ways of combining filters, and flexible inputs and flexible outputs.

Part of the magic of Core Image is that it comes with a large library of built-in filters. A year ago at WWDC, before iOS 5, we only had a few filters that were available on iOS. We greatly increased that when iOS 5 shipped, and we've further increased that significantly with iOS 6. We now have 93 built-in filters. Thank you.

I'll talk about them in a little bit more detail. Obviously, there are too many to read easily on this screen here, so let me break them down into different groups. Core Image has different categories of filters, and I'm going to give some highlights of the different categories. So first of all, we have a category of filters which are for color effects or color adjustments. These are ones like the sepia tone filter that I showed earlier, which allows you to take an image and apply an operation that will change the colors in it.

Now, conceptually, a filter like this takes an input image and produces an output image. All filters produce an output image. And it also may have additional numerical parameters on it, such as the intensity of the sepia tone effect in this example. So a lot of the color adjustment filters will look like this, an input image, an output image, and one or more additional parameters.

Another category of filters that we have are compositing operations. And these are operations that allow you to combine two images, either using Porter-Duff compositing or some of the other compositing modes that are very common in image editing applications. I can give an example of that here. We have a blend mode operation where we take in our picture of these boats and a checkerboard image and combine them into an image where you can see both the boats and the checkerboards together.

Conceptually, these filters are interesting because they actually have two input images and one output image. So now that we've got an example of a filter with two inputs, you can see that we can create complex graphs of filter operations. Another class of filters are for adjusting the geometry of an image. A canonical example of that is something that allows you to do an affine transform on an image.

Another example is tiling effects. These are effects that will allow you to take an image and repeat it in interesting ways. For example, we have a perspective tile effect, which will take the original image, tile it, and apply a perspective transform to it in a shader program. One thing that's interesting about this class of filters is that it actually produces an infinite image, because the tiles repeat off to infinity. So this is one thing that's unique about the Core Image API: it's actually perfectly suited to handling images of infinite extent.

Another class of filters are distortion effects. These also affect the location of pixels, for example, a twirl distortion. You might be familiar with these if you use the Photo Booth application on iOS, which uses Core Image and filters like these quite frequently. Another class are blur and sharpen effects. And this has been one of our most requested filters to be added to iOS 6. Blur is a really critical filter, and it's also the basis of many other important filters, like sharpening filters. So this image isn't out of focus; it's just a blurred image.

Another set of effects we have are what we call stylizing filters. These are just interesting effects. We have, for example, highlight and shadow adjustments. And I've kind of overdriven it for the sake of the demo here, so it produces a fake HDR look, where you've brought up the shadows of an image and brought down the highlights to create a very stylized image. Halftone effects are another set of fun effects. They'll take an image and produce a two-color image based on a halftone pattern. Here's a kind of traditional halftone screen.

And transition effects. These are also fun. These are effects that are very useful if you're using Core Image in a video or gaming environment where you want to transition between two sets of content. If you have a first scene and a second scene and you want to do a transition or blend between those two scenes.

We have here a copy machine effect. We have two images, and we've got a screenshot of it halfway through the transition, where we have the checkerboard and the boats image, and this sort of copy-machine blue highlight that's in the process of moving across the screen.

Transition effects are sort of interesting because they take two images, the input image, which is the image you start with at the beginning of the transition, the target image, which is the effect that you want to end up with at the end, and then a time value, which ranges from zero to one. Generators is another interesting class of filters. These are filters that don't take any input images. They just produce an algorithmically generated image.

And here's a very dramatic one, a starburst pattern, but there are other generators as well. One important one that we'll show later today is one that produces a tiled image of random texture data, and that's very useful for interesting effects as well. Generators, like the star shine generator, are interesting because they have an output image, just like all other filters, but there's no input image. All the inputs are numerical parameters. And in fact, some of the generators have no input parameters at all; like the random generator, they just produce an output image.

So these are the 93 filters that we have in iOS 6, and we've taken quite a bit of care to choose these filters. The key thing we've decided is to make sure we support some fun effects, effects that are performant on a variety of devices, and also the kinds of effects that can be combined in important ways to produce additional effects in your application.

All right, so I'm going to come over to my device here and show you a new application we have to demonstrate all these filters that I was talking about a few slides ago. So as I mentioned before, we have 93 filters, and we have this great application that now allows you to explore these filters.

What we can do when we launch the app here is we just get an empty screen. So the first thing we're going to do is go to Filters, and we're going to add a video source. So I'm going to go and add an import video. And now we can see a picture of myself.

But we want to start seeing what the filters look like. So the first thing I'm going to do is add a filter on here, and I'm going to add a very simple one called Color Controls. And after clicking on that, I can then go and adjust all of its input parameters.

So, for example, we can adjust the saturation way up to be very saturated, or all the way down to make the image grayscale. Or we can adjust the brightness up or down, or the contrast up or down. And this kind of gives you an idea of the frame rates we can get for a very simple filter. I go back, and I can now start adding some other filters. The next one I want to add is a filter that we'll see a lot today, which is called Pixellate.

So I can scroll down through all these filters. Gone too far. Pixellate. And as you can already see, it's pixelated. We can go in here and adjust the parameters. We can make the pixels really big or really small. And again, you can see the kind of frame rate we can get. All right, the next thing I can show you is another, more interesting effect. Here, again, looking at the image, we want to apply a filter called Circular Screen. Let me find that in the list under C.

Circular screen. Circular screen is a halftone type filter. And in this case, it produces a circular halftone. And we can adjust where the center of it is on the screen, left and right. We can also adjust the scale of it so you can get an idea. And the sharpness of it, how crisp. This might be harder to see, but you get the idea.

But what we have right now is sort of an interesting image, but we'd like to be able to combine maybe another effect on top of this. And it would be kind of nice if this had the screen effect, but also you could still see some of the color from the video coming through. And so I'm going to use a composite operation for that. I'm going to composite this effect with the original video. So to do that, I'm going to add another instance of the video filter.

And as you can see, now we're back to just seeing the video filter. But the circular screen is still on the stack of filters. So now what I next want to do is I want to do a blend mode between these two. So I'm going to do a darken blend mode.

Go down here to darken. And so now hopefully you can see clearly we have an image that's sort of the combination of both the video, its screened version of itself, and the original color. And we can go back and adjust the parameters on any of these earlier filters in real time and see the effect.

All right, so that's it. This is an application that will be available as sample code, and you'll be able to use it to try out filters. One other thing that we'll be showing you next is how to add your own filters, which are based upon our built-in filters.

And we'll just mention them briefly, and we'll get to see them in more detail. We have some very fun effects, which you'll be able to see in this application when you get the sample code. So how can you use Core Image in your application? So as I mentioned before, Core Image is an Objective-C API. It's very easy to use.

And there are really only three classes that you need to understand in Core Image. The first is the CI filter class. This represents a mutable object that represents the effect that you want to produce. A filter has a set of input parameters, which are either image parameters or other numerical parameters.

And the result of a filter is that it produces an output image based on the current set of input parameters. The second key class is the CI image class. And this is an immutable object that represents the recipe for an image. And this CI image object can either represent a file that comes directly from an input source or the output of a CI filter.

The third key class is the CI context class. And this is the object which maintains state and is the object through which Core Image will render its results. And the CI context, one thing to keep in mind is it can be based on either a CPU or a GPU-based implementation. And this is an interesting flexibility of Core Image and we'll talk about that in a little bit more detail in a minute.

So as I mentioned earlier, most of what we're talking about in this presentation is equally applicable to iOS and to Mac OS, and the basic API is very similar, but there are a couple distinctions that are important to keep in mind. First of all, on our set of filters, on iOS, we have 93 built-in filters that can be combined in an infinite number of ways. On Mac OS, we have a few additional filters which have additional performance constraints on them. We also have the ability to create developer extendable kernels.

The basic API is identical between iOS and Mac OS. The key three classes which I mentioned earlier, CI Filter, CI Image, and CI Context are the same. On Mac OS, there's a few additional classes that you need to be aware of, CI kernel and CI filter shape, if you were writing your own custom kernels. But again, the vast majority of applications can use our built-in filters.

In both cases, we do render time optimization of the render graph in order to produce the best possible performance on device. And in both cases, we support both CPU and GPU rendering. One subtle difference is that on iOS, our GPU rendering is based on the OpenGL ES 2.0 API, whereas on Mac OS, it's based on the traditional OpenGL API.

So those are the intros. Let me show you just how in a few lines of code you can add Core Image to your application. It's actually very, very simple. First thing we want to do is we're going to create a CI image object, and we're going to create that by initializing it with the contents of a file on disk. So we call it image with contents of URL.

Second thing we're going to do is create an instance of a filter object. In this case, we're going to create a filter of type sepia tone, and we're going to set some parameters on it, which are the input image and the parameter, the amount of the sepia tone we want to apply.

Third, we're going to create a context object that we're going to render through. And fourth, we're going to use that context to produce an output image. We're first going to ask the filter for its output image, and then we're going to ask the context to create a CG image from the output image. And this is one of several ways we can get outputs out of Core Image. We'll talk about more of those later.
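For reference, a minimal sketch of those four steps in Objective-C might look like the following; the file URL and the intensity value are placeholders rather than the exact values used on stage.

```objc
#import <CoreImage/CoreImage.h>

// 1. Create a CIImage from a file on disk (placeholder URL).
NSURL *url = [NSURL fileURLWithPath:@"/path/to/image.jpg"];
CIImage *image = [CIImage imageWithContentsOfURL:url];

// 2. Create the sepia tone filter and set its inputs.
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:@0.8 forKey:kCIInputIntensityKey];

// 3. Create a context to render through.
CIContext *context = [CIContext contextWithOptions:nil];

// 4. Ask the filter for its output image and render it to a CGImageRef.
CIImage *result = filter.outputImage;
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
```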

So now that you kind of see how easy it is and a few lines of code, let me talk in a little bit more detail about how these three key classes in Core Image work. Again, CI image, CI filter, and CI context. So, some more on CI image.

So a CI image can be instantiated in several key ways. The most common is that you will instantiate it from an Image I/O supported file format. So, for example, instantiate it with a URL or some data that represents a JPEG or a PNG or a TIFF file, or whatever other formats are supported by Image I/O.

We can also instantiate a CI image from several other key data types on iOS and Mac OS. We can create a CI image from a CG image. We can, on iOS, create a CI image from a CV pixel buffer. And on Mac OS, you can create it from the CV image buffer, which is slightly different, and/or an I/O surface. We can also create a CI image from an OpenGL texture. And we'll be talking about that in much more detail in our second session after this one.

And lastly, you can create a CI image from raw pixel data, if you have some other means of generating image data. So one key aspect to think about with Core Image is color management. On both iOS and Mac OS, there is automatic color management involved in Core Image.

On Mac OS, a CI image can be tagged with any color space. And if an image is tagged, it will be automatically converted, before filters are applied, into a linear working space that all filters work in. And this allows all the filters to work in a consistent way regardless of the input image's color space or your destination color space.

On iOS, it's very similar but slightly different. A CI image can be tagged with device RGB, which you can think of as effectively tagging it with sRGB. And if it is tagged, then all the pixels are gamma corrected using the proper sRGB math into a linear space before filters are applied.

If you use any of the normal ways of instantiating an image, from either a URL or data via Image I/O, all of this is handled for you automatically. However, if you wish to override the default behavior of Core Image, you can change the working space for an image to something else. One key example of this is if you want to turn off color management for an image: you can set the KCI image color space option to [NSNull null] to override the default.
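As a sketch, assuming you already have a CGImageRef in hand, opting an image out of color management might look like this:

```objc
// Pass [NSNull null] for the kCIImageColorSpace option to disable
// color management for this image (cgImage is a placeholder).
NSDictionary *options = @{ kCIImageColorSpace : [NSNull null] };
CIImage *unmanaged = [CIImage imageWithCGImage:cgImage options:options];
```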

And that means that no color management will occur for that image, if that's what you want to do. Another key thing about images, as we all know these days, is metadata. When you take a picture on your iPhone, you're not just capturing a raster of pixels; you're also capturing a wealth of metadata that goes along with that image, such as when it was taken, what type of camera it was taken with, what the orientation of the camera was, and where the image was taken. And this is really handy information for applications. And we expose that in CI image through a properties API.

So if you instantiate a CI image using any of the Image I/O based creation methods, image with URL or image with data, you can ask for the image's properties and you'll get a wealth of information. You'll get back the same type of dictionary as you get if you were to call the Image I/O API CGImageSourceCopyPropertiesAtIndex.

If you wish to override the metadata for an image, you can do that by specifying when you instantiate the CI image an optional value for the KCI image properties. This might be useful, for example, if you're creating an image synthetically and yet you want to have it be processed and still maintain some metadata that you've created.
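A sketch of both directions, reading the metadata from a file-based image and supplying metadata when creating an image from other pixel data; url and cgImage are placeholders:

```objc
#import <ImageIO/ImageIO.h>

// Read the Image I/O-style metadata from an image created via a URL.
CIImage *fileImage = [CIImage imageWithContentsOfURL:url];
NSDictionary *metadata = [fileImage properties];
NSLog(@"metadata: %@", metadata);

// Supply your own metadata at creation time via the kCIImageProperties option.
CIImage *tagged = [CIImage imageWithCGImage:cgImage
                                    options:@{ kCIImageProperties : metadata }];
```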

The second thing I want to talk about is this other key class, which is CI filter. We've talked about the variety of filters, but let's talk in a bit more detail about how you use them programmatically. First, we have a variety of filters, and we've grown significantly from iOS 5 to iOS 6. You can query, using the filter names in category API, which filters are currently installed on your system.

Once you have the list of filters, you can instantiate a filter by name. For example, you can instantiate a filter with the name CISepiaTone. Another nice API we have is filter attributes. You can think of it as sort of run-time documentation for how a filter works. It will give you information about all the inputs of a given filter.

For example, it'll tell you what the name of each key for each input is, the expected data type for each input, such as whether it's a number or a vector or an image. And also it'll give you some common values for each input, such as what the default value is for that parameter, what the identity value is, or minimum and maximum.

These kind of properties are really useful if you want to build an application that shows some UI slider for a given parameter, such as the amount of sepia tone. You can query the sepia tone filter, see what the range of the parameter is, and you can set up your sliders to present that range to the user. Once you have instantiated a CI filter, you can set its parameters using standard key value coding conventions. So, for example, you can set the input image by saying set value image for key KCI input image.
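For instance, a sketch of using the attributes dictionary to configure a hypothetical UISlider for the sepia tone intensity (slider is an assumed control in your UI):

```objc
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
NSDictionary *intensityInfo = [sepia attributes][kCIInputIntensityKey];

// Common values published for this input.
NSNumber *minValue     = intensityInfo[kCIAttributeSliderMin];
NSNumber *maxValue     = intensityInfo[kCIAttributeSliderMax];
NSNumber *defaultValue = intensityInfo[kCIAttributeDefault];

slider.minimumValue = minValue.floatValue;
slider.maximumValue = maxValue.floatValue;
slider.value        = defaultValue.floatValue;

// Standard key-value coding to set the inputs.
[sepia setValue:image forKey:kCIInputImageKey];
[sepia setValue:@(slider.value) forKey:kCIInputIntensityKey];
```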

Similarly, you can set numerical values as well. Once you've set all the inputs on a filter, you can ask for the output, and this can also be done using key value conventions. For example, you can get value for key KCI output image key. On iOS, we have a couple of other convenient ways to do the same thing. They're semantically equivalent but have slightly different coding styles. For example, you can just ask the filter for its output image, or you can say filter.outputImage.

One convenient shortcut is that you can actually combine everything I talked about on this slide and the previous slide, instantiating the filter, setting the parameters, and asking for its output, into a single line of code. So we have this helper method, filter with name, keys and values, which allows you to instantiate a filter and set its values, and then right after that you can ask for its output image. So this is a very compact way of doing this, and for the sake of brevity, we use it a lot in the slides that follow.
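In sketch form, the sepia tone example from earlier collapses to a single expression (image is assumed to be a CIImage you already have):

```objc
CIImage *sepiaImage = [CIFilter filterWithName:@"CISepiaTone"
                                 keysAndValues:kCIInputImageKey, image,
                                               kCIInputIntensityKey, @0.8,
                                               nil].outputImage;
```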

As I alluded to earlier, you can chain together multiple filters. So for example, we can have an input image, and let's say we want to apply just one filter first; we can apply the sepia tone to that. If we want to apply a second, it's very easy to chain these together. All we do is, as the input for the second filter, we give the output from the previous filter.
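A sketch of that chaining, using the convenience constructor; the hue angle is an arbitrary illustrative value:

```objc
// The sepia output image becomes the input of the hue adjust filter.
// No pixels are processed until a render is actually requested.
CIImage *sepia = [CIFilter filterWithName:@"CISepiaTone"
                            keysAndValues:kCIInputImageKey, image,
                                          kCIInputIntensityKey, @1.0, nil].outputImage;

CIImage *blueToned = [CIFilter filterWithName:@"CIHueAdjust"
                               keysAndValues:kCIInputImageKey, sepia,
                                             kCIInputAngleKey, @1.5, nil].outputImage;
```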

And one thing to keep in mind is that there's actually no pixel processing that has occurred at the time we've built up the filter graph. All the actual work of rendering and optimization is done at the time you actually ask for the final render to be requested. So that brings up a nice segue to talk about rendering through a CI context.

So CI contexts have a lot of flexible ways of rendering. You can render into a CG image ref or several other types, or you can render into a UI image view or into your photo library. There are several other ways, too. We'll talk about some of these other methods in our second session this afternoon, but this morning I want to first talk about how to render into a CG image ref and what you can do with that approach.

So let's say, for example, you want to display the output of a filter into a UI image view. This is actually very easy to do in your application. All you need to do is create a CI context, get the output image from your filter chain, render the output image into a CG image ref, and then tell the UI image view to use that CG image ref for its view.

So what does this look like in code? Again, it's a very brief amount of code you need. You just need to instantiate a CI context with its default values, ask the filter for its output image, tell the context to turn that image into a CG image, create a UI image from that CG image, and tell the view to use that UI image.

There's actually a shortcut for all of this, which is convenient. We'll talk more about the performance implications of this in our second session. But it's actually very easy: on iOS, you can create a UI image directly from a CI image, if you wish. And then all you need to do is do that and then tell the view to use that UI image. Internally, this is conceptually equivalent to the previous slide. Let's say you want to do something a little bit more elaborate, which is to save the results of a CI filter into your photo library.
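A sketch of both approaches, assuming a filter and an imageView already exist:

```objc
// Long form: render through an explicit context into a CGImageRef.
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *output = filter.outputImage;
CGImageRef cgImage = [context createCGImage:output fromRect:[output extent]];
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

// Shortcut: wrap the CIImage directly in a UIImage and let UIKit render it.
imageView.image = [UIImage imageWithCIImage:output];
```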

So in this case, we probably want to create a CPU-based CI context. Why? Well, when you're saving a full effect on a photo, photos can actually be quite large, and those images that we have on our cameras these days are actually bigger than the GPU limits of our devices.

So for that reason alone, we would want to use a CPU context. Also, when you're saving a photo into your photo roll, you might want to be able to have this done as a background task so that the user can quit your application while you're completing that save into the photo roll.

And in order for this to be done, you need to be using a CPU-based CI context rather than a GPU context. So for both of these reasons, this is a good idea if you're working on large images or you want to be able to have a task be backgrounded.

So how do we do this? This is just a slight variation on what we saw two slides ago. This time we're going to create a context, but we're going to specify that we want a software renderer. And this is very simple. All we need to do is provide an options dictionary with the key KCI context use software renderer and specify yes.

Once we have the context, we're going to do everything we did as before. We're going to create an output image from our filter. We're going to create a CG image from that output image using the context. And then we can use the normal assets library API to save that image into the photo roll. And then a completion callback will be called when it is complete.
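A sketch of that flow, assuming filter is the end of your chain; metadata is left nil here for brevity:

```objc
#import <AssetsLibrary/AssetsLibrary.h>

// CPU-based context: works past GPU texture-size limits and can finish
// as a background task.
CIContext *context = [CIContext contextWithOptions:
                         @{ kCIContextUseSoftwareRenderer : @YES }];

CIImage *output = filter.outputImage;
CGImageRef cgImage = [context createCGImage:output fromRect:[output extent]];

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:cgImage
                             metadata:nil
                      completionBlock:^(NSURL *assetURL, NSError *error) {
    // Called when the save into the photo roll completes (or fails).
    CGImageRelease(cgImage);
}];
```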

So this is all very simple. Here's some tips and best practices that you should be aware of. One thing to keep in mind is like all Objective-C objects, CI images and CI filters are typically created as auto-released objects. And this is convenient, but you should be aware that these classes in particular may hold on to large assets. So in the interest of keeping your memory usage to a minimum, you will probably want to be careful about using auto-release pools in critical areas so as to avoid memory pressure.

Another thing to keep in mind is there's no need to create a context every time you render. If you're going to be doing a lot of renders in a sequence, you probably want to create the context once. There's setup costs associated with the CI context. And by doing that, you can reuse that context again on the same thread as many times as you wish. And that will help improve performance.

It's also important to be aware that Core Image and Core Animation both leverage the GPU on our devices. And because it's critical for Core Animation to provide smooth and fluid user interface animations, you'll want to take care not to use Core Image aggressively on the GPU if you're also running Core Animation animations. So again, this might be a case where you might want to use a CPU-based CI context.

Another thing to keep in mind, as I alluded to a little bit earlier, is that both CPU-based and GPU-based CI contexts have limits on the maximum image size that can be processed. There's an API you can call once you have a context instantiated, which will tell you what the maximum input image size is and what the maximum output image size is. And this can change from device to device and release to release.
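For example, a quick sketch of querying those limits on iOS:

```objc
CIContext *context = [CIContext contextWithOptions:nil];
CGSize maxInput  = [context inputImageMaximumSize];
CGSize maxOutput = [context outputImageMaximumSize];
NSLog(@"max input: %@  max output: %@",
      NSStringFromCGSize(maxInput), NSStringFromCGSize(maxOutput));
```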

It's also important to remember that whenever possible, it's good to use smaller images. Performance in CI is largely determined by the complexity of your graph and also, critically, by the number of output pixels that you're asking CI to render. So reducing the output image size will have a direct effect on performance.

One thing to keep in mind is that there are several convenient APIs in both Core Graphics and Image I/O that allow you to either crop or reduce an input image. So if you have an 8-megapixel image but you're only showing the user a portion of that, you can create a cropped region of that and process that very efficiently using Core Image.

So that's the end of my first discussion for today. I'm going to pass the stage over to Alex, who will be talking in much more detail about filters and how they can be combined in some really interesting ways. Thanks. Okay. Well, good morning, everyone. My name is Alexandre Naaman. So, so far this morning, we've had an overview of the API in general. And what I'm going to talk about now is just how we can use the CI filter class to produce interesting effects.

So on iOS 6, we now have 93 built-in filters. And you can use those -- and if you're wondering why we have so many, it's because you can use those, combine them together in interesting ways to create effects of your own. So you can come up with recipes that don't exist and aren't built-in and create new effects.

So how can we do that? Well, you can actually subclass CI filter. And this is the same thing that we do internally. And in order to do that, you have to override a few methods, declare your properties, set the defaults, et cetera. And an example of that is, as I was saying, we use this internally for certain of the filters that we provide, such as CI Color Invert, which just inverts colors.
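The code from that slide isn't captured in the transcript, but a sketch along those lines, wrapping a CI color matrix in a CIFilter subclass to invert colors, might look like this (the class and property names are illustrative):

```objc
@interface MyColorInvert : CIFilter
@property (retain, nonatomic) CIImage *inputImage;
@end

@implementation MyColorInvert
- (CIImage *)outputImage
{
    // Invert each channel by scaling by -1 and biasing by +1.
    return [CIFilter filterWithName:@"CIColorMatrix"
                      keysAndValues:kCIInputImageKey, self.inputImage,
        @"inputRVector",    [CIVector vectorWithX:-1 Y:0 Z:0 W:0],
        @"inputGVector",    [CIVector vectorWithX:0 Y:-1 Z:0 W:0],
        @"inputBVector",    [CIVector vectorWithX:0 Y:0 Z:-1 W:0],
        @"inputBiasVector", [CIVector vectorWithX:1 Y:1 Z:1 W:0],
        nil].outputImage;
}
@end
```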

The code shown on the slide is the entirety of what you need to do that. This is a relatively simple example. What we're going to do now is go over six more complicated examples. And I've got an overview here of our inputs. And I'm going to click, and then don't blink.

And we're going to go and see what all the different effects we're going to generate today are and go through each step. So here we go. So now we're going to go through each one of those individually and talk about all the steps and different filters we use to create the effects that we see here.

So first things first, we're going to start with chroma key, a green-screen-style process, where we're going to remove the background by keying off a certain color and then blend with a background image. So how are we going to do that? Well, we're going to create a color cube, so we're going to use the CI color cube filter, and we're going to tell it that we want certain colors to be transparent.

We're going to use that data that we've just created, that color cube, as an input to the color cube filter. And then we're going to use source over compositing to blend the result of our input image, in this case the picture of me standing in front of a green screen, with our background, the picture of the beach.

So if we start with a color cube that looks like this, which is just an identity, so if we were to use this as the data, it would just pass the input image straight through. It's effectively a 3D color lookup table, but we wouldn't be modifying anything. And in order to get the effect that we're looking for, what we want to do is remove all of the green from the image and make that transparent.

So we're going to want to take a slice, basically, out of this cube and make all the alpha values go to zero. So this is easier to see if, instead of looking at this in the RGB color model, we look at it in HSV, so hue, saturation, value, where we have a cone. And we're basically going to take a slice out of this cone.

And make a range of angles become alpha zero, so transparent. So this, if we look at the resulting cube that we get, we have basically a wedge that's been taken out of the cube where everything that's been taken out is alpha zero and everything that remains is alpha one in the original color, so it just passes through, but all the green from the image is gone.

So in terms of code, it's relatively straightforward. First off, we're going to allocate some memory. We've got some limits in terms of size. We're going to do a 64 by 64 by 64 cube in this case. And then we're going to populate the cube by computing the red, green, blue values. So basically a simple gradient going from zero to one in float values. Now, once we have our RGB value for a given point, we're going to convert that to HSV.

And then we're going to use the hue value, so the first component of the HSV, to determine whether or not that RGB value is within the range of colors that we want to make transparent. And if it is, we're going to set alpha to zero, and if it isn't, we're going to set alpha to one. And then we populate the actual RGB values by multiplying by alpha, because what you put in a color cube is premultiplied alpha values.

Once we have that, we create some NSData with the memory that we just allocated. We create a CI color cube filter. We then tell the color cube filter how large it is, in this case 64, which is the dimension of our cube. And we set the NSData as the input data for our color cube.

Now, in terms of visually how this works, if we were to create a CI color cube filter, we've got our input image and we've got our color cube that we've created which has all the green transparent, we apply that to our input image and we'll get an image of, you know, someone standing in front of a green background but with the green transparent. In this case, we're going to be using gray and white checkered background to indicate transparency.

So now that we've got our transparent image, we can take that image and use the CI source over compositing filter, set that as the input image, set another image as the background image, and the result is the composited image that we were looking for. So it's really that easy to create a brand new effect with built-in filters.
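Putting the whole chroma key recipe together, here's a condensed sketch; rgbToHue stands in for a real RGB-to-HSV conversion, the hue range for green is illustrative, and foregroundImage / backgroundImage are placeholders:

```objc
// Build a 64x64x64 color cube whose "green" hues get alpha = 0.
const unsigned int size = 64;
size_t cubeDataSize = size * size * size * sizeof(float) * 4;
float *cubeData = (float *)malloc(cubeDataSize);
float *c = cubeData;

for (unsigned int z = 0; z < size; z++) {
    float blue = (float)z / (size - 1);
    for (unsigned int y = 0; y < size; y++) {
        float green = (float)y / (size - 1);
        for (unsigned int x = 0; x < size; x++) {
            float red = (float)x / (size - 1);

            // rgbToHue is a stand-in for an RGB-to-HSV conversion (degrees).
            float hue = rgbToHue(red, green, blue);

            // Make a wedge of green hues fully transparent (range is illustrative).
            float alpha = (hue > 100.0f && hue < 140.0f) ? 0.0f : 1.0f;

            // Color cube entries are premultiplied RGBA floats.
            *c++ = red * alpha;  *c++ = green * alpha;  *c++ = blue * alpha;  *c++ = alpha;
        }
    }
}

NSData *cube = [NSData dataWithBytesNoCopy:cubeData length:cubeDataSize freeWhenDone:YES];

CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:@(size) forKey:@"inputCubeDimension"];
[colorCube setValue:cube forKey:@"inputCubeData"];
[colorCube setValue:foregroundImage forKey:kCIInputImageKey];   // green-screen shot

// Composite the keyed foreground over the background photo.
CIImage *result = [CIFilter filterWithName:@"CISourceOverCompositing"
                             keysAndValues:kCIInputImageKey, colorCube.outputImage,
                                           kCIInputBackgroundImageKey, backgroundImage,
                                           nil].outputImage;
```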

Another example of how you can use a color cube, and we'll be talking more about color cubes in our second session today, is to do what's often referred to as the color accent mode, which is common in digital cameras these days, where you just want to highlight one color.

So once again, if you were to look at the colors in HSV and say you wanted to preserve only the red, you could take the cone and say, well, I'm going to make everything except for red be luminance only, so gray, and just preserve the red. And if you do that, and you create a color cube that looks like this, your resulting image will be all gray except for the red. So it's quite simple, and it allows you to do a lot of interesting effects.

So our next sample, we're going to show how you can do a white vignetting effect. So on iOS, we have a vignetting effect, but it does the opposite of what we're trying to do here. So how would you go about creating a white vignette? We have a vignetting effect that darkens.

We don't have a vignetting effect that kind of creates a halo. So if we wanted to go from this image to that image, how would we do that? Well, we can do this using built-in filters once again. We're going to start by finding the face in the image.

We're going to create a base shading map using a CI radial gradient centered on the face that we've just found. And then we're going to blend that image with our original input image. And that's really all there is to it. So let's look at that in terms of code. We're going to use a new API that we haven't talked about so far today, called CI detector. So we're going to create a CI detector, and we're going to tell it that we're looking for faces.

Then we're going to ask the detector to find the features in the image, our input image, which will look for the faces, and it's going to return an array to us. In this case, we'll just look for the first face in the image, so we'll get a CI face feature. We're going to compute the center of the rect that's returned and create a vector which we'll then use to create our radial gradient.

So in terms of what that looks like visually, we've got our CI radial gradient filter. We're going to set the input radius 0 to be something relatively large compared to the overall size of the image, and the input radius 1 to something slightly larger than the radius of the face that we just found. Input color 0 is going to be opaque white.

So as we go out towards the edge of the image, it's going to be completely white. And input color 1 is going to be transparent white. And basically we're going to transition in between those two values from radius 0 to radius 1. And the input center is the face rect that we found earlier. When we create that, what we end up with is an image that looks like this. And again, the checkerboard pattern indicates transparency.

It's a radial gradient which is centered on the face that we found, completely transparent around the face radius of 150, and completely opaque around the larger radius. Now all we have to do is take that image and blend it with our original background using a CI source over compositing filter.
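A sketch of the recipe as a whole; here color 0 sits at the inner (face-sized) radius and color 1 at the larger outer radius, which yields the same gradient just described, and the radii are illustrative:

```objc
// Find the first face in the image.
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil options:nil];
NSArray *faces = [detector featuresInImage:image];
CIFaceFeature *face = [faces objectAtIndex:0];   // assume at least one face
CGPoint center = CGPointMake(CGRectGetMidX(face.bounds), CGRectGetMidY(face.bounds));

// Radial gradient: transparent white near the face, opaque white at the edges.
CIImage *shade = [CIFilter filterWithName:@"CIRadialGradient"
                            keysAndValues:
    @"inputRadius0", @(face.bounds.size.width),           // roughly the face radius
    @"inputRadius1", @(image.extent.size.width * 0.75),   // large relative to the image
    @"inputColor0",  [CIColor colorWithRed:1 green:1 blue:1 alpha:0],
    @"inputColor1",  [CIColor colorWithRed:1 green:1 blue:1 alpha:1],
    kCIInputCenterKey, [CIVector vectorWithX:center.x Y:center.y],
    nil].outputImage;

// Blend the shading map over the original photo.
CIImage *vignetted = [CIFilter filterWithName:@"CISourceOverCompositing"
                                keysAndValues:kCIInputImageKey, shade,
                                              kCIInputBackgroundImageKey, image,
                                              nil].outputImage;
```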

And we get the result that we were looking for. Our next example is going to show how we can do the tilt-shift effect, which is something that we showed how to do several years ago using custom kernels. And today we're going to show you how you can do the exact same thing just by using built-in filters.

So how are we going to do this? Well, we're going to start off by creating a blurred version of the image. And then we're going to create two linear gradients and blend them together. And we're going to composite the results using blend with mask and it's going to help us determine where we want the image to be blurred and where we want it to be sharp.

So we're going to start with CI Gaussian Blur, which is a new filter for iOS 6. And as David was mentioning earlier, a lot of people have asked us for it. And it is very useful for creating a lot of interesting effects, including the tilt-shift effect. So in this case, we're going to take our input image.

And we're going to blur it by a certain amount. It shouldn't say input background image there. It should say radius. And we're going to end up with a blurred image. The blurred image is going to be larger than our original image because the extents get larger, so we're going to crop it a little bit to match the size of our input image.

The next thing we're going to do is we're going to create the two linear gradients that I spoke about earlier. So first things first, we'll create one that goes from the top to the bottom. I'm just going to go through all these little inputs. And what we end up here with is a green image that's completely green and opaque from the top of the image till we get to one quarter of the way through it.

And then from that spot until we get to halfway through the image, so 0.5 of the height, it becomes more and more transparent, and then completely transparent after that. So that gives us a solid, then slightly transparent, then completely transparent gradient. And we're going to do the same thing, but starting from the bottom up.

And we're going to use the same color values. And I'll talk about why we use green in a moment. So now we've got our two gradients. We're going to combine those two together using a CI addition compositing filter: we use these two images as the inputs, and we end up with our nice linear gradient, which goes from solid green, to transparent, and then back to solid green. Once we have that, we can use the CI blend with mask filter. So we've got our three input images.

We're going to use the blurred image as the input image. And then we're also going to use the background image, the one that we haven't blurred, our original input image, and set that as the background image. And the mask image is going to be the green image that we just created with the two gradients. And the way this works is that CI blend with mask looks at the green channel to determine which image it should be sampling from. So where it's completely opaque and green, it's going to sample from the blurred image.

And where it's completely transparent, it's going to sample from the background image. And it's going to transition in between those based on the alpha. So with that, we get the result that we were looking for. And so it's that easy to combine these filters together to get your tilt-shift look.
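A condensed sketch of the whole tilt-shift chain; the blur radius and the gradient placement (a sharp band across the middle of the image) are illustrative choices:

```objc
// 1. Blurred version of the image, cropped back to the original extent.
CIImage *blurred = [CIFilter filterWithName:@"CIGaussianBlur"
                              keysAndValues:kCIInputImageKey, image,
                                            kCIInputRadiusKey, @10.0, nil].outputImage;
blurred = [blurred imageByCroppingToRect:image.extent];

CGFloat h = image.extent.size.height;
CIColor *green            = [CIColor colorWithRed:0 green:1 blue:0 alpha:1];
CIColor *transparentGreen = [CIColor colorWithRed:0 green:1 blue:0 alpha:0];

// 2. Two linear gradients: one from the top down, one from the bottom up.
CIImage *topGradient = [CIFilter filterWithName:@"CILinearGradient"
                                  keysAndValues:
    @"inputPoint0", [CIVector vectorWithX:0 Y:h * 0.75], @"inputColor0", green,
    @"inputPoint1", [CIVector vectorWithX:0 Y:h * 0.50], @"inputColor1", transparentGreen,
    nil].outputImage;

CIImage *bottomGradient = [CIFilter filterWithName:@"CILinearGradient"
                                     keysAndValues:
    @"inputPoint0", [CIVector vectorWithX:0 Y:h * 0.25], @"inputColor0", green,
    @"inputPoint1", [CIVector vectorWithX:0 Y:h * 0.50], @"inputColor1", transparentGreen,
    nil].outputImage;

// 3. Add the gradients together to form the mask.
CIImage *mask = [CIFilter filterWithName:@"CIAdditionCompositing"
                           keysAndValues:kCIInputImageKey, topGradient,
                                         kCIInputBackgroundImageKey, bottomGradient,
                                         nil].outputImage;

// 4. Blend: blurred where the mask is green, sharp where it is transparent.
CIImage *tiltShift = [CIFilter filterWithName:@"CIBlendWithMask"
                                keysAndValues:kCIInputImageKey, blurred,
                                              kCIInputBackgroundImageKey, image,
                                              kCIInputMaskImageKey, mask,
                                              nil].outputImage;
```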

Okay, so now let's pretend for some reason you had to relocate your family due to some witness relocation program or something along those lines, and you wanted to quickly anonymize all your family photos. We can do that with Core Image with relative ease. And I'm going to show you how to do that.

So first things first, we're going to create a pixelated version of the image. We're going to build a mask using the face detector to find all the faces in the image. And then for each of those faces, we're going to create another radial gradient like we did earlier for the shading.

And we're going to create a mask by building one on top of the other. And then we're going to blend the pixelated image with the original image using the mask that we've created, which corresponds to the circles for the faces. Let's go over that process in depth. So first things first, we're going to create a pixelated image by using a CI pixelate filter. Set the input image, set the scale to something that we find pleasing, and we end up with a pixelated image.

The next thing we need to do is find the faces and create the mask. So conceptually, the way this works is that once again we use a CI detector class, and we ask that detector to find all the features of type face in this image. In this case, we're going to find four faces, and we're going to get rects returned to us. And for each one of those rects, what we're going to do is create a circle that covers that rect completely.

That is going to be our mask. That's going to tell us where we want to use the pixelated image. And again, that's why we use green, because we're going to be using a CI blend with mask filter, which uses the green color component to determine where to sample from. So our mask is going to look like this.

Now in terms of code, and I think this is my last code slide, what this looks like is: we've got our mask image initially set to nil. And then we're going to iterate over all the faces. We're going to find the center of each face and compute a radius for it. And then we're going to create a radial gradient. We're using the shortcuts that David showed us earlier to create a circle that is completely opaque green in the center and then completely transparent outside of where the face is located.

Once we've created that filter, we can ask for the CI image from it, so its output image. And then if this is the first image, the first face that we found, our mask image is that image. If it isn't, what we're going to do is composite the current circle image that we've just created with the previous result that we got. And we just keep doing that for all the faces. So we just keep compositing. And we end up with the mask that we saw previously.
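A sketch of that loop; detector and image are assumed from the earlier face-finding step, and the radius calculation is illustrative:

```objc
// Build one green circle per face and composite them into a single mask.
CIImage *maskImage = nil;
for (CIFaceFeature *face in [detector featuresInImage:image]) {
    CGPoint center = CGPointMake(CGRectGetMidX(face.bounds), CGRectGetMidY(face.bounds));
    CGFloat radius = MAX(face.bounds.size.width, face.bounds.size.height) / 1.5;

    // Opaque green inside the face circle, transparent outside.
    CIImage *circle = [CIFilter filterWithName:@"CIRadialGradient"
                                 keysAndValues:
        @"inputRadius0", @(radius),
        @"inputRadius1", @(radius + 1.0),
        @"inputColor0",  [CIColor colorWithRed:0 green:1 blue:0 alpha:1],
        @"inputColor1",  [CIColor colorWithRed:0 green:1 blue:0 alpha:0],
        kCIInputCenterKey, [CIVector vectorWithX:center.x Y:center.y],
        nil].outputImage;

    // First face: use the circle directly; otherwise composite over the previous mask.
    if (maskImage == nil) {
        maskImage = circle;
    } else {
        maskImage = [CIFilter filterWithName:@"CISourceOverCompositing"
                               keysAndValues:kCIInputImageKey, circle,
                                             kCIInputBackgroundImageKey, maskImage,
                                             nil].outputImage;
    }
}
```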

So finally, what we need to do is just use the CI blend with mask filter. We're going to set its input image to be the pixelated image that we created earlier. Our mask image is what we just created with that for loop iterating over all the faces. And the background image is our original input image.

When we do that, we get the desired result. Now, let's pretend you wanted to create a transition, but what exists inside of Core Image doesn't suit your needs. So you wanted to go from the image on the left here to the image on the right, of my boss's family. How would you go about creating an interesting transition that looked like an arcade-style effect, where it's pixelated and dissolving all at the same time? We can do that by combining just a few existing built-in filters.

So, we're going to use a CI Dissolve transition to blend between those images and then we're going to pixelate the result of that

[Transcript missing]

Set our input image to the start image. Set our target image to the image we want to end up at. Use some function for time that's going to look like this. It's just a simple ramp that's been clamped. And we get our dissolve transition.

So that's the first part of the equation. The next thing we're going to do is take the output that we've just gotten, our dissolve transition, use the CI pixelate filter on it, and change the scale of the pixelate based on time.

So we're going to vary that over time from zero to one the same way we did with the transition filter. And in this case, we're going to make the pixels go really, really big, and then we're going to make them go back down. So we'll get this kind of big pixel to small pixel effect that we had.
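A sketch of one frame of that combined transition, where t runs from 0 to 1 and startImage / endImage are the two scenes; the scale curve that peaks mid-transition is illustrative:

```objc
CIImage *dissolve = [CIFilter filterWithName:@"CIDissolveTransition"
                               keysAndValues:kCIInputImageKey, startImage,
                                             kCIInputTargetImageKey, endImage,
                                             kCIInputTimeKey, @(t), nil].outputImage;

// Pixel scale ramps from 1 up to about 61 at t = 0.5 and back down to 1.
CGFloat scale = 1.0 + 120.0 * (0.5 - fabs(t - 0.5));
CIImage *frame = [CIFilter filterWithName:@"CIPixellate"
                            keysAndValues:kCIInputImageKey, dissolve,
                                          kCIInputScaleKey, @(scale), nil].outputImage;
```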

And we do that, combine everything together, and we get the result we wanted. And that went by a little bit too quickly, but anyhow. The next thing we're going to do is take a video and make it look like it was old film. And we're going to do that by -- this is the most complicated sample I have -- combining several different filters and using a lot of different techniques together. And it's going to give us this kind of old-film look, where we have white speckles and kind of dark streaks and a sepia tone look to the video that's being processed by Core Image.

So how are we going to do this? First, we're going to apply the sepia filter to our input source. We're going to create white specks. We're going to create dark scratches. And then we're going to composite everything together. So first things first, we use the CI sepia tone filter, set the input image, which in this case could be video, set the intensity to one to get the maximum effect, and we get our sepia-toned image.

The next thing we're going to do is create the white specks, which are going to give it that noisy look. And in this case, we're going to use a filter that we haven't used so far today called the CI random generator, which, as David mentioned, is a generator, so it doesn't have any input image, but it generates an output.

And in this case, it generates a very colorful noise pattern. But we want white specks; we don't want colorful noise. So what we're going to do instead is apply a color matrix to it. And the result of this color matrix is going to give us a very faint, dark image, mostly transparent with white specks.

Which looks like this. And if we were to take the output from the CI random generator and use an affine transform to move it over time, we'll end up with something that looks like this. So that's starting to look like what we want for the first addition to the video.

If we take that and blend it over our source image using the CI source over compositing filter, setting the mostly transparent speck image as the input image and our sepia tone image as the background image, we start getting the result that we're looking for. So now we've got our image with the white specks and the sepia applied to it.

The next thing we need to do is to add the dark scratches. So in order to do this, we're going to once again use the CI random generator. But as I mentioned earlier, it creates very colorful noise. And contrary to the other filter, where we were looking for a mostly black, transparent image with just white specks, in this case we want to create a mostly white image with just a few dark streaks. So what we're going to do, first things first, is apply an affine transform to it. And you can do this by calling CI image's image by applying transform and providing a CG affine transform, or by creating a CI affine transform filter.

So in this case, we're going to apply a filter that scales this, the result of the CI random generator in both the X and Y directions. And what it's going to give us is thick pixels and very long pixels. So we've elongated it by using a scale in the Y direction of 25 and in the X direction of 1.5.

But it's still very colorful. So the next thing we're going to do is apply a color matrix to it. And in this case, what we're going to do is blow out the values by applying a scale and a bias vector that's going to make most of the image disappear, basically. So now we've got an image that just has some long and streaky cyan highlights in it.

And then we're going to use a CI minimum component filter. And this is really the last piece of the recipe where what the CI minimum component filter does is it looks for the minimum of the RGB values and it uses that to create a grayscale image. And if we do that, we end up with our image with these black streaks in it, or dark scratches.

And again, if we were to take the initial output from the random generator and apply a transform such that we animate it over time in the Y direction, then these streaks will look like they're moving up and down, or actually just down. And we're good to go in terms of taking that and compositing it with the image that already has the white specks on top of the sepia image, using the CI multiply compositing filter, and we'll get our final result.
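A condensed sketch of the dark-scratches half of the recipe and the final composite; sepiaWithSpecks is assumed to be the sepia-plus-white-specks image built above, time drives the animation, and the color matrix values are illustrative:

```objc
// Random noise, stretched into long streaks and shifted over time.
CIImage *noise = [CIFilter filterWithName:@"CIRandomGenerator"].outputImage;
CGAffineTransform stretch = CGAffineTransformMakeScale(1.5, 25.0);
stretch = CGAffineTransformTranslate(stretch, 0.0, time * 30.0);   // animate vertically
CIImage *streaks = [noise imageByApplyingTransform:stretch];

// Blow out the values so only a few dark streaks survive...
CIImage *blownOut = [CIFilter filterWithName:@"CIColorMatrix"
                               keysAndValues:kCIInputImageKey, streaks,
    @"inputRVector",    [CIVector vectorWithX:4 Y:0 Z:0 W:0],
    @"inputBiasVector", [CIVector vectorWithX:0 Y:1 Z:1 W:1],
    nil].outputImage;

// ...then take the minimum RGB component to get grayscale scratches.
CIImage *scratches = [CIFilter filterWithName:@"CIMinimumComponent"
                               keysAndValues:kCIInputImageKey, blownOut,
                                             nil].outputImage;

// Multiply the scratches over the sepia-plus-specks image for the final look.
CIImage *oldFilm = [CIFilter filterWithName:@"CIMultiplyCompositing"
                     keysAndValues:kCIInputImageKey,
                                   [scratches imageByCroppingToRect:sepiaWithSpecks.extent],
                                   kCIInputBackgroundImageKey, sepiaWithSpecks,
                                   nil].outputImage;
```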

And that's all we had to do to create this relatively interesting effect. So that's all we have today in terms of filter recipes. If you'd like additional information, you can contact Allan Schaffer, who's our graphics and imaging evangelist, at [email protected]. Or you can go to the website, devforums.apple.com.

We also have another talk immediately following this talk, which is going to go more in-depth into some advanced techniques using Core Image and how you can get the maximum performance out of your application. And that's all I have for today. So I'd like to thank you once again for coming. And good luck with using Core Image. Thank you very much.