
WWDC05 • Session 209

Mac OS X Color Image Management Explained

Graphics and Media • 56:43

The growing prevalence of color digital media, such as digital photos, puts increasing pressure on applications to handle color correctly. Mac OS X takes a system-wide approach to managing color data by integrating ICC color management into all layers of the graphics stack. Attend this session to learn all aspects of Mac OS X color management capabilities from important automatic color matching behaviors found in Quartz 2D to direct manipulation of color profile information using ColorSync APIs. This is a must-attend session for developers looking to get great color results both on screen and in print.

Speakers: David Hayward, Luke Wallis

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Good afternoon, and thank you for coming to Session 209, Color Image Management Explained. As you might have guessed from the title, we'll be talking about the convergence of two critical aspects of a graphical operating system: color management and handling image file formats. Both are very critical in applications, especially on Mac OS X.

In this discussion today, we'll cover several areas. First, we'll give a general architectural overview of how image handling and color management fit into the overall Mac OS X system. Then we'll go on to talk about opening images, displaying images, filtering them with Core Image, printing images, and lastly, saving them. In all of these cases, we'll return to two themes: how to handle image file formats correctly and how to handle color management correctly.

So first off, architectural overview. For over a decade, color management has been provided using ColorSync, which has been an industry standard for providing great color management at the operating system level. ColorSync is built on three key technologies. First are ICC profiles, which are data files that represent how devices and image files characterize color.

The second foundation is device integration, a system by which devices, as they are discovered on the system, can register their modes and color profiles with ColorSync. And lastly there are CMMs, the computational engines that perform the mathematical calculations to convert from one profile to another profile's color space.

ColorSync is built on the system at a low level on top of Darwin, and above that we have a variety of graphics frameworks on our system, providing a diverse set of functionality from image capture, Quartz, QuickTime, and printing. All of these technologies make use of color management provided by ColorSync.

One thing to be aware of is that Quartz is actually made up of several key components, and today we'll be talking about two in particular: Core Graphics and something new in Tiger called ImageIO. These two technologies together provide a great way of handling image file formats and color management.

Lower down, we have the Quartz API layer, which has CGColorSpace, CGImage, and CGColor. And lastly, we have the very low-level ColorSync API, which provides CMProfile, CMBitmap, and CMColor. All of these data types can often be converted between each other, so you as an application can choose which API best suits your needs. But what we recommend these days for developers is to use the Quartz layer. This provides the best compromise between having access to a diverse set of options and being easy to program to.

Some applications just want things to look good by default; other applications may want explicit controls to handle the cases where an image doesn't have an embedded profile. So again, this is the key area we'll be talking about today: image management and how it intersects with the Quartz API. In particular, we'll be talking about CGImage and its companion framework, ImageIO.

So as I mentioned before, ImageIO is a new framework in Tiger which provides image reading and writing functionality for Core Graphics and the rest of Quartz. It is capable of reading many file formats, writing to file formats, reading and writing metadata, incremental loading of images, floating point image support, and automatic color management. And it does this with best-in-class performance, while providing a consistent API for you to use across a wide variety of formats.

That brings up the question people always ask me first about ImageIO: what formats are supported? Of course, all the web-standard formats are there, and we've spent a considerable amount of effort to get great performance out of these file formats. We support TIFF, JPEG, JPEG 2000, PNG, and GIF.

There's also an emerging area of floating point image file formats, which we support natively in ImageIO. These are formats such as OpenEXR and TIFF variants such as LogLuv, IEEE Float, and Pixar TIFFs. Another growing area in image formats is camera RAW formats. Increasingly, this is becoming the file format of choice for users and professionals alike to get the best color images out of their cameras. ImageIO today supports RAW formats from Canon, Nikon, Minolta, Olympus, and Sony.

And of course, no image library would be complete without support for the wide variety of other file formats that are needed for legacy and niche purposes. Another key piece of ImageIO functionality is the metadata it supports. There are several flavors of metadata, such as EXIF for digital camera images, IPTC for publishing markets, GPS metadata, and some vendor-specific maker notes. We support all of that in ImageIO. And of course, this is the first release of ImageIO, and it will be growing extensively in the months and years to come.

Before I talk in more detail about the ImageIO API, I just want to start with a foundation of what an image file actually is. An image, in order to be defined, needs to define its geometry: its height, its width, its pixel depth. It also needs to define a color space and its pixel data. And optionally, it may also provide a thumbnail and metadata. This metadata is increasingly becoming a critical aspect of image processing. People are finding it very, very useful to find and locate their images through Spotlight using metadata.

Image files also sometimes contain more than one image per file; this is important to be aware of. Formats such as GIF and TIFF support this. And given that there's more than one image per file, there may be some attributes that apply to the file as a whole rather than to any one particular image, such as the file type, or, in the case of GIF, file properties like whether it animates or not. So here's just a brief example of some typical data that would be found in a multi-page TIFF file. I won't go into detail, but you can get the idea of the different types of images that can be represented in this model.

So given this model of an image file, how do we represent it in our API using Core Graphics? Well, the geometry, color space, and pixel data are nicely encompassed by the existing Core Graphics data type, the CGImageRef. The thumbnail, which is optional, may also be represented as a CGImageRef. The metadata is represented as a hierarchical dictionary of key-value pairs, a CFDictionary, and so are the file properties.

So now that we have this foundation, the first thing you need to do with images is to be able to open them in your application. Here are the general steps involved in reading an image. The file may start on disk or over a network, and we need to determine the correct file format and parse that file. Once we have parsed the file, we can decompress the pixel data, extract the color space definition, and extract the metadata. Then this information is passed on to the application for use as it sees fit.

Here's how this works using the ImageIO API. There's a new data type, CGImageSource. You can create an image source by calling CGImageSourceCreateWithURL. You may also create an image source from a CFData or a CGDataProvider. Once you have a CGImageSource, you can get properties for the file, such as the UTI, or type identifier, for that file format by calling CGImageSourceGetType. You can get the count of images within that file using CGImageSourceGetCount.

Then for each image contained within that file, you can get the actual image by calling CGImageSourceCreateImageAtIndex. You can get the metadata by calling CGImageSourceCopyPropertiesAtIndex. And you can get the thumbnail by calling CGImageSourceCreateThumbnailAtIndex. Those are basically all the APIs you need to know to use ImageIO.

It's very simple. Here's an example of how to use it in practice: a function that, given a URL, returns a CGImageRef along with three critical pieces of metadata, the DPI width, DPI height, and orientation. We'll talk more about that later, but this is very important for displaying your image correctly. We create a CGImageSourceRef by calling CGImageSourceCreateWithURL. Then we get the metadata and properties for the first image in the file by calling CGImageSourceCopyPropertiesAtIndex.

Once we have that property dictionary, we can get values out of it with CFDictionary calls, using the keys kCGImagePropertyDPIWidth, kCGImagePropertyDPIHeight, and kCGImagePropertyOrientation. Lastly, we call CGImageSourceCreateImageAtIndex, and that returns the actual CGImage to you. That's all there is.
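
In compilable form, that sequence might look roughly like the following sketch; the function name CreateImageFromURL and the error handling are illustrative, not taken from the actual ImageApp sample:

    #include <ApplicationServices/ApplicationServices.h>

    // Sketch: open a file with ImageIO and return the image plus DPI and
    // orientation metadata. Missing keys leave the out-parameters untouched.
    static CGImageRef CreateImageFromURL(CFURLRef url,
                                         double *outDPI, int *outOrientation)
    {
        CGImageSourceRef source = CGImageSourceCreateWithURL(url, NULL);
        if (source == NULL)
            return NULL;

        // Metadata and properties for the first image in the file.
        CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
        if (props != NULL) {
            CFNumberRef dpi = CFDictionaryGetValue(props, kCGImagePropertyDPIWidth);
            if (dpi != NULL)
                CFNumberGetValue(dpi, kCFNumberDoubleType, outDPI);
            CFNumberRef orient = CFDictionaryGetValue(props, kCGImagePropertyOrientation);
            if (orient != NULL)
                CFNumberGetValue(orient, kCFNumberIntType, outOrientation);
            CFRelease(props);
        }

        // The pixels themselves, color managed by ImageIO.
        CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
        CFRelease(source);
        return image;
    }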

One of the other areas ImageIO excels in is returning thumbnails for images. This was an interesting area when developing ImageIO because there's a wide variety of thumbnails in use. Some file formats support thumbnails, some don't. Some thumbnails are very large, some are small. On the other side, applications may want thumbnails returned as quickly as possible, or may want a thumbnail returned even if there isn't one inside the image.

So to provide this flexibility, we added an options dictionary to the create-thumbnail call. In this example, we create an image source given a URL, and then we create an options dictionary with two key-value pairs in it. The first is kCGImageSourceCreateThumbnailFromImageIfAbsent. This tells ImageIO that you wish to have a thumbnail returned even if the file doesn't contain an actual embedded thumbnail. The other property we specify is kCGImageSourceThumbnailMaxPixelSize.

This tells ImageIO that even if the thumbnail is bigger than the requested size, it should be scaled down. We create that options dictionary and pass it into CGImageSourceCreateThumbnailAtIndex. In the session after this one, there will be a discussion of the Image Capture APIs, with further examples of how you can use this thumbnail API in a real-world application.
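
A minimal sketch of that call, assuming an illustrative 128-pixel maximum size:

    #include <ApplicationServices/ApplicationServices.h>

    // Sketch: ask ImageIO for a thumbnail no larger than 128 pixels,
    // synthesizing one from the full image if none is embedded in the file.
    static CGImageRef CreateThumbnail(CFURLRef url)
    {
        CGImageSourceRef source = CGImageSourceCreateWithURL(url, NULL);
        if (source == NULL)
            return NULL;

        int maxSize = 128;  // illustrative maximum pixel size
        CFNumberRef maxSizeRef = CFNumberCreate(NULL, kCFNumberIntType, &maxSize);
        const void *keys[]   = { kCGImageSourceCreateThumbnailFromImageIfAbsent,
                                 kCGImageSourceThumbnailMaxPixelSize };
        const void *values[] = { kCFBooleanTrue, maxSizeRef };
        CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, 2,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);

        CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, options);

        CFRelease(options);
        CFRelease(maxSizeRef);
        CFRelease(source);
        return thumbnail;
    }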

That's Session 210. So those are the basics, all you need to know to get up and running using ImageIO to read all the file formats we now support in Tiger. There are a few advanced areas I'd like to talk about in a little detail, just to get you more excited. First is loading images incrementally.

One of our other design goals for ImageIO was to support loading images incrementally for clients such as WebKit and Safari. We do this by creating an incremental image source with CGImageSourceCreateIncremental. Then the application, in a loop of some sort, accumulates data into a CFData and provides it by calling CGImageSourceUpdateData.

Once new data has been provided to ImageIO, the client requests an image by calling CGImageSourceCreateImageAtIndex. This will return one of three values. It will return NULL when not enough data has been accumulated to return any image. Or it will return a partial image, either a lower-fidelity version or the initial bands of the image. Lastly, when all the data is complete, it will return the full image. You can determine the status of this loop by calling CGImageSourceGetStatusAtIndex.

Once you have the status and the image, you can draw the image. One thing that's important to keep in mind, however, is that you need to release the image when you're done with it. This is because the CGImageRef is immutable, and in order for a new image to be returned by ImageIO, the client must have released the previous one.
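
Here's a rough sketch of that loop; CopyMoreData and DrawPartialImage are hypothetical helpers standing in for your networking and drawing code:

    #include <ApplicationServices/ApplicationServices.h>
    #include <stdbool.h>

    // Hypothetical helpers: append newly arrived bytes (returning true when
    // the stream is finished) and display whatever image we have so far.
    extern bool CopyMoreData(CFMutableDataRef accumulated);
    extern void DrawPartialImage(CGImageRef image, CGImageSourceStatus status);

    // Sketch of the incremental loading loop described above.
    static void LoadIncrementally(CFMutableDataRef accumulated)
    {
        CGImageSourceRef source = CGImageSourceCreateIncremental(NULL);
        bool final = false;

        while (!final) {
            final = CopyMoreData(accumulated);
            CGImageSourceUpdateData(source, accumulated, final);

            // NULL until enough data has arrived; possibly a partial
            // (banded or low fidelity) image after that.
            CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
            if (image != NULL) {
                DrawPartialImage(image, CGImageSourceGetStatusAtIndex(source, 0));
                CGImageRelease(image);  // release so ImageIO can hand back a newer one
            }
        }
        CFRelease(source);
    }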

On the subject of floating point images, you're probably all aware that many formats only support one pixel depth. For example, JPEG only supports 8 bits per sample. Other formats, such as TIFF, support arbitrary pixel depths. As a rule, when ImageIO returns an image ref to the application, it will be the same depth as the file.

However, for high dynamic range, or floating point, images, the story is a little different. Typically with these file formats, the data actually stored in the file is specially packed bits, and the open source code for decompressing this data often has several modes of operation. The bits can be unpacked either as floating point data or as integer data, and either as extended-range data, where values less than zero or greater than one are allowed, or tone compressed into the range zero to one.

Given these intricacies, we believe that applications will want to opt in to using floating point images. So by default, when you ask ImageIO for a CGImage and the file contains floating point data, it'll be tone compressed into 16-bit integers. However, an application that is aware of floating point data and wants to pass it on for further processing, for example using Core Image, can request that ImageIO return floating point values.

Let me give a brief code example of how this works. It's very simple. Again, we're making use of an options dictionary to specify flags that will be passed into ImageIO. Here the options dictionary contains a single key-value pair, kCGImageSourceShouldAllowFloat, with the value true.

We pass that options dictionary into CGImageSourceCopyPropertiesAtIndex. If the image actually contains floating point data, the returned property dictionary will contain a value for kCGImagePropertyIsFloat.
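
A small sketch of that check; SourceImageIsFloat is an illustrative name:

    #include <ApplicationServices/ApplicationServices.h>
    #include <stdbool.h>

    // Sketch: opt in to floating point and check whether the first image
    // in the source actually contains float pixel data.
    static bool SourceImageIsFloat(CGImageSourceRef source)
    {
        const void *keys[]   = { kCGImageSourceShouldAllowFloat };
        const void *values[] = { kCFBooleanTrue };
        CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, 1,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);

        CFDictionaryRef props =
            CGImageSourceCopyPropertiesAtIndex(source, 0, options);
        CFRelease(options);
        if (props == NULL)
            return false;

        CFBooleanRef isFloat = CFDictionaryGetValue(props, kCGImagePropertyIsFloat);
        bool result = (isFloat != NULL) && CFBooleanGetValue(isFloat);
        CFRelease(props);
        return result;
    }

Passing the same options dictionary to CGImageSourceCreateImageAtIndex is what actually yields float pixels rather than the tone-compressed default.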

The last subject I want to talk about regarding opening images is how to dynamically support all the file formats supported by ImageIO. Typically, applications, either Cocoa or Carbon based, pre-declare the file formats they support by listing them in their Info.plist. However, as I mentioned before, ImageIO is going to be evolving considerably in the months and years ahead, and we will be adding more file formats, especially in the area of camera RAW.

So you may want to write your application today so that as new file formats are supported by ImageIO, your application automatically gets this functionality. The way you do this is with the ImageIO API CGImageSourceCopyTypeIdentifiers. This returns an array of CFStrings, where each string is a uniform type identifier for a file format. Once you have that array of type identifiers, you can use UTTypeCopyDeclaration to get the list of filename extensions used for each type identifier.
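
Roughly, that lookup might look like this sketch; LogReadableTypes is illustrative and just prints what it finds:

    #include <ApplicationServices/ApplicationServices.h>
    #include <CoreServices/CoreServices.h>

    // Sketch: walk every UTI ImageIO can read and print the filename
    // extensions from each type's declaration.
    static void LogReadableTypes(void)
    {
        CFArrayRef types = CGImageSourceCopyTypeIdentifiers();
        CFIndex count = CFArrayGetCount(types);

        for (CFIndex i = 0; i < count; i++) {
            CFStringRef uti = CFArrayGetValueAtIndex(types, i);
            CFDictionaryRef declaration = UTTypeCopyDeclaration(uti);
            if (declaration == NULL)
                continue;

            CFDictionaryRef tags =
                CFDictionaryGetValue(declaration, kUTTypeTagSpecificationKey);
            if (tags != NULL) {
                // Either a single CFString or a CFArray of extensions.
                CFTypeRef extensions =
                    CFDictionaryGetValue(tags, kUTTagClassFilenameExtension);
                CFShow(uti);
                if (extensions != NULL)
                    CFShow(extensions);
            }
            CFRelease(declaration);
        }
        CFRelease(types);
    }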

So let me give a brief demo of what we've talked about so far, using a sample app we developed for this presentation called ImageApp. We'll be making extensive references to this sample code throughout the presentation today. It's available for download and provides a great reference for how to handle images and color management correctly in your application. So let me switch over to the demo machine.

And let me just bring up the code real quickly. Again, typically an application would specify the list of types in its Info.plist. In this case, in addition to doing that, we also subclass NSDocument to specify the types this document class can handle. And here, the readable types are very simply the result of calling CGImageSourceCopyTypeIdentifiers. As this array grows in the future, this document class will automatically be able to support the new types.

The other subclass I make is a subclass of NSDocumentController, which specifies what the file extensions are for a type. And again, since the type in this case is a UTI, we can use UTTypeCopyDeclaration to get a dictionary which contains the array of suffixes for each type.

Let me just give an example of how this application works. Which is over here. If we go to File, Open, what you'll notice is that all of these types here, JPEG, CRW, NEF files, are all selectable. And that's because we've declared that this class supports all the types that are supported by ImageIO.

So let me open up this one image here. And let me just open up another image here. We'll talk more about displaying images later; what I want to show right now is an inspector window that we've implemented here. This inspector window is showing a couple of key pieces of information.

It's showing the thumbnail for the image, and it's also showing the property dictionary that's returned from ImageIO. As you can see from this list, the property dictionary is actually hierarchical in nature. There are several key values at the root level that are common to all file formats, such as the color model, the depth, the height, and the width.

If the DPI is specified, it's also present at this root level. However, the really interesting metadata lives in these sub-dictionaries. There's a sub-dictionary for the TIFF properties that are present, the EXIF properties, and also some maker note information that we're able to extract from this image. And of course other attributes such as the file UTI, the file size, and the path are also displayed.

Again, we have code that shows you how to do all of this. The key function is in our info panel code. We have a setURL call, which calls CGImageSourceCreateWithURL, calls CGImageSourceCopyPropertiesAtIndex, also calls ImageIO to create the thumbnail, and then starts filling in the UI based on the results. So that's all I want to show at this point. The next thing we'll be talking about is displaying images, so I'm going to pass the microphone over to Luke Wallis, who will be talking about that and even more.

Thank you very much, David. Can I get back the slides, please? David, do you have the controller? Okay. Don't tell. Sorry about that.

[Transcript missing]

Let's start with some general steps which have to be taken when we are displaying an image. Obviously, the first thing that has to happen is color conversion: image data has to be converted from its original color space to the display color space.

Next, we need to handle the differences in bit depth between the image and our display. We need to take care of geometry mapping and things like interpolation so that the image looks correct on the display. And if our image contains some transparency information, for example an alpha channel, we need to composite this image with transparency on the display.

And when the user changes the viewing conditions, for example resizes the window or moves it from one display to another, most of these steps have to be repeated. But I have very good news for all of you: all of this functionality can be handled with one call, CGContextDrawImage.

CGContextDrawImage belongs to the Quartz API, and it takes three parameters. The first is the context, which describes our drawing destination. The next one is a rectangle, which defines the location and size of the image in user space. And the third parameter is the image itself.

The graphics context is a fundamental concept in Quartz and represents a drawing destination. It contains all drawing parameters and all device-specific information, for example an ICC profile for our window context.

Another fundamental concept for drawing images in Quartz is Quartz coordinates. There are two separate coordinate systems in Quartz: one is user space and the other is device space. The transformation from user space to device space is done through the so-called current transformation matrix, which is also part of our destination context. The CTM, as we call it for short, provides options for scaling, rotating, and translating objects into the destination context. But in our ImageApp, we are using it to properly fit the image into our window.

Here is what we do to display an image in our ImageApp. First, we need to find the context corresponding to our view; as you see, we can do that with the appropriate Cocoa API call. Then we define the image geometry. Next, we need to figure out the transformation for placing the image correctly in our view. We concatenate this transformation with the current transformation in the context, and we execute CGContextDrawImage. And this is all.
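
Put together, the drawing method might look roughly like this sketch; the ImageView class, its _image variable, and the simple aspect-fit scaling are illustrative, not the actual ImageApp code:

    #import <Cocoa/Cocoa.h>

    // Sketch: a view that draws a CGImage. The class and its _image
    // variable are illustrative, and the scaling is a simple aspect fit.
    @interface ImageView : NSView { CGImageRef _image; }
    @end

    @implementation ImageView
    - (void)drawRect:(NSRect)rect
    {
        // Find the Quartz context corresponding to our view.
        CGContextRef context =
            (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

        // Define the image geometry.
        float w = CGImageGetWidth(_image);
        float h = CGImageGetHeight(_image);

        // Figure out a transformation that fits the image into the view,
        // and concatenate it with the current transformation matrix.
        NSRect bounds = [self bounds];
        float scale = MIN(bounds.size.width / w, bounds.size.height / h);
        CGContextSaveGState(context);
        CGContextScaleCTM(context, scale, scale);

        // One call handles color matching, depth conversion,
        // interpolation, and compositing.
        CGContextDrawImage(context, CGRectMake(0, 0, w, h), _image);
        CGContextRestoreGState(context);
    }
    @end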

That's all we have to do to correctly display color-managed images created by ImageIO. However, there are two more details I would like to bring to your attention. The first one is image orientation. Many image file formats support image orientation through a special tag in the metadata. So we need to check the metadata, as David showed you in one of the code examples, and make the proper adjustments.

Another very similar problem is image resolution. Many image file formats support what we call asymmetric resolution, which means the vertical resolution may differ from the horizontal. We also need to take that into account so the image has the proper aspect ratio when displayed. And all of that can be done very easily in Quartz because of the transforms we use for translating the image from user space to device space. There is sample code in our ImageApp showing how we take the image resolution and orientation into account when we calculate our transform; a rough sketch of that calculation follows.
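
As one way such a calculation might look, here is a sketch handling the asymmetric-resolution case and a few of the eight EXIF orientation values; a complete implementation would cover all eight, and the function name is illustrative:

    #include <ApplicationServices/ApplicationServices.h>
    #include <math.h>

    // Sketch: a transform compensating for a few common EXIF orientations
    // (1, 3, 6, 8) and for asymmetric resolution. width/height are the
    // stored pixel dimensions of the image.
    static CGAffineTransform TransformForImage(int orientation,
                                               double dpiWidth, double dpiHeight,
                                               double width, double height)
    {
        CGAffineTransform t = CGAffineTransformIdentity;

        switch (orientation) {
        case 3:  // rotated 180 degrees
            t = CGAffineTransformMakeTranslation(width, height);
            t = CGAffineTransformRotate(t, M_PI);
            break;
        case 6:  // rotate 90 degrees clockwise to display upright
            t = CGAffineTransformMakeTranslation(0, width);
            t = CGAffineTransformRotate(t, -M_PI / 2);
            break;
        case 8:  // rotate 90 degrees counterclockwise to display upright
            t = CGAffineTransformMakeTranslation(height, 0);
            t = CGAffineTransformRotate(t, M_PI / 2);
            break;
        default: // case 1: already upright
            break;
        }

        // Asymmetric resolution: stretch the vertical axis so the
        // displayed aspect ratio matches the physical one.
        if (dpiWidth > 0 && dpiHeight > 0 && dpiWidth != dpiHeight)
            t = CGAffineTransformConcat(t,
                    CGAffineTransformMakeScale(1.0, dpiWidth / dpiHeight));

        return t;
    }

The result would be concatenated into the destination context with CGContextConcatCTM before calling CGContextDrawImage.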

I'd like to talk now about some, let's call them advanced, techniques related to drawing images in Quartz. The first one is extracting the data from an image in a requested format. This example shows how to extract ARGB data from any CGImage. First, we need to find out the size of the image. We have to pre-allocate a buffer for the image data to be returned.

We decide which color space we want the image data converted to, and we create a CGBitmapContext using our pre-allocated buffer, our color space, and the proper alpha layout. The next thing we do is draw our image into that context, and then we can return the flattened image data.
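
A sketch of that extraction, assuming 8 bits per component and Generic RGB as the chosen destination space; the function name is illustrative:

    #include <ApplicationServices/ApplicationServices.h>
    #include <stdlib.h>

    // Sketch: flatten any CGImage into premultiplied ARGB, 8 bits per
    // component, in a color space of our choosing. Caller frees the buffer.
    static unsigned char *CreateARGBData(CGImageRef image)
    {
        size_t width  = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);
        size_t bytesPerRow = width * 4;

        unsigned char *data = calloc(height, bytesPerRow);
        if (data == NULL)
            return NULL;

        // The color space we want the data converted to (Generic RGB
        // here, purely as an example).
        CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

        // A bitmap context wrapping our pre-allocated buffer, alpha first.
        CGContextRef bitmap = CGBitmapContextCreate(data, width, height, 8,
                                  bytesPerRow, space,
                                  kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(space);
        if (bitmap == NULL) {
            free(data);
            return NULL;
        }

        // Drawing performs the color conversion and flattening for us.
        CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), image);
        CGContextRelease(bitmap);
        return data;
    }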

The next topic I would like to talk about is assigning default profiles to images that use device color spaces. The first thing we need to do is find out whether the image color space is indeed one of the device types. The way we can do that in Tiger is to use the Core Foundation call CFEqual to compare color spaces. We can do that because CGColorSpaces are, as we call them, runtime Core Foundation types.

And as such, they have to provide a method, used by the internals of CFEqual, which compares the color spaces, not just checking whether they are the same pointer, but comparing all the details depending on the color space type. In the case of ICC-based color spaces, it will actually go all the way to comparing unique profile identifiers, the MD5 signatures. And once I find that my image indeed has a device color space, I can select the proper default and call the CG API CGImageCreateCopyWithColorSpace.
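
A small sketch of that check and retag, assuming device RGB as the case being detected and Generic RGB as the chosen default:

    #include <ApplicationServices/ApplicationServices.h>

    // Sketch: if an image is tagged with device RGB, retag it with a
    // calibrated default (Generic RGB here, purely as an example).
    static CGImageRef CreateTaggedImage(CGImageRef image)
    {
        CGColorSpaceRef imageSpace = CGImageGetColorSpace(image);
        CGColorSpaceRef deviceRGB  = CGColorSpaceCreateDeviceRGB();
        CGImageRef result;

        // CFEqual compares the spaces in depth, down to profile MD5
        // signatures for ICC-based spaces.
        if (CFEqual(imageSpace, deviceRGB)) {
            CGColorSpaceRef defaultSpace =
                CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
            result = CGImageCreateCopyWithColorSpace(image, defaultSpace);
            CGColorSpaceRelease(defaultSpace);
        } else {
            result = CGImageRetain(image);
        }

        CGColorSpaceRelease(deviceRGB);
        return result;
    }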

We use this code in our ImageApp, as I mentioned before. Another subject is how to find the ColorSync profile for an NSWindow. Here is the sample code which shows you how to do it. We start by finding the display ID in the device description dictionary of the screen object associated with our window.

Once we have this number, we call the traditional ColorSync API CMGetProfileByAVID, and there we go, we have the display profile. Very often, for drawing with Quartz, we may actually need a color space instead of a ColorSync profile. Very easy to do: just call CGColorSpaceCreateWithPlatformColorSpace using our display profile.
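
Put together, the lookup might look roughly like this sketch; CreateColorSpaceForWindow is an illustrative name:

    #import <Cocoa/Cocoa.h>
    #import <ApplicationServices/ApplicationServices.h>

    // Sketch: find the ColorSync profile, and from it a CGColorSpace,
    // for the display an NSWindow currently lives on.
    static CGColorSpaceRef CreateColorSpaceForWindow(NSWindow *window)
    {
        // The display ID sits in the device description of the window's screen.
        NSNumber *screenID = [[[window screen] deviceDescription]
                                  objectForKey:@"NSScreenNumber"];

        CMProfileRef profile = NULL;
        if (CMGetProfileByAVID((CMDisplayIDType)[screenID unsignedIntValue],
                               &profile) != noErr || profile == NULL)
            return NULL;

        // For drawing with Quartz we usually want the color space.
        CGColorSpaceRef space = CGColorSpaceCreateWithPlatformColorSpace(profile);
        CMCloseProfile(profile);
        return space;
    }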

Another subject I'd like to talk about is color matching across multiple displays. In Tiger, this is all very easy. We added a new attribute to NSWindow, which can be set by calling the API I'm showing here, setDisplaysWhenScreenProfileChanges. Once this attribute is set and the window is moved from one screen to another, or the underlying profile changes, the system will purge all the caches associated with the window and replace the profile in the underlying context. After that, the system will ask the application to draw the content one more time, and it will obviously be rematched to the new display profile.
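
A sketch of opting in, together with the notification registration described next; MyDocument and the screenProfileChanged: selector are illustrative names:

    #import <Cocoa/Cocoa.h>

    // Sketch: opt a window in to automatic re-matching, and register for
    // the notification so cached, display-dependent state can be rebuilt.
    @interface MyDocument : NSDocument
    - (void)screenProfileChanged:(NSNotification *)note;
    @end

    @implementation MyDocument
    - (void)windowControllerDidLoadNib:(NSWindowController *)controller
    {
        [super windowControllerDidLoadNib:controller];

        NSWindow *window = [controller window];
        [window setDisplaysWhenScreenProfileChanges:YES];

        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(screenProfileChanged:)
                   name:NSWindowDidChangeScreenProfileNotification
                 object:window];
    }

    - (void)screenProfileChanged:(NSNotification *)note
    {
        // Rebuild anything derived from the old display profile here;
        // the system then asks the window to redraw.
    }
    @end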

And if the application needs to be notified about this event, it can register for NSWindowDidChangeScreenProfileNotification and then execute a proper callback making any necessary adjustments. Now, let me go back to the demo machine and show you our ImageApp one more time. David, do you want me to save all? All right.

There's our ImageApp. We have prepared a special file that I'm now going to open with our ImageApp. Well, all looks good. It's hard to say if there was anything special about this image. One way to find out is to go to the info panel, which is constructed using the metadata provided by ImageIO, and look there.

Well, we see there is an orientation tag and there is also some profile, so we're assuming these are important for displaying the image. And in order to better illustrate the need for handling these two things, we also wrote what we call the bad ImageApp, which uses some old legacy components for opening and displaying images. So if I open exactly the same file in the bad ImageApp, you see that the image obviously looks very wrong. It's first of all flipped around, and all the colors are wrong.

So that's because the old-style components used for displaying images were not properly using the information contained in the image file. Speaking of testing applications with different files: in addition to the ImageApp that you can download, there is also a directory of test files, and I would like to show you an example of such a file. If I open this image, the text says that the embedded test profile is used and is not double matched, which is exactly what we want.

If I open exactly the same image with my bad ImageApp, the text changes and says the embedded test profile is not used; obviously it was not used and couldn't be double matched. In addition to those test files dealing with profiles, there are also files that are good for testing orientation.

If I open this particular image with the bad ImageApp, the image comes out all rotated. And if I go back to my ImageApp and I open the same file...

[Transcript missing]

I can change the image saturation. And I can also match my image to a selected profile.

And obviously there are different ways this can be handled. One way would be to extract the image data from the original image, apply exposure, then apply saturation, convert the data to the destination profile, create a new image, and then go through the path of displaying it correctly. But in Tiger, we can do it a completely different way. And this brings me to another subject I would like to talk about.

which is filtering images with Core Image. Core Image, as many of you may already know, is a new and exciting image processing package added in Tiger. One of the main features of Core Image is that it takes advantage of programmable graphics hardware whenever possible, and this way it provides near real-time performance. In addition, Core Image comes with many pre-built filters, which are very easy to set up and apply.

And if you want to create your own custom filters, Core Image also provides an API for that. Excuse me, wrong way. Drawing images with Core Image is essentially the same as with Quartz, except Core Image uses its own types for representing things like images and contexts. In a typical scenario, we create a CIImage from a CGImage, we set up one or more filters we'd like to apply to the image, we create a CIContext, and we draw the result.

That's all we need to do. And this is what we have done in our ImageApp to draw a filtered image to a window. In the first step, we create a CIImage from our CGImage. We apply the exposure filter. Then we apply a saturation filter. And here is the nice part about what we can do with Core Image.

Instead of converting the image into our destination profile, we create a so-called soft proof using the standard color cube filter. We populate this filter with a sampled transformation from the Core Image working space through our destination profile, and we use it as just another filter when drawing. Then we draw to a CIContext created from the window context. And this is the sample code you can find in the ImageApp doing exactly what I described.

We create a CIImage from the CGImage, take care of the geometry of the image, and apply the exposure and saturation filters. Then we apply our color cube filter that we built from our destination profile. Then we define the destination; that's a slight difference between Core Image and Quartz: we define the destination rectangle in our destination context. We create a CIContext from our window context, and we draw the image.
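
In compilable form, that pipeline, minus the color cube soft-proof step, might look roughly like this sketch; DrawFiltered and its parameter values are illustrative, not the actual ImageApp code:

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // Sketch: wrap the CGImage, chain exposure and saturation filters,
    // then draw through a CIContext built on the window's Quartz context.
    static void DrawFiltered(CGImageRef cgImage, CGContextRef windowContext,
                             CGRect destRect, float ev, float saturation)
    {
        CIImage *image = [CIImage imageWithCGImage:cgImage];

        CIFilter *exposure = [CIFilter filterWithName:@"CIExposureAdjust"];
        [exposure setDefaults];
        [exposure setValue:image forKey:@"inputImage"];
        [exposure setValue:[NSNumber numberWithFloat:ev] forKey:@"inputEV"];

        CIFilter *controls = [CIFilter filterWithName:@"CIColorControls"];
        [controls setDefaults];
        [controls setValue:[exposure valueForKey:@"outputImage"]
                    forKey:@"inputImage"];
        [controls setValue:[NSNumber numberWithFloat:saturation]
                    forKey:@"inputSaturation"];

        CIImage *result = [controls valueForKey:@"outputImage"];

        // The CIContext inherits the destination's color information
        // from the window context it wraps.
        CIContext *ciContext = [CIContext contextWithCGContext:windowContext
                                                       options:nil];
        [ciContext drawImage:result inRect:destRect fromRect:[result extent]];
    }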

That path works very well when we're displaying, showing what we want in terms of color conversions and applied filters, but it wouldn't work exactly as we want if we were to create a CGImage for printing or saving. For that reason, we slightly modify our small image processing pipeline.

We initially go through the same steps of creating the CIImage from the CGImage, and then we apply the exposure and saturation filters. But here, instead of using the color cube, we create a bitmap context with our selected profile. Then we create a CIContext from our CGBitmapContext and draw to it.

And after that, we use the Quartz API to create a CGImage out of our bitmap context. Here's how it looks in the real code: a CIImage created from the CGImage, apply the exposure and saturation filters, create a bitmap context, create a CIContext from that bitmap context, draw the image, and then CGBitmapContextCreateImage gives us our new image. Amen.
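
A rough sketch of that modified pipeline; the function name and the 8-bit ARGB layout are illustrative choices:

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // Sketch: render a filtered CIImage into a bitmap context tagged with
    // the selected profile's color space, then mint a CGImage suitable
    // for printing or saving.
    static CGImageRef CreateCGImageFromCIImage(CIImage *image,
                                               CGColorSpaceRef selectedSpace)
    {
        CGRect extent = [image extent];
        size_t width  = (size_t)extent.size.width;
        size_t height = (size_t)extent.size.height;

        // Bitmap context in our selected color space; CG allocates the
        // backing store when the data pointer is NULL.
        CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, 8,
                                  width * 4, selectedSpace,
                                  kCGImageAlphaPremultipliedFirst);
        if (bitmap == NULL)
            return NULL;

        CIContext *ciContext = [CIContext contextWithCGContext:bitmap options:nil];
        [ciContext drawImage:image
                      inRect:CGRectMake(0, 0, width, height)
                    fromRect:extent];

        // New in Tiger: capture the bitmap context's pixels as a CGImage.
        CGImageRef result = CGBitmapContextCreateImage(bitmap);
        CGContextRelease(bitmap);
        return result;
    }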

And this brings me to the next topic I would like to talk about, which is printing images. I would like to start with an architectural overview of printing, and specifically talk about printing as a part of what we call color communication. Then I would like to touch on the subject of application versus driver color management in printing. And I would like to conclude with what is needed in the application for drawing properly color-managed images. In Mac OS X, as most of you know, printing consists of two components. The first one is called the printing front-end, and this is a library that applications link against.

The printing front-end is responsible for creating a spool file, which is passed to the second component, the printing back-end. The printing back-end is a separate process and is responsible for creating the proper data for the printer. In the case of a raster printer, the printing back-end will rasterize the spool file, and in the case of a PostScript printer, it will generate a PostScript job. One of the things the printing back-end is responsible for is color conversion. The printing back-end will convert all the colors in the spool file into the profile that is maintained in the ColorSync device integration database.

This is a very important point, which touches on color communication. Printer profiles are not only used for color matching by the printing back-end. I mention this because it has happened quite a few times that printer driver developers think that, once the printing back-end provides the data matched to the printer profile, they can basically take it over and do whatever they want with it. No, that would break what we call color communication.

Remember, the end user has access to the ColorSync device integration database through ColorSync Utility, where they can see all the registered profiles and can evaluate a device, without actually printing images, by looking at its ColorSync profile. This is the place where the user can assign custom profiles to devices, and obviously the expectation is that these profiles will be respected.

Another example of color communication based on ColorSync profiles is soft proofing, which I hope you're very familiar with. If I have a profile for my printer and I know it is going to be used during printing, I can evaluate the capabilities of my device in a soft proof. Color communication has specific requirements to work. The first is that the application generates calibrated color data. The second is that we know the profile for the current print mode. And obviously, we need to know that the driver is not going to make any additional color adjustments.

You've already seen one of the benefits, soft proofing, but this color communication through ICC, or ColorSync, profiles also allows a savvy application to produce output-specific color. I would like to give you an example here: PDF/X. I don't know how many of you are familiar with PDF/X, but PDF/X is an international standard for data exchange used in the publishing industry.

One of the requirements of PDF/X is embedding a profile for the intended output device, which is stored in PDF/X's so-called output intent.

When it comes to the printing architecture, we have, as we call them, two separate modes that deal with color management in printing. The first one we call, internally, application; many printer driver developers call it ColorSync. It's the one I was just talking about: the application creates the color, and the user expects that the matching will happen to the selected printer profile.

We are assuming that in this mode, the driver is not going to make any color adjustments. But we have another mode of controlling color in printing, called driver mode. This is a legacy mode from the days when there was no color management in printing and the only color space available was device RGB. Obviously, at that point the driver had to make some decisions about how to put ink on paper to represent the color.

We keep this mode for legacy reasons. The big difference between the first mode and the second is that in this case, printing is no longer part of color communication, simply because there is no way to communicate back what is going to happen to the image we are printing.

I would like to give you an example of what an application has to do to make sure that color is spooled with the intended color spaces. By intended color space, I mean either the color space of the image as created by ImageIO, when the application didn't make any color adjustments, or the user-selected color space that your application let the user convert the images to. The essence of this simple code is to create your own view, which will be called to spool the data for printing; a rough sketch follows.
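
PrintImageView is an illustrative name, and the image is assumed to be tagged with the intended color space already:

    #import <Cocoa/Cocoa.h>

    // Sketch: a print-only view; when the print loop asks it to draw,
    // it simply draws the image, already tagged with the intended color
    // space, so the spool file carries calibrated color.
    @interface PrintImageView : NSView { CGImageRef _image; }
    @end

    @implementation PrintImageView
    - (void)drawRect:(NSRect)rect
    {
        CGContextRef context =
            (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

        // The printing back-end matches from the image's color space to
        // the printer profile; the driver is expected not to re-adjust.
        NSRect b = [self bounds];
        CGContextDrawImage(context,
                           CGRectMake(0, 0, b.size.width, b.size.height),
                           _image);
    }
    @end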

When this view is called to draw, the only thing we need to do is draw the image with our intended color space, very much the same way we do for properly displaying images. And that's all I wanted to talk about today, so I'd like to thank you for your attention, and I return the stage back to David.

Thank you, Luke. We've already done a lot of work today, and as you know, it's always good when you've done a lot of work to save it when you're done. So the last thing we're going to talk about is saving images using ImageIO. What's involved in saving an image? Well, the application saving an image needs to provide lots of information: for example, the image itself, the metadata for the image, and also several options, such as the compression type to be used when saving.

Some of those options might be file-format specific. All this information is passed to some code that's responsible for compressing the pixel data, embedding the appropriate color space into the file, and also embedding the metadata in the file. The end result is data produced either as a file on disk or as data in memory.

So how is this done with the ImageIO API? Well, it's very simple; it's just a few calls that you need to make. First, there's a new data type called CGImageDestination. You can create an image destination by calling CGImageDestinationCreateWithURL. You may also create one with a mutable CFData or a CGDataConsumer.

At the time you create the destination, you also need to specify two other things. You need to specify the type identifier to indicate what the file format of the file will be, and you need to declare the number of images that will be in the resulting file.

Once you have an image destination, then you can set the properties for the file as a whole by calling CGImageDestinationSetProperties. This is only needed in certain special cases. The key thing that you need to do is to add the image and options and metadata by calling CGImageDestinationAddImage. Lastly, you call CGImageDestinationFinalize.

Here's an example of all the code you need to write a JPEG. We start with a function where we pass in the URL we'll be writing to, a CGImageRef, and one key piece of metadata, the DPI. We create the image destination by calling CGImageDestinationCreateWithURL, and we specify the URL and that we want to create a JPEG file. We also specify that there will be one image, which is all that JPEG supports, so that's good.

Then we create a dictionary, and this dictionary serves two purposes when writing. It's used to specify metadata, and it's also used to specify options needed when writing the image, such as the compression type or compression quality. Here, since it's a JPEG, we want to specify the compression quality. We do that by adding a key to the dictionary, kCGImageDestinationLossyCompressionQuality. It's a bit of a mouthful, but the value we put in there is a value between 0 and 1. So we're going to do that.

We also specify in this short example the metadata that we want to write, which is the DPI. We specify that by adding the keys kCGImagePropertyDPIWidth and kCGImagePropertyDPIHeight. Now, we could have added a lot of other options to the dictionary at the same time. We could have added EXIF tags and TIFF tags, and all of these would be written as well. Once we have the image and this properties-and-options dictionary, we call CGImageDestinationAddImage, and then we call CGImageDestinationFinalize. There's a lot of great stuff that happens behind the scenes here. For example, suppose you specify an orientation when you write.

Well, some formats support orientation, other ones don't. ImageIO will automatically take care of that for you. If the image file format doesn't support orientation, it will pre-rotate the image. The other thing is not all file formats support all color spaces. So if the image that's passed in happened to be a floating point image and you're saving it as a JPEG, that needs to be converted to 8-bit data. ImageIO will take care of that for you. A lot of this stuff is handled automatically by ImageIO. This is a great, very easy to use API.
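
Pulled together, the JPEG-writing path above might look roughly like this sketch; WriteJPEG and its parameters are illustrative, not the actual sample code:

    #include <ApplicationServices/ApplicationServices.h>
    #include <stdbool.h>

    // Sketch: write one image as a JPEG with a lossy compression quality
    // and DPI metadata, as in the walkthrough above.
    static bool WriteJPEG(CFURLRef url, CGImageRef image,
                          float dpi, float quality)
    {
        CGImageDestinationRef dest =
            CGImageDestinationCreateWithURL(url, CFSTR("public.jpeg"), 1, NULL);
        if (dest == NULL)
            return false;

        CFNumberRef qualityRef = CFNumberCreate(NULL, kCFNumberFloatType, &quality);
        CFNumberRef dpiRef     = CFNumberCreate(NULL, kCFNumberFloatType, &dpi);

        // One dictionary carries both write options and metadata.
        const void *keys[]   = { kCGImageDestinationLossyCompressionQuality,
                                 kCGImagePropertyDPIWidth,
                                 kCGImagePropertyDPIHeight };
        const void *values[] = { qualityRef, dpiRef, dpiRef };
        CFDictionaryRef props = CFDictionaryCreate(NULL, keys, values, 3,
                                    &kCFTypeDictionaryKeyCallBacks,
                                    &kCFTypeDictionaryValueCallBacks);

        CGImageDestinationAddImage(dest, image, props);
        bool ok = CGImageDestinationFinalize(dest);

        CFRelease(props);
        CFRelease(qualityRef);
        CFRelease(dpiRef);
        CFRelease(dest);
        return ok;
    }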

That said, there are a few advanced areas I'd like to cover when saving images. One has to do with saving floating point images, and the other is adding some UI to your application for saving images. With floating point images, it's pretty easy, to be honest. All you need to do is provide the image destination a CGImage with floating point pixels.

However, you should be aware that there are currently only two formats supported by ImageIO that support writing floating point data: IEEE Float TIFF and OpenEXR. If you are writing to one of those formats, your floating point data will be passed as-is to that file format and written to disk. In the cases where the file format doesn't support floating point data, ImageIO will take care of converting it to 8- or 16-bit data as appropriate.

The other area I want to talk about when saving images is how you can add some helpful UI for your users when they're saving an image. The general idea is that you often want to preserve the original image's metadata as much as possible.

So one thing I recommend is that when you open the image, you get the image metadata from ImageIO and keep track of it for later use. Then, when it comes time to save the image, you'll modify it slightly, but for the most part you'll pass that metadata dictionary through to the resulting file.

What you then do in your user interface is build a pop-up of formats using CGImageDestinationCopyTypeIdentifiers. This API returns the list of all writable file formats supported by ImageIO. Then, as the user interacts with the save panel, when they change the file format or other attributes, you modify the options and metadata dictionary accordingly. And when you're done, you write the file to disk. So let me switch to the demo machine here and show you an example of how that works in practice. Can we switch to the demo, please? Thanks.

So let me open up an image here. Again, I'm going to open up this Canon RAW image. And what we have is a format menu with a list of all the formats that ImageIO supports writing. In this case, we're going to save it as a JPEG, and we'll have a quality slider. Now, before I finish, let me go back to the code and show you how this works in the project.

First of all, in my nib, I've got a view that I'm going to be adding to the save panel. It has a format pop-up that's initially empty in the nib, and it also has this tab view below it. That tab view has three panes in it. One is an empty pane; another is a pane for specifying compression.

And one thing you'll notice is that this has three different compression types. If I bring up the info window, you'll notice that the LZW menu item has the value 5 associated with it, and PackBits has the value 32773. These are the magic numbers that TIFF needs in order to specify no compression, LZW compression, or PackBits compression. The other pane in this view is the quality pane, which is simply a slider that goes from zero to one.

So given this nib, what we do is, in our NSDocument class, there is a method called prepareSavePanel. This is a method you can override if you want to add a view to the save panel, and this is how we do it right here. One of the things you'll see me doing here is building a list of menu items for the format pop-up and setting the format to the appropriate type. One thing I'm being careful about is that the file I opened may not be writable, so I handle that case and fall back to TIFF if appropriate.

Then we set up some initial defaults like a quality value of .85 and compression none. Then what happens is as the user interacts, they'll do things like select a different file format. In that case, we'll tell the save panel what the new list of extensions that are appropriate for the selected file format are.

In addition, when the compression pop-up is hit, we get the tag associated with that menu item, which has those magic numbers in it, 1, 5, and 32773, and add it to the options and property dictionary whenever the user selects it. The end result is that when it comes time to save, we write the image to disk and specify those options and properties.

So let me just go back to the demo, and we're down here, we'll go and specify JPEG, change the quality, make a little higher quality, and hit save. Now before I hit save, you'll see the info here is all the metadata for the original Canon RAW file. When I hit save, it'll write it to disk, the window will update.

And you'll see even this new file now has preserved all the original EXIF and TIFF data. And also some of the new attributes, such as the JPEG-specific attributes, are present. So I actually think this is a great functionality. This is something before that was dedicated only to special purpose applications. But now this humble piece of sample code can support a wide variety of esoteric RAW camera file formats and save them as JPEGs. So that's the last... Let me go back to the main slides.

So that's the conclusion of our discussion today. I just want to make a reference to the developer documentation that's available; we have some great documentation for CGImageSource and CGImageDestination. I also want to make a plug for a book that will be available soon, which talks about programming with Quartz and also has several discussions of how to use ImageIO and color management correctly. And lastly, there are several other presentations that, if you haven't been to them already, you should check out. One in particular is the presentation that follows this one, Session 210, Essential Ingredients to Mac OS X Imaging Solutions. It's also a great show.