Graphics, Media, and Games • iOS, OS X • 58:57
OS X uses advanced standards-based color management techniques to ensure that images, graphics, and video always look great on screen and on paper. See how ColorSync, Quartz, and AV Foundation can automatically color match digital media in your application. Learn color management best practices.
Speakers: Ken Greenebaum, Luke Wallis
Transcript
Hi, good morning, everybody. Thanks for coming out early in the morning to talk with us about color. I'm Ken Greenebaum. This is Luke Wallis offstage, and we're going to talk about what you folks, as developers, have to do to manage color for your applications, both on the Mac OS X desktop as well as on iOS for mobile devices.
So first, an introduction to color management. Perhaps one of the big takeaway items is that we at Apple color manage almost all content that you see on the display. It may be video, it may be graphics, it may be animation, it may be still image. So we're going to talk about what that really means to your applications.
And specifically, we're going to talk today about what you folks have to do in your applications, whether you're using high-level frameworks or low-level frameworks where you have more responsibility. And we're going to talk about what you have to do to author your content in such a way that it works correctly with color management and, most importantly, looks correct for your users.
Finally, and maybe most important, we're going to tell you how to handle the subtle aspects of verification: making sure that color management is enabled for your applications and that your media and applications look correct. I'll be talking about a lot of the high-level topics. Luke is going to follow up with some of the details.
So as I mentioned, Apple color manages almost all the content that you see on the display. This is incredible because it provides high-quality, consistent results, and that's exactly what we want to produce. And it produces those results across all the different environments users may be in and all the different devices they may be running your applications on.
What we're trying to do is preserve the author's intent, and we're going to talk more about that, but that's really the key to this whole process. This is not just for pros. It's not a professional feature. While it's very nice for authoring and proofing, it also provides a tremendous amount of value for those people using your apps and consuming content as well.
So while Apple manages almost everything on the display, that's not true at all for the rest of the industry. Some applications, most notably drawing apps, maybe photo apps, they provide color management, but it's not consistent. There is a form of color management that's used in the broadcast TV industry. Basically, they author for a certain standard, and then they expect receivers to honor that standard. We'll talk a little bit about that later.
So as developers, it's important for you to understand the philosophy behind the color management; then you folks can do the right things, understand what tradeoffs you're making, and know how to set up your own applications. So first: the creation of film, video, images, and other forms of media is a creative endeavor.
A specific implication of that is that cameras are not scientific instruments. They're not colorimeters, they're not spectrophotometers, and they're not supposed to be. For those who are familiar with color management, that means our color management is not what is known as scene-referred. We're not trying to capture and reproduce reality, the reality of the scene that was actually photographed.
Rather, what we're trying to do is reproduce this author's intent that I've mentioned before. By that I mean, what is proofed? What did the author see on their own display or output device? So that's a form of color management that, generically, is called output-referred. If they're proofing on a display, it may be called display-referred.
So because of all the different devices and environments, there has to be some kind of active signal processing performed that makes the content look as close as possible to what the author intended. We call that operation a color match. It may involve gamma conversion and other processes that Luke will describe in a moment.
So what do we mean by a creative endeavor? Here we are in paradise. Well, it's paradise until the cameras come out. Then it begins looking like work, and we have to worry about how bright the environment is. We have the cinematographer come out, and the cinematographer begins twisting the knobs, basically changing things from reality to some form of heightened reality. And this is the creative thing that they're trying to capture. Now, what the camera records isn't then distributed to other folks as the finished product. Rather, it gets proofed, and that's what we call the authoring intent.
So that result gets viewed, if we're talking about broadcast video, on a broadcast monitor in a very specific environment. For broadcast video, it happens to be a 16 lux D65 studio environment, and there's a director looking at it. And that director is making further tweaks until it matches the director's idea of what the finished product should be like.
And then for video, only then does it get distributed. In this case, it's only intended to be looked at on a TV. And that TV is only intended to be in a specific environment. According to the standards, it's basically this 1950s-era living room. And in those days, lighting was dimmer, so it was supposed to be a 16 lux dim surround environment.
Now, modern usage is very different from that. We're not looking at our modern workstations or other devices in our living rooms, our dim living rooms. We have big, beautiful, bright displays, so we have to do something else. And there are even more environments. We have mobile devices that you may look at outdoors in sunshine.
You may be looking at our devices in dark surround, in a theater kind of environment. So clearly there has to be something that makes that content look appropriate on these different devices in different environments. And that's where color management comes in. Now, I have a lot of people who come to me thinking that color management is really this pro feature, that it's just about getting very subtle hues looking exactly right, things that general users wouldn't be aware of or don't care about. So I like to show them this.
This is a photo of my daughter. The photo was taken in ProPhoto RGB, which is a professional color space. It's very wide in terms of gamut; Luke will describe in a moment what that means. But suffice it to say that it can capture very saturated colors. It captures a lot of color. We're rendering it in Preview. Preview is color managed.
And I think even on this large projector, you can see that her outfit is pretty saturated. You can see that there are saturated colors on her mat and her ball. Now, if you were to look at this photograph in another application that isn't color managed, it may look like this. Here we go.
And here you can see it's clearly different. And I'd say not only is it different, but it's wrong. Her outfit is very dim and washed out. Her face doesn't look like a healthy baby's at all. Even her ball: half the ball is supposed to be this really vibrant red, and it looks like a dim orange in the other case. So without color management, things are just wrong. So I'm going to invite Luke up to the stage, and he's going to talk about controlling color management using ICC profiles. Thank you, Ken.
So a moment ago, Ken gave you an example of a color workflow. Color data was acquired by the camera, viewed and manipulated on the display, and then distributed to the user. Here's another very similar workflow, which most Mac users find themselves in. Most of our users have digital cameras, very nice Apple displays, and a printer.
And a typical workflow for them is to acquire the images from the camera, do some elementary or even more advanced adjustments to their pictures, and print them. And there is one common theme between both of those workflows, one very professional, one just a user of your application: they both want a consistent color appearance across devices.
So this is where I would like to introduce the definition of color management. One of those definitions says that color management, as you can see, is a controlled conversion between different color representations. And this very simple scenario can, in the modern world, very quickly get complicated as many other things are added to it. For example, the user has access to image archives that he wants to combine with the images he just acquired from the camera. The user has access to the web, which is at this point an almost infinite source of color media.
The user not only wants to view it on his own specific workstation or computer, but also cares about presenting it on other computers, like powerful workstations or a laptop. We have portable devices like iPhone or iPad; color is also very important there. And in addition, we can add video to the mix, which has slightly different color management requirements, but our goal is to make it uniform and the same for all.
So when we want to talk about reproducing color on different devices, we need a way of characterizing the color capabilities of each of those devices. And Ken already mentioned the term we use to describe those capabilities: the device gamut.
[Transcript missing]
Obviously, I cannot do miracles. My device, my laptop, cannot physically reproduce it. I have to do something with it.
In the case of the printer, I just hit the edge of the gamut, so I can reproduce it a little bit better than on the display. In the case of the display, I have to, let's say, move this color to the closest color in my laptop display's gamut. And this process is what we call gamut mapping.
Very often, people think that gamut mapping can do almost miracles and reproduce the same color. No. We are limited to what the device can do. And that has significant consequences for organizing your workflow. If you want to be proofing your content, it is very important to have a decent-sized gamut on that device, because otherwise you may be looking at something which is not correct in terms of the original data.
[Transcript missing]
So now I'll give the mic back to Ken, who's talking about active color management. Thank you, Luke. Now we'll talk about what we do on the desktop and what we do on iOS. The first thing we're going to talk about is active color management. Active color management is a dynamic process: it's something we apply to every pixel in real time on every frame.
So what do we do with these pixels? We perform the color match operation that Luke has been talking about, and we go from the source content's profile space to the destination space, possibly with an intermediate space or a working space in between. So that could be two color matches for each pixel.
If you're dealing with still images, then you could probably do this on the CPU, and certainly applications do. But if you're doing this dynamically to anything with animation, any kind of video, any kind of a dynamic application, you want to be GPU-based or use some other form of hardware acceleration.
On the other hand, there's targeted color management. In that case, we pick one color space, and the media is matched to that space at authoring time. There may be hardware involved, but that hardware is not doing a color match per se. It's trying to make the display act as much like the target space as possible.
And targeted color management is similar to what the video industry has traditionally done, where they either target traditionally SD video or now HD video. And this is sort of an illustration of that. The director was proofing on, I called it a broadcast monitor, but really in this illustration it should be an SD monitor, and it's only appropriate to be viewed on the SD TV.
So, not surprisingly, on the Mac, on OS X, we're performing active color management. The logic is provided by the ColorSync framework. However, there are a number of frameworks and applications that provide GPU-accelerated matching. Those include the Window Server, Core Image, and Core Animation. So active color management is really all about flexibility. It allows you to use any content and view it on any display.
So certainly on a desktop, where you could have lots of different display devices or other forms of output devices like printers, as long as your content is tagged with an ICC profile, and you have an ICC profile that describes your display, then you're good.
On iOS, we are taking the targeted color management approach. In this case, we're targeting the sRGB color space. So your content is still matched to sRGB, only this is happening earlier in the process. It's not happening dynamically. It's happening during authoring. And at the end of the presentation, we're going to talk about how to author for that.
So it's happening during authoring on your Mac, running OS X, and not surprisingly, it's still being managed by ColorSync in that case. There's another place where color management could occur, and that's during syncing. So when you sync content to your mobile device, that's an opportunity to provide color management. And in fact, iTunes is providing that for you when you sync content from iPhoto.
So for instance, if I have the ProPhoto photo of my daughter in my iPhoto library, when that's synced to my mobile device, iTunes causes it to be converted to sRGB so it looks appropriate on that device. There are a lot of advantages to that for mobile. The largest is that it saves a tremendous amount of power, and power extends runtime, and we all want runtime.
It also frees up very valuable resources like the GPU to perform other operations. And one thing that should really be stressed is that you're getting the same high-quality result. You're getting the same match; it's just happening at authoring time instead of dynamically. What you're trading off is flexibility. But you're not taking the display panel out of your mobile device and replacing it with another one, so that flexibility isn't really needed.
So I'll return the stage to Luke, and I'll be back up in a moment to talk about authoring and verification after Luke talks about theory. All right, thank you, Ken. The color management technology that we're using in OS X, as Ken mentioned, is ColorSync. Very quickly about ColorSync: it consists of several parts. Maybe you have heard about the most critical part, which creates the color transforms I mentioned and performs the number crunching. It's called the CMM, which stands for Color Management Module.
In addition to that mathematical part, ColorSync has a database that we call device integration. Every device that is connected to your Mac will be registered with ColorSync, and if the manufacturer provided factory profiles, those will be registered with ColorSync for use when needed. The user can also assign custom profiles to a specific device.
Also part of ColorSync are the profiles that we ship with the system, standard profiles for different color spaces, and the user can obviously add his or her own profiles to that database. Since you're developers, it may be an interesting detail that ColorSync provides a plugin architecture for color management modules; if you have one that you want to use in your application, there is a specific API to invoke it and use it.
In OS X, we are trying to provide color management all across, through all the frameworks that process color media. We came up with several modern frameworks to control that; here is the list I'll be talking about. The first and most important thing the user needs from your application is the user interface, which is provided by AppKit. And AppKit is also integrated with ColorSync. We'll be talking about the details of that in a second.
Quartz has several different components. The most important from the standpoint of color management is Core Graphics, which defines the basic objects that allow us to describe and process color. The first one is CGColorSpace. We could talk here for hours about how to define a color space, but I think this very brief definition is good: you can think about a color space as an object that allows Quartz to interpret your color data. And color spaces can be created many different ways. There are a lot of APIs, and here's one example.
You can use a simple call such as CGColorSpaceCreateWithName; the call that you see here results in creating the sRGB color space. CGColorSpaces can also be created directly from an ICC profile; here's the call that does that. Once I have a color space, I can talk about what is understood as a CGColor, which means I define how to interpret the values, and I combine that with the values themselves. That is a CGColor. Here's a little sample code showing how to create one.
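The slide code itself isn't captured in the transcript, so here is a minimal sketch of the calls being described, assuming the public Core Graphics API of this era (the CMYK profile path is only illustrative):

    #include <ApplicationServices/ApplicationServices.h>

    // Create a color space by well-known name: sRGB.
    CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);

    // Or create one directly from an ICC profile on disk.
    CGDataProviderRef prov = CGDataProviderCreateWithFilename(
        "/System/Library/ColorSync/Profiles/Generic CMYK Profile.icc");
    CFDataRef iccData = CGDataProviderCopyData(prov);
    CGColorSpaceRef cmyk = CGColorSpaceCreateWithICCProfile(iccData);

    // A CGColor is a color space plus component values:
    // here, opaque red in sRGB (R, G, B, alpha).
    const CGFloat comps[] = { 1.0, 0.0, 0.0, 1.0 };
    CGColorRef red = CGColorCreate(srgb, comps);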
The next one is a very similar concept. Once I have the color space and I have a lot of data, which is my image organized as an array of pixels in rows and columns, I know how to interpret those values. And here is the call, which emphasizes the most important aspects of creating the image in CG. And something new that I haven't talked about yet is the CGContext.
It is a concept that represents a drawing destination. Once we have created the color content, in the form of an image or data, we want to rasterize and render it to a destination. CGContext is the abstraction that encapsulates the different types of destinations. The most significant difference between the types of context is that there are some that have their own color space, which will be attached to your bitmap or, for example, to the window.
And there are those which really do not require color conversion because they are what we call recording contexts, like PDF, which allows you to specify many different types of color data, or PostScript, which records the data as a source and hands off color management to the printer.
The CGContext is really the place where automatic color matching happens for you, the developer. Having a context created with a specific color space, I can draw content of many different color types into it. So, for example, I can create my sRGB context, and having a CMYK image, I can draw it.
And I can composite that with monochrome or a solid color. That's the model we are using, based on the PDF imaging model. So in this situation, when the color space of the destination context is different from my source, Core Graphics will automatically invoke ColorSync and perform the proper color correction.
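As a hedged sketch of that implicit matching, this is roughly what drawing a tagged image into an sRGB bitmap context looks like; the file path and dimensions are illustrative:

    // Load an image; it carries whatever profile it was tagged with.
    CGDataProviderRef src = CGDataProviderCreateWithFilename("/tmp/photo.jpg");
    CGImageRef image = CGImageCreateWithJPEGDataProvider(src, NULL, true,
                                                         kCGRenderingIntentDefault);

    // Create a bitmap context whose destination space is sRGB.
    CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGContextRef ctx = CGBitmapContextCreate(NULL, 640, 480, 8, 0, srgb,
                                             kCGImageAlphaPremultipliedLast);

    // Quartz notices that the source and destination spaces differ and
    // invokes ColorSync for you: the pixels land in the context already matched.
    CGContextDrawImage(ctx, CGRectMake(0, 0, 640, 480), image);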
As you see, I mentioned here a reference that I would like you to take a look at. This is the Quartz 2D Programming Guide, and we also have sample code called ImageApp, which you can download from developer.apple.com, that shows you how to use the objects I just mentioned.
At this point, I would like to step a little bit into the architecture; understanding it will help you write your application correctly. We are talking here about the backing store of the window. In other systems, and in old versions of our system as well, contexts were attached directly to the framebuffer. In recent releases, we have something we call the window backing store, and this is what the context is connected to. In the default situation, the window context is tagged with the current display profile.
And if I have the context attached to it, I can draw different types of images and composite them together, like an sRGB image with a high-dynamic-range image, and even monochrome, and come up with something that will look like this on stage. A very important point here: I mentioned that there is a place to define a color space.
By default, it's the current display's color space, but for certain cases, that color space can be changed to one of the application's choice. And this has consequences that I'll be talking about in a second.
[Transcript missing]
As you can guess, this framework acquires images directly from cameras and scanners. And because images always come in some kind of file format, it's obviously mostly based on Image I/O, but it has some characteristics specific to color management that I'd like to mention. There are still file formats that do not carry any colorimetric information, and in order to process those, we need to do something about that.
In the ColorSync device database, we can keep a profile for that specific camera that we know does not provide colorimetric information. Image Capture will consult that database if that is the case, assign that profile, which means creating a proper CGColorSpace, and send it back to the application. In certain cases, Image Capture also converts images for display, and I'll talk about this in a second. There are two major parts. One part of Image Capture is ImageKit, which provides you all the UI to browse Image Capture devices and their content, and also to display the images.
So that specific view, IKImageView, will perform color management for you if you want to display the image. The other part is closer to the core of acquiring images; it's called Image Capture Core, and it has classes that allow you to do the same things programmatically: browse the devices, browse their content, and acquire images from the camera or the scanner. The reference I mention here is the Image Capture Applications Programming Guide.
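As a tiny hedged sketch of the IKImageView path (assuming an outlet named imageView and a tagged CGImageRef named taggedImage, both illustrative):

    #import <Quartz/Quartz.h>   // ImageKit lives under the Quartz umbrella

    // IKImageView color manages the tagged image for the display it's on.
    [imageView setImage:taggedImage imageProperties:nil];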
The next framework I'll talk about is AV Foundation. In essence, it provides very similar functions to the previous frameworks. It is still based on Core Graphics to open, display, and save video content. It has its own classes that give you all this functionality, but it's a little different in certain aspects. It's very important that when using this framework, you properly set the output settings for saving video. Here are a couple of examples of the keys you need to specify in your settings dictionary when you do that. There are also some distinct differences.
The first difference from the other frameworks is that a CGImage will be saved as is; its colorimetric information is simply recorded. Using the output settings dictionary, you can have AV Foundation convert the source you acquired to the format you want saved.
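The exact slide contents aren't in the transcript, but an output settings dictionary of the kind being described might look like this sketch, tagging the saved movie as Rec. 709 HD (the keys come from AVFoundation's video settings; the codec and dimensions are illustrative):

    NSDictionary *colorProperties = @{
        AVVideoColorPrimariesKey   : AVVideoColorPrimaries_ITU_R_709_2,
        AVVideoTransferFunctionKey : AVVideoTransferFunction_ITU_R_709_2,
        AVVideoYCbCrMatrixKey      : AVVideoYCbCrMatrix_ITU_R_709_2,
    };
    NSDictionary *outputSettings = @{
        AVVideoCodecKey           : AVVideoCodecH264,
        AVVideoWidthKey           : @1920,
        AVVideoHeightKey          : @1080,
        AVVideoColorPropertiesKey : colorProperties,
    };
    // Hand outputSettings to, e.g., an AVAssetWriterInput so AV Foundation
    // converts the source to this format on save.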
New in Mountain Lion, there is Video Toolbox, which provides the new concepts of the VTPixelTransferSession and VTCompressionSession that allow you to dig much deeper into both color management and video processing. And last but not least, there's AppKit, which as I mentioned before is integrated with ColorSync. This is the first thing your application needs to present content. When a window is created, AppKit automatically does the setup; this is where color management starts. It assigns the proper backing store to the window residing on a specific display.
And also, all the little widgets that you typically use in your window are color managed too. What's important to know is how to bridge between AppKit and the frameworks I mentioned. Most likely, you will need a CGContext into which you can draw using the frameworks I described. And here is the call that accomplishes that.
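That bridge, as a minimal sketch inside a custom NSView's drawRect: (the _image ivar, a tagged CGImageRef, is illustrative):

    - (void)drawRect:(NSRect)dirtyRect
    {
        // Get the Core Graphics context behind the current AppKit context.
        CGContextRef cg =
            (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

        // Draw with Quartz; matching to the window backing store's space
        // (by default, the display profile) happens automatically.
        CGContextDrawImage(cg, NSRectToCGRect(self.bounds), _image);
    }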
As I mentioned, for special cases you can set the backing store color space yourself in your application; this is the call. And one thing I'd like to talk about in a second: you can also register for the display change notification. The user can change the profile by going to System Preferences, or the window can be moved from one display to another. How to use all of this is also shown in the ImageApp sample.
But let's talk about this display change notification. Say I have a window that I draw into using different kinds of media, and everything is done and ready for display; the user may then decide to move that window from one display to another. What happens now is that AppKit, on getting the notification from the Window Server that this has happened, calls the application to redraw everything from scratch.
This gives us a chance to rematch to the new display. So the conclusion for developers here is: pay attention to this. If you're caching information that is tied to the first display you presented your content on, you need to discard it and allow proper color management to happen.
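A hedged sketch of that pattern in a window controller; the notification and opt-in call are real AppKit API, while the cache ivar is hypothetical:

    - (void)windowDidLoad
    {
        [super windowDidLoad];
        // Ask AppKit to redisplay when the screen's profile changes.
        [self.window setDisplaysWhenScreenProfileChanges:YES];
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(screenProfileChanged:)
                   name:NSWindowDidChangeScreenProfileNotification
                 object:self.window];
    }

    - (void)screenProfileChanged:(NSNotification *)note
    {
        // Bitmaps cached pre-matched to the old display are now stale.
        [_cachedMatchedImages removeAllObjects];       // hypothetical cache
        [self.window.contentView setNeedsDisplay:YES]; // rematch on redraw
    }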
So let's put this all together and see what happens in an application. This is a simple application that wants to acquire an image from the camera. By using Image Capture, the CGImage object will be properly tagged and will contain all the colorimetric information in the form of its color space. If the application wants to draw that image on the screen, the only thing it needs to do is hand it to Quartz, without worrying about what the current display is at this moment and so forth.
Quartz will do all that work. It will consult ColorSync to find the proper profile, perform the color management, and the image will be displayed correctly. It's a very similar situation if you want to print that image. You don't need to know at this moment which printer the user has selected. You hand it to CUPS, our printing system, which will find the proper profile for the printer, convert the data, and print the job.
The conclusion of this is that OS X offers implicit, automatic color management through the frameworks that are integrated with ColorSync. So in certain situations, you don't need to do anything about color management. And all the images and devices have profiles associated with them: you can dig into that, extract the information, and do the kind of higher-level operations that require it.
That said, you still have certain responsibilities. If you're creating your own content, make sure it is properly tagged, that the images you're creating have calibrated color information, and that the contexts are fully specified. For a CGContext, the only thing you need is the proper color space; in video, there are a few more details to specify. That brings me to device color spaces.
Device RGB, for example, is really something we try to discourage as much as we can. It's legacy, as I was trying to describe in the previous part of the talk. It is meaningless: we really don't know what those colors are. And when device RGB is processed by the system, we don't know which device you really mean. For some people, this debate has been going on for years, 20 years even, about what to do with it.
Some say device RGB is some average color space, you know, a decent display. Others say no, device means I already managed the data, don't touch it. So when this comes in as a source, we really don't know what to do, and don't expect that it will be consistent, because different frameworks make their own decisions about it. Besides all those high-level color-managed frameworks, we have one low-level framework that is very important in graphics: OpenGL. By definition, it doesn't have anything that deals with color media in an automatic fashion like the ones I described.
So if you're using it, you as the developer are responsible for properly tagging the buffers when the data processed by OpenGL is handed off to other frameworks. And if color management is required, guess what: you have to do it yourself.
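One plausible way to do that tagging, sketched here under the assumption that the GL scene was rendered with sRGB values: read the pixels back and wrap them in a CGImage that carries an explicit color space, so color-managed frameworks downstream know how to interpret them.

    #include <OpenGL/gl.h>

    size_t w = 640, h = 480;                 // illustrative dimensions
    uint8_t *pixels = malloc(w * h * 4);
    glReadPixels(0, 0, (GLsizei)w, (GLsizei)h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Wrap the raw buffer in a CGImage tagged as sRGB (alpha ignored here).
    CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, pixels, w * h * 4, NULL);
    CGImageRef tagged = CGImageCreate(w, h, 8, 32, w * 4, srgb,
                                      kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big,
                                      dp, NULL, false, kCGRenderingIntentDefault);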
[Transcript missing]
So now I'll turn back to Ken. Thank you. So now we're going to return to how to author content. We're going to talk about how do you verify your applications. And then finally we're going to talk about what's new in Mountain Lion related to color.
So we're going to return to our director. You recall he was performing what I call a proofing operation on his broadcast monitor. For you folks, we recommend using Apple desktop displays. Desktop displays are better than laptop displays for proofing, at least, because they have a very, very wide viewing angle. Even as good as our laptop displays are getting, with some of our announcements this week, we still suggest desktop displays. They're brighter, and they have a wide color gamut.
What we don't recommend is using some of the third-party wide-gamut displays that are available. Those things have, in some cases, a tremendously wide color gamut. They're beautiful to look at. But unless you folks are very, very sophisticated developers when it comes to color management, it's very easy to make mistakes. With a wide-gamut display, the photo of my daughter could look great even in a non-color-managed application. But then when you got it into your actual application and saw it on other displays, it would look funny.
So please strive for a consistent viewing environment. Unfortunately, that means forsaking windows. In addition to the big difference between sunlight and darkness, your perception of what you're viewing can change profoundly even when the sun ducks behind a cloud momentarily. It's nice to backlight your displays, meaning have a light illuminate the wall behind you, and that wall should optimally be some kind of neutral color. At the very least, that reduces eye strain.
While calibration is not strictly required, as Luke mentioned, at least for Apple devices, the ColorSync database provides a profile for our displays. You still may want to calibrate, and we provide a software calibrator with the system. So authoring content. It's imperative to use color-managed authoring tools. All of Apple's tools are color-managed, and some third-party tools are managed.
Usually when they are managed, they're very sophisticated, and you as developers and artists really have to take responsibility for making sure they're configured appropriately. The most straightforward configuration is to use sRGB for everything. Now, your source content can be in any format, in any color space you like, provided it's properly tagged. But when you're authoring, you want to author in sRGB.
sRGB then acts as a working space. It means that even if your content is wider gamut, and your display can show more than the sRGB gamut, the application is going to be narrowing that content and mapping it to sRGB. So you want to work in sRGB, which is what you'll eventually export to.
So, of course, you want to export to sRGB. For certain circumstances, if you have very high-value content and your application's running on a Mac, and of course your application's color managed, you may want to keep it in its native format. So we've been talking a lot about tagging content. I want to make sure it's really obvious what that means. Pretty much that's attaching meta information to your content that makes it self-describing, at least from a color perspective.
For still images, that's the ICC profile that Luke described. For video, it's a construct we call an NCLC that describes the nature of the video space. For some formats, there are basically bits in the header or something like that that describe the color space. Programmatically, your image buffers may be tagged with a CGColorSpace, an opaque data structure that actually contains an ICC profile internally.
In terms of tagging in Mac OS X, you should know that Apple tools will automatically tag content they generate. But that's not necessarily true of other tools. So again, as developers and artists, it's your responsibility to make sure that all your content is tagged and looks correct. And we'll be talking about that. I'm going to run through three tools that you can use really quickly.
So the first I'm sure you've all used is Preview. Preview is color managed, of course, and it's a great tool for opening your content, making sure that it looks correct. If you bring up the inspector, you can actually look and make sure that there's a tag attached. That's the best thing because you could be looking at untagged content and it won't be obvious if it was tagged or untagged. And there's a subtle aspect to that, too. Untagged content may look right. Even non-color managed content may look correct on your particular display, but it's almost guaranteed to look different on everybody else's display. Color management not only makes things look correct, but it makes them consistent.
What some of you may not know is that under the Tools menu, there's an Assign Profile command, and that allows you to basically set the color space for an image. And this is the tagging operation. Now, this doesn't change the pixel values in your buffer -- in your file, rather. What it does do is it changes the meta information that's associated with it. It tells the Color Management System how to render those pixels.
So next up is the ColorSync utility, and it's a very powerful utility. I'm going to go through four things that you can do with it. The first is, again, what we just talked about in preview. You can assign a profile to image data. But in addition to that, you can perform a new operation, a match operation. And this is an operation that actually modifies the pixels.
So for instance, if I had my daughter's ProPhoto image, that wide-gamut image we saw earlier, in ColorSync I could cause it to be matched to sRGB. That means all the pixel values will be converted to sRGB and saved in the file, and the file will then have an sRGB ICC profile associated with it.
A pretty fun thing it'll do is allow you to compare ICC profiles in 3D. What wasn't obvious from Luke's presentation earlier is that the triangular gamut you saw isn't actually 2D. In fact, it's 3D, and the triangle we were looking at is a 2D projection of that 3D space. So ColorSync Utility will let you turn the gamut around and actually see what shape it's in.
And where that becomes useful is comparing gamuts. So for instance, if you want to figure out if all the colors in a certain image in a certain color space can be represented in another color space, you can compare both of those in 3D and you can spin the result around and you can see if any stick out. If any colors stick out, those are colors that cannot be represented in that second color space.
Then finally, ColorSync Utility has a calculator that allows you to perform color math. This is actually how Luke came up with those RGB values you saw. How do you figure out what a value in ProPhoto is in sRGB? That may be useful if you're authoring RGB values for, let's say, GL rendering, and you want them to match something that's color managed. It may also be useful for creating values for HTML or the web.
We've had a color profile Automator action for some time. This allows you to set an NCLC tag on video; it allows you to tag video. Whereas previously it only allowed you to tag video files that came from true video sources, in Mountain Lion we've added provisions that allow you to add tags describing RGB-based data. For you QuickTime users out there, you may remember things like the Animation codec; this way we bring the QuickTime Animation codec into the color-managed world. And there's a new P22 phosphor option that describes the old color CRTs that were used on Macs.
Sips is a command-line tool that allows you to perform batch processing. So if you have a whole library of maybe 10, 100, or 1,000 images that you want to tag, this allows you to do it as a batch. In this example, we're doing a match to sRGB. If you read the man page, you'll find that it's a very capable tool and can do many, many things.
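The slide's exact invocation isn't in the transcript; a plausible form of that batch match, using sips's --matchTo option with the standard system sRGB profile, would be:

    sips --matchTo "/System/Library/ColorSync/Profiles/sRGB Profile.icc" *.jpg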
So how do we evaluate the results? There are three techniques I'm going to describe quickly to you. They involve using special tricks, let's say, special test patterns, special profiles that you can use, and special content. So we begin with this image. It's actually a modified version of the SMPTE color bars.
These things are available in the TechNote listed at the end of the presentation. Here we're rendering it in QuickTime Player 10, reading the value back in Digital Color Meter, and looking at the 75% white value. 75% of the 8-bit maximum value of 255 is 191. Reading the value back, we get 198, 198, 198, not 191, 191, 191: there's color management involved. Read the TechNote for more details.
Another technique, and this is an extraordinarily powerful technique, is to use what we call a trick profile. An example of that is the BGR profile. It's a profile that purposely swaps the red and blue channels to make things look really funny. So what you can do is you can take that profile, you can copy it to your ColorSync profiles directory, and then when you bring up the display preference pane, you can select it.
And the object is to make things look really strange. The things that look strange are things that we know to be color managed. So in this case, we've brought up Safari, and we can see that the tagged content is color managed, which is a great thing. And in the course of creating this presentation, I looked at an early version of the beta and saw an area that wasn't color managed.
When you zoom using the two-finger pinch operation, you can see that images stop being color managed. And then here's an animation that sort of depicts that. So you can see that it looks funny, it's color managed, and as soon as we begin zooming, it pops back to looking normal, which is not a good thing. In this case, it's bad. And I slowed the zoom way down so you could see it. And then when it finally stops, it flips back to looking funny, which is good. So this technique is wonderful for really sussing out some small mistakes.
Using trick content. So this is special content with a special trick embedded profile. So there are a couple of places where color management can be broken. It can be broken at the side where you're matching to the display device, but it could also be broken on the source side.
So if we look at this content in a non-color-managed application, we can see that it's clearly wrong; it says as much. If you look at it in Preview, which is a color-managed application, we can see that it's correct. And you can use this content in your own applications, too. This content is part of the ImageApp sample that Luke mentioned.
So finally, what's new in Mountain Lion? So as Luke mentioned, don't use Device RGB, but what we've done is we've standardized what Device RGB does. Now Device RGB always stands in as sRGB. You may not be using Device RGB, but it could be embedded in legacy content that's out there, so just be aware of that.
QuickTime Player 10 now color manages all content, whereas previously it only color managed tagged content. What it does is automatically compute an NCLC based on the assumption that it was video exported via QuickTime. This fixes a big issue where, previously, if you exported classic content to a modern content type like H.264, the two would look different. Now they no longer look different.
So if you don't like the assumptions we make, by all means go and tag your content to make the assumption permanent. Finally, Final Cut, not part of the operating system but released in the last year, is now color managed. Whereas previously video that was authored in Final Cut would look different when played in color-managed QuickTime, now they look identical.
So just to review very quickly, color management is extremely powerful, but the verification can be subtle. Be sure you author for sRGB, make sure all your content's tagged, and use these techniques to verify your results. You can get more information at these sources. These related sessions have already passed, but be sure to check them out online.