Mac • 1:15:36
Mac OS X uses advanced standards-based color management techniques to ensure that images, graphics, and video always look great on screen and on paper. See how ColorSync, Quartz, and QuickTime X can automatically color match digital media in your application. Learn color management best practices and understand how changes to system gamma may affect your application.
Speakers: Luke Wallis, Ken Greenebaum
Unlisted on Apple Developer site
Downloads from Apple
Transcript
This transcript has potential transcription errors. We are working on an improved version.
[Luke Wallis]
My name is Luke Wallis and I'll be delivering this presentation together with my colleague Ken Greenebaum. What we are hoping that you're going to learn today is, first of all, how the Color Management Architecture is done in Snow Leopard. Then we'll touch on the subject of new ColorSync API that was introduced in Snow Leopard. And later on we'll talk about changes to system gamma that were also introduced in Snow Leopard. Following that we'll touch on the subject of Window Backing Store Color Space that was also introduced in Snow Leopard.
And after that Ken will talk more specifically about Video Color Management in QuickTime X. So before we go to the details of the Color Management Architecture in Snow Leopard, let's start with a simple example of a scenario which is quite typical on the Mac. I have a digital camera that I use for taking pictures, let's say from my vacation in Hawaii, and I brought this thing back home. I have my Mac, I have a printer, and what I would like to do is download the images from the camera to my computer, preview them, perhaps make some small editorial changes, and print them. When I do all that I have a certain goal in mind. What I'd like to do is to make sure that the images that I'm going to view on my display and later on print will be consistent in terms of color.
[ Background noise ]
[Luke Wallis]
Why is this important? Because I don't want to waste paper unnecessarily on images I don't like, and I would like to make sure that my display reproduces faithfully what the camera was able to capture. That goal is easy to formulate, but is it really technically easy to achieve? Well, I need to step back a little bit and think about how those devices work. Certainly they use completely different technologies to represent color, and this is where the differences between those devices are.
The fundamental problem that I have is that each of those devices will represent color in a different way, so in order to achieve what I want in the application I'm going to write, I need to make sure that I can somehow translate, or convert, the color from one device to another in a way that will preserve its appearance. So this is where I need what we call color management. Color management will take care of proper conversions between different color representations.
In order to understand this better we need to look into a way of representing the color capabilities of each of those devices, and here I give you an example which is quite simple. I'm talking about one camera, one display and one printer, but when I'm thinking about moving forward with the application that I'm going to develop to help the user with this workflow, I'm thinking that, well, very often he may have access to some other images which were acquired by completely different cameras and stored on disk or somewhere else, and my user may want to combine all that with the new pictures I just brought from Hawaii.
Well, my Mac is also on the network, and there's the Web: maybe I've seen images of the place I stayed in, but the weather was bad and I couldn't really take good pictures, so maybe I can take something from the Web and add it to the album I'm going to be creating. Well, I work on one computer, but very often I will be moving from one place to another. I may be working on different devices.
I will, perhaps, do some work on my laptop or my iMac somewhere else, so I'm not going to be dealing with only one display but with multiple devices. And then recently we added new devices which I can connect to my Mac and also take advantage of for presenting the content I'm working on.
I may want to upload some of the images to my iPhone so I can show people, you know, a kind of abbreviated version of the album I'm working on. But, perhaps, I can also do a similar thing and upload images to my Apple TV, and then when I have the after-vacation homecoming party my friends may look at this on the high-definition TV connected to the Apple TV. So that's a great picture, but I realize, as I said, all those devices are different. How am I going to tackle the problem? So let's take a look at how I can represent different devices in terms of their color capabilities.
I'm showing you something very simple in terms of a model of the visible spectrum; this is called, as many of you may know, a chromaticity diagram, which shows the visible spectrum, and each of my devices can reproduce a part of that. There's really no device that could reproduce the full visible spectrum.
I give you an example of Adobe RGB, which is one of the typical color spaces used in digital SLR cameras today. But let's go in order of introducing those new concepts. So, first of all, my device can be characterized by its gamut. Gamut, basically, means the color range that my device is able to reproduce. On this particular graph this triangle represents the device gamut, and all colors which are inside the triangle are reproducible in this particular color space. So here is another term I use, color space.
While we can think of the device gamut as a kind of volume or range of color, a part of the physical colors in nature, a color space allows me to express the color mathematically. And here is a very simple example of a color space using three primaries, red, green and blue, where each color can be represented as a linear combination of all three. So, great, I kind of came up with a model of how to express and define my device's color capabilities, so now I go back to the problem I'm trying to solve: I need to reproduce the color produced on one device on another.
And I know they have different gamuts, so I go back to the devices I showed you on the first slide: my camera is a pretty good camera using Adobe RGB as the color space in which all the images will be produced, I have my laptop display with another gamut, and I have an inkjet printer.
And as you see all those gamuts are different, so if I pick a color somewhere in Adobe RGB which is in the middle of the gamut and I want to reproduce this color on my LCD, I see that at least from the physical capabilities perspective I have no problem, because this color fits quite well inside the LCD display gamut. Same thing with the printer: that color is inside the inkjet printer gamut. So there's no color shift; the colors are in gamut.
But what if I chose some other color which is still in my Adobe RGB camera gamut but, obviously, as you can tell, is way beyond the gamuts of my two other devices? Then in order to reproduce this color I need to do something, and in the example of my LCD display I have to push it to the closest point I have in gamut to my point in the visible spectrum, and as you see my green is no longer green, it changed, right? In order to reproduce it at all I had to add an unfortunate color shift. This color is also outside of the gamut of my printer, but it's not as bad; it was quite close and, you know, it's kind of green, the same as it was.
So one more quite important point I'd like to make here is that the choice of how to make conversions is very important. Very often my color conversion can be lossy. So if I decided to take my green color and first convert it to the smaller gamut and then go from there to the printer, obviously, I introduced a kind of damage to my color that I cannot recover anymore. So the point is, it is important to remember that color conversions can introduce certain changes which are not always desired.
So I know all that, and so I go back to my application I wanted to write. I have only the camera and I have the display and the printer, and I have to do now all this handling of those differences in gamuts and mapping and making smart decisions and all that. Well, this may be quite a big task, so when we were designing Mac OS X one of the decisions that we made is that we need to really help applications handle all those issues.
And the way we wanted to do it was by introducing what we called integrated and automatic color management, to kind of separate the application from dealing with all those details. There was already a very well known component existing in the old Mac OS called ColorSync, so it was an obvious choice to use ColorSync as the color management technology. ColorSync was very well established already in the publishing industry, and as many of you may know it is based on the standard produced by the International Color Consortium, or ICC, and recently the ICC specification became an ISO standard as well.
And the ICC standard, as well as ColorSync, is based on two fundamental concepts: ICC profiles, which define in general terms the color capabilities of any devices or color spaces, and the so-called CMM or, in other words, Color Management Module, which in essence is a mathematical engine that does all the mathematical operations required to convert color from one representation to another.
So how does all that work? In a nutshell a profile contains the information, in the form of tags, that allows me to convert the data from a device color space to an arbitrarily chosen reference color space called the Profile Connection Space. So let's say I have, again, my camera and my printer and I have profiles associated with those devices. Using the profile I can convert the data to my Profile Connection Space and then from there, using the printer profile, convert it to the printer. In a very similar way I may have a scanner with the proper profile.
I might have a profile for my display, and by using the connection through the PCS I can convert the data from the scanner to my display. It's not difficult to imagine that the same thing can be applied to video content which is defined to be in a specific color space. I convert it to PCS and then, by converting to my display, I can play it with the proper colors.
I'd like to bring your attention to a little detail here. As you see, the printer is typically considered an output device, but my profile shows the arrows going both ways. Why is this important? Because what I am trying to represent here is the fact that I can convert my color to the printer and then, having that color converted to the printer, bring it back to PCS and then convert it to the display.
What it allows me to do is to simulate the behavior of the printer. In the publishing industry this process is called soft proofing. So instead of checking my colors every time by throwing ink on the paper I can preview that, and this is becoming a more and more reliable and popular tool in the real printing business. There are a few more details in terms of what profiles contain.
As I said, they allow us to convert the color between the device and PCS and vice versa, but they can also provide information on how to do it in different standard pre-defined ways. Those ways are called rendering intents, and each profile contains the information needed to do it for three different rendering intents. One is perceptual.
Perceptual is the intent where we are not necessarily worrying only about faithfully representing the color, as I showed you by moving those color dots from one gamut to another. I know that I have some limitations in my device, so when I'm moving the colors I try to preserve the perception of the color, which works well for pictorial images. Another, more mathematical kind of rendering is colorimetric; this is the one I was presenting to you.
I really want to take the color from one device and put it in PCS exactly as it is; sometimes I have to do some clipping or moving, but I want to minimize that, so that one is called colorimetric. And there's yet another one called saturation, which tries to exploit the kind of vividness of my device, which is very useful for things like business graphics when I want my red to be really the best red possible on that device. I know I started with some color space, but maybe I can do something better in terms of presenting that.
So all that, as you see, gives me ways of selecting a path through the profile according to the rendering intent. And this is where the CMM comes into play, because when I'm creating a transform from one device to another I give the CMM the profiles as well as the rendering intent and the direction of the transform.
So the example here on the slide shows you how I created a transform going from my device A, which was my source, doing just a colorimetric transformation which, basically, says, you know, just represent these colors as I had them on my device the best possible way in PCS. And then from there I wanted to do perceptual rendering on my output device, which says, you know, here are the colors in PCS, try to make the best of converting them in a perceptual way, which may be good, as I told you, for pictorial images. I think this is also a good moment to mention what kind of data the CMM is going to convert.
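To make that concrete, here is a minimal sketch of building such a two-stage transform with the Snow Leopard ColorSync C API that is discussed later in this session. The function and constant names are the ones I believe ColorSyncTransform.h declares, but treat them as assumptions and check the header; the choice of relative colorimetric for the source stage is also an assumption (the talk just says "colorimetric"), and error handling is omitted.

```c
// Sketch only: a two-profile transform with per-profile rendering intents.
#include <ApplicationServices/ApplicationServices.h>

ColorSyncTransformRef MakeTransform(ColorSyncProfileRef srcProf,
                                    ColorSyncProfileRef dstProf)
{
    // Source stage: device A -> PCS, colorimetric ("represent as-is").
    const void *keys[]    = { kColorSyncProfile,
                              kColorSyncRenderingIntent,
                              kColorSyncTransformTag };
    const void *srcVals[] = { srcProf,
                              kColorSyncRenderingIntentRelative,
                              kColorSyncTransformDeviceToPCS };
    CFDictionaryRef src = CFDictionaryCreate(NULL, keys, srcVals, 3,
                              &kCFTypeDictionaryKeyCallBacks,
                              &kCFTypeDictionaryValueCallBacks);

    // Destination stage: PCS -> device B, perceptual (good for pictures).
    const void *dstVals[] = { dstProf,
                              kColorSyncRenderingIntentPerceptual,
                              kColorSyncTransformPCSToDevice };
    CFDictionaryRef dst = CFDictionaryCreate(NULL, keys, dstVals, 3,
                              &kCFTypeDictionaryKeyCallBacks,
                              &kCFTypeDictionaryValueCallBacks);

    const void *seq[] = { src, dst };
    CFArrayRef profileSequence = CFArrayCreate(NULL, seq, 2,
                                               &kCFTypeArrayCallBacks);

    ColorSyncTransformRef xform =
        ColorSyncTransformCreate(profileSequence, NULL);

    CFRelease(src); CFRelease(dst); CFRelease(profileSequence);
    return xform;   // convert pixels with ColorSyncTransformConvert, then CFRelease
}
```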
We deal with 8-bit and 16-bit integers, and recently we also added 32-bit floating point. So I'd like to illustrate that with a numerical example. Let's take again as an example two profiles I already mentioned. My source this time will be the LCD display profile and my destination is going to be Adobe RGB.
So, you know, there are many different types of profiles, and the one I'm going to use as an example here is the so-called matrix-based profile. It contains information which allows me to build the tone rendering curves that are applied in the process of converting the data from device to PCS; those individual tone rendering curves are popularly, very often, called gamma. This is from some old display.
As you see, it had a 1.8 gamma, so I was able to build those tables based on the profile. Another piece of information I'm going to extract from the profile is the LCD display matrix, which is based on the values of my primaries represented in the XYZ color space, and this allows me to convert the data from my device to the Profile Connection Space.
Well, Adobe RGB is a very similar type of profile, so what I have to do is, again, build a matrix based on information contained in the profile, but this time it's going to be an inverted matrix because I'm going in the opposite direction, and the same thing with the TRCs. As we know, Adobe RGB has a gamma of 2.2, so I'll be applying the inverted 2.2 gamma to my data to convert it to Adobe RGB.
So now let's put in a real number that I want to convert. I've chosen the most saturated green that I can reproduce on this device. We are not going to go through the math here, but if I really did the math, what comes out on the other end looks quite different.
My pure green, which has zero red and zero blue, suddenly introduced quite a bit of red and blue, and my green is far away from a fully saturated color. People are often surprised by that, so this is why I'm talking about it. Let's look, again, at the gamuts of those two profiles. The one in the middle is my display. So what is {0, 255, 0} in terms of red, green and blue?
Well, this is the vertex of my gamut, right; this is the most green color I can reproduce on this device. Just for reference, if I look at the same device values {0, 255, 0} in my Adobe RGB, well, I'm far away from there. That is a much more saturated green than my LCD's, so this is why, when I represent LCD {0, 255, 0} in Adobe RGB, I'm getting something which is a linear combination of the three primaries, and the device values really have nothing to do with each other. So this brings me to the most important point I'd like to make here: color values are only unambiguous if they are tagged with the proper profile. If we drop that information we really don't know what kind of color we are talking about.
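For readers who want to see the arithmetic, here is a small, self-contained sketch of the matrix-profile math just described: device RGB through a TRC to linear, matrix to XYZ (the PCS), inverse matrix to the destination's linear RGB, then the inverse TRC. The 1.8 and 2.2 gammas come from the talk; the two matrices below are the standard sRGB and Adobe RGB (1998) D65 primaries matrices, used here only as stand-ins for the display and destination profiles on the slide, so the exact output differs from the slide's numbers.

```c
#include <math.h>
#include <stdio.h>

static const double lcdToXYZ[3][3] = {      // sRGB-like display primaries -> XYZ
    {0.4124, 0.3576, 0.1805},
    {0.2126, 0.7152, 0.0722},
    {0.0193, 0.1192, 0.9505}};
static const double xyzToAdobe[3][3] = {    // inverse of the Adobe RGB (1998) matrix
    { 2.0414, -0.5649, -0.3447},
    {-0.9693,  1.8760,  0.0416},
    { 0.0134, -0.1184,  1.0154}};

static void mul(const double m[3][3], const double v[3], double out[3]) {
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2];
}

int main(void) {
    unsigned char device[3] = {0, 255, 0};   // most saturated display green
    double lin[3], xyz[3], adobeLin[3];

    for (int i = 0; i < 3; i++)              // display TRC: gamma 1.8
        lin[i] = pow(device[i] / 255.0, 1.8);

    mul(lcdToXYZ, lin, xyz);                 // device -> PCS (XYZ)
    mul(xyzToAdobe, xyz, adobeLin);          // PCS -> linear Adobe RGB

    for (int i = 0; i < 3; i++) {            // inverse TRC: gamma 2.2, clipped
        if (adobeLin[i] < 0) adobeLin[i] = 0;
        if (adobeLin[i] > 1) adobeLin[i] = 1;
        printf("%d ", (int)lround(255.0 * pow(adobeLin[i], 1.0 / 2.2)));
    }
    printf("\n");   // prints roughly "144 255 60": red and blue appear,
    return 0;       // and the green is no longer the device's pure green
}
```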
[ Background noise ]
[Luke Wallis]
So this part covers the mathematics and profiles that are used in ColorSync but ColorSync on Mac OS X has another role. Besides all this math and profiles it contains Device Integration. Device Integration is a database that contains the information about color devices connected to the system.
So as you know Mac OS X contains different device managers for different types of devices, and those device managers in Mac OS X are integrated with ColorSync in the sense that they not only provide the awareness of the device being connected to the system but also allow us access to their profiles. So part of this process where the Device Integration is being used is Device Registration.
You might have seen it. Every time you plug your camera into the computer, if you have ColorSync Utility open you will see that, you know, a new device showed up, and if the driver had a profile for it, it will be registered and the profile can be seen in the ColorSync Utility. Same thing with a printer: I have my USB printer, I connect it, and without any specific action the proper device manager, the printing manager, will automatically talk to the driver, get the proper profiles, and register those profiles on behalf of the driver for that specific printer.
This may be important for some of the programmers who need to know about new devices coming or going away. An important point I'd like to make here is that device drivers don't necessarily have to register full profiles. Specifically, display devices like, you know, just a regular display or, very often, projectors carry so-called EDID information, which stands for Extended Display Identification Data, and it contains colorimetric information out of which ColorSync can build the profile.
And this is actually what is happening with every single panel that we produce at Apple; they don't come with profiles, they come with this colorimetric information. The same thing happens very often with LCD projectors. When you plug one in, the EDID contains the data, and the Mac will build that profile on the fly and we're going to use it.
So this is how it works. What may be important for developers is the fact that Device Integration will send a notification every time an event like registration or a profile change happens, and if an application needs to know about that it can register for the notification and handle it. An important thing to know about Device Integration is the fact that the user can override those factory profiles. ColorSync Utility allows you to do that.
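As a rough illustration of that registration pattern, a client could observe these events roughly as below. The notification name and the choice of distributed center here are assumptions from memory, not confirmed by the session; check ColorSyncDevice.h for the exact kColorSyncDevice... constants your SDK declares.

```c
// Rough sketch: listen for ColorSync device events and drop stale caches.
#include <ApplicationServices/ApplicationServices.h>

static void DeviceProfilesChanged(CFNotificationCenterRef center,
                                  void *observer, CFStringRef name,
                                  const void *object, CFDictionaryRef info)
{
    // A device was registered or its profiles changed: drop any cached
    // per-device transforms and re-query the current profile here.
    CFShow(name);
}

void InstallDeviceObserver(void)
{
    CFNotificationCenterAddObserver(
        CFNotificationCenterGetDistributedCenter(),   // assumption: distributed center
        NULL,
        DeviceProfilesChanged,
        kColorSyncDeviceProfilesNotification,         // assumption: see ColorSyncDevice.h
        NULL,
        CFNotificationCenterSuspensionBehaviorDeliverImmediately);
}
```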
Here's a screen shot from my computer where I have the Epson Stylus 1280 which at the moment of plugging in registered nine profiles and as is shown on the right-hand side of the slide it came with a specific factory profile but I have my own tools which allow me to create my own custom profile that I'm going to use instead of that.
The way it works is that if a component needs to know what kind of profile is associated with the device, in a context where I am the current user, my custom profile will be returned instead of the factory profile. So a quick summary: ColorSync has a database of profiles that come preinstalled with the system, and the user can add profiles in specific locations. Besides that we have Device Integration, which I just described to you and which works for the different kinds of devices connected to the system. And last but not least, CMMs. Mac OS X ships with one Apple CMM, but there are ways for other applications to register their own CMMs.
So now, after all that I said, let's take a look at what's going to happen in an application that wants to do some color processing and deal with color devices. My simple application is obviously a Cocoa application. I listed here all those lower-level frameworks that deal with the color data, and in my example a camera is connected to the system. So from the high-level application I send a query through the framework we call Image Capture: 'please acquire an image from the camera for me'.
So when Image Capture talks to the specific device module for my camera, it checks the content of the image, and if that image contains a profile, it is passed on and we know we have a properly characterized, or tagged, image being passed back to the application. But if that image did not contain anything, we go back to ColorSync Device Integration and ask where the profile for this camera is, and my image will be tagged with that profile.
So what the application receives is a tagged object, what we call a CGImage, or we can call it just an image for the purposes of this talk, that contains everything I need to process it properly. So the very next thing I would like to do in my application is to draw it on the screen. Do I need to do anything to this image? No, I take the image I received from Image Capture and I pass it to Quartz, in old terminology Core Graphics, which will rasterize it for me and display it on my display.
And in that process, without any involvement of the application, Quartz itself will query ColorSync, find out the proper profile for the display, and whatever has to happen in terms of color conversion will be done without any involvement of the application. You can imagine that exactly the same thing is going to happen when I want to print. I just pass my image as is to CUPS with the selected printer, and the underlying frameworks will take care of selecting the current profile for the given printing condition and sending the rasterized data to the printer.
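A small illustration of that "just hand Quartz a tagged image" point: the CGImage carries its embedded profile, and drawing it into a context whose color space differs triggers the ColorSync match automatically, with no explicit color calls in the application. The file path is hypothetical and error handling is omitted; this is a sketch, not the session's sample code.

```c
#include <ApplicationServices/ApplicationServices.h>

void DrawTaggedImage(CGContextRef destContext, CGRect where)
{
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL,
                        CFSTR("/tmp/vacation.jpg"),     // hypothetical path
                        kCFURLPOSIXPathStyle, false);
    CGImageSourceRef src = CGImageSourceCreateWithURL(url, NULL);
    CGImageRef image = CGImageSourceCreateImageAtIndex(src, 0, NULL);

    // No explicit ColorSync calls: Quartz matches from the image's tagged
    // color space to the destination context's color space on our behalf.
    CGContextDrawImage(destContext, where, image);

    CGImageRelease(image);
    CFRelease(src);
    CFRelease(url);
}
```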
And here is another example that maybe I should mention. In the same way, using the proper lower-level framework, I can open, for example, a PDF document which may contain many different color spaces. As you know, a PDF doesn't have to be just RGB; it can contain many different objects and images in completely different color spaces.
The application, without even looking inside, just by creating a PDF document object, can send it to Quartz and say, display this for me, and besides providing some geometric information about the placement in the window it doesn't have to do anything. The same process of converting the source color spaces to the proper destination will happen automatically. Obviously, the same will be applied in terms of opening typical image files and the same in terms of writing them out.
When we are writing an image to a specific image file we will tag it properly so next time we or anybody opens it it has the proper color information to process that image properly. I would like to stop for a second on some details of printing because it is slightly different. The general idea is exactly the same but there is something additional there.
So very quickly looking at the architecture of printing and how it works: it consists of two components. The first we call the printing frontend; that's the actual library your application is linking with, and this is where the choice of the profile and the printer happens. Once I've decided what my destination is, we write a spool file which contains both the content I want to print and the profiles for my printer, and this is sent to the other component, which we call the Printing Backend.
And the Printing Backend will do the rasterization or produce PostScript or whatever we need at the backend, but there is a detail that I want to mention here, that the Printing Backend may work in 2 different modes. The first, and this is the default mode for most of the printers, is so-called Color Hand-off.
What it means is that we really don't match to the destination profile of the printer. The driver may have told us: please rasterize everything, convert all these multiple color spaces that we may have, for example, in a PDF document, to that one color space before you give it to me. The typical color space like that is sRGB. And then the driver takes it over from there and does its own color management to put the ink on the paper.
But, obviously, for the higher-level needs of more advanced developers or users there is another mode where literally the profile that was registered for that printer for a specific mode is going to be used and pre-matched by the OS, and the driver at that point should not be making any adjustments.
So what is the summary of all those things I was talking about? Well, as I said, Mac OS X offers integrated, automatic color management. What that means in essence is that the profiles will always be used wherever they are needed. So as a developer, often the only thing you need to do is to choose the proper color space for the additional drawing that you're doing, besides handling the images that may be coming from different sources, and just leave it alone. Mac OS X will do the proper color management for you.
[ Background noise ]
[Luke Wallis]
As I mentioned before, for those who really need to know more details, we rewrote the ColorSync API in Snow Leopard, so we have a new API based on Core Foundation; it's a C API. It's now a lot smaller, containing only 32 functions, just for statistics, which replace 129 old functions which are now deprecated but, of course, still supported. You know, ColorSync has grown from a very, very long time ago until now, and it was really time to make an adjustment in that sense.
Unfortunately I don't have any sample code, but I promise it will be available at some point on the Apple Developer Connection website, so please check that from time to time; at some point everything will be there. But speaking of the ColorSync API, I would like to mention a kind of new feature which is very unusual for other types of frameworks.
So I was talking about how this process works: getting a profile, selecting the profile for my source and destination, creating a transform, and then passing it to the CMM and requesting that the data be converted. We came up with a new idea for handling this process which is very useful in the presence of the GPU, which we know can be very powerful.
So instead of going through all of what I described to you, ColorSync can return a kind of recipe for how to apply the transform: what transform has to be applied in order to convert from profile A to profile B. And this allows an application to write fragment code in the shader language for a given GPU and really push those bits through very quickly. This model is already used inside Mac OS X by components like Core Image and QuickTime.
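A rough sketch of that "recipe" idea follows: instead of pushing pixels through the CMM, ask the transform to describe itself so its steps (curves, matrices, LUTs) can be re-expressed as fragment-shader code. The property key below is an assumption from memory; consult ColorSyncTransform.h for the exact kColorSyncTransform...ConversionData keys and the layout of the returned dictionary.

```c
#include <ApplicationServices/ApplicationServices.h>

CFDictionaryRef CopyConversionRecipe(ColorSyncTransformRef xform)
{
    CFTypeRef recipe = ColorSyncTransformCopyProperty(
        xform,
        kColorSyncTransformFullConversionData,   // assumption: see the header
        NULL);

    // The returned dictionary enumerates the conversion steps; a GPU-based
    // renderer would walk it and emit equivalent shader code, as Core Image
    // and QuickTime do internally.
    return (CFDictionaryRef)recipe;
}
```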
[ Background noise ]
[Luke Wallis]
So that's -- this is all about ColorSync. Now there are certain topics related to color that I would like to talk about which are new in Snow Leopard. First of all, the fact that we are now using 2.2 as the system gamma. Another topic I'd like to mention is the Color Space of the Window Backing Store, and then something which is also important starting from Snow Leopard is Display Change Notification. So let's go quickly to system gamma in Snow Leopard. Well, a very brief reminder for those who are not familiar with it: gamma, basically, describes the relationship between digital counts, or voltage, and luminance.
And by nature this is a power function, and for most CRTs it is very close to 2.2. Obviously, we have almost forgotten that CRTs existed, but the LCDs that took over from there try to preserve the same behavior, so they behave the same way: the relation between the digital counts and luminance is also, sort of, a power function. So as you know, the traditional Mac gamma was 1.8; that was very good in the very old days when there was no color management and you wanted to print.
Many of you may be familiar with the problem of dot gain, and just not to go into the details of that, 1.8 was really a much better solution for printing. We needed to somehow implement that. We knew that the native display gamma is 2.2, and we wanted to have an effective gamma of 1.8, so what we were doing is using the so-called video look-up tables, or frame buffer LUTs as written on this slide, to make an adjustment. Those video LUTs should actually be used for completely different things, more like linearization of the display, you know, making sure everything works as in the spec in terms of some small variations in hardware or firmware.
But we were combining those with an additional change which allowed us to emulate 1.8 gamma on the system. So starting from Snow Leopard we are no longer doing that. The native gamma of the CRT, or of how our LCDs behave, is close to 2.2 (the reality is a bit different, but that's the model), so our system gamma will be 2.2 as well.
So what that means is we can leave the frame buffer LUTs really for what they are supposed to be used for and we don't need to combine and tweak them in order to achieve that result of 1.8. And there is another important reason why we do that because we wanted to align -- sorry; I went a little bit too far with this slide.
We wanted to align our system gamma with what became the de facto standard in industries like digital photography, digital video, publishing and digital cinematography. So is there anything that you as developers have to do about this? Well, if your data was tagged it really makes no difference; the images will just be matched to a different profile and everything will work fine. But in the case where your application was using untagged data, I think you would need to make some adjustments. Here is an example of what's going to happen if the image was not tagged. Clearly the image on the right side is a little bit too dark, too contrasty.
We realize that this is a process of transition, so we added a ColorSync API which allows you to very quickly find out what the current gamma of the display is: whether you are running on an old system which was still supporting 1.8, or you're already on Snow Leopard and 2.2. The important point I would like to make at this moment is that untagged RGB data will be printed differently on Snow Leopard than it was before.
Again, kind of aligning with what is happening out there beyond the Mac, untagged RGB data will be printed as sRGB, and if we have any untagged gray data it will be printed as Generic Gray 2.2; that profile, or color space, uses exactly the same piecewise gamma curve as the sRGB profile.
So the next topic I would like to talk about is the Window Backing Store Color Space. When we think about drawing to a window on the Mac, we actually don't draw directly to the frame buffer. In reality there is a piece of memory set aside which is called the Window Backing Store; this is where all the drawing and all the compositing happens before the data is moved to the frame buffer during the refresh.
And by default that Window Backing Store is tagged with the display profile for the display where my window is residing, and there is a CGContext attached to it. So when I have, let's say, a JPEG image and I'm going to do some compositing in that window, I draw that sRGB image using a specific CGContext API and I can, for example, fill my window with it.
So this will be a source coming in one color space. I may have some HDR, High Dynamic Range, image I also want to compose into the same window. What I'm doing is actually doing everything in my Window Backing Store, but this is then reflected later in my window sitting on my display.
So, again, this color space can be completely different. What happens every time I do the drawing is that I do the matching from the source to my Window Backing Store: in the case of the first image, from sRGB to my display profile; in the second case, most likely from some linear RGB, depending on what kind of color space I had in my HDR image. And as an additional illustration I may have some PNG tagged with a gray color space.
It will, again, be converted to the Backing Store and composited together, and then during the refresh sent to the frame buffer and displayed on the display. So the thing that we added as a new feature in Snow Leopard is the ability to assign a specific color space to my Backing Store.
So now you may ask, well, what happens if I choose that color space? Then in essence the color matching from the Window Backing Store to my frame buffer will be done automatically by the system, by a component called the Window Server. It will all be performed on the GPU, which means it will be as fast as we can get it on this box. And what the application sees in terms of the drawing destination is that color space. By default that color space will remain the display profile, but as I mentioned, in specific cases it may be beneficial for an application to set it to something different.
Example, I have an application which displays the images from the Web. As many of you may know the standard color space on the Web is sRGB, so one model would be to keep it as is and draw every time my sRGB image and use CPU to do the conversion from sRGB to Window Backing Store and then do nothing in that moment when I transfer the data from my Backing Store to the frame buffer.
But in that case I may actually say, no, isn't it better if I tag, or assign, sRGB to my Window Backing Store? What that means is I completely avoid the CPU matching being done when I rasterize my image into the backing store, and I allow the Window Server to do the fastest possible conversion from the Window Backing Store to my display.
So just to mention other important implications: as I told you, the matching, in case of differences between the Window Backing Store color space and the display, will be performed by the Window Server using the GPU. Another thing that is maybe worth mentioning is that my backing store depth matches whatever my display is. And an important thing to mention is how OpenGL applications work in this environment.
So this is actually very similar to the Window Backing Store, because an OpenGL application has to render to a specific surface, and that surface has a color space assigned to it. If that color space is different from my display, the same thing happens automatically: the Window Server will use the GPU to convert the data from that surface to the display. But in terms of color, OpenGL applications are different.
OpenGL doesn't have any built-in automatic color management, so if you're writing an application like this you are responsible for doing the proper color management. Very typically, let's say there's a game: you just pick the color space of your window, of your surface, and you draw as is into it, which means no color conversions are required on your side, and then the Window Server will pick it up from your surface and convert it to the proper display color space. OpenGL applications can also take advantage of this code fragment API from ColorSync and perform their own color management on the GPU.
ColorSync has no API to set those color spaces, but I mentioned it because it is really important to understand in the whole color architecture of Snow Leopard. For that I refer you, first of all, to the AppKit APIs, see their tech note, and the same thing for Carbon, which will support the Window Backing Store color space. So the next thing I would like to talk about is Display Change Notification.
So as I described to you how this process of rendering to a window works, I can have my compositing done and displayed on one display, but let's say I did everything on my small laptop and I have a pretty big display with great color capability hooked up to my laptop. So what do I do? I move my window. Well, something's wrong here, right, if we do nothing, because I pre-matched, I matched the color to my first display.
So what we did in Snow Leopard is that in situations like this, Cocoa or AppKit, the highest-level framework that you're linking with, will request that you completely redraw the content. So now, if your application was just doing that, listening to the request, you know, the drawing calls for the window, and doing just that, everything is fine. But if you were for some reason caching information about this first match and you are now being called to redraw one more time, you had better dispose of those caches and start from scratch.
If that is the case, you should listen, or register, for the notification about display changes, and if it happens, purge your caches and start drawing from scratch. I gave you the example of moving my window from one display to another, which is the most typical case, but the same thing happens if I go to System Preferences and change my display profile. This is going to happen for you automatically, except that your caches should be purged.
Now we can step back and say, well, you know, I have options, right; didn't you tell me I can set my Window Backing Store to some color space different from the display? Absolutely. If your Window Backing Store was set to some arbitrary color space which has nothing to do with my current display, no action is really needed; even if you're caching, you just have to redraw, right, so when you move the window from one display to another, the automatic matching from your Window Backing Store will be applied and the image will, hopefully, be displayed correctly.
That brings up a little detail and fine point that I mentioned to you before: a choice of some color space in the middle may be detrimental, maybe it's not always the best; this is something that higher-level application developers should consider, what is better. In most cases the best way is to draw directly from my source to the destination. In certain cases maybe I want that color space in the middle, but the main reason for introducing it is performance.
Also, this Backing Store color space allows us to handle gracefully the problem of windows spanning multiple displays. This is a very simple example, maybe not realistic, but it's easy to envision systems in which I'll have a whole mosaic or, you know, a lattice of displays and I want to display one big image, so when I have one Backing Store color space which is well determined, then when I match individual parts of that image to different displays I can do the "right thing".
A very important use of that Backing Store support is for video, because I can set my backing store to the color space of the content and allow the GPU to do all color conversions, instead of my application or even the system trying to do the match on every single frame using the CPU. So I think this gives me a perfect way to pass the microphone to Ken, who will be talking about video. Thank you very much.
[ Applause ]
[Ken Greenebaum]
Thank you Luke. So we're running just a little bit long so I'll be moving fairly quickly through this material. This is some content that I've been meaning to communicate to developers for some time now and we're not going to talk about APIs, rather, we're going to talk about what's happening inside the box behind your applications. So this will give you the ability to evaluate what it is you see and why you see what you see and why, hopefully, it's a good thing.
So these are the topics we're going to talk about. The first two, Interpreting Video From a Computer's Perspective as well as What Does It Mean to Color Manage Video, are kind of motivational topics. Probably what you're here for is what's new in QuickTime X in terms of color management.
We're also going to, very quickly, talk about what the implications of what Luke just introduced in terms of the new display gamma are to your applications, and then we're going to quickly go into some more advanced material regarding how to evaluate the video that your application's actually producing. So one thing we have to remember is that video's a pretty old technology already. In terms of electronic television, that's really a product of the 1930s, and color is actually from the 1950s, and some really brilliant people created these technologies.
We have to understand that these technologies actually represent solutions from the 1950s era and you're talking tubes. A lot of the things that are baked into the standards that we now know and maybe love in video actually are based on the compromises that these engineers made quite a long time ago, and we're going to talk about a few of these.
So this is a pretty old video camera. You have to understand that technology from those days was inherently noisy. So a potential solution from that era was just to slope-limit near black, and that is actually what made it into the rec.709 standard as what we know as this piecewise transfer function. We'll talk more about that.
Another aspect of video is how it was intended to be viewed, and it was viewed on pretty old CRTs in sort of 1950s-era living rooms, which were fairly dim affairs. So one thing that's important to remember, or maybe you're not familiar with the concept, is that if you take an image that was produced in bright sunlight and then you display it in a darker environment, kind of like this 16 lux environment that the video standards assume people are going to be watching video in, the net effect will be that the image appears to be kind of flat, and it needs a gamma boost, which is another way of saying it needs a contrast enhancement, in order to look linear, in order for it to look like what the original image looked like.
So consequently, built into the standards and built into the video equipment that we've enjoyed for all these years is something around a 1.25 gamma boost, which means that the cameras were producing a signal that the CRTs that people had in their homes purposely didn't completely undo, so you're not getting a one-to-one relationship.
This is also true if you're familiar with print film technology or at least slide film projectors. If you took a picture in a bright sunlit room and watched it in a dark environment, maybe projected in a room like this, you'd actually need a 1.5 gamma boost. And in the case of film that's built into the emulsion, because there's no place else in that system to provide the difference. Also in the 1950s they added this thing called color to the pre-existing black and white TV signals.
And, again, they were really clever about how this information was added. It was added as a subcarrier to the luma signal, but for our purposes we're really interested in the fact that the chroma information is at a lower bandwidth than the luma information. Pretty much for each of the chroma channels there's half the information there is in the luma channel, and that's based on our perception.
As many of you know, our perception of the luma, the black and white, extends to a lot higher frequency than our perception of the chroma information, and that's a very early example of lossy compression. So moving forward quite a bit to the 1980s, we have what some of you may be aware of as the rec.601 standard, and that was pretty much interested in taking the analog video lines and just chopping them into individual pixels.
It also talks about the Y'CbCr color space where Y represents luma, that's what I have as the gray boxes on the slide, and Cb and Cr are the two chroma channels. And you can see here a representation of the 4:2:2 chroma sub-sampling, where for each line of luma information you have half as much chroma information. So there's something that the rec.601 standard didn't talk about. It didn't talk about the gamma of video, specifically. It also didn't talk about the chrominance or the color representation of video specifically.
And that's because it really wasn't necessary. So let's look at the example of a DVD player; these are my own set of color test bars. The DVD player takes the information that's written on that DVD, which happens to be MPEG-2, it decodes that, it goes to a frame buffer that's in the DVD player and then it gets sent directly out to your TV without any further processing. So the idea is that the DVD player is not interpreting the video at all.
Whatever is on the DVD it basically puts out to the TV. Now the Mac environment's actually different. Luke just spent all this time describing all the wonderful mechanisms that we have on the Mac in terms of being a fully color managed system. And there are some subtleties here that I think it's important to remember.
One is that the display profile is actually calibrated, and that calibration includes whatever the nature of the ambient environment is at the time of calibration. So that means that where the original video standards included this gamma boost, that boost isn't needed, and it also isn't desired, in the environment that we have, because our color management system is providing whatever is appropriate for the given environment. That's a very good thing, but it makes our lives more complicated.
And then the other thing to recall is that the display buffer on the Mac isn't at whatever the nominal video gamma is. So we have to do some work to figure out what the native video gamma is so we can provide the conversion. In Snow Leopard we're providing the conversion to the new 2.2 display buffer gamma. Prior to Snow Leopard, on Leopard and before, we matched to the 1.8 value.
So that leaves us with the problem of interpreting the video standards, to try to figure out what the actual gamma of video is in the standard. A lot of people look at these standards, look for numbers, don't find what they're looking for, find something else, and go use that. But there are ways that we can derive these values. You don't find this derivation in the video standards, but there are some more modern standards, usually from the computer industry, that do provide a derivation, and they come up with numbers very similar to the numbers that we use.
We're using a value of approximately 1.96 as the gamma of video. So very quickly I'll walk through two ways to derive that information. CRTs, if you measure them, have a gamma of somewhere between 2.4 and 2.5, and you see these varying numbers because it really is dependent on how the brightness and contrast is set on the monitor.
So if we pick a value halfway in between, maybe 2.45, and you remove that 1.25 adaptation that we already talked about, then you'll get a value very similar to that 1.96. As you may know, a gamma function is a power function, and to remove a boost you don't subtract the exponents, rather you divide, so if you take 2.45 divided by 1.25 you get a value very similar to our 1.96 value.
Also, if you look to the rec.709 standard and you see this piecewise gamma function and if you go back to your calculus we can actually solve for the area under that curve and that sort of area under the curve is proportional to what we call tonality which is a real fancy way of talking about the brightness or contrast that curve represents. So if you solve for a pure power function that has an area similar to that of what's underneath the piecewise function you'll also get a value very close to this 1.96 value.
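For anyone who wants to verify the two derivations numerically, here is a small check. The rec.709 transfer-function constants (4.5 slope, 1.099/0.099, 0.018 break point, 0.45 exponent) are the published ones; the tonally equivalent pure gamma comes out a little under 1.96 with this simple treatment of the toe, which is the same ballpark as the value the session uses.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    // 1) CRT gamma with the ~1.25 viewing-environment boost removed.
    printf("2.45 / 1.25 = %.2f\n", 2.45 / 1.25);

    // 2) Area under the rec.709 piecewise OETF, then the pure power curve
    //    L^(1/g) with the same area; its area is g/(g+1), so g = A/(1-A).
    double area = 0.0, dx = 1e-6;
    for (double L = 0.0; L < 1.0; L += dx) {
        double V = (L < 0.018) ? 4.5 * L : 1.099 * pow(L, 0.45) - 0.099;
        area += V * dx;
    }
    double gamma = area / (1.0 - area);
    printf("tonally equivalent pure gamma = %.2f\n", gamma);  // roughly 1.9-2.0
    return 0;
}
```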
So why color manage video? It's basically the same motivation as Luke mentioned for 2D, and this is really important. What we want to do is to reproduce the intention of the author of the video content with as high a fidelity as possible. So that means when the producer, the director, the cinematographer and the post-production people sit down behind their broadcast monitor and they say it's a wrap, we like this, we want to make sure that we provide that same video response, that same look, across all the different displays that we produce. And what may not be obvious is that the different displays that you may have available to you are actually very different.
So LCDs that are in portable laptop computers tend to be optimized for brightness and they sacrifice some saturation for that. A cinema display, on the other hand, provides very wide gamut high saturation but, of course, it takes more power to use. So it's also important if you're authoring content.
So if you're authoring or editing video you want your application to be color managed, and that way you get the same effect no matter which display or workstation you happen to be using. So, perhaps, in your editing suite you have an actual broadcast video monitor, but we want to provide as close fidelity to that experience as possible even if you're using the LCD on your laptop.
So similar to 2D color management we also tag the color intention via tags, except we use a different tag. We use a tag called 'nclc' instead of an ICC profile. So very quickly, an 'nclc' is just shorthand for describing video colorimetry. There are some common examples of 'nclc's you see in the table; this 'nclc' information can be tagged and included in a lot of places: QuickTime movies, MPEG-4 .mp4 files. It can also be put into MPEG-4 streams and other places.
So the 'nclc' has three components, and those aren't the actual values, rather they're indices into tables of values. So the first parameter is the primaries, and Luke did a very nice job of describing these. You can see the most common values we have correspond to the rec.709, SD PAL, and SD SMPTE-C colorimetries. The second parameter is the Transfer Function.
It's kind of commonly called gamma, and usually we use just this one value, the rec.709. The third and final parameter for the 'nclc' is the matrix, and this is the 3x3 matrix that's used to convert between Y'CbCr and RGB. The part that's not obvious to a lot of people is that you use a different matrix depending on the colorimetry, and this is really critical: if you use the wrong matrix it appears to largely work, but the colors won't come out correctly, and that's especially noticeable if you look at SMPTE Color Bars.
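A short sketch of why the matrix parameter matters: the Y'CbCr to R'G'B' conversion can be derived directly from the luma coefficients, which differ between rec.601 (SD) and rec.709 (HD). Full-range values are used here to keep the arithmetic simple; real video is usually range-limited (16-235 luma, 16-240 chroma) and needs the corresponding scale and offset first, so this is only an illustration of the principle, not the exact pipeline math.

```c
#include <stdio.h>

// Y in 0..1, Cb/Cr centered on 0; Kr/Kb are the luma coefficients.
static void ycbcr_to_rgb(double Y, double Cb, double Cr,
                         double Kr, double Kb, double rgb[3])
{
    double Kg = 1.0 - Kr - Kb;
    rgb[0] = Y + 2.0 * (1.0 - Kr) * Cr;               // R'
    rgb[2] = Y + 2.0 * (1.0 - Kb) * Cb;               // B'
    rgb[1] = (Y - Kr * rgb[0] - Kb * rgb[2]) / Kg;    // G'
}

int main(void) {
    double rgb601[3], rgb709[3];
    double Y = 0.5, Cb = 0.1, Cr = 0.2;               // same pixel, two interpretations

    ycbcr_to_rgb(Y, Cb, Cr, 0.299,  0.114,  rgb601);  // rec.601 coefficients
    ycbcr_to_rgb(Y, Cb, Cr, 0.2126, 0.0722, rgb709);  // rec.709 coefficients

    printf("601: %.3f %.3f %.3f\n", rgb601[0], rgb601[1], rgb601[2]);
    printf("709: %.3f %.3f %.3f\n", rgb709[0], rgb709[1], rgb709[2]);
    return 0;   // the two results differ: the wrong matrix shifts colors
}
```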
[ Background noise ]
[Ken Greenebaum]
So, finally, what's new? So we have a GPU Accelerated Pipeline in QuickTime X and it's providing not only the color management but it's also now providing a Chroma Siting-based Upsampling from the Y'CbCr's chroma subsample space to RGB's 4:4:4. I'll go into details on that in a moment.
We also provide colorimetrically correct Export, Capture, Screen Capture. We're always color managing content and that's a change from the past. We'll talk about that as well. We have an Automator action that allows you to provide your own tagging. And additionally and, I think, most excitingly we're providing consistent color management of video across the platform.
[ Background noise ]
[Ken Greenebaum]
So very quickly, the steps that we go through for upsampling Y'CbCr to 4:4:4 are basically controlled by this new tag, the chroma tag, or 'chrm'. The 'nclc' also supplies which matrix to apply. So here are the luma values, represented in gray. These orange boxes, or pixels, correspond to the Cb and Cr chroma values.
And you can see that the orange values are aligned with the leftmost array, or column I should say, of the luma pixels, so we consider this to be left siting. By applying a GPU-based filter we actually perform the conversion into RGB while also providing a filter-based, correct upsampling to RGB 4:4:4.
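Here is a CPU sketch of the kind of siting-aware upsampling that GPU filter performs for left-sited 4:2:2 chroma: each chroma sample lines up with an even luma column, and the odd columns are interpolated between neighbors. This simple linear interpolation is an illustration, not QuickTime X's actual filter; for center siting (described next) the same idea applies, just with the filter phase shifted by half a luma pixel.

```c
#include <stddef.h>

// One row of one chroma plane: chromaWidth samples in, 2*chromaWidth out.
void upsample_chroma_left_sited(const unsigned char *chroma, size_t chromaWidth,
                                unsigned char *out)
{
    for (size_t i = 0; i < chromaWidth; i++) {
        out[2 * i] = chroma[i];                        // co-sited with luma column 2i
        size_t next = (i + 1 < chromaWidth) ? i + 1 : i;
        out[2 * i + 1] =                               // halfway between neighbors
            (unsigned char)((chroma[i] + chroma[next] + 1) / 2);
    }
}
```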
Now if we slide over the orange values somewhere halfway in between the two adjacent luma values then you get another siting; this is called center siting, and we also correctly handle this case as specified in the chroma tag. So one of the new features is that we make sure that when we export that it's done in terms of ColorSync and it's done correctly.
So primarily that means two things. One, if there's a color space conversion that's necessary, not only do we use the appropriate matrix, but now we also provide a ColorSync match. And, two, we make sure that the video that we produce is properly tagged so it can be processed correctly. In the case of Apple TV, Apple TV is a modern and color managed platform like the Macintosh, so if you started with rec.709 HD content you can leave it in that color space and Apple TV will preserve it correctly and will provide the proper color management. When you export to an iPod or an iPhone, those devices actually have SD output, so we perform a conversion from HD to SD, and that goes through a new step where we actually make sure that the color values are adjusted so they appear correct.
The new QuickTime X QTKit Capture mechanisms now inquire from the camera what the color space is and make sure that that 'nclc' information is piped all the way through the pipeline so that previews are ColorSynced properly, any color conversions are performed properly and the end result is also tagged.
Also, the new QTKit Capture can capture from the display, and if you think about it, the display buffer of the computer isn't in a video color space at all; it's actually in an ICC profile that corresponds to the display. So before RGB information from the display can be handled by the encoding pipeline, you have to perform a conversion from that RGB device space into an RGB video space.
In our case we use rec.709 RGB. So the end result is that the captured movie, when played back, will have very good fidelity with the original that you may have seen on the display, and it'll be true video, so you can take it and put it on a DVD or use it somewhere else.
Otherwise, if you naively captured RGB data, ran it through the video pipeline and played it back again, you would notice that the colors aren't quite right and, perhaps even more importantly, the gamma would be off, it would be shifted. So QuickTime X now color manages all content. That's a change from past behaviors. We like it because it's consistent; there's no guessing, nothing based on the size or any other information.
We've chosen to treat untagged content as the classic SD colorimetry, which is as good a guess as any. And you get this behavior with all modern content, which is H.264 and other formats. One thing that's important to know is that the QuickTime X Player will fall back to the QuickTime 7 pipeline and the QuickTime 7 behaviors with legacy content and also legacy modes. The QuickTime 7 Player always uses the QuickTime 7 pipeline. So there's a new tagging tool, an Automator action, which allows you to tag all of your content, and it's pretty easy to set up a workflow using Automator.
That's especially important for old content that's untagged, maybe content that's coming from an external source and anything that's incorrectly tagged. So I'm very excited that we're providing consistent color management which means we're using the same color math across all the applications on the platform.
[ Background noise ]
[Ken Greenebaum]
So Luke quickly mentioned the new 2.2 gamma on Snow Leopard, and you may be wondering what that means to video on the platform. The good news is that chances are there's absolutely no impact to you. Applications using the QuickTime visual context as well as QTKit have always been color managed, and because of that they'll continue to be color managed on Snow Leopard.
You may get different values: for instance, where previously on Leopard, with the default 1.8 display profile, the 75% gray bar was represented with a value of 186, now on Snow Leopard it'll be represented with a level of 198, but the good news is those appear to be identical.
[ Background noise ]
[Ken Greenebaum]
Also, if you're using an old application with an old framework, and you shouldn't be anymore, like the classic QuickTime Carbon GWorld path, there's also no change. Those frameworks were never color managed; they would process the 75% gray value and produce 179 levels, and they still do on Snow Leopard. And it may not be obvious on the slide, but there will be a subtle change in luma.
Basically on Snow Leopard 2.2 the end results will be a little bit dimmer, and that's exactly the same behavior as any old Carbon application. They look a little bit darker on Snow Leopard's 2.2 display buffer. So this next section is fairly advanced and I'm going to go through it very quickly as well.
So very important things to remember is that your pixels are changed by Color Management. That means if your application is trying to render one value out Color Management's going to change it, and if you read the resulting pixel values back maybe using Digital Color Meter you're going to get a different value.
And that's a good thing that's the Value Add that we're providing, but you should remember that. Also, as I mentioned earlier, that different displays have different display profiles that means you're going to get different results. And what may not be obvious is that even between two seemingly identical displays like two cinema displays those may have different display profiles and they may produce different color results.
So if you slide your video between the two displays and read the values back you may get different values, and, again, that's a good thing. So these are some things you should remember if you want to evaluate color on your own application. One, SMPTE Color Bars is the standard way to accomplish this and it's a very good thing to do, but remember garbage in, garbage out.
Those SMPTE Color Bars have to be tagged themselves. So 75% bars in an 8-bit space have that 191 level; 191 happens to be 75% of 255, which is where that comes from. ColorSync, in our case, won't change a gray value if you have a perfect gray, which means the red, green and blue components all have the same values; they still will after being color managed, but they may be at different levels, because the gamma shift, or the gamma adaptation, will make them different. Pixel values that aren't gray, like the 75% green {0, 191, 0}, may change.
Luke went through a detailed description of this. So here's some values that we may see if you read your 75% bars back. On Snow Leopard your 191 value will be processed by a 1.96 to 2.2 gamma conversion producing 198 levels. That's the default QuickTime X behavior on Snow Leopard.
QuickTime 7 on Leopard and earlier provided a 1.96 to 1.8 color correction that produces these 186 values. You may also have seen 179 values. They're the result of a very early QuickTime behavior, and that's the same behavior that's used in Final Cut as well as QuickTime 7 on untagged content. It's the result of a 2.2 to 1.8 conversion. If you see 191s, that means there was no processing done, or for whatever reason it tried to do a 2.2 to 2.2 conversion.
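A quick way to check those gray-bar numbers: converting an 8-bit gray level between two pure gamma encodings is out = 255 * (in/255)^(srcGamma/dstGamma). The real pipeline uses the full transfer curves, so this pure-gamma approximation lands within about a level of the values quoted above rather than matching them exactly.

```c
#include <math.h>
#include <stdio.h>

static int regamma(int in, double srcGamma, double dstGamma) {
    return (int)lround(255.0 * pow(in / 255.0, srcGamma / dstGamma));
}

int main(void) {
    printf("%d\n", regamma(191, 1.96, 2.2));  // ~197, vs. the 198 quoted: QuickTime X on Snow Leopard
    printf("%d\n", regamma(191, 1.96, 1.8));  // ~186: QuickTime 7 on Leopard
    printf("%d\n", regamma(191, 2.2,  1.8));  // ~179: legacy QuickTime / untagged content
    return 0;
}
```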
You shouldn't see those values. So these are the concepts that I really want you to remember and take away with you, and that is that it's really critical that all content is tagged with an 'nclc'. You can use the new tagging Automator action to perform that operation.
And if you have applications, or if you have components and codecs that produce video, please make sure that you provide the 'nclc' information in the files that you write. Untagged content is color managed in the QuickTime X world; it's treated as SD. And lastly, video is processed consistently across the platform, and this is true for all modern applications. So if you want more information you can contact Allan Schaffer.