WWDC02 • Session 501

Quartz 2D & PDF

Digital Media • 1:08:05

Quartz 2D creates the visually rich, anti-aliased, and semi-transparent graphics of Mac OS X. This session illustrates how developers can integrate the full power of the Quartz 2D graphics system into their Mac OS X applications. The focus is on important Quartz 2D features such as device/resolution independent rendering, advanced drawing model, transformations, and support for PDF.

Speaker: Derek Clegg

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Good afternoon everyone. Welcome to session 501, which is Quartz 2D & PDF. I'm Travis Brown. I'm the graphics and imaging evangelist. And one of the technologies I work with and help drive developer adoption with is Quartz 2D. And one thing that was important that Avi mentioned in yesterday's keynote is for developers to start considering creating applications that run only on Mac OS X. One of the fantastic opportunities you have in deploying Mac OS X only applications is leveraging technologies that only exist on X.

And one technology that we're really proud of and we think offers a lot of benefit for the developer and the end user is Quartz 2D, which is a new 2D drawing library. It's very powerful. It's based on a different imaging model from QuickDraw, which we're very familiar with, but it also suffers from none of the limitations that QuickDraw has saddled us with for nearly the past 15 years. So it's interesting to see that certain developers have already gone ahead and done this.

We held up Microsoft as an example in a couple sessions where they decided to deliver Office 10 only on Mac OS X and decided to begin to move their Carbon application from QuickDraw-based calls, to sort of incrementally begin to move that application over to using Quartz 2D for part of its rendering.

And it's something that's very possible for all applications to do. So you don't have to really consider your development cycle to engage new Mac OS X technologies and particularly Quartz 2D to be an all or nothing proposition. And that's something you really need to be thinking about as we go through today's presentation where we talk specifically about the advantages of Quartz 2D and also how it works and a lot of new features that we've put in specifically upon developer request. So to help out with the rest of the session, I'd like to invite Derek Clegg with Quartz Engineering to stage to take you through the presentation.

My name is Derek Clegg. I'm the principal engineer responsible for the API in Quartz 2D. And today we're going to discuss sort of a general overview of what Quartz is. We're going to talk a little bit about the Quartz architecture, but mostly spend our time on the Quartz APIs. We'll have a little bit of demo thrown in here and there along the way.

So what is Quartz? Well, as you probably have seen in the earlier sessions, it's the underlying graphics system on Mac OS X. There are two components of it that are important today. There's Quartz 2D, which is the API level that you call, and there's the Quartz compositor that lives underneath that.

This is a diagram you may have seen in an earlier session. Quartz 2D, the one little lozenge, lives on top of the Quartz compositor as well as OpenGL and QuickTime. All of those talk to the lower level piece of the system which actually takes Windows and composites them up on the screen.

The compositor itself is treated in a separate session, one right after this one. It's very interesting. You may have seen some of this earlier in Peter's talk. I definitely would recommend going to that talk if you're interested at all in the compositor. I won't talk about that today. I'm going to be focusing on the higher level APIs that you call in your application.

So what is Quartz 2D from your point of view? It's a low-level, lightweight rendering library. So it lives underneath a lot of the other parts of the system, underneath the main frameworks, Carbon and Cocoa. It's lightweight in that the APIs are very simple. There's a real clear design choice to make the APIs powerful but not sort of give you an entire pile of things that you have to learn. It's very simple and easy to use. And of course it's 2D only. We're not trying to do any 3D. We're just focusing on 2D.

It's resolution independent. What that means is that whether you're talking to a 300 DPI printer or a 72 DPI screen, the API that you use, the calls you make are exactly identical. So you don't have to worry about the final destination in terms of what your calls are doing. Similarly, device independent, a printer, a screen, a bitmap, all the same API calls that you make. So your application doesn't have to worry about all that kind of stuff.

And that can be very powerful. Because we want to provide high quality text and colors, we leverage Apple technology for font management. We use ATS so we get all of the Type 1, TrueType, and so on font support. And we use ColorSync for color rendering. That guarantees you get high fidelity color, so what you see on the screen is going to be what you end up with on the printed page. So who uses it currently? Well, obviously the highest level parts of the system do. Cocoa and Carbon both use Quartz 2D very heavily.

In fact, Cocoa is very closely matching the Quartz 2D APIs. In many cases, if you use a Cocoa application, if you write one in Cocoa, you don't actually need to go down to the Quartz 2D level because a lot of that stuff is already handled at the Cocoa level.

Carbon, however, as was mentioned earlier, still has cases where it's using QuickDraw, but it's also starting to use Core Graphics/Quartz 2D more and more. And so that's a place where we're also having the framework live on top of Quartz 2D. Java also uses it for many things.

A lot of third-party applications are switching over. As Travis mentioned, Microsoft has started to use Quartz 2D almost exclusively for a lot of their drawing. And of course, your application can use it. The way your application would use it is by using the Core Graphics framework. The APIs are called Core Graphics for historical reasons. I'm going to use both Core Graphics and CG and Quartz 2D interchangeably. They're all the same basic idea.

The advantages of Quartz 2D over QuickDraw are pretty straightforward. In principle, you get very high quality 2D graphics. In particular, you get anti-aliased rendering for all of your drawing, sometimes whether you like it or not. Some people have issues with anti-aliased rendering of text, but for regular drawing, for vector art and so on, nothing beats it. It makes your application look really smooth and really nice. And of course, you get transparency, which allows you to do fades and do overlays and stuff like that in a very simple way. Quartz 2D can also be used for off-screen rendering.

You can create an off-screen bitmap that you draw to just the same way you would draw on screen or to an output printer or something like that. And then once you have that off-screen bitmap, you could then, for example, draw it back into your application or send it out as a special image, anything like that that you want to do. For PDF document import, if you have PDF files that you're interested in bringing into your application, Quartz 2D supports that and lets you draw any page in the document very easily.

And similarly, if you want to export PDF in your own way, not necessarily going through the printing system, sort of the print preview feature, but by creating your own PDF files, it's very easy to do that with Quartz 2D. So as a simple example, here we have a nice, you know, perfectly fine Excel document drawn using QuickDraw.

It's really good, but you sort of, you know, you can't quite figure out what's going on behind that blue wall. So if you use transparency, you can get a much nicer effect. So this is actually a pretty simple example, but it shows sort of how your application might use transparency, maybe you might use anti-aliasing and so on, to get a better result in the final, for your final output. Okay, so that's the basic overview of the architecture.

And now we're going to talk a little bit more technically about some of the, what we call the core graphics types, the basic pieces of the system that you use. I'm going to talk about all the types that are available, but I'm mostly going to focus on the new things. I want to make sure that we cover everything for people who are new here this year, but the primary focus is going to be on the new APIs.

So what kind of types are available for Core Graphics? Well, context. Context is sort of the workhorse. Everything goes through a context. Once you have a context, you can use a path to draw to the context, fonts, images, PDF documents. Whoops, excuse me. If you're drawing, once you start drawing things, of course you're going to be working with colors and color spaces.

Patterns are new this year; they allow you to do replicated drawing. Shadings for gradient fills are also new this year. We have additional functions to let you manage geometry and affine transforms, some convenience functions. And data providers are the way you get data into and out of Core Graphics. So let's talk about each of those in a little bit more detail. The context is the principal thing. Everything goes to the context. If you don't have a context, you're not drawing. So you've got to start with the context.

It abstracts the device and represents sort of wherever you might be going to the destination. So you have a context for a printer, for a PDF file, for a bitmap, for the on-screen rendering, all the same basic thing. It all comes down to a single context you talk to. Additionally, the context keeps track of graphic state information for you, so you don't have to be resetting that information every single time.

It'll track the color, or the line width, or various other parameters of your drawing. And the state that contains all that information can be saved and restored. So you can save the state, change some of the parameters, do a little bit of drawing, and restore back to the original state. So you don't have to do as much tracking yourself as well.

The contexts that are supported are the same as last year. The window context, which is created for you by Carbon or Cocoa; still, that's the way you get to a window context. The PostScript context, usually created by the printing system when you're going to go to a PostScript printer. A PDF context also is often created for you by the printing system, but it can be created by you directly if you want to create a PDF file; the CGPDFContextCreate function allows you to create a PDF context directly to draw into.

And then off-screen bitmaps, or just regular bitmaps, CGBitmapContextCreate, you pass in data, you pass in a bunch of parameters that say how the data is arranged, and then all of the drawing will end up on that context. Sorry, onto that data for you. So the primitives that we have this year, there are a couple of new ones and some old favorites.
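
As a rough sketch of the bitmap-context case just described (not from the session materials), here is what a CGBitmapContextCreate call looks like in C. The helper name, the buffer ownership, and the RGBA pixel layout are illustrative choices.

```c
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

/* Create an offscreen RGBA bitmap context (8 bits per component).
   Every drawing call made against the returned context lands in the
   calloc'd backing buffer supplied here. */
static CGContextRef CreateRGBABitmapContext(size_t width, size_t height)
{
    size_t bytesPerRow = width * 4;              /* 4 components per pixel */
    void *data = calloc(height, bytesPerRow);    /* zeroed backing store   */
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

    CGContextRef ctx = CGBitmapContextCreate(data, width, height,
                                             8,          /* bits per component */
                                             bytesPerRow,
                                             space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    return ctx;  /* the caller owns ctx and, separately, the data buffer */
}
```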

Basic drawing is pretty much broken down into vector geometry, text, and images. That's a lot of the basic components of drawing any sort of graphics you want to do. To do vector geometry, we have this year a CGPath type, which lets you abstract away how a path is represented in the system.

There's also, of course, as in the past, paths built into the context itself. But we now have a separate type that lets you record a path and keep track of a path independently from the context itself. CGFont is for text, CGImage, of course, for images, and CGPDFDocument for PDF document import.

This year we have added patterns, so that lets you do repeated drawing in a very easy way. And CGShading, that's our way of abstracting gradients, typically either an axial or radial gradient. So let's switch to demo one, and we'll have a little demo of vector geometry and path objects in Core Graphics. Let's see.

So as you see, here's a simple application. It's a Cocoa application that I wrote in, I don't know, a couple hours. Every Cocoa application, they always say, oh, you can write it in 10 minutes, and it turned out a little longer than that, but it was still pretty fast. What I'm showing you here is a Cocoa application that's using native Core Graphics calls to do all of the drawing inside the windows. So I haven't done any optimization, so you'll see it's actually pretty fast without actually doing anything special at all.

So this is an example of drawing a path. In this case, I'm drawing a rectangle. As you can see, the path is the rectangle itself, and here I'm both filling and stroking. To show you the actual path element that I have created, the actual thing that I tell the context to draw is this red line in the middle. So you can see that what that means is that when we stroke, for example, we're actually stroking on both sides of the path, inside and outside, and the fill is all inside the path.

If I turn off the stroking, you can see I'm just filling the interior of the path. So let's turn that off. Now, paths, of course, can be obviously rectangles. Those are pretty simple. Let's get this back down to a little better. Circles, pretty easy. You can draw an oval if you want. That's no different from drawing a circle.

Some people have asked in the past for Core Graphics to have some sort of simple way to do rounded rectangles. It's actually very easy, and there are some code samples available that show how to do this. Another very simple thing is stars. Pretty much anything you want. You know, this is all pretty basic stuff. A path itself doesn't have to be connected. You can have a single line segment that's not filled.

And then we have some additional ways of modifying the way the path looks. So let me just increase the line width. And here you see my curve. So one thing that's interesting, on the ends of the path, I can change the way that looks. Right now I have what's known as a butt cap.

In other words, the stroked region ends precisely where the path ends. If I want, I can change that to a round cap so I have a nice little round corner. And I can also change that to a square cap. So I have a square end added to the end of my path. Now, some people, when they're drawing, I mean, this is all fine if you're just sort of naively drawing a single curve.

If you're drawing 10 or 20 or 30,000 lines at a time, doing a round line cap isn't really very efficient because we have to do an awful lot of calculation to fill in that little end point. So you typically, for lots of line drawing, you want to use a simple butt cap. Additionally, for shapes, let's bring up the star.

Oops, I'll start in the center. As you can see, we have this path that consists of several different line segments. The way the ends of the line segments are drawn is controlled by a line join parameter. So here we have a miter line join, but if I like, I can also change that to round.

So you can see here at each point, I'm having a nice round join of the curve, or I can change that to a bevel, so that just chops it off at each angle. And that, you know, depends on what you're looking for. But again, round is a little bit more expensive, so if you do that, you're going to be slightly slower. Miter's actually pretty cheap.

But you can see for a simple application like this, here I'm actually really limited by the fact that I have 60 hertz refresh rate. I mean, when you're not limited by that, it actually draws pretty much instantaneously. And of course, colors, you can change the colors. You can make the color be anything you want for the fill or the stroke.

So this is all just sort of basic CG drawing. It's pretty simple. I mean, you've probably seen applications like this for $1,500. It's pretty simple. But this is just to show that you have that basic structure. So if we could switch back to the slides, and I can find my little thingy.

So as I mentioned, paths can be both simple and complex. On the upper left you have a path that's just a single straight line, but it's dashed. This is a parameter that you can change in the graphic state, the way a path is drawn. You can ask for it to be dashed for you. A path doesn't have to be closed, it can just be a single set of lines in the upper middle. The path can be relatively complicated. So here we have the state of California, relatively complicated path filled and stroked.

Oh, that's interesting. So in the lower middle, you see two paths that look identical. However, it turns out they shouldn't look identical. The one on the left is an example that a path can be self-intersecting. And in this case, the path is filled with what's known as the even-odd rule. So the center is not filled. The one to the right in the middle here should have the center filled.

That's an example of a different fill rule, which is called the winding number fill. So imagine, if you will, the center filled in that star. And then over to the far right on the bottom, you can see here's a path that's created out of two separate independent subpaths. In this case, it's two circles, one inside the other. So a path can consist of multiple independent segments of paths. And here, when we fill it, we just get the ring, not sort of the whole thing.

This year, we've added, as I mentioned before, a path type. This sort of works like the Core Foundation model, if you're familiar with that. You create a mutable path, which means one that you can modify. So there's a call, CGPathCreateMutable. Then you add line segments and so on to it. So there's a function CGPathMoveToPoint, add a line to a point in the path, add a curve, add a quadratic curve, and so on.

And then because those are sort of the primitive notions of the path API, we also have some convenience functions that make it easier to do something more complicated. So we have a CGPathAddArc function, which will add a circular arc. Excuse me, we have a CGPathAddRect function, which adds a rectangle. So there are some common things that you might use frequently in your own application that we've added as convenience functions in the API.

And then we also provide DTS sample code to do some even more complicated things. So while we don't have an "add oval" function, it's about four lines of code. There's a DTS example that shows you how to draw an oval for a path. Same thing with rounded rects. We don't supply that, but again, it's a handful of lines of code, and you can just download that directly.

And then once you've created a path to draw with, you call CGContextAddPath. That takes the path that you've created and adds it to the context for later rendering. And here, as an example, I might fill it or I might stroke it. I could do other things with it and so on.

There are several operations you can do once the path is in the context. You can certainly fill it. You can stroke it. You can do combinations, like fill and stroke together, if you want. And you can clip to it, which means that you constrain all future rendering to be just within the inside of the path.

I'll have an example of that on the next slide. When you go to stroke, there are a couple of parameters in the graphics state that let you change the look of the path. You can change its width. You can change the line join, which I showed before, how the two line segments are joined; the line cap, what's at the end, whether it's round or square; and so on. Some other parameters: the miter limit, the line dash.

So here's an example of using a path to clip. In this case, we have a path that consists of two subpaths. The whole path is the apple. There's two subpaths, the leaf and the little apple with a bite out of it. Those are two independent paths, but they all can join together to make one single closed path. I take that path and I add it to the context, and then I tell the context to clip. In other words, what I'm telling the context is, from now on, only draw anything that shows through the apple.

In this case, what I'm going to do is then draw a bitmap, and what happens is that only the part of the bitmap that's inside the apple gets drawn. So that can be really powerful in some circumstances when you're trying to do complicated drawing. You have the path outline, but you don't necessarily want to convert that to an image that you need to filter and all that stuff.

You can just let Core Graphics do that for you.
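
As a minimal sketch of the clip-then-draw technique just described (the apple example), assuming you already have a CGPathRef and a CGImageRef in hand; the helper name is made up for illustration.

```c
#include <ApplicationServices/ApplicationServices.h>

/* Clip all subsequent drawing to an arbitrary path, then draw an image
   through it; only the pixels that fall inside the path are rendered. */
static void DrawImageClippedToPath(CGContextRef ctx, CGPathRef clipPath,
                                   CGImageRef image, CGRect imageRect)
{
    CGContextSaveGState(ctx);         /* so the clip can be undone afterwards */
    CGContextAddPath(ctx, clipPath);  /* install the path in the context      */
    CGContextClip(ctx);               /* intersect it with the clip region    */
    CGContextDrawImage(ctx, imageRect, image);
    CGContextRestoreGState(ctx);      /* back to the unclipped state          */
}
```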

Here I'm going to create a path which is a circle. Because I want to, maybe I love circles. I want to draw thousands of circles or something. So in this case what I'm going to do is I create a path, CGPathCreateMutable starts me off. I add an arc.

In this case I'm going to add an arc from 0 all the way round to 360 degrees. Now, at that point it's still an open path; it's not closed. So then I call CGPathCloseSubpath to actually close it. Otherwise, if I didn't do the close, I'd have line caps on the ends of my curve, which would look pretty ugly.

And then I call CGContextAddPath to add the path to the context. And I call CGContextFillPath to get the nice filled circle. Now what's nice is that once I've created that circle, I can use it over and over again. I can draw, as I said, thousands of circles. But I can also do more clever things.
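
Here is a sketch in C of the circle walkthrough just described, plus the CTM trick mentioned next; the function name and the center and radius values are placeholders.

```c
#include <ApplicationServices/ApplicationServices.h>
#include <math.h>

/* Build a reusable circular path, fill it once, then fill it again with the
   CTM stretched in X so the same path comes out as an ellipse. */
static void FillCircleTwice(CGContextRef ctx)
{
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddArc(path, NULL,           /* no per-element transform         */
                 100.0, 100.0, 50.0,   /* center (x, y) and radius         */
                 0.0, 2.0 * M_PI,      /* a full sweep, angles in radians  */
                 false);
    CGPathCloseSubpath(path);          /* avoid line caps on an open arc   */

    CGContextAddPath(ctx, path);       /* first fill: a plain circle       */
    CGContextFillPath(ctx);

    CGContextSaveGState(ctx);
    CGContextScaleCTM(ctx, 2.0, 1.0);  /* stretch X, leave Y alone         */
    CGContextAddPath(ctx, path);       /* same path, now drawn as an ellipse */
    CGContextFillPath(ctx);
    CGContextRestoreGState(ctx);

    CGPathRelease(path);
}
```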

For example, one of the things I can do to a context is change the way it transforms information from what you provide to the destination. For example, I can scale the context. And I can do that in one dimension and not in the other: I can stretch it in X and not stretch it in Y.

And what I'll get when I draw the path then,

[Transcript missing]

Okay, so that's vector geometry. Now if we move on to text, Core Graphics has support for text but it is not something that we have a lot of API for. Here you can see, though, we do an awful lot of interesting things with text.

We certainly support, you know, fonts that are relatively complicated. That's the welcome font up there. That's Zapfino. That's drawn using Core Graphics. We support Chinese, Korean, Japanese, and so on text. You can take text and you can rotate it. That's the example in green there. And you can fill and stroke it.

That's the stroke example. All those effects are just straight Core Graphics. There's nothing special about them. However, that said, we do have a type that represents a font, CGFont. And it does support TrueType, Type 1, and CID fonts. CIDs are for Korean, Japanese, Chinese, and so on.

But we still recommend, if you're going to do text rendering, [Transcript missing]

The type that you use to represent image data is CGImage. We support a lot of color spaces: RGB certainly, CMYK. You can have ICC profiles, that's the same thing as ColorSync profiles, associated with the image. A lot of cameras, the JPEG data that comes off now will have a profile associated with it. So that means that you're going to get high fidelity, color-managed images. As I mentioned before, we have support for alpha channels, transparency.

Alpha can be pre-multiplied or not pre-multiplied, that's sort of technical, but that's good. And we support image masks as well, which are a way to specify an 8-bit deep or 1-bit deep or some depth of image and draw color through it and the color will only show up where the image is non-zero.

In Jaguar, we've added support for extended bit depths. We now go from 1 to 32 bits per component. That's not total bits per pixel, that's per component. So you can have an RGB image with 24 bits of red, 24 bits of green, 24 bits of blue, 24 bits of alpha, and we'll render it correctly. So that can be really significant for people who need to have high bit depth images. And that seems to be happening more and more as more things start getting deeper in terms of the bit depth of the components.

The one thing about that though is that we don't have output formats yet that adequately represent that. So while we can bring in 32 bits per component, we can't write it out and preserve that 32 bits. We'll always truncate it. In time, we expect the image formats will catch up and we'll support the ones that already do support deeper bit depths. But currently, most of them are limited to 8 bits per component. So you might have some problems on output, but on input, we can support all of that.

Additionally, we've built in support for high fidelity color managed images. Again, this is based on ColorSync. So ColorSync will take the data that comes in the image and match it to the destination so you'll get the correct look as you go through the system. But for that to work, your image has to have calibrated colors. Your image needs to have a color space associated with it. If it doesn't, then you're probably going to need to supply that yourself. And I'll talk about that a little bit more later.

So if you want to use images, it's actually really easy. You first create a data provider to supply image data. So what's a data provider? A data provider is the way you give us information from your side of things into CG. It's sort of a general workhorse that we have that is based on a callback mechanism, so you can do a lot of things with a data provider to give us image data. Once you've created one of those, you call CGImageCreate. That passes a lot of parameters to specify the format of the image data. And then CGContextDrawImage. So here's a code sample to illustrate that in a little bit more depth.

As you can see, we imagine here that what I have is a big blob of memory that represents my image. It's already, you know, ready to go. It's in memory. It's in that data parameter, and it's a certain size. So I start out by creating a data provider with data, passing in the data that I have and the size of the information, and that creates the data provider for me. Then I create the CGImage. I pass in all these parameters-- the width, the height, the number of bits per sample, the number of bits per pixel, the bytes per row. I mean, you can read the list.

But all of those things tell Core Graphics how to interpret the data, whether the data's RGB or CMYK and so on, whether it has alpha or not. All of that stuff is part of the process of creating an image and letting us know how to interpret the data. Once you've created the image, you release the provider, because you don't want to leave it there. You don't want to leak memory, and the image itself will retain the provider for you. Works just like the auto-retain-release mechanism in Cocoa and Core Foundation.

And then once you--now you have an image ready to draw. You can draw that 100 times, 10,000 times, as many times as you want. In this case, I'm going to draw it once. So I want to draw it at a certain place in my context. So I create a rectangle. I'm going to give it the origin, x, y, the width and height that I want to make the image.

And then I call CGContextDrawImage, passing in the rectangle, the rectangle that I want to map the image to, the image itself and the context. And then now that I'm done with the image, I release it. As I said, if you didn't release it-- well, if you didn't release it, you'd leak memory. But if you wanted to draw it multiple times, once you've created the image, you can just call that CGContextDrawImage call as many times as you want, and the image will get redrawn every single time.
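
Putting the steps just walked through into one place, here is a hedged sketch of wrapping an in-memory, interleaved 8-bit RGB buffer as a CGImage and drawing it once; the helper name and the assumption that the buffer outlives the drawing are illustrative.

```c
#include <ApplicationServices/ApplicationServices.h>

/* Wrap width*height*3 bytes of interleaved 8-bit RGB as a CGImage and draw it.
   No release callback is supplied, so the caller keeps ownership of `data`. */
static void DrawRGBBuffer(CGContextRef ctx, const void *data,
                          size_t width, size_t height, CGRect where)
{
    size_t bytesPerRow = width * 3;
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, data, height * bytesPerRow, NULL);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

    CGImageRef image = CGImageCreate(width, height,
                                     8,            /* bits per component */
                                     24,           /* bits per pixel     */
                                     bytesPerRow,
                                     space,
                                     kCGImageAlphaNone,
                                     provider,
                                     NULL,         /* no decode array    */
                                     true,         /* interpolate        */
                                     kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);  /* the image retains the provider  */
    CGColorSpaceRelease(space);

    CGContextDrawImage(ctx, where, image);  /* could be called many times */
    CGImageRelease(image);
}
```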

What's interesting about data providers and the mechanism that we use to get information into core graphics is that because it is based on a callback scheme that you create, you can actually do anything you want to in that data provider. Here we have the source data comes in, you write your own routines to sort of decode it, figure out what it does, and so on, and then pass it into CG image.

Here, for example, I'm imagining that I might have, instead of an interleaved image, which is what CG does understand, RGB, RGB, RGB, RGB for each pixel, an image that's planar, so I have a whole plane of all the red components, a whole plane of all the green components, a whole plane of all the blue components. Now CG doesn't know how to render that image.

It doesn't understand those. However, if you create a custom data provider that knows how to take that data and interleave it in a form that CG does understand, suddenly we're able to render those images, and that's actually very powerful. Here, as an example in this code case, I've created a couple of callback functions that I'm going to put into my CGDataProviderCallbacks structure.

I've imagined that I have a planar get bytes function that knows how to take the planar data, interleave it appropriately, and write it out to a destination buffer. A planar skip bytes function, that's used so if, for example, Core Graphics wants to only display the bottom half of an image because the rest of it's clipped or something, we would call the skip bytes function to allow you to skip forward rapidly in the image without actually doing all the decoding and all the work yourself.

A rewind function, so if we need to draw the image multiple times, we just sort of tell the data provider, oh, go back to the beginning, we're going to start drawing something again. And then the planar release releases all the extra info that you might have used to do your own work.

Then once you have the callbacks, you create the data provider with CGDataProviderCreate. You pass in an info parameter, that's the stuff you use to do your decoding, whatever you want, it's just a void star, and the callbacks structure. Now you have a provider that can be used, just like before, to create an image.
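
Here is a skeleton of that callback plumbing, as a sketch rather than the session's actual code. The session used the Jaguar-era CGDataProviderCreate with a CGDataProviderCallbacks struct; current headers spell the same idea CGDataProviderCreateSequential with CGDataProviderSequentialCallbacks, which is what is shown here. The PlanarInfo fields and the interleaving logic are placeholders.

```c
#include <ApplicationServices/ApplicationServices.h>

/* Hypothetical state for a planar RGB source. */
typedef struct {
    const unsigned char *r, *g, *b;  /* one plane per component */
    size_t length;                   /* bytes per plane          */
    size_t offset;                   /* current read position    */
} PlanarInfo;

static size_t PlanarGetBytes(void *info, void *buffer, size_t count)
{
    /* Interleave up to `count` bytes of the R, G, B planes into `buffer`
       here and return how many bytes were produced (0 at end of data). */
    return 0;  /* stub */
}

static off_t PlanarSkipForward(void *info, off_t count)
{
    /* Advance the read position without producing bytes. */
    return count;
}

static void PlanarRewind(void *info)
{
    /* Reset the read position so the image can be decoded again. */
    ((PlanarInfo *)info)->offset = 0;
}

static void PlanarReleaseInfo(void *info)
{
    /* Free whatever PlanarInfo owns, if anything. */
}

static CGDataProviderRef CreatePlanarProvider(PlanarInfo *planar)
{
    CGDataProviderSequentialCallbacks callbacks = {
        0, PlanarGetBytes, PlanarSkipForward, PlanarRewind, PlanarReleaseInfo
    };
    /* The resulting provider is handed to CGImageCreate exactly as before. */
    return CGDataProviderCreateSequential(planar, &callbacks);
}
```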

So here we just create a regular image, pass in similar sets of parameters. Of course, these are the parameters that are appropriate for the way you will decode the image. So, for example, if you're going to say, well, I'm going to decode to an RGB 8-bit deep buffer, then you need to say, well, the bits per sample are 8, 8 bits of R, 8 bits of G, and so on, bits per pixel, 24, and so on.

So the only thing that matters is that this is sort of the way you're planning to format your data when we call you with the data provider. But you can do anything you want with that data provider, and that can be really powerful. For example, one thing that has actually been used already, and I have an example of this I'll show you, you might choose to support a format that we don't support, I mean, in terms of what we ship. So Cineon is an image format used for movies and film and so on, and it's not something that we support as part of Jaguar, but it's something that you could write a data provider to decode the data and bring it into Core Graphics.

Similarly, you might want to do some sort of complicated or fancy image filtering. You might want to downsample your image data in some special, clever way that is top secret and patented and everything, and you could write a data provider to do that for you. Or you could do any sort of regular unsharp mask filtering or whatever you might want. Or you might want to get access to data that's normally not typically available. For example, you might want to say, oh, well, QuickTime's pretty good at understanding lots of image formats.

If only I could get QuickTime data into Core Graphics. Wouldn't that be great? You could write a QuickTime data provider that knew how to talk to QuickTime, get QuickTime to do the decoding, and then pass information back into Core Graphics to do drawing. I said Apple could write one for you. Well, I wonder why that doesn't happen. His question, he made the comment that Apple could actually write a QuickTime data provider, and I would agree, Apple should.

I think the QuickTime feedback session is a little later. So, because this is actually a pretty powerful mechanism, we have used it ourselves, it turns out, maybe not for QuickTime, but for JPEG and PNG. We've written two data providers to allow someone to get JPEG data or PNG data into Core Graphics to image on the screen.

The nice thing about these both are that they bring the image data into Core Graphics and on the screen they'll decompress it. But when they're writing out to PDF files, for example, we don't decompress. So that means that JPEG data, because PDF understands how to write, how to support JPEG natively, we can write the JPEG data directly into the PDF file, which means that your PDF files suddenly are like a tenth of the size or a quarter, you know, a millionth of the size, whatever.

But much, much smaller than they were maybe in earlier versions of Mac OS X. PNG is a little trickier in that PDF doesn't provide a native way to take the PNG file directly and just dump it into the PDF file.

However, we can take part of the PNG data and write it out, so we do do that. So those are two ways to get information out of Core Graph--I mean, sorry, from a native format with a custom data provider into Core Graphics in a special way. And I think--yes, okay. So I'm going to go to Demo2, and I'll just show a little example of images and demo. The image stuff. Yes, OK.

So here's an application. So this is a simple application I've written. This is, for example, I can bring in-- this is just a native CG image view that knows how to draw images using CG, not using NSImage. So for example, I can bring in JPEGs. If I like, I can bring in-- whoops, not PDF.

As I mentioned before, I have a special data provider that I've written which knows how to take Cineon images and bring them in. Now, there's a little delay, you'll see.

This image is about 40 megabytes, so it does take a little bit of time. This is a format that, if I didn't write the data provider, we'd never be able to render. It really takes about a handful of lines of code to get this to work. Same thing, here's another Cineon image.

Now let's go back to the JPEG. Well, actually, I'll start here. Okay, so one thing that you note, it took me a little bit of time to render that from the disk into memory. Now, one thing that, I mean, that's obviously not very good if you're trying to do really fast rendering of images or something.

So a lot of times what people want to do is they want to create an image cache. They want to take the image data and then create another bitmap context that they draw the image into and then convert that bitmap context into an image that they then reuse.

So as an example, what I'm doing here, as you can see, I've just taken the image from the Cineon file itself and I've created a brand new off-screen bitmap at a smaller size, and I've drawn the image into the bitmap and then I put it here in the cached image window. So you can see the difference if I turn that off and I try to animate with this image. This is the source image. I'm decoding the Cineon data every time. That's actually pretty slow.

If I create the cache and do the same thing, it's much smoother. So that can be a really powerful technique: just downsample your image into an off-screen bitmap and then use that as a CGImage to redraw over and over. Now, one thing that you might need to worry about when you're doing that is the image may or may not have color profile data. So let me bring up a different set of images.
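
A sketch of that caching technique, under the assumption that an 8-bit RGBA cache is wanted; later systems add CGBitmapContextCreateImage, which collapses the second half into a single call, but the long way shown here also works.

```c
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

/* Render a slow-to-decode source image once into an offscreen bitmap at the
   size actually needed, then wrap that bitmap's pixels as a new CGImage that
   can be redrawn cheaply. */
static CGImageRef CreateCachedImage(CGImageRef source, size_t width, size_t height)
{
    size_t bytesPerRow = width * 4;
    void *pixels = calloc(height, bytesPerRow);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

    /* Decode the source once, scaled into the cache. */
    CGContextRef cache = CGBitmapContextCreate(pixels, width, height, 8,
                                               bytesPerRow, space,
                                               kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(cache, CGRectMake(0, 0, width, height), source);
    CGContextRelease(cache);

    /* Wrap the same pixel buffer as an image for fast redrawing.
       (Nothing frees `pixels` here; a release callback would handle that.) */
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels, height * bytesPerRow, NULL);
    CGImageRef cached = CGImageCreate(width, height, 8, 32, bytesPerRow, space,
                                      kCGImageAlphaPremultipliedLast, provider,
                                      NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(space);
    return cached;
}
```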

Now, here's an example of a JPEG that has no color profile. So, what you might want to do is add a profile to that image. Now, you probably can't see much of a difference. I don't think there's really a strong difference here, mostly because of the way all this goes up to the screen. So, here's a little bit different example, where I start out with an image that has no color information at all in it.

No profile or anything. I just say, you know, all it claims is I'm red, green, blue. I'm red, green, blue. What we do here is I take a profile from the system, a color profile, just like in ColorSync, and I say, well, look, when I want to render it, I'm going to use this profile to show the image so that it actually looks correct. Now, here I picked a really weird profile just for illustration purposes which swaps the red and the blue channels. So, that's why you get this odd yellow image.

Normally, you'd want to use this button right here, which is the user's document default. That's the profile that shows up in the ColorSync preferences panel, and it tells you what profile the user wants you to use to display images if the image doesn't have a profile associated with it. So normally you would use this profile, but it doesn't really look that much different.

Once you've created that, then, of course, you have the same thing. It works the same way; it's just a cached image with a different profile. Of course, when you go to print, if you wanted to print this, you would make sure that you started out with the source data and the appropriate profile, normally the user's document default if one wasn't specified, not your cache. If you try to print the cache, you'd get something that wasn't quite appropriate; it'd be too small, look blurry, and so on. Okay, so that's-- we could switch back to the slides.

Okay, so that's images. And now we're going to move on to PDF documents. This is an API that's been present for a while in Mac OS X, but it can be very powerful. It works similarly to images. In this case, what we have is a data provider that you give us to take information from a PDF file and then give it to a CGPDFDocument, and we use that information to decode and draw and so on.

Now, once you've created the PDF document, there are some accessor functions that are conveniences for you. You can ask the document for the number of pages it has, the bounding box for each page, all of that stuff. And you draw it the same way you draw images: CGContextDrawPDFDocument, where you pass in a rect, the document, and the page number. Another code example. In this case, what I have is another URL convenience function that I'm using--sorry, a data provider convenience function that I'm using--which takes a URL. This is a CFURL.

So you can work with HTTP colon blah, blah, blah stuff. You can work with file colon so on. Once you have the URL, though, you create the data provider with that URL, and we'll call CF, Core Foundation, to suck in the information from the URL that we need.

Then you create the CGPDFDocument with the CGPDFDocumentCreateWithProvider call and release the provider. Make sure you clean up after yourself. And then, just like with images, same idea. You want to draw a particular page of that PDF document in a certain rectangle, so you create a rectangle, call draw PDF document, and we'll draw it. You pass in the page number in this case to say which page of the document you want to have displayed, and then you release it after you're done to clean up after yourself. Or you hang on to it and draw multiple PDF pages.
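
A short sketch of those PDF steps in C. CGContextDrawPDFDocument is the call named in the session; on later systems the preferred spelling is CGPDFDocumentGetPage plus CGContextDrawPDFPage. The helper name is made up.

```c
#include <ApplicationServices/ApplicationServices.h>

/* Load a PDF from a CFURL and draw one page into a rectangle. */
static void DrawPDFPage(CGContextRef ctx, CFURLRef url, int pageNumber, CGRect rect)
{
    CGDataProviderRef provider = CGDataProviderCreateWithURL(url);
    CGPDFDocumentRef document = CGPDFDocumentCreateWithProvider(provider);
    CGDataProviderRelease(provider);   /* the document retains what it needs */

    if (document != NULL) {
        CGContextDrawPDFDocument(ctx, rect, document, pageNumber);
        CGPDFDocumentRelease(document);
    }
}
```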

So for Jaguar, we have full support for PDF 1.3. So we can import every PDF 1.3 document. When we create PDF documents, we create PDF 1.3 documents. So they're compliant in that respect. The one exception is there are cases where PDF 1.3 does not support things that we want to write out. The big one is transparency. So when we write out transparency, we'll typically bump up the version that we create to PDF 1.4.

We're sort of limited by the fact that 1.3 doesn't have any transparency support at all. So that's one thing to be aware of. But normally we're going to generate PDF 1.3 documents. But what's important too about that is that we support a full roundtripping of our CG API.

And what I mean by that is that if you draw something with a regular window context and it shows up on the screen in a certain way, and then you create a PDF context and make the exact same calls to that PDF context, you get a PDF file. And if you bring that PDF file up in Preview, you'll see the same thing.

So we don't have a situation where the stuff that you call in our API doesn't map to PDF, or doesn't show up correctly when you bring that PDF up in Preview. So that's a very important piece of the puzzle. And then we're also adding support for PDF/X-3. It's a graphic arts workflow format that I'll talk about in just a second.

Let's switch to demo 1. I just want to show you some PDF files that in previous versions of Mac OS X were uninteresting. For example, this one. This takes a little bit of time because it's pretty complicated. If you try to open this up earlier, I don't think you'd have a crash, but you'd have a big blank page. Did I even click on it? No.

Oops. And now you have nothing. Oh, there you go. Okay. So as you see, this is actually a very complicated document. There are lots of shadings, lots of things going on here. Previously, we would have had the words "Adobe Illustrator" at the bottom, and that's about it. So that's actually a good thing. Similarly, here's another document. It also uses complicated shadings, a few other things that were missing before, so we have a nice set of rendering that we weren't able to do correctly in the past.

And we've also beefed up some of our Japanese and Chinese and so on support. So here you can see this is a PDF file. This text, you might not know it if you don't read Japanese. This is vertical, so that's actually working now. A couple other things in here were previously not working, some of the shadings and so on.

So all of that's there. So we're actually continuing to track the PDF specification. We're making sure that we're compliant and fixing any bugs that we happen to find. And we want to make sure that we're able to keep tracking it in the future. Okay, so if we go back to the slides.

Okay, so I mentioned PDF/X-3. This is becoming a more and more important standard. It's going to be an ISO standard; it's not there yet. Just briefly, this is a pretty hairy code sample, but for those of you who are interested, it's now possible in Jaguar to create documents which are PDF/X-3 compliant. And the way you do that, the key thing there, is to provide an output intent.

The output intent basically tells people, tells us, sort of how I imagine the PDF file to be printed, the color space and so on, the information there. Now, it's a little bit complicated, but as you can see, the key thing that I'm doing here in the first couple of lines is creating an auxiliary info dictionary. This is actually useful for more than just PDF/X-3.

For example, you can provide a title for your document. You can provide the author information and so on. Once I've created the auxiliary info dictionary, I pass that in to CGPDFContextCreateWithURL, the very bottom line, which gives information for the PDF context to put into the PDF file that you're generating.

And we plan, over time, this may become a larger and larger set of items that you can put into this dictionary to change the final result of the PDF file that you're creating. In this case, I start out by, I create the dictionary, just a regular old CFDictionary.

I begin by adding a title, "My Document." It's all CF-based, so you sort of need to know about CF. If you don't, you'll probably learn it. And then the key thing for PDF/X-3 is the output intent dictionary. I create an output intent dictionary that has a bunch of keys; there's a whole pile more that get put in there. And once I've done all that, I put that output intent dictionary into the auxiliary info dictionary, and then I create the context.
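
A hedged sketch of that auxiliary-info approach, using the key constants as they appear in current CGPDFContext.h (kCGPDFContextTitle, kCGPDFContextOutputIntent, and the kCGPDFX... keys); the exact keys available in Jaguar may have differed, and the output condition string and ICC profile are placeholders.

```c
#include <ApplicationServices/ApplicationServices.h>

/* Create a PDF context whose auxiliary info carries a title and an output
   intent dictionary, roughly in the shape described above. */
static CGContextRef CreatePDFXContext(CFURLRef url, CGRect mediaBox, CFDataRef iccProfile)
{
    CFMutableDictionaryRef intent = CFDictionaryCreateMutable(
        NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(intent, kCGPDFXOutputIntentSubtype, CFSTR("GTS_PDFX"));
    CFDictionarySetValue(intent, kCGPDFXOutputConditionIdentifier, CFSTR("CGATS TR 001"));
    CFDictionarySetValue(intent, kCGPDFXDestinationOutputProfile, iccProfile);

    CFMutableDictionaryRef info = CFDictionaryCreateMutable(
        NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(info, kCGPDFContextTitle, CFSTR("My Document"));
    CFDictionarySetValue(info, kCGPDFContextOutputIntent, intent);

    CGContextRef pdf = CGPDFContextCreateWithURL(url, &mediaBox, info);

    CFRelease(intent);
    CFRelease(info);
    return pdf;   /* caller draws into it and releases it when done */
}
```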

So this is sort of an example of how you might create a PDF/X-3 file. As I mentioned before, it's not quite a standard yet. It should be a standard in the summer, but we are tracking that to try to make sure that we're compliant in the documents we generate. By default, unless you create this output intent dictionary, you won't have a PDF/X-3 file. Probably not a problem for most people, but just so you know. Okay.

So, color spaces. At this point, just to step back a little bit, we've gone through the basic types that are used to draw in a context. We talked about vector geometry with CGPath, images with CGImage, CGPDFDocument, and so on. And so now we're at the point where we need to talk about, well, it's all nice to specify the geometry and all that kind of thing, but how do I actually draw? How do I fill with a color and so on? So, what we have are a couple of things now that specify how to fill the content of the things you're drawing. CGColorSpace is the type that we use to specify the way to interpret color data. It tells whether the color data is RGB or CMYK, whether it's got an ICC profile, and so on.

There are two flavors of color spaces that are typically used. Device dependent, from the name you can tell what it means. That means that how the color is rendered is completely up to the device. They could take red and turn it into blue if they wanted to. It's completely not in your control. So that's not typically the best choice for most rendering. Normally, people want to use device independent color spaces. If they want high fidelity color, that's always the best idea.

For Jaguar, we're going to add some additional color space convenience functions. Currently, we have a set of them, but they're pretty low level. We're going to add functions that allow you to get information from ColorSync, but through CG, without necessarily going and learning all about ColorSync.

In particular, the user's document default color space; that's the thing that shows up in the ColorSync preferences panel. We're going to add a color space that's special in that it gives you fast on-screen drawing, which means that the colors may not be precisely correct on screen, but that's sometimes okay for certain applications, and it will preserve the color fidelity when you print.

So you get the right results when you print, where it often matters, but you might get a slight approximation to the truth on screen. It will be fast, though, so that can be very important for some applications. There's other API to let you get at some of the standard Apple color spaces, and so on. So you should look for that API coming soon.

There are a couple of other special color spaces. Indexed color spaces, GIF images are a good example, where an index value is used to look up in a table which specifies the actual color of a pixel. I'm not going to talk about that very much. I'm going to focus more on patterns, which are new since last year. Although they were part of Mac OS X 10.1, we didn't actually talk about them in last year's session, so I wanted to go through that a little bit more.

[Transcript missing]

I can change the spacing between the pattern elements. And as you see, here I am drawing a circle, but the pattern can actually be, for example, a star. That's pretty simple. And I can just fill that. I can draw with, you know, pretty much-- I can fill any shape I want to with my pattern.

And I think I mentioned the size. I can change the color of the pattern to be some other color that I might like better.

[Transcript missing]

So suppose you said, "Oh, that's so nice. I want to do patterns myself." Well, what would you do? Well, you'd create a pattern with CGPatternCreate. You'd tell us that you're going to be using patterns by specifying a pattern color space, and then you'd set the pattern in the context, and then you'd just fill or stroke the path.

So you might get the idea by thinking about that that patterns, from our point of view, are just the same as colors. So just like you might say, "I'm going to draw with--I'm going to fill the circle with red. I'm going to fill the circle with umbrellas." It's all the same to us. It's just sort of conceptually like a color. So another code example.

Here's a case where we're going to draw a pattern. This is actually a little bit complicated to look at and absorb. As you can see though, the key thing here is the very top line, draw a circle. That's the function you provide to-- in our case, this is an example of a pattern that's just a circular, using a circle.

That's the thing that you provide that makes everything else possible. So in the same way that here I'm using draw a circle, you could have draw a PDF document, or draw image from my favorite file format, or anything else. So the callback mechanism is very powerful and very useful.

And then we have a whole pile of stuff that sort of is the mechanism by which we step through to say, "Well, we want the pattern cell to be this big. "We want it to be spaced by this amount. "We need to tell you about the color space "we're going to be drawing in." All of this goop, and this is all-- it turns out, this typically turns out to be sort of boilerplate code that you just typically write once and put into a function that you can call yourself multiple times.

But once you get to the end of it, the very last line, when you call CGContextSetFillPattern, you specify the pattern you're interested in and, typically, a color that will just contain alpha information. So that says with what alpha value, what transparency value, the actual pattern is composited. And then you clean up after yourself. And then once you set the fill pattern, you just fill a circle, you stroke a circle, whatever shape you like, and you'll get the pattern replicated in that region.
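
A sketch of the boilerplate being described, for a colored pattern whose cell is a small filled circle; the cell size, spacing, and helper names are arbitrary.

```c
#include <ApplicationServices/ApplicationServices.h>
#include <math.h>

/* The pattern cell callback: draws one blue circle per cell. */
static void DrawCirclePatternCell(void *info, CGContextRef ctx)
{
    CGContextSetRGBFillColor(ctx, 0.0, 0.5, 1.0, 1.0);
    CGContextAddArc(ctx, 10.0, 10.0, 8.0, 0.0, 2.0 * M_PI, false);
    CGContextFillPath(ctx);
}

/* Install the pattern as the fill "color" and fill a rectangle with it. */
static void FillRectWithCirclePattern(CGContextRef ctx, CGRect rect)
{
    static const CGPatternCallbacks callbacks = { 0, DrawCirclePatternCell, NULL };

    CGPatternRef pattern = CGPatternCreate(NULL,
                                           CGRectMake(0, 0, 20, 20),  /* one cell     */
                                           CGAffineTransformIdentity,
                                           20.0, 20.0,                /* cell spacing */
                                           kCGPatternTilingConstantSpacing,
                                           true,                      /* colored pattern */
                                           &callbacks);

    CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
    CGContextSetFillColorSpace(ctx, patternSpace);

    CGFloat alpha = 1.0;                /* colored patterns take just an alpha */
    CGContextSetFillPattern(ctx, pattern, &alpha);
    CGContextFillRect(ctx, rect);

    CGColorSpaceRelease(patternSpace);
    CGPatternRelease(pattern);
}
```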

So again, this is a little bit hairy to look at. We're going to--all the examples I've shown today, all of the demos are going to be available to you through DTS, and so you'll be able to look at that a little bit more closely and understand it a little bit better, better than I think I can do here in a large talk like this.

Okay, so that's patterns. And then the one final thing that we're going to talk about in terms of filling a region is shadings. This is also new in Jaguar. The shading idea is the way we abstract gradients, axial and radial gradients. The word "shading" comes from PDF, where they actually support more things as well. Again, just like with patterns, it's resolution independent.

So it's not just a little weird bitmap that gets replicated and looks ugly. Instead, it's pretty much a full, nice shading that scales to the resolution of the output device. And it's, again, better than doing it yourself. We can do the same types of optimizations, the same types of tricks that we do with patterns, to make the shadings look really good. So just a quick demo, demo one. I'll show you what we have here.

So there are two flavors of shadings in Jaguar. There's the axial shading, And here you see what I have for an axial shading is I have a start point here, and then I have an end point, and the shading changes along that axis. And of course, it's scaled appropriately, and so we can rotate it. We're free to do all of that.

I can also create a radial shading. So in this case, I have a circle that has the color changes along an axis along the radius, starts at this point and ends out here at this other point. So there's the two flavors of shadings. Let's go back to axial.

Now, again, we really like the callback mechanism because it means you do a lot of the work. Rather, we're enabling you to do work that otherwise might not be available. So here, for example, we're not stuck with... Travis is laughing at me. You're not stuck with just linear shadings, which some people would say, oh, that's good enough. In the callback, you can do anything you want to. It's a regular general-purpose function. So you can make the shading look your favorite way.

Which actually can make things look really nice. So here I have a simple linear shading. But if I want, I can change that to just a sine wave. So this is a different look because I'm free to change these values. I can make this have multiple... a different frequency. I can even animate it.

In this case, this is just a bunch of different shadings that are being cycled through at different frequencies. But none of this is sort of built into core graphics. Instead, what we're doing is we're providing you callback mechanisms to do this type of thing. This type of drawing.

So you'll see in the example code that we're not saying, oh, create a shading with a sinusoidal function that has these parameters. Instead, it's just a callback that happens to be, in this case, a sine wave. And of course, I can do that with a radial shading if I want to, which is a little weird. But I hope nobody suffers any ill effects from this.

[Transcript missing]

I'll pull this over here. We can change the start color to be anything we like. Let's make that a little brighter. And we can change the end color, if we want, to be something different.

So all of this, again, is just unoptimized, simple API calls directly into CG. This is one area that currently is not available in Cocoa, I believe. So this might be the one case where, if you were writing a Cocoa application, you might want to drop directly down into Core Graphics.

The other thing we can do: drawing the shading by itself can often be very nice, but sometimes you want to sort of fill in past the ends. So we can turn on start and end filling, and then the final color will be replicated infinitely from that point onward. That can sometimes be useful depending on the effect you're trying to get. And of course that also tracks the actual color. Thank you.
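
In API terms, that start and end filling corresponds to the last two boolean parameters of the shading constructors. Here is a small variation on the earlier sketch, again assuming the hypothetical colorFunction:

    #include <ApplicationServices/ApplicationServices.h>

    static void drawExtendedAxial(CGContextRef context, CGFunctionRef colorFunction)
    {
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();

        /* The last two booleans extend the start and end colors
           indefinitely beyond the two endpoints. */
        CGShadingRef shading = CGShadingCreateAxial(rgb,
            CGPointMake(50, 50), CGPointMake(250, 50),
            colorFunction,
            true,    /* extend before the start point */
            true);   /* extend past the end point     */
        CGContextDrawShading(context, shading);

        CGShadingRelease(shading);
        CGColorSpaceRelease(rgb);
    }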


[Transcript missing]

In the first example, in the lower left corner, the starting circle has radius zero, so that's why you sort of see the shading go from the center out to the edge. Now, the two circles don't have to have the same center.

Here I have a radial shading where the starting circle is sort of up and to the right inside the ball, and the ending circle is the outer circle. And so when I fill it, the interpolation sort of makes it look a little bit like a bowling ball, a billiard ball. And then the two circles don't even have to intersect.

So here, in the upper left hand corner, I have a small, tiny circle, which is yellow. I have a larger circle as my ending circle on the right, which is the big circle, and I interpolate between those two to fill the region. So you get sort of a megaphone look. You can do a number of interesting little effects with shadings. But the key things are gradients: linear gradients are very useful, and certainly circular shadings can be useful too. As I mentioned before, they're not required to be linear. Your callback can be anything complicated that you want.
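
A short sketch of those two circle configurations; the centers and radii are arbitrary, and colorFunction is the same hypothetical CGFunction as before.

    #include <ApplicationServices/ApplicationServices.h>

    static void drawRadialVariants(CGContextRef context, CGFunctionRef colorFunction)
    {
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();

        /* Offset centers, small start circle inside the big end circle:
           the off-center highlight gives the billiard-ball look. */
        CGShadingRef ball = CGShadingCreateRadial(rgb,
            CGPointMake(180, 180), 10,
            CGPointMake(150, 150), 100,
            colorFunction, false, false);
        CGContextDrawShading(context, ball);
        CGShadingRelease(ball);

        /* Non-intersecting circles: interpolating between a tiny circle
           and a large one far away sweeps out the megaphone shape. */
        CGShadingRef cone = CGShadingCreateRadial(rgb,
            CGPointMake(40, 400), 5,
            CGPointMake(300, 340), 80,
            colorFunction, false, false);
        CGContextDrawShading(context, cone);
        CGShadingRelease(cone);

        CGColorSpaceRelease(rgb);
    }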

The key thing is that we're going to pass you a value between 0 and 1, which is the distance along the axis that you specify, the scaled distance. The way you give us information back is with what we call a CGFunction. This is a new type that we've added, which is a little bit more general than what I describe here, but basically for this application, it takes one value in, which is the distance along the axis or the radius, and you give us back a color.

So one value comes in, and then you calculate what color you want it to be. Normally, your start color is going to be at value 0, your end color at value 1, and you decide what it's going to be along that axis. So as a simple example here, I'm just going to imagine that I'm going to do a linear shading. I have some magic function somewhere, called evaluateLinear, that knows how to take a value in, do a linear interpolation between two colors, and return that out.

I need to pass in information that tells the function what the domain is; in this case, it's always going to be 0 to 1. And what the range is; for RGB, again, 0 to 1. Other color spaces might have different ranges. And then I call CGFunctionCreate. The info parameter is a void star, which you pass in for any information you might need to track to calculate the function.

I pass in the number of parameters for the domain and range, and I give the callbacks. And so that's my function. Now I have a thing which I can give to a shading to do the evaluation. So I'm going to do a linear interpolation between two colors. Whoops.
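
Putting those pieces together, here is a minimal sketch of the flow just described: an evaluateLinear callback, wrapped in a CGFunction with its domain and range, then handed to an axial shading and drawn. The colors and geometry are made up, and the sketch follows the Jaguar-era headers, where the domain, range, and callback values are plain floats (later systems use CGFloat).

    #include <ApplicationServices/ApplicationServices.h>

    /* The two colors to interpolate between, passed via the `info' pointer. */
    typedef struct { float start[4]; float end[4]; } TwoColors;

    /* One input (distance along the axis, 0..1), four outputs (RGBA). */
    static void evaluateLinear(void *info, const float *in, float *out)
    {
        const TwoColors *tc = (const TwoColors *)info;
        float t = in[0];
        int k;
        for (k = 0; k < 4; k++)
            out[k] = tc->start[k] + t * (tc->end[k] - tc->start[k]);
    }

    static void drawLinearGradient(CGContextRef context)
    {
        static TwoColors colors = {
            { 1.0f, 0.8f, 0.0f, 1.0f },    /* color at value 0 */
            { 0.2f, 0.0f, 0.8f, 1.0f }     /* color at value 1 */
        };
        static const float domain[2] = { 0, 1 };                    /* one input, 0..1    */
        static const float range[8]  = { 0, 1, 0, 1, 0, 1, 0, 1 };  /* four outputs, RGBA */
        static const CGFunctionCallbacks callbacks = { 0, evaluateLinear, NULL };

        CGFunctionRef function = CGFunctionCreate(&colors,
            1, domain,     /* number of inputs and their limits  */
            4, range,      /* number of outputs and their limits */
            &callbacks);

        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGShadingRef shading = CGShadingCreateAxial(rgb,
            CGPointMake(50, 50), CGPointMake(350, 50),
            function, false, false);
        CGContextDrawShading(context, shading);

        CGShadingRelease(shading);
        CGColorSpaceRelease(rgb);
        CGFunctionRelease(function);
    }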

[Transcript missing]

We could switch to demo one. So here we have a bunch of different things that are drawn using CG, all the different things. We'll just swoosh them out. And as you see, we have things with patterns through them. We have solid shapes and so on. We have text animated. And we even have some bugs, which, as you can see, some of the little things, the shadings are there, and then they disappear. I don't know why, but I just discovered that today.

But nevertheless, this is a simple example of just... You know, I have to sort of keep mentioning this. This is a Cocoa app. Cocoa's amazing. It's really, really great. And I didn't do any optimization. This is just really simple stuff, right? No special back buffers, no double buffering, no crazy stuff.

Core Graphics has a lot of power in it to let you do really complicated things very easily. And you should know, I mean, the stuff I'm creating here, I'm not even being careful about it, right? It's sort of sloppy code, but it still runs pretty fast. So that's just my little plug for Cocoa and so on. Okay, so if we go back to the slides, just for the last bit. So in summary, we have a lot of new API this year that we want you to try out.

Let us know what you think. We want to continue growing our API set, though I should mention that our philosophy is not necessarily to provide you with a house; it's to provide you with a toolset so you can build the house. So in our API, we try to have pretty simple, lightweight, non-complicated things that let you do really powerful stuff in your own application.

But we don't necessarily want to sort of cram things down on you. That's why we didn't just do, you know, "oh, here's a linear shader for you," because everybody wants only that, right? Instead, it's really much more general than that. So we're trying to continue to add API where it makes sense, but also make sure that the general philosophy of giving you tools to let you do complex things is respected. And that said, your feedback is always great. We love feedback.

It really helps us focus where we want to go and what's missing that we want to add and so on. So we encourage everybody to give feedback wherever possible. And I think we have some documentation that you might want to look at. Listed here, there's the documentation website. And the principal way, I think, still, to get information about some of the new Jaguar API functions is the header doc comments.

So if you look in the headers in the Core Graphics framework, you'll find those. There are also some technical notes listed here. And as I mentioned before, some of the sample code will be provided by DTS. So now, if I could give this back to Travis, he will take us to the end of the presentation.

Yeah, I'm going to hustle through a roadmap here real quick because we ran a little long and we want to get a Q&A in. So I just want to bring your attention to certain sessions that we have at this year's WWDC that you may be interested in.

Obviously, at 3:30 today we have in Hall 2 "Exploring the Quartz Compositor," where we'll be going over the compositor architecture in more depth and talking about the architecture of Quartz Extreme, which was shown in the Graphics and Imaging Overview and also in the keynote yesterday.

Additionally, we have an interesting session, OpenGL Integrated Graphics 2, where I'll essentially show how to build your own compositor. If you liked what you saw with Quartz Extreme, you definitely want to attend that session. And then also, I wanted to call out the ColorSync session in Digital Media.

We obviously talked about ColorSync and how it's integrated with Quartz 2D. We'll go into a little more detail in the ColorSync session, and additionally we'll be talking about how to leverage ColorSync in new ways going forward, addressing the needs of media beyond just still images.

And then finally, I think there's a really nice session coming up on Friday, Session 516 in Hall 2, which is Graphics, Imaging, and Performance Tuning. Derek was getting pretty good performance with his demos up here without doing any performance tuning, but obviously we hear a lot from our developers about how to performance tune for Mac OS X's visual architecture.

And we're going to be going in depth on the tricks of the trade to definitely squeeze the most performance out of the visual pipeline and the drawing APIs in the system. If you need to contact me on any issues relating to Quartz 2D, I'm your single point of contact. There's my email. It's [email protected]. Feel free to email me with your questions, concerns, or whatnot.