
WWDC01 • Session 107

2D Graphics Using Quartz

Mac OS • 1:02:55

Quartz is the foundation of 2D graphics in Mac OS X. This session explains how to harness the power of Quartz in your application. Detailed explanations of the Quartz graphics architecture, the Quartz API, and how to integrate PDF support into your products are presented.

Speaker: Haroon Sheikh

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it may contain transcription errors.

Good morning, everyone. I'd like to welcome you to session 107, which is 2D Graphics Using Quartz. Today, we're going to talk about one of the most exciting aspects of Mac OS X, which is the Quartz graphics technology. And for many developers, you're abstracted from it. If you're using Carbon, you often develop with QuickDraw. If you're using Cocoa, you're using the NSGraphicsContext and NSBezierPath abstractions. But what we want to talk about is the Quartz 2D API, because we really believe that every application can benefit from being powered by PostScript and PDF style graphics, and also from being given the ability to create and consume PDFs as an import format. So I'd like to introduce Haroon Sheikh, who's the Quartz 2D engineering manager, and he's going to take the presentation from here.

Thank you, Travis. Good day, everyone. Welcome to this session. So we're going to be focusing on 2D graphics using Quartz. Our agenda for today: we're going to go into detail about what Quartz is, look at the architecture, see where it's used in the system, where you might be using it indirectly, and where you might want to focus on using it. The bulk of the presentation is going to be focusing on the Quartz API. We'll conclude the session with a demo demonstrating the power of Quartz.

So fundamentally, Quartz is the graphics system on Mac OS X. It's responsible for the Aqua look and feel; it participates in that. All of your window management is handled by Quartz. Rendering is part of the printing workflow. So it plays a very important role throughout Mac OS X. Looking at the system architecture slide, you'll notice that Quartz is in the application services layer, right above Darwin. At that level, we've got Quartz, OpenGL, and QuickTime. Looking at that in a little more detail, you'll notice that QuickDraw, QuickTime, and OpenGL are not only peers to Quartz, but at the same time, they're also sitting on top of Quartz. And the reason for that is Quartz is really composed of two components: Quartz 2D, which is our focus today, and the Quartz Compositor.

So in the graphics and imaging overview, we talked a little bit about both of them. The Quartz Compositor is responsible for all your window server and window management needs in the system. It effectively handles the compositing of multimedia, your windows, your menus, blending them appropriately with the rest of the system and sending that to the screen.

On the Quartz 2D side, that's the rendering library, and that's the focus of this talk today. It's a low-level rendering library based on the PDF imaging model. So hopefully most of you were at the PDF and Mac OS X session right before this one, which discussed PDF and the imaging model.

Quartz 2D in itself, because it is based on that imaging model and the way we've architected it, is resolution and device independent. So you do not have to worry about what the device is. That's abstracted for you. All you have to do is draw to the context, which I'll get into. And high-resolution, high-fidelity output can be achieved by maintaining the resolution of your data throughout the workflow. We also incorporate ColorSync and ATS. ATS is Apple Type Services, so that's for all your font management. ColorSync is incorporated also, and that allows you to get high-fidelity, color-managed output onto your device. Quartz does all the work, working with ColorSync, to achieve that.

So let's have a look at where Quartz is used. You may already be using Quartz indirectly. If you have a Cocoa, Carbon, or Java application, they all sit on top of Quartz, and so they're using Quartz indirectly. In Cocoa, as Travis was mentioning, the NSBezierPath and NSImage classes take advantage of the Quartz API. So the functionality in Cocoa is very similar to Quartz.

So you're taking advantage of Quartz indirectly there. On the Carbon side, if you do things like draw theme text, that goes through ATSUI, which does all of its text rendering through Quartz. And also, when you're printing from Carbon, you're printing to a QuickDraw port. Internally, what we do is we translate the QuickDraw calls into Quartz calls for your printing needs. Okay.

Java 2D graphics are also implemented on top of Quartz. So they're handled natively on the system. You may want to directly access Quartz. And what you want to do there is you want to access it through the Core Graphics framework. Core Graphics is our internal name for Quartz. Quartz is more the marketing name. Core Graphics is part of the application services framework.

When would you want to actually use Quartz? From Carbon, the QuickDraw model has been around for the last 10-15 years, but the Quartz imaging model is much richer, much more advanced. So that's an opportunity for you to take advantage of advanced 2D graphics through Carbon instead of using QuickDraw.

We also have two other APIs that I won't be discussing too much today. One is the full-screen access API, and that's if you want to access the screen for gaming or for changing display depths. That's available. There's another API for remote access. So applications like Timbuktu, where you want to access a remote system and get information about the contents on screen or be able to send events, that's also available. That's also part of Quartz. In general, when you're working with Carbon, Cocoa, or Java, those frameworks will be the ones that you go to for all your window management needs. Quartz doesn't provide that to you explicitly. It's done implicitly through those high-level frameworks.

So let's get into the API itself. It's a C-based interface, and the reason for that is because we're servicing Cocoa, Carbon, and Java, so you've got Objective-C, C++, and Java that's interacting with the system, we decided to choose a simple C-based interface that can service all three of those clients.

And so, for those who are familiar with the Core Foundation naming convention, Quartz relies on the same naming convention for all of its API. And the convention is basically a two-letter keyword representing the framework that you're working with, then class, verb, and object. So in our case, CG represents the framework, so all Quartz calls will begin with CG. As an example, CGContextDrawImage. It's straightforward: it draws an image into a CG context.

So here's a list of the classes that we'll be talking about that Quartz provides. These are not classes in the C++ sense; these are just a collection of things that we'll be discussing today. So the first thing you want to do is interact with the device. A CG context is your connection to the device, so this is analogous to a QuickDraw port. On the Quartz side, that connection is an abstraction. So you do not have to worry about the details of the device, what the resolution is, or whether it's a printer, or whether it is on screen. From your perspective, you want to basically send down your data in as rich a format as possible.

And Quartz will be responsible, because it knows about the device, for translating the data appropriately to that device: rendering it, saving it out into PDF, or whatever, depending on what the device is. So the important point there is you can send the same information to whatever device through the same API. Make the same calls, and it'll be handled for you.

State is also part of the CG context. What I mean by state is, similar to QuickDraw, you're setting the color, you're setting the font. You set that into the CG context, and until you change it again, the state is maintained.

You can also save the state, make changes to it (for example, change the text color, change the CTM, change the color space information), do your drawing, and then, if you want, restore it. And when you restore it, you're restoring back to the state that you saved. You do not have to worry about undoing all of the changes that you made individually. So this allows you to cleanly get back to a state which undoes all of the modifications that you might have made.
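The save and restore behavior just described can be sketched as a small stack of state snapshots. This is a toy model in plain C to illustrate the idea (the GState and Ctx names are invented here); it is not the CoreGraphics implementation:

```c
#include <assert.h>

/* Toy model of the graphics-state stack described above.  The names
   (GState, Ctx) are invented for illustration; this is not the CG API. */
typedef struct {
    float line_width;
    float text_red;          /* stand-in for "text color", "font", CTM, ... */
} GState;

typedef struct {
    GState current;          /* the state drawing operations consult */
    GState stack[16];        /* saved snapshots */
    int    depth;
} Ctx;

static void save_gstate(Ctx *c)    { c->stack[c->depth++] = c->current; }
static void restore_gstate(Ctx *c) { c->current = c->stack[--c->depth]; }
```

One restore_gstate undoes every change made since the matching save_gstate, which is the clean "get back to where I was" behavior the talk describes.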

So, the contexts that we support: if you're working on screen, you're working with Cocoa or Carbon, and Cocoa and Carbon will create a context for you. On the Cocoa side, you've got an NSView, and you can actually get the graphics context for that view. On the Carbon side, you're working with a QuickDraw port, and you can make a function call to get at the context for that port. When you're printing, you're also working with contexts, and those are created for you by the printing system. Those could be a PostScript context, or those could be a raster context, depending on what the printer device is. So in those first two cases, the context is created for you. We do provide two other contexts that you can create explicitly. One is the CG PDF context. This is where you are trying to generate a PDF document. In order to generate a PDF document, it's easy: all you do is create a PDF context and start drawing all of your things into that context, and all of the drawing calls that you make will now be saved into a PDF document.

At the same time, if you choose to, you could also create a bitmap context. This is an off-screen context where everything will be rendered into that off-screen bitmap. You could make the same set of calls into that context, and because it is now a different context, instead of being saved out to PDF, the same set of calls will be rendered into the bitmap.
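The point that identical drawing calls can target a PDF context or a bitmap context can be sketched with a function pointer standing in for the context's device-specific back end. This is an illustrative toy model in plain C, not the CG API:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy model (not the CG API) of the context abstraction: application code
   makes the same call, and the context routes it to its device back end. */
typedef struct Context Context;
struct Context {
    void (*fill_rect)(Context *self, int x, int y, int w, int h);
    char  log[256];   /* the "PDF" back end records operators here */
    int   pixels;     /* the "bitmap" back end counts painted pixels */
};

static void pdf_fill_rect(Context *c, int x, int y, int w, int h) {
    char op[64];
    snprintf(op, sizeof op, "%d %d %d %d re f\n", x, y, w, h); /* PDF-ish operator */
    strncat(c->log, op, sizeof c->log - strlen(c->log) - 1);
}

static void bitmap_fill_rect(Context *c, int x, int y, int w, int h) {
    (void)x; (void)y;
    c->pixels += w * h;   /* pretend we rasterized w*h pixels off screen */
}

/* Application code: identical drawing calls for either kind of context. */
static void draw_scene(Context *c) {
    c->fill_rect(c, 10, 10, 20, 5);
}
```

The application only ever calls draw_scene; whether the output becomes a PDF operator stream or rasterized pixels is decided entirely by which context it was handed.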

So now that you've got a context, let's look at the drawing primitives that you would want to draw into that context. Basically, there are four: you've got vector geometry, which is all of your 2D line art, things like your rectangles, ellipses, and paths; you've got text and images; and then finally a PDF document. On the previous slide, I talked about creating a PDF document, and we also allow you to take a PDF document and draw that into a context as yet another primitive.

The drawing operations are done using the painter's algorithm, and that's basically you draw something onto a canvas or onto the device, and you lay down ink for one drawing primitive, and then you lay down ink for the next one on top, and continue to do that. So it's a bottom-to-top drawing operation. Things above you obscure things below you.

And in order to maintain resolution independence, all of our drawing operations support floating point coordinates. So with that, you can define your shapes, define your positioning in the coordinate system using floating points. And the appropriate translations are done to the device. And that allows for high precision and high fidelity output.

So let's look at the first primitive, which is vector geometry. Fundamentally, vector geometry is represented as a path in Quartz. What you want to do there is first define the path into the context. The definition constructs the path internally in the context. And the next thing you want to do is draw it. Drawing is simple: there's just one call, CGContextDrawPath. There are other convenience functions that allow you to draw paths and shapes, but that's the one that you would probably use most explicitly. On the definition side, though, it's very similar to PostScript and the PDF imaging model. Basically, what you do is you begin a path. That destroys any paths that you might have in the context already. Okay?

The first thing we want to do after that is move to a point. You can also do a lineTo or a curveTo. A curveTo is a cubic Bezier curve; a quadCurveTo is a quadratic Bezier curve. And if you choose to, you may also want to close the path, because there's a difference between an open path and a closed path in terms of how it will get rendered. So as an example, I'm going to try and just draw a simple button-like shape using some of these APIs.

The first thing to keep in mind is I'll be illustrating what the path will look like, but the path isn't really being drawn into the context as I step through this. It's really being collected into the context. So the illustration is just to show what the context is collecting. You begin a path. You move to a point in your coordinate system. And if you can faintly see, that's just a point. I've just illustrated that there. The moveTo sets the current point in the context. Then you can perform a lineTo, and that draws a line from the current point to the point that you specify on the lineTo.

Next thing I'm going to do is call a curveTo, where I specify two control points and a final endpoint, and that defines a cubic Bezier curve from the current point to the endpoint specified. And similarly, I keep going on with the lineTos and the curveTos, and as a result, I'm able to define a shape. I can also explicitly close that path. So here's a sequence that defines that shape.
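A rough C sketch of that button-shape sequence, assuming ctx is a CGContextRef handed to you by Cocoa, Carbon, or the printing system (the coordinates here are invented for illustration):

```c
#include <ApplicationServices/ApplicationServices.h>

void draw_button(CGContextRef ctx)
{
    CGContextBeginPath(ctx);                    /* discards any existing path */
    CGContextMoveToPoint(ctx, 20.0, 20.0);      /* sets the current point */
    CGContextAddLineToPoint(ctx, 120.0, 20.0);  /* lineTo: bottom edge */
    CGContextAddCurveToPoint(ctx,               /* curveTo: two control points */
                             135.0, 20.0,       /* and a final endpoint define */
                             135.0, 50.0,       /* the cubic Bezier for the    */
                             120.0, 50.0);      /* right-hand cap              */
    CGContextAddLineToPoint(ctx, 20.0, 50.0);   /* top edge */
    CGContextAddCurveToPoint(ctx, 5.0, 50.0, 5.0, 20.0, 20.0, 20.0);
    CGContextClosePath(ctx);                    /* open vs. closed matters for
                                                   stroking */
    CGContextDrawPath(ctx, kCGPathFillStroke);  /* the one drawing call */
}
```

For a plain rectangle you would skip all of this and use a convenience call such as CGContextAddRect or CGContextFillRect, as the talk notes below.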

You don't necessarily have to go through that sequence for some of the more common primitives that you are used to, for example, rectangles, lines, and arcs. Arcs can be used to define circles. We provide convenience functions, which I haven't listed, but they're available, so you don't have to construct all of those. For example, for a rectangle, you don't have to do a moveTo followed by a series of lineTos. There's a simple convenience function for that.

Once you define the shape, now you want to draw it. There are a few drawing operations that you can apply to it: you may want to fill it, you may want to clip to it, and you may want to stroke it. A fill really defines the content of that shape you've defined. There's a fill and an EOFill; I'll show you the difference between the two.

Instead of filling, you could also choose to set the content of that shape to be a clipping region. So once you set a clip for a shape, any drawing operation that you might do from that point onwards, while the clip is currently set in the context, will be effectively clipped to that shape you've defined. You can also stroke the outline of that shape with a pen, effectively.

And we provide for various stroking parameters. You can set the width; you can define what happens at line joins and what happens at the ends of a path in terms of the line cap. You can set the miter limit, and also specify line dash parameters to control the dashing, to effectively define a dot-dash pattern on that line.
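A hedged sketch of those stroking parameters in C; ctx is an assumed CGContextRef, and the values are arbitrary:

```c
#include <ApplicationServices/ApplicationServices.h>

void set_pen(CGContextRef ctx)
{
    const CGFloat on_off[] = { 6.0, 3.0 };       /* 6 units on, 3 off, repeating */

    CGContextSetLineWidth(ctx, 2.0);
    CGContextSetLineJoin(ctx, kCGLineJoinRound); /* what happens at joins */
    CGContextSetLineCap(ctx, kCGLineCapButt);    /* what happens at open ends */
    CGContextSetMiterLimit(ctx, 10.0);
    CGContextSetLineDash(ctx, 0.0, on_off, 2);   /* phase 0 starts the pattern "on" */
}
```

The phase argument to CGContextSetLineDash is the same one the demo later animates to make the dashes crawl along the path.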

So here are examples of paths. On the top left, you will see two open paths: one is a simple line, one with a dash pattern, and one using cubic Bezier curves, using the curveTo. There's a more complicated example on your top right. And here are the two stars; they illustrate the difference between a fill and an EOFill. The one whose center is not filled, that's an EOFill, because the even-odd filling rule defines how things are filled. The other star is based on the winding rule, which is the standard fill operation. Paths can be composed of sub-paths also, and those sub-paths can be disjoint. So as an example, you could also do a donut-like shape where the sub-paths are disjoint from each other. And to do something like that, all you do is perform another moveTo operation.
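The fill versus EOFill distinction comes down to how a point's "insideness" is computed. Here is a minimal, self-contained illustration in plain C (not Quartz's rasterizer): cast a horizontal ray from the point and either accumulate signed edge crossings (the winding rule) or count crossing parity (the even-odd rule):

```c
#include <assert.h>

/* A self-contained model of the two fill rules (not Quartz's rasterizer).
   We cast a rightward ray from point p and examine how it crosses the edges
   of one closed subpath: signed crossings give the winding number (nonzero
   rule), unsigned crossings give parity (even-odd rule). */
typedef struct { double x, y; } Pt;

static void ray_hits(const Pt *v, int n, Pt p, int *winding, int *crossings)
{
    for (int i = 0; i < n; i++) {
        Pt a = v[i], b = v[(i + 1) % n];
        /* side > 0 means p lies to the left of the directed edge a->b */
        double side = (b.x - a.x) * (p.y - a.y) - (p.x - a.x) * (b.y - a.y);
        if (a.y <= p.y && b.y > p.y && side > 0) {        /* upward crossing */
            (*winding)++; (*crossings)++;
        } else if (a.y > p.y && b.y <= p.y && side < 0) { /* downward crossing */
            (*winding)--; (*crossings)++;
        }
    }
}
```

For a "donut" made of two same-direction squares, the center point gets winding number 2 (filled under the winding rule) but an even crossing count (a hole under the even-odd rule), which is exactly the difference visible between the two stars.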

So with those path constructs, you can actually generate really complicated 2D line art. Text is similar in nature, because we work directly with outline fonts, so a glyph is really, effectively, a path defined in the font. But you don't have to work at that level. We provide glyph and text drawing functions to draw your text onto a context. And we support outline fonts, and most of this comes from our leveraging of ATS on the system. There's TrueType, Type 1, OpenType, and CID fonts; CID fonts are for systems which have a huge number of glyphs, on the order of thousands, and primarily Chinese, Japanese, Korean, and Vietnamese take advantage of that. And similarly to paths, you can fill it, stroke it, and clip it also.

The basic text functionality that we provide is at the glyph level. That's the main functionality that Quartz provides. We also provide simple text drawing functionality, where it's primarily Mac Roman encoding. But for Unicode support, what you really want to do is take advantage of ATSUI above us. That will handle all of your Unicode needs and your layout needs.

So let's have a look at the APIs that you would want to use here. First thing you want to do is create a CGFontRef. That can come from a platform font, for example an ATS font ref; there's another function that allows you to select a font by name also. Once you've selected the font into the context, effectively, you do two things next. You set the text drawing mode, one of the three types that I mentioned. Those could be mixed also: you might want to fill and stroke, or fill and set the clip. And then you just draw your text. There are the two types that I was mentioning: you can draw text explicitly, or you can draw glyphs. The context maintains a current text position. So when you just do a show-text or show-glyphs call, it starts the text off at the current text position, draws your text from that point onwards, and once the text has been drawn to the context, it updates the text position. You can also explicitly draw text by specifying the point that you want to start it at.
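A sketch of that flow using the simple (Mac Roman) text calls; ctx, the font name, and the coordinates are assumptions for illustration, and for real Unicode text you would go through ATSUI instead:

```c
#include <ApplicationServices/ApplicationServices.h>

void draw_label(CGContextRef ctx)
{
    CGContextSelectFont(ctx, "Helvetica", 24.0, kCGEncodingMacRoman);
    CGContextSetTextDrawingMode(ctx, kCGTextFillStroke);     /* modes can be mixed */
    CGContextShowTextAtPoint(ctx, 72.0, 72.0, "Quartz", 6);  /* sets the text position */
    CGContextShowText(ctx, " 2D", 3);  /* continues from the updated position */
}
```

The second call has no coordinates: it picks up at the text position the first call left behind, which is the current-text-position behavior described above.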

So that covers text, and that's our second primitive in the system. The next one I'll focus on is images. We've got support for various types of image formats, primarily through the various color spaces that we support. So we support RGB, LAB, and CMYK images. You can draw all of these into a context. And because we also work heavily with ColorSync, we also support ICC profile-based images, or ICC profile-based color spaces that you can associate with an image, and it will be appropriately color managed for you as you render it into a context.

We support an alpha channel, and an alpha channel is nothing more than another channel. So in an example where you've got an RGB image, the alpha channel will be yet another component in that, and it represents the transparency for that pixel. That data could also be pre-multiplied or not, which effectively means it determines whether the color values have been pre-multiplied by the alpha or not. So that's provided in the system also.
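The "premultiplied" distinction is just arithmetic on each component, and can be shown in a few lines of plain C (illustrative arithmetic, not Quartz internals):

```c
#include <assert.h>
#include <math.h>

/* Illustrative arithmetic for premultiplied alpha, not Quartz internals.
   A premultiplied component stores color * alpha, which makes source-over
   compositing a single multiply-add per component. */
static double premultiply(double color, double alpha) {
    return color * alpha;
}

/* Source-over: a premultiplied source component over a destination component. */
static double src_over(double src_premult, double src_alpha, double dst) {
    return src_premult + (1.0 - src_alpha) * dst;
}
```

For example, 50%-alpha pure red composited over a black backdrop leaves a red component of 0.5, and over a white backdrop leaves 1.0.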

We can also create images which are 1-bit or 8-bit image masks. Once you create an image like that, you can also draw it into a context. Once you draw that image, what you're effectively doing is drawing the current color that's been set in the context through that mask. So that's another interesting feature that we've got in Quartz. Using images is very simple. The first thing you want to do is create a CGImageRef, and you do that by calling the CGImageCreate function. And when you're creating that image, you're specifying all the parameters that define that image: the width, the height, the bits per component, the bits per pixel, the color space, whether it's RGB, LAB, CMYK, whatnot,

whether it has alpha information or not, whether it's pre-multiplied or not. And normally, in other APIs on other platforms, you'd be providing a pointer to the bits that represents the data itself. In our case, what we do is we provide another abstraction we've called a data provider. That's your communication to provide the bits. I'll get into the data providers a little later in another slide. Once you're done creating that image-- and it's very simple from that point onwards-- you just call the CGContextDrawImage call, and you specify a rectangle that you want the image to be drawn into.
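Putting those image pieces together as a sketch; the 2x1-pixel data, the destination rectangle, and the exact CGImageCreate signature shown here are from later CoreGraphics headers and should be treated as an approximation of the flow the talk describes:

```c
#include <ApplicationServices/ApplicationServices.h>

void draw_tiny_image(CGContextRef ctx)
{
    /* 2x1 pixels, RGBA, premultiplied: one red pixel, one blue pixel */
    static const unsigned char bits[] = { 255, 0, 0, 255,  0, 0, 255, 255 };

    CGColorSpaceRef   rgb      = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider =                /* hands the bits to Quartz */
        CGDataProviderCreateWithData(NULL, bits, sizeof bits, NULL);
    CGImageRef        image    =
        CGImageCreate(2, 1,                     /* width, height */
                      8, 32, 8,                 /* bits/component, bits/pixel,
                                                   bytes/row */
                      rgb, kCGImageAlphaPremultipliedLast,
                      provider, NULL, false, kCGRenderingIntentDefault);

    /* The image is scaled into whatever rectangle you specify. */
    CGContextDrawImage(ctx, CGRectMake(0.0, 0.0, 64.0, 32.0), image);

    CGImageRelease(image);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(rgb);
}
```

Note that the bits go through a data provider rather than a raw pointer argument, which is the abstraction the talk returns to later.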

PDF documents are very similar in nature, in the sense that they're as simple as images. First thing you want to do is create a PDF document ref. You can create a PDF document ref from a file on disk, or if you've got one in memory, you can create a PDF document ref. You've got functions for that.

Once you've got that PDF document ref, which effectively points to the document, all you want to do is draw that into a context, and the function that you use there is CGContextDrawPDFDocument. We also provide some convenience functions that allow you to get the bounding box, or the number of pages in that PDF document, so those are provided.
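As a sketch, drawing page 1 of an on-disk PDF might look like this. CGContextDrawPDFDocument is the call the talk names (later systems moved to CGPDFPage-based drawing), and the url argument would come from the CFURL mechanism mentioned below:

```c
#include <ApplicationServices/ApplicationServices.h>

void draw_first_page(CGContextRef ctx, CFURLRef url)
{
    CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL(url);
    if (doc == NULL)
        return;

    if (CGPDFDocumentGetNumberOfPages(doc) >= 1) {
        CGRect box = CGPDFDocumentGetMediaBox(doc, 1);  /* convenience: page bounds */
        CGContextDrawPDFDocument(ctx, box, doc, 1);     /* draw page 1 into ctx */
    }
    CGPDFDocumentRelease(doc);
}
```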

So now that we've covered all of the drawing primitives, you may want to do some neat effects while you're drawing. One thing that you might want to do is look at the transformation model that we've got in Quartz. The initial coordinate system is anchored at an origin at the bottom left. So your 0, 0 is at the bottom left. This is different from QuickDraw, if you're familiar with that, which is at the top left. It's anchored at the bottom left, like a Cartesian graph, with y increasing upwards and x increasing to the right.

That's your initial coordinate system. That's the coordinate system that you get when you get a context. And now you can modify that by applying a transformation onto that context. Simple examples are rotations, translates, and scales. You may want to skew. You could even build up complex transformations yourself. So we provide two mechanisms for that. One is that you modify the current CTM that's inside the context directly by making context calls. Or, the other thing you do is use the affine transform classes we've got that allow you to build up a transform, and then once you've built the appropriate transform, you can set that into the context. So let me just go into that and show it as an example. Here is the default coordinate system, and let's say we're just drawing a unit square at 0, 0. We begin a path, add a rectangle, draw the path, and you see a rectangle.

If we were to modify the CTM prior to drawing the path, prior to drawing the rectangle, and let's say we're just doing a rotate operation, it's going to rotate about the origin. Notice the new coordinate system: it started out like this, and it's now rotated. So now any drawing operations you do from that point onwards, for example our rectangle, are relative to that new coordinate system that's in place.

And let's say, before drawing the path, you were also to apply a scale. So now we're building on the rotation that we had applied earlier, and we're performing a scale on top of that. And similarly, if you were to perform a translate, and you translate in the x direction, it's not x relative to the original coordinate system; it's relative to the most recent, effectively the current, transformation matrix that's in the system. So you'll notice that in this case it's translating to the top right, even though what we specified was a translation in the x direction.

So using this, you can actually build up really complex transformations. Notice you do not have to do the calculations and calculate the points for that rectangle. You can still continue to draw your rectangle as if it was in the original coordinate system. It's just applied based on the current transformation that is set up for you or that you might have set up explicitly yourself.

So we've talked about transformations. We've talked about some of the primitives. One thing I want to get into next is the color spaces that we support. When you're trying to draw vector geometry or text, you want to be able to draw with a variety of color spaces, and here are the ones that we support. The first ones are device-based color spaces. These are color spaces where you know what the destination device is. They're usually very simple for people to create, and that's what most people end up using, but that's not necessarily what we recommend. In this case you've got RGB, gray, and CMYK. You know what the destination is, and you're telling Quartz you do not want it to be color managed. Alternately, what we recommend is for people to use calibrated color spaces, or even go as far as tagging your data with an ICC profile. So you create these color spaces from a profile, or from a calibrated space where you've specified things like the gamma and the white point for that color space. And once that color space is set, your drawing operations for your text and your vector geometry will be color managed for you appropriately. There's the LAB color space also, which we support. And the last one is an indexed color space, which is nothing more than a reference color space, for example the RGB color space, plus an array of colors that you build up; the color that you specify is given through an index into that color table. So this is very similar to GIF palettes.

So in order to work with color using our API, what you want to do is create a color space. It's one of the ones from the previous slides. Once you've created a color space, if you want to set that for the-- you can do it for both fill and stroke. So you can make calls to set the fill color space, and you can make calls to set the stroke color space.

Now that that is selected into the context, when you're specifying colors, you pass in an array of color values for the components of that color space. For example, in RGB, you'd be specifying the R, G, and B components as an array of color values. We also allow you to add alpha values. So when you set the color, you can also pass in an alpha value. And that leads us to transparency.

You can set a global alpha into the context. What that means is that when you set an alpha value, all of the drawing that you do from that point onwards will inherit that transparency value, effectively. So it will be composited for you to the appropriate device.

Alternately, you can also set alpha on the fill or on the stroke explicitly. The global alpha can be applied to PDF documents that you might want to render into the context; it applies to text, to all of the primitives that we've talked about, whereas the fill and stroke alphas only apply to text and to vector geometry.

So transparency is supported on screen contexts and for bitmap contexts, because the appropriate compositing is done for you when you specify the alpha value. On the PDF context, we currently do not support transparency. The transparency model in PDF is part of the 1.4 spec, which we currently have not implemented. The 1.4 spec is currently not published, and we're working with Adobe to track that spec. We currently support 1.2 and 1.3. A lot of what we support in terms of the PDF specification was discussed in the PDF and Mac OS X talk prior to this one. And because PDF plays an important role in the printing workflow, because that's the spool file, you will also not get transparency on your printing contexts.

So that's something to keep in mind, because it will work on screen, but until we add support for it on the PDF and the printing side, you will not necessarily get that on the printed output. One other thing I wanted to cover was data managers; examples of those are data providers and data consumers. This is nothing more than a way for Quartz to provide data to you, or for you to provide data to Quartz. So, in the example of an image, where you were creating an image, you're trying to provide the bitmap bits to Quartz. You first have to create a data provider and pass that in. You're passing in the memory pointer, if that happens to be the type of data provider that you're working with.

If you're also working with a PDF document that you have on disk or something else, you do the same thing. You have to create a data provider that allows Quartz to get at the data that resides on disk or wherever it may lie. So we provide convenience functions. Most likely you've got the file loaded already or it's already on disk. So we've got convenience functions that allow you to create data providers and data consumers from memory and from disk. And we use the CFURL mechanism for defining the path for that.

And some miscellaneous items. One thing: if you're drawing on screen, you have to understand that you're drawing into a back buffer. It's buffered for you; there's double buffering applied by the system. You're not drawing directly to the screen. So you would want to do a flushing operation if you want to see that drawing operation appear immediately on the screen.

There's also another function called CGContextSynchronize, and that's similar to flushing, except it allows you to synchronize drawing from multiple components so that they're all atomically flushed at the same time. You may want to call into a plug-in, for example, to do some drawing into a context, but you don't want that drawing to appear immediately. So you'd expect that plug-in to do a synchronize, and then you may choose to flush the contents on your own when you're ready to flush the appropriate pieces. Once they're all done drawing, you want to have that appear atomically on screen; you don't want things to flash onto the screen. So that's another function that you can take advantage of.

If you're working with QuickDraw, you're probably imaging through the QuickDraw APIs already, but you may choose to also work with CG at the same time. So there would be some interactions if you're trying to do imaging at both at the same time, or if you want to, given a QuickDraw port, how do you get at a CG context for that? So those interactions are actually discussed in the graphics and imaging tips and tricks talks on Friday morning.

So that's a very important talk if you're working with Carbon or if you're working with QuickDraw. It focuses on two aspects: one is the QuickDraw, CG, and printing interaction, and it also discusses performance issues related to flushing. And we've got some tools that will help you there. And so now I want to bring up Andrew Barnes to do a demo that demonstrates the power of Quartz.

So as Haroon indicated, there are four types of objects: there is line art, text, images, and PDF documents. This demo is going to demonstrate how to draw all four of these things, and it's going to go through some code examples. When I was in the audience, I saw a lot of people taking a lot of notes. The code fragments that I'm going to be showing you are going to be available (definitely, I'm going to make sure they are available) on our website, so you don't have to take down the notes. It's a fair chunk of code.

So the first thing we're going to start off with is just normal line art. We have a little demonstration here, and we show the Apple logo, which is a nice logo. Should have made it yellow in keeping with today's color. It's basically a glyph that I yanked out of a... So it's the Apple logo. Then here is a stroked rectangle, our favorite star that does some strange things to show even-odd fill, and some stroking with a dash pattern. In the API, dashing is specified as an array of floating point values. They basically specify how much to go on, how much to go off, how much to go on, how much to go off. And there's an extra phase parameter that allows you to take your start point and move it through that array. So for example, we are going to move things around. As you can see, I'm just adjusting the phase based on some number.

And all of this stuff rotates and scales like normal. So let's see how we do that. Let's start off with something pretty straightforward. Every object that's ever drawn in this little demonstration program is going to have a draw state. It specifies a transformation, some alpha, and a bounding box of the object in the object's user space. So if I were drawing an object that was a unit, let's say a glyph placed inside of a unit square, the bounding box would basically be (0, 0) to (1, 1). We keep track of this information associated with the object so that we can adjust the transformation such that the particular point that we want on the screen is going to be mapped to the center of that bounding box. So that bounding box information is kept with the object so the object can do its proper transformation. And then there's a progress indicator. For instance, that little dashing movement thing was done with a progress indicator.

So it's just an extra piece of data that gets added. As Haroon indicated, there is a transformation state. There's a current transformation that exists on the context, PDF or raster context, and you have to set your transformation up. So given a draw state, which was this other stuff before with x, y, width, and height, you simply go through very basic stuff like translate, rotate, scale. And this is the little piece of code that will make sure that your bounding box is centered at that point. It basically translates and moves by half, kind of thing. So if you have a draw state, you can get back a transform. Now, if you have a draw state and you want to apply this transformation, there's a little piece of dubious code in here that basically says, okay, if your transform is null, then just take the matrix. But that's just to show you that you can either be working with a transform object, which a lot of people like working with (they modify the transform, keep it associated with their object, move it around, and then say, okay, draw with this), or you keep the parameters explicit: x, y, angle, and scale. And then you basically apply the two. In both cases, we're going to apply this to our current context. In this particular case, we have a transform, so you concatenate the transform with the current CTM. So if you start off with an upright and you do some rotation, something kind of goes there, and then you can draw your object.

In the other case, it's like, okay, I've got my explicit components. I can either build a transform by concatenating all these things together, or I can apply them directly to the context, and in that way I'm actually modifying the current CTM as a series of steps. So that's what that code does, which is basically the same as the other code on top. Okay, so now we are going to draw a path. There are two things we can do with a path: we can either fill it or we can stroke it.

And we won't talk about clipping. For this current demo, the only thing that's really applicable to the fill state is a color. So once you get a fill state, our operation, if it wants to apply a fill color, calls the fill state's apply, and that will basically apply the color space and the color. Those are pretty straightforward.

The stroke state is a little bit more complicated. Stroking obviously involves a color, and you can set the fill and the stroke color independently. And when you're stroking, you're going to be using the line state, the line parameters inside of the context. There is the line width and the joining.

There are joins that are rounded versus miter joins or bevel joins. Then there's capping, which is when you stop a segment, what happens to the end? Does it get rounded, or does it just get chopped? And then there's this dash array, which is the array where you set your phase and specify a sequence of on/off. This stroke state is basically passed to the object if it needs to stroke, and it just does those things: set the line join, the cap, the width, set the dashes, and set the color. Pretty straightforward.

So now we're talking about paths. For the purposes of this demo, a path is very simple. It's just an array of segment types and an array of coordinates, and this little tiny loop goes through them. If it's a moveto, pick the two coordinates off and put them on. If it's a lineto, same thing; if it's a curveto, the control point coordinates and the endpoint. So it's very straightforward. Again, this will be on the website.

So now our path example goes in and says, okay, I've got a path object and I want to draw. So all it has to do is apply its transform (the code that you saw above), apply its fill state, apply its stroke state, set the alpha if anybody set any alpha, begin a path, enumerate the path segments, and then draw the path.

And then you're done, pretty much. Now, a lot of these parameters have default values. For instance, unless you're doing something strange with stroking, you really don't have to set the cap and the join or the dashes if you're just doing a straight line. But the caveat to this is that if you're unsure of what the state in your context is, you must set it. You can't really assume that the font size is going to be the correct size and then just show some text, because you're not always sure where that context came from. But if you are sure, you definitely do not have to do it. If you make modifications to the current graphics state, as Haroon indicated, you can bracket them with a save and a restore so that you can make a little tiny twiddle, like changing the color or changing the dashing, and then restore, which just reverts the current state back to what it was before.

So whenever these demo functions are called, what I do is I basically do my bracket and I do my end. With reference to the synchronize call, I could either do a flush here if I wanted to flush each object independently, or I could do a synchronize, which says, I'm finished drawing my stuff, synchronize it, and let's move on. Okay, so in saying that, let's move on. So we move to our second demo, which is text. We can fill and stroke text, and of course you can rotate, scale, the usual stuff. So let's go to our little demo. This case is broken up into two sections. The first section has to do with text. Text, in this particular example (this is slightly misnamed), really means an ASCII string of text with some encoding. For this particular example, it's going to be MacRoman encoding. So we're going to get some string of text, and we're just going to say, okay, I want to deal with this string of text in MacRoman encoding. Like the path example, it's pretty straightforward. When the object comes in, it has a draw state, and it has a text rendering mode; that's how you're able to get the strokes. It has a font size and some text.

So basically, when the draw happens, we apply the transformation, which was the whole rotation and scale thing. Then we apply the fill state and the stroke state, set the alpha, and select the font by name with its size and the encoding. In this case, it's MacRoman encoding. It's a parameter because it could be something other than MacRoman. And then we set the text mode.

In that example, you saw that there was stroking and filling. And I think there are four modes: there is filling, stroking, filling and stroking, and clipping, I think. Actually, no, there are six. But you can look at that in the documentation, or in the header files, actually. So once we're done with that, then we can say show text at point, or show text; there are a whole bunch of text-type operators that take an ASCII string and send it through the particular encoding that's indicated by the font. Select font is really a convenience function. As you'll see in the next example, select font basically calls the font API to find the font by a particular name, applies an encoding onto the font, sets the font into the context, and then sets the font's scale. So in the second example, we're going to deal with glyphs. Glyphs are basically actual indices of outlines inside of a particular font. They're not Unicode.

They're not anything that's portable; they're really particular to a particular font. Now, typically, a lot of layout engines will decide to take arbitrary Unicode text or Kanji text or something like that and transform it into glyphs associated with a particular font. And at the lowest level of the API, you really are going to be just drawing an array of glyphs that are matched with a particular font. So for any particular text run, after it's been figured out that you have to swivel it around and put in kerning and stuff like that, you'll end up with just a bunch of arrays of glyphs, their positions, and the font they're coming from. So this example shows how to draw with just glyph IDs. You can use ATSUI to do all of this layout stuff. That's what it's for.

And you'll be able to take Unicode text and convert it to glyph IDs and fonts, and then you'll be able to use those with CG. So like all the examples, this one comes with a fill state, stroke state, and text mode, pretty usual. Now we're going to get a font ref case (sorry, not font ID, font ref). So you get a font ref, and you have a size, and you have the glyph array.

Like all the other examples before, you apply the transform, apply the fill state and stroke state, set the font, and set the color. Here it was explicit: instead of the select font mechanism by name, you actually got the font and you got the size, so you can say set the font, set the size. Set the text rendering mode and show glyphs at point. Same as show text at point, except it takes glyphs.

So hopefully everybody's happy with that. I haven't lost anybody? So now we're going to go to the next example. Now we're talking about images. We have a logo with alpha. Of course, in all the cases, it's all the same object. And we can fade things out. So images, with respect to this demonstration, are practically very simple.

This and PDF documents. When the image is called to draw, you simply apply the transform, you set the alpha, which is how you're able to get the fading, and you draw the image with the image box. So let me explain. Both images and PDF documents have the same parameter, called the rect.

It basically specifies a destination rect. For both PDF documents and images in this demo, these are objects that live in a unit square. And what you do with that rectangle is basically say, okay, I've got this thing here, which is like a unit square, and I want to take it and put it at this particular place. So if you were to have an image that was 600 by 800 and you wanted to draw it with no transform, you'd set the transform to be identity, and you'd be able to take the image and draw it at the correct location.

PDF documents are also pretty straightforward, but we won't go to that just yet, because I lost my step. Here's a PDF document. Here's a nice little chicken. There's no smoke and mirrors here; this is real line art being rendered. And we can page through our document and go down.

So there's the chicken document, a Mac OS data sheet, and so on. Documents are like images: they take the same bounding rect to draw to the destination, and they do practically the same thing. You apply a transform, you set the alpha, and you draw the document. Very straightforward. Don't worry about this piece of code; that basically does the tracking for our progress indicator, which allows us to page through our document.

So, next example. This is another example that uses PDF documents and images, and what it tries to demonstrate is color matching. Now, here's an example of an image that was rendered off screen. It's a PDF document that was rendered off screen, and the bytes of that memory were switched around. We lied and we said, it's really a BGR image. So we rendered RGB data, and then we said, oh yeah, forget the RGB, we're just going to call these triples BGR. And this image is essentially the result of that. It's just an image that got rendered as if it was BGR data, and this is why all the reds are blues.

This particular case is a slightly different example. This is basically a PDF document (of course, it does the rotations; the media box is kind of different) that lays down CMYK colors. And on the screen, you see cyan as electric cyan. Now, what we want to do is say, okay, we would like to draw this PDF document, but we really want to proof it as on a CMYK printer.

So we have a CMYK profile, which we got from ColorSync, and we loaded it up and created a color space object from it. And then we said, okay, I want you to draw this document inside this off-screen context with this profile, this ColorSync CMYK profile. And that's why the cyan looks a little bit like what it should look like when you print this electric cyan on a printer.

Now that we've finished the two simple cases of draw document and draw image, we go into a slightly more complicated example. But it's not that strange. We get the document and the media box, and we get the image's color space, which is the color space to use for the image, and we get the context's color space, which is the color space to use for drawing onto your off-screen image.

So basically, the first time through, if there's no context, then let's make one. So we go and compute the width, height, blah, blah, blah. And here we say, I've got a piece of memory and I want to create a data provider. As Haroon indicated, we have these data managers, and they allow you to create objects that can consume and produce memory. In this particular case, we want to create a provider. The data provider is direct access: here's a sheet of memory, it's this size. And we have a little release function that says, okay, when you're done with it and everything's all released, go call this function to free the memory up; as soon as we get the data pointer, we just call free.

So once we create this data provider, then we say, okay, now that we have our data provider, I want to create this image definition. We have to specify the width and height and the bits per pixel. In this particular case, we're just using RGB 8-bit, so bits per sample is going to be 8, bits per pixel is going to be 32, and row bytes is going to be the width of the image times 4. And we're saying that the alpha is premultiplied first; we're using an ARGB format. And the color space, and no rendering intent for the image.

So that's how you would create an image from a bunch of memory. When you look at the header files, it will be a little bit more obvious. Two of these parameters here, don't worry about them; they are basically decode parameters. We do allow the ability to decode images from one bit depth through some linear transformation and produce other bits. So I can take a two-bit image and map my two bits to eight bits and expand the scale, or I can take the two bits and say, I want you to only populate the range from 128 to 255. That's what those things do; when you see them in the documentation, you'll say, okay, yeah, that's what that was for.

So now we have our image. The second thing that we need to do is create our bitmap context, which is the context we want to draw to. It's an off-screen context. We're going to use the same memory that we handed to the image, and we're going to say, okay, create a bitmap context with the correct width and height and the bits per sample, and here's the color space to use. We used this for the CMYK case: we said, okay, here's a CMYK color space, and we're going to use the CMYK to draw. And because you've handed the data provider off to the image, you can just release it, because the image has a reference to it.

So once we've done all of that the first time through (you didn't have a context, you didn't have any memory, you didn't have an image), you just run through that little tiny thing and it creates a right-sized image and a context to draw into that image. So now every time we get new stuff coming in, a different page or whatever, we basically take our off-screen and set our transformation. We're basically trying to say, whatever the media size of the document that we're trying to render, we want it to fit smack dab in the middle of the image.
Because I only provided, you know, 10 pixels by 10 pixels, and I want it to draw in those 10 pixels. So we translate the context so it's correctly positioned. Then we draw; basically, we erase the context, or you clear it.

In this particular case, we're recording alpha, so we want to issue a clear, which will clear out both the alpha and the data planes. And then we just draw the document like the example before, but remember, we're going to an off-screen context. So once the document is drawn, we say, okay, great, we have our little off-screen image. Both the image and the context are referencing the same piece of memory. So we take that, and then we say, okay, great, we're ready to draw: we apply our transform, set our alpha, and we draw our image. And that's how these little proofing things were done. So, a fairly in-depth kind of situation. Now, to tie it all together, we'll demonstrate everything all together. Who needs OpenGL when you can render Aqua icons with software? And if things are a little bit more complicated, a random PDF document with an alpha image being drawn with samples.

Okay, so I just wanted to finish off by talking about some of the documentation. There's a Quartz primer that's available for you on our website. TechPubs is currently working on Drawing With Quartz, part of the Inside Mac OS X series. And at the moment, if all else fails, you can always go back to the headers, and those are available in the Core Graphics framework as part of the Application Services framework. So that's where you want to go. And I want to invite Travis back up to discuss the roadmap.

Thank you, Haroon and Andrew. I want to quickly do the roadmap so we have some time for some Q&A. Obviously, we've already gone and done the PDF and Quartz in Mac OS X sessions and the drawing. Actually, no, they've revised it. So the next session that might be of interest is Drawing Unicode Text with ATSUI. I know that we mentioned text handling in this session, but the built-in text handling inside Quartz is very simple; as we mentioned, a lot of it is at the glyph level and based on MacRoman encoding. If you're developing applications, you're going to want to use more advanced text, and so we strongly suggest you use higher-level frameworks such as ATSUI and MLTE. That will be covered in the Drawing Unicode Text with ATSUI session. We also have an interesting technology called Image Capture, which you've already seen demonstrated; it allows your applications to work with digital cameras.

Then we go over to a very interesting session, 118, which is ColorSync. That will go into depth and describe how color management works in conjunction with Quartz to deliver fully color-managed content on the user's screen and output devices. Unlike previously with QuickDraw, where you really did not have a color-managed drawing environment, Quartz is fully integrated with color management, as was demonstrated by Andrew. This is very important because we feel that fidelity on screen and in print is valuable to users of all classes of applications, so you should definitely check out the ColorSync presentation. Next we have another session, Text on Mac OS X; this will again cover Quartz and how it relates to the other text APIs in the system. Then we have a very interesting demonstration if you are doing high-performance 2D work, where you are, for example, using large bitmaps: although the performance of Quartz is quite good, if you need even greater performance, you should check out OpenGL in High Performance 2D. And we also have a printing session, which will talk about Carbon, BSD, and Cocoa printing.

And then on the final day, we have a very important session for Carbon developers, the Graphics and Imaging Tips and Tricks session. It will provide a lot of information to enable Carbon developers to look seriously at using Quartz 2D for their graphics as opposed to QuickDraw. And then finally, we have the feedback forum.