Mac OS • 1:02:55
Quartz is the foundation of 2D graphics in Mac OS X. This session explains how to harness the power of Quartz in your application. Detailed explanations of the Quartz graphics architecture, the Quartz API, and how to integrate PDF support into your products are presented.
Speaker: Haroon Sheikh
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Good morning, everyone. I'd like to welcome you to session 107, which is 2D Graphics Using Quartz. Today we're going to talk about one of the most exciting aspects of Mac OS X, which is the Quartz graphics technology. And for many developers, you're abstracted from it. If you're using Carbon, you often develop with QuickDraw. If you're using Cocoa, you're using the NSGraphicsContext and NSBezierPath abstractions.
But what we want to talk about is the Quartz 2D API, because we really believe that every application can benefit from being powered by PostScript- and PDF-style graphics, and also from the ability to create and consume PDF as an import format. So I'd like to introduce Haroon Sheikh, who's the Quartz 2D engineering manager, and he's going to take the presentation from here.
Thank you, Travis. Good day, everyone. Welcome to this session. So we're going to be focusing on 2D graphics using Quartz. Our agenda for today is basically going into detail about what Quartz is, look at the architecture, see where it's used in the system, where you might be using it indirectly, where you might want to focus on using it. The bulk of the presentation is going to be focusing on the Quartz API. We'll conclude the session with a demo demonstrating the power of Quartz.
So fundamentally, Quartz is the graphics system on Mac OS X. So it's responsible for the Aqua look and feel. It participates in that. All of your window management is handled by Quartz. Rendering is part of the printing workflow. So it plays a very important role throughout Mac OS X. So looking at the system architecture slide, you'll notice that Quartz is in the application services layer right above Darwin.
At that level, we've got Quartz, OpenGL, and QuickTime. Looking at that in a little more detail, you'll notice that QuickDraw, QuickTime, and OpenGL are not only peers to Quartz, but at the same time they're also sitting on top of Quartz. And the reason for that is that Quartz is really composed of two components: Quartz 2D, which is our focus today, and the Quartz Compositor.
So in the graphics and imaging overview, we've talked a little bit about both of them. The Quartz Compositor is responsible for all your window server and window management needs in the system. It effectively handles the compositing of multimedia, your windows, and your menus, blending them appropriately with the rest of the system and sending that to the screen.
On the Quartz 2D side, that's the rendering library, and that's the focus of this talk today. It's a low-level rendering library, and it's based on the PDF imaging model. So hopefully most of you were at the PDF and Mac OS X session right before this one, which discussed PDF and the imaging model.
Quartz 2D in itself, because it is based on that imaging model and the way we've architected it, is resolution and device independent. So you do not have to worry about what the device is; that's abstracted for you. All you have to do is draw to the context, which I'll get into.
And high-fidelity output can be achieved by Quartz maintaining the resolution of your data throughout the workflow. We also incorporate ColorSync and ATS. ATS is Apple Type Services, so that's for all your font management. ColorSync is incorporated also, so that allows you to get high-fidelity, color-managed output onto your device. Quartz does all the work of working with ColorSync to achieve that.
So let's have a look at where Quartz is used. You may already be using Quartz indirectly. Cocoa, Carbon, and Java applications all sit on top of Quartz, and so they're using Quartz indirectly. In Cocoa, as Travis was mentioning, the NSBezierPath and NSImage classes take advantage of the Quartz API. So the functionality in Cocoa is very similar to Quartz.
So you're taking advantage of Quartz indirectly there. On the Carbon side, if you do things like DrawThemeText, that goes through ATSUI, which does all of its text rendering through Quartz. And also, when you're printing from Carbon, you're printing to a QuickDraw port. Internally, what we do is translate the QuickDraw calls into Quartz calls for your printing needs.
Java 2D graphics are also implemented on top of Quartz, so they're handled natively on the system. You may want to directly access Quartz, and what you want to do there is access it through the Core Graphics framework. Core Graphics is our internal name for Quartz; Quartz is more the marketing name. Core Graphics is part of the Application Services framework.
When would you want to actually use Quartz from Carbon? The QuickDraw model has been around for the last 10 to 15 years, but the Quartz imaging model is much richer, much more advanced. So that's an opportunity for you to take advantage of advanced 2D graphics through Carbon instead of using QuickDraw.
We also have two other APIs that I won't be discussing too much today. One is the full-screen access API, and that's if you want to access the screen for gaming or for changing display depths. There's another API for remote access, so for applications like Timbuktu, where you want to access a remote system and get information about the contents on screen or be able to send events, that's also available; that's also part of Quartz. In general, when you're working with Carbon, Cocoa, or Java, those frameworks will be the ones that you go to for all your window management needs. Quartz doesn't provide that to you explicitly; it's done implicitly through those higher-level frameworks.
So let's get into the API itself. It's a C-based interface, and the reason for that is that we're servicing Cocoa, Carbon, and Java. So you've got Objective-C, C++, and Java interacting with the system, and we decided to choose a simple C-based interface that can service all three of those clients.
And so, for those who are familiar with the Core Foundation naming convention, Quartz relies on the same naming convention for all of its API. The convention is basically a two-letter prefix representing the framework that you're working with, followed by class, verb, and object. So in our case, CG represents the framework, and all Quartz calls will begin with CG. So as an example, CGContextDrawImage.
It's straightforward: what that really does is draw an image into a CG context. So here's a list of the classes that we'll be talking about that Quartz provides. These are not classes in the C++ sense; these are just a collection of things that we'll be discussing today.
So the first thing you want to do is interact with the device. A CGContext is your connection to the device, so it's synonymous with a QuickDraw port. On the Quartz side, that connection is an abstraction, so you do not have to worry about the details of the device, what the resolution is, or whether it's a printer or on screen.
From your perspective, you want to basically send down your data in as rich a format as possible, and Quartz, because it knows about the device, will be responsible for translating the data appropriately for that device: rendering it, saving it out into PDF, or whatever, depending on what the device is. The important point there is that you can send the same information to whatever device, through the same API, making the same calls, and it'll be handled for you.
One thing that's also part of the CG context is state. What I mean by state is that it's similar to QuickDraw, where you're setting the color or setting the font. You set that into the CG context, and until you change it again, the state is maintained. You can also save the state.
So if you want to make changes to the state, for example change the font, the text color, the CTM, or the color space information, you do your drawing, and then if you want you can restore it. When you restore, you're restoring back to the state that you saved; you do not have to worry about undoing each of the changes that you made. This allows you to cleanly get back to a state that undoes all of the modifications that you might have done.
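As a minimal sketch of that save-and-restore bracketing, assuming you already have a CGContextRef from Cocoa, Carbon, or one you created yourself (the color and offsets here are arbitrary):

```c
#include <ApplicationServices/ApplicationServices.h>

static void DrawHighlightedBox(CGContextRef ctx)
{
    CGContextSaveGState(ctx);                    // snapshot the current graphics state

    CGContextSetRGBFillColor(ctx, 1, 0, 0, 1);   // change the fill color...
    CGContextTranslateCTM(ctx, 50, 50);          // ...and the CTM
    CGContextFillRect(ctx, CGRectMake(0, 0, 100, 20));

    CGContextRestoreGState(ctx);                 // one call undoes every change made since the save
}
```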
So for the contexts that we support: if you're working on screen, you're working with Cocoa or Carbon, and Cocoa and Carbon will create a context for you. On the Cocoa side, you've got an NSView, and you can actually get the graphics context for that view. On the Carbon side, you're working with a QuickDraw port.
With QuickDraw, you can actually make a function call to get at the context for that port. When you're printing, you're also working with contexts, and those are created for you by the printing system. Those could be a PostScript context or a raster context, depending on what the printer device is. So in those first two cases, the context is created for you. There are also two other contexts that you can create explicitly.
One is the CG bitmap context; the other is a PDF context. The PDF context is for when you are trying to generate a PDF document. Generating a PDF document is easy: all you do is create a PDF context and start drawing all of your things into that context, and all of the drawing calls that you make will be saved into a PDF document.
At the same time, if you choose to, you could also create a bitmap context, which is an off-screen context where everything will be rendered into that off-screen bitmap. You can make the same set of calls into that context, and because it is now a different context, instead of being saved out to PDF, the same set of calls will be rendered into the bitmap.
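A hedged sketch of creating those two kinds of contexts explicitly; the output path, page size, and pixel format below are just placeholder assumptions:

```c
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

/* PDF context: drawing calls are captured into a PDF file on disk.
   Bracket each page's drawing with CGContextBeginPage/CGContextEndPage. */
CGContextRef MakePDFContext(void)
{
    CGRect mediaBox = CGRectMake(0, 0, 612, 792);   /* US Letter, in points */
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/output.pdf"),
                                                 kCFURLPOSIXPathStyle, false);
    CGContextRef pdf = CGPDFContextCreateWithURL(url, &mediaBox, NULL);
    CFRelease(url);
    return pdf;
}

/* Bitmap context: the same drawing calls are rendered into off-screen memory. */
CGContextRef MakeBitmapContext(size_t width, size_t height)
{
    void *bits = calloc(height, width * 4);         /* 32-bit ARGB backing store we own */
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(bits, width, height, 8, width * 4,
                                                rgb, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(rgb);
    return bitmap;
}
```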
So now that you've got a context, let's look at the drawing primitives that you would want to draw into that context. Basically, there are four of them. You've got vector geometry, which is all of your 2D line art: things like rectangles, ellipses, and paths. You've got text and images, and then finally a PDF document. On the previous slide I talked about creating a PDF document, and we also allow you to take a PDF document and draw it into a context as yet another primitive.
The drawing operations are done using the painter's algorithm. And that's basically, you draw something onto a canvas or onto the device, and you lay down ink for one drawing primitive. And then you lay down ink for the next one on top and continue to do that. So it's a bottom to top drawing operation. Things above you obscure things below you.
And in order to maintain resolution independence, all of our drawing operations support floating point coordinates. So you can even, with that, you can define your shapes, define your positioning in the coordinate system using floating points, and the appropriate translations are done to the device, and that allows for high precision and high fidelity output.
So let's look at the first primitive, which is vector geometry. Fundamentally, vector geometry is represented as a path in Quartz. What you want to do is first define the path into the context; the definition constructs it internally in the context. The next thing you want to do is draw it. Drawing is simple: there's just one call, CGContextDrawPath.
There are other convenience functions that allow you to draw paths and shapes, but that's the one that you would probably use most explicitly. On the definition side, though, it's very similar to PostScript and the PDF imaging model. Basically what you do is begin a path, which destroys any path that you might have in the context already.
The first thing you want to do after that is move to a point. You can also do a line-to or a curve-to. A curve-to is a cubic Bezier curve, and a quad-curve-to is a quadratic Bezier curve. And if you choose to, you may also want to close the path, because there's a difference between an open path and a closed path in terms of how it will get rendered. So as an example, I'm going to try to draw a simple button-like shape using some of these APIs.
The first thing to keep in mind is I'll be illustrating what the path will look like, but the path isn't really being drawn into the context as I step through this. It's really being collected into the context, so the illustration is just to show what the context is collecting.
You begin a path and you move to a point in your coordinate system; as you can see, that's just a point, which I've illustrated there. The move-to sets the current point in the context. Then you can perform a line-to, and that draws a line from the current point to the point that you specify in the line-to.
The next thing I'm going to do is call a curve-to, where I specify two control points and a final endpoint, and that defines a cubic Bezier curve from the current point to the endpoint specified. And similarly, as I keep going with the line-tos and the curve-tos, I'm able to define a shape. I can also explicitly close that path. So here's a sequence that defines that shape.
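A minimal sketch of that kind of path construction; the coordinates for this button-like shape are made up for illustration:

```c
#include <ApplicationServices/ApplicationServices.h>

void AddButtonShape(CGContextRef ctx)
{
    CGContextBeginPath(ctx);                       // discards any path already in the context
    CGContextMoveToPoint(ctx, 20, 20);             // sets the current point
    CGContextAddLineToPoint(ctx, 120, 20);         // straight bottom edge
    CGContextAddCurveToPoint(ctx, 140, 20, 140, 60, 120, 60);  // cubic Bezier: two control points, then the endpoint
    CGContextAddLineToPoint(ctx, 20, 60);          // straight top edge
    CGContextAddCurveToPoint(ctx, 0, 60, 0, 20, 20, 20);       // rounded left end
    CGContextClosePath(ctx);                       // closed paths render differently from open ones
}
```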
You don't necessarily have to go through that sequence for some of the more common primitives that you are used to, for example rectangles, lines, and arcs (arcs can be used to define circles). We provide convenience functions, which I haven't listed, but they're available, so you don't have to construct all of those. For example, for a rectangle, you don't have to do a move-to followed by four line-tos; there's a simple convenience function for that. Once you define the shape, now you want to draw it.
There are a few drawing operations that you can do with it. You may want to fill it, you may want to clip to it, and you may want to stroke it. Fill is really defining the content of that shape that you've defined. There's a fill and an EO fill; I'll show you the difference between the two.
Instead of filling, you could also choose to set the content of that shape to be a clipping region. Once you set a clip for a shape, any drawing operation that you do from that point onwards, while the clip is set in the context, will be effectively clipped to that shape that you've defined.
You can also stroke the outline of that shape, effectively with a pen, and we provide various stroking parameters. You can set the width; you can define what happens at line joins and what happens at the end of a path in terms of the line cap; you can set the miter limit; and you can also specify line dash parameters to control the dashing, effectively defining a dot-dash pattern on that line.
So here are examples of paths. On the top left, you will see two open paths: one is a simple line, one with a dash pattern, one using cubic Bezier curves via the curve-to. There's a more complicated example on your top right. And here are the two stars; they illustrate the difference between a fill and an EO fill.
On the bottom left, the star whose center is not filled is an EO fill, because the even-odd filling rule defines how things are filled; the other star is based on the winding rule, which is the standard fill operation. Paths can also be composed of sub-paths, and those sub-paths can be disjoint. So as an example, you could do a donut-like shape where one sub-path is disjoint from the other. And to do something like that, all you do is perform another move-to operation.
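And a hedged sketch of drawing a path once it has been defined, exercising the stroke parameters and the fill modes just described; the values are arbitrary, and AddButtonShape is the construction sketch from above:

```c
#include <ApplicationServices/ApplicationServices.h>

void AddButtonShape(CGContextRef ctx);             // the construction sketch shown earlier

static void DrawButtonShape(CGContextRef ctx)
{
    const CGFloat dashes[] = { 6, 3 };             // 6 points of ink on, 3 points off

    AddButtonShape(ctx);                           // collect the path into the context

    CGContextSetLineWidth(ctx, 2);
    CGContextSetLineJoin(ctx, kCGLineJoinRound);
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineDash(ctx, 0, dashes, 2);       // phase 0

    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 1.0, 1.0);
    CGContextSetRGBStrokeColor(ctx, 0, 0, 0, 1.0);

    // kCGPathFillStroke fills with the winding rule; kCGPathEOFillStroke would use even-odd.
    CGContextDrawPath(ctx, kCGPathFillStroke);
}
```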
So with those path constructs, you can actually generate really complicated 2D line art. Text is similar in nature, because we work directly with outline fonts, so a glyph is really just a path defined in the font. But you don't have to work with it at that level; we provide glyph-drawing functions to draw your text onto a context.
And we support outline fonts, and most of this comes from leveraging ATS on the system. There's TrueType, Type 1, and OpenType, plus CID fonts for writing systems which have a huge number of glyphs, on the order of thousands; primarily Chinese, Japanese, Korean, and Vietnamese take advantage of that. And similarly to paths, you can fill text, stroke it, and clip with it as well.
For text, the functionality that we provide is at the glyph level; that's the basic, main functionality that Quartz provides. We also provide simple text drawing functionality, primarily with Mac Roman encoding. But for Unicode support, what you really want to do is take advantage of ATSUI above us. That will handle all of your Unicode needs and your layout needs.
So let's have a look at the APIs that you would want to use here. The first thing you want to do is create a CGFontRef. That can come from a platform font, for example an ATSFontRef. There's another function that allows you to select a font by name as well.
Once you've selected the font into the context, you effectively do two things next. You set the text drawing mode, one of the three types that I mentioned; those can be mixed as well, so you might want to fill and stroke, or fill and set the clip. And then you just draw your text.
There are the two variants that I was mentioning: you can draw text explicitly, or you can draw glyphs. The context maintains a current text position, so when you do a show-text or a show-glyphs, it starts the text off at the current text position and draws your text from that point onwards. Once the text has been drawn into the context, it updates the text position. You can also draw text explicitly by specifying the point that you want to start at.
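A minimal sketch of the convenience route: select a font by name, set the text drawing mode, and show Mac Roman text at a point (the font name, size, and position are placeholder assumptions):

```c
#include <ApplicationServices/ApplicationServices.h>
#include <string.h>

static void DrawLabel(CGContextRef ctx, const char *label)
{
    CGContextSelectFont(ctx, "Helvetica", 18.0, kCGEncodingMacRoman);
    CGContextSetTextDrawingMode(ctx, kCGTextFillStroke);  // modes can be mixed: fill plus stroke
    CGContextSetRGBFillColor(ctx, 0, 0, 0, 1);
    CGContextSetRGBStrokeColor(ctx, 1, 0, 0, 1);

    // Starts at the given point; the context then advances its text position past the run.
    CGContextShowTextAtPoint(ctx, 72, 72, label, strlen(label));
}
```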
So that covers text, and that's our second primitive in the system. The next one I'll focus on is images. We've got support for various types of images, primarily through the various color spaces that we support. So we support RGB, LAB, and CMYK images; you can draw all of these into a context. And because we also work heavily with ColorSync, we support ICC-profile-based color spaces that you can associate with an image, and it will be appropriately color managed for you when you render it into a context.
We support an alpha channel, and an alpha channel is nothing more than another channel. So in an example where you've got an RGB image, the alpha channel will be yet another component, and that represents the transparency. The data can also be premultiplied or not, which determines whether the color values have been premultiplied by the alpha. So that's provided in the system also.
We can also create images which are one-bit or eight-bit image masks. Once you create an image like that, you can also draw it into a context, and what you're effectively doing is drawing the current color that's been set in the context through that mask. So that's another interesting feature that we've got in Quartz.
So using images is very simple. The first thing you want to do is create a CGImageRef by calling the CGImageCreate function. When you're creating that image, you're specifying all the parameters that define that image: the width, the height, how many bits there are per component, bits per pixel, and the color space, whether it's RGB, LAB, CMYK, or whatnot.
[Transcript missing]
Once you're done creating that image, it's very simple from that point onwards: you just call CGContextDrawImage and specify the rectangle that you want the image to be drawn into.
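Here's a hedged sketch of that, wrapping a 32-bit ARGB buffer you already have in memory; the buffer layout, premultiplication, and device RGB color space are assumptions for the example:

```c
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

static void ReleasePixels(void *info, const void *data, size_t size)
{
    free((void *)data);                            // called once Quartz no longer needs the bits
}

static CGImageRef MakeImage(void *pixels, size_t w, size_t h)
{
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, pixels, h * w * 4, ReleasePixels);

    CGImageRef image = CGImageCreate(w, h,
        8,                                         /* bits per component */
        32,                                        /* bits per pixel     */
        w * 4,                                     /* bytes per row      */
        rgb, kCGImageAlphaPremultipliedFirst,
        provider, NULL /* decode */, true /* interpolate */,
        kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);
    CGColorSpaceRelease(rgb);
    return image;
}

/* Later: CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image); */
```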
PDF documents are very similar in nature, in the sense that they're as simple as images. The first thing you want to do is create a PDF document ref. You can create one from a file on disk, or if you've got one in memory, you can create it from that; you've got functions for both.
Once you've got that PDF document ref, which effectively points to the document, all you want to do is draw it into a context, and the function that you use there is CGContextDrawPDFDocument. We also provide some convenience functions that allow you to get the bounding box or the number of pages in that PDF document.
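A small sketch of opening a PDF from disk and drawing its first page into a context; the file path is a placeholder:

```c
#include <ApplicationServices/ApplicationServices.h>

static void DrawFirstPage(CGContextRef ctx)
{
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/chicken.pdf"),
                                                 kCFURLPOSIXPathStyle, false);
    CGPDFDocumentRef doc = CGPDFDocumentCreateWithURL(url);
    CFRelease(url);
    if (doc == NULL) return;

    CGRect mediaBox = CGPDFDocumentGetMediaBox(doc, 1);   // page numbers are 1-based
    CGContextDrawPDFDocument(ctx, mediaBox, doc, 1);      // draw page 1 into that rect

    CGPDFDocumentRelease(doc);
}
```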
So now that we've covered all of the drawing primitives, you may want to do some neat effects while you're drawing. One thing that you might want to look at is the transformation model that we've got in Quartz. The initial coordinate system is anchored at an origin at the bottom left, so your (0, 0) is at the bottom left. This is different from QuickDraw, if you're familiar with that, where it's at the top left. It's like a Cartesian graph, with Y increasing upwards and X increasing to your right.
That's your initial coordinate system, the coordinate system that you get when you get a context. And now you can modify that by applying a transformation onto that context. The simple examples are rotates, translates, and scales; you may also want to skew, and you could even build up complex transformations yourself.
So we provide two mechanisms for that. One is that you modify the current CTM that's inside the context directly by making context calls. The other is that we've got affine transform functions that allow you to build up a transform; once you've built the appropriate transform, you can set that into the context.
So let me just go into that and show it as an example. So here is the default coordinate system. So let's say we're just drawing a unit square.
[Transcript missing]
If we were to modify the CTM prior to drawing the path, prior to drawing the rectangle, and let's say we're just doing a rotate operation, so it's going to rotate about the origin, notice the new coordinate system: it started out like this, and it's now rotated. So now any drawing operations you do from that point onwards, for example our rectangle, are relative to that new coordinate system that's in place.
Let's say, before drawing the path, you were also to apply a scale. So now we're building on the rotation that we applied earlier, and we're performing a scale on top of that. And similarly, if you were to perform a translate in the X direction, it's not X relative to the original coordinate system; it's relative to the most recent, effectively the current, transformation matrix in the system. So you'll notice that in this case it's translating toward the top right, even though what we specified was a translation in the X direction.
So using this, you can actually build up really complex transformations. Notice that you do not have to do the calculations and compute the points for that rectangle yourself. You can still continue to draw your rectangle as if it were in the original coordinate system; it's just transformed based on the current transformation that is set up for you, or that you might have set up explicitly yourself.
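Both styles from that discussion, sketched side by side; the angle and scale values are arbitrary:

```c
#include <ApplicationServices/ApplicationServices.h>
#include <math.h>

static void DrawRotatedSquare(CGContextRef ctx)
{
    CGRect unitSquare = CGRectMake(0, 0, 1, 1);

    // Style 1: modify the CTM step by step with context calls.
    CGContextSaveGState(ctx);
    CGContextRotateCTM(ctx, M_PI / 6);             // rotate 30 degrees about the origin
    CGContextScaleCTM(ctx, 100, 100);              // then scale on top of the rotation
    CGContextFillRect(ctx, unitSquare);            // still expressed in the original unit coordinates
    CGContextRestoreGState(ctx);

    // Style 2: build a CGAffineTransform first, then concatenate it once.
    CGAffineTransform t = CGAffineTransformMakeRotation(M_PI / 6);
    t = CGAffineTransformScale(t, 100, 100);
    CGContextSaveGState(ctx);
    CGContextConcatCTM(ctx, t);
    CGContextFillRect(ctx, unitSquare);
    CGContextRestoreGState(ctx);
}
```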
So we've talked about transformations. We've talked about some of the primitives. One thing I want to get into next is the color spaces that we support. So when you're trying to draw vector geometry or text, you want to be able to draw that with a variety of color spaces.
And here are the ones that we support. The first are the device-based color spaces. These are color spaces where you know what the destination device is. They're usually very simple to create, and that's what most people end up using, but that's not necessarily what we recommend. In this category, you've got RGB, gray, and CMYK.
You know what the destination is, and you're telling Quartz you do not want it to be color managed. Alternately, what we recommend is for people to use calibrated color spaces, or even go as far as tagging your data with an ICC profile. So you create these color spaces from a profile, or from a calibrated space where you've specified things like the gamma and the white point for that color space. Once that color space is set, your drawing operations for your text and your vector geometry will be color managed for you appropriately. There's also the LAB color space that we support. And the last one is an indexed color space, which is nothing more than this: you have a base color space, for example the RGB color space, and you build up an array of colors; the color that you specify is then an index into that color table. So this is very similar to GIF or paletted images.
So in order to work with color using our API, what you want to do is create a color space, one of the ones from the previous slides. Once you've created a color space, you can set it for both fill and stroke: you can make calls to set the fill color space, and you can make calls to set the stroke color space.
Now that that is selected into the context, when you're specifying colors, you pass an array of color values, one for each component of that color space. For example, in RGB, you'd be specifying the R, G, and B components as an array of color values. We also allow you to add alpha values, so when you set the color, you can also pass in an alpha value. And that leads us to transparency.
You can set a global alpha into the context. What that means is that when you set an alpha value, all of the drawing that you do from that point onwards will effectively inherit that transparency value, and it will be composited for you to the appropriate device. Alternately, you can also set alpha on the fill or on the stroke explicitly. The global alpha applies to PDF documents that you might render into the context, to text, and to all of the primitives that we've talked about, whereas the fill and stroke alphas only apply to text and to vector geometry.
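A small sketch of that flow: create a color space, select it for fills, then specify component values plus alpha. Device RGB is used here only to keep the example short; a calibrated or ICC-based space would simply replace the first call:

```c
#include <ApplicationServices/ApplicationServices.h>

static void SetTranslucentRedFill(CGContextRef ctx)
{
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextSetFillColorSpace(ctx, rgb);
    CGColorSpaceRelease(rgb);

    // One value per component of the space, plus a trailing alpha value.
    const CGFloat red[] = { 1.0, 0.0, 0.0, 0.5 };
    CGContextSetFillColor(ctx, red);

    // Or, independently, a global alpha that all subsequent drawing inherits.
    CGContextSetAlpha(ctx, 0.75);
}
```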
Transparency is supported on screen contexts and for bitmap contexts, because the appropriate compositing is done for you when you specify the alpha value. On the PDF context, we currently do not support transparency. The transparency model in PDF is part of the 1.4 spec, which we currently have not implemented; the 1.4 spec is not yet published, and we're working with Adobe to track that spec.
We currently support 1.2 and 1.3. A lot of what we support in terms of the PDF specification was discussed in the PDF and Mac OS X talk prior to this one. And because PDF plays an important role in the printing workflow, because that's the spool file, you will also not get transparency on your printing contexts.
So that's something to keep in mind, because it will work on screen, but until we add support for it on the PDF and printing side, you will not necessarily get that on the printed output. The other thing I wanted to cover was data managers, and examples of those are data providers and data consumers. This is nothing more than a way for you to provide data to Quartz or for Quartz to provide data to you.
So in the example where you were creating an image, you're trying to provide the bitmap bits to Quartz. You first have to create a data provider and pass that in, passing in the memory pointer if that happens to be the type of data provider that you're working with.
If you're working with a PDF document that you have on disk, or something else, you do the same thing: you have to create a data provider that allows Quartz to get at the data, whether it resides on disk or wherever it may lie. So we provide convenience functions. Most likely you've got the file loaded already or it's already on disk, so we've got convenience functions that allow you to create data providers and data consumers from memory and from disk, and we use the CFURL mechanism for defining the path.
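Two hedged one-liners showing those convenience constructors; the paths are placeholders:

```c
#include <ApplicationServices/ApplicationServices.h>

// A provider Quartz reads from, backed by a file identified with a CFURL.
CGDataProviderRef MakeFileProvider(void)
{
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/input.pdf"),
                                                 kCFURLPOSIXPathStyle, false);
    CGDataProviderRef provider = CGDataProviderCreateWithURL(url);
    CFRelease(url);
    return provider;
}

// A consumer Quartz writes into, for example when generating a PDF.
CGDataConsumerRef MakeFileConsumer(void)
{
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/output.pdf"),
                                                 kCFURLPOSIXPathStyle, false);
    CGDataConsumerRef consumer = CGDataConsumerCreateWithURL(url);
    CFRelease(url);
    return consumer;
}
```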
And some miscellaneous items. If you're drawing on screen, you have to understand that you're drawing into a back buffer; it's buffered for you, there's double buffering applied by the system, and you're not drawing directly to the screen. So you would want to do a flushing operation if you wanted to see that drawing appear immediately on the screen. There's also another function called CGContextSynchronize, and that's similar to flushing, except it allows you to synchronize drawing from multiple components so that they're all atomically flushed at the same time. You may want to call into a plug-in, for example, to do some drawing into a context.
But you don't want that drawing to appear immediately, so you'd expect that plug-in to synchronize, and then you may choose to flush the contents on your own when you're ready to flush the appropriate pieces. Once they're all done drawing, you want to have that appear atomically on screen; you don't want things to flash onto the screen. So that's another function that you can take advantage of.
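Sketched in code, with a hypothetical plug-in drawing callback standing in for the component that draws but shouldn't flush:

```c
#include <ApplicationServices/ApplicationServices.h>

extern void PlugInDraw(CGContextRef ctx);   // hypothetical: draws into the context, never flushes

static void DrawAndPresent(CGContextRef ctx)
{
    PlugInDraw(ctx);
    CGContextSynchronize(ctx);   // mark that drawing as complete without forcing it to the screen

    /* ... more drawing by the host application ... */

    CGContextFlush(ctx);         // now push everything to the screen at once
}
```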
If you're working with QuickDraw, you're probably imaging through the QuickDraw APIs already, but you may choose to also work with CG at the same time. So there are some interactions if you're trying to image through both at the same time, or, given a QuickDraw port, how do you get at a CG context for it? Those interactions are discussed in the Graphics and Imaging Tips and Tricks talk on Friday morning.
So that's a very important talk if you're working with Carbon or with QuickDraw. It focuses on two aspects: one is the QuickDraw, CG, and printing interaction, and it also discusses performance issues related to flushing, and we've got some tools that will help you there. Now I want to bring up Andrew Barnes to do a demo that demonstrates the power of Quartz.
So as Haroon indicated, there are four types of objects: line art, text, images, and PDF documents. This demo is going to demonstrate how to draw all four of these things, as well as go through some code examples.
When I was in the audience, I saw a lot of people taking a lot of notes. The code fragments that I'm going to be showing you are probably going to be available, or definitely I'm going to make sure they are available, on our website, so you don't have to take down the notes, because it's a fair chunk of code.
So the first thing we're going to start off with is just normal line art. We have a little demonstration here, and we show the Apple logo, which is a nice logo; I should have made it yellow in keeping with today's color. So here is a stroked rectangle, our favorite star that does some strange things to show EO fill, and some stroking with a dash pattern.
Dashing, in the API, is specified as an array of floating-point values. They basically specify how much to go on, how much to go off, how much to go on, how much to go off. And there's an extra phase parameter that allows you to take your start point and move it through that array. So for example, we are going to move things around: as you can see, I'm just adjusting the phase based on some number. And all of this stuff rotates and scales like normal. So let's see how we do that.
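That phase trick might look roughly like this; the on/off lengths and the per-frame increment are made up for illustration:

```c
#include <ApplicationServices/ApplicationServices.h>

static void StrokeWithMarchingDashes(CGContextRef ctx, CGFloat phase)
{
    const CGFloat pattern[] = { 8, 4 };            // 8 points of ink, 4 points of gap
    CGContextSetLineDash(ctx, phase, pattern, 2);
    CGContextAddRect(ctx, CGRectMake(10, 10, 200, 100));
    CGContextStrokePath(ctx);
}

/* Each frame, call it with an increasing phase, e.g.
   StrokeWithMarchingDashes(ctx, frameNumber * 1.5f);
   so the dash pattern appears to crawl along the outline. */
```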
Let's start off with something pretty straightforward. Every object that's ever drawn in this little demonstration program is going to have a draw state: it specifies a transformation, some alpha, and a bounding box of the object in the object's user space. So if I were drawing an object that was a unit, let's say a glyph placed inside of a unit square, the bounding box would basically be (0, 0) to (1, 1). So we do that.
We keep track of this information associated with the object so that we can adjust the transformation such that the particular point that we want on the screen is mapped to the center of that bounding box. So that bounding box information is kept with the object so the object can do its proper transformation.
And then there's a progress indicator. For instance, that little dashing movement was done with a progress indicator; it's just an extra piece of data that gets added on. So, as Haroon indicated, there is the transformation state: there's a current transformation that exists on the context, whether it's a PDF or raster context.
And you have to set your transformation up. So given a draw state, which was that other stuff before with x, y, width, and height, you simply go through very basic steps like translate, rotate, and scale, and this is the little piece of code that will make sure that your bounding box is centered at that point. It basically translates and moves; it's a little bit of a half-width, half-height kind of thing. So if you have a draw state, you can get back a transform.
Now-- If you have a draw state and you want to apply this transformation, there's a little piece of dubious code in here that basically says, OK, if your transform is null, then just take the matrix. But that's just to show you that you can either be working with a transform object, which a lot of people like working with.
They would just modify the transform, keep the transform associated with their object, move that around, and then say, OK, draw with this. Or they keep the parameters explicit: x, y, angle, and scale, and then you basically apply those.
So in both cases, we're going to apply this to our current context. In this particular case, we have a transform, and you concatenate the transform with the current CTM. So if you start off upright and you do some rotation, it ends up rotated, and then you can draw your object.
In the other case, it's like: OK, I've got my explicit components. I can either build a transform by concatenating all these things together, or I can apply them directly to the context, and in that way I'm actually modifying the current CTM as a series of steps. So that's what that code does, which is basically the same as the other code on top. OK, so now we are going to draw a path. There are two things we can do with a path: we can either fill it or we can stroke it.
For this current demo, the only thing that is really applicable to the fill state is the color. Once you get a fill state, our operation, if it wants to apply a fill color, is going to call the fill state's apply, and that will basically apply the color space and the color. Those are pretty straightforward.
The stroke state is a little bit more complicated. Stroking obviously involves a color, and you can set the fill and the stroke color independently. When you're stroking, you're going to be using the line state, or the line parameters, inside of the context. There is the line width and the joining: there are joins that are rounded versus miter joins or bevel joins.
Then there's capping, which is what happens at the end of a segment when you stop: does it get rounded, or does it just get chopped? And then there's this dash array, where you set your phase and specify a sequence of on and off. This stroke state is passed to the object if it needs to stroke, and it basically just does those things: set the line join, the cap, the width, set the dashes, and set the color. Pretty straightforward.
So now we're talking about paths. For the purposes of this demo, a path is very simple: it's just an array of segment types and an array of coordinates. And this little tiny loop goes through and says: if it's a move-to, pick the two coordinates off and put them on; if it's a line-to or a curve-to, pick off the corresponding coordinates. So it's very straightforward. Again, this will be on the website.
So now our path example goes in and says: OK, I've got a path object and I want to draw. All it has to do is apply its transform (the code that you saw above), apply its fill state, apply its stroke state, set the alpha if anybody set any alpha, begin a path, enumerate the path segments, and then draw the path. And then you're done, pretty much.
So a lot of these parameters have default values. For instance, unless you're doing something strange with stroking, you really don't have to set the cap and the join or the dashes if you're just doing a straight line. But the other caveat to this is that if you're unsure of what your state in your context is, you must set it.
I mean, you can't really assume that the font size is going to be the correct size and then just show some text, because you're not always sure where that context came from. But if you are sure, you definitely do not have to set it.
If you make modifications, as Haroon indicated, to the current graphics state, you can bracket them with a save and a restore, so that you can make little tiny tweaks: I can change the color or change the dashing, and then you can restore your changes, which just reverts the context, or the current state, back to what it was before. So whenever these demo functions are called, what I do is I basically do my bracketing save at the start and my restore at the end.
With reference to the synchronize call: I could either do a flush here if I wanted to flush each object independently, or I could do a synchronize, which says, I'm finished drawing my stuff, synchronize it, and let's move on. OK, so in saying that, let's move on to our second demo, which is text. So we can fill and stroke text, and we can clip text. And of course, you can rotate, scale: the usual stuff.
So let's go to our little demo. Now we're moving on to this case, which is broken up into two sections. The first section has to do with text. Text in this particular example (it's slightly misnamed) really has to do with an ASCII string of text with some encoding.
For this particular example, it's going to be Mac Roman encoding. So we're going to get some string of text, and we're just going to say, OK, I want to deal with this string of text in Mac Roman encoding. So, like the path example, it's pretty straightforward: when the object comes in, it has a draw state, a text rendering mode (that's how you're able to get the strokes), a font size, an encoding, and some text.
So basically, when the draw happens, we apply the transformation, which was the whole rotation-and-scale thing. Then we apply the fill state and the stroke state, set the alpha, and select the font by name with its size and the encoding. In this case it's Mac Roman encoding; it's a parameter because it could be something other than Mac Roman. And then we set the text mode.
In that example you saw that there was stroking and filling, and I think there are four modes: there is filling, stroking, filling and stroking, and clipping, I think. Actually, no, there are six. But you can look at that in the documentation, or in the header files, actually.
So once we're done with that, then we can say show-text-at-point, or show-text, or a whole bunch of other text operators that take an ASCII string and send it through the particular encoding that's indicated for the font. Select-font is really a convenience function.
As you'll see in the next example, select-font basically calls the font API to find the font by a particular name, applies an encoding to the font, sets the font into the context, and then sets the font's scale. So in the second example, we're going to deal with glyphs. Glyphs are basically indices of actual outlines inside of a particular font. They're not Unicode; they're not anything that's portable; they're really particular to a particular font. Now, typically, a lot of layout engines will take arbitrary Unicode text, or Kanji text, or something like that, and transform it into glyphs associated with a particular font. And at the lowest level of the API, you really are just drawing an array of glyphs that are matched with a particular font.
So for any particular text run, after it's been figured out that you have to do this and swivel it around and apply kerning and things like that, you'll end up with just a bunch of arrays of glyphs, their positions, and the font they're coming from. So this example shows how to draw with just glyph IDs.
You can use ATSUI to do all of this layout stuff; that's what it's for. You'll be able to take Unicode text and convert it to glyph IDs and fonts, and then you'll be able to use those with CG. So like all the examples, this one comes with a fill state and a stroke state.
And a text mode, pretty much as usual. Now we're going to have a font ref case (sorry, not font ID, font ref): you get a font ref, you have a size, and you have the glyph array. Like all the other examples before, you apply the transform, apply the fill state and stroke state, set the font, and set the color.
Here it was explicit: instead of the select-font-by-name mechanism, you actually got the font and you got the size, so you can say set the font, set the size, set the text rendering mode, and show glyphs at point. It's the same as show-text-at-point, except it takes glyphs.
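A minimal sketch of that glyph route, assuming a layout engine such as ATSUI has already produced a CGFontRef and the resolved glyph IDs; the size and position are placeholders:

```c
#include <ApplicationServices/ApplicationServices.h>

static void DrawGlyphRun(CGContextRef ctx, CGFontRef font,
                         const CGGlyph *glyphs, size_t count)
{
    CGContextSetFont(ctx, font);                       // explicit font, instead of select-by-name
    CGContextSetFontSize(ctx, 24.0);
    CGContextSetTextDrawingMode(ctx, kCGTextFill);
    CGContextShowGlyphsAtPoint(ctx, 72, 200, glyphs, count);
}
```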
So hopefully everybody's happy there; I haven't lost anybody? So now we're going to go to the next example. Now we're talking about images. We have an alpha logo; of course, in all the cases, it's all the same object. And we can fade things out. So images are--
[Transcript missing]
When the image is called to draw, you simply apply the transform. You set the alpha, which is how you're able to get the fading, and you draw the image with the image box.
So let me sort of explain. Both images and PDF documents have the same parameter, called the rect, which basically specifies a destination rect. For both PDF documents and images, these end up being objects that are in a unit square, and what you do with that rectangle is basically say: OK, I've got this thing here, which is like a unit square, and I want to take it and put it at this particular place. So if you were to have an image that was 600 by 800 and you wanted to draw it with no transform, you would have to deal with the unit square.
When you set the transform to be identity, you'd be able to take the image and draw the image at the correct location.
[Transcript missing]
But we won't go to that just yet because I lost my step. Here's a PDF document. You know, here's a nice little chicken. There's no smoke and mirrors here. This is real line art being rendered. And our document. And go down.
So there's just the chicken document, a Mac OS data sheet, and so on. So documents are like images: they take the same bounding rect to draw to the destination, and they do practically the same thing. You apply a transform, you set the alpha, and you draw the document. Very straightforward. Don't worry about this piece of code; that basically does our tracking for our progress indicator, which allows us to page through our document.
[Transcript missing]
Here's another example that uses PDF documents and images, and what it tries to demonstrate is color matching. Now, here's an example of an image that was rendered off screen. It's a PDF document that was rendered off-screen, and the resulting memory was switched around: we lied and said it's really a BGR image.
So we rendered RGB data, and then we said, oh yeah, forget the RGB; we're just going to call these triples BGR. And basically, this image is essentially the result of that: it's just an image that got rendered as if it were BGR data, and this is why all the reds are blues.
This particular case is a slightly different example. This is basically a PDF document (of course, it does those rotations; the media box is a bit different) that lays down CMYK colors, and on the screen you see cyan as electric cyan. Now, what we want to do is say: OK, we would like to draw this PDF document, but we really want to proof it on a CMYK printer.
So we have a CMYK profile, which we got from ColorSync. We load it up and create a color space object from it. And then we say: OK, I want you to draw this document inside this off-screen context with this profile, which is this ColorSync CMYK profile. And that's why the cyan looks a little bit like what it should look like when you print this electric cyan on a printer.
Now that we've finished the two simple cases of draw-document and draw-image, we go into a slightly more complicated example. We get the document and its media box, we get the image's color space, which is the color space to use for the image, and we get the context's color space, which is the color space to use for drawing onto your off-screen image.
So basically, the first time through, if there's no context, then let's make one. So we go and compute the width, height, blah, blah, blah. And here we say: I've got a piece of memory and I want to create a data provider. As Haroon indicated, we have these data managers, and they allow you to create objects that can consume and produce memory.
In this particular case, we want to create a provider. The data provider is a direct-access sort of thing: here's a chunk of memory, it's this size, and we have a little release function that says, OK, when you're done with it and everything's been released, go call this function to free the memory up, which, as soon as we get the data pointer, just calls free. So once we create this data provider, then we say: OK, now that we have our data provider, I want to create this image definition.
We have to specify the width and height, and the bits per pixel. In this particular case, we're just using 8-bit RGB, so bits per sample is going to be 8, bits per pixel is going to be 32, and row bytes is going to be the width of the image times 4. And we're saying that the alpha is premultiplied first, so we're using an ARGB format. And the color space, and no rendering intent for the image.
So that's how you would create an image from a bunch of memory, which you can see in this example; when you look at the header files, it will be a little bit more obvious. Two of these parameters here, don't worry about them: they're basically decode parameters. We do allow the ability to decode images from one bit depth, through some transformation that's linear, and produce other bit values.
So I can take a 2-bit image and map my 2 bits to 8 bits and expand the scale, or I can move the 2 bits and say I want you to only populate the range from 128 to 255. That's what those decode arrays do; when you see them in the documentation, you'll say, OK, yeah, that's what that was for. So now we have our image. The second thing we need to do is create our bitmap context, which is the context we want to draw into; it's an off-screen context.
We're going to use the same memory that we handed to the image, and we're going to say: OK, create a bitmap context with the correct width and height and bits per sample, and here's the color space to use. In this particular case, for the CMYK example, we said: OK, here's a CMYK color space; that's what we want to use for drawing. And because you've handed the data provider off to the image, you can just release it, because the image has a reference to it.
So once we've done all of that the first time through (you didn't have a context, you didn't have any memory, you didn't have an image), you just run through that little tiny bit of code and it creates a right-sized image and a context to draw into that image. So now, every time new stuff comes in, a different page or whatever, we take our off-screen and set our transformations so that, whatever the media size of the document we're trying to render, it fits smack dab in the middle of the image.
Because I only provided, say, 10 pixels by 10 pixels, and I want to draw into those 10 pixels. So we translate the context so it's correctly positioned. Then we erase the context, or clear it; in this particular case, we're recording alpha, so we want to issue a clear, which will clear out both the alpha and the data planes.
And then we just draw the document, like the example before, but remember, we're going to an off-screen context. So once the document is drawn, we say, OK, great, we have our little off-screen image; both the image and the context are referencing the same piece of memory. We basically take that, and then we say, OK, great, we're ready to draw: we apply our transform, we set our alpha, and we draw our image. And that's how, basically,
[Transcript missing]
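Pulling those steps together, here's a compressed, hedged sketch of the off-screen idea: render a PDF page into a bitmap context backed by memory you own, then wrap the same memory as a CGImage so it can be drawn into another context. The sizes, the device RGB space (the demo proofed through a ColorSync CMYK space instead), and the page number are assumptions for illustration.

```c
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

static void ReleaseBits(void *info, const void *data, size_t size) { free((void *)data); }

static CGImageRef RenderPageOffscreen(CGPDFDocumentRef doc, int page, size_t w, size_t h)
{
    size_t rowBytes = w * 4;
    void *bits = calloc(h, rowBytes);

    // 1. Off-screen bitmap context over memory we own.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef offscreen = CGBitmapContextCreate(bits, w, h, 8, rowBytes,
                                                   space, kCGImageAlphaPremultipliedFirst);

    // 2. Clear both the alpha and color planes, then draw the page into it.
    CGContextClearRect(offscreen, CGRectMake(0, 0, w, h));
    CGContextDrawPDFDocument(offscreen, CGRectMake(0, 0, w, h), doc, page);

    // 3. Wrap the same pixels as a CGImage for drawing into some other context.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bits, h * rowBytes, ReleaseBits);
    CGImageRef image = CGImageCreate(w, h, 8, 32, rowBytes, space,
                                     kCGImageAlphaPremultipliedFirst, provider,
                                     NULL, true, kCGRenderingIntentDefault);

    CGDataProviderRelease(provider);
    CGColorSpaceRelease(space);
    CGContextRelease(offscreen);
    return image;   // caller draws it with CGContextDrawImage and then releases it
}
```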
Okay, so I just wanted to finish off by talking about some of the documentation. There's a Quartz primer that's available for you on our website. TechPubs is currently working on Drawing with Quartz, part of the Inside Mac OS X series. And if all else fails, you can always go back to the headers, and those are available in the Core Graphics framework, which is part of the Application Services framework. That's where you want to go. And I want to invite Travis back up to discuss the roadmap.
I want to quickly do the roadmap so we have some time for some Q&A. Obviously, we've already done the PDF in Mac OS X session, and the drawing-- actually, no, they've revised it. So the next session that might be of interest is Drawing Unicode Text with ATSUI.
I know that we mentioned text handling in this session, but the built-in text handling inside Quartz is very simple; we mentioned that a lot of it is at the glyph level and based on Mac Roman encoding. If you're developing applications, you're going to want to use more advanced text, and so what we strongly suggest is that you use higher-level frameworks such as ATSUI and MLTE. That will be covered in Drawing Unicode Text with ATSUI. We also have an interesting technology called Image Capture, which you've already seen demonstrated; it allows your applications to work with digital cameras.
Then we go over to a very interesting session, 118, which is ColorSync, where we'll go into depth and describe how color management works in conjunction with Quartz to deliver fully color-managed content on the user's screen and output devices. Unlike previously with QuickDraw, where you really did not have a color-managed drawing environment, Quartz is fully integrated with color management, as was demonstrated by Andrew.
This is very important, because we feel that fidelity on screen and in print is valuable to users of all classes of applications, so you should definitely check out the ColorSync presentation. Next, we have another session on text on Mac OS X; this will again cover Quartz and how it relates to the other text APIs in the system. Then we have a very interesting demonstration.
If you are doing high-performance 2D work where you are, for example, using large bitmaps, you can tell the performance of Quartz is quite good; but if you need even greater performance, you should check out OpenGL and High-Performance 2D. And we also have a printing session, which will talk about Carbon, BSD, and Cocoa printing.
And then on the final day, we have a very important session for Carbon developers: Graphics and Imaging Tips and Tricks. This session will provide a lot of information to enable Carbon developers to look seriously at using Quartz 2D for their graphics as opposed to QuickDraw. And then finally, we have the feedback forum.