
WWDC01 • Session 105

Graphics & Imaging Overview

Mac OS • 59:43

This session offers a summary of the exceptional 2D and 3D graphics technologies in Mac OS X. The latest information on Quartz, OpenGL, ColorSync, printing, and Image Capture is presented. This overview provides an introduction to other graphics and imaging sessions.

Speaker: Peter Graffagnino

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it may contain transcription errors.

Hi, everybody. I think they were running a little bit late next door. Hopefully, you guys made it over from the other session. So, let's see if I can get this thing going. Okay, that's me. So here's the block diagram you've all seen probably many more times than you need to right now. But what we're going to be focusing on here is the graphics layer. And what I'm going to be doing in this session is really bringing you through an overview of all the various technologies we have on OS X in the graphics layers for you to use in your applications, from 2D and 3D to multimedia. Then I'm going to show you a few demos on the system here. And then, as Travis said, I'll give you some pretty extensive pointers to other sessions throughout the week that I highly recommend you go to if you're interested, particularly in graphics.

So, again, the bubble bullets for the graphics layers: we have Quartz for 2D, OpenGL for 3D, and QuickTime for multimedia and video. Now, what we wanted to do when we came up with Mac OS X was take the best technologies in each of these areas and adopt them, and not necessarily reinvent the wheel or come up with new and different technologies. So what you can see we've done, as I go through the presentation, is really look for best practices in the industry and employ those in OS X as building blocks for you guys to do your applications.

So let's start with Quartz. Some of these slides you may have seen before in the keynote. If any of you work at big companies, you know how this works, but I did them first, so. So anyway, PostScript and PDF are really the industry-standard 2D imaging model. And if you think about it, it's pretty amazing. Basically all the pages that you've probably seen in publications over the last 10, 20 years, since the basic underpinnings of PostScript were figured out, can be described in this language. And it's really evolved only very little over time, to add things like device-independent color, other font formats, and things like that. So it's really a credit to the designers of this imaging model that it's withstood the test of time. It's even gone through a language change. I mean, it started out bound to a very Java-like, VM-oriented environment with the PostScript language interpreter, moved into a file format like PDF, which is just declarative, and then, as we have with Quartz, we'll be showing you just the 2D C library, which implements the same imaging model.

So I thought it would be worth a few minutes just to look at the various components of what make up the PostScript imaging model, because it may be new to some of you. There are basically three fundamental primitive data types that PostScript and PDF can draw. There are outline fonts, sampled images, and vector line art. Outline fonts are mathematical descriptions of fonts. If you think back to 1985, '84, when PostScript was first being conceived, the idea of device-independent fonts was sort of unheard of. I mean, fonts were things that, you know, are in the ROM of your CGA card, and, you know, they're bitmaps of a certain size, or they're in that thing you stick in your HP inkjet. They come in a cartridge in a ROM. The whole idea of device-independent fonts was really something that was pioneered with PostScript and widely adopted. And I think that not a lot of people remember those days, but it was a pretty stunning innovation. And over time, different font formats have come out. Of course, there was the original Type 1 that came along with PostScript. And of course, Apple innovated with TrueType, which was a more programmatic way to do hinting. And then OpenType, which is kind of a repackaging of Type 1 and TrueType into a more standard container. The other thing that's important about the imaging model is that the formats are all very orthogonal. In other words, outline fonts can be transformed and rotated, as can sampled images, as can the vector line art. In some systems, for example, text color was, say, a different thing than a color you could draw a circle with. But the PostScript designers were very orthogonal about the imaging model and how they put it together. And whatever you could do to type, images, or line art, you could do across the board. So the second type is sampled images.
Sampled image data is really just a rectangular array of image samples that are then reproduced on the device by some rendering pass, which could involve resampling the image, some image processing, halftoning, that sort of thing to come out on the device.

Now, it's pretty amazing: back in the early days, you know, one-bit devices were primarily all PostScript drove. But still, the designers had in mind, why not express the images as deep data, even though they may be halftoned downstream, in a device-independent format? So they really thought that through, even though that's where a lot of sort of circa-'80s graphics systems tripped up, because they wanted efficient bitmaps. But PostScript kind of did it right with the algorithmic model for sampled images. Vector line art is the last thing I have on the slide here. These are the paths, the Bezier cubic and quadratic paths, that you can draw with the PostScript imaging model. Again, they can be transformed and filled. In fact, they use pretty much the same path description logic that the outline font system uses to describe arbitrary vector art. So those are the three primitive types. And together, as I said before, they can describe practically any page that's, well, certainly been printed in the last 20 years, and maybe forever. It's pretty amazing.
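The cubic Bezier paths he mentions are simple to evaluate. As a rough illustration (a toy Python sketch of the textbook formula, not anything from Quartz or PostScript), a cubic curve segment is just a weighted blend of its four control points:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    p0..p3 are (x, y) control points; p0 and p3 are the endpoints,
    while p1 and p2 pull the curve toward them. This is the same
    four-point primitive that the PostScript/PDF `curveto`
    operator describes.
    """
    u = 1.0 - t
    # Bernstein basis weights: u^3, 3u^2t, 3ut^2, t^3 (they sum to 1)
    b = (u * u * u, 3 * u * u * t, 3 * u * t * t, t * t * t)
    x = sum(w * p[0] for w, p in zip(b, (p0, p1, p2, p3)))
    y = sum(w * p[1] for w, p in zip(b, (p0, p1, p2, p3)))
    return (x, y)

# The midpoint of an arch-shaped curve:
mid = cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.5)
# -> (0.5, 0.75): halfway across, pulled up toward the control points
```

The same evaluation underlies both glyph outlines and vector line art, which is why, as the talk notes, the two can share path machinery.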

And so where does Quartz fit into this? And you've heard the name Quartz 2D. Well, Quartz is our implementation of the imaging model of PostScript. So taking those same concepts of the three basic primitive types and putting them into a lightweight C language library that you can call in your applications, and we can call within the system to do rendering on your behalf as well. One thing that's important when coming up with a 2D graphics library is to think about a metafile format, or a stored structured graphics format. The original designers of QuickDraw were very smart, because they came up with the PICT file, which was a way to basically pickle any function call you might make to QuickDraw through the bottlenecks and record them into a file and play them back again. And it's a very powerful concept. And Windows GDI has the GDI metafile, which is a very similar thing. And most designers of 2D systems think about, well, what's going to be the stored representation of the graphics? And so we kind of arrived at Quartz 2D working backwards. We knew we wanted PDF graphics streams to be the persistent format and the record-and-playback format that the 2D library could use. But then we worked backward from that to figure out what the API should be, and of course we should draw Bezier paths so they can be represented exactly the way PDF expresses them. So that's really a little motivation for you of how we ended up with Quartz 2D. We really started with the idea of PDF record and playback and then arrived at the API from that.
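The record-and-playback idea behind PICT, GDI metafiles, and PDF content streams can be sketched in a few lines. This is a hypothetical Python toy, not any real Apple API: it just captures drawing calls in order and replays them against any target "device" later.

```python
class DisplayList:
    """Toy metafile: record drawing calls, replay them later.

    A stand-in for the idea behind PICT files, GDI metafiles, and
    PDF content streams -- not any real graphics API.
    """
    def __init__(self):
        self.ops = []          # recorded (operation, args) pairs

    def record(self, op, *args):
        self.ops.append((op, args))

    def playback(self, device):
        # Replay every recorded call, in order, against the device.
        for op, args in self.ops:
            getattr(device, op)(*args)


class TextDevice:
    """A trivial 'device' that just logs what it is asked to draw."""
    def __init__(self):
        self.log = []

    def move_to(self, x, y):
        self.log.append(f"move_to({x}, {y})")

    def line_to(self, x, y):
        self.log.append(f"line_to({x}, {y})")


dl = DisplayList()
dl.record("move_to", 0, 0)
dl.record("line_to", 10, 10)
dev = TextDevice()
dl.playback(dev)    # dev.log now holds both calls in order
```

The point of the design, as the talk describes it, is that the same recorded stream can be replayed against very different devices: a screen, a PostScript printer, or a raster driver.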

The other thing for Quartz 2D is we have very fast anti-aliasing. One of the major motivations for building Quartz was the user interface and some of the ideas that we wanted to do with Aqua. And we knew it had to be very fast and very high quality. And so with Quartz, we spent a lot of time on the anti-aliasing algorithms, and it's really sub-pixel accurate in terms of text positioning, with fixed-point subpixel coordinates and exact coverage per pixel. So it's pretty sophisticated what's going on there. Then there's type technology that Apple actually already had, so we didn't need to rebuild anything here. Apple has pretty extensive type machinery called the Apple Type System, and it supports TrueType as well as Type 1 and other pluggable font formats. And we just built that right into Quartz, and Quartz calls the Apple Type System whenever it has to do any character handling. And in fact, that piece of code can stream out data when we need to embed a font. We just tell it, hey, we used these three glyphs from this font, and the type system will subset the glyphs and stream it out to us in a format that we can just stick right into the PDF file. So it's a really good way to get a lot of value out of these technologies that we had at Apple that had been around before we even did Quartz. On the Type 1 scaler I mentioned, we do have Type 1 built in. In fact, we worked with Adobe on getting the Type 1 scaler. So in terms of ATM, you don't really need ATM as a rasterizer. I mean, there are font management things in ATM as a product. But in terms of the original reason why ATM existed, which is to bring Type 1 support to the Mac, we've cleaned that up with OS X, and it's now just built in as a standard format, no extra software required.
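To illustrate what "exact coverage per pixel" means, here is a deliberately tiny Python sketch. It assumes only a vertical shape edge; a real rasterizer like the one described handles arbitrary path edges and is far more involved. The idea is that the pixel an edge crosses gets a fractional alpha proportional to how much of it is covered, instead of a hard 0-or-1 step.

```python
def edge_coverage(edge_x, pixel_x):
    """Fractional coverage of the unit pixel [pixel_x, pixel_x + 1)
    by the half-plane x < edge_x (a vertical shape edge).

    Pixels fully inside the shape get 1.0, pixels fully outside get
    0.0, and the pixel the edge crosses gets the exact covered
    fraction -- the quantity an anti-aliasing rasterizer turns into
    an alpha value.
    """
    return max(0.0, min(1.0, edge_x - pixel_x))

# An edge at x = 2.3 across a row of four pixels:
row = [edge_coverage(2.3, px) for px in range(4)]
# inside pixels -> 1.0, the crossed pixel -> about 0.3, outside -> 0.0
```

With more than a handful of gray levels available, that fractional coverage is what preserves subtle outline features, the point made about Optima later in the talk.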

ColorSync is another technology we had at Apple, used primarily in high-end workflows, and it's a great API for wiring together transformations on colors and sending image data through it. And so we just use that directly in Quartz 2D without necessarily writing any new code. If we're asked to draw an LAB image rotated 45 degrees, we'll set up the color worlds behind your back and match appropriately through the color worlds to the screen, or however you've set up the destination profile for your rendering context. There's going to be a lot of talk about how this exactly works: the matching between the ColorSync implementation in Quartz 2D and the PDF model versus the straight ICC model, which is in ColorSync. And in the ColorSync talk, they're going to explicitly talk about ColorSync and Quartz working together. So if you're interested in that topic, that's a good session.

The other thing we did, as you've no doubt heard, is we've bundled a bunch of fonts, about 50 megabytes worth of fonts. They're both decorative and classic designs as far as Roman typefaces go. And for Japanese, we've got six really high-quality Japanese fonts. These fonts range, again, from kind of more classic designs to more modern designs. But additionally, they come with a huge number of glyphs. I think it's something like 12,000 glyphs in each of these fonts. So they're really complete fonts. And what's more, there's no resolution limit or embedding limit in terms of what you can do with these fonts on OS X. And that's something that's, I think, sort of been holding back desktop publishing a little bit in the Japanese market, because it's very expensive to have high-quality fonts, and the outlines are very protected.

And if any of you have dealt with laser writers with these kind of fonts, you know what I'm talking about. But these fonts are -- there's no such limitation and no copy protection or anything like that. And together with our streaming we get out of the Apple type software, what we're able to do is embed only the glyphs necessary from the fonts that are shown into the PDF file. So that PDF file can travel anywhere, any platform, and still have those great fonts embedded in it. So let me show off some of the fonts here.

So we have Baskerville, a bunch of sizes. You'll notice it's not just the regular italic bold kind of set. We've actually got some of the semi-bold and intermediate weights like that, so obviously much more expressive than QuickDraw style bits. We've got regular, bold, light, and condensed, various flavors of American Typewriter, which is a nice fixed-advance font.

Some of the classic faces here: Caslon, Didot, Copperplate. Some more decorative faces: we've got Marker Felt, which is kind of a nice presentation font. Zapfino, which is a font designed by Hermann Zapf, the font designer, in the spirit of his own calligraphy, his own handwriting. And that's a really beautiful font, and I'll actually show that interactively during the demo.

Optima. Optima is a great font. It's been around for a while. One of the things that is great about Optima on OS X is the anti-aliasing really brings through some of the subtle characteristics of this font. For example, the B in bold has a very slight bow on the curve. And to really represent that faithfully requires pretty sophisticated anti-aliasing and more than 16 levels of gray, as some approaches do. So it really benefits from the real sophisticated anti-aliasing approach we take. And a normal hinting renderer would flatten that thing away the first time it saw it. So it's great to be able to preserve some of the subtle details of the typefaces. And then here are the Japanese fonts. As I said, I hope I'm not offending anyone with this. I don't speak Japanese, but I'm told it says this is a beautiful font. I trust them on that. But you can see we have some of the modern faces and the more classic serif faces, too, at the bottom.

So that's a little bit about 2D. One very important aspect of 2D, though, is printing. And one of the prime reasons we did the Quartz 2D effort and worked with PDF was we really wanted to retool the printing architecture. And when I put this next slide up, all the guys who are on the printing team in the audience say, if only it were that simple. But believe it or not, this is the essence of what's going on when you print on Mac OS X. We use Quartz 2D heavily even for Carbon applications, which may be calling QuickDraw to draw their pages. What I've depicted here is Internet Explorer on the left, which is a Carbon app running against the Carbon runtime, calling QuickDraw to draw its pages. It goes into its printing loop, and what happens is we've taken a bunch of code and expertise that we had in the LaserWriter 8 driver, which knew how to translate, if you will, from the QuickDraw imaging model to the PostScript imaging model. Well, since the imaging models of PostScript, and PDF, and Quartz are all sort of the same, we retooled that code to essentially implement a set of QuickDraw bottlenecks which call Quartz 2D. And so when you go into your printing loop in Carbon, what you're actually getting back is a GrafPort that's backed by these special bottlenecks that we've implemented that call Quartz. And so that flows through Quartz, and then Quartz, of course, knows how to record that PDF file. So what we do in that first half, the spooling phase of the printing architecture on OS X, is really record a faithful representation of what the application told us to draw, or told QuickDraw to draw, or, if you're calling Quartz 2D directly, what you told Quartz 2D to draw. And you obviously need a really great packaging format and imaging model to be able to basically describe what any application wants to draw. And that's, again, one of the great advantages of using PDF.
So we don't take any steps like in Distiller, which is a product that exists, you know, for the purpose of creating PDF. And it has a lot of great bells and whistles, like downsampling your images, you know, recompressing the JPEGs, converting CMYK to RGB. None of that is really essential in this pass here, because the only thing that we're trying to do is faithfully represent what the application drew. So it's like a high-fidelity, application-independent, device-independent representation of the pages. That's what we use PDF for there. And then that file flows through the printing system and arrives at a back-end piece of software that manages the actual I/O to the printer. And what that piece of software does is call Quartz 2D again, this time to parse and replay the PDF file and convert it to a device-specific format for the printer. So it'll convert PDF to PostScript in the case of PostScript printers.

It will convert PDF to raster data in the case of inkjet printers. And so the inkjet printers that you see on Mac OS X are taking advantage of the high-fidelity rendering of Quartz 2D in their output, which is a great feature. And in fact, we have a bunch of printers supported. Thanks to those of you from Canon, Epson, or HP who helped us get this done, and to our printing team as well. For OS X, I think we did really well getting 50 printer drivers for inkjet printers in the box, and that's only the beginning of what we're going to be doing with these vendors and others. The interesting thing for you guys as application developers is you don't really have to treat inkjet devices as second-class citizens anymore in terms of high-fidelity graphics and PostScript-quality rendering. And, yeah, you can rotate text on them. And if you go to the talk on Thursday morning that I'll point you at, you'll hear all about the printing GrafPort for Carbon applications.

And, in fact, it interprets a lot of the special PicComments that the LaserWriter 8 code did. So you can really get some of those effects across the line on inkjet printers, which is really pretty nice. So, anyway, a plug for inkjets. They're great. Okay. So that's about it for Quartz. Let me briefly show you some slides on OpenGL. Some of these may look familiar. So OpenGL is industry-standard 3D technology. It's been around for, I don't know, 15 years or so. It started life as the GL graphics library on SGI machines; I think it became a standard in the early '90s. It's been through about two or three revisions as OpenGL. And it's very much a vibrant standard. There are all kinds of extensions being proposed very frequently. It's got a very clean model of how to add functionality and how to probe for functionality, to be able to take advantage of specific card features.

And it's a real nice programming model. And a lot of our game developers have used it to develop some great games over time. It's a picture from Star Trek. And if you go down on the show floor, you see the games all running on OS X. It's really pretty amazing. I don't know. This calculation is probably off, but not by an order of magnitude. But I figured there's got to be something like 100 gigaflops on that table of Quake games that people are playing. I don't know if anyone has a better number to shout out. But, you know, you figure you've got G4 CPUs in there, and you've got NV11s, NVIDIA's GeForce 2 cards, in there. I think it's a pretty stunning supercomputer display there.

But OpenGL is really not just for games. Again, you may have seen the slide before. It's also for a lot of applications that kind of grew up around GL and Unix and environments like that, which are prevalent in the high-end modeling, animation, and scientific engineering world. Maya is a great example of such an application, and we're really excited and working very closely with the Maya guys to get their port for OS X working really well. But there's a whole stream of other developers who are kind of in this camp as well, and hopefully some of you out there, who come from this heritage of high-end workstation graphics with either, you know, custom vertical solutions or even broad solutions. So we're really excited to be working with any of you developers out there. We have a great OpenGL team and great developer support. So get involved with Apple if you have, or you know someone who's got, a great solution to bring to the platform now that we have Unix and OpenGL all working together.

The architecture on OS X for graphics acceleration, we have spent a lot of time making sure that the graphics acceleration architecture on X aggressively virtualizes the resources of the graphics cards. So it's -- if you're playing a full screen game, for example, essentially every byte of video memory can be owned by that game. When you're running in a windowed environment, the video memory is obviously shared and arbitrated among the applications, but none of them have to explicitly get in the game of managing their memory. So really it's almost sort of a VM system that works in the video memory. Of course, there aren't page tables, so the analogy is not completely accurate. But it really, we did take our role as an OS vendor very seriously when we looked at how to support 3D graphics acceleration, and it's really a nice architecture. So we're really happy to have great drivers for both NVIDIA and ATI cards out there and work with you on any additional cards if you're at one of the other graphics houses. Thank you.

QuickTime, just a few slides on QuickTime. Mac OS X comes with QuickTime 5. There are a few new features in QuickTime that are important. Actually, first, let me back up and just mention on this slide my comment about QuickTime's longevity. So QuickTime has also been around for a long time, you know, early '90s or so. And the amazing thing about QuickTime is that it has really survived the test of time. I mean, the original architecture, if you think back to the early '90s, based on the Component Manager and pluggable codecs and dynamic codec chain building, all that stuff was really pretty groundbreaking, and I think it has paid off over the history of QuickTime, because the basic architecture is still the same. The object orientation, even though that was, you know, pre-heavy-duty use of C++ and object orientation, the Component Manager was what they made it go with. But it's really pretty amazing that there's been such innovation in video codec technology and video streaming technology, and the basic architecture is still sound.

And so, you know, I think it's a great thing to point out about QuickTime, and I don't always get an opportunity to talk about QuickTime, so I thought I would give that plug, and thanks to the team for caring for that architecture over the years. So the stuff that's built into QuickTime 5 that's new: there's a new user interface, there's a Flash 4 codec, and Cubic VR, which is actually pretty exciting, because Cubic VR not only lets you look up and down, but it uses the six faces of a cube, and it's very amenable to hardware acceleration. A lot of the cards now are starting to implement cubic environment mapping, so I think there are some very interesting opportunities with Cubic VR and OpenGL working together. The DLS music synthesizer, you saw the demo of that earlier. MPEG-1 streaming, and the new DV codec, which is much higher performance and higher fidelity in terms of getting DV to the screen, very important for applications like iMovie and editing apps that you might be working on. It also leverages the sound architecture on OS X, which takes advantage of some of the real low latency we're able to get inside the kernel in OS X, and additional tricks to really get the sound performance amazingly low latency and high throughput on OS X. I think games will take a lot of advantage of that as well.

So those are the technologies that you as application developers can make use of on OS X. And one important point about them is they're all optimized for G4 and multiprocessing. It's our goal that you should see a nice, you know, sort of performance increase as you move up the product line. The G3s, we spend a lot of time tuning for, which is probably the most important work. But we also spend a lot of time tuning for G4, and also for multiprocessing. We kick off extra threads at certain places throughout the system when it's appropriate. So you should see a nice, you know, performance scaling as you move through the product line. And I would encourage you to take a similar approach with your apps, because it's nice to be able to point people at, you know, higher-end configs as a way to get around performance issues. Obviously, it makes us sell a lot of high-end hardware, but it also makes your app shine, too, because you can employ these tools to make users more productive. So definitely take advantage of those things. One other topic I wanted to cover in these slides is a kind of fourth area where we innovated in OS X, and that comes down to the windowing system. What do you do when you've got all these different graphics environments wanting to share the screen? You've got demands for very high fidelity, anti-aliased icons with, you know, very subtle edges. And what we did was decide to rethink how the windowing system actually worked and approach it from a slightly different angle. So back in 1984, there was a paper by Tom Porter and Tom Duff from Lucasfilm, the graphics group that later went on to form Pixar, where they really laid out the beginnings of digital image compositing and introduced the alpha channel, among other concepts, in this paper.
And it's really been a touchstone paper for any of you that are in computer graphics or go to SIGGRAPH proceedings, because it's really a very elegant formulation of the compositing algebra and how to put together images that may, at that time, come from different batch offline processes. You had one program that could draw spheres, another program could draw fractal terrain, and you wanted to put them together, but you didn't want to put all the code together, so they worked out a system where you could preserve the anti-aliasing with the alpha channel, and you could recomposite after the fact different scene elements and save a huge amount of time in movie production.
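The compositing algebra from the Porter-Duff paper boils down to a handful of operators, and the most common one, "over", is easy to sketch. This is the textbook formula on premultiplied RGBA values in Python, not Apple's compositor code:

```python
def over(src, dst):
    """Porter-Duff 'over' on premultiplied RGBA tuples in [0, 1].

    result = src + (1 - src_alpha) * dst, applied per channel.
    With premultiplied alpha this one line composites an
    anti-aliased foreground onto any background, which is exactly
    what lets separately rendered elements be recombined later.
    """
    sa = src[3]
    return tuple(s + (1.0 - sa) * d for s, d in zip(src, dst))

# A 50%-opaque red pixel over an opaque blue one.
# Premultiplied: red at alpha 0.5 is (0.5, 0, 0, 0.5).
result = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
# -> (0.5, 0.0, 0.5, 1.0): half red, half blue, fully opaque
```

Applied per pixel, this same operator covers the window cases described later in the talk: an opaque glyph over a translucent menu, a drop shadow over animating 3D content, and so on.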

And so, you know, fast forward 15 years, and we're trying to apply these exact same principles in designing the windowing system on OS X. We've got content that's rendered. We've got dynamic content underneath it. We don't want to redraw everything every time anything happens. So we really looked back on this initial work and implemented a system which is really, I think, a great thing.

And I think this is the way windowing systems are going to be done from now on: providing more of a mixer analogy for the display rather than a simple switcher. In the switcher analogy, each pixel is owned by one application or another. I think time has proven, both in the making of video switching equipment and in windowing systems, that the switcher is really too primitive and doesn't give people designing experiences with such a system enough flexibility. And obviously, you know, in television studios, they all use mixers, in film production as well. The idea of a switcher as a way to arbitrate screen real estate is really just kind of an old concept. So we're really excited about this.

I think it's going to, you know, when Avi talks about laying the foundation for the future, I think this is really one of the things I turn to as reexamining some fundamental assumptions in the OS of, you know, how are we going to put together images for the user to make a compelling user experience moving forward. And I really think using the compositor analogy is going to be important there.

So what we call this piece of technology is the Quartz compositor. It's really the windowing system. Last year, I think we called it the lightweight windowing system. But it's a client-server architecture. It's responsible for all the final presentation blits that happen in the system in order to mix the content onto the display. And it works with the acceleration layers of OpenGL, QuickTime, and everything working together to share the screen.

Some of the features of the Quartz compositor. Full double buffering. Double buffering is important not only from the standpoint of not needing to wake up an app to redraw the display, but also to have access to the rendered content of any window on the system at any time. So we can fade it in, fade it out, put a drop shadow over it, whatever. Per-pixel alpha channel: it's important to have per-pixel control of the opacity of a window, so that, for example, in the menus, the text is actually opaque, but the rest of the menu is transparent. And obviously, Quartz 2D plays a role in that too, because you need a rendering system that understands how to draw destination alpha into the back buffer for the window. When you draw with QuickDraw, you can only get opaque pixels.

Another feature that we have is an overall per-window fade control. So that's one big opacity knob on the whole window, and that's used for fading in and fading out. We have some per-window transform and warp capabilities: sheets, the genie effect, Dock animation, those kinds of things. And the important thing is that this really integrates across whether you're doing 2D, 3D, or video. Certain things are currently assumed opaque, like an accelerated 3D surface.

And someday we'll get those transparent too. But by and large, the model holds together. If the user picks up something that's transparent and drags it across the screen, you know, the illusion is not broken. He can just slide it over every piece of content on the display. So I think with that, I'm gonna switch over to the demo. Demo machine here.

Can I get them on one, please? Yeah. Great. So let me just bring up a very basic OpenGL application here to get some pixels moving. This is SkyFly, which if you've played with a developer CD at all, this is on your CD as a small application in OpenGL. It's a simple terrain model with a flying airplane. I changed the code a little bit to run in a window. It runs full screen as it comes out of the box, but it's pretty straightforward. So you can see the window look of Aqua is very clean. We've got drop shadows.

The window content goes all the way to the edge of the windows. That was really important to the look they were after with Aqua. And so to set off the windows, all we really have is the drop shadow. And so you can see what's happening is the drop shadow is actually dynamically compositing. I mean, it's not a very in-your-face effect, but it carries the illusion that even with animated accelerated content, underneath, we can still set off that finder window without having to just chop the pixels with a hard handoff between the accelerated content and the finder window.

Again, down in the Dock, you can see, if I drag the GL window down underneath the Dock, you can see the compositor kicking in to do all the blending on the Dock. You can see that all the icons have a high degree of anti-aliasing, very, you know, high production value, if you will, graphics. So to really carry that anti-aliasing above arbitrary content, it's essential to have some kind of compositing going on in the windowing system. Otherwise, you're just gonna hard clip it, and it's not going to look right. So my opinion is, as soon as you go to anti-aliased icons that are gonna be piece parts of the user experience, you gotta do a composited windowing system, or it's just not gonna hold together. Let me show you a little more in detail with that, with this Pixie application, which is on the developer CD. And I just have to configure it here to refresh continuously.

This is a nice little tool. You can use it for, you know, sniffing at your pixels and seeing what's going on. But if I hold it down here, you can see what's going on in the display is actually, you know, there's a lot of, you know, like half-colored pixels in order to get the side of the Microsoft icon there working well. And that's all getting composited on the fly as the OpenGL application is playing. And, you know, obviously it's a lot of work for us to do that and design a system such that that could be possible.

But it's really important not to break the illusion. If you're going to present the user with really nice anti-aliased icons, you really don't want them falling apart when they go over 3D content. So again, just another minor example: if I go up there to the corner of the window, you can see the corners of the window have a little anti-aliasing to make them look real crisp, and you can see how the accelerated content is blitting through there as well.
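The per-pixel blending being described here is, in essence, the classic "over" compositing operator. As a rough illustrative sketch only (the actual Quartz compositor is of course far more involved; this function is an assumption about the math, not Apple's code), one channel of one pixel blends like this:

```c
#include <stdint.h>

/* Sketch of the per-pixel math a compositing window system performs:
 * the "over" operator.  Channel values are 0-255, and the source color
 * is assumed premultiplied by its alpha, which is how a half-covered
 * anti-aliased edge pixel is typically stored. */
static uint8_t over_channel(uint8_t src_premul, uint8_t src_alpha, uint8_t dst)
{
    /* dst' = src + dst * (1 - alpha), with rounding */
    return (uint8_t)(src_premul + (dst * (255 - src_alpha) + 127) / 255);
}
```

A fully opaque source replaces the destination, a fully transparent one leaves it alone, and a half-covered edge pixel lands proportionally in between -- which is exactly why the icon edges stay smooth over animating GL content.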

So I have one other thing to show. I'm ad-libbing a little bit on my demos because some of those were shown earlier too. Here's a piece of PDF clip art. This is actually just an EPS file that we distilled. And I just drag it into a text editor here. And I can add a text label to that and make the font a little bigger.

So again, for the purposes of drag and drop, what we do is live dragging of text, which includes graphics. So if a user wants to move this to another application, he's going to move it over that 3D content, and you really want everything to just continue to work. And you can see how the app is still animating under there. We lose a little bit of frame rate, but the illusion holds together, and the user just thinks he's dragging this text clipping over a piece of acetate over the whole screen. And it's kind of a theme, I guess, from working at Apple: you just want things to appear as they should. It doesn't matter technically how difficult they might be, but you really want to sort of delight the user with the interactivity and the production values of the system.

So that's enough of the compositor. Let me show you-- actually, while I have OpenGL going here, this one does have a little bit of audio. This is kind of a teaser for a session that'll be coming up. I think it might be tomorrow or Wednesday, but we'll see in a second. It shows how to combine some of these technologies together. So for example, this is actually using OpenGL and QuickTime together to put a movie on a surface. And I think Jeff's getting some outrageous frame rate, like 350 frames a second. But there's probably not that many frames of video. But it's one of the Apple ads that he's got there.

And you can see, not only are each of these building blocks that we have pretty compelling, but you start to put them together. I think there's really some pretty interesting opportunities here. Felicity, I'd like to start off with Liz, polyester bride. Sure, for you. So I'll give you a pointer to that one when I talk about sessions later.

Let me bring up Internet Explorer here and hit on a few things about the printing pathway. So I'm basically going to do the first half of that demo where I showed you the printing pathway recording to PDF. So Internet Explorer is a Carbon application drawing its content with QuickDraw. Here's news.com.

Let's see what's going on today. Bring up the print panel. Now, this is a system-wide print panel, so that's another advantage of OS X, another difference from OS 9, where with the print UI you didn't know what you were going to get: you made the call to put up the print dialog, and it was kind of up to the driver from there. We have standard print dialogs that we bring up. The drivers plug into them and can add capabilities, but it's no longer kind of a guessing game in terms of what applications are going to do with the print record. We've got a nice property list where we store all the information about the job. So hopefully that will make your lives easier, particularly when QAing against a whole bunch of different print drivers.

But I can hit preview here and that's going to basically run the first half of the printing process and bring up in the preview application here a PDF rendering of the page. And it's basically the content that the application drew through QuickDraw calling Quartz 2D and then saved out as a PDF file.

And if I were to actually print this, then this exact PDF file would go through the back end and get converted to PostScript or Raster Bits or whatever. And just to show you, it's real PDF, so let me try dragging it onto Acrobat here. And there's Acrobat showing the same file.

So anything we create can obviously be read by Acrobat because it's just PDF. A little bit more on the 2D. Here's a TextEdit document that has a bunch of different fonts in it. These are the fonts that we ship, and these are all interactive. Just scroll through some of these so you can look at them. There's Zapfino, Gill Sans. I don't think I had that on the slide. Helvetica Neue, a bunch of faces of Helvetica. If you can't find one in there you need, I don't know. Optima, let me show you.

Let me show you what I was talking about with that B. If I go on, crank on the slider here, just to scale up that B, you can see how those subtle curves are in there in the font design. And because of the anti-aliasing, we can carry those features down to very low point size.

Again, the Japanese fonts. Let me pick-- oops, didn't mean to launch QuickTime. Pick one of those and again zoom in on that. I think I got my slider set up to 500. Yeah, I could type in something even bigger. But, you know, no resolution limit, pretty dynamic interactive scaling. So hopefully this will be a great thing not only for Japanese users but for everyone.

And, you know, as was mentioned in the other sessions, these fonts exist on every user's system. So it might be likely, if you're writing, say, a mail application, and someone gets a message in Japanese, that you may actually be showing Japanese content in your application. So I think you also need to be aware of the possibility that someone may be buying your app not in that particular market but may be dealing with language content from other places. Obviously you may not have all the dictionaries and stuff necessary to edit that content linguistically, but you at least want to display and manipulate the letter forms and things like that. I think that's a good opportunity to help us share content worldwide.

Actually, before I kill that, I wanted to show you Zapfino up here. So this application is the simple Cocoa TextEdit, whose source is on the CD. It's, I don't know, 3,000 lines of code, something like that. But basically all the work is done by the Cocoa text system in terms of the interaction with the user and the selection and insertion point and all that stuff. At the lower levels, it's the Apple Type Services framework, and then even below that is Quartz doing the actual rendering. Now, if you're a Carbon application, you can call the Apple Type Services for Unicode Imaging APIs -- ATSUI -- in order to do some of this. There's also a higher-level object that's more or less the moral equivalent of the classic TextEdit object, which is called MLTE, a multilingual text edit version that removes the 32K limit and introduces a bunch of international features. So I think no matter what framework you're coming from, there's a lot of great text technology. And something I just wanted to show off with these fonts: if you wrote your own text handling code, you might have redraw bugs with a font like this, because you can see the swashes are overlapping, characters are running into each other and being overpainted, and the italic angle is not quite the same throughout the whole font. So obviously, if at all possible, you want to use one of our platform services in order to interact with this kind of type, because it really is fairly tricky to get it all right. So, for example, he's got a ligature. When he designed the font for his own name, there's the PF ligature, and that forms automatically with the Cocoa text system because they've got automatic ligature formation turned on by default. I think LL is another ligature. So you can see it's a real beautiful font, and I think that having those in the system is a lot of opportunity for everyone. I don't know what the line is on that screen.
I don't see it there or there, but hopefully it's a projector.
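As a purely illustrative sketch of what automatic ligature formation amounts to: scan the character stream and merge known pairs into single glyphs. The real substitution is driven by tables inside the font and performed by ATSUI or the Cocoa text system; the glyph IDs and the two pairs below are made up for illustration.

```c
#include <stddef.h>

/* Toy ligature formation pass.  GLYPH_LIG_* values and the pair list
 * are invented; real ligature data comes from the font itself. */
enum { GLYPH_LIG_FI = 1000, GLYPH_LIG_LL = 1001 };

/* Map characters to "glyph IDs" (plain chars pass through unchanged),
 * merging known pairs.  Returns the number of output glyphs. */
static size_t form_ligatures(const char *text, int *glyphs, size_t cap)
{
    size_t n = 0;
    for (size_t i = 0; text[i] && n < cap; ) {
        if (text[i] == 'f' && text[i + 1] == 'i') { glyphs[n++] = GLYPH_LIG_FI; i += 2; }
        else if (text[i] == 'l' && text[i + 1] == 'l') { glyphs[n++] = GLYPH_LIG_LL; i += 2; }
        else { glyphs[n++] = (unsigned char)text[i]; i += 1; }
    }
    return n;
}
```

The point of the demo stands: once substitutions like this (plus swash overlaps and varying italic angles) are in play, one-character-equals-one-box assumptions in hand-rolled text code break down, which is why using the platform text services matters.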

Okay, so what's next? Let's do my final thing that I'm going to talk about before I go back to the slides, which is yet another image capture demo, but this one is a little bit different. I'm going to use a SanDisk USB flash card reader, so this is just a mass storage device. I have on here some pictures that I took on my Hawaiian vacation. I don't have kids, so you get to see my travel photos. You get to see my lovely wife, however. So I'm just going to plug that in, and what's going to happen is OS X is going to recognize the flash card, and you see it mounted the volume. And then actually what happens is image capture kicks in.

And you might ask yourself, well, how does that happen? It's just a volume showing up on the desktop. What actually goes on there is that the Image Capture software is notified anytime any volume mount occurs on the platform. And because of things like flash card readers, we wanted to plug in at that level and allow users to work with their digital images through there as well. Of course, you can have a camera driver, which would directly recognize the camera, but we wanted a file system solution too. So what we came up with is a way to intercept that mount point, get notified, and actually look on the disk for a particular file structure -- which turns out to be a standard thing that all the camera vendors agreed on, because they didn't want to get in each other's hair and start writing files other cameras couldn't read. So if you look through-- I'll actually give you a brief look at the file structure here in the mounted volume. So what Image Capture has done is recognize that, oh, a removable medium is DOS formatted and has a DCIM camera folder in it, and decides to kick in the Image Capture software.

And we've built a bunch of heuristics in there that will recognize insertion of photo CD-- not photo CD, what's it called? Picture CD-- and other formats. So if you guys are developing a plug-and-play disk layout that holds images, if you tell us about it, we can make Image Capture kick off on those mounts as well.
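A hedged sketch of the kind of mount-point heuristic being described: when a volume mounts, check whether it carries the vendor-standard DCIM directory. The function name and the exact check are assumptions for illustration; the real Image Capture logic is notified through the system and recognizes additional layouts like Picture CD.

```c
#include <stdio.h>
#include <sys/stat.h>

/* Illustrative heuristic: does this freshly mounted volume look like
 * camera media?  Checks only for the standard DCIM folder that the
 * camera vendors agreed on; real detection is richer than this. */
static int looks_like_camera_media(const char *mount_point)
{
    char path[1024];
    struct stat st;
    if (snprintf(path, sizeof path, "%s/DCIM", mount_point) >= (int)sizeof path)
        return 0;  /* path too long to be a plausible mount point */
    return stat(path, &st) == 0 && S_ISDIR(st.st_mode);
}
```

A driver-level camera plugin bypasses this entirely; the file-system route is what makes dumb mass storage readers like the SanDisk one in the demo "just work."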

So anyway, the usual things are there. Of course, there are automatic tasks you can set up to download images. You can have them run an AppleScript, which you saw the demo of. The user can also configure it to run any application which can handle a multiple open of images. So, for example, even if you're a word processor, if you can accept an Apple event to open a collection of images, then the user could set up Image Capture to call your application, and maybe you make a new document with those images laid out or something like that. So that's another thing to keep in mind, a very simple way that you can take advantage of Image Capture in your code. Now, there's going to be a whole session on Image Capture where they'll go into the API, if you want to take a picture, for example, or do camera-specific things, but there's a lot of opportunity here. So let me go ahead and download my Hawaii pictures here.

And by default, everything goes into the pictures directory of the user, but if you noticed on the image capture panel, there was a place where I could set a different folder, for example. So those are my small selection of my Hawaii pictures. I only golfed one day, but I spent a lot of it in the sand trap. And then, of course, all of our favorite application, which is the slide show, which I'm going to talk about a little bit while these images show up. So what's going on here? There's my lovely wife, Nancy. Hi, Nancy.

When we started looking at what was going on with graphics cards and what you could do with OpenGL... this actually looks perhaps a lot easier than it is. I mean, what's going on here is there's about 6 megabytes worth of textures that are getting crossfaded. There's a front buffer and a back buffer, and probably about 12 to 16 megabytes worth of video memory to do this well. We use less memory and downsampled textures if we're on a lower config. Probably about two gigabytes per second of video memory bandwidth are necessary to draw this many textures with all this blending going on. And of course, there are all other sorts of complications if you program with OpenGL. There's a power-of-two texture limit, so the textures have to be diced or rescaled. So there's a lot of trickiness to getting this going, but the end result is something people would expect. I mean, my parents would love to see these pictures of us on our Hawaii vacation, and this would be a great way to show it to them. So again, this is an example of OpenGL not being just for games. Obviously, it's great for games, but the innovation that's being driven so hard there in terms of performance lets you do some of these things in real time. I mean, this is not a scripted slideshow at all. It's just reading the raw JPEGs. It's picking a random place to zoom in, it alternates between zoom in and zoom out, and it's all very dynamic. In fact, it's great: I run it on my PowerBook. You can actually boot a PowerBook with the lid closed. I don't know if you know about this. If you have an external keyboard, you can boot it up, and that way all the video memory on the PowerBook will be used on the TV jack. And you can run the screensaver on your TV and put on some music and entertain friends for days and days.
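Two small pieces of arithmetic behind what's described above, sketched under the stated constraints (names and ramp shape are illustrative assumptions, not the shipping screensaver's code): classic OpenGL wants power-of-two texture dimensions, so images get rescaled or diced up to the next power of two, and the crossfade draws the incoming slide with a blend weight that ramps from 0 to 1 over the transition.

```c
/* Smallest power of two >= n, for sizing an OpenGL texture that must
 * hold an arbitrarily sized JPEG (the image is rescaled or diced to fit). */
static unsigned next_pow2(unsigned n)
{
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Blend weight for the incoming slide, t seconds into a fade lasting
 * `duration` seconds, clamped to [0, 1]. */
static float crossfade_alpha(float t, float duration)
{
    if (t <= 0.0f) return 0.0f;
    if (t >= duration) return 1.0f;
    return t / duration;
}
```

A 640x480 photo, for instance, rounds up to a 1024x512 texture, which is part of why the memory and bandwidth numbers above run well past the raw image sizes.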

Pretty cool. But I wanted to give you some insight into the technology that's actually behind that. And I think we're at a point now in the evolution of graphic systems where there is a lot of technology that's untapped that we can try to tie together some of these things that are traditionally offline batch processes and do them in real time for the user. So that's... kind of the brief demo there. Let me go back to the slides.

Okay, so I've gone through all of the technology we've put in Mac OS X for you, from the API levels with 2D, 3D, and QuickTime for video, to tying it all together with the compositor. And I'll put the textbooks back up there, because an important point for us is that we really are trying to adopt best practices in the industry, look at what has worked before, and deploy that in an integrated fashion for all of you to develop your applications on.

So that's really the extent of my slides. What I've got for you next is a bunch of pictures of the roadmap CD. Hopefully the colors are the same. I took them from the web. And I'm gonna kind of give you my picks of sessions to go to throughout the week. So if you wanna follow along and circle, you can.

Tomorrow, first thing in Hall J, there's a session on PDF, Quartz, and Mac OS X. This is kind of an introductory session. If you're not really familiar with PDF -- hey, how is PDF different than Acrobat? -- and how we use it in OS X as a graphics file format, versus how tools like Acrobat add much more data to it to become more of a document format, you should go to that session and understand that. Then after that, in Hall A1, which is this room, there's going to be a session called 2D Graphics Using Quartz, which is going to get down with the Quartz API, looking at actual sample code, how the Quartz 2D system is put together in particular, and what the function calls look like.

Tomorrow afternoon, right after lunch, there's going to be a Graphics for Games session, which is going to talk primarily about how to use OpenGL for games. OpenGL, we think, is a great solution for not just 3D games, but 2D sprite engines as well. In fact, I'd be really interested in hearing if anyone has a sprite engine project they're working on, because I think it would be a really great way to bring that class of apps to another level of performance.

After that, there's a session on drawing Unicode text with ATSUI. So if you're a Carbon developer and you want to really get into the layout and the Unicode-to-glyph processing that happens in the system, that's the session you want to be at. You can also learn how to draw from Carbon apps using ATSUI to a Quartz graphics context as well. So if you're looking to get features from Quartz and be a Carbon app, one approach is to go to that session and learn what they are offering. Let's see, on Wednesday in the morning, there's the QuickTime overview-- just to get the State of the Union on QuickTime. Later in the morning, there's the Image Capture framework.

You'll hear about the tools they have if you want to write an application to acquire images, or, if you're a device vendor and you want to play in that game, how to get involved with Apple on that. That's a good session to go to. The ColorSync session at 2 o'clock after lunch on Wednesday is going to be a combination of just ColorSync, for those of you that are familiar with ColorSync, and also ColorSync and Quartz working together and how that whole integration works in OS X. So that's a really interesting session if you're interested in color and some of the integration work that's been going on in OS X.

Then in the big hall, 3:30 Wednesday, is Text on Mac OS X. This talks at a slightly higher level about all of the text facilities on OS X: who's internally using them in terms of layers of the stack, how the Appearance Manager gets done what it needs to get done, a little bit on the high-level object I mentioned earlier, MLTE -- just kind of the whole infrastructure, top to bottom, for text on OS X. In the red there, we've got the OpenGL sessions. The first session is one called OpenGL High Performance 2D, and that talks about some things like the QuickTime movie view or the slideshow application. Jeff Stahl has some examples of how to actually dice up big images to put them on texture maps and move them around and rotate them. So if you're interested in doing some 2D graphics where you've got a lot of pixels, you want a lot of interactivity, and you want to use OpenGL for that, that's a really interesting talk. Later that afternoon, there's OpenGL Geometry and Modeling, which is, again, more the traditional 3D uses of OpenGL. Interesting session. There are a bunch of QuickTime tracks going on in parallel Wednesday afternoon as well.

Thursday, first thing, we've got OpenGL Optimization, which is going to be really kind of an expert session. If you're familiar with OpenGL and you really want to tune the last little bit out of your app, go to this session and learn about all the little tricks and techniques, like how to package your vertex data so it's optimally passed through the system. And OpenGL Advanced Rendering, also on Thursday morning, has some advanced techniques for OpenGL experts.

Font management on OS X: rather than text processing, this talks about fonts and how they live in the system -- what directories they live in, how you can enumerate all the fonts, all the font management APIs we have on OS X. So that's a good session to go to. We've got a printing session at 3:30, which is a great session for learning about all the printing APIs from the Carbon standpoint, as well as some of the objects from the Cocoa standpoint. It talks about the whole architecture in a fair amount of detail.

There's also a parallel with that session over in the Civic Center on Java graphics, which is pretty interesting if you want to hear how the Java guys are doing Java 2D using a combination of Quartz and some OpenGL. Actually, very interesting session there as well if you're interested in Java.

Feedback forums start on Thursday afternoon. We've got the OpenGL feedback forum, and I encourage you, after you've gone to the sessions, to bring any feedback to that. On Friday we have a session at 9:00 a.m. in Hall A2 called Graphics and Imaging Tips and Techniques. This is not the session you go to to learn how to put the Dock on the left-hand side of your screen. These are actual programming tricks: there's some debugging, optimization, and demonstration, and there's also some discussion of how the printing graphics port works on OS X. A bunch of topics, none of which were big enough for a whole session, but we put them together into a kind of bag of tricks that we thought people should know about while we had all of you here for the conference. Then there's the feedback forum on graphics at 2:00 p.m. in J1. So come there and give us feedback on what we should be doing, what we can do better.