Mac OS • 48:27
Apple continues its graphics industry leadership by integrating industry standards such as OpenGL(r) and QuickTime into the OS. With Mac OS X, Apple incorporates another critical standard, PDF, into Quartz, the new innovative graphics model of Mac OS X. View this session to get an introduction and high-level overview of this new architecture.
Speakers: Peter Graffagnino, Ralph Brunner
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.
Ladies and gentlemen, please welcome the Director of Graphics and Imaging for Mac OS X, Peter Graffagnino. Hi, everybody. Welcome to WWDC. My name is Peter Graffagnino. I'm the Director of Graphics and Imaging at Apple Computer, and I'll be talking about some of the technologies we're putting in Mac OS X at the graphics services layer for all of you to take advantage of.
Basically, we're going to overview a bunch of technologies. Some of these are new. Some of these we're talking about for the first time here. Some of these should be very familiar. And we're going to have lots of demos. We're going to have Ralph Brunner come up on stage and run us through a whole bunch of demos for you.
So let me go to the architecture slide, which you've probably seen in every presentation so far, which is the basic layer diagram of OS X. At the lowest level, we have the core operating system, and we have core services above that, application services layer where most of our technology lives, and then frameworks on top of that.
I'll be centering my talk mostly on this area right here. We have a lot of great technology for you that we've built into the graphics system in OS X, based on industry standards as well as some Apple innovations. We have 2D with PDF and PostScript. We have 3D with OpenGL. We have Apple innovations such as ColorSync, Quartz, TrueType, QuickTime.
You combine all of that with an industrial-strength operating system underneath, based on Mach and BSD, and some pretty great frameworks above: Carbon, Cocoa, Java. And you really have a great platform for innovation for the future. We're pretty excited about all the ideas we have internally, and hopefully you guys will be able to generate a lot of cool ideas too about what you can do. Let me touch on the graphics block diagram a little.
Mac OS X is a layered system, and even within our graphics services layer, we have divided into little sub-layers. At the bottom we have what we call Core Graphics Services, which is the window server: the central repository of global information for the computer, the window database, those sorts of things.
On top of that, we have a variety of drawing frameworks that are linked into your application. We have the new framework, Core Graphics, for 2D rendering. We have QuickDraw, which you're all familiar with from Carbon. We have QuickTime for media handling, and OpenGL for 3D. So those libraries link into your application, and the Core Graphics Services server handles the coordination among all the apps.
So to give kind of a full list of all the technologies that I'll be talking about today, with the exception of QuickTime at the bottom there: we have Quartz, which is the new technology, the Core Graphics Services windowing system as well as the 2D rendering library. We have QuickDraw, which you're all familiar with from Carbon.
We have ColorSync, our color management technology. Mac OS X printing, I'll be talking about that. OpenGL for 3D. ImageCapture, which is a new framework you saw demoed in Bertrand's talk, for digital still image acquisition. And QuickTime, which there are a bunch of sessions on elsewhere in the conference, but I won't be covering here.
So the first thing I'm going to talk about is the Core Graphics Services layer, which is that bottom substrate that kind of sits below all of the 2D and 3D and media that happens on the display. It's a client-server windowing system, not unlike Display PostScript or X Windows, if you're familiar with either of those technologies. And it's really responsible for the low-level window system programming interface.
The Carbon Window Manager and the Cocoa window object are clients of this API, and when you make calls to those in your applications, it turns around and calls the server to allocate a window, destroy a window, or what have you. The server is also responsible for event routing, making sure the event stream gets to the correct application based on focus and what they're interested in, and also basic real estate management, which is really important in terms of getting everyone to share the display cooperatively.
So some of the key innovations in Core Graphics Services: some of this stuff we haven't been able to talk about before because we were waiting for the Aqua user interface to come out, but we've been working on it for a couple of years, and the basic idea is that the window system is really upgrading its model.
Instead of being a per-pixel switcher, which is primarily what Mac OS or X Windows or other windowing systems are, we've gone for a model where there's a per-pixel mix from all the windows on the display to create the final presentation. What does this mean? Well, this means that windows can have a bunch of attributes, and the pixels in your window can have those attributes applied on their way to the display.
For example, windows have a full per-pixel alpha channel for transparency.
There's also layer opacity, so there's an overall fade value on each window. There's a transform: an affine transform, a mesh warp, color-space conversion for YUV, and depth conversion. So a lot of these attributes apply at mix time to the graphics to create the display, regardless of what drawing framework you're using.
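To make the "attributes at mix time" idea concrete, here is a minimal sketch in plain C of applying a per-window affine transform to a point on its way to the display. The struct and function names are illustrative assumptions, not the actual Core Graphics Services interfaces.

```c
/* Hypothetical sketch of a 2D affine transform like the one the
   window server can apply to a window's pixels at mix time.
   Uses the PostScript/PDF matrix convention:
   x' = a*x + c*y + tx ;  y' = b*x + d*y + ty */
typedef struct { double a, b, c, d, tx, ty; } Affine;

static void affine_apply(const Affine *m, double x, double y,
                         double *ox, double *oy) {
    *ox = m->a * x + m->c * y + m->tx;
    *oy = m->b * x + m->d * y + m->ty;
}
```

A mesh warp generalizes this: instead of one matrix for the whole window, each cell of a mesh gets its own mapping.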
The client-server mechanism is in place for window system control: creation and deletion, moving, resizing of windows. Drawing, however, happens through a shared memory mechanism, using anonymous memory objects in Mach to share read-write memory between the client and the server. QuickDraw is just drawing into standard memory, same as Core Graphics, and the server is able to see that memory and apply the mix to the display. Since the server is a mixing server, when it needs to repair part of the display, it may need pixels from your application and pixels from someone else's application to actually make up the display.
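As a rough illustration of the per-pixel mix the server performs, here is a hedged sketch in C of a single-channel "over" blend, combining a window's per-pixel alpha with its overall fade value. This is just the textbook compositing math, not Apple's actual code.

```c
#include <stdint.h>

/* Illustrative sketch of the server-side mix (not Apple's code). */

/* Combine per-pixel alpha with the window's overall fade (both 0..255). */
static uint8_t effective_alpha(uint8_t pixel_alpha, uint8_t window_fade) {
    return (uint8_t)((pixel_alpha * window_fade) / 255);
}

/* One channel of the "over" blend: dst' = src*a + dst*(1 - a). */
static uint8_t mix_channel(uint8_t src, uint8_t dst, uint8_t alpha) {
    return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
}
```

The point is that this mix happens regardless of which drawing framework produced the source pixels.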
So the important thing to remember is that the drawing happens in the client, so it's fast, just procedure call, hitting the memory of the back buffer, and the mixing is in the server where the complete display is constructed. So I'm going to bring up Ralph now. There he is. Ralph Brunner, member of the graphics team.
We're going to do a little demo for you of just the basic windowing facilities on OS X. Could we have the demo machine? Okay, you can see the display. The UI folks have been able to do a very clean window outline because we've given them the ability to do the shadowing. So you can see, if you bring up the zoom tool, you know, it's kind of a subtle effect.
They didn't go overboard. But that's all live content underneath there. If the finder were drawing, you would be seeing those pixels updating underneath the shadow. And so that really gives the user interface a nice subtle effect and allows the application to use as much of the real estate as possible.
So that's kind of the first thing that the windowing system has given us. You can also see the inactive title bars are partially translucent. The menus, we use the fading and the opacity. So you can see the window fade out. You can see if the window is hanging up there. You can move around behind it.
It fades out. If you bring up an application document, we'll show you some of the warping features. So here's a text sample document. If you go to close that document when it's dirty, you get a little animation. You can see that alert is translucent as well.
Now, the application has just drawn its alert there. It doesn't necessarily know that that window is translucent; that's happening in the mix when it's presented to the display, as is the animation. If the window's too small, we have this sheet effect, and that's done by an animation of a mesh warp on the bits as they're getting flushed to the screen. So that's how we do that, and the windows can move together, things like that.
So if you make the window a little more rectangular, then we can minify it, which brings in the genie effect, which is another application of the mesh warp.
We create a nicely sinc-filtered snapshot of the actual content of the graphics, and then we run it through the mesh motion path down to the dock. In the dock, all of the dock content is little windows, including the translucent part at the back, and that's showing off the scaling as well as the per-pixel alpha, so you can see through the icon. So I think that's about it for the basic window demo. OK, we can go back to the slides.
So now I'm going to cover some of the drawing components that you can use in your applications to draw in 2D or 3D or whatever. First and foremost is QuickDraw, which is the key 2D component of Carbon, which you're all familiar with. It's the same API, and it's the same code base, in fact, with just a few minor changes at the bottom end to talk to the new windowing system that we support on Mac OS 9 and Mac OS X. There are a few key differences that you need to be aware of.
You don't need to be aware of all of them, but you might want to be in certain cases. One is there's automatic buffering support. So when you talk to a window's GrafPort, you're not really talking to the display. You're talking to the shared memory area where your graphics are being blitted, and those graphics are then flushed to the display, either at WaitNextEvent time or EndUpdate, or you can flush explicitly if you want those graphics to get to the display. So that's one thing to be aware of: if you do a lot of drawing and it's not showing up because you haven't hit EndUpdate or gotten another event, you might want to sprinkle a few flushing calls in there.
The other thing we're not allowing is direct access to the screen's GrafPort. There is a full-screen API if you want to do a game, so we have that covered. But when the graphical user interface is running, you can see there's a lot of dynamism going on, and we can't allow applications to draw directly to the screen.
We have certain optimizations in there for people like QuickTime and things like that, so there are some corner cases. But the window has to be on top, opaque, not mesh-transformed and all that stuff to actually allow that case to happen. By and large, it's best just to stay away from that and let us handle the window compositing.
All QuickDraw drawing is opaque, since the window has a per-pixel alpha channel. We had to make a decision about what kind of alpha QuickDraw drew, and it's basically 1.0. So all QuickDraw content in the window is opaque. The window can still have an overall fade value on it, so you can get translucency with a QuickDraw window, but it can't actually draw a partially opaque pixel. You need to be aware of this in certain cases in 32-bit mode, where we do store the alpha inline. In 16-bit mode we have a separate plane, which QuickDraw doesn't see.
But in 32-bit mode we actually do use the high byte of the color for the alpha, and that needs to be set to 0xFF if you're doing your own blitting. If it's not, you might see holes in your windows occasionally, which is kind of fun, but not intended.
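If you're doing your own blitting in 32-bit mode, a pass like the following (a hypothetical sketch, not a system call) makes sure the high alpha byte is 0xFF so no holes show up:

```c
#include <stdint.h>
#include <stddef.h>

/* Force every 32-bit ARGB pixel opaque by setting the high (alpha)
   byte to 0xFF. Illustrative only; in a real app you would run this
   over the pixels you blit into the window's buffer yourself. */
static void force_opaque_argb(uint32_t *pixels, size_t count) {
    for (size_t i = 0; i < count; i++)
        pixels[i] |= 0xFF000000u;
}
```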
The other thing to be aware of is that in the printing case, the QuickDraw bottlenecks are implemented by system software rather than by drivers. I'll get to how they're implemented in a second, but it's a common set of QuickDraw bottlenecks regardless of what driver you may be talking to in the printing case.
The other 2D rendering library we have, the new one we've talked about for a few WWDCs now, is called Core Graphics 2D rendering, part of the Core Graphics framework. It's the key 2D graphics engine beneath Cocoa and also beneath Java; our Java 2D implementation is built on top of Core Graphics.
It's the PDF/PostScript imaging model, very industry-standard. We interpret PDF files; we're based on the PDF 1.2 spec with some 1.3 features now. The major feature we're missing is the shaded fill; we're working on that and will get it soon. And we'll continue to track the specs as closely as possible and work with Adobe on future enhancements.
It's fully anti-aliased, with a very fast scan converter. It scan-converts on the fly and anti-aliases text at subpixel positions on the fly. And it's vector-based for resolution independence, which has always been true of PostScript and PDF, but this vector-based buzzword is out there now, so we have to remember to apply it to older technology too.
The Core Graphics 2D architecture, again, is sort of layered and factored out. It's basically a hub architecture with the Core Graphics API at the core. Above that, we have the ability to parse PDF files and call the API to draw them. We have a C API that you can call from your applications. The QuickDraw printing bottlenecks, in the printing case, are implemented by calling down to the Core Graphics layer.
And Java 2D, as well, is implemented by calling the Core Graphics rendering library. Underneath are the different context implementations we currently have: obviously, an on-screen rendering context; a PDF file creation context; a PostScript file creation context; and a raster data context for printing.
In the printing case, what happens is, if you're printing from a Carbon app going through QuickDraw, your graphics come in through QuickDraw and get recorded in the first pass, when we generate the spool file through the Core Graphics rendering library, and out comes PDF, which is sent to the back end of the printing system. At the back end of the printing system, it comes back in the top as PDF.
Depending upon what kind of printer it ended up at, you get either PostScript or raster data or I'm sure third parties will do their own PDF renderers as well to plug into that architecture. But standardizing on PDF as that back end of the printing system, I think, will be a big advantage for us.
So the Core Graphics 2D rendering library really leverages, and is leveraged by, a bunch of Apple technologies. I talked about Cocoa and Java 2D. But internally we also rely on other pieces of Apple technology, tried and true technologies like ColorSync and Apple Type Services. These are mature technologies, been at Apple for a while, and we're able to just leverage those to do our PDF Core Graphics rendering. So that's really great.
On the other side, in addition to Cocoa and Java 2D, the Appearance Manager is using Core Graphics to draw all the text. If you'll notice, it's anti-aliased and has a slight blurred drop-shadow effect added to it. So when you draw Appearance Manager text, it's actually calling Core Graphics.
QuickTime: we've worked with the QuickTime team on a PDF graphics importer. This allows PDF to be handled by QuickTime's graphics importer architecture. So if you're using QuickTime graphics importers, you'll get PDF support if you hook it up to do drag and drop or import or whatever in your application. In fact, we have a demo of that. We can go back to the demo machine.
We're going to show you a version of AppleWorks that's in development, an update that will hopefully be coming out soon, running under OS X here. This is actually a CFM application in all its glory. What we're going to do is, Ralph created a little word processing document, typed in some text, and now he's got a directory full of PDFs.
He's going to drag it over. The AppleWorks guys have hooked up the graphics import capability to ask QuickTime if it can handle the file, so you can do the same thing with TIFF or whatever. But now, on OS X, with our PDF support, you just get that for free.
So that's pretty cool. I think that's, you know, great. If you're not using graphics importers now and you have an application like this that can embed graphics, definitely start looking at that. So you see, on resize, the codec gets called to re-render the data. It's full vectors, re-anti-aliased and everything. So that's pretty cool.
So back to the slides. So if you want to dig a little deeper and not just do the core graphics or the graphics importer with QuickTime, but you actually want to call the API, there's going to be a detailed session on Friday about the 2D drawing API. Derek Clegg is going to give a pretty detailed overview of it.
I'll just give you a few bullets here. It's a C-based API, very straightforward, virtually one-to-one with the PDF/PostScript imaging model. The naming convention we use for the functions is similar to Core Foundation, if you were at any of those talks. So for example, to fill the current path, there's CGContextDrawPath: you pass it the context, and then an enumeration that says you want it filled. You can also say stroke, or stroke and fill.
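Putting that naming convention together, a fill-and-stroke call might look like this. This is a sketch based on the convention described in the session; the exact constants and headers in DP4 may differ from later releases.

```c
/* Sketch: a filled-and-stroked rectangle with the C API. `context`
   is a CGContextRef obtained elsewhere. CGContextDrawPath takes the
   context plus an enumeration selecting fill, stroke, or both. */
CGContextBeginPath(context);
CGContextAddRect(context, CGRectMake(72, 72, 144, 72));
CGContextSetRGBFillColor(context, 0.0, 0.5, 1.0, 1.0);
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0);
CGContextDrawPath(context, kCGPathFillStroke);
```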
Very straightforward. There are abstractions for transformations, color spaces, images, paths: kind of all the objects you would expect in the PDF and PostScript imaging model. So to give you a short demo of that, which you're also going to see at Derek's session, if we can get back to the demo machine.
We have a little application here we call Carbon Draw. One of our engineers, Mike Marinkovitch, had this as a little QuickDraw example. It's basically a little drawing canvas where you can create simple line art, just to show how to build a basic application. What Mike did was, he had already kind of factored out his code, and he had one module which basically did all of the content rendering.
And he replaced that with a module on OS X that calls Core Graphics. He also prettied up the user interface a little bit to bring out some of the features of OS X and Core Graphics. Here's the Carbon color picker, which is looking pretty good these days. The crayons are my favorite, and I've got that Aqua look going. We have the full PostScript line-style implementation, as you'd expect, so you can change the end caps: round end caps.
You can see, if you create an ellipse, we can show you that. And Mike actually didn't get totally done converting: you can see when he rubber-bands, it's actually QuickDraw doing the rubber-banding, and then he uses Core Graphics to draw the content. So he's kind of halfway through converting it.
But it's a nice thing to show that you can kind of live in both worlds there. So we have the dash implementation; we've got one particular dash setting. You can change the joins and the dashes and, you know, kind of fun. So we're going to make this available sometime after WWDC. The one thing the app doesn't do yet, which we want to show how to do, is print that out, to show how to get from the QuickDraw GrafPort to the Core Graphics context when you're printing, so you can replay the same objects in the printing path. So that's it for that. If you want to see more, I think they're going to show actual code snippets of that as well at the session on Friday.
So, printing. Printing for OS X is a completely new architecture. It's a client-server architecture with a clean separation between the front end of the printing process and the back end, and they communicate via the PDF spool file and a job ticket mechanism, which is an XML file that basically describes the finishing options on the job: whether it's supposed to be N-up, or a certain paper tray, or a certain color-correction strategy, or whatever.
It simplifies the driver model a lot. For those of you who have written printer drivers on OS 9, you know, from the QuickDraw bottlenecks to the paper and the printer is basically your deal. If you want to do spooling, background printing, queue management, whatever, you know, you've got to write a ton of code to get that done.
We're taking care of all of that in OS X, and if you're bringing up a new printer, you really just have to write the engine control logic. In fact, most of the I/O code we have for USB you can probably reuse and other I/O techniques. And, you know, a little bit of UI to bring out your printer-specific features, but that's really about it.
Again, PDF is the default spool file format. That brings us a lot of advantages, in the sense that we get print preview for free. You can also debug your printing code pretty quickly by just looking at the PDF output rather than wasting paper. It also leverages Core Graphics, the 2D rendering library, as a RIP, which we've spent a lot of time optimizing. Printer developers can take advantage of that out of the box and get pretty high-performance rendering.
So, demos of printing are never very exciting, so we're going to demo print preview. I shouldn't say that totally, but... We don't have a printer on stage, but this again is AppleWorks. Ralph created a little spreadsheet here for his wild creature collection. We're going to exercise the PDF graphics importer a little bit and drag in a couple of files. So this is the raptor; we resize it down. This is the ferocious chicken.
So it's getting the vector resizing. Now, a spreadsheet is typically something that's difficult to share unless the other person has the application. You don't always want to send them the original file. So exchanging a PDF is kind of a good way to do that. So one way you can do that on OS X is to bring up print and hit the preview button.
So all the QuickDraw calls are going to get translated to PDF, and we're going to open up our little PDF viewer. We still have a couple of bugs with the black there, but ignore that for now. So you see, basically we've got a nice PDF file in preview out of QuickDraw, and that's just built in. There's more: if you bring up Mail, then you can mail that to someone, CC someone else.
If you have any questions about that, you can actually just drag out of that title bar proxy there, and it will image right inline. You can send that, and if someone's not running OS X, it'll just show up as a PDF attachment; it'll launch in Acrobat and look just like that in print.
So as I said before, you know, not just for users but for developers, when you're debugging your printing code, you're trying to figure out how to lay out your graphics so you don't have, like, hanging words off one page to the next. You can iterate in that preview loop to get your pages to lay out correctly before, you know, you have to get it all the way to the printer. So that's a good way to debug printing code for developers too. Okay, back to the slides.
Image Capture. Image Capture is a new framework that we'll be talking about tomorrow. It's an API and some system services for digital still cameras. We're focusing primarily on cameras right now, but the architecture is flexible enough to handle scanners and other devices as well. We're going to be building in support for the new USB still camera protocol called PTP, which is coming out soon.
I'm told the cameras this summer will, by and large, be shipping with that protocol. There's a bunch of cameras out there already. We have drivers for some, and we'll get drivers for more by working with the vendors. But writing a driver for this architecture is actually very straightforward.
The API will be available on both Mac OS 9 and Mac OS X. And we're also providing a simple user experience for downloading the photos. So when the hot plug event comes into the computer, there'll be a control panel that'll come up and allow the users to say what they want to do with their image, most likely just download them or whatever. We'll be showing that. I think there are going to be some good demos tomorrow at this session, too. So if you're interested in that stuff, please attend.
Another piece of technology you definitely should be aware of is OpenGL. OpenGL is the only 3D graphics API we support on Mac OS X, so things like RAVE and QuickDraw 3D are not being brought forward to OS X; we're focusing all of our energy on OpenGL. Right now we're at the 1.1 spec level. We have a bunch of extensions, all of the interesting extensions that the game developers have been asking for, and we'll continue to track as more extensions come out and more hardware comes out with those extensions in hardware.
We will track those as quickly as possible, and OpenGL 1.2 is on the horizon as well. It's fully accelerated on Rage 128-based systems; that's in DP4, in what you have. That's all of the Macs you can currently buy except for the iBook. Support for earlier hardware is in software right now, so you can definitely write your app on that; the software renderer is actually pretty good.
We are working on support for Rage Pro and some of the earlier accelerators as well. We're juggling that with support for new hardware that's coming, which it's always easier to get engineers to work on the new stuff than the old stuff. So hopefully we'll be able to get it all done, but just to let you know, we're probably going to be prioritizing new hardware before going back to the old stuff. Just to put that out there. But it's not my call, so give us feedback.
One of the things we've done in the OpenGL implementation, since it was being brought up at the same time as the I/O Kit and all of that stuff was happening together, is a pretty advanced resource-allocation hardware abstraction layer inside the kernel. This layer is able to quickly manage texture paging and buffer paging on and off the card in a very optimized way, and we've been able to do things like play Quake reasonably with 2 megs of texture memory.
You can launch many applications that saturate the texture memory, and you get a very smooth slowdown as you need to load more and more textures; it's really just the cost of that extra command to load the texture, a very smooth degradation. Because we think over time more and more applications are going to be taking advantage of this, and we really want to optimize not just for games, but for a world where, say, every application is using the 3D pipe.
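To illustrate the kind of policy such a resource manager might use (an assumption-laden sketch, not the actual kernel layer, whose interfaces and policy are not public), here is a tiny least-recently-used texture residency model in C:

```c
#include <stddef.h>

/* Hypothetical sketch: texture paging under a fixed memory budget,
   with least-recently-used eviction. */
#define MAX_TEX 8

typedef struct { size_t size; unsigned last_use; int resident; } Tex;

typedef struct {
    Tex tex[MAX_TEX];
    int count;          /* textures in play        */
    size_t budget;      /* "card" memory available */
    size_t used;        /* resident bytes          */
    unsigned clock;     /* logical time for LRU    */
} TexCache;

/* Touch texture `idx` (bind it for drawing); returns evictions needed. */
static int tex_touch(TexCache *c, int idx) {
    Tex *t = &c->tex[idx];
    int evictions = 0;
    t->last_use = ++c->clock;
    if (t->resident) return 0;
    while (c->used + t->size > c->budget) {
        int lru = -1;                     /* find LRU resident texture */
        for (int i = 0; i < c->count; i++)
            if (c->tex[i].resident &&
                (lru < 0 || c->tex[i].last_use < c->tex[lru].last_use))
                lru = i;
        if (lru < 0) return -1;           /* texture larger than budget */
        c->tex[lru].resident = 0;
        c->used -= c->tex[lru].size;
        evictions++;
    }
    t->resident = 1;
    c->used += t->size;
    return evictions;
}
```

Repeated touches of a resident texture cost nothing; only a miss pays the load cost, which matches the smooth degradation the session describes.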
Okay, so I have to apologize first: in this part of the demo, you are going to see a command line, so if that bothers you, avert your eyes. I apologize, it's my program. This is a little program I wrote called Slide, and it talks to OpenGL, and talks to QuickTime to load a bunch of JPEGs.
These are some JPEGs I took at the office. I took my own advice, used graphic importers, it wasn't that hard. About 600 lines of code, and you can see what I've done is just put the textures on little planes and kind of moving them back and forth, so it kind of looks like a 2D app.
So, these are all my friends at work. But just to show you it's 3D, I added a little mouse handling in there, so Ralph can move the camera back and then ease it off to the side. Kind of awkward, but... So you can see what's happening: those planes are just kind of moving along an axis in and out of the camera, using the texture blending functions to get kind of a nice effect.
There's Ralph's modeling tool for the water demo. I don't know if you saw that go by. Maybe now with Maya we can get you some better modeling tools. So I took my own advice and used graphic importers. I said, "Well, we've got this other directory full of PDFs. Maybe I should try to run my same program on those PDFs." I honestly did this. I realized after the fact, I said, "Wait, that slide demo I'm working on, I could use that with PDFs." And a few days ago, I tried it, and it actually worked.
The next demo will show a way to use Core Graphics and 3D together, because we think with the Core Graphics 2D rendering library you can do pretty nice dynamic alpha textures, if you have a game where you want to do UI, or you just want to do some nice 2D rendering to then use as a texture. We think the combination of those two things will be pretty useful. So that's that. We can go back to the slides for just one sec. Thanks.
The next demo is a 3D compositor. You saw this in the keynote; Ralph is responsible for it, so thank him. He was responsible for the compositor you saw last year, too; this one just adds more features. The basic idea here was: originally we wrote the app as a demo of what you can do with PDF and compositing and anti-aliasing. And actually, the engine that drives this application is the same thing that's running the whole windowing system. When we first showed the application, we couldn't really talk about that, but the layer compositing engine here is actually running the whole display.
And so what we thought would be fun is to, you know, maybe, I don't know if you have like a packaging application or something. Again, there's a way with the PDF built into the system, you know, maybe there's a nice application idea here to do some kind of simulation like that.
And you can get pretty high performance. You can drag around the PDF, take advantage of the high performance 2D. You can resize it, rotate it, and it's just getting texture mapped. Ralph can also move around the lights. Did you move those yet? Steve didn't show all the features on stage. So that little thing there is the light that's whipping around. That can animate too.
So anyway, I mean, this was something literally, you know, we put together in the last week or so, you know, to try to show off a little bit of OS X and all the work we've been doing. But, you know, you guys have a whole year until next year to impress us with what you can do with all this technology. So just to kind of whet your appetite there. Okay, I think that's it for that.
Okay, the next thing we're going to talk about is performance tools. It's pretty important in this new world, where you don't always see what's going on when your application is drawing, that you take some time to optimize, because you can find that you'll be drawing things a lot of times in the back buffer that aren't necessarily being shown. You may be drawing the same thing over and over again. You may be flushing too much.
It's really important to try to tune that. We've gone through a lot of iterations inside of Apple with all the other teams to try to improve performance. Some of the tools we've developed in-house we're going to be making available. They're not fully supported in DP4: the windowing system has the debugging hooks, but we don't ship a tool to turn them on.
It's just a question of giving you a small app that can turn on some of these debugging hooks. Ralph is going to bring you through some of the performance tools. Hopefully, if there are any of our colleagues at Apple whose apps we're going to show here and embarrass, we apologize in advance. We'll help you tune your code later.
Okay, back to Ralph. Okay. Go ahead. Okay. What I'm going to show is this little application called Quartz Debug. What it does is, whenever an application flushes the contents of the backing store onto the screen, it flashes that area briefly in yellow, so you see what's getting drawn. So for example, if you move over the dock, which does a lot of drawing, you see it flickers like mad.
So, how do you use that? So, for example, I take a demo application called Sketch, and I draw a circle, which has reasonable performance, but when I turn on the rulers, I notice that performance is suddenly very bad. So, with that little tool... What this setting does is flash the area yellow, wait for a few milliseconds, and then do the actual drawing, so that you can see more clearly what's going on.
So you see, this application actually redraws quite a bit of the rulers every mouse update, and that's the reason why it's so slow. And what's that rectangle like in the upper left? Which one? I don't know, the big yellow one that goes in the upper left of the content. I'm not sure why that needs to be drawn. Oh, I don't know. Obviously there's some pixel touched in the upper left corner.
Okay. So, the message here is: whenever you have an event loop, try to flush exactly once per mouse move, mouse click, or whatever the event is you're responding to. There are two reasons for that. One, it just looks better, because the end user sees one single update and everything is just there. And the second is that memory bandwidth to the backing store is about six times higher on our current machines than it is to the frame buffer. So flushing less actually helps performance a lot.
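That flush-once-per-event advice can be sketched as a little simulation. This is purely illustrative; the `BackingStore` class and rect helpers here are made up for the example and are not the real Quartz API:

```python
# Illustrative simulation (not the real Quartz API): accumulate the
# damaged areas while handling one event, then flush the backing store
# to the frame buffer exactly once, instead of once per primitive.

def union(a, b):
    """Smallest rect (x, y, w, h) covering both a and b."""
    x = min(a[0], b[0])
    y = min(a[1], b[1])
    x2 = max(a[0] + a[2], b[0] + b[2])
    y2 = max(a[1] + a[3], b[1] + b[3])
    return (x, y, x2 - x, y2 - y)

class BackingStore:
    def __init__(self):
        self.dirty = None      # union of damage for the current event
        self.flush_count = 0   # trips to the (slow) frame buffer

    def draw(self, rect):
        # Drawing only touches the backing store; just grow the dirty rect.
        self.dirty = rect if self.dirty is None else union(self.dirty, rect)

    def flush(self):
        if self.dirty is not None:
            self.flush_count += 1
            self.dirty = None

def handle_mouse_move(store):
    store.draw((10, 10, 50, 20))    # e.g. the ruler marker
    store.draw((0, 40, 200, 100))   # e.g. the content area
    store.flush()                   # exactly one flush per event

store = BackingStore()
for _ in range(5):
    handle_mouse_move(store)
assert store.flush_count == 5  # one flush per mouse move, not per primitive
```

The point of the simulation is the ratio: two primitives per event but only one flush, so the expensive trip to the frame buffer happens once per event no matter how much you draw.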
Okay, let me quit this one. Another application I'm going to show is Champollion. And I clicked on another switch up there, which flashes all drawing. So now I'm not only marking every time the application flushes, I'm actually marking every time a single primitive is drawn. So it's similar to seeing the drawing happen directly on screen.
And you see it's now redrawing the icon bar at the top of the window. With pauses. It's not actually that slow, right? Yeah, there's a little delay in there so that you can actually see what's going on. Okay, and what this application does, when I type a character, you see it's actually redrawing the scroll bars several times.
[Transcript missing]
Sometimes you have an application that doesn't do perfectly efficient drawing, but you still get away with it because you have a lot of CPU power. Well, you should try to remove those performance bottlenecks anyway, because when the computer is under heavy load, like when several other applications are running, this stuff can actually cause additional paging, because you're executing code that doesn't need to be executed. So I would advise people to actually use that tool and, basically, only draw whatever has changed and skip everything else.
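"Only draw whatever has changed" usually means testing each element against the update region before drawing it. Here's a minimal sketch of that idea; the element names and rects are invented for illustration:

```python
# Illustrative sketch: before drawing each element, test it against the
# update region and skip anything that doesn't intersect it, so only
# the changed part of the window actually gets redrawn.

def intersects(a, b):
    """True if rects a and b (x, y, w, h) overlap."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

# Hypothetical window layout: frames for each drawable element.
elements = {
    "ruler":      (0, 0, 400, 20),
    "scroll bar": (385, 20, 15, 300),
    "content":    (0, 20, 385, 300),
}

def redraw(update_rect):
    """Return the names of the elements actually drawn for this update."""
    return [name for name, frame in elements.items()
            if intersects(frame, update_rect)]

# A typed character only damages a small rect in the content area,
# so the ruler and scroll bar are skipped entirely.
assert redraw((50, 100, 10, 14)) == ["content"]
```

An app that redraws its scroll bars on every keystroke is effectively skipping this intersection test and paying for drawing nobody sees.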
Okay. Okay, thanks. Well, wait a second. Oh, wait. We have something else. You can also get a list from the window server which lists all the windows that have been allocated. So it's probably a bit hard to see up here, but every line represents a window, and you see which application has allocated it.
So, for example, Champollion here has one window which is about 600 by 500 pixels, which is the main document window, and another window which is 1,000 by 22 pixels, which is the menu bar. All of these windows require additional memory, because there's a backing store there that caches the bits. Essentially, if a window is off-screen, it's a good idea to release it. Otherwise, you're spending something like a few hundred K of memory just for that window, and it's usually faster to redraw a window when you need it again than to swap it in from disk.
So, as a typical example here, we have the login window, which was launched before the entire show started. The login window still has its window allocated somewhere; it's just off-screen. So here's something like 300K that could have been saved. Okay. Great. Some great tips there. We'll be getting you some of those tools, like Quartz Debug, as soon as we can after DP4 here.
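The backing-store cost is simple arithmetic: width times height times bytes per pixel. The numbers below assume a hypothetical 16-bit (2 bytes/pixel) window depth just to make the figures from the demo work out; the real cost depends on the window's actual pixel format:

```python
# Rough backing-store arithmetic: every on-screen (or forgotten
# off-screen) window costs width * height * bytes_per_pixel of memory.
# The 2-bytes-per-pixel default is an assumption for illustration.

def backing_store_bytes(width, height, bytes_per_pixel=2):
    return width * height * bytes_per_pixel

# The 600x500 document window from the demo: ~600K of backing store.
assert backing_store_bytes(600, 500) == 600_000

# The 1000x22 menu bar window is comparatively cheap.
assert backing_store_bytes(1000, 22) == 44_000

# Releasing an off-screen window gets that memory back; redrawing it
# later is usually cheaper than paging the stale bits in from disk.
```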
Actually, that one's pretty easy, because it really is just that app, and it just sets a few bits in the window server. Future directions. As we move forward, one thing to notice is there hasn't been a huge change of story in terms of the technologies we're working on. In terms of 2D and 3D, we made the OpenGL decision over a year ago. Media with QuickTime, obviously. We're pretty pleased with the core set of technologies we've picked to focus on, and we think those bets are really going to pay off.
What we're doing now is a couple of things. We're continuing in each technology area to advance the state of the art. For example, in Core Graphics, in the 2D rendering, we're looking at things like a raster effects model. If you've seen the SVG spec, there's a pretty sophisticated raster effects model in there. More extensions for transparency and blending modes and things like that.
We really want to upgrade the 2D graphics to be able to do a lot of those capabilities on the fly, to render the whole button instead of just the label text, and things like that. To do that, you need a lot of power, a lot of filtering operations. We really see 2D going toward that expressiveness; almost a blend of image processing and 2D is the direction that Core Graphics is going to go in.
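For a flavor of what transparency blending involves, here's the classic Porter-Duff "over" compositing step, which underlies most alpha blending. This is a generic textbook formula, not Core Graphics code, and the premultiplied-alpha convention is an assumption of the example:

```python
# Porter-Duff "over" compositing with premultiplied-alpha RGBA colors:
# result = src + (1 - src_alpha) * dst. Purely illustrative of the kind
# of per-pixel math a blending-mode-capable 2D engine has to run.

def over(src, dst):
    """Composite premultiplied-alpha RGBA src over dst."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    k = 1.0 - sa
    return (sr + k * dr, sg + k * dg, sb + k * db, sa + k * da)

# 50%-opaque red (premultiplied) over opaque white gives pink:
assert over((0.5, 0.0, 0.0, 0.5), (1.0, 1.0, 1.0, 1.0)) == (1.0, 0.5, 0.5, 1.0)
```

Running math like this per pixel, per layer, across effects chains is exactly why the filtering power mentioned above matters.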
The other thing that we're really working on is continuing to look for opportunities to integrate all this technology together better. A prime example of that is OpenGL and the windowing system. We didn't show you back in the demo, but you can actually drag a translucent icon over the 3D content and it all gets mixed in, following the mixing metaphor that we've set up for the desktop. Of course, what we have to do to achieve that, you don't want to hear about, but it actually works.
There is some loss in frame rate, but the user experience, it just feels like any other window. Over time, we want to be able to do those things much more efficiently and hardware, take advantage of OpenGL within the windowing system itself, and to fully accelerate the whole desktop and make a great experience. That's another key integration area that we're looking at as well.
So the summary here, if you're doing a Carbon application, there's a couple of simple things you can do. You can definitely take a look at these performance tools and see if there's too much flushing going on, or flushing you don't understand. A lot of times it's not necessarily your code if you're using PowerPlant or something like that.
The updates may be coming from there, so it's worth chasing down through the layers, making sure that the buffering is as efficient as possible. And of course, using QuickTime graphics importers: if your application needs to import any graphical images, it's a great way to do that. And you'll get the PDF graphics importer for free on OS X.
Also, if you're interested in Mac OS X only opportunities, I think there's a real fertile ground here for applications. You consider by the time we ship 1.0, the average system running OS X, if you will, is going to be a pretty high-performance machine. G4s or high clock rate G3s, good graphics systems.
You can really target a much higher functionality set. Even for us on OS X, we do live window drag because we're starting from scratch here. On OS 9, where you need to go back to the previous architectures, maybe that's not always possible. They could probably do it on OS 9 too.
Just an example of trade-offs where if you know you're going to be hitting a certain performance level, why not take advantage of it? There are some benefits for, say, the smaller market of OS X. You can turn that around to an opportunity to really take advantage of the hardware to its fullest.
The other thing is the technology combination or the palette of technologies we're offering on OS X is really kind of a unique combination in the industry. I think if you look at BSD, the plumbing and the standard Unix facilities there, up to the stuff we're doing with the windowing system and the Quartz APIs, OpenGL, Cocoa, QuickTime, you kind of mix all that stuff up together and you've got some pretty interesting opportunities to do some pretty innovative apps. So if you're looking for OS X opportunities, I would say go for it. And hopefully next year at this conference, we'll all be surprised by the demos you guys give, which would be great.
So I do have some roadmap slides here to point you at some additional sessions. Then I think we're going to have time for some Q&A after that. Today, there's one more session this afternoon in Room C, which is the font management session. If you want to learn how to manage fonts on OS X, since we're moving font management away from the Resource Manager, you need to hear about that.
On Thursday, there are some sessions on OpenGL in the morning. There's a Beyond Games session that Geoff Stahl is doing to show some of the other ideas for how you can use OpenGL, not just in 3D games. There's an OpenGL Advanced Optimization session, which talks about some of the extensions we're working on and some of the optimization techniques, like compiled vertex arrays and other things, if you really want to make your frames go as fast as possible. There are certain code paths through OpenGL that you need to be aware of if you're really trying to get as many triangles through as possible.
There's an OpenGL Feedback Forum tomorrow as well, so you can meet the team, give them some feedback. And tomorrow, of course, is the Image Capture session also, which will be across the street in the Civic Center to learn about the Image Capture framework and see some demos of that.
Friday, we have a pretty full slate for you, starting at 9:00 with the ColorSync session and the Mac OS X printing sessions. There are two printing sessions, introductory and advanced. There's the session on the Quartz APIs, which will be in Hall 2, in here, at 2:00. And then, finally, the last event of the conference, which you're all going to hang out for, is the Graphics and Printing Feedback session. We'll all be there, and you can meet the team and give us additional feedback.
So with that, one more slide: who to contact on the DTS side at Developer Relations, our developer partnership folks. John Signa, who is the technology manager for Mac OS, Core OS, and Graphics Services. Sergio Mello, who's recently joined the Apple team as a 3D technology manager. Their email addresses are there.