WWDC03 • Session 201

Mac OS X OpenGL in Depth

Graphics and Imaging • 1:14:00

This session is the perfect starting point for developers looking to learn the specifics of the extensive OpenGL implementation in Mac OS X. We begin with an architectural overview of OpenGL and then focus on the various OS-level interfaces (AGL, NSGL, CGL, and GLUT) that developers can use in their applications. This session is ideal for graphics developers who are new to Mac OS X or developers who are looking to use 3D graphics in their applications for the first time.

Speaker: Geoff Stahl

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and contains known transcription errors. We are working on an improved version.

So again, we're talking about OpenGL in Depth, and as Travis said, it's kind of a multi-tiered approach to this session. What we want to do is we want to take you from the beginnings of OpenGL for those who don't know a lot about OpenGL, building on that with our API frameworks that access OpenGL, and then moving on to some techniques and tips which will allow those who do know OpenGL fairly well to gain something from the session, things they can use in their app.

So first I'll start out with introducing the OpenGL subsystem. I'll then show how to use OpenGL and talk about how to get the functionality out of it. I'll then show some techniques for using the 3D API and following up with, as I said, the tips. So, let's first take a look at the technology framework. And I have this in kind of a strange way that you may not have seen before as far as looking at OpenGL. This is an application on the top that accesses OpenGL in two ways.

On the right side of the screen is the actual OpenGL access and where it would actually access OpenGL, call the GL functions. On the left side of the screen is the API that you have to use to get at the windowing system or provide the windowing system interface for OpenGL. It looks complicated as far as the number of different APIs.

But what this actually is, it gives you a choice from the highest-level GLUT API to the lowest-level CGL API depending on how your application is written. It will depend on what API you use, and it gives you a lot of different options for writing an application and accessing the OpenGL functionality. Let's introduce OpenGL to you guys.

Talk about what OpenGL is. And we'll answer the question that Peter touched on in his introduction session to the graphics track: why do you want to use OpenGL? Why does... I have a mainly 2D application. Why would I want to use OpenGL when it's a 3D API, right? So that's not something I normally would want to use. We'll talk about the OpenGL state machine, which is key to understanding OpenGL operations, and finally talk about the API and a little bit about how it's different from a lot of other APIs in figuring out what functionality is actually available on the platform.

So OpenGL in simple terms is a software interface for hardware. In a lot of cases, its design and development parallel hardware design and hardware development. It's platform agnostic, which is why, when we move on to the interfaces, those interfaces are different on Mac OS X than you may have seen on other platforms.

And it's asynchronous. This is something that Peter touched on in depth in his presentation, and it's good to kind of reemphasize that here. Because what happens with an asynchronous API, you're going to actually issue commands. The commands at some later time will be executed by the graphics processing unit or GPU.

Most developers are used to, when they call a command like glVertex or even glFlush, assuming that command is executed by the time that call returns. In the case of OpenGL, in most cases, that's not true. The GPUs are so fast that we don't see this a lot of times. We don't see that there's a lag.

But if you issue enough commands, you can actually be waiting on the GPU. And you can see this in some of the profiling tools. But in any case, keep that in mind when you develop your application. You want to maximize your asynchronicity in your application. Issue the commands, go on and use the CPU to do other things.

It uses geometric primitives: points, lines, polygons. So those are the basic primitives for drawing; you can draw lines, you can draw polygons made up of multiple points. It's fairly simple stuff that we all understand. It also is a state machine. OpenGL has state that you set.

It stays set until you change it. So if you turn texturing off, and you draw a polygon, texturing is obviously going to be off, and it will not texture. If you continue to draw polygons, those will also have no texturing on them until you actually turn texturing on.

For example, texturing is off when you first initialize the interface. And so a lot of people initially get bit by the fact of: why am I not drawing a texture? You're not drawing a texture because you never turned it on. So something to keep in mind, again, when you're developing for OpenGL, is that it is a state machine, and state will remain set until you change it.

So, the big question is: why should I use OpenGL? Peter showed in his talk that GPU development is increasing at a higher rate than CPU development, and even the simple fact that the GPU is a very powerful processor on the system that you want to utilize to its fullest. In most cases it's not good application design to process data with the CPU, then wait for the GPU to finish processing some data, and then go back to the CPU.

What you want to do is use OpenGL to get asynchronous processing so they're both working, and utilize the power of that GPU. Power like this. We've measured this in simple tests. Oh, by the way, all these measurements are G4 measurements. So as of yesterday, all these numbers moved up a notch.

And I think some of the developers we have here just went down to the lab and tested their applications on G5, were very pleased, and saw huge increases in performance above some of the numbers I mention here. So we've measured in simple apps 650 megabytes per second of texture upload across the bus.

This is like modifying a texture that's 4 megabytes. It's 1,000 by 1,000 by 32 bits. So that's like if you had a movie that was 1,000 by 1,000 by 32 bits. You can modify that at 200 hertz, 200 times a second. You can modify that texture, play a movie at that large size across the bus to the GPU every frame.

So you have the ability to upload something in the neighborhood of 140 million triangles per second. What that means is, let's say your single scene takes 100,000 triangles. You get 1,400 hertz, or 1,400 frames per second, in your application. Or if you took a game like the kind of ubiquitous Quake 3, which is about 10,000 triangles per frame, and all you were doing was uploading triangles, you could get 14,000 frames per second. You can see this is a lot of power.

OpenGL provides a large, powerful API that is good for 2D and 3D operations. It has vertex and pixel programs for complete customization. And finally, which is, I think, a really good draw for OpenGL, it is cross-platform, and there are lots of code samples from Apple and on the web for you to pattern your applications after.

So if you want a quick look at the OpenGL pipeline, we're not going to go over this in depth, but what you see is the application sending primitives, like the points, lines, and polygons. It's sending image data to the GPU or to the OpenGL framework. It's going to do some transform and lighting. It's going to do clipping to the viewport. It'll then move into the rasterization step. At that point, you do some multi-texturing. You apply the fog, and then you do, at the end, the per-fragment operations. And one key question: what's a fragment?

Some people who are new to OpenGL don't understand a fragment. Think of a fragment as a smart pixel, as a dot on the screen that may have additional information other than color, may have an alpha value, have a depth factor on it. Those kind of things are fragments. And that's why we get fragment shaders, vice pixel shaders.

And last is the frame buffer blending, to blend with the frame buffer. In future presentations about vertex programs and fragment shaders, you'll see how we take parts of this pipeline and remove them, and you can put in a customized part of the pipeline. So again, you can customize the control of the GPU to your application.

So we talked about state. OpenGL is a state machine. You set the state, and it remains unchanged until reset. And one thing to also remember is state changes can be expensive. That is not to say that you shouldn't do any state changes.

And you shouldn't over-optimize for state changes. I mean, you could spend months or years optimizing so you do as few state changes as possible. It's probably not a good use of your time. But also, on the other end, you don't want to do duplicate state changes or a large number of state changes per vertex or per primitive.

I mean, I don't want to draw a polygon, change a whole bunch of state, draw another polygon, change a whole bunch of state, draw another polygon, change a whole bunch of state. That can be really expensive. So try and avoid that in your applications in general. Change some state, draw some polygons, change some state, draw some polygons. And that organization will be much more efficient for OpenGL. You can examine state both yourself and at programmatic level.

If you want to examine it yourself, the OpenGL Profiler is a great tool. The OpenGL Profiler can pause your application at any point on any OpenGL call. You can bring up the entire OpenGL state and you can look at exactly what it sets. So if you're in that texturing case where you're not getting any texturing, you could pause at the point you're drawing that and look at the OpenGL state and say, "Huh, why am I not getting texturing?" "Oh, look, texturing's turned off. 2D texturing is disabled." Thus, you can debug your apps and get a lot more information about what the GPU capabilities are. And what they're set at with the OpenGL Profiler.

Also, programmatically, you may want to make decisions based on state. It's not always a good idea to do this kind of "if this state's not set, set it. If this state's not set, set it. If this state's not set, set it." Because that can stall the pipeline. Think about it this way. We talked about this being an asynchronous API. So you have a command. You issue the command.

The command's going to move down to the pipeline. And that command could be a state setting command. What's going to happen is, if the next command following that, you want to get what the current state is, we're going to have to wait for that command to flush all the way through the GPU, all the way through the pipeline, and then retrieve the state to make sure that we're actually retrieving a valid state vector for you.

So, realize that getting and setting state can be expensive, but do it when necessary. Some calls are at the bottom. You have enable and disable for enabling and disabling, for example, texturing. You have glGet or glIsEnabled for getting some state there. If you want to handle a large amount of state, you can do a push and pop of attributes.
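
As a rough illustration of those calls, here's a minimal C sketch; the function name and the particular bits of state picked here are just for illustration:

    #include <OpenGL/gl.h>

    /* Illustrative only: enable some state, query it, and save/restore a block of it. */
    static void illustrateStateCalls(void)
    {
        /* State stays set until changed: turn 2D texturing on once. */
        glEnable(GL_TEXTURE_2D);

        /* Queries are valid but can stall the pipeline, so use them sparingly. */
        if (!glIsEnabled(GL_TEXTURE_2D)) {
            /* unexpected; handle it */
        }

        GLint matrixMode = 0;
        glGetIntegerv(GL_MATRIX_MODE, &matrixMode);

        /* For a large block of state, push it, change it, draw, then pop to restore. */
        glPushAttrib(GL_ENABLE_BIT);
        glDisable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        /* ... draw with the temporary state ... */
        glPopAttrib();
    }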

So the OpenGL API. It is a procedural API. It is a client-server interface to get this asynchronicity. You're going to issue commands, and they're going to be collected and sent to the graphics processing unit. And the types of commands are state commands, drawing primitives, and manipulation of buffers in most cases.

So here's an area I want to spend a little bit of time concentrating on is OpenGL functionality and what that means to you as a developer. One of the big misunderstandings in OpenGL is exactly what functionality do I have when I'm running my program. You may have your design system that's tricked out, has the best graphics card, has everything, and your app may run great.

But how do you make, how do you detect what functionality the end user is going to have? And that's where two types of things come in, both extensions and the core API version number. The core API version for OpenGL in general ranges from OpenGL 1.0 to 1.4. On Mac OS X, we only support 1.1 and above.

We never supported 1.0 on Mac OS X, which just means that you never have to worry about ever having that case of 1.0 occurring in your user base. So it's always going to be 1.1 to 1.4. And usually, it's not a case of when did a driver come out or when did a card come out or when did we rev a driver to get what version it is.

It's: what does that actual hardware support? So for example, if the hardware does support 3D texturing, it can probably report 1.2 or above. If it doesn't support 3D texturing, it's never going to report a core OpenGL version above 1.1, because the 1.2 OpenGL spec requires 3D texturing.

So, but what about a card that may say, "Hey, I'm 1.1," but really can do some more things than the 1.1 spec allows? Well, that's where OpenGL extensions come in. We support over 80 OpenGL extensions. They range from things that are Apple-specific, like some of the things we did with vertex array object, some of the texture range stuff that John will talk about, and the optimization thing, some things that we give you to allow you to do as optimum a texturing path or vertex path as possible on Mac OS X, or to things that are like ARB multisample, which is for full-screen anti-aliasing, and is an ARB extension, which is by the architecture review board, and is cross-platform and supported on a variety of cards.

This extends the functionality above the OpenGL core functionality, which is specified by the OpenGL renderer. So how do we detect this, which is really what I want to talk about as far as an application. If I'm in an application, I don't want to just say, well, 3D texturing is not supported on every card, so I'm not going to run on anything that doesn't support 3D texturing. That's not a good solution. Or, you know, I really wanted this fog effect in my application, but that used a 3D texture, and since I don't know that everything's going to support a 3D texture, I'm not even going to use the fog effect; also not a good solution.

So what you can do, there are simple checks you can do when you start your application to determine if 3D texturing is supported. The first thing you can do is use glGetString with GL_VERSION. That will give you the core version. That's a string in a certain format that's defined. It'll be like 1.4, a space, and then there'll be some additional vendor-specific stuff, but the beginning of it is always going to be the same.

So if you're OpenGL 1.2 or greater, you know you have 3D texturing. No more checking necessary. Say it comes back and says, hey, I'm OpenGL 1.1 for whatever reason. You can then move on and look for the extension for 3D texturing. You can do that by getting the extension string, and then we provide, through GLU, the GL Utilities API, additional API to check for specific extensions. In this example, I used the Apple fence extension. I'm checking here for APPLE fence, and this will tell me if the fence is supported, so I can then, in my code, determine whether I want to do a code path that uses a fence or not.
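
A minimal C sketch of that kind of check might look like this; the 3D-texture fallback logic is just an example, and gluCheckExtension does the matching against the extension string for you:

    #include <stdbool.h>
    #include <stdio.h>
    #include <OpenGL/gl.h>
    #include <OpenGL/glu.h>

    /* Must be called with a valid OpenGL context current. */
    static bool supports3DTexturing(void)
    {
        const GLubyte *version = glGetString(GL_VERSION);        /* e.g. "1.4 ..." */
        const GLubyte *extensions = glGetString(GL_EXTENSIONS);
        int major = 0, minor = 0;

        /* Core OpenGL 1.2 and later require 3D texturing. */
        if (version && sscanf((const char *)version, "%d.%d", &major, &minor) == 2
            && (major > 1 || minor >= 2))
            return true;

        /* Otherwise fall back to checking the extension string. */
        return gluCheckExtension((const GLubyte *)"GL_EXT_texture3D", extensions) == GL_TRUE;
    }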

A really good example of this is rectangle texture. A rectangle texture is a great extension to use if you're doing a lot of image texturing, and it's not supported, for example, on the Rage 128. But it simplifies your code path a lot if you can use rectangle texture on other GPUs.

So you wouldn't want to write everything as if it was a Rage 128. You can use this method right here, check for the texture rectangle extension, and then decide which code path you're going to use in your application. Finally, one thing you should be aware of is OpenGL limits.

OpenGL has limits that are card dependent. If you're on a high-end card, you may have a texture size limit of 4,000 pixels. If you're on a low-end card, it could be as low as maybe 1,000 pixels.

Something to determine: if you're working with large textures, you may have to divide them up. You can use things like max texture size right here, in glGetIntegerv, and that will get the texture size limit for you so you can, again, set your code up to maximize the ability of that GPU to perform for you.
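
A quick sketch of that query in C; what you do with the number (tiling, scaling) is up to your application:

    #include <OpenGL/gl.h>

    /* Largest width/height (in pixels) this renderer accepts for a 2D texture. */
    static GLint queryMaxTextureSize(void)
    {
        GLint maxSize = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
        return maxSize;   /* e.g. tile larger images into pieces no bigger than this */
    }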

So I want to point out at the bottom, I should have pointed this out at the beginning, but this is a good time to do it. I have this blue line at the bottom of it. It says sample code, Carbon OpenGL, Carbon CGL, and Cocoa OpenGL. I put that on the bottom of a lot of slides.

What that is saying is that's online sample code, online references that talk more about the subject on the slide. So instead of, you know, taking notes on all these things, you can just go to this sample code and it actually has functions to do this. The other thing that's interesting about this detecting functionality thing, it can be a large effort to put a lot of checks in for a lot of different functionalities.

So what I did was build some sample code that has a GL check function in it, and these three samples have it. And this sample code will go through, and for every display and for every renderer on your system, it'll build a list of all the functionality present, including limits.

You can use it all as is, or you can extend it or reduce it as it fits your application. So I say look at that sample code, look at how that's done, and either model your application directly off of that, or use the sample code directly in your application as is. And that'll help you with detecting functionality, so you all don't have to write the same kind of functionality detection code for every one of your applications.

So let's show what the detecting-functionality sample code gets, and show you what I talked about: extended functionality and core functionality. This is just a simple OpenGL demo. Let me move this out of the way. And one thing I want to show you is not anything specific in the demo, but this information. You probably can't even read what it is, but specifically it talks about OpenGL capabilities. The beginning is texturized, but the bottom is a list of every extension or every feature supported by this renderer on this machine. You can see there are a lot of them.

So sticking to the core functionality is probably not going to give you a very robust app. A lot of functionality, a lot of things you can do with OpenGL in extensions. You can use the GL check, which generates this information to determine what extensions are there and code your app to take advantage of this. I'm going to go back to slides.

So let's move on and talk about interfaces. Interfaces are going to be the meat and potatoes of getting your applications started. That's something that everyone here who writes an OpenGL app will have to touch. Some of you who are already working with OpenGL may think, "Nah, don't need to know this." But I'm going to go through all the interfaces, and you might learn something: if you're a Carbon developer, you may think, "Hey, that Cocoa interface looks pretty nice." Or if you're a Cocoa developer, some Carbon interface stuff may work for you, or a CGL interface may work for you. So that's what we're going to talk about in the interfaces section.

All the interfaces share some basic things. These are the basic things that the windowing system has to provide to OpenGL. Remember I said OpenGL was a platform agnostic API. What that means is there's no windowing system calls in it. There's not any Windows calls or any Mac OS X calls or there's nothing to say, hey, this is a window, I want to attach to that window. These interfaces provide that.

They provide a pixel format, which basically describes buffering capabilities. So it's buffers like, do I want a depth buffer? Do I want an auxiliary buffer? Or capabilities like, do I want full screen, or do I want stencil, et cetera. The context, which these also provide, you can think of as a state bucket. It's a big bucket of state, and commands are sent, and can be sent, to the current context.

You can create as many contexts as you need for your rendering, but these will create your context for you. And finally, the drawable is basically equivalent to the window or view or the screen. It provides the size for your buffers, and actually the buffers are instantiated when you attach to the drawable.

Interfaces available. There are four interfaces we're going to talk about. CGL. CGL is a low-level interface. It's the basis for all the other interfaces. It's for full-screen only applications. But if you have an application that's full-screen and windowed, you could use a CGL interface for the full-screen portion of it and then use a different interface for the windowed portion.

AGL is the Carbon interface to OpenGL. So if you're a Carbon developer, you're going to look at AGL and use that to interface with OpenGL. NSGL or NSOpenGL is going to be the Cocoa interface to OpenGL. And finally, GLUT is a very high-level interface that provides source-level cross-platform compatibility. And it's used a lot for examples and in the scientific community. It doesn't provide that rich of a UI set, but for some basic, we want to test something out, it works fairly well.

So again, we've seen this before. You can see that GLUT would be the highest level interface. It's actually built on top of NSOpenGLView. NSOpenGLView is built on top of NSOpenGLContext and PixelFormat. AGL is built on top of CGL, and everything else kind of sits on top of the pancake that way. Again, your application is going to pick one of the interfaces on the left side and then access OpenGL from the right side of the diagram.

So, CGL, Core OpenGL, again, it's low-level, it's a basic interface, it's a foundation for everything else, it's full-screen only, and let's talk about setting it up. All these interfaces have almost the exact same setup code, so if you're not developing OpenGL right now, this is what you're going to have to do to get your OpenGL window on the screen.

For CGL, for full-screen, you want to pick a desired display mode. Do you want 1024 by 768, or do you want something else? Capture the displays to make sure you do not modify anything else on the desktop, or anything some other application has, or icons on the desktop.

You want to switch to the video mode that you have chosen, and create a pixel format for that specific display. The key here is a specific display: you're going to pick one specific display that you want to render onto and make a pixel format centered on that display. Then, in CGL, you'll create the context and then set full screen.

So let's walk through the code example. Again, the CGL sample code here, CarbonCGL, has almost the exact same code in it. This is simplified slightly, but if you really want to look at this and study it, please download the sample. It's on the web right now, and look at that. I'll go through the code here, but I'm not going to go through every detail in this session. So first we're talking about the pixel format, and this is what you'll see in a lot of these setup codes. You'll see a pixel format, and you'll have attributes.

These attributes define things like what buffers you want and what capabilities. And here what's important to note is you have the fullscreen attribute. Supplying the fullscreen attribute will tell CGL that you definitely want a fullscreen. CGL could work with offscreens also, but in this case we're talking about fullscreen.

The other thing of interest is that first two attributes put together, display mask and the zero. The zero doesn't mean anything right now. It's a placeholder. But we put that in, and that's going to tell CGL what screen we want to actually work on. We then go down, and we get the main display, for example.

We get a display mode for the height and width and depth that we want. We capture all the displays, and we switch the display mode. Those are CG calls; those are things that are covered in the CG API. Then we have one additional call that may look new, that's in the CG API, that you may not have seen before. That's CGDisplayIDToOpenGLDisplayMask. What that does is return an OpenGL display mask, which you fill into that attribute slot. That's going to tell OpenGL, the CGL interface, what display you want to use.

So you call that. You then set up the pixel format, so you have the display mask in there for that specific display, and create the context. You can destroy the pixel format right now because it's not needed. You could keep it, but you can destroy it if you want to. Set the current context, and then you'll set full screen. At that point, you'll have an OpenGL context on your full screen, and you can draw to it. Fairly simple.
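
Pulling those steps together, a condensed C sketch of the CGL full-screen setup might look like this; error handling is omitted, the 1024x768x32 mode is just an example, and the CarbonCGL sample has the complete version:

    #include <ApplicationServices/ApplicationServices.h>
    #include <OpenGL/OpenGL.h>

    static CGLContextObj setUpFullScreenCGL(void)
    {
        CGDirectDisplayID display = CGMainDisplayID();

        /* Capture the displays and switch to the desired mode. */
        CGCaptureAllDisplays();
        CFDictionaryRef mode = CGDisplayBestModeForParameters(display, 32, 1024, 768, NULL);
        CGDisplaySwitchToMode(display, mode);

        /* The 0 after kCGLPFADisplayMask is a placeholder filled in below. */
        CGLPixelFormatAttribute attribs[] = {
            kCGLPFADisplayMask, (CGLPixelFormatAttribute)0,
            kCGLPFAFullScreen,
            kCGLPFADoubleBuffer,
            kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
            (CGLPixelFormatAttribute)0
        };
        attribs[1] = (CGLPixelFormatAttribute)CGDisplayIDToOpenGLDisplayMask(display);

        CGLPixelFormatObj pix = NULL;
        GLint npix = 0;
        CGLChoosePixelFormat(attribs, &pix, &npix);

        CGLContextObj ctx = NULL;
        CGLCreateContext(pix, NULL, &ctx);
        CGLDestroyPixelFormat(pix);      /* no longer needed once the context exists */

        CGLSetCurrentContext(ctx);
        CGLSetFullScreen(ctx);           /* attach the context to the captured display */
        return ctx;
    }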

AGL is a Carbon interface, and it has windowed and full-screen support. If you look at the setup here, it's really very, very similar to CGL. You're going to create a view or a window or whatever, where CGL used the screen, but then you're going to create a pixel format.

And I'm going to make a note here about limits on multi-screen pixel formats. It's something that's probably more complicated than we want to go into for this session. If you have specific questions about it for your apps, we can talk about it afterwards. But I also point you to two tech notes there.

AGL Choose Pixel Format: The Inside Scoop, and The Correct Setup of an AGL Drawable. Both of those have a significant amount of information about choosing pixel formats and how that interacts with multi-screen displays. If you choose just a normal pixel format, it should normally pick every renderer it possibly can support, and you'll be able to drag the window between multi-screen displays. There are reasons you may not want to do that in some cases.

There are reasons that you probably want it that way. So for just a normal app, you would want to just choose an open pixel format, let it support all the renderers, and then you get the ability to drag between displays. You then create the context and attach it to the drawable, as we've seen before. So the code example for windowed: basically the attributes, we've seen the attributes before. They're AGL attributes instead of CGL in this case. Double buffer, depth; those are similar to what we saw in CGL. You then choose a pixel format.

Then, in this case, I just do some checking here. If I created a pixel format, I then create a context. If I create the context, I then set the drawable. And if you notice the GetWindowPort on the window: what that's going to do is use the window that I've created with the normal Carbon routines as the drawable. And I'm going to set the current context. And then you can draw into it with OpenGL. One thing the bottom of that shows is something people ask about: VBL syncing.

Normally, on Mac OS X, we do not sync the OpenGL drawing to the VBL or limit the OpenGL drawing to the VBL. So what you can do is use the set-integer call that's in almost all of the APIs, and that will allow you to sync to the VBL here. In this case, you call aglSetInteger with the context, and you want to use the swap interval and set that to one. And that means, hey, we're going to limit to the VBL sync.
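
Here is a rough C sketch of that windowed AGL setup, including the swap-interval call for VBL syncing; it assumes the window has already been created with the usual Carbon calls:

    #include <AGL/agl.h>
    #include <Carbon/Carbon.h>

    static AGLContext setUpWindowedAGL(WindowRef window)
    {
        GLint attribs[] = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 16, AGL_NONE };

        /* NULL device list: let AGL consider every renderer, so the window can be
           dragged between displays. */
        AGLPixelFormat pix = aglChoosePixelFormat(NULL, 0, attribs);
        if (pix == NULL)
            return NULL;

        AGLContext ctx = aglCreateContext(pix, NULL);
        aglDestroyPixelFormat(pix);
        if (ctx == NULL)
            return NULL;

        /* Use the window's port as the drawable, then make the context current. */
        aglSetDrawable(ctx, GetWindowPort(window));
        aglSetCurrentContext(ctx);

        /* Sync buffer swaps to the display's VBL. */
        GLint swapInterval = 1;
        aglSetInteger(ctx, AGL_SWAP_INTERVAL, &swapInterval);

        return ctx;
    }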

So now let's move on to AGL fullscreen. In the AGL API, you can also do fullscreen inside of that API itself. This looks pretty much the same, but let's just highlight the things that are different. There's only about three or four lines that are actually different here. First, we've added the fullscreen attribute to the pixel format.

So that's saying, hey, I want a fullscreen, not a windowed, pixel format. Then we're going to get the main device, so we have a device to do fullscreen on. You need to tell it where to do the fullscreen. And AGL's a little bit different from CGL in the fact that you put the display into the choose-pixel-format call instead of in the attributes.

So you put the display you just got, one display to draw to, and the attributes to create a pixel format. And then instead of setting the drawable, you do a set fullscreen. Fairly simple. So this is really, really simple stuff as far as how you set up OpenGL. You can get running with OpenGL depending on what API you pick in a matter of minutes.
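
The full-screen variant only changes a few lines; a minimal C sketch, assuming a 1024 by 768 mode on the main device:

    #include <AGL/agl.h>
    #include <Carbon/Carbon.h>

    static AGLContext setUpFullScreenAGL(void)
    {
        GLint attribs[] = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 16,
                            AGL_FULLSCREEN, AGL_NONE };

        /* The display goes into the choose call, not the attribute list. */
        GDHandle device = GetMainDevice();
        AGLPixelFormat pix = aglChoosePixelFormat(&device, 1, attribs);
        if (pix == NULL)
            return NULL;

        AGLContext ctx = aglCreateContext(pix, NULL);
        aglDestroyPixelFormat(pix);
        if (ctx == NULL)
            return NULL;

        aglSetCurrentContext(ctx);

        /* Instead of setting a drawable, go full screen (width, height, refresh, device index). */
        aglSetFullScreen(ctx, 1024, 768, 0, 0);
        return ctx;
    }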

Let's talk about NSGL and NSOpenGL. This is the Cocoa interface, very similar to the other interfaces we've already talked about. Two ways to use Cocoa and NSOpenGL. First is the NSOpenGL view subclass. We already have provided an NSOpenGL view in the Cocoa interface. What that allows you to do is basically encapsulate a context and a pixel format already in it and give you some basic utility functions handling some of the bookkeeping functions for you, and so you have to do very little work. There are some limitations to it. For example, if you wanted to have two contexts that work with one view, it's possible to do in Cocoa, but the NSOpenGL view subclass would not allow you to do it.

In that case, you would have to roll your own NSOpenGL view. So let me go back to the NSOpenGL view subclass. The last point on that is that it's basically, you can build it via interface builder, which I'll show you in a minute, and it's fairly simple to hook this into an application. But let's say you have an application, you want to special case it.

For example, you want a context that does, let me think, multi-samples. You want an anti-alias context and a non-anti-alias context, and you want them in the same view, and you want the kind of switch between the two without any flash. You don't want to actually replace the view.

All you're going to do is not render one, and the next frame you're going to render the other. You can do this fairly easily by just replacing the context in a view. You're not going to tear the view down. You're not going to, you know, have this big white, black flash or black flash of nothing being drawn. You're actually going to have to replace the context.

But this would not work with NSOpenGLView. You'd have to roll your own custom NSOpenGL view, based on an NSView subclass, using NSOpenGLContext and NSOpenGLPixelFormat. This might seem daunting to some people: making sure you cover all the cases. To simplify this, we provided some sample code.

The custom Cocoa OpenGL sample code, which went up this week, is on the web now. For folks who want to roll their own, it shows you basically an attempt to create a template that you can use directly, and you can modify it as needed. And that should be a great starting point for that. I'm not going to go into detail here, but that sample code I wanted to point out is available and for your use today.

So let's talk about NSOpenGLView and using that. You create it in Interface Builder. You create your window. You create your view, maybe using a custom view or an OpenGL view, and you drag it into your window. You create a subclass of NSOpenGLView and have that class manage the view you created in Interface Builder. Then in code, what you're going to do is override a few of the methods of NSOpenGLView.

The first one is initWithFrame or initWithCoder. Depending on whether you use a custom view or an OpenGL view, you override one of those. And you can do things like set your pixel format up there if you didn't want to set it up in Interface Builder itself.

[Transcript missing]

Update. The update routine is called in about four cases, and there's a Q&A that just went up about updating OpenGL contexts. I suggest you all, when you're looking into update, look at that Q&A; it describes all the cases where update needs to be called. The cases where it needs to be called are things like a display configuration change, or a window being dragged so that it changes renderer.

The idea behind update is update takes care of renderer changes for you. So if you have two displays and someone's going to drag your window from one display to the other, at some point it's likely that your renderer will change into a second renderer or whatever your other card may be in your system. Update needs to be called in these cases to make it happen. Normally with NSOpenGL view you probably just want to ignore update.

Unless you need to track renderer changes. You just need to make sure it works right. It'll happen behind your back. Update will be called, taken care of, and you won't need to do anything. If, for example, in some of the examples that I have, I want to show what renderer I'm on. I want to update some text. I actually would subclass or override the update method. But you've got to make sure you call the super update first.

Lastly, drawRect. drawRect is where you're going to do your work. I'm going to handle my resize there, and I'm going to do a lot of draws there. I'm going to draw the content in drawRect. Finally, animation timers. So if you want to do animation, you can use an NSTimer. And the only note I would have, other than normal NSTimer use, is you want to add it to both the default run loop mode and the event tracking run loop mode, which allows you to get the updates during a resize.

So I'm going to take a departure here from what normally folks would do with a demo, and I'm actually going to create some code. So what we're going to do, we're going to go to Xcode and we're just going to start and we're going to create a new project and we're going to make it a Cocoa application and show you how easy it is to create an OpenGL demo live, on stage, even when you can't type well.

So the first thing I'm going to do is look at the nib file. And the standard nib file you get has just this one window in it. What I'm going to do is take a custom view, drag it into the window, and resize it to the entire size of the window.

Then I want a class to control that custom view. We mentioned that NSOpenGLView is a subclass of NSView. It's NSOpenGLView, and I'm actually going to subclass that with my own class, MyOpenGLView, and that's fine for the name there. Going back to the custom view, get some information on it. First thing I want to do is make sure when the window resizes that it actually adjusts its size. I'll do it that way. And then finally, I want to make that a custom class using MyOpenGLView like that.

Let's use Interface Builder's ability to create some files for that. It's going to automatically create those two files, put it into the demo right there, and then we're going to save this, and we're going to quit Interface Builder. If you notice now, we have two additional files that were created in our project by Interface Builder. We're going to add some code to these files.

So because of my poor typing, I don't want to type all this code, because I'm sure I would make mistakes, but I'm going to add some simple things like headers, some variables, and that kind of thing. So the first thing we're going to add is the OpenGL headers: the OpenGL framework, glext, and glu. I'm going to add some member variables over here.

And what that's going to do is that's going to add an initialization variable, it's going to have a timer, a flag on animation, a time for the timer, and then some rotation values. Nothing, no OpenGL values here. I mean, this is just things you can do for doing some simple animation to make the demo a little bit more interesting.

And here I'm going to add the functions I'm going to override. I'm going to have a pixel format, update a projection, update my model view matrix. I'm going to do some animation right here, do my drawing routine. This is the prepareOpenGL I talked about. We have our initWithFrame, we have awakeFromNib to set up some variables. So that's all I need in there.

In the OpenGL view class, the first thing we're going to add is the rotation and drawing code. This code does some things to actually calculate some rotations to spin some things around to make it look interesting. And then at the end, this is the drawing code right here.

This is the color. This is the vertex. So you draw some quads. So this is the actual OpenGL drawing code. And then I draw a line around it, around the cube with this. So this is going to draw the cube. This data up here is the actual vertices for the cube. Notice they're all ones and zeros, but it's just a unit cube.

We talked about pixel formats. This code right here actually defines a pixel format. This looks very similar to the example I showed. It's going to be a windowed pixel format. It's double buffered, and I'm going to add a depth buffer here. So it's real similar to what we talked about in the CGL and the AGL.

These two functions I added here: update projection, which updates my projection so I actually get a projected spinning cube, and this one, which actually updates the world, especially the glRotate call here, to actually rotate the cube. Now I'm going to add the animation timer. So this is what we talked about before. This is the timer. It's going to get some time, do a difference here. If I'm animating, I'm going to spin the object, then I'm going to call that drawRect routine. This is the actual spin-the-object; it does some math to make it look like a nice, pretty spinning object.

And finally, these are the four functions that you actually need to override. A drawRect routine. What this is going to do is handle the resize right here, which is what I talked about handling in drawRect. It's going to do the init at the very beginning if it's not been done. It's going to do a clear. It's going to draw the cube, and then it's going to do the flush or the flush buffer, depending on what situation I'm in.

prepareOpenGL, some OpenGL setup code right here. And again, this is the Cocoa version of that swap interval for VBL syncing, very similar to the AGL version. initWithFrame is very simple. All I'm going to do is create a pixel format, which we talked about. And finally, awakeFromNib, I'm going to use this to set up some values and set up a timer. So if I didn't do anything wrong, we should be able to build this and run it.

And so this is an OpenGL spinning square that resizes, handles updates correctly, and handles the full-screen zoom very easily. And so that's all it takes to do an OpenGL app. And I put a square in here, but you can do any content you want. The point is it's real easy to get to the point where you actually can draw your content.

And all this code that I used here is based on the sample code that's on the website. There's a Cocoa OpenGL sample that's really simple to use, and you can just take that, rip the guts out for the drawing code, and put whatever drawing code you want in. We can go back to slides now.

Last thing I'm going to talk about GLUT. GLUT, as I said, is a source-level compatible cross-platform API. It's a limited API, but it's fairly simple to use. So it works really well for doing examples. Another idea is if you wanted to do an example or test some things, you could use GLUT, set it up quickly, and do it that way.

And it works across many platforms. It's callback-based. The setup is fairly simple. You initialize GLUT, you create your windows, you set your callbacks for the things you want to do, and you call GLUT main loop. Don't expect GLUT main loop to return something. So don't put any code past it. It'll probably exit without returning through a different code path.

Here's an example of what your main in GLUT looks like. We're not going to talk about the specific callbacks; I just wanted to show you an example of it. What you have is you have the init function. You have the window. You set the display mode, similar to a pixel format. You create a window. You actually can then initialize some OpenGL state if you need to, and then you set your callbacks.

Realize that a couple of the callbacks are app-based, vice window-based. For example, the idle function: you get one of them. So if you set an idle callback, you have one idle callback for your entire application, whereas, for example, the reshape function is on a per-window basis, for whatever the current window is. So that's what some GLUT code looks like, and there are a lot of GLUT examples also on the web and on our site.
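
As a point of reference, a minimal GLUT main in C along those lines might look like this; the callback bodies are just placeholders:

    #include <GLUT/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... draw your scene here ... */
        glutSwapBuffers();
    }

    static void reshape(int width, int height)   /* per-window callback */
    {
        glViewport(0, 0, width, height);
    }

    static void idle(void)                       /* one idle callback per application */
    {
        glutPostRedisplay();
    }

    int main(int argc, char *argv[])
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
        glutInitWindowSize(640, 480);
        glutCreateWindow("GLUT Example");

        /* Initialize any OpenGL state here, then register callbacks. */
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);
        glutIdleFunc(idle);

        glutMainLoop();    /* does not return */
        return 0;
    }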

So let's talk about some OpenGL techniques here. These techniques are things you can use in your current applications to either improve them or some things to make more content that's more interesting than just having your spinning square. First technique is some texturing, then we'll move on to some fonts, talk about handling images, and then movies.

In this section in general, you'll see that I'm going to talk about the technique and not specifically show you every nuance of the code behind it. Again, there are samples for all this on the web. I want to talk about giving you an overview and a context to work from so you can look at the sample and understand what it's doing and rather have you work through the code at your own pace and ask questions on the mailing list or whatever if you need to after the session.

So first, texturing. We can do texturing in two different directions: one will be using Cocoa, and one is Carbon using QuickTime. First, for the Carbon/QuickTime path, we're going to texture from a packed 8888 or 1555 ARGB buffer. What that means is it's going to be an alpha red green blue pixel format, either 8 bits per component, or 1 bit of alpha and then 5 bits per component.

You can allocate a texture. For example, if you wanted to do that, you allocate a texture-sized buffer using NewPtr or malloc, whatever your routine of choice is. That's going to be the exact size of the texture you're reading in. You can create a GWorld from the buffer with QTNewGWorldFromPtr.

There's also the function NewGWorldFromPtr. Use whichever one you prefer; they're essentially the same routine. What this does is create a buffer that you can actually draw into using QuickTime and QuickDraw that does not have padding on the end of it, so you can texture from it easily.

You can draw into the G world with whatever content you want, whether it's an image, whether it's a movie, whether it's just lines and circles or text or whatever you want to draw, you draw into it. You can dispose the G world if you want unless you want to draw some more because you don't need to keep it around. All you need is the buffer you created initially.

You don't need that GWorld to texture from. And then you're going to texture from the buffer. You'll use standard OpenGL texturing techniques, and understand that APPLE packed pixels will be used here. It's an extension that's on every single Mac OS X implementation, so you can count on it being there.

And the texture formats are either unsigned int 8888 reversed or unsigned short 1555 reversed. What that does is tell OpenGL what pixel format you have natively in that QuickDraw buffer to use for texturing. So, some code. Fairly busy looking, but not actually complicated.

But I'll go through it, and we'll talk about it, and then we'll show a demo in a little while using this code. GetGraphicsImporterForFile, in this case: you have a file spec, and you're going to actually try to load a file in. You can get the natural bounds for that, which gets the size of it.

You then can allocate a handle the size of the image description to get additional information, like the depth, out of the image description. I then get the height and width out of the image information. I then calculate a stride to use for convenience later, and then allocate a new buffer. That new buffer is going to be my texture.

I'm going to use that actually for texturing. QTNewGWorldFromPtr, as we talked about, is actually going to create a GWorld from that buffer that you can then use to draw into with QuickDraw or QuickTime, and then you can texture out of it using OpenGL.

GraphicsImportSetGWorld, GraphicsImportSetQuality (we're going to do lossless quality), and then we're going to get the PixMap, lock the pixels, and we're going to draw it using the graphics importer. So this basically took the contents of that file and drew it into that GWorld, which in turn was drawing it into your buffer.

You can unlock the pixels, close the component for the importer, and actually you can just dispose of the GWorld at this point, because you don't need it. All you need is that buffer you created. And then later on, when you want to texture from it, you're going to just call glTexImage2D.

Notice I use a rectangle texture, which we mentioned briefly earlier, which allows you to texture from non-power-of-two images, and I do a switch on the depth of that image to determine whether I'm going to use the 1555 or 8888 pixel format. That would be kind of the overview of how to texture from a GWorld.
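
That final upload step, with the depth switch, might look roughly like this in C; buffer, width, height, and depth are assumed to come from the GWorld setup described above:

    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>

    /* Upload a packed ARGB QuickDraw buffer as a rectangle texture. */
    static void uploadGWorldTexture(const void *buffer, GLsizei width, GLsizei height, int depth)
    {
        glEnable(GL_TEXTURE_RECTANGLE_EXT);

        if (depth == 32) {
            /* Packed 8888 ARGB: BGRA ordering with the reversed 8_8_8_8 type. */
            glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, width, height, 0,
                         GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, buffer);
        } else {
            /* Packed 1555 ARGB: BGRA ordering with the reversed 1_5_5_5 type. */
            glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, width, height, 0,
                         GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV, buffer);
        }
    }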

So moving on to texturing from Cocoa using NSImage and NSBitmapImageRep. In this case, we're going to texture from an 8888 RGBA format, which is what you're going to see from Core Graphics or from Cocoa.

We're going to create a texture size NS image. So you create an NS image as you would normally. We're going to lock the focus on this image and we're going to draw into it. We're then going to create a bitmap image from the NS bitmap image rep from the NS image. Usually I do the focused view to create that, so it's fairly simple to do.

The texture size will be that bitmap size you create, and then you can texture directly from the bitmap data, in this case RGBA, and using the unsigned byte format. So what does code for that look like? A little bit shorter, a little bit simpler. So if you're using the Cocoa, a little bit less work to do here. The NS image alloc init.

Then you lock the focus on that image, and you draw whatever your content would be there. Then you create the bitmap. Or, to back up a little bit, when you create the NSImage, you can create it from a file. In that case, you would not really be drawing; you'd just lock the focus, and then you'd create the bitmap.

You'd already have the content. To create the bitmap, you unlock the focus because you don't need to have it locked anymore. You get the size from the bitmap, and then you call glTexImage2D, texture rectangle again to handle any size image, and you're going to use RGBA and unsigned byte for that. Then bitmap release, image release.

Again, this is in Cocoa. Same thing we showed in Carbon. Fairly simple to do. This is areas of OpenGL where you have to interface with the operating system to handle that image, and this is things that OpenGL does not have built into it. So there's two areas you would have to interface there.

Taking the texture, then, we can extend that to drawing fonts. There are a couple of ways to do drawing fonts. First, on a per-character basis, there are things like aglUseFont. And also in the CGL example, I wrote a CGL use-font routine, which works very similarly to aglUseFont. So even if you're not using AGL, you can get those per-character bitmap fonts. Good for what you saw in that previous example, for putting up text like that.

Just kind of debug text, text that you want, information, but it's not optimum. Bitmaps are not real fast, especially redrawing the bitmap every frame is not a good idea. So what can we do? There's two options here. One I mentioned here and one we'll talk about at the end if we have some time.

And I'll show you a quick demo of that. First, if you're doing per-string textures, or text that you want to store in textures, that's a fairly simple thing to do. Some tips on doing this: limit your updates. Every time you change that texture, you have to re-upload the texture.

If you keep your texture constant through the entire application, or if changes are limited to when the user responds to something, update only the string at that point, update the texture at that point, and you limit the number of texture uploads onto the graphics card. Use pre-multiplied alpha for textures. And if some of you aren't familiar with pre-multiplied alpha, we can take a little sidebar and talk about pre-multiplied alpha.

Premultiplied alpha is something that is not... A lot of people, depending on what your background is in graphics, you either think in this terms or don't think in this terms. But for people who don't think about premultiplied alpha as something really good to use in your application, it's alpha which is already multiplied through your pixel, your color. So a 50% gray, non-premultiplied, red of 1.0, green of 1.0, blue of 1.0, alpha of 0.5.

Premultiplied 0.5, 0.5, 0.5, 0.5. So that gives you the alpha is already premultiplied through. Why use it? It's simpler and it's closed on the over operator. So over is basically the compositing operator. If you composite something over top of something else, you get a closed function. If you don't use premultiplied alpha and you take two images and you composite them, what you get out is premultiplied. So then you try and use this image with something else.

Well, is this premultiplied or not? Who knows? Well, let's just use premultiplied all the time. Two pre-multiplied images, you get premultiplied out. Two more pre-multiplied images, you get premultiplied out. Take the results of those two, put them together, and you get premultiplied out. Consistent across the board, easy to use. You can use it very simply. The only change in OpenGL to understand is the blend function changes.

Non-premultiplied: if people are doing blending out there, they're probably used to using source alpha on the source component. In this case, since you know that the alpha's already multiplied through your color, you'd use GL_ONE and know that the color already has the alpha value in it. That's the sidebar on premultiplied alpha.
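
In code, the difference is one line; a minimal sketch of the two blend setups:

    #include <OpenGL/gl.h>

    static void setBlendMode(int premultiplied)
    {
        glEnable(GL_BLEND);
        if (premultiplied) {
            /* Color already contains alpha, so the source factor is just GL_ONE. */
            glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        } else {
            /* Conventional (non-premultiplied) blending. */
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
    }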

If we have more questions about it, we can talk more about that later. Back to drawing fonts. So the last thing: if you want to have colored text, or color-changing text, instead of putting the color in the texture itself, put it in the polygon. Use the polygon color for alpha blending and for colorizing font strings if you want to do that. So it's really easy to get high-quality fonts into OpenGL through textures. So let's go to a quick demo of that.

So this is my Cocoa OpenGL example which is something also available on the web. I'll move, I'll actually leave the square there and I'll bring the, so this is the same thing you've seen before. Probably too hard to read, it's not really important what the text says, but how it operates on the frame is more important.

This is using very, very simple NSStringTexture class that uses exactly the techniques we showed for texturing and for handling strings and creates simple strings that you can easily update, scale, they composite very nicely over top of each other using pre-multiplied alpha and it's set up in such a way that if you're using Cocoa, go ahead and download the sample and just use the class directly in your applications. You pass a string in, it creates a texture for you and you can draw with the texture. You can go back to the slides.

[Transcript missing]

So OpenGL, handling images works great. So let's talk about playing movies. Playing movies with OpenGL, the setup is very similar to the Carbon setup we saw before. You can use QuickTime as the API to play the movie. And you can use standard QuickTime setup techniques. And you can use this in a Cocoa or a Carbon application.

One thing you do need to do is know when QuickTime has finished drawing a frame of your movie. What you'll do is use NewMovieDrawingCompleteUPP, and you'll create a callback, basically, that says, hey, I'm done with drawing a frame. At that point, OpenGL can check to see if the frame's been updated.

If it has, it can texture from the updated frame. Same technique to just use an OpenGL image and update the frame on the screen. So really what you're doing is you're drawing a sequence of images and updating when you're told to by QuickTime. That's the simple technique. There's nothing more involved than playing movies on the screen.

Some people when they sit back and think about it, will realize there's actually 2 points of synchronization. One when Quicktime's done with the image and you can use it, and one when OpenGL's done with the image. It takes a little more complicated code to handle both of those synchronization points.

Many applications will do fine updating their OpenGL texture when QuickTime's done with it, and not worrying about the second sync. We do have a sample, the OpenGL compositor lab, that shows creating a custom codec to actually sync at both ends, for people who need specific syncing and very fine control of the movie playing.

And then we'll use glTexImage2D, and then we'll use a different routine, glTexSubImage2D, to actually update the movie, so we're only updating the part of the texture that actually changed. And we'll go to another demo of this. Again, this is sample code that's available on the web. This is OpenGL Movie.
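
That per-frame update might look roughly like this in C; frameBuffer here stands for the GWorld pixels QuickTime just finished drawing, and the dirty-region math is omitted:

    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>

    /* Replace the texture contents with the latest movie frame. */
    static void updateMovieTexture(const void *frameBuffer, GLsizei width, GLsizei height)
    {
        /* Assumes the rectangle texture was created earlier with glTexImage2D. */
        glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0,
                        0, 0, width, height,
                        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, frameBuffer);
    }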

And I will grab something you've probably seen before. And this is again, it's basically drawing a new frame every single time that the movie tells it has a frame update. And you can see right now I'm getting about 250, 260 frames per second of OpenGL drawing, updating the texture at the rate the QuickTime updates the texture for a 24, 30 frame per second movie. So the point here is that you can, with OpenGL, easily handle the texturing capabilities for a single QuickTime stream or multiple QuickTime streams.

So you can do effects on movies, get movies to the screen at a high speed; it's not a bad thing to use OpenGL for this kind of integration in your application. And manipulating the polygon, that's free. I can draw that just as fast tilted and spinning as I could square on the screen, full screen. Let's go back to slides.

So the final section of the presentation is some tips and we're going to go through three things that a lot of developers run into and that can help you develop your applications, help you either polish them up or understand what's going on behind the scenes a little bit better. First we'll talk about shared context and we'll talk about full screen anti-aliasing. Finally we'll talk about render to texture, a much requested feature that we've added for Panther.

First, shared context. You have a lot of windows, you have a lot of different stuff on your screen, you have a lot of textures, you have maybe used display lists, you have vertex programs, fragment programs, but you only want to write them once or load them once. You don't want to have to load the texture into every single context.

You can use shared contexts to alleviate the problem of loading textures multiple times. Texture objects, vertex programs, fragment programs, display lists, and vertex array objects can all be shared between contexts. The other context state, like whether texturing is enabled or not, is not shared. So it's just the objects and the state associated with those objects.

The trick here is that the contexts must have the same virtual screen configuration, which sounds like a mouthful and in some cases gets tricky. There are two ways to avoid worrying about virtual screen configurations when sharing contexts: create a single-display pixel format, or share with other contexts created from the same pixel format.

So first, if you're doing full screen, or you know you only have a single-screen system, or you want to constrain your windows to a single screen, create that pixel format using some of the techniques we showed in the full screen section so it only supports that one screen, and you'll be able to share, no problem.

Second, you can create a single pixel format and share across contexts created from that one pixel format. Full screen pixel formats with windowed drawables are also something new for Panther. Pre-Panther, if people created a pixel format with the full screen attribute and then tried to attach a context using it to a windowed drawable, we would fail on that.

We've relaxed that restriction, basically made the full screen attribute an additional constraint on the pixel format, but when the context attaches to the drawable, it's optional. So you can create one pixel format that's full screen, and you can attach it to your windowed drawable. You can manipulate your content in a window, like maybe a Keynote kind of application: you manipulate your slide, and then you want to go full screen.

Well, when the user shifts to full screen, you use the exact same pixel format, create a full screen context, and attach it to a full screen drawable, rather than having to tear down and recreate everything. This will help simplify your code path.

Looking at some context sharing code. This example uses a windowed and a full-screen drawable sharing the same pixel format. First thing we do is create a pixel format with full screen, and we create the same pixel format without full screen. We're going to get the main device, to show you that both choose pixel format calls use the same device.

And you'll notice that in the second create context call, the final parameter is the AGL context from the first call. That shows you what sharing a context looks like in code. In this case, these two will share the object resources, those five things we talked about. There are examples of this in some of the samples on the web.
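
As a hedged sketch of what that kind of code roughly looks like (the attribute list, device choice, and cleanup are illustrative, not the exact slide; error checking omitted):

    #include <Carbon/Carbon.h>
    #include <AGL/agl.h>

    static void CreateSharedContexts(AGLContext *windowCtx, AGLContext *fullScreenCtx)
    {
        GLint attribs[] = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 16,
                            AGL_FULLSCREEN, AGL_NONE };
        GDHandle device = GetMainDevice();

        /* one pixel format, constrained to the main display */
        AGLPixelFormat pixelFormat = aglChoosePixelFormat(&device, 1, attribs);

        /* first context, for the windowed drawable */
        *windowCtx = aglCreateContext(pixelFormat, NULL);

        /* second context passes the first as the share parameter, so texture objects,
           display lists, vertex/fragment programs, and vertex array objects are shared */
        *fullScreenCtx = aglCreateContext(pixelFormat, *windowCtx);

        aglDestroyPixelFormat(pixelFormat);
    }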

Full-scene anti-aliasing, something that people ask about: how do you do FSAA, or full-screen anti-aliasing, on the Macintosh? Is it not supported? I don't see a full-screen anti-aliasing button in a control panel. Well, what full-scene anti-aliasing does is use the ARB multisample extension, which is the standard way of supporting full-scene anti-aliasing in OpenGL. And I say scene rather than screen because it's not done on a per-screen basis. So you can have one window that's anti-aliased and one window that's not anti-aliased.

The extension spec has the specific details on how it works, but the setup is pretty simple. You're basically going to create a pixel format, and you're going to add a couple of items to it, sample buffers and samples, that say you want to do full-scene anti-aliasing.

Then you're going to enable multisampling with the GL_MULTISAMPLE_ARB enable, and then optionally, if you'd like, you can send a hint of nicest or fastest to tell the driver which you prefer: either the best-looking possible anti-aliasing, or the fastest possible for the number of samples you've picked.

The hint is an NVIDIA extension, but it won't hurt to call it on any card. It's not going to break your application or be rejected; the setting is just ignored if the card doesn't support it. Code for this, we'll get rid of the stuff that we've already seen, and it's really, really simple here. Sample buffers ARB is always going to be one.

Samples ARB, we're going to set to four. In this case, I added the no recovery attribute, and that basically means I don't want a software backup, because the software renderer does not support multisample at this point. No recovery tells it to only give you hardware renderers, and you can see more about this in one of the Q&As that was updated, talking about multisample and context selection. Then glEnable with GL_MULTISAMPLE_ARB, and glHint with the multisample filter hint NV and nicest if I want to use that, and that's how to set up full-scene anti-aliasing.
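
A rough sketch of those pieces, using the AGL attribute names (CGL and NSGL have equivalent attributes; the values shown are illustrative):

    #include <AGL/agl.h>
    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>

    /* pixel format attributes requesting one multisample buffer with four samples,
       hardware renderers only (no software fallback) */
    static GLint attribs[] = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 16,
                               AGL_SAMPLE_BUFFERS_ARB, 1,
                               AGL_SAMPLES_ARB, 4,
                               AGL_NO_RECOVERY,
                               AGL_NONE };

    /* ... choose the pixel format and create the context as usual ... */

    static void EnableMultisample(void)
    {
        glEnable(GL_MULTISAMPLE_ARB);
        /* optional quality hint; NVIDIA extension, harmlessly ignored elsewhere */
        glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);
    }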

For the final item, we'll talk about render to texture. There are three ways to do render to texture. First is surface texturing. We've had that in Jaguar: the AGL surface texture, GLUT surface texture, and NSOpenGLContext create texture APIs, and you can use them for surface texturing.

Added for Panther is pbuffers. It started as a Windows API, the WGL pbuffer extension, also supported on other platforms. We've taken that extension, taken the meat out of it, and basically implemented an API that corresponds to it. We couldn't implement it exactly, because it deals with things like HDCs and Windows-specific drawable code and formats. So what we've done is make the setup a little bit simpler, but keep the functionality the same.

It's more robust than surface textures; it allows you to do more things. In the end, what it allows you to do is create an accelerated offscreen buffer to do some rendering into, and then use that rendering as the source of a texture. It's supported in AGL, CGL, and soon in NSGL.

The Panther seed that you have does not have that support, but by ship time we should have support in the NSGL version of the code for using pbuffers. The final method is superbuffers. We're following the ARB working group on that closely, working directly with them, and when the superbuffers extension is finalized, we should have our implementation shortly after that.

So pbuffers, we talked a little bit about them: generalized pixel buffers. A pbuffer can be the target of rendering. In this case, you're going to use commands like aglSetPBuffer or CGLSetPBuffer, which basically say, hey, this is what I want to render into.

Think of that as a set drawable call. So basically, you have an offscreen, you call that, and that does your set drawable. Then, when you want to render from it, you're going to use aglTexImagePBuffer or CGLTexImagePBuffer.

Think of this exactly as a glTexImage2D call to texture from the pbuffer, or you can even use cube map pbuffers and texture from a cube map face. And finally, the flow is going to be: you create it, you draw to it, you bind to it, and then you texture from it. Okay.

So, code example. I'm not going to go through all the nuances here, but I want you to understand that this is similar to things you've seen before. There's an aglCreatePBuffer call, which is new, and then a set pbuffer call, which is new, but you'll notice these are very similar to the APIs you've seen and used before. You're going to draw to it. When you're finished drawing to the pbuffer, you want to use a glFlush to flush it, and then I set my current context to NULL for safety, to make sure I'm not drawing to my pbuffer when I don't intend to.

Then, if I don't have a texture ID created, I'm going to generate a texture, bind to it, and set linear as the filter parameter here, so I don't have any mipmaps. Pbuffers do support mipmaps; in this case I'm only showing an example without them.

Then I'm going to texture from the pbuffer. Once I have that texture name established, I can just bind to it directly without calling the tex image again. And when I tear down, I delete the texture, destroy the pbuffer, destroy the context, and destroy the pixel format, like we've seen before. And let's show a quick example of that.
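
Pulling that flow together, here's a hedged sketch in AGL (the context variables, sizes, and source buffer are illustrative choices, not the sample's code; error checking omitted):

    #include <AGL/agl.h>
    #include <OpenGL/gl.h>

    static AGLPbuffer gPbuffer;
    static GLuint     gPbufferTexture;

    static void RenderToPbuffer(AGLContext pbufferCtx)
    {
        /* create a 256x256 RGBA pbuffer usable as a 2D texture, no mipmaps */
        aglCreatePBuffer(256, 256, GL_TEXTURE_2D, GL_RGBA, 0, &gPbuffer);

        /* "set drawable": point the context at the pbuffer and draw into it */
        aglSetCurrentContext(pbufferCtx);
        aglSetPBuffer(pbufferCtx, gPbuffer, 0, 0, aglGetVirtualScreen(pbufferCtx));
        /* ... draw the offscreen scene here ... */
        glFlush();
        aglSetCurrentContext(NULL);      /* safety: stop targeting the pbuffer */
    }

    static void TextureFromPbuffer(AGLContext windowCtx)
    {
        aglSetCurrentContext(windowCtx);
        glGenTextures(1, &gPbufferTexture);
        glBindTexture(GL_TEXTURE_2D, gPbufferTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  /* no mipmaps */
        /* "tex image" from the pbuffer instead of from client memory */
        aglTexImagePBuffer(windowCtx, gPbuffer, GL_FRONT);
        /* ... draw geometry with gPbufferTexture bound ... */
    }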

So this example was just a square. And then what I did, I took the Stanford rabbit and I actually rendered it to... let me actually do that... I actually rendered the rabbit into a pbuffer. And it's actually flat; you can see from the top it's a flat surface. Each face of the cube is rendered with the same rabbit.

You can do a lot of different things. You could do any kind of rendering you want as the source here and then texture from it any way possible. For example, you could render a full reflection map into the pbuffer as a cube map.

I do have one more quick thing to show you. I think we're a little bit into the Q&A time, but I'd rather show you this and we can hang around for questions if people don't get enough answered at the end. I showed you that demo earlier of how to create a simple OpenGL sample.

Well, thinking about it, after Peter's session, we could take that sample and fairly quickly extend it to use another method of getting text on the screen and high-quality images to use with OpenGL, which is the CG on OpenGL. And so what we did was we took that sample that we had before that I just created in a few minutes and we added the code required to do that.

So let me go to the bottom and show you what code we added here. This little section of code here basically creates the CG GL context. This is a new call, CGGLContextCreate, which actually creates a CG context based on that OpenGL context we've created, to draw CG into. Let me shoot back to the top here. Then we added a lot of code in this first section, but this is all CG code, to draw something more interesting than nothing.

So all of this is just CG drawing code. I thought drawing one line to the screen would be pretty boring. So the key here is this CG draw routine, which does the fills and the strokes and those kinds of things. That's a key routine at the very bottom. Let me scroll it up so you all can see that.

It's a CGContextFlush. What this does is flush the CG drawing out, so when you draw the OpenGL and flush the OpenGL context into the swap there, you get the CG content updated as well. Additionally, going back to the bottom, things that we added: we added that one little piece of code.

And then we added... we flushed our GL drawing, and then we just added the draw CG call to the OpenGL drawing routine. This is the draw rect routine; it's exactly the same code, no changes. I added the CG drawing, I added the CG context create, and I added the draw CG call to invoke that CG drawing. That was it.
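
Here's a minimal sketch of that idea, assuming CGGLContextCreate takes the GL context, a backing size, and a colorspace as shown in the demo (the overlay drawing is just an illustrative stand-in for the demo's CG code):

    #include <ApplicationServices/ApplicationServices.h>
    #include <AGL/agl.h>
    #include <OpenGL/gl.h>

    static CGContextRef gCGContext;

    /* once, after the OpenGL context exists; size assumed to match the view */
    static void CreateCGOverlay(AGLContext glContext, float width, float height)
    {
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        gCGContext = CGGLContextCreate(glContext, CGSizeMake(width, height), colorspace);
        CGColorSpaceRelease(colorspace);
    }

    /* every frame, after the OpenGL drawing */
    static void DrawCGOverlay(AGLContext glContext)
    {
        glFlush();                                            /* finish the GL drawing */
        CGContextSetRGBFillColor(gCGContext, 1.0f, 0.5f, 0.0f, 0.5f);  /* translucent fill */
        CGContextFillRect(gCGContext, CGRectMake(20, 20, 200, 120));
        CGContextFlush(gCGContext);                           /* push the CG drawing out */
        aglSwapBuffers(glContext);
    }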

And so what I end up getting from that (to save time, I already built this) is kind of the Ovaltine sample with the spinning square in the background. You can see these shapes in the front are all drawn with CG. It's doing good transparency and good blending, with its pre-multiplied alpha, over the square. And you can easily add CG overlays or CG content to your OpenGL views and your OpenGL windows.

I just wanted to show you that, show you how easy it was to use those new routines. So back to the slides. So again, we talked in the introduction a little bit about OpenGL. We talked about the interfaces. We then talked about some techniques that everyone can use in their apps.

And we talked about some... Again, I want to point out that there is sample code out for almost everything we showed here. There is one Q&A that I'll be posting later this week. And the Pbuffer sample will be posted probably at the very beginning of next week or shortly thereafter. It's complete. It just needs to be run through the posting process. Everything else is on the web, available for your use. So you can go look at it today.

The best place to go for more information, though, is developer.apple.com/opengl. There are links to sample code, links to documentation; that's a good central repository for it. Another good place for OpenGL in general is the opengl.org website. And I'm going to shoot through here: there's some documentation, a link to the Q&As that I referenced, the TechNote that I referenced, and all the samples that were referenced; all the samples are on the web, and they're all listed on the OpenGL website. And then I'm going to bring Travis up to talk about the roadmap and we'll take some questions. Travis: Thank you, Geoff.

So real quick, what I want to do is just pop through the rest of the graphics and imaging track we have for you here at WWDC and focus on the OpenGL-related sessions. Next, actually, interestingly enough, in this hall immediately following this presentation is a special presentation that was not in your show guides, and that is the Technology Magicka Keynote.

And this is where the lead engineer on the Keynote product, our presentation package, is going to come and essentially talk shop about the application and tell you what technologies they adopted and what obstacles they overcame when delivering it. And the interesting point is that app is a heavy user, a heavy client, of OpenGL technology for a lot of its transition and 2D effects. Then obviously we have Image Capture Update, where we're going to talk about our scanning and digital camera support API in the system.

So then we sort of dive into the deep end of the pool with OpenGL. Starting on Wednesday, we have vertex programming with OpenGL. A big theme that we talked about in the graphics and imaging overview session was programmability, harnessing the GPU to do interesting things. So we have a session on vertex programmability. And then also, if you notice, on Thursday we have a session on fragment programmability. And these are really key sessions if you want to be at the cutting edge of the evolution of both 3D and 2D graphics using the GPU.

We're also going to talk about Quartz 2D in Depth on Thursday as well. Another big announcement that we made in the graphics and imaging overview, and Geoff did the quick demo, is the ability to take our 2D drawing API Quartz 2D, also known as CG for short (that's a whole different story), and point it into an OpenGL context. And that was what the Ovaltine example that Geoff showed you at the end was about.

And then we have a key session. If you're developing any OpenGL applications on Mac OS X, you want to attend Session 209, which is OpenGL Optimizations. You're going to learn just tons of information about how to make OpenGL applications run as fast as possible on the platform, and we'll learn a lot about our enhanced OpenGL Profiler application. Then I want to jump down into Cutting-Edge OpenGL Techniques, which is on Friday.

And this is going to be a great session where we have our hardware partners from ATI, whose demo guys essentially are going to come and tell us how they do a lot of the absolutely cutting-edge effects in their demo applications. It may be very interesting for you guys to learn from.

And they're going to talk about all sorts of different levels of programmability, vertex programmability and fragment programmability. And then obviously we have a feedback forum on Friday. So if you need to contact either of us: I can be contacted at [email protected], and Geoff also answers developer questions, and he can be found at [email protected].