
WWDC03 • Session 201

Mac OS X OpenGL in Depth

Graphics and Imaging • 1:14:00

This session is the perfect starting point for developers looking to learn the specifics of the extensive OpenGL implementation in Mac OS X. We begin with an architectural overview of OpenGL and then focus on the various OS-level interfaces (AGL, NSGL, CGL, and GLUT) that developers can use in their applications. This session is ideal for graphics developers who are new to Mac OS X or developers who are looking to use 3D graphics in their applications for the first time.

Speaker: Geoff Stahl

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it may contain transcription errors.

Thank you. Again, I'd like to welcome you to Session 201, which is Mac OS X OpenGL in Depth. And I also want to take a quick second to introduce myself to you. For the past three years, I've represented essentially the 2D graphics technologies to developers. Recently, with our emphasis on leveraging the GPU through OpenGL to do things besides 3D, including things like 2D, such as the announcement made in the Graphics and Imaging overview session, where we actually showed Quartz 2D, which is our 2D API, drawing through OpenGL into an OpenGL context, it's been appropriate for me to also start engaging OpenGL as a technology.

So starting at the beginning of this year, I am really your official interface with regards to questions and development issues on OpenGL for Apple. So I look forward to working with many of you who might not know quite who I am or be able to put the technology with the face. So let me kind of dive in and bracket what we're going to be talking about in this session, which is Mac OS X OpenGL in Depth. We really want to accomplish sort of two things during this session. One is to talk about Mac OS X's implementation of OpenGL, because we have a lot of developers who are new to the platform. They may be coming over from Windows. They may be coming over from Unix. And the expression of OpenGL on those platforms is slightly different than what we have on Mac OS X. So this session is really intended to provide the landscape, to talk about the various expressions of OpenGL inside the platform. Because depending on which framework you're developing on, it's going to have implications in terms of how OpenGL is presented through that framework for you to use. And then secondly, what we're going to do is focus on tips and tricks and also introduce some new functionality in OpenGL that many developers have been asking for. So it's my pleasure to welcome to the stage Geoff Stahl, OpenGL engineer, to take you through the session. GEOFF STAHL: Thanks, Travis.

So again, we're talking about OpenGL in depth. And as Travis said, it's kind of a multi-tiered approach to this session. What we want to do is take you from the beginnings of OpenGL, for those who don't know a lot about OpenGL, building on that with our API frameworks that access OpenGL, and then moving on to some techniques and tips which will allow those who do know OpenGL fairly well to gain something from the session, things they can use in their app.

So first I'll start out with introducing the OpenGL subsystem. I'll then show how to use OpenGL and talk about how to get the functionality out of it. I'll then show some techniques for using the 3D API, following up with, as I said, the tips. Thank you. So let's first take a look at the technology framework. And I have this in kind of a strange way that you may not have seen before as far as looking at OpenGL. This is an application on the top that accesses OpenGL in two ways. On the right side of the screen is the actual OpenGL access, where it would actually access OpenGL and call the GL functions. On the left side of the screen is the API that you have to use to get at the windowing system, or that provides the windowing system interface for OpenGL. It looks complicated as far as the number of different APIs, but what it actually does is give you a choice, from the highest-level GLUT API to the lowest-level CGL API. How your application's written will determine what API you use, and it gives you a lot of different options for writing an application and accessing the OpenGL functionality. Let's introduce OpenGL to you guys.

We'll talk about what OpenGL is. Then we'll answer the question that Peter touched on in his introduction session to the graphics track, which is: why do you want to use OpenGL? Why, if I have a mainly 2D application, would I want to use OpenGL when it's a 3D API, right? So that's not something I normally would want to use. We'll talk about the OpenGL state machine, which is key to understanding OpenGL operations, and finally talk about the API and a little bit about how it's different than a lot of other APIs in figuring out what actual functionality is available on the platform.

So OpenGL in simple terms is a software interface for hardware. In a lot of cases, its design and development parallel hardware design and hardware development. It's platform agnostic, which is why, when we move on to the interfaces, those interfaces are different on Mac OS X than what you may have seen on other platforms.

And it's asynchronous. This is something that Peter touched on in depth in his presentation, and it's good to kind of reemphasize that here. What happens with an asynchronous API is you're going to issue commands, and the commands at some later time will be executed by the graphics processing unit, or GPU. Most developers are used to, when they call a command like a glVertex or even a glFlush, saying that command is executed by the time that call returns. In the case of OpenGL, in most cases, that's not true. The GPUs are so fast that we don't see this a lot of times. We don't see that there's a lag.

But if you issue enough commands, you can actually be waiting on the GPU. And you can see this in some of the profiling tools. But in any case, keep that in mind when you develop your application. You want to maximize the asynchronicity in your application. Issue the commands, go on and use the CPU to do other things.

It uses geometric primitives, points, lines, polygons. So the basic primitive for drawing, you can draw lines, you can draw polygons made up of multiple points. It's fairly simple things that we all understand. It also is a state machine. OpenGL has a state you set, and that stays set until you change it. So if you turn texturing off and you draw a polygon, texturing is obviously going to be off and it will not texture. If you continue to draw polygons, those will also have no texturing on them until you actually turn texturing on. For example, texturing is off when you first initialize the interface.

And so a lot of people initially get bit by the fact of, why am I not drawing a texture? You're not drawing a texture because you never turned it on. So something to keep in mind, again, when you're developing for OpenGL is that it is a state machine and will remain set until you change that state.

So the big question is: why should I use OpenGL? Peter showed in his talk that GPU development is increasing at a higher rate than CPU development. And there's even the simple fact that the GPU is a very powerful processor on the system that you want to utilize to its fullest. It's not a good application design, in most cases, to process data with the CPU, then wait for the GPU to finish processing some data, and then go back to the CPU. What you want to do is use OpenGL to get asynchronous processing so they're both working, and utilize the power of that GPU. Power like this. We've measured this in simple tests. Oh, by the way, for all these measurements, these are all G4 measurements. So as of yesterday, all these numbers moved up a notch. And I think some of the developers we have here just went down to the lab and tested their application on the G5, were very pleased, and saw huge increases in performance above some of the numbers I may have here.

So we've measured in simple apps 650 megabytes per second of texture upload across the bus. This is like modifying a texture that's 4 megabytes; it's 1,000 by 1,000 by 32 bits. So that's like if you had a movie that was 1,000 by 1,000 by 32 bits: you can modify that at 200 hertz, 200 times a second.

You can modify that texture, play a movie at that large size across the bus to the GPU every frame. Real-world ability to upload something in the neighborhood of 140 million triangles per second. What that means is, let's say your single scene takes 100,000 triangles. You get 1,400 hertz or 1,400 frames per second in your application. Or if you took a game like the ubiquitous Quake 3, which is about 10,000 triangles per frame, that's all they were doing was uploading triangles. You could get 14,000 frames per second. You can see this is a lot of power. Also, we have GPUs these days with 20 gigabytes per second of bandwidth or more. And this is like if you have a texture on the GPU and you draw it, it's like drawing a 2,000 by 2,000 by 32-bit 16 meg texture at 1,200 times a second. Again, you can see the power of the GPU. It's there. You should harness it.

OpenGL, getting back to the API: OpenGL provides a large, powerful API that is good for 2D and 3D operations. It has vertex and pixel programs for complete customization. And finally, which is, I think, a really good draw for OpenGL, it is cross-platform, and there are lots of code samples from Apple and on the web for you to pattern your applications after. Thank you.

So if you want a quick look at the OpenGL pipeline, we're not going to go over this in depth, but what you see is the application sending primitives, like the points, lines, and polygons. It's sending image data to the GPU, or to the OpenGL framework. It's going to do some transform and lighting. It's going to do clipping to the viewport. It'll then move into the rasterization step. At that point, you do some multi-texturing, you apply the fog, and then you do, at the end, the per-fragment operations. And one key question is what's a fragment; some people who are new to OpenGL don't understand fragments.

Think of a fragment as a smart pixel, as a dot on the screen that may have additional information other than color; it may have an alpha value, it may have a depth factor on it. Those kinds of things are fragments. And that's why we get fragment shaders versus pixel shaders. And last is the frame buffer blending, to blend with the frame buffer. In future presentations about vertex programs and fragment shaders, you'll see how we take parts of these pipelines and remove them, and you can put in a customized part of the pipeline. So again, you can customize the control of the GPU for your application. Thank you. So we talked about state. OpenGL is a state machine. You set the state, and it remains unchanged until reset.

And one thing to also remember: state changes can be expensive. That is not to say that you shouldn't do any state changes, and you shouldn't over-optimize for state changes. I mean, you could spend months or years optimizing so you do as few state changes as possible. It's probably not a good use of your time. But also, on the other end, you don't want to do duplicate state changes or a large number of state changes per vertex or per primitive.

I mean, you don't want to draw a polygon, change a whole bunch of state, draw another polygon, change a whole bunch of state, draw another polygon, change a whole bunch of state. That can be really expensive. So try and avoid that in your applications in general. Change some state, draw some polygons, change some state, draw some polygons. That organization will be much more efficient for OpenGL. You can examine state both yourself and at a programmatic level. If you want to examine it yourself, the OpenGL Profiler is a great tool. The OpenGL Profiler can pause your application at any point on any OpenGL call; you can bring up the entire OpenGL state and look at exactly what is set. So if you're in that texturing case where you're not getting any texturing, you could pause at the point you're drawing and look at the OpenGL state and say, huh, why am I not getting texturing? Oh, look, texturing's turned off. 2D texturing is disabled. Thus, you can debug your apps and get a lot more information about what capabilities the GPU has and what they're set to with the OpenGL Profiler. Also, programmatically, you may want to make decisions based on state. It's not always a good idea to do this kind of: if this state's not set, set it; if this state's not set, set it; if this state's not set, set it. Because that can stall the pipeline. Think about it this way. We talked about this being an asynchronous API. So you have a command. You issue the command. The command's going to move down the pipeline. And that command could be a state-setting command.

What's going to happen is, if on the next command following that you want to get what the current state is, we're going to have to wait for that command to flush all the way through the GPU, all the way through the pipeline, and then retrieve the state to make sure that we're actually retrieving a valid state vector for you. So realize that getting and setting state can be expensive, but do it when necessary. Some calls are at the bottom. You have enable and disable for enabling or disabling, for example, texturing. You have glGet or glIsEnabled for getting some state there. And finally, if you want to handle a large amount of state, you can do a push and pop of attributes.

So the OpenGL API: it is procedural. It is a client-server interface, which is how you get this asynchronicity. You're going to issue commands, and they're going to be collected and sent to the graphics processing unit. And the types of commands are state commands, drawing primitives, and manipulation of buffers, in most cases. So here's an area I want to spend a little bit of time concentrating on: OpenGL functionality and what that means to you as a developer. One of the big misunderstandings in OpenGL is exactly what functionality do I have when I'm running my program. You may have your design system that's tricked out, has the best graphics card, has everything, and your app may run great. But how do you detect what functionality the end user is going to have? And that's where two types of things come in: extensions and the core API version number. The core API version for OpenGL in general ranges from OpenGL 1.0 to 1.4. On Mac OS X, we only support 1.1 and above. We never supported 1.0 on Mac OS X, which just means that you never have to worry about ever having that case of 1.0 occurring in your user base. So it's always going to be 1.1 to 1.4. And usually, it's not a case of when a driver came out or when a card came out or when we revved a driver to get what version it is. It's what does that actual hardware support. So, for example, if the hardware does support 3D texturing, it can say it's 1.2 or above, probably, in the hardware. If it doesn't support 3D texturing, it's never going to report a core OpenGL version above 1.1, because the 1.2 OpenGL spec requires 3D texturing.

So, but what about a card that may say, hey, I'm 1.1, but really can do some more things than the 1.1 spec allows? Well, that's where OpenGL extensions come in. We support over 80 OpenGL extensions. They range from things that are Apple-specific, like some of the things we did with Vertex Array Object, some of the texture range stuff that John will talk about and the optimization thing, some things that we give you to allow you to do as optimum a texturing path or vertex path as possible on Mac OS X, or to things that are like ARB MultiSample, which is for full screen anti-aliasing, and is an ARB extension, which is by the Architecture Review Board, and is cross-platform and supported on a variety of cards.

This extends the functionality above the OpenGL core functionality that is specified by the OpenGL renderer. So how do we detect this? That's really what I want to talk about as far as an application goes. If I'm an application, I don't want to just run and say, well, 3D texturing is not supported on every card, so I'm not going to run on anything that doesn't support 3D texturing. Well, that's not a good approach. Or, you know, I really wanted this fog effect in my application, but that used a 3D texture, and since I don't know that everything's going to support a 3D texture, I'm not going to even use the fog effect. Another not-so-good solution. So what you can do: there are simple checks you can do when you start your application to determine if that 3D texturing is supported. The first thing you can do is use glGetString with GL_VERSION, which will give you the core version.

That's a string in a certain format that's defined. It'll be like "1.4", a space, and then there'll be some additional vendor-specific stuff. But the beginning of it is always going to be the same. So if you're OpenGL 1.2 or greater, you know you have 3D texturing. No more checking necessary. If it comes back and says, hey, I'm OpenGL 1.1 for whatever reason, you can then move on and look for the extension for 3D texturing. You can do that by getting the extension string, and then we provide, through GLU, the GL Utility API, additional API to check for specific extensions. In this example, I used the Apple fence extension. I'm checking here for Apple fence, and this will tell me if the fence is supported, so I can then, in my code, determine if I want to use a code path that uses a fence or not. Here's a really good example: rectangle texture. Rectangle texture is a great extension to use if you're doing a lot of image texturing. It's not supported, for example, on the Rage 128, but it simplifies your code path a lot if you can use rectangular textures on some other GPUs. So you wouldn't want to write everything as if it were a Rage 128; you can use this method right here, check for the texture rectangle extension, and then decide which code path you're going to use in your application. Finally, one thing you should be aware of is OpenGL limits. OpenGL has limits that are card dependent. If you're on a high-end card, you may have a texture limit of 4,000 pixels. If you're on a low-end card, it could be as low as maybe 1,000 pixels. That's something to determine if you're working with large textures; you may have to divide them up. You can use things like GL_MAX_TEXTURE_SIZE right here in the glGetIntegerv call, and that will get the texture size for you, so you can, again, set your code up to maximize the ability of that GPU to perform for you. Thank you.

So I want to point out at the bottom, I should have pointed this out at the beginning, but this is a good time to do it. I have this blue line at the bottom of it. It says sample code of Carbon OpenGL, Carbon CGL, and Cocoa OpenGL. I put that on the bottom of a lot of slides. What that is saying is that's online sample code, online references that talk more about the subject on the slide. So instead of, you know, taking notes on all these things, you can just go to this sample code, and it actually has functions to do this. The other thing that's interesting about this detecting-functionality thing: it can be a large effort to put in a lot of checks for a lot of different functionality. So what I did was build some sample code that has a GL check function in it, and these three samples have it. And this sample code will go through, and for every display and for every renderer on your system, it'll build a list of all the functionality present, including limits. You can use it all as is, or you can extend it or reduce it as it fits your application. So I say, look at that sample code, look at how that's done, and either model your application directly off of that or use the sample code directly in your application as is. That'll help you with detecting functionality, so you all don't have to write the same kind of functionality detection code for every one of your applications.

So let's show what the detection-of-functionality sample code gets, and show you kind of what I talked about: extended functionality and core functionality. This is just a simple OpenGL demo. Let me move this out of the way. And one thing I want to show you is not anything specific in the demo, but this information. You probably can't even read what it is, but specifically it talks about OpenGL capabilities. The beginning is texture size, but the bottom is a list of every extension, or every feature, supported by this renderer on this machine. You can see there are a lot of them. So sticking to the core functionality is probably not going to give you a very robust app. There's a lot of functionality, a lot of things you can do with OpenGL in extensions. You can use the GL check, which generates this information, to determine what extensions are there and code your app to take advantage of them. I'm going to go back to slides. All right.

So let's move on and talk about interfaces. Interfaces are going to be the meat and potatoes of getting your application started. And that's something that everyone here who writes an OpenGL app will have to touch. Some of you who are already working with OpenGL may think, nah, I don't need to know this. But I'm going to go through all the interfaces, and you might learn something: if you're a Carbon developer, you may say, hey, that Cocoa interface looks pretty nice; or if you're a Cocoa developer, some Carbon interface stuff may work for you, or the CGL interface may work for you. So that's what we're going to talk about in the interfaces section.

All the interfaces share some basic things. These are the basic things that the windowing system has to provide to OpenGL. Remember I said OpenGL was a platform-agnostic API. What that means is there are no windowing system calls in it. There aren't any Windows calls or any Mac OS X calls; there's nothing to say, hey, this is a window, I want to attach to that window. These interfaces provide that. They provide a pixel format, which basically describes buffering capabilities. So it's buffers like, do I want a depth buffer? Do I want an auxiliary buffer? Or capabilities like, do I want full screen, or do I want stencil, etc. The context, which these interfaces provide, you can think of as a state bucket. It's a big bucket of state, and commands are going to be sent to that current context. You can create as many contexts as you need for your rendering, but these will create your context for you. And finally, the drawable is basically equivalent to the window or view or the screen. It provides the size for your buffers, and actually the buffers are instantiated when you attach to the drawable.

Interfaces available: there are four interfaces we're going to talk about. CGL: CGL is a low-level interface. It's the basis for all the other interfaces, and it's for full-screen-only applications. But if you have an application that's full-screen and windowed, you could use the CGL interface for the full-screen portion of it and then use a different interface for the windowed portion. AGL is the Carbon interface to OpenGL. So if you're a Carbon developer, you're going to look at AGL and use that to interface with OpenGL. NSGL, or NSOpenGL, is going to be the Cocoa interface to OpenGL. And finally, GLUT is a very high-level interface that provides source-level cross-platform compatibility, and it's used a lot for examples and in the scientific community. It doesn't provide that rich of a UI set, but for some basic we-want-to-test-something-out work, it works fairly well.

So again, we've seen this before. You can see that GLUT would be the highest-level interface. It's actually built on top of NSOpenGLView. NSOpenGLView is built on top of NSOpenGLContext and NSOpenGLPixelFormat. AGL is built on top of CGL, and everything else kind of sits on top of the pancake that way.

Again, your application is going to pick one of the interfaces on the left side and then access OpenGL from the right side of the diagram. So CGL, Core OpenGL: again, it's low level, it's a basic interface, it's the foundation for everything else, it's full screen only. And let's talk about setting it up. All these interfaces have almost the exact same setup code, so if you're not developing OpenGL right now, this is what you're going to have to do to get your OpenGL window on the screen. For CGL, for full screen, you want to pick a desired display mode. Do you want 1024 by 768? Do you want something else? You capture the displays to make sure you do not modify anything else on the desktop, or some other application, or icons on the desktop. You want to switch to the video mode that you have chosen. You create a pixel format for that specific display. The key here is a specific display. So you're going to pick one specific display that you want to render onto and make a pixel format that targets that display. And then you're going to go full screen in CGL: create the context and then set full screen.

So let's walk through the code example. Again, the CGL sample code here, CarbonCGL, has almost the exact same code in it. This is simplified slightly, but if you really want to look at this and study it, please download the sample. It's on the web right now, and look at that. I'll go through the code here, but I'm not going to go through every detail in this session. So first, we're talking about the pixel format, and this is what you'll see in a lot of these setup codes. You'll see a pixel format, and you'll have attributes. These attributes define things like what buffers you want and what capabilities.

And here, what's important to note is the full-screen attribute. Supplying the full-screen attribute will tell CGL that you definitely want full screen. CGL can work with offscreens also, but in this case we're talking about full screen. The other thing of interest is the first two attributes put together: the display mask and the zero. The zero doesn't mean anything right now; it's a placeholder. We put that in, and that's going to tell CGL what screen we actually want to work on. We then go down and we get the main display, for example.

We get a display mode for the height and width and depth that we want. We capture all the displays, and we switch to the display mode. Those are CG calls; those are things that are covered in the CG API. Then we have one additional call that may look new, that's in the CG API, that you may not have seen before.

That's CGDisplayIDToOpenGLDisplayMask: you give it the CG display ID, it returns an OpenGL display mask, and you fill it into that attribute section. That's going to tell OpenGL, the CGL interface, what display you want to use. So you call that. You then set up the pixel format, so you have the display mask in there for that specific display. Create the context. You can destroy the pixel format right now because it's not needed; you could keep it, but you can destroy it if you want to. Set the current context, and then you'll set full screen. At that point, you'll have an OpenGL context on your full screen, and you can draw to it. Fairly simple.
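Pulled together, the full-screen CGL sequence just described looks roughly like the sketch below. This uses the Mac OS X 10.2-era CGL and CG Display APIs (long since deprecated), omits all error checking, and is meant as an outline of the steps, not a drop-in implementation; the CarbonCGL sample mentioned above is the authoritative version:

```c
#include <OpenGL/OpenGL.h>
#include <ApplicationServices/ApplicationServices.h>

/* attribs[1] is the display-mask placeholder mentioned in the talk;
   it is filled in once we know which display we are capturing. */
static CGLPixelFormatAttribute attribs[] = {
    kCGLPFADisplayMask, (CGLPixelFormatAttribute)0,   /* placeholder */
    kCGLPFAFullScreen,
    kCGLPFADoubleBuffer,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
    (CGLPixelFormatAttribute)0
};

void setupFullScreenGL(void) {
    CGDirectDisplayID display = CGMainDisplayID();
    CFDictionaryRef mode =
        CGDisplayBestModeForParameters(display, 32, 1024, 768, NULL);

    CGCaptureAllDisplays();                /* protect the rest of the desktop */
    CGDisplaySwitchToMode(display, mode);  /* switch to the chosen video mode */

    /* Tell CGL which display we want via its OpenGL display mask. */
    attribs[1] = (CGLPixelFormatAttribute)
        CGDisplayIDToOpenGLDisplayMask(display);

    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    CGLChoosePixelFormat(attribs, &pix, &npix);

    CGLContextObj ctx = NULL;
    CGLCreateContext(pix, NULL, &ctx);
    CGLDestroyPixelFormat(pix);   /* not needed once the context exists */

    CGLSetCurrentContext(ctx);
    CGLSetFullScreen(ctx);        /* the context now owns the full screen */
}
```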

AGL is the Carbon interface, and it has windowed and full-screen support. If you look at the setup here, it's really very, very similar to CGL. You create a view or a window or whatever (versus CGL, which uses the screen), but then you're going to create a pixel format. And I'm going to make a note here about limits on multi-screen pixel formats. It's something that's probably more complicated than we want to go into for this session. If you have specific questions about it for your apps, we can talk about them afterwards. But I also point you to two tech notes there: "aglChoosePixelFormat: The Inside Scoop" and "The Correct Setup of an AGL Drawable." Both of those have a significant amount of information about choosing pixel formats and how that contends with multi-screen displays. If you choose just a normal pixel format, it should normally pick every renderer it possibly can support, and you'll be able to drag the window between multi-screen displays. There are reasons you may not want that in some cases; there are reasons that you probably want it that way. So for just a normal app, you would want to choose an open pixel format, let it choose to support all the renderers, and then you get the ability to drag between displays. You then create the context and attach to the drawable, as we've seen before. So, the code example for windowed: basically, the attributes, which we've seen before. They're AGL attributes instead of CGL in this case. Double buffer, depth: those are similar to what we saw in CGL. You then choose the pixel format. In this case, I just do some checking here. If I created a pixel format, I then create a context. If I create the context, I then set the drawable.

And if you notice the GetWindowPort on the window: what that's going to do is actually use the window that I've created with the normal Carbon routines as the drawable, and I'm going to set the current context. And then you can draw into it with OpenGL. One thing the bottom of that shows is something people ask about: VBL syncing. Normally, on Mac OS X, we do not sync the display, or sync the OpenGL drawing to the VBL, or limit the OpenGL drawing to the VBL. So what you can do is use the set-integer call that's in almost all of the APIs, and that'll allow you to sync to the VBL here. In this case, you call aglSetInteger with the context, and you want to use the swap interval and set that to one, and that means, hey, we're going to limit to the VBL sync. So now let's move on to AGL full screen. In the AGL API, you can also do full screen inside of the API itself. This looks pretty much the same, but let's just highlight the things that are different. There are only about three or four lines that are actually different here. First, we've added the full-screen attribute to the pixel format. So that's saying, hey, I want a full-screen, not a windowed, pixel format. Then we're going to get a main device, because we need a device to do full screen. You need to tell it where to do the full screen.

And AGL is a little bit different than CGL in the fact that you put the display into the choose-pixel-format call instead of into the attributes. So you put the display you just got, the one display to draw to, along with the attributes, into the creation of the pixel format. And then, instead of setting the drawable, you do a set full screen. Fairly simple. So this is really, really simple stuff as far as how you set up OpenGL. You can get running with OpenGL, depending on what API you pick, in a matter of minutes. Thank you.
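The windowed AGL path, including the VBL sync at the end, condenses to something like the sketch below. Again this is era-appropriate, long-deprecated Carbon API with minimal error handling, and aWindow is assumed to be a WindowRef you created with the normal Carbon calls; the Carbon OpenGL sample mentioned earlier is the full version:

```c
#include <AGL/agl.h>
#include <Carbon/Carbon.h>

void setupWindowedGL(WindowRef aWindow) {
    /* Buffer and capability requests, AGL-style. */
    GLint attribs[] = { AGL_RGBA, AGL_DOUBLEBUFFER,
                        AGL_DEPTH_SIZE, 16, AGL_NONE };

    AGLPixelFormat pix = aglChoosePixelFormat(NULL, 0, attribs);
    if (pix) {
        AGLContext ctx = aglCreateContext(pix, NULL);
        aglDestroyPixelFormat(pix);
        if (ctx) {
            /* The window's port is the drawable. */
            aglSetDrawable(ctx, GetWindowPort(aWindow));
            aglSetCurrentContext(ctx);

            /* Limit buffer swaps to the VBL, as described above. */
            GLint sync = 1;
            aglSetInteger(ctx, AGL_SWAP_INTERVAL, &sync);
        }
    }
}
```

For the full-screen variant, you would add AGL_FULLSCREEN to the attributes, pass the target device to aglChoosePixelFormat instead of NULL, and call aglSetFullScreen in place of aglSetDrawable, exactly as the talk describes.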

Let's talk about NSGL, or NSOpenGL. This is the Cocoa interface, very similar to the other interfaces we've already talked about. There are two ways to use Cocoa and NSOpenGL. First is the NSOpenGLView subclass. We already provide NSOpenGLView in the Cocoa interface. What that does is basically encapsulate a context and a pixel format for you, and it gives you some basic utility functions handling some of the bookkeeping for you, so you have to do very little work. There are some limitations to it. For example, if you wanted to have two contexts that work with one view, it's possible to do in Cocoa, but the NSOpenGLView subclass would not allow you to do it. In that case, you would have to roll your own NSOpenGL view.

So let me go back to the NSOpenGLView subclass. The last point on that is that you can build it via Interface Builder, which I'll show you in a minute, and it's fairly simple to hook into an application. But let's say you have an application you want to special-case. For example, you want a context that does multisampling: you want an anti-aliased context and a non-anti-aliased context, you want them in the same view, and you want to switch between the two without any flash. You don't want to actually replace the view. All you're going to do is not render one, and the next frame you're going to render the other. You can do this fairly easily by just replacing the context in a view. You're not going to tear the view down.

You're not going to have this big white flash or black flash of nothing being drawn. You're just going to replace the context. But this would not work with NSOpenGLView. You'd have to roll your own custom OpenGL view based on an NSView subclass, using NSOpenGLContext and NSOpenGLPixelFormat. This might seem daunting to some people.

Making sure you cover all the cases can be tricky, so to simplify this we provided some sample code. The Custom Cocoa OpenGL sample, which went up this week, is on the web now. For folks who want to roll their own, it shows you a template that you can use directly and modify as needed, and that should be a great starting point. I'm not going to go into detail here, but I wanted to point out that that sample code is available for your use today.

So let's talk about NSOpenGLView and using it. You create it in Interface Builder: you create your window, you create your view, maybe using a custom view or an OpenGL view, and you drag it into your window. Then you create a subclass of NSOpenGLView and have that class manage the view, all in Interface Builder.

Then in code, what you're going to do is override a few of the methods of NSOpenGLView. The first one is initWithFrame: or initWithCoder:; depending on whether you use a custom view or an NSOpenGLView, you override one of those. And you can do things like set your pixel format up there, if you didn't want to set it up in Interface Builder itself.

Then we have a new routine in Panther called prepareOpenGL. You can even write code that uses the built-in prepareOpenGL on Panther and your own prepareOpenGL on Jaguar without having a problem. What I do in prepareOpenGL is initialize all my OpenGL state; it's one call to do that in the right place. If you don't have it, if you want to write an app that runs on both Panther and previous releases, what you can do is conditionalize on an initialization variable in drawRect:, and if it's not initialized, call your own prepareOpenGL routine. That will work on both OS versions. Next, the reshape call. Some of you may notice that there's a reshape call in NSOpenGLView, and here I'm not recommending you use it. You're free to use it if you like; there are no problems with it. But my recommendation is that it's fairly difficult to write code that optimizes reshapes and has less overhead than just handling any reshape or resize in your drawRect:. In drawRect:, you can easily check the view size and call glViewport to reset the viewport if there have been any changes. That's our recommendation, for simplicity and for working in all situations: just handle anything that needs to be done for reshape in your actual drawRect: method.
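The "handle reshape in drawRect:" advice boils down to caching the last-known view size and only resetting the viewport when it actually changes. Here's a minimal sketch of that idea in plain C; the type and function names are my own illustration, not any Apple API.

```c
#include <stdbool.h>

/* Illustrative sketch: cache the last-known view size and report
   when the viewport needs to be reset. Names are hypothetical. */
typedef struct { int width, height; } ViewSize;

/* Returns true (and updates the cache) when the size has changed. */
bool viewport_needs_update(ViewSize *cached, int newWidth, int newHeight) {
    if (cached->width != newWidth || cached->height != newHeight) {
        cached->width = newWidth;
        cached->height = newHeight;
        return true;  /* caller would now call glViewport(0, 0, newWidth, newHeight) */
    }
    return false;
}
```

In a real drawRect:, you would call this with the view's current bounds at the top of the method; most frames it returns false and costs almost nothing.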

Update. The update routine is called in about four cases, and there's a Q&A that just went up about updating OpenGL contexts. I suggest that when you're looking into update, you read that Q&A; it describes all the cases where update needs to be called. The cases are things like a display configuration change, or a window being dragged so that it changes renderer. The idea behind update is that it takes care of renderer changes for you. So if you have two displays, and someone drags your window from one display to the other, at some point it's likely the renderer will change to a second renderer, or whatever other card may be in your system. Update needs to be called in these cases to make that happen. Normally with NSOpenGLView, you can probably just ignore update, unless you need to track renderer changes. It'll happen behind your back: update will be called, everything will be taken care of, and you won't need to do anything.

If, for some reason, like in some of the examples that I have, I want to show which renderer I'm on and update some text, I would override the update method, but you've got to make sure you call super's update first.

Lastly, drawRect:. drawRect: is where you're going to do your work. I'm going to handle my resize there, and I'm going to draw the content in drawRect:. Finally, animation timers. If you want to do animation, you can use an NSTimer, and the only note I would add beyond normal NSTimer use is that you want to schedule it in both the default run loop mode and the event-tracking run loop mode, which allows you to get updates during a resize.

So I'm going to take a departure here from what folks would normally do with a demo, and I'm actually going to create some code. We're going to go to Xcode, create a new project, make it a Cocoa application, and show you how easy it is to create an OpenGL demo live, on stage, even when you can't type well.

So the first thing I'm going to do is look at the nib file. The standard nib file you get has just this one window in it. What I'm going to do is take a custom view, drag it into the window, and resize it to the entire size of the window.

Then I want a class to control that custom view. We mentioned that NSOpenGLView is a subclass of NSView, and I'm actually going to subclass it with my own class, MyOpenGLView; that's fine for the name there. Going back to the custom view, let's get some information on it. The first thing I want to do is make sure that when the window resizes, the view actually adjusts its size.

I'll do it that way. And then finally, I want to make that a custom class, using MyOpenGLView, like that. Let's use Interface Builder's ability to create some files for that. It's going to automatically create those two files and put them into the project right there. Then we're going to save this and quit Interface Builder. If you notice, we now have two additional files in our project that were created by Interface Builder. We're going to add some code to these files.

Because of my poor typing, I don't want to type all this code live; I'm sure I would make mistakes. But I'm going to add some simple things like headers, some variables, and that kind of thing. The first thing we're going to add is the OpenGL headers from the OpenGL framework: gl.h, glext.h, and glu.h. Then I'm going to add some member variables over here.

And what that's going to do is add an initialization variable. It's going to have a timer, a flag for animation, a time for the timer, and then some rotation values. No OpenGL values here; these are just things you need for doing some simple animation to make the demo a little more interesting. And here I'm going to add the functions I'm going to override. I'm going to have a pixel format, update a projection, update my model view matrix, do some animation right here, and do my drawing routine.

This is the prepareOpenGL I talked about. We have our initWithFrame:. We have an awakeFromNib to set up some variables. So that's all I need in there. In the OpenGL view class, the first thing we're going to add is the rotation and drawing code.

This code does some things to calculate some rotations, to spin things around and make it look interesting. And then at the end, this is the drawing code right here. This is the color, this is the vertex, and we draw some quads. So this is the actual OpenGL drawing code, and then I draw a line around the cube with this. So this is going to draw the cube. This data up here is the actual vertices for the cube. Notice they're all ones and zeros; it's just a unit cube.

We talked about pixel formats. This code right here actually defines a pixel format, and it looks very similar to the example I showed. It's going to be a windowed pixel format, it's double-buffered, and I'm going to add a depth buffer here. So it's real similar to what we talked about in the CGL and AGL sections. These two functions I added here: one updates my projection, so I actually get a projected spinning cube, and the other updates the model view matrix, specifically with the glRotate call here, to rotate the cube.

Now I'm going to add the animation timer. This is what we talked about before. The timer gets the time, takes a difference here, and if I'm animating, spins the object and then calls that drawRect: routine. This is the spin-the-object code; it does some math to make it a nice, pretty spinning object. And finally, these are the four functions that you actually need to override. The drawRect: routine. What's this going to do? It's going to handle the resize right here, which is what I talked about handling in drawRect:. It's going to do the init at the very beginning if it hasn't been done, do a clear, draw the cube, and then do the flush, or the flushBuffer, depending on what situation I'm in.

prepareOpenGL has some OpenGL setup code right here. And again, this is the Cocoa version of that swap interval for VBL syncing, very similar to the AGL version. initWithFrame: is very simple; all I'm going to do is create a pixel format, like we talked about. And finally, in awakeFromNib, I'm going to set up some values and set up a timer. So if I didn't do anything wrong, we should be able to build this and run it.

And so this is an OpenGL spinning square that resizes, handles updates correctly, and handles the full-screen zoom very easily. That's all it takes to do an OpenGL app. I put a square in here, but you can do any content you want. The point is, it's real easy to get to the point where you can actually draw your content. All the code I used here is based on the sample code that's on the website. There's a Cocoa OpenGL sample that's really simple to use, and you can just take that, rip the guts out of the drawing code, and put in whatever drawing code you want. We can go back to slides now.

The last thing I'm going to talk about is GLUT. GLUT, as I said, is a source-level-compatible cross-platform API. It's a limited API, but it's fairly simple to use, so it works really well for examples. If you want to do an example or test some things, you can use GLUT, set it up quickly, and do it that way, and it works across many platforms.

It's callback-based. The setup is fairly simple: you initialize GLUT, you create your windows, you set your callbacks for the things you want to handle, and you call the GLUT main loop. Don't expect the GLUT main loop to return, so don't put any code past it. It will exit through a different code path without returning.

Here's an example of what your main in GLUT looks like. We're not going to talk about the specific callbacks; I just wanted to show you an example. You have the init function, then the window creation: you set the display mode, which is similar to a pixel format, and you create a window.

You can then initialize some OpenGL state if you need to, and then you set your callbacks. Realize that a couple of the callbacks are app-based versus window-based. For example, the idle function: you get one of them. So if you set an idle callback, you have one idle callback for your entire application, whereas, for example, the reshape function is on a per-window basis, for whatever the current window is. So that's what some GLUT code looks like, and there are a lot of GLUT examples on the web and on our site.

So let's talk about some OpenGL techniques. These are things you can use in your current applications, either to improve them or to create content that's more interesting than just a spinning square. The first technique is texturing. Then we'll move on to fonts, talk about handling images, and then movies.

In this section in general, you'll see that I'm going to talk about the technique and not show you every nuance of the code behind it. Again, there are samples for all of this on the web. I want to give you an overview and a context to work from, so you can look at a sample and understand what it's doing. I'd rather have you work through the code at your own pace and ask questions on the mailing list after the session if you need to.

So first, texturing. We can do texturing in two different directions: one using Cocoa, and one using Carbon with QuickTime. First, for the Carbon and QuickTime path, we're going to texture from a packed 8888 or 1555 ARGB buffer. That means an alpha, red, green, blue pixel format, with either 8 bits per component, or 1 bit of alpha and then 5 bits per component.
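To make those two layouts concrete, here's a small sketch of how 8888 and 1555 ARGB pixels pack into 32-bit and 16-bit values. The helper functions are my own illustration; in real code the layout is communicated to OpenGL through the packed-pixel type constants discussed below, not by packing pixels yourself.

```c
#include <stdint.h>

/* Illustrative packing for the two ARGB layouts mentioned above. */

/* 8888: 8 bits per component, alpha in the high byte. */
uint32_t pack_argb8888(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)a << 24) | ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

/* 1555: 1 bit of alpha, then 5 bits each of red, green, blue. */
uint16_t pack_argb1555(uint8_t a, uint8_t r5, uint8_t g5, uint8_t b5) {
    return (uint16_t)(((a & 1u) << 15) | ((r5 & 31u) << 10)
                      | ((g5 & 31u) << 5) | (b5 & 31u));
}
```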

You allocate a texture-sized buffer using NewPtr or malloc, whatever your routine of choice is; it's going to be the exact size of the texture you're reading in. You create a GWorld from the buffer with QTNewGWorldFromPtr. There's also the function NewGWorldFromPtr; use whichever one you prefer, they're actually the exact same routine. What this does is create a buffer that you can draw into using QuickTime and QuickDraw that does not have padding on the end of each row, so you can texture from it easily. You draw into the GWorld with whatever content you want: an image, a movie, or just lines and circles or text. You can then dispose of the GWorld, unless you want to draw some more, because you don't need to keep it around.

All you need is the buffer you created initially; you don't need the GWorld to texture from. And then you're going to texture from the buffer, using standard OpenGL texturing techniques. Understand that the Apple packed pixels extension will be used here; it's on every single Mac OS X implementation, so you can count on it being there. The texture types are either unsigned int 8888 reversed or unsigned short 1555 reversed, and what that does is tell OpenGL which pixel format you have natively in that QuickDraw buffer for texturing. So, some code. It looks fairly busy, but it's not actually complicated, and I'll go through it; we'll show a demo using this code in a little while. The get-graphics-importer-for-file call in this case takes a file spec and tries to open the file. You can get the natural bounds for it, which gives you the size. You then allocate a handle for the image description to get additional information, which is where I get the depth of the image. I then get the height and width out of the image information, calculate a stride to use for convenience later, and allocate a new buffer. That new buffer is going to be my texture. QTNewGWorldFromPtr, as we talked about, creates a GWorld from that buffer that you can then draw into with QuickDraw or QuickTime, and then texture out of with OpenGL.
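The stride and allocation math described above is simple but worth seeing: a tightly packed buffer has rowBytes equal to width times bytes per pixel, with no row padding, which is what lets OpenGL texture from it directly. This is a hedged sketch with hypothetical helper names, not QuickTime code.

```c
#include <stddef.h>

/* Illustrative buffer-size math for a padding-free texture buffer:
   depth is bits per pixel as reported by the image description. */
size_t row_bytes(int width, int depthBits) {
    int bytesPerPixel = (depthBits > 16) ? 4 : 2;  /* 8888 vs 1555 */
    return (size_t)width * (size_t)bytesPerPixel;
}

/* Total allocation for the texture buffer (stride times height). */
size_t buffer_size(int width, int height, int depthBits) {
    return row_bytes(width, depthBits) * (size_t)height;
}
```

In the real code, this size is what you pass to NewPtr or malloc before handing the pointer to QTNewGWorldFromPtr.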

Graphics importer set-GWorld, graphics importer set-quality, where we're going to use lossless quality; then we get the PixMap, lock the pixels, and draw with the graphics importer. So this basically took the contents of that file and drew it into that GWorld, which in turn was drawing into your buffer. You can unlock the pixels, close the importer component, and actually just dispose of the GWorld at this point, because you don't need it; all you need is that buffer you created. Later on, when you want to texture from it, you just call glTexImage2D. Notice I use a rectangle texture, which we mentioned briefly earlier, which allows you to texture from non-power-of-two images, and I do a switch on the depth of the image to determine whether I'm going to use the 1555 or 8888 pixel format. That's the overview of how to texture from a GWorld. Moving on to texturing from Cocoa, using NSImage and NSBitmapImageRep. In this case, we're going to texture from the 8888 RGBA format, which is what you're going to see from Core Graphics or from Cocoa. We create a texture-sized NSImage, as you would normally. We lock focus on the image, and we draw into it. We then create an NSBitmapImageRep from the NSImage; usually I use the focused view to create it, so it's fairly simple to do. The texture size will be the size of the bitmap you create, and then you can texture directly from the bitmap data, in this case RGBA, using the unsigned byte format. So what does the code for that look like? A little bit shorter, a little bit simpler; if you're using Cocoa, there's a little less work to do here. NSImage alloc and init. Then you lock focus on that image, and you draw whatever your content would be there.

You create the bitmap. Or, backing up a little: when you create the NSImage, you can create it from a file. In that case, you wouldn't really be drawing; you'd just lock focus and then create the bitmap, since you'd already have the content. To create the bitmap, you unlock the focus, because you don't need it locked anymore. You get the size from the bitmap, and then you call glTexImage2D, with texture rectangle again to handle any size image, using RGBA and unsigned byte. Then bitmap release, image release. Again, this is the same thing in Cocoa that we showed in Carbon. Fairly simple to do. These are the areas of OpenGL where you have to interface with the operating system to handle the image, because this is something OpenGL does not have built in. So those are the two ways to do that interface.

Taking the text, then, we can extend that to drawing fonts. There are a couple of ways to draw fonts. First, on a per-character basis, there are things like AGL's use-font call. Also, in the CGL example, I wrote a CGL use-font that works very similarly, so if you're not using AGL, you can still get that per-character bitmap font. It's good, as you saw in that previous example, for putting up text like debug text or status information, but it is not optimal: bitmaps are not real fast, and redrawing the bitmap every frame is especially not a good idea.

So what can we do? There are two options here: one I'll mention now, and one we'll talk about at the end if we have some time, with a quick demo. First, if you're doing per-string text and you want to store the strings in textures, that's a fairly simple thing to do. Some tips on doing this. Limit your updates: every time you change that texture, you have to re-upload it. If you keep your texture constant through the entire application, or you limit changes to when the user responds to something, you update only the string at that point, update the texture at that point, and you limit the number of texture uploads to the graphics card. Also, use premultiplied alpha for textures, and if some of you aren't familiar with premultiplied alpha, we can take a little sidebar and talk about it.
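Before the sidebar, the "limit your updates" point can be sketched in a few lines: remember the last string you uploaded, and only re-upload the texture when the text actually changes. The struct and upload counter here are my own illustration standing in for a real glTexImage2D call.

```c
#include <string.h>

/* Illustrative string-texture cache: uploads counts how many times
   we would have re-uploaded the texture to the card. */
typedef struct {
    char text[256];
    int  uploads;
} StringTexture;

void string_texture_set(StringTexture *tex, const char *s) {
    if (strncmp(tex->text, s, sizeof tex->text) != 0) {
        strncpy(tex->text, s, sizeof tex->text - 1);
        tex->text[sizeof tex->text - 1] = '\0';
        tex->uploads++;  /* real code would call glTexImage2D here */
    }
}
```

Redrawing the same string every frame then costs nothing on the texture-upload side; only genuinely new text pays for an upload.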

Premultiplied alpha is something that, depending on what your background in graphics is, you either think in those terms or you don't. But for people who don't think about premultiplied alpha, it's something really good to use in your application. It's alpha which is already multiplied through your color. So take a 50% gray, non-premultiplied:

Red of 1.0, green of 1.0, blue of 1.0, alpha of 0.5; premultiplied, that's 0.5, 0.5, 0.5, 0.5. The alpha is already multiplied through. Why use it? It's simpler, and it's closed under the over operator. Over is basically the compositing operator: if you composite something over top of something else with premultiplied colors, you get a closed function. If you don't use premultiplied alpha and you composite two images, what you get out is premultiplied anyway, and then when you try to use that image with something else, you have to keep track of which images are premultiplied and which aren't. So let's just use premultiplied all the time. Composite two premultiplied images, you get premultiplied out. Two more premultiplied images, premultiplied out. Take the results of those two, put them together, and you get premultiplied out. Consistent across the board, easy to use. The only change in OpenGL to understand is that the blend function changes. Non-premultiplied, people doing blending are probably used to using source alpha for the source factor. In this case, since you know the alpha is already multiplied through your color, you use GL_ONE, knowing that the color already has the alpha value in it. That's the sidebar on premultiplied alpha; if there are more questions about it, we can talk later. Back to drawing fonts. The last thing: if you want colored or color-changing text, instead of putting the color in the texture itself, put it in the polygon. Use the polygon alpha for blending, and use the polygon color for colorizing font strings if you want to do that. So it's really easy to get high-quality fonts into OpenGL through textures. Let's go to a quick demo of that.
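The premultiplied-alpha sidebar can be written down in a few lines of C. This is a hedged sketch of the math only: premultiply() folds alpha into the color, and over() is the compositing operator for premultiplied colors, which is exactly the GL_ONE, one-minus-source-alpha blend function described above.

```c
/* Illustrative premultiplied-alpha math; Color is a hypothetical type. */
typedef struct { float r, g, b, a; } Color;

/* Fold the alpha through the color components. */
Color premultiply(Color c) {
    Color p = { c.r * c.a, c.g * c.a, c.b * c.a, c.a };
    return p;
}

/* 'over' for premultiplied colors: src + dst * (1 - src.a).
   The result is premultiplied too, which is why over is closed. */
Color over(Color src, Color dst) {
    float k = 1.0f - src.a;
    Color out = { src.r + dst.r * k, src.g + dst.g * k,
                  src.b + dst.b * k, src.a + dst.a * k };
    return out;
}
```

The 50% gray example from the talk: premultiplying (1, 1, 1, 0.5) gives (0.5, 0.5, 0.5, 0.5), and compositing that over opaque black yields 50% gray.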

So this is my Cocoa OpenGL example, which is also available on the web. I'll leave the square there. This is the same thing you've seen before. It's probably too hard to read, but what the text says isn't really important; how it operates in the frame is more important. This is using a very simple NSStringTexture class that uses exactly the techniques we showed for texturing and for handling strings, and creates simple strings that you can easily update and scale. They composite very nicely over top of each other using premultiplied alpha, and it's set up in such a way that if you're using Cocoa, you can go ahead and download the sample and use the class directly in your applications. You pass a string in, it creates a texture for you, and you can draw with the texture. You can go back to the slides.

So let's talk about handling images. Images can be displayed in a number of ways. Last year I took a long time, maybe half an hour of my session, to talk about image handling, and there's a fairly all-inclusive sample out there that shows you how to handle and draw images. I'll defer to that for the actual working code that handles every single case, but let me go through the technique involved. For people who want to put up images, especially people coming from the 2D world who want to use OpenGL, this is the technique you want to use to get images on your screen. First, the simplest thing to do is use an orthographic projection. If you're working in 2D and compositing things, there's no reason to use a perspective projection. You can just use an orthographic projection, line everything up, and set up a one-to-one pixel mapping to your window so you can address things in window coordinates. You can scale the polygon as appropriate: if your image is inside a polygon, you scale the polygon to draw your image at the right size. There are three options for handling images that are not a power-of-two size, which OpenGL textures normally want to be. First, you can scale to a power of two. That means if your texture is, let's say, really thin, narrower than a power of two, and really tall, taller than a power of two, you scale it to a big square, and when you draw it back into a polygon with the right aspect ratio, it unscales itself. You do lose data in this case, but for a lot of people that may work very well for what you need. Second, you can segment the image into power-of-two rectangles. You can actually slice the image up in place; you don't have to have separate copies of the data.

You take one big image buffer, slice it up using different texture pointers into the same data, and tile it across as many power-of-two textures as you need. And finally, you can use the texture rectangle extension that we mentioned already to do non-power-of-two textures, which makes things real easy. Understand that the texture rectangle extension uses image coordinates for textures rather than 0 to 1. Normally a texture is 0 to 1 horizontally and vertically; if you had a 250 by 300 texture and you were using a texture rectangle, your texture coordinates would run 0 to 250 by 0 to 300.
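Two small pieces of arithmetic underlie the options above: rounding a dimension up to a power of two (for ordinary 2D textures), and the texture-rectangle convention where coordinates run in pixels instead of 0 to 1. Here's a minimal sketch; the helper names are mine.

```c
/* Round n up to the next power of two (illustrative; n >= 1). */
int next_pow2(int n) {
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}

/* Ordinary 2D textures use normalized [0,1] coordinates; texture
   rectangles use pixel coordinates, i.e. 0..width and 0..height. */
float rect_coord(float normalized, int sizeInPixels) {
    return normalized * (float)sizeInPixels;
}
```

For the 250 by 300 example: a conventional texture would have to be padded or scaled to 256 by 512, whereas with texture rectangle the right edge is simply coordinate 250.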

Let's look at some sample code. Again, this sample code is available today to look at and download, and you can look through it specifically. We're going to pick an image, and the sample shows a lot of the different techniques. For example, you can use a non-power-of-two texture via texture rectangle, or, in this case, you can tile the image, setting the max texture size so you can look at the differences in how it handles different images. In this case, I'm going to tile it into 128-pixel tiles, and this is the image itself, drawn using OpenGL. I'm not sure if I have a tear... yeah, I have a tear here, and that probably relates to the fact that we have multiple displays synced up and we're not actually synced to the VBL of the projector. Normally, in your applications on your own screen, you wouldn't see the tearing. But this is a very large image.

This is 2,700 by 1,700, about a 12-megabyte image, and there's no problem handling the rotation. You can easily rotate it, and you can zoom in very quickly using OpenGL and have OpenGL manipulate the image. You can rotate at high speed, and rotate and zoom at the same time. This is using everything you get with OpenGL for free, no problems. Let me rotate this back to roughly straight and show you what OpenGL is doing behind your back. Those green lines are the actual textures it used. So if I look at the corner, for example, and zoom in, I can actually see that OpenGL made some small textures and sliced this image into the power-of-two textures necessary to texture without the texture rectangle extension. So that's an example of using OpenGL texturing. And again, let me zoom the image to fit the window and rotate it. Just so you know, this is again a 12-megabyte image, and it's texturing at 160 frames per second without working real hard at performance. So that's 160 frames per second updating that 12-megabyte image as a texture. We can go back to the slides now.

So OpenGL handles images great. Now let's talk about playing movies. Playing movies with OpenGL, the setup is very similar to the Carbon setup we saw before. You use QuickTime as the API to play the movie, with standard QuickTime setup techniques, and you can use this in a Cocoa or a Carbon application. One thing you do need to do is know when QuickTime has finished drawing a frame of your movie. For that, you use NewMovieDrawingCompleteUPP to create a callback that basically says, hey, I'm done drawing a frame. At that point, OpenGL can check whether the frame has been updated, and if it has, it can texture from the updated frame. It's the same technique as drawing an OpenGL image, updated frame by frame: really, you're drawing a sequence of images and updating when QuickTime tells you to. That's the whole technique; there's nothing more involved in playing movies on the screen.

Some people, when they sit back and think about it, will realize that there are actually two points of synchronization: one when QuickTime is done with the image and you can use it, and one when OpenGL is done with the image. It takes somewhat more complicated code to handle both of those synchronization points. Many applications will do fine updating their OpenGL texture when QuickTime is done with it and not worrying about the second sync. We do have a sample, the OpenGL compositor lab, that shows creating a custom codec to sync at both ends, for people who need very fine control of the movie playback. We'll use glTexImage2D to create the texture, and then a different routine, glTexSubImage2D, to update only the part of the movie frame that actually changed. And we'll go to another demo of this. Again, this is sample code that's available on the web. This is OpenGL Movie.
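The reason glTexSubImage2D helps is that you only upload the region of the frame that changed. One way to batch that up, sketched here with my own illustrative names, is to keep a running dirty rectangle: take the union of the changed regions since the last upload, then do a single sub-image upload covering that union.

```c
/* Illustrative dirty-rectangle bookkeeping for partial texture updates. */
typedef struct { int x, y, w, h; } Rect;

/* Smallest rectangle covering both a and b; its x, y, w, h are what
   you would hand to a single glTexSubImage2D-style partial upload. */
Rect rect_union(Rect a, Rect b) {
    int x1 = (a.x < b.x) ? a.x : b.x;
    int y1 = (a.y < b.y) ? a.y : b.y;
    int x2 = (a.x + a.w > b.x + b.w) ? a.x + a.w : b.x + b.w;
    int y2 = (a.y + a.h > b.y + b.h) ? a.y + a.h : b.y + b.h;
    Rect u = { x1, y1, x2 - x1, y2 - y1 };
    return u;
}
```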

And I will grab something you've probably seen before. Again, it's basically drawing a new frame every time the movie says it has a frame update. You can see right now I'm getting about 250 or 260 frames per second of OpenGL drawing, while updating the texture at the rate that QuickTime updates it for a 24- or 30-frame-per-second movie. The point here is that with OpenGL you can easily handle the texturing for a single QuickTime stream or multiple QuickTime streams. So for doing effects on movies, or getting movies to the screen at high speed, it's not a bad idea to use OpenGL in your application. And manipulating the polygon is free: I can draw that movie just as fast flipped or spinning as I can square on the screen, full screen. Let's go back to slides.

The final section of the presentation is some tips. We're going to go through three things that a lot of developers run into, things that can help you develop your applications, polish them up, or understand what's going on behind the scenes a little better. First, we'll talk about shared contexts. Then we'll talk about full-screen anti-aliasing. And finally, we'll talk about render-to-texture, a much-requested feature that we've added for Panther. First, shared contexts.

Say you have a lot of windows, a lot of different stuff on your screen, a lot of textures. Maybe you use display lists, vertex programs, fragment programs, but you only want to write them once, or load them once. You don't want to have to load the texture into every single context.

You can use shared contexts to alleviate the problem of loading things into multiple contexts. Texture objects, vertex programs, fragment programs, display lists, and vertex array objects can all be shared between contexts. Other context state, like whether texturing is enabled, is not shared; it's just the objects and the state associated with those objects. The trick here is that the contexts must have the same virtual screen configuration, which sounds like a mouthful and in some cases gets tricky. There are two ways to avoid worrying about virtual screen configurations when sharing contexts: create a single-display pixel format, or share with other contexts created from the same pixel format. So first, if you're doing full screen, or you know you have a single-screen system, or you want to constrain your windows to a single screen, create the pixel format using some of the techniques we showed in the full-screen section to support only that one screen, and you'll be able to share, no problem.

Second, you can create a single pixel format and share across contexts using that one pixel format. Full-screen pixel formats with windowed drawables are also something new for Panther. Pre-Panther, people would create a full-screen pixel format, and then if they tried to attach a context made from that pixel format, with the full-screen attribute, to a windowed drawable, we would fail on that. We've relaxed that restriction. Basically, the full-screen attribute is still an additional constraint on the pixel format, but when the context attaches to the drawable, it's optional. So you can create one pixel format that's full screen and attach it to your windowed drawable; you can manipulate your content in a window, like maybe a Keynote kind of application where you manipulate your slide, and then you want to go full screen. When the user shifts to full screen, you use the exact same pixel format to create a context attached to a full-screen drawable, rather than having to tear down and recreate everything. This will simplify your code path.

Looking at some context-sharing code, this is an example of using a windowed and a full-screen drawable while sharing the same pixel format attributes. First we create a pixel format with the full-screen attribute. Then we create the same pixel format without full screen. We get the main device to show you that both choose-pixel-format calls use the same device. And you'll notice that in the second create-context call, the final parameter is the AGL context from the first call. That's what sharing a context looks like in code: in this case, the two contexts will share the object resources, those five things we talked about. There are examples of this in some of the samples on the web.
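The on-screen code isn't reproduced in the transcript, but a minimal sketch of the setup just described might look like the following. This assumes the AGL headers of the era; the attribute lists and variable names are illustrative, not the actual slide code:

```c
#include <AGL/agl.h>

/* Full-screen and windowed attribute lists differing only in AGL_FULLSCREEN. */
GLint attribsFS[]  = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 16,
                       AGL_FULLSCREEN, AGL_NONE };
GLint attribsWin[] = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 16,
                       AGL_NONE };

/* Constrain both pixel formats to the main display so they have the same
 * virtual screen configuration and are therefore allowed to share. */
GDHandle display = GetMainDevice();
AGLPixelFormat pfFS  = aglChoosePixelFormat(&display, 1, attribsFS);
AGLPixelFormat pfWin = aglChoosePixelFormat(&display, 1, attribsWin);

AGLContext ctxWin = aglCreateContext(pfWin, NULL);
/* The second context passes the first as its share parameter: texture
 * objects, display lists, vertex/fragment programs, and vertex array
 * objects are now shared between the two contexts. */
AGLContext ctxFS  = aglCreateContext(pfFS, ctxWin);
```

Error checking (NULL pixel formats or contexts) is omitted for brevity; a real application would test each return value.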

Full-screen anti-aliasing, something that people ask about. How do you do FSAA, or full-screen anti-aliasing, on the Macintosh? Is it not supported? I don't see a full-screen anti-aliasing button in a control panel. Well, what full-screen anti-aliasing actually uses is the ARB_multisample extension, which is the standard way of supporting full-scene anti-aliasing in OpenGL.

And I say scene rather than screen because it's not applied on a per-screen basis; it applies to the scene in a given drawable. So you can have one window that's anti-aliased and one window that's not anti-aliased. The extension spec has the specific details on how it works, but let me just say the setup is pretty simple.

You're going to basically create a pixel format and add a couple of items to it, samples and sample buffers, that say you want to do full-scene anti-aliasing. Then you're going to enable multisampling with glEnable(GL_MULTISAMPLE_ARB). And then, optionally, you can send a hint of GL_NICEST or GL_FASTEST to tell the driver whether you prefer the best-looking possible multisampling or the fastest possible for the number of samples you've picked. The hint is an NVIDIA extension, but it won't hurt to call it on any card; it's not going to break your application or get it rejected. The driver will just ignore the setting if the card doesn't support it. As for the code, we'll get rid of the stuff we've already seen, and it's really simple here. AGL_SAMPLE_BUFFERS_ARB is always going to be one. AGL_SAMPLES_ARB we set to four. In this case I added AGL_NO_RECOVERY, which basically means I don't want a software backup, because the software renderer does not support multisampling at this point. No recovery says to only give me hardware renderers, and you can read more about this in one of the Q&As that was updated, on multisampling and context selection. Then glEnable(GL_MULTISAMPLE_ARB), and glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST) if I want to use that, and that's how to set up full-scene anti-aliasing.
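As a sketch of what that setup looks like in code, assuming AGL and the GL extension headers of the time (depth size and other attributes here are illustrative):

```c
#include <AGL/agl.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

GLint attribs[] = {
    AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 16,
    AGL_SAMPLE_BUFFERS_ARB, 1,   /* always one sample buffer            */
    AGL_SAMPLES_ARB, 4,          /* four samples per pixel              */
    AGL_NO_RECOVERY,             /* hardware renderers only; the software
                                  * renderer has no multisample support */
    AGL_NONE
};
/* ... choose the pixel format and create the context as usual ... */

glEnable(GL_MULTISAMPLE_ARB);
/* Optional quality hint. This token is an NVIDIA extension; drivers that
 * don't support it simply ignore the setting, so the call is harmless. */
glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);
```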

For the final item, we'll talk about render to texture. There are three ways to do render to texture. First is surface texturing. We've had that in Jaguar: the aglSurfaceTexture, glutSurfaceTexture, and NSOpenGLContext createTexture APIs, and you can use those for surface texturing.

Added for Panther are pbuffers. It started as a Windows API, a WGL extension, and prior to this it was also supported on Linux. We've taken that extension, taken the meat out of it, and basically implemented an API that corresponds to it. We couldn't implement it exactly, because it deals with things like HDCs and Windows-specific drawable code and formats. So what we've done is make the setup a little bit simpler, but keep the functionality the same. It's more robust than surface textures; it allows you to do more things. In the end, what it lets you do is create an accelerated offscreen to render into, and then use that rendering as the source of a texture. It's supported in AGL and CGL, and soon in NSGL: the Panther seed that you have does not have that support, but by ship time we should have pbuffer support in the NSGL version of the code. The final method is superbuffers. We're following closely with the ARB working group on that, working directly with them, and when the superbuffers extension is finalized, shortly after that we should have our implementation.

So pbuffers, we talked a little bit about them: generalized pixel buffers. A pbuffer can be the target of rendering. In that case, you're going to use commands like aglSetPBuffer or CGLSetPBuffer, which basically say, hey, this is what I want to render into. Think of that as a set-drawable call: you have an offscreen, and calling this does your set drawable. Then, when you want to render from it, you're going to use aglTexImagePBuffer or CGLTexImagePBuffer. Think of that exactly as a glTexImage2D call that sources from the pbuffer. You can even use cube maps with pbuffers and do cube map texturing. And finally, the flow is going to be: you create it, you draw to it, you bind to it, and then you texture from it.

So, a code example. I'm not going to go through all the nuances here, but understand that this is like things you've seen before. For creating pbuffers there's an aglCreatePBuffer call, which is new, and then there's an aglSetPBuffer call, also new, but you'll notice these are very similar to the other APIs you've seen and used before. You're going to draw to it. When you're finished drawing to the pbuffer, you use a glFlush to flush it, and then I set my current context to NULL for safety, to make sure I'm not drawing to my pbuffer when I don't intend to.
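A hedged sketch of the create-and-draw half of that flow follows. This assumes AGL on Panther; `pbufferCtx` and `drawScene` are hypothetical names standing in for your own context and drawing code:

```c
#include <AGL/agl.h>
#include <OpenGL/gl.h>

AGLPbuffer pbuffer;
/* Create a 256x256 RGBA pbuffer usable as a 2D texture (new in Panther). */
aglCreatePBuffer(256, 256, GL_TEXTURE_2D, GL_RGBA, 0, &pbuffer);

/* Point the rendering context at the pbuffer: think of this as a
 * set-drawable call for an accelerated offscreen. */
aglSetCurrentContext(pbufferCtx);
aglSetPBuffer(pbufferCtx, pbuffer, 0, 0,
              aglGetVirtualScreen(pbufferCtx));

drawScene();                 /* render whatever will become the texture   */
glFlush();                   /* finish the drawing before texturing from it */
aglSetCurrentContext(NULL);  /* safety: no accidental draws into the pbuffer */
```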

So if I don't have a texture ID created yet, I'm going to generate a texture, bind to that texture, and set linear as the filter parameter here, because I don't want any mipmaps. Pbuffers do support mipmaps; in this case I'm only showing an example without them. Then I'm going to texture from the pbuffer. Once I have that texture name established, I just bind the texture and use the tex-image call on the pbuffer directly, without recreating anything. And then when I tear down, I delete the texture, destroy the pbuffer, destroy the context, and destroy the pixel format, like we've seen before. Let's show a quick example of that.
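The texture-from-pbuffer half he describes might be sketched like this (again AGL; `windowCtx`, `pbuffer`, `pbufferCtx`, and `pf` are hypothetical names carried over from the setup):

```c
#include <AGL/agl.h>
#include <OpenGL/gl.h>

static GLuint texID = 0;
if (texID == 0) {
    /* First time through: create the texture name and set linear filtering,
     * since this example skips mipmaps (pbuffers do support them). */
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_2D, texID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
glBindTexture(GL_TEXTURE_2D, texID);
/* Stands in for glTexImage2D: source the texel data from the pbuffer. */
aglTexImagePBuffer(windowCtx, pbuffer, GL_FRONT);
/* ... draw textured geometry ... */

/* Teardown, in the order described: */
glDeleteTextures(1, &texID);
aglDestroyPBuffer(pbuffer);
aglDestroyContext(pbufferCtx);
aglDestroyPixelFormat(pf);
```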

So this example was just a square. And then what I did was take the Stanford bunny and render it, let me actually do that, render the bunny into a pbuffer. And you can see from the top that it's actually a flat surface: each face of the cube has the same bunny rendered on it. You can do a lot of different things, any kind of rendering you want, as a source here, and then texture with it in any way possible. For example, you could render cube maps, render a full reflection map, and put it into the pbuffer.

Go back to slides. So I do have one more quick thing to show you. I think we're a little bit into the Q&A time, but I'd rather show you this, and we can hang around for questions if people don't get enough answered at the end. I showed you that demo earlier. We can go back to the demo machine, sorry. I showed you the demo earlier of how to create a simple OpenGL sample. Well, thinking about it after Peter's session, we could take that sample and fairly quickly extend it to use another method of getting text and high-quality images on the screen with OpenGL, which is CG on OpenGL. So we took that sample that I created in a few minutes before, and we added the code required to do that. Let me go to the bottom and show you what code we added. This little section of code here basically creates a CG context on our OpenGL view: we create a color space, and then we use the new call, CGGLContextCreate, which actually creates a CGContext based on the OpenGL context we've created, to draw CG into. Let me shoot back to the top. Then we added a lot of code in this first section, but this is all CG code to draw something more interesting than nothing. All of this is just CG drawing code; I thought drawing one line on the screen would be pretty boring.

So the key here is this CG draw routine, which does the fills and the strokes and those kinds of things. The key call is at the very bottom; let me scroll up so you all can see it. It's a CGContextFlush. What this does is flush the CG drawing out, so that when you flush the OpenGL context into the swap, you actually get the CG content updated. Additionally, going back to the bottom, we added that one little piece of code, we flushed our GL drawing, and then we just added the draw-CG call to the OpenGL drawing routine. This is the drawRect routine; it's exactly the same code, no changes. So I added the CG drawing, I added the CG create, and I added the draw-CG call to call that CG drawing. That was it. And what I end up with, to save time I already built it, is kind of the Ovaltine sample with the spinning square in the background. All the shapes in front were drawn with CG. It's doing good transparency and good blending, with its premultiplied alpha, over the square, and you can easily add CG overlays or CG content to your OpenGL views and windows.

Just wanted to show you that, show you how easy it was to use those new routines. So back to the slides. So again, we talked in the introduction a little bit about OpenGL. We talked about the interfaces. We then talked about some techniques that everyone can use in their apps, and we talked about some tips.

Again, I want to point out that there is sample code out for almost everything we showed here. There is one Q&A that I'll be posting later this week, and the pbuffer sample will be posted probably at the very beginning of next week or shortly thereafter; it's complete, it just needs to run through the posting process. Everything else is on the web, available for your use, so you can go look at it today.

The best place to go for more information, though, is developer.apple.com. There are links to sample code, there are links to documentation; it's a good central repository. Another good place for OpenGL in general is the opengl.org website. And I'm going to shoot through here: there's some documentation I'll link to, the Q&As I referenced, the tech note I referenced, and all the samples that were referenced. They're all on the web, all listed on the OpenGL website. And then I'm going to bring Travis up to talk about the roadmap, and we'll take some questions. Thank you, Geoff. Thank you.

So real quick, what I want to do is pop through the rest of the graphics and imaging track we have for you here at WWDC and focus on the OpenGL-related sessions. Next, interestingly enough, in this hall, immediately following this presentation, is a special presentation that was not in your show guides, and that is the Technology Magica Keynote session. This is where the lead engineer on the Keynote product, our presentation package, is going to come and essentially talk shop about the application: what technologies they adopted and what obstacles they overcame in delivering the application. And the interesting point is that that app is a heavy client of OpenGL technology for a lot of its transitions and 2D effects. Then obviously we have the Image Capture update, where we talk about our scanning and digital camera support API inside the system.

So then we sort of dive into the deep end of the pool with OpenGL. Starting on Wednesday, we have Vertex Programming with OpenGL. A big theme that we talked about in the graphics and imaging overview session was programmability, harnessing the GPU to do interesting things. So we have a session on vertex programmability, and, if you notice, on Thursday we have a session on fragment programmability. These are really key sessions if you want to be at the cutting edge of the evolution of both 3D and 2D graphics using the GPU.

We're also going to talk about Quartz 2D in depth on Thursday as well. Another big announcement that we made in the graphics and imaging overview, and Geoff did the quick demo, is the ability to take our 2D drawing API, Quartz 2D, also known as CG for short (that's a whole different story), and point it into an OpenGL context. That's what the Ovaltine example that Geoff showed you at the end was about.

And then we have a key session. If you're developing any OpenGL applications on Mac OS X, you want to attend Session 209, which is OpenGL Optimizations. You're going to learn tons of information about how to make OpenGL applications run as fast as possible on the platform, and you'll learn a lot about our enhanced OpenGL Profiler application. Then I want to jump down to Cutting-Edge OpenGL Techniques, on Friday. This is going to be a great session where our hardware partners from ATI, essentially their demo guys, are going to come and tell us how they do a lot of the absolutely cutting-edge effects in their demo applications.

It may be very interesting for you guys to learn from. They're going to talk about all sorts of different levels of programmability, vertex programmability and fragment programmability. And then obviously we have a feedback forum on Friday. So if you want to, you can contact either of us. I can be contacted at [email protected]. And Geoff also answers developer questions; he can be found at [email protected].