Graphics • 1:04:30
Learn the techniques and features that allow you to take full advantage of the high-performance integrated graphics pipeline in Mac OS X. This session focuses on Mac OS X's OpenGL architecture, platform-specific features, and advanced capabilities. A must for any developer new to Mac OS X or OpenGL and for those who want to stay up-to-date with the latest OS-level features.
Speaker: Geoff Stahl
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Welcome to Wednesday at WWDC. This is session 209, Mac OS X OpenGL in Depth. I'm Geoff Stahl, and we're going to be going into OpenGL on the Macintosh today. So that's me. First, let's talk a little bit about what we will and won't talk about at this session. This session will not be an OpenGL 101 session. There are a lot of references for that on the web, and there are a lot of references in books and those kinds of things.
At WWDC, we really want to take the time to go through what differentiates the Mac OS and what's important for you to know about the Mac OS, not to try and teach you exactly how to use OpenGL in a day or in an hour, which is probably a difficult task. So maybe you're interested because you just heard about some of the uses of OpenGL, you've seen the keynote, and you say, hey, wow, that's a great technology I want to work with.
One thing you can do is you can go to the OpenGL.org site. Great reference, has a lot of documentation on there, has news of what's going on with OpenGL, and has links to all the specifications and all the extension registry with the extension specifications. And this is a key reference for you when you're working with OpenGL, working with an extension, or working with the API. Also, again, if you're a beginner and you want to learn a little bit about OpenGL, Google's a great reference.
OpenGL tutorial: I went and did that the other day, and there's some great stuff there. Really well put together tutorials for just starting out with OpenGL, which will show you how to get going. And a lot of them actually have Mac OS implementations or Mac OS code for you to use, so you can immediately get going on the Mac. Also, if you're starting with OpenGL, there's the OpenGL Programming Guide, affectionately called the Red Book, which is basically a step-by-step tutorial covering what's in OpenGL. There are also things you should use as references, like the OpenGL specification.
And the Blue Book, which is actually a reference guide. I would not recommend that as a learning tool, but while you're using OpenGL, it's definitely something you should have on your desk or on your computer in PDF form. So now, what will we cover? What we will cover is OpenGL on the Mac OS.
First, I'm going to give an update and talk about what's new on the Mac OS since we talked last year. Then I'm going to move into OpenGL on the Mac OS and talk a little bit about architecture, and talk a little bit about OS-dependent data, which is something that may be new to many of you.
This is kind of this mystic data that you need to understand, things like virtual screens and pixel formats that you may be using in your OpenGL applications, but you may not quite understand exactly how they fit in and exactly how you can best use them to your advantage to write great apps.
And then we'll talk about interfaces, and that'll be kind of a review of some of the interfaces you can use, depending on where you're coming from and what kind of application you have. Finally, I'll wrap up with a few techniques to take advantage of some of the great things you can do with OpenGL and the Macintosh.
So let's start with the OpenGL update, where we've been and where we're going. If you're familiar with OpenGL and the Macintosh, one thing you'll notice is that our updates are slightly different from the regular OS updates. For a regular OS component, you may see a major feature increase in Panther, and then if there's a bug or two, they'll fix it in the software updates.
Well, because of the GPU, and because of the driver vendors, and because of the rate of increase of hardware capabilities, we're continuously trying to rev OpenGL, trying to get the features that you guys need as soon as possible. And what this means is that our update cycle has used software updates an awful lot to get features to you.
Just recently, Mac OS X Panther 10.3.4 was a software update, and there was a major OpenGL update there, and you'll continue to see OpenGL updates as we move forward. So what you have right now on the disk, on the Tiger seed, is actually the current snapshot of OpenGL.
As we move forward, we'll try to get more snapshots, more seeds, and more updates out as we move toward Tiger. So expect some of the features I'm going to talk about to start rolling out as we move closer to Tiger. So what have we been doing in the past year? Well, a lot. There are a couple of things that we've been busy with.
And since you probably can't read all this and take it in, let's break it down into three different categories. We'll talk about new features, we'll talk about performance, and we'll talk about some of the bug fixes. And at the end, we'll talk about looking forward and where we're looking to go.
So, in Panther, a couple of key features were added which you can take advantage of. First, pixel buffers. Pixel buffers basically adapted the industry-standard off-screen accelerated rendering API, with a little bit of Mac OS flavor to fit into our API, since it is a windowing system API and has to do with the windowing system. It allows you to do things that you couldn't do before with off-screen rendering. What this means is you do not have to bring a window up.
You can do all your calculations off-screen and then look at the contents. Things like Core Image can use pixel buffers to great advantage: do all the calculations off screen, get the acceleration of the GPU, and then get the result and use it in any way you need, from using it as a texture for a different image to just displaying it or saving it off as maybe a movie or something like that.
Other things we added were floating point pixels. We didn't make a really large deal about them. It's on our web page that we have the extension support, and you'll notice in some of the pixel formats that I'll talk about later, you can add floating point pixels. What this means is you can actually do calculations and have the result be floating point numbers instead of just RGB coordinates.
This means you can render a data set. We're going to talk about shading later. You could take a vertex or a fragment shader, take the results of a rendering, it might be in floating point pixels, and use that as some kind of data set. So you could render a procedural bump map or something like that.
The sky's kind of the limit as far as using floating point pixels and the precision you get there. Also in Panther, we added a lot of GLUT enhancements, which I'll go over a little bit later, which allow folks, especially in the scientific community, to take advantage of some of the unique capabilities of Mac OS: some of the desktop capabilities, some of the integration with other peripherals. Since Panther, we've done a lot of work in the software updates. You can see a list of some of the extensions we've added, some of the features we've added. This is just the features, not even talking about performance updates.
What we're trying to do is roll out these features as the ARB approves them, as they're available in drivers, and as we're able to implement them, and get them out to as wide an audience of users as possible, as soon as possible.
So what that means is you'll see things like this continue as we move forward. Things added in software updates: occlusion query, and vertex buffer objects, a much-requested addition to vertex array range, a much-requested way of optimizing the transport of vertices up to the graphics card. And as I go through the presentation, I'll note some other presentations you might be interested in.
Tomorrow at 10:30, and I'm not sure of the room, I believe it might be Nob Hill, but I'm actually not sure, is an OpenGL optimization session. And then following in the afternoon, there's a session actually showing you how to use some of our tools. We'll talk specifically about some vertex buffer object topics and talk about how you can use that in your application. Things like point sprites, non-power-of-two textures, depth bounds test, blend equation separate.
And texture mirror clamp. These are all extensions and features that were added to OpenGL since Panther shipped. One of the main things in Tiger, which we've kind of already announced, will be the OpenGL Shading Language. We're working really hard on OpenGL Shading Language support.
You should see it coming to you soon so you can start working with the shading language. And that will be full support for the shading language, depending, of course, on whatever hardware you have supporting it. But there are also hardware and software paths for that, so you should be able to support the full API in your application.
So, performance. Again, we've been really busy. One of the key things we've heard is that you really want as much performance as possible out of the API. For Panther, that meant things like static geometry optimizations with display lists. Since we're the largest Unix vendor out there, a lot of scientific applications, molecular modelers, for example, have come to the Mac OS and said, hey, I really use a lot of static geometry. So we've optimized display lists to process into vertex array ranges, and this allows you to use static geometry in the optimal form. Improved hardware acceleration for texture copying: glCopyTexImage, VRAM to VRAM, stays on the card, improved speed there. Optimized color conversion and texture compression.
Things like texture compression, which many people look at and think, oh, I don't want to use texture compression. That's that ugly thing that you get poor textures out of, and it really won't make my app look good. Well, it turns out that some of the texture compression schemes now are good enough that you can have less data and higher quality images by allowing you to use larger images in your app than you could before, given your data size constraints. So texture compression, something to look at, especially if you're dealing with large amounts of image data.
Streamlined mipmap generation. Again, let's accelerate that, let's get it as fast as possible, so you can get the highest quality images and the highest quality texture handling. Panther software updates: since Panther shipped in the past year, we've done things like more immediate mode optimizations. Again, scientific applications coming to the platform need to use immediate mode because they have to stay cross-platform source-compatible, so they don't always have the option of changing to vertex array range or display lists. They may want to stay in immediate mode.
So we've optimized that. We've improved handling of pixel data throughout the system, working very hard to ensure that we have an optimum pipeline for getting the pixel data through there. We also have vertex program emulation, robustness, and speed enhancement. What that means is, we understand that some of you want to run vertex programs and you don't have the hardware support, but you want that to be your only path. We understand that. We're working to improve that as much as possible to get you the best CPU support for running vertex programs.
And then obviously, one thing we always look at is bandwidth improvements throughout the whole system. And last, but not least, is the addition of asynchronous texture fetching as a feature. John, in a session tomorrow, will talk about options for getting textures back from the card. So the GPU is your coprocessor; there's a second processor on the system. You have a CPU and a GPU, and you want to load balance those. And to do that, you need to be able to get the data that you process with the GPU back off the card.
Asynchronous texture fetching, in keeping with OpenGL being an asynchronous API, will allow you to start the retrieval, use the GPU to push the data without using CPU power, and then later get the data once it's retrieved. So your application does not have to block at the point where it's trying to copy the data back, even if you're moving large amounts of data back and forth.
So Tiger and future software updates. So what we're looking for as we move forward, we're going to obviously continue to improve the resource handling. What you may have gotten from all the talks that we've done is that there's a lot of clients of OpenGL right now. We have tons of people who are using OpenGL on the system from your applications to the Windows server to core image to core video. So, you know, lots of things you see on the screen are now running through OpenGL.
So it's really important to us, working from an OS standpoint, to make sure the resource handling is optimized, which benefits you guys. Because that means you have an optimal resource handling for your application. You gain those same benefits. And we're going to also look to optimize Vertex and Fragment Program and Shader implementation. So as we move forward with the GLSL, the OpenGL Shading Language, and with Vertex Programs and Fragment Programs, the Assembly Language versions of that, we're going to look to optimize that pipeline and ensure that we have software support for those and optimize software support for those.
And then there's the announcement we made the other day of new GPUs: the NVIDIA GeForce 6800 Ultra is a great card to have on the platform. And we'll continue to work with both ATI and NVIDIA as their new products come out and get them to the Macintosh as fast as we can for you guys.
So, bugs. There are always bugs that you guys run into: a corner case, something that you may be doing differently than what we're doing. We want to hear about that. We've fixed hundreds of bugs in the Panther software updates, but one thing we want to do is make sure that we know what problems you have. So this is my call-out to you guys: if you have a bug, make sure it's in Radar. Make sure you file it in the bug reporter on the ADC website so that we know about it.
If you have questions on it, you've probably seen our emails on the website, so you can always query us either through the website or directly and see what the status of things is. We want to know what problems you're having so we can correctly prioritize and make sure we're covering the things that are causing you problems.
So let's look forward. Looking forward from today toward the shipping of Tiger and beyond, we feel we now have a mature system. We're going to continue adding features and hardware support, but we really want to focus on quality. We have new test suites that we're implementing on our end. We have unprecedented use in Tiger, as I've talked about: things like Core Image, Core Video, Quartz Extreme, and iChat AV are all built on top of OpenGL. We can't build these apps, and these apps can't be reliable, unless we have a quality, bug-free OpenGL stack. It's really important to us, and we're working really hard on it. I just want to make sure that you guys know that's what our focus is. Get us the information, and we'll continue to try and fix any problems you have.
So, new features. We're participating in the ARB working groups. You're seeing things like, I'm not even sure what the spec's final name will be, but render targets, or Uber buffers. We're working very closely with those working groups to ensure the spec is a good spec, a good spec for developers to develop on.
And as soon as things like that are ratified, we're going to start implementing them as the request and need arises for the different specifications. If you have specifications or extensions that you're particularly interested in or that are particularly beneficial to your application, please let us know. Go to the bug reporter and file a feature request bug, and it will let us know that your application is either blocked by this feature or that you're interested in using it in the future. One thing about the future: when is the future for an application? You may all be working on applications right now. That's great. We want to hear about bugs. We want to hear about features. But obviously there's some lead time for us to implement things.
Think ahead. Think about your next application. Hey, cool. I'm going to use that thing. I'm going to use the shading language. I'm going to use render targets in my next application. It's really important to me. I need that on the system. Let us know ahead of time. That will give us time to implement things and to get caught up with where you guys are, where you guys want to be, and what you guys want to use in the Mac OS.
So that's the end of the update. That's kind of where we are; we brought everyone up to speed so we know where we came from since last year. Now let's step back and talk a little bit about the system itself. One thing that we haven't really covered before, though I've touched on it in previous years when talking about interfaces and a few techniques here and there, is the architecture of OpenGL on the Macintosh: what makes it great, what makes it different from architectures you may be used to dealing with, and how you can take advantage of that.
So this is my big sentence here. It's a multi-client, multi-threaded, multi-headed, high-performance graphics API with virtualized resources. So it's not your parents' graphics API, nor is it our competitors' graphics API. The key here is that there are a ton of clients using it, it has multi-threading capabilities, we're trying to eke all the performance we can out of it, and there are things you can do in your application to make your app run great on OpenGL.
So we're going to talk about the OpenGL driver model, talk about how it's architected and how that matters to your application. And then we're going to go into a little bit about the framework model and how the framework model will fit with your application. So even if you're not using OpenGL, it gives you an idea of where you need to start.
So first thing I want to talk about is a little bit about the framework interface and then the interface of how this fits into the driver model. So the way you read this particular slide is your application will always use OpenGL if you're an OpenGL application. That's kind of by definition.
The second part on your left side is what interface are you actually accessing, what windowing system-dependent interface. And the reason I have stacked it like this is to show you, just for your information, kind of where things are, what things are built on what things. So at the bottom layer, we have CGL. CGL is kind of the core OpenGL interface. It's the base windowing system interface. And applications can use this directly. The applications like a full-screen application might use this directly.
Built on top of that are both AGL and the NSOpenGL implementations, both NSOpenGLContext and NSOpenGLPixelFormat. If you're in a Cocoa application, you want to look at the NS classes there. If you're a Carbon application, AGL is where you want to go. But this also says you don't need to use CGL. It's not required that you use the lowest level; you can use that next level up.
Or further, if you're an AppKit application, a Cocoa application, and you really want to use OpenGL but you don't really want to get into the context and pixel format, you can use NSOpenGLView directly. And what that allows you to do is skip having to handle the pixel format or context yourself.
You can use Interface Builder, use the NSOpenGLView widget, and just directly link that into your application. So it simplifies the process of getting up and running. That's something you should look at if you're a Cocoa application and you don't have a lot of special constraints.
Above that, GLUT was actually built on top of our Cocoa interface. So GLUT is a framework, but it's really kind of an application, or a client, of the NS classes. One thing I want to mention about GLUT right now is that it's our full GLUT implementation.
Besides being a framework, we also have the sample code up on the sample code site. We've released all our code for GLUT, so if you ever want to see how we do things or how things are done in the GLUT application, you can always download that and look at it. So again, your application will reference some windowing system API: GLUT, NSOpenGLView, NSOpenGLPixelFormat and NSOpenGLContext, AGL, or CGL directly. It will reference that, and it'll also reference OpenGL when it hooks into the system.
[Transcript missing]
These then hook into the rasterizers, and the rasterizers directly hook into the hardware. It's interesting to note I have hardware down here on the bottom, and you might say, well, the software renderer is not hardware. Well, the software renderer obviously has a CPU as its hardware. So, again, we're talking about GPUs and CPUs being kind of first-class citizens here.
ATI and NVIDIA renderers work particularly with their GPUs. Our software renderer works with a CPU. It might not be the best thing for all applications because you're using a lot of CPU resources up rendering, but in some cases, that may be what you want to do. You may want to use a software renderer either for capabilities that it has that some other renderer may not have, or as your fallback, and it will just move seamlessly onto the CPU.
So I've thrown out some terms here like pixel format, I've talked about context, and I've talked about screens and moving between monitors. Those things are based on OS-dependent data. These are things you won't find on other platforms. You may find there are versions of them, but you won't find these particular definitions, and these definitions are a little Mac-specific.
I mean, while context may exist on a Windows platform or may exist on a Unix platform, the context has some specific things you can do on the Mac platform. So what kind of things are we talking about here? Well, we talk about virtual screens first. Virtual screens are a great thing because you mention virtual screens to someone and you see a lot of heads nodding.
Oh, yeah, virtual screens. No one understands what you're talking about because virtual screens are this mysterious bucket. It's a parameter for like choosing a pixel format, but I just throw null in there all the time because I don't know what I'm doing, you know, that kind of thing. And you're not alone because everyone does it.
You know, this is some of the things when I'm putting this presentation together, I have to go back and look at the reference material and ensure that I have it correct for the presentation. Because it's something that, it's one of those things you use once and you don't care about. But there are some powerful things you can do with virtual screens and I'll talk about those in a minute.
Pixel formats: I think anyone who's used OpenGL knows roughly what a pixel format is. But we'll talk a little bit about what a pixel format really is, how you can think about it, and how you can use it to your advantage in your application. Context: the context is basically the state bucket for OpenGL; if you've used OpenGL, you understand contexts and controlling the rendering. And finally, we'll talk about drawables and how to think about drawables. These four things are the only real OS-dependent data you need to know.
The virtual screen talks about hardware and what renderer you're running on. The pixel format is a specification list, basically, for your application to say what you want as far as capability. The context is your OpenGL state for your rendering. And a drawable is where you're actually drawing the pixels to.
That's really the top level. That's all you really need to know. But let's talk about the details so you can pick up some more. If you remember nothing else from the presentation, remember that. And I think when you look at the APIs, you'll understand more about how they work and how they fit together, and it'll make more sense.
Just one note on this. I'm going to use CGL calls for the presentation, just for consistency's sake, so I don't have to list multiple API versions up here. But this applies to AGL or NSOpenGL equally; I just chose CGL for consistency in the presentation.
So let's talk about OS dependent data flow. Again, something that people may not understand. So you, again, we'll start with the application. In this case, we have a slightly different color application, but it's still the same application. So it actually, what is it first going to do when it uses OpenGL? It's going to create a pixel format.
Pixel Format's this thing you give a little list of attributes for, you give it some virtual screen specifications, and you're done, right? You don't need to worry about it. Well, what does that do? That has two pieces of information that are important. One piece of information that's important is your renderer attributes, and that determines your virtual screen list. And then the second piece of information is your buffer attributes, which determines your surface definition. So buffer attributes are things, do I want a depth buffer? How deep is my color buffers? And that basically makes a definition of what you'll accept for a surface.
So, for example, if you say I want a depth buffer, if a renderer didn't happen to have a depth buffer, that's going to prevent it from being selected. For your renderer definition list, you can do things simply like accelerated, software renderer, you know, you can say generic renderer, or you can pick a particular vendor. If, for example, you know that your application only runs on an ATI system, you can pick ATI, or the same for NVIDIA.
If it only ran on an NVIDIA system, you could actually specify in your pixel format that I'm only going to accept an NVIDIA or an ATI renderer. So those are renderer attributes. So renderer definitions. What that'll do is it builds a list of possible renderers, which are basically possible virtual screens.
Then you build the context. You need the pixel format to build the context, so the context basically is attached to the pixel format. And what the pixel format is going to do is transfer this information to the context. So when you think about it, you can say, aha, now I understand why, when you share things between contexts, they have to have the same virtual screen list. The virtual screen list, that pixel format thing, defines what renderers can be used. So for example, if you want to share textures, you have to have renderers that are compatible.
You have to have a list of renderers where the texture can basically be rendered in the same places. Moving forward, the drawable comes in. And a drawable can be anything. It can be a window, it can be an off-screen, it can be a pbuffer, it can be a full screen.
Drawables aren't special. Set pbuffer, set drawable, those are basically the same kind of call. It's basically setting the information that the drawable has into the context when you attach it. And when you attach it, the drawable contains the surface dimension information.
And what happens is, you still have the list, but now instead of looking at a list of possible virtual screens and definitions of surfaces, you actually create surfaces. When you attach, hardware surfaces are created, and those surfaces are created both from the pixel format information and from the drawable size. So you've then allocated your buffers.
When you actually set the drawable, you also set a current renderer. That takes the virtual screen list, looks at what the characteristics of the drawable are, whether it's on a screen, off-screen, or whatever, and then selects an appropriate renderer. So for example, if you attach to an off-screen, you're going to select a software renderer. Or if you attach to something that's on a monitor powered by an ATI card, you're going to select an ATI renderer. Attaching the drawable does that for you.
And then finally, let's just look at the actual flow of data through it. You take the application, and you issue an OpenGL call. It goes to the context. The context is the OpenGL state; it applies that state, and the context also knows what renderer it has. So it sends the commands to a specific renderer, which then draws into the target surface, and you see it on the drawable.
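To make that flow concrete, here's a minimal sketch of the same sequence in CGL, pixel format, then context, then drawable, using a full-screen drawable so there's no windowing code; the attribute choices are just illustrative, not from the session slides.

    #include <ApplicationServices/ApplicationServices.h>
    #include <OpenGL/OpenGL.h>
    #include <OpenGL/gl.h>

    /* Illustrative only: pixel format -> context -> drawable -> draw. */
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAFullScreen,                              /* renderer attributes        */
        kCGLPFADisplayMask,
        (CGLPixelFormatAttribute)CGDisplayIDToOpenGLDisplayMask(kCGDirectMainDisplay),
        kCGLPFADoubleBuffer,                            /* buffer attributes          */
        kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;                                     /* number of virtual screens  */
    CGLChoosePixelFormat(attribs, &pix, &npix);

    CGLContextObj ctx = NULL;
    CGLCreateContext(pix, NULL, &ctx);                  /* context gets the virtual screen list */

    CGCaptureAllDisplays();                             /* take over the display      */
    CGLSetCurrentContext(ctx);
    CGLSetFullScreen(ctx);                              /* attach the drawable: surfaces are
                                                           allocated, renderer is selected */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); /* GL calls route through the context
                                                           to the chosen renderer     */
    CGLFlushDrawable(ctx);                              /* swap to the drawable       */

    CGLSetCurrentContext(NULL);
    CGLClearDrawable(ctx);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    CGReleaseAllDisplays();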
Again, this is kind of a logical definition of it; the actual data flow is not quite like this. But you can see logically how it works and how you can take advantage of this in your applications. So let's go back and talk about specifics of the things we just showed. Virtual screens. A virtual screen is a renderer associated with a pixel format. The reason I say it's associated with a pixel format is that virtual screen numbers aren't absolute. A pixel format just has a list of them: 0, 1, 2, 3, 4.
If you have another pixel format, you also have a list 0, 1, 2, 3, 4. It doesn't mean that 0 and 0 are always the same one. This may be an error that someone can easily make because probably they will likely be the same one. But there's no guarantee between two pixel formats that a virtual screen of a certain number is exactly the same virtual screen. So a pixel format may have multiple virtual screens. You can get this by virtual screen count. So you can actually find out how many virtual screens or how many renderers that pixel format has through virtual screen count.
Normally, anyone out here with a PowerBook who wants to run their OpenGL application will see, when they call choose pixel format and look at the list, that they get two: they'll probably get a software renderer and they'll probably get a hardware renderer. We're going to prefer to use the hardware renderer, but you'll also have the software renderer in the list. There will likely be two in most cases.
So the context's current virtual screen, the screen that is actually current, which the get virtual screen call will give you, is actually associated with the current renderer. So if your current virtual screen changes, your renderer has changed. That's a key point here. Let's think about this. If you're in an app and you have a window on one screen and you drag it across to the other screen, and you look at the virtual screen as you drag it and you see a virtual screen change, that means you actually changed renderers. That means your capabilities changed and you should handle that. But it also means there's only one renderer at a time; you're only ever referencing one particular renderer.
And the last caveat there is that, with dual-headed cards, you could obviously have two 30-inch monitors hooked to two GeForce 6800s, and both of those will be on the same virtual screen. So dragging a window between those two displays, you'll never see a change. Your application doesn't have to do anything. Everything's seamless, works perfectly. No work required on your end.
So let's talk about a little bit of developer notes about this. I kind of put some things together which may be used in your application, may be useful to you. Virtual screen change, as I said, equals a renderer change. So respond to this. Check settings. Check capabilities. You may have just lost the fact that you have GLSL shading language on one card and didn't have it on another card. And you may have just lost that capability, which would actually change the path you want to take in your application.
Sharing context. There's a tech note on, or there's a Q&A on this, and you might want to look at that if you're sharing context capabilities. But they must have the same, let me get this right, must have pixel formats with the identical virtual screen list. So what does that mean? You can create multiple pixel formats and create context from different pixel formats and share them, but they better end up with the same virtual screen list. The best way to do this is one of two ways.
Either use the same pixel format, which is guaranteed to be identical to itself, or actually specify all the things that control virtual screens, like renderer ID, accelerated, no recovery, those kinds of things, in your pixel format, and make sure they're the same between the two pixel formats you're using. That's another way of getting an identical list.
One thing you might want to do is get the current renderer ID. Some people want to look at their application and say, hey, I want to display it, or I want to give the user some feedback that they're rendering on the ATI Radeon 9800 card. For this, you can get the virtual screen, and if you keep the pixel format around, you can then do a describe pixel format to get the renderer ID. The renderer ID will then match up against the renderer IDs listed in our headers.
So if you wanted to specially code some handling in your app for different renderers, you can just check the renderer ID and understand which renderer you have. The caveat here is that you need to keep the pixel format around.
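A small sketch of that lookup, assuming ctx and pix are the context and pixel format you've kept around; the constant names are the ones I believe live in CGLRenderers.h, so verify them there.

    GLint virtualScreen = 0;
    CGLGetVirtualScreen(ctx, &virtualScreen);            /* current virtual screen        */

    GLint rendererID = 0;
    CGLDescribePixelFormat(pix, virtualScreen, kCGLPFARendererID, &rendererID);

    /* Compare against the IDs in CGLRenderers.h; mask off the low bits,
       which vary per chip revision. */
    if ((rendererID & kCGLRendererIDMatchingMask) == kCGLRendererGenericID) {
        /* we're on the software renderer */
    }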
So let's move to pixel formats now. We've talked about screens, we've talked about pixel formats. Pixel formats are basically the OpenGL renderer capability, buffer, and buffer depth specification. I've covered a lot of this stuff already, so I'm not going to spend a whole pile of time on these slides; I'd rather spend it on specific things you can do in your apps.
So let's go over some tips on selecting attributes. Say you want to require an accelerated renderer. For applications that want to say, I don't want a software renderer, what do you need to do? You need to specify accelerated, and you need to specify no recovery. What this says is: get an accelerated renderer.
Secondly, don't give me a software fallback. If this succeeds, you have an accelerated renderer, and there's no case where you ever fall back to software; you'll just stop rendering instead. So if you only want an accelerated renderer, use this. Next, say you want to force a software renderer, for example if you want to do a test. That's a good test for debugging.
Is it a renderer problem? Is it a problem with an ATI card or an NVIDIA card? Or is it our problem in the OpenGL stack and framework? In this case, what you can do is set the renderer ID to the generic renderer. That forces a software renderer, forces the software path, and it's a great test for your application to figure out whether a problem lies with your application, with a driver, or with us.
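As attribute lists, those two cases might look something like this (a sketch, not the slide code):

    /* Hardware only, no software fallback: */
    CGLPixelFormatAttribute hwOnly[] = {
        kCGLPFAAccelerated,               /* require a hardware renderer            */
        kCGLPFANoRecovery,                /* and never fall back to software        */
        kCGLPFADoubleBuffer,
        (CGLPixelFormatAttribute)0
    };

    /* Force the software path, e.g. for debugging a suspected driver issue: */
    CGLPixelFormatAttribute swOnly[] = {
        kCGLPFARendererID,
        (CGLPixelFormatAttribute)kCGLRendererGenericID,  /* the generic (software) renderer */
        kCGLPFADoubleBuffer,
        (CGLPixelFormatAttribute)0
    };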
Rendering to system memory. There's a little bit of confusion here. We talked about pbuffers a little bit at the beginning, and I'll talk about pbuffers again, but there's some confusion about off-screen versus pbuffers. Off-screen has been around forever in the attribute list, and it's probably been around forever in other operating systems' attribute lists too.
But off-screen, you can just say in your mind, equals software renderer plus system memory. If you want to render to system memory, not to VRAM, use off-screen. So think of off-screen as rendering into RAM, not as accelerated off-screen rendering.
[Transcript missing]
So let's talk about something new for context parameters. One thing we've added for Tiger is back buffer size control. This is especially for people with a video application, where they know the content is only a certain size. I know that in this case I have 720 by 480 content. Never going to change. Doesn't matter what the window system does. Doesn't matter how big the person drags the window.
720 by 480 content. There are a lot of applications that may have an image or something in the back buffer that is centric to the source material, not centric to what the window is doing. You can use this parameter, surface backing size, and enable it. What it gives you is a fixed-size back buffer with a variable-size frame buffer image; the image is scaled automatically on swap. So when the user drags the window larger, it doesn't mean all your buffers in the back are changing. That could be really helpful, especially in a video application or when you're source-centric.
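A sketch of how that might look; the parameter and enable names here (kCGLCPSurfaceBackingSize, kCGLCESurfaceBackingSize) are my reading of the CGL headers on the Tiger seed, so check them there, and ctx is your existing context.

    /* Fix the back buffer at 720x480 regardless of the window size. */
    GLint backingSize[2] = { 720, 480 };
    CGLSetParameter(ctx, kCGLCPSurfaceBackingSize, backingSize);
    CGLEnable(ctx, kCGLCESurfaceBackingSize);   /* scale to the window on swap */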
Some more context parameters. I'm just going to walk through these fairly quickly. Retrace synchronization: swap interval. If you need to sync to the retrace, the swap interval is your key. Surface opacity: transparency control. We've shown this before; I think there's a Boeing X demo on the sample site, and that shows surface opacity and transparency control. You can make OpenGL render above something on the desktop.
You render above things so you actually have a window there, but you're not seeing the actual window itself. So that's something you can use there. And drawing order. You can order the drawing above or below the actual kind of window surface. So you can use that for combining OpenGL with some other OS features.
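Quick sketches of those controls, assuming an existing context ctx and the CGL parameter names as I understand them (kCGLCPSwapInterval, kCGLCPSurfaceOpacity, kCGLCPSurfaceOrder):

    GLint vsync = 1;
    CGLSetParameter(ctx, kCGLCPSwapInterval, &vsync);      /* sync swaps to the retrace       */

    GLint opaque = 0;
    CGLSetParameter(ctx, kCGLCPSurfaceOpacity, &opaque);    /* 0 = transparent surface         */

    GLint order = -1;
    CGLSetParameter(ctx, kCGLCPSurfaceOrder, &order);       /* -1 = draw below the window,
                                                                1 = draw above it             */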
Drawables. So let's talk about drawables. Drawables are actually pixels in RAM or VRAM. So a drawable is going to be the window, the pbuffer, the off-screen, whatever; full screens also. Again, think about these as one common thing. They all kind of behave the same way. There are a couple of different routines you need to use for them, but they all have basically the same behavior.
One thing I wanted to highlight here is actually using drawables and displays. In this case, in this little code snippet, we use the CG call that gives us the OpenGL display mask, for the main display ID in this example.
So what we're doing is asking, what is the OpenGL display mask for our main display? We then use that in our pixel format attributes to say, that's the display we want to render on. You can see I used the display mask attribute here, with the display mask I got back from that function call, and then I used choose pixel format. That basically sets that as your display, so you can control the display that way.
And for example, you can use this little trick to actually get information about a display before even showing a window. No windows created here. You can create a context, set the context current, and you can make OpenGL calls like get string and those kind of things. And they'll actually get you valid data without actually having to draw a window.
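Here's a sketch of that trick, with the display mask fed into the pixel format and glGetString called with no drawable attached; the call names follow the CGL and CoreGraphics routines described above.

    /* Query a display's renderer without ever creating a window. */
    CGOpenGLDisplayMask mask = CGDisplayIDToOpenGLDisplayMask(CGMainDisplayID());

    CGLPixelFormatAttribute attribs[] = {
        kCGLPFADisplayMask, (CGLPixelFormatAttribute)mask,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    CGLChoosePixelFormat(attribs, &pix, &npix);

    CGLContextObj ctx = NULL;
    CGLCreateContext(pix, NULL, &ctx);
    CGLSetCurrentContext(ctx);                              /* no drawable attached          */

    const GLubyte *renderer   = glGetString(GL_RENDERER);   /* valid data, no window needed  */
    const GLubyte *extensions = glGetString(GL_EXTENSIONS);

    CGLSetCurrentContext(NULL);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);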
So let's talk about the interfaces. The good news about interfaces this year is that we've covered them before. You can look at the previous sessions, and we have some documentation on the web: there's Q&A 1269 on interfaces. So I'm not going to spend half an hour on interfaces.
So CGL, we've talked about that a little bit. It's Core OpenGL. It's the lowest-level interface, and it's the basis for AGL and NSOpenGL. It's full-screen only; there are no windowing system connections there, so you can't attach to a window. But you can do things like pbuffers with CGL.
And for context setup, you can see last year's presentation, and there are a lot of samples on the web. In this case, there's the GL Carbon CGL full-screen sample. It's on the web right now; you can download it, and it goes through all the setup for a CGL full-screen application.
AGL. AGL is the Carbon interface. There's no reason to think of AGL as some strange thing. How does AGL fit with everything else? AGL, Carbon, good. It has windowing and full-screen support. So you can do windows; obviously in Carbon you want to attach to a window. Or you can use AGL full screen, which will do a full screen and basically capture the screen for you.
Again, same thing. See last year's presentation, or GL Carbon AGL Window as an example of how to do the AGL setup. I don't want to spend half an hour going through things here that I think you can all work through, ask questions about, and read the documentation for; that's stuff we've covered previously. NSOpenGL. A little bit more depth here. There are two ways to handle NSOpenGL: one way is the NSOpenGLView, and the other is using the pieces underneath it, the pixel format and context.
Which do you use when? That's one of the things that people may not understand. You're in Interface Builder and you have the NSOpenGLView. Do I really want to use that? Not sure. What you do is look at what you need as far as the relationship between a context and a pixel format. For NSOpenGLView, there's one pixel format and one context. That's set behind the scenes. You can't control that. If you have a different kind of relationship, if you need multiple contexts or multiple pixel formats, you need to change some things up.
You may want to do a custom subclass of NS view. The good news again is there are samples on the web to do both of these things. So, you can see how to make a custom subclass. It should be basically skeletoned out for you. And so, you can use that just as easily as you would do in an NS OpenGL view.
And if you look at that particular sample, the custom Cocoa OpenGL sample, there's a file in there that is basically the equivalent functionality of NSOpenGLView in one particular .m file. What you can do is modify that as needed for your custom class. So basically, it brings you back to the point you'd be at with NSOpenGLView.
Let's talk about GLUT for a minute. Excuse me. GLUT is for cross-platform source compatibility. If you want a full app that runs without having to change anything between Windows, Unix, Linux, and Mac OS, you can use GLUT.
And GLUT contains a windowing system and an event system. That's the good news. The bad news about GLUT is that it's fairly old in design; it's a number of years old now, and it's actually fairly limited in what you can do. There are some amazing apps that have been done with GLUT, but I think if you talk to those authors, at some point you're almost fighting upstream against what GLUT wants you to do. Great for prototyping.
Great for an application that can work within the constraints of the API. Probably not great for an application with a full GUI that wants to do user interactions that may be fairly complicated. There are some things in our GLUT that are specific to Mac OS, which is what I want to talk about here.
There's GLUT sample code, as I mentioned, and in the readme you can find all the things we've added to GLUT that are Mac OS specific. Some of the highlights: use extended desktop on startup for an application. That means you can actually have a window that spans the full desktop, and you can do that with GLUT full screen. We have a developer who's doing stereo applications this way.
They have a window that's stretched across two displays, driving two projectors, putting one frame of the stereo image on one screen and the other frame of the stereo image on the other screen. So the video card sees one image rendered out to two projectors. From the user's standpoint, it's two projectors showing a stereo image, and it works great. From the development standpoint, it's just one big image with two frames rendered side by side. Next, use Mac OS coordinates.
You can use negative coordinates when specifying windows; otherwise, GLUT would normally clamp to 0, 0. So that allows you to move windows around a desktop with negative coordinates. Capture a single display: if you have a GLUT application where you only want one display full screen, you can use that. And GLUT stereo, so stereo support is in GLUT. We have UTF-8 string support for input, so you get foreign keyboards.
And we have an exit handler you can add. GLUT normally exits without telling you about it; you can set up the exit handler so it calls your code prior to GLUT exiting, and you can clean up what you're doing.
Again, the GLUT Basics sample code is on the web, so you can look at that for the basics of using GLUT. So the final section I'll talk about here covers some techniques for using Mac OS, some things, again, that may not be obvious or clear, and it also covers a few new things that we have for Tiger.
So I'm going to talk about detection of functionality, OpenGL macros, some texture loading stuff, context sharing, and finally, I'm going to talk about pixel buffers and some remote rendering. Detecting functionality. I think I've gone through this before, and there's some great samples for this, and there's a tech note specifically on this.
The key for detecting functionality, and this is kind of my soapbox, is don't ignore it. Take texture rectangle, for example: most cards, other than the Rage 128, have support for texture rectangle. Don't ignore the fact that some user might be running your application on a Rage 128.
So check for that functionality. Look at the extension string. Use gluCheckExtension to make sure it's supported. Understand what extension needs to be present for your functionality to be supported. For example, we talked about the OpenGL Shading Language: ARB shading language 100, and it's 100, not 1.00, is the name of the extension.
And if that extension is not available, then the shading language will not be available yet. Once you see that extension, you can expect the shading language to be available. The same goes for all of the new functionality: check for it before you use it, and it should not be a problem for you.
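A minimal sketch of that check; the extension string names here (GL_ARB_shading_language_100, GL_EXT_texture_rectangle) are the ones I believe apply, so verify them against the extension registry.

    #include <OpenGL/gl.h>
    #include <OpenGL/glu.h>

    const GLubyte *extensions = glGetString(GL_EXTENSIONS);

    GLboolean hasShadingLanguage =
        gluCheckExtension((const GLubyte *)"GL_ARB_shading_language_100", extensions);
    GLboolean hasTextureRectangle =
        gluCheckExtension((const GLubyte *)"GL_EXT_texture_rectangle", extensions);

    if (!hasTextureRectangle) {
        /* e.g. fall back to power-of-two GL_TEXTURE_2D on a Rage 128 */
    }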
So, OpenGL macros. Again, something that some people are not aware of. The PowerPC architecture has a slightly higher overhead per function call than an Intel architecture. Does that really matter to you? For most applications, probably not from the OpenGL standpoint. But if you're using immediate mode, you're really pounding the API hard, and you're making millions of function calls per second, this may be a significant part of the problem.
So function call overhead can be a significant part of your OpenGL overhead. We've provided you with the CGL macros, which eliminate one level of function calls. And for applications that really are in immediate mode and are sending a lot of function calls to OpenGL, this can be a significant speed-up.
Other folks who are using vertex array range or display lists, and who don't actually have a high call frequency, may not see any speed-up at all from this. We'll discuss more specifics about that and show how to use it in the optimization session. There are a couple of things you need to know here, though, as far as using it.
So if you're using this in your application, you now need to track the current context yourself. We eliminate the idea of a current context lookup when you use the macros, so you need to track it yourself and understand what your current context is. And that means you have to ensure your threading issues are handled.
You can use the CGL or AGL macro header: the CGL header for NS, the AGL header for Carbon. And the key to understanding this is very simple. There are two variables, one for AGL and one for CGL, that are expected by the macro headers: agl_ctx and cgl_ctx. You define these to be equal to your current context. If you look at the macro header, it then substitutes these into an indirection and directly uses that lookup.
It goes through a dispatch table to call into the OpenGL function. So once you include the header and set that variable, you don't need to do anything different. What that means is, you include AGLMacro.h here, you set agl_ctx to myContext, which is your current context.
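Here's a minimal sketch of that pattern, shown with the CGL flavor for consistency with the rest of the session; cgl_ctx is the variable name I understand CGLMacro.h expects (the Carbon equivalent is the AGL macro header with agl_ctx), and myContext stands in for whatever context you're tracking as current.

    #include <OpenGL/CGLMacro.h>        /* Carbon equivalent: the AGL macro header, agl_ctx */

    CGLContextObj cgl_ctx = myContext;  /* myContext: the context you track as current      */

    /* The rest of the code is unchanged; these calls now dispatch straight
       through the context's function table instead of looking up the
       current context on every call. */
    glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();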
And amazingly enough, the rest of your code is the same. Everyone here, if they wanted to, and I'm not saying you should, could probably put this into some of their OpenGL code within a few minutes. Again, understand what your current context is: we normally track the current context for you, so when you change it, you need to make sure you update the variable. Let's talk a little bit about texture loading now.
So in Cocoa, one thing we've talked about is that you need a bitmap representation of an image to texture from, and it may not be obvious how to get that from Cocoa. So I'm going to go through getting a bitmap image rep from a view and then texturing from the bitmap image rep. And after that, I'm going to talk about ImageIO. ImageIO is a new thing for Tiger and allows you to do some really cool things.
So basically, to walk through this, I'm going to talk about the top bullets; the bottom is the code, for reference. And by the way, there's a Q&A on this, so there's no need to jot down all the code instantaneously. Just listen here and look at the Q&A, which is on the web after the session. If you have an NSView's contents and you want to use them, anything you want that's a view's contents, what you can do is allocate an NSBitmapImageRep.
You then initialize it from the focused view rect, so you have your view focused, and you initialize it and provide the bounds. What that does is give you a bitmap representation of that image source. You then move that along and texture with it.
To texture with that, you need to set your pixel store state. You need to ensure that the row length of the texture is handled, so I set the row length. And you also need to ensure the alignment is handled. A lot of times you may have RGB data, for example, so it's just RGB without the alpha.
And you want to make sure you don't have a 4-byte alignment implied there. Then you generate a texture, bind that texture, and the texture parameter I set here is the minification filter, which I set to linear. In this particular case that's kind of superfluous, but realize that we're dealing with a single image without mipmaps, so you always want to make sure you don't have a mipmap filtering parameter set. And then finally, I use the bitmap's samples per pixel, which tells me how many samples there are, 3 or 4.
Basically, in this case, I'm handling an RGB or RGBA image, and I'm just going to texture from it. It looks a little bit complicated, but basically all we're saying is: use either RGBA8 or RGB8 for the internal format, and either RGBA or RGB for your texturing parameters. It's pretty simple. This is maybe 15 lines of code total. It's written up in the Q&A, and you can just drop it into an NS app to grab a texture from almost any NSView or any bitmap image rep, so it's a really strong way to simply get that information from an AppKit app.
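Here's a condensed sketch of that path; see the Q&A for the definitive code. The view variable is whatever NSView you're grabbing, and this assumes 8-bit RGB or RGBA data as described above.

    [view lockFocus];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
                                initWithFocusedViewRect:[view bounds]];
    [view unlockFocus];

    /* Row length in pixels = bytes per row / bytes per pixel (8-bit samples). */
    glPixelStorei(GL_UNPACK_ROW_LENGTH,
                  (GLint)([rep bytesPerRow] / [rep samplesPerPixel]));
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      /* don't assume 4-byte row padding */

    GLuint texName;
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  /* no mipmaps here */

    BOOL hasAlpha = ([rep samplesPerPixel] == 4);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 hasAlpha ? GL_RGBA8 : GL_RGB8,
                 (GLsizei)[rep pixelsWide], (GLsizei)[rep pixelsHigh],
                 0,
                 hasAlpha ? GL_RGBA : GL_RGB,
                 GL_UNSIGNED_BYTE,
                 [rep bitmapData]);

    [rep release];   /* GL has copied the pixels by this point */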
ImageIO. New for Tiger. There's an ImageIO session directly following this session; I'm not sure exactly what room it's in, but look for it. ImageIO is pretty cool. ImageIO handles floating-point images, high-DPI images, and high dynamic range images. What that allows you to do is have one single source in Cocoa or Carbon for images that you may not have been able to handle otherwise.
This kind of replaces the QuickTime image importer, if you want to think about it that way, and handles an even larger set of images. So, for example, you can do things like load a floating-point image into a floating-point texture and use that to draw into a floating-point back buffer with a shader.
And you have a floating-point path from the image to your final destination without ever having an integer RGB step in there. This is a really powerful thing. It allows you to do great manipulation of data sets, of high dynamic range images, and those kinds of things. So these are some new APIs, and again, I'm going to cover this reasonably quickly. Tonight there'll be a DMG available for the session with a sample using this.
It will be on there, so you can look at the sample and grab and look at the code then. But basically, what you're going to do is use a URL to get an image. You're going to create an image at index zero in this case, so basically we're going to get an image ref for the first image from that URL. We're going to get some information about it, the width and the height.
Set a rectangle, and also allocate some data based on this rectangle. So now we have the image ref, we have information about it, and we've allocated a buffer big enough. And now what we're going to do is make sure we color match, which is another great thing ImageIO does. It maintains color information for the image, so you can make sure your images are color correct coming in.
Excuse me. We're going to create the bitmap context, draw the image into the context, and then release the context. So now we have the bitmap data, and if you look at this, all the way down at the glTexImage2D at the end, there's that data parameter I created.
And what that means for you is all you need to do is allocate the data. You basically get the data. You get the information from the URL. Set up a few things. Draw it. And then you can texture from it, just like we did before. So this is basically exactly the same kind of texturing code you saw before.
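Here's a rough sketch of that sequence, pieced together from the description; url is a CFURLRef you already have, error handling is omitted, and the sample on the session disk image is the authoritative version.

    #include <ApplicationServices/ApplicationServices.h>
    #include <OpenGL/gl.h>

    CGImageSourceRef source = CGImageSourceCreateWithURL(url, NULL);
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);  /* first image    */

    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    CGRect rect   = CGRectMake(0.0f, 0.0f, width, height);
    void  *data   = calloc(width * 4, height);                           /* 8-bit RGBA     */

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                                colorSpace,
                                                kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(bitmap, rect, image);        /* color-matched draw into our buffer   */
    CGContextRelease(bitmap);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(image);
    CFRelease(source);

    /* Texture from it, same as before: */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    free(data);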
Real simple. What have we got? Eight lines of code for the ImageIO part on the slide, and then the texturing code is six lines. And this supports things like OpenEXR and those kinds of formats, images that you would otherwise have to spend a lot of time making sure you handle correctly. It's a great thing to look at, and I encourage you to go to the session after this one. Context sharing, I think, is one of the final sections here.
A couple of things about context sharing. We talked about it before. Remember, this is where the virtual screen lists have to be the same; you have to be using the same set of renderers to share contexts. You share objects between the contexts: texture objects, vertex programs, fragment programs, display lists, vertex array objects, vertex buffer objects. Those kinds of things are what you share. The state of the context itself is not shared. You're not sharing what your current color is, you're not sharing what your texture coordinate settings are, and those kinds of things. You're only sharing the actual objects and their state parameters. Same virtual screen configuration, we talked about that. One thing you can do is use a single pixel format to create all your contexts, or create a single shared context initially and build all your other contexts off of that, so you have one you keep around and build everything else out of. The sharing is peer-to-peer, so those other contexts can be thrown away and changed around; the last thing you do in your application is throw away that initial context you created.
And this is a simple code example. We create a pixel format and a context. We create a second pixel format using the same display, and in this case you can see, with the AGL calls, we create the second context shared with the first. Pretty simple. The context sharing tips Q&A covers all this; it's a good reference.
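A minimal sketch of that setup with AGL might look like this; the attribute list is illustrative and error checking is omitted:

    #include <AGL/agl.h>

    GLint attribs[] = { AGL_RGBA, AGL_DOUBLEBUFFER, AGL_DEPTH_SIZE, 24, AGL_NONE };

    /* First pixel format, and the context everything else will share with. */
    AGLPixelFormat fmt1 = aglChoosePixelFormat(NULL, 0, attribs);
    AGLContext shareCtx = aglCreateContext(fmt1, NULL);

    /* Second pixel format on the same displays; its context is created shared with the first. */
    AGLPixelFormat fmt2 = aglChoosePixelFormat(NULL, 0, attribs);
    AGLContext ctx2     = aglCreateContext(fmt2, shareCtx);

    /* The pixel formats can be destroyed once the contexts exist. */
    aglDestroyPixelFormat(fmt1);
    aglDestroyPixelFormat(fmt2);

If sharing can't be satisfied, for example because the two pixel formats resolve to different renderer sets, aglCreateContext returns NULL, so check the result in real code.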
So the last thing, and I think we're just on time here, is pixel buffers. Pixel buffers, the Apple pixel buffer extension, work very much like Windows pixel buffers with some slight changes, because the Windows operating system obviously is not the same as the Mac operating system, and some of its pieces, the HDCs and those things, don't fit quite with our concepts in pixel buffers.
So we modified it, but it has the same basic logical setup, just with some different calls. Basically, it's hardware-accelerated off-screen rendering. We've talked about that already. Remember I talked about drawables; it's all the same. You're attaching it just like a drawable. So in this case, you're going to use setPBuffer as your call, and that's just like setDrawable.
Also, one thing that's new here is support for remote rendering. You can actually use pixel buffers remotely and render on a different machine. So, for example, let's say you have an application that wants to be a render farm, or wants to render on multiple machines and gather the information back up.
What you can do is attach to a different machine, SSH in, and run an application over there. No one needs to be logged in. There doesn't need to be a monitor attached to the machine. You can render using the hardware acceleration, and then you can retrieve the image and do whatever you'd like with it.
I'll demonstrate that in a minute. But moving on, finishing this out, what the pixel buffer allows you to do is render to something and directly texture from it without having any extraneous copies in there. So where you'd call glTexImage2D, in this case you call CGLTexImagePBuffer. Same kind of call; if you look at them, they're almost exactly the same. Set it as a drawable, then texture from it just like you'd texture from a texture. And as far as sharing goes, object resources and state can be shared with a pbuffer.
A pbuffer's context can be shared with a context that has a window drawable. Full-screen drawables can't be shared with pbuffers at this time. We have a CGL reference, and also, on the disk image that's going to be up on the seed site for WWDC tonight, there'll be preliminary documentation on how to set up pbuffers that covers all the setup, all the API, and the pseudo code that I'm about to go through.
So I'm just going to walk through the pixel buffer usage. Again, this is covered in the documentation that you can readily access. Basically, you're going to create a context and pixel format; we've talked about that. You're going to create the pbuffer, which is just like creating a window. Then you're going to set the pbuffer as a drawable using CGLSetPBuffer; that's, again, setting a drawable, same as everything else. To draw into the pbuffer, you set the current context to the pbuffer's context, the same thing you'd do normally for rendering, and then you draw with OpenGL.
To set up the texturing, again, you're going to create a texture object, bind to the texture object, standard texturing stuff, and set the texture parameters. Then you're going to create the pbuffer texture with CGLTexImagePBuffer, which stands in for glTexImage2D. So again: create, bind, set parameters. The only call that's really different here from normal texturing is that you're actually going to use the pbuffer as the texture source.
Then you're going to draw with the pbuffer texture. You're going to bind to the pbuffer texture object, enable the appropriate texture target with glEnable, and draw primitives with appropriate texture coordinates. So again, you set your drawable, and then you draw.
And destruction is pretty much the opposite of creation. We're going to delete the texture object, set the current context, destroy the pbuffer, destroy the context, destroy the pixel format, and finally set the current context to NULL. One thing in here is interesting: the fact that I'm destroying the texture object first. It's not required, but it's a good idea. If you don't destroy the texture object first, and you've destroyed data that the texture object references by destroying the pbuffer, that can cause crashes in your application.
It would be illegal to use the texture once you've destroyed the pbuffer anyway, but I do it this way as kind of a safety thing. There's no reason that texture object should persist after you've destroyed the pbuffer, so do it in this order. It'll save you some trouble later on.
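Here's a hedged sketch of that whole sequence in CGL, using two shared contexts, one that renders into the pbuffer and one that textures from it. The sizes and attribute choices are illustrative and error checking is omitted:

    #include <OpenGL/OpenGL.h>
    #include <OpenGL/gl.h>

    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated, kCGLPFAColorSize, (CGLPixelFormatAttribute)32,
        kCGLPFAPBuffer, (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix;  GLint npix, screen;
    CGLContextObj pbufferCtx, textureCtx;
    CGLPBufferObj pbuffer;
    GLuint tex;

    /* Create the pixel format, two shared contexts, and the pbuffer itself. */
    CGLChoosePixelFormat(attribs, &pix, &npix);
    CGLCreateContext(pix, NULL, &pbufferCtx);
    CGLCreateContext(pix, pbufferCtx, &textureCtx);      /* shares objects with pbufferCtx */
    CGLCreatePBuffer(512, 512, GL_TEXTURE_2D, GL_RGBA, 0, &pbuffer);

    /* Attach the pbuffer as the drawable of its context and render into it. */
    CGLGetVirtualScreen(pbufferCtx, &screen);
    CGLSetPBuffer(pbufferCtx, pbuffer, 0, 0, screen);
    CGLSetCurrentContext(pbufferCtx);
    /* ... ordinary OpenGL drawing calls go here ... */

    /* In the texturing context, create a texture object sourced from the pbuffer. */
    CGLSetCurrentContext(textureCtx);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    CGLTexImagePBuffer(textureCtx, pbuffer, GL_FRONT);
    glEnable(GL_TEXTURE_2D);
    /* ... draw primitives with texture coordinates ... */

    /* Teardown: texture object first, then pbuffer, contexts, and pixel format. */
    glDeleteTextures(1, &tex);
    CGLSetCurrentContext(NULL);
    CGLDestroyPBuffer(pbuffer);
    CGLDestroyContext(textureCtx);
    CGLDestroyContext(pbufferCtx);
    CGLDestroyPixelFormat(pix);

Using a second, shared context for the texturing side is the same object-sharing idea described earlier: the texture object and the pbuffer contents are shared resources, while each context keeps its own state.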
So I've talked about headless and remote rendering a bit. Remote pbuffer is an additional new pixel format attribute for the pixel buffer drawable. On the remote machine, logging in is not required and a monitor is not required. You're going to SSH into the remote machine. The reason we have the SSH requirement in place is to maintain security.
We just don't want someone to be able to render on your machine, or use a copy of the pixels to get information from the machine, so you have to authenticate like you would normally. Then you run the application on the target machine using the remote pbuffers, and finally you retrieve the content however you see appropriate.
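Before the demo, a quick sketch of how the pixel format side of this might look; the attribute name kCGLPFARemotePBuffer is an assumption on my part for the new remote pbuffer capability:

    /* Request a pixel format whose pbuffers can be used without a console
       login or an attached display (assumed attribute: kCGLPFARemotePBuffer). */
    CGLPixelFormatAttribute remoteAttribs[] = {
        kCGLPFAAccelerated,
        kCGLPFAPBuffer,
        kCGLPFARemotePBuffer,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj remotePix;
    GLint nRemote;
    CGLChoosePixelFormat(remoteAttribs, &remotePix, &nRemote);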
And we're going to do a demo. So we're going to bring up demo two, I believe. Let's see if I got the right one. Yes, demo two. The only reason I'm showing you this is because this is the images folder, and you notice the images folder is empty. So this is my target machine. I'm just going to log out of this machine.
And I'm going to go over to demo one. Let's wait for it to log out. So I'm logged out; no users on there. I wouldn't even need a display attached, actually. Now I switch over to the demo one machine here. Okay, so now we're on demo one, and what you see is I have a terminal here, and I'm going to SSH into that other machine.
And so now I'm at the base of the other machine's home directory. And the piece that I care about is an application called Remote Renderer, which you'll notice in the listing. So I'm just going to run that. It's a little application that you'll have in sample code.
[Transcript missing]
So I'm just going to run that application, Remote Renderer, on this remote machine. So now I've logged into it, run the application, and it's created the pbuffer and started rendering. It's running a number of frames, and I'm waiting for it to finish. Render is complete. And now I'm going to exit from that machine. So I've run my application.
Now I go over to the other machine, back to demo two. Still logged out. So let me log back in and see what happened here. This was the images folder where I started out, and you notice I've rendered about 120 images here. No exciting content, but you'll see the results of this rendering.
So basically, I just rendered the kind of standard spinning cube thing remotely. Nothing on the display was touched; no windows needed to appear. You can just do this. So you could set up a render farm of a number of machines, render content on those machines, and then retrieve it back, put it into a movie, use it for image processing, all through the remote rendering API. It's a really good way to extend your rendering capabilities beyond the machine that you have. Now we can go back to slides.
And we're ready to wrap up. So what did we talk about? We talked about an update to OpenGL: continuous improvement. We're continuously trying to get updates to you; you'll see them in software updates, you'll see new features, and we really want to focus on quality going forward. Architecture: multi-client, multi-headed, virtualized resources. A lot of folks are using it.
We're stressing the system pretty hard. That's good for us; it sets the quality bar pretty high. And you, if you take advantage of this, can take advantage of the fact that we have this virtualized system. OS-dependent data: we talked about some specific things. Virtual screens, which are renderers; pixel formats; contexts; and drawables.
Four things you need to know. And finally, in that section, we talked about interfaces: CGL, NSOpenGL, GLUT, AGL, the interfaces you need for writing applications. And we talked about some functionality, some things that might help you write applications, some things that are new.
So, new on the WWDC seed site, I believe, is an updated CGL reference, which covers the pixel buffer calls. There's also the session disk image, which has information for you. Also, I want to point you to some sessions for later. There's the Image I/O session, which is following this one today.
Tomorrow at 10:30 is the optimization session. Tomorrow afternoon, in the session after lunch, is a second optimization session where we talk about using our tools, which is really great for folks who haven't used our tools. And Friday morning is the GLSL session, talking about the OpenGL Shading Language on Mac OS X.