
WWDC12 • Session 513

Advances in OpenGL and OpenGL ES

Graphics, Media, and Games • iOS, OS X • 58:28

OpenGL and OpenGL ES are the foundation for hardware-accelerated graphics in OS X and iOS. Find out how to harness innovations in iOS 6 for fast geometry updates, streaming textures, and advanced blending. Learn about the GLKit framework and see how your apps can leverage its built-in features and effects. Understand how to update your apps for high-resolution displays on both iOS and OS X, and hear specific tips and best practices to follow in your apps.

Speakers: Chris Niederauer, Allan Schaffer

Unlisted on Apple Developer site

Downloads from Apple

HD Video (195.5 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

So on OS X, it's a very similar picture. We have OpenGL 2 and the OpenGL 3 Core Profile. The Core Profile lets you take advantage of the newest GPUs in your systems. And as we've been saying this week, GLKit is now on Mountain Lion, which enables you to quickly get your OpenGL 2 apps running on the OpenGL 3 Core Profile by utilizing the effects libraries and the math that's built into it.

So today, we're going to go over some new extensions in iOS 6, like programmable blending. Then we'll go into GLKit and do a quick refresher of that. It's already been in iOS 5, but it's new on OS X with Mountain Lion, so we'll get into how it works there. And then finally, we'll go into supporting the MacBook Pro with Retina display. So with that, I'm going to hand it off to Allan Schaffer, our graphics and game technologies evangelist.

Allan Schaffer, Graphics and Game Technologies Evangelist: Thank you, Chris, and hello, everyone. Good afternoon. So there's this point we've been making here at the conference that among a lot of the really leading-edge games, it's become less and less common for them to simply render an OpenGL scene and have that be what's displayed on the screen.

What's really become a lot more common now is for games to compose their scenes through multiple passes: rendering a particular pass, sampling that texture in the next pass in their fragment program, and so on, and then compositing the results together at the end. That's really become the meta-trend that's going on. And this first extension goes right to the heart of that. So let's talk about programmable blending.

And I'll start with the graphics pipeline itself. No surprise to anyone in this room, I'm sure: we start with a teapot, go through the transform and lighting stage, and the output of that is rasterized into spans and fragments that go into the fragment stage of the pipeline. The output of that goes through the depth and stencil test. Anything that survives that then goes through blending and eventually out to your destination, usually the color renderbuffer.

Right? And in blending nomenclature, the source is the color coming out of your fragment shader, and the destination is what's already in the color buffer. Some function is applied to each of those colors, the products are added together, and that final color is written back out to the destination again. Right? That's how typical blending works. But what's changing in iOS 6 is that your applications now have the ability to effectively ignore the built-in blending stage and just directly read the destination values.

And this ends up being really, really powerful, because there are a lot of things you can do in a fragment shader that just weren't built into the standard blending operations. So let's start taking a look at how this works. There's a new built-in variable called gl_LastFragData[0]. It's going to provide you with the current framebuffer color for that pixel. So back in your fragment shader, you can just actually read that color.

You can access it, read it, do your own math on it, and then finally write out your color just as you normally would. This is provided on all of the devices that support iOS 6. And it's worth mentioning that it can actually coexist with the built-in blending if you have some reason to use both.

Now, here's an example shader. I'm starting out just by saying, okay, I intend to use this extension, shader framebuffer fetch. And you can see I have a color that's coming in from the vertex stage. Now, here are a couple of different things you might do with it. So here, if I had a decal, for example, this might be the blending operation that I would do for a decal.

I would be taking all of the source color and then one minus the alpha multiplied by the destination color. And that would give me a decal-type blend, right? Let's look at one that's a little more advanced, though. Because obviously you can go beyond just the built-in stuff. So here, for example, would be a difference blend.

So I'm subtracting the destination color from the source color and taking the absolute value of that, and that becomes my output color. And what's cool here, you start to see: there are no built-in blending functions that have absolute value as one of the operations. But you can do this here because it's in your fragment shader; you get to use all the standard syntax of GLSL.
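
As a rough sketch of the kind of shader being described (not the exact code from the slide), a difference blend using framebuffer fetch might look like this; the varying name v_color is an assumption for the color coming in from the vertex stage.

```objc
// Hypothetical fragment shader source, written as a C string the way it would
// be handed to glShaderSource(). gl_LastFragData[0] is the framebuffer-fetch
// built-in; abs() has no equivalent among the fixed-function blend modes.
static const char *kDifferenceBlendFragmentShader =
    "#extension GL_EXT_shader_framebuffer_fetch : require\n"
    "varying lowp vec4 v_color;\n"                       // color from the vertex stage
    "void main()\n"
    "{\n"
    "    lowp vec4 dst = gl_LastFragData[0];\n"          // current destination color
    "    gl_FragColor = vec4(abs(v_color.rgb - dst.rgb), v_color.a);\n"
    "}\n";
```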

Let's go even further. Some of these things, like a difference blend or a hard light blend and so on, are things you might see in photography packages. But now you can implement them directly in your GL-based apps. Now, ignoring the math here, a hard light blend looks at the color and checks whether it's less than 0.5. If so, we take 2 times the source times the destination. Otherwise, if it's greater than 0.5, then we take 1 minus all of this product that's there.

Okay, but the point of this isn't the math. This is just, that's the hard light shader. But it's the fact that look at these different operations that we could do. I showed you absolute value on the previous slide. Here is a branch, right? So we have a conditional here that's going to take two different values depending on the actual color that could be sitting either in the destination already or coming in from the source. So and then down here, we just fill out our color values. Okay, so pretty cool extension. Let's take it even one step further than what I've shown you.

You could use this to do local post-processing on your entire frame if you wanted to. So in this example, you can kind of think of it in a few steps. Step one would be to draw your entire scene normally into the color renderbuffer. Then step two, the last thing you draw that frame could be a full-screen quad with a shader attached. And this might be the shader, where it's going to essentially sample the destination value for every pixel and then do an operation on every single one.

So here, the operation I just chose, because it's nice and short, it fits on the slide, is to convert whatever the color is to grayscale. Right? And so that's what I'm doing here. I'm calculating a luminance value by multiplying each channel of the current destination color by those values, and then applying that luminance into red, green, and blue.

Right? But if you think about this, it's more efficient than, for example, rendering to a texture. And you get to take advantage of pipelining here: this shader just happens to be the last one that you execute for that frame. There's nothing special that you have to do for it as a post-processing effect.
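
For concreteness, a sketch of that full-screen grayscale pass might look like the shader below; the luminance weights are an assumption (the usual Rec. 601 coefficients), since the exact constants aren't visible in the transcript.

```objc
// Hypothetical post-processing shader for a full-screen quad drawn as the last
// thing in the frame: read the destination color, convert it to grayscale,
// and write it back out.
static const char *kGrayscaleFragmentShader =
    "#extension GL_EXT_shader_framebuffer_fetch : require\n"
    "void main()\n"
    "{\n"
    "    lowp vec4  dst  = gl_LastFragData[0];\n"                     // what's already been rendered
    "    lowp float luma = dot(dst.rgb, vec3(0.299, 0.587, 0.114));\n"
    "    gl_FragColor = vec4(luma, luma, luma, dst.a);\n"
    "}\n";
```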

Now, we've already gone pretty far with this. Here are some more directions that you can take on your own as far as going even further. I mentioned some of these unique blends that we can do, things like hard light and different kinds of photographic effects.

You could use this to do custom overlays and more atmospheric lighting conditions in your scene. But then here's one I didn't show, but just think about this: you could sample the current color in the destination, read that back, and use the RGB values to do a lookup into a texture, right? And then take that and go further with that value. We imagine people could use that, for example, for color grading in an OpenGL or OpenGL ES app. Going even further, you could be storing non-color data in the destination.

And you might be sampling that. For example, you might have normals that are being stored there, and you're reading those back and using that to construct a G-buffer, or to calculate things like ambient occlusion and so on. There are a lot of these effects now that aren't going to require an extra pass to come back; they can just be done directly in that first pass. Okay, so that is extension number one, and it's programmable blending.

Next. This one is a little more advanced, but I'm going to go through it, and there are different audiences who might be interested in this. Let's talk about those different audiences. So most games that we see on the App Store today are just loading all of their textures and they're done. Either they have only a few textures and that fits just fine, or the developer has gone through trial and error of, okay, that was too much, I'm going to reduce it, and they finally get it down to something that works.

That's most games. This extension is not for you. Now, some games are managing textures fully dynamically. They're doing something like maybe it's a game with different levels, and they are loading and deleting textures as they go from one level to the next. So that's another possibility. And again, this extension is not for you.

Finally, we're almost down to just a couple of guys in the room. Okay, who are left now? There are a few games that are doing really fully dynamic texture management, where the concept they're adhering to is that they keep their texture budget to a particular size during run time.

And then perhaps it's a 3D game, and as they move around in the scene, whatever objects they're close to, they make sure those textures are loaded. And then as things get further away, maybe those are the first ones that get replaced for something that's closer. And they're dynamically loading and deleting those textures as they move around in the scene.

And the request that folks like that have had is that they actually want to take it even further than that. They want to have fine-grained control, essentially over the mipmap levels and everything else. They want to do some really clever stuff. And so, this extension is all about them.

So the concept of this is it's going to be for games that do really dynamic texture management. We're providing some new extensions for that that are going to let you very quickly create texture storage. And once you've created storage, to be able to copy data very quickly between different elements of that storage. And what this enables is what I call a texture swap algorithm.

And, you know, the idea that's going on is that what the game might be doing is sort of, okay, I'm getting closer to some object. I have it loaded at a particular resolution. And I kind of wish that I could load an even higher resolution base level for that particular texture. And then, meanwhile, I've gotten further away from something else, so I kind of want to demote it to a smaller base level.

And that's really what this is going to let people do. So there's two extensions that are involved here, texture storage and texture copy. So we'll start with texture storage. This sets up the memory, essentially. So texture storage, new extension to iOS 6, lets you create an immutable texture object.

So how it works is that in essentially one call, you can define all of the texture properties, and then after that, nothing changes except for the actual texture contents themselves. And so all of the memory allocation and all of the completeness checks that the implementation has to do can be done up front, rather than waiting for the first time that you go draw something. And as I said, everything is fixed except for the texture data; it starts out with just some default data in there, and you define the data through subloads rather than regular calls to glTexImage2D.

This is supported on all of our devices that support iOS 6. I think I'm going to show a code sample next. Yep. So here's a quick code sample. At first, it's the same as you'd be familiar with: we gen and bind the texture. Now here's the new call: we're defining and allocating the texture storage by calling glTexStorage2D. We're passing in fairly normal arguments here.

The target, the number of levels, the format, the width, and the height. But now, once you call this, the texture is there, it's ready to go. And the way that we load data into it is to iterate through each level of the texture and do subloads of the data into that.
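
A minimal sketch of that pattern, assuming the EXT_texture_storage entry point and placeholder variables (levelCount, width, height, and per-level data arrays) standing in for your own image data:

```objc
// Create an immutable texture in one call, then fill each mipmap level with a
// subload. levelData/levelWidth/levelHeight are hypothetical arrays of decoded
// image data for each level.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// Target, number of levels, format, width, height: everything is fixed here.
glTexStorage2DEXT(GL_TEXTURE_2D, levelCount, GL_RGBA8_OES, width, height);

// Only the contents can change afterwards, via subloads.
for (GLint level = 0; level < levelCount; level++) {
    glTexSubImage2D(GL_TEXTURE_2D, level, 0, 0,
                    levelWidth[level], levelHeight[level],
                    GL_RGBA, GL_UNSIGNED_BYTE, levelData[level]);
}
```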

So that's kind of part one of this texture swap idea, is just creating essentially temporary textures that you can swap into. The second part of a swap, of course, is copying from one thing to another. So we use this new extension, copy texture levels, to do that. And so what this provides, again, is a really fast copy of the mipmap levels that you specify between two textures.

But the way I said that may have been a little misleading. It's not exactly the levels that you specify; it's really that you specify based more on the dimensions of the levels that you are copying back and forth.

So, for example, two different textures might have a different number of levels between them. And you're saying, okay, I want to copy 10 levels starting at level 0 of one texture, and that's going to cover everything from 1 by 1 up to 512 by 512, and it's going to copy them into the corresponding levels of the destination texture.

So it's very cool and actually really powerful, and it's going to enable this texture swap algorithm that we've been talking about. It only does this copy between immutable textures, meaning textures created with that new extension I just showed you. And this is also supported across all the devices that run iOS 6.

Now, so let's go into how this algorithm would actually work. Let's just pretend that this is your situation. You have a texture budget that you are maintaining. And you've decided that, all right, at any given time, I'm going to have these 15 textures loaded. All of them are at least 512 by 512. But the three things that are closest to me, I'd like to have those have textures that are 1K by 1K.

And I'm going to dynamically manage which is which as I go along. Now, behind the scenes, every single one of these boxes is also a mipmap texture. Okay? So the grand total budget here would be 32 megabytes, roughly. And if these were non-mipmap, then it would be down about 24 megabytes.

So let's say that we run around our scene and at some point we decide, all right, the object that has the red texture on it is now close enough to me that I would like to promote it. I want it to be represented with a 1K by 1K texture. And the object with the blue one, it's far enough away, that's the one I'm going to replace. And so I'm going to do a swap. So let me show you the mipmap levels now for those. So here's each texture plus its mipmap levels.

And we're going to get started with this swap now. So the first part of a swap is to create a temporary variable, right? But here it's a whole temporary texture. And so I'm genning and binding and then calling glTexStorage2D to create a temporary texture to swap levels into.

All right. Next thing, I'm going to copy a bunch of the blue levels into the temporary ones. So I call glCopyTextureLevels. The destination is the temporary texture, the source is the blue texture. I'm going to start at the blue texture's mipmap level 1, which is the 512 by 512 one, and I'm going to copy the next 10 levels down.

So there's the copy. All right, next, I'm going to start copying up now. So I'm going to do the same thing for the red. Copy, destination is blue, source is the red. Starting at level 0 for the red, because that's its 512x512 one, 10 levels, and they go up.

Okay, now I've started the swap, and the goal of this was to promote the red one, right, to load the new data for the red one. So in the big blue spot, I'm going to bind that texture, and then I'm going to subload big red data into it. That's what those two lines would have done.

And now I'm done. I've done my swap, and I have a choice. Sort of algorithmically, I could delete the temporary variable, and I would be entirely, like, finished with this. But it's a lot more likely that I'm going to keep on using it as I go on in my scene.

I'm going to move around some more, and then the next time, I can just reuse it again. So I don't necessarily have to delete it, but to show you the code: there, it's gone. Okay. So that is a fine-grained texture copy. Pretty sophisticated, but a new feature that we're providing in iOS 6.
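
Pulling those steps together, a sketch of the swap might look like the following; the texture names (blueTex, redTex), the level counts, and the new 1K base-level data (big1KData) are all placeholders for your own objects.

```objc
// 1. Create a temporary immutable texture (512x512 base, 10 levels) to swap into.
GLuint tempTex;
glGenTextures(1, &tempTex);
glBindTexture(GL_TEXTURE_2D, tempTex);
glTexStorage2DEXT(GL_TEXTURE_2D, 10, GL_RGBA8_OES, 512, 512);

// 2. Demote blue: copy 10 levels starting at blue's level 1 (its 512x512 one)
//    into the matching levels of the temporary texture.
glCopyTextureLevelsAPPLE(tempTex, blueTex, 1, 10);

// 3. Promote red: copy 10 levels starting at red's level 0 (its 512x512 one)
//    into the corresponding-size levels of the blue texture.
glCopyTextureLevelsAPPLE(blueTex, redTex, 0, 10);

// 4. Subload the new 1K x 1K base level into the blue texture's level 0.
glBindTexture(GL_TEXTURE_2D, blueTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024,
                GL_RGBA, GL_UNSIGNED_BYTE, big1KData);

// 5. Either keep tempTex around to reuse on the next swap, or delete it:
// glDeleteTextures(1, &tempTex);
```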

All right, now let's bring it back down to something that's applicable to everybody, fast VBO updates. So the reason why I say that it's applicable to everybody is because really vertex buffer objects and using them is absolutely essential. VBOs are the fundamental method of defining geometry and rendering with OpenGL and OpenGL ES. You know, obviously, this is where you store your vertices, how you set up normals, texture coordinates, colors, so on. But the reason why I say it's so fundamental is because really this is the high-performance technique. The buffer object API essentially controls access to the data itself.

And so if you're not using buffer objects, if you're just using regular vertex arrays, the implementation has to copy the entire array over to the GPU every frame, because you manage that memory and we don't necessarily know what you might have changed from one frame to the next. But with a VBO, we manage the memory, and you only ask to make changes when you actually have changes, and it goes on from there.

And so that's why vertex buffer objects can be so much faster. They're even applicable to 2D games that are just drawing sprites. And I'll get back to that in just a second. And vertex buffer objects are supported everywhere. Okay. All of our devices across both OS X and iOS. And it's worth mentioning, over on the OS X side, with the core profile, it's actually required that you use vertex buffer objects there.

But okay, so usually people think about vertex buffer objects as a container for your static geometry. But it's also very much advised that you use this for dynamic geometry as well, and there's an API for doing that. So let me show you the typical example here. First, we bind the buffer that we want to modify. We call map buffer, and that's going to hand us back a pointer to its memory, and we can start to make modifications to it.

So here, a memcpy, or anything else, an array dereference, whatever you want, to get into that memory and make some changes. And then when you're done, we call unmap, and that gets rid of our pointer, and now the implementation knows that the changes are done, and it can go on with its own work. All right, so that's typically how you do a VBO update. But we've added something to it now in iOS 6, because there are two basic issues with this.
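
A short sketch of that typical update path, with vbo, newVertices, and size standing in for your own buffer object and data:

```objc
// Bind, map, modify, unmap: the classic dynamic VBO update described above.
glBindBuffer(GL_ARRAY_BUFFER, vbo);

void *ptr = glMapBufferOES(GL_ARRAY_BUFFER, GL_WRITE_ONLY_OES);
memcpy(ptr, newVertices, size);         // or any other writes into the mapped memory
glUnmapBufferOES(GL_ARRAY_BUFFER);      // the implementation now knows the changes are done
```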

One is that a lot of the time when you're doing dynamic VBO updates, you're doing modify, draw, and then coming back around your loop. Modify, draw, modify, draw; it's just this pretty tight loop. And if we didn't block when you call map, you could get into a situation where you're making modifications before we're done drawing.

And so to prevent that, the CPU ends up waiting for the draw to finish before the map hands you back the pointer to make modifications with.

So that's one: it's a blocking, or potentially blocking, operation. The second is a little more subtle, and it's that when you call unmap, we have to flush the CPU memory caches to make sure that whatever changes you made actually become visible to the GPU.

Because it could just be sitting in cache. Okay? But so those two issues exist. And really, fundamentally, that's what two new extensions are going to take care of for you. So you can do dynamic VBO updates, but have it be this kind of perfect case. So in iOS 6, we're adding these two new extensions, map buffer range and Apple sync.

So the point of map buffer range, which is very similar to an extension that we have in OpenGL on the desktop, is that it lets you map explicit subranges and control the flushing of those subranges. So you will specify the subrange that's been modified; you identify basically the data that you've touched.

And then you say: flush just that. Okay? And what this really does is allow you to have a case where maybe the implementation is still drawing something that's toward the end of your VBO array, and you're making modifications at the start, and you're very carefully managing the synchronization between those. Now, that's map buffer range.

And then the second extension, Apple sync, is how you manage that synchronization. It provides you with a sync object that basically it's a fence that you can insert into the command pipeline. And it will signal when it's reached the end of the pipeline and been executed by the GPU. And so implicitly, then, you can know that anything that was in the command pipeline before that has also been through the pipeline now.

And make decisions based on that. So these two extensions are supported on all the devices that support iOS 6. And let's take a look. So here's the first one. Map buffer range. Essentially, here's the simple case with map buffer range. And then I'll show you the two of them together, map buffer range and sync, in just a moment.

So we're binding our VBO to make modifications to it. Here we're getting back our pointer into the array. But you see the arguments here are a little different. I'm saying, okay, here's the offset and the length of the spot in the array that I want to modify. And then I'm telling the implementation, I will flush, I will explicitly tell you when to flush this back.

Now I have the pointer once I return from map buffer range, and I can start making modifications; so a memcpy there. And then down here at the bottom, I'm done, and so now I explicitly flush back the part that I've modified, specifying potentially a new offset and length, and then call unmap.
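
A sketch of that simple explicit-flush case, assuming the EXT_map_buffer_range entry points; offset, length, and the modified subrange are placeholders:

```objc
glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Map just the region we intend to touch, and promise to flush it explicitly.
void *ptr = glMapBufferRangeEXT(GL_ARRAY_BUFFER, offset, length,
                                GL_MAP_WRITE_BIT_EXT | GL_MAP_FLUSH_EXPLICIT_BIT_EXT);
memcpy(ptr, newVertices, modifiedLength);

// Flush only what was modified (the offset here is relative to the mapped
// range), then unmap.
glFlushMappedBufferRangeEXT(GL_ARRAY_BUFFER, modifiedOffset, modifiedLength);
glUnmapBufferOES(GL_ARRAY_BUFFER);
```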

So that's kind of the simple case. You've just made a modification to a VBO. But what if you want to do something a little more sophisticated with the synchronization? Here is exactly the same code, just with a space in it, so I can fill it in with some things. Let's start out by looking at the call to map buffer range.

What I want to do first is to tell map buffer range: hey, don't block. Even if you still think you might be drawing that VBO, I will handle the synchronization myself. So I'm going to add a new flag, the GL_MAP_UNSYNCHRONIZED_BIT, that signals that I'm taking care of the synchronization. And now here's the code that's taking care of the synchronization.

A little bit before I do the modification and then a little bit after I do the modification, right? So the part that comes before is that I'm waiting for the fence that I'm setting below. So just forward reference that. It's there. Just remember it's there. And this might be the new sync point. If down below, if the fence that I set down there still hasn't finished when I come back around the top of this loop, then this will be the spot where I block.

Okay. And let's look at the part at the bottom. So at the bottom, the draw of this VBO stays the same. But immediately after I draw it, I'm going to put another command into the command stream that sets the fence; it's essentially a synchronization object. And remember, it's a deep pipeline, so it might take a little bit of time for that VBO and the sync object right behind it to both get through the pipeline.

But by then, I'll hopefully have had time to come around to the top. And maybe I'll have done some other work. And so when I get to the client wait sync, the first one there that you see, hopefully it won't block. I'll already be done with that drawing.
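
Putting the two extensions together, a sketch of that unsynchronized loop might look like this; fence is assumed to persist across frames (for example as an ivar), and vbo, offset, length, data, and vertexCount are placeholders:

```objc
// Top of the loop: wait on the fence set after last frame's draw, if it's
// still outstanding, before touching the VBO again.
if (fence != NULL) {
    glClientWaitSyncAPPLE(fence, GL_SYNC_FLUSH_COMMANDS_BIT_APPLE,
                          GL_TIMEOUT_IGNORED_APPLE);
    glDeleteSyncAPPLE(fence);
    fence = NULL;
}

// Map without blocking: we're handling the synchronization ourselves.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBufferRangeEXT(GL_ARRAY_BUFFER, offset, length,
                                GL_MAP_WRITE_BIT_EXT |
                                GL_MAP_FLUSH_EXPLICIT_BIT_EXT |
                                GL_MAP_UNSYNCHRONIZED_BIT_EXT);
memcpy(ptr, data, length);
glFlushMappedBufferRangeEXT(GL_ARRAY_BUFFER, 0, length);
glUnmapBufferOES(GL_ARRAY_BUFFER);

// Draw from the VBO, then set a fence right behind it in the command stream.
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
fence = glFenceSyncAPPLE(GL_SYNC_GPU_COMMANDS_COMPLETE_APPLE, 0);
```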

[Transcript missing]

So the purpose of the GLK view is to give you, basically, just an OpenGL ES-compatible view to use. So it's using all of the normal view mechanics, but it knows about OpenGL surfaces. So it gives you some very easy just properties to set up your render buffers. If you've been with us since the original days of the iOS SDK, you know that that used to be a very manual process. And now, with this, it's just one line, for example.

It also gives you some really straightforward ways of implementing your draw methods. Either you can just subclass GLKView and override drawRect:, or it also has a delegate method, glkView:drawInRect:, that will be called each frame. Okay? Now, going hand-in-hand with this is the view controller. Okay?
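
As a rough sketch of both pieces just described, the one-line drawable setup and the per-frame delegate method, assuming a view controller that adopts GLKViewDelegate:

```objc
#import <GLKit/GLKit.h>

// Somewhere like viewDidLoad: create the context and the GLKView, and pick the
// renderbuffer formats with simple properties instead of manual FBO setup.
- (void)viewDidLoad
{
    [super viewDidLoad];
    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds context:context];
    glView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
    glView.drawableDepthFormat = GLKViewDrawableDepthFormat24;
    glView.delegate = self;              // this class adopts GLKViewDelegate
    [self.view addSubview:glView];
}

// Called for each frame that needs to be drawn; the view's context and
// framebuffer are already current when this runs.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene ...
}
```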

[Transcript missing]

Okay, so that was the second part.

The third part of GLKit is the texture loader. And the point here is that most OpenGL and OpenGL ES based apps are loading textures. There are all kinds of different ways to do it, and some are more efficient than others. We wanted to provide just a really easy way for you guys to get textures loaded.

We support the usual suspects as far as file formats: PNG, JPEG, TIFF, et cetera. On iOS, we also support PVRTC. And it's very, very flexible. So you can load from a file, or a URL to a file, or an NSData that contains a file, or other data types like a CGImageRef.

2D and cube map texture targets are supported. And then maybe its biggest feature is that it supports either loading the texture synchronously, which is the way that many applications are doing it now, or loading the texture asynchronously and giving you back control of the thread while the texture is being loaded. There are a lot of you who are taking a long time to get to your main menus because you're loading textures during that time synchronously.

And what you could do instead is to synchronously load only the textures that you need in order to bring up that menu and then asynchronously load everything else. And hopefully, by the time the user's finger hits the screen, you'll have asynchronously gotten everything else taken care of to get started on your first level. So think about ways that you can move that kind of work off of the main thread and away from your application startup.

Now, a couple more things in the texture loader, some options that make it convenient: it can generate mipmaps for you, it can flip the texture if necessary, and it can pre-multiply the alpha if necessary. Okay, that's number three in GLKit.
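
A sketch of the two loading styles, synchronous for the handful of menu textures and asynchronous for the rest; the file paths and the context variable are placeholders:

```objc
// Synchronous: blocks until the texture is ready, so reserve it for what the
// menu actually needs right now.
NSError *error = nil;
GLKTextureInfo *menuTexture =
    [GLKTextureLoader textureWithContentsOfFile:menuImagePath
                                        options:@{ GLKTextureLoaderGenerateMipmaps : @YES }
                                          error:&error];
glBindTexture(menuTexture.target, menuTexture.name);

// Asynchronous: returns immediately and hands the result to a completion block
// later, so the main thread stays responsive while level assets load.
GLKTextureLoader *asyncLoader =
    [[GLKTextureLoader alloc] initWithSharegroup:context.sharegroup];
[asyncLoader textureWithContentsOfFile:levelImagePath
                               options:nil
                                 queue:NULL    // NULL queue: handler runs on the main queue
                     completionHandler:^(GLKTextureInfo *textureInfo, NSError *loadError) {
                         // Stash textureInfo.name for use when the level starts.
                     }];
```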

Part four, my favorite, is the math library. So this is just a huge math library of a lot of different routines that are really useful for 2D and 3D graphics. And so, 175 functions, we support all different kind of vectors and a lot of different operations on vectors, different operations on matrices, both 3x3 and 4x4, and all the typical operations on quaternions as well, if you've moved over to a quaternion-based movement system. The entire implementation is really high performance.

And it's C-based, which is neat, because avoiding function call overhead is really important. Well, I'll say it differently: the implementation is C-based, and it's inline in the header file. And so that means if you have multiple operations that come one right after the other, the compiler may be able to optimize those together and avoid some extra function call overhead.

But the real reason why it's so fast: on iOS, we provide you with a scalar implementation that runs in the simulator, but when you're running on a device, it's using the NEON instructions of the CPU to be just blazing, blazing fast. The same analogy carries over onto OS X as well.

The implementation there is optimized with the Intel SSE instruction set, and so it's able to be very, very fast there, too. Now, all of that is just about the math operations; think about things like dot product or rotate, et cetera. There's also a matrix stack library that's provided in GLKMath as well.

And that ends up being really useful for moving code that used to be doing glPushMatrix, glPopMatrix, and so on, over to GLKit and onto the modern pipeline. So, those are the four pieces of GLKit. I've really only scratched the surface here. But if you haven't heard about it yet and you'd like to find out more information about it, I'd love to hear from you. So, that's GLKit.
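
As a small taste of the math library, a sketch of building a modelview-projection matrix and passing it to a shader; aspect, angle, and mvpUniform are placeholders:

```objc
#import <GLKit/GLKMath.h>

// Build projection and modelview matrices with the C-based, inlined GLKit math
// functions, multiply them, and hand the result to a shader uniform.
GLKMatrix4 projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f),
                                                  aspect, 0.1f, 100.0f);
GLKMatrix4 modelview  = GLKMatrix4MakeTranslation(0.0f, 0.0f, -5.0f);
modelview             = GLKMatrix4Rotate(modelview, angle, 0.0f, 1.0f, 0.0f);
GLKMatrix4 mvp        = GLKMatrix4Multiply(projection, modelview);

glUniformMatrix4fv(mvpUniform, 1, GL_FALSE, mvp.m);
```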

Final part of the talk: I'd like to discuss the new Retina displays on the new MacBook Pros. So now we have devices, of course, that are supporting Retina displays. The iPhone has a Retina display, there's a Retina display on the new iPad, and of course now on the MacBook Pro. Just gorgeous.

And so we've -- I want to talk about sort of how this works more from the coding perspective. So maybe the first observation to make is that both on iOS and on OS X, it's a very, very similar approach. This is the approach on iOS kind of broken down into stages here.

There's going to be a scale factor, which either will be set for you, for example, 2.0, or you may set it yourself. Then based on that scale factor, you will be allocating your color buffers, your depth buffer, and so on, and setting up your viewport. And then you render into those buffers, right? Now, on iOS, if you aren't scaling, then great, we're done. If you are scaling, then core animation will actually take care of the scaling for you.

And then you do some performance tuning to make sure that, now that you're driving a lot more pixels, you perhaps tune your fragment shaders and so on. Okay? So that's iOS. I forgot to mention, I meant to say this at the beginning: existing applications that do nothing also, of course, work. So for an existing application that does not take advantage of the Retina display, the system will scale.

It will scale that application and all of its views to the size they're supposed to be. Now, what I'm showing you on the slide is how it works if you decide to take advantage of the Retina display. The same thing is true on OS X as well: if you do nothing, your existing applications are going to continue to be presented at the same size. But if you are going to take advantage of the Retina display, then here's how it works.

Really, each of the steps here is very, very similar. You're going to enable Retina support on the Mac. You're going to allocate your color buffers, your depth buffers, and so on, and set up your viewports. But then there's a fork in the road. The fork depends on: are you drawing at the full native resolution, at Retina resolution? If so, you'll go down the left-hand side here; you just render, and you're done. Right? But if you are intending to scale, then you go down the right-hand side.

What you will do is render into a smaller-than-Retina-resolution FBO, and then explicitly do a blit that will scale it up into the full-size backing buffer. Okay? So that's how you make your way through. And in both cases, of course, take a look at performance; we'll get to that in a moment.

So we've covered iOS plenty in the past; today I want to cover the specifics about how it works on OS X. So on OS X, here's that first step, where we're enabling Retina support. You do this within your NSOpenGLView subclass, and there's basically just a new method, setWantsBestResolutionOpenGLSurface:.

You set that to YES, and now you're saying to that view: you think you're sized in points, at this many points, but underlying that, we want you to be providing us more information about pixels, right? So then, once you have opted in, here is your drawRect: method for the NSOpenGLView. The first thing you want to find out from the view is: what resolution am I really at? Because a lot of other things in OpenGL depend on pixel resolution.

Or on pixel dimensions, I should say. So there's a new method on views, which is convertRectToBacking:. We'll pass in the bounds of the NSOpenGLView, and it's going to give us back a new bounds, which is in pixel dimensions. Okay? Now, we use those dimensions; I'm just extracting some values out of this, and those are the dimensions that we pass in for calls like glViewport: the actual backing pixel width and the actual backing pixel height.

Now, you go on and you draw something. As you're drawing, the drawing should basically function as before, but just be aware that there are several other calls in OpenGL that are defined in pixel dimensions. You need to check through your code for those. I mentioned viewport; also scissor, read pixels, line width is in pixel dimensions, renderbuffer storage is in pixel dimensions, and so on.
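
A sketch of those two steps in an NSOpenGLView subclass, opting in and then converting the view bounds to backing pixels for the pixel-dimension calls:

```objc
// Step 1: opt the view in. Its bounds stay in points; the backing store is in pixels.
- (void)awakeFromNib
{
    [self setWantsBestResolutionOpenGLSurface:YES];
}

- (void)drawRect:(NSRect)dirtyRect
{
    [[self openGLContext] makeCurrentContext];

    // Step 2: ask the view what the backing store really measures in pixels.
    NSRect backingBounds = [self convertRectToBacking:[self bounds]];
    GLsizei backingWidth  = (GLsizei)backingBounds.size.width;
    GLsizei backingHeight = (GLsizei)backingBounds.size.height;

    // Pixel-dimension calls (viewport, scissor, and so on) use the backing sizes.
    glViewport(0, 0, backingWidth, backingHeight);

    // ... draw the scene ...

    [[self openGLContext] flushBuffer];
}
```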

And then also, of course, there's a reason why you're on the Retina display: you want to up-res things and use higher resolution assets so everything also looks more crisp. Okay. So for calls like glTexImage2D, make sure you load in your higher resolution images. Okay.

So that's essentially the draw part. Now, as I said, if your decision was to just render at the native Retina resolution, then you were done on the previous slide. If you've decided to render to a smaller-than-native-resolution surface and then scale that up to the native resolution, then this is what you would need to do: you would render into an FBO and then blit that FBO to the backing surface.

So, the idea here is you're finished drawing and you're going to essentially bind that FBO for reading and then bind the back buffer for drawing and then do a blit. So, the first two calls here are binding the read source; you're sourcing from the first attachment of an FBO. These next two calls are taking care of the draw part of the equation: you're setting the draw buffer to the back buffer.

Right? And now here is where you actually do the blit. So this is doing a copy blit from that read source into the draw destination, specifying the dimensions of both. And here I'm saying that I want linear scaling; you can also specify nearest scaling if that's what you prefer. And then you can go on from there and flush the buffer and so on. That's the end of your frame.
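
A sketch of that blit, assuming a scene FBO named sceneFBO rendered at a smaller size and a backing buffer at the full Retina pixel size:

```objc
// Read from the scene FBO's first color attachment...
glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFBO);
glReadBuffer(GL_COLOR_ATTACHMENT0);

// ...draw into the view's backing (default) framebuffer...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK);

// ...and scale the color data up with a linear filter.
glBlitFramebuffer(0, 0, fboWidth, fboHeight,                 // source rectangle
                  0, 0, backingWidth, backingHeight,         // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);

// Then flush the drawable (e.g. -[NSOpenGLContext flushBuffer]) to end the frame.
```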

Okay? So that is how you can support Retina displays, as far as the code and a few decisions to make along the way. There's one more piece, and it's the tuning-performance part of it. So you're drawing more pixels now. Most of the apps out there are just going to be able to draw four times as many pixels, and their performance is still going to be fine.

Among more sophisticated game developers, that may actually not be true for you, but for most of the apps out there, it's definitely true. So the first thing that you should try for your game, or any app, is to just try the native resolution and see how you do. Optimize your fragment shaders and see if you can tune your performance, because, of course, you're drawing more pixels now.

If that's not going to work, if you've decided, okay, I need to do something more special here, there's a couple of different approaches to take. You could do an experiment. You could render at the original sort of resolution, but turn on anti-aliasing and see how it looks and see if that visual quality is good enough for you.

Then the third option here is really to just iterate: try different fractional values between 1.0 and 2.0, between the two resolutions, and make optimizations to your fragment shaders and your rendering as you go, and try to find essentially a happy medium where you're pleased with the performance and you're pleased with the display resolution that you're using.

Okay? Now, there are a lot more topics to go through with Retina displays that are maybe specialized to particular audiences or particular cases. For example, if you already have an anti-aliased FBO that you need to scale, there's more discussion to have. If you are handling multiple displays, and potentially you have one display which is Retina and one which is not, how do you handle that? How do you handle resolution changes, and so on? We're going to have another talk.

It's the advanced high-resolution talk on Friday morning, so I'd recommend you go to that, especially if you're going to be supporting this. There's also a sample called GLFullScreen, which shows you the best practices for full-screen apps, and we're working on that for Retina as well. But something you can start with today, with the existing systems you have, is the Quartz Debug app. If you download Quartz Debug, it's up on our developer website, and it'll end up in Applications > Graphics Tools.

And then there's a checkbox in there, which is shown here, to enable HiDPI display modes. If you check that, it'll give you a few more display modes that you can then experiment with with your existing apps, to test their functionality. And maybe just let me give you one more thought on Retina displays, something that we saw on iOS.

There were, of course, a few apps that were out of the gate very quickly with the Retina display on iOS, and those early movers had an advantage. But the transition was very fast: users quickly began to expect that the applications they ran on their Retina-enabled phone would support Retina graphics. And your users who go and buy one of the new MacBook Pros will very, very quickly be in that same situation as well, where they will come to expect that applications start getting updates for Retina graphics. So don't wait.

Just go for it. Okay. So that is supporting Retina graphics. I want to bring Chris back up to wrap up. Thanks, Allan. So, yeah, that was a pretty good overview of some of the new features that we have for you. For instance, the Apple shader framebuffer fetch extension.

Allowing programmable blending, and enabling things like efficient color grading and doing things a little bit more efficiently with that. Also the texture storage and copy texture levels extensions, making sure that you're taking the most advantage of the limited memory that you have in order to get high-resolution textures.

Also the vertex buffer objects, and sync with map buffer range, being able to update your vertex buffer data as quickly as possible. And then GLKit: hopefully you guys can start using that to transition your applications to OpenGL ES 2 and the Core Profile, and be able to take advantage of the programmability that is exposed by the newer, modern OpenGL APIs. And finally, as Allan was saying, we hope you download the Quartz Debug tool. Go to the downloads and search for graphics tools, and there will be a dmg that you can download that has Quartz Debug on it.

Enable HiDPI mode, and you don't need to have a MacBook Pro with Retina display to start developing for it today. So try that out. So with that, a couple more resources Allan already mentioned during his talk: the Tuning OpenGL ES Games talk goes a little bit into 2D sprite-based applications and how to really eke the most performance out of them.

And then Harnessing GLKit and OpenGL ES applies to the desktop with Mountain Lion as well, so if you're not familiar with GLKit, I recommend you check that out. And then finally, of course, we said we're going to be updating the GLFullScreen sample code to show you how to best take advantage of Retina displays and do some of that scaling that we were talking about.

So, some related sessions: right after this talk is actually the OpenGL ES Tools and Techniques session. So stay here; it's right here in a few minutes, and they'll be showing the state-of-the-art tools that they have for debugging your OpenGL ES applications. Also, tomorrow there's an OpenCL talk, and then on Friday they're going to be giving a talk on more of the advanced topics related to taking advantage of a Retina display on a Mac, like dealing with multiple displays, for instance.