Configure player

WWDC Index does not host video files

If you have access to video files, you can configure a URL pattern to be used in a video player.

URL pattern

Use any of these variables in your URL pattern; the pattern is stored in your browser's local storage.

$id
ID of session: wwdc2009-311
$eventId
ID of event: wwdc2009
$eventContentId
ID of session without event part: 311
$eventShortId
Shortened ID of event: wwdc09
$year
Year of session: 2009
$extension
Extension of original filename: m4v
$filenameAlmostEvery
Filename from "(Almost) Every..." gist: [2009] [Session 311] OpenGL ES O...

WWDC09 • Session 311

OpenGL ES Overview for iPhone OS

iPhone • 49:51

OpenGL ES provides access to the stunning graphics power of iPhone and iPod touch. Learn what makes OpenGL ES unique on the iPhone and how it compares to desktop OpenGL. Learn how to access OpenGL ES from Cocoa Touch, and see how OpenGL ES can drive iPhone games and other mobile 3D applications.

Speakers: Mike Swift, Alex Kan

Unlisted on Apple Developer site

Downloads from Apple

SD Video (122.4 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

Hello, and welcome to the OpenGL ES Overview for iPhone. My name is Michael Swift and Alex Kan will be joining us part of the way through today's presentation. We're both members of the Embedded Graphics Acceleration Team in Apple and work on the OpenGL ES implementation. As you heard in the keynote, the new iPhone 3G S introduces support for OpenGL ES 2.0.

So let's take a look at how this affects today's agenda. We'll start off today with an overview of OpenGL ES 1.1, OpenGL ES 2.0, their similarities and some of the reasons why we'd use one version of the API versus the other. Oh, shoot, wrong one. Following that, we will talk about how you create an OpenGL ES application, specifically using the OpenGL ES template provided by Xcode.

And then talk about how the template works and how your content gets onscreen. The third part of this is once your content is rendered and presented to UIKit, it needs to get composited with the rest of the iPhone OS UI. And so we'll talk about some of the best practices to ensure that your OpenGL ES content gets onscreen in an efficient manner. And then the last section before Q&A is all about the new device specifics.

So the iPhone 3G S introduces a whole bunch of new support, both API and extension-wise. And so, we'll talk about that there. So before we jump into the Overview, there are two other OpenGL ES section-- or sessions that are in this room today. The first of these is the OpenGL ES Shading and Advanced Rendering Topic.

This covers OpenGL ES 2.0 Shaders, Programmability, and we'll also go into a bunch of advanced 3D and 2D Image Processing Effects. It can do both in ES 1.1 and ES 2.0. The third session is all about performance. So here, you're going to learn how to identify and optimize pipeline bottlenecks and also how to tune your OpenGL ES 2.0 shaders.

So with that, let's jump into the overview. What is OpenGL ES? Well, it's an open industry standard for a 3D graphics API. And this is modeled after the OpenGL API from the desktop that's been around for many years. It's been simplified and streamlined such that it works on embedded devices. The desktop version of this API has a lot of different ways of doing the same thing. So the ES version slims it down and makes it more efficient. And probably the most important thing here is that it implements a specific 3D graphics pipeline.

So let's talk some more about this. The important thing to know about OpenGL ES 1.1 and OpenGL ES 2.0 is that the underlying pipeline, or how the data moves through the system to generate your content onscreen, is the same. The only thing that's different is the vertex and the fragment processing stages. And we call this the OpenGL ES 1.1 fixed-function pipeline and the OpenGL ES 2.0 programmable pipeline.

So let's take a look at this in the diagram. So what you see here is your application provides a bunch of input data. It flows through the pipeline and eventually creates fragments onscreen as your content is drawn. So let's zoom in on the first part of this, the vertex processing stage.

So the first thing that I said before is that the API provides a bunch of inputs; position, color, texcoords, and normals. These are all used and go through what we call Transform and Lighting in the fixed-function pipeline. More specifically, your position data is transformed by a modelview projection matrix to create the resulting coordinates onscreen.

And similarly, your texture coordinates go through another matrix and the normals and your color have lighting applied to them based on the fixed-function state, so a little bit more about the fixed-function state. This basically is your matrices, your lighting information and all your enables. And the output of this is position, color, fog, and texture coordinates.

And this flows through-- or, since this is per vertex, this will flow through and go into primitive assembly which is where you have your points, lines and triangles. That's the first part of the pipeline. Next part of the pipeline we're going to talk about is the Fragment Processing Stage.

So once you have your points, lines, and triangles assembled, they're actually broken down into small fragments so that they can be textured, fogged and alpha tested. So the fixed-function fragment pipeline specifically does that. So it has a bunch of fixed function inputs as well. These are your texture-- your texture units, your texture environment state, we'll talk about that in a moment, and the output of all of this is just color, because it will be depth tested, stencil tested, and blended into your frame buffer.

So, just like we said before, all the outputs of your vertex shader-- or sorry, all the outputs of your vertex stage go into your fragment stage and get-- have your texturing applied, and there's some math that happens in the texture environment stage. You can add, subtract, multiply and do some dot products, and the result of all of that gets fog and alpha tested.

Let's go back to the pipeline. So as the summary of this, the vertex stage will do your transformations and your lighting. The fragment stage will texture, fog, alpha test. And since this is a fixed-function pipeline, it's configured through a set of parameters and a bunch of enables. So you can enable depth or enable lighting or specify what your matrices are. And the API will provide you with a lot of functionality to help manage that state, so API such as glRotate, glTranslate, glScale.
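
As a rough sketch of the kind of fixed-function setup being described, assuming an ES 1.1 context is already current; the specific values here are arbitrary and only for illustration:

#import <OpenGLES/ES1/gl.h>

// Sketch of fixed-function state setup (assumes an ES 1.1 context is current; values are arbitrary).
glEnable(GL_DEPTH_TEST);                 // enable depth testing
glEnable(GL_LIGHTING);                   // enable fixed-function lighting
glEnable(GL_LIGHT0);

glMatrixMode(GL_MODELVIEW);              // the API manages the matrices for you
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f);         // move the model away from the camera
glRotatef(45.0f, 0.0f, 1.0f, 0.0f);      // spin it about the y axis
glScalef(2.0f, 2.0f, 2.0f);              // make it twice as large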

These are all APIs that will manipulate your matrices for you. So how does this change with the Programmable Graphics Pipeline? It's pretty much the same. Your Vertex Processing Stage turns into a Vertex Shader, and your Fragment Processing Stage turns into a Fragment Shader. So let's take a little closer look at this. Important thing to note here is this is actually your code that you create. You can specify all the inputs and all the outputs. But there are some key ones you need to have because this is a 3D Graphics API.

So as you can see, the vertex shader needs to output gl_Position. This is because the subsequent stage is all about taking the per vertex data and turning it into points, lines, and triangles. And similarly, just like you saw in the fixed-function fragment stage, the output is color. And what you see up here is a vertex shader and a fragment shader that are paired together as a program. This program is-- pretty much correlates to the programmable graphics pipeline.

And by setting the program, you're enabling the whole pipeline to work. So, what is a Shader? A Shader is, as I said before, application code. And you can specify all the inputs, all the outputs, and it works on each vertex and each fragment. And it's written in a nice high-level, C-like language called "GLSL," and there are some naming conventions that you'll need to learn that will be talked about some more in the second session today.
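
As a hedged illustration of what such a program pair might look like, here is a minimal vertex and fragment shader written as C string literals, the way shader source is often embedded in an app; the attribute, uniform, and varying names are made up for this sketch and are not from the session's demo code:

// Minimal GLSL ES shader pair (illustrative names only).
static const char *kVertexShaderSource =
    "attribute vec4 a_position;\n"
    "attribute vec4 a_color;\n"
    "uniform mat4 u_modelViewProjection;\n"
    "varying vec4 v_color;\n"
    "void main()\n"
    "{\n"
    "    v_color = a_color;\n"
    "    gl_Position = u_modelViewProjection * a_position;  // the vertex shader must write gl_Position\n"
    "}\n";

static const char *kFragmentShaderSource =
    "precision mediump float;\n"
    "varying vec4 v_color;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = v_color;  // the fragment shader must write a color (or discard)\n"
    "}\n";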

So just as an overview of the OpenGL ES 2.0 Programmable Graphics Pipeline, the Vertex Shader needs to output position, and the other things you can output are completely up to you. The fragment shader needs to output color or optionally discard, just like alpha test does in ES 1.1.

Another important thing to know is everything you can do on ES 1.1, you can also do in ES 2.0. And as I mentioned before, the pipeline is configured by this pairing of vertex and fragment shaders called a "program." And also like I was talking about previously, the whole pipeline is fundamentally the same with the exception of these two blocks. So the depth testing, stencil testing, blending, those are all still driven through the API with enables just like you saw on ES 1.1.

So they're very similar. So as an overview, we should first talk about some reasons why you'd want to use ES 1.1 or ES 2.0. First of these is programmability. OpenGL ES 1.1 is really good for doing a lot of things, but there are some things it simply cannot express.

And part of that boils down to the texture environment stage I was talking about previously. This is only able to do multiplication, addition, subtraction, dot products. There are a lot of things in ES 2.0 that you suddenly are able to do because it's a much more complete language. You can do refraction, you can do normalization, all those-- all those things that can happen on the hardware.

And so you have a bunch-- or you have much more flexibility in what you can do, and you also have so much more flexibility in what all the inputs are, and all the outputs that flow through the pipeline. So if you need to do really, really creative things, they may need to be done in OpenGL ES 2.0. Next reason, hardware support.

OpenGL ES 1.1 is supported on all devices. OpenGL ES 2.0 is only supported on the iPhone 3G S. So this is a big factor for a lot of developers here. Another reason for using OpenGL ES 1.1 is ease of use. So as I mentioned before, the OpenGL ES 1.1 API has a lot of helper APIs, more specifically as mentioned; glTranslate, glRotate, glScale met-- the management of matrices and management of state, a lot of that is built into the ES 1.1 API.

And it's not a part of the ES 2.0 API. So you may-- or if you use the ES 2.0, there's a little bit more of a learning curve because you have to implement some of the stuff in your application code.

And so, it takes a little bit more work to compile your shaders and link them before you can start drawing. ES 1.1, you can pretty much use out of the box.

You set up your arrays and you just call draw arrays. And stuff shows up on your screen. So three main reasons why you want to use ES 1 versus ES 2: programmability, hardware support and ease of use. So as an overview of all of these, the things to take away are that OpenGL ES 1.1 and OpenGL ES 2.0 are fundamentally the same architecture. The major difference is what happens in these-- the two stages, the vertex stage and the fragment stage, and also how the data flows through.

And a nomenclature for this is the fixed-function graphics pipeline for OpenGL ES 1.1 and the programmable graphics pipeline for OpenGL ES 2.0. So now, I've given you a brief overview of OpenGL ES and the various versions. And now we want to talk about, how do you create an OpenGL ES application.

So let's jump into that. First thing is, you need two things. You need a place to draw into, and you need a way to talk to OpenGL. And these are your displayable rendering destination and your OpenGL ES context. All these are provided for you by the template. So, I'll start off with how do you use the OpenGL ES template, and I'm going to have Alex Kan come up and speak to that.

[ Pause ]

Thanks, Michael. So if you've created an iPhone project in Xcode, you've probably seen the OpenGL ES Application Option in the new Project Selection for, you know, under iPhone OS Applications. And so, Michael pointed out that this [coughs] or that there are two things that you need in order to create an OpenGL ES application on iPhone and the template sets up all of these things for you.

And so let's take a look in particular at what it does. So, for displayable buffer, it sets up a single full screen view which OpenGL ES will then be able to draw into. Next, it creates an OpenGL ES context configured to use ES 1.1 and it sets up the rendering state such that any rendering commands that you issue through OpenGL will go into this displayable buffer. And the last thing it does for you is it sets up a function that you can use as a drawing callback.

Now, by default, the template will set up an NSTimer that will call this drawing function for you 60 times per second, but you can customize this to fit your needs. So on that note, what can you do with this template once you have it? Well, the first thing you can do is change what it draws, I mean, you can add your own textures, models, and drawing code to the [coughs]-- and change the initialization code to load these things for you when the application is launched.

And the second thing that you can do is you can customize how this application interacts with UIKit, like you know, how it deals with the user's finger on the screen, or accelerometer events, or if any alerts come up. If a phone call comes in, or you get an SMS from a friend or something. So let's take a look at the template. We'll take a look at what it does, and we'll also spend some time making a few changes.
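
As a minimal sketch of the timer-driven drawing callback described above; the method and ivar names here are illustrative rather than the template's exact code:

// Start and stop a 60 Hz draw loop (assumes an animationTimer ivar and a -drawView method).
- (void)startAnimation
{
    animationTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 60.0)
                                                      target:self
                                                    selector:@selector(drawView)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)stopAnimation
{
    [animationTimer invalidate];
    animationTimer = nil;
}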

[ Pause ]

OK, so let's start by creating a new template from the new-- a new app using the OpenGL ES template from the new project screen on Xcode. So as you can see, I've selected OpenGL ES Application. And if I-- well, let's just name this. So if we compile this and run it, what you'll see is that by default, the drawing callback is set up to draw something simple for you, this spinning-- the spinning colored square.

So now, now that we've seen what the template looks like initially, let's take a look at what-- how this is actually set up. So there are two relevant files here, the EAGLView which is that full screen drawable that I mentioned earlier, and the Demo App-- or the App Delegate which is responsible for handling some of the application events that you might get from the system.

So first, if we take a look at EAGLView, there are a couple of relevant functions here, the first of which is just the initialization of the view itself. And Michael will go into more detail about what this is doing specifically, but what this particular piece of code does is it sets up your OpenGL ES context. [coughs] There's an additional piece of context initialization that goes on in createFramebuffer, and this in particular is what wires up the context to that view. And Michael will also explain this in greater detail.

[ Audience Remark: Change the font size! ]

OK, 1 second.

[ Pause ]

Better? [Applause] OK, and so here we have the drawing callback. This is what was responsible for drawing that colored square that you saw earlier. So, let's try making some changes to-- or actually hold on, let's also take a look at the App Delegate. And so what this does is it responds to a couple simple events.

It responds to the application finishing its launching, at which point, it will start the timer to call the-- your drawing callback, and it will also change the framerate in response to alerts coming up over the screen but, you know, you may want to do something different if your application involves user input, because you'll probably want to pause the game or something like that. So let's try making a couple of change-- simple changes to this template. So I already have some changes on the side. So if you don't mind, I'm just going to drop them into the EAGLView here.

[ Pause ]

So what I've done in this case is that I've added a new, different model that it just draws to the screen, and I've also added a little bit of touch handling, and if we take a look at the touch ended events, all I do is change the color of what I'm drawing.
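
A hedged sketch of that kind of touch handling in an ES 1.1 view; the color table and colorIndex ivar are invented for illustration and are not the demo's actual code:

// Cycle the draw color whenever a touch ends (ES 1.1 only, since it uses glColor4f).
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    static const GLfloat colors[3][4] = {
        { 1.0f, 0.2f, 0.2f, 1.0f },
        { 0.2f, 1.0f, 0.2f, 1.0f },
        { 0.2f, 0.2f, 1.0f, 1.0f },
    };
    colorIndex = (colorIndex + 1) % 3;
    glColor4f(colors[colorIndex][0], colors[colorIndex][1],
              colors[colorIndex][2], colors[colorIndex][3]);
}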

So if I compile this and run it, you can see that the app is now changed to drawing this torus here. And if I click the screen, you'll see that this torus changes color. So that's a quick look at the template and how you go about making changes to it.

Now let's pass things back to Mike to really dig into what's going on here.

Thank you, Alex. So Alex just gave you a brief overview of how you use your OpenGL ES template. So a few key points here are, it's very easy to use, you just drop in your drawing code at the draw function and out of the box, it supports ES 1.1. So, I was mentioning there's a bunch more steps that you need to go through or that the template does for you.

So now, we're going to take a look at what those steps are and how your content gets onscreen. So first off, we need to ask the question, "How does the iPhone OS display regular content?" Well, you have your application here, and it's rooted in what's called a UIWindow. Everything that is a sublayer of the UI-- or of the UIWindow such as other UIViews is displayed onscreen. So here, I'm just going to explode out the UIWindow into a UIView, and as you can see, it's a subview.

So if you want your stuff to show up, it has to be a part of this UI hierarchy, but the UIViews contain other UIViews because it's a layer hierarchy, or it actually contains what we call a Core Animation Layer. The Core Animation Layer actually provides the hooks to the content.

In this specific case, it's a CGImage. You can also use a CGContext to draw your own content, draw fonts, all that kind of fun stuff. So this is generally the flow of how regular content shows up on the iPhone OS. So how does this change with OpenGL ES? It's very similar actually.

Everything is rooted, once again, in the UIWindow, and then it has a UIView as a subview. And as you saw on the template, we call this an "EAGLView." And we call it an EAGLView because just like-- 'cause all the views are backed by Core Animation Layers. We need to actually have this backed by a special Core Animation Layer called the Core Animation EAGLLayer. And this EAGLLayer is what provides the connection between OpenGL ES and the rest of UIKit. So this is how-- how it all fits together.

So this is a brief summary, pretty much everything is the same. Going from left to right, you have your UIWindow, you have your UIView, and then the only key change that makes the EAGLView the EAGLView is that it's backed by the CAEAGLLayer, which allows OpenGL ES to render into it. So that's the only difference. That's how your content gets onscreen.

So there are five steps that you need to take in order to make this happen. The first one is to customize your UIView. The second one is to initialize OpenGL ES, just creating your context, and then we need to set up your frame buffer and connect your UIView to OpenGL.

And then you insert your drawing code and you present your contents from OpenGL to your UIView. So let's jump into the first of these steps. Step 1, customizing the UIView. This is, the-- so the connection between OpenGL ES and your UIView is a CAEAGLLayer, and we do this by overriding the layer class of this-- of the UIView.

And that code looks like this. It's very small, all you need to do is stick this at the top of your code and I'll show you momentarily when we go to the next demo, this is what's at the top of the EAGLView file that's created by the template.
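
A sketch of the override being referred to, which really is only a few lines:

#import <QuartzCore/QuartzCore.h>

// Back this UIView with a CAEAGLLayer instead of a plain CALayer.
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}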

So this allows OpenGL to render into this view. So what is the CAEAGLLayer? It's your displayable color destination for all of your OpenGL ES rendering. And it has a couple of properties that are useful for you. The first of these is color format. This allows you to choose if you want RGB565 or RGBA8 (RGBA8888), which is a 32-bit color format, and there's another property here called "retained backing." So the majority of you set this to No and don't think about it. If you do think you might need to use this, please come see us in the lab 'cause there are a lot of nuances with this topic.
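
A hedged sketch of setting those properties on the view's layer, typically done in the view's initializer:

// Mark the layer opaque, pick a 32-bit color format, and turn off retained backing.
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = YES;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8,        kEAGLDrawablePropertyColorFormat,
    nil];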

So that brings us to step 2, initializing OpenGL ES. So we mentioned that this EAGLContext is this object, but what it really is is it contains the connection to OpenGL. It contains all your state and contains your-- the command stream that's sent to the GPU. It also allows you to select what version of the API you want to use, if it's OpenGL ES 1.1 or OpenGL ES 2.0. And by default, the results of OpenGL ES rendering don't go anywhere until you've set up your frame buffer. But you can start issuing API calls as soon as you've created your context.

So let's take a look at what this might look like in code. So, what you see up here is an OpenGL ES 1.1 Context Initialization. This is exactly what's in the OpenGL ES template. First step here is to allocate your context and then you initialize it with the API you want to use. In this case, OpenGL ES 1.1. Second step is to bind your context.
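
A minimal sketch of that initialization, assuming it runs inside an Objective-C method:

#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES1/gl.h>

// 1. Allocate the context and initialize it with the ES 1.1 API.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];

// 2. Bind it so subsequent OpenGL ES commands go to this context.
[EAGLContext setCurrentContext:context];

// 3. For example, query the renderer string.
NSLog(@"Renderer: %s", (const char *)glGetString(GL_RENDERER));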

This allows you to start issuing OpenGL ES Commands that are associated with the version of the API you chose. And third part is the example function you can call which is to query the renderer string. That brings us to step 3. You want to be able to connect your UIView and OpenGL together.

And we do this by configuring the framebuffer. So, this allows us to specify where OpenGL ES will render. And so there are a few key things here. OpenGL ES renders into what we call renderbuffers and into textures. And then well, there are objects called framebuffer objects that group the renderbuffers and the textures into color, depth and stencil groups so that the OpenGL ES API can draw into them. And this is all a part of the OES_framebuffer_object API that is available in ES 1.1 and 2.0.

So let's take a closer look at renderbuffers. As I said before, renderbuffers are where OpenGL ES draws its contents. And the APIs allow you to specify the format. So if you're using RGB or RGBA or depth or stencil, you specify what you want to use and you also specify the width and the height of the allocation. So that's how the-- that's what the renderbuffers are.

But then there are also the framebuffer objects. These determine where OpenGL ES renders. And they're just groupings of renderbuffers. So think of it as you have one encapsulating object called your framebuffer object that just groups your renderbuffers together and it tells OpenGL ES where to draw. And it supports color, depth, and stencil.

And the important thing to note about all these is since there's a lot of flexibility of determining what your attachments are for the color, depth, and stencil, you always need to check to make sure that it's supported and we'll talk about that momentarily. So, how do we fit this all together? We have our framebuffer objects, we have our renderbuffers, and we have the Core Animation EAGLLayer. Well, first off, we want to create a color renderbuffer. So how do we do this? We generate a name for it.

So glGenRenderbuffer, we bind the renderbuffer. And then there's a function on the EAGLContext that allows us to connect the CAEAGLLayer to this color renderbuffer. This is done through renderbuffer storage from drawable. This is how you get an externally visible color buffer bound into OpenGL. So that when you draw, your color results end up in that layer. And it's subsequently visible onscreen. So that's how you attach the CAEAGLLayer to your renderbuffer.

Similarly, you have a depth renderbuffer, you generate a name for it and you bind it, and then you-- instead of having external buffer, you have-- you're creating an internal private to OpenGL ES depth attachment so that you can draw your depth information. These two renderbuffers, we then want to connect together into a framebuffer object and this follows a similar pattern. We generate a framebuffer object. We bind it. And then what we want to do is attach the color renderbuffer to the color attachment point of the framebuffer object.

This is done through glFramebufferRenderbuffer. Confusing name but it works. And you want to do the same thing for your depth buffer. And you bind it to the depth attachment. And as I mentioned earlier, you need to check to make sure that the permutation of state that you chose for your color and depth is supported, and that all the renderbuffers are the same size, with glCheckFramebufferStatus.
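
A hedged sketch of that whole sequence using the OES-suffixed ES 1.1 entry points, assuming the context and the CAEAGLLayer (here called context and eaglLayer) were created as described earlier:

GLuint framebuffer, colorRenderbuffer, depthRenderbuffer;
GLint width, height;

// Color renderbuffer, backed by the CAEAGLLayer so results are visible onscreen.
glGenRenderbuffersOES(1, &colorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:eaglLayer];
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &width);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &height);

// Depth renderbuffer, private to OpenGL ES, allocated at the same dimensions.
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, width, height);

// Framebuffer object that groups the two attachments.
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);

// Always verify the combination of attachments is supported.
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Failed to make complete framebuffer object");
}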

As long as this succeeds, everything is good and you can start drawing. So as I mentioned, there are some supported configurations. The iPhone, the iPhone 3G and all the iPods which are PowerVR MBX Lite based, all support color and depth. The iPhone 3G S which is a PowerVR SGX chip supports color, depth, and stencil. And that brings us to step 4, drawing your content. This is entirely up to you, as you can see, we have some nice environment mapped photos here or-- and other post-processing effects, and some raytracing stuff.

All this is possible in ES 2.0 and many of these things are possible in ES 1.1 as you'll see in the second session. That brings us to step 5, presenting your content. So you've created your context. You've bound your CAEAGLLayer to OpenGL, you've drawn into it. Now you need to present it to the UIView and the rest of UIKit.

You do this by binding the color renderbuffer and then calling present renderbuffer, which is a method on the EAGLContext. That's all you need to do, and then it shows up in the layer tree and onscreen. So as a summary, the first thing you need to do is customize your UIView. This boils down to those two things I showed you; specifying the layer class, and optionally, adding some drawable properties to determine the initial color format. The second step is to create your OpenGL ES Context.

Choose the version of the API you want to use, and then start going. The third step was configuring your framebuffer. This is creating your framebuffer object and attaching your CAEAGLLayer to OpenGL. The fourth step is drawing the content, pretty self-explanatory. And the fifth step is presenting your content from OpenGL to the UIView. That's great, but I haven't told you how to do this in ES 2. The good news is it's basically the same. The only difference is the second step, and that's because you need to choose a different version of the API to initialize your EAGLContext with.

So let me show you some code of what this looks like. Just like you saw before, you need to allocate your EAGLContext and then you initialize it with the rendering API OpenGL ES 2 as opposed to the rendering API OpenGL ES 1. So the important thing to note is that this can fail if the underlying hardware doesn't support ES 2.
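
A small sketch of that check, together with the fallback described in the next point:

// Try ES 2.0 first; -initWithAPI: returns nil if the hardware can't support it.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (context == nil) {
    // Fall back to ES 1.1, which every device supports.
    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
}
[EAGLContext setCurrentContext:context];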

So that brings us to the second part of this which is if 2.0 isn't supported, 1.1 is supported. And you can allocate and then initialize your context to support ES 1.1. So let's take a look at how you build an OpenGL ES 1.1 and 2.0 compatible application. So first off, I'm just going to run the application. So you have an idea of what it actually does.

So what you see here is the Stanford Rabbit. This is OpenGL ES 1.1 and it's doing per vertex lighting, both diffuse and specular. Now I'm going to have it switch on the fly to OpenGL ES 2.0 by just clicking. So as I mentioned before, OpenGL ES 2.0 can do everything OpenGL ES 1.1 can and more. So this is the exact same thing. But since it's OpenGL ES 2.0, we did make some kind-- a few changes to it.

First of these is added deformation of the model. So I'm clicking and holding and now the rabbit is inflating, and he'll bounce back and forth. So this is happening in the vertex shader. So this was not possible on ES 1.1 running on the hardware.

You could always do it on the CPU, but then that's a lot of wasted work and a lot of power that you've consumed. ES 2.0 allows you to do this in real time on the actual hardware.

So that's great, and I'm going to inflate it again and we can see as we zoom in, the specular highlights look kind of funny. So-- 'cause there's all the-- because they're per vertex, you see all the triangles. So now I'm going to switch to a different mode, which is per pixel.

This is something that you can only really do in ES 2.0, and you can see that it's not quite as sharp as-- or as jagged as the ES 1.1 version, and the version that I just showed you, it was per vertex. It's much smoother and much nicer. So let's go to another effect that you can only do in ES 2, which is refraction and reflection.

So what we have here is an environment map, and we're treating the bunny like glass. So we have everything in the scene being shown through the rabbit and it too can be deformed as you can see, and so you have all these cool effects. And you can see that there's-- it pretty much matches to the environment, it's nice and pretty. And we can go one step further with this. We can actually do refraction on a per vertex basis, or sorry, on a per RGB component basis.

So you get kind of like a prismatic effect. It's really quite interesting. So these are some of the things that you can do in ES 2 that you just couldn't do in ES 1.1-- or ES 1.1 because the hardware couldn't-- or was not capable of it. So, now that I've shown you that, let's go and see how the five steps that I talked about earlier are mapped out in the template. So I'm going to start with the EAGLView and the first thing you can see is step 1, which is specifying that the layerClass that backs the UIView that you created is a CAEAGLLayer, in order to allow OpenGL ES to render into it.

The second part of this is actually setting up those drawable properties that we mentioned. So how you do this is you get the layer that backs the UIView, that's the self.layer, and then you-- on that layer, you mark it as opaque, because you always want to do this for performance reasons, and then you want to set the properties. So in this specific case, set it as RGBA8888, so it's a 32-bit color format, and turn off the retained backing flag. Following that, we then initialize an ES 1.1 renderer and we start drawing.

So let's take a look at the ES 1.1 renderer code. And that brings us to step 2. So just like you saw in the EAGL template, you need to allocate and initialize your EAGLContext with a specific API. This object manages the ES 1 API. And allocates it, binds it, and sets up some initial state in order to start drawing.

And just like Alex had shown you previously, the template also supports configuring the framebuffer. So this section needs to be zoomed out just a bit. But how this works is you generate the framebuffer object, you bind it, and then you create a renderbuffer object for your color attachment, you bind it, and then there's that function I was talking about earlier which is the EAGLContext renderbufferStorage from drawable.

And as you can see, you pass in the renderbuffer enum to specify where it's going to bind on the OpenGL ES Context and the layer which is the Core Animation EAGLLayer. This connects the two objects and then you can bind that renderbuffer to the framebuffer object using this. So the other thing to note is that since your layer is going to animate, you also want to make sure that you query the right dimensions out of the object.

So this function gets called whenever your layer or-- and your view resizes. So you can query out the width and the height into local variables, and this is important because you need to have your depth allocation be the same dimensions in order to successfully pair the color and the depth attachments into a framebuffer object.

And then finally at the end, we check to make sure that everything succeeded happily and this allows us to start rendering. So let's take a look at the drawing code. So just like Alex said before, there's a function that allows you to render, it gets called back, you bind your context to make sure that you can issue drawing commands, and you bind your framebuffer object.

You set the viewport, and then you render your scene, and then you get to what was step 5, which is presenting what you just rendered. So usually, step 4 and step 5 go hand in hand like this. You draw your content, and if you drew content, you then present the new changes to your UIView. So that's how the OpenGL ES 1.1 Renderer object works.
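
Putting steps 4 and 5 together, a hedged sketch of such a draw callback, reusing the names from the earlier framebuffer sketch:

- (void)drawView
{
    [EAGLContext setCurrentContext:context];                        // bind the context
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);          // bind the framebuffer object
    glViewport(0, 0, width, height);                                // set the viewport

    // Step 4: render the scene (placeholder clear; your drawing code goes here).
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Step 5: present what was just rendered to the CAEAGLLayer.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}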

So let's take a look at how things change for the ES 2.0 renderer. So it's very similar. We just initialize our EAGLContext with OpenGL ES 2 instead of OpenGL ES 1, and as long as this succeeds, then we go through the whole initialization sequence and start drawing our stuff. And once again, the framebuffer setup is nearly identical.

Exact same sequence, you can actually use the exact same functions with the OES suffix if you so desire. But the framebuffer object API was pulled into the OpenGL ES 2.0 Core. So you don't have to use the suffix if you don't want to. You can have both of them, doesn't matter.

But for simplicity, we dropped it in the ES 2.0 version of the code. It's doing the exact same sequence and operations however, creating a framebuffer, creating the colorbuffer, binding the renderbuffer from the EAGLLayer, and exact same steps. And drawing once again is pretty much the same sequence, bind your context, bind your framebuffer, draw and then present.

So there's a bunch of similarities between the 1.1 and 2.0 APIs. And the only real difference is what version of the API you chose to initialize your context with. So if we could go back to the slides. So we just showed you how to use the OpenGL ES template, how to create your application, and how all the pieces fit together. The latter part of it is a little more complicated.

So if all you-- all you have to do is just open up the OpenGL ES-- OpenGL ES template and then if you want to use 2.0, just add that one-line change from-- to select the ES 2.0 rendering API instead. At this point, I'd like to bring up Alex Kan to talk about how your content gets composited with the rest of the iPhone OS.

[ Applause ]

OK, so, as Michael mentioned early in the presentation, there are a lot of similarities between how 2D content gets to the screen and how 3D content gets to the screen. And namely, the only difference is that the UIViews that you used for 3D rendering are backed by CAEAGLLayers instead of regular CALayers.

So let's take a look at what that actually means for you in terms of using UIKit with GL Content. And so, what you'll find is that most things behave exactly the way that they did when you were using UIKit for 2D content. So namely, when you call present renderbuffer to update your content, what you'll find is that the contents basically stay in that state until you call present renderbuffer to update that content again.

In addition, any UIView properties that you are used to applying, such as transforms, positions, alphas, all these things continue to apply to UIViews that are backed by CAEAGLLayers. And also, in addition to being able to fade these layers in and out, CA will obey the same compositing model that it uses to blend 2D content when it blends your 3D content.

So there is one difference that you need to be aware of, which is that if you create a CAEAGLLayer explicitly by you know, setting up a UIView subclass that uses CAEAGLLayer instead of CALayer, or by allocating a CAEAGLLayer yourself, what you'll find is that the layer is marked opaque by default, which is a slight difference from what you'll find if you just use a regular UIView 'cause these are not opaque by default. But what do all these similarities mean for you? It means that when you use GL Content and UIKit, those views that you draw GL Content into are basically first-class citizens in your view hierarchy. It means that you can do anything that you could to 2D content.

But of course, if you're working on a game that's just mainly 3D and you're not really dealing or-- in-- you're not intending to deal with the window system, you may be wondering, "Well, how do I know that this is going to be fast?" And so, it is true that some compositing operations are faster than others.

And there are a couple simple rules that you can keep in mind to make sure that things stay fast. And what you'll actually find is that if you have played with performance, many of the rules that applied there also apply here, but on some level, they're even more important.

So let's-- let's take a look. So what you'll generally find is if you have a view with GL Content, you'll be able to update the screen faster if that view remains untransformed. I.e.-- like, meaning you don't rotate it or scale it. And things are generally fastest if the rectangle defining your GLView stays aligned to pixel boundaries. So the nice thing about this is that this is something that you can check, I mean that you can already check using the Core Animation instrument in Instruments just by checking the Color Misaligned Images checkbox if you're running your app through Instruments.

So next, let's take a look at what you need to think about when you're blending GL Content. And what we actually recommend is that you should keep your views or your GLViews opaque whenever possible. So now, this has two aspects to it, the first of which is that you must signal to Core Animation that the actual contents of your view are opaque. There are two ways that you can do this.

Michael mentioned the drawable properties for the CAEAGLLayer earlier in the talk. And so what you can do is you can pick a color format for your CAEAGLLayer that has no alpha channel. Now you can also explicitly hint to Core Animation that even for an image with alpha channel for-- or even for an image with an alpha channel that you want Core Animation to treat that as opaque by just simply setting the opaque property on that layer or on the view that that layer backs.

And so I said that there's a second aspect to this, which is that even if Core Animation knows that your layer is opaque, you need to make sure that your view is being drawn at full opacity and that's just a matter of making sure that your view's opacity is set to 1.0 whenever your view is onscreen.
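
In code, those two hints amount to something like this, where glView is a hypothetical name for the GL-backed view:

// Tell Core Animation the layer's contents are opaque, and keep the view at full opacity.
glView.layer.opaque = YES;
glView.alpha = 1.0;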

So the nice thing about this is that it doesn't preclude you from using other UIKit elements or-- and it also doesn't preclude the system from putting up other UIKit elements on your behalf, such as alerts, or more interestingly, the system keyboard, which you can basically bring up over your GL Content if you ever need to get text input from the user.

So that saves you the trouble of having to implement your own keyboard. So a small note about OpenGL ES Content that you actually want to blend over other content. Even though we don't recommend this, there's one important thing that you need to keep in mind, which is that Core Animation uses premultiplied alpha everywhere in the UI for blending and your app will need to obey the same rules as well.

And so because this is kind of an in-depth topic, we recommend that if you need to do anything like this, come-- to come to talk to us in one of the labs. And so one other thing that you may have done to regular 2D content, which you may want to apply to your 3D content, are animations and view effects.

And there's a simple thing to keep in mind here, which is that these animations and transitions will run faster if you're not updating your GL Content at the same time by you know, with your render callback or something like that. So what you want to do in this situation is pause that timer or whatever mechanism you're using to update the contents of your GLView whenever you also want to run a transition or animation. So what I have for you is an example app that demonstrates these particular principles in action.

[ Pause ]

OK.

[ Pause ]

So let me run the app for you. We won't take a look at the source code in particular, but I'll show you what this app is doing. So here, we have a torus rendering in a window with what looks like rounded corners, and what we can do is we can actually flip this around and we can apply a vignetting effect to the entire screen.

And so now, you can see that we've darkened the edges of the screen, you know, and an additional thing you can do is you can drag the slider back and forth to change the alpha. So now that I've shown you the demo, let's take a look at how it's actually constructed.

Can we go back to the slides please?

[ Pause ]

So what you saw on screen probably looks a lot like this. And so I'm going to break it out into its component layers now. So the vignetting effect is actually achieved by having a separate image that we just faded in and out over our GL Content. And so, let's pull out another layer, which is the rounded corners and we actually do this by drawing a rounded rectangle of black over the content.

And so the important thing to note here is that we've achieved this effect even while keeping our GL Content opaque because all we need to do is draw these effects over it. And the nice thing about this is that because you're doing this via UIKit, laying out things like Complex UI, you can do just using the regular UIKit view hierarchy and you don't have to go through the trouble of repeatedly drawing this every single frame whenever you're updating your GL Content.

And so to summarize, basically, this is like repeating the sorts of rules that you would need to keep in mind whenever you're doing 2D content in UIKit. And, but first of all, I mean everything that you're used to should continue to work even if you're dealing with 3D content instead of 2D. But a couple simple rules to follow are that you need to keep your views screen-aligned and opaque.

And that's really all there is to it to make sure that your compositing stays fast. So let's talk a little bit about what the release of the iPhone 3G S means for you in terms of dealing with this new piece of hardware and this new GPU. So this manifests itself in three ways, the first of which is the API support.

Michael has already mentioned that open-- that the iPhone 3G S adds support for ES 2.0. And an important thing for you to remember is that it also supports ES 1.1. So if you wanted-- if you want to target an app to all phones and all iPod Touches, you can continue to do that through ES 1.1. So digging down into more specifics, [clears throat] or actually, I'm sorry, one more thing.

Now if you do decide to target iPhone 3G S in particular, it's important that you hint that your application requires ES 2.0 by editing the Info.plist for your application in Xcode. And so the important thing to keep in mind or the important thing to put in is this line that I've highlighted in blue. So you'll need to add the required device capabilities called "OpenGL ES 2.0." Hopefully, that's readable to you guys. So second, even in ES 1.1, implementations may differ in functionality from device to device.

And so how this typically manifests itself is extensions. And so what extensions allow piece-- particular pieces of hardware or implemented-- or driver implementations to do is to provide additional functionality beyond what ES 1.1 already provides. And so the caveat to using this functionality is that you need to query for its existence. And so, this can-- like I said, this can vary from device to device. So it's important that you query whenever you intend to use this functionality.
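
A small sketch of such a query; the extension name here is just an example, not a recommendation:

#include <string.h>

// Check the extension string before using optional functionality.
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
BOOL hasMapBuffer = (extensions != NULL && strstr(extensions, "GL_OES_mapbuffer") != NULL);
if (hasMapBuffer) {
    // Safe to use the optional functionality on this device.
}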

So let's take a look at the set of extensions that are-- that you might see on iPhone OS devices, running ES 1.1. And so if you've seen this list before last year, you'll notice that there are actually three new entries and these are extensions that were recently added for-- with iPhone 3G S. OK, so let's take a look at the list of extensions for ES 2.0.

What you'll notice is that this list is a lot shorter because ES 2.0 is a newer API; what has happened is that a lot of functionality that used to be optional in ES 1.1 has now been absorbed into the core functionality of ES 2.0. So the third thing that you need to keep in mind is that because iPhone 3G S is a more powerful GPU-- or has a more powerful GPU than the GPU found in previous iPhones is that some of the-- some of the things that it can do-- well, it can-- well, yeah, it can do more things. It can support larger texture sizes.

It can support more texture units for more complicated effects using texture combiners in ES 1.1, and of course it supports shaders. And the important thing to keep in mind here, as with the other things, is that if you intend to use functionality, you need to make sure that the hardware that you're running on supports that. So you need to query the limits to make sure that you're using a number of texture units that's actually available on the system that you're running on.
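
A minimal sketch of querying those limits before relying on them:

GLint maxTextureSize = 0, maxTextureUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &maxTextureUnits);  // use GL_MAX_TEXTURE_IMAGE_UNITS under ES 2.0
NSLog(@"Max texture size: %d, texture units: %d", (int)maxTextureSize, (int)maxTextureUnits);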

So in summary, I mean, the general thing to take away from the addition of iPhone 3G S to the set of iPhone OS devices, is that you generally can't assume that functionality is going to be there. So, and of course, OpenGL provides a mechanism for you to ensure that this is the case. And so what you need to do is you need to query your extension limits.

You need to-- or you need to query your extensions, you need to query your implementation limits, and as Michael showed you before with ES 2.0, you know, attempting to create an ES 2.0 Context may return nil, in which case, you need to prepare to fall back if you're not creating a 2.0-specific application. So that basically should hopefully cover everything you need to do, I mean everything you need to know to get started with OpenGL ES on iPhone. So if you have any more questions, you know, feel free to contact our, where's he at, our Technology Evangelist, Allan.