
WWDC10 • Session 417

OpenGL ES Shading and Advanced Rendering

Graphics & Media • iOS • 54:26

The OpenGL ES Shading Language lets you tap into the programmable graphics pipeline enabled by OpenGL ES 2.0. Dive straight into the vertex and fragment shader code used to create spectacular visual effects. Find out how OpenGL ES 2.0 advanced rendering techniques can accelerate and transform your application.

Speakers: Luc Semeria, Mike Swift

Unlisted on Apple Developer site

Downloads from Apple

HD Video (232.8 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

[Luc Semeria]

Hello and welcome to the OpenGL ES Shading and Advanced Rendering session. My name is Luc Semeria, and joining me on stage later will be Michael Swift. What we're going to talk about here are the techniques that are used in Quest, how to implement those techniques efficiently on all of our devices, and we'll also go a bit further. So here is the agenda. We know some of you are not quite familiar with OpenGL ES 2.0 yet and don't know everything about shaders, so we'll do a quick recap of the programmable graphics pipeline and go through the basics of programmable shaders.

Then we dive down into more detail on real rendering techniques and how to implement them efficiently on the devices. So we'll talk about skinning for animated characters, lighting -- how to make nice lighting -- and shadows. So if you've been programming on the iPhone using OpenGL ES 1.1, you are going to learn a lot here about shaders and how to use them.

If you've already been programming with shaders on the desktop or with OpenGL ES 2.0, you may already be familiar with many of those techniques, but we are going to show you how you can efficiently implement these techniques so you can have real-time, nice-looking effects on all of the devices, including the iPad and the new iPhone. Let's start with the recap of the graphics pipeline.

I bet most of you have seen this picture before, right? Start with the objects, a bunch of points; those points are broken down into vertices that get passed to the Vertex Shader. One of the things the Vertex Shader does is it transforms the coordinates of those vertices from object space into clip space through eye space. Next step, you take those vertices, put them back into triangles, the triangles get rasterized into separate fragments, and those fragments are going to end up as pixels on your screen.

The Fragment Shader -- one of the things it does is compute the final color of those fragments -- and then they get passed down to a few more steps and show up on your screen. That's the graphics pipeline. Now, there are two components here that we are going to focus on today: the Vertex and Fragment Shaders.

They are programmable, as I mentioned, and the way you program those is using a C-like language called GLSL. And like any program, you need to compile it and link it into a program, but this time, the program runs on the GPU. So shaders are the way you program your GPU. And I think a programmable pipeline is great. OpenGL ES 2.0 allows you to do very, very nice effects. You can do better bump mapping, better environment mapping, better image processing, good effects like refraction.

But OpenGL ES 2.0 is also very flexible, and that allows you to create your own effects. Say you want things to have more of a cartoony look, you can do that. You want to make things look more real-life, like we see on the horse here, you can do that, too. It's up to you.

The other good side of being flexible is it helps you tune your algorithms, select the right algorithms so you get the right effects, and at the same time, the right level of performance. And the right level of performance is especially important when you look at all of the devices that support OpenGL ES 2.0.

So we've said this before, right, they all use PowerVR SGX as the GPU, and the devices are the iPhone 3GS, the 3rd generation iPod touch, the iPad and the iPhone 4. If you look at the screen sizes, you get about five times more pixels to drive when you write a native iPad app, and about four times more pixels if you want to use the native resolution of your iPhone 4.

So if you just take your application that runs fine on your iPhone 3GS and try to scale it to the iPad or the iPhone 4, you may end up being fill-rate bound. So being able to tune the performance and select the right algorithm to get to real time is extremely important. Now you have the background. Let's look at the basics of how you can write shaders.

There's our pipeline again, and let's look at the Vertex Shader first. I mentioned before that the Vertex Shader is used to do your position and normal transformations; you can also do texture coordinate transformations. This is typically where you would implement lighting and also skinning, as we will see later. There are several inputs to your vertex shaders. The first set of inputs are called attributes. Attributes are defined for each vertex on your object. So things that you would define as attributes are the positions of those vertices, texture coordinates for those vertices, normals, the color.

That's the first set of inputs. The second set of inputs are called uniforms, and they are constant for all the vertices on your object. So examples of uniforms: your ModelViewProjectionMatrix that you use to go from object space into clip space, the ModelViewMatrix, the light position -- because your light is going to be constant, at the same location, for all of your vertices. And the outputs of the Vertex Shader are, first and most importantly, the final position of the vertex, as well as varyings that get passed to the Fragment Shader.

So examples of varyings: your texture coordinates, the final color of your vertex, the normals if you need them in the Fragment Shader, and so on and so forth. Next, let's look at the Fragment Shader. The Fragment Shader is typically used to do mostly texture lookups. This is where, for every pixel, you are going to get the right value of the texture that you want to use for this fragment. That's where you do your texture environment. If you want to do some per-pixel operations, effects like fog, for example, that's where you do it.

Again, the Fragment Shader has several inputs. It has both varyings and uniforms as inputs. There is one predefined varying coming in, which is the fragment coordinate. This comes directly from the rasterization stage. And then there are the sets of varyings that you define. Examples of such varyings are normals, colors, texture coordinates, and they get interpolated by the rasterization stage and passed from the Vertex Shader to the Fragment Shader. The other set of inputs are uniforms. Again, uniforms are going to be constant for all of the fragments in your image.

For example, if you do fog computation, this is where you have your fog factor, or which texture unit you want to load your textures from -- and, of course, the Fragment Shader is going to load the texture. The Fragment Shader has one output, which is the final color of your fragment.

So now you've seen the interface of those Vertex and Fragment Shaders. How do you use them? Well, you use the OpenGL API to compile, link and use those shaders. So let's start with the Vertex Shader. The way you use that is you create the shader, you attach your source code, and you compile that shader. You do the same for the Fragment Shader. Now you have both compiled. You create your program, you attach those two compiled shaders, and you link your program. You do all of this in the initialization phase of your program.

The other thing in this initialization phase is querying the locations of your attributes and uniforms. Then in the main loop of your game, you update -- you first say which program you want to use, then you can update the values of those uniforms and attributes and you can draw.
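As a rough C sketch of that flow using standard OpenGL ES 2.0 calls (the variable names, shader-source strings and the draw call here are illustrative, not the code from the session):

    /* Initialization: compile both shaders, link them into a program,
       and query attribute/uniform locations.
       vertexSource, fragmentSource, mvpMatrix, vertices and vertexCount
       are assumed to be defined elsewhere. */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSource, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSource, NULL);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    GLint positionLoc = glGetAttribLocation(program, "a_position");
    GLint mvpLoc      = glGetUniformLocation(program, "u_mvpMatrix");

    /* Main loop: select the program, update uniforms and attributes, draw. */
    glUseProgram(program);
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvpMatrix);
    glEnableVertexAttribArray(positionLoc);
    glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);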

That's how you quickly write, compile, link and use shaders. Let's take a very simple yet very useful example: texture mapping. You probably all know what texture mapping is, right? You start with the view of the world. In this case, it's represented as a wireframe. It's the Quest world here, one specific room of the Quest game, and we have a texture atlas that we use. And the idea is to take a specific part of this texture atlas and map it onto our world.

So, in this case, we take a piece of the texture atlas, and we want to apply it to the wall in our model. That's texture mapping. That's how you can end up with a world that looks slightly more real. How does that look in a shader? Well, here is a Vertex Shader. It's pretty much as simple as it gets.

It gets your attributes and uniforms as inputs, two attributes in this case. First, the position of each vertex, right? Second, the texture coordinate for each of those vertices on your object. The next input is the uniform. It's the ModelViewProjectionMatrix, which is used to transform the position from object space into clip space. And the output is going to be the final texture coordinate that you pass through from your Vertex Shader.

There is a main function inside the Vertex Shader, and that's the program that's going to apply to every single vertex. The first line does a matrix multiplication of your ModelViewProjectionMatrix by the original position of your vertex, and you end up with the final position of the vertex. And the second line just passes the texture coordinate through to the Fragment Shader. On the Fragment Shader side, we've got a uniform, which is which texture unit we want to load the texture from.

In this case, we just have one texture unit. Then there are the texture coordinates that come from the Vertex Shader and have been interpolated for all the points on the wall, in this case. And in the main function of the Fragment Shader, we simply use the built-in function texture2D to look up the texture at those specific texture coordinates.
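As a GLSL sketch of roughly what that pair of shaders looks like (the names are illustrative, not the actual Quest code):

    // Vertex shader: transform the position into clip space and
    // pass the texture coordinate through to the fragment shader.
    attribute vec4 a_position;
    attribute vec2 a_texCoord;
    uniform mat4 u_mvpMatrix;
    varying vec2 v_texCoord;

    void main()
    {
        gl_Position = u_mvpMatrix * a_position;
        v_texCoord = a_texCoord;
    }

    // Fragment shader: look up the texture at the interpolated coordinates.
    precision mediump float;
    uniform sampler2D u_texture;
    varying vec2 v_texCoord;

    void main()
    {
        gl_FragColor = texture2D(u_texture, v_texCoord);
    }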

And that's how we end up with the final color of the fragment. So you end up with a view of the world that kind of looks real but doesn't look that good. That is a start. The next step is that you want to add your character and animate it, and you want to make this look good, make this look real. So we are going to talk about those techniques now. Mainly, we're going to talk about skinning for animating the character; lighting, different ways of lighting; and finally, shadowing. Let's start with skinning.

So skinning -- the idea of skinning is to model the deformation of the skin based on the animation of a skeleton. And the technique we are going to use here is called smooth skinning, or linear blend skinning, because we use linear interpolation to do the skinning. It's a pretty simple technique, and yet, it looks pretty good.

So the way you do skinning is you start with a skeleton, which is a hierarchy of bones and joints, right, each bone connected by a joint. And that's how you animate your character: you animate the skeleton. That's the animation we use in this case. The next step is to bind a skin mesh on top of the skeleton so it starts looking more real.

And for each of the vertices on the skin mesh, we are going to bind it to one or more bones or joints. So let's look into the arm of our hero here. You can see here the different vertices on this arm, and we are going to especially look at one vertex, which is on the elbow, this vertex.

And it's bound to two bones: the upper arm and the lower arm. And here, we represent it as being bound to the joint, not the bone itself. And there are two definitions that we are going to introduce. The first one is the definition of a weight. The weight represents the influence of a given bone or a given joint on that vertex position, on that point on the skin. So, in this case, we're on the elbow, which is right in the middle between the upper arm and the lower arm. So the weights are just 0.5 for each bone.

The second definition is for the skinning matrices. And what the skinning matrix does is it combines the transformation of the bone, so the animation of the bone, with the position of the vertex -- the position of the skin -- with respect to that bone. So here, we move the lower arm a little bit, and you can see we define two points: P1 and P2. P1 corresponds to the position of the vertex on the skin if it were only attached to the upper arm. So, in this case, it hasn't moved. P2 corresponds to the position of the skin if it were only attached to the lower arm.

And the way we get those two positions is by simply multiplying each skinning matrix by the original position of the skin. So we end up with two points, P1 and P2. So we can move the arm a little bit more, and the next step is to get the final position of the skin. And the way we do that is by a simple interpolation: we multiply P1 and P2 by their respective weights, sum them, and we end up with the final position of the skin.
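In GLSL-style notation, that interpolation is just (a sketch with illustrative names):

    vec4 p1 = skinningMatrix1 * bindPosition;   // as if attached only to the upper arm
    vec4 p2 = skinningMatrix2 * bindPosition;   // as if attached only to the lower arm
    vec4 skinnedPosition = weight1 * p1 + weight2 * p2;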

That's how we animate the different points of the skin mesh. So we apply that for all the vertices on the body. Once we've done that, we can do our usual texture mapping. And by animating the skeleton, the underlying skeleton, we can animate the whole character. That's skinning; that's how skinning works.

How is it implemented in a shader? Let's first look at this shader, which is what you would find if you look on the web for a skinning shader. It looks pretty complex, right? You have a for loop there, you get a branch. We can probably do better, especially on these devices, right? How about this: let's assume that every vertex, every point of the skin, is always attached to exactly two bones.

This way we don't -- we can unroll the loop and we don't need to check whether or not a given bone -- a given vertex is attached to a bone or not. We end up with this code that's pretty simple, and that looks like what we had in the algorithm. So let's now go through it quickly.

First we have a set of attributes: the position of each vertex on the skin; then which joints are attached to that specific vertex, joint 1 and joint 2; then the respective weights for those two joints. Those are our attributes. Then the uniforms are the skinning matrices for each of those joints -- they correspond to the animation of the skeleton -- and our usual ModelViewProjectionMatrix.

The main program for the Vertex Shader is -- well, you do your matrix multiplications of the skinning matrices by the position. You end up with the two points I mentioned before, P1 and P2, and then we simply multiply P1 and P2 by the weights that correspond to those two bones and end up with the skinned position. Then we apply our ModelViewProjectionMatrix to go into clip space, and we end up with the final position of the vertex.
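A minimal GLSL sketch of that two-bone skinning vertex shader (the names and the size of the matrix array are illustrative, not the actual Quest code):

    attribute vec4 a_position;
    attribute vec2 a_texCoord;
    attribute vec2 a_joints;    // indices of joint 1 and joint 2
    attribute vec2 a_weights;   // respective weights of those two joints

    uniform mat4 u_skinningMatrices[32];   // animation of the skeleton; array size is illustrative
    uniform mat4 u_mvpMatrix;

    varying vec2 v_texCoord;

    void main()
    {
        // The two candidate positions, one per attached joint.
        vec4 p1 = u_skinningMatrices[int(a_joints.x)] * a_position;
        vec4 p2 = u_skinningMatrices[int(a_joints.y)] * a_position;

        // Linear blend by the weights, then project into clip space.
        vec4 skinned = a_weights.x * p1 + a_weights.y * p2;
        gl_Position = u_mvpMatrix * skinned;

        v_texCoord = a_texCoord;
    }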

We're going to do texture mapping here, so we pass, as we did before, the texture coordinates to the Fragment Shader. In this case, the Fragment Shader is the same as the one we saw before, so it's just doing a texture lookup. That's how we implement skinning, and it's pretty efficient. So here I have my iPhone, and I start my application. We have two characters here that are animating and being skinned, so you can see they look like they have kind of a real animation.

And you may think it walks a little bit funny because we actually use the animation from the Quest game, where it's carrying a big sword, so it's pretty heavy. That's why the arm is always in the same location. Now, I talked about performance and performance being very important. Let's bring up Instruments and see how we are doing. We can switch to the demo machine. All right.

So I'm bringing up Instruments, and since I'm doing an OpenGL ES application, I'm going to set up the OpenGL ES instrument. Today, I'm not very interested in getting samples. I'm mostly interested in what's happening on the GPU, and especially, I want to look at the utilization inside my GPU. So there are two interesting statistics we use here. The first one is the Renderer Utilization, and that includes the time that's spent in your Fragment Shader.

Tiler Utilization is the other one, and it includes, importantly, the time that you spend in your Vertex Shader. So if you want to minimize what's going on in your GPU, try to minimize these two. And then we just want to make sure we are at 60 frames per second. Let's make this a bit bigger, too. And we have our app running. We are going to attach to it and see how it's performing.

And so we can see that the Tiler Utilization here is 17, 18%. And in this case, I'm using the textbook implementation that I showed you first. Let's switch to the other implementation, the optimized implementation that we have. So you can see right away, the Tiler Utilization goes down from about 17, 18% to now 12, 11%.

You know, for just a few lines of code changes, that's a 30% improvement. And you may say, oh, well, you know, 18%, that's pretty good, right, because I'm not using 100%. But that was just for two characters, and there was nothing going on in the world around them, right? So whatever you can save there is more cycles that you can use to do better effects on your characters, more vertices, and better effects on your world. So 30% is important in this case.

So that's what I wanted to show you and tell you about skinning. Next, Michael is going to tell you everything about lighting and how to make all this more real using shadows. Michael.

[ Applause ]

Michael Swift: Thank you, Luc. So as Luc mentioned, my name is Michael Swift, and I get to talk to you about lighting and shadowing for the remainder of this session. So let's start off with lighting.

So what you see up here is our unlit world, just static and plain, just the simple texture fetch that Luc showed you earlier, and I want to use this as our starting point and turn it into something that looks like this, which is a fully lit environment with a shadowed character who will animate and skin and have all the lighting effects applied to him, as well as to the world. So how do we get there? Well, it's important to know that light contribution is determined by three main factors, the first of which is distance. And as the character gets further away from the light, there's less light contribution.

The second is direction. So the light is going to be pointed at the geometry and as long as the light is -- or as long as the geometry is facing the light, it will have full light contribution. And as it is facing away, it will have less and less of that light applied to the vertex. And the third part is actually occlusion. So if there's an object in between the light and the character, then the character will not have any light applied to them.

So those are the three factors that determine how much light is applied to our hero in this scenario. So to help make this make more sense, we're actually going to break down our world and our character into two types of content. We have static content, which is our environment, and we have dynamic content, which is our skinned hero who's moving around the environment.

So, taking that as a baseline, we're going to start off with the OpenGL light model. So many of you have used OpenGL ES 1.1, and this is roughly comparable to that. And it accounts for distance -- these are the linear light attenuation modes and factors -- and it also accounts for direction: we'll use the light vector and the geometry normals to compute a dot product. And the nice thing about this is it works for all lights. So you can have static lights, dynamic lights. And it works for all content.

It'll work for both dynamic content and static content. So let's take a look at how that looks in the world. And you can see it's a visual improvement over that simple unlit environment. And so how do we actually do this? Well, we have our Vertex Shader, and it has a series of inputs. We have our normal and we have our vertex position.

And these are passed in per vertex. And then we also have a series of uniforms. We have our light location, we have our light color, we have the attenuation falloff factor that we want to use. In this case, it is the linear attenuation factor. And lastly, we have our ModelViewProjectionMatrix. And the result of all of this is going to be a light color per vertex. So let's take a closer look at the main body of the code.

So the first thing that happens is we need to transform our incoming geometry to create the vertex in clip space. And the second part is the linear attenuation part, and this accounts for the distance contribution that I was talking about in the first few slides. And so we use our light location and the incoming light -- oh, sorry -- the incoming vertex location to compute a length and then use that to figure out how much we want that light's contribution to fall off as it gets further and further away from the light.

And also, we'll create a direction factor. So we mentioned earlier that we're going to use a dot product to vary the amount of light that's being applied for each vertex depending on whether or not it's facing the light or facing away. And so here, the light vector, which was originally in world space, needs to be normalized so we get a consistent result from our dot product. And we assume our normal is already unit length as well. And so we'll compute the dot product and clamp that to the range 0 to 1.

And then finally, we will create our final color by using the original light color, the direction factor and the attenuation factor all multiplied together. So why would you want to use this? Well, it's fairly straightforward. It's very similar to what you are already using in OpenGL ES 1.1, but it is computationally expensive. There's a lot of math, and if you want to implement the full light model, with ambient color and specular color, it takes more and more cycles. And you have to compute this every frame, so it's highly expensive.
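A rough GLSL sketch of that per-vertex light model (illustrative names, diffuse term with a simple linear falloff only, not the session's exact code):

    attribute vec4 a_position;
    attribute vec3 a_normal;           // assumed to already be unit length

    uniform mat4 u_mvpMatrix;
    uniform vec3 u_lightPosition;      // in the same space as a_position
    uniform vec3 u_lightColor;
    uniform float u_linearAttenuation; // falloff factor

    varying vec3 v_lightColor;

    void main()
    {
        gl_Position = u_mvpMatrix * a_position;

        // Distance contribution: fade the light out linearly with distance.
        vec3 toLight = u_lightPosition - a_position.xyz;
        float attenuation = max(1.0 - u_linearAttenuation * length(toLight), 0.0);

        // Direction contribution: N dot L, clamped to the range 0 to 1.
        float directionFactor = clamp(dot(a_normal, normalize(toLight)), 0.0, 1.0);

        v_lightColor = u_lightColor * directionFactor * attenuation;
    }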

Still, it is an improvement over having no lighting, and it does account for both direction and distance. But it has no way of knowing about other objects in the world, so it cannot account for occlusion of the light by other geometry. And that brings us to prebaked content. What's really important here is that we can simplify our scene to deal with just the static lights and the static geometry, and this allows us to precompute the light contributions for the world in an offline pass.

And this makes a lot of sense because all of that computation we did in the Vertex Shader and in the Fragment Shader, we're actually going to hoist out, and it'll make things run a lot faster. And so you can create these lightmaps, and you can atlas them as part of an optimization phase to make your game and your content run really fast. And it accounts for all of the distance and direction and occlusion information of the static lights and the static geometry. So, let's look at the first class of these. And specifically, it's the per-object lightmaps.

These are only for static geometry. So what we actually do is, in your 3D modeling software or your level editor environment, you use something like radiosity -- a nice soft shadowing algorithm -- or a direct illumination algorithm, for some hard shadows. Then you prebake all this light contribution and create an atlas like you see on the lower left-hand side, which we can then use to draw the entire world. And on the lower right, you can see that atlas being applied to the first room in the Quest environment.

So let's take a closer look at that. So we have our lightmap, which was created for each piece of geometry in the world, and then we can multiply that against our original diffuse texture, and we get something that looks like this. It's a lot more interesting than what we got out of the OpenGL ES light model that you saw in the first few slides. So how do we actually make this work? Well, the only thing that's different here is the Fragment Shader.

The Vertex Shader is just like the ones we showed you earlier, which just passes down the texture coordinates and transforms the incoming geometry. So in the Fragment Shader, we have two samplers: one for the lightmap and one for the diffuse texture map. And we have two sets of texture coordinates, because they're atlased separately. And we do the two sample operations and then multiply them together.
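A minimal GLSL sketch of that lightmap fragment shader (illustrative names):

    precision mediump float;

    uniform sampler2D u_diffuseTexture;
    uniform sampler2D u_lightmapTexture;

    varying vec2 v_diffuseUV;
    varying vec2 v_lightmapUV;   // atlased separately from the diffuse UVs

    void main()
    {
        vec4 diffuse  = texture2D(u_diffuseTexture, v_diffuseUV);
        vec4 lightmap = texture2D(u_lightmapTexture, v_lightmapUV);
        gl_FragColor = diffuse * lightmap;
    }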

So very simple, very straightforward, very efficient on this hardware. So what I've been talking about is working with static lights and static geometry. And now we're going to take a look at how you can deal with static lights with dynamic geometry, because it's a little bit different. You can't know during your offline phase where your dynamic geometry is going to be and how much light contribution it will ultimately have.

And so you cannot account for direction because you don't know if the content that's moving is going to face the light, face away from the light or be somewhere else or be animating or skinned. You have no idea. So you can't use those per-object lightmaps. Instead, you get to approximate it by using world space lightmaps.

And for our Quest example, we actually are doing this all in 2.5D. The game itself was constructed such that we could use just a single top-down lightmap that is -- that has all the light contribution and -- I'll move it to the side -- and apply that to the actual character as he moves around the environment. And we do this by having a reference point.

We need to know how to get our X, Y, Zs into UV texture coordinate space. And by having this reference point, we can actually make that possible. So we'll take our character, and we'll transform him or flatten him down to a specific part of the world as he moves around.

And so here are two screenshots. Let's zoom in on the one on the left. And as you can see, on the left-hand side of the hero, he's lit with a nice, bright, white light. There's less light contribution on his right-hand side. And then in the second image, the hero is actually in front of a grate, and so the geometry of the grate itself is hiding the red light that's behind the grate, so you have him lit with dark red on top and on the bottom, but there's less light contribution in the middle.

So looking at the shaders themselves. So we're going to start with the skinning shader that Luc showed you earlier, and we're going to add in a couple of new pieces. So, as I mentioned before, what we're doing is we're transforming our X, Y and Zs into the UV coordinate space, and we can do that using a simple matrix multiplication. And so, our new input is a lightmap projection matrix, and then we're going to create a set of lightmap UVs as the output.
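Sketched in GLSL, that addition might look like this (illustrative names; skinnedPosition stands for the post-skinned position computed earlier in the shader):

    uniform mat4 u_lightmapProjection;   // maps world-space x, y, z into lightmap UV space
    varying vec2 v_lightmapUV;

    // ... after computing skinnedPosition as before ...
    v_lightmapUV = (u_lightmapProjection * skinnedPosition).xy;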

And so it really is just one extra line of code, which is simply the matrix multiplication of those two values: the matrix with the post-skinned location of the geometry. So these are great: they work really fast, you only need one top-down lightmap, it's very straightforward, and you don't have to add much code. However, this does not handle the direction contribution, as I mentioned in the first couple of slides, and so you can get some weird artifacts.

So, specifically, we have our hero, he's standing on top of the grate, and there's actually a light underneath the grate. And so you can see the striated, dark and light shadows on top of the character, and that's not what you really want. You want something that's actually correct.

So how do we make that happen? Well, we talked earlier about the OpenGL ES lighting model, and we're actually going to use part of that. We're going to use that -- the normal and the light vector and this dot product to figure out whether or not the character is facing that light or facing away from that light, and then we're going to still use these world space lightmaps for each one of the lights.

But the difference here is instead of just a single world space lightmap, we're going to have a whole slew of them, and you're going to change through them as you move in your environment. And then it's the same straightforward process you saw earlier. We're going to multiply the two of them together.

So, how does this actually look? Let's first take a look at the lightmaps themselves. And so we have our first light, second light and third light. And we're going to constrain our code to just three lights, at least in the first room, so we keep the greatest contributions as the character moves around the world.

And the results of those three you can see here. So the first light creates shadows on the right-hand side of the character, and the second light will create shadows on the left-hand side of the character, and then finally, the third light, which is actually behind the character and higher up, will have a small amount of light contribution on the shoulders and the head.

So once you put this all together, you get something that looks like this. And it's actually fairly interesting. We're going to go into a demo in a few minutes and actually see this in action. It's a lot more nuanced than what you see here. So how do we make this happen? Well, we're going to start out with our Vertex Shader just like before, and it's slightly larger than you would like, probably. But there we go.

So we start off with our skinning part that we mentioned earlier. And the next part is that lightmap UV from world-space X, Y, Z transformation. And what's new is that, just like skinning the geometry locations, we also need to skin the normals, because as the character animates, we need to make sure the normals are also appropriately changed as the animation happens. Same process, and what's different here is the normals are transformed not by the ModelViewProjectionMatrix, but by just the ModelViewMatrix. And then we're actually going to do that dot product we mentioned earlier.

So for each one of the lights -- we have three lights -- we're going to create the light vector and then dot it with the normal, and we pack the results into the X, Y and Z parts of a vec3 varying light factor: the first, the second and then the third. So this is how we put in each of the three light contributions and make sure that we can subsequently read them in the Fragment Shader, which we're going to jump to next. So, the Fragment Shader has the original diffuse texture and our three lightmap textures that we talked about, and it has our lightmap UVs, our diffuse UVs, and this new light factor into which we packed the N dot L test.

And we're just going to sum all those light contributions together: for each light we do a texture fetch and multiply it by its light factor, then we texture fetch the diffuse color, and lastly, multiply it all together. That creates the final result. Now, what's interesting about this is that it is a lot more work, and it will solve those artifacts we talked about earlier, but those same artifacts could also be solved by some tricks and things while you're exporting your lights and your content.
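A rough GLSL sketch of that fragment shader, assuming three lights and the packed N dot L factors (illustrative names, not the actual Quest code):

    precision mediump float;

    uniform sampler2D u_diffuseTexture;
    uniform sampler2D u_lightmap0;
    uniform sampler2D u_lightmap1;
    uniform sampler2D u_lightmap2;

    varying vec2 v_diffuseUV;
    varying vec2 v_lightmapUV;
    varying vec3 v_lightFactor;   // clamped N dot L for each of the three lights

    void main()
    {
        // Sum the per-light lightmap contributions, each scaled by its direction factor.
        vec4 light = texture2D(u_lightmap0, v_lightmapUV) * v_lightFactor.x
                   + texture2D(u_lightmap1, v_lightmapUV) * v_lightFactor.y
                   + texture2D(u_lightmap2, v_lightmapUV) * v_lightFactor.z;

        vec4 diffuse = texture2D(u_diffuseTexture, v_diffuseUV);
        gl_FragColor = diffuse * light;
    }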

And for a game like Quest, they chose to modify their assets rather than implement a more complete algorithm like this because there is an increased cost. And as a result, this kind of algorithm is more suited for a first person shooter where you're really in close and can see all the various light changes on the character as he moves and animates and dies and falls over and all those kind of fun stuff. So, this gives you a really nice increase in quality, but it may be really subtle based on where your camera is. And so you might -- you need to do some tradeoffs to figure out which algorithms are appropriate for the kind of content that you're working on.

But the advantage of it is it will fully account for your distance, your direction and your occlusion information of the static geometry. So let's hop into a demo. There's the skinning you saw earlier. There is the optimized skinning. As you can see, there's no difference. And so here we have our character with the OpenGL light model, and it's running at a nice, fluid 60 frames per second, nice and smooth, as you can see. Let's zoom back in. And you can see the character has a little bit of change of lighting as he moves around, but it's fairly straightforward and kind of plain. And this is what you can do right now in OpenGL ES 1.1.

And so now we're actually showing the prebaked world lighting. And you get a lot more nuanced colors and shadows, and it actually runs a lot faster. If we were to look at Instruments, the utilization is probably about a third of the previous example. And so as you can see, as the character moves around, there are different amounts of light being applied to the character. It's most noticeable as he goes to the right-hand side of the screen.

So this is just the single top-down lightmap, and it looks pretty good. So let's jump back to the main room with the same lighting. And it's kind of interesting; it's not great. The interesting part about this screen, though, is that the grate is exactly in the center of those three lights, so it kind of has a very even light contribution. But when we add the direction test and use those per-light lightmaps, we get a much more nuanced kind of lighting effect.

As you can see, there are different parts of him that get darker and brighter as he moves around, and just a lot more interesting effects, and it becomes more noticeable as you're zoomed in. If you're zoomed all the way out, you can kind of tell, but it's not quite as different from the previous single top-down lightmap. So in summary, there's a whole bunch of choices for what you can do with lighting.

There's the OpenGL light model that you saw, which works for static lights, dynamic lights, and static and dynamic geometry, but it's kind of expensive, it's hard to make it run really efficiently on the GPU, and these other methods I just showed you actually look nicer. And then there are the per-object lightmaps that we mentioned, which are solely for static lights and static geometry. And then you have these tradeoffs of performance versus quality for using these top-down lightmaps, or any other kind of world-space lightmaps, based on your content.

And that brings us to shadowing, because thus far, we've talked about how you deal with static lights and their light contribution on dynamic geometry. Now we want to talk about how you handle the dynamic geometry actually shadowing both the environment and itself. And we're going to focus today on shadow volumes.

And we're doing this because they work well with the per-light lightmaps that we mentioned earlier, and they can do a per-pixel test of whether something is shadowed or lit. And it's also one of the full shadowing solutions that will implement self-shadowing and shadowing of the rest of the world.

So let's take a closer look at this. And specifically, we want to talk about how you count shadows. Well, we have these three pieces of geometry, A, B and C, all of which are casting shadows on the things that are on the opposite side of the light.

And what we want to do is, we have a camera and it's going to look along a line through the scene, and we want to figure out, for a given point on that line, how many shadows are we inside of? Now you can see, on the left-hand side it's 0; as it enters A, it becomes 1, and then it exits A and goes back to 0, and so on and so on.

And this is actually really important, because we can use that entering and exiting information to figure out whether something is shadowed or lit. So let's do a brief overview of how all this works. You start out by rendering your world. In this case, we want to render our ambient light contribution.

This sets up our color and also sets up our depth information, which is critical for the counting algorithm. And then, for each light, we want to silhouette our geometry from the light's perspective, effectively creating that shadow as a volume -- this creates just the front edge -- which we then want to extrude out to infinity.

This, in effect, casts the shadow. And once we have these pieces of geometry, we set up the stencil buffer and we render the volumes from the original camera's view. And what's important here is that we're actually counting how many volumes we have entered and exited before we hit the first thing in our scene.

Because we want to stop as soon as we have a matching depth, so we can accurately say, for this location on the screen, was this fragment shadowed or lit? And then we're going to use that count information -- specifically, if the stencil is 0, then it's fully lit.

And if it's nonzero, then it's shadowed. And so we're going to render our world through that stencil test and then add that light contribution. So we're going to do this for each one of the three lights, and then we're going to sum all of these to create our final image. Let's take a closer look at this.

So when we're generating the shadow volumes, as I mentioned before, we need to do this silhouette determination. This breaks down to two key things. What we need to do is figure out which edges of the geometry are on the edge of the silhouette. And we do this by using the dot product test that I mentioned, so we can tell if the geometry is facing the light or facing away. And it's when there's a shared edge between two triangles -- one facing towards the light and one facing away -- that we can say this edge is in the silhouette.

So once we have our silhouette, we can then extrude it. And we're going to do this by taking our geometry, copying it, and then setting the W to 0. Now, this is a quick little hack: when the ModelViewProjectionMatrix is applied to a position whose W is 0, it goes off to infinity. So this works great.
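One common way to express that extrusion in the volume's vertex shader is to push the copied vertex away from the light and rely on the W of 0 (a sketch with illustrative names, not necessarily the session's exact formulation):

    // Direction from the light out through the vertex; with w == 0 the
    // projective transform sends the extruded point off to infinity.
    vec3 awayFromLight = a_position.xyz - u_lightPosition;
    gl_Position = u_mvpMatrix * vec4(awayFromLight, 0.0);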

So once we have our volumes, we then need to set up the stencil buffer and count them. And as I mentioned earlier, it's only when we pass the depth test -- when the volume is in front of the fragment we want to shade, the fragment we're testing the volumes against -- that we increment the stencil when we enter a volume and decrement it when we exit the volume.

Now the good news is that OpenGL ES 2.0, as part of the core spec, supports increment and decrement with wrap, and you can set the front-face and back-face stencil operations separately. So instead of having to do two passes over your silhouette geometry, you can use one, with both a front test and a back test, allowing this to happen in just a single pass. So that's great.
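A rough C sketch of that single-pass stencil setup, plus the lit pass described next, using standard OpenGL ES 2.0 calls (the ordering of the state calls and the helper functions are illustrative, not the session's actual code):

    /* Counting pass: rasterize the shadow volumes into the stencil buffer only.
       The depth test stays enabled against the depth laid down by the ambient pass. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   /* no color writes */
    glDepthMask(GL_FALSE);                                 /* no depth writes */
    glDisable(GL_CULL_FACE);                               /* need both front and back faces */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    /* Increment on depth pass for front faces (entering a volume),
       decrement on depth pass for back faces (exiting a volume). */
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_INCR_WRAP);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_KEEP, GL_DECR_WRAP);
    drawShadowVolumes();                                   /* illustrative helper */

    /* Lit pass: draw only where the stencil count is 0 (fully lit) and the
       depth matches what was laid down in the first pass, adding the light. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);                           /* additive light contribution */
    drawWorldAndCharacters();                              /* illustrative helper */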

This also works in OpenGL ES 1.1 through some extensions. So, once your stencil buffer is set up, you then want to test: is it lit or is it shadowed? And so when we actually do this test, we use the stencil test with equal as the operator and a value of 0, because we chose 0 to mean fully lit.

And then we set our depth test once again to equal, so we only draw the pixels where we've matched, and then we add in that light contribution by drawing our world and the skinned characters. So, let's take a look at the volumes. Here we have three lights which are casting white light, but we've chosen to color them red, green and blue. And the first volume here is the red volume, cast by the red light.

And the next one is the green one, from the green-colored light. And lastly, the top-down blue light. So let's take a little closer look at this. So we have our three volumes that have been extruded, and we're showing them here in the world, and then we're going to turn those into actual shadows. So I'm going to zoom in on this, and you can see you get a really nice, crisp per-pixel shadowed-versus-lit result. And it looks really good.

And it happens to work fairly well on the PowerVR SGX. So that's how all that works. And it changes only a small amount when you're working with prebaked lighting, because we want to take all these things and mix them all together to get a really, really good visual result. And so the first thing we need to do is draw our world with the full prebaked lighting. That was basically what we saw at the end of the lighting section. And then we're just going to silhouette our geometry, extrude it, and set up the stencil buffer exactly the same way.

But we're going to add a little twist here. Instead of adding light contribution -- since the prebaked lighting already accounted for all of the lights' contributions -- we're going to say, if we are in shadow, then remove that light's contribution. So by doing this, we only incur the cost of removing the shadows, and that's a lot less fill rate. So, once again, we have our environment with the three volumes visualized.

Zoom in on that real quick, and then we turn these into actual shadows. Now what you'll notice is the shadows are very subtle, and this is because what we subtracted out were those individual lightmaps for each one of the lights. And as the character and shadows get further away from those lights, there's less and less contribution.

Let's watch a quick little zoomed-in movie of this. As you can see, the character is moving around, and you get some nice shadows that are cast on all the world geometry. And then finally, you can see on the lower right-hand side that the shadow cast from the right-hand side up towards the upper left is really strong initially, and then it fades off nicely. So this is a way you can mix soft shadows with the hard shadows of stencil volumes by using these prebaked lightmaps. So there are a couple of things to know about this. It is expensive.

But by using the prebaked lighting, we only incur the cost when things are shadowed and not when things are lit, because, as you noticed in our environment, there are a lot fewer things that are shadowed than lit. So we saved all of that fill rate. And shadow volumes are quite large, and they have an extra cost both in the vertex processing in the shader stage and when you are actually tiling them, because this is a tile-based architecture. And we can use these tricks and hacks with the prebaked lightmaps specifically because the lightmaps themselves contain the occlusion information of the static geometry, which means you only have to deal with the contribution of the dynamic geometry.

So, in this case, our environment is 50,000 to 60,000 triangles, and our skinned character is only about 1,700. So that's orders of magnitude less geometry you have to generate silhouettes and volumes for, because it's a small amount relative to the rest of the environment. And the best news is that depth testing and stencil testing are really efficient on SGX.

And so you can do all of this in just a single pass, unlike some other algorithms you can use for shadowing. So let's hop into a demo and see how this looks in action. So here we are with our lightmaps -- the lightmaps with the direction test -- and let's jump to the next one.

So here are our visualized shadow volumes. As you can see, if I zoom out, they do extend to infinity, and you have our blue light, which is at the top and behind -- hence its shadow is cast very short -- and the red light and green light. So zooming in and then switching to the actual shadows, you can see how they're nice and sharp and crisp, and it's all really fluid as the character moves around. So moving out of the skybox environment, we actually put him in the world.

And so, as we zoom in, you can see just the same effects of the directional lightmaps being applied to the character and also the shadows animating and falling off as he gets closer and further away from the light sources. And if we zoom out, we can actually get -- you can see the shadows being cast on the various pieces of geometry in the world correctly. So those are shadow volumes.

[ Applause ]

And so, all of that is happening in real time on the new iPhone 4. So you can use the same algorithms and save all of that fragment cost by switching your algorithms around and removing the light contribution as opposed to adding it in. It saves a lot of fill rate. And so, here was our agenda for today, and let's just jump into a quick summary.

So the key to all of this is that OpenGL ES 2.0 allows you to choose the algorithms that are right for your content. You can choose algorithms that do a rough approximation that's really fast if you are zoomed far away from your skinned and animated content, or, if you want to be up close for something like a first-person shooter, you can choose higher-quality ones. And you can make those tradeoffs, and you need to choose which ones are best suited for your content.

And the best part is, since you can do all these tricks, change the math, and prebake all this stuff, it allows you to simplify the GLSL in your Fragment Shaders to just a few lines, or to use more expensive shaders just for smaller amounts of dynamic content. And so you can optimize both for performance and for visual quality, based on where your camera is and how your world is constructed.

And you can hoist it all out to a preprocessing stage and make your games that much more visually interesting and just really fun to watch, and see everything nice and smooth and running at 60 frames per second. So, please contact Allan Schaffer if you have any questions. His contact information is here. There's also the Apple Developer Forums. Many of us on the OpenGL and driver teams are on there making comments and trying to help out anyone who has questions.