WWDC01 • Session 409

OpenGL: Advanced Rendering

Digital Media • 24:40

View this session to gain a thorough understanding of OpenGL techniques you can use to exploit the power of Apple hardware. These include vertex and texture programs as well as other rendering techniques used in the latest games and demos.

Speaker: Troy Dawson

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

to the stage, Troy Dawson, 3D API engineer for Apple. I'm with the OpenGL group at Apple, and today I'll introduce four rendering techniques to produce interesting graphics effects on Mac OS X using OpenGL. These techniques will be cube environment maps, improved texture filtering, and using dot3 texture combining to do bump mapping.

And finally, I'll talk about stencil shadow volumes.

[Transcript missing]

OpenGL provides a spherical environment map mode to map environment lighting effects onto a rendered object. As you can see, this is a spherical environment map.

The texture represents both the front half and the back half of the environment. We use this texture, combined with OpenGL's texture combining operations and the automatic coordinate generation, to simply render the environment map onto the rendered object, as you can see here. So that was environment maps. Next, we'll talk about cube texture maps. This is a cube texture map. As you can see, there are multiple face images, arranged in a cube structure.

For use as an environment map, the cube texture map must have the faces arranged to form a cubic panorama that shows the scene as you turn the object. Using cube maps is very similar to using regular two-dimensional textures. Instead of GL_TEXTURE_2D, we use a new texture target, GL_TEXTURE_CUBE_MAP. And as you can see, the texture parameters are very similar to using regular two-dimensional textures.

After we've bound our texture map object, we need to download the textures. Downloading the texture images to the texture object is very similar. We have these six new constants. Each constant corresponds to a face of the cube map. And as you can see here, I just go through the six constants and download them. These are cube maps, so the width and height must be identical.
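
The slide code isn't captured in the transcript; here is a minimal sketch of the cube map setup and face download described above, assuming the standard cube map constants (ARB-suffixed on OpenGL 1.2-era drivers). faceData and faceSize are placeholders for your own image data.

    #include <OpenGL/gl.h>

    /* Sketch: create a cube map texture object and upload its six faces.
       The face targets are consecutive constants, so a loop works.
       Each face must be square and all faces must be the same size. */
    static GLuint createCubeMap(const GLubyte *faceData[6], GLsizei faceSize)
    {
        GLuint cubeTex;
        glGenTextures(1, &cubeTex);
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);

        /* Texture parameters look just like the 2D case; only the target differs. */
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        for (int i = 0; i < 6; i++) {
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                         faceSize, faceSize, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                         faceData[i]);
        }
        return cubeTex;
    }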

Rendering the cube maps is much simplified if we use OpenGL's texture coordinate generation facility. For cube maps, we'll bind our texture object, set the texture generation to reflection map mode, enable cube map texturing, and enable the texture coordinate generation. Note here that there are three dimensions to the texture coordinate generation, while spherical map mode uses two dimensions. Now I have a demo.
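
A sketch of the corresponding texture coordinate generation setup, again using the later core constant names rather than anything shown on the speaker's slides:

    /* Sketch: reflection-map texture coordinate generation for a cube map.
       Note there are three generated coordinates (S, T, R), unlike the two
       used by sphere mapping. */
    static void enableCubeMapTexGen(GLuint cubeTex)
    {
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);

        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
        glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);

        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);
        glEnable(GL_TEXTURE_GEN_R);

        glEnable(GL_TEXTURE_CUBE_MAP);
    }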

This is from the NVIDIA site. This is the NVIDIA Bubble demo, modified with local scenery. As you can see here, I have a bubble. This bubble is a cube map environment. I can rotate around the scene and you can see the environment being reflected on the surface of the sphere.

It works in all dimensions, both vertically... I can show you the actual texture images. Here are the six texture images of the cube map. As you can see, it does form a contiguous panorama of the scene as viewed from the rendered object. We can compare this panorama to a spherical map. Here you see the corresponding spherical map that was generated from these six images.

As you can see, it has less information than the cube map. Roughly 40% of the texture map area is representing the front half of the environment, while the next 40% is representing the back half of the environment. The black areas are wasted space, which is about 20% of the spherical map. This inferior texture mapping results in poor rendering quality.

Here you can compare the spherical map and the cube maps. So this is the cube map environment. This is a spherical map. There's cube and spherical. So the quality is different, and noticeably better for the cube map. Additionally, because cube maps are view-independent, you can see I have a very beautiful back view here. But if I were to go back to spherical maps, you can see that there's a singularity at the back of the texture, which results in this rather poor rendering.

So in summary, we saw that using cube maps for texture environments is better than spherical maps, mainly because they're easier to generate. Basically, spherical maps are generated from the cube maps, so if we skip that step, we can probably do it faster. Additionally, the texture maps for cube maps are viewer independent, so there's no singularity at the back.

Next I'll look at anisotropic filtering. You may have noticed in your applications that when you have a texture that's using mipmaps, when the texture becomes oblique to the viewer, you might notice that it has blurring effects. This is due to OpenGL subsampling the mipmap levels a little bit less than optimal. OpenGL has a new texture parameter which will allow you to improve this sampling. Here it is. If the driver supports anisotropic filtering, you can use this function call to get the maximum supported value.

This number will range from 1 to some number like 8 or 16. The greater the available value, the better the texture filtering effect available. All you do to use this anisotropic filtering is use this texture parameter and set it to the desired value. The value set must range from one up to the maximum available value. Another demo.
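
A sketch of the two calls involved, assuming the GL_EXT_texture_filter_anisotropic extension is available; the desired level here is just an example parameter:

    /* Sketch: query the maximum supported anisotropy and apply a clamped
       level to the currently bound, mipmapped 2D texture. */
    static void setAnisotropy(GLfloat desired)
    {
        GLfloat maxAniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);

        if (desired > maxAniso)
            desired = maxAniso;   /* clamp to what the driver supports   */
        if (desired < 1.0f)
            desired = 1.0f;       /* 1.0 means no anisotropic filtering  */

        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, desired);
    }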

Here I have a test image. Currently I'm using bilinear filtering. So when I move the test image down, you can see some sparkling effects. OpenGL provides trilinear filtering to remove sparkling effects, but as you can see, when you use trilinear filtering, the texture becomes blurred, which isn't so good. However, if I raise the anisotropic filtering level, you can see that the image becomes sharp even when it's viewed at an oblique angle. I have another image. Here we see the trilinear filter blurs it out. By using the anisotropic filter parameter, we can see it's clear again.

We saw how using the anisotropic texture filter parameter will improve your texture filtering when the texture is viewed obliquely. But this does require mipmapped texture loading. And you can set the desired level, because using this anisotropic filtering will give you a performance hit for fill rate. So you may need to reduce the level to keep your performance rate.

The next is using the OpenGL texture combiner machinery to produce a per-pixel dot product effect. Here's an overview of the process. The OpenGL texture combiner API allows you to use texture stages to do certain texture combine operations. As you can see here, we have three inputs to the texture combiners and two stages. The first stage will operate on the primary color operand and a texture operand. The output of the first stage will be given to the second stage to produce our final rendered image.

Let's look at the actual dot product. The dot3 texture function operand for the GL texture combiner units will allow the texture units to do a per-pixel operation. This is kind of a hack in that we have to encode the vectors of the dot product into both a color and a texture map. As you know, a dot product takes two vectors, and we use that operation to produce a diffuse lighting effect.

The output of the dot product will be a scalar value, but we can duplicate the scalar value to make the output be a light map. To calculate the local light vector from the polygon to the light source, we can use this math. It's pretty simple. But the dot product operation requires that the light vector be in local coordinate space. So we must rotate this light vector by the inverse rotation of the polygon. In my code, I use quaternions, so mine looks like this. Hopefully I don't have a typo in my code.
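
The slide math isn't in the transcript. Here is a minimal sketch of the idea under the simplifying assumption that the object's orientation is a plain 3x3 rotation matrix, whose inverse is its transpose, rather than the quaternions the speaker uses:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    /* Sketch: vector from a vertex toward a positional light, rotated into
       the polygon's local coordinate space and normalized. */
    static Vec3 localLightVector(Vec3 vertex, Vec3 lightPos, const float rot[3][3])
    {
        /* World-space light vector. */
        Vec3 L = { lightPos.x - vertex.x,
                   lightPos.y - vertex.y,
                   lightPos.z - vertex.z };

        /* Multiply by the transpose (inverse) of the rotation matrix. */
        Vec3 local = {
            rot[0][0] * L.x + rot[1][0] * L.y + rot[2][0] * L.z,
            rot[0][1] * L.x + rot[1][1] * L.y + rot[2][1] * L.z,
            rot[0][2] * L.x + rot[1][2] * L.y + rot[2][2] * L.z
        };

        /* Normalize so the dot3 result stays in range. */
        float len = sqrtf(local.x * local.x + local.y * local.y + local.z * local.z);
        if (len > 0.0f) {
            local.x /= len; local.y /= len; local.z /= len;
        }
        return local;
    }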

After we rotate the local light vector into the local coordinate space of the polygon, we have to encode the light vector into RGB color coordinate space. This is done like this. Here I'm using unsigned bytes as the color components. As you can probably figure out, the color components are mapped from 0 to 255. So a -1 vector component will map to 0, and a +1 vector component will map to 255. Additionally, you can do this encoding for all the vertices of your polygon. Here we have the second operand of the first stage. This is the normal map operand.
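
A sketch of that range-compression step, reusing the hypothetical Vec3 type from the previous sketch:

    /* Sketch: pack a normalized vector (components in [-1, +1]) into unsigned
       byte RGB, so -1 maps to 0 and +1 maps to 255. */
    static void encodeLightVector(Vec3 v, GLubyte rgb[3])
    {
        rgb[0] = (GLubyte)((v.x * 0.5f + 0.5f) * 255.0f);
        rgb[1] = (GLubyte)((v.y * 0.5f + 0.5f) * 255.0f);
        rgb[2] = (GLubyte)((v.z * 0.5f + 0.5f) * 255.0f);
    }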

Each pixel of this texture map image encodes a single surface normal. The texture map image is blue because the blue component of the color corresponds to the direction out of the screen. Usually these surface normal map textures are produced during your content creation phase by the artist, who has a height map and applies a tool to convert the height map to a normal map.

Let's look at the first stage. As described previously, we have the first operand, which will be the local light vector. Our second operand will be the encoded normal map. Here we set the operation to the dot3 operation. Additionally, the first texture stage for the dot3 combiner will allow you to scale the output, but I'll show you the code next. Here's the code that corresponds to the first stage.

We're using the first texture unit for the normal map. So we bind the normal map to the first texture unit. Additionally, we set the texture combine operation to dot3. The output of the dot3 can either be a scalar alpha value, or it can be an RGB. Here I set the output to be an RGB value. And then finally I set the two operands of the operation. The result would be the grayscale output you see here for the dot product result. The second stage of the operation simply modulates the output against an unlit base texture.
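
Since the slide code isn't captured in the transcript, here is a minimal sketch of what that first combiner stage might look like, using the later core token names (ARB-suffixed on 1.2-era drivers); normalMapTex is a placeholder texture id:

    /* Sketch: texture unit 0 holds the normal map and performs the DOT3
       combine between the primary (vertex) color and the texture color. */
    static void setupDot3Stage(GLuint normalMapTex)
    {
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, normalMapTex);

        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);      /* RGB output, not alpha */
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PRIMARY_COLOR); /* encoded light vector  */
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);       /* encoded normal map    */
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    }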

Here's the code that corresponds to the second stage. I'm using the second texture unit for the base image. And again, I set the combiner operation to modulate. Then I set the first operand to be the previous output. The second operand will be the base texture. Next we'll see the code that actually renders the image. It's a lot of code there, so let's look at one vertex.
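
Again a hedged sketch rather than the speaker's exact slide code; baseTex is a placeholder:

    /* Sketch: texture unit 1 modulates the previous stage's dot3 result with
       the unlit base texture. */
    static void setupModulateStage(GLuint baseTex)
    {
        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, baseTex);

        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);   /* dot3 result from unit 0 */
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);    /* unlit base texture      */
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    }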

First I issue the local light vector, which is an unsigned byte vector. Then I simply issue the two sets of texture coordinates from the source textures. As you notice, for each vertex of the polygon, I issue a separate light vector in the color. You can calculate these light vectors for each vertex of the polygon and issue them like this. You can also use vertex and texture programs to interpolate the normals across the polygon.
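
A sketch of what issuing a single vertex might look like in immediate mode; lightRGB, s, t, x, y, and z are placeholder per-vertex values, not names from the talk:

    /* Sketch: one vertex of the bump-mapped quad, inside a
       glBegin(GL_QUADS)/glEnd() pair. */
    glColor3ub(lightRGB[0], lightRGB[1], lightRGB[2]);  /* encoded local light vector */
    glMultiTexCoord2f(GL_TEXTURE0, s, t);               /* normal map coordinates     */
    glMultiTexCoord2f(GL_TEXTURE1, s, t);               /* base texture coordinates   */
    glVertex3f(x, y, z);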

If smooth shading is enabled, we can use the vertex colors to interpolate the light vector across the polygon. However, there are issues about keeping the vectors normalized, but there are ways to get around that. Now I have a demo. Here I have a positional light source circling the source texture. You can see a per-pixel lighting effect, perhaps. Let's move in so we can see it better.

This polygon is a simple four-vertex polygon. There is no geometry. But because of the dot3 texture combine, we can see the lighting change depending on the local light vector. I can show you the colors that correspond to the local light vector. Here you can see the color change. These colors are the encoded local light vectors. Here you can see... Bizarre. Here you can see the geometry of the polygon is flat.

So in summary, you saw how you can use the OpenGL texture combine API to do a diffuse lighting effect using texture units instead of geometry. This requires you to produce your normal map as a texture. And it requires you to calculate the local light vector for each vertex, or you can calculate the local light vector for the polygon.

Finally, we have stencil shadow volumes. Is that the time? Oh well. This is going to be a quick session. I like shadows. Shadows are cool. Using stencil shadow volumes will allow you to do this dynamically. Using the stencil buffer to do shadows requires employing a stencil buffer and a stencil test. As you can see, the stencil test is part of the pixel pipeline. It's a new feature in Mac OS X. We'll use the depth test and the stencil test for this operation.

The operation can be simple or complex depending on your application. But the basic steps are to first render your scene geometry normally, both color and depth. Then we construct our shadow volume. Then we stencil this shadow volume into the stencil buffer, and we use the stencil mask to render either a lighting effect or a shadowing effect, depending on your technique.

First step is to actually construct our shadow volumes. This can be very application specific, so I'm going to hand wave it a bit. General idea: this is our positional light source, and this is an occluding object. We will construct a separate shadow volume that will be extruded from the occluding object in the direction away from the light source. It's important that this shadow volume encompass the full scene geometry you wish to have shadowed.

After we have made our shadow volume, we will stencil it with a pretty simple two-pass operation, and at the end of the stencil operation, we'll have a stencil buffer with non-zero values where the scene geometry intersects the shadow volume. Here's the code for that.

The stencil operation only requires writing to the stencil buffer, so we disable the color and depth writes. Next I will set up the stencil operation. The GL_INVERT operand is important. This will set the stencil action on depth pass to invert the stencil buffer when the depth test passes for your rendered volume. Next, I simply render our shadow volume twice, first the back faces and then the front faces. Then I reset the face culling state.
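
The slide code isn't in the transcript. Here is a minimal sketch of the two-pass stenciling under the assumption that face culling is used to select back and front faces, with drawShadowVolume() standing in for your own volume geometry:

    /* Placeholder for application-specific shadow volume geometry. */
    extern void drawShadowVolume(void);

    /* Sketch: stencil the shadow volume with the invert-on-depth-pass trick. */
    static void stencilShadowVolume(void)
    {
        /* Only the stencil buffer is written, so disable color and depth writes. */
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);

        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        /* GL_INVERT flips the stencil value wherever the volume passes the
           depth test against the scene already in the depth buffer. */
        glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);

        glEnable(GL_CULL_FACE);
        glCullFace(GL_FRONT);     /* pass 1: back faces of the volume  */
        drawShadowVolume();
        glCullFace(GL_BACK);      /* pass 2: front faces of the volume */
        drawShadowVolume();

        /* Restore write masks. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
    }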

After we have rendered our stencil mask into the stencil buffer, we have several options to get the lighting effect. Perhaps the simplest is rendering a two-dimensional overlay polygon with a shadow color and then using alpha blending. This works well, but does leave one artifact where the scene geometry within the shadow volume will still exhibit diffuse and specular lighting effects, which may not be what you want.

To avoid that, you can do more complicated multi-pass scene rendering, where you render first an unlit scene, then use the stencil mask you've created for your shadow volumes to mask out the diffuse and specular lighting effects. This will slow your rendering down. Or you can do what I do: just render your shadow volume again into the color buffer using alpha blending. I'll show you that now.
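
For reference, a minimal sketch of the simple overlay variant mentioned above (not the speaker's volume-re-render approach), assuming an orthographic projection has been set up that maps the quad to the full viewport:

    /* Sketch: darken the stenciled region with a 2D overlay quad and alpha
       blending. Only pixels whose stencil value is non-zero are touched. */
    static void drawShadowOverlay(void)
    {
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_NOTEQUAL, 0, ~0u);      /* only where the mask is set */
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

        glDisable(GL_DEPTH_TEST);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        glColor4f(0.0f, 0.0f, 0.0f, 0.5f);       /* 50% black shadow color */
        glBegin(GL_QUADS);
            glVertex2f(-1.0f, -1.0f);
            glVertex2f( 1.0f, -1.0f);
            glVertex2f( 1.0f,  1.0f);
            glVertex2f(-1.0f,  1.0f);
        glEnd();

        glDisable(GL_BLEND);
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_STENCIL_TEST);
    }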

Here we have a sphere inside a shadow volume. I'll show you the shadow volume. As you might be able to see, the shadow volume fully encompasses both the sphere and the ground plane. This produces the shadow effect. I can move the ball in and out of the shadow volume. Here you can see the shadow effect of the sphere.

If I disable the depth test and the stencil test, like this, you can see how I render the shadow volume. For this shadow volume I'm using an 85% opaque alpha blend, so you can see the specular and diffuse lighting. If I use 50% opacity, here you can see the effect is much more noticeable. But if you use 85%, it's not too bad.

Finally, let's look at the two-pass operation in more detail. First, I'll show you the back planes. The green corresponds to the back planes of the shadow volume. Here are the front planes. By rendering both the back and front planes, you can see the cyan is where the back and front planes are both rendered. The blue region is where the front plane is rendered, but not the back plane. And you can see this blue region corresponds to the shadow area.

Should I do this for a half hour or should I continue? I guess-- So in summary, you saw how you can use shadow volumes and the stencil buffer in a two-pass effect to create your shadow effect. This will require using both a stencil buffer and a depth buffer. And it's simple for my simple case, but using it in a real scene may be more complex than I presented. So you may have to optimize both the creation of your shadow volume and the fill rate in filling it.