Configure player

WWDC Index does not host video files

If you have access to video files, you can configure a URL pattern to be used in a video player.

URL pattern

Use any of these variables in your URL pattern; the pattern is stored in your browser's local storage.

$id
ID of session: wwdc2003-212
$eventId
ID of event: wwdc2003
$eventContentId
ID of session without event part: 212
$eventShortId
Shortened ID of event: wwdc03
$year
Year of session: 2003
$extension
Extension of original filename: mov
$filenameAlmostEvery
Filename from "(Almost) Every..." gist: ...
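
For example, with a hypothetical host, a pattern like

    https://media.example.org/$year/$eventShortId/$id.$extension

would resolve for this session to

    https://media.example.org/2003/wwdc03/wwdc2003-212.mov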

WWDC03 • Session 212

Cutting-Edge OpenGL Techniques

Graphics and Imaging • 52:39

View this session to learn the absolutely latest techniques for creating stunning visual effects using OpenGL, including recipes for incredible 3D effects, from the experts.

Speakers: Alex Vlachos, Marwan Ansari, Rav Dhiraj

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Travis Brown, Graphics and Imaging Evangelist: The latest techniques they use in terms of programmability and various different special effects that they can accomplish with their Radeon products. They really let you know the outer boundaries of what you can do with visual effects, what the state of the art currently is. It's also very interesting because we've had sessions on programmability, from vertex programming and fragment programming, and that was sort of an entree into this whole topic. The interesting thing is we're fortunate enough to really gain all this programming knowledge.

We've gained all this programmability and fantastic graphical power through the efforts of our hardware partners like ATI, who are really truly out there creating the silicon that does these incredible digital effects. So it's my pleasure to invite Alex Vlachos to the stage to take you through the presentation. Thank you.

All right, good afternoon. How's everyone doing? Excellent. All right, so where are we? Sorry, my bad. We're already at that slide. So my name is Alex Vlachos. I'm from ATI. I'm part of the 3D Application Research Group. My primary role at ATI is I'm the lead programmer and general team lead of the demo team. So we're responsible for making all the demos for the latest Radeon cards, the 9700 and beyond in the past. On my left is Rav Dhiraj. He's part of our Mac development team, and he's responsible for doing the Mac-specific code in our demo.

So we're going to be showing a few different demos today running on OS X. And basically, our demo engines, like some game engines, are cross-platform, cross-API. So our demos run on three different sort of combinations. We run on the Mac through OS X and OpenGL, and we run on the PC through OpenGL, and also on the PC through DirectX. So all our demos run on all three platforms consistently, which is interesting.

So the things I'm going to be talking about today are these three topics. The first thing I'm going to talk about is Normal Mapper. So in the Radeon 9700 Car Paint demo, we used what we call a Normal Map. And we have this Normal Map tool, which will be available on the Mac soon, sometime in the next week or two.

Then I'm going to talk about a few video effects and more scene post-processing effects. So as graphics hardware gets more and more interesting, and the APIs are there, you can post-process your scene and do some really interesting things. So we're going to show some post-processing effects on live video. We just have a camera set up, the nice new camera that was handed out. So we're going to be showing the effects on live video.

Then the last thing we're going to talk about are the shadow techniques we used in the Animusic Pipe Dream demo. How many people have seen the Pipe Dream demo? Okay, how many people have seen the Car Paint demo? A few? Okay. So we're going to talk about how we did the shadow techniques in there. We didn't just brute force do stencil shadow volumes, we did some more interesting things. So we'll talk about those three things.

All right, so the normal mapper tool. The normal mapper tool basically is a tool to generate high-res normal maps. It's a way to make your low-res models look really high-res. And so why don't we jump into the demo now, to the demo machine, and we'll run demo machine two. Okay, thanks. So this is the Car Paint demo. It's a nice Ferrari, the car I want.

[Transcript missing]

The next thing we're going to talk about, the next section, is some post-processing effects. We're going to talk about four different effects. One is sepia tone. Sepia tone is a way to do sort of antique pictures, basically. The next one is going to be Sobel edge detection. Edge detection filters are really useful for a lot of things, especially NPR rendering, non-photorealistic rendering. Then we're going to show posterized effect, which is an NPR effect that will also use the edge detection.

And we're just going to touch on for a minute real-time FFTs, Fast Fourier Transforms. So, sepia tone. Can we jump over to the demo machine? Okay, so here's some live video of me. Great. So this is basically what the camera sees. We've got to apologize for the resolution. We had to use this camera because it's so cool, but it's only at 640 by 480. So there's sepia tone, so I can just sort of switch back and forth.

Right, so sepia tone is meant to look like a sort of antique picture. And this can be used in games. There are a lot of times where you want to do flashback scenes or whatnot; you could just go grayscale, but that's kind of boring. Instead of going grayscale, there's this sort of sepia way of doing things. So we jump back to the slide deck.
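
As a reference, here is a minimal CPU sketch of one common sepia approach (convert to luminance, then tint toward a warm brown); the tint constants are illustrative assumptions, not the values from ATI's shader:

    /* One common sepia approach: convert to luminance, then tint toward a warm
     * brown. The tint constants are illustrative only, not ATI's shader values. */
    typedef struct { float r, g, b; } Color;

    Color sepia(Color in)
    {
        /* Rec. 601 luminance weights */
        float lum = 0.299f * in.r + 0.587f * in.g + 0.114f * in.b;
        Color out;
        out.r = lum * 1.20f;
        if (out.r > 1.0f) out.r = 1.0f;   /* clamp the warm channel */
        out.g = lum * 1.00f;
        out.b = lum * 0.82f;
        return out;
    }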

[Transcript missing]

All right, so do you want to explain the difference between sort of the vertical and the horizontal? Yeah, so basically there's the Sobel Edge Detection, when we flip the whole thing on, the final result, is a way to look at your image and find all the edges.

We have this camera, it's a little noisy right now, so you've got a lot of extra little dots up there, but essentially it passes a filter over, two different filters, one to find the horizontal edges and one to find the vertical edges. Why don't we jump in and just show the horizontal edges? Okay, so the horizontal edges, if you look at that thing right behind me there, you see just these sort of horizontal lines here at the top and bottom. If we go to the vertical edges, you get those vertical, the two left sides, and when you have both of them up there, then you get all the edges basically in your scene.

Can we go back to the slides, please? Okay, so the Sobel edge detection filter is really simple. It's just two kernels, your horizontal kernel down here and your vertical kernel. So for the pixel in the middle, if you're trying to calculate what the value is there, you take into account the three pixels above and below it and you weight them with 1, 2, 1 and negative 1, negative 2, negative 1 appropriately, and you just combine those pixels and you end up with your horizontal kernel, sorry, your horizontal edge detection value. Do the same thing for your vertical kernel and you get the vertical value.
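
For reference, here is a minimal CPU sketch of those two standard Sobel kernels; it assumes a single-channel grayscale buffer and is not the actual fragment shader shown on the slides:

    #include <math.h>

    /* CPU reference of the two standard Sobel kernels. img is a grayscale
     * buffer of size w*h; the caller keeps 1 <= x < w-1 and 1 <= y < h-1
     * (border clamping omitted). */
    float sobel_magnitude(const float *img, int w, int h, int x, int y)
    {
        /* the eight surrounding pixels around the center pixel */
        float tl = img[(y - 1) * w + (x - 1)], t = img[(y - 1) * w + x], tr = img[(y - 1) * w + (x + 1)];
        float l  = img[ y      * w + (x - 1)],                           r  = img[ y      * w + (x + 1)];
        float bl = img[(y + 1) * w + (x - 1)], b = img[(y + 1) * w + x], br = img[(y + 1) * w + (x + 1)];

        /* horizontal-edge kernel:  1  2  1 /  0 0 0 / -1 -2 -1 */
        float gh = (tl + 2.0f * t + tr) - (bl + 2.0f * b + br);
        /* vertical-edge kernel:    1  0 -1 /  2 0 -2 /  1  0 -1 */
        float gv = (tl + 2.0f * l + bl) - (tr + 2.0f * r + br);

        return sqrtf(gh * gh + gv * gv);   /* combined edge strength */
    }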

And here's sort of the output again. If you look at the picture down here on the lower left, you can see just the horizontal, sorry, the vertical edges versus the horizontal ones up top and they get combined. So this is used a lot in NPR looking effects. And here's the Sobel code.

I mean, it's pretty straightforward. You basically do eight texel fetches up front down here at the bottom, and then based on those eight fetches, which are the eight surrounding pixels around your main pixel, you can then take the top and bottom three and the left and right three appropriately on the right and combine them to come up with that value.

Okay, so the next effect is an NPR posterized style. Now, we're going to use the edge detection on this, and this is sort of inspired by the movie Waking Life. There's this edge-preserving filter, it's called a Kuwahara filter, that basically gives you this nice look, this sort of NPR posterized look, which I'll talk about in a second.

What we're going to do is we're going to composite our, so our edges on top of that. So, can we jump over to the demo? Here we go. All right. All right, so here's the non-posterized output. We're going to turn it on. I'm going to have to apologize. It's going to be hard to see. The lighting conditions aren't ideal. There you go. So, unposterized, posterized. You can do a couple different passes to sort of, so if Alex can just sort of jump around, you can kind of see him. Jump around.

I'll pass on the jumping. There you go. That's posterization in real time. Okay, so if we go back to the slides, So this is the posterized output when we have really good lighting conditions. This is part of our Mac driver team coming out of a meeting, and we snagged them and said, stand there, we need an image for our slide.

So that's what the output looks like. Okay, so a Kuwahara filter is this. Essentially, for the pixel in the middle of this kernel here that we're trying to calculate, here's what we have to do. We take four quadrants of pixels. Each quadrant is a 3x3 pixel area.

The upper left quadrant is sort of the pink area, and we have the green area, the, I guess it's orangish, and blue. And they all overlap by one row of pixels. So each 3x3 kernel, each 3x3 quadrant there, what we do is we calculate the mean and the variance for each one of those. The mean color and the total variance of color in that 3x3 quadrant.

Once we have those four values, the mean and variance for each one of these quadrants, we then look at them and figure out which one of those four quadrants has the smallest variance. Whichever one does, we just use the mean color from that section.

And it's that simple. We just look at those four quadrants, figure out which one has the smallest variance, and pick that one's color. Done. It's that simple. So that's the idea, and the implementation is basically that. We do this two-pass effect, and essentially the way we speed this up, instead of doing these brute force calculations, is we basically do two passes over the final image.

The first pass calculates the mean and variance for each unique 3x3 area in the whole image. That's one pass. We store that off and we render that into an off screen texture. Then we do another pass through it and we fetch from that texture to get the four quadrants out. Those four mean and variance values.

And in our shader we then do the comparison and just write out the right color. So we get this down to two passes. Really simple. It's all image space. It's a constant overhead. The interesting thing about post-processing effects is that it really is just constant overhead. It doesn't matter what you rendered in that scene, whether you rendered one polygon or two million polygons, it's the same overhead because you're just doing the same math.

It's a fixed number of pixels. So it's great for games. You don't have to worry about the complexity of your game as long as you have a few frames to spare. You can do that kind of stuff. So here's the fragment shader code, which basically shows how do we compute the mean and variance. You can rip through this later. There's slides online.

But basically you just kind of go through, fetch your full three-by-three area of pixels, and do your computations. Then we have the variance selection, how we actually fetch from those; dead center here you have those four texture operations. We fetch four pixels from that off-screen buffer. And then we do the comparisons down at the bottom half here and say, which one has the lowest variance? And then we use the color from that one to output. Pretty straightforward.
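
For reference, here is a brute-force CPU sketch of the Kuwahara selection just described; the demo's two-pass GPU version (mean and variance into an off-screen texture, then selection) is an optimization of this same idea, and border handling is omitted here:

    #include <float.h>

    /* For the pixel at (x, y), look at the four overlapping 3x3 quadrants,
     * compute each one's mean color and total variance, and output the mean
     * of the quadrant with the smallest variance. The caller keeps
     * 2 <= x < w-2 and 2 <= y < h-2. */
    typedef struct { float r, g, b; } Color;

    Color kuwahara(const Color *img, int w, int h, int x, int y)
    {
        /* top-left offsets of the four 3x3 quadrants, all containing (x, y) */
        const int qx[4] = { -2, 0, -2, 0 };
        const int qy[4] = { -2, -2, 0, 0 };
        float best_var = FLT_MAX;
        Color best_mean = { 0.0f, 0.0f, 0.0f };

        for (int q = 0; q < 4; q++) {
            Color mean = { 0.0f, 0.0f, 0.0f };
            float var = 0.0f;

            for (int j = 0; j < 3; j++)          /* mean color of the quadrant */
                for (int i = 0; i < 3; i++) {
                    Color c = img[(y + qy[q] + j) * w + (x + qx[q] + i)];
                    mean.r += c.r; mean.g += c.g; mean.b += c.b;
                }
            mean.r /= 9.0f; mean.g /= 9.0f; mean.b /= 9.0f;

            for (int j = 0; j < 3; j++)          /* total color variance */
                for (int i = 0; i < 3; i++) {
                    Color c = img[(y + qy[q] + j) * w + (x + qx[q] + i)];
                    var += (c.r - mean.r) * (c.r - mean.r)
                         + (c.g - mean.g) * (c.g - mean.g)
                         + (c.b - mean.b) * (c.b - mean.b);
                }

            if (var < best_var) { best_var = var; best_mean = mean; }
        }
        return best_mean;   /* the mean of the least-varying quadrant */
    }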

Okay, the last section of this, the last part of this middle section, is FFTs, Fast Fourier Transforms. These are really useful. Anyone that's vaguely familiar with image processing knows that this is used a lot, and this is something that we can actually now do in hardware. With the Radeon 9700 family and other similar products that have full floating point precision and are able to render out to a texture, to 32-bit float textures, you can do some really interesting stuff now. So here's basically a test pattern on the left here, a very common test pattern for FFTs.

And this is output from one of our in-house demos. We haven't released this one yet, but it's coming soon. And this basically shows the two different domains, the frequency domain and the spatial domain of the FFT. That's all I really want to say about it. It's something that we're working on that we can actually start to apply. So this is sort of stuff that's in the works right now at ATI; we're starting to apply this and use it to do some image processing effects.

Okay, so we talked about those four things: sepia tones, edge detection, posterization, and FFTs. Which brings us to the final section, which is the shadow techniques we used in the PipeDream Animusic demo. How many people have seen this demo already? A few of you. Okay, why don't we jump over to Demo Machine 2 with audio.

Do we have the audio hooked up for this? Okay. Great. So this Pipe Dream demo, this is something we first saw at SIGGRAPH Electronic Theater 2001, I think. Yeah, 2001. And we were watching that, sitting there in the audience, and said, "We have to do this in real time as soon as we can." And as it turns out, on the next trip we were doing, we kind of went back to the office and thought about it, and I realized, "You know what? I think we can do this demo in real time." And so here it is running on OS X.

This was the main launch demo for the 9700 product family. I'll let this run for a minute for those of you who haven't seen it. We're running the beta build of our demo engine on OS X, so we're pretty much feature complete at this point.

So if you notice all the shadows that are happening, the dynamic shadows and the shadows globally in this scene. So we wanted to implement this and do this as one of our main launch demos. And one of the things we realized was, the first thing we asked ourselves was how are we going to do the shadows in this scene? Because the geometric complexity is really high. This scene, on average, we render about 400,000 polygons per frame in this demo. It peaks at around 550,000 polygons per frame. So there's quite a bit of geometry in here. And yeah, here we're at about 400,000.

So we're trying to figure out, how do we do the shadows in this? As writers of graphics demos, we wanted to sort of keep up with what games will look like a year down the road as developers start to take advantage of these latest features.

So we looked at a few games that are in development and games out there and said, let's stick with full-scene stencil shadow volumes. How many people are familiar with stencil shadow volumes? Okay, so basically it's a technique that lets you render hard shadows globally, right? In the past, developers have used shadow textures.

Light maps, right? That sort of bake in your lighting globally. But it's sort of soft. It's not very exact. It's not hard shadows. And there's a way to do dynamic shadows, dynamic objects that cast shadows that look nice and hard and crisp. And that's sort of how games are going to be looking for the next couple of years. So we wanted to stick with that. So we first went through this.

We got the scene up and running. And we turned on stencil shadow volumes globally. And our performance was somewhere around two frames a second. And we realized we can't do that. There's just way too much geometry. The overdraw on any given pixel on average was near 100 at points. Because of all these different volumes, all these different pieces of geometry. These things up here are just insane, the amount of overdraw.

Especially if you look at that from the side, you get a lot of overdraw there. So I'll talk a bit about that in a minute or so. Rav, can you sort of pause this or come out of the fly-through mode? Yeah, and fly over here to the left. Okay, there we go. Let's zoom up on this area here. So here's what we do. We have to zoom out.

There's two different kinds of geometry in our engine. There's something we call static geometry and then dynamic geometry. So static geometry, so this is the dynamic geometry. This is the geometry in our world that actually moves, that's animated in some way. Right? So these are the only pieces of geometry that cast shadows dynamically.

This is our static geometry. This is all the geometry that never changes. It's baked in. It's not moving. There's nothing animated about it. So this we can do something different with for shadows. We have two different shadow techniques that we combine to do our overall shadows. So if you zoom in here. Rav, can you zoom in? On the left, if we look at that shadow area right up here. So we use this technique that I'm calling shadow cutting.

So what we do is we actually cut the shadows directly into the scene. So if you turn on wireframe here, these are static shadows here. This is geometry. We actually cut the shadows directly into the scene right here. Hit page up a bunch. So we just press A to pause the lighting. So you'll see, the artists don't do this manually. They'd quit if I made them do that.

So we automatically generate this stuff. We basically do shadow cutting, which I'll explain now. Can we jump back to the slides, please? Thanks. OK, so static shadows, like I said, are a great opportunity to not do stencil shadow volumes globally, because their shadow volumes don't change. They don't ever change. And a lot of those shadows won't cast onto dynamic objects, so you don't need their shadow volumes, right? So for a lot of those things, you can optimize this out.

So here's where we just showed the before and after. This is what you see, which looks like shadow volumes, which I'll get to in a minute. But instead, we're actually cutting the shadows straight into our scene geometry. And so the advantage, like I said, is it looks just like stencil shadow volumes, right? This is what some very well-known, soon-to-be-released games are going to be doing, right? Global stencil shadow volumes. So it's going to look like hard shadows everywhere, which is what we wanted. But we obviously have to cheat a lot. We're not going to do full shadow volumes.

We need to run more than two frames a second. So the interesting thing is that since we separate out our geometry, so you're probably thinking, isn't it more expensive? You have more geometry now that you diced your geometry up, right? Isn't that slower to render? The answer turns out to be no, because if you compare it to shadow volumes, you're rendering these huge volumes to your back buffer that have a lot of overdraw.

Instead, we're just going to have a few more vertices to process, which is the same, but a lot less than the shadow volume verts. And those polygons that... that we know are in shadow, we can actually draw with a simpler shader, because we know that none of our dynamic, our dominant lights really hit those polygons. So we don't have to do all the math that's required for those polygons that are in shadow permanently.

So, here's how we do it. We do it with shadow cutting. So, it requires beams. So, here's just some basics on what a beam is, for those of you who aren't familiar. A beam is basically this. So, a beam is sort of a pyramid-shaped volume, in a sense, with three sides.

And so, you have a light position, this yellow thing up here, that's a light source. And this polygon, this sort of horizontal edge at the bottom, if you think that the polygon is actually popping out of the screen, right, out of this screen here, the volume is basically the position of the light, and you calculate four total planes.

One plane is the point of the light with each edge, right? So three of the four planes are calculated from the light position and the edges of that polygon, and the fourth clip plane is the actual plane of the polygon itself. And depending on which side of that polygon's plane you look at, you can have a near beam, which is basically everything in that volume between the polygon and the light, or a far beam, which is everything in the volume on the other side of the polygon, away from the light.
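
As a sketch, here is one way to build those four beam planes for a triangle in C; the vector types and the plane orientation handling are simplified assumptions, not the demo's actual code:

    /* Three side planes through the light position and each edge, plus the
     * plane of the polygon itself. */
    typedef struct { float x, y, z; } Vec3;
    typedef struct { Vec3 n; float d; } Plane;   /* points p on the plane satisfy dot(n, p) + d = 0 */

    static Vec3  v_sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
    static float v_dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  v_cross(Vec3 a, Vec3 b)
    {
        Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
        return r;
    }

    static Plane plane_from_points(Vec3 a, Vec3 b, Vec3 c)
    {
        Plane p;
        p.n = v_cross(v_sub(b, a), v_sub(c, a));   /* unnormalized is fine for side tests */
        p.d = -v_dot(p.n, a);
        return p;
    }

    /* planes[0..2]: light position with each edge; planes[3]: the triangle's own plane */
    void build_beam(Vec3 light, Vec3 v0, Vec3 v1, Vec3 v2, Plane planes[4])
    {
        planes[0] = plane_from_points(light, v0, v1);
        planes[1] = plane_from_points(light, v1, v2);
        planes[2] = plane_from_points(light, v2, v0);
        planes[3] = plane_from_points(v0, v1, v2);
    }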

We use these beams in our shadow-cutting preprocessing, and the algorithm takes about 20 minutes for the entire scene to cut those shadows in.

That's a pretty complex scene. So that timing's not too bad. So basically here's what we do. For every polygon in our scene, one at a time, we're going to dice it up. We're going to take all the other polygons in the scene and cut the shadows into it based on that geometry, one at a time. So what we do is we'll call that polygon A. Polygon A is for each polygon in our scene, we're going to find all the polygons that live between that polygon and the light source.

So everything in that polygon's near beam we're going to say has to cut into that polygon and dice it up for shadows. So for each polygon A, we have a bin of polygon Bs. For each one of these polygon Bs, we take its far beam. So polygon B lives between polygon A and the light. So in the case where this is our light source here and we're trying to cast shadows onto this table surface, this microphone here has to cast shadows onto it.

So for a given polygon B here, we're going to take its far beam and cut into polygon A, which is the table, with those three planes, which is what's happening here. Polygon B, this blue one being the microphone polygon, cuts down into the table. And those cuts end up dividing it into three different polygons. What happens is, on the right here, you can then tag each resulting polygon as in or out of light.
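
Structurally, the preprocessing loop looks something like the sketch below; every type and helper here is a hypothetical placeholder standing in for the real beam-test, clipping, and mesh-cleanup routines, not ATI's tool:

    typedef struct Polygon Polygon;
    typedef struct { float x, y, z; } Vec3;

    int  in_near_beam(const Polygon *a, const Polygon *b, Vec3 light);   /* does B sit between A and the light? */
    void clip_by_far_beam(Polygon *a, const Polygon *b, Vec3 light);     /* dice A with B's far-beam planes */
    void tag_shadowed_fragments(Polygon *a);                             /* mark pieces inside a far beam as in shadow */
    void optimize_mesh(Polygon *a);                                      /* re-merge and clean up after each cut */

    void cut_static_shadows(Polygon **polys, int count, Vec3 light)
    {
        for (int i = 0; i < count; i++) {          /* polygon A: the one being diced */
            for (int j = 0; j < count; j++) {      /* polygon B: potential occluders */
                if (i == j || !in_near_beam(polys[i], polys[j], light))
                    continue;
                clip_by_far_beam(polys[i], polys[j], light);
            }
            tag_shadowed_fragments(polys[i]);
            optimize_mesh(polys[i]);
        }
    }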

So if it falls in polygon B's far beam frustum, we tag it as in shadow. Otherwise, we just leave it alone. And it's that simple. You just keep doing that over and over and over. And you just have an optimization step that optimizes your mesh every step of the way. And you end up with what we just saw at the bottom here, the final shadow. So here's the interesting thing: a lot of the literature, a lot of different ways to do shadow cutting in a sense, can't solve these cases.

Cyclically overlapping polygons are the root of all evil for anyone doing research in this topic. It's polygons that overlap each other, three of them. There's no order. You can't possibly sort these three polygons and figure out which one's on top because they're each on top of each other.

And so this breaks a lot of ways, different methods. This brute force method of just cutting everything up works fine. You end up with what's on the right there, exactly what you want to see, everything cut up perfectly. And there's the results. So those are our static shadows. We cut them into the scene, nice and simple. Now we have to deal with our dynamic shadows.

And in a second, we're going to talk about how we combine both of them so they look right, they look like they belong together. So let's talk about the dynamic shadows. So dynamic shadows are basically done with shadow volumes. Let's take a look. Can we jump into the demo, please? Okay, so, um, okay, why don't you pull the camera back a little bit? Okay, that's good.

And now, turn on just objects. So here's our dynamic objects, and if we turn on wireframe, these blue wireframes are the shadow volumes. So a shadow volume is basically, given our piece of geometry, we can generate a volume that represents what's in shadow. So anything that falls into the symbol, for example, here, anything that falls into that symbol's volume, this volume generated, is in shadow.

We tag that as in shadow. We can do tricks with how we render these volumes into the stencil buffer to make it look like, to actually tag the pixels properly, if you're saying what's in shadow and what's not in shadow. So those are basically what a shadow volume is. If you turn, actually, if you turn wireframe off and go back to the full scene and object geometry, yeah.

[Transcript missing]

Okay, so here's just a wireframe shot of these instruments going over the drum machine there, and here's their shadow volumes. And you can see where their shadow volumes intersect the wall, those pixels are tagged as in shadow with the stencil volume. So here's what a shadow volume is. Shadow volume basics, for those of you who aren't that familiar with it: given a light position and, say, a sphere, what you do is you figure out the silhouette edges from the light's point of view.

What you want to do is you want to basically break that sphere in half, right? Keep anything that gets hit by the light in place and take the bottom half of that sphere and shoot it down, far away from the light, and at the angle of the light as well.

So what you end up with is this sphere turns into this sort of weird-looking pill shape, this full pill shape, and that is your shadow volume, which is a closed volume. And you can use that to render into the stencil buffer to mark each pixel as in or out of shadow.
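
As a point of reference, here is a minimal sketch of the classic depth-pass stencil counting setup in OpenGL; it is the textbook approach, not necessarily the exact pass structure the demo uses:

    #include <OpenGL/gl.h>   /* <GL/gl.h> on non-Mac platforms */

    /* Draw the volume's front faces incrementing stencil, then its back faces
     * decrementing. Pixels left with a nonzero count are inside a volume,
     * i.e. in shadow. */
    void render_shadow_volume_to_stencil(void (*draw_volume)(void))
    {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   /* stencil writes only */
        glDepthMask(GL_FALSE);                                 /* keep the scene's depth intact */
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glEnable(GL_CULL_FACE);

        glCullFace(GL_BACK);                        /* front faces: increment the count */
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        draw_volume();

        glCullFace(GL_FRONT);                       /* back faces: decrement the count */
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
        draw_volume();

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);                       /* stencil now tags in/out of shadow */
    }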

Here's just sort of a brief look at one way to do this in hardware. Essentially, there's this method... how many people here are familiar with this method of doing shadow volumes with degenerate quads along the edges of primitives? Not many. Okay, so the idea is basically this. So you have this model, this sphere, right? For each edge in your model, what you want to do is you want to stick a degenerate quad, an infinitely thin quad, there, just two triangles that connect that edge.

Okay, so you want to take your sphere, basically separate out all these vertices, and add two polygons along each edge. That is basically a quad. So the idea is when you take this sphere and you split it at the seams and stretch it apart, those quads along those edges, those silhouette edges, get stretched.

Right? And so if we go back one slide to this... Okay. over here, these represent sort of the quads. Each one of those vertical areas represent a quad that was originally stuck on an edge. Right? And when we split it apart, it's like putting like Elmer's glue there and it gets... and you just pull it apart and it sort of just fills in the gap or you just stretch it open. So the idea is this.

So we stick these infinitely thin quads along these edges. So for... here's two polygons in our given model. And that shared edge there, we stick these infinitely thin polygons right in the middle there. Now, they're actually right on top of each other. I'm just spreading them apart so you can see. You can visualize it. And what we do is we stick the face normals of each polygon. So polygon A, we take its face normal... Sorry.

Oh. We take its face normal and we embed it into the three vertices of polygon A. And we take polygon B's face normal and embed that face normal into those three verts. And the idea is this. The idea is that in your vertex shader, as you're running a vertex shader, you want to basically be able to shoot each vertex back one at a time. So you want to either keep it in place if it's front facing to the light or you want to take the vertex and shoot it back to generate this volume.

So the idea is that since you have your face normals embedded in your vertices, if you shoot back the lower left vertex for polygon A, you're going to end up shooting back the other two as well because it's the same normal. It's the same face normal you're doing the math with. You're going to basically do a dot product with the light vector and that face normal.

So if one shoots back, they all shoot back. And in the end, you end up basically... Either each primitive stays in place or it gets moved back. One at a time. This is sort of a brute force approach to doing shadow volumes in hardware. Fully in the hardware. There's no CPU work happening here. All happens in your vertex shader. So we render this into the stencil buffer.
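
Here is a small CPU sketch of that per-vertex decision (the real thing runs as a vertex shader on the GPU); the extrude_scale parameter is an illustrative assumption, not the demo's value:

    /* Each vertex carries its polygon's face normal; if that face points away
     * from the light, the vertex is pushed away from the light along the ray
     * from the light through the vertex. */
    typedef struct { float x, y, z; } Vec3;

    Vec3 extrude_vertex(Vec3 pos, Vec3 face_normal, Vec3 light_pos, float extrude_scale)
    {
        Vec3 to_vertex = { pos.x - light_pos.x, pos.y - light_pos.y, pos.z - light_pos.z };
        float facing = face_normal.x * to_vertex.x
                     + face_normal.y * to_vertex.y
                     + face_normal.z * to_vertex.z;

        if (facing > 0.0f) {                      /* the face points away from the light */
            pos.x += to_vertex.x * extrude_scale; /* shoot the vertex back along the ray */
            pos.y += to_vertex.y * extrude_scale;
            pos.z += to_vertex.z * extrude_scale;
        }
        return pos;                               /* light-facing vertices stay in place */
    }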

And so they're in there. So now we can render these shadow volumes in there, and you have this in your stencil buffer. You can basically test it to say which pixel is in shadow, which pixel is out of shadow. Which brings us to the question: how do you get those shadows into the color buffer and make them look good and make them blend in with the shadows that we have baked in? Right, you have these baked-in shadows, and you want to make sure that you don't double darken pixels.

If something's already in shadow from the scene geometry, you don't want to have it go into shadow again. It's going to look bad, it's going to look obvious that you're doing something different for dynamic geometry. So we're at the stage right now where we essentially have our full scene drawn into the back buffer with no dynamic shadows. Everything is drawn without the shadows, and our stencil buffer has each pixel tagged as in or out of shadow.

So now we have to combine these, the back buffer with our stencil buffer in some way to get them to look right. So what we do is, one thing you could do is just draw a full screen quad over your scene and for each pixel that's in shadow just dim the color by, you know, 0.5. Just dim it in half, right. You're going to get some really bad artifacts like that, it's not going to look right.

So what we're going to do is we use Destination Alpha. How many games here, how many, who uses Destination Alpha for anything? Actually stores real values in Destination Alpha? Perfect. So, four, sorry. So, Destination Alpha, you can think of this as a whole other channel you can store data in.

As you render your scene, as you render each polygon, you're writing to RGB, also write something to Alpha. You can write something useful to Alpha, and then use it later in a separate pass. So this is a way you can do a lot of interesting things. So we're using Destination Alpha to do our shadows.

And the idea is this, is instead of drawing a quad over your screen,

[Transcript missing]

If you zoom back and just sort of... Yeah, what is it? G, is that the color? Yeah, I go two more times. There we go. So, here is what the scene looks like. Now, you're going to notice it looks counterintuitive. Where there's no lights up at the ceiling, there's no real dominant lights hitting that, you have white.

What white means is it's just one. You're going to basically take... If you multiply that pixel by one, it's not going to change, which is what we want. If that falls into shadow up there, it doesn't matter because there's no light already hitting it, so there's nothing to mask out. So, you want to just multiply by one and leave it alone.

For the pixels in here, in these dark areas,

[Transcript missing]

And what you're going to see here is these baked-in shadows on the wall. We now have two different lights casting shadows in this area. Okay, if you turn on the wireframe quickly, this is going to show you that, to back up for a minute, in our shadow cutting we actually cut shadows from multiple lights into our geometry, and we can preserve which polygons fall into one light source or two light sources, or are shadowed by one light or two lights. So if we turn the wireframe off for a minute, go into the Dest Alpha, view Dest Alpha there.

There we go. So if you notice, this is what we write into DestAlpha for this area. So the interesting thing is, for the pixels right here in this darker gray, there's no baked in shadows. This is just a pixel being hit by the light full on. The pixels down here that are lighter gray are being hit by just one light.

[Transcript missing]

It's already fully in shadow. There's nothing to shadow there. So when we pre-process and do our shadow cutting, we tag each polygon, and we know how many lights it's already being shadowed by, or how many lights it's being hit by, and we can write out the right value.

So what we end up with, can we go to the right over the drum machine? On the left? Yeah, right there. Let's page down a little bit and get these guys up here. Okay, good. So I want you to notice, this is the whole reason for doing this dim factor thing, right? With this sort of Dest Alpha is, as these shadows go out of this light source, they dim and they blend in perfectly. This bar here, this big horizontal bar, that shadow is baked in.

That's from our shadow cutting, right? And the idea is that those shadows blend in with that bar fine, and they disappear nice and smoothly. You never notice that right when it gets out of that volume, it disappears. We don't actually draw that shadow volume because we do culling based on our light sources. So we end up with these nice shadow volumes, these nice shadows that work perfectly, that just integrate with the scene well. So can we go back to the slides? Okay, so the shadow quad.

So now that we basically have our Dest Alpha drawn, right, everything, all the contents of our Destination Alpha and our RGB channels are filled with all the right data. We have our stencil buffer filled with the right data. Now we have to basically draw a quad over our full screen to get the shadows for the right pixels.

And so what we do is this. This is our blending. In OpenGL, you set your source color to zero and your Dest Color to Dest Alpha. Sorry, you set your source factor to zero and your Destination Factor to Dest Alpha. So the equation you end up with when you plug this into your Alpha Blending algorithm is you end up with, this turns out to be zero, and you end up with this Dest Color times Dest Alpha.

Right, Dest Alpha is your dim factor. So you just end up with the equation that you want. And you turn on stencils. You turn on your stencil, your stencil test as well, and you only write to the pixels that are in shadow. And it's that simple. And you get those shadows in the end.
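
Put together as code, that full-screen shadow pass looks roughly like this; the stencil reference value and the state bookkeeping are assumptions, but the blend factors are exactly the ones described above:

    #include <OpenGL/gl.h>

    /* Blend so the frame buffer color gets multiplied by destination alpha
     * (the dim factor written earlier), and only touch pixels the stencil
     * buffer tagged as in shadow. */
    void apply_shadow_quad(void (*draw_fullscreen_quad)(void))
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ZERO, GL_DST_ALPHA);      /* result = DestColor * DestAlpha */

        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_NOTEQUAL, 0, ~0u);      /* only where the volume count marked shadow */
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);  /* leave the stencil contents alone */

        glDisable(GL_DEPTH_TEST);
        glDepthMask(GL_FALSE);
        draw_fullscreen_quad();

        glDepthMask(GL_TRUE);
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_BLEND);
    }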

So, to summarize on the shadows, we talked about the differences between the static and dynamic shadows, went into the shadow cutting algorithm and some of the shadow volume basics. And that's how we do shadows in Animusic. So, I'm looking at the clock and I have a few minutes to burn here, so I want to show one other effect. If we could go back to the demo machine and Animusic.

So, if we press 1, press 1, start this over. If you watch this, the balls in Animusic are actually motion blurred. When we first got the DVD in our hands from SIGGRAPH to watch this video, I stepped through every frame and I watched everything they were doing in the full video. And the only thing they motion blurred were the balls, because those were the only things moving fast. Everything else was moving slow.

So we said, well, how do we do motion blur on balls that make it look realistic? And so, we came up with this vertex shader to do what looks like motion blur. So if we press A to pause the animation. Okay, so here's the pause. Here's the actual geometry. Okay, that's good.

Actually, back up just a little bit. Okay, so the ball right in the middle there, down on the bottom, is almost a full ball shape. And that's moving pretty slow. The ones that are moving faster look like hot dogs, right? So what we do is we hit wireframe.

One more time. Okay, so it's going to be hard to see. Turn off. Press D. Turn off the... One more time. Okay, so the ball shape here looks like a pill. What we basically do is we take our ball and we split it at one of those seams. And we generate... It looks like it's motion blurred. We have the right shape for it, right? And what we do is we then use... When we fetch from our environment map, because these things are all environment mapped, we lower the MIP level bias, the LOD bias.

So when we fetch the texels from our environment map, it's blurrier. We also change the opacity value on those pixels. So the more round it is and not moving, it's a solid ball. As it becomes more and more stretched, we make it more transparent.

So it looks like you can still see through it, right? Because if you keep it solid, it's going to look like you're launching hot dogs, right? Which isn't good. So you want to make it look like you're able to see through it, because with motion blur, you can sort of see through anything that's motion blurred.
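
A hedged sketch of how those three knobs (stretch, a blurrier environment-map fetch via mip bias, and opacity) might be driven from ball speed; all constants here are illustrative, not the demo's actual tuning:

    /* Faster ball: more stretch into a pill, coarser (blurrier) env-map mips,
     * and lower opacity. */
    typedef struct {
        float stretch;    /* how far the trailing half of the sphere is pulled back */
        float lod_bias;   /* bias toward coarser env-map mip levels for fast balls */
        float alpha;      /* opacity: solid when slow, see-through when fast */
    } BlurParams;

    BlurParams motion_blur_params(float speed, float frame_time)
    {
        BlurParams p;
        float blur_len = speed * frame_time;       /* distance covered this frame */

        p.stretch = blur_len;                      /* trailing vertices offset by this much */

        p.lod_bias = blur_len * 4.0f;              /* illustrative scale */
        if (p.lod_bias > 3.0f) p.lod_bias = 3.0f;

        p.alpha = 1.0f - blur_len * 2.0f;          /* fade out as it stretches */
        if (p.alpha < 0.3f) p.alpha = 0.3f;

        return p;
    }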

So now you're probably wondering, well, this is a cool technique, but not a lot of games have balls launching out of machines and instruments. But you do have other projectiles. You have rockets. You have someone's going to throw a grenade over something. Whatever things that are going to be moving in your scene in your games, you can apply this technique to it, right? Especially things, even things that are like a...

[Transcript missing]

So that's motion blur in Animusic.

All right, can we go back to the slides please? OK, for more information. So here's some references. These will be on the web. These are some books I highly recommend as far as current graphics go. And some links to-- the normal mapper tool will be on ATI's developer website sometime in the next few weeks. And here's us. That's me, Rav. Marwan is another guy in our office that worked on some of the video shaders. And if you have any questions, feel free to drop us an email. And that's it.