
WWDC07 • Session 418

Leveraging the OpenGL Shading Language (GLSL)

Graphics and Imaging • 1:12:52

The OpenGL Shading Language (GLSL) enables you to program the GPU and transform your 3D rendering into a cinematic experience. Learn how to create spectacular visual effects through control over vertex and fragment processing. Find out how programmable shading can accelerate complex renderings, enable new ideas in 3D graphics, and transform your application. A must-attend session for advanced OpenGL developers.

Speakers: Geoff Stahl, Alex Eddy

Unlisted on Apple Developer site

Downloads from Apple

SD Video (236.8 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

Good morning. How are you guys doing this morning? It's a little bit early on a Thursday. I'm Geoff Stahl, and I'm going to be talking about the OpenGL Shading Language. One thing I wanted to do at the start of this presentation is see a show of hands on what people's familiarity with it is, because I can gauge my talk to the audience, to what you guys want to hear.

So, who has never used GLSL, the OpenGL Shading Language, at all? Probably about two-thirds of the room. That's good. Who would say they're pretty familiar with it? About a third or so. Ok, that's good. We can work with that.

So, the first thing I'm going to talk about, which will really hit the people who have not used it, is: what is GLSL, and why do you care? I think what I'm going to show you today will really show you that if you're coming to OpenGL, or if you have a traditional GL application, or even if you're thinking about it and you're not sure, GLSL gives you a really good entry into working with OpenGL and getting the system to do exactly what you want it to do, which is sometimes daunting.

I've talked to people before, and when you look at Core Animation and Core Image and some of those technologies that Apple has, we've positioned those technologies as really easy-to-enter, easy-to-use technologies, but below them they're sitting on OpenGL.

Sometimes in complicated ways, sometimes in not-so-complicated ways. But GLSL is another entry point, one which I think makes it really easy to use the power of the GPU for your application, whether it's a 3D app or a 2D app, without having to learn every nuance of a very thick OpenGL specification.

So, basically, GLSL, the OpenGL Shading Language, is a high-level, C-like language that's integrated into the OpenGL API and gives you access to the power of the GPU. The first thing we'll do is just look at a piece of code up here, and that code should look familiar to you.

If you're a C programmer, it looks very similar to what you'd see. But it has a few qualifiers on the variables at the beginning: there's an attribute, a uniform, and a varying. What those do, basically, is tell the compiler what you're going to be doing with these variables. Attributes come in through the vertex path. Uniforms are like constants. And varyings come out of a vertex shader, go into the fragment shader, and are interpolated when they arrive in the fragment shader. So those are the qualifiers.

Then there's a main, which you would expect. And then, except for the built-in function there, we have something you can really understand. There's a mix function that's built into GLSL. We take an input, which is the gl_-prefixed built-ins, and we apply a weighting to it. Understand, though, that GLSL is a vector language. That's a vec4, so we have four components there, and we're mixing across all four components with that weight. So it's really nice, and you can start thinking about problems in that way.

Texture coord: again, it's gl_-prefixed, so it's a built-in; we pull some texture coords out, and then we calculate a position. The last line is basically what the fixed-function pipeline does in OpenGL to transform your vertex. It transforms it from 3D space into, basically, screen space.
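
To make that concrete, here is a minimal sketch of the kind of vertex shader being described; the slide's actual source isn't in the transcript, so the variable names here are illustrative.

```glsl
// A hedged reconstruction of the kind of shader described above;
// names are illustrative, not the slide's code.
attribute vec4 inColor;   // per-vertex data, fed in through the vertex path
uniform float weight;     // a constant for the whole draw call
varying vec4 color;       // interpolated on its way to the fragment shader

void main()
{
    // mix() is a built-in; it blends across all four components at once.
    color = mix(gl_Color, inColor, weight);

    // Pull a built-in texture coordinate through.
    gl_TexCoord[0] = gl_MultiTexCoord0;

    // What the fixed-function pipeline does: transform the vertex
    // from 3D space into, basically, screen space.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```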

So that just shows you what you're looking at. So, why do you care about GLSL? Why is GLSL interesting to you? Why is it easier? I mean, some people say it's a programming language — you know, calling functions might be easier than that. But what I've found is that with the tools we give you, some of the sample code we have, some of the stuff out there on the web, and some of the things you can do yourself, you really can tailor a solution to your problem.

So it's approachable. It's very easy to use, and I'll show you that. It's understandable; we already said it's a C-like language, so that's pretty easy. It's direct: you can directly do what you want to do. In some cases in OpenGL you may have to use the texture combiners to combine different textures to get certain effects. Well, now you can just blend them in a C-like program.

And it's fast. It accesses the GPU. The newest features of the newest GPUs — the way you get them is through GLSL on Mac OS X. And it's an industry focus: the ARB industry standards group is working to improve, expand, and continue to develop OpenGL; Apple is a very active part of this, and it's Apple's focus as far as moving forward with the GPU. So you're on a technology that we're moving forward. Next year when you come here, we'll be talking about GLSL; the year after, we'll be talking about GLSL.

Let's jump into some of the points and dive a little bit deeper. So, it's approachable: basically, it's simple to set up and easy to experiment with, and you don't need a lot of OpenGL knowledge. What I'm going to show is this little example here, and we're going to look at it live. Say someone out there has an image processing app, and they've seen Core Image and say, hey, that's pretty good, but I want to do it a little bit differently.

I want to do some of my own stuff. So what do I do? I can write an edge detection filter in GLSL very simply and just apply it to a texture: I load the texture in, and I apply the edge detection filter. And I can modify it, I can work with it. So let's go right into a demo and see how someone could easily bring this up.

So, there's a piece of sample code that's been around for about a year now, called GLSL Sample Editor. I'm using it as the editor example because you have it on your DVDs — it's in the sample code under OpenGL and Cocoa — and you can write your GLSL shaders just like I'm doing on stage here.

You can write them and experiment with them — you know, not while the presentation's going on, but maybe after. So this is the edge detection filter. It might be a little bit hard to read, but I'll walk through it and talk about what I'm doing. I'm not sure if this even works.

I don't think I hooked that up. But — at the beginning I talked about constants. The first line says sampler2DRect. All a sampler is, basically, is something that accesses a texture. It's a 2D rect, so it's a rectangular texture. So I have to load that.

So let's look at our little rendering window, or look in the shader program here; I have a texture tab we put in here. This is all in the sample code — just compile and run the sample code. There's a texture in here, and we have a type. I'll make it rectangular so it works with the shader I wrote.

Now, what does this shader do? Basically, this is a kernel filter. It takes the center point that you're sampling and multiplies it by negative four, then takes the surrounding pixels — the vertical and horizontal neighbors in X and Y — multiplies them by 1.0 in this case, and does an edge detection.

So basically, when it crosses a boundary, you're just going to change the color. We have a texture input. What I'm doing here is the equivalent of calling glUseProgram in OpenGL. And down here — let's make this a little bit bigger so you guys can see it.

So, I am going to contend that those five lines of GLSL, working in vector space, make an edge detection filter. Ok, great. Well, then I have to type code to change things. What's the one thing we can do in GLSL that I mentioned about constants? If we look at the constants, we have a 0.5 and a 5.0 here, and we have some of these 1.0s. So let's throw something in here to show you how easy it is to experiment with this.
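
For reference, here is a minimal sketch of what a five-line filter like this could look like. The demo's exact code isn't in the transcript, so the names are assumptions; the 0.5 and 5.0 constants come from the narration.

```glsl
// A hedged sketch of the edge detection (Laplacian kernel) filter described;
// reconstructed from the narration, not the demo's actual source.
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect tex;

void main()
{
    vec2 tc = gl_TexCoord[0].st;
    vec4 sum = -4.0 * texture2DRect(tex, tc)
             + texture2DRect(tex, tc + vec2( 1.0,  0.0))
             + texture2DRect(tex, tc + vec2(-1.0,  0.0))
             + texture2DRect(tex, tc + vec2( 0.0,  1.0))
             + texture2DRect(tex, tc + vec2( 0.0, -1.0));
    gl_FragColor = vec4(0.5) + 5.0 * sum;   // base level plus intensity scale
}
```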

So I'm going to add another uniform — we're compiling on the fly — and I'm going to call it adjust. Actually, I'm going to make it a vec3, because I want three values. And what I'll do — remember, this is a vector — is simply paste it in here: make that the X component, make that the Y component, and make how big the kernel is the Z component.

So that's the offset into the texture sampler; I'll make that the Z. There's a minus one, there's a plus one here, and there's one more plus one, and we do that. So now we have this variable that we've added in — a constant coming in that your API specifies — and you can adjust it. So now we can adjust the intensity, we can adjust an offset level, and we can adjust the size of our filter. Again, it's these five lines of code.
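
The edited filter would then look something like this — again a reconstruction, with the vec3 components mapped the way the narration describes them:

```glsl
// The same sketch after the live edit: the hard-coded constants are
// replaced by one vec3 uniform (illustrative, not the demo's source).
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect tex;
uniform vec3 adjust;   // x = base level, y = intensity, z = kernel offset

void main()
{
    vec2 tc = gl_TexCoord[0].st;
    vec4 sum = -4.0 * texture2DRect(tex, tc)
             + texture2DRect(tex, tc + vec2( adjust.z,  0.0))
             + texture2DRect(tex, tc + vec2(-adjust.z,  0.0))
             + texture2DRect(tex, tc + vec2( 0.0,  adjust.z))
             + texture2DRect(tex, tc + vec2( 0.0, -adjust.z));
    gl_FragColor = vec4(adjust.x) + adjust.y * sum;
}
```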

In this sample, you see adjust came up in our uniform tab — we talked about uniforms — and now we have these sliders. If you notice, the first one is my base level, so that goes basically from black to white. And we're doing alpha too, so that's why it goes back to the gray graph. We'll put that at 50 percent. We'll let the intensity go up to 100 and the offset up to 10. So now we bring this up and we bring that up, and you can see — actually, we'll make this a little bit lower, 15.

So that's basically what we had before. But you can increase the intensity of your edge detection through the uniform — one variable going down to the GPU right now; it's all on the GPU. You can, again, control the base level, and you can control the size of the kernel you're filtering with to get edges. So this is real similar to what you would see in Core Image. Again: GLSL, real similar.

I've got those basic five lines of code, and then I can play with it, I can experiment, I can build that filter or whatever I'm after. And my point there is that with GLSL, you didn't need to know any OpenGL — except maybe the texture code. The rest was you solving your problem; it's solving the problem of, hey, I want a kernel filter. So I'm going to go back to the slides.

So it's understandable. We've already seen that the code is pretty clear and the program is straightforward. I gave the texture combiner example: in OpenGL you can do lots with texture combiners. But maybe that's not what you want to do; maybe you just want to blend something.

So, this was using color masking, and I wrote it four years ago. It may not be the best code in the world but, oh my God — if I put it on a slide, it would be four or five slides. Let me tell you how to do a color mask and turn components on and off: not what I want to do. So, let's look at the equivalent — blend color, color masking, color component control — in GLSL.

So that's the fragment program that does basically the same thing. What it does, basically, if you notice, is take a contribution uniform value — we talked about that — do a texture look-up, and do a vector multiply. Remember, we're working in vector space. And then it gets a color.
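
Something along these lines — the uniform name "contribution" comes from the talk; the rest is an assumed sketch:

```glsl
// A sketch of the one-line color-contribution shader described;
// only the uniform's name is from the talk.
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect tex;
uniform vec4 contribution;   // per-channel scale: r, g, b, a

void main()
{
    // One texture look-up, one vector multiply: scale each color channel.
    gl_FragColor = contribution * texture2DRect(tex, gl_TexCoord[0].st);
}
```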

So all you're doing is having an adjustment for the different color components. If you want a blender that changes color components — again, in something like an image processing app — it's very simple. So we'll go back to the demo machine and use this program. If we look at — again, we go to textures, and this is a texture rectangle — look at the uniforms.

The contribution is the only uniform I have up there, and it's all maxed out. But let's say I want to turn down the red channel, or the green channel, or the blue channel. You can do it this way, and you can do something like this: we go turn it up.

You can see — let me move this, make it bigger — that we're just sending a single uniform in, and on the GPU it's doing the blend itself with a one-line shader. It's direct. It's very clear: you're affecting exactly what you want to affect.

And with our sample editor you can take any image you want, dump it in, and, you know, dim it out or — sorry, brighten it. And I need to redraw it. I'll just do this — oops, uniforms, there we go, back to editing my uniforms, there we go. That's what I wanted to do. There you go. So it works with any image. So we'll go back to the slides.

That was color control and blending. You can do it a lot of different ways in OpenGL, and there are probably better ways than I showed you in the code — obviously I was trying to make a point. But the point is that there's a lot of code where you set up OpenGL state, manipulate state, and set contributions. In GLSL, you just write the code you want to write.

So it's also direct: you can accomplish what you really want to do, and we showed that in the other example. But the point here is that you get direct access to features of the GPU. The GPU has improved fragment programs; it has looping and branching and conditionals; it has geometry shaders.

All these things are exposed through GLSL. So if you want to, for example, do some kind of fractal pattern where you build geometry on the fly on the GPU without using the CPU, you can do that with a geometry shader. If you want loops, or you want to do conditional discarding of pixels —

you want to punch holes depending on values coming in — you can do it. You can say: if my incoming value, my texture look-up value, is less than 0.5 in alpha, I want to discard that fragment. And so you punch a hole through it. It directly represents what you want to do.
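
That discard idea, as a minimal sketch (illustrative names, not from the talk):

```glsl
// Conditional discard: alpha-test-style hole punching, as described above.
uniform sampler2D tex;

void main()
{
    vec4 c = texture2D(tex, gl_TexCoord[0].st);
    if (c.a < 0.5)
        discard;        // punch a hole: this fragment is never written
    gl_FragColor = c;
}
```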

So, for example, I have this fairly long, complicated-looking thing here, which is actually not that complicated — it's an interesting little piece of a shader. You notice there are some vec3s and a greater-than, and it basically does a texture look-up, as we've seen before — a surrounding texture look-up — to get a count, and the count is neighbors.

How many neighbors do we have? It's doing it in three channels at one time. And then there's an if statement: if the vec equals three, the cell gets one, and alpha stays the same. Some of you might be thinking, I've seen this before. The key point I want to make here is about the conditionals: instead of having to do some kind of funky, tricky logic, you actually have conditionals.

You have equal. You have greater-than-or-equal and less-than-or-equal — basically, the comparison operations are built into GLSL, and you can use them to do conditional processing on whole vectors. So one of these is the equivalent of a number of if statements. I'll get back to what this shader is; I assume some of you can probably figure out where it's going. It's just an interesting use of GLSL. The last thing I want to talk about is fast.
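
For the curious, here is a hedged reconstruction of what a fragment like that might look like — three independent boards, one per color channel. The names and texel offsets are assumptions, not the slide's code.

```glsl
// Game of Life on the GPU: count neighbors for all three channels at once,
// then apply the rules with built-in vector comparisons instead of
// per-channel if statements.
uniform sampler2D board;
uniform vec2 texel;              // size of one texel in texture coordinates

void main()
{
    vec2 tc = gl_TexCoord[0].st;
    vec3 count = vec3(0.0);
    for (int y = -1; y <= 1; ++y)
        for (int x = -1; x <= 1; ++x)
            if (x != 0 || y != 0)
                count += texture2D(board, tc + vec2(x, y) * texel).rgb;

    vec3 alive = texture2D(board, tc).rgb;
    // A cell with exactly 3 neighbors becomes (or stays) alive;
    // a live cell with exactly 2 neighbors survives.
    vec3 born    = vec3(equal(count, vec3(3.0)));
    vec3 survive = alive * vec3(equal(count, vec3(2.0)));
    gl_FragColor = vec4(clamp(born + survive, 0.0, 1.0), 1.0);
}
```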

So, it's optimized, with high- and low-level compilers. We put compiler technology behind your GLSL program: we compile it, we send it to the driver vendors, they have their own hardware-specific compilers, and they compile it. We work really hard to make sure GLSL works really well — that's where we're putting our effort. So this is something to leverage: you can use more complex constructs and take advantage of that in GLSL.

The power of the GPU: the X1900, for example, is specced at 9 gigapixels per second of fill rate. This isn't doing anything complex — this is the top-line fill rate, as if you're clearing. So let's take that and put it on a 30-inch display.

What does that give you? A 30-inch display is 2560 by 1600. How many frames could you clear — how many pixels can you actually write to memory? That's about 2,400 frames per second. So there's a lot of power in that card, a lot of power to use.

Using it wisely and using it well in your application can really differentiate your application and let you do things you couldn't do on the CPU. I mean, imagine if you had to process a four-megapixel frame on the CPU and upload it across the bus every single time.

So move that to the GPU, and I think you'll see great things happen with your app. There's 41 gigabytes per second of memory bandwidth in our new laptops, and the fill rate on those is 7.6 gigapixels per second — approaching the desktop level of fill rate. So large displays are well supported; larger sets of pixels are well supported.

So let's go back to the demo machine. I want to do one fairly quick last demo — something I showed you a piece of before. You can do a lot more with the GPU, but this is an interesting use of directly programming the GPU, and it shows what the power is. It's running on a laptop and — let me see if I can get this right and it'll work. There we go. So that wasn't very interesting. That might be interesting. This is — if I can zoom in —

This is Life running entirely on the GPU. For this implementation, it's getting about 100 megapixels per second, running at 90 frames per second for the entire screen. It runs three versions of Life at the same time — each channel has its own, running simultaneously. You see red, green, and blue running independently and simultaneously.

The nugget of code I showed you before was the conditional statement that counts neighbors and determines whether a cell is alive or dead. So the point here is — you know, it's a fun little game, but it's an interesting use of the power of the GPU to do fragment-level processing, all on the GPU, that is not, so to speak, image processing.

Let's see if I can get just one more quick one here. You can see we can reseed it — again, about 100 megapixels per second running Life on the GPU. This is included in the GLSL Showpiece, which is, again, on your developer DVD; it's been on there for about a year. It's an interesting idea for a shader, and you can look at it.

You can play with it, get new ideas. But again, you can run it on the GPU and directly accomplish the task at hand — in this case, we're just playing the Game of Life. So it's kind of fun. So we'll go back to the slide machine.

So that was my introduction to GLSL: give you a feel for what you can do with it, get you thinking that it's not just about drawing polygons, drawing vertices, drawing fragments. Now I'm going to cover a bunch of things. First, I'm going to talk about what a shader is.

I'm going to break it down for people who haven't used GLSL, and I'm going to talk about how to get it set up. We gave you some tools that can already do it, but you really want to know how to write the code yourself. You can fool with the shaders by themselves, and then there's some API stuff we'll put in there. Then I'll talk about GLSL in Leopard.

We'll talk about what we have and where it's going. We'll talk a little bit about working with hardware, which is something everyone will want to think about. And then we'll talk about fourth-generation shader support, which is now going to be in Leopard for our new hardware, and in software.

So let's talk about shaders — jump right in. First, this is an image provided by modo, by Luxology; we'll zoom in on one of their renderings — obviously this is not the full rendering. You want to look at what constitutes this 3D scene. First, we're going to add a grid to it: this is the grid of vertices used to draw the 3D geometry. Remember, each point on here is a vertex.

Vertex shaders operate on vertices. For every single vertex that is sent to the GPU, the GPU is going to execute your program one time. So if there are a thousand vertices here, you get a thousand iterations of that vertex shader. If we zoom in a bit, you can see that the grid makes up the geometry, and we'll highlight a triangle. This is a primitive — and a primitive, in this case, is just a simple triangle.

Triangles are operated on by geometry shaders. The geometry shader operates at the primitive level: three vertices are fed into the geometry shader, and the geometry shader can decide to put out three vertices, decide it wants more vertices, or decide to put out no vertices at all. So the geometry shader is working at the primitive level.

If we move down and zoom in even further: the primitive in screen space — the space that's facing you when you rasterize it — is made up of pixels. So if a primitive is very oblique to you — I guess that's the right word — it may be only four or five lines of fragments, or pixels, on the screen. As it becomes more planar to you, it grows in scope.

The point I'm making is that fragment processing operates on every single pixel included in that primitive. It basically scans across, and your fragment shader operates on every single one of those pixels. You can imagine that if you have a thousand triangles stacked, it still operates on every single one — you could be blending, and it needs to get the results.

That's what you call depth complexity, when you have things stacked; in this case we're just talking about a single triangle. So those are the scales of things — vertices, primitives, fragments — going from maybe a thousand to a million in a scene. Keep that in mind when we're thinking about where you want to do work.

Here's something we're not going to talk about today: the OpenGL fixed-function pipeline. We put this slide up every year — it's out of the red book, and this is OpenGL, great. It takes time to talk about, and in reality a lot of the things in it are things you'll use, but they're not things you want to spend a lot of time explaining — you know, what a display list is. So we're going to talk about a different pipeline. We're going to talk about this pipeline.

Basically, this has some vertex attributes coming in — position, for example, and color. They go through a vertex shader, then a geometry shader, then a fragment shader; they can all reference texture data; and the output goes to the framebuffer — your screen, whatever. This is the OpenGL programmable pipeline. There are nuances to it and more depth you could go into, but the point is that GLSL and shaders let you directly manipulate data, simplify the pipeline, and concentrate on what you do best rather than learning a 200-page API spec. You learn exactly the pieces you need and code to them, leveraging the work we do, to solve your problem.

An additional piece we'll talk about a little today is transform feedback. That's a new stage with the fourth-generation shading support, and it allows you to take the output of a geometry shader — build new primitives, as we talked about — and feed them back in, a kind of feedback loop: it's transformed, and fed back in.

So, what is a vertex shader? I think we've covered a lot of this already, but really it replaces the transform, texture coordinate generation, and lighting parts of OpenGL — that first piece: calculating the light positions, transforming the vertices from 3D space to 2D space.

In many cases — if you're doing image-processing kinds of things more than vertex processing — you can just drop in a piece of existing code. You can also do setup for lighting, setup for bump mapping, setup for a geometry shader. So it's a very worthwhile piece, and in a little while I'm going to talk about why you really want to remember that you have a vertex shader to use.

Because if you forget the vertex shader and put everything in the fragment shader, sometimes the economy of scale is not very good — we'll talk about that in a few slides. The inputs are vertex data, which you call attributes, which I've covered already, and state, like uniforms: constants, which stay the same across a draw, versus the vertices themselves, which, you could say, change per vertex.

Basically, the idea is that if you have a color per vertex, you're going to get a new color for each vertex; you're going to get a vertex position; you can get a texture coordinate. And you can use these however you'd like in your vertex shader. The connection you want to make here — if you're thinking, why is this texture coord in OpenGL, and how do I get my data into my vertex shader —

how do I get basic data in? Well, all of these things — vertices, colors, texture coordinates — are basically just buckets of float data, buckets of vec4 float data. If you think about them that way, you can treat them as generic attributes. If I want to put four float numbers into my vertex shader, I send some vertices in and do a draw-arrays call.

I set up some other arrays that just hold my float data — just arrays of floats — and the vertex shader gets them, and you can address them as float data, call them whatever you want. So you don't have to think about it in the texture-coordinate mentality. You can think of it as: I have vertices, and then I have some data.

So, in this case, for a simple shader, all you're going to do is take a color value and pass it on in a varying — and varyings are sent on through to the fragment shader and interpolated. That was that interpolation; I'll talk about it in a minute.

And ftransform is a built-in function that does the standard fixed-function vertex transform. This could be a vertex shader that, for many people, does enough of what you want. You could probably get away with this for many of the things in GLSL you're going to play with. So you don't need to write a lot of code.
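
That minimal vertex shader would look something like this (a sketch; the varying's name is invented):

```glsl
// The minimal vertex shader described: pass a color through in a varying
// and do the standard fixed-function transform.
varying vec4 color;

void main()
{
    color = gl_Color;            // interpolated on the way to the fragment shader
    gl_Position = ftransform();  // built-in: exactly what fixed function does
}
```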

So, geometry shaders. The geometry shader is new for Leopard. It's supported both in software and on the new generations of hardware, and it runs after the vertices are transformed but prior to color clamping, flat shading — all of the rasterization stages. So it fits in the middle.

I originally thought a geometry shader would want to run first — before everything else — and build geometry. Well, in this case, you actually do the transform first, and then you have the option to modify, build, or feed back additional geometry. So think about it that way:

you have some geometry that you've transformed into screen space, and then you can do more things with it. If you were at the modernizing session, you saw we did some shadow volume work. You can do extrusion, you can draw additional lines, you can look at adjacencies.

Alex is going to come up at the end, talk about geometry shaders, and show you some of the things you can do with them. It's really, really interesting, and it does more than you'd think, because you're really working with basic building blocks and you can combine them in a lot of different ways. It's part of your construction set — your modern-day Tinkertoys.

And it does more than just, you know, build a bridge — you can build all kinds of things. It outputs vertices to define the primitive. In this case I have a very simple triangle geometry shader, a pass-through shader; it sets the colors.

So the colors being put out — the front color and position — are the colors and positions that came in. If you think about this: the geometry shader, instead of having a single vertex come in, has three — in this case, a triangle. So it has zero, one, and two in an array.

It reads each of them and does an EmitVertex. EmitVertex says: send this vertex on. So, geometry shaders — you don't particularly need to understand how they work if you're not using them, but in this case it's a pretty simple concept. Again, this is just a pass-through geometry shader.
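
A sketch of such a pass-through shader, written against the EXT_geometry_shader4 extension this support shipped under; the code itself is a reconstruction, not the slide's source:

```glsl
// Pass-through triangle geometry shader: read the incoming vertices
// and emit each one unchanged.
#version 120
#extension GL_EXT_geometry_shader4 : enable

void main()
{
    for (int i = 0; i < gl_VerticesIn; ++i) {  // three vertices for a triangle
        gl_FrontColor = gl_FrontColorIn[i];
        gl_Position   = gl_PositionIn[i];
        EmitVertex();                          // send this vertex on
    }
    EndPrimitive();
}
```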

Fragment shaders — we've been talking about those; I've been manipulating them in this talk, showing you the kinds of things you can do. Basically, they operate on a per-fragment basis: every pixel on the screen that you touch, your fragment shader operates on; if you draw it twice, your fragment shader operates twice. It replaces the fixed-function rasterization path. The inputs are transformed vertices and their associated data.

Those are varyings. And why is it called a varying — why does the position not just come back in unchanged? Think about the triangle we drew and the line of pixels that runs across it. Actually, let's make it a line: one end of the line is red, one end is blue, and you want to run the fragment shader. Well, fragment shaders have to touch every fragment that's going to be shown on the screen, so the inputs interpolate.

It turns out that the color coming through is automatically interpolated by the hardware: at one end you get pure blue, at the other pure red, and everywhere in between the incoming color is interpolated. If you have a constant color, everything's white — you just get white. But if you want to colorize something, you can use that vertex color coming in, and every fragment may see a different variation of it from the interpolation.

And — in this case, this is actually the wrong shader; I apologize. We've seen some fragment shaders. The idea with a fragment shader is that you put out the frag color at the end. Actually, why don't we go to the demo machine so I can show you a better fragment shader.

We'll show the one-liner I showed you before. The key here is this gl_FragColor: that frag color is the output color, and all this shader does is a texture look-up. So this is a basic fragment shader. The texture coordinate is the incoming data; the frag color is the required output. If you don't write the frag color, the output of a fragment shader is undefined. So we can go back to slides.

So, let's talk about using the API a little bit. In a utopian world, you'd all just start coding GLSL and everything would work. Well, you'll have to do a little bit of OpenGL API to get things wired together. So: extensions. In Leopard, the top three extensions — and basically the fourth one there — are supported. But you should still be checking for extensions.

OpenGL has a core set of features, and it also has a set of extensions. Make sure when you use an extension you actually check for it, so that if new hardware comes out, or something gets replaced, or we have a new version of something, you know what version you're using.

The next ones, the fourth-generation shader support, are two of the pieces you can use: if you want to use GPU shader — which we'll talk about — or geometry shader, those are the extensions you check for to make sure they're supported. Again, our plan is to have software support pretty much across the board for these, and hardware support on the GPUs that are actually able to do geometry shaders. So those are the extensions.

The simple thing here: the three last steps are basically what you do to wire things up. You create your shaders and load the code. You create a program object, which is just a container, and you attach the shaders to that program object. The program object basically says: I want to operate with this vertex shader and this fragment shader; I put them in my program object, and I bind it to make it current. You can imagine how, say, Core Image would use this.

Core Image has different effects and different shaders; it would build them all up and bind whichever one you've selected as the one to make active now. When I said "use" in the example earlier, that was the same thing — that was using the shader.

What you can do with this model: say you have a single vertex shader that works with all your fragment shaders. You have that one vertex shader and a bunch of fragment shaders, and you can attach that one vertex shader to multiple programs — they can cross over; they're discrete objects. And then you link, like you would a normal program, you use it, and you draw with it.

Creating the shaders: glCreateShader creates a shader object — standard OpenGL API. It can be a vertex shader, a fragment shader, or a geometry shader. You supply the source code — there's a pointer there that basically says, this is my text source code — and after that you can call compile.

We have logs, so you can check for errors. Actually, let's jump back to the demo machine for a second; let me put an error in to show you what it does. If I just do that, you notice we highlight a red line. The simple way we do that is we get the error log. The error log says "ec4" instead of "vec4":

there's no matching overloaded function; it tells me a line number; and it says the assignment can't convert. So it puts an error up. And if I change that ec4 back to vec4, which it recognizes, the log is empty. Ok, back to slides.

The part I don't show you here is that it's a simple call to get the log — it's just text — and we use it, for example, in our sample code to give you that interactive editing. You can fetch it in your program and make sure the compile succeeded with no errors, or print it to the console, whatever you'd like. Creating a program object is similar: create the program object, attach the two shaders you're going to use, and you're done. Let's move on to linking.
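
Pulled together, the create/compile/log flow looks something like this — a hedged sketch using the standard OpenGL 2.0 entry points, with error handling trimmed to the essentials:

```c
#include <OpenGL/gl.h>
#include <stdio.h>

/* Create a shader object, hand it the text source, compile, and
 * print the info log on failure -- the same log shown in the demo. */
GLuint load_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);   /* GL_VERTEX_SHADER, etc. */
    glShaderSource(shader, 1, &src, NULL);  /* "this is my text source code" */
    glCompileShader(shader);

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "compile failed: %s\n", log);
    }
    return shader;
}
```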

Link program: you have a program object, you've attached your shaders; you link it, you use your program object, you draw something, and then you can use null to turn GLSL off — that's it. You obviously have to have some geometry, some textures, those kinds of things, but that's pretty standard OpenGL, and there are lots of examples.
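
Continuing the sketch above, the container side of the wiring (again the OpenGL 2.0 entry points; vs and fs come from load_shader):

```c
/* Attach the shaders to a container program object and link it. */
GLuint make_program(GLuint vs, GLuint fs)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    return program;
}

/* At draw time:
 *   glUseProgram(program);    -- make it current
 *   ... draw your geometry ...
 *   glUseProgram(0);          -- "null" turns GLSL back off
 */
```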

You can tear apart our samples and use whatever you want out of them if that gives you a basis — or you might already have a lot of setup to do it yourself. In any case, my point is that it's not that hard to get from "I haven't used GLSL before" to using GLSL.

We talked about constants. There's a uniform API that lets you set constants, and this is entirely new — remember, we have a compiler, a linker, and symbols on the back end of OpenGL now. So how do you tie in stuff on the front end? Say I want to put a three into foo — how do I get foo? You call get-uniform-location; my uniform is foo, and it returns what is basically an opaque value.

You don't want to do arithmetic on that value — there are rules about how you can use it with arrays and those kinds of things — but in this case, basically treat it as opaque, and then you use glUniform4fv, in this case.

This is going to set a uniform to the value you have in f — f is a vec4 there — using the glUniform API. So this is loading those four values into the uniform, which again gets sent to your shader.
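
In code, that flow is roughly the following; the uniform name "adjust" is borrowed from the demo, and the values are illustrative:

```c
/* Look the uniform's symbol up after linking, then set it; glUniform*
 * calls apply to the currently bound program. */
void set_adjust(GLuint program)
{
    GLfloat f[4] = { 0.5f, 5.0f, 1.0f, 1.0f };            /* illustrative values */
    GLint loc = glGetUniformLocation(program, "adjust");  /* opaque handle */

    glUseProgram(program);
    glUniform4fv(loc, 1, f);   /* load all four floats into a vec4 uniform */
}
```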

Remember, this is a constants API. If you have user-time interaction — in the demo, I was dragging those sliders around a few times a second — constants work great for that. But if you have values that change per vertex, you don't want a uniform.

For example, if you're doing lighting, don't put per-vertex lighting data in uniforms; it should be in attributes, the things that change every vertex. You have to understand there's a — I don't want to say pipe size, but a rate of change that's assumed here. You can do a lot of things with uniforms, and they're very convenient, but don't use them for things that change on a per-vertex basis.

So, that's brushing across the top of GLSL, as far as getting up and going. Where do you turn for more on the built-in functions, the built-in variables, attributes, varyings, uniforms? The OpenGL Shading Language book — the orange book — is a really good book, a kind of companion to the earlier red book and blue book. It has a reference in it, which is good for people who already know GLSL, but it also covers a lot of the shading language and has a ton of examples.

A lot of the GLSL Showpiece examples that you have on your developer DVD come from this book, and they're up and running for you so you can play with them — they have a good interface and all that kind of stuff. So I suggest that if you're interested in GLSL, you go out and get the orange book.

I also always suggest having the spec on hand — download it, make sure you have it — because in the end the orange book could have a typo, or we could have changed the spec, and it's really important to have that reference with you.

So let's talk about working with hardware. One thing that becomes really apparent once you start working with GLSL — let me think of an example that won't offend anyone in the crowd — ok, I've got it: the old Wall Street laptop, with maybe a Rage 128, maybe a Rage II processor.

It's not the same as today's GPUs. If you have some really, really crufty old hardware, you can still work in software; but understand that you need to know what the hardware is capable of, because it really matters for what the hardware support for GLSL is.

So here are five ways to stay in hardware and make your life happy. Use modern hardware — that gets back to the old brown PowerBook; at this point I don't even know if it was a brown laptop. The new hardware coming out now is hugely better than what we've had in generations past.

The brand-new hardware in our laptops is now capable of the fourth-generation shading language — the hardware support is amazing these days. Vertex/fragment-program-capable hardware is required to get hardware GLSL support: if it has vertex and fragment program support, it will have basic GLSL support.

Improved support starts with the Radeon X1600 and later. What that means is that every Intel iMac and every Intel MacBook Pro has much improved GLSL support — there are some things they still can't do, but they work really well. The same for our desktops: the desktops with the NVIDIA 7300, the ATI X1900, or the Quadro all have very good GLSL support. NVIDIA 6600 and later, again, means all Intel-based NVIDIA products have good support. The exception is the MacBook, which has reasonable support but not at the same level as the Pro products or the iMac.

The fourth-generation shading support, which we talked about a little, is available in hardware on our newest laptops — the GeForce 8600M GT — and in Leopard you'll see we have hardware support for geometry shaders, transform feedback, GPU shader, and bindable uniforms, which make up the core of the fourth-generation shading extensions.

I want to emphasize: test your shaders on the target hardware. This is not something that works everywhere exactly the same. If you have a shader that's reasonably complicated, run it and verify that it works on the hardware you're targeting your product at.

One thing you can do here is determine where your shader is executing. It may not be obvious — we have a very good software fallback mechanism, and on a high-end CPU, if your vertex shader falls back to software — well, let's take a little sidetrack here.

Alex was working on some demos, and I walked into his office while one was running, and I asked: is that running on hardware? You can't tell, because he's working the vertex and geometry shader side of things, he's not sending a ton of vertices down, and a geometry shader runs really fast on a fast CPU — so you really don't know if it's on hardware or software.

The way you can tell is with queries we've added — they've been in for a while — that let you determine whether your shaders, and OpenGL generally, are actually running on hardware or in software. There's a fragment processing query — is it running on the GPU? — and a vertex processing query. They basically return yes or no — true or false; sorry, one or zero — and tell you whether the work actually runs on the GPU.
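
A sketch of such a check — to my knowledge these queries are exposed as CGL context parameters, as below, but treat the exact constant names as an assumption:

```c
#include <OpenGL/OpenGL.h>
#include <stdio.h>

/* Ask the context whether vertex and fragment processing currently
 * run on the GPU (1) or have fallen back to software (0). */
void report_processing(CGLContextObj ctx)
{
    GLint gpuVertex = 0, gpuFragment = 0;
    CGLGetParameter(ctx, kCGLCPGPUVertexProcessing,   &gpuVertex);
    CGLGetParameter(ctx, kCGLCPGPUFragmentProcessing, &gpuFragment);
    printf("vertex on GPU: %d, fragment on GPU: %d\n", gpuVertex, gpuFragment);
}
```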

So: you compile, link, and bind your shader, you call this, and it tells you where it's going to execute. Next — I highlighted this earlier — think in vectors. You want to widen your thinking out and try to put your problem into vector space. If you can do more work at one time, that's great. Life is a good example:

while it was drawing white on the board, it was actually doing three calculations at a time. The color blending does three calculations at a time, too. For the color blending or the edge filter, I could have created three uniforms and controlled each independently — labeled one base, one offset, one magnitude, or something like that. Instead I used a single vector, so we pull those in as one piece, which kind of saves real estate on the CPU side; but the big thing is, if you can execute on wide data, you should.

The GPU is a vector processor, so combine operations — float or int. It has these vector components, and it does the operation at full vector width whether you put something in every component or not, whether you care or not. So if you're calculating one value, the other components are being calculated anyway and just discarded.

So instead, use a vec-something or ivec-something — vec2, vec3, vec4. For example, the first version here is just doing some scalar math — absolutely valid code, no problem with it. But you could just as easily set up some vectors, add two vectors together, and get the same values. That's something to keep in mind when you need to decrease the number of instructions.
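
An illustrative before-and-after of that point (invented names, not the slide's code):

```glsl
// Scalar vs. vector: the GPU pads scalar math out to vector width anyway.
uniform vec3 p, q;

void main()
{
    // Scalar form: three separate adds, each run at full vector width.
    float a = p.x + q.x;
    float b = p.y + q.y;
    float c = p.z + q.z;

    // Vector form: the same three sums in a single vector add.
    vec3 v = p + q;

    gl_FragColor = vec4(v, 1.0) + vec4(a, b, c, 0.0);  // use both results
}
```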

With the lower-end GPUs we talked about, the number of instructions can be limited, so you want a compact program. And if you're in a fragment program — remember when I talked about processing power, fill rate, and bandwidth — every single time you go out to memory, pull something back in, and send it back out to GPU memory, that's processing.

So you're throwing away bandwidth. You have a high top-line bandwidth — so why doesn't it run fast? Well, if you do three adds instead of one add, and you do it on a million pixels, that's two million more adds than you would have done if you'd just combined them into a vector.

So use vertex processing when you can. It's easy to do things in fragment programs, because they operate on a per-pixel basis — you can think, oh, it's just a pixel, I can change the color, great, I can see what's going on. But vertex programs are really your friend. If you're calculating something in a fragment program that is constant across a set of fragments — or all your fragments — move that calculation into the vertex program.

Do whatever you can to get that calculation out of the fragment shader. Say you have a linear interpolation between two values: think about it and get it out of the fragment — the hardware will interpolate a varying for you, so you do not want to be linearly interpolating in your fragment program. Consider Quake 3. We're going back a little ways, but Quake 3 has about 10,000 vertices per frame.

Not too bad — that's kind of a lot. What does a Cinema Display have? A Cinema Display has four million pixels. Assume your program has a little bit of overdraw, so you're blending a little — the Mac OS X desktop does blending, and blending is overdraw. Touch each pixel twice and that's 50 percent overdraw: you're now touching six million fragments to draw that 30-inch display. So if you're drawing to the screen, you're drawing six million fragments.

So how do you think about that? If I calculate some value A in the fragment program, when it's constant and I could have calculated it in the vertex program, what's the relative scale of the work I'm doing? The little dot on the slide is the relative workload of your 10,000 vertices — five hundredths of an inch. The big shape that doesn't fit on the screen is your relative fragment load.

New architectures have unified shader models — unified execution units; work is sent to the execution units wherever it's needed. So if you're calculating that important value on every single fragment, and it's actually constant, you're doing — it was 4,000 to one or something like that, as I calculated it in this case — 4,000-to-one extra calculations. You want to move that work into the vertex processor and do it only a few times. Think about your problem in that way.

If you're calculating some kind of gradient, maybe you can make 15 vertices, or 15 triangles, that cover the gradient, rather than doing a precise calculation in the fragment program — because when you zoom in, you have a huge number of fragments calculating the same value, and you're burning fragment program cycles. Using a mesh — animating meshes, using meshes to control geometry — can really be a win compared to direct fragment manipulation, if it fits your problem.

Sometimes you really want that reflection factor, that really nice sheen on a surface, and you really want it per fragment — then absolutely, do it in the fragment shader. But think about it, because you don't want to always be in the fragment shader; the fragment shader is not the bucket for every single calculation. Remember the vertex shader — remember it's there, and it operates on far fewer elements.

A really good example of this is a Core Image kind of application, where you have all of four vertices total and a 30-inch display's worth of zoomed-in image. Aperture has this huge display: you can do a calculation four million times, or you can do it four times. That's a million-to-one kind of ratio. So think about that when you're coding: if it's constant, move it to the vertex side.

Geometry shaders — the note here is that the geometry shader operates per primitive. Every primitive — every set of three vertices making a triangle, for example — runs the geometry shader. So the vertex shader might run 10,000 times for 10,000 vertices, the geometry shader might operate on 4,000 triangles, and the fragment shader operates on a million pixels.

One more thing on that: with the vertex shader, you control the amount of data going in — how many triangles you have. With the fragment shader, if the user drags your window bigger, your fragment load goes way up; make it smaller, and it goes down. So understand that you control one, and the user, more or less, controls the other. So let's talk about branching.

Branch judiciously. Branching is not supported well on every GPU. I'm going to go into some details here for people who have been using GLSL; if you're new, just breathe for a little bit, and we'll get back to more interesting things for the new folks in a second.

Dynamic branching: the GeForce 5200, the ATI 9600, and the Intel integrated processors have limited branching support — they don't really have good branching support at all. So you've got to be careful using shaders that branch on those processors. All the iMacs, the MacBook Pros, and any Intel-based NVIDIA product have very good branching support.

There it's easy to stay in hardware. Still, on everything but our newest laptops you may have loop iteration constraints — 255 is a good number; you don't really want to loop more than 255 times. The newest stuff really is getting wide open these days.

If you get one of the new laptops, you may not see a huge improvement in drawing the screen, but you'll see a massive improvement in the length of shaders, the branching you can do, and the kinds of instructions supported — it's an amazingly more powerful programmable piece of hardware, with its unified execution model. That's the axis these things are progressing on, and it takes away a lot of these constraints.

Not-so-dynamic branching — this is the limited-support case. What do we want here? Something like fewer than ten iterations on a loop, and we'll unroll it. You don't want to do 100 iterations on a processor that doesn't have a very large instruction store anyway, because we're going to unroll the loop on these processors — they don't support dynamic branching. The key is to use supported loop forms and limit the number of iterations.

Those two forms on the slide are basic loop forms that do exactly what you'd think: you have an iterator, you increment the iterator, and you don't do any other calculation in there. So you can loop as a programming construct, but on the back end we're going to unroll it for you.

So: the loop iterator cannot change anywhere in the loop body. The condition cannot change over the loop. The starting and ending iteration values should be constant ints — use constants in there: start from zero, go to ten, whatever it is — zero, less than ten. No negation, no swizzles — swizzling is rearranging the vector components.

So you shouldn't be taking X and Y and swizzling them to determine when your loop ends. And avoid complex loop conditions. For example, the first form on the slide can be simplified into the second. The first one has a bail-out case in the loop condition:

the loop iterates five times, and I want to bail out early if some value hits a limit. This is, I believe, from the Mandelbrot shader in the GLSL Showpiece, and you can make it run in hardware simply by moving to the second form. The second form says: I iterate five times; we unroll those five iterations and put five if statements in. So you don't want that complex loop condition.
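
Reconstructed, the contrast looks something like this (not the slide's actual code; the iteration math is illustrative):

```glsl
uniform float limit;

void main()
{
    float v = gl_TexCoord[0].s;
    float w = v;

    // Complex loop condition: a bail-out test folded into the loop.
    // Hardware that has to unroll the loop can't handle this form.
    for (int i = 0; i < 5 && v < limit; ++i)
        v = v * v + 0.25;

    // Supported form: constant trip count, test moved into the body.
    // This unrolls into five straight-line if statements.
    for (int i = 0; i < 5; ++i)
        if (w < limit)
            w = w * w + 0.25;

    gl_FragColor = vec4(v, w, 0.0, 1.0);  // both compute the same result
}
```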

Fragment shader branching: the less capable GPUs are the same set — the older NVIDIA products, the PowerPC-generation ATI products, and the MacBooks. Fragment shader branching is not well supported there. One of the issues is that they only have, for example, 96 instructions total — 32 texture instructions and 64 math instructions. So you don't have many instructions to work with. Say your loop body is six instructions — that gives you your ten iterations; but if you have 15 other instructions on top, it won't fit.

So the loop unrolling can blow both budgets. Additionally, sine, cosine, tangent — those kinds of operations can expand; some instructions expand into a larger set of instructions, so with one sin expanded ten times in a shader, you're basically out of instructions. Remember that when you're working with less capable hardware.

Conditionals: roughly the same set of hardware. The idea here is that basically everything we've shipped on an Intel Mac — and this is not something we designed; it's where the state of the hardware was when we did the transition, which made a really nice spot for a transition — the Intel-era hardware, on both the ATI and NVIDIA side, does fairly well with GLSL.

For conditionals: avoid return, break, and continue inside conditionals. And don't use if statements to select between calls to the same function with different arguments. This is be-nice-to-your-compiler time — it's trying to be nice to you. With the first form, when we go through the optimization step, we see two function calls and have to do a lot more work to figure out that all you're doing is selecting a value and calling one function.

So do the second one: select the value you want, then call the function with it. The second form is really easy to unroll and really easy to work with, especially on this hardware. On the latest-generation hardware, the first one is fine — it just works.
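
An illustrative reconstruction of that advice (the shade helper and all names are invented):

```glsl
uniform bool useA;
uniform vec4 colorA, colorB;

vec4 shade(vec4 c) { return c * c.a; }   // stand-in helper

void main()
{
    vec4 result;

    // First form: the same function called from both branches --
    // extra work for the optimizer on limited hardware.
    if (useA)
        result = shade(colorA);
    else
        result = shade(colorB);

    // Second form: select the argument first, then make one call.
    vec4 c = useA ? colorA : colorB;
    result = shade(c);

    gl_FragColor = result;
}
```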

So last thing is in working with hardware, where is the cool shader stuff? Where is that new kind of cool stuff? So advanced shader support is on, We have differencing functions. We have some shader texture LOD. We have noise functions and we have the final, the fourth generation shading and support.

The differencing functions are on, again, the same set of newer hardware; it turns out the 5200 in this case has them, so the bar drops down a little bit. All the new Intel, ATI and NVIDIA hardware has the differencing functions, and basically the same is true for shader texture LOD. Noise, on the other hand, is software only.

And then for noise support: right now we have noise in software only. I believe by the time Leopard ships, and I don't want to promise this, we'll have a hardware noise implementation, because we have enough instruction space to do it; microcode is not the right word, but it's a kind of code insertion: you call noise and we put in a block of code that does the equivalent of our software noise, on the newest hardware, the 8600. Everything else is going to drop to software if you try to do noise.

What you do here instead, which is a standard technique shown even in the Orange Book, is use a noise texture: a 3D texture of pregenerated noise. You do a lookup in it based on some values and it gives you a good noise function. Built-in noise can get you a little more depth, but the texture approach works pretty well.
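A minimal sketch of the technique, assuming a sampler named noiseTexture that the application has filled with pregenerated random values at init time:

    // Fragment shader: pseudo-noise from a pregenerated 3D texture.
    uniform sampler3D noiseTexture;

    void main()
    {
        // One lookup gives a repeatable noise value for this position;
        // sampling at a couple of scales and summing gives richer noise.
        vec3 n = texture3D(noiseTexture, gl_TexCoord[0].stp).rgb;
        gl_FragColor = vec4(n, 1.0);
    }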

And finally, the fourth generation shading support: in hardware on the 8600M GT and later, and in software we're looking to provide it everywhere. Basically what happens is, if you don't have hardware geometry shader support, your vertex processing falls back to the CPU, which is not a bad thing depending on what you're doing.

I mean, if you're the latest generation 3D game it may not work for you, but if you're doing an app and you have a little CPU headroom, and we have multi-cores now, it may work really well for you. Again, it's an economy of scale: few vertices, lots of fragments. The vertex and geometry stages aren't executing much, so the work fits easily on that hardware or on the CPU, and that fallback happens automatically.

So that's what's new in Leopard. We covered this last year and some of it has changed, but I just want to add a few things. Fourth generation shading support, we've already talked about that: geometry shaders, GPU shader, bindable uniforms, transform feedback. We're improving OpenGL Profiler to give you shader editing, and we're adding a new Shader Builder.

Shading language one point two. Basically all it does is add some new syntax and new options to the shading language itself. There's a new specification out; you can look at the details in it. Some of the things we get: invariance, so you can mix fixed function and programmable shading together; centroid varyings for multisample; non-square matrices; and additional built-in functions.

Other things: we've promoted arrays to first class objects. We've added a length method so you can query an array's length and iterate over it. There's automatic conversion from int to float. So that's something one point two has to make things a little more convenient for you. And finally uniform initializers, which give you the ability to set some good default values for your uniforms to start with, instead of starting from scratch.
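A small sketch pulling a few of those one point two conveniences together; the names and numbers are illustrative, not from the session:

    #version 120
    uniform float gain = 0.5;                         // uniform initializer

    void main()
    {
        // Arrays are first class: constructed and queried for length.
        float weights[3] = float[3](0.25, 0.5, 0.25);
        float sum = 0.0;
        for (int i = 0; i < weights.length(); ++i)
            sum += weights[i] * gain;
        gl_FragColor = vec4(sum + 1);                 // int 1 converts to float
    }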

So, Shader Builder. You do not have it in your beta, but before Leopard ships we'll be seeding it, and when Leopard ships you'll have a new Shader Builder. Basically it's a combination of our editor sample and the old Shader Builder, and it works with both programs and shaders.

It allows automatic compilation as you edit, as you've seen. It has a plug-in architecture for geometry; we're going to make it a little extensible. It won't be the deepest architecture, but it allows you to bring in some of your own models and feed some of your own information into the vertex pipeline.

It's going to have much improved texture handling for cube maps, 3D textures, those kinds of things, and it'll be coming soon. At the end I'll put up a seed email address; if you're interested in getting on our OpenGL seed you can do that, and we'll be seeding it to you at some point.

So I'm going to jump into a quick demo here and show you some debugging with the GLSL Showpiece, which is what I showed you before. I'm going to open OpenGL Profiler and show you how you can attach and debug a shader. So we've attached, and what I'm going to do is bring up the breakpoint view.

We'll talk about this tool in more depth in our session on tuning this afternoon. I'm going to break on the flush, and now you see it's stopped. I have a backtrace, and I also have a resources view. The resources view contains information about shaders.

So we click on that and you see a number of shader objects. This is the program object that's actually drawing the teapot, and you can see it has the uniforms, some of the built-in uniforms, and some attributes, all of which we've talked about during the day, along with their current values. You can also see the fragment shader; it shows you the code you have.

So let's say I want to see how the texture coordinates are laid out on this teapot. What I can do here, instead of using the color value I computed, is substitute in gl_TexCoord. I hit compile, it says compile succeeded, and then I step through a few frames and see what it shows me.

So in this case it ran that shader and used the texture coordinates as the output color. You can use this method to put values into the output color and see what your shader is doing. You can turn things on and off while you're running and debug your program on the fly, so you don't always have to go out to your editor, change some things, and go back into your program. That's the idea here.
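The substitution amounts to something like this; the "normal path" line stands in for whatever the shader originally wrote:

    void main()
    {
        // gl_FragColor = shadedColor;                     // normal path
        gl_FragColor = vec4(gl_TexCoord[0].st, 0.0, 1.0);  // show texcoords
    }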

You'd use Profiler to debug your shaders without having to open up and recompile your app all the time. We'll talk more about the nuances of using Profiler, give you a tour of it and talk about all the windows and what they actually do, in the tuning session this afternoon. So if you want to learn more about how to use the tool, we'll cover it there. Back to slides, please.

And finally, I'm going to finish up with a little bit on fourth generation shading support. I don't want to go into too much depth; I'm not going to try to teach it all today, but I'll give you an idea of what it is. A lot of this I've already touched on, so we'll review some of it.

So, geometry shaders create new primitives from existing ones. The input can be a single point, line or triangle. You also have adjacency information, information about what's beside the primitive, so you can work along a line strip. Alex will show you some things about that in a few minutes.

The output type is fixed; you have to decide what it is. Going in, you have to say this geometry shader has this kind of output: I'm outputting triangles, for example. So geometry shaders are designed for a specific kind of output; they don't change their output type on the fly. It's points, line strips or triangle strips, and you can output multiple primitives. So you can have one triangle come in and tessellate it, for example; you can build out multiple primitives and smooth your surface using a geometry shader.

It's executed after the vertex shader and before the fragment shader, like we talked about. Typical uses: point sprites, where you take a point and generate geometry from it; tessellation; extruding shadow volumes, which we showed yesterday; and single pass rendering to a cube map. The idea here is to move the work to the GPU.

The two functions you need to know in your shader are very simple: EmitVertex and EndPrimitive. If you're going to output multiple primitives you have to call EndPrimitive after each one, and EmitVertex for each vertex. So the pass-through shader just goes EmitVertex, EmitVertex, EmitVertex, and the same triangle that came in goes out.
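A minimal pass-through sketch, written against the EXT_geometry_shader4 syntax of the time; the input and output primitive types are configured from the API at link time, not in the shader:

    #version 120
    #extension GL_EXT_geometry_shader4 : enable

    void main()
    {
        // Copy each incoming vertex straight through.
        for (int i = 0; i < gl_VerticesIn; i++) {
            gl_Position = gl_PositionIn[i];
            EmitVertex();
        }
        EndPrimitive();   // end this primitive; repeat if emitting more
    }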

That's geometry shaders. You'll be able to play with this; my recommendation is to wait for the next seed, because we're going to clean some things up, but it is on, in software, in the beta you have right now. GPU shader, we haven't talked about this a lot.

It's really a grab bag of a lot of different stuff. It allows you to do integer textures, so you can put a big array of integers into a texture and access them as integers rather than floating point values. I put up the long list of the different types it supports.

It's a huge spec, 38 pages. I don't know if that's huge, but 38 pages is a pretty long spec, and it has a lot of different stuff in it. So we're going to implement it in a roll-out fashion. In Leopard you'll see most of the integer operations, just the basic stuff, supported, and as developers need it, as people request it and we see the utility, we may roll out more pieces of it.

We'll have dev notes to tell you what's supported and what's not, but this is one of those things where some of the nuances may not be particularly useful to a large audience. So my request: if you have interest in GPU shader, or in a particular part of it, just send us an email and we'll take that into consideration in our implementation plans. We want to get it out there for you, get you using it, get some feedback, and then continue to improve it.

Transform feedback, we talked about that. It's pretty simple. Technically it does not require a geometry shader, I should be clear about that. Either the primitives the geometry shader outputs, or, if you don't have a geometry shader, the set of vertices that go out, get captured into a buffer and can be used again. So basically it writes to a buffer object.

You can use it as a vertex buffer object, or you can use a different kind of buffer object. So you can write to a buffer and use it in a lot of different ways. You can recover data, for example; you can just output and read the data back.

An example of this on the C side of things: in this case I'm going to say I'm not rasterizing, so I turn off the fragment side of things. I set up a buffer object and say, hey, here's the offset into it.

I begin transform feedback, I draw some stuff, and then I say end transform feedback. So basically I short-circuit the pipeline, feed it back and get the data out. I rendered some geometry through the vertex and geometry shaders and recovered the results; if I tessellated, I recovered more triangles than I put in in the first place. Transform feedback is a fairly simple concept.
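A hedged C sketch of that sequence, using the EXT_transform_feedback entry points; the buffer object (tfBuffer) and vertex count are assumed to be set up elsewhere:

    /* Not rasterizing: turn off the fragment side entirely. */
    glEnable(GL_RASTERIZER_DISCARD_EXT);

    /* Attach the buffer object that will receive the vertices. */
    glBindBufferOffsetEXT(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, tfBuffer, 0);

    glBeginTransformFeedbackEXT(GL_TRIANGLES);   /* captured primitive type */
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  /* draw some stuff */
    glEndTransformFeedbackEXT();

    glDisable(GL_RASTERIZER_DISCARD_EXT);        /* resume normal drawing */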

And finally, bindable uniforms. Bindable uniforms, really simply, allow sets of uniforms: an object of uniforms. Let's say you have a thousand uniforms in a really complex shader, and you have different sets of uniforms and different shaders. This allows you to put those uniforms into an object and bind that object in different places. And since you can bind one set of uniforms to different shaders, if people are familiar with the ARB program interface, it gives you the program environment kind of thing: one set of uniforms can affect multiple shaders.

Basically, UniformBufferEXT is what you call to set it up, and it's only valid after you link. So you link your program, and then you call UniformBufferEXT to attach the set of bindable uniforms. What may be a little confusing is that once you bind it and modify values, you're modifying the buffer live; when you unbind it, those changes go with it. So you bind your set, modify it, and then you can unbind it and bind another one.
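A sketch of that setup under EXT_bindable_uniform; the shader would declare something like "bindable uniform vec4 lights[256];", and the names here (program, lightData) are illustrative:

    /* After linking, find the bindable uniform and size a buffer for it. */
    GLint  loc  = glGetUniformLocation(program, "lights");
    GLint  size = glGetUniformBufferSizeEXT(program, loc);

    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_UNIFORM_BUFFER_EXT, buf);
    glBufferData(GL_UNIFORM_BUFFER_EXT, size, lightData, GL_STATIC_DRAW);

    /* Attach the buffer to the uniform; binding the same buffer to
       several programs shares one set of values between them. */
    glUniformBufferEXT(program, loc, buf);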

So, I think that's all I want to talk about. I'm going to bring Alex up and have him demo some of the things you can do with geometry shaders. Geometry shaders are really interesting because it's not always about points, lines and triangles; you can do a lot of different things, and he'll show you some of them in his demos.

Thanks, Geoff. Ok, my mic's on now. I'm going to show a few different GLSL demos today focusing on two of the new extensions: geometry shaders and transform feedback. I think two-thirds of you are new to GLSL, so today is probably the first time you've ever heard about geometry shaders. So before we jump into the demos, let's take a look at the code and review what it looks like.

So here I have two really simple shaders. Can we get the demo machine, please? Ok. Two really simple shaders. On top, a vertex shader. This is doing the simplest possible transform of the incoming vertex. Remember, vertex shaders work on one vertex at a time, and they always output one vertex. This is the standard transform: modelview-projection times the incoming vertex position.

Geometry shaders, there are really three key points to keep in mind; I'll just recap what Geoff said. The first point is that they work on specific types of inputs and create specific types of outputs. When you compile and link the program object together you have to tell it what type of input the geometry shader works on and what type of output it creates. It might be working with points or lines or triangles, and it will output points or lines or triangles. There are also adjacency inputs, lines with adjacency and triangles with adjacency; I'll touch on that a little later in the demo.

The second point is that in the geometry shader you're working on arrays of inputs. The vertex shader output a single position; from the geometry shader's viewpoint, that thing is an array, gl_PositionIn. This particular shader works with points, so I only have to touch index zero.

If this were lines I'd have zero and one; if it were triangles I'd have zero, one and two. And you can do math between those to find midpoints or any other kind of weighted average you want. The third point is this new function here.

You have to explicitly emit vertices. In a vertex shader you're always implicitly working on one vertex: you set whatever attributes you care about, the color, the texture coordinate, the position, and when your shader finishes executing, the vertex comes out. In a geometry shader, same thing, you set up the attributes you care about. Here I'm setting texture coordinates and a position, then I explicitly emit a vertex, change the attributes, emit another vertex, change the attributes, and so on.

This shader takes one point in and projects a screen-aligned quad around it, so it transforms points into textured billboards, basically. There are some constants here, and it calculates the top left and the lower right corners and so on. The texture coordinates are hard-coded to go from zero to one, and I end up with a textured sprite on the screen. A really simple geometry shader example, sketched below. Ok, let's go to a couple of demos.
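A hedged reconstruction of that kind of shader (input type points, output type triangle strip, set from the API); the half-size constant is an assumption, not the demo's actual value:

    #version 120
    #extension GL_EXT_geometry_shader4 : enable

    const float halfSize = 0.1;          // assumed sprite half-width

    void main()
    {
        vec4 p = gl_PositionIn[0];       // the one incoming point

        // Four corners of a screen-aligned quad, as a triangle strip,
        // with hard-coded 0..1 texture coordinates.
        gl_TexCoord[0] = vec4(0.0, 0.0, 0.0, 1.0);
        gl_Position = p + vec4(-halfSize, -halfSize, 0.0, 0.0) * p.w;
        EmitVertex();

        gl_TexCoord[0] = vec4(1.0, 0.0, 0.0, 1.0);
        gl_Position = p + vec4( halfSize, -halfSize, 0.0, 0.0) * p.w;
        EmitVertex();

        gl_TexCoord[0] = vec4(0.0, 1.0, 0.0, 1.0);
        gl_Position = p + vec4(-halfSize,  halfSize, 0.0, 0.0) * p.w;
        EmitVertex();

        gl_TexCoord[0] = vec4(1.0, 1.0, 0.0, 1.0);
        gl_Position = p + vec4( halfSize,  halfSize, 0.0, 0.0) * p.w;
        EmitVertex();

        EndPrimitive();
    }

Scaling the offsets by p.w keeps the sprite a constant screen size after the perspective divide; that's one reasonable choice, not necessarily the one the demo made.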

( Silence )

So, it's everybody's favorite teapot. Everyone's seen this before. What's new here? This is the natural progression of the teapot: it has to emit some steam. So what's going on here, and what does it have to do with geometry shaders? Well, the way this works is I'm drawing it twice.

First it's just a regular teapot, nothing special. The second time, though, I draw it again as points, and those points are fed into a geometry shader. Inside, it's a slightly more complicated version of what I just showed you in code, and each point can conditionally emit zero, one, or more screen-aligned quads.

Right? And these quads are part of a particle system implemented inside the geometry shader, which has a single input, time. As time progresses I make the particles go up, get bigger, change color and rotate a little. So it's just a simple particle system. There's a texture applied, so the fragment shader textures each quad, and the result is blended into the scene like this. Now, you'll notice that only the spout of the teapot is emitting steam here.

Well, it's really simple to do, because you're working on the incoming vertex position. In the shader I can simply compare: if the position is inside this area, emit something; otherwise, don't. You can emit zero vertices out if you want to. So I'll let this go; over time maybe some more parts of the teapot start to emit.
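In sketch form, the conditional emission might look like this; the bounding-box uniforms and the pass-through varying are assumptions, not the demo's actual code:

    #version 120
    #extension GL_EXT_geometry_shader4 : enable

    uniform vec3 regionMin;             // assumed bounds of the spout
    uniform vec3 regionMax;
    varying in vec3 objectPos[];        // passed through from the vertex shader

    void main()
    {
        vec3 p = objectPos[0];
        if (all(greaterThan(p, regionMin)) && all(lessThan(p, regionMax))) {
            gl_Position = gl_PositionIn[0];
            EmitVertex();               // emit (here just the point itself)
            EndPrimitive();
        }
        // else: emit nothing at all, which is perfectly legal
    }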

( Silence )

And so there you go: a simple example of points in, billboarded quads out, with the particles in the geometry shader. Ok, another demo, with a slightly different kind of input. I call this the paper doll demo. What is it going to look like? Something like this.

So what's going on here? This is adaptive tessellation of an input mesh in a geometry shader. In this case the input is lines. On the left I have the original mesh, drawn with fixed function. It has about 30 points in the line loop; really simple, the simplest kind of mesh I could think of.

And on the right, the same thing drawn with a geometry shader. So how does this work? This time the input type is lines with adjacency information. That means the working unit in the geometry shader is one line segment, say the segment atop this guy's head, but you also have access to the points immediately adjacent on either side. So in the array that's four points, indices zero, one, two and three, I can work with.

With those four points accessible I can calculate a weighted spline that runs through them and emit as many subdivisions as I want, calculating the point at each subdivision; there's a sketch of this below. So that's pretty neat. What else can I do with this? Well, similar to how the teapot worked conditionally at certain positions, there's another new built-in variable in geometry shaders called primitive ID. Every primitive coming in has a unique number associated with it.

So if I start drawing this mesh at the guy's neck, these might be line segments one, two, three and so on. That's another variable I can key on in the shader and conditionally apply effects with, regardless of position, unlike the teapot example.

I can rotate the mesh and do whatever I want; it's still line segment one. So maybe I'll displace just the segments on this guy's head to grow some hair on him. And all of this runs live every time you draw the mesh, so if I animate the original mesh, the result is tessellated appropriately every single time I draw it.
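Here is the promised sketch of that adjacency-based subdivision (input type lines with adjacency, output type line strip). Catmull-Rom is one reasonable choice of "weighted spline", and the segment count is made up; the demo's actual weighting isn't shown in the session:

    #version 120
    #extension GL_EXT_geometry_shader4 : enable

    const int SEGMENTS = 8;             // assumed subdivision count

    vec4 catmullRom(vec4 p0, vec4 p1, vec4 p2, vec4 p3, float t)
    {
        float t2 = t * t;
        float t3 = t2 * t;
        return 0.5 * (2.0 * p1 +
                      (-p0 + p2) * t +
                      (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2 +
                      (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3);
    }

    void main()
    {
        // Subdivide the center segment (points 1..2), with points 0 and 3
        // as the adjacent neighbors guiding the curve.
        for (int i = 0; i <= SEGMENTS; i++) {
            float t = float(i) / float(SEGMENTS);
            gl_Position = catmullRom(gl_PositionIn[0], gl_PositionIn[1],
                                     gl_PositionIn[2], gl_PositionIn[3], t);
            EmitVertex();
        }
        EndPrimitive();                 // one line strip out per segment in
    }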

And as Geoff has been getting across this whole session, you have complete creative control over everything you do in the shader. So it's really simple to apply any kind of effect you want to this geometry. Let's get this guy's hips shaking, make him dance around, and now it's an iPod commercial.

( Laughter )

So this is a simple 2D example with lines, but you can see how, from the CPU's point of view, you're only working with 30 points, and the CPU load just to animate this guy's hand is almost nothing. On the GPU side you can tessellate this as much as you want, create thousands or hundreds of thousands of vertices, and make it look really good.

Ok, the next demo is a little different. I'm going to use a geometry shader again, and also transform feedback. It's less complicated: it's one line. Actually, it's one line and three uniforms. Can you guess what's going to happen here? Right, I'm creating a fractal with this.

So how does this work? It's the same kind of subdivision as the previous example: I have one input line segment, I use the uniforms as relative offsets along that line, and I break the incoming segment into four line segments coming out. But instead of drawing to the screen, I capture the output positions into a VBO with the transform feedback extension.

Then, guess what, I just turn around and draw that VBO with the same geometry shader: the first line comes in and gets subdivided again, then the second line, and so on. And I can repeat that as many times as I want to build up detail.
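A hedged C sketch of what that driving loop could look like; the buffer names, counts, and ping-pong structure are illustrative, and the subdividing geometry shader program is assumed to be bound already:

    GLuint src = lineVBO, dst = scratchVBO;     /* two VBOs to ping-pong */
    int vertexCount = 2;                        /* one starting segment  */
    glEnableClientState(GL_VERTEX_ARRAY);

    for (int pass = 0; pass < iterations; pass++) {
        glBindBuffer(GL_ARRAY_BUFFER, src);
        glVertexPointer(4, GL_FLOAT, 0, 0);

        glBindBufferOffsetEXT(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, dst, 0);
        glEnable(GL_RASTERIZER_DISCARD_EXT);
        glBeginTransformFeedbackEXT(GL_LINES);
        glDrawArrays(GL_LINES, 0, vertexCount); /* each segment in, four out */
        glEndTransformFeedbackEXT();
        glDisable(GL_RASTERIZER_DISCARD_EXT);

        vertexCount *= 4;                       /* geometry shader fan-out */
        GLuint tmp = src; src = dst; dst = tmp; /* ping-pong the buffers */
    }
    /* Finally, draw "src" for real, with rasterization back on. */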

Now what's cool is I can animate the uniforms per frame by just dragging the points around and now I've got this kind of interactive fractal shape that I can do whatever I want with. So that's neat but still drawing this thing as one line segment isn't as cool as it could be. So let's try to be a little more creative and artistic and see what else we can do with it.

So first, instead of drawing this as lines, what if I emit just the points and feed those points to another geometry shader, just like the one I showed you at the beginning, that turns each incoming point into a textured quad? There, you see, I'm drawing a bunch of textured balls now, which already makes it look a lot more complicated, because you can see interactions happening with the alpha.

What else can I do with this? Let's try randomizing the point size. I can do this with the noise functions in GLSL. Here I'm actually using a texture look-up in the geometry shader to look at some random texture data. Remember that texture look-ups are available to vertex and geometry shaders not just fragment shaders.

And I could do the same thing with, say, the color I pick for each of these. So now it's looking a lot more complicated and organic. Let's apply a little bit of dot product magic in a fragment shader. Now I've got this much more interesting looking microorganism kind of thing.

And to finish this off, let's try animating some more of these attributes, like the point size, and apply some random displacement to everything. Now I've got a very complicated looking scene, which is completely interactive. And remember, all of this was generated from one line segment and three uniforms I'm touching per frame. All the extra geometry is calculated, animated, and actually created on the fly in the shader, where we can accelerate it on the GPU.

So I think that's the end of the demo. I just want to say there are a lot more of these effects you can come up with. You could do the same kind of tessellation with 3D objects. You could do shadow extrusion. There's a kind of unlimited world of possibilities here. So please go to OpenGL dot org, download the specs, read through them, and try experimenting with this stuff yourself. Thanks. ( Applause ) >> We'll go back to the slides and I'll finish up here.

So, programmable GPUs are here. GLSL ships everywhere. The OpenGL Shading Language is an approachable, understandable, direct, fast way to get at the power of the GPU. It's our focus, and we'd really like it to be your focus. Moving forward we're going to be doing a seed program, continuing from last year: OpenGL seed at Apple dot com.

What we're going to try to do is get you new graphics drivers and new OpenGL so you can work on top of Leopard and give us feedback on what you have. If you find any bugs, we can get quick-turnaround fixes to you, and you can help make sure Leopard is a really great place to use these new features and to use OpenGL and GLSL. I don't think we have time for questions today, but I want to bring up Allan's name. Allan Schaffer is our 2D and 3D graphics evangelist, and if you have specific questions, or even general ones, you can go to him. There's obviously the sample code and resources there as well.