Graphics and Imaging • 53:15
A must-see for developers who are interested in unlocking the maximum performance from the latest generations of graphics hardware, this session covers techniques for increasing 3D performance by transforming geometry using the display card's Graphics Processing Unit (GPU).
Speaker: Michael Larson
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Good afternoon everyone, my name is Travis Brown, I'm the graphics and imaging evangelist, and it's my pleasure to welcome you to Vertex Programming with OpenGL, session 205. The theme that we have at this year's WWDC is leveraging the GPU, and in the graphics and imaging overview yesterday we showed a lot of examples of using programmability to do interesting things, interesting visual effects.
Those were fragment programs, but there's also a companion technology, another way to program the GPU, called vertex programming. Instead of touching fragments, and therefore pixels, it uses the GPU to do very high speed geometry calculations. And the two go hand in hand in terms of the ways you can really offload the burden of doing the graphics from the CPU and allow the GPU to do what it does best, which is work with geometry and then, obviously, draw that geometry. So, let's get started. It's my pleasure to welcome our speaker today, Michael Larson, up to the stage so he can take you through the presentation.
Thanks Travis. Today we're here to talk about vertex programming with OpenGL. It's been around for a couple of years, starting with the ATI R200 generation. So we'll talk about it a little bit. Let's start out: we're going to go through an introduction to vertex programs. We're going to talk about vertex programs in the OpenGL pipeline, the computation model used by vertex programs, the program syntax for the ARB vertex program, and go through a number of examples, starting from simple ones using Shader Builder and some more complicated ones using Project Builder.
So starting out, why use vertex programs? Say you're using fragment programs: a lot of the inputs that go into a fragment program come out of the vertex program. You might have texture coordinates, you might have some varying position coordinates that come through; in general, you'll probably end up using vertex programs if you're using fragment programs. Say your application doesn't fit into the standard OpenGL lighting pipeline: you've got an existing lighting model that you want to implement in vertex programs and you don't want to change it to the OpenGL one. You can use a vertex program for that.
Say your application is pre-processing vertex arrays, where every time you draw an object, you're touching the data. One way to get rid of that data touching and CPU time is to move it up into the vertex program: isolate the part of your program that actually does that work and move it up. Examples would be surfaces and vertex tweening. Tweening comes from keyframe animation; it's an old term. You know, you had the general cartoonists and the people who drew everything in between, so it's called tweening.
So the benefits of vertex programs: essentially you're offloading functionality from the CPU to the GPU. You have more flexible per-vertex geometric operations, higher performance, and lower memory usage. So let's talk about vertex programs and the OpenGL pipeline. Here's the old fixed model. You start out with the OpenGL vertices. You have your standard modelview transformation, the color materials and the lighting effects, and your perspective division.
And that's been replaced with vertex programs. Essentially, your vertex program replaces a lot of that same "fixed" functionality. A lot of that was actually implemented in microcode; vertex programs are essentially allowing you to write that microcode yourself. The SGI machines for years used microcode for all these transforms.
So what changes? If you're using a vertex program, your program is now responsible for doing all the transformation, the color materials and lighting, and a whole slew of other things the OpenGL color and lighting model used to handle. And here's a scenario: say you're switching out of a vertex program and you're expecting the standard OpenGL lighting model to be there. You have to disable vertex programs, or everything will run through the vertex program regardless. So let's talk about the computation model.
So what invokes a vertex program? Well, there's a number of standard methods defined in the ARB spec. Say you're issuing a glVertex command, which usually kicks off some kind of operation, indirectly through the OpenGL draw elements or draw arrays commands; or the current raster position is changed. And essentially a vertex program terminates when it reaches the end of the program.
The computation model is independent execution on each vertex. There can be, and there likely are, two or four vertex units in each GPU. And there's no persistent temporary data: when you write a program you kind of feel there's going to be data left over, but with a vertex program, you run the program, you're done, it's all gone. So, the input/output model: you have your vertex program. You have vertex data inputs, which would be your position, color, texture coordinates, those kinds of values. Environment parameters.
OpenGL state. Let me talk about environment parameters real quick. Environment parameters are values that you can load up to your vertex program through GL, either across all vertex programs or for a single one. And OpenGL state information, which is the current lighting model and a number of matrices available from OpenGL. In addition, you have a number of temporary registers
available for use while you're executing the vertex program. Then, as your program completes the computation for each value, say the color, the position, and everything else, you spit it out to the vertex data outputs. Once you reach the end of the vertex program, you've spit out everything to your output registers, you hit an END, the vertex program stops, and it kicks the results off to the rest of the engine. What's available to vertex programs? Standard attributes (position, color, texture coordinates), additional programmable attributes that you can set, and the OpenGL transform and lighting state.
So the parameter information is read-only to the program. It's defined in the spec. They're essentially constant values from the perspective of the program; from outside, you can program them using the environment commands from OpenGL. There's two types. There's global parameters: say you have multiple vertex programs, you can load up these environment parameters across all vertex programs. Or you have local parameters, which only affect the current vertex program running.
The instruction set is limited to about 27 instructions. They're focused on transform and lighting. Remember, this is kind of like a microcode implementation exposed out, so they're very limited. There's no loops, no branches. You have limited program length. Some instructions are macros, meaning an instruction that's complicated and does multiple things in one operation might actually be implemented as two or three instructions, and those are defined by the spec. So not all instructions will require two or more instructions; only the macros do.
There's data operands. There's scalar operands, which are standard floating-point values, and SIMD operands, which are AltiVec-type values: four floating-point values indexed by x, y, z, and w.
[Transcript missing]
So, data component selection. You have a normal SIMD operation, like an AltiVec instruction: A equals B. You've got some input parameter, it goes through some operation, it gets flushed out at the bottom, and all the components get set.
You have swizzle SIMD selection, which allows you to move data between components. It's on the input select, not on the output select. So if you have two source components, source A might be .xyzw and source B might be .zzzz; you can select them all in the same instruction. And you have scalar source selection for the scalar operations: you can select which source component you want for the operation. So if you're doing, say, a reciprocal and you want the reciprocal of y, you essentially select y.
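To make that concrete, here's a rough sketch of what those selections look like in the ARB syntax (the registers R0 through R3 are illustrative temporaries, not names from the actual demo):

    /* Hypothetical snippet; assumes "TEMP R0, R1, R2, R3;" was declared. */
    const char *selectionSketch =
        "MOV R0, R1.zzzz;\n"     /* swizzle: replicate source z into all components */
        "ADD R2, R0, R1.yxwz;\n" /* arbitrary component reordering on input         */
        "RCP R3.x, R1.y;\n";     /* scalar op: select y as the one source component */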
The instruction set has a number of standard scalar and SIMD instructions, like you see in AltiVec. You have your standard add, subtract, and multiply-add. Multiply-add is a very important instruction; it's not a macro, it's one instruction, and you can do a lot with multiply-adds, as you'll see in the examples I'll show you. You have absolute value, minimum and maximum, dot products, the distance vector (so, really, lighting; you can see the lighting focus in here), and reciprocal functions. Math functions are pretty limited: you have exponent base two, floor, fraction, and log base two.
And then the more complex instructions you have: component selection, lighting, conditional set-on-compare (it doesn't operate like you'd think it does, but I'll show you how it works), and indirect register loads from parameter space. You can't modify your temporary registers or index them through indirect addressing; only parameters can be indirectly addressed.
So, let's look at the constraints. These are the base requirements for all vertex program implementations. If you write a vertex program, the base implementation will have this many values: 96 environment parameters, 96 local parameters, 8 program matrices, 1 address register, 128 instructions, and 12 temporary registers. Now, the spec allows for anything more than that, but the base requirements are what you can count on across all implementations. So how do you find out what your new piece of hardware, an R300 or whatever you put in there, gives you? Use glGetProgramivARB and query the interface for what's there. There's a whole bunch of enums that you can ask for to see what's out there.
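As a hedged sketch of that query (the function and enums come from the ARB_vertex_program extension; this assumes a current OpenGL context on Mac OS X):

    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>
    #include <stdio.h>

    /* Print what this implementation actually provides beyond the base. */
    static void PrintVertexProgramLimits(void)
    {
        GLint v;
        glGetProgramivARB(GL_VERTEX_PROGRAM_ARB, GL_MAX_PROGRAM_INSTRUCTIONS_ARB, &v);
        printf("max instructions:      %d\n", v);
        glGetProgramivARB(GL_VERTEX_PROGRAM_ARB, GL_MAX_PROGRAM_TEMPORARIES_ARB, &v);
        printf("max temporaries:       %d\n", v);
        glGetProgramivARB(GL_VERTEX_PROGRAM_ARB, GL_MAX_PROGRAM_ENV_PARAMETERS_ARB, &v);
        printf("max env parameters:    %d\n", v);
        glGetProgramivARB(GL_VERTEX_PROGRAM_ARB, GL_MAX_PROGRAM_LOCAL_PARAMETERS_ARB, &v);
        printf("max local parameters:  %d\n", v);
        glGetProgramivARB(GL_VERTEX_PROGRAM_ARB, GL_MAX_PROGRAM_ADDRESS_REGISTERS_ARB, &v);
        printf("max address registers: %d\n", v);
    }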
So let's talk about syntax. All programs must start with the version signature, !!ARBvp1.0, and must terminate with an END statement, and just about anything in between is okay. It has loose temporary and parameter variable definitions: you don't have to define all your temps at the top. You can define them inline, just as long as it's before you use them, just like C++.
So, what is the syntax? It's effectively a GPU-independent assembly language. It's not targeting any specific GPU; it's independent of all types. It's not C, and it's not pure assembly: there's no fixed register allocation. Programs are loaded as strings, they're runtime-compiled by the driver (ATI, NVIDIA, whoever), and there's no fixed allocation of GPU registers.
Three kinds of parameters: you have the constant parameters, which are not programmable from your program; you have the environment parameters, which are programmable across all vertex programs; and you have local parameters, which are specific to the currently bound vertex program. You have temporary registers; essentially they start out with a TEMP declaration, and you can name up to 12. You'd be surprised how few you actually have to use.
The key thing about using temporary registers is that you come in with a mindset, from AltiVec or whatever you're used to programming on, of creating a whole bunch of data, getting a result, and storing it away. The key for vertex programs is to compute your result, stick it in the result register as fast as you can, and reuse that temporary register.
Then there's a number of unary, binary, and ternary instructions; you know, reciprocals and maths and adds and multiplies, those things. So, just to look at the instruction set real quick (this comes out of the ARB spec): you see the instruction, you have a defined set of inputs, and pretty much everything takes a vector input; it might have two sources. The output, and that's the important part when looking at the spec, might be a scalar function or it might be a vector, a SIMD operation.
You can see the flavor of the instructions and the syntax that's used, and more additional instructions. You can look at some of the math functions: the power, the reciprocal, the reciprocal square root. All those instructions have scalar outputs, so you have to select a particular component. If you want the square root of some value, like the square root of the y value, the result is going to be spread across all the outputs.
And that's what it shows in this example. So, where do you get more information on ARB vertex programs? Well, Shader Builder in the development tools comes for free, and it's a great place to start. It has active syntax checking. You don't have to build a framework and build a program to make it work; you can just turn it on, start dialing away, and things work.
So let's talk about loading programs and controlling the execution environment for vertex programs. Vertex programs are loaded as strings. Like I said, they're runtime-compiled by the driver. Essentially, there's only one way to do it today; they've left room to do it different ways in the future, but strings are the only way today. It has active syntax checking: after you try to load a vertex program, you can test glGetError, and if you call glGetString with the right enum, it'll actually come back with a string reporting the actual position in the code where the error was found, which is very useful. You'll find out. So, loading: like I said, there's parameters. There's a specific call for loading environment parameters, glProgramEnvParameter4fvARB, and they're all loaded as SIMD values. There's no way to load a scalar value; everything gets loaded as SIMD.
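A hedged sketch of that load-and-check sequence (it assumes a current context with ARB_vertex_program available; the parameter index 0 is an arbitrary example slot):

    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>
    #include <stdio.h>
    #include <string.h>

    static GLuint LoadVertexProgram(const char *src)
    {
        GLuint id;
        GLint  errPos;

        glGenProgramsARB(1, &id);
        glBindProgramARB(GL_VERTEX_PROGRAM_ARB, id);
        glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(src), src);

        if (glGetError() != GL_NO_ERROR) {
            /* The driver reports where and why the compile failed. */
            glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errPos);
            printf("vertex program error at offset %d: %s\n", errPos,
                   (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
            return 0;
        }

        /* Environment parameters are shared across all vertex programs,
           and always load as four floats: SIMD only. */
        glProgramEnvParameter4fARB(GL_VERTEX_PROGRAM_ARB, 0,
                                   1.0f, 2.0f, 3.0f, 4.0f);
        return id;
    }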
Also, there's an equivalent call for local parameters, glProgramLocalParameter4fvARB, and they also get loaded as SIMD. So let's go through a basic vertex program. What it's going to do (it's effectively the Shader Builder example, maybe a little bit simplified) is a modelview-perspective transform of an input vertex: set the result and move it on.
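As a hedged sketch, a minimal program of that kind looks roughly like this (this is not the exact demo source, just the standard pass-through shape):

    static const char *basicVP =
        "!!ARBvp1.0\n"
        "ATTRIB pos    = vertex.position;\n"
        "PARAM  mvp[4] = { state.matrix.mvp };\n"  /* modelview-projection rows */
        "TEMP   t;\n"
        "DP4 t.x, mvp[0], pos;\n"                  /* transform the position    */
        "DP4 t.y, mvp[1], pos;\n"
        "DP4 t.z, mvp[2], pos;\n"
        "DP4 t.w, mvp[3], pos;\n"
        "MOV result.position, t;\n"
        "MOV result.color, vertex.color;\n"        /* pass color through        */
        "MOV result.texcoord[0], vertex.texcoord[0];\n"
        "END\n";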
So here we have Shader Builder. Let me bring it up a little bit. Let's look around here a little bit. You have essentially your code input here; this is actually loaded as a vertex program. You have to load it before any begin/end statement, by the way; you can't change it while inside a begin/end. And, you know, draw arrays and draw elements have implied begin/ends, but effectively, before a begin statement, you have to have this loaded as a program.
So let's go through this again. You have your editor interface right here. You can enable or disable the vertex program. You have a number of objects available to test your vertex program on: a sphere, a teapot, a hedron, a plane (I think it's actually called that, I don't know). Everybody draws teapots. And you have your GL parameters.
And you can dial up your color; you can change the color on the fly. You can also select a number of texture units available for input; you can load them up and select them through this interface, but today we're just going to use texture unit zero. And you can enable it or disable it. Turns things on. So.
Then over here you have a bunch of debugging information. I'm not going to go through it; it essentially allows you to step through a vertex program and watch the values change. In addition, Shader Builder has an instruction reference that comes up over here, so you're not digging around the ARB spec. It's 64 pages long and it's really detailed; you don't want to read it all up front. Read it when you get there, you know, okay. But you essentially have online documentation of all the instructions and what they're supposed to do. So let's go back. Turn that off.
Start the program out with the !!ARBvp1.0 signature. I have an offset, a scale, and a zero value; these are constants thrown in here. Once you program enough, you start throwing in your standard cut-and-paste, and I have a whole bunch of standard values I use all the time. Here's a temporary value, the vertex position. Let me show you an example of the runtime compile.
"Inline error checking" comes up with an error saying there's something wrong here. If you look at the bottom it says line 9. Actually if you did the DL error check and actually pulled that string out like I showed in the previous example, it would return this same value.
So as you work into Project Builder examples and you start doing it by yourself, just about all your vertex programs will pull that string out, just to help you out. So, let's start out. Right here, here's the vertex position; that's an input value that comes from the program. Here's a temporary called vPos and a zero value, so I can actually add an offset inline, and it automatically gets added to the program. I can change that offset in real time. These are great tools, by the way.
You know, one thing I do a lot is I actually get a program up and running here, cut and paste it, stick it in a file, and add it to my Project Builder project. And then, if I develop errors in the program, I always come back and paste it in here to see where the error's at.
And right here, I'm taking the vertex position, the vertex from the input, assigning it to a temporary by doing an add with an offset, and then multiplying by some scale value. And you can comment all this stuff out.
[Transcript missing]
and then also set the texture coordinate. So that's an example of a basic vertex program within Shader Builder. It's pretty simple; it's probably five or ten instructions long. So, let's go back.
Next we're going to go through a lighting vertex program. That's another major use for vertex programs: like I said, when you have a lighting model you want to use that isn't the standard GL one. What we're going to do is use Shader Builder; it's essentially the same example that comes in your development example area. We're going to do a modelview transform of an input vertex, perform some lighting computations, and then move on. Let's do the demo. A little more complex.
So I wanted to remove some functionality here before I move on too much. I'll leave that over there. So you can look at this thing being applied in stages. ATTRIB: effectively, it's a #define. I mean, it really is. So if you want to use nice names for standard input values, you can declare them as ATTRIBs. Now I have two parameters here, and if you notice, in this parameter right here I'm sharing a whole bunch of values: rather than creating multiple parameter variables, I'm jamming three or four scalar values into one SIMD value.
This is in parameter space. And here's your light position. So what we go through here: we do our modelview-perspective transform, and we start computing the lighting value.
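The transcript gap hides the exact code, but as a hedged sketch, the core of a simple diffuse term looks something like this (the parameter slots 8 and 9 are made up for illustration, and it assumes the light direction is normalized and in the same space as the normal):

    const char *diffuseSketch =
        "PARAM zero = {0, 0, 0, 0};\n"
        "TEMP nDotL;\n"
        "DP3 nDotL.x, vertex.normal, program.local[8];\n" /* N . L         */
        "MAX nDotL.x, nDotL.x, zero.x;\n"                 /* clamp at zero */
        "MUL result.color, program.local[9], nDotL.x;\n"; /* scale diffuse */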
[Transcript missing]
The nice thing about this tool is you can do things right inline and test them as you go. Say you develop a bug in here: you can comment it out in real time and actually see what part of the program is messing up. And then we go through and we move the result to the output position and the color.
So those are two simple examples of how to use vertex programs in Shader Builder. We're not going to talk about Shader Builder too much; we're going to move on to some new ideas. As far as Shader Builder goes, it's a great place to start. It's a great tool. You can load up textures, you don't have to build a framework; you can get started today. It also provides active syntax checking of your vertex programs as you develop them. In addition, as you start out, it has online instruction information.
So, as far as program possibilities: a lot of people look at the instruction set, and it's rather small and lighting-focused. The code space is pretty small, and we're all used to having as much code as we want. And the number of temporary registers is actually pretty small. But you'd be surprised what you can actually do with that. You can do surfaces, you can do active displacements, you can do real-time displacements, you can do advanced lighting effects, keyframe animation, and visual computation.
So let's go through some real quick tips from an application standpoint. The best way to use vertex programs is vertex arrays combined with vertex array ranges. The key thing about this is you're removing the CPU from the issuing and fetching of all the vertex data, so you're keeping the CPU out of the equation.
Use component selection to save parameter space. Rather than having multiple parameters defined, like one here and two there (the runtime compiler is not going to be able to figure this out for you), stick them together. In addition, use compound parameters: save yourself a reciprocal or a multiply by pre-computing these fixed values and combining them in one parameter. So here I have 6 factorial, which is 720; negative 720; the inverse of that; and the negative of that.
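A hedged sketch of that packing on the host side (slot 4 is an arbitrary index choice):

    /* Four related constants in one SIMD local parameter: */
    glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 4,
                                 720.0f, -720.0f, 1.0f / 720.0f, -1.0f / 720.0f);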
Additional programming tips; these are things I've had to figure out as I went. Only one source value can be selected for some instructions. At some point, you might want to merge all of those values back into the same vector, a SIMD value, and start computing again, rather than generating a single scalar value and doing a whole bunch of work on it, then generating another scalar value and doing a whole bunch of work on that.
You might want to merge them all back together and compute them like you would using AltiVec. You can use component selection and the multiply-add instruction to merge everything back together. This shows how: you do a reciprocal on three values, and then you start to merge them back together by doing multiply-adds back into one value. This is actually quite helpful.
Conditional selection. There are no branches, so how do you do conditional selection? Well, the only way to really do it is through multiplies. So you have a value, and say you want to do: if A is greater than or equal to B, then assign C to the result, or something like that.
You have to do, you know, a set-greater-than-or-equal, and what that actually does is set the output to ones if the comparison is true. So you're going to use that ones value to multiply against the input value, so it's either going to be one or zero, and a MAD instruction in between merges the two together like a conditional would. So I've got it down to four instructions, and there's no branches.
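A hedged sketch of that idiom (registers and parameter slots are illustrative): per component it computes R2 = (R0 >= local[1]) ? local[2] : local[3], in three instructions plus whatever produced R0.

    const char *conditionalSelect =
        "SGE R1, R0, program.local[1];\n"               /* 1.0 where A >= B, else 0.0 */
        "SUB R3, program.local[2], program.local[3];\n" /* C - D                      */
        "MAD R2, R1, R3, program.local[3];\n";          /* mask*(C-D) + D = C or D    */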
So let's talk about vertex programs for surfaces. A lot of the lighting functionality for vertex programs has actually moved down to fragment programs; you can get per-pixel lighting. So what else can you do with this? This is a pretty powerful little tool you have here. Rather than tweaking your vertices every time you want to move somebody's position or you want to define a new shape, just use a UV mesh: essentially a two-dimensional mesh that you create once and you never touch again.
So the vertex program is going to compute the XYZ position of all the values. You'd be surprised what you can do with it. You can do quadratic surfaces, implicit surfaces, parametric surfaces. You can do bilinear interpolation, Béziers, B-splines, and NURBS, all with vertex programs. So, what's a UV mesh? Well, a UV mesh is a 2D mesh. It's bounded in u and v, and u and v can be bounded between any kind of fixed values.
A parametric surface, such as a Bézier surface, or NURBS, or B-splines, is bounded between 0 and 1 implicitly for the whole surface. So, why would you want to use a vertex program for a surface? Well, you only have to define one mesh. One mesh for all your objects. And by using that, you never have to touch your vertices again; you can load them up in VRAM and you'll never see them again. You can animate surfaces using control parameters; we'll show an example of that. And you're offloading the work from the CPU to the GPU. So, let's talk about an implicit surface vertex program.
It's a cool math thingy; I picked it off a web page. You know, that's three sine values: I've got sine of u and v for x and y, and then four control parameters combined with u and v for the final z value. So we're going to build a UV mesh as a vertex array, and we're going to bound it between negative pi and pi, with some number of steps in between. And then we're going to fill in the position with the UV mesh, u and v; you see z is zero, actually, right here. The u and v are just input parameters to the vertex program.
So we're going to create our program, debug it through Shader Builder, and load it as a program string. Then we're going to load the shape control parameters, the A, B, C, and D you've seen in the previous equation, as local parameters. And then we're going to submit a number of quad strips for drawing.
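A hedged host-side sketch of those steps (STEPS and the local parameter slot 0 are illustrative choices):

    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>
    #include <math.h>

    #define STEPS 64

    static float mesh[STEPS][STEPS][3];

    /* Build the UV mesh once; u and v run from -pi to pi, z stays zero
       because the vertex program computes the real position. */
    static void BuildUVMesh(void)
    {
        int i, j;
        for (j = 0; j < STEPS; j++)
            for (i = 0; i < STEPS; i++) {
                mesh[j][i][0] = -M_PI + 2.0f * M_PI * i / (STEPS - 1); /* u */
                mesh[j][i][1] = -M_PI + 2.0f * M_PI * j / (STEPS - 1); /* v */
                mesh[j][i][2] = 0.0f;
            }
    }

    /* Per frame, one SIMD value drives the whole shape. */
    static void SetShapeControls(float a, float b, float c, float d)
    {
        glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 0, a, b, c, d);
    }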
So here's our object in line (wireframe) format. This is actually a two-dimensional UV mesh being input; it's finely tessellated, and all the positions are being computed by the vertex program. Right here are our control parameters; we can dial in anything we want and change the shape. So these are the As, Bs, and Cs. Everybody who sees this wants to go out and do knots and stuff like that. Good luck. You can do a lot of cool stuff. So let's do a little demo here.
Now we're computing the shape on the fly. It's been loaded once into VRAM; it's not coming across AGP at all. Texture coordinates are being computed by the vertex program. This is an OpenGL context right here; I'm pulling the input from a QuickTime movie, and here's the volume indicator. It's kind of fun.
So what did you see there? You've seen a vertex program that computed the object shape and color and the texture coordinates. The control values, A, B, C, and D, were fed up as control parameters. So one SIMD value per frame was loaded up, and it computed everything else based on that one value. There's no sine function, so how did I compute that? I used a Maclaurin power series. You could do a table lookup, but it's just about as much work as doing the Maclaurin series, because you're going to do indirect address register loads and all that stuff.
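As a hedged sketch of that series, sin(x) is approximately x - x^3/6 + x^5/120 - x^7/5040, evaluated on all components at once (it assumes the angles sit in R0 and the precomputed coefficients {1/6, 1/120, 1/5040, 0} in program.local[5], a made-up slot; real code would want range reduction for accuracy at the interval edges):

    const char *maclaurinSine =
        "TEMP R1, R2, R3;\n"
        "MUL R1, R0, R0;\n"                       /* x^2        */
        "MUL R2, R1, R0;\n"                       /* x^3        */
        "MAD R3, R2, -program.local[5].x, R0;\n"  /* x - x^3/6  */
        "MUL R2, R2, R1;\n"                       /* x^5        */
        "MAD R3, R2, program.local[5].y, R3;\n"   /* + x^5/120  */
        "MUL R2, R2, R1;\n"                       /* x^7        */
        "MAD R3, R2, -program.local[5].z, R3;\n"; /* - x^7/5040 */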
And then once I had the values computed, I computed all three sine values at the same time using AltiVec-style SIMD code. So let's take a look at the program. It's rather small up here. You guys see that? And the lights, maybe. So there's an input parameter, ABCD, and four temporary values; not a whole lot.
So the surface is described by two fixed UV values for x and y, and then the control parameters for the shape are in the z. So I had to compute those in the vertex program rather than stuffing the vertex array with new values every time. That's about six instructions there. That's an example of the syntax. It's very assembly-like.
Then I'm cranking on and I'm starting to compute the sine values through a number of terms. So if you notice, I'm doing a multiply and a MAD instruction, and I'm continually moving on. Then we do a few more terms. We set the output position. So once I have the output position, I jam it away, because I'm going to use that temporary value again.
And then I set the texture coordinates to some input value, which is actually the time; I jammed it into the x component. And then I add it to the vertex coordinate, and that value's been normalized between 0 and 1. And that's it. So, where do you go from here? Try doing waves. Waves are actually pretty simple.
Very similar; use cosine functions. You can have essentially multiple emitters. Think about things bobbing in the water: you have three or four of them bobbing in the water, and for each point on your water, you just simply compute how far away you are from that cosine emitter, compute your z value from that, and sum across all the emitters in the whole thing. There's a demo tomorrow that shows this. It's pretty cool.
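A hedged host-side sketch of the emitter idea (the same sum can be built out of cosine terms in the vertex program itself; the Emitter struct is made up for illustration):

    #include <math.h>

    typedef struct { float x, y, amp, freq, speed; } Emitter;

    static float WaveHeight(float px, float py,
                            const Emitter *em, int count, float t)
    {
        float z = 0.0f;
        int k;
        for (k = 0; k < count; k++) {
            float dx = px - em[k].x, dy = py - em[k].y;
            float d  = sqrtf(dx * dx + dy * dy);  /* distance to this emitter */
            z += em[k].amp * cosf(em[k].freq * d - em[k].speed * t);
        }
        return z;  /* sum across all the emitters */
    }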
So let's go through NURBS. Actually, I had bilinear surfaces, B-splines, and everything else figured out; NURBS took a while to figure out. So why use a vertex program for NURBS? Most people who use NURBS in OpenGL have already figured out that NURBS are slow.
You have to do a multiply-accumulate across a number of control points, so people have rolled their own NURBS evaluators. You can store multiple patches as a single vertex array and just load up control points for each patch. You can do automatic level of detail, based on how close the patch is to the user or to the screen, by selecting among multiple different UV meshes.
So, what's a NURB? Actually, I was going through the airport, and my book dropped out, and a guy comes up and goes, "What's a NURB?" And I got to about "non-uniform rational," and his eyes glazed over, and he walked away. So... most people are like that. Nobody really understands NURBS; actually, that's all the name means: non-uniform rational B-spline. It's a surface defined by interpolating a number of control points. It's used a lot in CAD.
It's dominant in CAD because it can do sphericals and a number of things you can't do with Bézier surfaces. If you want more information on NURBS (I'm not going to explain how they work here), go search for an introduction to NURBS on the web, or Rogers has a pretty good book on it, and it's very simple to read.
So, we're going to do an outline of how to draw NURBS using a vertex program. We're going to compute the B-spline basis functions for that particular NURB on the host. It's a recursive evaluator; that's the biggest problem I had with figuring out how to do it on the vertex shaders.
It's a recursive function, and there are no branches, no calls, no jumps, so it had to be computed on the host. That's actually a pretty good thing, because you only compute it once and you never touch it again.
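That host-side recursion is the standard Cox-de Boor form; here's a hedged sketch (order k over a knot vector; the names are illustrative):

    /* B-spline basis N(i,k) at parameter t: computed once on the host,
       since the vertex program has no branches, calls, or loops. */
    static float Basis(int i, int k, float t, const float *knots)
    {
        float left = 0.0f, right = 0.0f, d;

        if (k == 1)
            return (knots[i] <= t && t < knots[i + 1]) ? 1.0f : 0.0f;

        d = knots[i + k - 1] - knots[i];
        if (d != 0.0f)
            left = (t - knots[i]) / d * Basis(i, k - 1, t, knots);

        d = knots[i + k] - knots[i + 1];
        if (d != 0.0f)
            right = (knots[i + k] - t) / d * Basis(i + 1, k - 1, t, knots);

        return left + right;
    }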
So, you're going to load the control points as local program parameters. And then you can do the same thing for your position, your normal, your texture coordinates: they just get loaded as control-point parameters. They don't get loaded as a new mesh; it's essentially just control-point parameters. So you're only loading 16 points for each attribute that you want to load in this example.
This example is a 4x4 control mesh; the position information is interpolated by the vertex program. A real simple example. On the implementation side: you have a 4x4 control mesh, and if you don't know anything about NURBS, that means you're going to have 16 control points, and 16 basis function products per UV value, and that's a lot of storage. So what I do is actually post-multiply the basis functions in terms of rows and columns in the vertex program as I go along, and that essentially reduces the requirement down to eight floats.
So if you look at the little formula right here, you see the UV point is the sum over your control points, which is 16 of them, and you have two sets of basis functions, one for the rows and one for the columns.
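Written out for the non-rational case the slide describes, that sum is:

    P(u,v) = \sum_{i=0}^{3} \sum_{j=0}^{3} N_i(u) \, N_j(v) \, P_{ij}

where the N_i(u) are the four row basis values, the N_j(v) are the four column basis values, and the P_ij are the 16 control points.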
Let's do a demo. So here we are: a surface, and I put a texture on it. Here's the UV values coming in there. It's actually quite a few points: it's 10,000 points, a 100 by 100 grid. There's a number of the control points right here. You can grab the control points.
You can move the surface real time. This is all being computed on the Vertex Program. It's been loaded as a UV mesh in VRAM. It never goes across the AGP bus more than once. And you can do a lot of fun things with it. NURBS are actually a very flexible surface.
[Transcript missing]
So, let's talk about what we've seen. The mesh was 100 by 100; that's 10,000 points. If you do the math, just for the position only, that's 960,000 flops that the host didn't have to do. A full-screen application running at 1280 by 1K, 75 frames per second, blah, blah, blah, is about a gigaflop, just for the position only. So if you want to add textures and normals to that, you're on the order of three gigaflops. And the GPUs can do it.
So let's talk about the program real quick. Hopefully you're not straining your eyes. I have two inputs, a u basis and a v basis; those are four-float SIMD values. I bring them in through the vertex position and the color. And I have a number of control points as program locals.
I didn't show them all; there's a number of them, 16. So here I am pre-computing, post-multiplying the basis functions to come up with the required basis function products I need for each row-column combination. So I post-multiply, and then I start my multiply-accumulate on all the control points as I go along, for each row. For this example, it's a four by four, so there's four rows, so there's four sets of these things.
I go along, continually computing the new basis products, and I finally end up with a final position. I do a transform, I have a result position, and then I just move the texture coordinate along. The texture was normalized between zero and one, so it fits across the whole surface.
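A hedged sketch of one row of that accumulation (control points assumed in program.local[0..15], the u basis in R0, the v basis in R1; register names are illustrative):

    const char *rowAccumulate =
        "MUL R2, program.local[0], R0.x;\n"     /* weight the row's 1st point */
        "MAD R2, program.local[1], R0.y, R2;\n" /* accumulate the 2nd         */
        "MAD R2, program.local[2], R0.z, R2;\n" /* the 3rd                    */
        "MAD R2, program.local[3], R0.w, R2;\n" /* the 4th: row sum done      */
        "MAD R3, R2, R1.x, R3;\n";              /* weight the whole row by    */
                                                /* the v basis and accumulate */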
So where do you go from here? Pretty cool stuff. Well, you can import the position and normals and the color information, texture information, from a modeler program directly into this. You can design vertex programs for different size meshes, you know, 3x5s, anything in that range; if you look at the number of instructions it takes, once you figure out how it works, you know that you're bound to a certain size.
You can't interpolate 10x10 control meshes; there's not enough space in the vertex program to do that. And then you can actually do (I haven't figured out how, but I know it can be done) subdivision surfaces. People have always said that subdivision surfaces are algorithmic; they're similar to B-splines.
But if you look for it, there's an exact solution for subdivision surfaces in the back of the SIGGRAPH '98 proceedings. Somebody actually sat down and figured it out, and that was one of the biggest problems of implementing subdivision surfaces. You know, if you want to do very simple demos, bilinear surfaces are easy; for Bézier surfaces you can put the basis functions in the vertex program; and NURBS get a little more complicated. Like I said, they use recursion, but Béziers are simple. So let's do a wrap-up.
So vertex programs: they're a powerful tool for shading and lighting. You can use custom lighting models outside the OpenGL lighting model. You can add additional complexity to your geometry using vertex programs, and you can actually go to a lot finer level of detail; I mean, if you look at some of the games out there today, the level of detail is horrific. You can use UV meshes to define most surfaces. And once a UV mesh is defined, you can load it into VRAM and it'll never go across the AGP bus again.
So where do you start? Start with Shader Builder. Start with simple transforms: you know, play with the scale, play with the position, move those things around, change the color, get a feeling for how the language works.
And move on to lighting models. I actually skipped the lighting models; I wanted to write the surfaces, but that's just the way I am. And then move on to simple UV meshes. Try implicit surface models, come up with your crazy cool math thingies, and then start exploring B-splines and NURBS and everything else.
So, for more information on the ARB vertex program: you can read the spec. It's 60-some pages long and very detailed. Lots of information in there; it gives you kind of a guideline of what things are supposed to do. Or you can use Shader Builder. I'm going to try and have these examples posted in the next couple of weeks, hopefully, or available through the Mac OpenGL mailing list.
Thank you, Michael. Now I'd like to do a quick twirl through the roadmap of what we have planned for the remaining sessions, talking about OpenGL and also the graphics technology in Mac OS X. So what we have remaining for you tomorrow in the OpenGL track is essentially fragment programming with OpenGL. This is the more pixel-specific companion to programmability. It's also going to be a really cool session, with lots of interesting demos.
So if you're interested in programmability, unlocking the power of the GPU, you should also attend that session. We're also going to talk tomorrow about our core 2D technology, which is also relevant for OpenGL developers, 'cause one of the things we announced in our graphics and imaging overview session yesterday is the ability to use the 2D drawing API to draw directly into an OpenGL context. That can also be very exciting for OpenGL developers, 'cause it can make things like doing high-quality text in OpenGL very easy.
Then we have a great session that, if you're doing any OpenGL development, you have to attend: session 209, OpenGL optimizations. One thing that I've realized in working with a lot of developers who are using OpenGL is that, in many cases, they're leaving performance on the floor on the platform.
In many cases we have a great tool set, and this session will provide a lot of information for you to learn how to take a look at your applications and figure out how to unlock the true performance potential of both the GPUs and the Macintosh platform. And then another thing that might be interesting is session 211, which is going to be on Friday: an introduction to Quartz Services. Now, this is a non-drawing API in Quartz, but one of the things it does that's very important is manage the displays.
So a lot of OpenGL developers need to find out what the configuration of the display is, and what display modes are available on a particular display. These are the core APIs that we really want developers to adopt and use in their applications, to do things such as screen capture and display reconfiguration. So you should attend that.
And then also we have our hardware partners, ATI, here this year. In session 212, which is also on Friday, they're going to sort of push the envelope and show what you can do with their latest generation of GPU hardware and programmable features, because their demo team is going to essentially show what they do to create the incredibly captivating demonstrations they use to support their hardware announcements. So it's going to be a really cool session, and it's going to leverage a lot of what you've learned today and what you'll learn tomorrow in the fragment programming session.
And then, obviously, we have a feedback forum on Friday, in our traditional spot, the last feedback forum. We urge you to come to the feedback forum and let us know what sort of things you want to see in Mac OS X from a technology perspective, because a lot of the feedback we get at WWDC is what we take back and start working from to figure out what goes in the next major release of Mac OS X.
So I think we have a Q&A, or we have contact information. If you have questions about what you've learned, you can contact Michael Larson, who was the presenter today. And if you have other questions, Jeff Stahl is very active with developers directly, and you can also use him as a contact reference point. I'd also like to add my name to this list: if you have questions about this session or any of the graphics technologies on Mac OS X, feel free to contact me, Travis Brown, at [email protected]. Thank you.
So we have some additional information. This is essentially where the ARB vertex programming spec can be found, and I'll leave this up here for a little bit. What I'm actually going to do is go ahead and invite the Q&A team, the OpenGL engineering team, up to the stage, and we'll engage in a question and answer.
And I'll leave this up here, so if you want to copy information down off this, you'll have plenty of time. Okay, so we have some microphones in the center right here. If you have any questions about what you've seen today, please feel free to go to a microphone. You know, basically announce your name and your company and we'll field your question.