WWDC04 • Session 220

Introduction to the OpenGL Shading Language

Graphics • 43:29

Mac OS X OpenGL will include support for the OpenGL Shading Language (GLSL), allowing creation of vertex and pixel shaders using a high-level C-like language. View this session to learn details on GLSL and its implementation within Mac OS X. This session is ideal for developers of games, scientific visualization applications, and others looking to explore the power of the GPU.

Speaker: James McCombe

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Session 220, which is OpenGL Shading Language. We're pretty excited about being able to have the OpenGL Shading Language available to our developers in Tiger, because Apple's extremely excited about the fact that we have programmable hardware, these fantastic GPUs that we can do incredible things with, many of which you've seen particularly this year and even starting last year with some of our demonstrations and sessions on programmability with GPUs. The interesting thing is that previously, working with these GPUs has been almost a step back to the past, in that the way you program them was often at sort of the assembly-language level, which isn't approachable to a lot of developers.

So what's been happening in both GPU development-- and that's also been driving how these functionalities are exposed to the developers-- is the evolution of higher level shading languages. And that's exactly what OpenGL Shading Language is. And it's an approachable way for developers experienced with languages such as C to be able to really look at leveraging the power of the GPU in their application. And it's also important for you to realize that there's been some confusion that we've had at this WWDC because we've talked about OpenGL Shading Language in two contexts.

One is obviously in its relationship to Core Image, where Core Image uses sort of a subset of the OpenGL Shading Language that enables you to do a lot of things. We've been working on that for a while now, and it's used for all the interesting effects that Core Image supports. But I also want to be clear that the purpose of this session is to communicate about the full OpenGL Shading Language, which will be part of OpenGL in Tiger.

So that's going to give you all the capabilities of the language, and that will expose all the capabilities of current and future GPU hardware that supports programmability. So on that note, I'd like to invite our speaker, James McCombe, to the stage to take you through the session. Thank you. Thank you.

Good morning and thanks for coming to this session. I guess quite a lot of people made it back from Cupertino last night, but thanks for coming along. Yeah, we're pretty excited about this shading language stuff that we've been working on. I guess I want to sort of start off with looking at where we were.

Two years ago, we came here and hardware was coming out that had a programmable vertex unit, and we talked about ARB vertex program, which was a language to allow, well, reprogramming of the vertex unit. And last year, we talked about ARB fragment program, which allowed you to reprogram the sort of per-fragment lookup.

And some of the visual effects that you can do with this really are very powerful and incredible. And increasingly, more and more of your games are showing up and various things are using this. It's becoming clear that you can do some very powerful real-time effects with that. However, those languages were designed -- suddenly, you know, we were handed this fancy new hardware. We needed to come up with a language to program it. The languages that were created were quite close to the hardware. They were assembly-like in nature.

Well, now we've had some time. We've had some time in the OpenGL community to sit down and discuss a way to expose that hardware and also future hardware as new capabilities appear. We want a language that can expose that, be as portable as possible in the sense that the same program can run on various pieces of hardware.

Hardware supports new operations. The language will expose that. And we don't want to be revising the language every year. We want something that we can nail down. And there's been a language now for a very long time that's proven itself to be very good at this, and that's the C programming language. For the general-purpose processor, C got people away from programming in assembly all the time.

And I guess the fact that we're still programming in C today is a good sign that it's a pretty well-designed language. So the OpenGL ARB got together and pretty much took the C programming language and added some slightly new stuff. And it's got some slightly new syntax to it that's required to, I guess, make it appropriate for shading.

So let's take a look at the support for this. If you want to check in your application if it's there, these are the four extensions that are going to be on an OpenGL implementation that supports this. As I go through the presentation, you'll see why there are really four of them. But the most important one to check for is the one that's highlighted there, ARB shading language. If you check for that and it's there, you should have at least some level of support for the shading language on your implementation.
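
As a concrete illustration, a minimal check might look like the sketch below. It assumes a current OpenGL context on Mac OS X, and it uses the full extension string GL_ARB_shading_language_100 for the "ARB shading language" extension mentioned above; a simple substring search is enough for a quick check. The other extensions named in the session (GL_ARB_shader_objects, GL_ARB_vertex_shader, GL_ARB_fragment_shader) can be checked the same way.

    #include <string.h>
    #include <OpenGL/gl.h>

    /* Returns non-zero if the shading language extension is advertised.
       Assumes an OpenGL context is already current. */
    static int hasShadingLanguage(void)
    {
        const GLubyte *ext = glGetString(GL_EXTENSIONS);
        return ext != NULL &&
               strstr((const char *)ext, "GL_ARB_shading_language_100") != NULL;
    }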

Also, hardware support for this: really, any hardware that has support for ARB fragment program and ARB vertex program can support the OpenGL Shading Language, or at least varying amounts of it. The shading language being C, it has looping, branching, function calls. And also it has a lot of built-in functions for generating noise and various things.

Well, it turns out that a lot of current hardware simply doesn't have the actual machine code instructions that will do that. So there could be programs that you'll submit that simply won't run on the hardware at this time. However, this is a forward-thinking language. And as hardware keeps coming out, soon it will be fully supported.

But that leads me to my next point, which is that current hardware resources are limited, either in terms of the range of instructions that are available or in terms of how many registers it has. Therefore, I want to talk about software rendering. Now, here we are at the cutting edge of technology. There's fancy new hardware, and well, we're back to software rendering again. Well, let's think about really why that is.

In the past, OpenGL has really been used mainly for non-photorealistic rendering. It's been used for previews in an application like a CAD application, but for the actual final rendering, typically applications have had to use their own software renderer. Or in the movies, people have used RenderMan and various things for that.

Well, OpenGL is developing, and now with these shading languages, photorealistic rendering is becoming a possibility. So a software renderer becomes necessary because we want something that is high-precision, has real 32-bit floats.

[Transcript missing]

Let's have a demo here of the software renderer in action just to show you sort of what we've got here.

So, for any of you who were here last year, you might recall this particular image here. This is actually just an ARB vertex and fragment program in Shader Builder. This isn't even the high-level shading language I'm showing right now. But this is really merely just to prove the point of our software renderer. Here you can see the scene being rendered. This is running on a Radeon 9800 in hardware. So it's obviously pretty quick here. And if I can move the light around.

"Now if we click this button here, you can see now we're running in software. This is a fragment program running in software. Now yes, it's obviously slower, but it's not bad. So that's an example of our software renderer right there. So let's take another look at another example of this in action. Let's see here.

This is another one that if there's any medical imaging folks in the audience, they might find this somewhat interesting.

[Transcript missing]

There's a program object, and inside a program object you have a vertex shader and a fragment shader. The intent of a vertex shader is that it generates a transformed vertex with an untransformed vertex and OpenGL state as inputs.

That's its goal. It takes vertices that have been submitted from your application and performs some operation to put them into window coordinates. A fragment shader is similar, except it's dealing with pixels. It generates a pixel on the frame buffer with the vertex attributes interpolated across the scan line, passed into it, and also it has access to OpenGL state. Obviously, it can sample from the texture units.

Where do they fit in the pipeline? So let's look at that. You have your vertex data coming from your application, and then you have the pixel data handed in through your TexImage calls or whatever. And also this can all obviously come from display lists. Normally the vertex data goes through a fixed function: it gets multiplied by the modelview matrix, it gets transformed, and it gets clipped by the clipping planes on the sides of the screen, or even user clipping planes if you have any.

And also the vertex color gets set by a very basic lighting model that's built into OpenGL. And then basically you end up with a bunch of triangles, and then it goes down to the graphics driver or the software renderer, which takes those triangles, breaks them up into sort of trapezoids with a flat top and bottom, and goes through them scan line by scan line. And basically it stamps these pixels into the frame buffer, and it's a very fixed operation. If you've got multi-texturing turned on, it will pick a texel from one texture unit, then another, and maybe modulate them together or something like that.

Well, the programmability part comes in whenever you start to pull those out, and you're going to then have the ability to replace these two parts of the pipeline. That's what you're doing, essentially. So, take a look at some terminology here. A primitive is really a series of vertices with a connection rule. So, in a sense, if you're in immediate mode, everything between your GL begin and end, in the begin call, you specify the connection rule, and everything in between that is your vertices, and you can specify vertex attributes there, too.

Shaders are applied on a per-primitive basis. You can't switch shader halfway through a primitive. So it's important to note that the shader remains sort of constant throughout that. Also, OpenGL state, it can only change on a per-primitive basis. You know, you can only enable or disable something outside of a begin-end.

So that's going to be -- the state is going to be uniform throughout the execution of that shader. And that term uniform is important because it's actually going to be a keyword in this -- in this shading language. And I'm just trying to explain where they come from.

Also, looking at a vertex, obviously it's a 3D point comprising a primitive. The vertex shaders are run once for every vertex. Now, in hardware they could be run in parallel, but still there's one execution for every one of those vertices. They're independent from one another for the purpose of parallelism. So therefore, you cannot have communication between subsequent executions of a shader. They're discrete from one another. Also, vertices have attributes associated with them.

You know, mainly their position, their color; they could have a normal, texture coordinates, and a bunch of other user-defined attributes that you might want, weights or whatever else you might be using. And then a fragment is really a pixel on your frame buffer within a scanline during the triangle rasterization. So, that's the first thing.

The fragment shader determines its color. The fragment shader has access to data that's varying, coming from the vertex shader, and it's varying because for every execution of that fragment shader, those attributes are interpolated across that scan line. So let's take a look at the current languages that are available today.

The ARB-approved languages are vertex and fragment program. Again, as I said, they're assembly-like, except they have no bitwise operations. They're all kind of high-level instructions, somewhat high-level, mainly sort of mathematical operations. They all deal in floating point, and it's a SIMD instruction set. An ADD is going to do an add on four components in a register. All the registers are four components.

And also the instructions are a fairly close mapping of the hardware instructions, which is actually the very thing that we are trying to get away from now. And the language has enforced resource limitations, which can be annoying, but the upside of enforced resource limitations is that you know whether it's going to run on hardware or not.

You know that. You can query it, and as long as your program is within those well-defined bounds, it will run on the graphics hardware. This is just a look at the hardware. So this is the instruction set that was available. I broke it into four sections. This is the ARB fragment program instruction set.

You can see, you know, assembly-like names. This is an example of an ARB vertex program, a very simple one that merely transforms a vertex and moves the texture coordinate through. Pretty straightforward. And this is a fragment program that basically samples texels. It samples a texel from texture unit 0 and texture unit 1, and then it multiplies them together and writes it to the output fragment. That's what those languages look like at this point. Pretty straightforward. But it can get pretty complicated if you want to express large programs and complex effects.

So let's take a look at the high-level shading language at this point. Why do you want to use this? Potentially less code to write. You can express it in terms that you're more familiar with. I'm sure some of you are familiar with writing assembly code, but no matter what, you're really only writing it in your inner loops. You're not writing it every day with your code, so being able to write in C could be convenient. You no longer need to deal with specific registers anymore. That's all virtualized.

Just declare variables and use them as you please. You can depend on the compiler to some extent to take care of that for you. There's also the goal of hardware independence. This language doesn't tie itself to any one piece of hardware, which I think is a good thing in the long run.

And there's also the nice intent of being able to have code reuse. The ability to have function calls, and also to be able to pull in multiple files comprising one shader, is very useful. You could have a file, almost like a library, full of useful chunks of shader code that you use, and you can function call into that from the shading language. So that code reuse is pretty nice.

Here's just a comparison of sort of what the two things look like side by side. You can see there's the ARB vertex program that transforms a vertex and passes its color through. And then you can see the GL Shading Language version of the same thing. It's a more natural form of expressing it really. It just looks like C really. It has a main entry point and so forth.
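
The slide itself isn't reproduced here, but a minimal GLSL vertex shader doing what's described, transforming the vertex and passing its color through, would look roughly like this sketch:

    // Minimal GLSL vertex shader: transform the vertex and pass its color through.
    void main(void)
    {
        gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_FrontColor = gl_Color;
    }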

So, the object model here, the way the API kind of works: to make a vertex shader object, you can have one or more strings, which are the strings of the C code that you basically want to put in. And there can be more than one of these strings, and they're concatenated together. And you can function call in between them.

So you need a vertex shader object, and then also it's the exact same thing except for the fragment shader. When you have these two objects created, you basically pass each of those through a compile stage, which compiles it down to the machine code, which then gets passed into a linker.

This is a linker that builds a final program object. You then use that program object and start drawing primitives, and the shader will be applied to those. The language is C-inspired. It's got the usual main entry point. The language supports looping and branching and all those other things, just like you'd expect. There are no pointers.

Since you're not really accessing memory directly, there's no real concept of accessing memory in this language, so the idea of having pointers doesn't really make sense. So that syntax you will not see. There are new data types, since the hardware registers are all vectors; there are new data types to support that. And also there's a matrix data type as well.

Since we're introducing vector data types, what happens if you want to pick a scalar out of that? You want to get access to a scalar or you want to reorder the scalar components in a vector. Well, there's new syntax for swizzling now. We can pick out a certain component or reorder them.

There are new qualifiers. You know, you've got the C program for a vertex shader and then you have a C program for a fragment shader. They're separate. How do you get communication in between them? Well, the way that's done is via declaring a variable with a particular qualifier in both programs and having the names match.

That's what the linker stage does: it ties up those loose ends. And then there are a bunch of intrinsic variables, just predefined variables that are there, and either they allow you to access OpenGL state or you write your value into these predefined variables. So let's take a look at the qualifiers, the type qualifiers that exist.

The first one is attribute. Inside your vertex shader, to access the vertex attributes, you basically just declare something with the attribute qualifier. They only make sense to be declared in a vertex shader. And attributes change at most once per vertex. And they're read-only. You can't write to them.

The variables that are declared with the uniform qualifier, basically those are things that are set outside the shader, and they're uniform throughout execution of that shader. So they're read-only, and their frequency of change is at most once per primitive, things like the OpenGL state. And even in fragment programming, the texture data is considered uniform as well.

The other thing, just like C, you've obviously got a const, which is just inline constants that you want to declare in your program, obviously read-only. If you don't put a qualifier in, you're just declaring, really, a temporary variable, just for scratch space. And it's important to note, like I said earlier, the data in those will not persist between executions of the shader. You can't write something into a temp and expect it to be there on the next, when it processes the next vertex. That isn't going to happen. They are read-write access.
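
A rough sketch of how those qualifiers read in a vertex shader; the names weight and baseColor here are invented for illustration, not names from the session:

    attribute float weight;       // per-vertex data supplied by the application, read-only
    uniform   vec4  baseColor;    // set from outside the shader, constant per primitive
    const     float scale = 2.0;  // inline constant

    void main(void)
    {
        vec4 temp = gl_Vertex;                   // no qualifier: scratch space, does not persist
        temp.xyz  = temp.xyz * (weight * scale);
        gl_FrontColor = baseColor;
        gl_Position   = gl_ModelViewProjectionMatrix * temp;
    }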

And then there's the qualifier varying. Basically, variables declared with a varying type are basically your pipeline of communication between the vertex shader and the fragment shader. Declare something as varying, the rasterizer is going to take that value from the vertex shader and it's going to interpolate it, and the fragment program will get that value as it's interpolated across the scanline. So... Let's take a look at some of these new data types. You basically have three main classes of data type. You have floating point types, there's an integer type, and there's even a Boolean type.

Now, you can't expect that that integer is actually going to be stored in a fixed-width register, so there's still no shifting or bitwise operations, but the type still exists because some of the functions expect it to be there. Also, these are the vector types, and there can be two-component, three-component, and four-component vectors. And again, use swizzling to pick out scalar components.

Also, they can be accessed as a zero-based array. So if you have a vector and you want the second scalar, you could do, you know, variable, open the square bracket, and actually pick it out like that. So it's the two ways to access it. Obviously, there are scalar types now. If you declare something as a float, it's just going to have one scalar in there. And of course the flow control constructs in this language, they will only accept a Boolean type to branch on.

There's also a bunch of matrix types. You'll probably want to use these for getting the transformation matrix from OpenGL and store stuff in there, or you might have your own matrices within your application that you want to pass in as parameters. They're stored column-major, which is compatible with the way the rest of OpenGL works.

And again, you can access these as an array of column vectors. If you take a matrix and, I guess, take the first element of that, that will return a vector type. And then with that vector, you could access the first element and get a scalar. That's how it'll work.
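
A small sketch of that indexing, written as a vertex shader:

    void main(void)
    {
        mat4  m    = gl_ModelViewProjectionMatrix;  // stored column-major
        vec4  col0 = m[0];                          // indexing a matrix yields a column vector
        float f    = col0[0];                       // indexing a vector yields a scalar
                                                    // (m[0][0] chains the two subscripts)
        gl_Position   = m * gl_Vertex;
        gl_FrontColor = vec4(f);                    // use the scalar just to keep the example complete
    }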

The last data type is a sampler. Basically, it's a handle to encapsulate a texture unit. There are function calls in this language for getting a texel out of a texture unit, and a sampler is what you pass in as the first argument. Then outside of the shader, in your application, you attach a texture unit to a sampler. That's how they work. It's just a handle.

And again, they're always declared with a uniform qualifier because texture units don't change between executions of the shader or during execution of the shader. Data from these is of course read-only. So here is another sort of deviation from the C programming language. Because we now have, I guess, aggregate data types in a sense, basically things that are non-scalar, we need these constructors.

Not so much constructors in the C++ sense, but it's probably perhaps similar in idea. Anyhow, they're used to initialize an aggregate type. And of course in C you had this concept where you could just declare a variable with an equals sign and then open braces and fill in each of the components.

What a constructor is, is basically when you declare a variable, if you want to assign something, if you want to put something into it, you basically make a function call which is the same as the type, as the name of the type. And then in those brackets you fill in the values. That's how it works. So I've shown you a couple of examples of that. You can see there. I'm declaring a four component vector type. I then have to call a function vec4 with four scalars inside of that to fill it in.

And then you can see the similar example filling in an actual structure. You make a call which is the same name as the structure and you pass in the values and you can see how they actually nest. These constructors nest. That's one thing that's a little different that you'll need to get used to, but it's not hard to figure out. In terms of the operators you have, there's nothing really new here. The only thing that you'll see missing is the bitwise operations. There's the logical and and or and so forth, but not the bitwise.
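
To make the constructor syntax concrete, here is a hedged sketch; the Light structure and the particular values are invented for illustration:

    struct Light {
        vec3 position;
        vec4 color;
    };

    void main(void)
    {
        vec4  red = vec4(1.0, 0.0, 0.0, 1.0);        // constructor named after the type
        Light key = Light(vec3(0.0, 10.0, 0.0),      // constructors nest: a vec3 and a vec4
                          vec4(1.0, 1.0, 1.0, 1.0)); // inside the structure constructor
        gl_FragColor = red * key.color;              // component-wise multiply
    }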

So, swizzling, just to give an example of how that works: you can see here I declare v0 and construct it with four scalars. And you can see then, I'm assigning v0 to v1, but v0 has a swizzle on it. And you can see what that's actually doing: it's actually reordered the scalar components within that.

These swizzles can either be x, y, z, w. They have, I guess, convenient names. x, y, z, w you would probably use if it's positional data, r, g, b, a if it's color space data, and there's even s, t, p, q if it's used for texture coordinates. You can use them interchangeably, but to make your code more readable, you might want to take advantage of that.

Also, as a result of a swizzle, you can actually change the data type. You can see that when I'm assigning v0 to v3, v3 is only a vec2, so it has only two components. So you're going to need to pick the two components that you're going to assign to that, which is why I'm picking q and s. And also you can see that a swizzle can be used to produce just one single scalar.
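
Here's a rough reconstruction of that swizzling example; the actual values on the slide aren't preserved in the transcript, so these are illustrative:

    void main(void)
    {
        vec4  v0 = vec4(1.0, 2.0, 3.0, 4.0);
        vec4  v1 = v0.wzyx;   // reorders components: v1 is (4.0, 3.0, 2.0, 1.0)
        vec2  v3 = v0.qs;     // swizzle narrows the type: picks q and s, so (4.0, 1.0)
        float f  = v0.x;      // a single component produces a scalar
        gl_FragColor = vec4(v3, v1.x, f);
    }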

In terms of the operators, in a sense they're overloaded, in the sense that depending on the type that they're working with, they'll have a different behavior. Now, don't get too scared of that. It's pretty simple behavior, actually. For all the operations apart from the multiply, the behavior is very obvious: it simply applies to every component. If you do an add, A equals B plus C, it's going to perform that add for all the scalars within that.

If it's a vector, it's going to do four adds. If it's a matrix, it's going to do 16 of them. The reason multiply is a little different is that if you multiply a vector times a matrix, it does a linear algebraic multiply where it's actually going to be four dot products.

That's what it's actually going to break down into, which is really convenient, because at the beginning of your vertex programs you're always having to do these four dot products to transform your vertex. Well, now it's just a one-liner: position equals the modelview matrix times the vertex. It's pretty convenient. Matrix times matrix also works that way.
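
A brief sketch of that difference, in a vertex shader:

    void main(void)
    {
        vec4 a   = vec4(1.0);
        vec4 b   = vec4(2.0);
        vec4 sum = a + b;                           // component-wise: four adds
        vec4 eye = gl_ModelViewMatrix * gl_Vertex;  // matrix * vector: four dot products
        gl_Position   = gl_ProjectionMatrix * eye;  // the one-liner vertex transform
        gl_FrontColor = sum * 0.25;                 // vector * scalar scales each component
    }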

I talked about these predefined variables. You can see that we have predefined ones: one for the vertex color, the normal, and access to the texture units. There are varying types to communicate from vertex to fragment, again texture coordinates and the front color. And there are outputs in the fragment shader: you write your final color into gl_FragColor, and that's what will get written to the frame buffer. And then there's a bunch of uniforms that are declared, like your modelview matrix, and access to the lighting information from the GL state.

So, we've talked a little about the language here. Let's look at how, now that I've shown you the program strings, you actually load these into your GL and make use of them in your application. Well, the first thing you want to do is acquire the program strings: load them from a file, do whatever you need to do, but load them into two strings. We've got a type declared for that, a GLcharARB pointer. Load them in, and then you want to actually put them in an array of strings, and the reason for that is what I said: you can actually have multiple strings that comprise a shader, with function calls between them.

So in this case, I just have one string. So I'm obviously setting the array size to be just one, but that could be any number. The second step, you need to declare three handles, one for the vertex shader object, one for the fragment shader object, and then one for the final program object that these are both going to be attached to.

So you have those declared and then other two variables that are useful is basically a status variable and also the length of a log because when you perform compiles and links you're actually going to get information back from the compiler that you might want to check or if there's errors you're going to want to see them. So yes, there's now a log that comes back.

Your next stage: let's create the vertex shader object. You can see from the code I've got here how to do it. It's not very hard. You basically create a shader object and you tell it, you know, it's a vertex shader. glShaderSourceARB allows me basically to attach the string to that.

glCompileShaderARB will actually start the compiler going, generating the machine code for this. And at that point, you want to get back the status of the compile to see if it succeeded or not. And also you want to get back the length of an info log. And you need that length because you'll need to allocate some memory to store your log into.
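
Pieced together, the create/source/compile step described here might look like the following sketch. It assumes a current OpenGL context with the GL_ARB_shader_objects entry points available, Mac OS X headers, and a null-terminated source string passed in by the caller:

    #include <stdio.h>
    #include <stdlib.h>
    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>

    /* Create and compile a vertex shader from one null-terminated source string. */
    static GLhandleARB compileVertexShader(const GLcharARB *source)
    {
        GLhandleARB shader = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
        GLint status = 0, logLength = 0;

        glShaderSourceARB(shader, 1, &source, NULL);   /* one string here; could be several */
        glCompileShaderARB(shader);

        /* Check the compile status and fetch the info log if something went wrong. */
        glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &status);
        glGetObjectParameterivARB(shader, GL_OBJECT_INFO_LOG_LENGTH_ARB, &logLength);
        if (!status && logLength > 0) {
            GLcharARB *log = (GLcharARB *)malloc(logLength);
            glGetInfoLogARB(shader, logLength, NULL, log);
            fprintf(stderr, "vertex shader compile failed:\n%s\n", log);
            free(log);
        }
        return shader;
    }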

So, you will repeat that step for the fragment shader. It's exactly the same thing except with vertex pretty much replaced with the word fragment. So, you're going to end up with these vertex and fragment shader objects. At that point, you create a program object, you attach the vertex and fragment shader to that, and then you call link program.

And that's basically going to tie up the varying variables that you declared in the vertex and fragment shader. It does a name match on those and connects them together. And with the linker, you also need to check for errors as well, because the link stage, like in C programming, can fail, and you need to check that.

So, assuming that all succeeded and you have your program object ready to go, if you're ready to start drawing geometry with that program object, you make a call to glUseProgramObjectARB, and then you just put it in there and it will start to draw with that. Unlike ARB vertex and fragment program, there's no glEnable/glDisable. You don't enable and disable it. Rather, you use a program object, and then if you want to turn off the shader, you call it with null, and that will switch you back to the fixed-function pipeline.
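
Continuing the sketch from the compile step above, attaching, linking, and switching the program on and off might look like this; vertexShader and fragmentShader are the handles produced earlier, and error-log handling mirrors the compile check:

    GLhandleARB program = glCreateProgramObjectARB();
    GLint linked = 0;

    glAttachObjectARB(program, vertexShader);
    glAttachObjectARB(program, fragmentShader);
    glLinkProgramARB(program);                  /* ties up matching varying names */
    glGetObjectParameterivARB(program, GL_OBJECT_LINK_STATUS_ARB, &linked);
    /* ...on failure, fetch and print the info log exactly as in the compile step... */

    glUseProgramObjectARB(program);             /* draw primitives with the shader applied */
    /* ... draw ... */
    glUseProgramObjectARB(0);                   /* back to the fixed-function pipeline */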

So I mentioned that inside these languages, you can declare variables with a uniform type. They basically are your way of passing parameters into your program. For those familiar with ARB vertex and fragment program, there were environment parameters and local parameters; well, they're all environment parameters at this point, and this is how you set them. Basically, there's two calls required.

The first one is, you need to get, like, an index back, which is the position of that uniform, and the thing you pass in is the actual string that you declared in your program. It gives you an index back, and then you make a call to GL uniform, and there's several variants of that for, you know, passing in an array or some immediate inline floating point values to pass in your uniform data. The same thing is true for vertex attributes. You basically get the name. You get an index from the name of the attribute, and you just set the values.
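
As a sketch of those two calls; the uniform name baseColor and attribute name weight are made up to match the earlier sketches, not names from the session:

    /* Look the uniform up by the name declared in the shader, then set it. */
    GLint location = glGetUniformLocationARB(program, "baseColor");
    if (location != -1)
        glUniform4fARB(location, 1.0f, 0.5f, 0.25f, 1.0f);

    /* Vertex attributes work the same way: get an index by name, then set values. */
    GLint weightIndex = glGetAttribLocationARB(program, "weight");
    if (weightIndex != -1)
        glVertexAttrib1fARB(weightIndex, 0.75f);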

To take a look at the built-in functions: there are a bunch of built-in functions that are really nice. There's kind of a library of functions that comes along with this. I would encourage you, if you're interested in this language, to go and get the official orange book on the shading language, because the range of these functions is too much really for me to go through today.

But you should look through them because there are a lot of very interesting ones. Some of the ones that I've been finding quite interesting are the noise generation functions, which are really good for image processing. Basically random number generators, but there's a lot of variations in them.

There's other things like a mix function for doing a linear interpolation between two values, which is convenient. And also the texture access functions. And actually I want to take a look at some of those, because those I feel are probably the most important.

One thing that's really pretty nice about this language is we've always thought of texture access being available in fragment programs, because obviously you want to texture map your surface. Well, the shading language allows you to do texture sampling in vertex programs as well. This has some pretty powerful side effects.

You can imagine passing just a bunch of vertices into OpenGL and, in the vertex program, displacing them based on a texture map. Suddenly you could have a live video stream going into your GL, have a bunch of vertices, and have like a 3D face almost showing up. There's a lot of possibilities there.

Or you could even just store generic vertex data in your texture units. It's kind of mind bending, but the things you can do there are large. In terms of texture access, basically it's just a function call: you pass in a texture coordinate and a sampler, and basically it gives you back a four-component value, which is the RGBA value at that point.
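
A hedged sketch of the displacement idea described above; heightMap is an invented uniform name, and this assumes the implementation (hardware or the software renderer) supports texture sampling in the vertex shader:

    uniform sampler2D heightMap;   // attached to a texture unit from the application

    void main(void)
    {
        // Sample the texture in the vertex shader and displace along the normal.
        float height    = texture2D(heightMap, gl_MultiTexCoord0.st).r;
        vec4  displaced = gl_Vertex + vec4(gl_Normal * height, 0.0);

        gl_Position    = gl_ModelViewProjectionMatrix * displaced;
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }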

Really, the functionality here is the same as ARB fragment program, obviously with the addition that you can do it in the vertex program. It does add support for depth textures, where it can get a texel and automatically do a comparison against a depth buffer, which is useful for shadowing effects. So now I would actually like to switch over to show you a bit of a demonstration here.

Now, it's important to note that this is really the bleeding edge here. So it's not done yet, but I want to give you an example to sort of show you how far along we are with this. So hopefully things will go according to plan here.

So I went ahead and did a little work here to get this integrated into Shader Builder so that I could show you what we have. Let me first show you, just typing this in. I'm going to make the font here a little bigger so that you can see a little more clearly. So let's go to the fragment program here and write a simple one. Just declaring my main function right here.

And then basically in my fragment program, I'm merely going to pass through the interpolated color data coming from the vertex shader to the output. So, this is the predefined variable gl_FragColor. And you can see as I'm typing here, again, as is common in Shader Builder, it'll do the real-time syntax checking. And even though this is C, we're still able to do that, which is kind of nice. So that's the fragment shader done. That's a very simple one. And then in the vertex shader, let's pick OpenGL's shading language.
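
For reference, the whole fragment shader being typed in the demo is essentially this minimal sketch:

    void main(void)
    {
        gl_FragColor = gl_Color;   // pass the interpolated color straight through
    }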

Let's try that again. Let me just show you what I've got here. As I said, this is very recent code. Here you can see the high-level shading language being used to express a pretty simple lighting model. It's just done in the vertex program, in the vertex shader space. The fragment shader is merely passing through the color. Let me make the code a little bigger.

Hopefully everyone's able to see that okay. But you can see the kind of stuff we're doing. This is common in C, you know, basically able to nest brackets and so forth. Basically you can express these things in a lot less lines of code. If I want, you can see on this line I declare F1, and here I use it again. Well, I can go ahead and, like, move it inside.

"Delete that line. So it's... We're getting there. We're not done yet. But this feature is slated for Tiger. But we really want to get it to you as fast as we can. So we're doing our best. It's a difficult spec to implement. So we'll keep you posted on that.

So I guess I want to wrap up at this point. I hope you enjoyed hearing about this at least. And I hope that when it becomes available, you'll start to use it. And yeah, I guess at this point, I'd like to invite Travis up if... I'm sure there'll be a lot of questions.