
WWDC08 • Session 737

FxPlug Development for Motion and Final Cut Pro

Media • 1:02:48

The FxPlug architecture enables you to create extraordinary effects for use in Motion and Final Cut Pro. See what's possible with FxPlug and learn how to create your own GPU-accelerated plug-ins for filters, generators, and transitions. Understand the best practices for developing plug-ins targeting Motion, Final Cut Pro, or both. Go deeper into advanced topics with details of OpenGL usage and examples of methods to move existing code to FxPlug.

Speakers: Darrin Cardani, Paul Schneider, Pete Warden

Unlisted on Apple Developer site

Downloads from Apple

SD Video (764.2 MB)

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

And I also work on some of the internal plug-ins that we ship. The FxPlug community has really been growing by leaps and bounds over the past several years. We found that we have everything from one-person shops to large companies writing FxPlugs. And they encompass everything from hobbyists to Oscar-award-winning developers. You can see some of their effects in movies like Harry Potter, Lord of the Rings, and things like that.

So you're in good company. Plus, it's just really kick-ass to write plug-ins for a living, you know? So what we're going to talk about today is what the FxPlug SDK is. You'll learn how you can write FxPlug plug-ins. We'll talk about what's new in 1.2.2 and the soon-to-be-released 1.2.3.

I'll talk about porting your plug-ins to FxPlug from other architectures. So if you've already got some After Effects plug-ins or things like that, you can bring them over to FxPlug. Paul Schneider's going to come up and talk about working with FxPlugs in Final Cut Pro. And then Pete Warden's going to talk about some advanced topics, some OpenGL things, and how to get your things running really quickly.

So what is the FxPlug SDK? Well, it's a visual effects architecture that we created at Apple for the Pro apps, and it's hosted in Motion, Final Cut Pro, and Final Cut Express. It's Objective-C based, so if you've been writing Cocoa code and Objective-C code already, you're already familiar with how to use it, and it uses OpenGL for extreme performance.

So why would you want to write an FxPlug versus some other type of plug-in? The biggest reason is that we have a million registered users of Final Cut Pro. We probably have a few more users than that, but registered users is about a million. As I said, you can get extreme performance using OpenGL, and most of your plug-ins can probably run in real time, which your users will really love.

You produce a universal binary that works on PowerPC and Intel, and that's apparently a new feature for some other architectures. You can have custom UI in your parameter list, so if you want to have, say, curves or something that we don't supply, you can make that yourself. You don't have to if you don't want to, but the option is there. And the great thing is that you'll base it on Cocoa and you can build it in Interface Builder.

Paul will talk about that a little bit. So you can really rapidly develop nice UIs. You can get pixels in RGBA format or in Y'CbCr-with-alpha format. You can get floating point pixels. So if you're working with extended dynamic range and you need to avoid clamping and things like that, you have that option. In Motion, you can create onscreen controls using OpenGL, and that allows the user to directly manipulate whatever is on the canvas so they don't have to just set parameters.

They can actually move things around and things like that. And you can also get some 3D information about where your layers are located in space, where cameras are located, things like that. So I'm going to go over how to write an FxPlug plug-in. We'll create a new project in Xcode, we'll create some unique IDs to identify it, we'll write some source code to create our parameters and render, and then we'll build and install.

And hopefully that goes well. We'll test it and then ship it. So let's look at this. In Xcode, to create a new project, you go to the File menu and select New Project. And it brings up the new project assistant. And you can see, under the heading of Standard Apple Plug-ins, we have FxPlugs.

And there are three types of FxPlugs you can write. There's FxGenerator, which takes no image input. So if you're generating a pattern of some sort, where you don't need to modify an existing image, something like a checkerboard or stripes or a fractal, you'd create a generator.

FxFilter takes a single image input. So if you're doing a color correction or a distortion, where you take an image, modify it in some way, and output it, that would be a filter. And there are FxTransitions, and these are for moving between two images. So if you want to do a crossfade or some sort of flying in and flying out, that sort of thing, you'd have a transition.

So once you choose which one you want to write, it asks you for a name and where to put it. We'll call this My Filter. And it brings up the source code and the resources. And the first thing you're going to want to do is open up that Info.plist resource file. And in there, you'll find the ProPlug plug-in group list. And this is the list of all the groups that your plug-ins are going to go into.

You can have more than one plug-in in a binary. In this example, we'll just make one, so we only need one group. I'm going to call it My Filter. You can put them into existing groups; I think we have Color Correction and Distortion, for example, in Motion. Or you can create your own.

Most people create their own. You might have it named after your company or named after the particular product that you're working on. WhizBang Filters 1.0 or whatever. But the important thing is this unique identifier here. You generate this UUID, a universally unique identifier, by using the command-line tool uuidgen. And I'll talk about how to do that without going out to the command line in a little bit. But it's a unique number. Every time you run the command, you get a unique number. Every other developer who runs it will get a unique number, too.

So then you find the ProPlug plug-in list in the Info.plist file, and this is the list of all the plug-ins that are going to be in this binary. As I said, in this case, we just have one, and you can see we have keys for the class name that it corresponds with and the display name. And you'll notice we have a group key.

And again, this UUID is the same one that we just generated on the previous slide, and this says that this filter goes into the group with that UUID. And the reason we use IDs instead of names is that it's possible you might come up with a name like Cool Filters or something, and some other developer comes up with the same name. You want to make sure that yours go into the right place.

And likewise, you need to generate another unique ID for your actual plugin. So if you have a plugin with one name, it doesn't conflict with anyone else, or if somebody's using a localized version of the app in a different language, we don't search by name, we search by UUID.

And the easy way to do this is to write a little script and put it in the script menu in Xcode. If you look on the right side of the Xcode menu bar, there's a little scroll icon, and that's the script menu. If you open that up, you'll see a bunch of scripts that you can run. And you can create your own scripts. In this case, this is a /bin/sh script.

And it's a single command: echo -n `uuidgen`. And you can either put this in /Library/Application Support/Apple/Developer Tools/Scripts, or you can go to the script menu, where I think the bottom item is Edit Scripts. And if you do that, it will bring up a dialog right in Xcode, and you can add new ones and change the existing ones.

And once you do that, you can just select the UUID in the Info.plist file and then, from the script menu, choose New UUID or whatever you named it. And it will replace the one that's there. And as you can see, we put a comment in there that says you must change this group ID. And there's another one for the actual plug-in ID as well.

Okay, so then you're going to have to write some source code. I'm going to go into much greater detail about this in a little bit, but suffice it to say you need to add your parameters. This is, you know, if you've got sliders for controlling the amount of blur or the size of something in your filter, you need to add those to the parameter list.

And then you need to write the code that's actually going to render your output. And you can do that in either the CPU or the GPU. We highly encourage you to write both so that the host application can decide which is better for it to use and then choose the appropriate one.

So then you'll build. Just choose Build from the Build menu. And assuming you don't get any errors that you need to correct, you need to install the plug-in. And the plug-ins go in either /Library/Plug-Ins/FxPlug or in ~/Library/Plug-Ins/FxPlug.

And obviously you don't want to have to copy the built binary every time you're done building it to test it. So what you can do is create a symlink from the plug-ins directory pointing back to your built plug-in. It's just ln -sf, the path to your built plug-in, and then the path to the plug-in directory.

And then you can just build and run. And of course, you're going to want to test in both Motion and Final Cut Pro. In Motion, if you click on the Library tab in the Inspector window, you'll see, on the left-hand side, a list of categories, two of which are filters and generators.

And if you click on Filters, on the right, you'll see the groups, and these are the groups that you create and that we create internally. And you can see there's the My Filter group that we created, and if you click on that, down below, you see all the filters in that group. In this case, it's just My Filter.

Likewise, in Final Cut Pro, in the Browser window, there's the Effects tab. There are folders for video transitions, video filters, and video generators. If you twirl open the video filters, you should see My Filter in there, and if you twirl that open, you should see the actual filter inside the folder.

All right, so let's talk about the SDK and get into some of the details. FxPlugs are a form of ProPlug, and these are NSBundles that we use in Pro apps to extend the functionality of our applications. Plug-in classes are going to conform to certain plug-in protocols, and I'll describe those in a lot more detail in a second. And host applications are going to provide objects to the plug-ins so that they can talk back to the host app and ask it for things and tell it things that it needs to know.

So what are these protocols? Well, in Objective-C, a protocol is similar to an abstract base class in C++ or an interface in Java. It's a description of things that will be implemented; it doesn't have any implementation behind it. And when you create your class and say it implements a protocol, you have to write the implementations of those functions, or methods, I should say.

So in this case, we see we've got a protocol called FxBaseEffect, and it has two methods. There are actually a couple more, but these are the two important ones for now. addParameters is where you'll actually set up your sliders and checkboxes and custom parameters and things like that. And parameterChanged is where the host app will call back into your plug-in and tell it that somebody changed a parameter. You can use that for supervision: if you've got, say, a min and a max, and you always want to keep the maximum greater than the minimum, you can keep those in sync there. Now, that's the FxBaseEffect protocol.

There are two other protocols, FxGenerator and FxFilter. Actually, there's one for FxTransition as well. And you can see that they both inherit from FxBaseEffect; that's what the angle brackets there mean. FxGenerator adds the renderOutput:withInfo: method, and that's because it can create an output without any input. You'll see FxFilter has renderOutput:withInput:withInfo:, and that's because it takes an input image as one of its arguments and then generates its output. Now, both of these, when you implement an FxFilter or FxGenerator, also include the addParameters and parameterChanged methods.
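
To make that concrete, here is a minimal sketch of how those protocol declarations fit together. The method signatures are reconstructed from how they're described in the talk, so treat them as approximate and check the FxPlug SDK headers for the exact declarations.

```objc
// Sketch of the protocol hierarchy described above (simplified; the real
// FxPlug headers declare a few more methods).
@protocol FxBaseEffect
- (BOOL)addParameters;                    // set up sliders, checkboxes, etc.
- (BOOL)parameterChanged:(UInt32)parmId;  // host notifies you of a change
@end

// A generator renders output with no input image...
@protocol FxGenerator <FxBaseEffect>
- (BOOL)renderOutput:(FxImage *)output withInfo:(FxRenderInfo)renderInfo;
@end

// ...while a filter also receives an input image as an argument.
@protocol FxFilter <FxBaseEffect>
- (BOOL)renderOutput:(FxImage *)output
           withInput:(FxImage *)input
            withInfo:(FxRenderInfo)renderInfo;
@end
```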

So let's look at our addParameters method and how we'd write it. When you want to tell the host app to do something, like create a new parameter, you need to first get an object from the API manager. You'll be passed the API manager in your initialization method; it's usually called initWithAPIManager:.

And it's an object that you use to communicate back with the host. In this case, we tell the API manager: give us the API for this particular protocol. In this case, it's the FxParameterCreationAPI protocol. It's kind of a mouthful, but basically, we're going to be creating parameters, so give us the object that allows us to create parameters. Once you get it, you can call addFloatSliderWithName:, addPointParameterWithName:, addColorParameterWithName:, things like that.

And that's all you have to do to actually tell the host app that you've got a parameter, that you've got a slider or whatever. You don't need to go into Interface Builder if you don't want to. You don't need to write any action methods to get called back and things like that.
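
As a rough sketch, assuming the API manager was saved in an _apiManager instance variable during initWithAPIManager:, adding a slider might look something like this. The argument labels and the flag constant are approximated from the 1.x SDK; the shipping header has the authoritative signature.

```objc
- (BOOL)addParameters
{
    // Ask the API manager for the object that implements parameter creation.
    id<FxParameterCreationAPI> paramAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterCreationAPI)];
    if (paramAPI == nil)
        return NO;

    // One float slider; the host handles the UI, keyframing, and storage.
    [paramAPI addFloatSliderWithName:@"Blur Amount"
                              parmId:1           // our own ID, numbered from 1
                        defaultValue:5.0
                        parameterMin:0.0
                        parameterMax:100.0
                           sliderMin:0.0
                           sliderMax:100.0
                               delta:1.0
                           parmFlags:kFxParameterFlag_DEFAULT];
    return YES;
}
```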

The host app will take care of all the keyframing and everything. So obviously, there are a lot of different parameter types. There is the floating point slider. Looks like our graphics aren't showing. Oh, there we go. OK. So we've got the floating point slider, the integer slider. Sorry about that. Let's go back here.

We've got toggle buttons, which are just check boxes, an angle slider, which is a knob. We've got RGB colors with alpha, and that's kind of nice so you don't have to have two parameters, a color parameter and then an alpha slider, unless you do it all in one.

Likewise, we have 2D points, so that you can place a crosshair right on the canvas and let the user drag things around. So if they want to position some aspect of your plug-in, it gives them direct feedback, which is really nice. There's a pop-up menu, which can have any number of textual items in it.

You can have an image well. And the great thing about an image well is that it doesn't have to be a still image. It can be footage. In Motion, it can be a group which contains footage and stills and shapes and other things like that. So it gives the user a lot of power, so they're very useful. You can have text if you want to have some sort of titling plug-in.

It's really useful for that. And you can have groups of parameters, which is really nice. If you've got a lot of parameters, and you'll find that some users really like having a lot of parameters, you can group some of them together. The user can twirl closed the ones they aren't using. It cleans up the UI and makes it a little easier to use.

You can have custom parameters, as I said, and I'll talk about these in greater detail in a minute. But you can have them be anything you want. In this case, we've got this crazy text style. You could have RGB curves, things like that; there are all kinds of things you can do with that.

We have some optional parameter types you can use, histogram for doing levels and things like that. A gradient parameter. And this is really great because you don't have to have the user go out to an image editing app, create a one pixel high image, import it into the project, find it in the project, and then drop it on your plug-in.

You can just put a gradient parameter right there, and you can see they can create all kinds of gradients with that directly. Okay. So once we have these parameters, of course, we have to get their values in order to render. We might have a blur amount that we need to get or something like that. So it's the same thing as before. We ask the API manager for the object that implements the parameter retrieval API.

And once we have it, we just call getFloatValue:fromParm:atTime:. We give a unique ID to each of our parameters when we create them, and that's how we identify them. We usually just number them starting at one; they aren't UUIDs or anything like that. And you'll notice that it's got that time parameter in there.

In some other applications, when you want to get a parameter at a different time than the current time, you have to check out the parameter, do something with it, and then check it back in. And that's really cumbersome. So we made it a lot easier for you: you just pass it the time value that you want, and you get the value at that time.

Likewise, for setting values, you'll get the object which implements the parameter setting API, and then you'll just call setFloatValue:toParm:atTime:, setIntValue:toParm:atTime:, and so on. And that's how you can do parameter supervision. For example, you get the min and the max, and if the max is less than the min, set it to one greater than the min, things like that.
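
Put together, a min/max supervision pass in parameterChanged: might look roughly like this. The kMinID and kMaxID constants are this sketch's own parameter IDs, the time handling is simplified, and the method names are paraphrased from the talk.

```objc
- (BOOL)parameterChanged:(UInt32)parmId
{
    id<FxParameterRetrievalAPI> getAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterRetrievalAPI)];
    id<FxParameterSettingAPI> setAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterSettingAPI)];

    double minVal = 0.0, maxVal = 0.0;
    [getAPI getFloatValue:&minVal fromParm:kMinID atTime:0.0];
    [getAPI getFloatValue:&maxVal fromParm:kMaxID atTime:0.0];

    // Keep the maximum strictly greater than the minimum.
    if (maxVal <= minVal)
        [setAPI setFloatValue:minVal + 1.0 toParm:kMaxID atTime:0.0];

    return YES;
}
```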

Okay, so let's look at custom UI in a little more detail. You can assign a custom UI to either a standard parameter, like a floating point parameter, or to a custom parameter, which is some custom data that you will create yourself. For example, if you're doing curves, we don't have any sort of data type that works with curves, so you'd have to create that. They're going to appear in the inspector, just like all the other parameters. They'll be in the parameter list.

They can be resizable, and it's going to be a subclass of NSView. So if you've been writing Cocoa code, you already know how to write an NSView. And it will act just like an NSView that was created by the application. And you can create them programmatically, which I'll show, and Paul's going to show how to do them in Interface Builder. So let's look at some of the things that we can do.

Okay, so the basic tasks are: we have to tell the host app that we can host a custom parameter view, and we do that by adding the FxCustomParameterViewHost protocol to our class declaration. We'll need to tell the host app that we're going to add the custom parameter in the addParameters method, and then we'll have to write the method which actually creates the view and returns it to the host app.

And then we'll have to write code to draw the view, respond to events like mouse down, key down, things like that. And then we'll write code to modify the data if there's custom data associated with it. So let's look at that. This is a typical class declaration. You can see we've got the interface, MyPlug; it's a subclass of NSObject, and it implements the FxFilter protocol because it's a filter. Well, all you have to do is add FxCustomParameterViewHost to that protocol list, and that tells the host application that you're able to host a custom parameter view.

So in addParameters we need to actually tell the host app which of our parameters are going to be custom parameters. And we do that by getting the FxParameterCreationAPI object and then calling addCustomParameterWithName:. We pass it a parameter ID; you can have more than one custom parameter.

So later, when you go to create them, you'll get passed the ID of the one you're supposed to create. You pass it a default value; in this case, we've got a type that we've created called "point array." And then you have to pass it the custom UI flag, kFxParameterFlag_CUSTOM_UI, and that tells it that this custom parameter has custom UI.

Then you have to actually create it or read it in from a nib or whatever you're going to do. And you do that in your createViewForParm method. And as you can see, it passes you the ID of the parameter that it wants you to create. It'll get called once for each parameter that has custom UI.

So in this case, we check it to see which one we're creating. We're creating the myCustomItemID. And we allocate our custom view and then call its initialization method. And we pass it the API manager so it can do some things later. And then we simply return the NSView, the pointer to the NSView that we get back.
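
Here's a minimal sketch of that programmatic path. MyCustomView and kMyCustomItemID are names invented for this example, and the createViewForParm: signature is paraphrased from the talk.

```objc
- (NSView *)createViewForParm:(UInt32)parmId
{
    NSView *view = nil;
    if (parmId == kMyCustomItemID)
    {
        // Allocate our own NSView subclass and hand it the API manager so it
        // can talk back to the host later (startAction:, get/set values, etc.).
        view = [[MyCustomView alloc] initWithFrame:NSMakeRect(0, 0, 200, 100)
                                        apiManager:_apiManager];
    }
    return view;   // hand the view back to the host
}
```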

And that's all there is to actually creating it. Now you've actually got to do all the things that you would do for an NSView. So you're going to have to override the drawRect method to draw whatever it is you're drawing. You're going to have to override all the NSResponder methods, things like mouseDown, mouseDragged, keyDown, and respond to all those things. If you want to assign contextual menus, you can do that. And the view controller that you write is going to want to change the value of the custom data. It's going to want to tell the custom parameter view to update and things like that.

In order to communicate back with the host, there's a little problem here, though. And that's that because you're an NSView and it's in the application just like any other NSView, the OS calls it directly to do things like redraw and respond to events. So the host app doesn't know that you've been called to do that. So if you start asking for parameters, it gets a little confused. So we have the custom parameter action API in order to handle that.

And this is just like anything else you get from the API manager: you ask for the object which implements that API. And it has a method called startAction:, and what this says is, we're going to start modifying some parameters. And you pass it a pointer to the custom view.

And it says this is the view that's going to be modifying parameters. The other thing is you've been called by the OS, as I said, so you don't know what the current timeline time is because usually that's passed to you as a parameter to your render method or your parameter change method or whatever.

Actually, not parameterChanged:, just your render method. So in order to get the current timeline time, you have to ask the action API for it.

Then you can do all the usual things you'd do as if it was any other part of your plug-in. You can get and set parameter values and things like that. And when you're done, to clean up, you just call endAction: and pass it a pointer to that NSView again.
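
So a mouse handler in the custom view ends up bracketed like this. This is a sketch: the action API method names are paraphrased from the talk, and kMyParmID stands in for a real parameter ID.

```objc
- (void)mouseDown:(NSEvent *)theEvent
{
    id<FxCustomParameterActionAPI> actionAPI =
        [_apiManager apiForProtocol:@protocol(FxCustomParameterActionAPI)];

    // Tell the host this view is about to modify parameters...
    [actionAPI startAction:self];

    // ...ask for the current timeline time, since the OS called us directly
    // and no time was passed in...
    double now = [actionAPI currentTime];

    id<FxParameterSettingAPI> setAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterSettingAPI)];
    [setAPI setFloatValue:42.0 toParm:kMyParmID atTime:now];  // example edit

    // ...and clean up when we're done.
    [actionAPI endAction:self];
}
```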

Okay, so if you've got custom data associated with your custom parameter view, you need to write some code to deal with that. You're going to need to make it an NSObject subclass, and it's going to need to implement the NSCoding protocol. One of the things you're going to have to do is create some default data for it. In this case, we've created a class method called emptyData that returns some empty data that we can pass as a default.

And the other thing you're going to want to do is pass the not-animatable flag because currently custom data is not key-frameable. Hopefully we'll get that in the future, but we seem to keep putting it off for some reason. But as I said, you need to implement the NSCoding protocol for your data.

And what that does is it allows us to have you serialize your data for writing out to the user's document. So when the user saves, we'll call your encodeWithCoder: method. You'll serialize your data just like you would in any other Cocoa app. And then later, when we read in the user's document, we'll ask you to deserialize it, and we'll call your initWithCoder: method to do that.
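
A bare-bones custom data class along those lines might look like this. MyPointArray and its single ivar are invented for the sketch, and it uses era-appropriate manual retain/release.

```objc
@interface MyPointArray : NSObject <NSCoding>
{
    NSArray *_points;
}
+ (id)emptyData;   // default value handed to addCustomParameterWithName:...
@end

@implementation MyPointArray

+ (id)emptyData
{
    return [[[MyPointArray alloc] init] autorelease];
}

// Called (via the host) when the user saves their document.
- (void)encodeWithCoder:(NSCoder *)coder
{
    [coder encodeObject:_points forKey:@"points"];
}

// Called when the host reads the document back in.
- (id)initWithCoder:(NSCoder *)coder
{
    if ((self = [super init]))
        _points = [[coder decodeObjectForKey:@"points"] retain];
    return self;
}

@end
```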

So that's parameters, and custom parameters in particular. Let's look at rendering. Now, all rendering happens with an FxImage. FxImage is a base class, and there are two subclasses of it: FxTexture, which is a thin wrapper around OpenGL textures, and that's how you do hardware-accelerated stuff, and FxBitmap, which is basically a wrapper around a RAM-based bitmap image, and that's convenient for using code that you might have already written, say, for another plug-in architecture.

Both FxBitmaps and FxTextures can have a number of different pixel formats. You can get 8-bit integer ARGB. For textures only, you can get 16-bit float ARGB. And for both of them, you can get 32-bit float ARGB. If you want, you can get 8-bit integer r408 or 32-bit float r4fl, and those are Y'CbCr formats. They're described in Letters from the Ice Floe, numbers 19 and 27. If you go to developer.apple.com and search for "Letters from the Ice Floe" or just "Ice Floe," it should pop those up. They're QuickTime documents.

Bitmaps have pre-multiplied alpha, so keep that in mind: you may need to un-premultiply them, and just remember to re-premultiply them when you're done. And they have a rowBytes parameter. I think Paul will get into this in a little more detail, but basically, it tells you how many bytes there are per scan line. And if you're looking at something that's, say, 640 by 480 in an 8-bit integer ARGB format, you may think, okay, 640 times 4 is 2,560 bytes wide.

But you get the rowBytes and you see, oh my gosh, it's like 5,120. What's that about? Well, if you've got an interlaced frame where you've got an upper field and a lower field, we may pass you a pointer to the beginning of the frame and tell you that the line is 640 pixels wide but 5,120 bytes per row.

So just process the first 640 pixels, skip the rest of the row, and go to the next line. But don't assume that's what we're going to do, because we don't always do that. The different applications work differently; some applications do that, some don't. So just always obey the rowBytes and you'll be fine.
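
In code, obeying rowBytes looks something like this sketch. The FxBitmap accessor names here (dataPtr, rowBytes, width, height) are approximations, so check the SDK header for the real ones.

```objc
UInt8 *base       = (UInt8 *)[bitmap dataPtr];   // accessor names approximate
size_t rowBytes   = [bitmap rowBytes];
NSUInteger width  = [bitmap width];
NSUInteger height = [bitmap height];

for (NSUInteger y = 0; y < height; y++)
{
    // Step by rowBytes, never by width * 4: the row may be padded, or may
    // even stride across the other field of an interlaced frame.
    UInt8 *pixel = base + y * rowBytes;
    for (NSUInteger x = 0; x < width; x++)
    {
        // 8-bit ARGB: pixel[0]=A, pixel[1]=R, pixel[2]=G, pixel[3]=B
        pixel[1] = 255 - pixel[1];   // example operation: invert red
        pixel += 4;
    }
}
```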

Okay, FxTextures: as I said, they're a thin wrapper around OpenGL textures. You want to always get the texture coordinates. There's a method for getting those, and your texture may be located somewhere in texture space that you're not expecting. It's not necessarily oriented at the origin or anything like that, so you need to make sure you get the coordinates. You can do a bind and enable to start using them. And you can get the texture ID; if you need to pass that to, say, an OpenGL routine that needs a texture ID, you can get that.

They also have pre-multiplied alpha, and I just want to give you one small warning here. You probably won't go creating a lot of FX textures yourself. They're not real useful for doing things like communicating with the OS, and generally it's hard for the host app to tell who owns the texture, so it doesn't know when it can delete it if you do that.

So if you're just going to be creating textures for your own use and talking to OpenGL, you can just create regular OpenGL textures with glGenTextures or whatever you use. But when you communicate with the host app, it'll give you FxTextures, and it may occasionally ask you to give it an FxTexture.

But in general, it's just too thin of a wrapper to do a lot with it other than communicate with the host. Okay, there's a whole bunch more FxPlug functionality that we don't have time to get into today. I think Paul's going to talk a little bit about the FxTiming API.

You can request input images at different times using the temporal image API or the temporal transition image API, and that's good for doing things like retiming, or echo or feedback effects. In Motion, you can get 3D layer and camera position and orientation information.

You can create on-screen controls, as I said. In all the host apps, you can get host information, so you can tell if you're running in Final Cut Express, Final Cut Pro, Motion, and you can get plug-in versioning information. So if you created version 1.0 of your product and somebody applies it in their document, and then later they update to version 2.0 and they open that document, you can see that the data for your plug-in was created with 1.0, and you may need to add some information to it or remove some information or whatever.

[Transcript missing]

So we can use this to our advantage. In After Effects, you call the PF_ADD_FLOAT_SLIDER macro to create a new floating point slider. In FxPlug, you call the parameter creation API's addFloatSliderWithName: method. They've got PF_ADD_POINT; we've got addPointParameterWithName:, things like that. So why don't we wrap these up in some generic wrapper and call it from both plug-ins? In this case, what I'm going to do is create some FX helpers.

And so I've prefixed everything with fxh, for FX helper. Because Objective-C is a superset of C, we can use straight C in this. So we're going to create a single function which will add a float slider, and it will do the appropriate thing in After Effects and the appropriate thing in FxPlug.

So this is what our header is going to look like. We're going to have an fxhCreateFloatSlider function. The first argument is going to be a pointer to some app-specific data. Because each application has different requirements for how it has to create the parameter, we're going to pass the appropriate data in that pointer.

And then you can see the things that are going to be the same among them: the parameter name, the parameter ID, the default value, the min and max, things like that. Likewise, we're going to have to have a function to actually get that value back when we go to render. So we're going to call fxhGetFloatParam, and we're going to pass it that app-specific data again, the ID, and the time that we want.
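
The header might look roughly like this. The fxh names are the wrapper convention described in the talk, not part of either SDK, and the exact argument list is a reconstruction.

```c
/* fxh_helpers.h -- cross-API helper layer sketched in this session.
 * appData is cast to PF_InData* when built for After Effects, and to the
 * parameter creation/retrieval object when built for FxPlug. */

int fxhCreateFloatSlider(void       *appData,      /* app-specific pointer */
                         const char *name,         /* display name         */
                         int         parmId,       /* our own parameter ID */
                         double      defaultValue,
                         double      minValue,
                         double      maxValue);

int fxhGetFloatParam(void   *appData,              /* app-specific pointer */
                     int     parmId,               /* which parameter      */
                     double  time,                 /* when to sample it    */
                     double *outValue);            /* the result           */
```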

So here's what it's going to look like in After Effects. The first thing we're going to do is cast that app-specific data to a PF_InData pointer, and that's the gigantic structure that they send to your plug-in. And then we create a PF_ParamDef structure, we clear it, and we call the PF_ADD_FLOAT_SLIDER macro. And that's all there is to it for adding a floating point slider.

Of course, it's very similar in FxPlug. We cast that app-specific data to be the object which implements the parameter creation API, and then we call addFloatSliderWithName:, pass it the name, the parameter ID, the default value, the min and the max, and all that stuff.

So we can write a single function called SetupGammaParameters, if this is a gamma filter, for example, and call that from both applications. So this is how we'd call it in After Effects. We just call SetupGammaParameters, and we pass it the inData pointer. In FxPlug, we call SetupGammaParameters, and we pass it the pointer to the object that implements the parameter creation API. And then this is what it actually looks like. It calls the helper function that we wrote earlier. And when we compile for After Effects, it'll call the After Effects version of that function. And when we compile for FxPlug, it'll call the FxPlug version of that function.

So in summary, you can see that there's a pretty good correspondence between the two APIs, and other image and video processing APIs are very similar as well. You're going to implement either a filter, a generator, or a transition, and you can write code that will work both in FxPlug and in After Effects and other architectures as well pretty easily. So I'm going to turn it over to Paul, and he's going to talk about advanced topics with Final Cut Pro. So take it away, Paul.

Hi, everybody. I'm Paul Schneider. I'm an engineer in Final Cut working on the plug-in hosting. And today I'm gonna talk a little bit about some more sort of, uh...

[Transcript missing]

So I'll just go through these one by one. The first is the aspect ratio. You can see that when we're displaying it on the computer monitor, we'll scale horizontally when we show it to the user to simulate the actual device that they'll be looking at it on. So you need to be prepared to handle aspect ratio. And how do you do that in FxPlug? Well, we tag each FxImage with an aspect ratio. So you can simply ask your inputs and your outputs what their aspect is and then adjust your rendering accordingly.

The next one is the interlaced processing. We'll ask you to render one field of video and then a second field of video, and we'll interlace that together. So you can see in this picture, the first image is red, the second is blue, and we combine them. So you need to be ready for that.

There's a difference between Final Cut and Motion here as well. In Final Cut, the fields are half height, because we combine two of them to form a frame. Motion will actually take the half-height fields and scale them to full height, so it's a little easier to process, but not quite as fast.

So you need to be ready for handling fields, and you need to be ready for the difference between Final Cut and Motion. And this is how you do that inside the FxPlug API. Similar to aspect ratio, each image is tagged with a field, which will be either the lower field, the upper field, or a full progressive frame if you're working in a progressive sequence. And then we have the host capabilities object, which gives you information about the host you're running in. And one of the things you can ask is whether this host upscales fields.

If this is yes, then you're running in an app like Motion, where the fields will be stretched to full height for you to process. If this is no, then you're running in an app like Final Cut, where the fields will just be the raw field data. So you need to handle both cases.

Then finally, the low-resolution proxy. You can see here in this example, we've asked you to process an image that's half the size, and then we'll scale that up for display, just to make playback faster or something like that. And you can tell this is going on by looking at the scale info in the render info struct that we pass to you during render. So we have the scale X and scale Y. If we're rendering 1:1, these will be 1.0. If we're rendering half size, it'll be 0.5, et cetera.

So that seems like a lot to keep track of, but you can combine them all into just a single scale factor that you use to adjust your rendering. Here's a quick snippet of code that shows how to do this. It's taken directly from an example plug-in that ships with the SDK, which I'll be talking about in a minute. So don't worry about typing this down.

But you can see I'll start out with a uniform 1:1 scaling. I'll take the aspect ratio of my output and adjust my horizontal scale by that. Then I'll check to see if I'm working in a half-height field, and that's a two-part check.

First, check to see if I'm processing a field right now. And second, check to see if the fields in this application are half height. If both of those things are true, then I'll adjust my vertical scaling by 0.5. And then finally, I'll fold in the low-resolution proxy scale, if there is any.
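
Reconstructed in code, those three steps look roughly like this. The accessor and constant names are approximations of the SDK example being described, not verbatim from it.

```objc
double scaleX = 1.0, scaleY = 1.0;   // start with uniform 1:1 scaling

// 1. Pixel aspect ratio: adjust the horizontal scale by the output's aspect
//    (the direction depends on your drawing setup).
scaleX /= [outputImage pixelAspectRatio];

// 2. Half-height fields, a two-part check: are we rendering a field at all,
//    and does this host leave fields at half height (Final Cut does)?
BOOL renderingAField  = (renderInfo.field != kFxField_NONE);
BOOL fieldsHalfHeight = ![hostCapabilities hostUpscalesFields];
if (renderingAField && fieldsHalfHeight)
    scaleY *= 0.5;

// 3. Low-resolution proxy: fold in the render scale passed by the host.
scaleX *= renderInfo.scaleX;
scaleY *= renderInfo.scaleY;
```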

So I'm just going to step over to the demo machine; this is a lot easier to see than it is to talk about. You can see here I've got a plug-in called Scrolling Rich Text. We ship this with the SDK; it's one of our examples. And this is just a simple plug-in for example purposes.

If I scrub through, you can see we've got an easy title crawl here. This also demonstrates the sort of gotchas I was talking about. This plug-in correctly handles aspect ratio, half-height fields, and the render scale. But it also allows you to turn handling for each of those situations off, so you can see what difference it makes.

For example, here, if I turn off handling of the aspect ratio, you can see the text gets squished. This is an NTSC sequence, which has an aspect of 0.9; it's pretty close to 1.0, so the squishing isn't that obvious. If I was working in HD or with an anamorphic sequence, the squishing would be pretty severe. It would be obviously wrong.

But when I turn the aspect ratio handling back on, the text looks correct. The second one is the half-height fields. If I turn off support for that, the text is twice as big as it should be. This is because I'm drawing that text into each field, and Final Cut is interlacing them together. So if I draw the text at full height twice, it's going to be twice as tall as it should be. I need to draw my text at half height into each field, so that when the two fields are combined, it looks correct.

And this third one is render scale. And this one is something a lot of people miss, because if you drop support for it, it doesn't seem like there's any problem right away. But when you go to play back, suddenly the text is twice as big as it's supposed to be. And this is because, at the default settings, Final Cut will drop down to half resolution during playback for an effect it doesn't have any profiling information for; it doesn't know how fast it is.

So when we drop down to half resolution and I draw my text at full size into the half-resolution image, Final Cut's going to scale that image up. It's going to double it in size, and the text is going to be twice as big. What I need to do is look at my render scale, see that I'm currently rendering at half resolution, and then draw the text smaller, so that when it gets scaled up, it'll look correct.

And you can see here, that's exactly what happens. So I'm just going to go through a couple of the features of this plug-in, which, as I said, it comes with the SDK. It's sort of a grab bag plug-in that shows you how to do a lot of things that people have asked about on the mailing list.

So as I said, this is a fairly simple title crawl. It's a generator. One nice thing about this is that the title crawl animation takes up the duration of the item in the timeline. So at the first frame, my text is all the way at the bottom of the screen. At the last frame, text is all the way at the top of the screen. This is true no matter how much time the generator takes up. If I want to trim it in the timeline, the animation stays the same.

The crawl speeds up, so that I'm at the bottom of the canvas here and at the top of the canvas here. And you can see I've got some controls here. I can make the text bigger or smaller; I can change the size and the value. These are standard FxPlug parameter types; I didn't have to write any special code for this. And you can imagine there's a lot more I could do with this text. I could change the typeface. I could maybe change the justification.

I didn't want to add all of those controls to an example plug-in. What I decided to do instead was if you want more rich formatting, just allow the user to pick an RTF file somewhere on the disk and use that. So you can see up here, if I change the text type from simple text to a text file, all of these controls go away, and they're replaced with a new type of parameter, which allows me to choose a file. Now, this is not something that's built into the FxPlug API. This is a custom parameter that I created myself.

So I created this UI in Interface Builder, and you can simply, you know, choose a text file, bring it into Final Cut, and here it is. So this is a lot more complicated text than, um...

[Transcript missing]

Save it. Go back to Final Cut. Scroll down so I can see that change, and there it is. So this was pretty powerful and not very much code. I'm going to go back to the slides now and talk about how the plug-in was written.

So there are a couple of features of the plug-in that people asked about on the mailing list that this tries to demonstrate. The first is the FxTiming API, which I believe we added in FxPlug 1.2. The Timing API gives you a lot of information about the context of your effect in the timeline. It'll tell you the frame rate of the timeline you're in, and the start time and the duration of your effect if you're a generator, or even if you're not, if you're a transition, say. It'll also give you information about your input.

So if you're a filter, you can find the properties of the clip that you've been applied to. It'll also let you convert between timeline times and item times. If you've been applied to a clip that doesn't have the same frame rate or field order as the timeline you're in, this allows you to do some conversion.

Now, this plug-in, you know, obviously just uses it in a simple way just to find out the duration of the generator and the timeline. So here's the code to do that. The first thing I do is get the FxTiming API object from the API manager, similar to how I get everything else in FxPlug. And then I just ask the Timing API for the duration of my effect.

And once I know how many frames the effect takes up, it's a pretty simple calculation to look at the current frame I'm rendering, find out how far into the total number of frames that is, and adjust the scrolling appropriately.
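
A sketch of that calculation; durationOfEffect and the surrounding variable names are approximated, and the real method lives in the FxTimingAPI header.

```objc
id<FxTimingAPI> timingAPI =
    [_apiManager apiForProtocol:@protocol(FxTimingAPI)];

// How many frames does this generator occupy in the timeline?
double duration = [timingAPI durationOfEffect];   // name approximated

// Turn the frame being rendered into a 0..1 fraction of the whole effect.
double fraction = (duration > 0.0) ? (currentFrame / duration) : 0.0;

// Drive the crawl with it: bottom of the canvas at 0.0, top at 1.0.
double yOffset = fraction * (canvasHeight + textHeight);
```

The next thing you may have noticed is that this plug-in features dynamic parameter display. So you change that pop-up, some parameters hide, and some new parameters show that were previously hidden. I'm just going to talk about that quickly.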

So if you want to add dynamic parameter display to your plug-in, the first thing you'll probably want to do is create some of your parameters initially hidden. And you can do this by passing the kFxParameterFlag_HIDDEN flag when you create your parameters inside your addParameters method.

If you've got a hidden parameter, you'll probably want to show it eventually. You can do this inside parameterChanged. Here's just an example block of code that sort of simulates that menu tracking code. So in my parameterChanged method, I check to see if the parameter that changed was the hideShow pop-up menu. If it was, I get the current value of the pop-up. And if the pop-up is currently set to hide, I'll set the hidden flag on the parameters I want to hide. If the parameter is set to show, I'll clear that hidden flag, which makes the parameters visible.
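
A sketch of that parameterChanged: logic follows. The call for changing a parameter's flags at runtime is paraphrased here as setParameterFlags:toParm:, and the IDs are invented; consult the FxParameterSettingAPI header for the real method.

```objc
- (BOOL)parameterChanged:(UInt32)parmId
{
    if (parmId != kHideShowPopupID)
        return YES;   // we only supervise the pop-up menu

    id<FxParameterRetrievalAPI> getAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterRetrievalAPI)];
    id<FxParameterSettingAPI> setAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterSettingAPI)];

    int choice = 0;
    [getAPI getIntValue:&choice fromParm:kHideShowPopupID atTime:0.0];

    // Menu item 0 = hide, anything else = show.
    FxParameterFlags flags = (choice == 0) ? kFxParameterFlag_HIDDEN
                                           : kFxParameterFlag_DEFAULT;
    [setAPI setParameterFlags:flags toParm:kDetailParmID];
    return YES;
}
```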

[Transcript missing]

Another thing that you noticed was I had some custom UI, and rather than create my own custom view class and implement the drawing method and the mouse tracking methods, I decided I didn't need that. I just wanted to use some standard AppKit controls and stick them in a view. And the best way to do that is with Interface Builder. This is something that we hadn't shown you how to do in our examples before, and this one shows you how to do it.

So the first thing you do is create your nib and tell Interface Builder about your plug-in class. The name of the class that implements this plug-in is ScrollingRichText. You can see I've told Interface Builder that this class exists and that it's a subclass of NSObject. I'm going to tell Interface Builder a little bit about the class. The class has one outlet, which is a pointer to a custom view.

And two actions, choose text file and edit text file, which correspond to those two buttons you saw in the interface. The next thing to do is actually create the UI. So I've added a view to my nib, and I've dropped in two buttons and some static text, standard Cocoa controls.

Now I'm just going to wire these things up. Oh, no, I'm sorry. The next thing you want to do is change the files owner of the nib to be an instance of your plug-in class. So now this nib is owned by an instance of the scrolling rich text class.

Now I can start wiring things up. So I'll connect the file owner's outlet to the view I've created. So this scrolling rich text's custom view points to the view that I've created. And similarly, I'll connect the buttons to the actions of the file's owner. So the Choose button will trigger the Choose Text File action.

The Edit button will trigger the Edit Text File action. Now I'm ready to write just a little bit of code. Here's the declaration for my plug-in class. You can see it's an FxCustomParameterViewHost, because it does have custom UI. And there's the outlet and the actions that I added in Interface Builder.

Now, in my addParameters method, I'll create a custom parameter, and I'll pass the custom UI flag. You can see the default value of this parameter is just an empty NSString. Darrin talked about how, when you're creating custom parameters, you can create your own class for the data type, and you can do anything you want as long as it conforms to NSCoding.

One of the nice things about FxPlug is that AppKit actually ships with a lot of classes that conform to NSCoding that you can use directly if they meet your needs. So if you want to use NSString, NSNumber, NSData, you can. You can just create a custom parameter, use that as the data type, and you don't have to write any serialization code.

So now in my createViewForParm method, this is when I actually create-- this is when the host asks me to create the view that'll be used for the custom parameter. This is pretty easy, 'cause I did all the work in InterfaceBuilder already. So if I'm being asked to create the UI for my text file path param, I'll load the Nib for that custom view I created, and I'll pass myself as the owner. Remember, the owner of the Nib was an instance of the scrolling rich text plugin. This plugin is that class, so I just pass myself.

And when the Nib is loaded, the outlets and the actions are wired up. So by the time the Nib is loaded, my custom view pointer points to a valid NSView instance. So all I have to do is return it, and I'm done.
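
That whole nib-loading createViewForParm: fits in a few lines. A sketch, with the nib name and parameter ID invented for the example:

```objc
- (NSView *)createViewForParm:(UInt32)parmId
{
    if (parmId == kTextFilePathParmID)   // our custom-UI parameter
    {
        // The nib's File's Owner is an instance of this class, so loading it
        // with self as owner wires up the customView outlet and both actions.
        [NSBundle loadNibNamed:@"TextFileView" owner:self];
        return customView;               // IBOutlet NSView *customView;
    }
    return nil;
}
```

And I'll just go back to the demo machine quickly to show you that one more time.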

You can see I've got my standard controls here. If I change the menu, the plug-in's parameter change method is called, and it responds by setting the hidden flag on some parameters, clearing them on others. So this new custom parameter is now visible, and here's the UI that I created in Interface Builder. Everything is wired up with very little code. With that, I'm going to hand it over to Pete Warden, who's going to talk about advanced GPU topics inside Final FxPlug.

Thanks, Paul. So I'm Pete Warden. I've been doing a lot of filter work over the years. And the particular advanced topics I'm going to be talking about are the ones that relate to GPU programming. In particular, all the stuff you need to do to use raw OpenGL to write GPU plug-ins.

Why is it so tricky? OpenGL is not designed for image processing. It's an API that's designed for 3D rendering. It's possible to do image processing with it. It's possible to use the GPU for image processing. But you have to use some fairly advanced and obscure features, which have been hard to get documentation and examples on.

The first thing I should say is that Apple has recognized this problem over the years and actually has a couple of very nice APIs for doing a lot of GPU processing, like Quartz Composer and Core Image. And we have quite a few developers who are using these pretty successfully to get the performance advantage that you can get out of the GPU without having to delve quite so deeply into some of the dark and dusty corners of the OpenGL API.

But there are still a lot of valid reasons why you may need to write directly to OpenGL. So to answer developers' questions, we actually put one of our internal filters into the SDK, demonstrating the techniques that we use to get very fast image processing working on the GPU.

Probably the thing we get the most questions about, and that's most confusing, is how you do intermediate rendering on the GPU. On the CPU, it's very simple: you have a system memory bitmap that you can just access as a piece of memory. On the GPU, you need something where you can actually render into a texture and then use that texture as a source for subsequent operations.

Traditionally, that meant using pbuffers. There are some other alternatives like FBOs that I'm going to briefly talk about. We don't have any help from the host for creating or managing these objects yet. That's something we've had a lot of requests for, and we're definitely looking at what we can do there. But for now, you should use the sample code that we ship and actually look at the pbuffer code that I'm going to talk about a bit and use that as a basis. for your own plugins.

So what is a pbuffer? It's a way of rendering into a texture. The key attribute is that it stays in VRAM; everything's still happening on the card. It's very tempting, when you're first looking at doing GPU programming, to try to pull image data down from the card to do just some CPU processing on it to get some intermediate images, but that completely kills performance. You lose all of the parallelism that you can get from running both CPU and GPU code.

FBOs are very similar under the hood to pbuffers, but they offer a much more modern and, in a lot of ways, easier-to-use interface. We're moving over to using FBOs, but we still have a lot of code using pbuffers, and our example will show you how to use pbuffers to do this stuff.

So this is the list of functions to do with pbuffers that we actually give you in the sample code. I'm not going to spend too much time talking about them, but... The create and destroy functions are hopefully fairly self-explanatory. That's how you create these images that you're going to be using on the GPU. When you want to actually start drawing into the texture, you need to redirect all of your GL commands into that area of texture memory.

And you do that by calling pbuffer begin, which under the hood issues the right commands to redirect the subsequent GL commands into that area of VRAM. When you're done with all of the drawing operations you want to end up in that texture, you call pbuffer end, and then you go back to drawing into whichever context you were in before you started drawing into the pbuffer. When you want to use that image that you've drawn into as a texture source, for example, for a fragment program or other shader, you call pbuffer use.

And this is very, very similar to the standard bind and enable on an FxTexture, or just to binding and enabling a raw GL texture. You can use it exactly the same way if you've got multiple textures. If you've got multiple texture units, you can set glActiveTexture to control which unit you're going to use. It looks exactly like a normal GL texture.
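
The overall pattern with the sample's helpers ends up looking like this; the function names here paraphrase the sample code rather than quoting it.

```objc
// Intermediate rendering on the GPU with a pbuffer: draw into it, then use
// it as a texture source for the next pass. Everything stays in VRAM.
MyPbuffer *scratch = pbufferCreate(width, height);

pbufferBegin(scratch);          // redirect GL drawing into the pbuffer
drawFirstPass();                // any GL commands: quads, shaders, etc.
pbufferEnd(scratch);            // restore the previous drawing context

pbufferUse(scratch);            // bind/enable it like any GL texture
drawSecondPassUsingScratch();   // e.g. sample it from a fragment program

pbufferDestroy(scratch);        // release the VRAM when finished
```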

The next most-asked topic is shaders. How do you write a pixel shader to do some interesting operation that isn't supported by the standard GL fixed-function pipeline? We've gotten a lot of mileage out of ARB fragment programs. Again, this is kind of similar to the difference between pbuffers and FBOs.

GLSL is a much higher-level language that has been introduced over the past couple of years. We're starting to use it, and we're having a lot of success with it. For new developers, it would definitely be a good idea to consider it; it's a lot easier to get into. But we still have some reasons why we go back to ARB fragment programs, for example, running on very old hardware.

Some of the very early ATI cards which can run pixel shaders have very low limits on the number of instructions they can actually run. They can only run 64 arithmetic instructions, for example. And it's a lot easier to predict whether your program is going to run on that sort of hardware if you're writing in something that looks more like assembler and has a closer correspondence to those actual limits.

In the code sample, we have just a couple of helper functions. The first thing you should always remember is to check that your program can actually be loaded. Because of these limits, programs that run on one card may not run on a whole bunch of other cards that you test. So it's going to be very hard to tell why things are going wrong, even with error reporting from your QA or your users,

unless you have some explicit checking to see if you can run the fragment program. And quite often, in those cases, if you've got a CPU implementation, you can just fall back to running the CPU implementation instead when you're running on hardware that isn't capable.
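
The load-time check itself uses standard ARB_fragment_program calls, so a sketch like this is close to what such helpers do; the fallback flag and programText variable are this example's own.

```objc
GLuint programId = 0;
glGenProgramsARB(1, &programId);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, programId);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(programText), programText);

// If the driver rejected the program (bad syntax, or over this card's
// instruction limits), the error position will be something other than -1.
GLint errorPos = -1;
glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errorPos);
if (errorPos != -1)
{
    NSLog(@"Fragment program failed to load at %d: %s", errorPos,
          (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
    useCPUFallback = YES;   // run the CPU implementation instead
}
```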

[Transcript missing]

If you really, really need them, or you've got some legacy code that relies on them, it is possible to emulate them at the cost of some performance. And the directional blur sample code that we now ship with the FxPlug SDK demonstrates how to do the four-tap filtering within your fragment program to emulate bilinear filtering.

We don't have a sample that demonstrates how to emulate blending, but the basic idea is that you draw one polygon into a pbuffer, then you finalize that pbuffer, and then you use that as a texture source to do the blending of the next polygon. So you end up with a pass per polygon, which is very inefficient from the performance side.

So Paul mentioned already that to give users good performance, we use low-resolution proxies. And usually that's just a case of scaling parameters. If you're just drawing into a single context and you don't have any intermediate buffers, you can just apply the scale, and a possible offset if there are any fields involved.

But when you've got intermediate buffers, when you've got p-buffers, it's very, very easy, and I've made this error myself quite a few times, to not take that into account when you're doing your intermediate buffer rendering. So you'll get a low-res input image, you'll accidentally scale it up into a full-size intermediate p-buffer, and then do all of your processing on that large intermediate p-buffer, and then you'll end up scaling it down at the end, and the results will look fine.

But the users won't be getting the performance that they really should be getting when you go down to a low-resolution proxy. So just kind of keep an eye out for that. If you're jumping down to low resolution and your plugin isn't speeding up, and you're using intermediate buffers, that's something you really should be checking. So those are the topics I'm going to cover. I'm just going to hand over to Darren now to wrap up.

Thank you very much, Pete. Okay, if after WWDC you still have questions, well, first, there will be a lab after this, which I'll get to in a second, but after WWDC, you can go to the ProApps Dev Mailing Lists and add yourself to that and search the archives and see if your question's already been answered. You can do that at lists.apple.com, and it's a really great group of people. There are a lot of developers who are currently shipping products on it, and they're very helpful, very useful for answering questions when we can't or when we don't have the time.

If you have questions of a proprietary nature that you don't want your potential competitors to see, you can send them to the [email protected] address, and that'll go to me and Pete and Paul and a couple other people internal to Apple, but other people won't see it. If you want to see the sample code for porting your plug-ins from After Effects to FxPlug, that's available at developer.apple.com/wwdc/attendee.

As I said, there are actually two labs today, starting at 2 o'clock in the Graphics and Media Lab: the Final Cut Pro lab, where you can get some help with your Final Cut Pro XML questions, and the FxPlug Effects Lab. So in summary, FxPlug 1.2.2 and 1.2.3 have some new features, like the FxImageColorInfo API.

There are the Xcode templates; you should be able to find those and use them. And of course, improved support in Motion and Final Cut Pro. Please join the mailing list. Bring us your questions. Let us help you. And if you can, optimize for both hardware and software rendering, because your users will appreciate it.