Media • 1:02:48
The FxPlug architecture enables you to create extraordinary effects for use in Motion and Final Cut Pro. See what's possible with FxPlug and learn how to create your own GPU-accelerated plug-ins for filters, generators, and transitions. Understand the best practices for developing plug-ins targeting Motion, Final Cut Pro, or both. Go deeper into advanced topics with details of OpenGL usage and examples of methods to move existing code to FxPlug.
Speakers: Darrin Cardani, Paul Schneider, Pete Warden
Unlisted on Apple Developer site
Downloads from Apple
Transcript
This transcript was generated using Whisper; it may contain transcription errors.
And I also work on some of the internal plug-ins that we ship. The FxPlug community has really been growing by leaps and bounds over the past several years. We found that we have everything from one-person shops to large companies writing FxPlug plug-ins. And they encompass everything from hobbyists to Oscar-winning developers. You can see some of their effects in movies like Harry Potter, Lord of the Rings, and things like that. So you're in good company. Plus it's just really kick ass to write plug-ins for a living.
So what we're going to talk about today is what the FxPlug SDK is. You'll learn how you can write FxPlug plug-ins. We'll talk about what's new in 1.2.2 and the soon-to-be-released 1.2.3. I'll talk about porting your plug-ins to FxPlug from other architectures. So if you've already got some After Effects plug-ins or things like that, you can bring them over to FxPlug. Paul Schneider is going to come up and talk about working with FxPlug in Final Cut Pro. And then Pete Warden is going to talk about some advanced topics, some OpenGL things and how to get your things running really quickly.
So what is the FxPlug SDK? Well, it's a visual effects architecture that we created at Apple for the Pro apps, and it's hosted in Motion, Final Cut Pro, and Final Cut Express. It's Objective-C based, so if you've been writing Cocoa and Objective-C code already, you're already familiar with how to use it, and it uses OpenGL for extreme performance.
So why would you want to write an FxPlug plug-in versus some other type of plug-in? Well, the biggest reason is that we have a million registered users of Final Cut Pro. Now, we probably have a few more users than that, but registered users is about a million. As I said, you can get extreme performance using OpenGL, and most of your plug-ins can probably run in real time, which your users will really love. You produce a universal binary that works on PowerPC and Intel, and that's apparently a new feature for some other architectures.
You can have custom UI in your parameter list. So if you want to have, say, curves or something that we don't supply, you can make that yourself. You don't have to if you don't want to, but the option is there. And the great thing is that you'll base it on Cocoa and you can build it in Interface Builder. Paul will talk about that a little bit. So you can really rapidly develop nice UIs. You can get pixels that are in RGBA format or Y'CbCr-with-alpha format. You can get floating point pixels. So if you're working with extended dynamic range and you need to, you know, avoid clamping and things like that, you have that option. In Motion, you can create on-screen controls using OpenGL, and that allows the user to directly manipulate whatever is on the canvas so they don't have to just set parameters. They can actually move things around and things like that.
And you can also get some 3D information about where your layers are located in space, where cameras are located, things like that. So I'm going to go over how to write an FxPlug plug-in. We'll talk about how to create a new project in Xcode. We'll create some unique IDs to identify our plug-ins and tell them apart from other companies' plug-ins. We'll write some source code to create our parameters and render, and then we'll build and install, and hopefully that goes well. We'll test it and then ship it. So let's look at this. In Xcode, to create a new project, you go to the File menu and select New Project, and it brings up the new project assistant. You can see that under the heading of Standard Apple Plug-ins we have FxPlug, and there are three types of FxPlug plug-ins you can write. There's FxGenerator, which takes no image input. So if you're generating a pattern of some sort and you don't need to modify an existing image -- something like a checkerboard or stripes or a fractal -- you'd create a generator. An FxFilter takes a single image input. So if you're doing a color correction or a distortion where you take an image, modify it in some way, and output it, that'd be a filter. And there are FxTransitions, and these are for moving between two images. So, you know, if you want to do a crossfade or some sort of flying in and flying out, that sort of thing, you'd have a transition. So once you choose which one you want to write, it asks you for a name and where to put it. We'll call this My Filter. And it brings up the source code and the resources. And the first thing you're going to want to do is open up that Info.plist resource file. And in there, you'll find the ProPlug plug-in list.
And this is -- excuse me -- the ProPlug plug-in group list. And this is the list of all the groups that your plug-ins are going to go into. You can have more than one plug-in in a binary. In this example, we'll just make one. So we only need one group. We're going to call it My Filter. You can put them into existing groups. Like I think we have Color Correction and Distortion, for example, in Motion. Or you can create your own. Most people create their own. You might have it named after your company or named after the particular product that you're working on. Whizbang Filters 1.0 or whatever. But the important thing is this unique identifier here. You generate this UUID, a universally unique identifier, by using the command line tool uuidgen. And I'll talk about how to do that without going out to the command line in a little bit. But it's a unique number. Every time you run the command, you get a unique number. Every other developer who runs it will get a unique number, too.
So then you find the ProPlug plug-in list in the Info.plist file, and this is the list of all the plug-ins that are going to be in this binary. As I said, in this case, we just have one, and you can see we have keys for the class name that it corresponds with and the display name, and you'll notice we have group.
And again, this UUID is the same one that we just generated on the previous slide, and this says that this filter goes into the group with that UUID. And the reason we use IDs instead of names is because it's possible that you might come up with a name like Cool Filters or something and some other developer comes up with the same name. You want to make sure that yours go into the right place. And likewise, you need to generate another unique ID for your actual plug-in. So if you have a plug-in with one name, it doesn't conflict with anyone else's, and if somebody is using a localized version of the app in a different language, we don't search by name, we search by UUID.
And the easy way to do this is to write a little script and put it in the script menu in Xcode. If you look on the right side of the Xcode menu bar, there's a little scroll icon, and that's the script menu. If you open that up, you'll see a bunch of scripts that you can run. And you can create your own scripts. In this case, this is a /bin/sh script. And it's a single command: echo -n `uuidgen`. And you can either put it in /Library/Application Support/Apple/Developer Tools/Scripts/10-User Scripts, or you can go to the script menu, and I think the bottom item is Edit Scripts. If you do that, it'll bring up a dialog right in Xcode, and you can add new scripts and change the existing ones. And once you do that, you can just select the UUID in the Info.plist file, and then from the script menu, choose New UUID, or whatever you named it, and it will replace the one that's there. And as you can see, we put a comment in there that says you must change this group ID. And there's another one for the actual plug-in ID as well.
So then you're going to have to write some source code. I'm going to go into much greater detail about this in a little bit, but suffice it to say, you need to add your parameters. This is if you've got sliders for controlling the amount of blur or the size of something in your filter, you need to add those to the parameter list. And then you need to write the code that's actually going to render your output. And you can do that on either the CPU or the GPU. We highly encourage you to write both so that the host application can decide which is better for it to use and then choose the appropriate one. So then you'll build, just choose build from the build menu, and assuming you don't get any errors that you need to correct, you need to install the plug-in. And the plug-ins go in either /Library/Plug-ins/FxPlug or ~/Library/Plug-ins/FxPlug. And obviously you don't want to have to copy the built binary every time you're done building it to test it. So what you can do is create a symlink from the plug-ins directory pointing back to your built plug-in: ln -sf, the path to your built plug-in, and then the path to the plug-in directory.
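Spelled out, the install step might look like the sketch below. The build path here is hypothetical -- substitute wherever Xcode actually puts your built .fxplug bundle:

```shell
# Symlink the built plug-in into the per-user FxPlug directory so the
# host apps pick up each new build automatically. BUILT is a made-up path.
BUILT="$HOME/Projects/MyFilter/build/Release/MyFilter.fxplug"
DEST="$HOME/Library/Plug-ins/FxPlug"
mkdir -p "$DEST"
ln -sf "$BUILT" "$DEST/MyFilter.fxplug"
```

After this, rebuilding in Xcode is enough; the hosts always load the freshest build through the link.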
And then you can just build and run. And of course, you're going to want to test in both Motion and Final Cut Pro. In Motion, if you click on the Library tab in the Inspector window, you'll see, on the left-hand side, a list of categories, two of which are Filters and Generators. And if you click on Filters, on the right, you'll see the groups, and these are the groups that you create and that we create internally. And you can see there's the My Filter group that we created, and if you click on that, down below, you see all the filters in that group. In this case, it's just My Filter. Likewise, in Final Cut Pro, in the browser window, there's the Effects tab. There are folders for Video Transitions, Video Filters, and Video Generators. If you twirl open Video Filters, you should see My Filter in there. And if you twirl that open, you should see the actual filter inside the folder.
All right, so let's talk about the SDK and get into some of the details. FxPlug plug-ins are a form of ProPlug, and these are NSBundles that we use in the Pro apps to extend the functionality of our applications. Plug-in classes are going to conform to certain plug-in protocols, and I'll describe those in a lot more detail in a second. And host applications are going to provide objects to the plug-ins so that they can talk back to the host app, ask it for things, and tell it things that it needs to know.
So what are these protocols? Well, in Objective-C, a protocol is similar to an abstract base class in C++ or an interface in Java. It's a description of things that will be implemented. It doesn't have any implementation behind it. And when you create your class and say it implements a protocol, you have to write the functionality for those functions -- or methods, I should say. So in this case, we see we've got a protocol called FxBaseEffect, and it has two methods. There's actually a couple more, but these are the two important ones for now. addParameters is where you'll actually set up your sliders and checkboxes and custom parameters and things like that. And then parameterChanged: is where the host app will call back into your plug-in and tell it that somebody changed a parameter. You can use that for supervision. If you've got, say, a min and a max, and you always want to keep the maximum greater than the minimum, you can keep those in sync there. Now that's the base effect protocol. There are two other protocols, FxGenerator and FxFilter. Actually, there's one for FxTransitions as well. And you can see that they both inherit from FxBaseEffect. That's what the brackets there mean.
And FxGenerator adds the renderOutput:withInfo: method. And that's because it can create an output without any input. You'll see FxFilter has renderOutput:withInput:withInfo:. That's because it takes an input image as one of its arguments and then generates its output. Now, both of these, when you implement an FxFilter or FxGenerator, also include the addParameters and parameterChanged: methods.
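Paraphrased from the SDK headers (with the argument types simplified -- check the FxPlug SDK headers for the exact signatures), the hierarchy looks roughly like this:

```objc
// Sketch of the protocol hierarchy, paraphrased from the 1.x SDK headers.
@protocol FxBaseEffect
- (BOOL)addParameters;                    // create your sliders, checkboxes, etc.
- (BOOL)parameterChanged:(UInt32)parmId;  // host calls this when a value changes
@end

@protocol FxGenerator <FxBaseEffect>      // no image input
- (BOOL)renderOutput:(FxImage *)output
            withInfo:(FxRenderInfo)renderInfo;
@end

@protocol FxFilter <FxBaseEffect>         // one image input
- (BOOL)renderOutput:(FxImage *)output
           withInput:(FxImage *)input
            withInfo:(FxRenderInfo)renderInfo;
@end
```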
So let's look at our addParameters method and how we'd write that. When you want to tell the host app to do something, like create a new parameter, you need to first get an object from the API manager. You'll be passed the API manager in your initialization method -- it's usually called initWithAPIManager: -- and it's an object that you use to communicate back with the host. In this case, we tell the API manager, "Give us the API for this particular protocol." In this case, it's the FxParameterCreationAPI protocol.
It's kind of a mouthful, but basically, we're going to be creating parameters. So give us the object that allows us to create parameters. Once you get it, you can call addFloatSliderWithName:, addPointParameterWithName:, addColorParameterWithName:, things like that. And that's all you have to do to actually tell the host app that you've got a parameter, that you've got a slider, or whatever. You don't need to go into Interface Builder if you don't want to. You don't need to write any action methods to get called back and things like that. The host app will take care of all the keyframing and everything. So obviously, there are a lot of different parameter types. There is the floating point slider. It looks like our graphics aren't showing -- oh, there we go. OK, so we've got the floating point slider, the integer slider -- sorry about that. Let's go back here.
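Pulling those two steps together -- asking the API manager for the creation API, then adding a slider -- an -addParameters implementation might look like this sketch. The selector spelling follows the 1.x SDK naming as best I can reconstruct it, and kBlurAmountID is a parameter ID constant of our own:

```objc
// Sketch of -addParameters. _apiManager was stashed in -initWithAPIManager:.
- (BOOL)addParameters
{
    id<FxParameterCreationAPI> paramAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterCreationAPI)];
    if (paramAPI == nil)
        return NO;

    // One float slider named "Blur Amount"; the host handles keyframing.
    [paramAPI addFloatSliderWithName:@"Blur Amount"
                              parmId:kBlurAmountID
                        defaultValue:5.0
                        parameterMin:0.0
                        parameterMax:100.0
                           sliderMin:0.0
                           sliderMax:25.0
                               delta:0.1
                           parmFlags:kFxParameterFlag_DEFAULT];
    return YES;
}
```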
We've got toggle buttons, which are just checkboxes, and an angle slider, which is a knob. We've got RGB colors and RGB colors with alpha, and that's kind of nice so you don't have to have two parameters, a color parameter and then an alpha slider -- you can do it all in one. Likewise, we have 2D points so that you can place a crosshair right on the canvas and let the user drag things around, so if they want to position some aspect of your plug-in, it gives them direct feedback, which is really nice. And there's a pop-up menu, which can have any number of textual items in it.
You can have an image well, and the great thing about an image well is that it doesn't have to be a still image. It can be, but it can also be footage. In Motion, it can be a group which contains footage and stills and shapes and other things like that. So it gives the user a lot of power, so they're very useful. You can have text if you want to have some sort of titling plug-in. It's really useful for that. And you can have groups of parameters, which is really nice.
If you've got a lot of parameters, and you'll find that some users really like having a lot of parameters, you can group some of them together. The user can twirl closed the ones they aren't using. It cleans up the UI, makes it a little easier to use. You can have custom parameters, as I said, and I'll talk about these in greater detail in a minute, but, you know, you can have them be anything you want. In this case, we've got this crazy text style.
You could have RGB curves, things like that -- all kinds of things you can do with that. We have some optional parameter types you can use: a histogram for doing, like, levels and things like that. A gradient parameter, and this is really great because you don't have to have the user go out to an image editing app, create a one-pixel-high image, import it into the project, find it in the project, and then drop it on your plug-in. You can just put a gradient parameter right there, and you can see they can create all kinds of gradients with that directly. Okay, so once we have these parameters, of course, we have to get their values in order to render. We might have a blur amount that we need to get or something like that. So it's the same thing as before.
We ask the API manager for the object that implements the parameter retrieval API. And once we have it, we just call getFloatValue:fromParm:atTime:. And we give a unique ID to each of our parameters when we create them, and that's how we identify them. And these are just -- you know, we usually just number them starting at 1. They aren't UUIDs or anything like that. And you'll notice that it's got that time parameter in there. In some other applications, when you want to get a parameter at a different time than the current time, you have to check out the parameter, do something with it, and then check it back in. And that's really cumbersome, so we made it a lot easier for you. You just pass it the time value that you want. Likewise, for setting values, you'll get the object which implements the parameter setting API, and then you'll just call setFloatValue:toParm:atTime: or setIntValue:toParm:atTime:. And that's how you can do parameter supervision. For example, you get the min and the max, and if the max is less than the min, set it to one greater than the min, things like that.
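Here's a sketch of that min/max supervision in -parameterChanged:. kMinID and kMaxID are parameter IDs of our own, I'm sampling at time 0 for simplicity, and the selector spellings should be checked against the SDK headers:

```objc
// Keep "Max" at least one unit above "Min" whenever either one changes.
- (BOOL)parameterChanged:(UInt32)parmId
{
    id<FxParameterRetrievalAPI> getAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterRetrievalAPI)];
    id<FxParameterSettingAPI> setAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterSettingAPI)];

    double minVal = 0.0, maxVal = 0.0;
    [getAPI getFloatValue:&minVal fromParm:kMinID atTime:0.0];
    [getAPI getFloatValue:&maxVal fromParm:kMaxID atTime:0.0];

    if (maxVal < minVal)   // pin the max above the min
        [setAPI setFloatValue:minVal + 1.0 toParm:kMaxID atTime:0.0];
    return YES;
}
```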
Okay, so let's look at custom UI in a little more detail. You can assign a custom UI to either a standard parameter, like a floating point parameter, or to a custom parameter, which is some custom data that you will create yourself. For example, if you're doing curves, we don't have any sort of data type that works with curves, so you'd have to create that.
They're going to appear in the inspector just like all the other parameters. They'll be in the parameter list. They can be resizable, and it's going to be a subclass of NSView. So if you've been writing Cocoa code, you already know how to write an NSView. And it will act just like an NSView that was created by the application. And you can create them programmatically, which I'll show, and Paul's going to show how to do them in Interface Builder.
OK, so the basic tasks are: we have to tell the host app that we can host a custom parameter view, and we do that by adding the FxCustomParameterViewHost protocol to our class declaration. We'll need to tell the host app that we're going to add the parameter in the addParameters method. Then we'll have to write the method which actually creates the view and returns it to the host app. Then we'll have to write code to draw the view and respond to events like mouse down and key down and things like that. And then we'll write code to modify the data if there's custom data associated with it. So let's look at that. This is a typical class declaration. You can see we've got the interface MyPlug. It's a subclass of NSObject. And it implements the FxFilter protocol because it's a filter. Well, all you have to do is add ", FxCustomParameterViewHost" to that. And that tells the host application that you're able to host a custom parameter view.
So in addParameters, we need to actually tell the host app which of our parameters are going to be custom parameters. And we do that by getting the FxParameterCreationAPI object and then calling addCustomParameterWithName:. We pass it a parameter ID. You can have more than one custom parameter.
So later when you go to create them, you'll get passed the ID of which one you're supposed to create. Pass it a default value. In this case, we've got a type that we've created called point array. And then you have to pass it the custom UI flag, kFxParameterFlag_CUSTOM_UI. And that tells it that this custom parameter has custom UI.
Then you have to actually create it, or read it in from a nib or whatever you're going to do. And you do that in your createViewForParm: method. And as you can see, it passes you the ID of the parameter that it wants you to create. It'll get called once for each parameter that has custom UI. So in this case, we check to see which one we're creating -- we're creating the my custom item ID. And we allocate our custom view and then call its initialization method. And we pass it the API manager so it can do some things later. And then we simply return the pointer to the NSView that we get back.
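A minimal -createViewForParm: might look like this; MyCustomView and its initWithFrame:apiManager: initializer are inventions of ours for the example, not SDK classes:

```objc
// Create the NSView for the one custom parameter we registered.
- (NSView *)createViewForParm:(UInt32)parmId
{
    NSView *view = nil;
    if (parmId == kMyCustomItemID) {
        // Hand the view the API manager so it can talk to the host later.
        view = [[[MyCustomView alloc]
                    initWithFrame:NSMakeRect(0.0, 0.0, 200.0, 120.0)
                       apiManager:_apiManager] autorelease];
    }
    return view;
}
```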
And that's all there is to actually creating it. Now you've actually got to do all the things that you would do for an NSView. So you're going to have to override the drawRect: method to draw whatever it is you're drawing. You're going to have to override the NSResponder methods, things like mouseDown:, mouseDragged:, keyDown:, and respond to all those things. If you want to assign contextual menus, you can do that. And the view controller that you write is going to want to change the value of the custom data. It's going to want to tell the custom parameter view to update and things like that.
In order to communicate back with the host, there's a little problem here, though. Because your NSView is in the application just like any other NSView, the OS calls it directly to do things like redraw and respond to events. So the host app doesn't know that you've been called to do that. So if you start asking for parameters, it gets a little confused. So we have the FxCustomParameterActionAPI in order to handle that. And it's just like any other thing you get from the API manager: you ask for the object which implements that API. And it has a method called startAction:. What this says is, we're going to start modifying some parameters, and you pass it a pointer to the custom view, and that says this is the view that's going to be modifying parameters. The other thing is, you've been called by the OS, as I said, so you don't know what the current timeline time is, because usually that's passed to you as a parameter to your render method. So in order to get that, you have to ask the action API for the current time.
Then you can do all the usual things you'd do as if it was any other part of your plug-in. You can get and set parameter values and things like that. And when you're done, to clean up, you just call endAction: and pass it a pointer to that NSView again.
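So a custom view's event handler ends up bracketed like the sketch below; kMyParmID and the value computation are stand-ins of our own:

```objc
// Inside our custom NSView subclass. The action API brackets tell the host
// which view is modifying parameters, and currentTime fills in the timeline
// time that the OS event didn't give us.
- (void)mouseDragged:(NSEvent *)theEvent
{
    double newValue = [theEvent locationInWindow].x;   // stand-in computation

    id<FxCustomParameterActionAPI> actionAPI =
        [_apiManager apiForProtocol:@protocol(FxCustomParameterActionAPI)];
    [actionAPI startAction:self];
    double now = [actionAPI currentTime];

    id<FxParameterSettingAPI> setAPI =
        [_apiManager apiForProtocol:@protocol(FxParameterSettingAPI)];
    [setAPI setFloatValue:newValue toParm:kMyParmID atTime:now];

    [actionAPI endAction:self];
    [self setNeedsDisplay:YES];
}
```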
Okay, so if you've got custom data associated with your custom parameter view, you need to write some code to deal with that. You're going to need to make it an NSObject subclass, and it's going to need to implement the NSCoding protocol. One of the things you're going to have to do is create some default data for it. In this case, we've created a class method called emptyData that returns, you know, some empty data that we can pass as a default.
And the other thing you're going to want to do is pass the kFxParameterFlag_NOT_ANIMATABLE flag, because currently custom data is not keyframeable. Hopefully we'll get that in the future, but we seem to keep putting it off for some reason. But as I said, you need to implement the NSCoding protocol for your data.
And what that does is it allows us to have you serialize your data for writing out to the user's document. So when the user saves, we'll call your encodeWithCoder: method, and you'll serialize your data just like you would in any other Cocoa app. And then later when we read in the user's document, we'll ask you to deserialize it, and we'll call your initWithCoder: method to do that.
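Put together, a custom-data class might look like this sketch; PointArrayData, its ivar, and its archive key are all our own invention:

```objc
// Custom data for a custom parameter: an NSObject subclass implementing
// NSCoding so the host can serialize it into the user's document.
@interface PointArrayData : NSObject <NSCoding> {
    NSMutableArray *_points;
}
+ (PointArrayData *)emptyData;   // default value for addCustomParameterWithName:
@end

@implementation PointArrayData
+ (PointArrayData *)emptyData
{
    return [[[PointArrayData alloc] init] autorelease];
}

- (void)encodeWithCoder:(NSCoder *)coder     // called when the user saves
{
    [coder encodeObject:_points forKey:@"points"];
}

- (id)initWithCoder:(NSCoder *)coder         // called when the document reloads
{
    if ((self = [super init]))
        _points = [[coder decodeObjectForKey:@"points"] mutableCopy];
    return self;
}
@end
```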
So that's parameters, and custom parameters in particular. Let's look at rendering. Now, all rendering happens with an FxImage. FxImage is a base class, and there are two subclasses of it: FxTexture, which is a thin wrapper around OpenGL textures -- and that's how you do hardware-accelerated stuff -- and FxBitmap, which is basically a wrapper around a RAM-based bitmap image, and that's convenient for doing things with code that you might have already written, say, for another plug-in architecture.
In the case of -- well, actually, both FxBitmaps and FxTextures can have a number of different pixel formats. You can get 8-bit integer ARGB. For textures only, you can get 16-bit float ARGB. And for both of them, you can get 32-bit float ARGB. If you want, you can get 8-bit integer r408 or 32-bit float r4fl, and those are Y'CbCr formats. They're described in Letters from the Ice Floe 19 and 27. If you go to developer.apple.com and search for Letters from the Ice Floe, or just Ice Floe, it should pop those up. They're QuickTime documents. Bitmaps have premultiplied alpha. So keep that in mind -- you may need to un-premultiply them for some operations, and just remember to re-premultiply them when you're done. And they have a row bytes parameter. And I think Paul will get into this in a little more detail, but basically, it tells you how many bytes there are per scan line. And if you're looking at something that's, say, 640 by 480 in an 8-bit integer ARGB format, you may think, okay, 640 times 4 is 2,560 bytes wide. But you get the row bytes and you see, oh my gosh, it's 5,120. What's that about? Well, if you've got an interlaced frame where you've got an upper field and a lower field, we may pass you a pointer to the beginning of the frame, tell you that the line is 640 pixels wide, but that each row is 5,120 bytes. So just process the first 640 pixels, skip ahead by the row bytes, and go to the next line. Don't assume that's what we're going to do, because we don't always do that. The different applications work differently. Some applications do that, some don't. So just always obey the row bytes, and you'll be fine.
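As a minimal illustration of obeying row bytes -- plain C, with a stand-in invert effect; invert_argb8 is not an SDK function:

```c
#include <stddef.h>
#include <stdint.h>

/* Walk an 8-bit ARGB bitmap honoring rowBytes: each scan line starts
 * rowBytes bytes after the previous one, which may be more than
 * width * 4 (for example, when the host hands you one field of an
 * interlaced frame). Inverting every channel stands in for a real effect. */
static void invert_argb8(uint8_t *base, size_t width, size_t height,
                         size_t rowBytes)
{
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = base + y * rowBytes;   /* never assume width * 4 */
        for (size_t x = 0; x < width * 4; x++)
            row[x] = (uint8_t)(255 - row[x]);
    }
}
```

The only rule that matters is in the pointer arithmetic: advance by rowBytes per line, touch only width * 4 bytes of it.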
Okay, FxTextures, as I said, are a thin wrapper around OpenGL textures. You want to always get the texture coordinates. There's a method for getting those, and your texture may be located somewhere in texture space that you're not expecting. It's not necessarily oriented at the origin or anything like that, so you need to make sure you get the coordinates. You can do a bind and enable to start using them. And you can get the texture ID if you need to pass that to, say, an OpenGL routine that needs a texture ID.
They also have premultiplied alpha. And I just want to give you one small warning here: you probably won't go creating a lot of FxTextures yourself. They're not real useful for doing things like communicating with the OS, and generally it's hard for the host app to tell who owns the texture, so it doesn't know when it can delete it if you do that. So if you're just going to be creating textures for your own use and talking to OpenGL, you can just create regular OpenGL textures with glGenTextures or whatever you use.
But when you communicate with the host app, you'll get FxTextures, and it may occasionally ask you to give it an FxTexture. But in general, it's just too thin of a wrapper to do a lot with other than communicate with the host. There's a whole bunch more FxPlug functionality that we don't have time to get into today. I think Paul's going to talk a little bit about the FxTiming API. You can request input images at different times using the temporal image API or the temporal transition image API, and that's good for doing things like retiming, or if you're doing echo or feedback effects. In Motion, you can get 3D layer and camera position and orientation information. You can create on-screen controls, as I said. In all the host apps, you can get host information, so you can tell if you're running in Final Cut Express, Final Cut Pro, or Motion, and you can get plug-in versioning information. So if you created version 1.0 of your product and somebody applies it in their document, and then later they update to version 2.0, when they open that document you can see that the data for your plug-in was created with 1.0. You may need to add some information to it or remove some information or whatever.
All right, so let's talk about FxPlug 1.2.2 and 1.2.3 and see what's new. We've got a new API, the FxImageColorInfo API, and I'll get into that in a second. We've moved the Xcode templates so that they can be found more easily by Xcode. We've improved support in both Final Cut Pro and Motion, and I'll talk about some of those details in a second. So the FxImageColorInfo API allows you to get color information about your images. When you're working with Y'CbCr formats, it's common to need to convert between Y'CbCr and RGB. So we'll give you the color matrix that we use, whether it's Rec. 601 or Rec. 709, so that you can do the same conversion we do and not worry about color shifts and things like that. For RGB images, it'll give you the gamma of the RGB image. So if you want to convert to a linear color space and, you know, do realistic lighting or compositing effects, it's really great for that. As for the Xcode templates: if you had the SDK installed and later upgraded your Xcode version, you may have noticed that the FxPlug templates suddenly disappeared from the Project Assistant.
And that's because they moved where Xcode looks for the templates, but we hadn't updated the SDK yet. So we've updated it, and you should be able to find those templates again in the new project assistant. Let us know if you have any problems. In Final Cut Pro, they've made several improvements. In the parameterChanged: method, you can now get parameter values at other times. Groups can now be collapsed. So as I was saying, you can twirl closed those groups that you aren't using if there's a bunch of parameters there that you don't want to see. And likewise, if there are groups that are nested and you disable or hide the parent group, the subgroup will disable or hide as appropriate too. In transitions, the start point, end point, and reverse controls are now respected. And there have been some bug fixes and improved consistency. In Motion, custom controls will now set the document to dirty when you change them. It used to be that if you had saved a document, changed the value of a custom control, and then quit, it wouldn't ask you to save. So we fixed that. Images in image wells will now have the same gamma they do in the timeline. It used to be that if you dropped an image into the image well, it would suddenly get brighter. So we fixed that too -- everything should look the same now. Upsampled fields are now interpolated instead of line doubled, and that looks a lot better. And there have been some internal changes with respect to threading. Generally in Motion, you'll get asked to render and do some other things on a thread other than the interface thread. And there were a couple of edge cases where you could get into a deadlock, so we've gotten rid of those.
All right, let's get into some advanced topics. I'm going to talk about porting your plug-ins from other architectures to FxPlug. I'll go over some of the differences between the C-based APIs that most applications use and the Objective-C APIs that we use in FxPlug. I'll talk specifically about the similarities between the After Effects API and the FxPlug API, because they're very similar and you can reuse a lot of code. And I'll show you how to write code that's easily reusable between the two. All right, so in a C-based architecture, generally you have a single entry point into your plug-in. You have a main function, and the host app calls through that main function to do everything, and it'll pass in a selector that says what you're supposed to do. So you generally have this huge case statement that says, you know, based on the selector, call this function or call that function or whatever. In addition, because all communication has to go through this one function, it'll give you either a giant block of data, where some of the fields are valid for some selectors and some are valid for other selectors, and it's very confusing; or it will give you a pointer to some opaque data, and you don't know what it is until you look at the selector, and that causes issues with type safety and stuff like that. So it's a fairly inefficient way to communicate. Well, as you saw earlier, Objective-C plug-ins implement a particular protocol. So the host app doesn't have to guess whether you implement a render method or a frame setup method. It just knows you do, because you've agreed to write those by conforming to a particular protocol. So it has all the entry points into your plug-in that it needs. So it can call directly into them. It can call your render method directly. And what that means is it doesn't have to pass a big structure or an unknown structure.
It can tell you exactly what data you need, and you know what type it is, so you can use it immediately. There's a lot less communication overhead. And the same thing goes for communicating back with the host app. In a C-based API, a lot of times you end up having to get the suite manager, ask it for a suite, get a function pointer from within that suite, and then call the function. With Objective-C, you just ask for the object that implements that particular API and call it. So, a lot less communication overhead. Let's take a closer look at the FxFilter protocol. As you can see, it inherits from FxBaseEffect, as I said earlier. There's a get-output-width-and-height method, which allows you to change the size of the output image, at least in Motion. There's a frame setup method; this is where you say yes or no to hardware rendering and yes or no to software rendering. There's frame cleanup: if you allocate anything in your frame setup method, you'll clean it up there. And then -renderOutput:withInput:withInfo: is where you actually render the effect your filter does. So let's compare this to the After Effects API. In FxPlug, we have an addParameters method; in the After Effects API, they have the PF_Cmd_PARAMS_SETUP selector. We've got a frame setup method; they've got the PF_Cmd_FRAME_SETUP selector.
We've got -renderOutput:withInput:withInfo:; they've got PF_Cmd_RENDER. So you can see there's pretty much a one-to-one correspondence between the types of things you're going to do in your plugin, and we can use this to our advantage. In After Effects, you call the PF_ADD_FLOAT_SLIDER macro to create a new floating-point slider. In FxPlug, you call the parameter creation API's -addFloatSliderWithName: method.
They've got PF_ADD_POINT; we've got -addPointParameterWithName:, and so on. So why don't we wrap these up in some generic wrapper and call it from both plugins? In this case, what I'm going to do is create some FX helpers, and so I've prefixed everything with FXH, for FX Helper.
We're going to create a single function. Because Objective-C is a superset of C, we can use straight C for this. That function will add a float slider, and it will do the appropriate thing in After Effects and the appropriate thing in FxPlug.
So this is what our header is going to look like. We're going to have FXH_CreateFloatSlider. The first argument is a pointer to some app-specific data: because each application has different requirements for how you create the parameter, we pass the appropriate data in that pointer. And then you can see the things that are the same between the two: the parameter name, the parameter ID, the default value, the min and the max, and so on. Likewise, we'll need a function to actually get that value back when we go to render. So we'll call FXH_GetFloatParam, and we'll pass it that app-specific data again, plus the ID and the time that we want.
So here's what it's going to look like in After Effects. The first thing we do is cast that app-specific data to a PF_InData pointer; that's the gigantic structure they send to your plugin. Then we create a PF_ParamDef structure, clear it, and call the PF_ADD_FLOAT_SLIDER macro. And that's all there is to adding a floating-point slider. Of course, it's very similar in FxPlug. We cast that app-specific data to be the object which implements the parameter creation API, and then we call -addFloatSliderWithName:, passing it the name, the parameter ID, the default value, the min and the max, and all that.
So we can write a single function called SetupGammaParameters, if this is a gamma filter, for example, and call it from both applications. In After Effects, we call SetupGammaParameters and pass it the PF_InData pointer. In FxPlug, we call SetupGammaParameters and pass it a pointer to the object that implements the parameter creation API. And this is what it actually looks like: it calls the helper function we wrote earlier. When we compile for After Effects, it calls the After Effects version of that function, and when we compile for FxPlug, it calls the FxPlug version.
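As a concrete illustration of the wrapper idea, here is a sketch in plain C. The real helper would call PF_ADD_FLOAT_SLIDER or -addFloatSliderWithName:, which can't be compiled outside the host SDKs, so a tiny mock host stands in for the app-specific data; every type and function name here is illustrative, not taken from the shipping sample code.

```c
#include <assert.h>
#include <string.h>

/* Mock stand-in for the host's app-specific data (PF_InData in
   After Effects, the parameter creation API object in FxPlug). */
typedef struct {
    const char *name;
    int         paramID;
    double      defaultValue, minValue, maxValue;
} MockParam;

typedef struct {
    MockParam params[16];
    int       count;
} MockHost;

/* The FXH-style wrapper: cast the opaque pointer to whatever the
   current host needs, then register the slider. */
static void FXH_CreateFloatSlider(void *appData, const char *name, int paramID,
                                  double def, double min, double max)
{
    MockHost *host = (MockHost *)appData;   /* cast, as in the talk */
    if (host->count >= 16)
        return;                              /* out of slots; ignore */
    MockParam *p = &host->params[host->count++];
    p->name = name;
    p->paramID = paramID;
    p->defaultValue = def;
    p->minValue = min;
    p->maxValue = max;
}

/* Shared setup code, callable from either host's entry point. */
static void SetupGammaParameters(void *appData)
{
    FXH_CreateFloatSlider(appData, "Gamma", 1, 1.0, 0.1, 4.0);
}
```

In a real dual-target build, the body of FXH_CreateFloatSlider would be compiled twice, once per host, while SetupGammaParameters stays identical.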
So in summary, you can see that there's a pretty good correspondence between the two APIs, and other image- and video-processing APIs are very similar as well. You're going to implement either a filter, a generator, or a transition, and you can pretty easily write code that will work in FxPlug, in After Effects, and in other architectures as well. So I'm going to turn it over to Paul, and he's going to talk about advanced topics with Final Cut Pro. Take it away, Paul.
Hi, everybody. I'm Paul Schneider. I'm an engineer on Final Cut, working on the plug-in hosting. Today I'm going to talk about some issues that have come up a lot for people trying to develop for both Final Cut and Motion, and what you can do to get around them, and also about some commonly requested features and how to implement those. The first thing I'm going to talk about is the images that you get in Final Cut and also in Motion. If you're coming from a web graphics background, or a stricter CG background, you might not be used to working with digital video images.
They're a little different than other images in a few ways. The first is that our pixels aren't necessarily square. A lot of the digital video formats have rectangular pixels, so you need to be aware that you're working with an image with an aspect ratio. The next difference is that our images are often interlaced.
So you'll be processing a single field of video at a time, which the host will then combine to form a frame of video. The third one doesn't really have much to do with video; it's more to do with the applications. We'll often ask you to render at a lower resolution for speed purposes. We may ask you to render at half resolution and then scale that up for display. So you need to be ready for that as well.
So I'll just go through these one by one. First, the aspect ratio: you can see that when we're displaying on the computer monitor, we scale horizontally when we show the image to the user, to simulate the actual device they'll be looking at it on. So you need to be prepared to handle aspect ratio. And how do you do that in FxPlug? Well, we tag each FxImage with an aspect ratio. So you can simply ask your inputs and your outputs what their aspect is, and then adjust your rendering accordingly.
The next one is interlaced processing. We'll ask you to render one field of video and then a second field, and we'll interlace them together. You can see in this picture, the first image is red, the second is blue, and we combine them. So you need to be ready for that. There's a difference between Final Cut and Motion here as well. In Final Cut, the fields are half height, because we combine two of them to form a frame. In Motion, they'll actually take the half-height fields and scale them to full height. So it's a little easier to process, but not quite as fast. So you need to be ready for handling fields, and you need to be ready for the difference between Final Cut and Motion. And this is how you do that inside the FxPlug API. Similar to aspect ratio, each image is tagged with a field, which will be either the lower field, the upper field, or a full progressive frame if you're working in a progressive sequence.
And then we have the host capabilities object, which gives you information about the host you're running in. And one of the things you can ask about is whether this host upscales fields. If this is yes, then you're running in an app like Motion, where the fields will be stretched to full height for you to process. If this is no, then you're running in an app like Final Cut, where the fields will just be the raw field data. So you need to handle both cases.
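The interleave step the hosts perform can be sketched in plain C. This is purely illustrative, not SDK code, and which real field, upper or lower, lands on which scanlines depends on the footage's field order; the sketch only shows the weave itself.

```c
#include <assert.h>

/* Weave two half-height fields back into a full frame: field A
   supplies the even scanlines, field B the odd ones. One channel,
   row-major storage. (Upper-vs-lower assignment depends on the
   sequence's field order; this just shows the interleave.) */
static void InterlaceFields(const float *fieldA, const float *fieldB,
                            float *frame, int width, int frameHeight)
{
    for (int y = 0; y < frameHeight; y++) {
        /* even rows come from field A, odd rows from field B */
        const float *src = (y % 2 == 0) ? fieldA : fieldB;
        for (int x = 0; x < width; x++)
            frame[y * width + x] = src[(y / 2) * width + x];
    }
}
```

This is also why a filter that draws full-height content into each half-height field ends up twice as tall after the weave, as the demo shows.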
Then finally, the low-resolution proxy. You can see here in this example, we've asked you to process an image that's half the size, and then we'll scale that up for display, just to make playback faster or something like that. You can tell this is going on by looking at the scale info in the render info struct that we pass to you during render. We have the scale X and scale Y: if we're rendering one-to-one, these will be 1.0; if we're rendering half size, they'll be 0.5, et cetera. So that seems like a lot to keep track of, but you can combine it all into just a single scale factor that you use to adjust your rendering. Here's a quick snippet of code that shows how to do this. This is taken directly from an example plug-in, which I'll be talking about in a minute, that ships with the SDK, so don't worry about writing this down. You can see I start out with a uniform one-to-one scaling. I take the aspect ratio of my output and adjust my horizontal scale by that. I check to see if I'm working in a half-height field, and that's a two-part check: first, check whether I'm processing a field right now; second, check whether fields in this application are half height. If both of those things are true, I adjust my vertical scaling by 0.5. And then finally, I blend in the low-resolution proxy scale, if there is any.
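The combination just described can be sketched compactly in plain C. The struct and parameter names are illustrative, not the SDK's; the logic mirrors the steps Paul lists: start at 1:1, fold in pixel aspect, halve vertically for half-height fields, then fold in the proxy scale.

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>

/* An (x, y) scale pair combining the three concerns described above.
   Names are illustrative, not the FxPlug SDK's. */
typedef struct { double x, y; } Scale2D;

static Scale2D CombinedScale(double pixelAspect,     /* e.g. 0.9 for NTSC */
                             bool   renderingField,  /* processing one field? */
                             bool   fieldsHalfHeight,/* Final Cut: yes; Motion: no */
                             double scaleX, double scaleY) /* proxy scale */
{
    Scale2D s = { 1.0, 1.0 };        /* start with uniform 1:1 scaling */
    s.x *= pixelAspect;              /* non-square pixels: adjust horizontally */
    if (renderingField && fieldsHalfHeight)
        s.y *= 0.5;                  /* half-height fields (two-part check) */
    s.x *= scaleX;                   /* blend in the low-resolution proxy */
    s.y *= scaleY;
    return s;
}
```

A renderer would then multiply all of its drawing coordinates and sizes by this pair.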
So I'm just going to step over to the demo machine, because this is a lot easier to see than it is to talk about. You can see here I've got a plug-in called Scrolling Rich Text. We ship this with the SDK; it's one of our examples, and it's just a simple plug-in for example purposes. As I scrub through, it does an easy title crawl. It also demonstrates the sort of gotchas I was talking about. This plugin correctly handles aspect ratio, half-height fields, and the render scale, but it also allows you to turn handling for each of those situations off, so you can see what difference it makes. For example, here, if I turn off handling of the aspect ratio, you can see the text gets squished. This is an NTSC sequence, which has an aspect of 0.9. That's pretty close to 1.0, so the squishing isn't that obvious; if I was working in HD or with an anamorphic sequence, the squishing would be pretty severe, and it would be obviously wrong. But when I turn the aspect ratio handling back on, the text looks correct.
The second one is the half-height fields. If I turn off support for that, the text is twice as big as it should be. This is because I'm drawing that text into each field, and Final Cut is interlacing them together. So if I draw the text at full height twice, it's going to be twice as tall as it should be. I need to adjust that and draw my text at half height into each field, so that when the two fields are combined, it looks correct. And the third one is the render scale. This one is something a lot of people miss, because if you drop support for it, it doesn't seem like there's any problem right away.
But when you go to play back, suddenly the text is twice as big as it's supposed to be. And this is because by default, at the default settings, Final Cut will drop down to half resolution during playback for an effect where it doesn't have any profiling information. It doesn't know how fast it is.
So when we drop down to half resolution and I draw my text full height for full size into the half resolution image, Final Cut's going to scale that image up. It's going to double it in size. The text is going to be twice as big. What I need to do is look at my render scale, see that I'm currently rendering in half resolution, and then draw the text smaller so when it gets scaled up, it'll look correct. And you can see here, that's exactly what happens.
So I'm just going to go through a couple of the features of this plugin, which, as I said, it comes with the SDK. It's sort of a grab bag plugin that shows you how to do a lot of things that people have asked about on the mailing list. So as I said, this is a fairly simple title crawl.
It's a generator. One nice thing about this is that the title crawl animation takes up the duration of the item in the timeline. So at the first frame, my text is all the way at the bottom of the screen. At the last frame, text is all the way at the top of the screen. This is true no matter how much time the generator takes up. If I want to trim it in the timeline, the animation stays the same. The crawl speeds up so that I'm at the bottom of the canvas here, I'm at the top of the canvas here. And you can see I've got some controls here.
I can make the text bigger or smaller; I can change the size and the value. These are standard FxPlug parameter types. I didn't have to write any special code for this. And you can imagine there's a lot more I could do with this text. I could change the typeface.
I could maybe change the justification. I didn't want to add all of those controls to an example plug-in, so what I decided to do instead was: if you want richer formatting, just let the user pick an RTF file somewhere on disk and use that. So you can see up here, if I change the text type from simple text to a text file, all of these controls go away, and they're replaced with a new type of parameter, which allows me to choose a file. Now, this is not something that's built into the FxPlug API; this is a custom parameter that I created myself. So I created this UI in Interface Builder. And you can simply choose a text file, bring it into Final Cut, and here it is. So this is much more complicated text than the previous example: there are a lot of different typefaces, a lot of different fonts, a lot of different colors, and it all works. And another nice thing is that the animation is still correct. The text is right off the bottom of the canvas on the first frame. Scroll, scroll, scroll.
Right off the top of the canvas on the last frame. And if I want, I can go in here and launch the text file in TextEdit. You can see I've already edited it once; I'll edit it again, save it, go back to Final Cut, scroll down so I can see that change, and there it is. So this was pretty powerful, and not very much code. I'm going to go back to the slides now and talk about how the plugin was written.
So there are a couple of features of this plugin that people have asked about on the mailing list, which this tries to demonstrate. The first is the FxTimingAPI, which I believe we added in FxPlug 1.2. The timing API gives you a lot of information about the context of your effect in the timeline. It'll tell you the frame rate of the timeline you're in, and the start time and the duration of your effect if you're a generator, or even if you're not, if you're a transition, say. It'll also give you information about your input: if you're a filter, you can find the properties of the clip that you've been applied to. And it'll let you convert between timeline times and item times. If you've been applied to a clip that doesn't have the same frame rate or field order as the timeline you're in, this allows you to do that conversion.
Now, this plug-in obviously uses it in a simple way, just to find out the duration of the generator in the timeline. So here's the code to do that. The first thing I do is get the FxTimingAPI object from the API manager, similar to how I get everything else in FxPlug. And then I just ask the timing API for the duration of my effect.
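That duration then drives the crawl. A sketch of the fraction-of-duration arithmetic in plain C (function and parameter names are illustrative, not from the sample):

```c
#include <assert.h>

/* Given the generator's duration in frames (from the timing API) and
   the frame currently being rendered, return how far along the crawl
   is: 0.0 at the first frame through 1.0 at the last. This is why the
   animation always spans the item, no matter how it's trimmed. */
static double CrawlFraction(double currentFrame, double durationInFrames)
{
    if (durationInFrames <= 1.0)
        return 0.0;                       /* degenerate one-frame item */
    double t = currentFrame / (durationInFrames - 1.0);
    if (t < 0.0) t = 0.0;                 /* clamp, just in case */
    if (t > 1.0) t = 1.0;
    return t;
}
```

The renderer would map this fraction onto the text's vertical offset, from just below the canvas to just above it.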
And once I know how many frames the effect takes up, it's a pretty simple calculation to look at the current frame I'm rendering, find out how far into the total number of frames that is, and adjust the scrolling appropriately. The next thing you may have noticed is that this plugin features dynamic parameter display: you change that popup, some parameters hide, and some new parameters that were previously hidden show. I'm just going to talk about that quickly.
So if you want to add dynamic parameter display to your plug-in, the first thing you'll probably want to do is create some of your parameters initially hidden. You can do this by passing the kFxParameterFlag_HIDDEN flag when you create your parameters inside your addParameters method.
If you've got a hidden parameter, you'll probably want to show it eventually. You can do this inside parameterChanged. Here's just an example block of code that sort of simulates that menu tracking code. So in my parameterChanged method, I check to see if the parameter that changed was the hideShow pop-up menu.
If it was, I get the current value of the pop-up. If the pop-up is currently set to hide, I set the hidden flag on the parameters I want to hide; if it's set to show, I clear that hidden flag, which makes the parameters visible. And I use the FxParameterSettingAPI object to set the parameter flags. So you can dynamically change these flags at any time, and the UI will update.
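The hide/show logic reduces to setting and clearing one bit per parameter. Here is a mock of it in plain C; a real plug-in would call through the parameter setting API rather than poke an array, and all names here are illustrative.

```c
#include <assert.h>

/* Mock of the parameterChanged hide/show logic. kFlagHidden stands in
   for the SDK's hidden flag; the flags array stands in for the host's
   per-parameter flag storage. Illustrative only. */
enum { kFlagHidden = 1 << 0 };
enum { kPopupHide = 0, kPopupShow = 1 };

static void PopupChanged(int popupValue, unsigned *paramFlags, int paramCount)
{
    for (int i = 0; i < paramCount; i++) {
        if (popupValue == kPopupHide)
            paramFlags[i] |= kFlagHidden;    /* set the hidden bit */
        else
            paramFlags[i] &= ~kFlagHidden;   /* clear it: parameter visible */
    }
}
```

In the real plug-in, each set or clear is one call per parameter through the API object, and the host refreshes the UI afterwards.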
Another thing that you noticed was I had some custom UI, and rather than create my own custom view class and implement the drawing method and the mouse tracking methods, I decided I didn't need that. I just wanted to use some standard AppKit controls and stick them in a view, and the best way to do that is with Interface Builder. This is something that we hadn't shown you how to do in our examples before, and this one shows you how to do it.
So the first thing you do is create your nib and tell Interface Builder about your plugin class. The name of the class that implements this plugin is ScrollingRichText. You can see I've told Interface Builder that this class exists and that it's a subclass of NSObject.
And I'm going to tell Interface Builder a little bit about the class. The class has one outlet, which is a pointer to a custom view, and two actions, chooseTextFile and editTextFile, which correspond to those two buttons you saw in the interface. The next thing to do is actually create the UI. So I've added a view to my nib, and I've dropped in two buttons and some static text, standard Cocoa controls.
Now I'm just gonna wire these things up. Oh, no, I'm sorry. The next thing you want to do is change the files owner of the nib to be an instance of your plugin class. So now this nib is owned by an instance of the scrolling rich text class.
Now I can start wiring things up. So I'll connect the file owner's outlet to the view I've created. So this scrolling rich text's custom view points to the view that I've created. And similarly, I'll connect the buttons to the actions of the file's owner. So the "Choose" button will trigger the "Choose Text File" action.
The "Edit" button will trigger the "Edit Text File" action. Now I'm ready to write just a little bit of code. Here's the declaration for my plugin class. You can see it adopts the FxCustomParameterViewHost protocol, because it does have custom UI. And there are the outlet and the actions that I added in Interface Builder.
Now when I go to add my parameters, I'll create a custom parameter and pass the custom UI flag. You can see the default value of this parameter is just an empty NSString. Darrin talked about how, when you're creating custom parameters, you can create your own class for the data type, and you can do anything you want as long as it conforms to NSCoding. One of the nice things about FxPlug is that AppKit and Foundation actually ship with a lot of classes that conform to NSCoding, which you can use directly if they meet your needs. So if you want to use NSString, NSNumber, or NSData, you can: just create a custom parameter, use that as the data type, and you don't have to write any serialization code.
So now we come to my createViewForParam method. This is when the host asks me to create the view that'll be used for the custom parameter. This is pretty easy, because I did all the work in Interface Builder already. If I'm being asked to create the UI for my text file path param, I load the nib for that custom view I created, and I pass myself as the owner. Remember, the owner of the nib was an instance of the ScrollingRichText plugin; this plugin is that class, so I just pass myself. And when the nib is loaded, the outlets and the actions are wired up. So by the time the nib is loaded, my custom view pointer points to a valid NSView instance. All I have to do is return it, and I'm done.
And I'll just go back to the demo machine quickly to show you that one more time. You can see I've got my standard controls here. If I change the menu, the plugin's parameterChanged method is called, and it responds by setting the hidden flag on some parameters and clearing it on others. So this new custom parameter is now visible, and here's the UI that I created in Interface Builder. Everything is wired up with very little code. With that, I'm going to hand it over to Pete Warden, who's going to talk about advanced GPU topics in FxPlug.
Thanks, Paul. So I'm Pete Warden. I've been doing a lot of filter work over the years. And the particular advanced topics I'm gonna be talking about are the ones that relate to GPU programming. In particular, all the stuff you need to do to use raw OpenGL to write GPU plugins.
Why is it so tricky? OpenGL is not designed for image processing; it's an API designed for 3D rendering. It is possible to do image processing with it, and to use the GPU for image processing, but you have to use some fairly advanced and obscure features, which have been hard to get documentation and examples for. The first thing I should say is that Apple has recognized this problem over the years and actually has a couple of very nice APIs that handle a lot of GPU processing for you: Quartz Composer and Core Image. And we have quite a few developers who are using these pretty successfully to get the performance advantage you can get out of the GPU without having to delve quite so deeply into some of the dark and dusty corners of the OpenGL API. But there are still a lot of valid reasons why you may need to write directly to OpenGL. So to answer developers' questions, we actually put one of our internal filters into the SDK, demonstrating the techniques that we use to get very fast image processing working on the GPU.
Probably the thing we get the most questions about, and that's most confusing, is how you do intermediate rendering on the GPU. On the CPU, it's very simple: you have a system-memory bitmap that you can just access as a piece of memory. On the GPU, you need something where you can actually render into a texture and then use that texture as a source for subsequent operations.
Traditionally, that meant using pbuffers. There are some other alternatives like FBOs that I'm going to briefly talk about. We don't have any help from the host for creating or managing these objects yet. That's something we've had a lot of requests for, and we're definitely looking at what we can do there. But for now, you should use the sample code that we ship and actually look at the pbuffer code that I'm going to talk about a bit and use that as a basis for your own plug-ins.
So what is a pbuffer? It's a way of rendering into a texture. The key attribute is that it stays in VRAM. Everything's still happening on the card. It's very tempting when you're first looking at doing GPU programming to try and pull down image data from the card to do just some CPU processing on it to get some intermediate images. But that completely kills performance. You lose all of the parallelism that you can get from running both CPU and GPU code.
FBOs are very similar under the hood to pbuffers, but they offer a much more modern and, in a lot of ways, easier-to-use interface. We're moving over to using FBOs, but we still have a lot of code using pbuffers, and our example will show you how to use pbuffers to do this. So this is the list of pbuffer-related functions that we actually give you in the sample code. I'm not going to spend too much time talking about them; the create and destroy functions are hopefully fairly self-explanatory.
That's how you create these images that you're going to be using on the GPU. When you want to actually start drawing into the texture, you need to redirect all of your GL commands into that area of texture memory. You do that by calling pbuffer begin, which under the hood issues the right commands to redirect your subsequent GL drawing into that area of VRAM. When you've done all of the drawing operations you want to end up in that texture, you call pbuffer end, and then you go back to drawing into whichever context you were in before you started drawing into the pbuffer. When you want to use the image you've drawn as a texture source, for example for a fragment program or other shader, you call pbuffer use. This is very, very similar to the standard bind and enable on an FxPlug FxTexture, or to binding and enabling a raw GL texture; you can use it exactly the same way.
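Real pbuffer code needs a live OpenGL context, so it can't be shown runnable here, but the calling discipline just described (create, begin, draw, end, then use) can be captured as a small state machine. Everything below is a mock for illustration; the names loosely echo the helpers in the sample code but are not its actual API.

```c
#include <assert.h>

/* State-machine mock of the pbuffer lifecycle. Real code would issue
   AGL/CGL and GL calls; this only encodes the ordering rules: you
   must Begin before drawing, and End before Use. Illustrative only. */
typedef enum { kCreated, kDrawing, kRenderedInto } PBufferState;
typedef struct { PBufferState state; } MockPBuffer;

static int PBufferCreate(MockPBuffer *p)   /* allocate the VRAM surface */
{
    p->state = kCreated;
    return 1;
}

static int PBufferBegin(MockPBuffer *p)    /* redirect GL drawing into it */
{
    if (p->state == kDrawing) return 0;    /* no nested Begin */
    p->state = kDrawing;
    return 1;
}

static int PBufferEnd(MockPBuffer *p)      /* back to the previous context */
{
    if (p->state != kDrawing) return 0;    /* End without Begin fails */
    p->state = kRenderedInto;
    return 1;
}

static int PBufferUse(MockPBuffer *p)      /* bind as a texture source */
{
    return p->state == kRenderedInto;      /* only valid after End */
}
```

Keeping these transitions explicit is a cheap way to catch the common mistake of sampling a pbuffer you are still rendering into.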
If you've got multiple texture units, you can call glActiveTexture to control which unit you're going to use. It looks exactly like a normal GL texture. The next most-asked-about topic is shaders: how do you write a pixel shader to do some interesting operation that isn't supported by the standard GL fixed-function pipeline?
We've gotten a lot of mileage out of ARB fragment program. Again, this is kind of similar to the difference between pbuffers and FBOs. GLSL is a much higher-level language that has been introduced over the past couple of years. We're starting to use it, we're having a lot of success with it, and for new developers it would definitely be a good idea to consider it; it's a lot easier to get into. But we still have some reasons to go back to ARB fragment program, for example, running on very old hardware.
Some of the very early ATI cards that can run pixel shaders have very low limits on the number of instructions they can actually run. They can only run 64 arithmetic instructions, for example, and it's a lot easier to predict whether your program will run on that sort of hardware if you're writing in something that looks more like assembler and has a closer correspondence to those actual limits. So in the code sample, we have just a couple of helper functions. The first thing you should always remember is to check that your program can actually be loaded. Because of these limits, programs that run on one card may not run on a whole bunch of other cards you haven't tested. It's going to be very hard to tell why things are going wrong for your QA or your users unless you have some kind of error reporting and some explicit checking of whether the fragment program can run. And quite often, what we do in those cases is, if you've got a CPU implementation, just fall back to running the CPU implementation when you're on hardware that isn't capable.
Another very confusing topic within the GL world is floating-point support. This wasn't heavily used by games, especially a few years ago when it was introduced, but it's very important for motion graphics and editing and compositing. People want to be able to work in high dynamic range; they want to be able to work with YUV, superwhites, and superblacks. So we require your plugins to actually support floating point. If you don't, you'll be giving the user a degraded experience: you'll be clipping their colors, and they'll be unhappy. But on the GPU, there are a couple of key features that many cards don't support when you're running in floating point. One is bilinear filtering, where you get nice filtering when you're scaling up or scaling down or doing 3D rendering. The other is hardware blend-mode compositing, where you draw multiple polygons over the top of each other and use the alpha to blend them together. A lot of older cards especially only support that kind of compositing with 8-bit textures; if you try to do it on float, you'll just end up overwriting the previous pixels. And there isn't really a very good way to check for this. We started off trying to do graphics-card ID checks, but we ran into the obvious problem that when a new graphics card comes out, you don't know what its capabilities are. So now we run checks at startup where we read back pixel values to see if the operation succeeded. This is obviously pretty awful; it's not something you want to be putting into your own code. So our very strong advice is to write your GPU algorithms so that they don't rely on these two features. If you really, really need them, or you've got some legacy code that relies on them, it is possible to emulate them at the cost of some performance.
And the directional blur sample code that we now ship with the FxPlug SDK demonstrates how to do four-tap filtering within your fragment program to emulate bilinear filtering. We don't have a sample that demonstrates how to emulate blending, but the basic idea is that you draw one polygon into a pbuffer, you finalize that pbuffer, and then you use it as a texture source when you blend the next polygon. So you end up with a pass per polygon, which is very inefficient from the performance side.
So, Paul mentioned already that to give users good performance, we use low-resolution proxies. Usually that's just a case of scaling parameters: if you're drawing into a single context and you don't have any intermediate buffers, you can just apply the scale, and possibly an offset if there are any fields involved. But when you've got intermediate buffers, when you've got pbuffers, it's very, very easy (I've made this error myself quite a few times) to not take that into account in your intermediate-buffer rendering. You'll get a low-res input image, accidentally scale it up into a full-size intermediate pbuffer, do all of your processing on that large intermediate pbuffer, and then end up scaling it down at the end. The results will look fine, but the users won't be getting the performance they really should be getting from a low-resolution proxy. So just keep an eye out for that: if you're jumping down to low resolution and your plug-in isn't speeding up, and you're using intermediate buffers, that's something you really should check. So those are the topics I wanted to cover. I'm just going to hand back to Darrin now to wrap up.
Thank you very much, Pete. Okay, if after WWDC you still have questions, well, first, there will be a lab after this, which I'll get to in a second, but after WWDC, you can go to the ProApps dev mailing list and add yourself to that and search the archives and see if your question's already been answered.
You can do that at lists.apple.com, and it's a really great group of people. There are a lot of developers who are currently shipping products on it, and they're very helpful, very useful for answering questions when we can't or when we don't have the time. If you have questions of a proprietary nature that you don't want your potential competitors to see, you can send them to the [email protected] address, and that'll go to me and Pete and Paul and a couple other people internal to Apple, but other people won't see it. If you want to see the sample code for porting your plug-ins from After Effects to FXPlug, that's available at developer.apple.com/wwdc/attendee.
As I said, there are actually two labs starting at 2:00 in the Graphics and Media Lab: the Final Cut Pro Lab, where you can get some help with your Final Cut Pro XML questions, and the FxPlug Lab. So in summary, FxPlug 1.2.2 and 1.2.3 have some new features.
The FxColorImageInfo API, excuse me, FxImageColorInfo API. The Xcode templates: you should be able to find those again and use them. And of course, improved support in Motion and Final Cut Pro. Please join the mailing list; bring us your questions and let us help you. And if you can, optimize for both hardware and software rendering, because your users will appreciate it.