Apple Applications • 45:21
In this session we will discuss the new FxPlug SDK in detail. We will demonstrate how to create custom UI controls using NSViews, and how to implement OpenGL on-screen controls, which will enable you to create highly interactive and easy-to-use filters and generators in Motion.
Speakers: Dave Howell, Pete Warden
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Hello, welcome to the FxPlug in Depth session. Sorry we started a little late, but Pete's plane showed up about five minutes late on Tuesday, so we've just been off a little bit since then. We're going to talk about some advanced topics that we didn't get to cover in yesterday's FxPlug overview session, do some demos, and talk about OpenGL and so on. I'm Dave Howell again.
In this session we will talk about how you can do retiming: when it's time to render a frame, you can get an input frame for a filter that comes from a different time, so you can do a video-reverb-style effect or motion blur or something like that.
We will talk about the two optional parameter types that some hosts might support, the gradient and the histogram, and about custom parameter types, which we briefly touched on in yesterday's session; here we will talk more about how you can implement a custom parameter type and what the requirements and restrictions are.
We will also cover custom parameter UI, which is often associated with a custom parameter type, and how to do on-screen controls in OpenGL, where you put your controls directly into the canvas and let people manipulate the parameters of a filter by dragging, drawing, or key presses. And we will talk about some of the details of using OpenGL in an FxPlug.
So retiming is only for filters. For generators, you can get all of the input parameters you want at any time you want; you can find out what the value of, say, a slider is at the current time or at some other frame time.
But for filters, in order to get an input frame at a different time, you need to use the FxTemporalImageAPI host API methods.
[Transcript missing]
You can also get the input as a bitmap or a texture. In your hardware render method you would normally get the texture, and in your software render method you usually get the bitmap.
So I'll just really quickly show you those methods for getting the inputs at different times. There's one called getSourceBitmap and one called getInputBitmap; the source is the unfiltered version and the input is the filtered version. The time value is a frame time, but it's a floating point value, so you can give it a time between frames. And in the same way, there's a getSourceTexture and a getInputTexture.
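A rough sketch, not from the session, of fetching the filter's input from an earlier frame inside a render method. The FxTemporalImageAPI selector names follow the slide descriptions; the exact selectors and argument types in the SDK headers may differ, and _apiManager stands for the API manager object handed to your plugin at initialization.

```objc
// Sketch only: selector names follow the session's description, not verified
// against the shipping headers.
id<FxTemporalImageAPI> temporalAPI =
    [_apiManager apiForProtocol:@protocol(FxTemporalImageAPI)];

if (temporalAPI != nil)
{
    // Frame times are floating point, so fractional (between-frame) times are allowed.
    double tenFramesEarlier = currentFrame - 10.0;  // how you track currentFrame is up to you

    FxTexture *pastFrame = nil;
    // getInputTexture returns the filtered input; getSourceTexture would return
    // the unfiltered source. renderInfo is the structure passed to your render call.
    [temporalAPI getInputTexture:&pastFrame
                        withInfo:renderInfo
                          atTime:tenFramesEarlier];

    // ...blend pastFrame with the current input for an echo or motion-blur look...
}
```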
In each of these calls, the render information is passed in: in your render function you're given a render info structure, which you can then pass back into these methods. So to show you what kind of things you can do with retiming, here's Pete Warden, graphics coder extraordinaire from the Motion team. Hey, thanks Dave. Okay, so I'm just going to give you a little demonstration of what we've actually been talking about with all the code here. So if I bring up some footage and just get this playing back.
I'll change the project length here and enable a filter that lets me scrub in time. If I just stop the playback, you can see that you can actually access the input, which can be footage or anything else that has gone through the Motion render engine with any effects applied. You can just say, "Okay, I want the footage from 10 frames ago, 30 frames ago," purely random access, and you can see it's very useful. Back to slides.
As I said, there are two optional parameter types: histograms and gradients. The reason they're called optional is that while Motion supports both of them, some other hypothetical host app in the Pro Apps suite might not support these types. So when FxPlug support is added to Final Cut, we don't guarantee that these will be there.
Just like the way that you create the regular, standard parameter types and get and set their values, there are analogous host APIs for creating, getting, and setting values for the optional parameter types. The histogram is used in one Motion plugin, the Levels plugin. It may not be too useful for general use, but it's there if you want to use it in your plugin. You can see that it has a very simple interface that can be expanded, so a user can go into quite a bit of depth with the levels.
To create these, you use the API manager to get an object belonging to a class that implements the FxOptionalParameterCreationAPI protocol. That creator object then has an addHistogramWithName method, and another one for making a gradient parameter. Most of the examples I've given have just had the default flag for the parameter flags, so in this example I've set some flags to give you a flavor of the kinds of things there are in the parameter flags. For Levels you probably would want to save the values, but if you're using the histogram just for displaying levels and not really interacting with them, you might not want to save them.
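A minimal sketch, not from the session, of creating the two optional parameter types inside a plugin's parameter-setup code. Only addHistogramWithName was named on the slide; the gradient selector, the argument labels, the parameter IDs, and the flag constants here are illustrative, so check FxOptionalParameterCreationAPI in the SDK headers.

```objc
// kLevelsParamID and kGradientParamID are hypothetical parameter IDs.
id<FxOptionalParameterCreationAPI> optionalCreator =
    [_apiManager apiForProtocol:@protocol(FxOptionalParameterCreationAPI)];

if (optionalCreator != nil)   // a host is allowed not to support these types
{
    // Histogram used purely for display, so don't bother saving its values.
    [optionalCreator addHistogramWithName:@"Levels"
                              parameterID:kLevelsParamID
                           parameterFlags:kFxParameterFlag_DONT_SAVE];

    [optionalCreator addGradientWithName:@"Colorize Gradient"
                             parameterID:kGradientParamID
                          parameterFlags:kFxParameterFlag_DEFAULT];
}
```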
[Transcript missing]
Setting the histogram levels is similar; you can get the levels and you can change them. Here's the gradient parameter, the other optional one, shown closed and open. This is probably more generally useful to plugin developers. You can use gradients, which you can see in the little pop-up.
So to show you what those look like, we'll go to the demo machine. Yeah, I'll just, again, drag over the same footage.
[Transcript missing]
I'll start with a nice little widget letting you do all the things you would expect to do with a Levels color correction tool. Now, if I go over and select the gradient colorize while I've got the image selected.
You can see this whole gradient UI widget that you basically get. You really don't have to do any work for this on the FxPlug side, as Dave was saying. All you do is request a gradient parameter and we set all of this up for you.
For example, I can put a, you know... It really is very straightforward to actually be using this in the code. Back to slides. So, custom parameters. You create a custom parameter the same way as you create the other parameters, with the addCustomParameterWithName method, which is one of the methods in the standard parameter creation API.
But the interesting thing here is that the default value is an object of your custom class. The type can be any class you want, except that it needs to conform to the NSCoding protocol. That's used so Motion can flatten your custom data when it's saving a project, or for duplicating a set of parameters from one track to another, that sort of thing.
Here you can see we instantiate something called MyDataClass with the emptyData method, and this is the NSCoding implementation. You can see it's really straightforward. Obviously it will get more complex if your custom class is complex, but it's just two methods you need to implement: encodeWithCoder and initWithCoder.
[Transcript missing]
I've added another restriction, which is that you have to do keyed coding in your NSCoding implementation. It needs to use encodeObject:forKey: and decodeObjectForKey:, and as long as you do that you'll be fine. You can't just use something like NSString, which does conform to NSCoding; you have to make sure your coding is keyed.
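A minimal sketch of a custom parameter value class that satisfies the rules above: it conforms to NSCoding and uses keyed coding only. The class, its fields, and the emptyData convenience method are purely illustrative.

```objc
@interface MyDataClass : NSObject <NSCoding>
{
    NSString *_label;
    double    _strength;
}
+ (id)emptyData;
@end

@implementation MyDataClass

+ (id)emptyData
{
    return [[[MyDataClass alloc] init] autorelease];
}

- (void)encodeWithCoder:(NSCoder *)coder
{
    // Keyed coding only: everything goes through ...forKey: methods.
    [coder encodeObject:_label forKey:@"label"];
    [coder encodeDouble:_strength forKey:@"strength"];
}

- (id)initWithCoder:(NSCoder *)coder
{
    self = [super init];
    if (self != nil)
    {
        _label    = [[coder decodeObjectForKey:@"label"] retain];
        _strength = [coder decodeDoubleForKey:@"strength"];
    }
    return self;
}

- (void)dealloc
{
    [_label release];
    [super dealloc];
}

@end
```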
So for custom parameter UI, what you do is create an NSView and, using the FxPlug protocols, tell the host app that that's to be the control for one of your parameters. It can be any subclass of NSView; it could be a subclass of NSOpenGLView or your own thing. Here's an example that is an NSTextView, which is really pretty simple to implement.
You can use the standard NSTextView methods to get the value of the string inside the view. The other thing you can do with the NSView is create it in Interface Builder and save it in a nib; because an FxPlug is a bundle, you can look inside your Resources folder and load that nib. It will be localized and all that good stuff.
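A short sketch of handing the host a custom view that was laid out in Interface Builder and stored in the plugin's own bundle. The createViewForParm: selector, the nib name, and the _customView outlet are assumptions; use whatever method the custom parameter view protocol in your SDK version declares.

```objc
#import <Cocoa/Cocoa.h>

- (NSView *)createViewForParm:(UInt32)parmId
{
    // An FxPlug is a bundle, so look the nib up next to our own class rather
    // than in the host app's main bundle; this also picks up localization.
    NSBundle *pluginBundle = [NSBundle bundleForClass:[self class]];
    NSNib *nib = [[[NSNib alloc] initWithNibNamed:@"MyParamView"
                                           bundle:pluginBundle] autorelease];

    if (![nib instantiateNibWithOwner:self topLevelObjects:NULL])
        return nil;

    // _customView is an outlet connected in the nib; it can be any NSView
    // subclass, including an NSOpenGLView or a text view.
    return _customView;
}
```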
Another thing you need to do when you use the custom parameter view host API is add the name of that API to the list of protocols that you implement in your plug-in. Here is the example from the Info.plist of a generator that has
[Transcript missing]
So in Motion your custom view appears in the Inspector. You can make your view resizable, although you might notice that in Motion right now the resizing is only horizontal; the height is fixed. Whatever height you create your view with, it's always going to be that height, and there's no control for resizing that in the app.
You can also use any subclass of NSView. I just made a silly example of something you might do in a custom view, but it can really be any kind of graphical stuff, and of course you can use Core Image and Core Graphics and all of that for your preview.
You would override methods in the NSView and NSResponder classes to get mouse-down events and anything else: key events, even complex tablet events. If you use tablet events for things like pen angle, and whether the user is using the eraser or the pen tip, you'll get those events too, and of course scroll events. You can also assign a contextual menu to your view, so when the user right-clicks or Control-clicks you can present a menu.
One thing to note, though, is that unlike the other methods that you implement in your plugin, the NSView and NSResponder methods are going to be called by the system, not by FxPlug. So the host app might not be in a state where you can access parameter values, get and set parameters, or change the state of parameters, like hiding them or deactivating some, that sort of thing.
So in order to access your parameters, you need to use the action API. You get that host API from the ProPlug apiForProtocol: method, and then make two calls: startAction before you start accessing the parameter values, and endAction when you're done. Pretty simple, but if you don't do it, you'll suffer heinous results.
The other thing that's in the Custom Parameter Action API is one method for getting the current time. And that's just because, of course, when you get a mouse down event, you don't know what the time in the timeline is. It's the current time in the timeline, not the system time or anything like that.
If you want to know what frame you were on when the user clicked or dragged or pressed a key or something, then you just use that and get the current time value. And then when you go to get a parameter value, you can pass that frame number, that time in.
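A rough sketch, not verbatim SDK code, of a mouse-down handler in a custom parameter view that brackets its parameter access with the action API as just described. The _actionAPI and _settingAPI instance variables, the setFloatValue: selector spelling, and the parameter ID are assumptions; match them to FxCustomParameterActionAPI and FxParameterSettingAPI in your headers.

```objc
- (void)mouseDown:(NSEvent *)event
{
    NSPoint where = [self convertPoint:[event locationInWindow] fromView:nil];

    // AppKit is calling us, not the host, so tell the host we're about to
    // touch parameter values.
    [_actionAPI startAction:self];

    // Current time in the timeline, not the system clock.
    double when = [_actionAPI currentTime];

    // Map the click position onto a 0..1 parameter value (illustrative).
    [_settingAPI setFloatValue:(where.x / NSWidth([self bounds]))
                       toParam:kMyStrengthParamID
                        atTime:when];

    [_actionAPI endAction:self];
}
```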
So you probably want to see what these things look like. Okay. Should we go to the demo machine? I'm Pete Warden, I'm an engineer with the Motion Team, and I'm going to be talking about on-screen controls initially. But what I want to really do is just give you an idea of what we're actually talking about, what the motivation is here to be doing this.
So, if I go over to our Kaleidoscope filter, and you pay attention to the thing that I'm actually dragging around with the mouse... This whole UI widget is a custom widget that we're drawing through FxPlug. You get control over a whole bunch of segment angle things, you can drag the center of this around, you can change the rotation you've got for the kaleidoscope filter, all through just one fairly simple widget.
And this is really something that our users have been very keen on and very impressed by. If you're writing your own filter and you can find a way to give users this sort of control over what you're doing, they're going to be very happy, and people are definitely going to look at it, and buy it, if there's this sort of high level of user interaction there.
Do you want to come over to slides? Yeah, some slides for on-screen controls. So now that I've given you the 10,000-foot view of what on-screen controls are, I'm just going to go over some of the details of actually implementing them, the sort of stuff you'll need to know if you're planning on doing this in your own plug-ins.
[Transcript missing]
As far as user interaction goes, we have our own custom API that gives you mouse and keyboard events. As Dave was saying about the set-parameter stuff, when you get those events, what you should be doing is calling set parameter on the parameters of your filter and controlling your filter that way.
[Transcript missing]
It's a way of drawing into the screen buffer where, instead of using textures, you actually draw in IDs of the user interface elements you're drawing. Then we take that screen buffer and read the ID back when we want to figure out which object, which part of your filter's UI, the user has just clicked on and selected. So you do the same thing as you would do to actually draw the UI, except that instead of setting colors or textures, you just set an ID saying which part of the UI you're drawing at the current time.
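A sketch of the ID-buffer picking idea just described: each part of the on-screen control is drawn once with its normal look and once into a hit-test pass with a flat color that encodes its ID, which is then read back under the mouse. The part constants and the quad-drawing helpers are hypothetical.

```objc
#import <OpenGL/gl.h>

enum { kOSCPart_None = 0, kOSCPart_Center = 1, kOSCPart_RotationRing = 2 };

static void DrawOSCForHitTesting(void)
{
    // No textures or blending in this pass: just flat IDs in the red channel.
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);

    glColor3ub(kOSCPart_Center, 0, 0);
    DrawCenterHandleQuad();           // hypothetical drawing helpers

    glColor3ub(kOSCPart_RotationRing, 0, 0);
    DrawRotationRingQuad();
}

// When the user clicks at (x, y), read the ID back out of the hit-test buffer.
static int PickOSCPart(int x, int y)
{
    GLubyte pixel[4] = { 0, 0, 0, 0 };
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    return pixel[0];                  // kOSCPart_None means nothing was hit
}
```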
Well, that was a quick skip over on-screen controls. Now I'm going to go into some of the really dark and dingy corners of the OpenGL rendering side. The first thing I want to talk about is pbuffers. What is a pbuffer? Pbuffer stands for pixel buffer; it's an OpenGL term.
What you can think about is if you ever need a temporary image to draw into, if you ever need intermediate results in your filter, if you ever need to be doing multi-pass rendering at all, then you are going to have to start to get to grips with pbuffers.
You can do simple filters such as color correction that you can fit into a single fragment program without using pbuffers. As I was saying in the introductory talk, you can just set up your fragment program and draw a quad. But we find for the majority of our more complex plugins, we actually end up doing things like blurs, we actually end up having to render out intermediate results. And then use those intermediate results in further stages. And in those cases, you really need to be getting to grips with pbuffers.
Now, we have some example code that we give out about pbuffers for the creation and deletion. I just want to go over some of the policy, some of the things that you really should know if you're actually looking at using pbuffers. One of them is pbuffer creation and deletion.
We really highly recommend that you create your pbuffers just once and then keep them around for that plugin instance for as long as you can. Pbuffer creation is a very expensive operation: there's a whole bunch of system resources involved, and at the OpenGL driver level there's an awful lot of work that has to go on every time you create or delete a pbuffer.
It makes a massive difference to your rendering performance if you can just do that creation or deletion once and then just have them sitting there. This gets a little bit more complicated when you're dealing with a situation where a filter is on something that's changing size, for example, a particle system.
You really need to be creating pbuffers that fit the size you've been given. Particle systems are a pathological case, but the same thing comes up in a lot of other situations where something's continuously changing size: if you're not careful, you end up recreating the pbuffers you're using within your filter every single frame. One of the things we've been doing is using various strategies to over-allocate pbuffers, so that we allocate pbuffers larger than we need at the current time and are able to deal with some growth before we have to reallocate. That's been a big part of our optimization strategy, and it's made a real difference to our performance with the filters.
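A sketch of the "create once, keep it around, over-allocate" policy using the CGL pbuffer calls of that era (now deprecated). The rounding strategy and the _pbuffer instance variables are illustrative.

```objc
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>
#import <OpenGL/glext.h>

- (CGLPBufferObj)pbufferForWidth:(GLsizei)width height:(GLsizei)height
{
    // Round requested sizes up so a slowly growing particle system doesn't
    // force a reallocation on every single frame.
    GLsizei paddedW = (width  + 255) & ~255;
    GLsizei paddedH = (height + 255) & ~255;

    if (_pbuffer != NULL && paddedW <= _pbufferWidth && paddedH <= _pbufferHeight)
        return _pbuffer;                      // reuse the cached pbuffer

    if (_pbuffer != NULL)
        CGLDestroyPBuffer(_pbuffer);          // expensive, so do it rarely

    CGLError err = CGLCreatePBuffer(paddedW, paddedH,
                                    GL_TEXTURE_RECTANGLE_EXT, GL_RGBA,
                                    0, &_pbuffer);
    if (err != kCGLNoError)
    {
        _pbuffer = NULL;
        return NULL;
    }

    _pbufferWidth  = paddedW;
    _pbufferHeight = paddedH;
    return _pbuffer;
}
```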
Now, when you actually come to do some rendering using pbuffers, the way it works is that you call some sort of pbuffer begin function, which redirects all of your OpenGL drawing into the pbuffer context; all of your rendering calls then get rendered into that pbuffer, and then you call some sort of pbuffer end, which returns you to the context you were in before. Once you've done that, you can call some sort of use function on the pbuffer, and that lets you use it exactly like a texture. There really is no difference from the user's point of view.
[Transcript missing]
or any of the other things you'd really like to be doing there. There are some limited ways that you can use GL blending to do accumulation into the pbuffer you're drawing into, or to do simple compositing, but there really isn't a good way of accessing the current pbuffer that you're drawing into.
So the scheme we use an awful lot is the ping-pong, as we call it, or double-buffer approach: we do some rendering into one buffer, and then when we need to reference that to do further processing on it, we switch over to a second buffer and use the first pbuffer as a texture, as an input into the fragment program. With two buffers you can do pretty much everything you want; you just switch back and forth for every stage.
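A sketch of the ping-pong scheme with two pbuffers: draw into one while the other is bound as the input texture, then swap for the next pass. beginPBuffer, endPBuffer, and the other helpers stand in for whatever wrappers you build around the CGL pbuffer calls; they are not SDK functions.

```objc
CGLPBufferObj buffers[2] = { pbufferA, pbufferB };
int src = 0;
int dst = 1;

for (int pass = 0; pass < numPasses; pass++)
{
    beginPBuffer(buffers[dst]);              // redirect GL drawing into dst

    if (pass == 0)
        bindTexture(inputTexture);           // first pass reads the real input
    else
        bindPBufferAsTexture(buffers[src]);  // later passes read the other pbuffer

    bindFragmentProgramForPass(pass);        // hypothetical per-pass setup
    drawFullScreenQuad();

    endPBuffer();                            // back to the previous context

    int tmp = src;                           // swap roles for the next pass
    src = dst;
    dst = tmp;
}

// buffers[src] now holds the result of the final pass; use it as a texture
// when drawing into the context the host gave you.
```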
So I've covered pixel buffers. Now I want to talk about some pixel shader stuff, the things I really want you to know about fragment programs and some of the funky little details of using ARB fragment programs within FxPlug filters. As I said, fragment programs, pixel shaders, there are a lot of different terms for all of these things. What they involve is running assembler-like programs on every pixel that you're drawing through the OpenGL rendering engine.
We have some code as part of our Xcode template that demonstrates how to create and delete these ARB fragment programs; it really isn't that much code. As far as using them, if you're used to using textures in OpenGL rendering, it's very much the same syntax: all you do is bind the fragment program just before you're ready to draw your quad, and then when you call the rendering routines, the fragment program gets run on every pixel you render with that quad.
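A minimal sketch of the ARB_fragment_program flow: compile the program string once, then bind and enable it right before drawing the quad so it runs on every pixel of that quad. The program here is a trivial pass-through, and drawInputQuad is a hypothetical helper.

```objc
#include <string.h>
#import <OpenGL/gl.h>
#import <OpenGL/glext.h>

static const char *kPassThroughProgram =
    "!!ARBfp1.0\n"
    "TEMP color;\n"
    "TEX color, fragment.texcoord[0], texture[0], 2D;\n"
    "MOV result.color, color;\n"
    "END\n";

// One-time setup (per context):
GLuint programID = 0;
glGenProgramsARB(1, &programID);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, programID);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(kPassThroughProgram), kPassThroughProgram);

// At render time:
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, programID);
drawInputQuad();                            // the program runs on every pixel of this quad
glDisable(GL_FRAGMENT_PROGRAM_ARB);
```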
I also briefly want to talk about some possible alternatives. One thing you can do, if you have a filter that fits the Core Image model, is implement it within your FxPlug filter using Core Image filters, chaining them together. That's a very viable way of approaching this.
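A sketch of the Core Image route: chaining a couple of built-in CIFilters. How you wrap the FxPlug input and output as CIImages is left out here; the filter names and keys are standard Core Image.

```objc
#import <QuartzCore/QuartzCore.h>

CIImage *inputImage = /* a CIImage wrapping the filter's input */ nil;

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:inputImage forKey:kCIInputImageKey];
[blur setValue:[NSNumber numberWithDouble:4.0] forKey:kCIInputRadiusKey];

CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
[mono setValue:[blur valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];

CIImage *result = [mono valueForKey:kCIOutputImageKey];
// ...draw 'result' into the current GL context with a CIContext...
```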
Another thing we have looked at is using Cg, because we know that ARB fragment program is a pretty low-level language, and it would be nice if we had a more high-level way of writing our pixel shaders. So we have looked at trying to use Cg within our FxPlug filters, and we've had some success.
We have also run into some problems with the Mac Cg implementation. So it's something to think about, but we have gone back to just using ARB fragment programs as our main implementation language for all of the filters in Motion, just because it's a fairly stable, fairly well understood system.
It also lets us do some of the stuff we need to do to run on, for example, ATI cards, which happen to have a lot of restrictions on the sort of programs they can run. It's a lot easier to deal with that sort of restriction at the assembler level than to try to figure it out from the compiled output of a higher-level language.
Another thing I'm going to talk about is the floating point support within Motion, and floating point support in OpenGL in general. There is some floating point support in Panther, but in general it's a Tiger-only feature. That's where all of the bugs have been ironed out, and it really seems to be working well on all of the cards that Motion ships on, all the way from the NV34 up to the newer cards.
It doesn't actually make much of a difference to simple filters. Since ARB fragment programs run internally at floating point precision anyway, as long as you're just doing a single-pass fragment shader plugin, the conversion is handled for you, outside of the fragment program: when it pulls in the texture and when it writes out your final result, it converts to whatever bit depth you happen to be rendering at.
[Transcript missing]
This is another question we get asked an awful lot: "Okay, I want to do some 3D rendering. Where's my Z-buffer?" When we call an FxPlug plugin, we call you in a context that doesn't actually have a depth buffer attached. That's a conscious decision on our part, because most plugins don't actually need the depth buffer, and every time you use a depth buffer it takes up VRAM, it takes up system resources.
So if we can avoid rendering with a depth buffer, that really does speed things up a fair bit. So what do you do if you actually need a Z-buffer, if you're doing some sort of 3D rendering within your plugin? Well, you create a pbuffer internally to your plugin, render into that with a Z-buffer attached to the pbuffer, and then take the results from that pbuffer and use it as a texture to draw into the context that we give you.
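A sketch of getting a Z-buffer by rendering into your own pbuffer: the pbuffer's context is created from a pixel format that requests a depth buffer, which is exactly what the host's context leaves out. Error handling is trimmed, hostContext is assumed to be the host's shared GL context, and all of the calls are the (now deprecated) CGL pbuffer APIs.

```objc
#import <OpenGL/OpenGL.h>

CGLPixelFormatAttribute attrs[] = {
    kCGLPFAPBuffer,
    kCGLPFAAccelerated,
    kCGLPFAColorSize, (CGLPixelFormatAttribute)32,
    kCGLPFADepthSize, (CGLPixelFormatAttribute)24,   // the part the host context omits
    (CGLPixelFormatAttribute)0
};

CGLPixelFormatObj pixelFormat = NULL;
GLint numFormats = 0;
CGLChoosePixelFormat(attrs, &pixelFormat, &numFormats);

CGLContextObj pbufferContext = NULL;
CGLCreateContext(pixelFormat, hostContext, &pbufferContext);  // share textures with the host

// Create the pbuffer, attach it to pbufferContext with CGLSetPBuffer, render
// the 3D pass there with GL_DEPTH_TEST enabled, then bind the pbuffer as a
// texture (CGLTexImagePBuffer) and draw that into the context the host gave you.
```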
One thing I should mention is that, especially in this space, if you're doing 3D, using a Z-buffer to handle intersections tends to show up some really nasty artifacts. People in the motion graphics industry really don't expect to see the sort of jagginess you end up with using just a standard Z-buffer implementation. So some sort of anti-aliasing, some sort of improvement to the quality there, is very much recommended.
Another thing to be aware of (hopefully we take care of most of this for you, but just in case you ever run into this situation): I want to go over some of the limits on the texture and pbuffer sizes you can have. On current ATI cards, you can go up to 2048x2048 for a single texture or for a pbuffer that you're rendering into. The same is true on NV34 cards, the 64MB NVIDIA cards that are out there. But on all other NVIDIA cards you can go up to 4096x4096.
This is the situation for 8-bit, and it's just a hardware limitation of the cards. For float it gets a little more complicated, because you start to run into the VRAM limits on the card; you can only fit so many pbuffers and so many textures on the card at the same time. And it gets variable depending on things like whether people have two monitors attached, which splits the VRAM in half and gives half to each monitor. So there's a whole bunch of calculations we end up having to do to figure out the maximum size we can have for textures and pbuffers at higher bit depths.
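A defensive sketch for when you allocate oversized pbuffers yourself (for multi-sampling or padding): ask the driver for its per-texture limit and clamp, rather than hard-coding 2048 or 4096. This only covers the per-texture limit; total VRAM pressure at float depths still has to be budgeted separately. inputWidth and inputHeight stand for the sizes the host hands you.

```objc
#import <OpenGL/gl.h>

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // per-texture/pbuffer dimension limit

GLsizei desiredW = inputWidth  * 2;             // e.g. 2x for a supersampling buffer
GLsizei desiredH = inputHeight * 2;

GLsizei clampedW = (desiredW > maxSize) ? maxSize : desiredW;
GLsizei clampedH = (desiredH > maxSize) ? maxSize : desiredH;
```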
Now, as I said, we try to handle this for you. We will always clamp our input and output sizes to be legal sizes. We do a lot of calculation and a lot of work to make sure that we never call any plugins asking for output sizes they can't deliver, or giving them input textures that are illegal sizes.
The place where this gets tricky is, as I was saying on the Z-buffer side, trying to do multi-sampling or anti-aliasing to improve quality. Very often what you want to do is create a pbuffer that is larger than your input, so you can render into it and then use it as a multi-sample buffer to improve the quality.
That gets very tricky. You really have to be careful that you're not running into the size limits I've just been talking about when you're doing that sort of allocation of pbuffers that are larger than the input we're giving you. We don't have a magic solution for this, but we want to make sure that if you're seeing corruption, if you're seeing issues, and you are allocating pbuffers that are larger than the input or the output, this is something you talk to us about and something you're aware of.
Okay, well, I'll just pass back to Dave here. I wanted to point out one other thing. Because FxPlug plugins are CFBundles, NSBundles, and they have Resources folders, you can not only use the standard CI filters built into Core Image (all the regular Core Image filters should work), you can also make your own image units; they can sit inside your Resources folder and you can load them when needed.
There's a method where you give a partial path name or file name and use the standard bundle method for getting the file with that path out of your Resources folder. You can use the type of image unit that has actual Objective-C code in it, or you can do the script-only image units; those will work great too.
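A sketch of loading an image unit that ships inside the plugin's own Resources folder. The unit's file name is hypothetical, and the exact sense of the allowNonExecutable flag (whether it permits or forbids units containing Objective-C code) should be checked against the CIPlugIn documentation for your SDK.

```objc
#import <QuartzCore/QuartzCore.h>

NSBundle *pluginBundle = [NSBundle bundleForClass:[self class]];
NSString *unitPath = [pluginBundle pathForResource:@"MyImageUnit"
                                            ofType:@"plugin"];
if (unitPath != nil)
{
    // Loads the image unit so its filters become available by name via CIFilter.
    [CIPlugIn loadPlugIn:[NSURL fileURLWithPath:unitPath]
      allowNonExecutable:NO];
}
```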
Another thing I want to talk about is Intel-based Macs. Of course, we don't have the SDK revved yet, and you wouldn't be able to use it anyway until Motion is running on Intel-based Macs, on the transition kit. But it will come, and we'll let you know what the strategy is; just keep watching the website. There is a list of links on the WWDC developer page that has, as I said in the session yesterday, the place to go to download the FxPlug SDK.
You can download it and look at the headers, check out the Xcode templates and the examples, and read the documentation and all that. But to actually run and execute FxPlug plugins, the only host app, of course, is Motion, and only Motion 2.
There is a 30-day trial version of Motion available too, so if you want to just check out the SDK for a while, you can do that. We'll take questions at the microphone there, but there's also a lunch afterwards at 12:45 upstairs in the Pro Audio and Pro Video connection room.
So grab a lunch and come on up and we'll talk more. Also, if you want to contact us, we have a mailing list at [email protected]. It's a list of maybe a dozen people at Apple who are working on FxPlug, and we'll be glad to answer questions.