Apple Applications • 45:21
In this session we will discuss the new FxPlug SDK in detail. We will demonstrate how to create custom UI controls using NSViews, and how to implement OpenGL on-screen controls, which will enable you to create highly interactive and easy-to-use filters and generators in
Speakers: Dave Howell, Pete Warden
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it may contain transcription errors.
Hello. Welcome to the FxPlug in-depth session. Sorry we started a little late, but Pete's plane showed up about five minutes late on Tuesday, so we've just been off a little bit since then. We're going to talk about some advanced topics that we didn't get to cover in yesterday's FxPlug session, the overview, and do some demos and talk about OpenGL and so on. I'm Dave Howell again.
In this session we'll talk about how you can do retiming: when it's time to render a frame, you can get an input frame for a filter that comes from a different time, so you can do a video-reverb-style effect or motion blur or something like that. We'll talk about the two optional parameter types that some hosts might support, the gradient and the histogram. We'll cover custom parameter types, which we briefly talked about in yesterday's session; we'll talk more about how you can implement a custom parameter type and what the requirements and restrictions are. And custom parameter UI, which is often associated with a custom parameter type.
We'll also talk about how to do on-screen controls in OpenGL, where you put your controls directly into the canvas and let people manipulate parameters of a filter by dragging or drawing or that sort of thing, or key presses. And we'll talk about some of the details of using OpenGL in an FxPlug plug-in.
So when you do retiming, this is only for filters. For generators, you can get all of the input parameter values you want at any time you want; you can find out what the value of, say, a slider is at the current time or at some other frame time. But for filters, in order to get an input frame at a different time, you need to use the FxTemporalImageAPI host API methods.
And they're pretty simple. They're used to request an input image at a specified time, and you can get the input in four ways. You can get it filtered or unfiltered, which means if you have an input that has other filters applied to it before your plug-in is applied, then you can get the input with or without those filters applied. So you can get the source image untouched, or you can get the filtered one. And also you can get the input as a bitmap or a texture. So in your hardware render method, normally you would get the texture, and in your software method, you usually get the bitmap.
So I'll just really quickly show you those methods for getting the inputs at different times. There's one called getSourceBitmap and one called getInputBitmap. The source one is the unfiltered version, and the input one is the filtered version. And the time value is a frame time, but it's a floating-point value, so you can give it a partial frame. And then, the same way, there's a getSourceTexture and a getInputTexture.
And in each of these, the render information is passed in. In your render function, you're given a render info structure, which you then can pass back into these methods. So to show you what kind of things you can do with retiming, here is Pete Warden, graphics coder extraordinaire from the Motion team.
[Pete Warden]
Hey, thanks, Dave. OK, so I'm just going to give you a little demonstration of what we've actually been talking about with all the code here. So if I bring up some footage, get it playing back, change the project length here, and enable a filter that lets me scrub in time... If I just stop the playback, you can see that you can actually access the input, which can be footage, or anything that's gone through the Motion render engine with any effects applied or anything else like that. You can just say, "OK, I want the footage at 10 frames ago, 30 frames ago," just purely random access. And you can see it's very useful. Okay, back to slides.
Now, as I said, there are two optional parameter types, histograms and gradients. And the reason they're called optional is that while Motion does support both of those, some other hypothetical host app in the Pro App Suite might not support these types. So when the support is added to Final Cut, we don't guarantee that these will be there.
Just like the way that you create regular parameters, the standard parameter types, and get and set their values, there are analogous host APIs for creating, getting, and setting values for the optional parameter types. The histogram is used in one Motion plug-in, the Levels plug-in. It may not be too useful for general use, but it's there if you want to use it in your plug-in. You can see that it has a very simple interface that can be expanded, and a user can go into quite a bit of depth with the levels.
So to create these, you do the same sort of thing. You use the API manager to get an object that's a member of a class implementing the protocol called FxOptionalParameterCreationAPI. That creator object then has methods: addHistogramWithName, and another one for making a gradient parameter. Most of the examples I've given have just had the default flag for the parameter flags, so in this example I set some flags just to give you a flavor of the kinds of things there are in the parameter flags. For levels, you probably would actually want to save the values, but if you're using the histogram just for displaying levels and not really interacting with them, you might not want to save them.
And again, to get the value of a histogram parameter, you just use the getHistogram method, which takes a bunch of pointers to doubles and gives you back the values. The channel number that you ask for has five different values: it's either red, green, blue, or alpha, or it's RGB, the composite of the levels.
And setting the histogram levels is similar, so you can get the levels, change them, and set them back. And here's the gradient parameter, the other optional one, shown closed and open. This is probably more generally useful to plug-in developers. You can use gradients; you can see the little pop-up menu in the upper right of the parameter control in the Inspector there, and that lets your user get gradients from a library and add gradients to that library. I'll just whip through these: the gradient creation and value setting and retrieving is pretty much the same as all the others. So to show you what those look like, we'll go to the demo machine. I'll just drag over the same footage again.
And then if I just add a Levels filter on here and switch over to the Inspector, you can see that you get a nice little widget letting you do all the things that you'd expect to do with a Levels color correction tool. Now, if I actually go over and select Gradient Colorize while I've got the image selected...
You can see this whole gradient UI widget that you basically get. You really don't have to do any work for this on the FxPlug side, as Dave was saying. All you do is request a gradient parameter and we set all of this stuff up for you. I'll put a sort of green color in the middle of this white-to-black gradient here.
Yeah, it really is very straightforward to actually be using this in the code. And yeah, back to Dave.
[Dave Howell]
Back to slides. So, custom parameters. You create a custom parameter the same way as you create the other parameters, with the addCustomParameterWithName method, which is one of the methods in the standard parameter creation API.
But the interesting thing here is that the default value is an object of your custom class. The type can be any class you want, except that it needs to conform to the NSCoding protocol. That's used by Motion to be able to flatten your custom data when it's saving a project, or for duplicating a set of parameters from one track to another, that sort of thing.
So here you can see we instantiate something called MyDataClass with the emptyData method. And this is the NSCoding implementation. You can see it's really straightforward. Obviously, it'll get complex if your custom class is complex, but it's just two methods that you need to implement: encodeWithCoder and initWithCoder.
The custom parameters are not animatable by Motion. But that doesn't mean that you can't do your own animation if you present your own interface for it; it's just that there's no support in Motion's timeline for displaying values of a custom type. So generally, set the not-animatable flag.
There's another restriction, which is that you have to do keyed coding in your NSCoding implementation: it needs to use encodeObject:forKey: and decodeObject:forKey:. As long as you do that, you'll be fine. You can't just use NSString's default coding, even though NSString does conform to NSCoding; you have to make it use keyed coding. So when you do custom parameter UI, what you do is create a custom NSView and, using the FxPlug protocols, tell the host app that it's to be the control for one of your parameters.
And it can be any subclass of NSView. It could be a subclass of NSOpenGLView or your own thing. Here's an example that uses NSTextView, which is really pretty simple to implement: you can use the standard NSTextView methods to get the value of the string inside the view. The other thing you can do with an NSView is create it in Interface Builder and save it in a nib. Because an FxPlug plug-in is a bundle, you can look inside your Resources folder and load that nib, and it'll be localized and all that good stuff.
Another thing you need to do when you use the custom parameter view host APIs is add the name of that API to the list of protocols that you implement in your plug-in. Here's the example from the Info.plist of a generator: there's the array of dictionaries describing each plug-in in the ProPlug plug-in list, and you can see we just added that string as one of the protocols. So when the host app sees that you've implemented that, and it sees that you've created a parameter with the custom UI flag, the host app will call the method that you implement as part of the FxCustomParameterViewHost protocol. That's the single method in that protocol; it's called createViewForParm. You'll get called one time to add your view. In the simplest case, I have something called MyTextView, and I just go ahead and allocate that and pass it back to the host app. Then it will put it in the Inspector and do the normal things like hiding and collapsing it; if it's in a group, it'll disappear when it's supposed to, and all that is handled for you. But you will get the standard events, the NSResponder and NSView methods.
So in Motion, your custom view appears in the Inspector. You can make your view resizable, although you might notice that in Motion right now the resizing is only horizontal. There's a fixed height: whatever height you create your view with, it's always going to be that height. There's no control for resizing that in the app.
You can also use, as I said, any subclass of NSView. I just made a silly example of something you might do in a custom view, but it can really be any kind of graphical stuff. And of course you can use Core Image and Core Graphics and all of that for your preview.
You would override methods in the NSView and NSResponder classes to get mouse-down events and anything else: key events, even complex tablet events. If you use the tablet events for things like pen angle and whether the user is using the eraser or the pen tip, you'll get those events too, and of course scroll events. You can also assign a contextual menu to your view, so when the user right-clicks or control-clicks, you can present a menu.
Now, one thing to note, though, is that unlike any of the other methods that you implement in your plug-in, the custom parameter UI methods, that is, the NSView and NSResponder methods, are going to be called by the system, not by the FxPlug host. So the host app might not be in a state where you can access parameter values, get and set parameters, or change the state of parameters, like hiding them or deactivating some, that sort of thing.
So in order to access your parameters, you need to use the custom parameter action API. You get that host API from the ProPlug apiForProtocol method, and you just make two calls: startAction before you start accessing the parameter values, and then endAction when you're done. Pretty simple, but if you don't do it, you'll suffer heinous results.
The other thing that's in the custom parameter action API is a method for getting the current time. That's because, of course, when you get a mouse-down event, what you want is the current time in the timeline, not the system time or anything like that. So if you want to know what frame you were on when the user clicked or dragged or pressed a key, you just use that to get the current time value. Then when you go to get a parameter value, you can pass that frame number, that time, in.
So you probably want to see what these things look like. OK. Should we go to the demo machine?
[Pete Warden]
I'm Pete Warden, I'm an engineer on the Motion team, and I'm going to be talking about on-screen controls initially. But what I really want to do is just give you an idea of what we're actually talking about, what the motivation is here. So, if I go over to our kaleidoscope filter, and you pay attention to the thing that I'm actually dragging around with the mouse: this whole UI widget is actually a custom UI widget that we're drawing through FxPlug. You get control over a whole bunch of segment angle things.
You can drag the center of this around. You can actually change the rotation you've got for the kaleidoscope filter, all through just one fairly simple widget. And this is really something that our users have been very keen on and very impressed by. If you are writing your own filter and you can find a way to give users this sort of control over what you're doing, they're going to be very happy; people are definitely going to look at this and buy it if there's this sort of high level of user interaction there. Do you want to come over to slides? So now that I've given you the 10,000-foot view of what on-screen controls are, I'm just going to go over some of the details of actually implementing these, the sort of stuff that you'll need to know if you're planning on doing this in your own plug-ins.
So, one simple example of an on-screen control that you get for free: if you ever use a point parameter in your plug-ins, you'll actually see a small UI widget turn up on-screen that your users can drag around. So if the only thing you really need is a position that the user can interact with, then the point parameter is a perfectly fine way of doing this. But as I was demonstrating with the kaleidoscope, there are a lot more possibilities than that. You can draw anything you like into the canvas.
We really have tried to limit this as little as we can. Some details: you have a choice of drawing coordinates. You can draw in the object coordinate space; in the window coordinate space, where the origin is at the corner of the window; or even in the document coordinate space, where the origin is determined by the corner of the project that you're actually working in. That copes with stuff like zooming or panning as the user moves around, and we actually use all of these different coordinate spaces for different purposes within our own internal filters. One thing I should say is that all of our drawing for the on-screen controls is done through OpenGL. One of the essential things is to get the quality looking as good as possible, so use anti-aliasing; make sure that you enable it in the rendering code for your UI controls.
As far as user interaction goes, we have our own custom API that gives you mouse and keyboard events. And as Dave was saying about the set-parameter stuff, when you get those events, what you should be doing is calling setParameter on the parameters for your filter, and controlling your filter in that sort of way.
Now, selection is a little bit different. We use a GL-select-style approach: a way of drawing into the screen buffer where, instead of using textures, you actually draw in the IDs of the user interface elements that you're drawing. Then we take that screen buffer and read back the ID when we want to figure out which object, which part of your filter's UI, the user has just clicked on and selected. So you do the same thing as you would do to actually draw the UI, except instead of setting colors or textures, you just set an ID saying which part of the UI you're drawing at the current time.
Well, that's just been a quick skip over on-screen controls. Now I'm going to go into some of the really dark and dingy corners of the OpenGL rendering side. The first thing I want to talk about is pbuffers. What is a pbuffer? Pbuffer stands for pixel buffer; it's an OpenGL term.
The way to think about it is: if you ever need a temporary image to draw into, if you ever need intermediate results in your filter, if you ever need to do multi-pass rendering at all, then you're going to have to get to grips with pbuffers. You can do simple filters, such as a color correction that fits into a single fragment program, without using pbuffers; as I was saying in the introductory talk, you can just set up your fragment program and draw a quad. But for the majority of our more complex plug-ins, things like blurs, we end up having to render out intermediate results and then use those intermediate results in further stages. In those cases, you really need to be getting to grips with pbuffers.
Now, we have some example code that we give out covering pbuffer creation and deletion. I just want to go over some of the policy, some of the things you really should know if you're looking at using pbuffers. One of them is pbuffer creation and deletion. We highly recommend that you create your pbuffers once and then keep them around for that plug-in instance for as long as you can. Pbuffer creation is a very expensive operation: there's a whole bunch of system resources involved, and at the OpenGL driver level there's an awful lot of work that has to happen every time you create or delete a pbuffer.
So it makes a massive difference to your rendering performance if you can do that creation and deletion once and then just have them sitting there. Now, this gets a little more complicated when a filter is applied to something that's changing size, for example a particle system, because you really need to be creating pbuffers that fit the size that you've been given. Particle systems are really a pathological case, but it comes up in a lot of other situations as well where something's continuously changing size, and if you're not careful you end up recreating the pbuffers you're using within your filter every single frame. So one of the things we have been doing is using various strategies to over-allocate pbuffers: we allocate pbuffers larger than we need at the current time so that we're able to absorb some growth before we have to reallocate. That's been a big part of our optimization strategy, and it's made a real difference to our performance with the filters.
Now, when you actually come to do some rendering using pbuffers, the way it works is: you call some sort of pbuffer begin function, which redirects all of your OpenGL drawing into the pbuffer context; all of your rendering calls then get rendered into that pbuffer; and then you call some sort of pbuffer end, which returns you to the context you were in before. Once you've done that, you can call some sort of use function on the pbuffer, and that lets you use it exactly like a texture. There really is no difference from the user's point of view: when you're using it in code, you really can't tell it isn't a texture that you created by uploading data. You just get a texture ID, you bind it, and you do rendering with it. It's very straightforward to use.
Now, one of the tricky things, something that's very tempting to do and that really feels like a limitation when you're first starting to write filters using pbuffers, is access to the current pbuffer as you're drawing into it. You really can't read the pbuffer that you're drawing into while you're drawing into it. That means you can't use it as a source in your fragment program and do funky accumulation schemes, or any of the other things that you'd really like to be doing there. There are some limited ways you can use GL blending to do accumulation into the pbuffer you're drawing into, or to do simple compositing, but there really isn't a good way of accessing the current pbuffer you're drawing into. So the scheme that we use an awful lot is the ping-pong, as we call it, or double-buffer approach: we do some rendering into a buffer, and then when we need to reference that to do some further processing on it, we switch over to a second buffer and use the first pbuffer as a texture, as an input to the fragment program. With two buffers, you can pretty much do everything you want; you just switch back and forth for every stage.
So I've covered pixel buffers. Now what I want to talk about is some pixel shader stuff: some of the things I really want you to know about fragment programs, and some of the funky little details of using ARB fragment programs within FxPlug filters. As I said, fragment programs, pixel shaders: there are a lot of different terms for all of these things, but what they involve is calling assembler-like functions on every pixel that you're drawing through the OpenGL rendering engine.
We have some code up as part of our Xcode template that demonstrates how to create and delete these ARB fragment programs. It really isn't that much code. And as far as using them goes, if you're used to using textures for OpenGL rendering, it's very much the same syntax. All you do is bind the fragment program just before you're ready to draw your quad, and then when you call the rendering routines, the fragment program gets called on every pixel that you render using that quad. I also briefly want to talk about some of the possible alternatives. One thing you can do, if you have a filter that fits into the Core Image model, is use Core Image filters within your FxPlug filter, chaining them together. That's a very viable way of approaching this. Another thing we have looked at is using Cg, because ARB fragment program is a pretty low-level language, and it would be nice to have a more high-level way of writing our pixel shaders. We have had some success with Cg in our FxPlug filters, but we've also run into some problems with the Mac Cg implementation. So it's something to think about, but we really have turned back to ARB fragment programs as our main implementation language for all of the filters in Motion, just because it's a fairly stable, fairly well-understood system, and it also lets us do some of the stuff that we need to do to run on, for example, ATI cards, which happen to have a lot of restrictions on the sorts of programs they can run. It's a lot easier to deal with those restrictions at the assembler level rather than trying to figure them out from the compiled output of a more high-level language.
Another thing I'm going to talk about is the floating-point support within Motion and floating-point support in OpenGL in general. There is some floating-point support in Panther, but in general it's a Tiger-only feature. That's where all of the bugs have really been ironed out, and it really seems to be working well on all of the cards that Motion ships on, all the way from the NV34 up to the newer cards.
It doesn't actually make much of a difference to simple filters. Since ARB fragment programs run internally at floating-point depth anyway, as long as you're just doing a single-pass fragment shader, pixel shader plug-in, the conversion is kind of handled for you. It's handled outside: the fragment program pulls in the texture, and when it writes out your final result it converts down to whatever bit depth you happen to be rendering to.
It gets a little more complicated once you start doing multi-pass rendering, because, for example, you need to actually look at the depth that we're asking you to render at in the render info and pass that in when you create your own pbuffers, so that you don't do any clamping and don't lose any precision during your multi-pass rendering. And once you're at that stage, you should also know that there are quite a few limitations on the rendering operations you can do at floating-point depth on current cards.
You really can't do blending, and you can't do bilinear filtering on floating-point textures. Well, that's not actually quite true: with 16-bit float on one particular card, the NVIDIA 6800, you can do both blending and bilinear filtering, but that is really the exception to the rule. If you're using blending or bilinear filtering as part of your plug-in, you have to have alternative code paths that emulate them if you're going to be running at floating-point bit depths. But otherwise, if you're not doing any multi-pass stuff, in our experience switching over to float for Motion 2 was so much easier than trying to do it in a normal CPU software-based approach. Since our ARB fragment programs were already running at floating-point precision internally, that made things so much easier.
Here's another question that we get asked an awful lot: "OK, I want to do some 3D rendering. Where's my Z-buffer?" When we call an FxPlug plug-in, we call you in a context that doesn't actually have a depth buffer attached. That's a conscious decision on our part, because most plug-ins don't actually need the depth buffer, and every time you use a depth buffer it takes up VRAM and system resources. So if we can avoid rendering with a depth buffer, that really does speed things up a fair bit. So what do you do if you actually need a Z-buffer, if you're doing some sort of 3D rendering within your plug-in? Well, you just create a pbuffer internal to your plug-in, render into that with a Z-buffer attached to that pbuffer, and then take the result from that pbuffer and use it as a texture to draw into the context that we give you. Now, one thing I should mention: especially in this space, if you're doing 3D and using a Z-buffer to handle intersections, that tends to show up some fairly nasty artifacts. People in the motion graphics industry really don't expect to see the sort of jagginess that you end up with using just the standard Z-buffer implementation. So some sort of anti-aliasing, some sort of improvement to the quality there, is highly recommended.
Another thing to be aware of (hopefully we take care of most of this for you, but just in case you ever run into this situation): I want to go over some of the limits on the texture and pbuffer sizes that you can have. On current ATI cards, you can go up to 2048 by 2048 for a single texture or for a pbuffer that you're rendering into. The same is true on NV34 cards, the 64-megabyte NVIDIA cards that are out there. On all other NVIDIA cards, you can actually go up to 4096 by 4096.
That's the situation for 8-bit, and it's just a hardware limitation of the cards. Now, for float, it gets a little more complicated, because you start to run into the VRAM limits on the card: you can only fit so many pbuffers and so many textures on the card at the same time. And it gets kind of variable, depending on things like whether people have two monitors attached; that actually splits the VRAM in half and gives half to each monitor. So there's a whole bunch of calculation we end up having to do to figure out, OK, what's the maximum size that we can have for textures and pbuffers at higher bit depths? Now, as I said, we try to handle this for you. We will always clamp our input and output sizes to be legal sizes.
We do a lot of work to make sure that we never call any plug-ins asking for input or output sizes that they can't deliver, or give them input textures that are illegal sizes. The place where this gets tricky is, as I was saying on the Z-buffer side, trying to do multi-sampling or anti-aliasing to improve quality. Very often what you want to do is create a pbuffer that's larger than your input, so you can render into it and then use it as a multi-sample buffer to improve the quality. But it gets very tricky: you really have to be careful that you're not running into the size limits I've just been talking about when you're doing that sort of allocation of pbuffers larger than the input we're giving you. We don't have a magic solution for this, but we want to make sure that if you're seeing corruption, if you're seeing issues and you are allocating pbuffers larger than the input or the output, this is something you talk to us about and something you're aware of.
OK, well, I'll just pass back to Dave.
[Dave Howell]
I wanted to point out one other thing. Because FxPlug plug-ins are CFBundles, NSBundles, and they have Resources folders, you can use the standard CI filters built into Core Image (all the regular Core Image filters should work), but you can also make your own image units, and they can sit inside your Resources folder. You can load them when needed. There's a method where you give a partial path name or file name, and you use the standard bundle method for getting the file with that path out of your Resources folder. And you can use the type of image unit that has actual Objective-C code in it, or you can do the script-only image units; those will work great, too.
Another thing I want to mention is Intel-based Macs and this whole transition. Of course, we don't have the SDK revved yet, and you wouldn't be able to use it anyway until Motion is running on Intel-based hardware, on the transition kit. But it will come, and we'll let you know what the strategy is; just keep watching the website. So there is a list of links on the developer WWDC page.
And it has, as I said in the session yesterday, the place to go to download the FxPlug SDK. You can download it, look at the headers, check out the Xcode templates and the examples, and read the documentation and all that. But to actually run and execute, the only host app, of course, is Motion, and only Motion 2.
There is a 30-day trial version of Motion available, too, so if you want to just check out the SDK for a while, you can do that. We'll take questions at the microphone there, but there's also a lunch afterwards at 12:45, upstairs in the Pro Audio and Pro Video connection room. So grab a lunch and come on up, and we'll talk more. And if you want to contact us, we have a mailing list: that's proappsdk at group.apple.com. It's a list of maybe a dozen people at Apple who are working on FxPlug, and we'll be glad to answer questions.