General • 54:56
This session focuses on the different plug-in models and APIs available to extend Apple's suite of professional applications. By using plug-ins, developers can extend existing functionality and/or add entirely new features directly to the application environment. Topics discussed include the development of audio processing plug-ins using the Mac OS X Core Audio Units API, adding project data and workflow processing tools using the Apple Pro Plug-in API, and the development of custom image/video processing tools using the After Effects plug-in SDK and FXScript for Apple's Professional Digital Production Applications: Final Cut Pro, DVD Studio Pro, Shake, and Logic.
Speakers: Brett Halle, Roger Powell, Angus Taggart, Donald Liu, Avi Cieplinski, David Black
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
I'm Brett Halle. I'm the Director of Pro Video Engineering. We're going to spend a few minutes this afternoon and talk about how you're going to be extending the Pro applications with plug-ins. This is going to be a pretty busy session. We've got a lot of different plug-ins we want to cover, so we'll get right to it. Very quickly, just so you have a bit of a context, we have a number of applications in our application suite or Pro application suite.
Final Cut 4, which is a whopping, gee, I think it's almost two weeks old, launched a week ago last Saturday. And Shake 3, which launched last weekend. Final Cut Express, which came out earlier this year. And Logic, which is an acquisition that we did last year. We're going to cover a number of different plug-in models that are supported by all these applications. And to kick us off and to talk about audio units, I'd like to invite Roger Powell, the Lead Engineer for Final Cut Audio.
Thanks very much for coming today. We're going to start off this session by talking about audio units and how they're deployed in these three Apple Pro Media applications: Final Cut Pro, Soundtrack, and Logic. We're not going to teach you how to write an audio unit today. For that, you should go to the developer site, developer.apple.com/audio, if you're new to this technology.
Each of these applications has individual usage scenarios and requirements for audio units, and we thought it would be beneficial to you if we could go over those points so that you could effectively deploy your audio units in any or all of these applications. These points that we're going to cover fall into three general categories.
The first is the user interface, the parameter user interface that's supported, either custom or generic. The second is the input and output channel configurations that your plug-in should support in order to work in these applications. And then there are some issues with audio unit property implementations. Let's start off with Final Cut Pro.
[Transcript missing]
Here, on the left, you can see the controls that we've generated by reading your parameters. And then on the right is what we call the keyframe interface. These keyframes can be produced either by recording gestures during playback while you're using the controls on the left, or they can be edited manually with the standard editing procedures.
I'll go back for a second. For channel configurations in Final Cut Pro, at this time, because of our clip-based and track bussing architecture, we support mono in and mono out. You can think of this more or less as an insert-effect type of model rather than as a send/return type of model.
It's okay if you support other configurations in your plug-in, but they must support mono in and mono out in order for them to be loaded into Final Cut Pro. At this time, we're not supporting the Audio Unit preset mechanism. However, users can make changes to the plug-in and then save a version of that using our Favorites mechanism.
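As a rough illustration, here is a minimal sketch of how an effect built on the C++ helper classes from Apple's Core Audio SDK might advertise its channel configurations. The base-class name and override signature follow that SDK's conventions, but treat the details as assumptions rather than a definitive recipe.

```cpp
// Sketch: advertising supported channel configurations from an Audio Unit
// effect built on the Core Audio SDK's AUEffectBase helper class (assumed
// here; adapt to whatever framework your plug-in actually uses).
#include "AUEffectBase.h"   // from Apple's Core Audio SDK sources

class MyEffect : public AUEffectBase {
public:
    explicit MyEffect(AudioUnit component) : AUEffectBase(component) {}

    // Report every channel configuration the unit can run in.
    // {1,1} (mono in/out) is required to load in Final Cut Pro,
    // {2,2} (stereo in/out) is required for Soundtrack, and Logic
    // additionally accepts {1,2} (mono to stereo), as discussed
    // later in this session.
    virtual UInt32 SupportedNumChannels(const AUChannelInfo **outInfo)
    {
        static const AUChannelInfo kConfigs[] = { {1, 1}, {1, 2}, {2, 2} };
        if (outInfo)
            *outInfo = kConfigs;
        return sizeof(kConfigs) / sizeof(kConfigs[0]);
    }
};
```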
Let's move ahead to another important item for Final Cut Pro. This is an audio unit property called Tail Time, and it's very important to Final Cut Pro. We sort of expect that you're going to implement this or at least understand the issues surrounding it. Tail Time can be thought of as the decay of a plug-in.
A good example would be a reverb. A reverb, obviously, you're going to set to, you know, a two-second decay, and so when a signal hits it, it's going to decay out for about two seconds. This actually indicates the length of the sample history to reach steady state. This is how we interpret this.
We use this for a pre-roll computation so that we can maintain sample-accurate seaming across edits. The example of this, again, is the reverb. If we place the reverb on a clip in the timeline, and then we render the first half of that clip, we then go back and start playing that clip.
The first half of the clip will play from the render file, which has the computed reverb effect, and then suddenly when we hit the crossover, normally we would start feeding source samples again to the reverb. Well, the reverb has to get primed up again, so this would produce an inconsistent sample stream across that crossover point.
In order to avoid this, we query the tail time property, see how many samples we have to pre-compute to reach a steady-state point at the crossover, and then when you play back, we've pre-computed that, and we start up with the samples that are current at the crossover point. So this essentially eliminates the seaming artifacts.
We do expect this to be implemented; however, there are cases where you should actually report not implemented. One example of this would be if you have an infinite tail time on a reverb, which would last forever. Clearly, at that point, we'll have to render the entire clip in order to do that, because we can't pre-roll forever.
Or if the state can be indeterminate, and this is a little interesting, but an indeterminate plug-in would be something like where there was an internal modulation from a low frequency oscillator that was not synchronous. Or if there's some other random element in the plug-in which would cause its state to be indeterminate at a given sample position. These types of plug-ins should report not implemented, and there's some gory details associated with how you actually handle that, which I won't go into right now.
But we do have experts from all of the applications up here, so you can grab one of us later, and we can give you more detail on that. Also, the latency property, which is somewhat similar in usage: if your plug-in has a latency, that is, a delay between when input goes in and when samples come out, we need to know that as well in order to produce sample-accurate seaming.
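To make those hosting requirements concrete, here is a minimal sketch of how a plug-in built on the Core Audio SDK's AUEffectBase class might report these two properties; the override names follow that SDK's conventions, but treat the specifics as assumptions rather than the one true implementation.

```cpp
// Sketch: reporting tail time and latency from an effect built on the
// Core Audio SDK's AUEffectBase (assumed base class). Final Cut Pro reads
// these to pre-roll enough samples for sample-accurate seaming across edits.
#include "AUEffectBase.h"

class MyReverb : public AUEffectBase {
public:
    explicit MyReverb(AudioUnit component) : AUEffectBase(component) {}

    // Tell hosts this unit has a tail at all.
    virtual bool SupportsTail() { return true; }

    // Decay length, in seconds, for the current settings. If the tail is
    // effectively infinite, or the unit's state is indeterminate (e.g. a
    // free-running LFO or some random element), report the tail-time
    // property as not implemented instead of returning a value here.
    virtual Float64 GetTailTime() { return 2.0; /* e.g. a 2-second decay */ }

    // Processing delay between input and output, in seconds.
    virtual Float64 GetLatency() { return 0.0; }
};
```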
That's it for Final Cut Pro. Let's go on to Soundtrack. Once again, Soundtrack, and this is a theme that you'll hear throughout all of this, we want you to produce complete and accurate parameter descriptions, type and range. This is so that automation can be handled accurately. Soundtrack actually supports the Audio Unit custom UI and presets, and in order to do that, you would click on the Advanced button in the Effect panel. I have a screenshot of that as well.
On the left, you'll see the generic view for the preset, or for the plug-in, and you can see the controls listed and the generic controls, the names and the controls listed there. If you hit the Advanced button, you'll get a screen like what's on the right there, where it gets overlaid with the custom UI that's been embedded in the plug-in.
The channel configuration for Soundtrack is stereo in and stereo out only. Again, you can put other channel configurations in your plug-in, but they must support stereo in, stereo out in order to be used within Soundtrack. Soundtrack also uses the tail time property, but it uses it in a little bit different way. It's only required for audio units that would have an audible tail, again, something like a reverb. And the property value must be accurate to avoid artifacts.
Otherwise, there will be some truncation of output. The way this is used: if you have a reverb on a clip and that track ends, you don't want the reverb to cut off when the source signal cuts off. So Soundtrack will query that property and allow playback to continue for the length of time required to process all of the effect samples.
Let's move on to Logic. In Logic, the reference hosts are Logic Platinum 5.5.1 or higher and Logic Platinum 6.0.1 or higher. This is where the Audio Unit Support was first introduced. You should put your resource allocations into the initialization part of the plug-in rather than the instantiation. This is because Logic will instantiate one copy of each plug-in on startup, and so you don't want to incur startup penalties.
The custom UI is supported, and the generic parameter UI is also supported. Parameter descriptions must be complete and accurate. We'll take a quick look at the Logic Audio Unit views. The generic views are on the left and the top right, and the custom UI is displayed in the bottom right.
For Logic, the channel configurations that are supported are mono to mono, mono to stereo, and stereo to stereo audio channel configurations. So this is a little more flexible than the other two. Each of these applications has a different purpose in life and has different support for these things.
There are a couple of other points for Logic. Logic will use the standard .aupreset mechanism for saving and loading presets. It also pays attention to the latency and tail time properties. And then the last two points are regarding automation. Your plug-in's view should send the AudioUnitCarbonView events (mouse down in control and mouse up in control) to the host. The reason for this is mostly to support automation recording for latch mode. And likewise, for automation support, your plug-in should use the parameter listener scheme to inform the host of parameter changes.
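As a rough sketch of that last point, the parameter-listener utilities in AudioToolbox (AudioUnitUtilities.h) can be used from a custom view to tell the host about value changes, with the AudioUnitCarbonView mouse-down/up events bracketing the gesture. This is one plausible way to wire it up, assuming the view already knows the AudioUnit and parameter ID involved.

```cpp
// Sketch: notifying the host (e.g. Logic) of a parameter change made from a
// custom view, using the parameter-listener utilities in AudioToolbox.
#include <AudioToolbox/AudioUnitUtilities.h>

static void SendParameterChange(AudioUnit unit,
                                AudioUnitParameterID paramID,
                                AudioUnitParameterValue newValue)
{
    AudioUnitParameter param;
    param.mAudioUnit   = unit;
    param.mParameterID = paramID;
    param.mScope       = kAudioUnitScope_Global;
    param.mElement     = 0;

    // AUParameterSet both sets the value on the unit and notifies any
    // registered listeners (including the host) of the change. If the value
    // were already set by other means, AUParameterListenerNotify(NULL, NULL,
    // &param) would send the notification alone.
    AUParameterSet(NULL, NULL, &param, newValue, 0);

    // For latch-mode automation, the view would also bracket the gesture
    // with kAudioUnitCarbonViewEvent_MouseDownInControl and
    // kAudioUnitCarbonViewEvent_MouseUpInControl.
}
```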
That basically covers the points that we wanted to make for these three applications. We know this has probably generated some questions, and we'll all be available for Q&A at the end of the session. At this point, I would like to hand things over to Angus Taggart. He's from Shake, and he's going to talk about Shake plug-in development. Thank you, Roger.
Thanks again for joining us this afternoon. As Roger said, I'm going to be presenting an introduction to the Shake plug-in architecture. Our goal is to give a brief introduction to Shake itself, to try to give you a feeling for what the application does. We're also going to cover what you get with the Shake SDK and also look at some basic concepts that are involved with building a Shake plug-in.
So let's start out with an introduction to Shake. Shake has very rapidly established itself as an industry-leading compositing and 2D effects solution, an invaluable tool in many major post-production houses. So what does Shake do? It does things such as color correction, filters, grain removal, blurs, all kinds of filtering.
It ships with two of the industry-leading color keyers: Photron's Primatte keyer and Framestore CFC's Keylight keyer. It has tools for doing tracking, masking, rotoscoping, paint, and also retiming controls. And for those that are interested, there's a huge list of Shake's capabilities up on the Apple website.
So how does Shake work? A very brief introduction: under the hood, there is an advanced, very efficient node-based compositing engine. The highlights are that the primary processing within Shake occurs within nodes, and that nodes are connected and data gets in and out of nodes through plugs.
And somebody who's using Shake will actually create very complex compositing trees and effects trees by connecting nodes up. I've got a simple example where you can see there's a foreground and a background layer being composited over each other, and you can see the node representation of this operation in Shake.
Something that's very important to plug-in developers is who uses Shake? Who's my audience going to be when I build a plug-in? Shake is used at a lot of the top post-production facilities. The list includes Weta, the New Zealand-based company that's doing the Lord of the Rings work; Escape, based close by here in Alameda, who's doing The Matrix; Cinesite, DreamWorks, Blue Sky.
It's pretty much the A-list of post-production houses and a lot of others. To its credit, it has been used in the production pipeline for the last six Academy Award-winning movies. It's becoming more commonly used in film schools. With Apple's aggressive pricing and positioning, it's being used more and more for commercial work and that sort of thing.
Who uses the Shake SDK? In terms of third-party commercial plug-in developers, we've got fairly strong industry support. The Foundry develops for Shake with Tinder and also with their Furnace plug-in suite. Also, GenArts with their Sapphire plug-in suite supports Shake, and we've got support from Ultimatte and from RE:Vision Effects' warping and morphing tools. Another very big user of the Shake SDK is actually the Shake customers, the large post-production houses that have to integrate Shake into their production pipelines and create custom effects within Shake to get the kind of creative results that they need.
So, you're a plug-in developer, you're ready to start working with Shake. What you'll need to do is actually request the Shake SDK package through Apple Developer Relations. And when you get a hold of that package, you'll see that it not only supports the Mac platform, but it also provides support for the IRIX platform or SGI platform as well as Linux.
One of the things that we've put a lot of effort into over the last year is really enhancing the Shake docs. We've got great tutorials, really good reference guides, and some white papers to give you technical overviews of Shake's Node engine. It also ships with 18 example plug-ins for doing everything from image processing filters to custom overlays and custom widgets. There are a lot of examples to help you get started. And we've got an example Project Builder development environment.
The last point on this slide, and this is something that is both very powerful and it's something that also requires a little bit of a learning curve, is that there isn't a layer that Shake provides for its plug-in development environment. Basically, when you start working with Shake, you're using the same headers and frameworks that the Shake internal developers use. And this means that you get the same access to the Node engine that the Shake internal development team has.
And that's a very cool thing. It's a very powerful thing. It means that you can do just about anything with Shake that you want. It also means that you need to spend a little bit of time making yourself familiar with some of the concepts of how the Shake Node engine works. And so what I'm going to do really quickly in the final few slides is just hit on some very basic concepts involved with Shake plug-in development.
As we said earlier, the building blocks of Shake's Node engine are just that: nodes and plugs. And in review, a node is where processing occurs. This is where you're going to put your logic for image processing or whatever you're doing.
It's going to be embedded in the node. And the node is a C++ class. And there's a base class defined that has a lot of functionality that you'll derive from. And when you want to get data into your node or data out of your node, you're going to do that through a plug. That's the way to get -- that's the way the data moves through the Shake Node engine.
Once again, it's a C++ object. One concept with plugs is that they don't exist on their own; they're always owned by a node. So within a node, you'll add a plug that provides a mechanism to get data either in or out of your node. And it has a base class as well that provides a lot of functionality.
So what are the mechanics of the Shake Node engine? How do you actually produce your result for Shake? Well, there's a concept called lazy evaluation. Shake really only cares about your output plugs, what you can produce. It's not going to ask you for any information about your input plugs. It's only ever going to, when it comes time to render and evaluate, it's going to call your node and it's going to ask for information about your output plugs.
And so at that point, once a request comes in for a value out of one of your output plugs, what you'll be doing is pulling information in from your input plugs, whether that's input parameters that users define or data actually coming in from a node that you're connected to.
So you'll be pulling on a plug, getting data out of it, processing it, and placing that value in an output plug. And then Shake then takes that data and moves it down to the next node or it might actually take -- if it's image data, it'll maybe even put it up in a render view.
Let's take a look at the code structure that supports this calling mechanism. As I said before, a Shake plug-in is basically a node. It's a C++ class. In this case I've got it deriving from the NRiNode base class, and there are three primary methods that you will be working with extensively.
First of all, even though it's probably not considered best practice in C++, Shake plug-ins do a heck of a lot in their constructors. So essentially, when your constructor is called in Shake, you're going to be adding all the input plugs that you need; any wiring that you need inside of your node, any setup that you need, is going to occur in your constructor.
So you'll end up doing quite a bit of work there. Another key routine is your eval routine. This is where Shake calls for the value of your output plugs. For any output plug that you've registered and that Shake wants a value for, it's going to call that eval routine with a pointer to the output plug that it's interested in. So you'll look at which output plug it's asking for and do whatever you need to do to compute an updated value for Shake.
And finally, there's a virtual notify method. You can actually register with Shake to be called when one of your input plugs is somehow modified. And so Shake will call you, and you might want to do something upon that notification, either some kind of processing or changing the structure of your node.
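To pull those three pieces together, here is a purely schematic sketch of a node following that pattern. The class and method names below (NRiNode, NRiPlug, addPlug, asFloat, and so on) are placeholders that mirror the structure described in the talk, not verified signatures from the actual Shake SDK headers; check the shipped examples and base classes for the real API.

```cpp
// Schematic sketch only: every identifier below is a placeholder following
// the constructor / eval / notify pattern described above, not the real
// Shake SDK API. Consult the SDK headers and example plug-ins for the
// actual class names and signatures.
class MyGainNode : public NRiNode {          // hypothetical base class name
public:
    MyGainNode()
    {
        // Constructor: declare the plugs this node owns.
        gainPlug = addPlug("gain");          // user-visible parameter
        inPlug   = addPlug("input");         // image data from upstream
        outPlug  = addPlug("output");        // what Shake will pull on
    }

    // Lazy evaluation: Shake asks for an output plug's value; we pull on
    // our input plugs, compute, and deposit the result in the output plug.
    virtual void eval(NRiPlug *out)
    {
        if (out == outPlug) {
            float gain = gainPlug->asFloat();   // pull a parameter
            // ...pull the input image, apply the gain, and set the result
            // on outPlug so Shake can pass it downstream.
        }
    }

    // Called when a registered input plug changes; a node might invalidate
    // caches or restructure itself here.
    virtual void notify(NRiPlug *changed)
    {
        if (changed == gainPlug) { /* e.g. mark cached results dirty */ }
    }

private:
    NRiPlug *gainPlug, *inPlug, *outPlug;
};
```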
This is the final slide, and it covers something that's very important for Shake plug-in developers. I do quite a bit of Shake SDK support, and one of the things that we find is that Shake provides quite a rich set of base classes that you can start from. We talked about the NRiNode base class a couple of slides back, but as a plug-in developer, you will rarely derive from that base class directly. There are a number of base classes depending upon the operation that you want to do, whether it's a filter that takes one, two, or N input images, or whether you're building a custom widget or an overlay.
That's something that we really like to emphasize with plug-in developers is become acquainted with the base classes that are provided by Shake. You can save yourself a huge amount of work. There's a lot of functionality built into those base classes. That wraps up my presentation, and at this point, I'd like to introduce Donald Liu.
How are you doing? Today I'll be showing an overview of FXScript and a couple of demos at the end. So, what is FXScript? FXScript is a video scripting language used to create video effects for Final Cut Pro.
[Transcript missing]
Since it's built in, it uses the same rendering engine as Final Cut Pro. There are over 150 effects in Final Cut Pro which are written in FXScript. These are some of the features available in FXScript: variable types, multi-dimensional arrays of up to five dimensions, built-in functions, standard input controls, and loops and branches.
Subroutines support recursion up to 12 levels. There are three types of effects in Final Cut Pro that you can create using FXScript: filters, for a single video stream; transitions, for two video streams; and generators, a special type of clip, such as text or particle generators.
Final Cut Pro has a built-in tool, FXBuilder, in which you can edit and preview your script. There are three windows: you can edit your script in the text entry window, you can run and see your effect in the preview window, and you can adjust your parameters in the input controls window.
There are several ways of saving your script in FXBuilder. You can directly add your effect to your project. Or you can make a favorite effect, which is saved in your preference file. Or you can save it as a text file. You can save it as a plug-in, or you can save it as an encrypted plug-in; once you encrypt your script, you cannot view the script again. All the effects that ship with Final Cut Pro are saved as text files, so you can view them in FXBuilder or in any other text editor.
This is an example of FXScript. There are two sections, the header and the body. In the header, you define the type and the name of the effect. You can also define the input parameters. The main body starts with the keyword "code." So who is this for? Final Cut Pro developers, post-production effects houses, and advanced users. And now I'm ready to show you a demo.
Okay. This is one of the filters I wrote. Let me turn this up in here. So this is a rock. If I turn off this filter, I can see an object in front of the car. So I want to remove that object. This filter removes that object. And let's open that up in FXBuilder. So I'll be customizing this in just a few minutes for this sequence right here. So this sequence has a diagonal object. And I'll be removing that object by customizing this effect. First of all, I'll be adding an input parameter.
[Transcript missing]
The Rotate function takes the polygon, the center point, the amount of rotation, and the aspect ratio. Let's just run this to make sure there's no syntax error. So you can see right here, there's a new angle control right here.
[Transcript missing]
Then, offset the timer. And let's just adjust a little bit the source. Just get rid of the overlay. Soften the edges. There you go.
In my second demo, I'll be showing how to make a call to a built-in filter. So this is a generator which generates a random pattern. Let's just open this in FXBuilder. Let's just run this. It's just a random pattern. And I'd like to make a function call to one of the filters in Final Cut Pro. Let's just stop this for a second.
[Transcript missing]
So I'll be making a quick call to Find Edges on this pattern. So I'm going to temporarily save the image to a temporary buffer. Then I'm going to make the filter call, passing the name of the filter, which is Find Edges; then the source, which I copied to the buffer; then the destination; and then the frame, duration, and frame rate. So let's run that.
[Transcript missing]
And you don't even have to type or copy and paste the script of the Find Edges filter. This should eliminate such problems as variable name collisions, typos, and other problems.
And once you make this change, you can save it as a plug-in, but this time I'm going to use, where is it? Make Favorite Effect, right here. If you look in here, you'll see it saves it as a favorite effect and you can use it. So let's close that. And in my third demo, I'll be showing performance issues in FXBuilder. Let's open this. This is a particle generator. Let's open this up. Let's run this.
We have all these parameters you can control: gravity, initial velocity, speed, decay, the size of the particles, softness, you can change the color,
[Transcript missing]
[Transcript missing]
Now this is drawing 10 times 100, over 1,000 particles. This has some similar controls to the first one.
[Transcript missing]
So with all the additional calculation, you don't see any performance difference between these two right here. Okay, that concludes my demonstration of FXScript. And next, I think Avi will be up here to show you After Effects plug-ins. Thank you. Thank you. So my name's Avi Cieplinski, I'm one of the engineers on Final Cut Pro, and I'm here to talk about After Effects plug-in support inside Final Cut Pro.
So as a bit of an intro, I'm just gonna cover some of the basics, 'cause some of you may be aware of this, and some of you may not be. So what is After Effects? After Effects is Adobe's package for 2D and 3D compositing and effects for video and motion graphics. What are After Effects plugins? After Effects plugins are usually third-party developed modules that add some functionality to the host application.
These are typically things like effects, titlers, keyers, time remappers, but really there's a wide variety out there. Now, why does Final Cut Pro support the After Effects plugin? Well, it is a public SDK, so it's available for everyone to download and play with. And, when Final Cut Pro came onto the market and in the intervening time, the After Effects plugin has sort of developed into a de facto standard for, well, cross-application plugins. So it's actually supported by several other applications besides After Effects.
From Adobe, Premiere has support for them, but also Combustion and Commotion also have support for After Effects plugins. So for people developing them, and if they're interested, if you're developing them in a third-party setting, your audience is a lot wider than any one specific application standard, because it's embraced by several competing applications.
So in this brief presentation, I'm just going to cover some of what's of interest to us in Final Cut Pro. So that is what works inside Final Cut Pro, any limitations, and some Final Cut Pro specific API that we've added in the last release of Final Cut Pro.
[Transcript missing]
Okay, so some limitations to our implementation of the After Effects standard. In Final Cut Pro, we don't allow plug-in defined on-frame UI elements. So if you require the user to directly interact with the frame in order to place elements or to pick up various kinds of things, then you can't do it directly in the Final Cut Pro UI. Typically, if this is the kind of thing you need to do, you'll want to do your own UI on top, and you can pull the frame out and draw whatever you want on top of it and let the user interact in your own UI.
For After Effects plug-ins, we support only 8-bit RGB rendering, even though Final Cut Pro 4 has a brand-new 32-bit float rendering engine. The greater-than-8-bit format defined in After Effects is 16-bit integer, which is not something we have native support for in Final Cut Pro. Another key thing to keep in mind, especially as you make more elaborate plug-ins, is that we don't allow dynamic parameter lists.
Typically, this is where the list of parameters you have wants to depend on the state of a pop-up or something; we can't allow that in Final Cut Pro. All the parameters that your effect has need to be static and defined when the effect is initially read in. Basically, Final Cut Pro doesn't really have an analogous construct for changing the number of parameters, so we don't allow it for the After Effects plug-ins themselves.
All right, so a quick overview of some new API calls. We added some calls that are useful if you're trying to better integrate your plug-in into Final Cut Pro, and they deal with some of the differences between After Effects and Final Cut Pro. There are also a few that are particularly useful if you have your own custom UI that you're drawing and you want to interact with the application, and particularly if you'd like to send frames out to the currently enabled video out device.
So briefly, there are four new calls in Final Cut Pro 4. The first two are rather simple. They simply allow you to retrieve the current version number of the Final Cut Pro you're running under. So there's the major version number, things like 3, 4, and then the minor version number for the dot releases.
The third one is just a call to cause Final Cut Pro to redraw all of its UI. This is good if you've drawn custom UI on top that might have interfered with or drawn on top of any of it or otherwise damaged the UI. When your plug-in is completed running, you'll probably want to call this function to just make sure the UI is in a good state when the user comes back into Final Cut Pro. The last one is what's of particular interest, again, if you have custom UI. So it allows you to... force the viewer or canvas, the window that the user is seeing for the clip that they've applied your effect to, to move to a particular frame inside the clip.
What this means is that if you've got your own UI with maybe your own kind of timeline and you want the user to be able to scrub through the clip, you can call back into Final Cut Pro and have Final Cut Pro move to that same frame. And what that really gets you is not only showing it in the Final Cut Pro UI, but if there's a video device turned on and enabled.
Final Cut Pro will want to do the rendering for that new frame, which, since it'll call you if you're in the render stack, that'll let you push your frame with all your effects out to the video out device directly without having to worry about writing your own interface to do all that.
So in order to support those additional calls, what we've done is added a single function pointer to the PF utility callbacks, if anyone's familiar with those. That's basically a set of utility callback function pointers that the host application provides to every plug-in; in this case, the host is Final Cut Pro. If you call that function with a block of memory to get the private callbacks, we'll fill in those function pointers for you, and then you can just call them directly to do basically what I outlined before.
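Purely as an illustration of that calling pattern, here is a self-contained sketch. Every name in it (FCPPrivateCallbacks and its four function-pointer members) is a hypothetical placeholder for the actual declarations in the headers Apple provides; only the shape of the mechanism, one call that fills in a block of function pointers you then call directly, is taken from the description above.

```cpp
// Hypothetical sketch of the mechanism described above; none of these names
// come from the real SDK headers. A single entry point, reached through the
// host-provided utility callbacks, fills in a block of function pointers
// like this one, which the plug-in then calls directly.
struct FCPPrivateCallbacks {
    long (*GetMajorVersion)(void);       // e.g. 3, 4
    long (*GetMinorVersion)(void);       // dot releases
    void (*RedrawFinalCutProUI)(void);   // repair the UI after custom drawing
    void (*SetCurrentFrame)(long frame); // move the viewer/canvas to a frame
};

// Example use, assuming the host has already filled in the struct via the
// single function pointer Final Cut Pro adds to the utility callbacks.
static void ScrubToFrame(const FCPPrivateCallbacks &cb, long frame)
{
    if (cb.GetMajorVersion && cb.GetMajorVersion() >= 4) {
        // Moving the viewer/canvas also triggers a render, which pushes the
        // frame (with your effect applied) out to any enabled video device.
        cb.SetCurrentFrame(frame);
        cb.RedrawFinalCutProUI();        // leave the FCP UI in a good state
    }
}
```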
Okay, so quickly in summary, the After Effects API is a very useful way to add additional image processing to Final Cut Pro. This is particularly true for those in a third-party setting, because since it's a standard API that's supported by several other applications, it's a good opportunity to sell to a wider audience than just one application. But of course, keep in mind that we only support a subset of the full After Effects API. This subset is what we felt was most important and appropriate for Final Cut Pro, and what we felt you would need to interact best with the application.
But a good thing to keep in mind is that we really would welcome your feedback. If there are pieces of the SDK that we don't support, or that might have been added since we did our last revision, we'd be very interested to hear what you'd be looking for, to give us an idea of what the benefits might be, because if it looks good, we'd like to add any support we can in order to make your lives a little easier.
So we'd really encourage feedback, not only now, but later there'll be some email addresses we'll post at the end to give you a forum to provide that kind of feedback to us. Okay, well that is basically everything I wanted to talk about, so I'm going to welcome David Black up to talk about some future plug-in directions. Thank you.
Good afternoon. I'd like to spend a little bit of time talking about where the future lies with plug-ins and Apple's pro video applications. Plug-ins, as we've seen, are very important to us, and we want to take things to the next level and only increase the opportunities for developers going forward.
Really, our core direction comes down to trying to move to a unified plug-in model, where it makes sense, for Apple's professional applications. And it's important to note that this is not something that's going to supplant current operating system plug-in models like the Audio Unit specification. Really what it boils down to for us is having a common base architecture in place across different types of plug-ins and different applications, so that it's easier for us to support them and for you to develop them.
On top of that basic layer, we have function- or task-specific APIs depending on what you're doing. So there might be one set of APIs for an effect plug-in or one set of APIs for a data interchange plug-in. And really, again, the key point of all this is giving you a model where you can know that multiple applications will support it. Certainly we're looking to commit to supporting these across our applications and also to provide enough detail so that others can support it as needed.
Another key point is trying to build in support so that you'll have a choice of development tools to use for these plug-ins. Certainly a lot of models in the past have been very focused on C or very focused on C++. To as much of an extent as possible, we really want to give you that choice, because different tasks may require different tools and developers may be at different technical skill levels.
Going into a little bit more detail about what we're trying to go for: we're basing this model on Objective-C protocols. If you're familiar with Objective-C protocols, they're a very nice way to pass data and messages between objects without being forced to rely on a common base class. And we found that actually solved quite a few problems for us in the implementation phase. The files on disk are just stored as standard bundles. Nothing really new there.
Bundles are actually great because they put everything together, even defining multiple plug-ins within one object, at least as the user sees it. We will be supporting static and/or dynamic plug-in loading and registration. Your plug-in may have requirements that depend on what libraries are installed or what application is running, or it may be very simple and can just document that in a structure.
And services and data from the host application will be made available via objects and callbacks, so that you'll be able to make those calls. It won't just be a simple one-way model where an image buffer comes in, you do your thing, and an image buffer goes out; there is some bidirectional communication support in there.
But you might ask, what about other plug-in models? How does this relate to everything we've been doing and sort of new industry standards that may be coming out? Really, this is meant to be a superset of a lot of things. For us, it's very important to support what makes sense, but at the same time, providing sort of native engineering support for multiple plug-in models at the core application level is really kind of a lot of work. By building sort of a superset glue layer, we hope to support more plug-in APIs, not fewer, and also leverage those efforts across multiple applications.
And to this end, it actually is designed to support adapter or host plug-ins, so that you can have that indirection: you have a pro application, you have a pro application plug-in, and then an adapter layer that goes off to some other vendor's code. And we are certainly intending to provide those adapters for the most popular plug-in models and to support developers in providing more.
So you might ask, when will this be available? We're going to begin to roll this out later this year on top of Final Cut Pro. The first functional API we're going to implement is a data interchange model based around the XML data format that we discussed earlier this week.
In brief, we're essentially making the entire contents of a Final Cut Pro project available via XML to developers in a very clean manner. And this is really giving you a programmatic interface to define commands within the Final Cut Pro application environment that receive this data, perform necessary translations, and do something useful with it.
We'll be releasing a public beta of this in August; that's our current intention. If you watch the Final Cut Pro website, details will be up there. Also, you can get in contact with Developer Relations, and they'll make sure you're in the loop. And the intention is also to release this in final form to developers and users by the end of the year. Earlier this week, one of the demonstrations at the data interchange session was Automatic Duck, providing AAF import and export support for Final Cut Pro.
AAF being the Advanced Authoring Format, an industry-standard binary container format intended to take data between editing applications. That plug-in is currently sitting on top of a very early version of this plug-in specification, and it's basically a great proof of concept for the whole idea. So to now summarize the entire session here: all of our applications support plug-in models.
This is really very important to us. Logic and Soundtrack support Audio Units. Shake, of course, supports Shake plug-ins. And Final Cut Pro supports Audio Units, After Effects, and FXScript plug-ins at this time. Of course, we're hoping to make this list grow over time, both with your help and with new technologies internally.
And really, they're important, and they're not only important to Apple and important to you, but they're really important to end users. Really, no one tool can do it all, and providing plugins allows end users to get the tools they want, sort of unique tools that Apple may not develop to be integrated into that space, and it really gives the end users more control over their product in the end. And it also just helps advance technology. Certainly, we're not going to think of everything, and by at least opening that door, it's available to put new technologies and new workflows together just as they come up in the marketplace.
And really, we want more extensive support for plug-ins in the future. All the apps intend to support more plug-ins. We're also trying to move toward modern plug-in architectures. As mentioned earlier, this is not a replacement for existing models; it just opens the door wider to more models. And we also really want to be able to share plug-ins across multiple applications. We certainly have this today with audio.
We have Audio Units supported across Logic, Soundtrack, and Final Cut Pro, and this just adds more value for everyone; it's really important to us. And we want to add more and different types of APIs. Again, the data interchange API is an example of this. Certainly, it's very simple to provide import and export functionality with this plug-in API, but it's also there for integrated tools for data management and workflow purposes.
And really, I can't emphasize this enough, that really input from you is very important to us. We've certainly done all our research. We have our own ideas. But we really need to know from you what we're missing, what we're not doing. So please let us know what sort of APIs you want, what are the development tools you prefer to work in? Certainly, Code Warrior and Project Builder, now Xcode, are very popular.
But how much of a difference would it make to be running from a Java IDE or even from AppleScript? And are there things that you want to do that you just don't know how to fit into the current framework? Certainly, we might be able to suggest approaches with current APIs, or use that feedback to generate new APIs in the future that are just going to open up the platform even more. At this point, I'd like to invite up Brett Halle to do the wrap-up and Q&A.
Thanks very much. This week has been kind of an applications introduction to WWDC. We've worked this week to have a number of different sessions available to you, to show that we really would like to see developers get more involved in our professional applications, be it with plug-ins, be it cards and various types of hardware devices, be it content creation.
The plug-in session here today is intended to be yet another way that you can participate in our applications and to provide new products to our collective customers and to make this a great platform. If you have questions about any of these things, we strongly encourage you, one, please send us feedback. We do have a feedback box. We have a feedback address, [email protected]. And our industry evangelist for professional development and video is Jeff Lowe, and he's the person for you to get in touch with should you have more questions or interest in discussing opportunities in this space.