WWDC10 • Session 407

Editing Media with AV Foundation

Graphics & Media • iOS • 47:31

The AV Foundation framework gives you a new level of power and control over iPhone OS media. New classes introduced in iOS 4 extend your ability to work with movies as well as audio. Discover how you can edit media files for outstanding results.

Speaker: Eric Lee

Unlisted on Apple Developer site

Downloads from Apple

HD Video (178.4 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

[Eric Lee]

My name is Eric Lee, and I work on media technologies. Today, I have the pleasure of presenting to you the new editing APIs in AV Foundation. In iOS 4, we've added a powerful new set of APIs so that you can incorporate editing in your application. These are the same APIs that were used to build iMovie for iPhone, as you saw in yesterday's keynote. Throughout the session, I'll be giving a series of demos. The applications I'll be using are AVPlayerDemo and AVEditDemo.

The source code for these applications is available for you to download on the attendee website. So I encourage you, if you haven't already, to go ahead and download these applications. Let's quickly recap where AV Foundation sits in the context of other iPhone OS frameworks. Many of you may already be familiar with the Media Player and UIKit frameworks. AV Foundation sits below these frameworks and builds upon the low-level services provided by Core Audio, Core Media and Core Animation.

Simple editing features have been available to you as a user since iPhone OS 3. When you open up a movie in the Camera Roll, you'll see a timeline slider with thumbnails. These thumbnails show you snapshots of the movie at various times. We also provide a UI for trimming. Trimming allows you to select a region of the movie that you find interesting and save it to a new file.

While these features were available to you as a user, we didn't provide a way for you to incorporate this functionality in your apps. Well, with iOS 4, using AV Foundation, we're giving you this functionality. Of course, the editing APIs in AV Foundation go well beyond just allowing you to create an image for a time and trim a movie to a time range. And in this session, I'll be going through a series of scenarios showing you the kinds of things you can do with the editing APIs in AV Foundation.

I'll show you how you can combine clips from multiple movies. I'll show you how you can mix in additional audio tracks. I'll also show you how you can spice up your movie by using video transitions such as cross-fades. And I'll show you how you can incorporate some amazing graphics and animation using Core Animation into your movie. But before I dive into these topics, let me quickly reintroduce Core Media. Core Media is a new C-based framework that we're introducing in iOS 4.

Core Media provides some basic data types and data structures that AV Foundation uses for working with media. One that is particularly important, and that we will be seeing throughout the rest of this session, is CMTime. CMTime is a C struct for representing time, much like CGSize and CGRect are structs for representing geometry.

CMTime represents times as a rational number with a 64-bit numerator, the time value, and a 32-bit denominator known as the timescale. We like rational numbers because they allow us to preserve precision when we do calculations. We provide constants for interesting times such as zero, positive infinity, negative infinity and invalid. We provide utilities for you to add and compare times.
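
Working with CMTime in code looks roughly like the following; this is a minimal sketch to accompany the description above, and the particular values are illustrative, not from the slides.

    #import <CoreMedia/CoreMedia.h>

    // 2.5 seconds expressed at a timescale of 600: value/timescale = 1500/600.
    CMTime t1 = CMTimeMake(1500, 600);
    CMTime t2 = CMTimeMakeWithSeconds(1.0, 600);  // 1 second at the same timescale

    CMTime sum = CMTimeAdd(t1, t2);               // 3.5 seconds
    if (CMTimeCompare(sum, kCMTimeZero) > 0) {
        // sum is greater than the kCMTimeZero constant mentioned above
    }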

We also provide a data type for time ranges. A time range consists of a start time and a duration, both of which are CMTime values. And we also provide a data type for mapping one time range to another time range. But enough of Core Media. Let's talk about AV Foundation. In the last session, Discovering AV Foundation, Kevin introduced AVAsset as an abstract model for working with assets, and AVURLAsset, which allows you to represent movies in files.

An AVAsset consists of one or more AVAssetTracks, and each AVAssetTrack corresponds to a track, whether video or audio, in your movie file. And to quickly recap, once you have an AVAsset, you can play it using AVPlayerItem. So now, what other kinds of things can we do with an AVAsset? Well, one of the things we're going to want to do is to create an image for a time. And let me quickly switch to the demo and show you what I mean.

So here I have my demo. I'm going to launch AVPlayerDemo here. AVPlayerDemo, which you may remember seeing in yesterday's Graphics and Media State of the Union, is an app written using AV Foundation. In the center of the display, you see your movie, and at the bottom are a series of scrolling thumbnails. So when I hit Play, you'll notice that the thumbnails slide along synchronously with the movie playback.

So you'll notice as I scrub through the movie, these thumbnails continue to show what you currently see in the movie. All right. Let's go back to the slides and take a look at how these images were generated. Here is a short code snippet showing you how you can use AVAssetImageGenerator to extract images from a movie. You start by creating an asset image generator, and you pass in the AVAsset that you want to extract images from.

Then you can go ahead and start requesting images asynchronously using the generateCGImagesAsynchronouslyForTimes: API. When you call this method, you will pass in an array of movie times that you want images for, and you will also pass in a handler block, which will be called when an image is ready for you to consume.

If you're not familiar with blocks, think of a block as a callback function with some extra context. Like many other AV Foundation APIs, this method is asynchronous. This means that your handler block won't be called until sometime after this method returns. So you should make sure that you retain your image generator object until you've received all of your images.

This is an example of what a typical handler block that you pass into the AVAssetImageGenerator will look like. This handler block will be called for each image request that you make, and it's important that you check the result. The result can take on one of three values. In most cases, of course, it will succeed, and you can go ahead and use the image object passed to you through the handler block. In certain cases, however, the image generation can fail, and you can retrieve more information about the failure using the error object.
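
The slide code itself is not captured in the transcript, but a minimal sketch consistent with the description looks like this; the `asset` variable and the requested time are illustrative.

    #import <AVFoundation/AVFoundation.h>

    AVAssetImageGenerator *generator =
        [[AVAssetImageGenerator alloc] initWithAsset:asset];

    NSArray *times = [NSArray arrayWithObject:
        [NSValue valueWithCMTime:CMTimeMakeWithSeconds(30.0, 600)]];

    [generator generateCGImagesAsynchronouslyForTimes:times
                                     completionHandler:^(CMTime requestedTime,
                                                         CGImageRef image,
                                                         CMTime actualTime,
                                                         AVAssetImageGeneratorResult result,
                                                         NSError *error) {
        if (result == AVAssetImageGeneratorSucceeded) {
            // Use the CGImage, for example hand it off to the UI.
        } else if (result == AVAssetImageGeneratorFailed) {
            NSLog(@"Image generation failed: %@", error);
        } else if (result == AVAssetImageGeneratorCancelled) {
            // The request was cancelled via -cancelAllCGImageGeneration.
        }
    }];
    // Keep 'generator' retained until every handler has been called.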

Finally, we also provide a way for you to cancel any outstanding image requests. Any image request that has been cancelled will result in the handler block being called with a cancelled status. So we saw how to create images from an asset using AVAssetImageGenerator. Let's move on and look at how you can trim a movie and save it to a new file. And the way you do that is with AVAssetExportSession. Here's some code showing you how to use AVAssetExportSession.

When you create an AVAssetExportSession object, you will pass in the source asset that you want to export and a preset. We provide a variety of presets for you to choose from. These presets have varying bit rates and quality settings, so you can choose based on your needs.

Once you've created an export session, you will want to specify an output destination URL and a file type. You can also optionally specify a time range that you want to trim to. If you don't specify a time range, then by default, AVAssetExportSession will export the entire duration of your movie.

If you like, you can also optionally add or change metadata. And finally, this is how you start your export session asynchronously. Again, you will pass in a handler block, which will be called when the export finishes. This is what a handler block that you pass into AVAssetExportSession looks like. And you'll notice that it looks remarkably similar to the handler block that you pass into AVAssetImageGenerator. Once again, you will want to examine the status for one of three values: completed, failed or cancelled. I'd like to take a moment and talk about error handling.
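
Again, the slide code is not in the transcript; here is a minimal sketch of the sequence just described, with `asset`, `outputURL` and `trimRange` standing in for values your app would supply.

    AVAssetExportSession *session =
        [[AVAssetExportSession alloc] initWithAsset:asset
                                         presetName:AVAssetExportPresetMediumQuality];
    session.outputURL = outputURL;                     // must not already exist
    session.outputFileType = AVFileTypeQuickTimeMovie;
    session.timeRange = trimRange;                     // optional; defaults to the whole movie

    [session exportAsynchronouslyWithCompletionHandler:^{
        switch (session.status) {
            case AVAssetExportSessionStatusCompleted:
                // The trimmed movie is now at outputURL.
                break;
            case AVAssetExportSessionStatusFailed:
                NSLog(@"Export failed: %@", session.error);
                break;
            case AVAssetExportSessionStatusCancelled:
                break;
            default:
                break;
        }
    }];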

It's important that in your code you handle export failures gracefully. So what are some ways that an export can fail? Well, an AVAssetExportSession will not let you overwrite an existing file, and attempting to do so will cause your handler block to be called with a failed status. If you'd like to overwrite an existing file, you should remove the old one first.

Perhaps you already know this: AVAssetExportSession will also not let you write to files outside of your application sandbox. iOS 4 is a multitasking OS. This means that you can have your export continue in the background while your user switches over to check her mail or update her Facebook. Well, this is great.

But it does introduce a few other situations in which your export can fail. Again, the message here is to handle failures gracefully. For example, if your application is exporting in the background and your user switches over to Pandora or the YouTube app, starting media playback will interrupt your export. Even if you're exporting in the foreground, an incoming phone call will interrupt your export. In situations like these, our recommendation is that you let the user restart the export.

So we saw how you can export a movie using AVAssetExportSession. Let's switch gears now, and I'm going to introduce AVComposition, which together with AVAsset forms the cornerstone of editing, and which we will see throughout the rest of this hour. So we saw how AVAsset can be used as a source object for a single movie.

Once you have an asset, you can play it using AVPlayerItem, extract images from it using AVAssetImageGenerator and export it using AVAssetExportSession. Well, now, what if you have multiple movies? The tool we give you to work with multiple movies is AVComposition. And AVComposition is a subclass of AVAsset. This means that you can use it in the same great ways that you use an AVAsset. You can play it, you can extract images from it, and, of course, you can export it.

So let's take a look at a first example of how we can use AVComposition. And what we're going to look at is how we can combine clips from multiple movies. And to illustrate, I think it's best if I just show you. So we're going to leave AVPlayerDemo now, and I'm going to launch my second demo app, AVEditDemo.

The first thing you'll notice about AVEditDemo is that it doesn't have the typical user interface that you would see in an application in the App Store. That's because AVEditDemo was not meant for end users. The target audience for AVEditDemo is people such as yourselves, developers. Think of it as a playground for exploring the various AV Foundation editing APIs.

Again, the source code for this application is available for you to download, so I encourage you to play with this on your own. The first thing I'm going to do is select some clips. These clips were synced over to this phone using iTunes file sharing, and they were taken on an iPhone 4. And these movies were taken in full 720p HD quality. So I'm going to select this first clip of a cat here, and I'm going to select the range of this movie that I want to play.

So I'll say from here to here. I'm going to select a second clip. Let's take this beach scene here and play about 8 seconds of that too. And I'll select a third clip, and I will take another 8 seconds or so of this scene as well. So let's see what happens when I hit Play.

OK. Let's go back to slides and review what we just saw in this demo. We started out with three clips: the cat, the beach and the flowers. We took clips from each movie and composed them together on a timeline. AVComposition is the tool that we provide to allow you to do this. AVComposition allows you to lay out asset segments on a timeline.

An AVComposition has one or more AVCompositionTracks, much like an AVAsset has AVAssetTracks. And each AVCompositionTrack has an array of AVCompositionTrackSegments. Let's take a look at what is inside a track segment. Each track segment contains information to identify the source movie, the track inside the source movie and the time range that you want to include in your composition.

The mutable variant of AVComposition is AVMutableComposition, as you might expect. And we provide a number of APIs for you to insert assets into your mutable composition. For example, if you want to insert all of the tracks of an asset into your composition, this is how you do that.

You can also insert just a single time range of a single track of an asset into your composition. In your application, you may have your own representation of how you want movies to be combined, in which case it may be simpler to just give us the array of track segments and set them on your composition track directly, and we provide a way for you to do that as well. This is how you create a mutable composition. You start with an empty mutable composition, then you can start adding mutable composition tracks. In this case, we're adding a video composition track. And once you have your tracks, you can go ahead and start inserting asset segments.
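
Putting those pieces together, a minimal sketch of building a composition might look like this; `asset` and the 8-second range are illustrative.

    AVMutableComposition *composition = [AVMutableComposition composition];

    // Add an empty video track to the composition.
    AVMutableCompositionTrack *videoTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    // Insert the first 8 seconds of the source asset's first video track.
    AVAssetTrack *sourceTrack =
        [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    CMTimeRange range =
        CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(8.0, 600));

    NSError *error = nil;
    [videoTrack insertTimeRange:range
                        ofTrack:sourceTrack
                         atTime:kCMTimeZero
                          error:&error];

    // Or, to insert all of an asset's tracks at once:
    // [composition insertTimeRange:range ofAsset:asset atTime:kCMTimeZero error:&error];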

A note on working with mutable compositions: it is not safe to modify a mutable composition while you are playing it back, extracting images from it or exporting it. Why, you may ask? Well, remember that AV Foundation uses an asynchronous model. So taking the example of playback, if you're playing back your mutable composition, AV Foundation will be accessing it periodically from another thread or maybe even another process.

Now, if you go ahead and try to change it, essentially behind AV Foundation's back, well, you can imagine that bad things can happen. So our recommendation is that you pass a copy for these tasks, and then you can go ahead and safely modify the original. In this example code snippet, I'm making an immutable copy of my mutable composition and using that to create my player item.

In your application, you may wish to have a live view of your composition, and you want this live view to update whenever your composition changes. The way you can do that is by periodically creating a new snapshot of your composition, creating a new player item and using the replaceCurrentItemWithPlayerItem: API to safely and seamlessly update your player with this new snapshot.
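
A minimal sketch of that snapshot-and-replace pattern, assuming an existing `mutableComposition` and `player`:

    // Take an immutable snapshot; it is then safe to keep editing the original.
    AVComposition *snapshot = [[mutableComposition copy] autorelease];
    AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:snapshot];
    [player replaceCurrentItemWithPlayerItem:playerItem];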

So we saw how to cut together multiple clips using AVComposition. Well, that was a good start, but let's do something a little more interesting and add an additional audio track to our movie. And, again, I'd like to start by showing you. So we're going to go back to our controls here, we're going to scroll down to these options, and I'm going to enable an audio commentary track. I'll select an audio clip, and I'm going to set the start time, the time at which I want this audio commentary track to start.

And I'm going to set it to start at around 11 seconds. So now if I hit Play, you should expect to hear an audio commentary track at around 11 seconds.

You know, it's true, when something exceeds your ability to understand how it works, it sort of becomes magical.

[Eric Lee]

All right. So let me play that for you again. I won't play it quite from the beginning, but this time I want you to listen and notice how the volume of the main audio track quiets down just before the audio commentary kicks in.

You know, it's true, when something exceeds your ability to understand how it works, it sort of becomes magical.

[Eric Lee]

That was Jony Ive, by the way, in case you didn't recognize his voice. So let's go back to slides and review again what we just heard in this demo. So we added an audio commentary track. We started out with our composition with its video and audio tracks, and we added a third audio track for the audio commentary. Notice that this new audio track doesn't start at the same time as the other audio tracks.

There are gaps, and we call these gaps empty edits or empty segments. The second thing that I showed you was that the volume of the main audio track quieted down just before Jony's voice came on. This technique is known as ducking, and the way we do that is with audio volume ramps. So let's look at the AV Foundation objects involved in making this happen. Again, we started with our AVComposition, with its video and audio tracks. We added a third audio track.

And to do the volume ducking, we created a new object, an AVAudioMix. AVAudioMix allows you to add volume ramps to the audio tracks in your composition. So AVAudioMix is a tool for adding volume adjustments to your composition. An AVAudioMix has an array of AVAudioMixInputParameters objects, and each input parameters object allows you to adjust the volume of a single audio track. If you don't have an input parameters object associated with a track, that track will simply get the default volume. Here's how you create an audio mix; let me walk you through this. Start by creating an empty AVMutableAudioMixInputParameters object.

Then you can set the volume at a specific time; here, I'm setting the volume to 1 at time 0. You can also ramp the volume; here, I am setting the volume to ramp from 1 to 0.2 between times X and Y. Note that in between the times that I specify explicitly, the last value of the volume continues automatically. So specifying the volume at time 0 and between times X and Y is enough to describe the yellow volume curve that you see to your right.

Once you've created your input parameters objects, you can go ahead and create an audio mix object. To use AVAudioMix, simply set it on your player item for playback or your export session for export. So we saw how to add an additional audio track and adjust the volume of our audio tracks using AVAudioMix. Let's move on and talk about video transitions.
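
A minimal sketch of the ducking setup just described, with `mainAudioTrack`, `rampStart` and `rampDuration` standing in for your own values:

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters
            audioMixInputParametersWithTrack:mainAudioTrack];

    [params setVolume:1.0 atTime:kCMTimeZero];
    [params setVolumeRampFromStartVolume:1.0
                             toEndVolume:0.2
                               timeRange:CMTimeRangeMake(rampStart, rampDuration)];

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = [NSArray arrayWithObject:params];

    playerItem.audioMix = audioMix;   // or exportSession.audioMix for export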

So let's go back here to our options, and we're going to turn on Transitions. I'm going to leave the Default Transition Duration at 1 second. And let's see what push transitions look like.

You know, it's true, when something exceeds your ability to understand how it works, it sort of becomes magical.

[Eric Lee]

So let's go back and take a look at what cross-fades look like.

You know, it's true, when something exceeds your ability to understand how it works, it sort of becomes magical.

[Eric Lee]

All right. Let's go back to slides. So what did we just see? We have our old friend AVComposition, again with our video and audio tracks. But this time, instead of simply cutting from one clip to the next, we're going to add an additional video track and an additional audio track, and we're going to overlap the video tracks and audio tracks during the transition. During this overlap, we're going to be decoding and playing two video tracks and two audio tracks at the same time.

When you play two audio tracks, they mix together naturally. You just hear both audio tracks at the same time. But for video, we're going to need a way to describe explicitly how to combine the two video tracks, and the way we do that is with AVVideoComposition. AVVideoComposition allows you to specify when to play one track, when to play the other track and when to play a combination of the two. And the result of applying the video composition that you see to the video tracks is something like this.

Let's talk a little bit more about AVVideoComposition. An AVVideoComposition has an array of AVVideoCompositionInstructions. Each instruction describes the output video in terms of input layers, and we call these layers AVVideoCompositionLayerInstructions. Each layer instruction allows you to modify the opacity or the affine transform of the original video track.

And you can ramp these values, which we call tweening, and this allows you to get transition effects. So, for example, if you tween the opacity value, you can get a cross-fade. Or if you tween the affine transform, you can get a push transition. Let's take a look at how we can construct an AVVideoCompositionInstruction. Again, I'll walk you through this code snippet. We start by creating an empty mutable video composition instruction. The first thing we're going to want to do is set a time range.

This is the time range over which we want this instruction to be executed. In this case, I'm describing this transition here. Then we can go ahead and create layer instructions for this instruction. We're going to want to create a layer instruction for track A, and we're going to fade out track A by setting a ramp on its opacity from 1 to 0.

We'll also want to create a second layer instruction for track B. For this layer instruction, we're going to leave the opacity at its default value of 1. And once we have our layer instructions, we can add them to our instruction object in top-to-bottom order. Once we have our instruction objects, we can create a video composition. Start with an empty mutable video composition and set the instructions on this video composition object. There are a few properties you will want to set on your video composition object.
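
A minimal sketch of the cross-fade instruction just described, assuming `trackA`, `trackB` and a `transitionRange`:

    AVMutableVideoCompositionInstruction *instruction =
        [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = transitionRange;

    // Fade track A out over the transition.
    AVMutableVideoCompositionLayerInstruction *fromLayer =
        [AVMutableVideoCompositionLayerInstruction
            videoCompositionLayerInstructionWithAssetTrack:trackA];
    [fromLayer setOpacityRampFromStartOpacity:1.0
                                 toEndOpacity:0.0
                                    timeRange:transitionRange];

    // Track B keeps its default opacity of 1.
    AVMutableVideoCompositionLayerInstruction *toLayer =
        [AVMutableVideoCompositionLayerInstruction
            videoCompositionLayerInstructionWithAssetTrack:trackB];

    // Top-to-bottom order: the fading layer goes first.
    instruction.layerInstructions =
        [NSArray arrayWithObjects:fromLayer, toLayer, nil];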

To specify the frame rate at which you want your composition to run, use the frameDuration property. So, for example, if you want your composition to run at 30 frames per second, you will want to set your frame duration to 1 over 30. The render size tells AV Foundation how large of a canvas to use when rendering and compositing your video frames together.

In my case, I'm rendering HD 720p video, so I'm just going to set the render size to 1280 by 720. In certain cases, it may be unnecessary to render your video composition at its full resolution. For example, if my source material is 1280 pixels wide but my view is only 640 pixels wide, well, I don't really need to be rendering every single pixel.

So we allow you to set a render scale on your video composition. This tells AV Foundation it can do less work, and it will also ensure that your playback runs as silky smooth as possible. Using an AVVideoComposition is very much like using an AVAudioMix. You can set it on your player item for playback.
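
In code, those properties come together roughly like this; a sketch reusing the `instruction` from the previous snippet:

    AVMutableVideoComposition *videoComposition =
        [AVMutableVideoComposition videoComposition];
    videoComposition.instructions = [NSArray arrayWithObject:instruction];
    videoComposition.frameDuration = CMTimeMake(1, 30);       // 30 frames per second
    videoComposition.renderSize = CGSizeMake(1280.0, 720.0);  // 720p canvas
    videoComposition.renderScale = 0.5;                       // optional: render at half size

    playerItem.videoComposition = videoComposition;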

You can also set it on an asset image generator for extracting images from a composition. And, of course, you can set it on an asset export session for export. There are a few things I would like to point out to you when working with AVVideoCompositions. The instructions inside your AVVideoComposition must not overlap or contain any gaps.

You should ensure that the time range of each instruction in your video composition immediately follows the previous one. Your video composition also must not be shorter than your AVComposition. You should ensure that the instructions in your video composition span the entire duration of your AVComposition. I'd like to talk briefly about hardware requirements.

If you would like to use AVVideoComposition in your application, it must be running on an iPhone 3GS or higher, or a third-generation iPod touch. While we're on the topic of hardware requirements, I'd like to quickly point out that there is a practical limit to the number of video streams that a device can decode simultaneously, and this limitation is hardware specific.

For those of you in the audience who are familiar with video compression techniques, you may be wondering to yourself, "Well, what about I- and P-frames? Does AV Foundation restrict me to doing my edits on I-frame boundaries?" And the answer is no. AV Foundation allows you to place your edits anywhere you like. If your edit doesn't begin on a key frame, however, AV Foundation will need to start decoding from an earlier key frame, and this extra decoding is known as catch-up. Our recommendation to you is that you alternate your edits in your composition.

What do I mean by that? Well, in this example, I have an AVComposition with two video tracks. I start with my first video edit in the lower video track, my second video edit is in the upper video track, and I alternate back and forth in this manner for the duration of my composition. Laying out your edits in this manner allows AV Foundation to do the catch-up decoding during the empty edits. And, again, this will ensure that your playback is as smooth as possible.

If you employ this technique, however, you will need to use an AVVideoComposition to tell AV Foundation when to use which track. I'd like to briefly talk about some other uses for AVVideoComposition. If you insert two assets with different video sizes into your composition, AV Foundation will, by default, place your edits in two different video tracks. AVPlayer, however, will only play one of them.

And just like in the situation I showed you on the last slide, you will need to add an AVVideoComposition to tell AVPlayer when to play which track. If you're working with movies captured on an iPhone, these movies will have a rotation matrix associated with them, and you can examine this rotation matrix by querying the preferredTransform property on your video track. If you play this asset directly using AVPlayer, then AVPlayer will take care of doing the rotation for you automatically.

But if you now insert this asset into a composition, the rotation will be ignored, and you will need to use an AVVideoComposition to reinstate the rotation. So we saw how to use video transitions in your composition using AVVideoComposition. Let's move to our last topic, and I'm going to show you how you can incorporate Core Animation into your movie. And again, I'd like to show you with a demo.

So we're back in AVEditDemo here. I'm going to go back to my options, and I'm going to go to the last option here, Titles, and I'm going to add an animated title to my movie. So let's say "Magic." Let's see what happens now when I hit Play.

You know, it's true, when something exceeds your ability to understand how it works, it sort of becomes magical.

[Eric Lee]

So this animated title sequence was drawn and rendered using Core Animation and composited directly onto the video frames. And even though it's drawn using Core Animation, it's actually part of the movie. So if I scrub back through my composition, you'll notice that my animation reappears. And if I scrub forward, you'll see that the stars rotate clockwise. And if I scrub backwards, they'll animate counterclockwise. So I hope you'll indulge me here, and I'm going to play this for you one last time.

I'd like to point out, once again, that what we're looking at here is HD video, 720p HD video edited together. We've added video transitions, so we're decoding and compositing multiple HD video streams, we've added an extra commentary track, and now we've thrown Core Animation into the mix. And all of this is happening in real time on an iPhone.

You know, it's true, when something exceeds your ability to understand how it works, it sort of becomes magical.

[Eric Lee]

All right. Let's go back to slides.

[ Applause ]

So, again, to quickly review what we just saw: we had our good old friend AVComposition with our video and audio tracks. What we did was add some Core Animation layers: a layer for the magic title and one for the ring of stars. We also added some animations to spin the stars and fade out the entire title sequence after 10 seconds. Let's take a look briefly at the Core Animation objects involved in this.

We had a single animated title layer with two sublayers: one for the title text and another for the ring of stars. I added an explicit animation to spin the ring of stars layer, and another explicit animation on the parent animated title layer to fade out the entire title after 10 seconds.

If you'd like more information on Core Animation, I encourage you to attend Sessions 424 and 425. And now, I'd like to talk a bit about animation, video and timing. So animation, as you know, is the result of changing a property such as position or size over time. Some of you may already be familiar with incorporating Core Animation in your application for real-time effects.

With AV Foundation, we're allowing you to use these same great tools, the same Core Animation, but put it in your movie. And the only difference is that instead of your animations running in real time, they're running on the movie timeline. Let's take a look at how this works. We start with a UIView. This UIView will have a layer associated with it.

This layer will have some sort of layer hierarchy. One of these layers will be your AVPlayerLayer. This AVPlayerLayer actually has a private sublayer for the video. One thing I'd like to point out is that all of the objects in this diagram, except for the video, are running in real time. What do I mean by real time? Well, Core Animation counts time in seconds since boot.

This time is always monotonically increasing and is outside of your control. The video, on the other hand, is running in movie time. The movie time is counted in seconds since the start of the movie. Movie time is under the control of you, the user, or you, the developer. If I start playback, movie time will advance.

But I can also pause movie time by pausing playback. I can even make movie time go backwards by scrubbing back to an earlier point in the movie. When I add my animated title sequence to the movie, I want my animations to be running on the movie timeline. And the way we let you do that is with AVSynchronizedLayer. So when you're playing back with Core Animation, we give you AVSynchronizedLayer to make the animations run according to movie time.
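
A minimal sketch of hosting an animated layer in an AVSynchronizedLayer, assuming `playerItem`, `animatedTitleLayer` and a hosting `view`:

    // Sublayers of a synchronized layer animate on the movie timeline.
    AVSynchronizedLayer *syncLayer =
        [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
    [syncLayer addSublayer:animatedTitleLayer];
    [view.layer addSublayer:syncLayer];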

The actual rendering is still done by Core Animation's onscreen rendering facilities; it's just the timing that we modify. But what happens if you want to export your composition with Core Animation? Well, we're going to have to do something slightly different, because we can't rely on Core Animation's onscreen rendering facilities anymore. So what we're going to do is introduce a post-processing stage to our video composition. We're essentially removing the Core Animation layers from the onscreen layer tree, and we're going to add them to our video composition as a post-processing stage.

And the way we do that is with AVVideoCompositionCoreAnimationTool, which I think has the distinction of being the longest class name in AV Foundation. The way AVVideoCompositionCoreAnimationTool works is very similar to AVSynchronizedLayer. There are actually a couple of ways you can set it up. I'll show you one here, and you can look in the header for more information on other ways you can set this up, or you can come talk to us in the lab.

We're going to have a single parent layer in our video composition Core Animation tool, and this parent layer will have both our video sublayer and our animated title layer. Now, rendering Core Animation offline can be expensive, and you're going to want to disable this post-processing when it's not needed. The way you do that is by setting the enablePostProcessing property on your composition instruction to NO. This tells AV Foundation it can skip the Core Animation post-processing stage for the duration of that instruction, and this will speed up your export.
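
Here is a minimal sketch of that setup, reusing the `videoComposition`, `instruction` and `animatedTitleLayer` names from earlier; the layer geometry is illustrative.

    CALayer *parentLayer = [CALayer layer];
    CALayer *videoLayer = [CALayer layer];
    parentLayer.frame = CGRectMake(0.0, 0.0, 1280.0, 720.0);
    videoLayer.frame = parentLayer.frame;
    [parentLayer addSublayer:videoLayer];           // video is composited here
    [parentLayer addSublayer:animatedTitleLayer];   // animation on top

    videoComposition.animationTool =
        [AVVideoCompositionCoreAnimationTool
            videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                    inLayer:parentLayer];

    // Skip the post-processing stage wherever no animation is visible.
    instruction.enablePostProcessing = NO;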

A quick note on multitasking: Core Animation is disallowed in the background. This means that if your export is running in the background and it hits a point where it needs to do some Core Animation post-processing, your export will fail and the handler block will be called with the failed status. Core Animation provides a number of very useful default behaviors that help you when working with real-time animations.

But these default behaviors can introduce some unexpected results when you're trying to add your animations to your movies. For example, if you set the begin time of your animation to 0, Core Animation will automatically translate 0 to the current media time using CACurrentMediaTime. This is the current real time, and it's most likely not what you want. You really want 0.

So you're going to want to set your begin time to a small non-zero number, and we provide a constant, AVCoreAnimationBeginTimeAtZero, for this purpose. Core Animation will, by default, remove any animations that it thinks have passed. So if you want your animation to run, and run multiple times, which you will, then you'll need to set the removedOnCompletion property on your animation to NO.

When you change a property on a Core Animation layer, by default, Core Animation will add what we call an implicit animation. This implicit animation will animate your property from its old value to its new value. But you probably don't want these implicit animations in your movie, because you want to control the animations yourself. So you're going to want to disable implicit animations.

And the way you do that is by surrounding your property changes with a transaction and disabling actions using the setDisableActions: method on CATransaction. In certain cases, you may want your Core Animation animations to continue past the end of your video. This is OK, but you will need to explicitly tell AV Foundation how long your playback or your export should run. And the way you do that is by setting the forwardPlaybackEndTime property on your player item for playback, or setting the timeRange property on your export session for export.
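
A minimal sketch pulling those adjustments together; the `starsLayer`, the spin animation and `endTime` are hypothetical names for illustration:

    #import <QuartzCore/QuartzCore.h>

    CABasicAnimation *spin =
        [CABasicAnimation animationWithKeyPath:@"transform.rotation"];
    spin.duration = 10.0;
    spin.fromValue = [NSNumber numberWithDouble:0.0];
    spin.toValue = [NSNumber numberWithDouble:2.0 * M_PI];
    spin.beginTime = AVCoreAnimationBeginTimeAtZero;  // not 0, which means 'now'
    spin.removedOnCompletion = NO;                    // survive for scrubbing and replay

    // Disable implicit animations while changing layer properties.
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    starsLayer.transform = CATransform3DIdentity;
    [CATransaction commit];

    [starsLayer addAnimation:spin forKey:@"spin"];

    // If the animations outlast the video, say how long to keep running:
    playerItem.forwardPlaybackEndTime = endTime;
    // or: exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, endTime);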

So we saw how to add Core Animation animations to your composition using AVSynchronizedLayer and AVVideoCompositionCoreAnimationTool. At the beginning of the session, I promised to show you how to do a number of things with the AV Foundation editing APIs. Let's quickly review. If you want to extract images from an asset, you want to use AVAssetImageGenerator.

If you want to export your movie, with some optional trimming, use AVAssetExportSession. If you want to combine clips from multiple movies, you want to use AVComposition. If you've added additional audio tracks and want to adjust the volume of these audio tracks, you want to use AVAudioMix. If you want to spice up your movie with some video transitions such as cross-fades, we provide AVVideoComposition. And finally, if you've added Core Animation animations to your movie and you want them to run synchronously with your movie on the movie timeline, we provide AVSynchronizedLayer for playback and AVVideoCompositionCoreAnimationTool for export.

If you have any questions on any of the material I presented in this session, feel free to contact Eryk Vershen, our Media Technologies Evangelist. And if you missed Kevin's wonderful session on Discovering AV Foundation, don't panic, he'll be giving the session again on Thursday at 4:30. And please stay tuned for our next session in the series on using the camera, which will take place right here immediately following this session.