
WWDC11 • Session 405

Exploring AV Foundation

Graphics, Media, and Games • iOS, OS X • 54:56

AV Foundation harnesses a streamlined media architecture that enables high-performance audio and video playback, editing, recording, and more. Explore the features provided by this technology and understand how to add advanced multimedia capabilities to your apps. Get an overview of the classes in the AV Foundation framework on iOS and get introduced to the powerful capabilities brought back to the Mac in Lion.

Speakers: Kevin Calhoun, Simon Goldrei

Unlisted on Apple Developer site

Downloads from Apple

HD Video (187.1 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Thank you. Good afternoon. Let's go exploring, shall we? But first, by way of introduction, what is AV Foundation? AV Foundation is Apple's foundational Objective-C framework for processing timed audiovisual media. It made its first appearance in iPhone OS 2.2, at that time with classes for the processing only of audio media. But even then, you could tell by the name with the V in it that we had vast plans for expansion.

And in fact, in iOS 4, we did expand the framework vastly to cover timed audiovisual media, not just audio media, including playback, capture, editing, export, re-encoding, and even media sample processing at the low level. We've continued to enhance the framework in iOS 5, and now we've brought it to the desktop as well in Lion.

One of the features of AV Foundation is that it's well integrated with core technologies of both platforms, iOS and Mac OS X. It's designed to fit snugly on top of core audio, core media, core animation, and foundation. And because its dependencies are in common across the two platforms, its model is also in common on both Mac OS X and iOS.

It offers the same classes with only a few minute differences between the two platforms and the same model for dealing with timed audiovisual media. So you have the opportunity by adopting AV Foundation, whether you build applications for the desktop or for the mobile devices or for both, for your code for dealing with timed audiovisual media to be in common across the two platforms.

Here's our proposition to you today. This is the media-centric framework that we're putting the bulk of our effort into, and it's the set of APIs that the majority of our apps are using. So here's a consideration for you: for your new applications, adopt AV Foundation for timed audiovisual media, and for your existing applications, consider making a transition to this framework as well.

We're going to cover the full breadth of the framework in several sessions over the course of today and tomorrow. In this session, we're going to cover primarily inspection and playback. And I'm going to turn things over to my colleague, Simon Goldrei, who's going to explore those with you. Thanks.

Thanks, Kevin. Many of you here have been developing on Mac OS X for some years now. You've both developed and deployed wildly successful media applications for the platform. You've heard that AV Foundation is new, back on the Mac, in Mac OS X Lion, and you've come here to discover its suitability for your media application.

The other half of you have been developing for iOS these last few years. And you were here in this session rediscovering AV Foundation last year. You too have developed some really innovative, wonderful media applications for this mobile platform. And now you'd like to learn what the new features are, what's new in iOS 5 that we've brought to AV Foundation.

What we all strive for, what we all desire from these applications is that they remain responsive, robust, and efficient. I'm Simon Goldrei, and I'm part of the Media Systems team at Apple. And over the next short 15 minutes that we have together, I would like to introduce to you one of the core concepts of AV Foundation.

How do we facilitate these goals of a robust and efficient media application? What are the protocols and interfaces that AV Foundation provides that support these goals? Now, while we're going to survey the full set of classes that AV Foundation provides, I'm going to only focus on those that provide media inspection and discovery facilities and media playback.

But last, and certainly not least, I'll introduce you to some of the new APIs that we've been able to provide in iOS 5, as well as the desktop. So let's begin. The first thing is where is AV Foundation? AV Foundation on iOS sits above the dependent frameworks of core audio, core media, and core animation, but it sits below the application toolkits of UIKit and Media Player Framework.

At this foundational level, AV Foundation is able to provide your media application a much richer set of API than Media Player Framework can, as well as much tighter integration and greater opportunities for runtime efficiency. Of course, the best efficiency that we're now able to provide is a common media framework between the two platforms. For the first time, we've got a common media playback API, as well as a rich set of API, for both iOS and Mac OS X. And on Mac OS X, AV Foundation sits in much the same place: again, above the dependent frameworks, but below the application toolkit.

So when to use AV Foundation? AV Foundation provides this really rich media API set, and we've covered the full gamut of media operations that you may desire. We've made it really easy for you to adopt it in your application by providing separate interfaces for each aspect of a media framework. In fact, by providing separate interfaces, we've been able to separate the operations for, say, just media inspection from the much more involved, much higher-overhead set of operations required to prepare a media resource for playback.

AV Foundation supports five key areas of media API. The first key area is media inspection, media discovery: what's inside your media resource? Next, there's a playback and presentation set of classes. And these provide a really customizable set of API. You get to control playback behavior. You get to control playback presentation.

There exists a set of composition classes. Composition classes expose to you editing API, editing facilities. They allow you to combine segments from multiple media resources: you reference different segments of media within those resources and then compose them both temporally as well as spatially.

There exists a set of export API. This allows you to transcode your media and supports a vast array of codecs for output from your application. And lastly, and certainly not least, there's a set of capture API. The capture API allows you fine-grained control over the cameras in both your iOS handheld devices as well as the desktop environment. We provide you the ability to control image attributes about what you capture and to persist this image stream to disk.

So this is a rich media API that is common to both iOS and Mac OS X. It's 64-bit native, and hardware acceleration comes standard; there's absolutely nothing further for you to adopt. We have an advanced video render pipeline that is GPU-backed from the time of decode all the way through to display, and to do this, it's tightly integrated with Core Animation.

Before we go any further, what are the challenges of using timed media? What makes this so difficult? Timed media resources, as I'm sure you're now all very aware, are often gigabytes in size. They take time to download from remote sources. They take time to decode and parse. In designing AV Foundation, we've specifically sought not to obscure this reality from you, the programmer. AV Foundation will only perform the operations necessary to support your specific API requests. In addition to your requests, we coordinate with a dynamic, lively playback environment that is able to mutate objects beyond your control.

And we do all this while providing a set of protocols and interfaces, which we'll discuss in more detail shortly, which support you and your development in remaining responsive and efficient. Let me give you a concrete example. Suppose you're working with a media resource that does contain summary information. In that case, it's all rather convenient.

Ask for the duration of this resource, and AV Foundation will download just enough data to find the summary information and return to you the answer: the duration. But things are not always this convenient. You may find that your media resource does not contain summary information, in which case, when you request the value of duration, AV Foundation needs to download the entire media resource and only then return to you the value of duration.

You do not want your media application to be unresponsive during this lengthy period of time. In fact, you would like your media API to help facilitate, to direct you towards a responsive user experience. So, let me take this opportunity with you together to explain, to introduce you to the classes and protocols that we provide in AV Foundation that help us meet these goals.

Albert Einstein once quipped that the only reason for time is so that everything doesn't happen at once. And while I don't disagree, I really think that some things best happen concurrently. In AV Foundation, we like to have these two notions of media models, the static and the dynamic. The static models are used for media inspection and discovery. You make requests on them, we load the data, and once they're loaded, the values of these properties don't change.

The dynamic objects, on the other hand, are part of a lively playback environment. You observe them as they mutate within the playback environment. To benefit from this useful distinction of static versus dynamic, inspection versus playback, you are to request values of static models and await loading of values asynchronously. And for dynamic models, you use key value observing where it's offered.

On iOS, there's an additional consideration. On iOS, your media operations can be interrupted. They can be interrupted by telephone calls, by FaceTime requests. So you need to be prepared. Additionally, the shared media services on iOS can be reset. In order to avoid these services being reset, you should be considerate: you need to use these shared media services in an asynchronous manner. Fortunately, to make this really easy, there's a protocol for that.

Let's now survey the core playback classes present in AV Foundation. It starts with the AV asset. The AV asset is a model for the entire media resource. You request values on an AV asset using a protocol called AV asynchronous key value loading, which I'll describe in more detail shortly. You might request of an AV asset its collection of tracks, in which case AV Foundation will download all the data necessary in order to answer just this one request. We'll populate the collection of AV asset tracks and return this to you.

An AV asset track is a static model for a single media stream within a timed media resource. All of these classes

[Transcript missing]

If your media application intent is just to perform media inspection and media discovery, these are all the classes you need to worry about. But let's now expand your application scope to include playback.

Let me introduce you to the AV Player. A sophisticated media controller for playback. And by sophisticated, I mean that it's customizable. You have control over playback behavior. But in and of itself, AV Player and AV Asset are not all you need to achieve playback. You need a model for the AV asset during playback. And for that, let me introduce you to the AV player item.

a model for the AV asset, for the entire media resource, during playback. AV Player Item is an example of one of these dynamic model objects. You add yourself as an observer using Foundation's key-value observing interface. And you want to add yourself as an observer to an AV player item before you associate it with an AV player, because as soon as you do, it's part of the active playback environment.

AV Foundation will again download all the data necessary to prepare this item for playback, populate the collection of AV player item tracks, and finally make that player item ready for playback. You want to observe these changes. An AV player item track is a model for a single media stream, or an AV asset track, during playback.

Let's explore some more and take a look at the first of these, the AV asset. The AV asset is inspectable via the AV asynchronous key value loading protocol. It models the entire media resource. You don't instantiate an AV asset directly, but rather you instantiate one of its subclasses, the AV URL asset. And you can provide an NSURL for this purpose. What's important to note is that an AV asset in this state, in this newly initialized state, is not ready for any particular task. You haven't loaded any of the values for any of its properties.

An AV asset provides lots of information about the whole media resource, the collection of tracks, the lyrics, possibly the duration, but it also provides four properties which are particularly useful. It tells you what the AV asset is good for, what you can do with it, what its utility is. An AV asset may be exportable, composable, readable, that is, can you get samples from it, and last, playable.

AV Foundation supports a rich set of modern and relevant media codecs, and it's through the playable property that you discover if AV Foundation is able to service this asset for playback. Let's take a look at an example of how you would load the playable property on an AV asset.

It starts off with allocating a collection of properties that we're interested in loading. In this case, it's just one, the playable property. We use the method from the AV asynchronous key value loading protocol, load values asynchronously for keys, provide this collection, and then provide a completion handler, which is a block.

Within the block, when the loading operation has succeeded or progressed, we get the value of the status for the loading operation. Then we switch over the possible values. Possible values may be loaded, failed, or for the sake of brevity, canceled. Ideally, if everything worked out nicely and the value of the playable property has indeed been loaded, we end up here inside the switch statement.

In which case, all we need to do now is enable our UI for our user to select that AV asset for playback. What is important to note here is that this is an asynchronous loading. It's occurring on some arbitrary queue. If you want to perform user interface operations, you need to move execution back to the main queue.
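Here's a minimal Objective-C sketch of the loading pattern the speaker describes; the asset argument and the enablePlaybackUI method are illustrative placeholders rather than anything shown in the session.

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: load the "playable" property asynchronously, then update UI on the main queue.
- (void)inspectAsset:(AVAsset *)asset
{
    NSArray *keys = [NSArray arrayWithObject:@"playable"];
    [asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
        NSError *error = nil;
        AVKeyValueStatus status = [asset statusOfValueForKey:@"playable" error:&error];
        switch (status) {
            case AVKeyValueStatusLoaded:
                // The value is now available; hop back to the main queue for UI work.
                dispatch_async(dispatch_get_main_queue(), ^{
                    if (asset.playable) {
                        [self enablePlaybackUI];   // hypothetical UI update
                    }
                });
                break;
            case AVKeyValueStatusFailed:
                NSLog(@"Loading 'playable' failed: %@", error);
                break;
            default:
                break;   // cancelled, etc. -- omitted for brevity, as in the talk
        }
    }];
}
```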

Let's return to our class collection here and drop down one level into the AV asset track. The AV asset track, another static model object, is also inspectable via the AV asynchronous key value loading protocol. And it models the static state of a single media stream within the resource. In this little coding example for obtaining the collection of AV asset tracks within the resource, I've chosen to first test the status and then progress further and continue with the AV asynchronous key value loading protocol, loading the values asynchronously for keys.

Each AV asset track is of a single media type. It details information such as the format description, the frame rate of the media stream, data rate, language tagging, and some sundry metadata. Returning to our class collection, class diagram, let's jump to the other side and focus on the media controller, the AV Player.

The AV player, because it's part of the dynamic playback environment, is observable via key-value observing. It's a media controller with just a few AV player item conveniences. You instantiate an AV player typically by providing an AV player item. AV Player is also the object for observing time. You observe time to implement a customized scrubber UI or transport control.

But you don't use key value observing for observing time, because time, as it progresses through playback, is a constantly changing value. It's not a discrete transition from value A to value B. So in AV Player, we provide two interfaces for observing time. The first of which is the periodic time observer, and the second of which is the boundary time observer.

The Periodic Time Observer allows you to specify a single interval of time. As each single interval of media time is progressed, or is traversed through normal playback, you'll get a callback. The Boundary Time Observer is a little different. Instead, you provide a collection of points of media time within the media resource. And as each of these points in time are traversed through normal playback, you'll get a callback. Your block will be invoked. Let's take a look now at a little coding example of how you might implement a scrubber UI using the periodic time observer.

It starts off, well, I've defined two methods, set up transport UI and a cleanup method. And it starts off with adding a periodic time observer for an interval. And I provide a single interval, a nominal one, of say half a second. That's reasonable. That gives us enough fidelity in our scrubber UI. We provide a dispatch queue where we want to get the call back on.

In this case, I'm going to be doing a lot of UI, so I've chosen the main queue. And then I provide a block that is invoked after each successive time interval is traversed. In this nice, simple example, we move the playhead over half a second's worth. And lest we forget, we have a cleanup method which removes the time observer from the player and releases the reference count that we have over the token that was returned to us earlier.
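A rough sketch of the transport-UI example as described, using the half-second interval mentioned above; the _player, _scrubber, and _timeObserverToken ivars are assumptions, and the manual retain/release mirrors the 2011-era example.

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: observe playback time every half second on the main queue to drive a scrubber.
- (void)setUpTransportUI
{
    CMTime interval = CMTimeMakeWithSeconds(0.5, 600);   // half a second of media time
    _timeObserverToken = [[_player addPeriodicTimeObserverForInterval:interval
                                                                 queue:dispatch_get_main_queue()
                                                            usingBlock:^(CMTime time) {
        // Move the playhead to reflect the current playback time.
        _scrubber.value = CMTimeGetSeconds(time);
    }] retain];
}

- (void)cleanUpTransportUI
{
    // Remove the observer and give up our reference to the token, as described above.
    [_player removeTimeObserver:_timeObserverToken];
    [_timeObserverToken release];
    _timeObserverToken = nil;
}
```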

Returning to the classes again, let's now look at the central object, the AV player item. AV Player Item is one of these dynamic model objects. It's your dynamic model of an entire media resource during playback. What's useful to note is that you can create many AV player items with a single AV asset. You can also instantiate an AV player item with an NSURL instance directly.

AV Player Item provides a bunch of useful tools. The first of which is that it gives you property values about your media resource that are only discernible during playback. For a streaming resource, for example, you can only learn the duration at playback time. It provides the collection of tracks, the presentation size, and, new in iOS 5, whether a player item is able to play while fast-forwarding or fast-reversing.

AV Player Item is also your object for manipulating time. It's how you can seek to a specific time, step by a count, that is step by a single frame or a single sample, and specify a forward playback end time and a reverse playback end time, that is when playback will cease when playing forward and when it will cease playing in reverse.

AV Player Item also provides hints, indications on how playback is going: the likelihood of playback being able to keep up, whether the playback buffer is full or empty, and what the loaded and seekable time ranges are. AV Player Item is also your guide to how the playback environment is going. It gives you information about services. It gives you access to the network access log and the network error log.

And it gives you a property called status. And status is your indication for the eligibility for a player item for playback, whether AV Foundation is likely to service this player item for playback. It also gives you a hint as to when the shared media services may have been terminated. Possible values for status are unknown, ready for playback, or failed.

Let's take a look now in code on how you would add yourself as an observer and observe the status property on an AV player item. It starts by adding observer with an observation context. And we're interested in the new option, the new value that we transition to. Next, we override the NSObject method, observe value for keypath. We check to see if the observation context is the same as the one we provided earlier.

Get the reference to the item, get a reference to the status, switch over the possible values of status, and if everything worked out just fine, we arrive here where the item is ready to play. The only thing left to do now that we know that the item is eligible for playback is to message our player with the message "Play."
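A minimal sketch of the status-observation pattern just described, assuming a controller with a player property; the ItemStatusContext pointer is an illustrative convention, not something named in the session.

```objc
#import <AVFoundation/AVFoundation.h>

static void *ItemStatusContext = &ItemStatusContext;

- (void)prepareToPlayItem:(AVPlayerItem *)item
{
    // Observe status before associating the item with a player, as the speaker advises.
    [item addObserver:self
           forKeyPath:@"status"
              options:NSKeyValueObservingOptionNew
              context:ItemStatusContext];
    self.player = [AVPlayer playerWithPlayerItem:item];   // self.player is an assumed property
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if (context == ItemStatusContext) {
        AVPlayerItem *item = (AVPlayerItem *)object;
        switch (item.status) {
            case AVPlayerItemStatusReadyToPlay:
                [self.player play];   // the item is now eligible for playback
                break;
            case AVPlayerItemStatusFailed:
                NSLog(@"Player item failed: %@", item.error);
                break;
            default:
                break;
        }
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}
```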

Let's drop down one level into the AV Player Item track. AV Player Item Track is your model for that single media stream during playback, the AV Asset Track during playback. It provides two properties, the first of which, a mutable property called enabled, gives you the ability to disable a single AV player item track for playback. We'll discuss why that might be useful in a moment. It also provides a reference to the AV asset track that backs the AV player item track.

The enabled property is useful for disabling a player item track for playback. For example, if you have an audio-visual resource, you might want to disable the video tracks so that the audio-visual resource plays back as audio only. And this is particularly useful on iOS so that your audio-visual resource is still eligible for playback in the background. Let's take a look now at how you might disable an AV player item track for playback.

Again, it starts off with adding an observer for the tracks property on the AV player item so that we get the full collection of tracks. We override the observe value for key path method, check to see if the observation context is the same, obtain the reference to the tracks, and iterate over each track in turn.

As we iterate over each one, we get the reference to the AV asset track that backs it, ask for its media type, and check to see if the media type is of type video. If it is, we disable that AV player item track for playback, and we end up with an audiovisual resource with only the audio tracks eligible for playback.
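A sketch of how disabling the video tracks might look, following the tracks-observation pattern just described; the TracksContext pointer and the method names are assumptions.

```objc
#import <AVFoundation/AVFoundation.h>

static void *TracksContext = &TracksContext;

- (void)observeTracksOfItem:(AVPlayerItem *)item
{
    [item addObserver:self
           forKeyPath:@"tracks"
              options:NSKeyValueObservingOptionNew
              context:TracksContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if (context == TracksContext) {
        AVPlayerItem *item = (AVPlayerItem *)object;
        for (AVPlayerItemTrack *track in item.tracks) {
            // Disable any track whose backing asset track carries video,
            // leaving only the audio tracks eligible for playback.
            if ([track.assetTrack.mediaType isEqualToString:AVMediaTypeVideo]) {
                track.enabled = NO;
            }
        }
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}
```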

So, what's the recipe for playback in just four easy steps? The first of which is create your AV URL asset instance. Next, load asynchronously using the AV asynchronous key value loading protocol the tracks. Now that you've done this, you can create your AV player item. And finally, you can either create your AV player with this item or replace the current item with this item.
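A minimal sketch of the four-step recipe, assuming a player property on the receiving object; the URL handling and error reporting are illustrative.

```objc
#import <AVFoundation/AVFoundation.h>

- (void)startPlaybackOfURL:(NSURL *)url
{
    // 1. Create your AVURLAsset instance.
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];

    // 2. Load the tracks asynchronously via AVAsynchronousKeyValueLoading.
    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"]
                         completionHandler:^{
        NSError *error = nil;
        if ([asset statusOfValueForKey:@"tracks" error:&error] != AVKeyValueStatusLoaded) {
            NSLog(@"Could not load tracks: %@", error);
            return;
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // 3. Create your AVPlayerItem.
            AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];

            // 4. Create your AVPlayer with this item (or replace the current item).
            self.player = [AVPlayer playerWithPlayerItem:item];
        });
    }];
}
```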

Now, I've shown you all the core classes for playback that AV Foundation provides. But since iOS 4, we've introduced a new media controller, the AV queue player. The AV queue player, unlike an AV player, as you're probably guessing right now, lets you manage not just a single AV player item to AV player relationship, but multiple AV player items to a single controller relationship. Again, much like AV player, any AV player items that are associated with the AV queue player will be prepared for playback.

As soon as you associate them, all the data necessary in order to prepare those items for playback will be downloaded by AV Foundation on your behalf. So an AV queue player is not a playlist model. The AV queue player will prepare all the media resources that are associated with it for playback.

But what an AV queue player does provide is a mechanism for gapless audio item transition. It's also your mechanism for obtaining asset or AV player item looping. Remember, we can create many AV player items with a single AV asset, and then looping is just enqueuing multiple of these AV player items into a queue player and letting it progress from item to item. You instantiate an AV queue player, typically with a collection of AV player items.

Now, I've told you that it's not a playlist model. It's not a playlist controller. So how might you implement a playlist using the AV queue player? Before we dive into the code, let's have a look at a nice, simple graphical representation of how this is all going to work.

On the left-hand side, we've got a collection of lifeless, persistable AV assets. They're gray; they don't have any color. On the right-hand side, we've got an AV queue player already populated with ready-to-play, colorful AV player items. The story goes like this. An AV player item finishes playback. The current item is the one that's underneath the triangle.

The item finishes playback, and we pop off the collection of AV assets, the next AV asset that we want to enqueue. We load the value of tracks using the AV asynchronous key value loading protocol. And once that's been done, we can create our AV player item from it. We enqueue the AV player item, and AV Foundation will download all the data necessary in order to prepare that item for playback, and it becomes ready for playback. The lifecycle continues.

An AV player item finishes playback. A new AV asset is dequeued. We ask for the tracks. We create an AV player item. We enqueue it. And AV Foundation prepares that AV player item for playback. Now that you've got a good graphical image in your minds of how this is all going to work, let's take a look at the code. It starts off with our resurrection from disk of the lifeless collection of AV assets. It's in an NSMutableArray.

We'd like to populate our AV queue player with just enough items to maintain seamless playback. Of course, the first thing that we have to do as we enqueue or dequeue an AV asset is check to see what the status of the tracks property is. Now, for brevity and keeping everything nice and simple on my slide, I'm just going to check what the status is of the tracks property.

But in reality, in your code, you'll have to use the AV asynchronous key value loading protocol to load the tracks. I've chosen the number three here, which will work in most cases, but is by no means a magic number. Next up, we observe the current item because it's part of the playback environment, and we provide an observation context.

We observe value for keypath, check to see the observation context is the same as the one we provided earlier, obtain a reference to the last item in the AV queue player, and pop off the next AV asset from our collection. Again, check the status for tracks or, in your code, load asynchronously the value for the key tracks.

Test to see if it actually is loaded and, if it is, create an AV player item with it. Finally, we insert the AV player item we've just created into the AV queue player; because we have the reference to the last item from before, we can specify to enqueue after it.
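A rough sketch of the queue top-up logic described above, assuming _assets (an NSMutableArray of AV assets) and _queuePlayer ivars, plus an illustrative CurrentItemContext pointer; as the speaker notes, real code would load the tracks key asynchronously rather than merely checking its status.

```objc
#import <AVFoundation/AVFoundation.h>

static void *CurrentItemContext = &CurrentItemContext;

- (void)observeQueue
{
    [_queuePlayer addObserver:self
                   forKeyPath:@"currentItem"
                      options:NSKeyValueObservingOptionNew
                      context:CurrentItemContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if (context != CurrentItemContext) {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
        return;
    }
    // Keep enough items enqueued for seamless playback; three works in most cases
    // but is by no means a magic number.
    if ([[_queuePlayer items] count] < 3 && [_assets count] > 0) {
        AVPlayerItem *lastItem = [[_queuePlayer items] lastObject];
        AVAsset *nextAsset = [_assets objectAtIndex:0];

        // In real code, load @"tracks" with loadValuesAsynchronouslyForKeys:completionHandler:
        // first; checking the status alone is the same brevity shortcut the speaker takes.
        if ([nextAsset statusOfValueForKey:@"tracks" error:NULL] == AVKeyValueStatusLoaded) {
            AVPlayerItem *newItem = [AVPlayerItem playerItemWithAsset:nextAsset];
            [_queuePlayer insertItem:newItem afterItem:lastItem];   // nil lastItem appends at the end
            [_assets removeObjectAtIndex:0];
        }
    }
}
```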

So I've shown you all the classes used for playback, and I've introduced the new one that we've introduced since iOS 4. But I haven't yet shown you how to display any of this on screen. And, well, that's only because I didn't have enough space in my slide. And so I'd now like to introduce you to the AV Player Layer. The AV Player Layer is your presentation object. It's how you display stuff on screen.

It's a part of the playback classes, so it's observable. And you attach your AV player layer to, on iOS, an instance of a UIView; on the desktop, on Mac OS X, an NSView that wants a layer; or, on either platform, an arbitrary CALayer from some hierarchy. You typically instantiate an AV player layer with an AV player.

There are two properties of AV Player Layer that I'd like to discuss with you. The first of which is ready for display. Ready for display is an observable property which you observe to see when the first sample is populated inside your AV player layer.

When your AV player layer has some contents that's worth displaying on screen, you use ready for display to ensure that your AV player layer doesn't leave any voids, any black boxes, inside your user interface. Once ready for display mutates to a value of yes, you're now ready to open the red curtain on your content and display it to your user. No glitches, no black boxes. The next property I'd like to show you is video gravity.

There are many ways that you can display your audiovisual content within an AV player layer. You can maintain aspect ratio and keep all the content inside the extents of the AV player layer. You can stretch your content so that it fills the extent of your AV player layer but does not maintain aspect ratio. And lastly, you can fill the extents of your AV player layer and maintain aspect ratio, but some content clipping may occur.
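A sketch of attaching an AV player layer to an iOS view and waiting for ready for display before revealing it; the ReadyForDisplayContext pointer and the hide/show approach are illustrative.

```objc
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

static void *ReadyForDisplayContext = &ReadyForDisplayContext;

- (void)attachPlayer:(AVPlayer *)player toView:(UIView *)view
{
    _playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
    _playerLayer.frame = view.layer.bounds;
    _playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;   // fit inside, keep aspect ratio
    _playerLayer.hidden = YES;   // keep the curtain closed until there is something to show
    [view.layer addSublayer:_playerLayer];

    [_playerLayer addObserver:self
                   forKeyPath:@"readyForDisplay"
                      options:NSKeyValueObservingOptionNew
                      context:ReadyForDisplayContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if (context == ReadyForDisplayContext) {
        dispatch_async(dispatch_get_main_queue(), ^{
            if (_playerLayer.readyForDisplay) {
                _playerLayer.hidden = NO;   // open the curtain: no voids, no black boxes
            }
        });
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}
```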

Let me take this opportunity to summarize the key points on how to use AV Foundation efficiently. How do you achieve our original goals of building a robust, efficient, and responsive media application using this framework? I introduced you to the notion of the static model. It's used for media inspection and media discovery.

We perform loading only upon your request, asynchronously, and when the value is available, you'll get a callback. Once the value is loaded, it does not change. This is not to say that you can't ask for values synchronously on a static model. You just may be waiting a really long time, and your application will be perceived to be unresponsive, or on iOS, you risk media services being terminated. The two examples of objects that we saw that adhere to the static model notion are AV Asset and AV Asset Track.

I then introduced you to the notion of a dynamic model, which is part of the lively playback environment. The objects that I introduced that adhere to this model are AVPlayerItem and AVPlayerItemTrack. You use Foundation's key-value observing interface to observe as values mutate independently of your requests inside a playback environment.

These protocols are really important. They're part of a platform etiquette. On Mac OS X, if you don't use these protocols, your application will experience the SPOD, or the spinning pinwheel of death. Your application will be perceived as unresponsive and poorly performing by your users. On iOS, the situation's even more dire.

If you don't use these protocols, you risk media service termination, which will not only affect your application, but all the other applications that are reliant on the shared media services. Again, use the AV asynchronous key value loading protocol for static models and observe dynamic models using key value observing. Right. Now I'd like to introduce you to some of the new exciting API that we've provided since iOS 4. The first of which is AirPlay.

AirPlay enables your media application to stream your content wirelessly to an Apple TV. We've made it really simple. There are two methods and a property. To enable AirPlay, message your AV player instance to set allowsAirPlayVideo to YES. To observe if AirPlay is currently active, observe the key-value observable property airPlayVideoActive.

Now, to enhance the quality of your transmission to your Apple TV while the AirPlay screen is active, you can tell the AV player to set usesAirPlayVideoWhileAirPlayScreenIsActive to YES. This will improve the quality of the transmission. To learn more about AirPlay and more about the displays that iOS devices offer, stick around. Don't go anywhere. It's happening right here in Presidio following this session.
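A short sketch of the iOS 5-era AirPlay properties mentioned above; the player variable is assumed to be an existing AVPlayer held by some controller (these properties were later superseded by the external-playback API).

```objc
#import <AVFoundation/AVFoundation.h>

// Allow video to be sent to an Apple TV over AirPlay.
player.allowsAirPlayVideo = YES;

// Prefer AirPlay video while the AirPlay screen is active, improving transmission quality.
player.usesAirPlayVideoWhileAirPlayScreenIsActive = YES;

// airPlayVideoActive is key-value observable, so you can update your UI when it changes.
[player addObserver:self
         forKeyPath:@"airPlayVideoActive"
            options:NSKeyValueObservingOptionNew
            context:NULL];
```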

Next up are media options. There exist many ways to represent media streams within a timed media resource. You want to be able to localize media streams for certain users, that is, in their native language. You might also want to provide alternative encodings, alternative representations of timed media, for persons with a hearing or visual disability. It's through these API that you can provide these facilities to your user. You might want to provide a user interface like this one, where you provide a set of localized subtitles as well as a set of localized audio streams.

This functionality is exposed through the AV Media Selection group and the AV Media Selection option. These classes provide the ability to both discover and select from these alternatives. Alternative audio language options that are localized for your user. Alternative accessibility options for the sight and hearing impaired. And alternative or alternate visual media options. Different camera angles for, say, a sporting event. You could provide a home stream from a home camera angle and an away stream for the away team. Let's take a look at a graphical representation of how you work with these objects.

It starts with the good old AV asset. Using the AV asynchronous key value loading protocol, you load the available media characteristics with media selection options to find out what alternatives your AV asset provides. Once loaded, you have a collection of AV media characteristics, and the values these may be are audible, legible, or visual. You select one of these characteristics based on your application's logic. Say you choose audible. And using the AV asset, you obtain the AV Media Selection Group. In this case, we've chosen a group that represents the audible options.

From the group, you can ask for the collection of AV media selection options, and you can use the group to help filter and hone in to the option that you need for your user. So we saw that an AV Media Selection Group provides a collection of AV Media Selection options.

We saw that the AV Media Selection Group provides the ability to filter, to hone in on a particular option, whether the option has a particular characteristic or doesn't have a particular characteristic, or whether the option's even playable. The option details specifics about, or properties about, the single media stream, such as its media type, whether it has a particular characteristic, or whether it's even playable. It also details its locale.

Finally, to nominate to select one of these options at playback, message your AV player item with select media option in media selection group. Let's take a look now at a coding example on how you might select an audible stream, an audible option, inside an audiovisual media resource for playback. It starts with learning asynchronously the available media characteristics with media selection options.

Inside our completion handler, we obtain the status for the loading operation, switch over the possible values of loading, and move execution back to the main queue, because this is an asynchronous load returning execution to you on some arbitrary queue. We're going to be performing some UI. We're going to be messaging the AV Player item and AV Player. So we need to move back to the main queue. We then call a helper method, mySelectAudioOption.

mySelectAudioOption obtains the collection of available media characteristics within the asset, and we do a quick little check to see whether the asset even has any audible options available to us. We obtain the group of audible options, and if the group is non-nil, we then use the group to hone in, to filter down to the options that match the user's locale.

In your code, you may not have a matching -- you may not know the matching locale and you'll have to get your user's preferred languages, instantiate an NSLocale for each, and match what's authored into the content. In my case, I've taken the easy out and I happen to know what's there, so it's my matching locale.

If the collection of options is non-empty, I select for playback the first of these by messaging the AV player item.
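A sketch of a helper along the lines of the mySelectAudioOption method described above; the playerItem, asset, and locale arguments are assumed to come from your own playback setup.

```objc
#import <AVFoundation/AVFoundation.h>

- (void)mySelectAudioOptionForItem:(AVPlayerItem *)playerItem
                             asset:(AVAsset *)asset
                            locale:(NSLocale *)locale
{
    // Does the asset have any alternative audible options authored into it?
    NSArray *characteristics = [asset availableMediaCharacteristicsWithMediaSelectionOptions];
    if (![characteristics containsObject:AVMediaCharacteristicAudible]) {
        return;
    }
    AVMediaSelectionGroup *group =
        [asset mediaSelectionGroupForMediaCharacteristic:AVMediaCharacteristicAudible];
    if (group == nil) {
        return;
    }
    // Filter the group's options down to those matching the user's locale.
    NSArray *options = [AVMediaSelectionGroup mediaSelectionOptionsFromArray:[group options]
                                                                  withLocale:locale];
    if ([options count] > 0) {
        // Nominate the first matching option for playback.
        [playerItem selectMediaOption:[options objectAtIndex:0] inMediaSelectionGroup:group];
    }
}
```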

The next new API, the next new feature set that we provide, are chapters. Chapters are like bookmarks in a timed media resource. They allow your user to jump to a pre-specified, authored-in point in time within the media resource. You use chapters to implement a user interface like this one, where you have a set of bookmarks, a set of chapters, and allow your user to jump into the program content at any of the options. This is exposed through the AV timed metadata group class, which provides the facility to not only discover these chapters, but also select from the chapters and discover the associated artwork, each of which is localized for your user.

To use these classes, it starts with the AV asset, and we load asynchronously the value for the key available chapter locales. This loads a collection of NSLocale. We select one of these NSLocales, and using the AV asset, via the method chapter metadata groups with title locale containing items with common keys, we obtain a collection of AV timed metadata groups.

[Transcript missing]

You can ask it for its items. This is a collection of AV metadata item. These are all static model objects. We know how to work with static model objects. We use the AV asynchronous key value loading protocol. And we use this protocol to load the value or to load the property value. AV Foundation does just the work necessary to provide you the value.

Value may be a localized chapter string, in the case that the common key is equal to AV metadata common key title. We have very descriptive names, as you can see. Or it may be the image data itself for the chapter.
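A sketch of loading chapter groups and their titles, assuming availableChapterLocales has already been loaded asynchronously on the asset; picking the first locale is a simplification, since real code should match the user's preferred locale.

```objc
#import <AVFoundation/AVFoundation.h>

- (void)loadChaptersForAsset:(AVAsset *)asset
{
    NSArray *locales = [asset availableChapterLocales];
    if ([locales count] == 0) {
        return;   // no chapters authored into this resource
    }
    NSLocale *locale = [locales objectAtIndex:0];   // real code would match the user's locale
    NSArray *commonKeys = [NSArray arrayWithObjects:AVMetadataCommonKeyTitle,
                                                    AVMetadataCommonKeyArtwork, nil];
    NSArray *groups = [asset chapterMetadataGroupsWithTitleLocale:locale
                                    containingItemsWithCommonKeys:commonKeys];

    for (AVTimedMetadataGroup *group in groups) {
        for (AVMetadataItem *item in [group items]) {
            // AVMetadataItem is a static model: load its value asynchronously.
            [item loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"value"]
                                completionHandler:^{
                if ([item statusOfValueForKey:@"value" error:NULL] == AVKeyValueStatusLoaded &&
                    [[item commonKey] isEqualToString:AVMetadataCommonKeyTitle]) {
                    NSLog(@"Chapter title: %@", [item stringValue]);
                }
            }];
        }
    }
}
```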

AV Foundation provides on either platform the ability to obtain resources from the network, be they progressive download, like trailers, or HTTP live streams, like the keynote address. AV Foundation also, on the desktop, obviously allows you full access to the file system. And on iOS, you have access within your application bundle. On iOS, there are additional sources of content. You can access the camera roll of your user to obtain videos that they may have captured using the iOS device camera, as well as their iPod library. These are provided through the Assets Library framework and the Media Player framework.

To obtain references, to obtain URLs, from the user's iPod library, it starts off with an MPMediaQuery. Conveniently, there's a convenience constructor for all the songs within the user's iPod library. We obtain the first item within their library, and then we ask the item, the MPMediaItem, to provide to us the value for the property MPMediaItemPropertyAssetURL. Once we have the URL, we follow the four-part recipe that we saw earlier.
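A minimal sketch of the iPod library lookup just described; it assumes the library contains at least one song and skips error handling.

```objc
#import <MediaPlayer/MediaPlayer.h>

// Query all the songs in the user's iPod library via the convenience constructor.
MPMediaQuery *query = [MPMediaQuery songsQuery];
MPMediaItem *firstSong = [[query items] objectAtIndex:0];   // assumes a non-empty library

// Ask the media item for its asset URL (may be nil for protected content).
NSURL *assetURL = [firstSong valueForProperty:MPMediaItemPropertyAssetURL];
if (assetURL) {
    // ...then follow the four-part recipe: AVURLAsset -> load tracks -> AVPlayerItem -> AVPlayer.
}
```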

To obtain videos from the camera roll, you use the Assets Library framework. It starts off with an instance of the ALAssetsLibrary object. We enumerate through the types of asset groups that it provides; in this case, we're interested in all the saved photos that they have in their camera roll. And we provide two blocks: a block that performs the enumeration and a block in case the user denies access to their personal camera roll.

Looking at the first block in more detail, for each group that is returned, we apply a filter. We're not interested in all the saved photos within our user's camera roll, but rather just the videos. So we apply an ALAssetsFilter for all the videos. We enumerate the remaining items within the group, and for each ALAsset that we traverse, we ask for its default representation and its URL. Then it's as simple as following the four-part recipe that we know from earlier.
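A sketch of the camera-roll enumeration described above, using the Assets Library framework as it existed at the time; the logging and memory-management details are illustrative.

```objc
#import <AssetsLibrary/AssetsLibrary.h>

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];   // ownership handling omitted for brevity
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos
                       usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    // Filter each returned group down to just the videos.
    [group setAssetsFilter:[ALAssetsFilter allVideos]];
    [group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *innerStop) {
        if (asset) {
            NSURL *url = [[asset defaultRepresentation] url];
            // ...then follow the same four-part playback recipe with this URL.
            NSLog(@"Found video: %@", url);
        }
    }];
} failureBlock:^(NSError *error) {
    // The user denied access to their camera roll.
    NSLog(@"Assets Library access failed: %@", error);
}];
```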

That's all the time that I have with you today, but there are so many more sessions available to you this week. It starts off right here in the Presidio following this session with AirPlay and external displays. And later today with an HTTP Live Streaming update. Tomorrow, you can learn about composition and editing with Working with Media in AV Foundation. That's happening downstairs. There are two sessions on dealing with the capture classes within AV Foundation: one for the desktop and one for iOS.

For more information, check out the documentation online. There's a developer forum where you can share ideas with your fellow developers, as well as speak to a few engineers from Apple. And if there's further information you want to glean, please email Eryk Vershen, our technology evangelist. Thank you very much for coming, and I hope you're all able to develop really efficient and robust applications using AV Foundation.