WWDC11 • Session 423

What's New in Core Motion

Graphics, Media, and Games • iOS • 51:20

Core Motion fuses data from the built-in hardware to let you determine precisely how your iOS device is oriented and moving in 3D space. Gain a detailed understanding of the features and capabilities of the Core Motion framework, and understand how you can use device motion to create an amazingly immersive experience for games, augmented reality, and much more.

Speakers: Chris Moore, Xiaoyuan Tu

Unlisted on Apple Developer site

Downloads from Apple

HD Video (440.6 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Good morning. My name is Chris, and I have the pleasure of speaking to you guys today about the Core Motion framework, and more specifically, what's new in Core Motion in iOS 5. We have some great features that I think you're going to love. So here's some more details on our agenda. We'll start off talking about an idea that I have for an app that I think would be really great to see in the store.

We'll talk about the basics of using Core Motion: how to start and stop different types of data, and how to tell us how often you want to receive different types of data. We'll write some code. Then we'll deep dive into some of the new features of Core Motion and talk about how you can use them in your apps. And then we'll talk about one very common use case of the data from Core Motion, and that is to control a camera in some 3D world. We'll give you some tips on how to do that.

But before we talk about any of that, let me give you a quick overview of what's new in Core Motion in iOS 5. So the first thing that we're providing now is raw magnetometer data through Core Motion. Next thing is North-referenced attitude data. And if that doesn't make any sense to you, it will in a few minutes, so stick with us. The final thing that we're supporting in iOS 5 is the use of Core Motion in the background. So if your application already has permission to run in the background to use something like location, you can now use Core Motion at the same time.

So let's set the stage. Let's talk about this idea for an app. So there are a lot of turn-by-turn navigation apps in the App Store, and they're awesome. I mean, they're one of our most popular categories of apps, and they're just so incredibly useful. But all of them render directions and places of interest on some sort of artificial rendition of the real world.

We don't have anything like an augmented reality turn-by-turn navigation app that gives you maybe a live camera feed out of the front of your windshield, with directions, street names, and places of interest overlaid on top of that augmented reality view in the appropriate locations. So what would we need to achieve this? Well, we'd really need two things. The first thing is your location, of course, which we know the Core Location framework does a great job of providing.

The next thing we need is attitude, or how your device is rotated with respect to the world, whether it's tilted, whether it's facing north, south, east, west, that kind of thing. And that's something that Core Motion can help you with. Now, it would be great if Core Motion could use a single attitude sensor that would give you a great estimate of attitude directly. But things aren't quite that simple; we actually have to use three sensors that complement each other in order to calculate attitude.

So let's talk briefly about what those are. The first sensor is a gyroscope, and this measures the rate at which the device is rotating around each axis, and it's very, very responsive to that and very precise. So what this will let you do is get a really accurate, really responsive estimate of how attitude is changing from one point in time to another, if you integrate this data. And Core Motion will provide the raw gyroscope data to your app if you'd like to use that.

The next sensor is one that you're probably more familiar with, and that's the accelerometer. It's a sensor that we've had in every single iPhone, every single iPad, and every single iPod Touch that we've ever shipped. And this measures the sum of gravity plus whatever acceleration the user is giving the device, and we call that user acceleration.

Now, if you'd like to use the raw accelerometer data, we provide that to your app through Core Motion as well. And in fact, the old interface for getting accelerometer data through UIKit is being deprecated in iOS 5. So Core Motion is really the new home for raw accelerometer data.

The next sensor is the magnetometer. Now, if you've ever used the Core Location and Compass framework, or if you've ever used Compass Mode and Maps, or the standalone Compass application that we have, you've used the magnetometer. And this measures basically the local magnetic field that's surrounding the chip. And we can use this to determine which direction your device is facing, north, south, east, west, that kind of thing. And now in iOS 5, the raw magnetometer data is available to your app through Core Motion as well. Thank you.

So these sensors are great by themselves, but there are some challenges to working with them in isolation. So with the gyroscope, you have no absolute attitude reference. You can integrate the data to tell you how the attitude is changing over time, but it doesn't give you any sort of world-referenced starting point, basically.

The other two issues with the gyroscope are you have to worry about bias and scale error. Now, bias basically means that if the device is not rotating, you would actually measure a non-zero value from the gyroscope, as opposed to the ideal value of zero. So what that means for you is if you're integrating the data to compute how attitude is changing, and you don't correctly compensate for that bias, then your attitude error will grow with time, regardless of how the device is rotating. So the x-axis on this plot is time.

So the other challenge to working with the gyroscope is dealing with what's called scale error. What this means is the gyroscope will actually read the true rotation rate times some scale factor that could be greater than or less than one. What that means for you, if you're integrating this data, is you'll have an attitude error that grows with how far the device is rotating. So the x-axis in this plot is the amount of rotation.

With the accelerometer, if you want to measure attitude using this, you can isolate gravity, and that'll give you two components to attitude. It'll tell you how the device is tilted with respect to gravity. But it can't measure any sort of rotation around gravity. Anything that doesn't cause gravity, as seen by the device, to change won't be picked up.

The other challenges are, if you do want to use this to measure gravity, you'll have to apply some filtering. You have to use a low-pass filter to isolate gravity from user acceleration. Now, the more heavily you low-pass filter, the more latency you'll introduce in your data. So there are some drawbacks to this. If you're interested in the user acceleration or shaking component to the accelerometer data, you'll have to use a high-pass filter to isolate that.

Now, the magnetometer is arguably the most challenging sensor to work with on mobile devices. It's susceptible to not just interference caused by the device itself, so electronics within the device turning on and off, or distortions to magnetic fields caused by other metals in the device, but it's also susceptible to interference from objects in the real world external to the device. So normally when you use this, what you really care about is measuring the magnetic field generated by the Earth. Unfortunately, that's very, very small compared to the fields that are generated by a lot of really common objects, like computers and refrigerators and power lines.

So the solution to a lot of these challenges is to use all three sensors in tandem with each other. And we provide algorithms to do just that in Core Motion. We call the output of these algorithms device motion, and you can get, among other things from them, full three-dimensional attitudes, so how the device is tilted, as well as whether it's facing north, south, east, west. And that attitude is very responsive to changes in it because of the gyroscope.

So now that you've gotten an overview of what's available in Core Motion, let's go through the basics of how to use Core Motion. So your main control center for Core Motion is an object called CM Motion Manager. And this is how you will tell Core Motion which types of data you'd like to receive, as well as how you will start and stop updates, and tell Core Motion how often you'd like to update different types of data.

Core Motion will send different types of sensor data to your application in different objects. So there's an object called-- excuse me, there's a class CM accelerometer data that incorporates the raw accelerometer data, CM gyro data for the raw gyro data, CM magnetometer data for the raw magnetometer data, and CM device motion for the sensor fusion or device motion data that we talked about. Now, a big part of CM device motion is a property of type CM attitude, which contains this three-dimensional attitude. And we'll spend a lot of time later on talking about that.

So there are two methods to retrieving data from Core Motion, push and pull. With push, you need to provide an NSOperationQueue and a block to Core Motion. And then Core Motion, whenever a sample of data comes in for your app, will execute a new instance of that block in that NSOperationQueue. With pull, all you have to do is periodically ask CM Motion Manager for the latest sample of data that you're interested in. And this is often done when your view is updated within a CA DisplayLink callback, for instance, in a game.

So there are some trade-offs to these two different approaches. With Push, you've got the advantage that you never miss a sample of data. If we get 10 samples of data for the sensor that are destined for your app, then you will get all 10 of those samples. A big disadvantage is that there is increased overhead to using this over pulling for data.

And oftentimes, especially if you have a very interactive application such as a game, if your app gets a little bit behind and can't keep up with the high rate of data that's coming at it, really your best bet is to just drop a few samples, cut your losses, and catch up. So for these reasons, we recommend this predominantly for data collection apps. Now, pulling for data is a lot more efficient.

And then oftentimes, there is actually less code required, particularly if you already have a periodic event coming in that determines when you need data. A disadvantage is that if you don't already have such an event coming in, then you may need to set up an additional timer upon which you'll pull for data. But despite this, we recommend this for most apps and games, definitely any interactive applications.

Let's talk about threading for a moment. So Core Motion creates its own thread to do two big things. One is to handle raw data from the sensors, and then the other thing is to run these device motion or sensor fusion algorithms. So what that means for your apps is that Core Motion will never do any of its own processing on your threads at all. The only thing that will execute on your threads is exactly what you want to execute.

So how do you use Core Motion? Well, there are three big steps. First step is setup. Second step is retrieving data. And the third step is cleanup. Let's walk through those. Now, in this example, let's say that we're interested in pulling for raw accelerometer data. So to setup, we'll first need to create a CM Motion Manager instance, as you see there.

Next step, we always want to check to make sure that the data that we're interested in is available on the platform we're running on. So we're interested in accelerometer data now, so we'll check CM Motion Manager's "Is Accelerometer Available" property. And there are corresponding properties for gyro data, magnetometer data, and device motion. And as always, if the data that we're interested in is not available, we want to fail gracefully.

The next step is to set the desired update interval. So in this case, we'll say that we're interested in 60 hertz accelerometer data. So we'll set the accelerometer update interval property to 1/60. And again, there are corresponding update interval properties for gyro data, magnetometer data, and device motion. The final step in setup is to start the actual data that we're interested in. So here, since we're pulling for data, all we'll do is call start accelerometer updates.

That will cause Core Motion to turn on the accelerometer and start retrieving data in the background and caching it for us. And then we'll just pull when we're interested. Now, if we wanted to have Core Motion push each sample to us, then we could have called start accelerometer updates to queue with handler and pass that NSOperation queue and block.
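Pulling those setup steps together in code, here is a minimal sketch for the pull case; the Core Motion names are the real API, and the error handling is just a placeholder:

    #import <CoreMotion/CoreMotion.h>

    // 1. Setup: create the manager, check availability, set a rate, start updates.
    CMMotionManager *motionManager = [[CMMotionManager alloc] init];

    if (motionManager.accelerometerAvailable) {
        motionManager.accelerometerUpdateInterval = 1.0 / 60.0;  // 60 Hz
        [motionManager startAccelerometerUpdates];               // pull-style updates
    } else {
        // Fail gracefully: no accelerometer on this platform.
    }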

The next step is to retrieve data when we're interested. So here we're looking for accelerometer data. So all we'll do is examine the accelerometer data property of our motion manager whenever we're interested, and that'll contain the latest cached sample that we've gotten from the sensors. And again, there are corresponding properties for the latest sample of gyro data, magnetometer data, and device motion data if you've turned on those types of data.
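A sketch of that pull-style retrieval, assuming the motionManager above is kept in an ivar and a CADisplayLink callback is already driving the view; the callback name here is an assumption:

    - (void)displayLinkFired:(CADisplayLink *)link
    {
        // 2. Pull: the latest cached sample, or nil if updates haven't started yet.
        CMAccelerometerData *latest = motionManager.accelerometerData;
        if (latest != nil) {
            CMAcceleration a = latest.acceleration;
            // ... use a.x, a.y, a.z to update the view ...
        }
    }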

So the final step is to clean up. We always want to do this when we're done with sensors. That'll allow Core Motion to actually turn off the relevant chips if they're not being used anymore, and will ultimately save your users some battery. So here we'll call stop accelerometer updates, and there are corresponding stop methods for the other types of data as well. And then we'll go ahead and release our motion manager instance. So just to recap, there are two methods to receiving data from Core Motion, push and pull. All processing is done on Core Motion's own thread.
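And the cleanup step in code, under the same assumptions; the explicit release reflects the pre-ARC memory management used at the time:

    // 3. Cleanup: stop updates so Core Motion can power down the sensor.
    [motionManager stopAccelerometerUpdates];
    [motionManager release];
    motionManager = nil;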

There are three steps to using Core Motion, set up, retrieve data, and clean up. With that, I would like to invite my colleague, Xiaoyuan Tu, on stage to walk you through the steps of modifying an application that uses the existing UI kit accelerometer API to use Core Motion for accelerometer data. Xiaoyuan? Thank you, Chris. In this demo, we're going to do a simple exercise together.

We're going to modify an existing sample app to use Core Motion. This app is called Accelerometer Graph. Many of you may be familiar with it. It plots a three-axis accelerometer value over time. And this app has been there since iOS 2.0. It's a little outdated now, in that it uses UIKit to access the accelerometer data, which is a deprecated method now. The new and proper way to access all motion sensor-related features is Core Motion.

So adopting Core Motion allows you to take advantage of all the additional motion features that have to do not only with the accelerometer, but also the gyroscope, the magnetometer, and all the cool stuff that's based on fusing all three sensors together, like what Chris has just told you about. And as you will soon see, it's very straightforward and simple to use Core Motion. In a nutshell, it's a win-win situation. And if you have code that still uses UIKit to access accelerometer data, it's high time to switch to Core Motion.

Okay, so let's begin. Actually, before that, I'd like to walk you through some deprecated data structures that were in UIKit. When you make the switch from UIKit to Core Motion, you want to update these data structures to their Core Motion counterparts. So the first one you want to update is the UI accelerometer object. Its shiny new counterpart in Core Motion is CM Motion Manager, which allows you to access all the other features as well. So to borrow a Chinese phrase, it means you're switching from a BB gun to a cannon.

The second one, UI acceleration, is a data structure that holds the three accelerometer values. The new data structure in Core Motion is CM accelerometer data, which not only holds the three-axis accelerometer values, but also the timestamp of the sample, which is actually really important for many apps.

The third one is UI accelerometer value, which is a typedef. Now it's simply a double, and that's all you need to update when switching from UIKit to Core Motion. Okay, let's now do some coding. Super. Great. So it's already loaded. Magical. Accelerometer graph. So to use Core Motion, the first step is to add the Core Motion framework to your project. So click on the project.

Sorry, here. You go to Build Phases and find Link Binary With Libraries and click on that. And that shows you all the frameworks that are already included. And you want to add the Core Motion framework, so click on the plus sign. And that will give you a pull-down menu that shows you all the available frameworks that you can add. So I'm just going to go ahead and type Core Motion. Thank you.

Okay, let's find it. There you go. Add. Now it's added there. Great. Simple. The second step to do is to import the Core Motion header file, and we're going to add that to the prefix file, which is here. All right. We're going to add-- let's make a space for it.

Let's drag it here. There you go. Import CoreMotion/CoreMotion.h. Simple. Those are the two steps you need to do. First, add the Core Motion framework, and then import the header. Now we're ready to go actually modify some code. There are basically three things to do. The first is to insert some new code that does the setup of Core Motion, retrieves the accelerometer data, uses it, and then cleans up.

The second thing to do is to remove or comment out the deprecated stuff that used to be in there. And the third one is to go through your code and find those deprecated data structures that I just showed you and update them to their Core Motion counterparts. So let's do that. So the first thing to do, we want to declare the CM Motion Manager instance. We're going to do that in the main view controller header file.

We're going to do that here. Main View Controller. Make a space for it. And...

[Transcript missing]

And the old, deprecated one that we were using is this UI accelerometer object. We find it here. Let's just comment that out. Okay, now we've declared the Motion Manager instance. Let's go to the implementation. In here, we just need to make changes to three places. The first one is in viewDidLoad. In that function, we'd like to go ahead and set up Core Motion. So let's go here and drag that code snippet here.

Okay, let's actually go through this code snippet here line by line. In the first line, as you can see, we allocate that motion manager instance, and then we set the accelerometer update interval to 1 over kUpdateFrequency, which is a constant set to 60. So we set a 60 hertz interval, a 60 hertz update frequency.

And in the third line here, we're going to call start accelerometer updates to queue with handler, which is a push method to get the accelerometer data. Because this is essentially a data collection app, we don't want to miss any samples. So after that, we turn on the accelerometer and ask it to update.

Then if the view is not paused, we go ahead and retrieve those accelerometer values and use them. That's shown in this block of code. All right. And actually here, you can see the UI accelerometer objects that are now obsolete, well, deprecated. So let's go ahead and comment those out.
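As a rough sketch, the viewDidLoad snippet being described looks something like this; kUpdateFrequency comes from the sample, while the queue choice, the paused check, and the graph-update helper are assumptions for illustration:

    - (void)viewDidLoad
    {
        [super viewDidLoad];

        motionManager = [[CMMotionManager alloc] init];
        motionManager.accelerometerUpdateInterval = 1.0 / kUpdateFrequency;  // 60 Hz

        // Push: this is a data collection app, so we don't want to drop samples.
        [motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                            withHandler:^(CMAccelerometerData *data, NSError *error) {
            if (!isPaused) {
                // Hypothetical helper that feeds the latest sample into the graph.
                [self updateGraphWithAcceleration:data.acceleration];
            }
        }];
    }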

All right. So the next function we want to add the cleanup to is the paired function, viewDidUnload. That's where we want to insert our code to clean up Core Motion. So let's drag this code snippet here. Very simple, only two lines. The first one is to tell the accelerometer to stop updates, so turn that off. And then the second line simply releases the motion manager.

So we're pretty much done with setting up, retrieving and using data, and then cleaning up in viewDidUnload. And we go ahead, and that's the deprecated method for the UI accelerometer delegate. So we go ahead and comment that out. So we're essentially done. We've done the two steps: insert code to set up Core Motion, use it, and then clean up. The second step is to, you know, while you're at it, go comment out all the deprecated methods and stuff.

So finally, what you really need to do is to go through the rest of your code and update those data structures shown here: UI acceleration, change that to CM accelerometer data, and UI acceleration value, simply replace that with double. And you're done. Really simple and straightforward. So actually now let me switch. I'm going to use the iPad to show you the accelerometer graph app in action. It also shows you high-pass and low-pass filtered accelerometer values as well. With that, I'll pass it back to Chris.

Thanks, Xiaoyuan. Well, that was easy. You guys are all now experts in using CM Motion Manager. So let's get into the juicy stuff now. Let's talk about device motion, and we'll talk along the way about a lot of the new stuff that we've added in Core Motion in iOS 5.

[Transcript missing]

The first new reference frame that we have in iOS 5 is almost not really a new reference frame so much as a new way of turning on additional correction in Core Motion. It's called CM Attitude Reference Frame X Arbitrary Corrected Z Vertical. Now, again, this is the same as the previous frame in that you have no control over where your X and Y reference frame axes point.

But the big difference is that we will also turn on the magnetometer in addition to the accelerometer and gyroscope, and use that opportunistically, whenever it happens to be calibrated, to correct for any accumulation of yaw error. So ultimately what this means for you is that you end up with better long-term yaw accuracy, but because we have to turn on that additional sensor and pipe data into the CPU and do a little bit more math, you will incur higher CPU usage.

So you will have a higher CPU usage if you use this over the original reference frame. And another potential downside is if your user is in an environment where there's a lot of variation in the external magnetic field in space, in their vicinity, you might be a little bit more sensitive to that with this reference frame than the previous one.
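In code, opting in to that extra correction is just a matter of which frame you pass when starting device motion; a minimal sketch, assuming the motionManager from earlier:

    // Same arbitrary x-axis as before, but the magnetometer is used
    // opportunistically to correct accumulated yaw error.
    [motionManager startDeviceMotionUpdatesUsingReferenceFrame:
        CMAttitudeReferenceFrameXArbitraryCorrectedZVertical];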

So one of the really, really useful new reference frames, I think, especially for augmented reality app developers, is called CM Attitude Reference Frame X Magnetic North Z Vertical. So in this reference frame, unlike the other ones, you're actually guaranteed that your reference frame's x-axis will be pointing in the direction of magnetic north, or to be more precise, in the direction of the projection of magnetic north onto the horizontal plane. And then, of course, your y-axis will just complete a right-handed coordinate system, as always.

Now, there are some drawbacks to using this. The main one is that, in order to actually know where magnetic north is to define your reference coordinate system, the magnetometer has to be calibrated, which may require the user to take some action, as you may have seen if you've ever used Compass Mode in Maps or the Compass app.

Core Motion has the ability to display the same Compass calibration HUD that's used by the Core Location Compass framework and any apps that use that, if you request that we display it when necessary. So to do that, you'll just set the "shows device movement display" to "yes" on your CM Motion Manager instance.
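A minimal sketch of those two pieces together, again assuming the motionManager from earlier:

    motionManager.showsDeviceMovementDisplay = YES;  // allow the calibration HUD when needed
    [motionManager startDeviceMotionUpdatesUsingReferenceFrame:
        CMAttitudeReferenceFrameXMagneticNorthZVertical];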

By default, it's "no," so we'll never show that HUD. But if you'd like us to, set it to "yes." The final new reference frame that we're making available in iOS 5 is called CM Attitude Reference Frame X True North Z Vertical. It's very similar to the previous one, except your X axis will be pointing towards True North as opposed to Magnetic North. So you might be asking yourself now, "Hey, what's the difference between Magnetic North and True North?" Well, I'll tell you. So Magnetic North is the direction of the Earth's magnetic field.

So it's basically, from where you are, it's the direction to the Earth's magnetic North Pole. Unfortunately, that's distinct from the Earth's True North Pole. And the direction to that is given by True North. That's the direction that we're most familiar with, and it's the reference point that's used in all maps, basically. It's not a big deal, though, because we can calculate this using a few things. One is Magnetic North.

Wow. And the other one is a model of what's called the Magnetic Declination, which is the difference between True North and Magnetic North in different places of the Earth. And Core Location and now Core Motion actually share a model of what that looks like. So if we know your approximate location, we can look up in that model what the declination is, where you are, and apply that to the attitude data that we give you, so that it's referenced to True North as opposed to Magnetic North.

Now, one complication to keep in mind here is that we do need to know your location, obviously, to do this. So you'll have to use Core Location to turn on location updates in order to use this reference frame. You don't have to do anything with those location updates. Core Motion will handle that under the hood.

But your app does need to turn them on. And I should note that if the only reason you're turning on location updates is to use this reference frame, you should request the least accurate location updates from Core Location that you can. And this is basically to save the user's battery. We don't need to turn on GPS or anything like that for location here. Just very, very coarse cell-accuracy location is fine.
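Putting that together, a hedged sketch of pairing coarse Core Location updates with the true-north reference frame; the CLLocationManager configuration here is an assumption, since Core Motion just needs the location updates to be running:

    #import <CoreLocation/CoreLocation.h>

    // Coarse, low-power location is enough; Core Motion consumes it under the hood.
    CLLocationManager *locationManager = [[CLLocationManager alloc] init];
    locationManager.desiredAccuracy = kCLLocationAccuracyThreeKilometers;
    [locationManager startUpdatingLocation];

    [motionManager startDeviceMotionUpdatesUsingReferenceFrame:
        CMAttitudeReferenceFrameXTrueNorthZVertical];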

So you probably noticed that all of these reference frames are defined such that the z-axis is vertical. So let's go through an example to really solidify some points that we've made. Let's say that we have an instance of CM Motion Manager, and we grab the latest device motion sample from it.

Let's say that we grab the attitude from that device motion instance, and we say we want it formatted as a rotation matrix. Now, recall that this attitude gives the rotation from the fixed reference frame to the device's frame. So if we set up our gravity vector in our reference frame, which is always 0, 0, minus 1 in all of these frames, then if we just multiply that by the rotation matrix, that'll give us a very good estimate of where gravity is pointing in the device's reference frame. Excuse me, in the device's frame. Now, you don't actually have to do this math yourself. Core Motion-- CM Device Motion has another property called gravity, which we'll talk about now, which, when you request it, will basically do that math for you.
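As a small sketch of that math, using the reference-frame-to-device-frame convention just stated (and assuming a motionManager with device motion running):

    CMDeviceMotion *d = motionManager.deviceMotion;
    CMRotationMatrix R = d.attitude.rotationMatrix;

    // g_device = R * (0, 0, -1): minus the third column of the rotation matrix.
    double gx = -R.m13;
    double gy = -R.m23;
    double gz = -R.m33;

    // Or skip the math entirely: d.gravity holds the same estimate.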

So gravity is of type CM acceleration, and there's a corresponding similar property called user acceleration, which gives an estimate of the user acceleration or shaking motion in the device's reference frame. Both of these are stored in structs, and the units are type G-- excuse me, the units are Gs.

The next property in CM device motion is called rotation rate. And this is essentially the best, basically the latest sample of gyro data that we've gotten, minus all of the calibration, excuse me, including all of the calibration parameters that we've applied to it. So we remove bias and things like that.

And this will be given to you in units of radians per second. And if you've ever taken a physics class, you might remember the right-hand rule. So that's what we use to determine the direction of this rotation rate vector. So basically, if you take your right hand and imagine curling your fingers in the direction of rotation, the direction that your thumb points is the direction that this vector will point.

A new property in CM device motion in iOS 5 is called magnetic field, and that basically gives our best estimate of the direction of the Earth's magnetic field within the device's reference frame. And it includes not just an estimate of the field itself, but also an estimate of how well calibrated the magnetometer is at that point in time.

So the field will be stored in units of micro Teslas in the CM magnetic field struct. And the calibration accuracy will be given in an enum. Now, I mentioned earlier that if you use the original reference frame that was in iOS 4, CM attitude reference frame X arbitrary Z vertical, we won't actually turn on the magnetometer. So if you use that reference frame, then this calibration value will always be uncalibrated, because there will be no data for us to work with. And again, that's really to save CPU.
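A short sketch of reading that property, assuming device motion was started with one of the frames that actually uses the magnetometer:

    CMCalibratedMagneticField mag = motionManager.deviceMotion.magneticField;
    if (mag.accuracy != CMMagneticFieldCalibrationAccuracyUncalibrated) {
        // mag.field.x, mag.field.y, mag.field.z are in microteslas, in the device frame.
    }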

So let's talk about where each of these features is available. So iPhone 4 has an accelerometer, gyroscope, and magnetometer, so everything's available on iPhone 4. iPhone 3GS lacks a gyro, so only the raw accelerometer and magnetometer's available on iPhone 3GS. There's no device motion or gyro data. iPad 2 is in the same boat as iPhone 4. It's great. Everything's supported. The original iPad, though, same as iPhone 3GS. It doesn't have a gyro.

The latest iPod Touch is interesting. It has an accelerometer and a gyroscope, but no magnetometer. So you'll be able to get the raw data from the accelerometer and gyro, and you'll be able to use the original reference frame, CM Attitude reference frame, X arbitrary, Z vertical, on this device, but the new reference frames will not be available.

And the third generation iPod Touch only has an accelerometer, so that's all that's available. And of course, the simulator, none of this is supported on the simulator. So with that, I would like to invite Xiaoyuan back on stage to walk you through using some of these new features of Core Motion in an application. Xiaoyuan? Thank you, Chris.

As many of you know, finding parking is a huge pain in a city like San Francisco. Wouldn't it be great if we had an iOS app that shows you in augmented reality view where to find available parking as you pan your device like so? We actually wrote a sample app that did just that. It's called Park, and Park is what I'm going to demo to you today. So that's the Park project here. This project essentially demonstrates how to use Core Motion's True North Reference Attitude API.

And it contains a subclass of UIView, which is called ARView. ARView stands for Augmented Reality View. And it displays a live camera feed with places of interest overlaid at the appropriate coordinates. And it uses Core Location to determine where the user is, and it uses Core Motion to determine which way the user is facing.

Okay, let's go look at this ARView class. As I said earlier, it's a subclass of UIView, and it has a property, an NSArray property called placesOfInterest. Let's go look at what is contained in PlaceOfInterest. It's an NSObject that contains two properties, a UIView property and a CLLocation property. And it uses these two properties to actually render each PlaceOfInterest.
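Something along these lines; the property names are assumptions for illustration, and the real declarations are in the downloadable Park source:

    #import <UIKit/UIKit.h>
    #import <CoreLocation/CoreLocation.h>

    @interface PlaceOfInterest : NSObject
    // The view to draw for this place, and where it sits in the real world.
    @property (nonatomic, retain) UIView *view;
    @property (nonatomic, retain) CLLocation *location;
    @end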

Okay. Actually, before I go into the implementation, by the way, if you want to make your own very cool AR apps, this ARView class is just two files, ARView.h and ARView.m. You can pretty much just grab them, drop them into your code, and use them as is. The entire project source code is available to download.

After this, Chris will tell you more. All right. Let's go look at the ARView. I'm not going to go through the entirety of it, just the pieces that are relevant to Core Motion: how you set up Core Motion to do the AR view. There are just three functions here that have to do with Core Motion: start device motion, stop device motion, and onDisplayLink. Let's go through each of those.

Start device motion. All right. Okay. In here, first, as usual, we allocate and init a motion manager object, and then we set the show device movement display property to yes. The reason we set that to yes is because we need to use a true-north-referenced attitude, which requires that the magnetometer be calibrated. So you want to set that property to yes so that, in case the magnetometer isn't calibrated, it will show you the calibration HUD and you can rotate the device around to calibrate.

After that, you set the device motion update interval again to 60 hertz. I don't know why we always say 60 hertz, but that's a good enough frequency. Right, and the next one, we go ahead and start device motion updates using reference frame TrueNorth. And we're done. That's all we need, four lines to set up device motion.
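Those four lines look roughly like this; a sketch where startDeviceMotion is the ARView method just described, and the ivar name is an assumption:

    - (void)startDeviceMotion
    {
        motionManager = [[CMMotionManager alloc] init];

        // Show the compass calibration HUD if needed; the true-north frame
        // requires a calibrated magnetometer.
        motionManager.showsDeviceMovementDisplay = YES;

        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;  // 60 Hz

        [motionManager startDeviceMotionUpdatesUsingReferenceFrame:
            CMAttitudeReferenceFrameXTrueNorthZVertical];
    }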

And then, in stop device motion, we stop device motion. The first line is to stop device motion updates, and then we release the motion manager and set that instance to nil. That's a pair of functions to set up and clean up. And onDisplayLink, that's where we use it.

Here's the function. So in onDisplayLink, first of all, we get the device motion property from the motion manager, and we check that this is not nil. The reason that we check it is that we need Core Location to be available for the true north reference, and we also need the magnetometer to be calibrated. If either one of those conditions is not met, d would be nil.

So we check that both conditions are met. Then we go ahead and retrieve that true-north-referenced attitude in the form of a rotation matrix, and that is r. And the next step is to call transformFromCMRotationMatrix, where we use that r to calculate the view matrix for the camera.
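A hedged sketch of that flow; transformFromCMRotationMatrix is the sample's helper mentioned here, but its exact signature and the cameraTransform variable are assumptions:

    - (void)onDisplayLink:(CADisplayLink *)sender
    {
        CMDeviceMotion *d = motionManager.deviceMotion;

        // d stays nil until location is available (for true north) and the
        // magnetometer is calibrated.
        if (d != nil) {
            CMRotationMatrix r = d.attitude.rotationMatrix;
            transformFromCMRotationMatrix(cameraTransform, &r);  // camera view matrix
            [self setNeedsDisplay];
        }
    }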

And after we update the view matrix with our current attitude, we go ahead and call set needs display to tell the system to render that view. Before we show you the demo, Chris will spend the next few moments to tell you how to use Core Motion's attitude to do camera control.

Great, thanks Xiaoyuan. All right, so let's talk about camera control. So one of the big use cases for Core Motion that we see is just this, actually. We'll talk about two paradigms for camera control. One is the paradigm that's used in Park, and I like to call that camera-centered pivot.

Here you have some three-dimensional world, and the camera is at a fixed location in that world. And then we use the attitude data from Core Motion to simply rotate the camera around that point that it's at. So there's no translation involved, basically. It stays at that fixed coordinate. The next paradigm I like to call object-centered pivot.

So in this paradigm, we've got some object that's centered at the origin of our world, and we use attitude data from Core Motion to rotate the camera around that object, always facing it. So if you've ever seen the Core Motion Teapot sample app, that's what it uses. And then if you saw the keynote demo last year where Steve Jobs played a game called Blocks, that's the paradigm that we used in that as well.

So let's talk about the math behind all those. First, some preliminaries. If you've ever used OpenGL, this should be very familiar to you. So both paradigms for camera control involve applying what are called rigid body transformations to the camera. And these can be described very succinctly using a 4x4 matrix in which the top 3x3 portion is just a rotation matrix, and the top three elements in the rightmost column

describe a translation. Now, for a rigid body transformation, the bottom row of this matrix will always be 0001 for reasons that we won't go into, but if you ever do a search for rigid body transformations or homogeneous coordinates on the internet, you'll find a lot of information on this.

So how do we construct a rigid body transformation matrix for these two paradigms? For camera-centered pivot, let's say that the camera is fixed at position pw in world coordinates, and let's say that we have the latest attitude estimate from core motion, and we'll describe that using a rotation matrix, r. Now, recall that we could get access to that if we had a motion manager instance called motion manager using the code that you see here.
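The slide code isn't captured in the transcript, but based on the API named earlier it is presumably a one-liner along these lines:

    CMRotationMatrix r = motionManager.deviceMotion.attitude.rotationMatrix;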

So what we'll need to do is use attitude to transform the camera position in world coordinates to device coordinates. So to do that, all we have to do is multiply by that rotation matrix that we got from Core Motion. That'll give us a new vector called PDX, PDY, PDZ, and that's the camera position in device coordinates. With that, our 4x4 rigid body transformation matrix for the camera view is just what you see here. It's got the attitude matrix from Core Motion in the top left corner, and then minus one times the camera position in device coordinates down the right-hand column.
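Written out as equations, the camera-centered pivot construction just described is:

    p_d = R\,p_w, \qquad
    M_{\mathrm{view}} =
    \begin{pmatrix}
      R & -p_d \\
      0\;\;0\;\;0 & 1
    \end{pmatrix}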

Object-centered pivot is actually even easier. So here we have the camera position in world coordinates described by vector PW again, and we grab our attitude from Core Motion in rotation matrix R. We could use the same code to get that. Then our 4x4 transformation matrix is simply, you know, that attitude matrix in the top left corner, and then the minus 1 times the camera position in world coordinates down the right-hand column. So with that, I would like to show you Park. So let's go to the iPad. So here we have our augmented reality view with locations of some pretty suspicious sounding parking lots rendered on top of it at the appropriate location.

The Bat Cave. Drunken Valet. So in case you can't tell, all of these parking lot names are fake, because there really is no parking in San Francisco. So for more information, you can contact Allan Schaffer. He's our graphics and game technologies evangelist. Check out the Event Handling Guide for iOS for documentation.

And our developer forums are always a great place to post questions. If you're interested in checking out some related sessions, they're given here. The top two are about to take place actually now, Essential Game Technologies for iOS, Parts 1 and 2. And the bottom two have already taken place, so you'll have to check out the videos. Thanks.