WWDC12 • Session 524

Understanding Core Motion

Graphics, Media, and Games • iOS • 42:15

Core Motion fuses data from built-in sensors to determine precisely how your iOS device is oriented and moving in 3D space. Walk through the features and capabilities of Core Motion to understand common use cases and recommended best practices. See how you can create incredibly immersive experiences for games, augmented reality, and much more.

Speaker: Andy Pham

Unlisted on Apple Developer site

Downloads from Apple

HD Video (150.3 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Good morning. Welcome to Understanding Core Motion. For those of you who are new to Core Motion, this talk will provide an overview of the framework, as well as helpful hints and techniques to help bootstrap your motion-enabled apps. For those of you who are already familiar with Core Motion, we'll dig deeper into sensor and algorithm performance.

We'll explore capabilities and usages, help you refine ideas for your existing applications, or perhaps come up with ideas for new ones. Additionally, I'll take you through the process of using Core Motion from A to Z, pointing out design considerations and best practices along the way. And we'll culminate today's talk by reiterating some of the concepts we've covered with a couple of demos, the code for which is going to be available for download after this talk. So Core Motion is about the sensors and sensor fusion. And I'm going to start with the sensors.

We shipped the accelerometer with the very first iPhone, and it's been really popular ever since. The accelerometer measures gravitational as well as inertial forces that impinge upon it. What that means for you as a developer is that there's actually quite a bit that you can do with it, because if you can measure gravity, or to be more precise, if you can figure out how gravity is oriented with respect to the device, you can back out its pose, its attitude, its tip, or its tilt.

And that's the principle behind apps such as the iHandy Level, which surprisingly remains one of the most popular apps, among sensor-enabled apps, I should say. Besides measuring gravity, you can also ignore it altogether. For instance, assume only horizontal motion. And that's the principle behind games such as Shufflepuck, or Golf Puck.

Excuse me, Golf Putt. The thing I like about the accelerometer is that it is very efficient. To give you a sense of how efficient it is: it consumes 10 times less power than the magnetometer, and 40 times less power than the gyro. What this means for you as a developer is that if you are power conscious, you can still afford to be lax about how long you run the accelerometer.

Or you may want to look at the accelerometer as your first stop. It's also very responsive. And what that means for you is that you're going to be able to afford to filter it much more aggressively if you need a really clean signal.

You have to be careful, though, about using the accelerometer by itself. And the reason is because of ambiguity. And to get a sense of what I mean by that, take a look at this plot behind me. This shows basically the output of the three-axis accelerometer. I'm only looking at the X and Y.

And just based on the plot, you would be tempted to assume that that trajectory resulted from a pure rotation of the device, right? First, it's canted at some angle, so both the X and the Y axes pick up some measure of gravity. And then as the device is rotated through to vertical, all of the gravity is sensed in the Y axis, and none of it is sensed in the X. But in fact, that trajectory was created from a much more complex set of motions.

Because you probably recall from your first semester dynamics class, right?

[Transcript missing]

But the sensor is on the device, and its reading contains contributions from a lot of nearby sources. You've got electronics, you've got electrons zipping down the traces, you've got speaker magnets. All of these generate a net magnetic field, which is going to cause your reading to be offset by some potentially huge amount, which is going to cause your heading to be off, right? Now, it's interesting that those contributions travel with the sensor, so if you were to rotate your device again through a complete circle, you'll trace yet another circle, and the center of that circle is going to be offset by the net contribution of those disturbances that I talked about. So if you know that net offset, then you can basically back out your heading.

And so that's the principle behind a lot of the calibrations behind the compass. And I bring that up here because, again, if you're going to use the raw magnetometer, you have to be aware of both the contributions from the device and also what you'll need to do to take that contribution out.
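
To make that idea concrete, here is a minimal sketch, not the calibration Apple actually ships, of estimating the device's own contribution, the hard-iron offset, by having the user sweep the device through a full rotation and taking the midpoint of the traced circle on each axis. The type and the min/max approach are illustrative assumptions, and the code uses Swift even though the original session predates it.

import CoreMotion

// Illustrative hard-iron offset estimator: feed it raw magnetometer samples
// while the user rotates the device through a complete circle, then subtract
// the resulting offset from later readings to recover heading.
struct HardIronEstimator {
    private var minField = CMMagneticField(x: Double.greatestFiniteMagnitude,
                                           y: Double.greatestFiniteMagnitude,
                                           z: Double.greatestFiniteMagnitude)
    private var maxField = CMMagneticField(x: -Double.greatestFiniteMagnitude,
                                           y: -Double.greatestFiniteMagnitude,
                                           z: -Double.greatestFiniteMagnitude)

    mutating func add(_ sample: CMMagneticField) {
        minField = CMMagneticField(x: min(minField.x, sample.x),
                                   y: min(minField.y, sample.y),
                                   z: min(minField.z, sample.z))
        maxField = CMMagneticField(x: max(maxField.x, sample.x),
                                   y: max(maxField.y, sample.y),
                                   z: max(maxField.z, sample.z))
    }

    // The center of the traced circle: the net offset contributed by the device.
    var offset: CMMagneticField {
        CMMagneticField(x: (minField.x + maxField.x) / 2,
                        y: (minField.y + maxField.y) / 2,
                        z: (minField.z + maxField.z) / 2)
    }
}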

After the magnetometer, we started shipping the gyro, with the iPhone 4. The gyro is actually one of my favorite sensors. It is my favorite sensor. The reason I like the gyro is because, unlike the other sensors, it's very precise. There's no ambiguity. It's going to measure rotation and only rotation.

It doesn't matter what that device is doing. It's only going to pick up the rotational part, the spin part of what that device is doing. So it's very, very precise. So it's good for free space motion. You don't have to worry about constraining it. You don't have to say, oh, you know, I'm only going to use it in quasi-static conditions. You're going to pick up spin no matter what you do. So it's very, very responsive.

The other thing I like about the sensor, it's got a great dynamic range, which means that you're not going to worry about saturating the sensor. You can use it however you want. Your user is going to use your app however they want, and you don't really need to worry about saturation.

And as we said before, it's very precise. So since it does such a good job of measuring spin, you might be tempted to integrate the signal to get the attitude, because a lot of games are really interested in where that device is pointing, or where the user is pointing that device.

I would caution you against that. And the reason is because the gyro, just like any sensor, it's going to contain some bias, some non-zero reading while it's at rest. And that's going to be common to any sensor. And in fact, this gyro is actually very good about that.

The bias offset is only about 1/100th of a percent of its full dynamic range. So it's still a very good sensor, but there is a bias. And that bias is non-zero, so if you integrate that bias over time to try to get the attitude, you know, within half a minute you're essentially going to be pointing 45 degrees off from where you want to be. So again, you want to be careful about integrating your measurements.

So you're sitting there and you're thinking, well, you told me I can't use the gyro to get attitude, but I really need attitude. I can use the accelerometer to give me the tip and the tilt, but only under quasi-static conditions. And the magnetometer is great for heading, but I've got to be wary about all these other conditions.

So what if we could combine all three of those sensors, fuse them all together, to give you basically the pointing stability of the magnetometer and the accelerometer and the responsiveness of the gyro? You could do that. We also do that for you, and that's device motion.

That's sensor fusion. It's basically going to be able to give you accurate, responsive estimates of the device's pose and motion throughout the entire range of motion. Use device motion, use sensor fusion, for your games if you need that full range of motion, if you need that responsiveness. Your racing games, if you need them to be more responsive. Your first-person shooter games, if you need them to be tighter.

Your augmented reality game, if you need it to be more real. That's not to say that sensor fusion is going to be the be-all and end-all, that you'll always want to use sensor fusion. That's not what I mean, because there are going to be certain instances where you want to use the raw sensor. The accelerometer, for instance. We already said it's great for attitude. It's great for shake.

But again, be careful about ambiguity. The magnetometer, good for heading. The gyro, great for spin. But again, sensor fusion is the one that removes the uncertainty, the ambiguity, so that you can get attitude. It does a lot of calibration so you get the heading. And of course, it gives you the spin and it gives you the shake. So it gives you all of that. With that, let's dig a little deeper into sensor fusion. We'll take a look at what it does, what sensor fusion gives you, and we'll dive a little deeper into the framework.

Your starting point is going to be the Motion Manager. That essentially is your gateway. It's going to run in your application space. You're going to create an instance of the Motion Manager, and then through it, you're going to specify what you want, whether you want the raw sensors or whether you want sensor fusion.

You're also going to specify how often you want that data, and you'll start and stop the updates. That's all it is. So within the Motion framework, depending on your requirements, you'll get the raw accelerometer data, the gyro, the magnetometer. We give that to you raw. We also give you the sensor fusion, CMDeviceMotion.

So, sensor fusion, what it gives you, and this is actually the meat of it: you'll get gravity. Again, this is gravity under completely unconstrained motion. Not the steady-state gravity, but the dynamic gravity. It'll also disambiguate the user acceleration for you. You also get the rotation rate. The difference between this one and the raw gyro is that this one is going to be bias compensated. And then we give you the attitude and also the magnetic field. And again, the difference here between this one and the raw magnetic field is that this is also bias compensated, the disturbance has been removed, the sensor has been calibrated.

So we'll start with the user acceleration. It's a property of device motion. It's a struct, CMAcceleration. And it's essentially just the inertial acceleration seen by the device. We remove gravity. That's all it is. The units are going to be in gravities, so one is 9.8 meters per second squared. And it's a struct of just doubles, x, y, z. Nothing mysterious about it.

Gravity. It's the same struct as user acceleration, as you would expect. The units are going to be the same. To describe gravity, or what we give you, take a look at this, right? Suppose you had a device that was sitting flat on the table. You would expect that the horizontal components of gravity would be zero.

Now, if you were to move that device back and forth, you would get that kind of signal on the X and Y axes. Now, without a priori assumptions about whether that device is flat on the table or not, you don't really know what gravity looks like throughout all this.

But because sensor fusion knows what the attitude of the device is, and it knows that in real time, we can project that signal onto the gravitational axis and get what we estimate gravity is going to look like. And so as you can see, it does a fairly good job of extracting gravity out, disambiguating it from the user acceleration.
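
As a minimal sketch of what that separation looks like from the API side, in modern Swift naming rather than the Objective-C of the session, the gravity and userAcceleration properties come back already disambiguated, both in units of g:

import CoreMotion

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates()

// Later, e.g. once per rendered frame. deviceMotion stays nil until the
// fusion algorithm has data, so just skip the frame in that case.
if let motion = motionManager.deviceMotion {
    let g = motion.gravity            // CMAcceleration, in gravities
    let a = motion.userAcceleration   // inertial part only, gravity removed
    // If you need the raw total in m/s², recombine and scale:
    let totalZ = (g.z + a.z) * 9.81
    print("gravity z: \(g.z) g, user z: \(a.z) g, total z: \(totalZ) m/s²")
}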

The rotation rate, that's just the device's spin. And again, as we said before, it's corrected for bias. The units are gonna be in radians per second. And the struct, again, is gonna be given as x, y, z. These are rotations following the right-hand rule, so when we say rotationRate.x, it's essentially the rotation about the x-axis. The magnetic field. This is given as a struct called CMCalibratedMagneticField, and it's basically the nearby magnetic field from everything.

Corrected for bias. So again, it's basically the magnetic field of everything external to your device. We calibrate it for the contributions that are coming from the device. And it's filtered, because it's quite noisy. The results are stored in a struct. The struct contains two elements: one is the field itself, and the other is a measure of the accuracy you should expect if you were to use the field that we give you, in terms of how well we think we've calibrated the device.

So again, the magnetic field, nothing surprising there: x, y, z. The units are going to be microtesla. Look, it sounds like I'm repeating myself, you know, the units. I keep bringing the units back up. But I think it's an important enough point to belabor, because the units are so important, right? Misunderstanding of units is the cause of so many falling bridges, satellites lost in space, right? Much suffering in guilders. So I'm going to keep hammering on the units.

In addition to the magnetic field, we provide what the calibration accuracy is. And I'm just going to go through the enums real quickly. The first one is uncalibrated, minus one. What that means is that if the field that you get has an uncalibrated level of accuracy, it's essentially a random number generator for you in terms of heading.

Low means it's marginal. You can get something out of it, but you certainly wouldn't want to bet money on it. You really want to live around the medium and the high. And again, I'll tell you a little bit later how you can prompt the user to recalibrate the device so that you can improve the calibration accuracy.
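
A minimal sketch of gating on that accuracy value; the handling choices here are purely illustrative:

import CoreMotion

let motionManager = CMMotionManager()
motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)

if let motion = motionManager.deviceMotion {
    let calibrated = motion.magneticField        // CMCalibratedMagneticField
    switch calibrated.accuracy {
    case .uncalibrated, .low:
        // Heading derived from this field is not trustworthy yet; wait for
        // calibration to improve, or prompt the user as described later.
        break
    case .medium, .high:
        let f = calibrated.field                 // microtesla, bias removed
        print("field: (\(f.x), \(f.y), \(f.z)) µT")
    @unknown default:
        break
    }
}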

This is the core of device motion: the attitude. Everything we've done up to this point essentially has been a byproduct of trying to get the attitude. And the attitude, as we presented, is just the pose of the device, right? It's how it sits in 3D space. We provide it in three flavors.

Euler angles, quaternions, and rotation matrices. Euler angles because they're intuitive. They're basically going to be presented as doubles: roll, pitch, and yaw. And they're easily visualized, so you're going to want to use Euler angles if you're going to express the pose of the device through any user interface.

And it's also familiar to the user. And so we provide it as a convenience. But mathematically, Euler angles are very, very difficult to use, as you know. They're not efficient. They're not unambiguous. And you have the gimbal lock problem. So we also provide quaternions. Quaternions: elegant. You can represent everything through four scalars.

Computationally, it's very, very efficient and it also avoids the gimbal lock problem. Another great thing about quaternions, as you probably already know, is that they provide smooth interpolation. So if you start out at one quaternion and you want to get to another pose defined by another quaternion, and you interpolate between them, then your motion in real space will also be very, very smooth. So that's a great property of quaternions. So I like quaternions, but unfortunately... oh, and it's given as w, x, y, z.

Unitless, because it's some imaginary, fictitious concept. So we provide some utilities to get you from the quaternions that you work with back to Euler angles. So let's take this example. You're going to start out with a quaternion. You're going to get it from device motion, motionManager.deviceMotion.attitude.quaternion. And let's say you do some math with it.

The point of this is not to show you the quaternion math, but let's just say you can do some math with it, right? And then again, we provide the utilities so that at the end of all of that, you can take your rotated quaternion and convert it back to roll, pitch, and yaw. Everything is defined in motionutils.h.
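
The motionutils.h helpers from the sample code aren't reproduced here, but as a rough sketch of the same kind of round trip, using GLKit's quaternion type instead, an illustrative substitution rather than the session's utility file:

import CoreMotion
import GLKit

let motionManager = CMMotionManager()
motionManager.startDeviceMotionUpdates()

if let attitude = motionManager.deviceMotion?.attitude {
    let q = attitude.quaternion   // CMQuaternion: x, y, z, w

    // Move into GLKit's quaternion type to do some math with it,
    // here composing an extra half turn about the z axis.
    let glkQ = GLKQuaternionMake(Float(q.x), Float(q.y), Float(q.z), Float(q.w))
    let halfTurn = GLKQuaternionMakeWithAngleAndAxis(Float.pi, 0, 0, 1)
    let rotated = GLKQuaternionMultiply(halfTurn, glkQ)

    // ...and back to something you can render or display: a 4x4 matrix,
    // or roll/pitch/yaw taken straight from the attitude when that's enough.
    let matrix = GLKMatrix4MakeWithQuaternion(rotated)
    print("roll \(attitude.roll), pitch \(attitude.pitch), yaw \(attitude.yaw)")
    _ = matrix
}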

Go ahead and check it out. It has a bunch of handy utilities that go back and forth between Euler angles and quaternions. All right. If you're like some of those developers who prefer to, I don't know, eat broken glass in the morning rather than granola to get yourself going, if you like your math straight up.

You want to do things in 3x3 matrices? That's fine. We give you rotation matrices. It's a 3D representation. Like I said, it's math straight up. It's just matrix math, linear algebra, stuff that you know from freshman college. We present it as a 3x3 struct. The elements are m11 all the way to m33. Again, no great mystery there. So I'm going to give you a flavor of how you could do some operations with matrix math. So let's take a look at this sample snippet. We actually pulled this from the demo that we're going to show later,

and it's also going to be in the code that's available for download. So suppose you had some GLKBaseEffect, and I'm not going to talk about how you initialize that and such, but basically what you want to do is define some operations on that object, and then you also want to apply the transformation that comes from the attitude of that device, right? So let's get the attitude of the device from device motion, and that's going to be a rotation matrix. So that's going to be r = motionManager.deviceMotion.attitude.rotationMatrix. Now, we're going to convert that 3x3 into a homogeneous transform.

And again, it's not the point of this thing, but basically it's going to include the rotation piece and the translation piece, right, and then the one at the bottom. And that's just to get it into that 4 by 4 so that we can use the matrix math that comes with that.

And we establish the model view of that object that we talked about earlier, the GLKBaseEffect, and then we're just going to rotate it by pi over 3. And then in addition to that rotation on that object, we're basically going to do the transformation between our view, which is defined by our rotation matrix from device motion, and the rotation of the device, and you just chain them together. That's it. And then when you update the view, you'll see the rotation.
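
The demo's exact code isn't reproduced in these slides, but a minimal sketch of that chain, assuming a GLKBaseEffect that's already set up and a running motion manager, might look like this; the pi-over-3 rotation mirrors the example just described:

import CoreMotion
import GLKit

// Promote the 3x3 CMRotationMatrix to a homogeneous 4x4.
// GLKMatrix4Make fills the matrix column by column, so pass the columns
// of the rotation matrix (m11, m21, m31 is the first column) plus 0,0,0,1.
func homogeneousTransform(from r: CMRotationMatrix) -> GLKMatrix4 {
    return GLKMatrix4Make(Float(r.m11), Float(r.m21), Float(r.m31), 0,
                          Float(r.m12), Float(r.m22), Float(r.m32), 0,
                          Float(r.m13), Float(r.m23), Float(r.m33), 0,
                          0, 0, 0, 1)
}

func updateModelView(effect: GLKBaseEffect, motionManager: CMMotionManager) {
    guard let attitude = motionManager.deviceMotion?.attitude else { return }

    // View transform from the device's attitude...
    let deviceRotation = homogeneousTransform(from: attitude.rotationMatrix)
    // ...chained with the object's own rotation of pi/3 about the x axis.
    let objectRotation = GLKMatrix4MakeRotation(Float.pi / 3, 1, 0, 0)
    effect.transform.modelviewMatrix = GLKMatrix4Multiply(deviceRotation,
                                                          objectRotation)
}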

So it's really pretty neat. I'm sorry, I seem to have gotten ahead of myself. That's it for attitude. But when we talk about attitude, that presupposes a reference frame. And the reference frames that we use, we actually allow you, the application developer, to specify. There are four of them, and I'm just gonna go through them.

They all start with CMAttitudeReferenceFrame, but they're essentially distinguished by one being arbitrary, another being heading corrected, the third being pegged to magnetic north, and the fourth being pegged to true north. I'll go into those in greater detail in the following slides.

And we allow you to query which of these are going to be available, because by specifying one of those four reference frames, you are going to specify the flavor of sensor fusion that you're going to use. That's really important, so I'm going to say that again.

You're going to specify the flavor of the sensor fusion you use by specifying one of those four reference frames. And the reason that's important is because not all devices are going to have all these sensors.

'Cause you're going to want to build your app once, and then it's going to, you know, potentially run on the iPod touch versus the iPhone versus the iPad. And so you want to be very careful. And so that's why we provide that bit mask, so that you can query the device to see what's available before you go and start the updates.
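
A minimal sketch of that query-then-start pattern, in modern Swift naming (the session's Objective-C calls map directly onto these); the fallback order here is just an illustrative choice:

import CoreMotion

let motionManager = CMMotionManager()
let available = CMMotionManager.availableAttitudeReferenceFrames()

// Fall back gracefully if the fancier frames, and the sensors they need,
// aren't there on this particular device.
let frame: CMAttitudeReferenceFrame
if available.contains(.xTrueNorthZVertical) {
    frame = .xTrueNorthZVertical          // needs magnetometer and location
} else if available.contains(.xArbitraryCorrectedZVertical) {
    frame = .xArbitraryCorrectedZVertical // yaw drift corrected opportunistically
} else {
    frame = .xArbitraryZVertical          // available everywhere
}

motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(using: frame)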

So once you've specified the reference frame, then you essentially just tell device motion to start sending you updates by calling startDeviceMotionUpdates using the reference frame. So that's it. So again, coming back to the reference frame choices: X arbitrary. Regardless of which reference frame you specify, the only thing they're going to have in common is that Z is always going to be aligned with the gravitational axis.

Now, using X arbitrary means that the X and Y, though, are completely undefined. They're basically going to be specified by the initial pose of the device, wherever it's sitting. So it could be rotated this way, or it could be that way, or it could be that way. You basically have no control over that. Another thing that you want to be aware of when you specify this frame choice is that heading is not going to be corrected. So not only is your initial reference going to be completely arbitrary, but you're going to drift over time with respect to it.

Now, if you specify X arbitrary corrected, the magnetometer kicks in. And so now we use the magnetometer to essentially provide opportunistic heading corrections. Since the magnetometer is additionally running, that means you're going to consume a little bit more CPU, but it does provide you better long-term yaw accuracy. You don't have to worry about drifting in your view. All right, but again, the reference point is going to be arbitrary in the sense that it's going to be pegged, as soon as we start device motion, to wherever your device is pointing.

If you want it to be fixed on an absolute reference every time, then use magnetic north. And for this one, you're essentially using the compass. The difference between the magnetometer and the compass: the magnetometer is a sensor; the compass is yet another sensor fusion algorithm that is running, using the magnetometer, to give you the best estimate of the Earth's magnetic north.

That means, though, if you're gonna use X magnetic north, calibration's gonna be required. So we're gonna provide you, basically, a property that you can set, showsDeviceMovementDisplay. It's gonna be set to false by default. You're gonna set it to true if you want that HUD to pop up when the device becomes uncalibrated.
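
A tiny sketch of opting into that calibration HUD before starting a north-referenced session:

import CoreMotion

let motionManager = CMMotionManager()
// Let the system show the recalibration prompt when the magnetometer needs
// it; the property defaults to false.
motionManager.showsDeviceMovementDisplay = true
motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical)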

And lastly, we go to X True North Z Vertical. So the difference here is now your reference frame is pegged not to magnetic north, but true north. And of course, as you know, magnetic north isn't the same thing as geographic true north. It's going to differ by some amount, and that amount actually varies throughout the globe because of differences in the magnetic field.

I won't bore you with the details, but suffice to say that in some applications, you're going to care about the difference, right? If you're hiking, you may care about the difference, because you're going to lose hours walking if you start heading in the wrong direction. If you're my mom and you're doing feng shui, for feng shui purposes, you really, really need to know true north, not magnetic north. You're going to care. I didn't know it was that precise of an art.

So why do you care, though, about true north? Well, it's the direction that we're most familiar with, and it's what maps are using as a reference. So for navigation purposes, you'll really want true north. We calculate true north for you, but what that means is that you're going to have to enable location services, because we need to know where you are in lat-long, and then we use a model of the Earth's magnetic field, its declination, to give you the approximate difference between true north and magnetic north.

So the key takeaways from this are that sensor fusion does a lot of hard work for you if you're not interested in separating out gravity, if you're not interested in removing gyro bias, and if you're not interested in calibrating the magnetometer. And also, you can use the reference frame to specify different flavors of sensor fusion.

So with that, I think we're gonna step away from kind of the high-level view of what the sensors are and what sensor fusion is, and we'll actually get down to the nuts and bolts of actually using Core Motion. So the outline for using Core Motion: there are basically three steps.

You set up, and during the setup, you're essentially going to select what you want, the raw sensor stream or sensor fusion, and you're going to define your update interval. In the second step, you're essentially going to retrieve data, and when you're retrieving data, you basically only have one question to ask yourself.

Do I want device motion to push it to me, or do I want to just pull to get the data? And then, of course, you'll have to clean up, and that's the last step. So during the setup, the first thing you want to do is essentially allocate an instance of the motion manager, CMMotionManager.

Then what you want to do is make sure that if you're interested in the sensor, it's available; if you're interested in sensor fusion, it's available. Because like I said before, not all devices are going to have everything. And next you're going to specify the desired update interval, and I'll talk a little bit about how you do that later. But you don't want to just generally pick one, because it actually has huge implications for the performance of your app as well as power considerations. And then finally, you just start updates.

And you do that by just calling startAccelerometerUpdates. You can also call startAccelerometerUpdatesToQueue:withHandler:, and that's actually the push method; we'll talk about that a little later. Then to retrieve data, since you're pulling data from the motion manager, all you have to do is, in some context, just go and query the accelerometer data.
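
Putting those three steps together as a minimal sketch of the pull model (Swift naming; the class wrapper and the 60 Hz interval are just illustrative choices):

import CoreMotion

final class MotionSampler {
    private let motionManager = CMMotionManager()

    // Step 1: set up. Check availability, pick an interval, start updates.
    func start() {
        guard motionManager.isAccelerometerAvailable else { return }
        motionManager.accelerometerUpdateInterval = 1.0 / 60.0
        motionManager.startAccelerometerUpdates()   // pull model
    }

    // Step 2: retrieve. In your own context (a display link, a game loop),
    // just ask for the latest sample; it's nil until the sensor has data.
    func latestAcceleration() -> CMAcceleration? {
        return motionManager.accelerometerData?.acceleration
    }

    // Step 3: clean up.
    func stop() {
        motionManager.stopAccelerometerUpdates()
    }
}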

So, what's going on underneath? Well, what Core Motion is doing underneath, on the framework side, is it's creating its own threads, and it's using those to essentially handle the raw data for you and also to run the sensor fusion algorithms. On your side, if you decide to do push, in other words, if you want Core Motion to push the data to you, all you have to do is provide an NSOperationQueue and a block, and then your block will execute essentially on your own threads. If you were to pull, all you're doing is periodically asking Core Motion for the latest update, and again, you're going to do it in your own context.

The difference between push and pull: with push, you're never going to miss a sample. Every sample that Core Motion gets, it's going to push to you. But the disadvantage is that you have increased overhead. And sometimes for your application, you may not want every sample. It's okay to do best effort. You drop a couple of samples, so what? You'll catch up.

And so you'll want to use push only if your application absolutely needs every sample, when fidelity is really, really important; you'll want to use it for data collection apps. For most other apps, pull is fine. It's more efficient, and less code is required. Although, since you're providing your own context, you may need to piggyback onto something or create another timer, so there's a little bit more complexity involved there. But it's the mode that you'll want to use for most apps and games.
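
For the cases where every sample does matter, here is a minimal sketch of the push variant; the queue and the 100 Hz interval are assumptions, not requirements:

import Foundation
import CoreMotion

let motionManager = CMMotionManager()
let queue = OperationQueue()   // samples are delivered on this queue

motionManager.accelerometerUpdateInterval = 1.0 / 100.0
motionManager.startAccelerometerUpdates(to: queue) { data, error in
    guard let data = data else { return }
    // Every sample Core Motion collects is handed to this block, in order.
    print("x: \(data.acceleration.x) at t = \(data.timestamp)")
}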

So the final step, cleanup. And there, all you have to do is essentially stop the accelerometer updates and then set the motion manager to nil. So it's just those three steps. We're going to talk a little bit about the implications of everything you're doing in those three steps by going through this set of best practices.

We'll talk about how, and why, you'll want to change reference frames; why you'll need to characterize the sensors in order to decide which sensors or which sensor fusion algorithms to select; what update rates you're going to run them at; and how you're going to optimize all of that. And we'll also talk about some common pitfalls that you'll want to try to avoid.

All right, so what I said before about the reference frame, it's kind of an overloaded term. So your device, you'll start out and you'll have some reference frame, and that is the reference frame from which all your attitude is going to be referenced, your roll, pitch, yaw, whatever. But that may not be convenient to express, especially through UI. And so, say, for instance, you want to reset that initial reference to something that is pointing right at the user, some useful resting reference frame.

So what you want to do there is cache the reference attitude, something that is pleasing to you. And then during subsequent updates, all you have to do is take the current attitude and multiply it by the inverse of that cached reference. And then you'll get essentially your new attitude, referenced to a frame that is a suitable reference.
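
A minimal sketch of that caching trick, using CMAttitude's multiplyByInverseOfAttitude: (multiply(byInverseOf:) in Swift); when you choose to cache, here an explicit recenter call, is up to you:

import CoreMotion

final class AttitudeRecenter {
    private let motionManager: CMMotionManager
    private var referenceAttitude: CMAttitude?

    init(motionManager: CMMotionManager) {
        self.motionManager = motionManager
    }

    // Call when the device is in the pose you want to treat as "zero",
    // for example pointing straight at the user.
    func recenter() {
        referenceAttitude = motionManager.deviceMotion?.attitude
    }

    // On each update, express the current attitude relative to the cached one.
    func relativeAttitude() -> CMAttitude? {
        guard let current = motionManager.deviceMotion?.attitude else { return nil }
        if let reference = referenceAttitude {
            current.multiply(byInverseOf: reference)   // mutates current in place
        }
        return current
    }
}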

So, characterizing sensors. These sensors have been specced, basically, to meet the widest range of applications for a mobile device, but that doesn't mean that they're gonna meet all applications. So before you use them, make sure that you understand what the dynamic range of these sensors is. If the accelerometer measures up to 2g, and you want to strap it onto a missile and shoot it up and impress your friends, it's likely gonna saturate, so you want to avoid doing that.

Sensitivity is the converse of dynamic range. Dynamic range is how far can you measure. Sensitivity is essentially what is the smallest change that you can sense, and again, that's something that you want to characterize. Just put the sensor on a table, collect the data, and see how it changes.

Dynamic response, that's a measure of how quickly it can respond to changing signals. Different sensors have different dynamic responses, and your applications, obviously, will have different needs. And basically the rule of thumb that you want to use is that for more dynamic applications, you'll want to sample faster and faster.

Okay? Noise. That actually eats into your dynamic response, because to get rid of noise, you'll want to filter, and that'll, again, cut into your phase margins. Bias we've already talked about, so I won't belabor the point there. But again, the point of all of this is that before you do anything, before you even start writing the app, start by characterizing your sensors first.

Update interval. So you saw that in the example code, we just arbitrarily chose a 60th of a second, and a lot of folks will do that, because that is also the frequency at which the scene is being rendered, so it's a very convenient frequency to choose. But what's really driving how fast you should sample the sensors? Well, one is fidelity, right? Nyquist. If you're gonna be sensing a 10-hertz signal, you want to sample at at least 20 hertz.

But in fact, you want to sample more because you have noise, and so you want to sample much, much higher than Nyquist in order to do some filtering and yet be able to reconstruct that signal. But the faster you go, the more power it's gonna consume, so you're really gonna have to back off. And instead of sampling at 200 hertz like you really, really wanted to, maybe you've got to run the gyro at 100 hertz. So again, that's something that you're gonna have to do to optimize for your code.

Now, for a lot of motion-enabled apps, it's a bit of a quandary, because you'll want to be aware of the time at which you're getting these samples, but you're basically not getting these samples under any defined contract. In other words, you're going to specify an update interval, but you're not necessarily going to get that update interval, because we're going to deliver the samples to you best effort. So you're going to get the samples at the update frequencies that are defined by the sensors and by the core OS. And on top of that, there's going to be jitter.

So if you really need to be time aware of your sensors, you'll need to check the timestamp of the samples, okay, if you're going to reconstruct the signal. That's really, really important, especially for applications that are trying to reconstruct anything. But if you're going to use the timestamp, use the correct one.

There are multiple times; you can get timestamps in a number of ways. The timestamp that we provide for you in the sample is a timestamp that is generated at the driver when we clock that sample off the sensor. So it's going to be the most accurate timestamp. Don't use the time in the callback, because who knows when that sample is actually delivered to you. So that's just something to be careful about.

Next, don't assume an update interval. I already touched on that before. So again, if you want 60 hertz, you may not necessarily get 60 hertz. You may actually get 50. You may get 55. You may get 40. So if it's really important to you, actually check the timestamps and look at the differences. Now, you want to be careful, because you don't want to just take two timestamps and look at the interval. You probably want to do a moving average to smooth out the jitter in order to get the average delta T.
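
A minimal sketch of that bookkeeping; the 32-sample window is just an assumed choice:

import Foundation

// Tracks the effective update interval from sample timestamps, smoothing
// out jitter with a simple moving average.
struct DeltaTEstimator {
    private var lastTimestamp: TimeInterval?
    private var deltas: [TimeInterval] = []
    private let window = 32

    mutating func add(timestamp: TimeInterval) {
        if let last = lastTimestamp {
            deltas.append(timestamp - last)
            if deltas.count > window { deltas.removeFirst() }
        }
        lastTimestamp = timestamp
    }

    // Average interval actually being delivered, or nil until there is data.
    var averageDeltaT: TimeInterval? {
        deltas.isEmpty ? nil : deltas.reduce(0, +) / Double(deltas.count)
    }
}

// Feed it the timestamp from each CMAccelerometerData or CMDeviceMotion sample.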

And lastly, this may sound trivial, but in fact, it's really important. So in your code, what you do is you essentially say startUpdatesToQueue:withHandler:, excuse me, you basically call start updates, and then right away, you want to get in there and start pulling the data. Well, the problem is that these sensors are mechanical devices, and so there's some startup time. And different sensors will have different startup times. The accelerometer starts up right away. The gyro takes a little longer; it takes a couple of seconds to spin up.

So if you were to start pulling right away, you're going to get nonsensical data, or you're not going to get data at all. So what we do is we hold that data back until it becomes sensical, and then we push it on to you. So what you want to do before you start doing anything with the data is check for nil. Okay.
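
A minimal sketch of that nil check when pulling shortly after starting updates:

import CoreMotion

let motionManager = CMMotionManager()
motionManager.startDeviceMotionUpdates()

// The gyro needs a moment to spin up, so early pulls return nil rather than
// nonsense. Just skip the frame until real data arrives.
func currentYaw() -> Double? {
    guard let motion = motionManager.deviceMotion else { return nil }
    return motion.attitude.yaw
}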

Avoid integration. I think I kicked that horse to death, so I won't belabor the point, but essentially, anytime you integrate the signal, you have to worry about instability. Again, check for existence. Very important. So if you're going to use, say, the raw sensor, check whether the magnetometer is available or not. If you're going to use sensor fusion, this is just an example of how you can check what reference frames are available by using the bit mask.

So now we get to the coding examples. And again, these are examples that we're going to have available for download. The first one is basically going to be a motion graph. And what that does is it allows you to essentially select which sensor signals you want to visualize. So, you know, you want to plot the accelerometer, you want to plot user acceleration, the gyro. So it's a great tool, again, to help you characterize your sensors. Some of the concepts we cover in there are push semantics and how to manipulate the update interval, specify the update rates.

So to start using that, all you have to do is essentially import the Core Motion framework. I won't go into how you actually set up the build. But in the header, we put the motion manager in the delegate, and the reason we do that here is because this thing contains a lot of views, and we just wanted a single motion manager, so we kept it in the delegate. And there we instantiate the motion manager.

So now in the view controller, what we do is, in each view, we get a reference to that motion manager that is shared among all the views. And then when the view appears, all we do is just call start updates, essentially, with the interval. That right there is just a wrapper, and I'll show you the implementation underneath. So startUpdates, with the motion data type and slider values, is just a wrapper around startAccelerometerUpdates, et cetera.

And similarly, if you're going to change the slider values, again, we just call that wrapper. So here, that wrapper is essentially where we do all the things I talked about being important: check whether the accelerometer is available, set the update interval, and then essentially start the accelerometer updates to the queue. And that's it.

And then on cleanup, we just essentially call stop on the updates. So that's it, really, on that coding exercise. It's just a few handfuls of lines of code, and what you would actually get are some really cool plots of the sensor data, so that you can play with it to see what the difference is between gravity and user acceleration, and get a sense of how, when you change the slider values and add your own filtering, that degrades fidelity or not. The next demo, which I'm not going to provide the code for, at least on the slides, is an augmented reality app.

And here we show concepts such as how you go from reference attitudes to chaining all the transformations together so that you can visualize things. So the idea is that you're going to hold the device up, and as you rotate the device, you'll visualize objects in 3-space. So the device basically provides a window into the solar system. So that's it. Thank you.