System Frameworks • iOS, watchOS • 34:30
Discover how enhancements in authorization simplify accessing sensitive and historical motion data. Learn how to use DeviceMotion effectively and how to leverage SensorRecorder to capture hours of motion data. Walk through adding immersive motion controls to enhance an existing game.
Speakers: John Blackwell, Ahmad Bleik
Unlisted on Apple Developer site
Transcript
Hello, everyone. My name is John Blackwell, and I'm an engineer on the Core Motion framework. Today, we're going to be talking about creating immersive applications with the Core Motion framework; and we're also going to discuss a number of best practices along the way. So today, we're going to be covering a number of things. The first is a brief overview of what Core Motion provides. Next, we're going to talk about authorization, a frequent pain point for many of our developers.
Then, we're going to look at some new additions to Historical Accelerometer; and then, we're going to dig into DeviceMotion, the sensor fusion that we provide in Core Motion. And finally, we're going to look at Badger with Attitude, a game where we'll take some of the concepts that we've talked about in DeviceMotion and put them into practice. And with that, let's take a closer look at the Core Motion framework.
The Core Motion framework provides access to the accelerometer, the gyroscope, the magnetometer, and the altimeter; and through the framework, you can access raw forms of the sensor data, as well as processed forms, and use these in your applications. Now, there are a number of interfaces available through Core Motion.
The first is CMMotionManager, which provides access to the raw sensor data, as well as the sensor fusion in the form of DeviceMotion. CMAltimeter provides access to relative altitude updates for the device. CMPedometer provides access to step counts and other step-related information, as well as workout pause and resume events. CMMotionActivityManager provides access to the activity context of the device; for example, walking, running, automotive, etc. And CMSensorRecorder provides access to historical accelerometer data.
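As a rough sketch, here's how each of those interfaces is instantiated in Swift (the comments summarize what each one covers, not the full API surface):

```swift
import CoreMotion

let motionManager = CMMotionManager()           // raw sensors + DeviceMotion
let altimeter = CMAltimeter()                   // relative altitude updates
let pedometer = CMPedometer()                   // steps and workout pause/resume events
let activityManager = CMMotionActivityManager() // walking, running, automotive, etc.
let recorder = CMSensorRecorder()               // historical accelerometer data
```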
So that's a brief overview of what the Core Motion framework provides. Next, let's take a closer look at authorization. So of the APIs that I just mentioned, these following four are sensitive due to the nature of the private information that they expose about the user. So to handle this, we have a prompt that appears the first time you invoke one of the sensitive APIs.
Keep in mind that the first time you invoke the sensitive APIs, a prompt will appear for your users; but after that first time, your users will need to go into Settings to change the authorization state for your applications. Now, let's take a look at what happens the first time you call one of these APIs. At this point, your users will need to decide if they want to grant access to the motion and fitness data, or deny it. And as a developer, you're going to want to handle the case where they deny access.
Now, in the past, we've asked you to do something like this where you call any of our APIs that are sensitive. In this case we're calling queryPedometerData; and then you ignore the result that you get back from the API and only look at the error code. And if the error code is not authorized, at this point, you know your application's been denied access to that motion and fitness data. Now, we realize that this is less than ideal.
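In Swift, that old workaround looks roughly like this (a minimal sketch; the query window is arbitrary, the result is thrown away, and only the error is inspected):

```swift
import Foundation
import CoreMotion

let pedometer = CMPedometer()

// Issue a query we don't actually care about, and look only at the error.
pedometer.queryPedometerData(from: Date(timeIntervalSinceNow: -3600), to: Date()) { _, error in
    if let error = error as NSError?,
       error.domain == CMErrorDomain,
       error.code == Int(CMErrorMotionActivityNotAuthorized.rawValue) {
        // The app has been denied access to motion and fitness data.
    }
}
```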
You need to jump through hoops to get access to your authorization state, and once you get it, you can't tell exactly why your app has been denied access. And that's why this year, we're providing an Authorization Status API. This API is available in the same four classes that I mentioned before, and it's available on iOS and watchOS. Let's take a closer look at the CMAuthorizationStatus value that you'll get back from this API.
The first state, notDetermined, represents the state before the user has been asked about authorization in your application. Restricted represents the state where the user is unable to change the authorization state for your application specifically; and this can happen when motion and fitness is disabled in privacy. The next state -- denied -- means what it sounds like. Your application's been specifically denied access by the users. And authorized means your app is ready to access the user's motion and fitness data.
Now, let's come back to the authorization check that we were looking at before. One of the first best practices that I want to talk about is making sure that you check for the availability of a given API before you ask for the authorization status. In this case, we're calling isStepCountingAvailable.
And the next thing that you're going to want to do is query for the authorization status. And at this point, it's up to you as a developer to decide how you want to handle the authorization that you get back from the framework. You now have a simple mechanism for doing this on both iOS and watchOS.
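Putting those two best practices together might look like this in Swift (a minimal sketch; how you react to each state is up to your app):

```swift
import CoreMotion

func handleMotionAuthorization() {
    // Best practice: check that the capability exists on this device first.
    guard CMPedometer.isStepCountingAvailable() else { return }

    // Then query the authorization state directly.
    switch CMPedometer.authorizationStatus() {
    case .notDetermined:
        break // The user hasn't been prompted yet; starting updates will ask.
    case .restricted:
        break // Motion & Fitness is disabled in privacy settings.
    case .denied:
        break // The user specifically denied access; point them to Settings.
    case .authorized:
        break // Ready to access motion and fitness data.
    @unknown default:
        break
    }
}
```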
I encourage you to consider how you can use the Authorization Status API in your applications -- you should definitely use it. Next, we're going to talk about Historical Accelerometer. Historical Accelerometer, also known as CMSensorRecorder, provides 50 Hz accelerometer data, and we can record this for your applications while your apps are in the background. You can request up to 36 hours of accelerometer data, and this data will be stored on your behalf for up to three days.
Now, Historical Accelerometer is currently available on Apple Watch, and today, I'm excited to announce that it's now available on iPhone 7 and 7 Plus. This opens up a whole new set of use cases for your applications, and to get you thinking about how you can use Historical Accelerometer on iPhone, let's walk through one sample application now.
So let's say you're a big automotive enthusiast, and you want to build an application to enable you to track your car's performance over a long track day. How would we go about building this? Well, the first thing that we want to figure out is when the user is driving, and for that, we can use Motion Activity. Motion Activity provides an automotive state, and we can use this to determine the periods in which the user is driving.
Now, I want to take a brief minute to talk about the automotive detection. In iOS 11, the automotive detection has received special attention to ensure best-in-class performance. This automotive state in Motion Activity is the same state that's being used to power Do Not Disturb While Driving; and it's also available for you to use in your applications if, say, you wanted to personalize your application's UI while the user is driving.
Now, coming back to our automotive performance tracking application, the next thing we want to do, is we want to gather the accelerometer data. And for that, we can use SensorRecorder. We can gather the accelerometer data for the periods in which the user is driving, and translate that into metrics for the user, such as how many Gs they were pulling through a turn, or maximum lateral Gs or longitudinal Gs. Now, there are many different ways you could consider building this kind of application, but by using the Motion APIs, you can provide your users with a low-power, all-day experience.
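A minimal sketch of that flow in Swift might look like the following; trackDayStart and the 8-hour duration are hypothetical choices, and pairing activities into driving periods is left as a comment:

```swift
import Foundation
import CoreMotion

let recorder = CMSensorRecorder()
let activityManager = CMMotionActivityManager()
let trackDayStart = Date(timeIntervalSinceNow: -8 * 60 * 60) // hypothetical start time

// Ask the system to record accelerometer data; pick the minimum duration
// that makes sense for the app (see the best practices just below).
recorder.recordAccelerometer(forDuration: 8 * 60 * 60) // seconds

// Later, find the driving periods from Motion Activity...
activityManager.queryActivityStarting(from: trackDayStart, to: Date(), to: .main) { activities, _ in
    let driving = activities?.filter { $0.automotive } ?? []
    // ...then, for each driving period, pull the recorded samples, e.g.:
    //   let samples = recorder.accelerometerData(from: periodStart, to: periodEnd)
    // and turn them into lateral/longitudinal G metrics for the user.
    _ = driving
}
```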
Now, there are a number of best practices to keep in mind with Historical Accelerometer. The first is that you want to choose the minimum duration that makes sense for your application. So for our automotive performance tracking application, we may not need the full 36 hours of data -- something more like 8 to 12 hours would make more sense.
And the next best practice to keep in mind is to consider decimating or dropping samples if you don't need the full 50 hertz accelerometer data. These two suggestions, these two best practices, will reduce the amount of processing that your application is doing, and in turn, save the user's battery life. Now, I encourage you to consider how you can use Historical Accelerometer on iPhone.
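As a sketch of decimation, assuming the common community shim that makes CMSensorDataList iterable from Swift (the Sequence conformance below is not part of the SDK):

```swift
import Foundation
import CoreMotion

// CMSensorDataList only exposes NSFastEnumeration; this small shim lets us
// iterate it from Swift.
extension CMSensorDataList: Sequence {
    public func makeIterator() -> NSFastEnumerationIterator {
        return NSFastEnumerationIterator(self)
    }
}

// Decimate 50 Hz recordings down to ~10 Hz by keeping every 5th sample.
func decimate(_ list: CMSensorDataList, keepingEvery stride: Int = 5) -> [CMRecordedAccelerometerData] {
    var kept: [CMRecordedAccelerometerData] = []
    for (index, element) in list.enumerated() {
        guard let sample = element as? CMRecordedAccelerometerData else { continue }
        if index % stride == 0 { kept.append(sample) }
    }
    return kept
}
```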
Next, let's talk about DeviceMotion. DeviceMotion is the name for the sensor fusion algorithm that we provide in the Core Motion framework. Now, there are a number of things that go into DeviceMotion. The first sensor that we use in DeviceMotion is the accelerometer, and the accelerometer enables us to measure the accelerations imparted by the user, as well as the acceleration imparted by gravity. The gyroscope enables us to precisely measure the rotation rate of the device. And the magnetometer allows us to measure local fields around the device as well as the Earth's magnetic field.
Now, when dealing with the raw sensors, there are a number of challenges to keep in mind. With the accelerometer, it can be difficult to distinguish the accelerations imparted by the user from those imparted by the force of gravity. With the gyroscope, we can have bias in the measurements over time, and with the magnetometer, it can be difficult to distinguish between the local fields and the Earth's magnetic field.
And this is where DeviceMotion comes in. DeviceMotion provides 3D attitude tracking while the device is undergoing free-space motion. And it does this by fusing together all of the sensors to retain the advantages of each, while minimizing the disadvantages. And as a developer, what this really means is it enables you to focus on how you want to use the motion data rather than the mechanics of trying to get the best from the sensors.
Now, we've talked about DeviceMotion in a number of previous sessions. I encourage you to check them out. We go into details about the sensors and the DeviceMotion algorithms. But today, we're going to think about how we can use certain aspects of DeviceMotion to create immersive applications for your users.
So as a developer, when you're first getting started with DeviceMotion, the first thing that you need to consider are the reference frames. The reference frames determine which sensors are used in the fusion, and how attitude is calculated for your applications. The first reference frame, xArbitraryZVertical, fuses together the accelerometer and the gyroscope but does not use the magnetometer.
And the last three reference frames, xArbitraryCorrectedZVertical, xMagneticNorthZVertical, and xTrueNorthZVertical, use all three sensors. Now, let's talk about the first reference frame in more detail. Let's say you've got an awesome racing game where you're currently using touch controls to allow your users to steer in the game. This is great, but we could provide a more immersive experience using motion.
What we want to do is enable users to steer using their device; so when they tilt their device to the left, the car will turn left. Now, as long as the device is relatively static, we can use the accelerometer to estimate the force of gravity; and once we have gravity, we can use that to determine the tilt, or the angle offset from gravity.
One thing to keep in mind, though, is that if you were using the accelerometer on its own, certain gestures can be ambiguous. Tilting the device to the right can look the same to the accelerometer as sliding the device to the left. Now, one way you could consider dealing with this is by averaging over the accelerometer signal.
In doing so, you would reduce the short-term effects from the user and leave only the long-term effects, such as the force of gravity. However, in doing this, you would notice your application would now respond more slowly. And this is where DeviceMotion comes in. DeviceMotion means that your application doesn't need to build up filtering logic to get great performance from sensors.
xArbitraryZVertical is the default reference frame that your application will receive if you don't explicitly specify one when you start DeviceMotion updates. This reference frame is great for use cases where you want to track the tip and the tilt of the device. And for our use case in the game, the accelerometer and the gyroscope are going to fuse together to allow us to more quickly and precisely track gravity. And once we have gravity, we can translate that into tilt for our game. Now, to walk through exactly how to do this, my co-worker Ahmad is going to come to the stage a bit later and show us.
This reference frame is also great for use cases where you want to track gestures from the user. I encourage you to check out our sample application, SwingWatch. SwingWatch is an app that runs on the watch that uses DeviceMotion to track when you've made a forehand or a backhand gesture during a tennis game.
Both the sample code and the session are online, and I encourage you to check them out. So, let's say you've got another game. Let's say it's a shooter game where you allow the user to aim using virtual thumb-sticks. This is great, but we could provide a more immersive experience using motion.
And what we want to do is we want to determine where the user is pointing their device and have that translate into aiming in the game; and for that, we want to use attitude. Attitude provides the rotation from the reference frame, fixed when you first start DeviceMotion updates, to where the user is currently holding the device in 3D space. Now, one way you could consider getting attitude is by taking the integral of a raw gyroscope signal.
This integral would give you the attitude. However, this method for determining attitude would drift over time due to bias in the gyroscope, and that's where the xArbitraryCorrectedZVertical reference frame comes in. This reference frame uses the magnetometer to improve the horizontal attitude estimation that we provide; and as a developer, what this reference frame means to you is that it provides reliable attitude performance with a fixed center reference. So in the game, your users can move the device around and then come back to a known center location.
Now, there are many other ways you could consider using this reference frame. Let's say you wanted to build a 360-degree photo and video player application. Your users would be able to move their device around and then bring it back towards the center -- towards the dock overlooking the lake.
Now, another way you could consider using this reference frame is for something like a virtual-reality real estate application where you want to allow your users to look around at different parts of a room just by moving their device. Now, let's say you build this application out, and your users love it; but they want to get a better sense of in which direction the windows face, or in which direction they can expect the sun to rise. And for that, we want a world reference.
Now, you could consider using the magnetometer for this. This would provide a world reference and enable your applications to orient with respect to the world. However, using the raw magnetometer, you would find the output susceptible to disturbances. Some of these are inherent to the device, and some of these are external to the device.
This is where the final two reference frames come in. These reference frames use the magnetometer to orient the device with respect to the world, and they handle device-level magnetic effects; in challenging magnetometer situations, we can also stabilize the output. Now, choosing between these two reference frames is going to be based on the needs of your application. If you've already got data that's referenced to true north -- for example, map data -- it would make sense to use the xTrueNorthZVertical reference frame.
Now, how could we consider using these reference frames? Well, one example is things like stargazing applications, where you want to allow your users to point their device at a star in the sky and identify it. Another way you could consider using these reference frames is for things like augmented reality applications where you want to fuse the camera data with a world reference. And for that, we encourage you to check out ARKit. ARKit's session follows immediately after this one.
Now, let's come back to our virtual-reality real estate application. Let's say you built the application out, and your users love it, but you want to provide them with more features. Let's say you want to put landmarks on the horizon in the direction that they actually are. How would we go about doing this? Well, what we want is heading.
Heading gives us the direction the device is pointing with respect to north. So when the device is pointing straight towards north, heading would return 0 degrees, and as the user rotates their device around, the heading angle would update. Now, one way you could consider getting heading is by using CoreLocation. CoreLocation provides the startUpdatingHeading API, and you could fuse the heading you get from it with the data that you're already getting from DeviceMotion.
One thing to keep in mind, though, is that CoreLocation's heading can fuse in course. Course is the direction of travel for the device. Now, this may make sense for things like turn-by-turn navigation applications, but for things like augmented reality, it may make less sense. I encourage you to check out the CoreLocation session on Thursday for more details.
Now, another way you could consider getting heading is by calculating it from the attitude estimation that we provide in DeviceMotion. However, getting this right in all circumstances is non-trivial; and this is why this year, we're adding heading as a first-class property in DeviceMotion. Heading fuses together the accelerometer, the gyroscope and most importantly, the magnetometer, to give us the direction the device is pointing with respect to north.
Keep in mind that heading is only available on iOS, where the magnetic field property is available. Now, let's take a closer look at the API. Heading is only valid for the xMagneticNorthZVertical and xTrueNorthZVertical reference frames. For the other two reference frames, you'll receive a negative value back from heading; otherwise, heading gives you 0 up to 360 degrees from the X-axis of the reference frame you've chosen, either magnetic north or true north. Now that we have heading, we could use that to overlay the Golden Gate Bridge on the horizon in the direction that it actually is.
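In Swift, reading the new property might look like this (a minimal sketch):

```swift
import CoreMotion

let motionManager = CMMotionManager()
motionManager.startDeviceMotionUpdates(using: .xTrueNorthZVertical, to: .main) { motion, _ in
    guard let motion = motion else { return }
    // heading is in degrees from the chosen north; it is negative for the
    // two reference frames that don't use north.
    if motion.heading >= 0 {
        // Place the landmark overlay at motion.heading degrees on the horizon...
    }
}
```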
Now, with DeviceMotion, there are a number of best practices to keep in mind. The first is that you want to make sure that you check for the availability of a given reference frame before you start updates. And to do that, you can use the availableAttitudeReferenceFrames API. This will return you a CMAttitudeReferenceFrame bitmask that you can then AND with the reference frame that you are interested in to determine availability.
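In Swift, that bitmask AND reduces to an OptionSet contains check, roughly:

```swift
import CoreMotion

// availableAttitudeReferenceFrames() returns the bitmask; contains(_:)
// performs the AND with the frame we're interested in.
let available = CMMotionManager.availableAttitudeReferenceFrames()
if available.contains(.xTrueNorthZVertical) {
    // Safe to start DeviceMotion updates with this reference frame.
}
```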
And the next thing to keep in mind is that the choice of reference frame is key for your applications. This is going to determine how attitude is calculated in your applications, as well as which sensors go into the sensor fusion. Now, we've talked about a number of things in DeviceMotion, but let's get a little bit more practical.
Let's take some of the concepts that we've talked about in DeviceMotion and put them into practice using a game. Badger with Attitude is a game where we're going to apply DeviceMotion to translate into controls for the game, and for that, I'd like to invite my co-worker, Ahmad to the stage to walk us through this.
[ Applause ]
Thank you, John. Hello, and welcome. Today, I'm going to be taking some of those concepts that John just talked about and helping you put them into practice through a practical example. My name is Ahmad, and I'm an engineer on the Core Motion team. I'll be using the Badger app, which has been developed by our colleagues over at SceneKit.
You may have seen this app in last year's session. In it, you play a cute little badger called Bob. He sits in a mining cart rolling down some tracks, collecting gems and power-ups along the way. So we're going to take this app with existing swipe controls and use DeviceMotion to transform them into motion-based gestures.
Here's what we've got in store for you today: First, we're going to talk about the existing controls and what we're trying to accomplish. Then we'll show you some of the basics of using DeviceMotion. And finally, the algorithms we use to detect those gestures. So the Badger app allows you to swipe your finger up across the screen to make the badger jump, and if you swipe your finger down, the badger ducks and hides inside the cart to avoid the obstacles. And swiping left or right will make the badger swing to reach out for those gems.
If you've worked on beautiful graphics like these, it's a shame to have to hide them behind your finger while you play the game. So let's use the phone as our controller here and have the user fully immersed in the experience that we've built for them. So first, we'll detect if the user has rotated the device towards them to make the badger jump up. A slight bump in the device downwards will make the badger squat and hide in the cart. And tilting the device left or right will make the badger lean either way.
Now I want you to focus on those couple of points and think about them when you're looking at the sample code later, or thinking about incorporating motion into your application. As John has mentioned earlier, Core Motion allows you to interact directly with the sensors. Let's take the accelerometer, for example.
That input may look fine as long as the user is semi-stationary, but if you start walking around or get on a bus, then you're going to have to account for these additional accelerations. So with DeviceMotion, we've taken other sensors like the gyro and the magnetometer that complement the accelerometer really well, and we fuse those inputs for you so we can minimize those environmental factors and let you focus on capturing those motion controls rather than how to process the sensor input. The DeviceMotion APIs allow you to query for samples in two different ways, the push and the pull mechanisms. Let's take a close look at those two.
The push mechanism is a great way for you to detect a discrete gesture across a small window of time. In the SwingWatch app from last year's session, we used the push mechanism so that the framework will push to us updates as they are available at a fixed interval.
Then we would capture those updates and detect if the user swung their arm to perform a backhand or a forehand. This is what the API looks like. You use CMMotionManager's startDeviceMotionUpdates function; you provide a reference frame of interest and an operation queue for your motionHandler to run on as soon as those samples are ready.
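A minimal sketch of the push mechanism:

```swift
import Foundation
import CoreMotion

let motionManager = CMMotionManager()
let queue = OperationQueue()

// Push mechanism: the framework delivers samples to the handler on the
// given queue at a fixed interval as they become available.
motionManager.deviceMotionUpdateInterval = 1.0 / 50.0
motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: queue) { motion, error in
    guard let motion = motion, error == nil else { return }
    // Inspect motion.rotationRate, motion.userAcceleration, motion.gravity, ...
    _ = motion
}
```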
However, if you want to know the current state of the device, then you want to use the pull mechanism. As we'll show you later on in the Badger app for the tilt gesture, we want to make the badger lean at the same angle that the phone is at, so we can ensure we provide a responsive and smooth experience for our graphical application. The API for the pull mechanism is a bit simpler. You call startDeviceMotionUpdates, and you provide the reference frame; and whenever you're ready, you can pull the latest DeviceMotion sample from the framework.
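And a minimal sketch of the pull mechanism:

```swift
import CoreMotion

let motionManager = CMMotionManager()

// Pull mechanism: start updates without a handler...
motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical)

// ...and whenever you're ready -- say, once per rendered frame --
// pull the latest sample from the framework.
if let motion = motionManager.deviceMotion {
    // Use motion.attitude, motion.gravity, ...
    _ = motion
}
```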
We'll be releasing the sample code for you, so you'll be able to take a look at it later, but let's focus on the Core Motion parts for now. So to get started, we import the Core Motion framework and then instantiate an instance of CMMotionManager. Then we'll check if DeviceMotion updates are available on this platform.
And if you recall from John's talk, the tip and tilt gestures that we're interested in track where gravity is in the device frame. So we'll be using the xArbitraryZVertical reference frame, and we'll check if it's available on the platform. You may have noticed I did not check for the authorization here, and that's because I'm using the MotionManager API, which does not access sensitive data.
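Gathered into one sketch, that setup looks roughly like this:

```swift
import CoreMotion

let motionManager = CMMotionManager()

// Check that DeviceMotion is available on this platform, and that the
// reference frame we want is supported, before starting updates.
func motionControlsAvailable() -> Bool {
    return motionManager.isDeviceMotionAvailable &&
        CMMotionManager.availableAttitudeReferenceFrames().contains(.xArbitraryZVertical)
}
```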
For the rest of the talk, we'll assume the device is in the landscape orientation, but in the sample code, we'll show you how to detect and handle other device orientations. For the first gesture, where rotating the device towards you makes the badger jump up, we want to capture the magnitude of the rotation rate along the horizontal axis of the phone. In this case, that's the Y-axis.
So we'll be looking at the rotation rate property from the DeviceMotion object. And we chose that specifically because we're not interested in the current angle the device is making, but rather a quick change in that angle. So if we use the rotation rate, we'll be able to detect a quick pulse and make the badger jump accordingly.
This is a gesture that we're detecting across a small period of time, so we're going to be using the push mechanism for it. Let's see how that will look in the code. To start off, we'll set the update interval to 50 hertz. And you want to be careful when you set that.
You want the samples to be coming in fast enough that you can capture the gesture, but not so fast that you're increasing your computational and memory requirements. Then we'll start DeviceMotion updates using the push mechanism, providing our xArbitraryZVertical reference frame, a queue -- I'm using a standard operation queue here -- and finally, our motionHandler.
This is what our motionHandler will look like. It will get called as soon as samples are ready. The first thing we'll do is check for any errors, then grab the rotation rate from the DeviceMotion object and look at the Y-component that we're interested in. Then we store that in a buffer -- I'm using a circular buffer here -- so as the samples come in, we'll accumulate more of them. Since we're using the SceneKit renderer for this application, I chose the renderer's updateAtTime function. This gets called just before you render a new scene, and it's the ideal place for me to check the state of that buffer and then update the game.
And I'll leave it up to you to find out where in the application is the best place to do that. And then I simply check if the mean of that buffer has crossed a certain threshold and make the badger jump. Keep in mind this threshold is tunable and adjustable to your app's specific needs.
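Pulling the handler and the renderer-side check together, a simplified sketch might look like this (the buffer here is a plain array rather than a real circular buffer, it isn't synchronized across queues, and the threshold, buffer size, and sign are tunable assumptions):

```swift
import Foundation
import CoreMotion

let motionManager = CMMotionManager()
var rotationRateBuffer: [Double] = [] // stand-in for the sample's circular buffer
let bufferSize = 10                   // ~0.2 s of data at 50 Hz; tunable
let jumpThreshold = 2.0               // rad/s; tunable to your app's needs

motionManager.deviceMotionUpdateInterval = 1.0 / 50.0 // 50 Hz
motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: OperationQueue()) { motion, error in
    guard error == nil, let motion = motion else { return }
    // Rotation about the horizontal (Y) axis in landscape; the sign that
    // means "toward the user" depends on the detected orientation.
    rotationRateBuffer.append(motion.rotationRate.y)
    if rotationRateBuffer.count > bufferSize { rotationRateBuffer.removeFirst() }
}

// Called from SceneKit's renderer(_:updateAtTime:), just before each frame:
func checkForJump() {
    guard !rotationRateBuffer.isEmpty else { return }
    let mean = rotationRateBuffer.reduce(0, +) / Double(rotationRateBuffer.count)
    if mean > jumpThreshold {
        // badger.jump()
    }
}
```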
Next, we'll take a look at the second control, where bumping the device downwards will make the badger duck. For this one, we want to measure the user acceleration along the gravity vector. So we'll be looking at the user acceleration property, and here we chose that because even if the device is slightly tilted or rotated at an angle, the user acceleration is going to still look the same regardless of the attitude.
So this is one of the gestures, again, that we're detecting over a window of time, so we'll use the push mechanism as well. Since we're already set up for the push mechanism, we'll head back to our motionHandler, where we were storing those rotation rates previously. But this time, we'll pull the gravity property out of the DeviceMotion object, and the user acceleration as well.
We compute the magnitude of that user acceleration along the gravity vector and store that in a buffer. And once the mean of that buffer crosses a certain threshold, then we'll make the badger duck down and hide in the cart. So we've taken a look at two controls where we were monitoring motion across a small window of time and we used the push mechanism for that.
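A sketch of that projection (gravity and userAcceleration are both expressed in g's, and gravity has magnitude of about 1, so a plain dot product gives the component along gravity):

```swift
import CoreMotion

// Projection of user acceleration onto the gravity vector.
func accelerationAlongGravity(_ motion: CMDeviceMotion) -> Double {
    let g = motion.gravity
    let a = motion.userAcceleration
    return a.x * g.x + a.y * g.y + a.z * g.z
}

// In the motionHandler: buffer this value, and when the buffer's mean crosses
// a tunable threshold, make the badger duck into the cart.
```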
Let's take a look at our final control, where our requirements are slightly different. For the tilt control, in the simple case where the device is held in a vertical position, you can break down the gravity vector into its x-component and its y-component; and by applying simple trigonometry, you can arrive at the tilt.
But since we want this gesture to be slightly more flexible, we're going to break gravity into its y-component; and its component in the x-z plane of the device. This will allow us to tilt the phone even if the device is slightly rotated at an angle. What's different about this control is that we want to know the current state of the device and not a discrete motion that occurred.
This will allow us to make the badger lean at the same angle the device is leaning in and provide a very responsive experience. And for those reasons, we'll be using the pull mechanism for this control. Since we're already set up for the push mechanism, the framework is ready for us to pull for samples at any point in time.
So we'll go back to our renderer function. If you recall, this gets called just before you render a new scene. That makes it a perfect place for me to pull the latest sample, make a calculation, and then update the graphics. So we'll ask the MotionManager for the latest DeviceMotion sample, grab the gravity property out of it, compute the vector in the x-z plane, and arrive at the tilt as the arctangent between that x-z component and the y-component.
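A sketch of that per-frame calculation (the sign convention is an assumption; match it to your scene and to the device orientation you detected):

```swift
import Foundation
import CoreMotion

// Pull the latest sample each frame and compute tilt as the angle between
// gravity's component in the device's x-z plane and its y component.
func currentTilt(using motionManager: CMMotionManager) -> Double? {
    guard let motion = motionManager.deviceMotion else { return nil }
    let g = motion.gravity
    let xz = sqrt(g.x * g.x + g.z * g.z)
    // Illustrative sign convention: lean direction follows gravity's x sign.
    let sign: Double = g.x >= 0 ? 1 : -1
    return sign * atan2(xz, -g.y)
}
```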
Don't forget to let the framework know that you're no longer interested in DeviceMotion updates if they were previously active. This is a great thing to do when your game is paused or has ended to make sure you're not wasting more battery than you need. So here you see the results.
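For example (motionManager being the instance from the earlier sketches):

```swift
// When the game is paused or over, stop updates to save power.
if motionManager.isDeviceMotionActive {
    motionManager.stopDeviceMotionUpdates()
}
```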
The user is able to play the game by tilting the device from side to side to make the badger lean either way, and tipping the device towards them makes the badger jump up, and finally, a small push of the device downwards will make the badger squat and hide inside the cart. It's pretty cool, we've taken an app with swipe controls and used DeviceMotion to replace them with motion-based ones. I'm excited to see how far you can push the DeviceMotion APIs.
[ Applause ]
So to wrap up some of the key points that we talked about today, we encourage you to look at the authorization API and check for your app's authorization if you're using one of the sensitive APIs. While you might have a use case for using the sensor data directly, we encourage you to look at the DeviceMotion APIs, because our sensor fusion algorithms handle the most common cases.
DeviceMotion eliminates those environmental factors so that you are able to focus on the motion controls for your user. The APIs provide you a smooth and consistent experience across all our supported devices, and we've made enhancements this release to make sure that when you query for those updates, you do so in a very power-efficient manner.
Remember the two different ways to query for updates: Use the push when you want to detect a gesture across a small window of time, and use the pull when responsiveness is key and you want to know what the current state of the device is. Here are a couple of the sessions that we think you might be interested in.