Integration • 47:01
Find out how your application can leverage the iPhone's built-in hardware and use detailed device information to provide a revolutionary user experience. Learn to identify an individual iPhone or iPod touch, and pinpoint a user's whereabouts at runtime. Use the built-in accelerometer to create unique interfaces for applications and games, and add a personal touch to your application by incorporating iPhone's camera and photo library.
Speakers: Scott Herz, Ron Huang
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
So, howdy everybody. I'm Scott Herz. I'm an engineering manager for the iPhone. And today we're going to talk about some of the hardware APIs that we have, specifically our location frameworks, the accelerometer, and the camera. So, I'm sorry if it sounds like I'm talking quickly. It's not because I'm nervous. Well, I'm nervous, but more importantly, there's 90 minutes until beer. So.
So we're going to get through this fast. So yeah, we've got a lot of cool features, and we want you guys to use them. So specifically, we've got our camera, right? Two megapixel camera. We've got our core location. Got our radios that we can use to figure out where the device is.
We have our accelerometers, which we use to do all sorts of cool stuff. And we've already seen-- I've been here, what, four days now? Seen all sorts of cool stuff that you guys are doing, so pretty excited about that. So because some of these are sort of hardware APIs, sometimes the support for them in the simulator is a little spotty. So when we come to situations like that, we'll sort of let you know.
So first up, our image picker. So our image picker lets the user choose from a number of sources. So what are some of these sources of images? So one source of images is the camera. So if you were to set up a picker with a camera source, it'd look like this. It probably looks a little familiar. This is what the address book uses.
Another source is the Saved Photos album. So if you've-- maybe you remember from the keynote, Steve mentioned that today, with the 2.0 software, you can save photos from Safari or from Mail. And your applications can do that too. And when you do, they'll go into the Saved Photos album. So that's another source. So the last source we have is the photo library source. So if the user synced down a bunch of photo albums from iTunes or iPhoto, that would be a source. They could go and pick something from there.
So how do you use this thing? What are the classes that you use? It's pretty straightforward. There's one class, UIImagePickerController. It's built on top of all this cool UIViewController stuff you've been hearing about, and I hope you've been going to those sessions. As such, it's kind of designed to be used as is.
You really don't have to screw with it that much at all, really. All you really need to do is define an object that conforms to the UIImagePickerControllerDelegate protocol. And then we'll send that delegate messages when interesting things happen, like when a user chooses an image or hits Cancel.
So when you're using this thing, you'll find yourself kind of doing three things. The first thing you'll do, that you'll definitely want to do, is check the source availability. So what does that mean? Well, so say you're an application and it has some use for the camera. You've got to keep in mind, though, that you might not be installed on just iPhones. You might also be installed on iPods.
So what you need to do before you try to put up a camera picker, or maybe even show like any sort of other user interface that you may have that implies that there's a camera, check the source availability. So we'll show you how to do that in a little bit. So next you'll assign a delegate object, which I just alluded to a little bit earlier. And then using the UI view controller mechanisms there, you'll present the controller modally so it slides up from the bottom all nice like.
So here's some sample code on how to do this. It's pretty straightforward. You will, as I mentioned, you'll check your source availability first. So in this case, we want to present a camera picker. So we're going to look and make sure that-- oh, yeah, there's a camera here.
So once we've passed that check, we can go ahead and alloc init our image picker controller. So keep in mind, you know, standard Cocoa rules apply, right? So we've alloc init-ed this object, so it's up to us to release that object after we're done with it. And I'll show you a good place to do that in a little bit.
So now we set our source type. So in this case, we want to set the source type as being camera. We'll put up a camera picker. And we'll specify the delegate as being self. So what this means here is that we're saying that this-- in this example, I'm saying that we're doing this from a UIViewController object, which makes sense, right? It's typically how you would kind of build your application.
We want to have maybe a button, and when you press it, this camera picker pops up. And so a great way to do that is from your own View Controller. So what we're saying is, though, that this View Controller that you have has implemented the delegate protocol that I mentioned earlier.
So once you've done all that, you're good to go, and you can present the view controller. And here again, like I say, we use the standard view controller way of kind of doing that. So we're presenting the picker, and we're doing it in a nice animated way so it slides up from the bottom.
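In code, that whole flow looks roughly like this minimal sketch, assuming a UIViewController subclass that adopts the picker's delegate protocols (the method name showCameraPicker is just for illustration):

```objc
// A sketch of presenting a camera picker, assuming this lives in a
// UIViewController subclass that adopts UIImagePickerControllerDelegate
// (and UINavigationControllerDelegate).
- (void)showCameraPicker
{
    // 1. Check source availability -- there may not be a camera (e.g. iPod touch).
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return;
    }

    // 2. Standard Cocoa rules apply: we alloc/init it, so we release it later.
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.delegate = self;

    // 3. Present it modally so it slides up from the bottom.
    [self presentModalViewController:picker animated:YES];
}
```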
So that'll happen, and the user will pick an image. And because there's a few different sources, there's a few different UIs, like I showed you earlier, one thing that is common among them, however, is that they will be given-- the user will be given an opportunity to either cancel and tear down the picker, or actually choose the image.
So in this case, we've got these buttons here. What happens when you hit those buttons? Well, then your delegate gets involved. So there are these two methods in the delegate. It's pretty straightforward. There's either the top one, which covers the case where they actually chose the image, or there's the bottom one, where they just hit Cancel.
So let's look at the first case, the Accept case. So in the Accept case, you're going to do something with the image that came in. So see that didFinishPickingImage:? That image will be the full-resolution image. So if it was a camera picker, it'll be that full giant 2 megapixel image.
So at this point, it's kind of up to you to-- you want to do something with the image, but you need to keep in mind that the picker's still up there. And so if you do something too long, then your image picker is going to feel kind of sluggish. So it's best to-- if it's a quick operation, do it, or get ready to do it later. And once you've done that, go ahead and dismiss the view controller and slide it back down.
And then as I mentioned earlier, since we alloc/init-ed this view controller, it's up to us to go ahead and release it. So do that here. So in the cancel case, it's even easier, right? There's no image to deal with. So all you got to do is basically tear down the picker, and then go ahead and release it, because we're done.
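As a sketch, those two delegate methods might look like this (handlePickedImage: is a hypothetical helper for whatever you do with the image):

```objc
// The two delegate methods described above. The picker is still on screen
// when these fire, so keep any work here quick.
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingImage:(UIImage *)image
                  editingInfo:(NSDictionary *)editingInfo
{
    [self handlePickedImage:image];   // hypothetical helper -- do something quick, or defer it

    // Slide the picker back down, then release it since we alloc/init-ed it.
    [self dismissModalViewControllerAnimated:YES];
    [picker release];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    // No image to deal with -- just tear the picker down and release it.
    [self dismissModalViewControllerAnimated:YES];
    [picker release];
}
```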
So there's an interesting kind of property on the image picker. This allows image editing. So if you turn that on, set it to true, what that does is kind of enable this crop UI that we have. And with the crop UI enabled, the user is able to pinch and zoom and scan around and do all the kind of cool iPhone-y stuff.
So if they've done that, and then they hit Choose, the delegate method that gets called is the same one you saw before. But this editing info dictionary that I kind of glossed over is a bit more interesting now. In addition to that, the image that you'll receive in this case is the cropped image, the one that fit in the little box.
So let's say, though, that, well, the cropped image is cool, and I'm definitely going to do something with the cropped image, but I'm also interested in that great big image. So you can still get at that stuff. This editing info has these two keys. And one of them contains the original image, and the other one contains the actual crop rect that the user-- where they finally kind of landed there. So you can use that.
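Pulling those two values out of the editing info dictionary might look like the sketch below; the key names shown are what I'd expect from the UIImagePickerController documentation, so double-check them there:

```objc
// A sketch of reading the original image and the final crop rect out of the
// editingInfo dictionary passed to the delegate. Key names from memory --
// verify against the UIImagePickerController header/docs.
UIImage *originalImage = [editingInfo objectForKey:UIImagePickerControllerOriginalImage];
NSValue *cropRectValue = [editingInfo objectForKey:UIImagePickerControllerCropRect];
CGRect cropRect = [cropRectValue CGRectValue];
```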
So I mentioned before when we were talking about the image, get your image, do something cool with your image, and then get done with it. That's very, very, very important. I don't know if you were here for the previous talk when they were talking about performance, but these images are quite possibly some of the biggest pieces of data you're going to come across, especially the camera images.
You can't really, there's no room on the system to have just lots of them kind of hanging around. So definitely get the image in, do whatever it is you're going to do with it, you know, find all the faces and put mustaches on them, something like that, and then release them. Get them out of the way. Otherwise, as was put very politely, like you will be asked to like leave the party. We don't want that.
So as was mentioned in the keynote, your applications can save photos or images to the Saved Photos album, just like Safari and Mail can. As a matter of fact, this is the same function they use. So all it does is it takes an image and writes it to the photo album.
And what's cool about that is then it's in this album, and then when the user goes ahead and syncs their device, it will get pushed back up either through Image Capture or iTunes, iPhoto. And it's on their desktop. So it's kind of a neat way to get images from the device to the desktop. If you're interested, there's a completion callback, so you can hear when it's actually done. Because sometimes it can take a few seconds.
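A minimal sketch of that call and its completion callback (the callback selector has to have exactly this shape):

```objc
// A sketch of saving an image to the Saved Photos album, with the optional
// completion callback so you hear when the (possibly multi-second) write is done.
- (void)saveImage:(UIImage *)image
{
    UIImageWriteToSavedPhotosAlbum(image, self,
        @selector(image:didFinishSavingWithError:contextInfo:), NULL);
}

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    if (error != nil) {
        // The write failed -- report it or retry as appropriate.
    }
}
```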
So this is available in the simulator. You can bring up a few of the sources, not the camera source. There's some-- I don't know if you've played with it yet. There's somebody's pictures of somebody's graduation in there. I don't know where it came from, but you can play with them.
So what did we learn about the image stuff? It's pretty straightforward. The couple of things you really need to think about are: always check your source availability. So don't sort of give the illusion that you can bring up a camera picker on an iPod touch. That'd be wrong. Make sure that if your delegate used any resources, like the image picker itself, that it cleans them up. And please, please, please, please don't hold all these images around. It's a recipe for disaster.
So up next, Ron's going to tell us a bunch about Core Location. Thank you, Scott. Hi. So just about now, it's only about an hour until the beer bash. So for now, let me help you kill some time. My name is Ron Huang, and I'm a software engineer on the iPhone team. Today, I'm here to talk to you about Core Location, and together, we can learn how to make applications location-aware.
Let's say after tonight's beer bash, you still want to go outside and party some more. Wouldn't it be nice if when you pull the iPhone out of your pocket, we just know where you are? So instead of entering a zip code, we automatically list all the coolest hangouts that are near you.
Or let's say you want to go watch a movie. Wouldn't it be nice if the iPhone just knows where you are again and lists all the local movie theaters, sort them based on their distances away from you, and show the corresponding show times? That would be really cool.
Or let's go even one step further. Let's say you have an app, and you want it to behave differently depending on where the user is. For example, if the user has left work and gone home, maybe you want to change ringtone automatically. Or maybe you want to change wallpaper to a family picture of his. That would be really cool. So with Core Location, you can do all of this very easily now.
And I think there are a lot of apps where location is going to be a very central part of it. You know, it's going to be its feature number one or feature number two. But I think there are also a lot of other apps where simple touches of location can make your app a lot more polished, a lot smarter to the user, and overall just helps the user type less. So let's get started.
What exactly is Core Location? Well, many of you may have actually seen it working already, and that's because Core Location is used by Maps. When the user hits the bottom left button, Maps uses Core Location to find out where you are. Maps takes the raw latitude and longitude info and paints a circle or a ring on the map indicating to you where you are.
So in addition to latitude and longitude, Core Location also serves up an accuracy value. This accuracy value is basically the way we convey how confident we are in the location provided. So Maps takes this value and uses it to determine the size of the ring drawn on the map.
Well, now you're going to ask, how does Core Location know where you are? Well, we do several things. The first thing we do is that we look around for cell phone towers. We find out what cell phone towers are near you, we look them up in a database, and we triangulate your current location that way.
So these kind of locations are very good to tell you and give you a general idea of where the user is. They're usually within one to two kilometers of the user's actual location. But we can do better than that. We also look around for Wi-Fi access points, and using that again, we triangulate your current location. These locations are much more accurate. They're usually about 100 meters away from the user at most.
So of course, using Core Location, you now have access to the GPS. Steve announced in the keynote on Monday that the new iPhone 3G will have GPS built into it. So of course, using Core Location, you have access to that. And so now, via this location framework, you can get very, very accurate positioning.
All these cool technologies work very closely with each other inside Core Location. We definitely use them to bootstrap each other so that we get you a location as fast as possible. We also use them to cross-check each other so that all the locations we provide you can be as reliable as possible.
Of course, they also complement each other very well. And imagine if you were outdoors with a great open view to the sky, getting satellite signals and having GPS locations. Then you start walking indoors. Well, as you start walking indoors, you may lose the satellite signals. In which case, Core Location will automatically transition into positioning using Wi-Fi or cell. So your app doesn't have to do anything. It doesn't have to worry about anything. It's all handled by Core Location.
So all these new exciting features are now put together into the new Core Location Framework, the same API that is used by Maps and also by the Camera app, which geotags your photos automatically. The same API is now put into Core Location Framework, and it's available to you in the iPhone SDK today.
So now I want to show you this SDK, I mean this API. Core Location is something we've designed to be very simple and very easy to use right from the get-go. So as such, there are just three core things that you need to watch out for. First, we have the CLLocationManager class. As the name suggests, you'll be using this class to manage how you want to receive your location updates, and also set up your quality of service parameters.
Second, we have the CLLocation class. This class is really more of a data wrapper class, and it has a bunch of properties on it, so you have direct access to the latitude and longitude info, and also accuracy and timestamp and those kinds of data. Finally, we also define a protocol. This is CLLocationManagerDelegate. For those of you unfamiliar with OS X development, a delegate is simply a set of callback functions. And Core Location is going to invoke these callback functions when there are events to be delivered to you.
Now let's take a look at some of these events. Inside our delegate protocol, we define two optional methods you can choose to implement. The first one we have is locationManager:didUpdateToLocation:fromLocation:. So of course, we're going to invoke this callback when we have a new location for you. As you can see in the API, we pass you an instance of a CLLocation object, which represents your current location, and we also pass you another instance that says where you last were. So together, you can very conveniently do any differential checks if you wanted to.
The second callback we have is locationManager:didFailWithError:. And of course, this callback is used to handle any error conditions we may encounter. And we also pass along to you an error code when this happens. So I'll get back to the error codes and how you should handle them a little bit later. But for now, just notice the asynchronous nature of these callback functions. And that is because it takes time for us to find out where you are.
It takes time for us to scan and look up the cell phone towers or Wi-Fi access points. And it definitely takes us time to fire up the GPS, lock in satellites, and determine where you are. Therefore, we do this for you in a background thread. And when we have location available for you, we're going to invoke your callback function on your main thread.
Second thing I want to point out is that Core Location doesn't necessarily update you with locations periodically on a set interval. Now I know many applications out there are very used to the GPS spitting out a location every once in a second. But Core Location doesn't work that way, so you don't want to be using our Location updates as a central heartbeat to your apps.
So that's how you receive locations. How do you start them, though? Well, it's very, very simple. You allocate and initialize an instance of the CLLocationManager class. You do this just like you would with any other Cocoa class. The second thing you do is you assign a delegate property to yourself, simply because you will be the one implementing two functions that we just looked at.
Finally, you just call one single API: startUpdatingLocation. Once you do that, Core Location will begin invoking your callbacks when there are events to be delivered. Now the event we all care about is, of course, location updates. And again, when this happens, we're going to invoke the callback locationManager:didUpdateToLocation:fromLocation:. So now I want to show you a sample implementation of this callback.
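Before the callback itself, here's what those three start-up steps might look like as a minimal sketch (the locationManager property used to hang onto the manager is just an assumed name):

```objc
#import <CoreLocation/CoreLocation.h>

// A sketch of starting location updates, assuming this object adopts the
// CLLocationManagerDelegate protocol and has a retained locationManager
// property (assumed name) so it can stop updates later.
- (void)startLocation
{
    CLLocationManager *manager = [[CLLocationManager alloc] init];
    manager.delegate = self;            // we implement the two delegate callbacks
    [manager startUpdatingLocation];    // callbacks will arrive on the main thread
    self.locationManager = manager;
    [manager release];
}
```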
As you can see in the code behind me at the bottom, we just have direct access to the latitude and longitude information. We do so by accessing the coordinate property on the CLLocation class. So it's all very simple. There are a couple of things I want to point out. First is that Core Location can and will send you cached locations.
These cached locations can be anywhere from a few minutes old, a few hours old, or maybe even a few days old. So of course, depending on the type of your app, this may or may not be very helpful. Let's say you just have a widget and you want to show the local weather automatically based on where the user is. And of course, locations determined within the past hour or so are probably good enough for such a use case.
But let's say if you have a cool social app and you want to track all your friends, show where they are real time, display them on the map. Well, for those kinds of apps, you probably need more real-time and more up-to-date locations, in which case you can use CLLocation's timestamp property and filter out anything that's too old. So as you can see in my sample code here, I'm simply discarding anything that was determined over 10 seconds ago.
The next thing I want to point out is the horizontalAccuracy property on the CLLocation class. So as you know, we have all these different technologies that combine together and find out where you are. Now the accuracy levels of these different technologies vary. So if your app requires very accurate locations only, you want to check this property and use it to filter out anything that's not good enough for you.
We report the accuracy to you in meters. Think of it as the radius to a circle, you know, centered at the coordinate that we specified, the latitude and longitude. And basically, we're saying that a user is somewhere within that circle. So, as you can see in my sample code here, I'm discarding anything where we're less certain than 100 meters.
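Put together, the callback with both of those filters might look like this sketch (the 10-second and 100-meter thresholds are just the example values from the talk):

```objc
// A sketch of the update callback with the two filters just described:
// drop stale cached fixes, then drop fixes that aren't accurate enough.
- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation
{
    // Discard anything determined more than 10 seconds ago.
    NSTimeInterval age = -[newLocation.timestamp timeIntervalSinceNow];
    if (age > 10.0) return;

    // Discard anything where we're less certain than 100 meters.
    if (newLocation.horizontalAccuracy > 100.0) return;

    CLLocationDegrees latitude  = newLocation.coordinate.latitude;
    CLLocationDegrees longitude = newLocation.coordinate.longitude;
    // ...do something with latitude/longitude...
}
```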
So that's how you receive and handle your locations. Now I want to circle back and talk about CLLocationManager, that class itself. As you know, Core Location runs on the iPhone, and so in contrast to a desktop system, you have very different power constraints and of course battery limitations. So we've added a few properties on the CLLocationManager class to help make your app more power friendly. Now I want to show this to you. The first property we have is desiredAccuracy.
So again, all these different technologies have different accuracy levels, but more importantly, they have different power consumption characteristics. Whereas a cell or Wi-Fi location is basically a scan followed by a database lookup, it definitely eats a lot more battery for us to power up the GPS, sync up with the satellites, and do the advanced math to find out where you are. So by setting the desired accuracy property, you're allowing CoreLocation to strike the best balance between accuracy and power consumption for your particular use case.
Now, I think that many apps actually don't need pinpoint accuracy. For instance, the weather widget I talked about, or if you want to show different things if the user's at work versus at home. Those things only need very general locations.
And I think even apps that really need pinpoint accuracy don't actually need it all the time. Let's say you have a mapping application, and you have several different zoom levels for the user. Now imagine that the user is zoomed all the way out. So basically, we're looking at the entire United States.
At that time, a very approximate location is probably good enough. Then as the user begins zooming in, when you start seeing the different states, and then the cities, and then finally the streets, you can adjust the desired accuracy property on the fly to match up with your zoom level. This will allow us to save a lot of power, so please use this wisely. One thing I do want to point out, though, is that you're setting the desired accuracy here. The actual accuracy is reported to you with each CL location object inside the callbacks.
We have another property for you. That is the distance filter. Let's say you have an app, and you want to show the local news based on where the user is. Now for something like this, you don't really care if the user is moving from one block to another, but really if the user is moving from one city to the next. In which case, you can use distanceFilter and set a movement threshold in meters, and by doing so, you'll prevent Core Location from poking at your app over and over when you don't really care about these very fine movements.
And also, more importantly, you're allowing Core Location to throttle the underlying technologies so that we can save power whenever possible. Of course, the biggest power saver is definitely remembering to stop the service when you don't need it anymore. To do so, you simply call the API "Stop Updating Location." You can always restart it later when you do need it.
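Setting those knobs, and stopping the service, might look like this sketch (the accuracy constant and the 1-kilometer threshold are just illustrative choices):

```objc
// A sketch of the power-related settings just described.
- (void)tuneLocationManager:(CLLocationManager *)manager
{
    manager.desiredAccuracy = kCLLocationAccuracyHundredMeters;  // general fixes are fine here
    manager.distanceFilter  = 1000.0;  // only notify us for movements of ~1 km or more (meters)
}

- (void)stopLocation:(CLLocationManager *)manager
{
    // The biggest power saver of all: stop the service when you don't need it.
    [manager stopUpdatingLocation];
}
```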
So those are the fun parts of Core Location. Now, as I promised, I'm going to circle back and talk about the error cases. When there's an error condition, we're going to trigger the callback locationManager:didFailWithError:. And we're going to pass along to you an error code. The first error code I want to talk to you about is something many of you may have actually seen screenshots of on the web.
When an app tries to use the Core Location API to find out where the user is, we're going to pop up a dialog on the screen to let the user know about it. Now the user can hit OK, in which case they'll acknowledge the fact that your app is going to know where they are, and they'll get an enhanced experience out of that.
So you start getting location updates, just like you normally do. But the user can, of course, hit Don't Allow. If they do that, we're going to send you the error code kCLErrorDenied. Once you get this error code, Core Location will not send you any location updates. Of course, we're doing this to protect our users' privacy.
And it's important to know that we're tracking each application individually, and that every single application on the system is subject to the same user approval process. As you can see in the screenshot behind me, even Maps pops up the same dialog. So we're going to show this dialog each and every time your app calls the Core Location API, unless the user has hit OK twice in a row for your particular app.
The second error code I want to point out is not really an error, but more like a warning or progress indicator. If Core Location cannot determine your current location, we're going to send you the error code kCLErrorLocationUnknown. Now, this is most likely just a temporary situation. It could be because the user is indoors, and therefore we can't get a lock on the satellites, or maybe because the cell phone towers and Wi-Fi access points nearby are simply not known in our database. It's important to know that when this happens, Core Location will still keep trying for you in the background, and when the user moves to a more favorable spot, we will update you with a new location.
There's one case I want to point out, though. On the iPhone, we have this thing called airplane mode. When the user puts the iPhone into airplane mode in flight, we are going to turn off all of our RF radios, but the user can still use the phone as an iPod, as a game player, or whatever.
When under airplane mode, we're not going to be registered to the cell network. We're going to have Wi-Fi turned off by default. We're also going to disable the GPS in airplane mode. So under this situation, you probably will not get a location, and it's probably something you want to watch out for and keep an eye on in your app.
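Handling those two error codes in the failure callback might look roughly like this sketch:

```objc
// A sketch of the error callback, handling the two codes described above.
- (void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error
{
    if ([error code] == kCLErrorDenied) {
        // The user hit "Don't Allow" -- no more updates will come, so stop the service.
        [manager stopUpdatingLocation];
    } else if ([error code] == kCLErrorLocationUnknown) {
        // Usually temporary: Core Location keeps trying in the background,
        // so just wait for the next update.
    }
}
```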
So that's a quick overview of the new Core Location framework. We hope you're all very excited, and will begin experimenting and incorporating Core Location into your apps. Since location is so intrinsic to the actual hardware, we have limited support for it on the simulator. So we encourage you guys to load up your app, put it on your iPhone, and go outside and run around with it.
As you start adding location into your app though, if for some reason you're not getting any location updates, you should go online to the iPhone developer website and download our sample app, and just do a quick sanity check and make sure that your current location is actually known in our database.
So with that said, tomorrow we have a lab session at 10:30 in the morning. Myself and other framework developers will be there to answer any of the questions you have. And even better, the iPhone Maps application engineers will be on site as well. They were really the first ones to use our API, and they worked very closely with us to come up with the current set of behaviors and feature set.
So I think it will be invaluable for you to be able to talk to them and have them share their experiences from when they developed the Maps application using the new Core Location API. So with that said, I'll hand you back to Scott to finish up with accelerometers. Thank you.
Accelerometers. So first off, before we get too in-depth into accelerometers, what is an accelerometer? Well, very simply, it's this little tiny piece of silicon that sits inside your phone, your iPod touch. And what it does is it sits there and it experiences force. So let's take a look at this example here that we have up. Let's say that it's an iPhone, and it's docked. It's sitting straight up. So what's happening is it's experiencing gravity along an axis here.
If we sort of tilt the device, then gravity is going to be experienced along two axes. And so what the accelerometer does is take that information and just package it up and send it on up through the stack. And so that's where we get to do cool stuff with it.
For example, we use it to figure out what is the overall orientation of the device. Is it portrait? Is it landscape? And then given that information, the applications like Safari here do things like reorient themselves, right? They'll rotate their window, get a little wider. Or maybe in the case of our music player application, maybe you're in portrait mode, you're in a nice sort of traditional now playing screen, right? And then when you rotate it, we go to this crazy cool cover flow thing.
So that brings up kind of an interesting point in the sense that there really are sort of two different types of orientation on the device, or on the iPhone, in the SDK. There's the physical device orientation, which is, you know, it's tied to the device. Is the device pointing straight up? Is it landscape? Is it sitting up on a table? And then there's the interface orientation.
You know, what is the status bar doing? What is the nav bar doing? What are the button bars doing? And that's kind of different. So a way to kind of think about this, or a place where you can kind of see this in the wild, is the photos application.
So if you bring up photos and you go to a photo, and you sit there and you spin your device around, you'll notice that the photo reorients itself to match, you know, which way you're looking at it, which way is up. But if you tap it and then bring up the user interface, you'll notice that the user interface is always portrait. We don't change the interface orientation in that case. So that kind of shows that there's a, you know, even though the device may be doing one thing, the user interface may be doing kind of something else. And you can -- your application can use both.
So to get at the physical orientation of the device, you'll of course use UIDevice, right? It's our one-stop shop for all things device. So the first thing you'll need to do is tell it that you're interested in accelerometer events, you're interested in orientation changes. So what you'll do is you'll begin generating device orientation notifications. And what that does is tell us, okay, someone's interested. We can go and power up the accelerometer hardware.
And then once you've done that, you can either register for the notification, or you can poll and say, OK, I'm going to do something kind of interesting here. What is the orientation of the device? Now, if you hadn't started it, if you hadn't called that beginGeneratingDeviceOrientationNotifications call, then you'd get unknown back, because we don't know yet. Chances are that we've not powered on the hardware yet.
So it's important that you do that. Now, just as important is stopping it when you're not interested in getting these notifications anymore. The accelerometer takes power. It doesn't take a ton of power, but it takes some power. So we encourage you to end generating these device notifications when you're not interested in them anymore.
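A sketch of that start/stop flow (orientationChanged: is a hypothetical notification handler):

```objc
// A sketch of watching the physical device orientation via UIDevice.
- (void)startWatchingOrientation
{
    UIDevice *device = [UIDevice currentDevice];

    // Hint to the system that someone cares, so it powers up the accelerometer.
    [device beginGeneratingDeviceOrientationNotifications];

    // Either register for the notification...
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(orientationChanged:) // hypothetical handler
                                                 name:UIDeviceOrientationDidChangeNotification
                                               object:nil];

    // ...or poll whenever you need to know.
    if (device.orientation == UIDeviceOrientationUnknown) {
        // The hardware may not have produced a reading yet.
    }
}

- (void)stopWatchingOrientation
{
    // Just as important: stop when you're done, so the hardware can power down.
    [[UIDevice currentDevice] endGeneratingDeviceOrientationNotifications];
}
```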
So for the user interface side, it's a little bit different. There's a couple of classes you can use. If you're interested in the status bar orientation, which kind of reflects what the interface is doing, you can get the shared UIApplication object and then get the status bar orientation. And like I said, this really defines what the user interface is doing, not so much what the physical device may be doing.
So what I think people are mostly going to do, though, to incorporate orientation into their applications is use the awesome support we have for it in UIViewController. They've made it very, very simple. All your view controllers need to do is override this very small-looking method, shouldAutorotateToInterfaceOrientation:. And what that's going to do is pass you an orientation. And it's going to say, hey, like, do you do landscape? And you're going to go yes or no.
And then if you set up your resize flags on your views, and all your other view controllers are kind of on board with this, because they'll get asked the same thing, it'll go ahead and take care of rotating your windows and resizing your views and all that other stuff. It's very cool, and I encourage you to do it this way instead of trying to roll your own.
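That override is about as small as it sounds; a sketch:

```objc
// A sketch of the UIViewController override just described: answer yes or no
// for each interface orientation and let UIKit handle the rotation and resizing.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    // For example: support portrait and both landscape orientations.
    return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
}
```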
So, we've definitely done a lot of cool stuff with general device orientation, portrait versus landscape. But we're definitely doing lots of other cool stuff, and you guys are definitely doing lots of cool stuff with the raw data. So we definitely want to show you how to get at the raw data.
Again, it's all built into UIKit. We don't hold anything back. We give you all three axes of data. We let you set the sample rate. And as these events come in, we deliver them to a delegate of your choosing. So we'll show you how to do that in a minute.
Before I do though, I want to give you kind of a little picture of how the iPhone feels its world. The iPhone, in this case let's say it's sitting up in its dock again, would experience gravity along the negative y-axis. If I took the same iPhone and I put it face up on a table, gravity would then be felt along the negative z-axis.
So as we kind of go on here, I'm going to do a few little scenarios, and so kind of keep in mind that, you know, plus y is up. So the classes that you'll use are UIAccelerometer, which is the shared object that represents the hardware. And what that will do is generate UIAcceleration objects, which it will send to your UIAccelerometerDelegate.
So to start the flow of these things, what you'll do is grab the shared accelerometer and specify an update interval. So in this case, I'm saying I want an event about every 50th of a second. And then you'll specify a delegate, just like we've done in all these other cases of delegates. This is an object that you say is going to implement this protocol. And when we get an event, we're going to call a message on it.
So, just as before, when we told UIDevice to begin generating those giant-named notifications, this is a hint to the system to say, oh, someone is interested in accelerometer information. And if it's not already on, it'll go ahead and turn on the hardware. And as soon as the hardware's on, we're going to start sending events to your delegate.
And when we do, it's going to look like this. They're going to come in on your delegate method, accelerometer:didAccelerate:. So the acceleration object that you'll get handed back is very straightforward. It's basically a carrier for the X, Y, Z axis data, and here's where you would go off and do something very cool with these numbers. So it's pretty simple, that part of it.
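A sketch of that whole flow, from setting the rate to handling the events:

```objc
// A sketch of the accelerometer flow: grab the shared object, set the rate,
// set a delegate, then handle events as they arrive on the main thread.
- (void)startAccelerometer
{
    UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
    accelerometer.updateInterval = 1.0 / 50.0;  // about every 50th of a second
    accelerometer.delegate = self;              // we adopt UIAccelerometerDelegate
}

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    // The acceleration object is just a carrier for the x, y, z axis values (in g's).
    UIAccelerationValue x = acceleration.x;
    UIAccelerationValue y = acceleration.y;
    UIAccelerationValue z = acceleration.z;
    // ...do something cool with x, y, z...
}

- (void)stopAccelerometer
{
    // Nil out the delegate when you're done so the hardware can be turned off.
    [UIAccelerometer sharedAccelerometer].delegate = nil;
}
```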
So are there any caveats? There's kind of two. There's only one delegate per application. So if you find that your application has-- one part of your app needs accelerometer stuff, and another part needs it, it's up to you to fan it out to those places. The other kind of interesting thing is these are delivered on the main thread.
I mentioned a little bit earlier that you can set the update, the sample rate. We've learned, while kind of playing with the stuff and implementing it for the rotation case and doing a lot of sample code and talking to folks, that it's best to really only ask for what you need. In the case of trying to detect the overall orientation of the device, you can do that in 10 to 20 hertz. There's no real point in asking for anything more.
If you're trying to maybe simulate like an analog pad, where we've seen a lot of game developers doing that, maybe do it a little closer to frame rate. However, if you're looking for things like very sudden taps or like a pedometer kind of thing or things where it's like the event is very quick, then it makes sense maybe to try to kind of walk up the scale and ask for events a little quicker.
So lastly, just like starting the event delivery is obviously important, stopping it is just as important. Because if you're not using these events, there's no point in us generating them and sending them to you and keeping the accelerometer hardware active. So if you're in a part of your application that doesn't need them, or you're done, definitely nil out your delegate. And that'll be a hint to the system that it can turn all this stuff off.
So that's how you get the data. So now I want to spend a little bit of time talking about what you might do with the data. So we're going to cover a few simple filters. And they're actually kind of more accurately described maybe as sort of simple approximations of these filters. We've learned that you can do some very powerful things with some very simple techniques, and we're going to share those today.
But definitely, if you find yourself needing something a little more sophisticated, like we recommend Wikipedia, they've got a pretty cool signal processing site going right now. You can definitely learn a lot more there. So the two filters that we want to show you are a simple low-pass filter and a simple high-pass filter.
So the low-pass filter is great for limiting the signal to just its constant parts. So for example, if you want to isolate gravity, or maybe slow, constant movements, a low-pass filter works well for that. If instead you're interested in what a low-pass filter would consider noise, maybe you want to isolate taps or quick shakes, then you would use a high-pass filter to focus on just those parts of the signal and cancel out the other effects, like the effect of gravity.
So we have this neat application, I hope you've all had a chance to play with it, where it shows you the raw data coming off the accelerometer. It can be a great tool to work through these kinds of things. So this device here is sitting face up on a table. So as expected, gravity is pulling on it, and the accelerometer is experiencing a force of negative 1g along the z-axis.
So we're doing this in the time domain, right? So as time progresses, the samples move from right to left. But when we're applying these filters, we want to move into the frequency domain. And so we do that through a Fourier transform, traditionally. And when we do that, this graph kind of changes, right? Instead of it looking like this, it starts to look like it looks like this.
So what does this mean? What's the frequency of this graph? It's kind of a trick question. There is no frequency to it, right? It's constant, fortunately. Otherwise, we'd all be floating around the room. So, it's just there, and as a result, all the energy that we're feeling in the accelerometer is all piled up at zero.
Now, if we were going to take the same device and shake it like crazy, we would see a different set of signals, a different graph. We'd see something with a lot more cycles in it, a lot more frequency in it. And when we do this same kind of analysis, we would see... Something kind of the same, but something kind of different.
Gravity's still there, still piled up at the middle. And if you look at this graph, if I get out of your way, you can see that along the z-axis, all in all, it's still very shaky, but it's kind of shifted down by about a g, which makes sense, because gravity hasn't gone away just because we shook the phone. But what is there, what's new, are these sort of outlier frequencies, these new higher frequency instances. So what are those? Well, those correlate directly to how quickly you were shaking the device. That's where those show up.
So to go back to our filters, let's say we want to apply a low-pass filter. Let's say we're working on our analog pad, and we think these higher frequency kind of instances are just the fact that the guy had too much coffee today and is a little tweaky.
So we want to focus on just this part, and we want to ignore the other parts. We want just this yellow part. So how do we do that? So here's a very simple kind of approximate way to do it. The math is fairly straightforward. The idea is that we're going to weight sort of older values that we've seen, we're going to give those greater weight than necessarily the newer signals that we're getting. Right? And so what that's going to do is the constant parts of the feed are going to kind of accumulate, and maybe the more like, you know, outliers, sort of the jittery parts are going to overall sort of cancel each other out.
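In code, that cheap low-pass approximation might look like this sketch (kFilterFactor and the lowPass instance variables are just assumed names; the bubble level sample mentioned below uses the same idea):

```objc
// A cheap low-pass approximation: weight the running value more heavily than
// each new sample, so the constant parts (like gravity) accumulate and the
// jittery parts cancel out. kFilterFactor and the lowPass* instance variables
// are assumed names; 0.1 is just an illustrative weight.
#define kFilterFactor 0.1

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    lowPassX = (acceleration.x * kFilterFactor) + (lowPassX * (1.0 - kFilterFactor));
    lowPassY = (acceleration.y * kFilterFactor) + (lowPassY * (1.0 - kFilterFactor));
    lowPassZ = (acceleration.z * kFilterFactor) + (lowPassZ * (1.0 - kFilterFactor));
}
```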
So that was kind of a cheap low-pass filter. So say you're interested in that tweakiness. You're interested in looking for hand claps or pedometer-type stuff. Then what we want is probably these outer--
[Transcript missing]
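The high-pass portion of the transcript is missing here, but a common cheap approximation (not taken from the talk) is simply the raw signal minus the low-pass result, which leaves just the quick changes like taps and shakes:

```objc
// A common cheap high-pass approximation (an assumption, since this part of
// the transcript is missing): subtract the low-pass value from the raw sample,
// leaving just the quick, high-frequency changes. Variable names are assumed.
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
{
    lowPassX  = (acceleration.x * kFilterFactor) + (lowPassX * (1.0 - kFilterFactor));
    highPassX = acceleration.x - lowPassX;
    // ...and likewise for the Y and Z axes.
}
```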
So we have a couple pieces of sample code that actually use all this stuff. One of them is the bubble level sample, which I encourage you to kind of go check out to look at.
How all that stuff, you can actually see the kind of code in use. Here's a little snippet of it. What it does is there's two low-pass filters that we've applied to the X and the Y axis. And so from that, we're able to make an angle. And from the angle then, we're able to update the cute little bubble thing. So I put together a quick little demo that uses the same, a single accelerometer feed, and then we run it through two filters, a low pass filter and a high pass filter. And while I walk over here, we can cut to the demo.
So, as I mentioned, we've got two filters going, a low-pass filter and a high-pass filter. And for the low-pass filter, we have a little physics simulation in here. I'm using, what is that, Box2D. It's very cool. You should, like, check that out. There's a whole other thing. So, the results of the low-pass filter I have feeding into gravity. So, as I'm tilting the device sideways, I'm feeding my low-pass filter into gravity.
Now, if I, you know, flip it the other way, then, you know, gravity goes the other way. And it's got a little bounce to it, just because that's how I have the simulation set up. But I have this other filter looking for higher frequency instances. So, if I give it a shake, then, like, you know, it's kind of a "she loves me, she loves me not" simulator. But... So, and now it's kind of sad looking, so I'll close that up.
So we'll go back to slides. So again, that was two filters kind of looking at the same raw general feed, but tuned for two different kinds of motions. So unfortunately, there's no real simulator support for the accelerometer. I wanted you guys to be able to pick up your MacBooks and shake them around, but they were like, that's dangerous.
You shouldn't do that. But one thing you can do in the simulator, if you are playing with the view controller stuff, you can reorient the device. There's some menu items there. And so that's a great way to test out whether or not all your auto-resize flags and things like that are set up.
So all in all, what did we learn about the accelerometer? Definitely use UIViewControllers. They're great. Definitely use filters to kind of hone in on the pieces of motion that you're trying to isolate. And like all the hardware we've talked about, if you're not using it, please let the system know so we can turn that stuff off and save some battery.
So all in all, what did we learn? We learned, I hope, a bunch about our hardware APIs. And I hope you guys will just-- I know you will. You'll run off and just do all sorts of crazy cool stuff with them. Specifically, for the image picker, definitely check your source before you go just assuming that there's a camera, something like that. And for the other hardware stuff, by all means, for Core Location, for the accelerometer, if you're not using it actively, please turn it off.
So we've got a lab tomorrow. A bunch of us are going to be down there. Ron's going to be down there. We hope to see the creations that you guys have put together so far. That's tomorrow at 10:30. We've got Matt Drance, so if you have any general questions or feedback, he's a great guy to send that stuff to. And please keep reading our documentation, keep sending feedback on it, it's pretty good. So check that out.