
WWDC11 • Session 312

iOS Performance and Power Optimization with Instruments

Developer Tools • iOS • 55:37

Creating an app that performs great is essential to making your users happy. Learn the techniques that will make your app launch faster, require less memory, efficiently use the network, and minimize power consumption. A must attend session for all iOS developers.

Speakers: Tim Lee, Chad Woolf

Unlisted on Apple Developer site

Downloads from Apple

HD Video (641 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Hi, everyone. I'm Tim Lee from iOS Performance, and today I'm proud to present to you the iOS Performance and Power Optimization with Instruments session. So I just want to start off and say that performance and power are extremely important to everyone at Apple and hopefully everybody here. I probably am preaching to the choir here.

But these are two major features, and we treat them as such, of our entire platform and every app that we work on. They're a key aspect of app store reviews. There are a lot of times when you go through your app reviews and you see X and Y crashed on launch or something, or it eats a lot of memory. And hopefully we'll deal with that today.

You have the tools available to you to fix these problems. You might not know it, but by the end of this session, hopefully you'll have some idea. And to get more, there's lots of resources available. But today in this session, we're going to cover a couple of common cases and some general strategies that you can use to optimize both power and performance of your apps.

So an overview of what you're going to learn. There are two main areas. Number one, and the big takeaway, is how to measure the performance of key aspects of your app. Measurement is the name of the game. If you can measure precisely and accurately, it makes your job ten times easier.

And the second part, which is probably why you're all here, is how to improve key scenarios. So we're going to talk about a few that apply to pretty much every app available. So the first is speedy interaction: making sure that when the user is using the app, it feels magical and there are no delays or waiting or anything like that. Everything moves along with the touch.

The second is a slim memory footprint. We are still on very constrained devices, and the key is to make them not feel that way. And in order to do that, it takes some work. So we'll talk about how to keep the memory footprint down, making multitasking feel great, and your app will do the same. And finally, we're going to have some tips about how to use the network and battery-specific or power specific optimization, some tips and tricks that you can use to effectively use the network and battery.

So we're going to start off with, again, number one, the big takeaway, measuring performance. And the big one here is to not guess. We do this all the time. We start writing some code. Maybe you design, oh, I have this brilliant algorithm. And I know it's going to be slow here, or my app is going to suck over here, but I'm going to have to spend a lot of time on it. You're almost always wrong. We do this all the time. And I can let you know, the first thing that everybody should do is to take measurements.

And once you do that, you will be surprised at where things are. And sometimes the optimizations just fall out immediately, when the part that you thought was slow is actually fast, and the part that you thought was fast is slow and easy to fix. That's the ideal case. And even after you do that, sometimes maybe you took a measurement in a way that was not

[Transcript missing]

In the end, it's all about how the app feels. So measure, don't guess, and use your app.

Now we're going to focus on individual scenarios. It's really hard to just take a whole app and say, I'm going to optimize this whole thing. I'm just going to go through and line by line go through and fix stuff. What you want to do is look at key interaction points, key places where the user will really notice your effort. Maybe when you're pushing a new navigation controller on or when your app first launches, when you switch tabs.

These key individual interactions. It also makes it easier for you to measure. Again, back to the key point. So you measure a single interaction. You make a code change. And then you measure again to make sure what you think you fixed was actually fixed. So focus on key scenarios.

And finally, in the strategy section, focus on making sure that you have a realistic data set. We spend a pretty reasonable, or maybe unreasonable amount of time, making sure that internally we test with big photo libraries, big music libraries, and various Wi-Fi cell characteristics, and you should definitely do the same. Your app will behave very differently when it's loaded up with a lot of content. So make sure that you put on some realistic content when you're testing your apps.

On top of that, you want to make sure you test with the devices that you plan to support. Now, I'm assuming a lot of people here have the latest and greatest, but I know that not all of our users do. So on top of actually testing on devices and not just in the simulator, make sure to test on the older devices that you plan to support.

All right, on to tools. So the ways that are available to you to actually do these measurements. Number one is instruments. First thing I do whenever I get a performance problem, fire up instruments, take a trace. And after that, if there are more issues, and I've dug in a little bit, then there are some other options that are available. There's the old standby, there's logging, and there are cases where it's useful.

You'll NSLog, say, particular instances that maybe are very infrequent, so they would be hard to find in a big trace, or happen a lot, and you want to see, you know, maybe there's a weird distribution that you want to graph. Use NSLog for that. And if you're going to do a lot of logging, you know... So, if you're going to be logging while scrolling, you want to be doing that to a file.

Now, NSLog is great, but it does do a lot of things for you. So there is some overhead incurred there. So if you're going to do a lot of logging, try logging to a file and take some of that overhead off. And finally, you want to make it very, very easy, if you have to resort to this, to turn it off. You can use #define, environment variables, user defaults, whatever you want. But make it really easy so that when you build for release, it's gone.
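As a sketch of that last point -- the `PerfLog` macro name here is my own, not something from the session -- a compile-time switch might look like this:

```objc
#import <Foundation/Foundation.h>

// Define PERF_LOGGING in your debug build configuration only;
// in release builds every PerfLog() call compiles away to nothing.
#ifdef PERF_LOGGING
#define PerfLog(fmt, ...) NSLog((fmt), ##__VA_ARGS__)
#else
#define PerfLog(fmt, ...)
#endif

// Usage: PerfLog(@"cell configuration took %.1f ms", elapsedMs);
```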

Now, a quick note about the simulator. It's really handy for development, but it's completely useless, completely useless for performance. For speed. For memory, it's actually pretty good. So if you want to run leaks, allocations, those sorts of things, you can do that on the simulator. But for anything that's time sensitive, completely separate, run it on your device.

Now, one other way to do measurements is side by side. I'm assuming most people have more than one device. An interesting way to go here is that if you make some change, you might think, maybe this doesn't quite look right or maybe it doesn't behave as well as it used to. But it's a lot faster. Well, do a blind test. Do a side by side test with somebody. Give them a version of the old code, a version of the new code, and have them play with it.

And if the old is the same as the new, you know, in terms of visual fidelity and everything, then you're good to go. All right, so let's talk a little bit about what we-- some general strategies for improving performance. And we're going to start with speed and responsiveness.

So the importance of speed and responsiveness -- I really shouldn't have to say this, but slow performance is no good for anybody. And we go to some lengths here. We have a system to make sure that a stray non-performant app doesn't take down the entire system. We call this the Watchdog.

And what it does is in some key app lifecycle cases, if the app takes too long to, say, launch, then the app will be terminated so that the user can continue to use their phone. Now, the times here are for iOS 5, and they are subject to change. They're also extremely conservative. I don't think anybody wants to wait 20 seconds for an app to launch. I start hitting the home button once it's at 5 seconds. So don't take these as guidelines for times to shoot for.

So for launch or resume, really one second or so is what you want to shoot for. And then the other key scenario, smooth scrolling. We hear it all the time. What's iOS good at? What do we have that everybody else doesn't? We have the smooth scrolling, and we need everybody to work on it. We know it's not easy, and there are lots of techniques available to you, and we're here to help you to get that smooth scrolling. Hit that 60 frames a second.

All right, so some general strategies for improving speed. So the first one is to do less work. We see all the time that when somebody loads up an app, a lot of apps just load a lot of data at launch. Maybe they were testing with small data sets and didn't anticipate it, but they might load the whole database to show a table view that has 10 rows.

Don't do that. You don't have to do that. Only load as much as you need to show the next screen to the user. And this correlates very much with the next one, which is to do work later. You can load as much as you want right now, and then later, when the user requests another screen or selects some detail view, then fetch it and do whatever processing.

And the third general category is to do work faster, the one that's actually hard, right? And the name of the game here is to get the most bang for your developer buck. Focus your energies on the parts that are slowest first. And in the case that, you know, there's just some inherently complex problem, you're running some fancy algorithm and it's just going to take a while. Well, in that case, put a placeholder, let the user continue to use your app in some way until that work is done. Do that work in the background.
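To illustrate that "placeholder plus background work" pattern -- the method and property names here are hypothetical, not from the session's demo app -- a GCD version might look like:

```objc
#import <UIKit/UIKit.h>

// Hypothetical view controller method: show a placeholder immediately,
// run the slow work off the main thread, then update the UI on the
// main queue once the result is ready.
- (void)loadDetailImage
{
    self.imageView.image = [UIImage imageNamed:@"placeholder"];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *result = [self renderFancyImage]; // the expensive part
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = result; // UI updates stay on the main thread
        });
    });
}
```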

On to memory. So memory is a pretty big deal now that we added multitasking last year. And we need every app to do its fair share to make the system behave well. So analogous to the Watchdog system, we have something called Jetsam. It's been around for a while. And what it does is it looks around the system for memory pressure.

So it'll terminate background apps and apps that have been suspended and not used for a while, in order to provide more memory for the app that the user is currently using. So this is Jetsam. It'll also, if necessary, terminate the frontmost app if it's gone wildly out of control. So you really want to avoid dealing with this thing. Hopefully, you just never hear about it.

An additional benefit of keeping memory down is that larger suspended apps in general are going to be jetsammed first. If I need a lot of memory for Safari, I'm not going to go around killing a bunch of 10K apps, if they were that small, as opposed to one 60-megabyte app. There's just no benefit. So if you keep yourself small, then you'll stick around in memory. And so when the user comes back to your app, it'll be quick and snappy. It won't have to start from scratch again.

New in iOS 5 is a revamped Jetsam system. We've taken feedback over the past year, and we've revamped the system such that memory warnings are really now your last chance to save yourself. What that means is that when you get a memory warning, there are no more apps available to terminate. So if nothing is done, your app is going to be terminated. So be sure to respond to memory warnings in iOS 5.
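A minimal sketch of responding to a memory warning in a view controller -- the cache and property here are hypothetical stand-ins for whatever your app can cheaply rebuild later:

```objc
#import <UIKit/UIKit.h>

// In a UIViewController subclass: when the warning arrives, drop
// anything you can recreate on demand so the system doesn't have
// to terminate you.
- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    [self.thumbnailCache removeAllObjects]; // hypothetical NSCache of thumbnails
    self.preloadedDetailData = nil;         // will be refetched when needed
}
```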

All right, so some key areas to focus on when you're debugging memory problems. Spikes. So a spike is when, in a very brief period of time -- less than one turn of the run loop -- you allocate a bunch of objects all at once. Generally this happens when you're processing a large amount of data. You've downloaded a big chunk from the network and you're doing a bunch of work on it to display.

And, you know, it's pretty easy to get caught up in this. It's a pretty simple fix, though, which is good. All you need to do is break up your work into small independent batches so that, you know, you process the batch and the memory that you used for the processing you can use again for the next batch over and over and over until you're done. I'll emphasize again that if there's a lot of this, you do want to do this in the background.

And another thing that contributes to this is auto-release objects. You don't necessarily have complete control over this all the time. So you want to be very careful about these things and try to reduce the object lifetime of these auto-release objects. Now, I know a lot of people might not know exactly what they are. They're new to the platform. So I'm going to talk a little bit about what that really means. What is auto-release? Now, we've heard this before.

Auto-release. It's used to avoid worrying about retain and release. There's this, you know, retain and release systems, Objective-C, what is this? Auto-release fixes my problems. That's not really a great way to think about it. Really, the way to think about it, and it's not too complicated, is it's a way for frameworks, you know, the stuff Apple provides or a third party provides, to manage object ownership. And I'll show you what that means by comparing and contrasting it a little bit with the regular retain and release system. So we've got... regular retain and release. So, let's say we have our app, and it asks for an object.

You know, it does alloc init, and we get our object, and it calls retain on it. So, now the app owns the object. Now, now that it owns it, it is responsible for releasing it. So, you know, sometime later, the app is done with the object, and it calls release. Object goes away. We're all in the clear. Memory's back in a good place.

So how does auto-release work? So in the auto-release case, it asks a framework for an object. You'll see the telltale sign in the method name: [NSArray array], [NSString stringWithFormat:], and so on. And so the framework will make an object and put it on this auto-release pool to release, because it doesn't know when your app is done with it. And so your app has this non-owning relationship with the object, unless you call retain on it. If you call retain on it, then you'll get another solid line, and you're responsible for calling release on it.

But in this case, only the auto-release has that owning relationship. And so sometime later, your app is done with the object. Doesn't call release on it, because it didn't call retain. Doesn't own the object. But the auto-release pool will call release. And from there, the object goes away.

Now, the problem here is that before the auto-release pool does its thing, in the meantime, your app could have allocated a few more objects, and this is a very common source of memory spikes. Now, the new Arc system alleviates this problem quite a bit by doing some tricks under the hood with object lifetime.

But in case you haven't done that, or in the case that it hasn't fully solved the problem, the way around this is to use a nested auto-release pool. And what does that mean? That's basically when you make your own NSAutoreleasePool, or use the new @autoreleasepool syntax, and that will attach any new object that you create onto that pool. And then when you call drain, or when you hit the end of that block, all those objects will be released. So you can control the lifetime of these objects with these nested pools.
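As a sketch of batching with a nested pool -- `items` and `processItem:` here are stand-ins for your own code, not names from the session -- using that @autoreleasepool syntax:

```objc
#import <Foundation/Foundation.h>

// Processing a large array in small batches, draining the autoreleased
// temporaries after each batch so the peak footprint stays low.
NSUInteger batchSize = 100;
for (NSUInteger start = 0; start < items.count; start += batchSize) {
    @autoreleasepool {
        NSUInteger end = MIN(start + batchSize, items.count);
        for (NSUInteger i = start; i < end; i++) {
            [self processItem:[items objectAtIndex:i]];
        }
    } // autoreleased objects from this batch are released here
}
```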

So that's spikes and auto-release pools. So the next big category of memory is leaks. We've all heard about leaks. There's a whole instrument named after it. And what it is, it's pretty simple. It's you've allocated some memory, you used it, and then you just don't have a pointer to it anymore. You have no way of touching it. So it's completely gone, and it's just taking up memory.

And leaks is pretty smart. It can scan memory for these things. And it'll tell you where these objects were allocated. So it doesn't know when it's supposed to be released. It can't really read your mind. But it'll provide you context for where to look in your code, for where the appropriate section of code is to do the release. And the common mistakes with using it is just retaining an extra time without a balanced release. So in the traditional system, for every retain, you should have a release.

The other very common leak is forgetting to release the old value of a property when you're setting a new one in a setter. So these are things to look out for. Now, Arc largely removes this problem as well. You're not even allowed to call retain and release.
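For reference, a manual retain/release setter (pre-Arc) looks like this; forgetting the release line is exactly the leak just described. The property name is my own example:

```objc
// Manual retain/release setter: release the old value, retain the
// new one. Omitting the release line leaks the old object.
- (void)setTitle:(NSString *)title
{
    if (_title != title) {
        [_title release];          // the line that's easy to forget
        _title = [title retain];
    }
}
```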

There are some exceptions in the pure C frameworks -- Core Graphics, AV Foundation, et cetera. But in general, Arc will make this problem a lot harder to hit. So more incentive to convert. And finally, the last category is abandoned memory. So abandoned memory is similar to leaks in that you have memory that you allocated and used, and you're not going to use it anymore. The only difference is you still have a pointer to it.

So in theory, you could still access it and use it. This is a little bit trickier. Leaks, you can obviously go look around and see what's residing out there, and nobody's touching it. In abandoned memory, it's really up to the individual developer to know what data is no longer used.

And the way we generally find it is to start from some baseline, do some interaction -- maybe go into the detail of a table view and come back out -- and then do a diff to see: is my memory state the same? So we have Allocations and Heapshot, and you can take two snapshots. So you take a snapshot, go in and out, take another snapshot.

And memory should be about the same, because you're looking at the same view. If there's stuff left over from what you were doing in the meantime, that's probably abandoned memory, and you want to go and look for a place where you can do a release on it. Unfortunately, Arc doesn't help you here, because it doesn't know what stuff is available. What you want to be doing here, is nilling out references to stuff you're not going to be using anymore. All right. So I'm going to demo a couple of the instruments. So last year, at a similar talk, we demoed an app called Compositions. And this time-- whoops.

[Transcript missing]

The next trace I'm going to show you is a problem with a slow launch time. So when I first added the face detection, there was a problem where after I launched up and did all the face detection, it would just take a really long time before it kicked in. And I wanted to see what was going on. So I took, again, time profile. And this is what showed up.

So once again, I go back into the CPU strategy view, because that's pretty much what I do. That's just step number one. And wow, look at that. This looks like a big waste of time, right? Half our time is unused. So we can, if we're so interested, drill in here. Another good time to show a new feature, or a feature associated with this. We have the sample list. So what the sample list does is it lets you pick some time. So I'm going to zoom in here by holding down Shift.

Let's see, let's pick one over here. Pick some time. And if you pick a certain sample, you'll see the highlight. It'll take you to the sample down in this list. And over in the detail view on the side, which you access over here, you'll see the backtrace of what your code is doing, so that's fantastic.

What I usually do is go and see what's taking a lot of time at certain periods of time. You can see what happened first, what happened second. So this is fantastic. You can walk through a timeline. And so in this case, you can see here I was spending a lot of time generating composition thumbnails, about a quarter of the time. It's the same call here and down here.

All right, so what does a good case look like? Here we go. This looks more or less the same, a bunch of purple. Go back. CPU Strategy. Always remember that one. And now we've got a good case. This is what you want to be shooting for on an iPad 2. You want to see a big block of blue when you're doing batch processing.

And you can go in again, go to the sample list, make sure you're doing the right thing. If you have multiple operations going on, make sure that the priorities are right. You can look here. So that's it for CPU Strategy. And this is the one tool that I use the most. So just remember to use this. It'll tell you exactly what your app is doing.

And finally, I'm going to hop over to the memory side. So these we've had around for a while. They're called Allocations, VM Tracker, and Leaks. I put a custom template together to run all three at the same time. And this is a profile of the memory usage of the app.

And I was seeing actually some jetsamming as I was launching the app. Actually, what happened was a little bit after launching the app, it would jetsam. So I took a memory trace. And this is what comes up. And this is somewhat-- it's kind of cool. It's kind of pretty. But it's not clear how to interpret this thing. So I'm going to give you a few tips here. What I usually do is--this is a statistics view, right? I'm gonna switch this and switch to the object list.

What I'm going to do here is sort by size. After I sort by size, I can look at the biggest things here. And I can see these things are taking 45K. And a lot of time and a lot of objects in FaceCoreLight. I'm assuming you can guess what that means. And actually what happened here -- I can show you in the detail view -- is we have this CIDetector for faces, created with a context.

[Transcript missing]

I'm going to show a few other features while I'm here. Leaks, like I said, very, very simple. There's all these things that leaked. What were they? Oh, what do you know? CG objects. So these are not Objective-C. You call CGBitmapContextCreate, and you get a CGImage -- you have to call CGImageRelease.

And Arc doesn't do that for you. It really only covers Objective-C retain and release. So this is how you can still get leaks. And it's very easy to find. You go over here, and you can see in my colorizedImageWithColor: call, I didn't release. So I go in there and I add the release.
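A minimal sketch of the create/release pairing for CG objects, which Arc does not manage (the sizes here are arbitrary example values):

```objc
#import <CoreGraphics/CoreGraphics.h>

// Every CG...Create (or ...Copy) must be paired with a CG...Release,
// even under Arc, because these are C objects, not Objective-C.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 256, 256, 8, 0, space,
                                         kCGImageAlphaPremultipliedLast);
CGImageRef image = CGBitmapContextCreateImage(ctx);
// ... draw into ctx, use image ...
CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(space);
```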

And finally, VM Tracker. So VM Tracker is great for knowing overall how much memory is my app using and the numbers you want to focus on. Dirty. Dirty resident. What that is, is memory that your app is using that the system can't reclaim if it needs it. So this is the number you really want to target.

And... You can break down and look at the different regions here. And there's a lot of stuff to learn here. And actually, there's a whole talk-- well, part of a talk that covers this tomorrow at the exact same time. So if you want to know more about this, come back tomorrow for this one. But what you want to focus on is resident, and in particular, dirty resident. Now you can see here there's some other things, CG image, which I was leaking.

This makes a lot of sense. In this view, you won't get backtraces like you do in the other two. But it'll give you a good idea of overall how much your app is using. And rule of thumb, if you're using 100 megabytes of memory resident, that's way too much. So that's going to be a problem, even on an iPad 2 or an iPhone 4. All right. So that's it for the demo.

All right, so to review. For speed and responsiveness, we have the system watchdog that will terminate your app if it's not behaving well at key points. The overall strategies for speed and responsiveness are to do less, do later, and do faster. The hard one. Do slow operations in the background with placeholders, and spend your time where it matters, on the things that are the slowest.

It sounds obvious, but a lot of times you might think, oh, do something fancy to speed up this thing. Don't do that. Just work on the stuff that's slow. And finally, at launch, the most important time, the first time somebody uses your app, only load what you need at launch.

And on the memory side, the three big categories: spikes, leaks, and abandoned memory. You might not remember the details about all of them. Slides will be available. Jetsam will terminate your app, just like the watchdog will. New in iOS 5, memory warnings are your last chance. I just demoed some of the instruments that are available. If you want some more help, there will be labs tomorrow, and there's plenty of developer documentation.

The big one for spikes is to add the nested auto-release pool. And finally, it's really not that hard, convert to using Arc. It's very simple, and it'll save you a lot of trouble. And with that, I'm going to hand it off to Chad to talk about networking and power.

Thanks, Tim. My name's Chad Woolf. I'm a Performance Tools Engineer for Apple. And I'm going to talk to you guys now about networking and power optimizations. Now, a lot of times when we think about performance optimization, we think about speed. How do we make our code faster, right? And speed is a really important part of the user experience for applications, right? But there's another important part of the user experience, and that's battery life. It's not often that the two are optimized at the same time. And so we really want to talk now about optimizing for battery life. So really, performance optimization is about optimizing for efficiency. Faster code is usually better, and it's also better for battery life.

So when we look at network and power optimizations, you're going to notice here that all of the networking optimizations that I'm going to recommend today also have a positive impact on battery life, meaning that they reduce power. So here's what we're going to talk about in the second half of our session. First, we're going to talk about reducing network traffic and bursting. Those are two networking topics. We're also going to talk about core location accuracy, sleep/wake, and dynamic frame rates. And those will all help you save energy.

So let's talk about that first one, reducing network traffic. Now aside from reducing the amount of drain on your battery for sending the network traffic, reducing network traffic also has a couple of key benefits. First, it reduces network congestion, and that makes everybody's networking applications, including your own, run much smoother and much faster.

But there's also a dollars and cents argument here, because a lot of our customers are actually paying by the byte for their data plans, right? So if you're actually able to reduce the amount of network traffic that your application is producing, then you can actually save them a few dollars every month on their bill, right? All right, but how do we measure traffic, right? We have our happy application over here on the left. And he's made a bunch of network connections out there to the servers.

But the key here is always to measure first, right? So how do we measure network traffic? How do we go on about looking at which connections our application's making, where they're going, how much data's flowing to and from the server that they're being connected to, and what's the health and status of these connections? Are they having any retransmission problems? Or what's the average round trip times? That type of thing, right? We also might want to know, is our application doing more networking in the beginning? Or is it maybe in the middle? Or is it maybe towards the end here when we start to talk about switching out or multitasking, right? Well, in iOS 5, we have a new instrument, which we hope can shed some light on that for us. It's called the Network Connections Instrument, and it basically does everything I just said.

It measures data volume, it works for TCP/IP, and it also works for UDP ports if you have those open, and it collects those important performance metrics that I wanted to talk about. Now, let me show you that in a quick demo. Of course, I'm not going to commit the cardinal sin here of trying to do a network demonstration at WWDC. Okay? But I do have a trace that I recorded a little bit earlier in the week at the office.

And it was on an application out there that's demo code, or it's example code on developer.apple.com, which you can get. It's called Lazy Table. And what it does is it shows you how to use placeholder images and lazily load data over a network. It's actually a very good application, and it's fairly well optimized. So it's a good target here for the network connections instrument to see if we can find out any more about it.

Now, if we go over here to our network connections track, you'll see along the time axis, these are periods of network activity. So you can see we have a little bit more activity in the beginning. Let me zoom in here for you. And then we have some network activity trailing out a little bit later. So we can see their application is doing different things at different times.

Now, here in the details view, we also have some information. And this is the total, all the connections and how much data they've sent and received and to which IP address and port. So that's a lot of really important information. Now, if you want to look at just one bump here, you can hold down the option key and select a time filter. And that will show you only the network connections that were active and only the data totals that were sent during that interval. So here we can see that we sent 11k.

Or sorry, we received 11k over eight packets. We sent 247 bytes with one packet. And some of those interesting statistics like duplicate data, out-of-order retransmissions, and the round trip times are also calculated here. So you can maybe use this information to help troubleshoot your latency issues. So now that we have a fair understanding of our application with this data, let's talk about some of the optimizations that we can make for network traffic reduction.

All right, so the first one, the big one here, is caching content. If you're using our URL loading system in Foundation, you're going to get caching for free, and it's on by default. So if you're using the URL loading system to pull URLs from an HTTP server, it gets even better, because the HTTP server can tell the cache which responses can be cached and for how long. And that's a lot of benefit, and that's a lot of code that you don't have to write. And that can really seriously reduce your network traffic.

If you use it effectively. So that's the URL loading system. Now, the NSURLCache object, which we had in iOS 4, used to cache things only in memory. So all of the responses were cached in memory. When your application was terminated and then you started it again, you were starting from a clean slate.

But in iOS 5, we now have persistence, which means that all of that cached data will be loaded the next time you launch your application. And that's also on by default now. So you'll see a lot of network applications just respond a little bit better, not starting with a clean slate.
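If you want to control the cache sizes yourself, a sketch of configuring the shared NSURLCache at launch might look like this (the capacities and path are illustrative values, not recommendations from the session):

```objc
#import <Foundation/Foundation.h>

// Install a shared cache with explicit memory and disk capacities;
// subsequent URL loading system requests will use it automatically.
NSURLCache *cache =
    [[NSURLCache alloc] initWithMemoryCapacity:4 * 1024 * 1024   // 4 MB in memory
                                  diskCapacity:20 * 1024 * 1024  // 20 MB on disk
                                      diskPath:@"urlcache"];
[NSURLCache setSharedURLCache:cache];
```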

All right, the next topic here for reducing network traffic is compression. If you're building your own network protocols, try to pick the most compact data formats you can, and prefer data formats that are inherently compressed, such as MP3 or JPEG.

Now, if you're transferring data that's not inherently compressed, like plain text or XML, I would suggest zipping it up on the server, sending it over to your application, expanding it there, and sending the reply in a compressed format as well. Use compression the best you can to reduce your network traffic.
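
The zip-it-up idea can be sketched with standard gzip, one common choice for HTTP payloads. This is a minimal Python sketch; the payload is a made-up repetitive XML document, chosen because text like this compresses dramatically.

```python
import gzip

def compress_payload(data: bytes) -> bytes:
    """Compress a payload before putting it on the wire."""
    return gzip.compress(data)

def decompress_payload(blob: bytes) -> bytes:
    """Expand a compressed payload after receiving it."""
    return gzip.decompress(blob)

# Plain text and XML are highly redundant, so they shrink a lot.
xml = b"<items>" + b"<item>hello</item>" * 500 + b"</items>"
wire = compress_payload(xml)
```

With HTTP you usually don't even need to do this by hand: the Accept-Encoding and Content-Encoding headers let the server and client negotiate gzip transparently.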

Actually, there's one more thing. Not that kind of one more thing. And that's image sizes: if you're downloading images over the web, try to pick the URL with the image size closest to the one you need. If you download the big one and then scale it, you're essentially wasting a lot of time, energy, and money transferring that data just to scale it down.
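
One way to pick the right rendition, assuming the server publishes several image widths, might look like this. The helper is hypothetical, not an API from the talk.

```python
def closest_variant(available_widths, needed_width):
    """Pick the rendition closest to what we'll actually display.
    Prefer the smallest size that is at least as big as what we need,
    so we never upscale; fall back to the largest available otherwise."""
    large_enough = [w for w in available_widths if w >= needed_width]
    if large_enough:
        return min(large_enough)
    return max(available_widths)
```

Downloading the 2048-pixel original to show a 300-pixel thumbnail pays for every one of those extra bytes in transfer time and radio energy.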

Now, the next one for reducing network traffic is resumable transfers. On a mobile device, network connection volatility is a fact of life. When you move from Wi-Fi hotspot to Wi-Fi hotspot, you'll be breaking one connection and forming another, and you're going to get a different IP address. Any long-running transfers that you're executing will be severed. The same thing happens with cellular networks.

When the user goes into an elevator and the doors close, or they enter a parking garage, something like that, it'll break the connection. Now, if you're building your own transfer protocols, make sure you support resumable transfers. Do not retry the big download all the way from the beginning.

Not a great idea. If you're using HTTP, there's the Range header: a field in the header of the request where you can put a range specification, so you can extract just the bytes you need and continue a download. Most servers will honor that.
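
Building that Range header is simple; here is a small sketch, assuming you've persisted a byte-offset checkpoint for the interrupted download yourself.

```python
def resume_request_headers(bytes_already_downloaded, total_size=None):
    """Build the HTTP Range header to continue a partial download.
    An open-ended range ("bytes=N-") asks for everything from offset N;
    if we know the total size we can ask for the exact remainder."""
    if total_size is not None:
        return {"Range": f"bytes={bytes_already_downloaded}-{total_size - 1}"}
    return {"Range": f"bytes={bytes_already_downloaded}-"}
```

One caveat worth knowing: a server that honors the range replies with status 206 (Partial Content). If you get a plain 200 back, the server ignored the range and is sending the whole file, so don't blindly append the body to your partial file.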

All right, now the last one here for reducing network traffic is download profiling. Now, this is a little bit more abstract, but the idea here is that you want to take your application, look at the amount of data coming over your network connections, and also track how much of the content you've downloaded is actually being viewed by your users. You might add logging and send those statistics back, or you might even be able to look at your server logs and see how many times you've had canceled transfers.

So if you have some preview content and you realize that 90% of your users are only getting through the first third of the content before they cancel it and move on or make a purchase, something like that, then you can actually make some optimizations and only send that first third instead of sending a whole bunch of bytes that aren't going to be used.
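
One way to turn those logs into a decision, assuming you record how many bytes each session actually consumed, is to pick the smallest prefix that covers most sessions. The helper below is hypothetical, just to make the statistics concrete.

```python
import math

def prefix_covering(session_bytes_viewed, coverage=0.9):
    """Smallest prefix length (in bytes) such that `coverage` of the
    observed sessions never read past it. Serving only this prefix up
    front avoids transferring bytes that are statistically unlikely
    to be used; the rest can be fetched on demand."""
    ordered = sorted(session_bytes_viewed)
    index = max(0, math.ceil(len(ordered) * coverage) - 1)
    return ordered[index]
```

If 90% of your users stop within the first third of the content, this tells you exactly how big that first chunk should be.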

So that's just something to take home and think about. So, reducing traffic in summary: measure first, and we have a network connections instrument for that now. Cache content when possible; the NSURL loading system has some great advantages there, and now we support persistence in iOS 5. Compress when possible. Resumable transfers are a fact of life, so you definitely need to support them in your own protocols. And try to download only what's statistically likely to be used.

All right, next optimization: bursting. Bursting is also a network optimization. The idea here is that we take all the data that we want to send in one big block, we send it, we wait for a period of time, and then we send the next big transaction. Period of activity, period of silence, period of activity: that's bursting. Now why do we do that? We do that primarily for energy reasons, and let me explain why. Sending and receiving data on a cellular network requires a significant amount of energy, as you can probably imagine.

But beyond that, when you use a cellular radio, the radio has to stay in a high power state for up to 10 seconds after the last byte of data you transmitted. And even if you only send one byte after that, you're going to reset that timer, and it'll have to stay in a high power state for potentially another 10 seconds. So imagine this graphically.

Here we have this area in yellow as the energy it takes to send your data. Now, after you finish sending your data, the radio is still in that high power state before it comes back down, and it's still consuming energy. You can think of this red as waste, because we're not actually sending data, but we're still consuming energy. Now, if you send small amounts of data frequently, you can see exactly the effect: we have lots of waste and actually very little data being sent.
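
You can make that waste concrete with a toy model of the radio tail timer. The 10-second tail is the figure quoted above; for simplicity this sketch ignores the transmit duration itself.

```python
def radio_on_time(send_times, tail_seconds=10.0):
    """Total seconds the radio spends in its high-power state, given
    the timestamps of transmissions and the post-transmit tail timer.
    Each send either powers the radio up for a full tail, or extends
    a tail that is already running."""
    total = 0.0
    radio_off_at = None
    for t in sorted(send_times):
        if radio_off_at is None or t >= radio_off_at:
            total += tail_seconds                       # fresh power-up
        else:
            total += (t + tail_seconds) - radio_off_at  # tail extended
        radio_off_at = t + tail_seconds
    return total
```

Dribbling a packet every 5 seconds keeps the radio hot continuously, while batching the same data into one burst pays the 10-second tail exactly once. That's the shape of the 15-hour versus 7-hour result in the demo.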

So how do we go about measuring bursting? Well, last year we introduced the Energy Diagnostics template, and it was a major step in the right direction, because we could measure the energy coming from the battery, we could see different statistics about the CPU and what it was doing, and we could also see the power states of the GPS and the different radios.

But this year, we wanted to add the network activity instrument to that template. So now, in addition to those various power states, you can actually see how many bytes are being sent over your Wi-Fi and cellular interfaces, alongside your power data, and you can make those correlations yourself.

[Transcript missing]

Possible slide malfunction here. Or I hit the wrong button, which is totally possible, because I'm on stage in front of 1,500 people. Okay, the final one here is the energy usage instrument at the top, which used to sample in fairly large buckets.

Now, in iOS 5, we're sampling every second, so you have a much higher fidelity view of what is going on and how your code is impacting that energy usage. OK. So now, for a demo, let me just show you a trace I recorded to demonstrate the effect that bursting can have on an application.

So here's a trace I took with the energy diagnostics template on iOS 5, with the network activity instrument in place. Here we see our first scenario, which is transferring a large file over time. And if I look at the network activity instrument, we can see here, if I expand the column, that we're sending about 500K every 30 seconds. So that's about a megabyte per minute.

Now you can see the corresponding effect this is having on the radio. You see the energy usage: it spikes, then settles down, and then it spikes again during the next network transmission. Now, also in this document, I have a second run, which shows a completely different result.

In here, in the network track, we'll see that we are sending 100K every 10 seconds. Now, that's a different data rate; that's actually 600K per minute, right? So you'd expect this to be more energy efficient. We're sending less data, right? But look at this track up top. If you look at this track, you can see how those little bursts of data, without very much time in between, are keeping that radio on almost all the time. And if you compare the areas here, it looks like almost a 2X difference.

And we can actually confirm that. In the 1MB per minute transfer, where we're transferring 500K every 30 seconds, we can do that for about 15 hours. Now, the other trace, where we're downloading less data but doing it every 10 seconds: 7 hours of battery life. So it's huge, a 2X improvement, just by batching our data into bursts. All right, so let's go back to slides, and we'll talk about an example of how you might go about this.

[Transcript missing]

If you see the GPS on, that means that core location is at the end of its rope. All of the energy efficient optimizations it could make to find your location just won't work in this scenario. That can happen in two different cases. One, you're using best accuracy, and the GPS is the only way to get the best accuracy. Or two, you're using some very low granularity accuracy, but there are no other points of reference, no cell towers or Wi-Fi hotspots in the area, and it has to turn the GPS on.

So how do you go about tuning core location code? You have the core location manager object, and it has an attribute called desired accuracy, which you can set to one of several constants. By default, it's set to best, which works great for our hiking application, but it may not be appropriate for an application that, say, gets the weather forecast for the area. You could probably get away with three kilometer accuracy for that type of case.

Now, the second tuning you can do with the core location manager is start updating location and stop updating location. You want to call stop updating location as soon as you possibly can. Now, in our hiking application, we need to keep core location going and get all those updates.

But in our weather application, we can turn it off as soon as we get our first location information. So try to turn it off whenever you can: if the user moves away from the panel that is using the location information, turn it off immediately. That'll cause everything else to idle back down, and you'll see the effect in the energy diagnostics template. All right. Our fourth optimization for power here is going to be sleep/wake.
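
The weather-app pattern, stopping as soon as you have a usable fix, can be sketched like this. This is a schematic Python mock, not the real CLLocationManager API; the class and method names merely mirror Core Location's shape for illustration.

```python
class MockLocationManager:
    """Stand-in for a location manager, for illustration only."""

    def __init__(self):
        self.updating = False
        self.delegate = None

    def start_updating_location(self):
        self.updating = True   # GPS/radios may power up from here

    def stop_updating_location(self):
        self.updating = False  # lets the hardware idle back down

    def _deliver(self, location):
        # The system would call this when a fix arrives.
        if self.updating and self.delegate is not None:
            self.delegate.did_update_location(self, location)

class WeatherLocationDelegate:
    """One-shot consumer: grab the first fix, then stop immediately."""

    def __init__(self):
        self.fix = None

    def did_update_location(self, manager, location):
        self.fix = location
        manager.stop_updating_location()  # turn it off as soon as you can
```

The key line is the stop call inside the delegate callback: the manager is off before a second update can arrive, so the GPS never stays powered on the weather app's behalf.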

Okay, optimizing for running battery life, like we showed with bursting getting 15 hours versus 7 hours, is great. But standby time is also a very important part of the user experience when it comes to battery life, and we want to try to make that standby time as long as possible.

The expectation here is that when I take my phone and I'm not using it and I put it in my pocket, I should be able to leave it there for a couple of days before I need to recharge it, right? So standby battery life is an important part of the user experience. Now, there are things our applications can do to keep the device awake, such as background activity. But there are also things we can do that will wake the device up, and those include push notifications and voice over IP packets.

So how do we measure if our application is messing with sleep/wake? Well, in the energy diagnostics template, there is a sleep/wake track. The dark areas show you when you're sleeping and the light areas when you're awake. You want to try to make that as dark as you can for as long as you can. Now, here's an example of how your application could affect sleep/wake.

What we did here is we took a device, put it to sleep, and then woke it up every 30 seconds with a network packet. Now, that does two things: first, it turns the radios on, but it also wakes the device up.

And you can see here in the sleep/wake track that it's actually staying awake for a while. This happens because it takes a bit before the device will go back to sleep, just to make sure it doesn't have anything else it needs to do. Now, our normal standby time is 300 hours. If we wake the device like this, with that kind of radio activity, we have a standby life of 30 hours. That's a 10x reduction, just because we're pinging the device every 30 seconds.

Obviously, we don't want to continue doing that. OK, so what kind of optimizations can we make here? Well, the first thing I want to note is that push notifications, and to some extent voice over IP packets, can wake the device. So you have to be careful when you send them.

Now, push notifications are great. We want you to use them; they're a lot better than some of the alternatives. But you have to understand that when you do push a notification, you are waking the customer's device. So if you're pushing notifications and the customer hasn't responded back to your server, that's probably a good indication that their device is in their pocket and they can't get to it, or they're asleep, or something else. It's probably a good time to stop pushing those notifications.

If you get into a pattern where you're pushing a notification every 30 seconds, you're going to bring their battery down to potentially 30 hours of standby time. You don't want to do that. So let the device sleep as long as possible; that's the key here. If you absolutely have to ping the device, do it every 10 or 30 minutes, or at whatever interval you can make as long as possible.
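
A server-side guard for that advice might look like the sketch below. The helper and its threshold are hypothetical; the idea is simply to stop waking a device whose user has gone quiet.

```python
def should_push(now, last_user_response, min_quiet_interval=600):
    """Decide whether to send another push notification.
    `now` and `last_user_response` are timestamps in seconds (e.g. from
    time.time()). If the user hasn't responded within the quiet interval,
    assume the device is in a pocket or the user is asleep, and let the
    device stay asleep instead of waking it again."""
    if last_user_response is None:
        return False  # never heard back: don't keep waking the device
    return (now - last_user_response) < min_quiet_interval
```

A real server would likely layer exponential backoff on top of this, stretching the interval out each time a push goes unanswered.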

All right, finally here, dynamic frame rates. Now, we know that the smoothest animations and the best looking games come in at 60 frames per second, right? But is that true for everything in all scenarios? As it turns out, for scenes that are a little more still, where there's not a lot of motion, you can get away with lower frame rates.

Now, we're not saying across the board, set your frame rate down and reduce the quality of your application, but for certain scenes, you might be able to get away with lower frame rates and not have too many people notice a difference, and you'll get much better battery life.

So how do we measure that? Again, the energy diagnostics template. But we also have the core animation template, and that will allow you to measure the number of frames per second that you're actually pushing to the device. Now, if you're looking at the energy diagnostics template, you want to look for foreground activity and graphics activity. Those are the big ones you want to try to reduce, and they're usually what frame rate is driving.

But ultimately, the final say here is what the energy usage instrument is telling you: whether you're getting better or worse energy efficiency. So watch that track above all else. Here we took a popular gaming engine for iOS and ran it at a full 60 frames a second, and we got a 15 out of 20 energy level. Then we took that same engine, set it down to 15 frames a second, and we got about a 10 out of 20 energy level.

Now, you'll see the reduction also in the CPU track; everything came down universally. We're not saying that everybody's going to get a huge benefit by going to 15 frames a second. We're just saying that if you adjust your frame rate and take an energy trace, you can see what that difference is going to be and decide whether it's worthwhile for your application.

So how can we do this dynamic frame rate thing? Well, there's no big switch that you can throw that turns on dynamic frame rates. It's kind of an application engine type thing. The idea here is to draw only what's new. Don't draw redundant frames if you can avoid it. Also experiment with different frame rates. Maybe certain parts of your application can get away with slower frame rates and nobody will notice.
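
The draw-only-what's-new idea can be sketched as a dirty-flag loop. This is schematic Python standing in for whatever your engine's render loop looks like; the point is that unchanged scenes cost no render work.

```python
def run_frames(ticks, dirty_ticks):
    """Drive a fixed-rate loop, but only redraw when something changed.
    `dirty_ticks` is the set of tick indices where the scene was
    invalidated (user input, animation step, new data).
    Returns (frames_drawn, frames_skipped)."""
    drawn = skipped = 0
    needs_display = False
    for tick in range(ticks):
        if tick in dirty_ticks:
            needs_display = True
        if needs_display:
            drawn += 1            # render_scene() would go here
            needs_display = False
        else:
            skipped += 1          # nothing new: CPU and GPU stay idle
    return drawn, skipped
```

In a mostly-still scene, almost every tick is skipped, which is exactly the CPU and GPU idleness that shows up as a lower energy level in the trace.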

And as Tim was saying, try a double-blind test: take your game at one frame rate and at another frame rate, hand it to different people, and see if they notice a difference. If they don't, then you're picking up free battery life. But the other goal here is, of course, to reduce your CPU and GPU activity.

So even if your game is running super smooth and the animations are perfect, further optimizations, like making your geometry management a little more efficient, will keep the CPU more idle even if they don't affect the frame rate, and that will save you a significant amount of energy.

So the more efficient we can make our apps, the more energy we're going to be saving and the longer our battery life will be. Same thing goes for GPU shaders. If you can make them more efficient, you'll also see a positive impact on battery life, which you'll see with the energy usage instrument.

Okay, so that's it for energy and networking. So let's talk about the session review. Measure, always measure first; measurement will make things a lot easier. Instruments has a lot of tools in the toolbox, and it can do a really good job of hooking into your application in pretty unique ways to find out what it's doing. So start there. Of course, for everything else, there's logging. Try logging if you can't use Instruments.

Now, if you have some performance problem that slips out into the wild, you're going to see that on your iTunes Connect report. Jetsam notifications, if your app has been terminated because of Jetsam, or if your app has been terminated because of the Watchdog, you'll see those logs accumulate in your iTunes Connect report. So take a look at those and make sure that those are absolutely addressed at highest priority. Now, the goal here for the whole session is really to talk about promoting lean apps. But it's not just fast apps.

It's also apps that are more efficient in terms of energy and more efficient in terms of networking; those are the big ones. So if you make your apps leaner, everything is much more pleasing: it's faster, multitasking is faster if you're more memory efficient, and you can get much more life out of your battery. So the key here is performance. One aspect is speed, but it's really about efficiency.

So for more information, you can contact Michael Jurowicz, who's our developer tools evangelist. For documentation, there's the Instruments User's Guide. That will actually show you how to enable the energy diagnostics instruments and be able to take your own power recordings. There's always the Apple Developer Forums for questions and answers.

For related sessions, I would suggest taking a look at the ARC and What's New in Instruments. Unfortunately, they've already passed, but they will be on video. And same time tomorrow in this room, so 4:30 Thursday, Presidio, there is the iOS Performance In-Depth, which is the sister session to this talk, and it'll be talking even more about performance. Have a good rest of your day.