Video hosted by Apple at devstreaming-cdn.apple.com


WWDC12 • Session 242

iOS App Performance: Memory

Essentials • iOS • 51:09

Using memory responsibly can be the key to a great user experience for your iOS app. Get a detailed look at how virtual and real memory work on iOS, discover key ways in which the system handles memory pressure, and learn what you can do to use memory even more effectively. A must-attend session for all iOS developers.

Speakers: Daniel Delwood, Morgan Grainger

Unlisted on Apple Developer site

Downloads from Apple

HD Video (307.2 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

My name is Morgan Grainger, and I'm a software engineer on the iOS Apps and Frameworks team. And I'm delighted to be here to give you the tools that you can use to create better experiences for your users. Our goal today is to help you. So to do that, we're going to cover a large swath of different topics. First of all, we're going to try and give you the basis you need to understand why memory issues occur and what you can do about them.

We're going to talk about why memory matters. Why does it make such a huge impact on your users? Then we're going to get detailed and kind of show you exactly how iOS deals with memory. And we're specifically going to talk about what happens when the system runs under memory pressure.

[Transcript missing]

So, why is memory such a big deal? Memory is such a big deal because our devices are immersive experiences. They create these immersive experiences. The user is in your application. They're focused on the task that they're trying to accomplish. And in the worst case, memory problems can manifest themselves to your user as crashes. And that is pretty much the worst experience possible. It takes the user out of what they're doing, puts them in a whole new context that they were not expecting, and then they have to start again.

And while that may be the worst case, users also expect to be able to multitask with our devices. And if they're switching back to your application and all you have to show them is a loading spinner because your application was terminated in the background, that's also a very poor user experience. Ultimately, memory matters because it has a direct correlation to what your users see and what they experience. And this is a very real problem. You see in App Store reviews, we see this in feedback from customers. this is a very common complaint.

Now, it's tempting to think that the march of time is going to take care of this problem. You know, if you look at the original iPhone or the iPhone 3G with its 128 megs of memory, and you look at what we've accomplished over time, each device has more and more memory. And the new iPad actually has a full gigabyte of memory available.

And so it's tempting to think, well, you know, this may have been a problem at the beginning, but now is it such a big deal? And the answer is that yes, it is a big deal. This is an iPad 2, and it's a beautiful device, and users did a lot of things with it. But now with the new iPad, you're essentially having to do the drawing work of four iPad 2s.

So users are going to expect more out of your applications. As hardware gets more powerful, as capabilities become greater, users are accordingly going to expect more out of your applications. And the applications that deal with these constraints most effectively are the ones that users are going to love the most. So memory matters.

Next, I'm going to talk about how memory works on iOS. We're going to take you behind the scenes and talk about exactly what happens at a memory level in the system. And so I'm going to cover a lot of things today, but what I really hope you come away with is this idea of what exactly this means. This is a low memory log.

You may have seen it in iTunes Connect. And ultimately, these are generated when your app is terminated in the foreground because of low memory. So we're going to talk about what a page is and how you can reduce this number here, this "rpages" number, so that your application stays running longer.

So there are three main things that I'm going to cover in this section. First of all, we're going to talk about how memory is allocated and managed on iOS. Secondly, we're going to talk about what types of memory use matter. It turns out that not all allocations are created equal, and there's a particular distinction between clean and dirty memory that we're going to discuss. And then third of all, we're going to talk about what happens when the system runs low on memory and how it deals with that situation.

So let's start. How is memory allocated and managed on iOS? And to start off at a basic level, one key thing to note is that memory is a per-process thing. If you've got a pointer in your process, in your address space, that is different from that same memory address in a different address space, in a different application. Memory is a per-application thing.

Now, one thing you'll notice is that the range of pointer addresses in your process, in your application, is actually quite large. A pointer is 32 bits, and that addresses 4 gigabytes, which is a lot of space. In fact, that's more space than there is physical memory available on any device that we've ever shipped.

And the question might be, how can that be? How can we have, you know, logically make this amount of memory available per process without physically having that much? And the answer is virtual memory. We take all of the memory available in the system, divide it up into 4 kilobyte chunks, and then not all of the memory that your application can access is actually stored in physical memory at the same time.

So let's dive a little deeper into this. So this is kind of just a very simplified view of physical memory is divided into 4 kilobyte pages. But you're all the way up at your applications level. And what you're doing is you are allocating objects. You're creating new objects. You may be calling malloc directly. But these are ultimately allocations.

Now, it would be really expensive to make a system call into the kernel every time you needed a few bytes of memory. So what happens is that these allocations end up getting carved out of larger chunks of memory that are allocated as necessary. So for example, when you first start allocating memory, the malloc library may allocate a 1-megabyte chunk, and then as you ask for little bits of memory, it'll carve pieces out for those objects.

So then what happens in the kernel is it takes these virtual memory objects and then it maps them to physical memory. It takes the pages that are used and then as your application needs them, it brings them in into physical memory. Now, there's one very important thing to note here.

As I mentioned, when you think of your application, you think of heap memory, the things that you're allocating. But the reality is that that is just part of the story, just the tip of the iceberg, as we might say. The answer is that there is a lot there under the surface.

Question is, what are these other things? What is filling up your address space and using your memory? So in addition to this heap memory that you're familiar with, there are also things like the code for your application and the code for frameworks and libraries that you're linking; global variables and statics; your thread stacks, if you have any local variables or register data that's saved onto the stack; decompressed image data; CALayer backing stores, the contents of your layers. If you're using some sort of database, there's probably a cache that is in use to speed up accesses. And there's actually additional memory being used outside your application in the render server.

So that's kind of the basics of memory on iOS. So what types of memory use actually matter? And again, this is a distinction between clean and dirty. We're going to keep coming back to that, so keep that in mind. And so I want, just for a second, for you to put yourself in the frame of mind of someone who's designing a system where you're trying to manage a limited resource, and memory is a limited resource.

And so when the time comes when it's running out, what do you do? How can we reclaim more memory for our needs? So one answer, and this is a very common answer that is used in a lot of systems, is to take stuff, the data that's in memory, and write it out to disk, create a swap file and store it there.

Because of kind of the characteristics of our devices and our environment, that's actually not what happens on iOS. We do not page memory out to disk. So we can't do that. What can we do? Well, we can just throw it away. And this is, in fact, what happens. We take stuff that is in memory, and we just dump it.

So that seems problematic, because what if that was memory that your application needed? And sometimes that is the case. And if the system has no choice but to get rid of this memory that your application depended on, its only recourse is to terminate your application. And obviously, you don't want that to happen.

So how can you keep it from happening? And the answer is in this distinction between clean memory and dirty memory. Clean memory is memory for which a copy exists on disk. So for example, the code for your application is needed in memory to execute, but there's an exact copy of it already present in disk.

And so if the system is running low on memory, it can just purge that memory and then bring it in at a later time from disk if it needs it. Same thing with the code for any frameworks that you use. If you've got memory mapped files, same deal.

So then what's dirty? The answer is everything else. So if you've got heap allocations, if you've got, say, a JPEG that you're displaying on the screen and needs to be decompressed, that decompressed data is dirty. These database caches are going to be dirty. So the answer is that most things are dirty. So this is kind of an abstract concept, and I wanted to help make it more concrete, so we're gonna play a little game show. It's called Clean or Dirty. There are no prizes, but hopefully you can follow along.

So we've got this code snippet here, and what I want to do is display "Welcome to WWDC" in a UIAlertView. So what I'm doing is I've got an NSString that I'm creating dynamically via +[NSString stringWithUTF8String:]. And so what this actually does is it creates a copy of this constant C string and creates the NSString from that. And the question is, is this NSString clean or is it dirty? The answer is that it's dirty. Because we had to make a copy of this constant, this is an object that we were allocating on the heap, and thus it's going to be dirty.

So let's modify this example a little bit and just use a constant NSString. Is this going to be clean or dirty? Now, one interesting thing about constant NSStrings is that they're actually stored in the data segment, but in a read-only portion of the data segment. And so what that means is that since that data isn't written to, there's still a copy of it on disk, and so this will actually end up as clean.
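The two variants discussed above might look roughly like this. This is a sketch reconstructed from the description; the exact code on the session's slides isn't in the transcript, and the alert wiring is my own:

```objc
// Dirty: +stringWithUTF8String: copies the C string's bytes into a
// new heap allocation, and heap allocations have no on-disk backing.
NSString *dirtyTitle = [NSString stringWithUTF8String:"Welcome to WWDC"];

// Clean: a constant NSString lives in a read-only portion of the data
// segment, so an exact copy always exists in the app binary on disk.
NSString *cleanTitle = @"Welcome to WWDC";

UIAlertView *alert = [[UIAlertView alloc] initWithTitle:cleanTitle
                                                message:nil
                                               delegate:nil
                                      cancelButtonTitle:@"OK"
                                      otherButtonTitles:nil];
[alert show];
```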

Let's get a little more complicated. So here's a function that is probably inappropriately named. It should probably start with new, but it's allocating some memory, and we're just going to allocate a 10-megabyte buffer directly using malloc. Is this memory clean or dirty? Now one thought might be, well, it's an allocation in the heap, it's going to be dirty, but one thing that you may know about malloc is that the contents of that memory upon allocation are undefined, and so they don't really have any specified value. So there's no need to keep undefined values in memory. So this memory is actually clean. However, as soon as you go and write to it, it becomes dirty, because again, there's no copy of this present on disk. There's nowhere to bring it in from.

Let's do some image examples. So here we've got a UIImage that we're creating from an image in our app bundle. That's the WWDC 2012 logo. And it's kind of important to note that a few things happen when you create a UIImage. So initially, UIImage is just kind of a thin wrapper around a CGImage, as you may have heard in the last presentation. And then, so in this case, we've got a JPEG that's backing it.

So then at the point where we go and display it on screen, that's actually going to get decompressed and a bitmap is going to be created. So is this clean or dirty? Well, there are actually a number of different parts here, right? We've got our UIImage, we've got our CGImage, we've got our bitmap. But the answer is that actually all of those things are dirty. We had to decompress the JPEG into a bitmap, and that bitmap itself is not stored anywhere else. So this counts as dirty memory.
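A sketch of the example above; the asset name and image view are hypothetical:

```objc
// UIImage starts out as a thin wrapper around a CGImage that is backed
// by the compressed JPEG in the app bundle.
UIImage *logo = [UIImage imageNamed:@"WWDC2012Logo"]; // asset name hypothetical

// When the image is actually drawn, the JPEG is decompressed into a
// bitmap. The UIImage, the CGImage, and the decompressed bitmap all
// count as dirty: the decompressed pixels exist nowhere on disk.
self.logoView.image = logo;
```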

And then just one last image example. In this case, we're actually going to go and take a snapshot of a view that's on screen using UIGraphicsBeginImageContext and then CALayer's renderInContext:. We're basically taking a view and we're going to render it into this image context. And that's going to create a bitmap.

And then we're going to go and create a UIImage from that. So in this case, there's no file backing it. It's just the bitmap. Hopefully at this point, you can see that this is going to be dirty as well. None of this existed on disk. It's just a snapshot that we took of something on screen.
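The snapshot described above corresponds to the classic UIGraphics pattern; this is a sketch, where `view` stands for any on-screen UIView:

```objc
// Create a bitmap-backed image context the size of the view.
UIGraphicsBeginImageContext(view.bounds.size);

// Render the view's layer into that context: this fills the bitmap.
[view.layer renderInContext:UIGraphicsGetCurrentContext()];

// Wrap the bitmap in a UIImage and tear down the context.
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// There is no file backing `snapshot`; the bitmap is pure dirty memory.
```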

So the bottom line of all this, and this is something that I've already said, is that most app allocations are indeed dirty. Most things that you do are going to result in dirty memory. So it's really important to look at all of your memory usage and see where you can reduce it. So that was number two. Number three is what I think is a really interesting part, which is what actually happens when the system runs low on memory.

And we've got our nice little iTunes-style progress bar here. And it's helpful to view the memory available in the system in terms of clean memory, dirty memory, and free memory. And this is actually a pretty normal state of the system. At first glance, it might be a little concerning to see so little free memory, but there's a large amount of clean memory, and if the system needs more, it can just evict some of those clean pages to use them for something else. And one model that we have is that free memory is wasted memory, because it's possible that those pages might be needed at some point in the future. So there's really no need to throw them away.

So let's say we launch your application, and your application is cutting edge. It's doing cool new stuff, and that takes memory. So you're going to allocate some objects, maybe bring in some images, display them on the screen, and that is going to cause your amount of dirty memory to go up. And by and large, the system will indeed evict clean pages in order to make room for that.

However, if your application continues to allocate memory, eventually the system will run into memory pressure. And the definition of memory pressure that I'm going to use is that the amount of time or the rate at which the system needs to turn over clean pages, you know, it's bringing in a page, it's using it, and then it needs to evict it again to bring in something else, gets faster and faster and faster.

And that turns into a performance problem. You know, it's kind of a manifestation of thrashing. So at this point, the system needs to recover some memory in order to restore performance and keep operating. And so what it's going to do is it's going to go and terminate some background apps.

At that point, all of the dirty memory from those background apps is no longer useful, and so it ends up in the free column. And then the rest of it is primarily your memory. And then, kind of as you go and you continue and you bring in more pages, the system will actually move that free memory back into clean. And we're kind of back to a normal state again.

So when I did this example, I talked about your app being the one in the foreground while we're terminating other people's applications. But your application could just as easily be in the situation of being the one that's about to be terminated. So how do you avoid that? Our primary mechanism for that is memory warnings.

And they are a fact of life, they are going to happen, and so the question is, how will your application respond? This is your last chance to preserve the user's experience before your application is potentially terminated. And that's a serious event, and it's something that it's vital to respond to.

So the first thing is to make sure that your application is even in a position to respond. So these notifications are delivered on the main thread of your application. So if you're kind of running and you're doing something on your main thread, you're allocating a lot of memory, you may be blocking your main thread, and so these notifications never actually get through to you, and you may never have a chance to respond to them.

The other thing is to avoid large, rapid allocations. The system will send you a memory warning, but if you're allocating memory so quickly that it is forced to make a decision before you have a chance to free memory, then it may have no choice but to terminate your application, even though you could have freed some of that memory if you'd gotten the notification in time. So by allocating memory gradually instead of all at once, you avoid forcing the system's hand.

So, sorry, one other thing is that if your application is in the background and it is suspended, it actually won't be resumed in order to handle these notifications. So if you're in the background, it's wise to actually release non-critical memory in your applicationDidEnterBackground: UIApplicationDelegate callback.
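A sketch of that callback; the cache property is hypothetical:

```objc
- (void)applicationDidEnterBackground:(UIApplication *)application {
    // Once suspended, this app will not be woken to handle memory
    // warnings, so release non-critical memory before that happens.
    [self.thumbnailCache removeAllObjects]; // hypothetical NSCache property
}
```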

So how do you handle these warnings? The answer is that you want to free as much memory as possible without sacrificing the user experience. And those are at odds. And that's going to depend on your particular use case. You need to decide what is acceptable, what is that trade-off.

We provide a lot of different points at which you can receive a memory warning and handle it. First of all, there's a notification posted to the default notification center, UIApplicationDidReceiveMemoryWarningNotification. We invoke a method on your UIApplicationDelegate. And then each of your view controllers will also have the didReceiveMemoryWarning method invoked on them.
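The three delivery points can be sketched as separate fragments; the handler names and the memory each one releases are hypothetical:

```objc
// 1. Observe the notification posted to the default center:
[[NSNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(handleMemoryWarning:)
           name:UIApplicationDidReceiveMemoryWarningNotification
         object:nil];

// 2. The UIApplicationDelegate method:
- (void)applicationDidReceiveMemoryWarning:(UIApplication *)application {
    [self.cache removeAllObjects]; // hypothetical app-wide cache
}

// 3. The per-view-controller hook:
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    self.preloadedImages = nil; // hypothetical property
}
```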

Now, one other mechanism that you may know of from past years to release memory that's no longer needed is -[UIViewController viewDidUnload]. In iOS 6, we're actually no longer calling this method. And the reason is that iOS has gotten much better about purging memory associated with your views when they're not on screen.

This was actually a major source of bugs where applications were freeing memory without necessarily getting rid of all the references to them. So there's actually no need to do this anymore, and we think it will eliminate a lot of bugs. But what that does mean is that if you were using that as a place to release memory that otherwise you might want to release in a memory warning, you'll probably need to adjust.

So, this is a question we've gotten a lot: How much memory can I use? And our answer has always been, first of all, as little as possible, but second of all, you need to test on each device. Our devices have different characteristics, and things like the screen resolution, things like the amount of memory available, are really going to affect how much memory is available. So on the new iPad, we have actually set a limit, and the limit is 650 megabytes.

And we did this to provide certainty to you so that you know that there is some amount of memory that is likely to be available to your application and that you can use. If you go over this amount, the system will terminate your application. However, that doesn't mean that the 650 megabytes is free, and you should just go ahead and use it right away. At that point, there are going to be effects in the system. It's very likely that some of the user's background applications will be terminated to create memory for your application and so on. So this isn't a free amount, but it is the maximum that you can use.

All right, that was a lot of stuff. I talked a lot. But now I want to get really practical and kind of show you, using our tools, how you can kind of see some of the concepts that I've talked about. And so for that, I'm going to show you a little demo.

So for this demo, we're going to use our trusty friend, the WWDC application. And just to make it clear, these are issues that we have inserted in order to demonstrate the concepts here. Rest assured that the version of the app that is running on your devices has been thoroughly memory tested and behaves like a champ.

So the first thing I'm going to do is I'm actually going to run the application on an iPad. I've got my iPad over here. And I'm using the profile action, so this will actually launch instruments. Just give it a second here. And I'm going to use the allocations instrument, which may be familiar to many of you.

All right. So I'm just going to-- I'll be coming back to the VM Tracker. For now, I'm just going to turn on automatic snapshot because it's a snapshot-based tool. And so you'll see that we're gradually-- our memory usage is gradually increasing. So what I'm going to do on the iPad is I'm in the photos view here. And so I'm just going to go and start swiping through some photos, just so you can see that.

And so now I'm actually going to go back to my two instruments here. And you can see that, you know, kind of there are... Maybe a few spikes, but in general, the memory usage is remaining pretty flat as I move through these photos here. And so if you look at the allocations instrument alone, you might conclude that, OK, this isn't actually using that much memory.

However, if you look at the VM Tracker here, which is an instrument that maybe doesn't get as much attention as it should, it's actually telling a different story. So if I just kind of look here and drag, you can see that my allocations have really not changed that much.

In fact, they were at about 7 megabytes and they ended up at maybe 5 and a half. Just let me zoom in here to show. But my VM Tracker, which shows my dirty memory, shows that I went from 66 megabytes of dirty memory up to 244 megabytes.

So that's a big difference. If you just looked at the Allocations tool, you might conclude that things were fine. But VM Tracker is telling you a different story. It's showing you memory that is beyond your heap, that is stored in other regions. Now, if you look at the VM Tracker here, you can look at the different regions. And again, it's this dirty amount here that we really want to emphasize. That's really what's most important to look at here.

Now I'm actually going to go and stop this. I'm going to rerun it on the simulator just to demonstrate what happens in this application when we simulate a memory warning. So I'm going to run the allocations instrument again. And our application is launching. And I'm going to go to the Photos tab. And let me just turn on my VM tracker here again. And I'm going to do the same thing and swipe through my photos.

And you'll see essentially the same effect over time. It's only snapshotting every three seconds, so it takes a little time for it to show up. But my allocations are essentially flat, but my dirty memory is steadily rising. Now, what I would hope is that I've implemented a response to memory warning, so that if the system does run low on memory, I can respond. And so in the simulator, you can simulate a memory warning under the hardware menu. So I'm going to do that.

Unfortunately, that did not make much of a difference in my dirty memory. That's unexpected. What I was expecting to see was a real drop in the amount of memory being used. So let's examine why that is. So the view controller that is responsible for this is actually called the WWDCPhotosViewController.

All right, and you can see here that I've left a to-do for myself saying I need to handle a memory warning. So what I'm going to do is I'm actually going to go and very quickly implement a handler for these memory warnings. And look at that, that's magic.

And so all I'm going to do is I'm going to go through my list of photos and set the UIImage associated with each of those photos to nil, which, since this is an ARC application, will cause them to be released. So let's profile this again and then see what happens.
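The handler described above might look roughly like this; the class and property names are guesses from the transcript, not the session's actual code:

```objc
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Drop the decompressed photo bitmaps. Under ARC, nilling the
    // strong reference releases each image.
    for (WWDCPhoto *photo in self.photos) { // hypothetical model class
        photo.image = nil;
    }
}
```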

All right. Oh. My mistake, so I just need to... Run this again here. All right, let's just do this fresh. So we're going to launch Instruments, run the allocations tool. Again, I'm just going to turn on VM Tracker. Go back to these photos again. I'm just going to continue to swipe through them.

And you can again see that our dirty memory is in fact steadily rising, so I've gone through them all now. And so I'm going to go through again and simulate a memory warning, and let's see what happens this time. All right, well, it got a little lower than before.

But the point is that that's made a real difference to my dirty memory, even though, as you can see, the amount of heap memory that I'm using actually didn't change. So just looking at the Allocations instrument does not give you a full picture of what goes on. It's a good starting point, but in terms of seeing the full effect of what you're doing in terms of memory in the system, VM Tracker is your friend. It's what you want to look at.

So if you get one thing out of that demo, one thing, what I'm hoping that you'll get out of it, is to pay attention to dirty memory. Dirty memory is the scarce resource in the system because it leaves the system with no other option to reclaim that but to terminate your application.

Now, on that same note, one thing that's very important is to avoid usage spikes in your application. And so what that means is that because dirty memory is the most important, any time that you have a big spike where you're using a lot of memory even for just a second, it requires the same actions to be taken as would be taken if you were using that memory for a long period of time. Because physical memory is a resource that's limited, those spikes really matter.

So if you're seeing these big spikes in Instruments and Allocations, where you're going way up and coming back down, it's important to try and smooth those out. And one thing that can really help, if you've got a tight loop where you're allocating a lot of objects that are then autoreleased, is to use autorelease pools. At this point, I will invite up my colleague Daniel Delwood, who is a software engineer on the Instruments team, to get really practical and show you how you can put these tools and tricks to use for you.
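The autorelease-pool tip above can be sketched like this; the loop body and helper are hypothetical:

```objc
for (NSString *path in photoPaths) {
    @autoreleasepool {
        // Temporaries created in this pass are released at the end of
        // each iteration instead of piling up into one large spike.
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        [thumbnails addObject:[self thumbnailForImage:image]]; // hypothetical helper
    }
}
```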

Thank you very much, Morgan. I want to talk to you about using tools to really identify the memory usage in your application and what you can do to use it better. So, the first thing I want to talk about is the process. As you may have seen this slide in other talks, we really want to emphasize that it's important to follow a process when you're looking for performance issues.

You need to identify and reproduce the problem you're trying to solve so that then you can profile it with tools, make some hypotheses, and iterate here to understand really what's going on in your application. And then finally, you can make those changes that will hopefully make your application much better and much more memory efficient.

So, going back to the idea of an iceberg: most of the memory that you think of, the heap, is the top of the iceberg. And that's kind of a problem, because this memory is not the whole story. And so you may think, okay, what can I do? Because there's a lot of memory under the surface, and I really want to get the most out of my application. Well, the good news is that most dirty memory is in some form or fashion related to the heap.

So this means that if you've got a UIImage, it has those references to backing stores, and if you optimize your UIImage objects, you'll do a lot better. So let's talk a little bit more about this. What can you do specifically to reduce your entire application's memory usage? Well, the first thing is to understand your view hierarchy. And as the iOS graphics performance talk that was just in this room discussed, the more pixels you draw, the more memory you've got to use to make those large bitmaps and show them on the screen.

And you can really get a lot of mileage just by following some of the performance tips in that graphics talk, and I'd like to refer you back to it. But what we're going to focus on today is the second thing, and that's avoiding recurring heap growth in your application. And like I said, it doesn't matter if these objects are small.

They may be referencing really large VM regions that are hard to see. So what are the top three that you should look for? Well, first of all, you should look for leaked memory. And leaked memory is probably the easiest to detect. We've got great tools for it. And it shows you memory that's inaccessible. There's no other references to it. And it really just can't be used again. So it's completely wasted.

And so whenever you detect leaked memory, as I'll show you a little bit later in a demo here, you should do your best to get rid of all of it, especially because it's so easy to find. The second thing, and probably more important, is abandoned memory. And this is memory that you could still reference, but that you never actually use again.

And so this is memory like caches that are slightly misimplemented, and memory that you allocate but never go back to. And so it's a real, real big problem that you need to look into. The third one, of course, is those caches that are referenced and waiting for your use, but they're speculative. The user may use them, but they also may not. So you need to be very smart about allocating only the memory that you think the user is actually going to use.

So let's talk about how to detect it. Well, the central premise behind this is that memory use in your application shouldn't grow without bound when you repeat an operation. So what do I mean by this? Well, for example, say you're pushing and popping a view controller. You push, you pop, you expect that the memory usage when you get back will be the same as when you left. And if you repeat this over and over again, you really don't want to see your memory usage going up.

If you're scrolling in a table view, that's another example of that, where your scrolling goes by, you see all these rows, and you think, okay, well, my memory usage, it's fine if it goes up. But you should really be reusing those table cells and hopefully not using more memory as you go through the scroll view multiple times. So let's talk about repetition here and how it actually elicits that memory growth.

Well, say you've got a graph of your application's memory usage over time, and at some point in your application, you're going to see that the graph is pretty stable. Well, this, you can sort of think of as your baseline, right? Your application isn't really doing much, and it's a great starting point for your investigations.

Now, you're going to perform that operation, which is going to bring it into a new state, and perhaps you're using more memory. Perhaps you're viewing a photo, right? Your application is going to do some really interesting and cool things. But then when you return to that original state, you might expect that it would be the same memory usage as the baseline. Well, of course, the first time you do this, there's a lot of caches to warm up, and that is really going to be your warm-up phase. So the first iteration doesn't give you the information you need to solve the problem.

So that's why you go ahead and repeat it, let's say five times. Once you've repeated it, then you can take a look at the difference between returning to the original state after the first iteration and then returning to the original state after the last iteration, and that is your wasted memory. So how can you find that with our tools? Well, the allocations tool is really powerful in that it can show you backtraces for every single heap allocation you make. So it tracks all of your heap allocations, including your Objective-C and C++ objects. Now, for those of you who are C++ developers, you may actually need to enable this option because it's not on by default for performance reasons.

There's also the ability to record not just the malloc and free events, but the retain, release, and auto-release events, which comes in really, really useful if you're trying to track down leaks or other reference counting issues. And finally, allocations has statistics that it provides you by allocation type, and also you can put this information in call trees, which is really helpful for determining the point in your code that's responsible for all these allocations.

Finally, there's heap snapshot technology built in, which is the really important part to find that abandoned memory and persistent memory growth over time. So let's talk about it. Well, taking a practical example of that pushing and popping a view controller, let's say you're getting your app into a steady state. When you launch your application, it comes up and it's showing your view. And then you go ahead and perform the operation.

And after each iteration here, what you're going to do is take a snapshot of the heap with the allocations instrument and repeat this multiple times. Ideally, once you've done this, you know, five, ten times, it's those middle iterations that you really want to focus on and hope that those go down to zero. Because if they do, then your application is indeed well-behaved as you'd expect.

So let's go ahead and take a look at this in a demo. So I'm going to use the same application, and I'm going to go ahead and run this in the simulator. Now, luckily for us, heap allocations work mostly the same in the simulator as they do on a device. And so I'm going to just go ahead and select the Leaks template here and profile.

And my app starts up in the simulator. And this is really a great place to analyze, because it's really great for rapid prototyping, doing things over and over again, and so on. So I launch my application, and I'm going to select different views here. And you'll notice that Instruments is recording in the background.

And what I see is that allocations, you know, gets to a pretty steady point of about four, five and a half megabytes. If I click on my leaks instrument, you'll notice that it's doing automatic snapshotting at 10-second intervals. Now, what this is doing is it's really pausing my app and taking a lot of time to look through it and find memory that's unreferenced. In this case, I'm not finding any.

So I'm going to go to my application, and I'm going to pick a scenario that I want to investigate. So for this one, I'm going to show off the filter capabilities of the iPad app. And I'll bring up the filter field. And I can really call anything I want the baseline for my investigation.

So I'm going to call this screen, with the filter options open and everything selected, my baseline. So I'll just go over to Instruments and select Mark Heap. And it creates that baseline snapshot for me at 5.11 megs. All right, so why don't I turn these off, close it, turn it back on. Now I've returned to my steady state, and I'll mark my heap again.

And I'm just going to repeat this a couple of times to see how my application performs. What I'd really expect here is that I wouldn't have many allocations, even though my filter is actually doing a lot of work for me. I'm going to a Core Data database, looking at values and presenting them on the screen.

So I've done this a few times now. And I'll stop my application. And you'll notice that for each one of these in the middle, I'm experiencing a heap growth of about 300K. Now, that's kind of interesting. If I turn it open, you'll see a list of the objects that were allocated between Heapshot 2 and Heapshot 3 that are objects that, you know, I didn't anticipate being alive.

I can look through these and see I've got an array, I've got a set, I've got some just general malloc memory. But what I want to look for here in this list are objects that hold on to a lot of resources. So in this case, I can see pretty quickly that this UITapGestureRecognizer is probably what I'm going to be looking for, because, you know, this is a UI element. It's got to have views and other objects behind it. In this case, no views. But I can turn this down and see a list of all the objects. I created 816 of these between the snapshots.

So if I select one of them and bring in the extended detail view, I'll see a backtrace on the right of where this came from. Now, I'm just going to go ahead and double click on a black frame in this backtrace to jump directly to my code. And I'm actually going to show it in Xcode to make this easier to see.

So here we are, and we're in a tableView:cellForRowAtIndexPath: method. All right, that's fine. It's not uncommon for us to be allocating a gesture recognizer here. But what we're looking for is why we're allocating so many. So let's look for the allocation here. Here's our gesture recognizer, allocating it with a target. We're also creating a double tap recognizer. Great. And then we're adding it to our session cell.

Now if we go back and look at our session cell, it comes from, here we go, dequeuing a session cell. We're doing cell reuse properly. That's great. And if there's no session cell, we create a new one. Oh, but there's the issue. We're actually adding our tap gesture recognizer even if we reuse the cell.

Now it's already got one attached, and we don't need to attach one every single time. So what I can do is just move this code so that I only attach the gesture recognizer when I create a new cell. So I'll just save and profile that again. And actually, to show you a different issue, I'm going to remove this from the simulator.
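The cell-reuse fix being described looks roughly like this. The session's actual source isn't shown in the transcript, so the identifiers here (SessionCell, cellTapped:) are reconstructed for illustration; the point is that recognizers are attached only in the branch where a new cell is created, never on reuse:

```objc
// Sketch of the fix: attach gesture recognizers only when the cell
// is first created, not every time a cell comes back from the reuse queue.
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell =
        [tableView dequeueReusableCellWithIdentifier:@"SessionCell"];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:@"SessionCell"];

        // Only a brand-new cell needs a recognizer; a reused cell
        // already has one attached, so adding another here would
        // accumulate one recognizer per row displayed.
        UITapGestureRecognizer *tap =
            [[UITapGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(cellTapped:)];
        [cell addGestureRecognizer:tap];
    }
    // Configure the (possibly reused) cell's content for this row here.
    return cell;
}
```

With this structure, repeated heap snapshots while scrolling should show the recognizer count staying flat rather than growing with each pass over the table.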

The simulator does not seem to like me today. So I'll build and run my application. It launches. And this time I'm actually going to watch it live to see if my change has actually made the gesture recognizer issue go away. So the allocations table view here is actually showing me all of the different categories in my app.

I see strings, malloc blocks, a bunch of stuff. But what I'm interested in in this case is the gesture recognizer. So I can just type that in. You'll see that I've actually created 86 so far, or 86 targets so far. So let me go to the schedule, pick a day, and filter.

And actually, we'll notice in the background that the number isn't going up. So this is exactly what I'd hoped for. So, all right, that actually is dealing with the issue that I was seeing earlier. That was kind of using the tools to find some persistent memory growth using the heap snapshotting technology.

And it's a very, very powerful tool. It's one that I really highly encourage you to exercise on your application. Come up with a set of scenarios, some things that the user will commonly do, and then repeat them over and over again, taking snapshots after each one. Now, let's go on and talk about some tools, tips, and tricks, some ways of getting the most out of instruments and to understand the memory in your application.

So I'd like to encourage you to get to know your application. And what I mean by this is pay attention to that list of objects, specifically objects holding onto resources. Now, these may be UIImages, view controllers, NSOperations, a bunch of different things. And anything that really wraps large data, because it may be difficult to see; even though the objects are small, there may be a lot under the surface. And so, you know, for instance, this could even be an NSData with a one-megabyte buffer.
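The "small object, big buffer" situation can be sketched like this (ThumbnailRecord is a hypothetical class, not from the session): the Allocations instrument lists the wrapper as a handful of bytes, while the data it retains is a megabyte.

```objc
// A tiny wrapper object can hide a large allocation: the object itself
// is just a pointer-sized ivar plus the isa, but the NSData it retains
// keeps a full megabyte of dirty memory alive.
@interface ThumbnailRecord : NSObject
@property (nonatomic, strong) NSData *imageData;
@end

@implementation ThumbnailRecord
@end

// Somewhere in the app:
ThumbnailRecord *record = [[ThumbnailRecord alloc] init];
record.imageData = [NSMutableData dataWithLength:1024 * 1024]; // 1 MB under the surface
```

So when you scan a heap snapshot, weigh objects by the resources they transitively hold, not just by their own instance size.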

Now, another thing that's important to track are your own objects. And these are objects that, hopefully, you know, have a class name prefix. It's really, really easy just to type that into the search field, filter down to your objects, and validate those patterns that you expect. If you expect that, you know, your view controller goes away when the user moves away from it, make sure that's happening. And finally, simulate memory warnings in your application, like Morgan showed, to make sure that you're being a good citizen and doing all that you can to work well on iOS.

Now, memory bugs actually are pretty expensive when you encounter them in the wild. And if your users are reporting them to you, it's going to make you have to turn around another build, and it's a very time-consuming process for you. So I'd like to really encourage you to switch to ARC as the first thing, because it's a really, really powerful technology and allows you to stop thinking about retain-release and allows you to think about the object relationships in your application. Another powerful technology is the Xcode static analyzer. And this tool is great for finding uninitialized variables and tracking down some common programming mistakes that are very easy to make and sometimes very difficult and time-consuming to find.

Finally, use the Leaks instrument, and make sure you only make one fix at a time, because if your fix is a little bit wrong, or sometimes there's other code that was relying on a bug, you just need to make sure that you don't introduce other issues when you fix these leaks.

Now, when it comes to fixing leaks, you may say, well, I'm already running ARC. There's not going to be any leaks, right? Well, that's not entirely true, because ARC doesn't solve the problem of retain cycles for you. Now, you can use the Cycles & Roots view in Instruments just by going to the Leaks instrument, pulling down on the jump bar, and selecting Cycles & Roots. And one thing I would really like to caution you to be careful of is block captures.

Now, blocks are a really, really powerful technology, but they can sometimes lead to unexpected behavior if you're really not anticipating it. So, for example, here, I'm using the really powerful block-based NSNotificationCenter API. And if you'll notice here, I've got my self.document being set whenever this notification gets sent.

Now what the compiler does here is it actually captures my variable strongly and retains it. And what this means is that if I have a retain cycle created by this, it's just because the compiler is doing something to be very safe. The notification center is going to have to copy my block from the stack to the heap, and in doing so it creates this capture itself.
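The pattern being described looks something like the following sketch. The slide's exact code isn't in the transcript, so the notification name and surrounding class are assumed for illustration; what matters is that referencing self.document inside the block makes the copied block retain self:

```objc
// Inside a view controller method. Because the block body references
// self (via self.document), the compiler captures self strongly when
// NSNotificationCenter copies the block to the heap. If self (or
// something self owns) also retains the observation, that's a cycle.
[[NSNotificationCenter defaultCenter]
    addObserverForName:UIDocumentStateChangedNotification  // assumed name
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                self.document = note.object;  // strong capture of self
            }];
```

The same strong capture happens even if you write `_document = note.object;` directly, because accessing an instance variable still requires self.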

All right, so what if you're not actually referencing self? What if you're just using an ivar? Well, if you're using an instance variable like this, the compiler is going to do the exact same thing. It's smart enough to know that you need self around, and so it just captures self and references through the pointer.

All right, so if this is the cause of your retain cycles, how can you get around it? Well, it's pretty easy. All you need to do is create a weak variable to capture self, and then use the weak variable in place of your self or even your instance variable. And I'd highly encourage you to use property accessors here. Now, one thing to note is that if you're running ARC, __weak is sufficient. If you're running with manual retain-release, you'll also need to add the __block keyword.
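Here's a sketch of that weak-reference fix, using the same assumed notification setup (MyViewController and the notification name are illustrative, not from the session):

```objc
// Capture a weak reference outside the block and use it inside,
// so the copied block no longer keeps self alive. Under ARC, __weak
// is sufficient; under manual retain-release you would use the
// __block keyword instead to avoid the block retaining the variable.
__weak MyViewController *weakSelf = self;  // MyViewController is hypothetical
[[NSNotificationCenter defaultCenter]
    addObserverForName:UIDocumentStateChangedNotification  // assumed name
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                // Property accessor through the weak pointer; if self has
                // been deallocated, weakSelf is nil and this is a no-op.
                weakSelf.document = note.object;
            }];
```

Using the property accessor through weakSelf, rather than touching the ivar directly, is what keeps the compiler from quietly capturing self again.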

Finally, there's incorrect retain-release, which can lead to leaks. And this is a classic problem, very easy to do under manual retain-release, just forgetting to write a release or writing an extra retain. And it's also possible with incorrect ARC bridging. And so you'll need to be careful about this.
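As an illustration of the bridging case (this example is mine, not from the session), here is how the choice of bridge cast determines whether a Core Foundation object you created ever gets released:

```objc
// A CF "Create" function returns a +1 reference that someone must release.
CFStringRef cfName = CFStringCreateWithCString(NULL, "example",
                                               kCFStringEncodingUTF8);

// Wrong: __bridge does not transfer ownership to ARC, so unless you
// also call CFRelease(cfName) yourself, this string leaks:
//   NSString *name = (__bridge NSString *)cfName;

// Right: __bridge_transfer hands the +1 ownership to ARC, which will
// release the object when the NSString pointer goes out of scope.
NSString *name = (__bridge_transfer NSString *)cfName;
```

A leak like the commented-out version shows up in the Allocations retain/release history as a create event with no balancing release, which is exactly the kind of record the focus view helps you read.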

If you encounter leaks of this type, just select a single instance, hit the focus button, and go in and take a look at the full retain release history for it. Now, I'd really encourage you also to look for the backtraces at both the beginning and the end, because these are most likely to be at fault.

[Transcript missing]

Now some final tracing tips to hopefully get the most out of your use of instruments. Make notes with each trace that you take. These can be totally invaluable later as you send the trace to other members of your team or you're looking back at it from, you know, a month later.

These can be very helpful for knowing what you were doing at a certain time that you see a memory spike. And I really highly encourage you to add these. Also, filter specific time intervals. Using our range selection buttons, you can set in and out points, or you can also option drag in the timeline and select just ranges to filter down your detail view to those time ranges.

And finally, please be conscious of the snapshot intervals. As Morgan showed, the VM tracker was snapshot-based, as well as the leaks instrument. And there may be slight pauses when that happens on your iPad or iPhone. And what you'll want to do is just not worry too much about frame drops during these intervals.

[Transcript missing]