Configure player


WWDC Index does not host video files

If you have access to video files, you can configure a URL pattern to be used in a video player.

URL pattern


Use any of these variables in your URL pattern; the pattern is stored in your browser's local storage.

$id
ID of session: wwdc2009-412
$eventId
ID of event: wwdc2009
$eventContentId
ID of session without event part: 412
$eventShortId
Shortened ID of event: wwdc09
$year
Year of session: 2009
$extension
Extension of original filename: m4v
$filenameAlmostEvery
Filename from "(Almost) Every..." gist: [2009] [Session 412] Advanced De...

WWDC09 • Session 412

Advanced Debugging and Performance Analysis

Mac • 1:06:05

Learn from the experts as they dive into the depths of an application to track down, identify, and fix the most difficult bugs. Get the most out of Xcode's debugger and symbolicate crash traces, add DTrace scripts to your tool set, and learn other advanced debugging skills.

Speakers: Chad Woolf, Jim Ingham, Greg Clayton

Unlisted on Apple Developer site

Downloads from Apple

SD Video (135.9 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

[Chad] This is Advanced Debugging and Performance Analysis. My name is Chad Woolf, I'm a Performance Tools Engineer. I work on DTrace and Instruments. We've got a two-part session for you guys today. The first half we're going to be talking about performance analysis using DTrace and a little bit about Instruments.

And then for the second half, Jim Ingham from the debugger team is going to come up here and talk about some advanced debugging with Xcode and GDB. So for the first half we're going to talk about performance analysis, and we're going to talk specifically about DTrace. DTrace is a tracing technology built into the OS X kernel.

It'll allow you to trace anything from the kernel all the way down to your application. We'll get into that. We're going to learn DTrace by example. So I'm going to show you a series of examples, and hopefully by the end of these we'll have a pretty good working knowledge of the language and what it can do for you. Then we're going to take a walk over here to one of our demo machines, and we're going to demo the technology live.

So if you guys have a laptop and you want to follow along all you need to do is be logged in as an administrative user and have a terminal ready, and when we get to the demos you can hopefully follow along, play with it as we play with it. And then finally when we come back from the demos we're going to talk about how you can use DTrace to extend the power of instruments like creating your own instruments using DTrace.

So let's start with an example here. So you're an app developer, and one of your customers or someone in the office comes to you and says, Your app's working a little slower than usual, I'm getting some sluggishness here, can you come over to my desk and take a look? So you do.

You go over. You confirm, Yes, it's definitely sluggish. So you call up Instruments, you attach to the running process, and you take a time profile. And what you see is that 75% of your time is now being consumed by a function that you're calling called blackBox. Right? Now of course there's no function called blackBox, I made it up.

But the point is that eventually, when you guys are doing your performance analysis, you're probably going to hit a point in your debugging process where you find you've hit a function, you've optimized all your code around it, and you really don't know what this thing is doing, right? And for our example here, 75% is high.

This thing has never taken that amount of time before. So before we write a bug against blackBox let's, let's go off and see if we can figure out a little bit more information about it. Now typically what you might want to do is find your code where you're calling blackBox, and then insert some logging around it. So here you can see we're going to take the value of the key being passed in, and we're going to record the return value from blackBox.

Right? And then maybe at the start you can record a timestamp, and at the end you can record a timestamp and see how long the calls are taking. You would send it maybe out to a file, or standard out, or maybe put it in shared memory. Right? All these different types of things are things that we do when we're doing tracing. And that's really what this is. Now the problem with our scenario is that the app is running, and the user doesn't know how to reproduce this problem.

Right? It just all of a sudden started doing this. So we can't take the app down. We can't do this type of insertion. What we'd love to be able to do, though, is insert code dynamically, as this thing's running, without having to take it down, or at least get that same effect.

And the way we do that is with DTrace. So you can think about DTrace as a hardware logic analyzer, except for software. So when we're trying to debug a circuit board or something like that, we, you know, we don't take a new run at the circuit board and add circuits to it to try to record what's going on.

Instead we use a logic analyzer. And so if you've seen this picture here on the right you'll see that there's a probe being placed against the metal pins of the circuit board. And when those pins fire, a signal goes up that probe into a logic analyzer, and then after the run an engineer can look through the trace and figure out what happened.

Right? So with DTrace we can actually do the same thing with code. You can place a probe on a piece of code, and then you don't have to worry about writing your custom logging into it. So how does this work? Basically, DTrace can find any instruction in your application as long as it has a symbol table or a dSYM file.

So once you have those two pieces of information, DTrace sees-- as you can see here, we don't have the code for blackBox, but because we have the symbol table and dSYM we can find the entry point and then the exit point. Now you can put a probe on any arbitrary instruction, in case you want to open up blackBox and you know what you're looking for, just by giving DTrace an offset from the beginning of the function. So let's talk about DTrace scripts for a minute. Because basically what happens when a probe fires is that it runs a DTrace script, or a set of action statements.

So in general, DTrace scripts are composed of probe clauses. A probe clause has a probe descriptor, that's the actual description of how to find the probe that you are interested in. A predicate, which will actually limit the firing of the action statements to a set of conditions when they become true. And then the action statements. Now inside our action statements we can do a couple of the things that we would normally do. Like I was giving you the example of blackBox, we can say print out to the console, we can record a user stack.

And we can do a little bit more along the lines of statistical analysis by using these thread-safe accumulators, called aggregates. So if you want to keep track of how many times something happens, or how often it happens, or how long it took, you can keep them in these thread-safe accumulators. And we have associative arrays, as well as normal variables; associative arrays are like a dictionary in Cocoa, you can imagine. So here's an example of blackBox placed with probes and action statements.
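A rough sketch of what a probe clause like that might look like, using the talk's made-up blackBox function and a hypothetical process id of 123:

    pid123::blackBox:entry
    {
        printf("blackBox called with key 0x%x\n", arg0);   /* action statement: log the incoming key */
        ustack();                                           /* action statement: record the user stack */
    }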

And then we're going to go over that script and explain to you roughly what it's doing. But before we do that I want to mention that when you place a probe in an application, it's very similar to setting a breakpoint. When your application's thread hits a breakpoint, it suspends that thread. And then it transfers control over to the DTrace interpreter running in the kernel.

And DTrace will then run all of the action statements that are associated with that address, and then resume your thread. OK? And now since this works very similarly to the way that debuggers work, they're actually not compatible. If you have DTrace and you're using it on an application, you can't attach the debugger, and vice versa. So let's go over the probe descriptor in a little more detail. It's basically a string that's separated by colons.

It's built out of 4 pieces. The first part is a provider. And probe descriptors describe the probe itself. The module, function, and name that follow the provider name are really determined by the provider itself. So a system call provider will have a different module, function, and name semantic than, let's say, the pid provider, which is used to look at your application.

We'll go over those in a minute here. You can actually use wildcards here, or you can completely omit a field, and that will give you basically everything that matches. The probe descriptor is the only required part of a probe clause. If you want to just simply record a trace, the default action is to log it out to the console. And you can get a list of the probes if you execute the dtrace -l command. That'll give you the list of every probe that's registered in the system.

And it's just going to be the system-wide probes. And if you want to limit it to a specific provider, you can say dtrace -l and give it a -n option, which lets you type a script at the command line. And with the -l it'll just simply enumerate all the probes that match that specific pattern.

Let's go over a couple of examples before we continue. This is an example of the syscall provider. And what it's going to do here is place a probe on the entry and exit point of every system call in the Darwin layer, sorry, the BSD portion of the Darwin kernel. The next one is the pid provider. And specifically it's pid1017. And it's going to look at the return from printf in libSystem, and it's going to place a probe on that.

So every time you return from printf, it's going to execute that action statement. And then the next one: we also have an Objective-C provider, where in that same pid we're going to instrument all of the draw methods; you can see the wildcard there, and we've left the module blank, which ends up being the class name. So it's all classes, all draw methods; on the entry point, that probe would fire.
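Roughly, the probe descriptors just described would be written something like this (pid 1017 is the slide's example; the module and wildcard spellings here are approximations):

    syscall:::entry                    /* entry of every BSD system call; empty fields match anything */
    syscall:::return                   /* ...and the matching return probes */
    pid1017::printf:return             /* return from printf (in libSystem) in process 1017 */
    objc1017::*draw*:entry             /* entry of every "draw" method of every class in that process */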

All right, so let's talk about providers here for a second. There's a bunch of them here, and I've listed them on the slide. Let me cover a few. Syscall is the BSD system calls, and you have mach_trap, which is going to be the Mach family of system calls. You have an fbt provider: inside the kernel you can trace all of the exposed symbols in the kernel through fbt, the function boundary tracing provider. You have a proc provider, so any time that the kernel creates a process, or destroys a process, or creates a thread, you'll have probes that will fire for those.

We have statistics for locking, virtual memory, and I/O. Now when you go and look at this slide, maybe afterward, you'll see that the second column here is the name in the Instrument builder, and we'll talk a little bit about that when we get to it. Now along with these static providers, the providers that have names like syscall, we have these meta-providers. And the meta-providers' probes aren't created until you specify them.

So when I say pid and then followed immediately by the process id, that actually creates the provider and creates the probes. There's the pid and Objective-C ones, which we saw, and then there's the profile provider, which can be used to place action statements that fire periodically. So you can see some examples. So for example, if you wanted to say, I want to fire a probe or an action statement every hundred milliseconds, you can say tick-100ms as your probe name.

And then you can associate an action script with that. So let's take a look at exploring our blackBox function. Let's get back into our example, see what we can do with it. So I'm going to say my process id number is 123. So I'm going to say pid123, blackBox, the entry point. So I'm placing a probe there. And my default action is simply to record everything. Let's start there and see if blackBox is even still being called now that we've been sitting here talking for a while.
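At this point myscript.d is about as small as a D script gets: just the probe, with no predicate and no actions, so every firing is logged with DTrace's default output. A sketch, again assuming pid 123:

    /* myscript.d */
    pid123::blackBox:entry
    {
    }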

So I'd go to the console; I type in dtrace -s with my script name, which is myscript.d. And I'd run it. And then I get all these traces out to the console. So clearly blackBox is still running. But when I hit Control-C to stop the trace, I get this message.

DTrace has dropped 53,686 events on CPU 3. So what DTrace is telling us here is that blackBox is being called a lot. And logging this out to the console is taking a long time. And in order to catch up, DTrace had to actually drop 53,000-plus of these events. Now this can happen, because the kernel is not going to simply wait for a user space process to catch up.

It goes about its thing and won't lock up the system or anything like that. So you can try using these blank probes just to see what's going on. But obviously that's way too much data. So let's look at the statistical analysis things that I was telling you about in the beginning. So let's try this script.

Now when you see an "at" sign in front of a variable (this is going to be a global variable called hits), it's called an aggregate. An aggregate's that thread-safe accumulator I was telling you about. So if blackBox is being called from multiple threads, or from multiple processes even, it won't confuse the hit count. What we are doing is simply recording the count. Now since we're not doing anything else with this aggregate, whenever I stop the script it'll print out the results. So you'll see the number of hits being accumulated here.
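A sketch of that counting script, again assuming the hypothetical pid 123:

    #pragma D option quiet          /* suppress the default per-firing output */

    pid123::blackBox:entry
    {
        @hits = count();            /* thread-safe aggregate: total number of firings */
    }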

So let's take a look. Again, I'll run the script, and we'll wait here. And we'll hit Control-c. And then out comes a number, 580,209 times. Oh, my God! Right? That's a lot of times. So blackBox is being called a lot more than I, I expected. A lot more than, than it's ever been called, and that's probably why it's 75%.

But let's find out a little bit more. So I'm going to use another aggregate, I'm going to call it keys. And that's going to take arg0, which is the first argument of the blackBox function. Every probe has these implied variables being passed in, arg0 through whatever. And in the pid provider, on the entry probe, they represent the arguments being passed into the function. OK? So now what we're doing is we're using the quantize function here on this aggregate, which is actually going to create a histogram for us of all the incoming values of key.
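A sketch of the quantize version:

    pid123::blackBox:entry
    {
        @keys = quantize(arg0);     /* power-of-two histogram of the incoming key values */
    }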

So let's take another look at it, we'll run our script. We'll take a look. Control-C, and there's our counts. This time it was 466,000 counts. And here's the distribution of the value of key. Right? So keys are addresses, void *'s. And we see that there's now some distribution around that really large number on the bottom. But there's also a huge distribution around zero, so apparently I'm passing in a lot of null values here; about 380,000 times out of 466,000 hits total, I was passing in a null.

So that might not be good. So let's see if we can find out a little bit more. We want to see now, maybe passing in that value's OK, maybe it's not. Let's take a look at whether blackBox is failing or succeeding, and we want to see the relationship there. So what we can do is place a return probe. And it's a single probe that will catch both exit points.

And you can tell the difference between the exit points, because it's being passed in as arg0. The offset from the beginning of the function where the return statement is, that's what actually gets passed in as arg0. It's one of the most common mistakes here with return probes: arg1 is actually the return value, so that's the one we're going to use. And I'm going to create another global aggregate called rc, for return code.

And we're going to key it by the return value of the function. So if it returns zero versus negative 1, we're going to see the difference immediately. And it's going to separate the counts out for us.
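A sketch of the return-probe clause being described, with rc as the aggregate name:

    pid123::blackBox:return
    {
        /* arg0 is the offset of the return instruction; arg1 is the return value */
        @rc[arg1] = count();
    }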

Let's just run that here. Same thing: run the script, Control-C, get our results. And you see at the bottom there, it returned zero 85,000 times, almost 86,000 times. And it returned negative 1 380,000 times. Let me blow that up for you. That's the exact same number of times that we were passing in zero as it returned negative 1. So there's a high chance here that what we're doing is passing in a bad value. Doesn't look like zero's a valid value for this.

Maybe it's our sentinel value, or whatever, but we're clearly doing something wrong here. So let's take a look now at if passing in a bad value is taking too long. Maybe that's slow, maybe that's OK, maybe there's a fast check that you can do and that it's-- you don't have to call it at all.

Maybe it does it itself. I don't know. We're going to find out. Now to do this, what we need to do is keep track of the time we come into a function and the time we leave. So we're going to use a global variable. But there's a couple of caveats here. And that is if, if our app is multithreaded, you know, these globals are going to become very confusing.

So in DTrace what you can do is scope these variables. If you have a prefix of self on there, what that does is create a thread-scoped variable; basically DTrace keeps track of the value for each thread, is what's going on.

And also there's a prefix that you can add, called this, which is essentially how you create a local variable. So we don't have to worry about that at all. So let's see how we would use these in our code. So I'm going to create a self variable here, which is a thread-scoped variable for the entry time.

And I'm going to save off the timestamp when we enter blackBox. And on the return I'm going to calculate the duration into a local variable, which is the timestamp at the point of return minus the timestamp we recorded. And then we're going to use an aggregate, and I'm going to store the duration.
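A sketch of the timing script being described, with a thread-scoped entry time and a clause-local duration:

    pid123::blackBox:entry
    {
        self->entry = timestamp;                        /* per-thread entry time, in nanoseconds */
    }

    pid123::blackBox:return
    /self->entry/                                       /* only if we saw the entry on this thread */
    {
        this->duration = timestamp - self->entry;       /* clause-local variable */
        @durations[arg1] = quantize(this->duration);    /* histogram of durations, keyed by return code */
        self->entry = 0;
    }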

I'm going to store a histogram of the duration that we computed for every return code. So let's take a look and see if calling-- getting a return of negative 1 is actually slower than getting a return of zero. So we'll run our script. Control-c. All right, and here's the distribution that we got. We see that negative 1, which is our return value, is centered roughly around 43,000.

Now the timestamps are measured in nanoseconds, so 43 microseconds is the central tendency for when we get a negative 1 return code. And if we compare that to the lower line here, where we get a zero, which is a successful return, it's roughly about 21 microseconds. So it takes significantly longer to fail than it does to succeed with this blackBox function. So what did we learn about this? Well, blackBox is being called a lot.

It, it's being called actually too much for us to trace, but if we wanted to trace it we could. It's being called with mostly zeros as an input. We found that out from the distributions in quantize. And we found out that zero was probably causing it to fail.

And we also found out that failing is actually slower than succeeding. So a potential fix for this would be to find out, well, first why we're calling it so much, but then maybe to simply check before we go to call this function: if it's zero, just don't call it, and save ourselves a lot of time. So all right, let me move over here to the demo machine and we're going to take a look at a running system. So you're going to need a root shell to access DTrace.

And the best way to get that is with sudo, and then type in your admin password. So now, that first command I showed you was dtrace -l. I'm going to pipe that to more and show you all the probes that are registered in the system right now.
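In the Terminal that looks something like this (assuming sudo -s as the way to get the root shell):

    sudo -s              # get a root shell; enter your admin password at the prompt
    dtrace -l | more     # list every probe registered on the system, one page at a time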

So you'll see here we have a few of the standard probes under the dtrace provider, BEGIN and END. So if you want a set of actions when the script starts, or when it ends, or when it gets an error, you can do that. lockstat is one of the providers here. Profile, there's your fbt provider. You can see a bp search record entry, for example, is an interesting probe in the kernel. Now let me show you how to limit that down a little bit further. So we'll say dtrace -l and I'm going to give it a -n.

And then I'm going to say syscall. We'll look at all of the entry probes in the syscall provider. And so here you see we have syscall itself, exit, fork, read, write, open, close. We have number 8, a syscall that's actually not implemented. But you can actually take a look at the value of 8, or whatever. And that's this one here. So if you see those, don't worry about it.

That's not a glitch, that's actually an unimplemented system call. All right, now let's take a look at all of the system calls going on in our system right now. Now the way we do that is dtrace -- let me clear the screen here -- dtrace -n, and then put a single quote. I'm going to look at syscall.

And we're going to check all of them out. We're going to look at just the entry probes. Let's take a look at that. So as you see, now we're looking at the whole running system; all these things are firing, they're printing out information in the console.
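The invocations being typed in this part of the demo are roughly:

    dtrace -l -n 'syscall:::entry'    # just list the entry probes in the syscall provider
    dtrace -n 'syscall:::entry'       # enable them all and trace every system call as it fires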

It's interesting, but you really can't do a heck of a lot with it. So let's take a look at every system call going on, and we'll break it down by process. So let's create a script here, and we'll use an aggregate called hits. And it'll be an associative array that's keyed by the executable name of the process that actually tripped that system call. And we're going to count them.
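As a one-liner, that script is roughly:

    dtrace -n 'syscall:::entry { @hits[execname] = count(); }'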

[ silence ]

So we matched 430 probes, 430 system calls being instrumented here. Hit Control-C, and there's our list. So mdworker 2, Terminal 10, WindowServer about 27. So now let's see if the predicates will limit this to just the WindowServer. So after my entry, and before my curly brackets here for the script, I'm going to do a forward slash, and I'm going to say: only when the execname is equal to WindowServer. That's interesting, hold on.

See if I can get this wider here so the -- there we go. And then we're going to end that with a forward slash. And then instead of tracking the executable, we're going to track the actual function that's being fired. So that's in a variable called probefunc.
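With the predicate and the new key, the one-liner becomes roughly:

    dtrace -n 'syscall:::entry /execname == "WindowServer"/ { @hits[probefunc] = count(); }'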

All right, let's count those up in the WindowServer. We've got a little activity here. All right, so we see mmap, munmap, madvise, sigaltstack, and sigprocmask. And we see the rough number of times each one of those is being called. All right, now let's take a look at sigprocmask.

So what I'm going to do is I'm actually going to use that name inside the probe descriptor. So I'm going to go right before entry here and say sigprocmask as my probe specification, still inside the WindowServer. But when sigprocmask gets fired, what I want to do is record the user stack, and see where it's coming from. So I'll use ustack, with the two parentheses; it's a function in DTrace.
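That one would look roughly like:

    dtrace -n 'syscall::sigprocmask:entry /execname == "WindowServer"/ { ustack(); }'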

Take a look. And there are the user stacks, every time sigprocmask is being called. Now that's interesting, but again, it's going to be a lot of data to go through. What would be really interesting to find out is: are there 2, or 3, or 4, or 5 unique call stacks that sigprocmask is being called from? So are there separate points in the code where it's being called, or is it only being called from one spot? Well, if we go and create another aggregate here, we can create another hit counter. And in the associative array we're going to use the call stack as the unique key. And we're going to say, please count those up for me.
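And keyed by the call stack instead, roughly:

    dtrace -n 'syscall::sigprocmask:entry /execname == "WindowServer"/ { @hits[ustack()] = count(); }'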

[ silence ]

So let's do a little activity here on the windows server. Wait. Control-c. All right here's our results. So it looks like here's a unique stack for sig proc mask, and see here, down here is a count of 2. Looks like a post data port is calling it 167 times.

And we have some driver calls here that are calling it. So we can actually find out quite a bit of information just by playing around a little bit from the command line. And hopefully that will get you guys started, if you guys are following along, or if you want to follow along later, and just kind of explore DTrace. It's a very safe language. It will not disrupt the kernel or crash the-- or panic the kernel.

All right, so let's move on. 'Cause I want to talk about Instruments. And I mentioned originally that you could extend the power of Instruments with DTrace. As it turns out, about half of the instruments today actually use DTrace in order to get their data. So DTrace is a powerful and very pervasive tracing system.

It's everywhere in the kernel, and you can use it to get a lot of information about your processes. Let's, so if you take a look at the file activity monitor, you simply drag it out and then double click it, you get this panel that comes down. And it's the DTrace instrument builder window.

Also, if you go to the Instrument pull-down you can create a new instrument of your own. And you'll see what's going on here now that we've described DTrace. First off, the type here is syscall, the system call, and when it hits the entry point of open, we're actually going to see that action script. Now we were using aggregates a lot in our demonstration, but aggregates don't really work well with the workflow of Instruments.

Right? Aggregates are something that you total up and then print out at the end. Whereas Instruments actually creates a timeline of what's going on in your system. So what you can do instead of using aggregates is to use DTrace to aggregate the data -- I'm sorry, use Instruments to aggregate the data for you.

And that way you get all the neat features that you would expect in Instruments. So the way you do that is on this bottom section of the panel, you can add a recording action. So every time this probe fires, aside from executing your script, it'll record -- in this example, this is the example for open, on the return probe of the open system call.

It's going to record the executable, the function, the path that was being passed in, and the return value, the FD, the file descriptor; that all gets sent to Instruments. If you want to record a user stack, you have an option on the bottom where you can set the user stack to be recorded on every probe firing and sent to Instruments.

Now if you make a mistake in your script, instruments will tell you about it. The one in red will be the one with the error, and there'll be some diagnostic text for you to fix the problem. And once you fix it you get basically what you would expect.

The instrument will record all of the data that's being passed in from the probe firings. Your sorting works, your searching works, all of your data mining stuff just works. If you want to find out more about writing custom instruments, if you type in DTrace instruments in the search panel for Xcode's help you'll get a list of documents; you want to find the one that says Creating Custom Instruments with DTrace.

It's got a very comprehensive set of examples and a description of how the instrument builder works. All right, so that's the conclusion of the first half of the session today. So let's go over what we talked about. First, DTrace allows you to add tracing into your applications dynamically. So you don't have to bring down your application. And the upside is, when you're not using it there's no cost, because it's just not there.

It's not in your app. So you can use DTrace to trace stuff out to the console, and there's also a very powerful set of aggregates, and histograms, and counters, and variables, and all that stuff that you can use to gather data that's above and beyond what tracing can normally do for you. Now, you can trace more than one app, which is what I was showing in the demo.

You can actually look across all of the different probes regardless of what the application is. And you can look at a single app. Now in the examples I was hardcoding pid123, but there's a macro, $target, I believe, which allows you to substitute that, so you can create reusable DTrace scripts.

And then finally we talked about Instruments and how you could use DTrace to extend the power of Instruments. Now if you're interested in documentation, there's a couple of docs on the slide. They all come from Sun. And they describe how to use DTrace, with a couple of DTrace examples.

Also of note is the DTrace Toolkit, which is a set of pre-canned scripts that you can use, and there's one that actually emulates the truss tool, if you guys are familiar with doing some debugging on Unix. And that's a great one to take apart and play with. So dtruss is what it would be called. And that's on Solaris systems too; if you want to play with that, dtruss would be the command to run. All right, that's it.

I'm going to invite Jim up here to talk to you about advanced debugging.

[Jim] So as you saw with the DTrace examples and Instruments, basically once you get to advanced -- so in other words, you know how to, like, run the sampler, or in the debugger you know how to step, and stuff like that.

Then basically everything that you're doing after that is you're just trying to accumulate some nice little toolkit, a bag of tricks, that's specific to your application, that knows how to find the data structures and, you know, the common problems that you have and stuff like that. So debugging is basically like all of these kinds of investigative tools: what you really want to do is get underneath the tool and figure out how you can customize it for your own particular problems.

So that's what we want to talk about today. So partly I'll talk about a couple of the flexible tools that underlie the Xcode debugger, and that you can take advantage of yourself. And then I'll also talk about a little assortment of, of stuff that we've put together for a few little specific problems. So in particular, I'm going to just show you also a couple of little new features in the Xcode debugger, which are nice, so that people have been asking for.

And some that are appropriate to these little individual specific cases. I'm going to talk about the data formatters feature in the Xcode debugger, which maybe you haven't played around with, but actually is, is pretty powerful. I'm going to-- because threads are just a big thing, we've added a few little features to make debugging threads easier.

So I'm going to talk about that. And then finally I'll dig a little bit into the GDB command language. GDB is the debugger that underlies the Xcode debugger, in the same way that DTrace is the power that underlies a bunch of the instruments in Instruments. So I'll tell you a little bit about that.

So first of all, the new features in Xcode 3.2. Some of them are enhancements of stuff we've already had. So, for instance, you know, Xcode forever has had this memory window. And there's only ever been one memory window. And so now you actually have more than one, which is like, Yay!, whatever.

[ applause ]

The next thing is attaching to a program. So again, since like Xcode 2.5 or something like that, there's been an attach menu that you could use to go and attach to a program, but there's a certain class of programs which are still pretty hard to attach to. And it's basically all the worker programs that you aren't controlling the launch of.

So how do you get the debugger to grab something when it starts up? Like if you're writing a launchd daemon, launchd is going to launch it whenever, and you know, you can't launch it outside launchd, so how do you get that? Or again, if your program spawns off little worker programs to do things, like, you know, to raise the security level or whatever.

And on the iPhone, you can launch programs in response to push notifications. Again, you know, that's just going to show up and you need to grab it. So we've added some functionality to do this. In Xcode, if you have an Xcode project, then you can open the active Executable editor, and then you'll see there's a new checkbox, which is, you know, wait for next launch or push notification.

So what you would do is you would take your project, you would set this checkbox in the Executable editor, and you'd launch the debugger. But instead of launching your executable, what it would do is just sit around and poll the system.

And then when that executable shows up it'll grab it by the throat and stop it. The polling we do pretty frequently. So if your machine's not heavily loaded we'll get to the process and stop it pretty much before any of your code gets run. Usually it's some very scary part of dyld that it stops in.

But then you could just continue on and get it to the part that you want. You won't miss anything. In GDB, if you use command-line GDB, the same facility is available. There's this -waitfor flag that we've added to the attach command, so you would use it like this.

You would say attach -waitfor MyProcess. The one thing to note, by the way, just so you don't get confused: we actually filter the process list and throw away all the processes named whatever the process is which are already running. So we'll catch the next process of that name that shows up. So if you use attach -waitfor, and we sit there, but your process is running, and you're like, What are you doing? We're waiting for the next one.

OK, so another one; this is just silly, but, you know, when you started debugging Objective-C code somebody took this stone tablet, or it was carved in race memory, or whatever, and they told you, you know, NSException raise: put a breakpoint there. You don't know why, but just put it there. And then it fires when an Objective-C exception is thrown. But then in the 2.0 runtime it changed. It was no longer NSException raise; now it was objc_exception_throw.

So somebody had to hand you another stone tablet and, you know, you put it in your desk or whatever. So we've actually added a, a menu item for that, though. Now we'll track with the, yeah...

[ applause ]

So, so the next time the crafty Objective-C guys change it to something else, provided they tell us, you know, or whatever, then we'll fix it.

[ laughter ]

OK, so this one's kind of sad in a certain sense. We actually spent a lot of time working on getting inline code to work correctly in the debugger.

But it's one of those ones where in the end what happens is kind of one of these duh things, right? So, I mean, if you have inlined code -- and this actually shows up even in your normal debugging sessions, because there are things like NSMakeRect and so on and so forth, which are always inlined, even in the -O0 code that you're using for debugging.

And before, you'd step into it, even if you didn't intend to, and stuff. So now with 3.2, if you're building with DWARF -- which everybody hopefully is building with DWARF debug information now -- then step, and next, and finish treat it just like an ordinary function. So that's kind of cool. I mean, inline frames show up in the backtraces; sorry, those slides are out of order.

So here I'm just showing that the inline frame, NSMakeRect, shows up, and we mark it as inlined. So, you know, if you wanted to know did this get inlined or not, you can figure that out. And the one little caveat is, the way that we know about the inlining is in the little table of information that tells us about files and line numbers.

So if you want to break on all the instances of where an inline function got inlined all over in your code, don't try to set the breakpoint on NSMakeRect. That'll get some of them, but it won't get all of them. But if you go to the header file that defines NSMakeRect and break on, you know, whatever it is, :12, or wherever NSMakeRect is, then we'll find actually all the instances in the program.

So that's inline code support. I told you there's some features that we added for threads, but we'll get to that when we get to threads. There are a couple of other little tools for common problems, both memory-related. One is, if you're in the modern world of the garbage-collected Objective-C 2.0 runtime, then we have a little bit of help for those sorts of problems.

Again, because you're in the brand new, shiny world of garbage collection you'll never have the standard memory problems; you know, you never access a freed pointer or anything like that any more. The main thing that happens is your program is just increasing in memory size, you know. And somehow your objects are never going away.

So that's what you want to do. You want to say, OK, well this object, you know, who's holding onto it? Why, why hasn't its memory been reclaimed? So it's obviously some other object has a reference to it and you have to just find that out. So in Xcode when you are looking either in the local variables view, or if you're looking in your source view, and you find the variable that's bad, that's not going away, then you can bring up the little pop-up, you know, control-click. And you'll get something that looks like the pop-up like this.

And now you have 2 new commands: one will print the root objects, which eventually trace down to the object you're pointing to. And the other will show you all the immediate referrers to that object. And then you can use that to figure out who you should have gotten rid of, or, you know, why that person has a reference to the thing and it's not going away. In the GDB console you can also do the same thing.

The commands are info gc-roots and info gc-references. And then for the people who are still back in the stone age and using plain malloc, you know, calling free and malloc and having to do it themselves, and all that hard work that our forefathers had to do.

We have a little help for you. So there's always been a command-line program called malloc_history. And what we've done is just moved that same functionality into the debugger, so you can immediately say, you know, What's the malloc history of a variable?, or whatever. So the way that it worked with malloc_history, and the way it works with the debugger, is there's this MallocStackLoggingNoCompact environment variable, which you set to YES. And then the malloc library will record all the malloc events, basically.

There's no UI for this in Xcode, but you can go to the console window and type in info malloc-history and give it an expression. That expression could be anything, but it's going to resolve down to an address. And then basically what this is going to return is all the times that that address was either returned from a malloc call or passed to a free call. And it will show you a user stack and all that sort of stuff. And since we're the debugger, we're smart; we can give you line numbers and all that kind of stuff.
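In practice the sequence looks roughly like this; MallocStackLoggingNoCompact has to be in the target's environment before it launches, and myPointer stands in for whatever expression you care about:

    (gdb) set env MallocStackLoggingNoCompact YES    # before running the program
    (gdb) run
    ...
    (gdb) info malloc-history myPointer              # every malloc/free event for exactly that address
    (gdb) info malloc-history -range myPointer       # ...or any event whose block contains that address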

The one caveat there: this is the exact address. Sometimes what happens is, you know, you're accessing a freed pointer. But the freed pointer was allocated inside a block, and so it's not the beginning address. And you would just want to say, tell me all the history events where this address, that this expression resolves down to, was contained in the block. So you can pass the -range argument and it will do that. Of course, that's going to potentially give you a lot more hits, and you're going to have to figure out which ones are relevant.

So if you know the address, the first one's better. OK, so the next topic I want to talk about is, is data formatters. We talked about these and introduced them a couple of years ago, but it's worth going over them again. Because they're one of those things that they either sit there statically, and you kind of don't touch them, or you just get used to playing with them all the time, and then you'll, you'll really grow to like them I think.

So the deal is, you know, you've had this local variables view or the expressions view in Xcode, you've got used to it. But there's one column, the summary column, that's over on the right hand side. And that's always had kind of magically cool information if you don't know how it comes about. Like that notification is a, is a NSNotification thingy.

And but instead of showing you just the pointer value, we're actually displaying what the name of the notification was. Or in the case of this NSRect that's sitting here, I mean, we have all the relevant information that's interesting about it, you know, sitting up there in the summary column, so you don't have to turn it down and look among its fields and stuff like that.

The point is that that's not just something, which the gods of Xcode have blessed you with for a few special shining instances. But it's actually something that you can mess around with and do yourself. And that's what data formatters are for. So basically their job is that they fill the summary column of the local variables view.

And it's helpful, because now you know if you have a huge number of ivars, but you only care about 2 for your particular debugging session, you don't have to go, you know, like turn it down, scroll, scroll, scroll, find the next one, turn it down, scroll, scroll, scroll.

You can just look at it right there without turning the thing down. Also sometimes you need to do some special formatting to make the-- one of the variables look like you like it, like you want to cast it, or something like that. You can even filter a value through an expression. So if all you care about is whether something's greater than 11, you know, you can say, Show me just greater than 11, and then you don't have to go 12 is that greater than 11, you know, which is hard some nights at 3 o'clock in the morning.

And then, and then finally sometimes you have formatting that you want to do that's special to an object you have, but you don't want to ship it with your code, because it's, I don't know, it's just useless, but it, it does some nice formatting job. There's a way to write little bundles that have custom formatting functions.

And tell the debugger about them, and the debugger will insert them into your program. We're not going to talk about that this time, but there's a sample in the sample code on developer.apple.com, the WcharDataFormatter. You can go download that and it shows you all how to do it.

It has a header file, which describes everything. And finally there is some documentation on this. If you look at the Xcode help and go to the Xcode Debugging Guide, under Viewing Variables and Memory there's a Using Data Formatters section. So the general rules: how does this work? Again, the results are displayed in the summary column; I said that a couple of times now. They key off the variable's type.

So you write one data formatter for a given type, and then every variable of that type, regardless of whether it's, you know, a sub-structure element or the primary element, will use that data formatter, but of course with the values of the particular variable in the summary column. If you have a typedef you can write another formatter with the typedef name, and the typedef formatter will get used.

You can also enter these in the summary column. So if you double-click in the summary column, then you'll see the formatter string there, and then you can put your own in or do whatever you want. And then you have to click off that row to see the summary update.

And then finally, the ones you enter that way (by double-clicking in the summary column, editing the string, and clicking off) get saved in a user file, which is this horrible long path, whatever it is. So here's an example of what a formatter string would look like, and let's just kind of take it apart piece by piece. So the first thing is you can have just random text, whatever you want, you can put it in. The second part of it is more interesting.

That's the part where you are taking the values of the variable that is of the type the formatter is for, and sticking them into the formatter string. So there's two kinds. One is a simple, just structure-reference string. So here, for instance, this variable is an NSRect, and it has an origin element. And then the origin element has an x field in it. So here we're just saying, I want the x field of the origin element, or whatever.

And those have a little percent delimiter around them. And then the other kind is more general; it's just any random expression. This is basically anything you could type in the C language in your program, or anything you would type as an expression to the print command in the GDB debugger; you can put it here. Note that you could put anything you want. This is a particularly bad example, because here I'm actually taking the variable and incrementing its value, which is going to get done every time you step, when the formatter updates.

So it's probably not something you want to put in your own code, but on the other hand if your office mate leaves his desk it might be a nice thing to do.

[ laughter ]

And you put the curly brackets around it. And the $VAR little token you see there stands for the variable, the variable whose type this formatter string is the summary string for. Whatever. Then the last thing is there's a little bit of freedom you have in what in particular you want to display out of the element that you pulled out of that variable.

The simplest one -- oh, so, note that, you know, whenever I write one of these sub-expressions like %origin.x%, that is both a value, but it also has a type, because that's, like, the type that the x value is, or whatever. So I can choose to just pull the value out if I want, in which case I would put this little :v there, and that's the default, by the way. But a cooler possibility is to actually pull the result of the summary of that sub-element out and stick it in the summary of the containing object.

And that way you can sort of hoist summaries up through a chain of types without having to repeat this summary over and over again. And you can actually get some really nice summaries built up that way. And finally, if you want to know the dynamic type of something, you can use :t. And you just append them to the value sub-element, like what's shown on the slide there.
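As an illustrative sketch of how those pieces combine for an NSRect-like type (treat this as an example of the syntax described here, not the exact built-in string):

    Origin: (%origin.x%, %origin.y%)  Size: %size%:s  MaxX: {$VAR.origin.x + $VAR.size.width}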

So this is a little abstract, but it, once you see it as you play around with it, I think it'll become clear. So we're going to do a little demo with Greg here. The first thing is, see he has, you know, some, he's stopped somewhere and, and he's looking at his code.

The first thing is that, you know, there is some that are provided for you, and you can actually look at those and play around with those, and get some examples. So again, you know, there we see he's double-clicking to see what the elements are. But maybe you, you don't like the ones that we provided. For instance, this width and this height, I mean you know their width and height, they're too long, they take up too much space.

You might want to get rid of that. So again, you can edit the ones that we have, and maybe you want to just make that shorter, because it looks better to you. And then you click off it. So notice two things happen there, one the size changed. But also notice that in the frame, that NSRect, that one changed as well. So let's look at that summary formatter. That's an example of using this :s formatter that I told you about.

So in the case of the second part of the NSRect formatter there, you know, we've put in the size, and then the :s form, which says, take the summary value of the size field, and stick it up in my summary for the NSRect. So that's why both of them changed at the same time. The one thing that might be a little curious to you is that, the, the-- we have an NSRect sitting there and it didn't change.

So why is that? Well, whenever you get into this problem (you're changing something, you think you've got it right, but it's actually not showing up), often it's because things are typedef'd to different types than you would expect them to be. So you can just show the type column. And yes, lo and behold, the NSRect does not have an NSSize, it has a CGSize. No idea why.

And so if you wanted to change the CGSize one, the NSRect one, you'd have to go and edit it separately, because they are separate types. To show another little example: suppose for whatever your problem is you're not interested in the upper left-hand corner and the size of the NSRect, but suppose you want the upper left-hand corner and the lower right-hand corner; then you'd have to do a little calculation to get that.

So this is showing you the curly bracket one. We're doing a little calculation for the second half, where we get the x2 and y2 points by taking the origin.x and adding the width to it, and the origin.y and adding the height to that. And again, that just shows you how to use $VAR and the computational side of things.
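The corner-showing formatter from the demo would read something like:

    ul=(%origin.x%, %origin.y%)  lr=({$VAR.origin.x + $VAR.size.width}, {$VAR.origin.y + $VAR.size.height})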

So then the next thing about this: of course, play around freely with the ones we've provided, because you don't actually ever get rid of them. If you have changed them, and changed them, and changed them, and you don't like them anymore, you can just go and delete what you added, and the ones that we have will come back. So you can freely play around with them, you won't lose anything. So that's kind of one reason why these are useful.

But I want to show you the second one, which is the-- where, where you have just a complicated object, and so here's an NSWindow, for instance, we have here. And it's got like all these elements. And never in your debugging problem do you care about all of them. So you're going through one particular path of debugging, you only care about a couple.

So maybe in this case the window's not coming out the right size, so I care about the size of the window. But I don't care about all those other fields, and having to turn it down and find the bounds every time is really bumming me out. So what I can do is just make a little data formatter which shows the frame.

OK, so that was nice, except that wasn't particularly helpful, because, you know, curly bracket, that wasn't helpful. And that was because I used the value of the thing. I didn't really want the value, I wanted its summary. So if I change it to the summary then, then I'll actually get the display I wanted.

The, the point really to take home is, this is for your debugging session. So you got the size right, and that, you don't care about that any more. That's right. But now something else is wrong, like I don't know, the delegate's wrong, and so, you know, it's not, who, whatever that's supposed to do. I don't know. I don't do this Objective-C stuff.

But anyway that's wrong some how. And, so you, you want to fix it. And, and so now you can put that one there. And so now every time that you're stepping through this function, you're tracing that thing. So it's basically your little playground where you can focus your attention on what aspects of a complicated large object you care about at any particular point in time. That's, that's the example, thanks Greg.

So it's one of those things that kind of sits there, and if you don't remember its percents and its curly brackets and its $VAR, you'll never use it. But then if you just do it a couple of times, it just becomes this little thing that you change all the time.

And it's a really nice little feature. OK, so the next topic I wanted to talk a little bit about: you know, in the new world, in Snow Leopard, there's our desire to take advantage of all the cores that now exist on our more modern systems, and so on and so forth. And not only your desire, by the way, but the desire on the part of Apple, is going to mean that your programs, as you move along, will start having more, and more, and more, and more threads showing up.

And, again, not just the ones that you make, because a lot of the kit underneath you is doing a lot of threading to do its tasks; so you would ask it to do something, which on Leopard would be just a call that would run in the thread that you made it on. But you'll see in Snow Leopard all of a sudden all these threads are spawned off and they're doing stuff.

So just understanding what your program is doing now, you're going to see many more threads and have to figure out what they do. So the first task then is just: what the heck are they all? And so we give a little help for that, I'll show you. And then there are also ways to get some accounting information and stuff like that, which might be useful to you.

More importantly, particularly as you generate more threaded and concurrent running of code that you're interested in, what could end up happening is that, you know, you're trying to follow a logic problem in a function that you've called, so you put a breakpoint there, and you stopped there. And then you start running along and you hit that breakpoint, which is great, and then you're stepping through to follow the logic, but the breakpoint's still sitting there.

And the other threads are also going to run that code, and they're going to hit that breakpoint, and now, you know, you're switching back and forth between one thread and another. And you lose the context of the logic you're trying to follow, and you get all confused. And then you say bad things about me, so I'd rather you didn't do that.

So we'll give you a little help for that. So first of all, identifying threads. Basically Snow Leopard added an API to set a name for a pthread. And here's what the API looks like; it's pthread_setname_np, and the np means non-portable. It's just not a required part of the POSIX standard for pthreads.

And the way that the Mac OS X implementation works is you just call it in the thread you are setting the name for, and you pass it a string, which is going to be the name. So choose some unique name, by the way, because, you know, I mean, you're getting some library and they call all their threads thread 1, thread 2, thread 3, thread 4. You call yours thread 1, thread 2; that's not going to help very much. So just choose something that you'll recognize.
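A minimal sketch in C; the thread name here is just a made-up example:

    #include <pthread.h>

    static void *worker(void *arg)
    {
        /* Name the calling thread; on Mac OS X you call this from the thread itself. */
        pthread_setname_np("com.example.myapp.render-worker");

        /* ... do the thread's work ... */
        return NULL;
    }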

Note, by the way, there's this older API on NSThread -- not that NSThread's an old class, I mean it's been around, but it's useful -- and it has a setName API. But apparently they didn't remember that they had a setName API, because they didn't set the pthread name in the setName API.

So if you have an NSThread and you've been calling NSThread setName, that's not going to help us at all; you still have to make this pthread_setname_np call, whatever. And by the way, this is useful not just for the debugger; you'll notice that if you name your threads this way, sample and all the other tools like that will also show your thread names.

So it's something that's worth doing. The other kind of named entity in the new world of Snow Leopard's multithreading is the GCD queue label. So when you make a queue in GCD, saying, I have a bunch of work, oh queue, would you please go do it for me: what you do with this dispatch_queue_create is you pass it a name. And then if there's a thread that's performing work elements for a particular queue, that thread actually gets the name of the queue that it's performing work elements for.
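A minimal sketch; the queue label is a made-up example:

    #include <dispatch/dispatch.h>

    static void startDecoding(void)
    {
        /* Threads servicing this queue show up in the debugger under this label. */
        dispatch_queue_t queue = dispatch_queue_create("com.example.myapp.image-decode", NULL);
        dispatch_async(queue, ^{
            /* ... work submitted to the queue ... */
        });
    }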

They suggest you use a reverse-DNS form, by the way, for GCD queue labels. That's not me, that's the GCD guys. But in any case, when you look in the thread pop-up in the debugger, you'll now see, in Xcode 3.2 on Snow Leopard, both the GCD queue name if a thread's doing work for that queue, and the pthread name if you set the pthread name. So now you're no longer faced with this list of, you know, thread 1 through thread 8 million.

You know, but there's names. Yeah, so that really helps.

[ applause ]

But you have to go name your threads for it to work. And by the way, this is another one that's useful. People have been asking for some, yeah, right. So it's the whole bottom row plus up arrow and down arrow basically.

[ laughter ]

Not the spacebar fortunately. There's a little more information you can get.

So, so again, and this is sort of another meta-element: the console is available at all times in the Xcode debugger, and there are times when you want to see a static list of information that you can scroll around and look at at leisure. You don't have to hold the pop-up, for instance, to see the threads and stuff like that. So, for instance, you can look at your threads in the console as well, and sometimes that's a more convenient way to look at things. The basic command in the console is info threads. And here I only have one thread, but here it's listing the threads.

The star, by the way, means that's the thread that stopped. The first element after that is the GDB thread ID; any later commands you do in the console, you would pass this ID to. The next element is the Mach port; don't worry about that. Then you will see the thread name, if you named the thread, and the last element is the current function, or it'll be the address if we don't have debug information.

So that's the way to see the whole list of your threads. And then if you want to get more information about a specific thread there's an info thread command, with no s. This one's not so great; the other one you clapped about, this one, no s, you didn't clap about. But anyway, you pass it the number. And there's a whole bunch of stuff that's new in Snow Leopard; we dug out the accounting information, scheduling policies, which you might be interested in.
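In the console that looks something like this (the thread number is just illustrative):

    (gdb) info threads      # list every thread; * marks the thread that stopped
    (gdb) info thread 2     # detailed report for GDB thread 2: accounting info,
                            # scheduling policy, dispatch queue name, and so on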

The one I want to call out is that the dispatch queue name is shown here in the console. And then one other thing, which is often really useful in the console, is to perform a report operation over all threads, and then you can look at it at leisure and compare and see things. So there's a command in the console called thread apply, which you would use for that.

So for instance, if I wanted to look at all my threads and just get the top five frames in each stack, and then, you know, run around and compare them and not have to go keep switching back and forth in the debugger UI, then I would say thread apply all backtrace 5, where 5 says to show the top five frames.

So then I see, you know, da, da, da, da, dah, and all the threads come out like that. So that, that's useful. Basically, for any command you can run, it just switches to each thread and runs the command. And instead of all, you can say like 1 through 5 or whatever.

And then after you've typed this awhile, typing thread apply all will start getting really, really old. I actually do this, because for some reason I like typing complete commands. And then my office mate always comes over, he's actually across the hall, but he comes and laughs at me. So you don't need to do this; basically GDB will recognize commands by their shortest unique string. So you don't do that, but you would do this instead. Whatever. And then Jason won't laugh at you.
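So, for example, something along these lines; the exact abbreviation shown on the slide may have differed, GDB just needs an unambiguous prefix:

    (gdb) thread apply all backtrace 5     # top five frames of every thread
    (gdb) thread apply 1 3 5 backtrace 5   # or just threads 1, 3, and 5
    (gdb) thr ap all bt 5                  # same thing, abbreviated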

So if you go to the lab don't type thread apply all or Jason will laugh at you. The last part I wanted to talk about with respect to threads is hand-scheduling the threads. So basically the debugger can't really do this for you, because if you just freeze all the other threads and run only one thread, that's going to cause deadlocks, artificial debugger-induced deadlocks, and so we really can't do that for you. There's really not enough system support for us to figure that out.

So we always let all the threads run. You know, when you step over a function, or do anything like continue, where we don't really know what's going to go on, we really have to let all the threads run. But as I said, that can cause you to break in another thread, and then you're like, ah, and you're switching back and forth, and it's very confusing.

So sometimes you would like to just stop and only run one thread. And there's a way to do that. There's a setting in the console called scheduler-locking. It's actually a variable that you would set. So the syntax looks like this: you'd say set scheduler-locking, and it has the values on or off.

On causes only the current thread to run. So all the other threads will get stopped, and the current thread will run. You can actually switch around from thread to thread with the thread pop-up, or there's a thread command in GDB you can use to switch threads.

And then that will become the current thread for these purposes. And then off causes all the threads to run again. There's no UI in Xcode for doing this, because, I mean, as you can see with the provisos, there's not a really good way for us to figure out when things go wrong.
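Roughly, the dance looks like this (thread 5 is just an example):

    (gdb) set scheduler-locking on    # from now on, only the current thread runs
    (gdb) next                        # step without the other threads interfering
    (gdb) thread 5                    # make thread 5 the current (and only running) thread
    (gdb) set scheduler-locking off   # back to letting everybody run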

Things going wrong is basically the thread that you decided to be the only thread that runs tries to take a lock, which is held by another thread, and that other thread isn't running, because you told it not to run. So that lock will never get released. And so your thread will never acquire it. So the symptom is you know you said next and then nothing happened. And it was just sitting there.

The program's not in any weird state, it's just waiting on a lock. So you can just interrupt it with Control-C or with pause, and then turn the scheduler locking off and continue. But that's not so good, because of course you've lost the thread of execution you were trying to follow. If you are smart enough yourself to know, oh, that lock is, you know, blah, blah, blah, and this other thread 5 is the thread that holds it.

And when it gets out of this function that lock will be released, then you can switch the current thread, you know, get out of that function, switch back to the one that you want, and now you're back in your debugging session. But there's just not enough information for us to do that for you, so that's why this is the advanced debugging talk. Whatever. So the last little bit I want to tell you about is some command line tricks.

Because the GDB command language, while if you use it long enough you'll hate it but love it at the same time, has some really powerful features that you can take advantage of. If you want to learn more about it, by the way, when you're in the console you can type help and it'll give you a list of things you might type more help about. You can type help plus a command name. There's an apropos command, which you can use.

You say apropos and some word and it'll find all the commands that have that word in their little help strings and tell you about them. So the ones I want to tell you about are a printf command, which is a little report-record formatter that's often useful. I want to tell you about convenience variables, because they're a little subtle, but actually quite useful. Some of the logic constructs. And finally the define command, which you can use to take some sequence of commands you like and can them for reuse.

There are docs for GDB. You can find them online. But they're also part of the Xcode help. They're a little hard to find in the Xcode help, and they keep moving them around. So the best way to find them is basically to go to the Xcode help menu, search for GDB, and then you want the document called "Debugging with GDB." So the printf command, this is just simple. It's basically the C printf command without parentheses. It's the same format as the C printf. The only thing about it is that it's done by GDB, so we're not allocating memory in your program, we're not writing to your program's standard out or anything like that.

As I say, it just looks exactly like the C printf, except no parentheses, and the output comes out just as you'd expect. That's just useful, because as you make your own commands sometimes you want to make a little report record. And this is the way that you do that.
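For instance, a hypothetical report line might look like this (the variables are stand-ins for things in your program):

    (gdb) printf "request %d: %s, %lu bytes\n", request_id, request_name, request_size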

More important are these convenience variables. So they're variables that begin with a dollar sign. You define them with the set command, so I would say, for instance, set $temp equals something. So here I have an integer, but I want to see what it looks like as a long long, so I cast it to a long long.

The value and the type of the variable you've made actually do come from the expression that you assign it. So for instance there's the ptype command in GDB that prints the type of an expression, and it really is a long long, I didn't lie to you. And finally, once you've made them they work everywhere in GDB the same way as program variables, so I can pass one to my printf, or I can, you know, do addition, or subtraction, or dereferencing, or whatever with them.
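As a concrete sketch, with my_int standing in for some integer in your program:

    (gdb) set $temp = (long long) my_int
    (gdb) ptype $temp
    type = long long
    (gdb) printf "as a long long: %lld\n", $temp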

So that's the result of that, whatever. GDB makes some for you, and this is where it gets kind of interesting; actually, going back, it's kind of boring at first. But GDB uses convenience variables for the registers, so you've seen those if you've ever wanted to print the machine registers.

They're dollar sign and then the name of the register. And there are some synthetic ones, like $pc will always be the program counter and so on. But more interestingly, when you do the print command in GDB, which is your general way of printing the value of something, it makes a convenience variable for the result, which is called dollar sign and then some incrementing number.

So maybe you've done this before: you say print something in the console, and you get this noise over on the left-hand side, and then the thing you're really looking for. But what I want to say is, that's not quite noise, because you can actually use it. So for instance if I printed my struct and then I wanted to dereference it, I don't have to print my struct again, I can just say print $1 arrow whatever field I want. And as you're digging down further and further that can save you a lot of typing.
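So a session might look like this (node and its next field are hypothetical names):

    (gdb) print node          # prints the pointer; the result is saved as $1
    (gdb) print $1->next      # result saved as $2
    (gdb) print $2->next      # result saved as $3, and so on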

Because, for instance, when I did $1 arrow something, that made $2. So I could then say print $2 arrow something else. Or if I wanted to dereference it, say print *$2 or something like that. But it's even easier than that, because the dollar sign by itself is the last value that was made.

So I could just say print *$. So you can save yourself a lot of typing that way. OK, so the last thing is the define command. So this defines procedures for reuse. It does take arguments; they're passed in as $arg0, $arg1, $arg2, and so on and so forth.

And the number of arguments is passed in $argc. There's a document command you can use to document them so you don't forget what they do. Help user will show you all the ones that you've defined when you actually do forget them, which you probably will; the first line of your documentation also shows up there.

And finally, help plus the name of the command you defined will show you the whole documentation. If you put commands in a file called .gdbinit in your home directory, it gets loaded at startup. Or you can say source and that will source in a file of commands. So let me finish up with a little example.

So here's, you know, one of those typical jobs which is a real pain in the neck in a GUI debugger. Suppose I have a linked list, the standard linked list: I have a structure that has a pointer to its own type and then some data. And I have 500 of them.
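Say the list looks something like this; the names are made up for this example:

    struct ListNode {
        struct ListNode *next;   // pointer to the next element
        int              value;  // the data we want to match against
    };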

And I want to find some of the elements out of them. Now in a GUI debugger what you're going to do is turn down, turn down, turn down, scroll to the right, turn down, turn down, turn down, scroll to the right, and you'll end up going crazy. But this is the type of task that's perfect for the console. So our task is to dump the elements matching some input value, and make it into a user command.

The first argument will be the list head, so this is my list head in this example. And the second argument is the match value. So here's how it works. Define: you say define, you say the name of the command you're defining, and then you end it with end. So that's the bits in yellow. Then I have my arguments; as I said, they come in as $arg0, $arg1.

There's a while, which is just a simple, you know, it takes an expression and keeps going as long as the expression's true kind of thing. So that's what I'm using to iterate through the elements. And the only thing is you've got to remember to increment your element counter. You'll forget to do this the first five times, and then it won't do anything but print out the first element over and over again.

And like, ugh! So, anyway, whatever. And then finally there's a simple if; again, it's basically if and then some predicate, and it runs if the predicate's true. There's an else, but there's no else-if, because we're not C after all. And then finally I get to the meat of it, what I'm going to do.

I told you there's this nice little report printer, the printf thing. So I'm using my little report printer. But then it's also really useful to print the actual element when you're going through a structure, because that print command is going to create a convenience variable for you.

And then as you look through your report, when the report shows you that the one 500 elements down is the one that you're actually interested in, you don't have to go back up and go next, next, next, next, next, next, next, next to get down to it, right? You just print the convenience variable.

Remember to document it, because I assure you you will forget what they do. And so here's just an example of running it. First of all I asked for the help, because otherwise I'd forget how to use it. And then I actually use it, and it prints stuff out. And then if, say, the second one's the one I'm interested in, now I can say print *$22 and that prints me that stuff.
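Putting that together, a sketch of a command along those lines, using the hypothetical ListNode from before (the code on the actual slide may differ), that you could put in your .gdbinit or a sourced file:

    define dump_matching
      set $node = $arg0
      set $i = 0
      while $node != 0
        if $node->value == $arg1
          printf "element %d: ", $i
          print $node
        end
        set $node = $node->next
        set $i = $i + 1
      end
    end

    document dump_matching
    Walk a ListNode list and print each element whose value field matches.
    Usage: dump_matching <list-head pointer> <match value>
    end

Then in the console, help dump_matching shows that documentation, and something like dump_matching my_list 42 prints the matching elements, each with its own convenience variable you can dig into afterward.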

So, that's just how to use that example. So in summary what we did was, we showed you some wonderful new Xcode features, learned how to use the data formatters, which again, just go play with them, because you need to get them under your fingers, but then they're really useful. Showed you a little bit how to handle the forest of threads, which you're going to be facing.

And finally a little brief intro into the GDB command language. If you want more information, Michael Jurewitz is the Dev Tools Evangelist, so email him and he'll get it to us. There is this Debugging with GDB documentation, and then there's a whole debugging section in the Xcode help.