WWDC14 • Session 410

Advanced Swift Debugging in LLDB

Tools • iOS, OS X • 47:35

Explore LLDB's powerful features that help you more quickly debug your Swift code. Learn about LLDB's support for protocols, generics, optionals, and mixed-language source code.

Speaker: Enrico Granata

Unlisted on Apple Developer site

Transcript


Hello, everyone.

[ Applause ]

How's it going? I heard the bash was pretty amazing last night, and I'm sure a lot of you are hungover, so I'll speak really softly to soothe it. Nah, just kidding. There's no mercy here. Let's get started right away. We have a lot of great content for you this morning.

You may have heard that Apple released a new programming language this week. It's called Swift, apparently, and it's the new language for the Cocoa and Cocoa Touch platforms. It's a language we totally love and it feels just awesome. And what's even better is that the tools you know and love, the Xcode you know and love, the source editor you know and love, they all just feel great, and the debugger you know and love, LLDB, also feels just great.

The thing is, you've heard about tools that can help you explore Swift. One of them, one that is not usually thought of as a tool for exploration, is the debugger. The debugger has the unique property that it can help you explore in the context of your application.

Most of you probably do have apps, and those were written in Objective-C, and you might want to start adding Swift features. So, what you can do is you can write some Swift inside your application, and you can use the debugger to help you step through your code, look at your data, and figure out how your fancy new Swift features are interacting with your existing code base. And of course, if it ever happened that there were bugs in your code, you could actually use the debugger to be productive and fix them. We have a lot of stuff to cover.

We're going to talk about some Swift types in LLDB. We're going to talk about optional types, protocols and generics. These are all categories of types that Swift introduces. What can you expect when you're debugging your code and you try to use this? We're going to talk about the mix and match situation where you have some Objective-C code and some Swift code that are interoperating together and you have to debug the result of that.

We're going to cover stepping. We're going to talk about data formatters and how you can expect data formatters to work in Swift. And we're going to talk about name uniqueness: how Swift solves name clashes in your code between different frameworks and libraries, and from there, we're going to see how that very same feature also helps make debugging awesome.

Optional types. Optional types introduce a level of indirection. Is there something inside the optional or is there not? The way I like to think of optional types is as a box. I've got a box and it says there's a string here. But I have to actually open the box to look inside and see: there's a string here, or actually there's nothing here. In code, opening the box is something you have to do explicitly, and it's called unwrapping the optional.

LLDB helps you out. When you're debugging, it will implicitly, automatically unwrap the optional, open the box for you, and show you the contents. And if there are no contents, it will just say nil. How can we expect that to look? Let's say we're writing code and we create a bunch of variables. We create a native Swift string optional. We create an NSRect, a bridged C struct, as an optional. And we create an NSURL optional, and we choose to not put anything in the NSURL.
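
(For reference, here is a rough Swift sketch of the three variables being described; the names and values are illustrative, not the exact slide content.)

    import Foundation

    var greeting: String? = "hello, WWDC"                            // native Swift string optional
    var box: NSRect? = NSRect(x: 0, y: 0, width: 100, height: 50)    // bridged C struct, wrapped in an optional
    var site: NSURL? = nil                                           // deliberately left empty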

The variable views will transparently unwrap for you. Your string will show just the contents of the string literal you put in the optional. The box is open: oh, there's a string. LLDB will show you the string. The same thing is true for the NSRect. Oh, there's actually a rectangle in there. I'll tell you about its origin and its size. And since the NSURL happens to be nil, LLDB will just say nil.

There's a point when this situation gets a little trickier. Since optionals are like boxes, I can put boxes into boxes. It probably happened to all of us at some point. We order something online and the shipping company sends us a really big, bulky box. And then we open the really big, bulky box and there's a smaller box inside. And then there's another smaller box inside.

And then eventually, for all that packaging, all we ordered is a tiny little thing like a clicker. In Swift, you can do a similar thing by having nested optionals. In this example, we have an optional of an optional, and what we say is that in the outermost optional, the big box, we're actually putting a smaller box, but the smaller box is empty.
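
(A minimal sketch of that nested case: the inner optional is empty, and it sits inside an outer optional. The names are illustrative.)

    let inner: String? = nil       // the smaller box, empty
    let nested: String?? = inner   // the big box, with the empty smaller box inside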

There's a fundamental tension at work here. If I am the debugger, I have to decide what story to tell. I have to decide if I want to tell you that there are two boxes here and one of them is full and one of them is empty, or I can just look inside the whole layering of boxes and be the smart guy that tells you, eh, you know what? There's nothing here. Just forget it. Just don't bother with it. By default, LLDB chooses to be that smart guy. It looks inside all the levels of boxes and it tells you, you know what? I looked. There's really nothing here. There's no string at the end of the day.

But in some cases, I may actually want to know that there's a box within a box and that the smaller box is the one that's empty. Maybe it matters for my API contract that that's the way things are. For those cases, what you want to be using is the raw display mode.

Let me give you a little background here. By default, LLDB has a feature called data formatters. The data formatter feature is used throughout the debugger when you're looking at data, to present it no-frills: just give me the data that matters in my current situation. But sometimes you need to actually see the underlying truth without the debugger trying to be smart about what to show you.

In those situations, what you want to use is this raw display mode. At the LLDB console, the raw display, the "show me the real guts of my object in memory without formatting it in an intelligent way," is invoked with the --raw, or for short -R, option to the expression command or the frame variable command.

Some of you have probably not used the frame variable command before. It's a little bit of useful debugger trivia. There are some cases where maybe you're debugging a really, really tricky situation, and you're trying to disturb the state of your app as little as possible while you try to figure out this really weird situation.

In those cases, you want to look at your data but you don't want to be running code that could change the state of your app as you look at data. You can use the frame variable command to say, "Show me one variable. Show me a bunch of variables. Show me all my locals" without having to execute code.

And that is the frame variable command. If you also pass it the -R option, you get the raw display. You get things for what they really are under the covers. And if we do that to our big box with a smaller box with nothing inside, we see that the first thing LLDB tells us is that, "Yeah, there is a big box and I see that there's something in there." That's what that Some means. Now let me open this big box and see exactly what is in there, and we see that there is a smaller box with nothing inside. But it doesn't stop here.
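
(As a rough sketch, assuming the nested variable from the sketch above, the contrast at the console is between the default, formatted view and the raw view.)

    (lldb) frame variable nested
    (lldb) frame variable -R nested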

We told the Swift compiler that we wanted a box that would fit a string, and the Swift compiler made us a box that would fit a string. If there was a string here, we would pretty much see that. We would see the low-level, no-frills representation of a Swift string. But there's no string in this case. So, everything in the storage that the compiler reserved for the string is zeroed out because there actually is nothing.

We've been talking about optional types, but we could also talk more in general about types. We could ask ourselves the question, when we talk about a type, what are we talking about? And that's a deep, philosophical question. That's a question with potentially a lot of answers, and I'm sure we would all like to spend the rest of the session going back and forth exchanging definitions of type, or you could just trust me for a moment, I could give you a definition that works in the context of the following slides, and we could keep moving.

Let's just do that, shall we? I'm going to go with the idea that a type is a classification that tells me, given some data, how can I expect that data to be represented? How can I expect to be interacting with that data? In a sense, I'm going to talk about a type as if it were a hat that a piece of data can wear. One piece of data can wear a little hat saying "I'm an int," another piece of data can say, "I'm a string." And the fancy guy down the hall can say, "I'm a UIView, I'm really pretty, look at me."

The interesting thing is, unlike people's hats, data can have multiple types. The same piece of data can wear multiple hats at the same time. How is that possible? There's a number of ways to look at that. One of them, the one that is actually interesting in the context of language runtimes and debuggers, is the static versus dynamic type distinction.

We're all pretty familiar I assume with the concept of declaring a variable. We've all done that at one point or another. One of the things we do when we declare a variable is give it a type, whether we do it explicitly like in C or Objective-C or we let the Swift compiler infer that for us. We declare a variable and that variable ends up having a type.

What does that type do in the declaration? In a sense, that type is telling the compiler to keep us, the code writers, honest. When I tell the compiler that thing is an AnyObject, I'm, in a sense, telling the compiler, "Please make sure that whenever I use that object, I play by the rules of AnyObject." And as long as I do that, the compiler will be happy, and if I break the contract I asked the compiler to keep me honest about, then the compiler will complain.

At runtime, however, things become a little different. Let's say I want to get the hash code for that object, an NSURL. There are a lot of different ways to get the hash code for something. How does the system know that when I say "url.hash", the implementation that I expect is the one that will get called? How does that work? Well, it can't rely on the fact that that's an AnyObject, because an AnyObject could potentially be almost anything. At that point, I could just as well choose randomly.

What happens is there's a reliance on the runtime type of the object. The system looks at the type that that object has while my code is running at that moment, and that's called the runtime dynamic type, and it uses that information to decide which hash gets called. That's the magic of a little mechanism called dynamic dispatch.

So, we're here, and we have our little URL object, and we're trying to call hash on it. We said there's a hat on our URL object that says "I'm a URL". It turns out, that's sort of true. That object has ivars, of course, but it also has type information. For those of you that use Objective-C, that would be the isa. The isa pointer for that object would be the type information.

One of the things that the type information tells us is which methods this object's type implements. And in this case, one of the ones that NSURL implements, the example on the slide, is hash. So, we found it. We know which hash to call, and we're done. Dynamic dispatch actually works. What if we're trying to call something that is not in the list of methods that this object's type implements? Well, we could try asking the base class.

If something has the NSURL hat, it probably also has the NSObject hat, and that means we can say, eh, that didn't work for you as an NSURL, maybe that will work for you as an NSObject. And that's what happens in dynamic dispatch. We go to the base type, we try to find a method there, and if that succeeds, then we found it. We can tell the method, "Hey, please do your thing for me. Thank you." This same concept is also interesting in the context of debugging. Let's say we have a code example like this.

We have a base class, we have a derived class, which adds some information, and we have a method. We have a function that takes an object of the base class. We're telling the compiler in that function, "Keep me honest and make sure I only do things that are okay for me to do with the base class type." But I can call it with an object of the derived class, can I not? That's perfectly okay.

When I hit my breakpoint, the compiler has to keep me honest, but the debugger doesn't have to keep me that honest. Actually, the very opposite. I want the debugger to tell me as much information as possible about that argument, x. I want the debugger to tell me the dynamic type of x, because on that dynamic type hinges the fact that I could have more instance data, that I could have changes in behavior. And indeed, that is exactly what LLDB does by default. It shows you the dynamic type in your variables view.
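
(A minimal sketch of the kind of code being described; the class and function names are assumptions, not the exact slide content.)

    class Base {
        var value = 0
    }

    class Derived: Base {
        var extra = "more instance data"    // only the dynamic type knows about this
    }

    func inspect(x: Base) {
        // Set a breakpoint here and pass in a Derived instance:
        // the variables view shows x with its dynamic type, Derived.
        print(x)
    }

    inspect(x: Derived())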

Similar things apply to protocols. In Swift, protocols are types. That means a number of things. Among them, it means that I can declare my variables as being of some protocol type. It means that I can declare functions taking their arguments of protocol type, or returning objects of protocol type.

By design, objects of protocol type are limited. They're constrained. They may only let you play by the rules of the protocol. That's the whole point. I want to make sure that I only do what's OK to do on the protocol type. But again, when I'm debugging, I want to see the full truth.

I actually want to see my implementing object, and that's what LLDB will show you. Let's look at an example. Let's say I'm writing an app for a zoo system and I'm prototyping things, so I have a bunch of critters here. I have a cat and a dog. I don't have a dog collar yet because it's a prototype but I'll get there.

I have a function that takes one of my creatures and asks it, "Could you please speak your voice for me?" We can hit a breakpoint there. We're in a similar situation as before. We declared something. We gave something a static type that is somehow abstract compared to the real thing that we're probably passing at run time, and LLDB knows to figure out the dynamic type information on our behalf and show us that even though we said we wanted just any creature, in that specific moment, while our code is executing, what we got here is a puppy. And a very happy puppy, for that matter.
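
(A minimal sketch of the zoo prototype being described; the names are assumptions.)

    protocol Creature {
        func speak() -> String
    }

    class Cat: Creature {
        func speak() -> String { return "meow" }
    }

    class Dog: Creature {
        func speak() -> String { return "woof" }
    }

    func makeNoise(critter: Creature) {
        // Breakpoint here: even though critter is statically just a Creature,
        // LLDB's variables view resolves its dynamic type (here, Dog).
        print(critter.speak())
    }

    makeNoise(critter: Dog())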

For those of you that like the LLDB console, you may be tempted to try to reproduce this result. Be careful. By default, if you just ask the debugger, "Can you please show me this variable of protocol type?" the result you get may be a little disappointing. It will probably look like that.

What's going on? What is that? That doesn't look like a happy puppy at all. What is happening is you're seeing the static type. We mentioned that the protocol is somehow limiting the object by design. It wants, at the same time, to make sure that you only do things that are declared in the protocol, but also that you get dynamic dispatching of those operations to the real object that implements them. The result of that is what is here on the screen. What you want to do is you want to tell LLDB, "Please resolve the dynamic type for me." The way to do that is with the -d flag to the expression or frame variable command.
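
(A rough console sketch, using the critter variable from the sketch above; run-target is one of the two dynamic-type settings discussed next.)

    (lldb) frame variable critter
    (lldb) frame variable -d run-target critter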

That lets the debugger resolve dynamic types. Now, there are two ways in which you can ask the debugger to resolve dynamic types. There's a less restrictive and a more restrictive way. Sometimes, in order to figure out the dynamic type of your objects, the debugger might decide that it's best to run some code under the covers, to go ask the language runtime, "Can you help me out here?" In that most liberal setting, the run-target setting, you tell the debugger, "OK, you can go run some code for me. I don't think there's going to be any problem if you do that.

Just let me know the dynamic type when you're done, please." The other setting is no-run-target. In no-run-target mode, you're telling the debugger, "I'd really like to know about the dynamic type of this thing, but I prefer you not to run any code that might interfere with my process.

If it turns out that you have to do that, then just don't tell me about the dynamic type, that's OK. I'll understand."

Generics work in a similar way. The way they're implemented, the information to resolve the type will be passed along with your data. You will call a generic function with an int, and your int data will be passed to the function along with metadata that tells the language, "This is an int you're dealing with." LLDB can use that same information to reconstruct the meaning of your code.

What does that look like? Let's say we have a protocol for producing arbitrary things, and then we have a concrete class that conforms to that protocol and produces ints. And then we have a function that says, "I can accept any producer of things as long as what they produce is int." If I hit a breakpoint, I expect to see my generics resolved.
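
(A minimal sketch of that shape of code, written in current Swift syntax; the protocol, class, and function names are assumptions, not the exact slide content.)

    protocol Producer {
        associatedtype Thing
        func produce() -> Thing
    }

    class IntProducer: Producer {
        func produce() -> Int { return 42 }
    }

    // Accepts any producer of things, as long as what it produces is an Int.
    func consume<P: Producer>(producer: P) where P.Thing == Int {
        let value = producer.produce()    // breakpoint here: P resolves to IntProducer
        print(value)
    }

    consume(producer: IntProducer())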

That is indeed what happens. But how does it work? Well, LLDB looks at your function and it realizes that your function takes a generic argument, P. So, LLDB has to look for type information to resolve P. And it finds it. When you debug generic code, you're going to see a lot of these special variables with $ and .type in their names. Those special variables carry the generic type information. They're the Swift object metadata for your generic type. Armed with that knowledge, when LLDB sees the argument of type P, it knows to actually use the generic type information to resolve it to its actual dynamic type.

We talked about a few rules about how protocols and generics behave in Swift. Those are general rules and for the most part, they apply to debug builds. A debug build of your code is a very, very, very literal translation of the code you just wrote into native executable code.

The fact that it's a very literal translation is actually a good thing for debuggability, and that's why they're called debug builds. If my code is translated literally, as I'm debugging through it, it's really easy for me to see the correspondence: the evolution of the machine code execution matches the evolution of the source code execution. It's really easy for the debugger to maintain that correspondence between what you wrote and what is actually going on on the bare metal, on the hardware.

In an optimized build of your code, while maintaining the same semantics, the compiler is actually free to shuffle things a little bit behind your back, and that means that the literal evolution of machine code will not correspond anymore to the same sequenced evolution of your source code. There will be skipped steps, there will be jumps, and data that was supposed to be there will not be there.

As a result of that, the first rule of debugging optimized code is that you don't. I'm sure it's a lesson some of you in the audience have had to learn. You probably have apps and you probably get bugs from those apps very, very rarely, I'm sure. But sometimes it happens, and the first thing you should do when you get one of those rare, incoming bugs is reproduce it in a debug build of your app. Only if that doesn't work, then, sorry, tough luck. You're going to have to debug optimized code. All the usual caveats apply to Swift, and there are a couple of new ones that are specific to things we covered in the previous slides.

While in general type metadata is passed along, in optimized builds the Swift compiler is free to specialize your generic functions for specific types. Also, if the compiler can understand what's going on with protocols and the concrete types that implement them, it's free to do devirtualization: to skip dynamic dispatch and call directly into the implementing object.

Objective-C isn't really going anywhere. Some of you, we said, already have apps and it's very likely that those are written in Objective-C. But, even if you start a brand new Swift app for the first time today after this session, you're going to use Cocoa or Cocoa Touch. You're going to import Foundation, import UIKit. Those frameworks are written in Objective-C.

That means wherever you look around, there's going to be Objective-C in the picture, and you're going to have to deal with debugging mixed Swift and Objective-C situations. What can you expect when that happens? What can you expect to see in the Xcode variables view? What can you expect as you try to evaluate expressions in the LLDB console? What can you expect when you try to PO your objects? The variables view goes by what's called the most native experience. We'll show you data in the language in which the type was first written.

In this case, we see a Swift string and an NSString side by side, and the Swift string is shown as a Swift string literal, as you would type it in Swift source code. The Objective-C string literal is shown as an Objective-C string literal, as you would type that same thing in Objective-C source code.

In all cases, data formatters will apply. If I'm evaluating expressions, however, things become a little more strongly separated. Expressions see two separate worlds. Objects that exist in Swift frames are all usable by Swift expressions, and the same is true for objects in Objective-C code frames. Your result variables get two separate namespaces.

A little background on that. When you type an LLDB expression command, the result of that expression is stored away in a debugger-generated persistent variable, which you're very welcome to reuse in subsequent expressions. The results of your Objective-C expressions will get stored in variables named $0, $1, $2, you get the idea, and the results of your Swift expressions will be stored in variables named $R0, $R1, and you get the idea.

Let's see an example of how this whole system works. We're stopped in a Cocoa frame. The f command tells us the frame where we're currently stopped. We type an Objective-C expression, because we're in an Objective-C frame; the expression is just self. And we get a variable $0 that stores away self. Now, we step around a little bit and we land in a Swift frame. Now, we'd like to try and use that $0 persistent variable, and we'd like to write an Objective-C expression that involves it.

That's not going to fly so well. Since we're in a Swift frame, the Swift compiler is trying to compile your expression with Swift syntax. But that's not Swift syntax, that's Objective-C syntax. And so the compiler gets really unhappy, and it mentions things like, "Anonymous closure argument not contained in a closure." OK, I must be doing something wrong here. Well, what's going on is you're trying to use the other language. And there's a way for you to do that, but you have to tell LLDB, "Don't automatically infer the language of my expressions from the language of the frame I stopped in.

Use the language I tell you to use." In this example, we're using the -l, or --language, flag of the expression command. And we're telling LLDB, "Use the Objective-C++ expression evaluator. Use the Clang compiler that is inside of you to actually parse that expression." And then that works. But there's a caveat. There's always a caveat. Your locals will not be available. Since you changed your language, as we said before, locals are not available.
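
(As a rough sketch, assuming a $0 persistent variable created earlier in an Objective-C frame; depending on the LLDB version, the language name may be spelled objc++ or objective-c++.)

    (lldb) expression -l objc++ -- [$0 description]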

PO is, in a way, similar to the expression command, but once it gets the result of your expression, it goes back to that most native experience that the variables view lives by. Swift objects will display using data formatters. Objective-C objects will display using their description method, much like they did before Xcode 6.

That can get funny real quick. I can have a Swift class that inherits from NSObject, and I can actually override description for that class in Swift. But if I try to PO it in LLDB, LLDB will not even look at that description. LLDB will use data formatters, and that's what I'll get, because that's a Swift object.
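
(A minimal sketch of such a class; the name and the description text are illustrative.)

    import Foundation

    class Card: NSObject {
        override var description: String {
            return "a handwritten Objective-C style description"
        }
    }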

What if I actually wanted to use my description? What if I actually want to see the Objective-C side of things? It turns out there is a way. Where there's a will, there's always a way. I can start from there, and I guess I have to write an expression, and I guess I have to write an Objective-C expression, since I'm trying to get an Objective-C behavior.

Skimming the help for PO and expression, I can discover that the PO behavior is actually triggered by a flag to the expression command, the -O flag, for object description. And so I can guess that I need to write an Objective-C expression that gets an object's description. But now I've changed language. Now I can't use my object local anymore. I can resort to using its address. I know it lives somewhere in memory, and PO told me where it lives in memory. I can use that information to go across the language barrier and bring my object along with me.

But I'm not using my local's type information anymore, so that's just a number. For all Clang knows, I'm asking it, "Can you please show me that number?" I need to tell the compiler that what I actually want to see is not the number 0x000-something. I want to see the object at that location in memory. The simplest way to get there? Just cast it, and after all this magic, your result shows up, just like that.
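
(A rough sketch of that final step; the address is purely illustrative, standing in for whatever PO printed for your object, and the cast to id is what tells Clang to treat the number as an object pointer.)

    (lldb) expression -l objc++ -O -- (id)0x0000000101234560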

Thank you [applause]. Thank you. Let's very quickly step through a couple of stepping scenarios: protocols and closures. Let's say I set a breakpoint right where I'm trying to use one of my creature objects, and I step in, because I actually want to see the implementation of that code and step through it and see what's going on.

It turns out that that just works. LLDB lands right where you would expect. But there's one extra frame on the stack. There's a frame called protocol witness for Creature.speak on the stack. That frame is the protocol dynamic dispatch frame. It's the code that the Swift runtime uses, between where my code stops and where the called code starts executing, to actually perform dynamic dispatch. LLDB automatically steps through it, right into my code. And if I step out, the same magic happens in reverse: the protocol witness frame just disappears and I get right back into my code transparently. And I have more good news for you.

You can set breakpoints inside closures. You can expect LLDB to resolve your breakpoints inside your closure, even if it's an inline anonymous closure. And that's what you'll see in the stack. You'll see that since your closure is inline and anonymous, it's called closure #1 in the calling function. You can also expect to see your locals. Even those, yes, even those $ variables that are automatically generated by the compiler and you never declared. Good news all over the board. Next topic.

Now, this topic is really dear to my heart because that's what I usually work on, data formatters. Data formatters are a way in LLDB to improve the way your data is shown, to hide implementation details and only focus on the core things that matter to you when you're debugging.

Much like we do for C++ and Objective-C, we automatically format types in the Swift library. You don't have to worry about that. That will happen automatically for you. But the good news is that the mechanism is pluggable. This is covered in great detail in last year's session, which you're welcome to watch online or on the LLDB website. We'll just quickly go through an example to show that you can roll your own Swift formatters, much like you could for C++ and Objective-C. Let's say we have a struct that represents a person's address, and we try to PO my address card as represented in the struct.
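
(A minimal sketch of such a struct; the field names and sample values are assumptions.)

    struct Address {
        var name: String
        var city: String
        var state: String
        var zip: String
    }

    let home = Address(name: "Jane Appleseed", city: "Cupertino", state: "CA", zip: "95014")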

That's not a horrible display, but it looks nothing like an address would look, right? I want to make this more envelope-like. I want this to look a little bit more like I was writing my address on an envelope. I can do that with LLDB's type summary add command.

The "type summary add" command is that LLDB's command to say, "When you're showing me a variable of this type, here's the at-a-glance information I'd like to see represented." And so we can tell LLDB, "You should use the variable's name. You should use the name of the person. You should use the city and separate those two by new lines since you're at it.

Now, put another newline in there for me, will you? And write the zip code. And then, after a comma and a space, could you please put the state." This is the U.S. address format, basically. And we're saying, "Use that for the address object." Now, when we PO it again, we get something that looks a lot more like an actual address. Data formatters: mission accomplished.
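
(As a rough sketch, assuming the struct fields above and an app module named MyApp; the exact summary-string escaping may vary between LLDB versions.)

    (lldb) type summary add MyApp.Address --summary-string "${var.name}\n${var.city}\n${var.zip}, ${var.state}"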

I told you there are always caveats. There are a few more. When you actually tell LLDB to bind this formatter to this type, you have to use the fully qualified name. That includes the name of the module. We'll talk about that in a little bit. If you're writing Python formatters, you want to use SBValue.GetSummary(). You want to ask objects for their summary.

Even for things that in C or Objective-C look like basic types that have a value, like an int or a float, in Swift there's a little more intricacy going on under the hood. So, what you want to ask for is the object's summary. Caveat to the caveat: except for enums. When you have a Swift enum and you want to figure out which case is selected, you ask for the value.

Let's talk about name uniqueness. Let's say you guys are writing an awesome Objective-C app, and there's a really good framework that would help make you so much more productive. It's Foo.framework, of course. Foo.framework has developers who have really good taste in class naming. Their taste in class naming is so good that they came up with a really nice class name.

Unfortunately, these guys have too good a taste for their own good. A little while later, the developers of another great framework, Bar.framework, came up with the same super-nice class name. Now, the result of this is not nice at all. The result of this is undefined. There are two frameworks, each with a class with the same name, trying to coexist in the same app at the same time. That's not nice. That's undefined. You don't get to choose which class actually gets loaded. In Swift, that's gone [applause]. Thank you.

Swift provides uniqueness among function overloads and among classes in different frameworks. The way to access all this goodness is a feature called mangled names. Some of you may come from C++ and may be familiar with that already. Let's talk about it. There's two guys and they're both writing Swift code. They don't know about each other but they both think that MyClass is the best name ever for a class. I don't agree with that but that's their choice. They want to submit their code to the Swift Compiler. The first guy goes on and does it.

As a result, he gets a compiled version of his code where his class is actually called something like Module1.MyClass. We'll get back to that in a second. When the second guy does the same thing and compiles, his class actually gets called something like Module2.MyClass. Now they don't clash anymore: Module1.MyClass, Module2.MyClass.

There's a little price to pay. Now, if I actually look at the screen and see what the name of the class became at link time, it's called _TtC7Module17MyClass. And that's just for declaring a class. Now, the world is actually a scary place sometimes. What if you were out there in the wild, roaming through dark streets, and out of a little alley a mangled name came right at you?

That could happen while you're, you know, just sitting there in Interface Builder doing things and, oh, there's a mangled name there. Or, even worse, your app just crashed, and now you're looking at a crash reporter window. You're also getting mangled names in there. It doesn't get much worse than that, except swift-demangle comes to the rescue. swift-demangle is a little tool that ships with Xcode that lets you pass as input on the command line one or more Swift mangled names, and it magically provides you the demangled version. So, fear no more encounters with mangled names in dark alleys.
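
(A rough command-line sketch, using the mangled name from the example above; swift-demangle prints the human-readable form, Module1.MyClass.)

    $ xcrun swift-demangle _TtC7Module17MyClass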

This is the magic of modules. This is the magic of modules in the context of name uniqueness, of avoiding clashes. Modules can do a lot more for us. Modules actually make debugging a lot more awesome. Why? How? Let's look at it. I have source code for my app written in Swift and I gave it to the Swift Compiler.

The output of that process is an app and module information. Why is that so important, the module information? Well, it turns out that much like it contains a copy of Clang, LLDB contains a copy of the Swift Compiler inside, and that's used, of course, as part of expression evaluation for Swift. But, there's a little more to the story.

Now, when I try to debug that app, the copy of the compiler will actually be able to infer, to ingest the module information for the compiled app. Why is that so important? Let me give you a little perspective on what happens when you compile an app for debugging and then you try to debug it.

When you tell the compiler, any compiler, let's go with Clang in this example. When you tell Clang, "Please compile my app for debugging", what Clang does is it ingests your code as usual. It understands your code and gets a mental model of it in some sense. Then you tell Clang, "Give me information to help me debug my app." What that process does is it generates DWARF. DWARF is a format used specifically for containing debug information. So, Clang has an understanding of the type system of your app.

It takes that understanding, it translates it into a different format, DWARF. The debugger, LLDB, then ingests that DWARF information and it has to recreate an understanding of the type system of your app. The way LLDB does that is it generates Clang types out of DWARF. Hold on a second. So what you're telling me here is basically that Clang generates Clang types out of my source code then it translates that into DWARF. Then, LLDB consumes the DWARF and generates Clang types back.

Why? Wouldn't it be really nice if LLDB could directly understand the compiler's notion of types? Wouldn't it be great if we didn't have the intermediate steps: the compiler parses your source code, understands it, creates its own representation of the types in your app, and the debugger gets to use that same representation? That certainly seems nice. It certainly seems nice for us who have to actually write the debugger. We skip one step. We can directly use the compiler's notion of the truth. It's also really nice for you, the users of the debugger.

The reason, well, there are a couple of reasons. The obvious one is that if we could do that, there would be no potential for loss of information in the translation process. Instead of going from source, to Clang types, to DWARF, and back to Clang types, we would just go from source to types: one less step where information can get lost.

That is what happens in Swift. The Swift Compiler generates a module, which is the compiler's understanding of the truth of your program at the time it was being compiled. LLDB's copy of the Swift Compiler ingests that module information and can use it to reproduce the type system that the compiler was seeing at the time your program was being compiled. There's no loss of information, and there's no need for the intermediate step of re-creating the types back from DWARF. There's one more advantage.

Some of you have written C++ code, and you've probably used generics. I'm sure you've run into this situation: you try to use some generic function that you didn't actually use in your source program, and now LLDB's complaining, "You're trying to use this function with this really weird name that is not present in the target." It turns out that won't happen in Swift. Since generics are actually types that the compiler understands natively, and since we have the compiler's very own understanding of the truth, we now get every type and every function through module information, even those you did not use in your source code, and even generic ones.

That was a lot of ground covered today. If you remember one thing from this session, well, there are always caveats. But no, that's not the thing I want you to remember. You can choose your language, and LLDB will be there every step of the way with helpful investigation tools. Whether you're using new Swift features or you're debugging your existing code base, the helpful features in the debugger will be there to help you make your app awesome.

We talked about a bunch of topics. We talked about Swift types, stepping, data formatters and modules, and probably even more than that. The important thing is, your feedback matters a lot to us. I've been in the labs the last couple of days. I've read the blogosphere the last couple of days, and I've seen some amazing feedback from you guys. I've seen the great things you've started doing in Swift, including the Flappy Bird app in Swift; I've seen that.

We've gotten emails where people tell us amazing things about how they expect these new tools to change their lives, to make their programming better. Keep that coming. Your feedback matters a lot to us. Let us know what we're doing great, let us know what we can do better, and of course, thank you everyone. Thank you.

[ Applause ]