Tools • iOS, OS X • 54:22
Objective-C is continuing to evolve as a powerful object-oriented programming language. Technologies like Automatic Reference Counting let you build more robust and easier to maintain code. Modules make it easier than ever to reference framework classes. See how this new technology will help keep your project organized and how you can take advantage of it. Find out about all the newest features and improvements to Objective-C.
Speakers: Doug Gregor, Dave Zarzycki
Unlisted on Apple Developer site
Transcript
This transcript has potential transcription errors. We are working on an improved version.
Good afternoon. My name is Doug Gregor and I'm here today to talk to you about Advances in Objective-C. Objective-C is a great language with a vibrant user community. If you were here last year, you saw that we were really, really excited about what we could see in the TIOBE Programming Community Index. This is from May 2012. And we see that Objective-C had moved all the way up to fourth place in the rankings. That's pretty amazing. Well, in just the last year, Objective-C has moved up even further, displacing the venerable C++ for the number three spot. Whoo!
[ Applause ]
So how do we evolve Objective-C? Well, there are some things that we focus on. The two things in general that we really want to focus on are developer productivity, that's your time, and software quality, that's the quality that goes into your applications. And we can improve both of these things through evolving the language and the tools that support it. So in the realm of developer productivity, we can do things like find places where there's boilerplate, where you're writing the same thing over and over and over again, @synthesize, @synthesize, @synthesize, and eliminate that from the language by getting the right defaults.
Second, we can find other operations that you do day in and day out throughout many, many different code bases and simplify them, bring the syntax into the language, make them easier to use, faster to write, faster to read. And finally, we can provide great tools, because you use tools to write code in Objective-C. Part of this is developing the tools themselves, and part of this is making sure that the language itself is amenable to building great tools. We'll actually get back to that with our first major feature.
The other area is software quality and how we can help there through the language. So, a couple of areas. We can try to catch more bugs earlier. You can do this through stronger and better static type safety, so the compiler can reason about the types in your program and warn when something is going wrong.
Next, we can find error prone tasks, for example, writing retain and release everywhere, automate those away within the compiler to eliminate huge classes of problems. And finally, Objective-C is a language with a rich history. We have a large developer community that has established best practices for how to use this language well.
And we can bring those into the language to help you build better software. Today, we're going to talk about a couple of things. We're going to talk about a new Objective-C language feature called Modules. We're also going to talk about better productivity in the use of Objective-C. And finally, some improvements to Automatic Reference Counting or ARC.
Modules. So, the idea behind Modules is that if you look at applications built for iOS and OS X, at the core of these applications is the use of a ton of really great system frameworks. This is how you integrate with services like iCloud or with Game Center. Maybe it's using iAd to introduce ads into your application, or Core Location so that you can give your user relevant content for where they are right now.
And so, this is sort of the foundational layer on which you build all of the magic of your applications. So we looked at the process of how it is that you use a framework. Well, first, you go into Xcode. You go into your code editor. You write the #import for the framework you want.
In this case, we're going to pull in iAd and use that as our demonstration. And the name is really important, so you see iAd twice when you import iAd/iAd.h. That's fine. You start writing your code to the iAd framework, use some tutorial samples and so on. You hit Build and you get the dreaded link error.
If this is the first time you've seen this, this is horrifying and you have to search to see what actually went wrong. But of course seasoned developers know. Fine, there are several ways to fix this. You can go edit the project: just go over to the build phases, open the disclosure triangle, hit the Plus, go find the framework again. We've said iAd three times now if you're counting. Hit Add and we can actually build our application. Not exactly wonderful.
And both of these steps are very disjointed. We have the #import which is what you write in your code and then we have the addition of the library which is something you do in Xcode, elsewhere. And so, let's go back to the #import side of things because #import is a teeny tiny innovation over the basic #include that's been in C for three, four decades based on the preprocessor.
And so, we're going to look a little bit at how #import and #include actually work. So you have your application, some .m file from it. And what it does is it #imports iAd.h. Fine, what does that actually do? Well, it resolves iAd.h, and the compiler goes and hunts for the next thing that iAd included and the next thing that that included. Eventually, we get back to UIKit and all of its headers and all the things that those bring in. And so, really, the dependency that you have from your .m is out to a whole bunch of different header files within the SDK.
How does this actually work as a language model? Well, again, this is the C model of the preprocessor. It's essentially textual inclusion, or a fancy form of cut and paste. So, here we have, you know, a simple .m for an app delegate. It imports iAd.h. What does that do? The first thing the compiler does is go find what iAd/iAd.h actually refers to, and it comes up with a file on disk. Fine, it copies that file, preprocesses it, and pastes the result into our .m.
Okay. And then what do we have? More imports. So we go hunt for the next file. Take its text, copy it, preprocess it, paste it in, and the .m gets a little longer. And we go hunt for more files and we copy and paste those in. And what you get at the end is one big long .m, which is what the compiler actually sees for each .m file in your application.
This model has been working for decades, so what's wrong with it? Well, it has two problems. The first problem we're going to talk about is that it's a very fragile model. So I'm going to do something here that may make a few of you cringe. I'm going to #define readonly to 0x01 because that makes sense for my .m, for my application code. And I happen to do that before #importing iAd.h. The preprocessor does what it's designed to do. It goes and hunts down these files, copies them, preprocesses them, pastes the result, and we end up with this file up here, which is the .m the compiler sees.
The compiler is not going to like this .m, and it's going to complain that 0x01 is not a valid property attribute, and it's correct. The really unfortunate thing here is that the error you get is in the system headers. That's not code you wrote, and yet somehow you accidentally broke it just by defining a local constant in your own source file.
And now, you can blame me for doing this. I'm the one who wrote the code on this slide. Clearly, it's my fault, because what I should have done is used a prefixed, very long, uppercase name for my constant, because that's what we do with macros. It's the convention that we've established within the C programming world to cope with this fragility problem.
And so, it doesn't happen often that you hit these problems. But we do hit them in programming. And usually, they come in as some sort of header include-order dependency. Someone's header over here didn't follow the rules, didn't get the memo, and it stomps on another header over here. And if you include them in one order, things work fine, or with one version of some framework, it works fine.
You migrate to another version and, suddenly, there's a conflict that you get to debug. If you're lucky, it manifests as an error that's fairly easy to track down. If you're not so lucky, it could actually be a runtime problem that's really hard to track down, for something that ends up being caused by just flipping two includes.
So this is a problem that we deal with, but we've been working through it with our conventions. It's fine. The real issue here, however, is that this whole model is inherently not scalable. And so, to see this, we took all of the .m files in iOS Mail and we plotted them according to their size on disk.
So it's got, you know, about 250 .m files here, and you can see they range from half a kilobyte up to about 200 kilobytes in size, with a very large skew toward really tiny files. And we see this across numerous projects: you tend to have many, many small .m files.
Now, we've added an import of iAd.h into a fairly central header. So what that really means is that for all these .m files, we're not just parsing what's in the .m file, we're also parsing everything that's in iAd. iAd is a fairly small framework, and the headers come in at about 25 kilobytes. So, for many of these files, just the size of iAd dwarfs the size of the actual code that you wrote in your .m file.
Of course, iAd isn't standalone, and everyone needs UIKit everywhere, and UIKit is more like 400 kilobytes. Okay. So now, our tiny little files, which is most of what's here, are actually going through 425 kilobytes of header files, pulling all those in from disk and parsing them just to get at your tiny little bit of code.
And if you think this is bad, this is iOS, where UIKit is actually fairly small. On OS X, the Cocoa framework that you pull in everywhere is about 29 times larger than UIKit. So you can't even see the .m files, your own code, in this kind of chart. So what this presents is an inherent scalability problem. You can't scale with a system like this, because you have your M source files and you have the N headers.
That's the storage on disk, M plus N. But the time to compile is M times N because you're reparsing every one of those headers for all of your .m files. And of course, both M and N are growing as you build your applications and add more code to them and as the system adds more frameworks and APIs to them.
So clearly, it can't be this horrible, or you'd all be screaming at us to fix the compile time issue. And so, one of the features that we've had for a long time to try to solve this is precompiled headers. And precompiled headers actually do help a lot. The idea is fairly simple. You take some subset of headers that's common across your entire project, like maybe all of UIKit, and you compile it once into some efficient on-disk representation.
And then whenever you build a .m file, you load that representation first, that binary representation that's fast, no parsing, and start from there. Now, this is great because you don't have to parse UIKit or Cocoa. And in fact, when you started your project with Xcode, you got a precompiled header for UIKit or Cocoa for free. But anything else that you've added later on, when you add that #import of iAd.h, is still being parsed over and over again.
You could fix this if you really wanted to. You could extend your precompiled header to include iAd.h, and now you're no longer parsing this every time. What we've seen, however, is that developers don't generally maintain their precompiled headers. A few people do, and they see more benefits out of precompiled headers. But most don't, partly because they don't know about it, partly because they don't want to be optimizing for our tools. But also, there's another reason you might not want to do this, and that's that using precompiled headers introduces namespace pollution.
You may not want to have iAd in every part of your application. It may be fairly centralized, but putting it into your precompiled header makes it available everywhere. So it's always showing up in code completion results, for example. It's always available. And so, there are principled reasons for not wanting to put everything into your precompiled header.
So Modules are designed to solve these two problems: the inherent scalability problem of headers and also the fragility problem of headers. So what are these Modules? Think of them as an encapsulation of what a framework is: its API and its corresponding implementation. A Module is something that's compiled separately, once, and set aside, so that later on your application can import that Module, get access to the API, get access to the implementation, without having to go through and parse the headers.
Now, in support of Modules, we introduce one little bit of syntax. It's the @import declaration. What @import does is pull in the API for a particular Module, which corresponds to a framework. So here, we're importing the iAd framework's API into our application. Now this is what we call a Semantic Import, and it's very different from the textual inclusion that you get with headers, because with semantic import, of course, we don't parse the headers, but it also doesn't let the API that's exposed by @import be changed by any of your local context.
So if I do this horrible thing that I did earlier, #defining readonly to 0x01, it's perfectly fine. That doesn't change or break the API of iAd in any way. The API you get out of the iAd Module is exactly what the authors of iAd intended you to get. You can't make that mistake here.
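To make the contrast concrete, here's a minimal sketch (not taken from the slides) of the semantic import sitting next to the macro that used to break the textual include:

    // Objective-C sketch: with Modules enabled, this local macro can no longer
    // leak into the framework's headers the way it did with #import.
    #define readonly 0x01

    @import iAd;   // semantic import: the iAd API arrives exactly as its authors wrote it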
Now, Modules can be thought of as monolithic things, like we often think of a framework as a monolithic thing: I want to get all of the API of iAd. But you don't have to think about frameworks this way, and therefore you don't have to think about Modules this way. And so, we can think of a Module as being a larger structure with smaller pieces, which we call submodules. So here, we have ADInterstitialAd and ADBannerView as submodules within the iAd Module itself.
We can import just part of a framework by writing @import iAd. and then one of the submodule names. In this case, it's ADBannerView. And what that does is give us just the API corresponding to ADBannerView within iAd. So from an API perspective, this is giving you exactly the same thing that you would get out of #import of iAd/ADBannerView.h. And in fact, the framework headers and the submodules match up exactly.
You can see this if you look at code completion, for example: after @import iAd. you see the submodule structure, so you can get at exactly what you want, and it matches up exactly with the header file names that are there. Now, once you've used @import, you get the API of a framework.
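As a sketch, importing just a submodule rather than the whole framework looks like this; in API terms it matches the corresponding textual include:

    @import iAd.ADBannerView;          // just the ADBannerView portion of iAd

    // Roughly equivalent, API-wise, to the old textual form:
    // #import <iAd/ADBannerView.h>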
You also get the implementation for free via the Autolinking feature. And so, once you've switched over to Modules and you're importing a particular Module, the compiler is just going to record in the object files it creates which Modules you used, so that we'll automatically link against those frameworks and you never have to go in-- thank you.
[ Applause ]
Right. So you should not have to go in and then link binary with libraries anymore. So what does it take to use Modules? We've shown the new syntax, the @import syntax. Modules are an opt-in feature. You can opt in via a build setting, as I'll show in just a few moments.
And of course, once you've opted in, you have access to the @import syntax. Now, you probably have a couple of #imports and maybe some #includes in your code, maybe a handful, hundreds, thousands. We don't actually want you to have to go and rewrite those, not even automatically. Of course, we could migrate them.
What we really want is for you to be able to turn on Modules and go use the feature immediately. And so, the way we deal with this is we actually automatically remap the #imports and the #includes in your source code. When those refer to a header that we know is part of a Module, we just treat it as if you had written @import all along.
And the great thing here is you don't have to change your source code to use Modules. You just need to opt in via the build settings. The Module, via @import, provides the exact same API that you got before, just through a different mechanism that is safer and more efficient.
Now, all of the system frameworks in iOS 7 and OS X Mavericks are available as Modules. And so, when you opt in to Modules, anything you're using from the system, any of those system frameworks automatically goes through this more efficient, safer path. You may be wondering, how does this actually work under the hood? Well, let's take a quick look. So, the basic idea is we have this notion of Module Maps.
And a Module Map establishes a relationship between the headers that are part of the framework, which have always been there, and the actual logical Module structure. So here's a fragment of a Module Map. It defines the UIKit Module based on the UIKit framework. It says that to actually get the contents of the UIKit Module, you parse the umbrella header UIKit.h, which is what you generally import anyway. So this is the same API description. And anything that UIKit.h itself imports becomes a submodule within UIKit. This is what reflects the header structure in the logical Module structure.
And finally, you can see Autolinking here through the link framework line, which says that when you actually use the UIKit Module, you should link against the UIKit framework. Now, these Module Maps are actually very crucial, because in our SDKs we don't ship Module binaries. Instead, we ship headers like we always have.
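Pieced together from that description, a module map for UIKit would look roughly like this; this is a sketch in Clang's module map syntax, and the actual file shipped in the SDK may differ:

    framework module UIKit {
        umbrella header "UIKit.h"    // parse the umbrella header to get the API
        module * { export * }        // every header UIKit.h imports becomes a submodule
        link framework "UIKit"       // Autolinking: using the Module links the UIKit framework
    }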
And when the compiler asks for a Module, when you ask to @import UIKit, the compiler will find the Module Map, which tells it how to build UIKit, and effectively spawn a separate compilation process to go separately compile UIKit.h into the UIKit Module, which is then cached in Xcode's derived data. So the next time you come through and ask to import UIKit, it's already there and it's instantaneous to load.
So this is what breaks the M times N scalability problem down to an actually efficient compilation model. So let's take a quick look at what this does to build times. Build times, of course, are for an entire project. And so, we'll talk about a couple of projects at different scales and with different levels of utilization of the precompiled headers feature. Xcode itself is a very, very large Objective-C project, with a lot going on in the build.
And in fact, they've been tuning their precompiled headers for years. And so, what we see when we turn on Modules is that they don't have to change their source code at all. It's just a build setting. And they get a smallish win, a couple of percent win in the build time. Since they had optimized precompiled headers, this isn't a huge surprise.
Preview on the Mac is actually a much smaller project, as you might expect. It also has a fairly decent precompiled header. And so, you get a small win, [inaudible], a larger win out of using Modules. Again, no source code changes required, so it's essentially free performance here.
And finally, the Mail application on iOS didn't have such great use of precompiled headers, because they hadn't been actively maintained, like most developers don't actively maintain their precompiled headers, and it got a huge 40 percent speedup just from flipping the switch, turning on Modules, and not doing anything else. This is the elimination of repeated header processing really helping.
Now, overall project build times are a little bit messy, in the sense that we're not really just measuring what the compiler does. There's a whole lot of other things going on. So, let's go to something a little more heavy on the parsing, and that is indexing. When Xcode is indexing your project, it's parsing all the sources in your project so it can build that rich cross-reference to give you more information at your fingertips within the IDE.
And so if we take these same projects, indexing time for Xcode got a bit faster, we're in the seven percent range or so. Preview on the other hand got pretty significantly faster, so 32 percent faster indexing time just from switching to Modules. And iOS Mail, as you may have seen earlier this morning, got 2.3 times faster indexing just from doing the switch to Modules.
Hopefully, at this point, I've convinced you that you should at least try out Modules; it's fairly easy to do. If you start a new project in Xcode 5, Modules are enabled by default. We really think this is the way forward for Objective-C to get access to system frameworks. If you have an existing project, to convert it to Modules, just go into your Build Settings, find the Modules setting, change it to Yes, and then rebuild. Nothing else is needed. Now, if you're doing some fancy linking tricks, you may actually want to turn off the Autolinking feature, in which case there is a separate option here where you can turn it off. Most users shouldn't actually need to do this.
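In build-setting terms, the switch being described corresponds to something like the following xcconfig fragment; treat the exact setting names as an assumption on my part rather than a quote from the session:

    // "Enable Modules (C and Objective-C)" in the Build Settings UI.
    CLANG_ENABLE_MODULES = YES

    // "Link Frameworks Automatically" -- leave YES unless you do your own linking tricks.
    CLANG_MODULES_AUTOLINK = YES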
As you may expect, there are a couple of caveats. First caveat: you need to be using the iOS 7 or OS X Mavericks SDK. Only those SDKs have support for Modules. Now, of course, you can use the new SDK and still deploy backward. Modules don't change your source code, and they don't change how your code is actually built. You just need to move to the newer SDK to get, essentially, the Module Maps that tell the Module system how to work.
The second point is that Modules aren't available for C++. Now, it's perfectly fine to enable Modules in a C++ project. Essentially, the fact that you requested Modules will just be ignored for the C++ sources; you'll still get the benefits of Modules for your Objective-C sources. The only downside here is that you can't use the fancy new @import syntax in something that's shared between C++ and non-C++ code. And finally, while Modules are available for all of the system frameworks on iOS and the Mac, they're not available for user frameworks.
So, let's wrap up here. We talked about this new feature, Modules. The idea behind Modules is to simplify the use of frameworks, so you just get the nice semantic import behavior, which is much harder to break than the textual inclusion behavior. And this means we've essentially eliminated all of the problems with strange header-order dependencies between system frameworks and user code, and we've eliminated the separate link-with-library step through the Autolinking feature of Modules.
Now, Modules are actually a lot more than just a user convenience. We're fundamentally changing the underlying model of how we access APIs, in a way that can significantly improve the performance of source tools. And the very nice thing here is that improvement essentially comes for free. You no longer have to tweak your precompiled header to get good build times. Just use Modules and forget about the precompiled header; Modules will do the right thing.
And finally, you can enable this feature without any changes to your source code, whatsoever. It's changing your Build Setting and rebuilding your application. The application doesn't change. Your source code doesn't change. So with that, I'd like to turn you over to my colleague, Dave Zarzycki to talk about advances in Objective-C. [applause]
All right. Thanks, Doug. So I'm going to be talking to you about more advances in Objective-C, some recent, some new. So, I'm going to be starting off talking about better productivity. We're going to be talking about tool support for modernizing your code. We'll be talking about improvements in the SDK and how they make your life better and more productive and generate better code. And we'll be talking about block return safety and catching some common errors.
And then we'll be talking about the runtime and your code. And then, for the rest of the talk, we'll be talking about Automatic Reference Counting. We'll be talking about updates we've made to it, and we'll talk about improvements in generating better warnings that help you write more correct code. So with that, let's jump in and talk about tools support for modernization.
Something we did recently was adding a Refactoring Tool to modernize your code. It's found right here in the Edit Menu, under the Refactoring Submenu: just convert to the Modern Objective-C Syntax. So what does this do? Well, it reduces a ton of boilerplate in your code. We have object literals, container literals, and improved subscripting. And this is covered in depth in last year's version of this talk.
So let's look at an example of this. Here is an example involving one of my favorite jazz musicians. Now, we do have literals. We have string literals. We have a lot of other things. But we need to remember how to create a dictionary. What factory method to call? We need to remember the order of the keys and the objects. We need to remember that they have to be objects. And we have to remember to nil-terminate this list. And similarly for NSArray, we have to remember the right factory method to call. And like NSDictionary, we need to remember to nil-terminate it.
Similarly, NSNumber has the same problem. We need to remember the right factory method to call. Is it an int? Is it a long? Is it a short? We need to remember the right one for BOOL. There's a lot of opportunity here to reduce boilerplate. Well, with the Refactoring Tool, you can adopt the modern syntax. Dictionary literals just become @ curly brace. Array literals become @ square bracket. The compiler helps you remember keys and values and the fact that they have to be objects. You don't need to worry about nil-terminating the list.
And similarly, for NSNumber, you don't need to worry about what type it is anymore. You can just say @ number or @YES or @NO. So this is a huge simplification, and we have tools to help you adopt the syntax, so you can focus on writing great code and sweep away the details.
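Here's a small before-and-after sketch of the literal syntax; the jazz-album data is made up for illustration:

    // Before: factory methods, key/object ordering, boxing, and nil termination by hand.
    NSDictionary *album = [NSDictionary dictionaryWithObjectsAndKeys:
                              @"Kind of Blue", @"title",
                              [NSNumber numberWithInt:1959], @"year", nil];
    NSArray *tracks = [NSArray arrayWithObjects:@"So What", @"Blue in Green", nil];

    // After: modern literals; the compiler handles the details.
    NSDictionary *modernAlbum = @{ @"title" : @"Kind of Blue", @"year" : @1959 };
    NSArray *modernTracks = @[ @"So What", @"Blue in Green" ];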
Similarly, we can consider containers before the modern syntax. Throughout your code, you work with containers and you have to write this code repeatedly. You have to remember whether the key comes first or the object comes first; it's just a lot of boilerplate that could be simplified. Well, with the modern syntax, you can do that. You can use common subscripting syntax that's available in a variety of languages to access containers in the modern SDK.
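And a sketch of container access with the modern subscripting syntax, continuing the made-up album example with a hypothetical mutable dictionary:

    NSMutableDictionary *info = [modernAlbum mutableCopy];

    // Before: remember the right accessor and argument order.
    NSString *title = [info objectForKey:@"title"];
    NSString *firstTrack = [modernTracks objectAtIndex:0];
    [info setObject:@"Miles Davis" forKey:@"artist"];

    // After: common subscripting syntax.
    NSString *modernTitle = info[@"title"];
    NSString *modernFirst = modernTracks[0];
    info[@"artist"] = @"Miles Davis";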
Now, there's a ton more to the modern syntax that I'm not going to cover here, and I strongly suggest that you watch last year's talk. We have boxed expressions via @ parenthesis. We have the full interaction with C types, if you want to understand how they work, like shorts and chars and longs and unsigned behavior. And we teach you how to implement subscripting for your own classes; you can see all of this in last year's version of this talk. So with that, I'd like to jump into SDK improvements and how they will improve your productivity.
So the SDK is constantly leveraging the compiler. It's adopting new features. It's helping you write more correct code, safer code, and get better compile-time detection of errors and problems that you might be running into. And specifically, I'd like to call out two features that the new SDKs have adopted that will affect your experience and help you write better code.
And specifically, those are the instancetype keyword and explicitly-typed enums. So let's jump in and consider what those are. Now, some of you can probably look at this code and already see the bug. We're taking an NSArray and we're assigning it to an NSDictionary variable. That's terrible. But copy and paste errors are easy. Refactoring errors are easy.
And in fact, now with the SDKs adopting this, you will actually get a warning pointing out the problem. So how is it that the compiler knows we have a problem? In previous versions of the SDK, array and many similar APIs returned id. The problem is that id implicitly converts to anything, so the compiler didn't historically know that there was a problem here. In the new SDK, array returns instancetype. This is a contextual keyword. It's only for return types. And subclasses don't need to redeclare array to expose the fact that they're returning an instance of the subclass.
And finally, the compiler contextually matches the return type to that of the receiver. Okay, well what does that mean? Let's consider subclassing NSArray. Let's say we create a class named Foobar. We don't do anything more. We just put in @end. And what happens in this code, now that we're taking a Foobar, calling array, and assigning it to an NSDictionary variable? Well, the compiler will still print out the warning. Great.
But what I'd like to point out is that the compiler is contextually taking the receiver type, Foobar, and printing out a warning pointing out that the return value is also a Foobar, and that's the source of the problem. So that's the instancetype keyword. Next up, I'd like to talk about explicitly-typed enums, another feature that the SDK has adopted that will show up in your code and help you detect more errors and be more productive.
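A sketch of the instancetype behavior being described, using the hypothetical Foobar subclass:

    @interface Foobar : NSArray   // adds nothing; +array is inherited from NSArray
    @end

    @implementation Foobar
    @end

    // Because +array is declared to return instancetype, the compiler knows
    // [Foobar array] produces a Foobar, and warns about this assignment.
    NSDictionary *titles = [Foobar array];   // warning: incompatible pointer types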
So let's look at this code. Some of you who have experience with URLs may already see the bug. These are not the same enum. We have an NSURLHandleStatus on the left. We have an NSURLSessionTaskState on the right. Whoops. Well, again, copy and paste errors are easy and refactoring errors are really easy. And the reason this used to compile in the past is that enums were essentially just global integers. So, we're just assigning one number to another.
Well now, with the new SDKs, you will get a warning pointing out that these are of different types, which is exactly what you want. So how does the compiler know? In the past, we declared enums like this. On one line, we would declare the enum and enumerate, you know, ABC, JKL, XYZ. And on the next line, we declare a typedef, where we say what the storage is and give it a name. Well, the problem is that the first line is just an int. We haven't actually bound the two pieces of information together here.
And the way we fixed this in the SDK and with the compiler is that the compiler supports a new feature, explicitly-typed enums. What you can see here on the first line is that we've actually moved the storage up, and now the enum knows what its storage type is: it's no longer an int, it's an NSUInteger. Then on the next line, we actually bind our enum to a type available for use.
This was all covered in-depth in last year's version of this talk. Now, the Cocoa team has provided convenient macros that expose this feature. We have NS_ENUM for traditional enumerations, like we just demonstrated: you know, ABC, JKL, XYZ. And they also have a convenient macro, NS_OPTIONS, for bitwise options, like, you know, different flags. So I recommend the use of these macros, and you'll see them in the system frameworks.
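A sketch of the two macros in use; the type and enumerator names here are invented for illustration:

    // NS_ENUM: an explicitly-typed enum for mutually exclusive values.
    typedef NS_ENUM(NSUInteger, MyDemoState) {
        MyDemoStateABC,
        MyDemoStateJKL,
        MyDemoStateXYZ,
    };

    // NS_OPTIONS: the same idea for bitwise flags.
    typedef NS_OPTIONS(NSUInteger, MyDemoFlags) {
        MyDemoFlagNone   = 0,
        MyDemoFlagFirst  = 1 << 0,
        MyDemoFlagSecond = 1 << 1,
    };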
But we don't stop with just warnings. We also improved code completion with NS_ENUM and explicitly-typed enums. So before NS_ENUM, if you tried to code complete our example enumeration here and you typed X, you would see a bunch of XPC-related APIs and you wouldn't see your enum. That's not fun. Well, if we just switch to the NS_ENUM macro, and thus pick up the compiler feature, Code Completion gives us exactly what we want, and we see our enumeration available in Code Completion, which is great.
But it doesn't stop there. The power of explicitly-typed enums manifests in multiple ways. So in this particular case, we have an NSArray that we're trying to sort using a comparator. We do some logic and then we decide to return ascending or descending. Now, if you look closely, we actually haven't specified the return type of this block between the caret and the opening parenthesis.
And the compiler would actually give us an error saying, "Well, we inferred the type of this block as returning int, but the API actually takes NSComparisonResult." All right. Well, how do we fix this? Before explicitly-typed enums, we had to cast, thus assigning the correct type.
And yes, this would make the error go away, but now we have this lingering cast in our code that, you know, could create future problems. Because explicitly-typed enums allow us to fix this and give the enum an explicit type, we can help you avoid casting. In fact, you can now go and delete these casts, go back to the natural-looking code you wanted to have in the first place, and write it as intended.
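A sketch of the comparator with no declared block return type; with NSComparisonResult as an explicitly-typed enum, the inferred return type is correct and no cast is needed. The array contents are made up:

    NSArray *names = @[ @"Miles", @"Bill", @"John" ];
    NSArray *sorted = [names sortedArrayUsingComparator:^(id a, id b) {
        // The enum constants now carry the NSComparisonResult type,
        // so the block's inferred return type matches the API.
        if ([a length] < [b length]) {
            return NSOrderedAscending;
        }
        return NSOrderedDescending;
    }];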
Digging deeper on what NS_ENUM can do for you, let's consider how implicitly-typed enums can manifest in different ways. Again, before explicitly-typed enums, these two URL-related enums, which are actually different, were just ints as far as the compiler was concerned. And this manifested as a silent bug in your code.
Now, with NS_ENUM, you get the warning that you want, and you have to decide how to fix the code. Here, the warning is pointing out a design problem, so there is no quick fix. You'd have to think about it and actually figure out what you originally intended.
So with that, I'd like to move on to the Objective-C Runtime and you. The Objective-C Runtime is the core of the language. It enables a ton of dynamic behavior. We have, of course, dynamic method dispatch. We have object introspection. We have object proxies. And we have dynamic class construction, even dynamic method replacement.
The runtime enables a ton of innovation in the language. We've added many features over the years, and it's really the heart of all these features. To give you some examples: we've added key-value observing, associated objects, @synchronized to do locking, weak references, tagged pointers, and the list goes on and on.
I'd like to call out tagged pointers, though, because we have some new warnings to enable innovation. So, let's first dive deep and ask the question: what are tagged pointers? They were added in 64-bit Cocoa for small value-like objects. Examples of value-like objects are NSNumber and NSDate, just values.
What we're doing is storing the object in the pointer itself, so we don't actually need to call malloc or free. And when you don't call malloc or free, your code gets a ton faster and more space efficient. It's three times more space efficient and over 100 times faster to allocate and deallocate these small value-like objects. Okay, that's great in theory, but I'm a visual person; show me how this actually works. In a normal pointer, we're actually only using the top 60 bits. The bottom four bits of a pointer are always zero because objects are always 16-byte aligned.
We can take advantage of this fact to implement what we call tagged pointers, where we actually store a discriminator in the bottom bit, and when it's one, we can store a ton of data in the rest of the bits. And this is in fact what we do. Having said all this, this is an implementation detail. Some of you have discovered this feature and we need you to undiscover it. [laughter] The runtime details are private. And in fact, whatever remaining little tidbits of the data structures you're finding that are still public are becoming private.
Most of your applications are well behaved, and we thank you for that. They use APIs to introspect things, and this lets us innovate considerably, as we've already described. But we've added some new warnings to detect the use of tagged pointers and a related problem of raw 'isa' access. So, you might have code like this in your program where you're testing the tag bit, and then you're like, "Great, I have discovered the tag bit isn't set, I'm just going to run in there and access the isa directly, because I think I'm optimizing, this is fine." But in the case when the tag bit is set, you actually call the correct API. Well now, you're going to get a warning for that tag bit check. And you're actually going to get an error for the direct usage of the isa.
Well, how do you fix this? You delete the testing of that bit and the direct access to the isa, and you actually call something like isKindOfClass: or object_getClass. We really need you to do this so we can unlock the next level of innovation. Failure to do so might break your code in the future. So please heed these warnings and errors in your code and do the right thing. Thank you.
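A sketch of the supported way to introspect an object instead of peeking at tag bits or the isa; the function and value here are hypothetical:

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    void InspectObject(id obj) {
        // Ask the runtime; never read obj->isa or test tag bits yourself.
        Class cls = object_getClass(obj);
        if ([obj isKindOfClass:[NSNumber class]]) {
            NSLog(@"%@ is an NSNumber (class %@)", obj, cls);
        }
    }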
Finally on the runtime part of this talk, I'd like to talk about Garbage Collection. GC only exists on the Mac. We have replaced it with ARC and in fact we deprecated Garbage Collection as of 10.8. We're very serious about this. We're not supporting Garbage Collection in new frameworks, things like AVKit or Accounts or GameController or GameKit, et cetera, et cetera, we're not supporting Garbage Collection. We really need you to use the ARC Migrator to transition off GC.
So with that, let's talk about Automatic Reference Counting and tell you about updates we've been doing and some improvements to help you write better code. Let's start with the updates. Cocoa is designed with reference counting semantics in mind. This is great. Being able to deterministically know when an object is destroyed allows you to better reason about your code. It allows you to better schedule things. It allows you to better design. And it's also just great for debugging. ARC also helps you write great code. It allows you to focus on what matters and not the minutia of details of when things need to be released.
The majority of new App Store submissions are using ARC. So a lot of you also agree that this is a really great tool for focusing on what matters. Another great example of ARC is Xcode 5.0. This used to be a GC app. It was a large app. Nevertheless, we were able to convert it to Automatic Reference Counting, and we're thrilled with the results.
We're thrilled with the better determinism. We love the better debugging. We love that we're able to offer tons of better performance. And we hope that you'll find the same experience. Speaking of performance, we are continuing to improve the performance of ARC. Weak references are now about twice as fast in this year's versions of our operating systems, iOS 7 and 10.9 for the Mac. And we're also improving the debug experience as well. We have more predictable memory usage under debug builds. Specifically, the lifetime of autoreleased objects is much more like release builds.
Now, when you autorelease an object, you don't necessarily know when it goes away. And in fact, ARC optimizations could kick in and change that timing. We've improved the compiler so that debug builds now release objects much more like release builds do, and we hope you appreciate that.
[applause] So this is great, [inaudible] ARC. Well, we have the Migrator. It does all the heavy lifting for you. It removes retain, release, and autorelease. It deletes empty dealloc methods, if all your dealloc method was doing was calling release, release, release. It converts NSAutoreleasePool to @autoreleasepool in the modern syntax.
But you have to do the rest. You need to reason about some rare things like id in structs. Usually the easiest thing to do is convert these to classes and then, you know, your code looks prettier in the end anyway. You also need to reason about some atypical uses of memory management APIs.
This was covered in depth last year in the Automatic Reference Counting talk, and you can get all the details there. But if you don't have time to jump back to the video, here's what you need to do. Just like with the modern syntax, you can go to the Edit Menu, go to the Refactoring Submenu, and convert to ARC, and let the tools help you along the way.
So, ARC and your app. We really want you to switch to ARC by default and focus on what matters, which is your app and writing great code. You can always opt specific files out if you run into problems: you can just go to the per-file Build Settings and select the compiler flag for turning off ARC. And I'd also like to point out that the ARC Migrator supports both manual reference counting code and garbage-collected code, and it helps you migrate both easily and straightforwardly.
Now for an update on new things we've added that we think you will love. Let's talk about some new memory management warnings we have added to help you better reason about life under ARC. So, there are three things I'm going to be talking about. We're going to be talking about the implicit referencing of self and retain cycles with blocks. We're going to be talking about repeated use of a weak variable, and what that even means. And then thirdly, we'll be talking about sending messages to weak variables and how to better reason about the behavior thereof.
So let's jump in first and talk about retain cycles. As a brief refresher, let's imagine your app is just referencing an object. The reference count of this object will start out as one. And similarly, if that object references another object, that will be one. But if we actually have a reference back to the original object, its reference count will be two. And if our app lets go of the object, we have a leak, because now these two objects are holding references to each other and keeping each other alive.
So with that in mind, let's look at some code. Let's say in a class you have two instance variables. One of the instance variables holds a block and the other one is just an object; it doesn't really matter what kind. In the block we use ivar2, and then we assign the block to ivar1. Well, what's actually going on under the covers, and how the compiler reasons about this, is that we have an implicit use of self in both of these cases.
And those are the actual objects in question that we need to think about. So let's delete that and then see what warning the compiler can now print out. Once I've enabled this warning, the compiler will point out that we're capturing self strongly in the ivar2 case, and then it points out the related case where it believes the cycle began. Well, again, I'm a visual person, so let me show what this looks like in practice.
So we have an instance of our class and we have ivar2. Again, ivar2 can be any object, string, whatever. And now we're creating this block. Now when we wrote the code, it may look like this. It may look like we're just assigning the block to ivar1 and we're using ivar2. What's the problem? I don't see any cycle.
Well because there is an implicit use of self, the block is actually retaining self. And now we have a cycle and now it's indirectly accessing ivar2. And again, we'll get the same leak that we demonstrated earlier if we let go of the instance of our class, the block will be keeping that instance alive and we have a leak.
So let's go back to the code and the warning. How do we fix this? Well, we make some room and we add a weak variable. So what we do is we create a weak variable on the stack and assign self to it. This variable is an instance of the same type as our class. And then what we do is we use this weak variable in our block. And if we do that, the warning goes away.
So what's going on here? Weak variables do not extend the lifetime of objects, and therefore they don't implicitly create retain cycles. And the great thing about weak variables is that they safely become nil when the reference count of the object they're referring to drops to zero. Now, in this particular case, the lifetimes are tied together, so we don't have a problem. But it allows us to break the cycle and actually get the behavior we want when we release the instance of our class.
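A sketch of the fix, with hypothetical class, property, and block names standing in for the slide's ivar1 and ivar2:

    #import <Foundation/Foundation.h>

    @interface MyController : NSObject
    @property (nonatomic, copy) void (^ivar1)(void);
    @property (nonatomic, copy) NSString *ivar2;
    @end

    @implementation MyController
    - (void)setUpBlock {
        // Using self.ivar2 directly in the block would capture self strongly
        // (self -> ivar1 -> block -> self) and create a retain cycle.
        __weak MyController *weakSelf = self;
        self.ivar1 = ^{
            NSLog(@"%@", weakSelf.ivar2);   // weak capture: no cycle
        };
    }
    @end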
So building on this, let's talk about weak variables in general. Consider this simple method where we're logging the description of a weak ivar. Does this method even get called? What happens if the weak variable is nil? You know, what actually happens here? How do we reason about this at all? Well, now the compiler can warn about this, saying that we're using a weak variable and it may unpredictably be nil.
Well, what do we do about this? Well, it's actually worse than that. We can have a weak variable and use it twice. Does this get called zero, one, or two times? You know, how do we reason about this? Well, there's actually a solution for both of these. In the repeated use case, we now have a specific warning for that too, pointing out that, you know, you can't actually reason about the zero, one, or two case.
[ Pause ]
So let's go back to the original code and the original warning and look at how we fix this. Let's make some room and do as the compiler advises: put a local strong variable on the stack and assign our weak variable into it. Once we've done that, that strong variable is either nil or not nil. It's not going to change magically out from underneath us. And because we know that, we can now test for it. And if it's not nil, we can safely print the description. And if we do that, of course, the warning goes away.
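A sketch of the strong-local pattern; the class and weak property names here are invented:

    #import <Foundation/Foundation.h>

    @interface MyObserver : NSObject
    @property (weak) NSObject *thing;   // hypothetical weak reference
    @end

    @implementation MyObserver
    - (void)logThing {
        // Copy the weak reference into a strong local; it's now either nil
        // or guaranteed to stay alive for the rest of this method.
        NSObject *strongThing = self.thing;
        if (strongThing) {
            NSLog(@"%@", [strongThing description]);
        } else {
            NSLog(@"thing has already gone away");   // the nil case is easy to handle
        }
    }
    @end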
So this is great. We now can reason about the lifetime of this variable. And the great thing too is handling the nil case becomes very obvious. We just add the else block and do the right thing. Next up in the Automatic Reference Counting Improvements, I'd like to talk about the relationship between ARC and CoreFoundation.
If you've already been using ARC, you may have been writing code like this every time you interact with CoreFoundation. You have a CFDictionary and you're getting some value out of it. And in order to help ARC reason about the object lifetime, we use a bridge cast, saying that there's no net change in the reference count here.
This is required because anytime we come in and out of the ARC system, we need the ARC compiler to actually be tracking the reference count, so that objects live only as long as they need to, no longer and no shorter. You can express a +1 to ARC via CFBridgingRetain. You can express a decrement of the reference count via CFBridgingRelease. And you can express no net change via a bridge cast.
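A sketch of the three bridging forms; the string here is created just for illustration:

    #import <Foundation/Foundation.h>

    void BridgingSketch(void) {
        // Created with +1 under the CF Create rule.
        CFStringRef cfTitle = CFStringCreateWithCString(kCFAllocatorDefault,
                                                        "Kind of Blue",
                                                        kCFStringEncodingUTF8);

        // __bridge: no net change; ARC just starts tracking the object.
        NSString *title = (__bridge NSString *)cfTitle;

        // CFBridgingRetain: hand a +1 to the CF side; balance it with CFRelease.
        CFStringRef extra = (CFStringRef)CFBridgingRetain(title);
        CFRelease(extra);

        // CFBridgingRelease: ARC takes over the original +1, so no CFRelease here.
        NSString *owned = CFBridgingRelease(cfTitle);
        NSLog(@"%@", owned);
    }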
Well, you know, it's great that we're using ARC and we've been able to make our CF code and our Foundation code work together, but can we improve this situation? Well, CoreFoundation actually has some really strong conventions. Create and Copy functions return +1 and everything else returns +0. And in fact, we already have some compiler attributes for the exceptions, like CF_RETURNS_RETAINED and CF_RETURNS_NOT_RETAINED, and CF_RELEASES_ARGUMENT for APIs that consume their argument.
And these are there to help the static analyzer, and you may have already seen them kick in, in your use of the static analyzer. Well, what if we could just use these conventions to make the bridge cast go away? In fact, we've formalized the "everything else" case now. The common CF APIs you use now allow implicit bridging as opposed to this explicit bridging.
[ Applause ]
There are new macros available for use, too. And with that, I'd like to show you how this works. So, how do we enable implicit bridging? Let's imagine we're wrapping a CoreFoundation array in our example, Foo, and we have just a bunch of wrappers around the array. Well, the first API we have here is great. It follows the convention; it has Copy in the name. We don't need to do anything.
The second API is also great. We don't need to do anything because it follows the convention. It returns +1. It doesn't consume any arguments. But our third API, we don't know what we were thinking. We decided that we're going to return retained, and we're not following the convention. Well, what we need to do is put a CF_RETURNS_RETAINED attribute there via the macro and let the compiler know what's going on. Even if we just stop here and do this, we've already helped the static analyzer reason about our code.
But once we're done auditing, what we can do is add these macros, CF_IMPLICIT_BRIDGING_ENABLED and CF_IMPLICIT_BRIDGING_DISABLED, to tell the compiler that we've audited our code. Now, these must come after all #includes. Obviously, you're not auditing somebody else's code. You're auditing your code. And you don't have to do it around everything.
If there's code you don't want to think about right now, you can have the explicitly bridged code remain outside of the macros you are using. And that is implicit bridging. All the common CF plist types have been audited, so you can go remove these bridge casts from your code if you're using the new SDKs.
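A sketch of an audited wrapper header along the lines described; the Foo functions are hypothetical:

    #include <CoreFoundation/CoreFoundation.h>

    CF_IMPLICIT_BRIDGING_ENABLED   // must come after all #includes

    // Follows the convention: a Copy function returning +1. Nothing to add.
    CFArrayRef FooCopyItems(void);

    // Doesn't follow the convention, so annotate it explicitly.
    CFArrayRef FooItems(void) CF_RETURNS_RETAINED;

    CF_IMPLICIT_BRIDGING_DISABLED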
So to wrap up, we have Modules. This is really great for finally fixing the textual inclusion problem and all the associated bugs. It also brings great performance gains for compilation time and indexing. And it's just a much more pleasurable experience with features like Autolinking. We also have improved productivity with better compiler warnings throughout, and SDK adoption of these compiler features to help you catch errors early and write more correct code. And with ARC, we've made it better and faster, allowing you to better reason about simple retain cycle and weak reference bugs, and also easier in that you no longer need to write bridge casts for common CF plist types.
For more information, I'd like to point you to Dave DeLong, our evangelist. We also have tons of documentation on the developer website and of course the Developer Forums. We have two labs, one tomorrow morning and one Thursday afternoon. Oh, sorry, related sessions: we have What's New in the LLVM Compiler, which happened earlier today, so you'll have to catch it on video. But tomorrow, we have Optimize Your Code Using LLVM in Nob Hill at 3:30. So, thanks for coming.
[ Applause ]