
WWDC10 • Session 206

Introducing Blocks and Grand Central Dispatch on iPhone

Core OS • iOS • 49:52

iOS 4 introduces support for blocks and Grand Central Dispatch, which enable you to encapsulate units of work that may be executed concurrently. Learn how these revolutionary technologies can help you write better, cleaner, asynchronous code.

Speakers: Kevin Van Vechten, Bill Bumgarner, Shiva Bhattacharjee, Daniel Steffen

Unlisted on Apple Developer site

Downloads from Apple

HD Video (140.4 MB)

Transcript

This transcript has potential transcription errors. We are working on an improved version.

[Kevin Van Vechten]

Good morning, everyone! And welcome to Introducing Blocks and Grand Central Dispatch on iPhone. Looks like we have a pretty full room today. If you have friends who weren't able to make it inside, we will be repeating this session later in the week. More details on that to come.

My name is Kevin Van Vechten. I'm the engineering manager of the Grand Central Dispatch team at Apple. Last year we were very excited to release Grand Central Dispatch and Blocks in Mac OS X Snow Leopard. Blocks is a language feature that allows you to organize your code into independent snippets for easy reuse, and Grand Central Dispatch is a runtime API that lets you execute those snippets of code asynchronously and in an event-driven fashion.

It is a very low-level technology. It's part of libSystem, and it's available for all applications to use without needing to link against any additional libraries or frameworks. And this year we're very excited to announce its availability in iOS 4, for use by your UIKit applications. So with all of last year's emphasis on the multicore benefits of Grand Central Dispatch, you may be wondering whether there are any real-world benefits on an iPhone or an iPad that only has a single core. And the answer is yes.

Fundamentally, Grand Central Dispatch is a very efficient runtime for inter-thread communication and asynchronous execution, and these benefits scale up to many cores, as we see on the Mac, or scale down to a single core, as we see on the iPhone and iPad. And by running work asynchronously, it allows you to free up your main thread's event loop to keep your application responsive to the user, especially for touch events. So please join me in welcoming Bill Bumgarner to the stage to talk more about Blocks.

[ Applause ]

[Bill Bumgarner]

Thanks, Kevin. So if you saw the talk earlier today about effective use of Objective-C, I'm going to be repeating some of the concepts from that, maybe a little differently, and then we're going to repeat it again a little later in the week. Part of the reason we're doing this repetition is that Blocks are pervasive. They're our foundation. They're throughout everything, and by understanding this technology effectively, you will be able to take advantage of it, and your code will be simpler, faster, more efficient, and more stable, which lets you ship better apps.

So with that, let's talk about Blocks. Now, if you've come to this platform from other languages like Scheme, Lisp, Ruby, or Smalltalk, then you're probably familiar with lambdas, closures, Blocks even, as they're called, or anonymous functions. JavaScript has anonymous functions with currying. And in Snow Leopard, like Kevin said, we added this concept to C; we call it Blocks. We use a very simple syntax, but that simple syntax belies an underlying power.

Now, on our platform the foundational language is C. Out of C we have Objective-C. We have C++. And of course we smashed the two together in an unholy marriage called Objective-C++.

[ Laughter ]

So where are Blocks available? Blocks are available fully and completely in C and Objective-C.

However, the C++ and Objective-C++ support is not quite complete. Though I am exceedingly happy to be the first on stage, I think, to tell you that in LLVM 2.0 top-of-tree, the Block support passes all 912 unit tests in C++.

[ Applause ]

And if you want to play with that you can grab LLVM 2.0 from the llvm.org site.

So, basic Blocks. What are the most common patterns you're going to see with Blocks, and how are you most commonly going to use them? Well, quite literally, you will be using the caret everywhere. You see the caret, this operator? That introduces a Block. Why the caret? It's the only unary operator we knew of that could not be operator-overloaded in C++, so.

[ Laughter ]

Makes things a little simpler. That, and we couldn't use the snowman, because Unicode is not allowed.

[ Laughter ]

So when you're writing code and you want to take advantage of Blocks, there are a lot of APIs that take Blocks as arguments. These are two of them; one is from Objective-C, one is from Grand Central Dispatch, which you'll hear more about shortly. Quite literally, you throw a Block Literal in your code and pass it as an argument.

It can do some work. It can return values. And it's, you know, simple, elegant, inline. Now, two things of note: one, by convention we generally only pass one Block as an argument to anything, and it is always the last argument. That was two. Doing this simply makes the code more readable, so if you write your own APIs that take Blocks, try to stick with that. There are occasions where it's violated, but hopefully for good reason.
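As a quick sketch of that convention (the array and queue here are assumed to already exist; both APIs are real on iOS 4):

```objc
// Objective-C: enumerate an array; the Block is the last (and only) argument.
[array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    NSLog(@"%lu: %@", (unsigned long)idx, obj);
}];

// Grand Central Dispatch: the Block is again the last argument.
dispatch_async(queue, ^{
    NSLog(@"doing some work off the main thread");
});
```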

So what does the Block Literal syntax look like? Here is an arbitrary Block. It returns a BOOL. It takes an argument of type id. It does something simple, and this is what the syntax looks like. Now, we also wanted Blocks to be exceedingly convenient to use.

And so if you don't return a specific type, you don't need to declare it; the compiler can automatically infer the return type. Similarly, if you don't take any arguments, you have a void argument list, and with a void argument list and a void or inferred return type, you can just drop all of that and have a very simple Block Literal. Most of the APIs that you're going to see shortly take this form.
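A compilable sketch of those forms (the Block bodies here are my own illustrations, not the slide's):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    // Full form: explicit BOOL return type and a typed argument.
    BOOL (^isEmpty)(id) = ^BOOL(id obj) { return [obj count] == 0; };

    // Return type inferred by the compiler from the return statement.
    BOOL (^hasItems)(id) = ^(id obj) { return (BOOL)([obj count] > 0); };

    // No arguments, nothing returned: the very simple form most APIs take.
    void (^work)(void) = ^{ NSLog(@"doing some work"); };

    NSLog(@"%d %d", isEmpty([NSArray array]), hasItems([NSArray array]));
    work();
    return 0;
}
```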

Now, Blocks are more than just code; Blocks can be treated as data. That's kind of how Dispatch and other things work. They allow you to capture code and some state as a chunk of data that gets managed, queued, executed, and handled automatically behind the scenes. So how does this work? Well, in C we've always had function pointers. That was how you could pass around callbacks and such, but there was no way to attach data to them. Block Pointers look very, very similar; it's simply a caret instead of the asterisk.

Now, of course, this can get ugly very quickly, like most C types. When you start returning pointers to stuff that returns pointers to other stuff that has functions and Blocks in it, it gets ugly. To simplify that, of course, you can use typedefs. So in this case we've declared a typedef so we can capture the type of that Block argument that we're passing to that other Block. Let's look at how this works as a whole.

So, we'll define a simple type, a Worker Block. It takes an integer as an argument and has no return type. We can then define a simple repeat function that will call that Block some number of times, passing the iteration count to the Block as the sole argument. And then we can use it.

So, we declare a variable d and give it a value. We declare this Worker Block, which uses that value, and that's interesting because it means the Block is actually capturing the value so it can be used later. Then we call the repeat function, and as you would expect, this is the output.
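Here's a compilable sketch of that example as described; the names worker_t, repeat, and d are my reconstruction of the slide, not the actual sample code:

```objc
#include <stdio.h>

// A Block type that takes an integer and returns nothing.
typedef void (^worker_t)(int);

// Call the Block 'count' times, passing the iteration count as the sole argument.
static void repeat(int count, worker_t block) {
    for (int i = 0; i < count; i++) {
        block(i);
    }
}

int main(void) {
    int d = 2;
    // The Block captures a const copy of d at the point of declaration.
    worker_t w = ^(int i) { printf("%d\n", i * d); };
    repeat(5, w);  // prints 0, 2, 4, 6, 8
    return 0;
}
```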

Now, what I've shown you so far is basically the foundation for creating and using your own APIs with Blocks. However, in iOS 4 (I haven't gotten used to that name yet) there are actually over 100 APIs that use Blocks. Like I said, Blocks are pervasive. You are going to use Blocks if you use the audio APIs, Core Motion, Core Telephony, GameKit; you're going to be using Blocks.

You're going to be declaring Blocks and passing them to the system. And more than just at the individual framework layer, you're also going to be using Blocks to implement features like multitasking, across Foundation. You're going to be using them in Grand Central Dispatch very heavily, as you'll see shortly.

So in particular there are four common patterns of usage you will see with Blocks, and you are encouraged to embrace these in your own code as well. First, synchronous execution. In this case we are, say, filtering a set or enumerating a dictionary exceedingly efficiently. You're taking a Block, you're calling some method, and it returns synchronously. There is no concurrency. There is no multitasking here. It's an immediate kind of thing.

Blocks are also exceptionally useful in the callback role. Normally with a callback you have some function pointer, in a lot of cases. You throw it over somewhere, and then when you get the callback, maybe there's a void pointer that's your context. You've got to typecast it to some other type, and that's really unsafe, because it's basically telling the compiler that you know what you're doing, and often we don't, and then your code crashes. With Blocks, because you can capture state, you don't have to have that context. It makes the APIs simpler, more robust, and type safe. These are actually both examples of APIs you'll find in iOS 4.
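To make the contrast concrete, here's a sketch of the two callback shapes; these declarations are hypothetical, not actual iOS 4 APIs:

```objc
#import <UIKit/UIKit.h>

// C-style callback: state travels through an unsafe void * context that
// the callee must cast back to the right type.
typedef void (*image_callback_t)(UIImage *image, void *context);
void fetchImageWithCallback(NSURL *url, image_callback_t callback, void *context);

// Block-based callback: the Block captures whatever state it needs, so
// there is no context pointer and no cast. Simpler and type safe.
void fetchImageWithBlock(NSURL *url, void (^completion)(UIImage *image));
```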

Another role you will see is asynchronous execution, and we're going to get into this in depth in a moment. In particular, NSOperationQueue and Grand Central Dispatch can both take Blocks that will then be executed later, or on another thread, or whenever. And finally, there is a very interesting role for Blocks, really a role provided by Grand Central Dispatch: by having these queued invocations of Blocks, you can have lockless exclusion for resources.

In this case, if we have a queue and that queue is associated entirely with an image or some data source, then every time we dispatch something onto that queue, and it's executed serially, we know that nothing else can touch that resource, and Blocks allow us to capture the work related to it very easily.

So with this pervasive nature of Blocks, you need to know some of the details of the implementation, and this is where we're going to get kind of technical. First, Blocks are in fact Objective-C objects, always. They can be messaged as objects. They have very, very few methods. Block objects start out on the stack. Why? Because the stack is really, really fast; making an allocation in the heap is relatively expensive.

So for synchronous execution that's fine, because the stack frame where you declared the Block isn't going to go away before the execution is done. However, for asynchronous execution, you need to be able to copy the Block onto the heap, so that the stack frame can be destroyed and the Block's not going to explode. And this is where we see two of the methods that Blocks respond to: the copy method and the release method. There are also analogous Block_copy() and Block_release() functions that you can use.
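A sketch of those functions in use; saveHandler and runSavedHandler are hypothetical names for the pattern of stashing a caller's Block for later:

```objc
#include <Block.h>

typedef void (^handler_t)(void);

static handler_t savedHandler;

// The caller's Block lives on its stack frame. Copy it to the heap so it
// survives after that frame returns.
void saveHandler(handler_t h) {
    savedHandler = Block_copy(h);
}

// Run the heap copy later, then balance the copy with a release.
void runSavedHandler(void) {
    savedHandler();
    Block_release(savedHandler);
    savedHandler = NULL;
}
```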

When code execution passes over a Block declaration, effectively what happens is the Block snapshots, or captures, a const copy of any data it references from the surrounding scope. That also means that references to objects can be captured this way, and Blocks will actually automatically retain those objects and then release them when the Block is destroyed. So, for example, if you refer to an instance variable, then a reference to self will be captured; self will be retained, and your object will survive for the lifespan of that Block even if the calling scope is destroyed.

Now, of course, const copies are useful most of the time, but sometimes you want to get stuff out. You want the Block to be able to update some of that stuff in the local scope. Or maybe you want three Blocks to all be able to share state between them. For that, there is a new storage keyword in the C language on our platform called __block.

What that says is that the variable is now mutable from within the Block, from within the scope it was declared in, and from within any other Block that uses it. The issue, though, is that __block references to objects are not retained; you have to do the memory management manually.
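A minimal sketch of __block in action:

```objc
#include <stdio.h>

int main(void) {
    // Without __block, the Block would capture a const copy of counter.
    // With it, the variable is shared and mutable between this scope and
    // any Block that references it.
    __block int counter = 0;
    void (^increment)(void) = ^{ counter++; };

    increment();
    increment();
    printf("counter = %d\n", counter);  // prints: counter = 2
    return 0;
}
```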

The reason why is that there are some very subtle conditions under which it's absolutely impossible to do it automatically. Let's look at this lifetime thing in detail; this is just critical. We have a simple piece of code. Down at the bottom, there's some function that declares a Block, block1. Block1 updates a shared variable using a const-copy captured variable. Our program pointer is that big ugly orange arrow pointing right at that code, and now we're going to go ahead to the next line of code.

So we passed over the declaration of the Block. At this point in time, the Block object has captured the data. The Block object is now on the stack; it has a copy of that captured variable. And notice that nothing happened to the shared variable, because it doesn't need to move yet; it's still on the stack.

It's fine. So let's go ahead and copy that Block, making a second Block reference, block2. At that point in time, block1 is still on the stack. Block2 is now a heap-based allocation, and because that shared variable is now referenced by something that might outlast the scope of declaration, it has to be moved to the heap too. And even though it's been moved to the heap, block1 is still going to work, and block2 is going to work.

It's all fine. Now let's see what happens when we actually make a second copy of the Block. In this case, we've copied block1 a second time. We actually have two heap allocations now. I just put a little warning sign there, because this can be a problem if you're going to, say, schedule a thousand copies of the Block.

Then you're going to end up with a thousand heap allocations, and if that Block is capturing a big array, that's going to be bad. So how do you avoid that? Well, once the Block is copied to the heap, there is no reason to keep accumulating copies on the heap. That's a bug.

That line of code it's pointing at right now should be a copy of block2. Sorry, I don't know how that happened. Anyway, imagine that was a block2 copy. I love it, live debugging. In Keynote, even. So now we have block2 and block3 referring to the same Block on the heap.

There's only one copy of the captured variable. Shared is still on the heap because, well, it had to move on the first copy. So that's how you can avoid copies. Now, one of the details of the APIs in the system: anytime an API is going to take a Block and potentially execute it somewhere other than the local thread, where the scope the Block was created in may or may not still exist, the system APIs will copy the Block.

This is generally handled automatically for you. There's one case, though, that you've got to keep in your head: if you're going to, say, enqueue a thousand copies of one Block, make a copy first and then release it after.
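A sketch of that copy-first pattern; enqueueMany is a hypothetical helper:

```objc
#include <Block.h>
#include <dispatch/dispatch.h>

// Copy the Block to the heap once, then enqueue that one heap copy many
// times; copying an already-heap Block just bumps its reference count
// rather than snapshotting the stack Block a thousand times.
void enqueueMany(dispatch_queue_t queue, void (^work)(void)) {
    void (^heapWork)(void) = Block_copy(work);
    for (int i = 0; i < 1000; i++) {
        dispatch_async(queue, heapWork);
    }
    Block_release(heapWork);
}
```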

Anyway, what happens when the Blocks end? Well, say block2 and block3 finish first, and we're assuming garbage collection here; Blocks are fully compatible with garbage collection on the desktop. If I had room on the slide, I would have put Block_release after the execution of the Blocks. Then the heap stuff gets destroyed, the allocation is gone, the shared variable doesn't move back to the stack because that would be a waste of CPU cycles, and life goes on.

Now, if the function and block1 finish, if that stack frame for the function gets destroyed, then the stack is cleared, the stack pops, and the stuff on the heap lives on. Eventually, when execution ends, everything is cleaned up and everything goes away.

So that's Blocks in detail, and this will be reiterated again on Friday, in even more depth, in the Advanced Objective-C and Garbage Collection Techniques session, if you're interested. With that, I'd like to hand over to Shiva to dive into detail on Grand Central Dispatch.

[ Applause ]

[Shiva Bhattacharjee]

Thanks, Bill. So in the remaining part of this session, we're going to show how to use GCD, or Grand Central Dispatch, to make your apps responsive without you actually having to do a lot of work. And why do you want to make your apps responsive? Well, you want happy customers, and we are all here to make your customers happy.

So before we start, let me ask you: how many of you wake up in the morning and say, you know, "My day is not complete today until I write some really hard multithreaded code"? Exactly, because threading is a hard problem. The way you think about your apps, most probably in a task-oriented fashion, is not easy to map into the implementation. Threading is more granular, and it's not the way you think about the implementation as you're thinking about the logic of your application.

So hopefully, at the end of this session, all of you will actually be writing multithreaded code, and it will be fun. You will be using GCD to do threading without actually thinking about threads, and that's a very powerful paradigm to write your code in. And one of the ways we do this is that there is no explicit thread management. We do not expose threads to you.

You do not think in terms of threads, and therefore you can more easily map your application logic into your code. As Kevin mentioned before, we introduced GCD in Snow Leopard, and you have probably gone to other talks and seen people talk about GCD before. Most of our frameworks are using GCD, which means GCD will probably perform better than any ad hoc threading implementation that you might do on your own.

And we saw how Blocks are this way to encapsulate your data with the work associated with that data. So GCD, using Blocks, provides a really powerful, expressive way for you to write your code. Since this is the age of writing Twitter apps, we decided that we would take our shot at writing a similar app.

To give a brief overview, you'll see a stream of messages, a stream of tweets, coming along. We want to display those tweets and, at the same time, show the profile images of the people who posted them. Let's move to the demo.

So, we have two demos here. One uses the naive approach of doing most of this work on the main thread. One of the things that we have seen people do is think that they are doing a nominal amount of work on the main thread, because they're testing on a Wi-Fi network where there is no latency, so they figure, "Oh, I will do the synchronous connection, get the result back, and display the output."

But when that application goes into the hands of your customers, and they are on a 3G network or an EDGE connection, you know, the experience is completely different. So, carefully watch the user interaction. I mean, it looks fine as it is. Yes, the messages; we've put a lot of thought into those messages.

So, now I'm trying to scroll, and as you can see, I can barely even make it register my UI touch events, because the main thread is busy waiting for those messages to come along. Oh, here. And you can see, I mean, yeah.

Yeah. So, let alone the scrolling; you can barely even register your UI touch events to grab the screen. And that is what the experience would be like in the hands of a real customer, where the network is not good and you're blocking the main thread and therefore not letting it listen for events. OK. So now let's see what GCD can do for us.

You'll have to believe me on this, and we will show more examples as we go along: we took the same existing code base. We didn't add selectors. We didn't add more code to our existing code, but simply took the existing code, wrapped it with some basic, simple GCD APIs, and made our app more responsive.

Again, from the look of it, it looks similar, so if you're not doing any user interactions, you won't really see any difference. But now let's do the same kind of experiment of scrolling down and back, and there you go. And that's all we did.

[ Applause ]

Thanks. Alright.

So, what's the gist of the story? How do you make your apps responsive? This is probably something that you know, but it's good to go over. One: never block the main thread. The main thread is there to display your UI and to listen for events. Even though you might think it's OK to do a nominal amount of work on it, it's not; the nominal amount of work that you might think is OK would not be a nominal amount of work for your customers.

So, if you do not block the main thread, well, you have to do that work on some other thread. And what has been the case is that the accessory code, the boilerplate code that you need to write to move that work from your main thread to a background thread, is generally more than the actual work you want to do.

And then, obviously, once you have the result from the background thread, you want to show it on the main thread. Displaying has to be done on the main thread, so you have to have a way to ship that result back from the background thread to the main thread. And we will show how you can do this with GCD, very simply, again, without changing your existing code.

So, let's look at a code example here. As you would imagine, this is the naive way of doing it. You were listening for tweets. You got the tweet message back, and with that, you got the profile URL from which you're supposed to get the profile image. The tweets were coming from a server.

The profile images were coming from an image server. So, that's the function that will be called on your main thread. Here, you go ahead and add the tweet to your history of tweets. This lets us scroll the history of tweets as we were doing; we add the tweet and we say, you know, display it.

Then, and this is the bottleneck here, we get the profile image from the image cache. Now, if it's there, we get it immediately. But if it's not there, it's actually going to fetch the image from the network, and on a 3G network or an EDGE network, this is going to block.

And during this time when the main thread is blocked on this call, you are not responding to user interactions. This is why I was not able to even register those UI touch events as I was trying to grab the screen. And once you have the result, you want to update it.

And this is exactly why you do these things on the main thread: because it's easy to update the result when you are already on the main thread. So now let's see how we can use GCD to simplify this. Again, as you will see, no change to your code.

Well, some changes, but no refactoring.

[ Laughter ]

So what we did is we inserted this dispatch_async call, and the quick and dirty way of thinking about dispatch_async is that it means "run this in the background." So, you take the Block of work that you were supposed to do, in this case the work of getting the image, which might block, and you call dispatch_async. This frees up your main thread, so your addTweetWithMsg call is going to return, and your main thread is free to listen for events and display them as they come along, while the background thread is waiting for the image to show up.

Now, there you can block, and the image will finally show up. But you want to update the image. Beforehand, since you were on the main thread, you could just do it from the main thread without having to do anything. But now this Block of code is running on the background thread, and you have to do the update on the main thread. So we use the same trick again.

We call dispatch_async, and here we call it on the main queue. The main queue is effectively the way, in Dispatch, to run things on the main thread. So, let's look at this in more detail with a visualization, which I think will help. Here you have the main queue that is drained by the main thread, and we have already realized that we want to take the get-image-from-URL Block and not run it on the main queue.

So, we dispatch_async, effectively enqueueing that Block onto this image queue that is going to run our background task. Now, note how efficient GCD is: as soon as it sees there is work to do, an automatic thread comes along and starts executing that Block. Your main thread is free, so it can handle all these other events that are happening.

Your automatic thread executes, gets the result, and then similarly enqueues the update Block onto the main queue. And the main thread, along its way, is going to update the display. The automatic thread is done executing the Block, so it goes away. Your main thread finally updates your UI, and you're back to having a single main thread; it doesn't hold on to that automatic thread. So, let's go back to the three main points that we had. We didn't want to block the main thread.

We wanted to move work from the main thread to a background thread, and we wanted to move the result from the background thread to the main thread, and we did all of this with dispatch_async. This is really the gist of GCD. It's a pattern that you will see over and over again: you dispatch work from the main thread to a background thread, and from the background thread you dispatch the result back onto the main thread.
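A sketch of that round trip; the method and queue names (addTweetWithMsg:url:, image_queue, imageCache, and the helpers) are my reconstruction of the demo, not the actual sample code:

```objc
- (void)addTweetWithMsg:(NSString *)msg url:(NSURL *)url {
    [tweets addTweet:msg];      // quick work stays on the main thread
    [self updateTweetDisplay];

    // Move the potentially blocking fetch to a background queue...
    dispatch_async(image_queue, ^{
        UIImage *image = [imageCache imageForURL:url];

        // ...and ship the result back to the main thread to touch the UI.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self updateImage:image forURL:url];
        });
    });
}
```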

So, now that you have seen us mention queues, I'm going to ask Daniel to talk about some of the basics of dispatch queues.

[ Applause ]

[Daniel Steffen]

Thank you, Shiva. I'm Daniel Steffen. I'm an engineer on the GCD team, and I'd like to talk to you about the details of GCD queues today. You've probably already seen dispatch queues, or GCD queues, mentioned in other sessions, and these are really the fundamental concept in GCD that is important to understand in order to use this technology effectively. So, what is a GCD queue? A GCD queue is actually a very simple, lightweight list of Blocks that you have submitted for execution.

The enqueue and dequeue operations are FIFO, and the enqueueing, as we have seen, happens when we call dispatch_async. This takes a queue parameter and a Block parameter, and enqueues that Block onto the queue. The dequeueing happens automatically for you, on one of these automatic background threads that we've seen or on the main thread, depending on the queue type. And it's worth noting that this dequeueing and running of a Block is the only way to get a Block off a queue: once you have dispatch_async'ed the Block, it will run. You cannot stop it.

So, one queue type we've looked at already is the main queue. The main queue executes Blocks one at a time on the main thread, and it cooperates with the UIKit main run loop, which, as you know, is responsible for dealing with UI events and updating the UI. To get the main queue, you call dispatch_get_main_queue. It's very simple. So, let's see an example of this in action.

Remember this code that we had before: this method is a callback on the main thread that adds a message to our user interface. What do we need to do today, with Foundation, to call this from a background thread? Probably something like this.

Here, we have a method that's called on the background thread with a message and a URL, and we really want to do the thing at the very bottom of this slide, which is to call this addTweetWithMsg method on the main thread. All the rest on the slide is the boilerplate that you have to do with Foundation. You have to wrap up the two arguments in a dictionary and call performSelectorOnMainThread with a special-purpose selector, passing that dictionary.

You have to implement that special-purpose selector, unpack the dictionary again, taking the message and URL out, and then you can finally call your addTweetWithMsg. How can you improve this with GCD? Well, take away all the boilerplate; we call what we really want to call directly, we wrap it in a dispatch_async, and that's it.
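Reconstructed from the description (method and dictionary key names are my guesses at the slide):

```objc
// The Foundation way: wrap the arguments, hop threads, unwrap again.
- (void)gotTweet:(NSString *)msg profileURL:(NSURL *)url {
    NSDictionary *args = [NSDictionary dictionaryWithObjectsAndKeys:
                          msg, @"msg", url, @"url", nil];
    [self performSelectorOnMainThread:@selector(addTweetWithArgs:)
                           withObject:args
                        waitUntilDone:NO];
}

- (void)addTweetWithArgs:(NSDictionary *)args {
    [self addTweetWithMsg:[args objectForKey:@"msg"]
                      url:[args objectForKey:@"url"]];
}

// The GCD way: call what you actually want to call, wrapped in dispatch_async.
- (void)gotTweet:(NSString *)msg profileURL:(NSURL *)url {
    dispatch_async(dispatch_get_main_queue(), ^{
        [self addTweetWithMsg:msg url:url];
    });
}
```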

[ Applause ]

And as you can see here, we've used the dispatch_get_main_queue API to get the main queue, which we pass as the first parameter to dispatch_async.

So next are the queues I mentioned that you can create yourself. These queues also execute Blocks one at a time, like the main queue, which is why you see them referred to as serial queues in the documentation or in other sessions. The difference is that they execute their Blocks on an automatic helper thread in the background. You can use this, for instance, to queue up some background work. So how do you create one of these queues?

You just call the dispatch_queue_create API and pass in a text label. This can be anything you like, but we recommend that you use reverse-DNS notation. That makes it very easy to identify and distinguish your queues from the queues that may already be present on the system, and these labels show up in the Xcode debugger and in crash reports as well.

And when you're done with your queue, you call dispatch_release on it to free the storage. So, there's much you can do with these queues. Bill mentioned a bit earlier that queues can be used instead of locks. Why is this? The enqueueing operation is in fact thread-safe, so it's perfectly fine to call dispatch_async on a queue from multiple threads at once, and everything will work fine. And because the execution of Blocks is serial, one by one, these two things combined allow you to protect access to a shared data structure by executing Blocks on a queue and accessing the data structure only from those Blocks.

And because queues are lightweight, this is actually easier and cheaper in many cases than using locking. You can imagine queues as an on-demand locking mechanism that only creates a lock when there is contention, when a lock is really needed. So, let's see an example of this.

Imagine that in our Twitter app we have to maintain this history of tweets that we are displaying. This is a global object in our application, and multiple threads want to operate on it: the background thread that gets new network messages, and the main thread that displays them. So, we need to protect access to this shared resource somehow.

So, let's use a dispatch queue for this. We create a dispatch queue with dispatch_queue_create and give it a nice label that tells us it operates on this tweets object. Then, when the main thread wants to modify the object, say to remove the last item (maybe there's too much history and we want to cut it down), we dispatch_async to this queue, and the Block that executes can modify the shared object.

And when the background thread gets some new messages from the network, it also dispatch_asyncs to this queue to add the new item. Because queues execute these Blocks one by one, both of those threads can be sure that once they're inside the Block, they are the only ones operating on that shared object at that time, so they can proceed safely without colliding. Also, note that we've used dispatch_async in both of these cases because the caller doesn't actually care that this update happens right away. You don't need to block, like you would with locking, and wait for the operation to finish before you can go on.

If you do want to block, like in this case, you call dispatch_sync. That is exactly like dispatch_async, except that it will wait for the Block that it enqueues to finish executing before returning. So here, where we want to display the tweets, we get a snapshot of the shared object, and note that we use the __block syntax that you've seen earlier to extract the result from the Block. And at the end, you call dispatch_release when you're done with the queue.
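Pulled together, the slide might look roughly like this sketch; tweets_queue, the tweets object, newTweet, and the method names are my guesses:

```objc
dispatch_queue_t tweets_queue =
    dispatch_queue_create("com.example.twitterapp.tweets", NULL);

// Main thread: trim the history; no need to wait, so dispatch_async.
dispatch_async(tweets_queue, ^{
    [tweets removeLastTweet];
});

// Background thread: add a newly received tweet, also without waiting.
dispatch_async(tweets_queue, ^{
    [tweets addTweet:newTweet];
});

// Display path: block until we have a snapshot, using __block to get
// the result out of the Block.
__block NSArray *snapshot = nil;
dispatch_sync(tweets_queue, ^{
    snapshot = [[tweets allTweets] copy];
});

// When the queue is no longer needed:
dispatch_release(tweets_queue);
```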

So let's see this in an animation. We have the main thread, the background thread, and the dispatch queue that protects this shared object, represented as a box. These two threads enqueue some Blocks, and as you can see, when they enqueue at the same time, they get enqueued safely, in some undetermined order.

Here the background thread won, and its Block enqueued first. Now the dispatch queue has some work to do, so an automatic thread comes along and runs these Blocks, and they can safely modify the shared resource because they know they're the only ones executing on that dispatch queue at that time.

So, we remove an item, add an item, and the display code can take its snapshot safely and update the UI on the main thread. And that's the end; there are no more Blocks, so the automatic thread goes away. Now that you know how to create queues, you have to know how to manage their lifetime. As you have probably guessed, queues are reference counted like other object types, and you use dispatch_retain and dispatch_release to manage the reference count.

Note that GCD retains parameters to the dispatch APIs, so in many cases you don't actually have to do anything to manage this reference count. But if you have queues that get captured by Blocks, you may have to do manual reference counting to ensure that the lifetime of these queues is correct across asynchronous operations, as we'll see in an example right now.

So, imagine that in our Twitter app we want to asynchronously parse the data that we get from the network and extract the Twitter messages, and we have this asyncParseData Objective-C method which takes the data and parses it. When it has a result, it calls back the user asynchronously, in the pattern that Bill already mentioned.

The last parameter is a callback Block that the user gives us, which we call when we're done, and there is also a parameter for the queue that that Block gets called on. In here we will be calling dispatch_async on a parse queue, which will do the parsing on, you know, a background thread. And note that the Block that gets enqueued here captures the queue parameter that the user gave us.

So, this is a case where you have to manually manage the reference count of that queue object. Basically, the Block can execute after this method has returned, and the caller may do anything they want with the queue at that point, right? So you have to take a reference to the queue object to make sure that it's still there when the Block actually gets executed. The way to do that is to call dispatch_retain on the queue object before calling the async with the Block that captures it.

Inside that Block, you can then call dispatch_release when you're done with it. So now that you've done that, it's safe to async: we can parse our data and dispatch_async the result back to the user-provided queue, and now we can release, because, as mentioned, dispatch_async retains its arguments, so it retains that queue. The pattern to remember is to dispatch_retain before the async and dispatch_release inside the Block that captured the queue.
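A reconstruction of that asyncParseData example (parse_queue and the parsing helper are my guesses at the slide):

```objc
- (void)asyncParseData:(NSData *)data
                 queue:(dispatch_queue_t)queue
               handler:(void (^)(NSArray *messages))handler {
    // The enqueued Block captures 'queue', which may be used after this
    // method returns, so take a reference before the async.
    dispatch_retain(queue);
    dispatch_async(parse_queue, ^{
        NSArray *messages = [self parseTweetsFromData:data];
        // dispatch_async retains the queue it is handed...
        dispatch_async(queue, ^{ handler(messages); });
        // ...so we can drop our own reference inside the capturing Block.
        dispatch_release(queue);
    });
}
```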

The same is true of other kinds of objects. The general rule is that you have to ensure that objects captured by Blocks are still valid when those Blocks are executed, and with dispatch_async that could be much later. The good news is that, as Bill has mentioned, for Objective-C objects you don't have to do anything, and that covers the large majority of cases: these get automatically retained and released by the Blocks runtime, and it all just happens transparently. In our previous example, for instance, we had this NSData object, and we didn't have to manage the reference count on it because the Blocks runtime dealt with that for us.

However, if you have other objects, like Core Foundation objects, these need the same pattern that we saw: a CFRetain every time before an async, and a CFRelease inside the Block that gets async'ed. So let's talk about doing app design with queues. One pattern we recommend that you look at is to employ one queue per task, or per subsystem, in your app.

This will allow you to make these tasks very easily independent and to communicate among them by using dispatch_async. Because queues are very lightweight and efficient, it's not a problem to have many tasks and their associated queues. That doesn't necessarily mean that you will have many threads, because GCD does automatic thread recycling; one of these automatic threads that we saw could get used to execute Blocks from many different queues during its lifetime.

So let's apply this idea to our demo. Essentially we have four tasks in this app: receiving and parsing the network stream, maintaining this shared message history object, fetching and caching images, and displaying the user interface. So how can we apply the one-queue-per-task idea to this?

Well, the code in one thread for all these tasks might look something like this: we get the data from the network, we parse it to create a tweet object, add this object to the shared storage, update the UI, then get the image from the imageCache, update the image display when we have received it, and release it at the end.

So how can we use one queue per task to make these tasks independent? For the networking, as you can probably guess, we will dispatch_async to a queue that manages the network task and the network subsystem, and as we've seen in the first part of the GCD section here, this will run the potentially blocking work on a background thread.

Now, when we have the data from the network, we dispatch_async to the queue that manages the shared history object subsystem, which we call tweets_queue here. From inside that Block, when we need to update the UI, we dispatch_async to the queue that manages the UI subsystem, which is the main queue. Similarly, for the imageCache, we dispatch_async to a queue that manages the imageCache subsystem, which can operate independently of the other parts, and again, from inside there, when we have the image, we dispatch_async back to the main queue to update the UI.
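A sketch of that chain; the queue names and helper methods are illustrative, not the actual sample code:

```objc
dispatch_async(network_queue, ^{
    NSData *data = [self readDataFromNetwork];   // may block; off the main thread

    dispatch_async(tweets_queue, ^{
        Tweet *tweet = [self tweetFromData:data];
        [tweets addTweet:tweet];

        dispatch_async(dispatch_get_main_queue(), ^{
            [self displayTweet:tweet];           // UI work on the main queue
        });

        dispatch_async(image_queue, ^{
            UIImage *image = [imageCache imageForURL:tweet.profileURL];
            dispatch_async(dispatch_get_main_queue(), ^{
                [self displayImage:image forTweet:tweet];
            });
        });
    });
});
```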

So obviously this is a very simplistic, contrived example, but I'm sure you can imagine the power of this technique when applied to a complex application. Here we've separated the code out with four queues and just used dispatch_async to communicate between the different subsystems. There are some pitfalls to be aware of when you use this pattern. Firstly, same as with the main queue: if you have these per-subsystem queues, don't block them, because if that's the way you communicate with a subsystem and you block its queue, you will block the whole subsystem.

Alright, so the same technique applies as for getting off the main thread: use another queue, and push to it anything that would take a significant amount of time. Also, when you use the dispatch_sync API or other waiting APIs, be careful, when you have multiple queues, that you don't get into a deadlock situation. GCD doesn't protect you from the traditional mistakes you can make with multithreading and locking, and if you do a dispatch_sync from one queue to another one, which then dispatch_syncs back to the first queue, you will get into a deadlock. So that's something to be aware of.

Also, if you block one of these automatic worker threads waiting on some external resource, that actually makes this pattern expensive. We said that dispatch queues are very efficient and lightweight, but if each of those dispatch queues executes a Block that waits, and each parks one of these automatic worker threads, you'll end up with many threads, and that is actually expensive. So you can get into a situation like in this animation: here we have one queue that runs a Block that calls receive, and it just never receives anything.

Maybe the network is slow, and this will block the automatic thread in this receive call. If this happens with one queue, that's not a big problem, but if you have many queues, many occurrences of this, and many automatic threads that get blocked, you can end up using a lot of memory, and as you know, on iOS, if you do that your app might get killed.

So one way around this problem is to use an API that allows you to respond to external events, rather than having a blocking API that waits for something to happen, and in GCD we provide one such API, called dispatch sources. We won't have time to go into this in a lot of detail today; we have a second session later in the week that will cover it fully. But just to give you an introduction: dispatch sources allow you to monitor external event sources like files, network sockets, directories, timers, very low-level things. If an event happens on one of these sources, an event handler that you specify can be delivered to any queue you choose, so this isn't like the run loop sources that you might be familiar with, where the event handling is tied to one specific thread. We recommend that you use these sources to replace any polling you are doing, or blocking API calls, for low-level events. For more details, please come to the session Simplifying iPhone App Development with Grand Central Dispatch on Friday.

Just to give you a very quick teaser, remember this example with the many queues that we had before. Here we have the dispatch_async to a network queue, so this could be one of those cases where we might block one of the automatic threads waiting for network communications to come in.

If we wanted to replace this with a dispatch source, all we would need to do is set up the source, which we won't go into now, to listen on a socket, and then set the same Block, the same code, as the handler for that dispatch source. Now the source will trigger, will invoke this handler, when there is actual network data on the socket, so we can read it immediately and don't have to wait. And this will get re-invoked every time there is network data, whereas before, with the async, you would have had to dispatch again each time to wait for more data as you processed the last piece.
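A sketch of that replacement (the socket descriptor sock, network_queue, and the handler body are illustrative):

```objc
// Create a read source that watches the socket and delivers events to
// our network queue.
dispatch_source_t source =
    dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, sock, 0, network_queue);

// The same code that used to sit in a blocking read loop becomes the
// event handler; it runs only when data is actually available, and is
// re-invoked each time more data arrives.
dispatch_source_set_event_handler(source, ^{
    [self readAndParseAvailableDataFromSocket:sock];
});

dispatch_resume(source);  // sources start out suspended
```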

So where do you find this technology? As the introduction already mentioned, GCD is part of libSystem, and that means it comes along for free with malloc and other basic functionality. You don't have to do any special linking; it's available to all apps. You just include the header, dispatch/dispatch.h, and you're good to go. Also worth mentioning: GCD is open source.

You can go to the libdispatch homepage on Mac OS Forge to get more details, and there is also a mailing list where you can ask questions. For more information, please contact Michael Jurewitz, our developer tools and performance evangelist. There is some very good documentation on the developer site, in particular the Concurrency Programming Guide, and there is extensive header doc in the dispatch headers that you can look at, as well as very good man pages on your Snow Leopard system, which apply equally well to this technology on iOS. Again, there's the reference to the homepage on the open source site, and on the Apple developer forums there is a section labeled Core OS where you can ask questions about GCD. And we have related sessions: on Friday, the Simplifying iPhone App Development with Grand Central Dispatch session that has been mentioned; this morning there was a Working Effectively with Objective-C session that also talked about Blocks briefly; and there is an Advanced Objective-C and Garbage Collection Techniques session on Friday after the GCD session. Alright, thanks for your attention.

[ Applause ]