WWDC11 • Session 112

Writing Easy-To-Change Code: Your Second-Most Important Goal As A Developer

App Frameworks • iOS, OS X • 57:23

Dealing with software change is constant. The better you plan for change, the faster you can make new apps and update your existing apps. This talk gives you ideas for improving your approach to software change, and includes both general software engineering conventions and iOS-specific conventions. You'll hear concrete suggestions you can use to make your development process quicker and more efficient.

Speaker: Ken Kocienda

Unlisted on Apple Developer site

Downloads from Apple

HD Video (133.2 MB)

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Hello, good morning everybody. Good morning. Good morning, thank you so much for coming. So this is session 112, Writing Easy-To-Change Code: Your Second-Most Important Goal As A Developer. And I'm Ken Kocienda. And again, thanks for coming. So this is a talk about writing code. Unlike some other sessions where you get some specifics about new APIs or features, this is sort of more general, talking about writing code day in and day out. That's what I do. I think that's what a lot of you do. And so I think there's a lot of interesting things to talk about.

In terms of making your code easy to read and learn and understand and maintain and easy to change. Easy-to-change code. Easy-to-change software. I think that this is your second most important goal. So we'll get to the punchline right away. What is the first most important goal that you have? I say, right? What's your most important goal? I say that is to ship products.

That's your most important goal and this is certainly how we think at Apple. I think some of you from the applause you think that way as well. And at Apple we've done over 30 iOS releases since 2007. It's a huge amount of change in a very short amount of time.

And so we've come up with some ways to try to manage that change, try to actually keep going at that pace. Because after all, releases are complicated. There's all these things that you need to juggle. There's new hardware and there's legal and there's new features and there's tight schedules and people are coming and going. There's lots of things to kind of juggle to keep those releases on track.

And so this talk is all about some ideas to help you make change easier. Some ideas that I try to put into practice, and I think that at Apple we try to put into practice all the time in developing software. Right? Because you're always changing your software. Try to make that change easy since you always need to change your software. And what kind of changes am I talking about here? Well, it's the typical stuff.

I mean, it's not anything really all that surprising. It's bug fixing and adding new features and enhancing existing ones, right? Changing code that you wrote, yeah, sometimes maybe six months ago. You did something. You were just hacking it together to get a release done. And you go back later and you say, "Gosh, what is this even trying to do?" Right? So if you kind of go up front and think ahead of time to try to make your code easier to change, going back later won't be such a hassle. Right? And so for the rest of the talk, I'll be talking about some general conventions that apply to whatever kind of software you're developing on whatever platform, but then also some Mac and iOS-specific conventions.

So, that's the pitch. I hope you are interested in staying. For the rest of the talk, I've structured things along a series of topics, or different angles, different ways of looking at this same idea of making your software easy to change. So, things to think about. And here they are. As you can follow along, make a little checklist perhaps as I'm going through.

Different ways of looking at this same problem. So, let's get started. The first one is about style. And I think that style is more than skin deep. Well, what is kind of that skin deep level? Well, I think the first thing we think about when we think of style is coding conventions.

And, you know, nothing, you know, no rocket science here. It's, you know, I think we all have a style, how we like to structure our if statements and our parentheses and capitalize things, right? But, um... I think that while local consistency is important, I mean, you don't want different styles of all those different things in one block of code. Local consistency is important, for sure. But I think that's really only the beginning of style.

"And real style runs deeper than that." And so here I've got a quote from a famous writer, 19th century writer. "People think I can teach them style. What stuff it all is. Have something to say, and say it as clearly as you can. That's the only secret to style." And so if I were to reduce that slide down to one word, here it is: clarity.

Clarity is what you should really be striving for above anything else. Because just like clear writing is easy to understand when you're reading it, I think that clear code is easier to change. And of course we're changing our software all the time, right? So if you write your code clearly, it'll be easier to change later when you go back.

So what are the elements of a clear coding style? To be honest, I could spend the rest of the hour answering that question. But to just pick out a couple things for just this session, I think good names and common idioms are two good things to talk about.

Now when it comes to the first one, good names, I think we all know that, well, you know, a class, the name of a class should describe what the class does. Same thing goes for a method. A method should describe clearly what the code in the method does. So I'd like to look at a couple of maybe sort of less common ways of thinking about good names and good descriptive names.

[Transcript missing]
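
The slide itself isn't captured in the transcript, but a minimal sketch of the kind of rename being described might look like this (the Document class, the method names, and the calling code are hypothetical, not taken from the session):

    #import <Foundation/Foundation.h>

    @interface Document : NSObject
    // Vague: what does a YES from a method named -check even mean at the call site?
    // - (BOOL)check;
    // Descriptive: the name says exactly what question is being answered.
    - (BOOL)hasUnsavedChanges;
    @end

    @implementation Document
    - (BOOL)hasUnsavedChanges {
        return NO; // placeholder implementation for the sketch
    }
    @end

    // Calling code now reads like a sentence:
    // if ([document hasUnsavedChanges]) { [controller promptToSaveDocument:document]; }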

And so I think if we change the method to just be more descriptive, it's now a lot clearer what it does. If you go back and put this in the calling code, it's very easy to see what that yes/no is trying to communicate. So again, good names are descriptive, and not just where you typically think, right, classes and methods, but also little things, just like variable names, Boolean variables. Okay, so that's good names. Now a little bit about common idioms.

So now I would say that this line of code is not very idiomatic, right? Try to take a look at that quickly. My goodness, what is this even trying to do? Do you really want to count square brackets when you're trying to go and read a line of code to try to understand what it does? Very, very difficult to see what that is trying to do.

If you rewrite that, again, that idea of clarity, you're not just communicating to the compiler, you're also communicating to other developers and to yourself six months from now. So if you just take a little time and rewrite that and just make it a little clearer, now it's easy to see what the code is trying to do. So rewrite those workhorse lines of code.
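
The actual line from the slide isn't in the transcript; as a rough illustration with made-up data and keys, pulling a bracket-counting one-liner apart into named steps might look like this:

    #import <Foundation/Foundation.h>

    static NSString *FirstItemName(NSDictionary *userInfo) {
        // The dense "workhorse" version: you have to count square brackets to read it.
        // return [[[[userInfo objectForKey:@"items"] objectAtIndex:0] objectForKey:@"name"] uppercaseString];

        // The same work, rewritten so each step has a name and reads top to bottom.
        NSArray      *items     = [userInfo objectForKey:@"items"];
        NSDictionary *firstItem = [items objectAtIndex:0];
        NSString     *name      = [firstItem objectForKey:@"name"];
        return [name uppercaseString];
    }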

So, even more, you want to read and understand quickly beyond the scope of a single line of code. And I think this is an idea that comes up quite a bit: design patterns. Well, what are design patterns? These very common patterns that we see in software quite a bit. And we use these throughout our frameworks.

At Apple, you use them if you code on the Mac or iOS. And what's more, we've got these sort of Apple-specific patterns that you see quite a bit. And we add new ones when the situation presents itself, like the bottom one, view controller. It's sort of this new design pattern that we use all over iOS to help make your job using the frameworks easier, making your apps easier to develop.

And so what's good about this? It's that these idioms are communicating to you at a high level. If I say ViewController to you, or if I say Observer, or I say Delegation, it communicates a whole series of concepts to you. If you put that name, if you say FooDelegate, you can see from the name of the class what that code is supposed to do and what role it has in the system.

It communicates at a very, very high level, and you create this shared vocabulary between you and other people that you're coding with. Again, sometimes even you and yourself six months from now, right, what a piece of code is supposed to be doing.

So use those common idioms, and as you're looking through our developer documentation, you'll see these names coming up quite a bit. If you see some name that repeats itself quite a lot, go and try to see what that design pattern or that common idiom is trying to communicate. It'll really help you try to get to the bottom of what our frameworks do and how you can make the best use of them.

[Transcript missing]

So in your software, let's say you have a bug. Well, why do you have a bug? A lot of times it's because there was something that you didn't anticipate or you didn't understand about your code or somebody else's code that you're depending on. And so you have to go and debug, right? Simple.

Here's a quote from a great developer about debugging: "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you are as clever as you can be when you write it, how will you ever debug it?" Brian Kernighan, the K from K&R C, right? So, well, let's say that, you know, you get this bug, so your step one, I guess you think, is, well, you fire up the debugger.

What are you really looking for? You see this bug and you go into the debugger, right? What is it that you're hoping the debugger will tell you? And so before I fire up the debugger, I like to take some time and think first. How could a bug happen? If a bug is happening in the software, what situation could make the bug happen, what could make the software behave like what I just saw, which is wrong?

Go back to Brian Kernighan: the most effective debugging tool is still careful thought, coupled with judiciously placed print statements. It seems very old-fashioned, right? And I admit that the Xcode debugger, we put a huge amount of investment in it, it's a great tool, but I have to say that 98% of the time I debug with print statements. I've written a lot of software over my career, and most of it has been debugged with print statements. And careful thought, that's really it, because debugging, after all, is understanding.

It's increasing your understanding about how your code works. It's not just jiggling code around until the problem goes away. Have you ever done this? You use performSelector:withObject:afterDelay:. If I just take this line of code and make it happen later, maybe with a delay of zero, make that code run at the bottom of the run loop.
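
The slide isn't captured either; a hedged sketch of that "just run it later" move, with a hypothetical view controller and method names, looks roughly like this:

    #import <UIKit/UIKit.h>

    @interface ListViewController : UIViewController
    - (void)reloadContent;
    @end

    @implementation ListViewController

    - (void)modelDidChange {
        // The "jiggle it until the bug goes away" move: defer the work to the
        // bottom of the run loop and hope the ordering problem disappears.
        [self performSelector:@selector(reloadContent) withObject:nil afterDelay:0.0];
    }

    - (void)reloadContent {
        // ... update views from the model ...
    }

    @end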

That's rarely right. That's rarely the right answer. It might make the bug go away, but you probably just set a little trap in your software that you may just trigger later on. Or if you do this, if you just take two lines of code and just switch their order, bug's gone. But why? Why did that make the bug go away? And I think that instead of just doing that, you should really try to have a good understanding of why your code change fixes the bug. And you should be able to tell a story about your bug fix.

Sometimes it's like I investigate a bug and I go, "Eureka, I think I understand what's going on here now." And before I even code the fix, before I even write the software to fix the bug, I'll get up and go tell somebody about it. Go and tell my story. Because a lot of times, boy, when you're sitting there thinking at your desk, it's like, "Oh, this is a good idea." And then you try to go and verbalize it to somebody.

Actually go and tell a story, and it's like, "Nah, that doesn't really make sense." Or you go and maybe somebody else is really the expert on the piece of code that you're working on, and they may have more understanding, more knowledge that they can bring to your story. And again, maybe make it not seem so solid and together.

Or the opposite. Hopefully it is exactly what needs to be done. And then you go back to your desk and you write the fix. If you write it up, go find yet a third person and tell that same story again during the code review. Spread that knowledge throughout your team, your organization.

And maybe you're doing this at 2:30 a.m. and there's nobody around, so instead maybe write that story into your bug tracker somewhere to capture that increase in understanding. And then you go back to your software and you try to get that increased understanding about your software. Because a lot of times, right, a bug fix should be about anticipating more and understanding better about the environment your software lives in and what it's trying to do. So that's stories.

Next, laziness, or maybe what a lot of you are thinking is, "Just wake me when it's over."

[Transcript missing]

So now if we go and take a look at the implementation of the init method, a lot of times you'll see this, where that singleton object needs to use another singleton object to go and deliver all of its features, right? So this init method calls another singleton object.

And if we kind of go and we switch them, right? So we've got the foo calls the bar controller. And if we go over to the bar controller, well, guess what? Right? The bar controller contains a call to the foo controller. So now we've got this little circular bit of lazy initialization. And this can cause an init storm, because a lot of times it's not just two controllers which have a relationship to each other. Sometimes you have a large graph of controllers that all depend on each other.
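
The slide code is missing from the transcript; a sketch of the shape of the problem, with hypothetical FooController and BarController implementations, might be:

    #import <Foundation/Foundation.h>

    @interface BarController : NSObject
    + (BarController *)sharedInstance;
    - (void)registerClient:(id)client;
    @end

    @interface FooController : NSObject
    + (FooController *)sharedInstance;
    @end

    @implementation BarController

    + (BarController *)sharedInstance {
        static BarController *instance = nil;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{ instance = [[BarController alloc] init]; });
        return instance;
    }

    - (void)registerClient:(id)client {
        // In the talk's example, the bar controller reaches back for the foo
        // controller here, closing the circle: whichever controller is touched
        // first drags the whole graph into existence, and the order the init
        // code runs in depends on what the user happens to do first.
    }

    @end

    @implementation FooController

    + (FooController *)sharedInstance {
        static FooController *instance = nil;
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{ instance = [[FooController alloc] init]; });
        return instance;
    }

    - (id)init {
        if ((self = [super init])) {
            // Lazily drags another big singleton into existence just to set this one up.
            [[BarController sharedInstance] registerClient:self];
        }
        return self;
    }

    @end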

And there are several problems with this. One is you might get a long pause, not at application startup, so you solve one problem, but you gave yourself another problem. So now the first time the user goes and accesses a feature, there's a long stutter while all of these objects go and instantiate themselves. So you don't want long pauses in your program, obviously.

But there's a more difficult problem, which is the order of initialization problem. I mean, in that code example, depending on whether I access the foo controller first or the bar controller first, the code is going to run in a different order. And that might not be known while you're developing the code. It might just be based on what the user does first, which features are used first. And it makes it more difficult to understand how your software is going to behave, and of course managing change in your software is also complicated by that.

And yet there's one more problem with lazy initialization, which doesn't exist in this code, which uses dispatch_once to go and allocate the instance the first time this is called. If you have some older code, which was written before dispatch was available on either the Mac or iOS, you maybe wrote it like this.
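
Those slides aren't captured; roughly, the older pattern being described, next to the dispatch_once replacement, might look like this (SettingsController is a hypothetical name):

    #import <Foundation/Foundation.h>

    @interface SettingsController : NSObject
    + (SettingsController *)sharedInstance;
    @end

    @implementation SettingsController

    // The older, pre-GCD pattern: just check whether the instance has been
    // assigned yet. The check and the assignment are not atomic, and -init runs
    // its entire call chain before `instance` is assigned, so two threads racing
    // through here can each end up creating their own "singleton".
    + (SettingsController *)sharedInstance {
        static SettingsController *instance = nil;
        if (instance == nil) {
            instance = [[SettingsController alloc] init];
        }
        return instance;
    }

    // The dispatch_once replacement guarantees the block runs exactly once,
    // even with multiple threads calling in at the same time:
    //
    // + (SettingsController *)sharedInstance {
    //     static SettingsController *instance = nil;
    //     static dispatch_once_t onceToken;
    //     dispatch_once(&onceToken, ^{
    //         instance = [[SettingsController alloc] init];
    //     });
    //     return instance;
    // }

    @end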

Well, you're just checking to see whether that instance has been assigned yet. Not whether the object has been created, but whether it's been assigned to that instance. And of course, that alloc and init will completely run its entire call chain before that instance gets assigned. And if you're running with multiple threads calling into this controller, you've got a big problem. You might wind up with multiple instances of a singleton. Which is probably not what you want. You wouldn't have written the code like this if you could have more than one instance. So it's a problem.

It's a mess. And so, well, what's the solution to this lazy initialization problem? Unfortunately, there are no silver bullets. It really is kind of case by case. But here are a couple of ideas. You can think about maybe doing a lightweight setup of all of these controller objects, and maybe not necessarily init them, or maybe partially init them in a lightweight way that doesn't involve calls outside of the class to other big components, other big controllers. Or you can decompose your controllers better so that they don't rely on each other quite so much.

Yet another idea is to have alternative accessor patterns. So instead of that shared instance, you could have variations of accessors to go get these singletons, something like activeInstance, which will only return the instance if it's already been created. It won't create it if it hasn't been created yet. This really helps you at program startup. I've done this quite a bit, and it's pretty successful. All right, or something like sharedInstanceCreateIfNeeded, which is just a more descriptive name for that very same thing.
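
A sketch of that alternative accessor, assuming a hypothetical SyncController class (the talk only names the accessor patterns, not a class):

    #import <Foundation/Foundation.h>

    @interface SyncController : NSObject
    + (SyncController *)sharedInstance;   // creates the instance on first use
    + (SyncController *)activeInstance;   // returns nil if nothing has created it yet
    @end

    @implementation SyncController

    static SyncController *sActiveInstance = nil;

    + (SyncController *)sharedInstance {
        static dispatch_once_t onceToken;
        dispatch_once(&onceToken, ^{
            sActiveInstance = [[SyncController alloc] init];
        });
        return sActiveInstance;
    }

    // Only hands back an instance that already exists; it never forces creation.
    // Useful at startup, when you want to poke the controller only if it's alive.
    + (SyncController *)activeInstance {
        return sActiveInstance;
    }

    @end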

So laziness. Think through your lazy initialization. But do it so that your program starts up quickly. Okay, next. Hygiene. You make the mess, you clean it up. So I think we can all accept, I mean, this is true not only for WWDC, but, right, it's sort of a little kernel of truth for life.

Good hygiene takes effort, right? And if we kind of try to apply that to great writing, right, E.B. White, a great writer, he says the best writing is rewriting, right, going over, iterating over your writing. Now, in terms of code, we also don't want to throw away code, ever.

Are those things in conflict? If the advice is to keep iterating and rewriting things, but then you don't want to throw things away, well, how do those two things relate? Well, I think that changes in your software need to be part of a process. There's kind of different levels of changes, different kinds of change in your software. And of course, again, your first priority should be to ship. And of course, kind of keeping your code easy to change is, I think, the best way to do that. So, the secret here is not to rewrite your code from scratch, but to refactor.

And what is refactoring? Well, I think that's maybe a lot of things to a lot of people. To try to fit it on one slide, I think refactoring is keeping the functionality of your code, but just changing the form. You don't change the behavior, you just change the way that things are written. Saying the same thing, but saying it with different words, different code.

Well, what about cruft? I mean, I think we've all maybe worked on projects where code just kind of accretes all this junk in it, and it's kind of hard to work with, so you're tempted to just kind of throw it away and start over again. You get this code that's just too difficult to work with.

But cruft is not any of these things. It's not code you don't understand, and it's not code you didn't write; that's the somebody-else's-problem, not-invented-here problem. And it's not code that you don't like either. Don't be tempted to just throw away code that meets any of those characteristics.

But I do think there is a couple things we can say about what's genuine cruft with a very, very high degree of confidence. So what's genuine cruft? Dead code. Code that you've got in your software which never gets called. Maybe you had an old feature, and you've superseded it with a new feature, and the code will now no longer run. I think you should get rid of dead code.

It just gets in people's way. You just have to read something, and maybe if you're new to a project, you're reading this code to understand it, and it never runs. It's a waste of time. So dead code is cruft. The other thing is comments, which no longer apply. In other words, the code and the comments don't match.

I think by definition it's the comment that's wrong unless there's a really bad bug. A lot of times you write the comment and then the code evolves over time so that the comment no longer matches what the code does. And number three, well there is no number three.

That's the only thing that I can think of, only two things I can think of with really, really high level of confidence that's cruft. So for the first one, you can use the compiler for dead code checks. A lot of times when I'm looking through my code and I think that something is dead, I'll just go comment out the declaration and compile the code and see if it's actually dead.

Now sometimes, of course, you've got a lot of dependencies, you might need to build other frameworks or other apps or other projects, right? But if you can actually build all of the code, this is a great way to see if the code is dead. The compiler will tell you.

And for comments, I always like to go back and check them. You know, sometimes there's, you know, you might put a reference to a bug in your code, and the bug was filed 10 years ago. You know, it may not be relevant anymore. I mean, the software may have evolved to the point where that bug is now such a built-in, or the fix to that bug is such a built-in part of everyone's conception of the software that the comment just gets in the way.

People are tempted to go and read in this thing, and it's just like, oh, I already know that. That's the way the software works. Right, so old comments, you might even just delete them, but particularly delete them if they don't match the code. Because the code itself is accumulated knowledge. Good comments are also accumulated knowledge.

And when it comes to the size of the changes to make, I think the size of the change that you want to make is really important. So for small changes, if I go and fix a bug and I see that there's a name that could be improved, even though changing the name to something more descriptive doesn't really have anything to do with the bug fix per se, I'll make that change and just write a little extra note to the code reviewer.

Say, yeah, yeah, it's not part of the change, but I think this just is better. I'm just going to kind of clean up as I go. Make the name, improve names as you do other things. For something medium, you know, I think that once you get beyond the scope of a small change, a lot of times it's about people skills more than anything having to do with software. If you're going to make a medium change, go and talk to people on your team. Think things through. Decide.

Tell yourself a story about the code change that you want to make and why. Maybe changing a set of names, moving some interfaces to something that's maybe a little bit cleaner and easier to maintain. For things that are really large, we should probably maybe even get up and write something on the whiteboard. Maybe even write a quick little design document. Put it in email.

Think about what your change is in a more formal way. I mean, I don't think that you should be writing huge tomes, right? But if you kind of think things through, and again, it's really more about people skills than it is about sort of software or coding. Kind of deciding what you want to do and coming up with a good plan before you do it.

So kind of different scopes of change. Of course, you always want to be aware of regressions when you go and make changes. That's what refactoring is: again, refactoring in most cases is keeping the same behavior in your software, but just changing the form. Of course, the best thing that you can do is to have tests.

Right? You all have tests, right? You all have unit tests, and performance tests, and correctness tests for your whole app, right? If you don't, you should. Tests make your code easy to change. They give you confidence.

[Transcript missing]

We write mostly C-language-family programs for iOS, right? Goto. Now, I think that notifications are just a glorified goto.

But they're worse, because you don't even say where to go. Just post this notification and somebody's going to be listening for it out there. At least you think. Right? But not only that, you can go to more than one place, because more than one hunk of code can sign up for a notification.

So it's like goto, but even worse. Now, why do I think this is even worse? Well, one of the things that I think it does is it frustrates code inspection. Maybe you've written a bit of code and you attach that patch to an email to somebody, and the code reviewer reading that can't necessarily tell easily what that notification is going to do, what code is going to run as a result of a line of code being called. A lot of times you can't easily see what code will run.

The behavior is non-deterministic if you have more than one callback for a notification. The callbacks are unordered. There's no guarantee from the system that one client which has signed up for a notification will run before another. So notifications, I think, can complicate change, because they make it more difficult to understand what your software is going to do. Now, all that said, they're not all bad.

Because notifications can promote loose coupling. If two pieces of code can communicate with each other and they don't know directly about each other, that can be a good thing, because you can change one without necessarily having to worry too much about the other. And there are two really good examples of that. Model-View-Controller uses notifications to communicate between the different elements of MVC. And Core Data uses notifications to communicate changes in data.

I think that if you can think of your notifications in terms of will and did notifications, if you've got something and you want to tell somebody else about it, tell them some other piece of code about it, well then post, "I'm about to change this variable notification." Then you go and change the variable, and then perhaps you might even want to say, "I just changed that variable notification." If you can think of your notifications in terms of will and did, giving some hooks to other pieces of software to let them know about pending changes and let them know about changes which just happened, that can be a really good way to use notifications.
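
As a sketch of that will/did shape (the Note class, its property, and the notification names are all hypothetical):

    #import <Foundation/Foundation.h>

    static NSString * const NoteWillChangeTitleNotification = @"NoteWillChangeTitleNotification";
    static NSString * const NoteDidChangeTitleNotification  = @"NoteDidChangeTitleNotification";

    @interface Note : NSObject
    @property (nonatomic, copy) NSString *title;
    @end

    @implementation Note

    - (void)setTitle:(NSString *)title {
        // The "will" hook: tell other code the change is about to happen...
        [[NSNotificationCenter defaultCenter] postNotificationName:NoteWillChangeTitleNotification object:self];
        _title = [title copy];
        // ...and the "did" hook: tell other code the change just happened.
        [[NSNotificationCenter defaultCenter] postNotificationName:NoteDidChangeTitleNotification object:self];
    }

    @end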

And I think a good piece of advice, a lot of times, for you: most of you are making apps, unlike Apple, where we make frameworks and APIs that you all are going to use in ways that we may not really know about. We're trying to make a flexible framework. If you're making an app, I think a lot of times you should know about the endpoints for your notifications. You should know who's sending it and who's receiving it and why.

It's not just, "Well, I think I'm going to put this hook in there and I don't know who may sign up for this." If you know about the endpoints, that helps you keep control over notifications. You should really think twice about other uses. If you don't know about the endpoints, if it's something other than will or did, really, really think twice and think about whether notifications are the best way to get the job done because code can be too loosely coupled.

If two pieces of code are interacting with each other in ways that are sort of non-standard, not idiomatic, not using a common pattern, it can frustrate change, make things more difficult to understand. An option is to use protocols or delegates instead of notifications. If you think that two pieces of code really are starting to get this real relationship with each other, make an interface out of it or make a delegate out of it.
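
A sketch of turning such a relationship into a delegate protocol, with hypothetical DataFetcher names:

    #import <Foundation/Foundation.h>

    @class DataFetcher;

    @protocol DataFetcherDelegate <NSObject>
    - (void)dataFetcher:(DataFetcher *)fetcher didReceiveData:(NSData *)data;
    @end

    @interface DataFetcher : NSObject
    @property (nonatomic, weak) id <DataFetcherDelegate> delegate;
    @end

    @implementation DataFetcher

    - (void)finishWithData:(NSData *)data {
        // One known endpoint instead of an anonymous broadcast: a code reviewer
        // can see exactly which code runs as a result of this line.
        [self.delegate dataFetcher:self didReceiveData:data];
    }

    @end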

[Transcript missing]

Okay, next. Optimization, the 3% solution. Probably one of the most famous quotations about computer science from one of the most famous computer scientists. Donald Knuth says, "We should forget about small efficiencies, say, 97% of the time. Premature optimization is the root of all evil." You've probably heard that.

So I got out my pencil, sharpened my pencil, and did a little bit of arithmetic. Knuth says, "97% of the time." And so, we should worry about premature optimization 3% of the time. Well, that's helpful. Well, which 3% do I worry about? Which 3%? Well, if I kind of came up with a laundry list of things which generally speaking can be slow or might be slow.

Allocating tons of memory, making a huge number of views, drawing, or maybe I've got new code and I haven't chosen algorithms really well lately because I'm still thinking about what the software should do, blocking on information from other code. A lot of times it's unnecessary work. A lot of times you write a method and later you go and look at it in Instruments or in the debugger. You wind up calling this method again and again and again during a single iteration of the run loop in ways that you didn't think about. You're just going and doing the same work over and over again. So those things can be slow.

But really, don't jump to conclusions. Only optimize when you've measured your software. Use Instruments. It's a great tool. It's built right into Xcode. There's lots of different ways that you can measure your software. So definitely use Instruments. And only optimize when you understand what Instruments is telling you about your software.

And I'd go even further to say that as you're looking at the profiles which Instruments gives you, you should go and look and see what the hottest functions are and optimize the ones which you understand the best, the ones which have the clearest role in your program. Very likely some new code might be up there, but you might want to pick instead the one that you've been living with longer.

You know what the software is supposed to be accomplishing. And so here's an idea is that you should optimize your slowest and oldest 3% of your code. Code that's been around the longest, and it's the slowest. You understand it the best. Instruments is telling you that it's slow. There you go. That's the code that you optimize.

And what that will help you to do is to keep your newest code easy to change, because a lot of times that's where the features are still maybe provisional. You don't understand quite so well maybe what the code is supposed to do. You haven't had enough time to go in and clean things up, make things work as efficiently as possible. So keep your newest code easiest to change.

And trades are okay. So I have a story about years ago, I was an early developer on the Safari project and going and making the web browser. Our goal was to make a very fast web browser, and eventually, we succeeded. But in the beginning, the browser was slow. So well, how do you make a slow program fast? And so we came up with this idea that we were going to never make the program slower.

So from very, very early on, I developed this tool that allowed us to go and just run through a series of URLs and give timings on how fast the pages were loading. And so we ran this test every single day; 24 hours was the largest amount of time that would pass before we would run the test. A lot of times we ran it before we even checked code in.

And so we decided that we would never, ever, ever let the program get slower, even as we were adding features. And this is where the trades come in. So we'd add a feature to make the browser behave more correctly in the way that it needs to. But how do you do that if it winds up making the program slower? Well, we did what I just said. We went and optimized the oldest and slowest 3% of the code. So we went and found a little bit of headroom someplace else to pay for this new code, this new feature that we added.

And we decided that this was okay to do. It made sense. This was a good way to go and make the software faster. And it helped us to really keep that rule in place. Never, ever, ever, ever make the software slower. Don't convince yourself that it's okay because I'm adding a new feature so a little performance hit is okay.

It's not okay. Never make your program slower. Then it can only either stay the same or get faster, right? It's simple logic.

Right, so the sort of lather-rinse-repeat recipe here is: change, test, measure, optimize, and repeat. And that's optimization. So next, dependencies. Don't call us, we'll call you. So me, I'm worried. I'm kind of paranoid, right? I'm nervous when I go and change software. I'm worried about, well, what are the implications of this change that I just made? Particularly, sometimes I sort of parachute in on somebody else's project and go and jiggle things around, right, in my own way.

I'm worried about, who did I just break? What bug did I just introduce by adding this new code? I always am worried about limiting the collateral damage. And so here's a couple of tips on how to do that from a design point of view, from a code design point of view. Inheritance trees and call graphs. So about inheritance trees, I think the simple rule is shallow is better. I think we've decided at Apple in making our frameworks that we don't want to wind up with these deep, deep, deep inheritance hierarchies.

We try to keep them as shallow as possible. And why is that, right? Because if you wind up with these layers and layers of overridden methods, a lot of times you've got to go into the middle or sometimes even to the root of the inheritance hierarchy and make a change in software that's being overridden.

And now you have to think about: are people just doing straight-up overrides on this? Are they supposed to be calling super in their overridden methods? If they are calling super, what does my change do for that code in the overridden method? If they're not calling super, how do they maybe wind up getting this bug fix or feature that I just put in the method way up at the top of the hierarchy? Right? So this is complicated. It complicates change, having a deep inheritance hierarchy. Right? And so the solution, or a solution, to this is where possible to use delegation: design your classes so that they have delegates at the points in the code where interesting work is getting done.

So what is delegation? It's customizing by calling another object. You define the kinds of interesting things that you might want another object to have a role in delivering, and you have that other object do that work. And it keeps the conceptual overhead small, because you decide up front, this is the kind of interesting work that this class does, which might be customized by another class. So instead of adding to the inheritance hierarchy, you add additional delegates.

And the great thing about this is that you can vary that customization at runtime as you need it. You can make a class behave in two different ways simply by changing the delegate. I mean, I think a great example of that is just going and removing a delegate that winds up responding to maybe button presses.

If you just remove the delegate and replace it with nil implementations, the button will be disabled. You update its user interface, it's nice and easy to sort of enable and disable user interface controls by varying the delegate at runtime. Object becomes active again, reassign the delegate that does the work, and you're good to go.
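
A sketch of that idea, with a hypothetical control and delegate protocol (not an actual UIKit API):

    #import <UIKit/UIKit.h>

    @class ActionButton;

    @protocol ActionButtonDelegate <NSObject>
    - (void)actionButtonWasPressed:(ActionButton *)button;
    @end

    @interface ActionButton : UIControl
    @property (nonatomic, weak) id <ActionButtonDelegate> delegate;
    @end

    @implementation ActionButton

    - (void)setDelegate:(id <ActionButtonDelegate>)delegate {
        _delegate = delegate;
        // Vary the behavior at runtime just by changing the delegate: with no
        // delegate there's nothing useful to do, so the control disables itself.
        self.enabled = (delegate != nil);
    }

    - (void)pressed {
        [self.delegate actionButtonWasPressed:self];
    }

    @end

Setting the delegate to nil disables the control, and reassigning a delegate that does the work re-enables it, which matches the enable/disable-by-delegate idea described above.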

[Transcript missing]

Because it helps you to limit includes. If you limit includes in your software, you get a bonus of faster compile times, particularly if you have a bunch of C++ code. Compiler takes a longer amount of time to compile C++ than straight C or Objective-C, so you get faster compile times, which is great. More turns of the edit, compile, debug cycle in your brain.

But more than that, I try to strive for unidirectional calling when I design two classes that need to interact with each other. And so here's an example. I have two classes, a foo class and a bar class, and you can see they message each other. So now if you go and look at the Foo example, I'm going to go and change an interface.

Now, of course, since Bar calls that method, I need to go and update all the callers, right? So this is potentially complicated, particularly in a big framework. If you need to change an interface, well, now you need to go and change all the callers, and you need to make sure that you've got everybody, even independent frameworks that maybe you don't develop every day.

Right, so I think a different way is if you can rethink the relationship between these two classes. Instead of just foo and bar, two classes which are friends, if you have master and slave instead. So not only is the name more descriptive, right, but the relationship between the code is sort of clear and is communicated in the names.
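
A sketch of that one-way relationship; the class names follow the slide as described in the talk, while the methods are hypothetical:

    #import <Foundation/Foundation.h>

    @interface Slave : NSObject
    - (void)receiveData:(NSData *)data;
    @end

    @interface Master : NSObject
    @property (nonatomic, strong) Slave *slave;
    - (void)fetchLatestData;
    @end

    @implementation Slave
    - (void)receiveData:(NSData *)data {
        // The slave never calls back into the master, so the master's interfaces
        // can be refactored freely without collateral damage here.
    }
    @end

    @implementation Master
    - (void)fetchLatestData {
        NSData *data = [NSData data];   // stand-in for real work
        [self.slave receiveData:data];  // calls flow in one direction only
    }
    @end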

Now, what this means, of course, is that if I go and change that interface in the master, if I decide to decompose that, refactor it so it's not just one call now, I've now broken it up into a data-receive and a data-process step, I don't need to go and change the slave. It doesn't matter. I'm free to change the master without having to worry about the collateral damage in the slaves. The slave doesn't depend on the master. I always try to strive for unidirectional calling between two classes if I can. Keep those call graphs shallow, small. Next, mixing.

So, Model-View-Controller, I think it's a really neat concept. It keeps separate things separate: the data in your program, the visual representation of the program, and the way to mediate between them. It's a good design because you don't mix these model and view changes in the same code. It helps to keep your code easier to change. The same model can then later on have two different visual representations. Makes great sense if you're doing iOS development: you have the same model, a different view system on the iPhone and iPod touch, and then a different one on the iPad.

So that's, I think, a pretty good idea. Even more generally, I say you shouldn't mix different things in your software. Don't mix different things. Like, for instance, computation and I/O. If you're receiving data from over the network, don't do any important, interesting work on it at the place where you received the data. Do that elsewhere. Don't mix your algorithms and data sources, which is sort of another way of saying the same thing. Or your UI and a specific screen resolution.

Because again, we wind up changing the screen resolution. Or in the case of iPhone developers, along came the iPad. Because we wind up changing. New hardware, new products. So don't just hard-code it in. Don't bake in the notion of a specific screen resolution. Or your user interaction and a particular interface pattern.

What's very common, of course, in the iPhone is to have apps which show data in screenfuls. And we've got this side-to-side navigation. Along comes the iPad. And a lot of that same functionality is then implemented in popovers. So don't bake in that side-to-side notion. Because it complicates change when the world changes around you.

I'm conflicted about lines of code like this: setEditing, which seems sort of model-ish, with animations, which seem somewhat view-ish. I think this is maybe mixing a little too much. Should you really hard-code animations in your code? So I've got a story about multitasking gestures, the new feature for iOS 5. I don't know if you've tried them out yet: the sort of swipe side to side to switch between apps, and the hand-closing gesture to go back to the home screen from an app. Well, I did a lot of the work for multitasking gestures.

And of course, this is a big change to how the system works that we came up with in version 5 of the software. And it turns out that in SpringBoard, getting an app to launch sort of behind the scenes as you swipe side to side, without that launch animation, was really hard to do, because we baked in the notion of, oh, well, on iOS, when an app launches, it does this animation up from the icon. That's just how it works. But we changed how the system works.

So this is a case where we baked in this notion of animation, and it turned out to be pretty difficult to go and disentangle that to make the system work differently. So that's a lesson in change management for us, for sure. So don't mix different things. App launching and animations are two different things. Don't mix different things.

And that's mixing. Next, expectations, or how do I work this thing? I think bugs are disappointments. Maybe that's something that would be better discussed with my analyst, but I'm also often disappointed with bugs in the software. It's like I expected A and you did B, and I'm all sad now. I have to figure out why. Of course, there's a good general rule of thumb here, Postel's Law: be conservative in what you send, be liberal in what you accept.

I think that applies to app development and not just network programming. Right, and this idea of hard to use wrong, if I'm going to develop an interface, a code interface that all of you are going to use, I would like to think that it's difficult to use that interface wrong.

It's hard to use wrong. And so I've got a couple of ideas about how to make code that's hard to use wrong. One is method arguments, and the other is assertions and early returns. Well, if we look at assertions, we look at this method argument here, and I put it together with an assertion.
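
That method and assertion aren't captured in the transcript; a sketch of the idea using Foundation's NSParameterAssert, with a hypothetical class and method, might be:

    #import <UIKit/UIKit.h>

    @interface OverlayPresenter : NSObject
    - (void)presentOverlayInView:(UIView *)view;
    @end

    @implementation OverlayPresenter

    - (void)presentOverlayInView:(UIView *)view {
        // As the interface provider, I've decided it's simply wrong to call this
        // without a view. During development the assertion drops you into the
        // debugger, so the mistake gets fixed long before it ships.
        NSParameterAssert(view != nil);

        UIView *overlay = [[UIView alloc] initWithFrame:view.bounds];
        [view addSubview:overlay];
    }

    @end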

So I say, if you call this method without a view, well, it's wrong. It's simply wrong to call this method that way. I've decided, as the interface provider, that it's an error to call this method without a view. The software will break. So if you're using the assertions that we provide as part of Foundation, your software will go into the debugger while you're debugging it, which is good. You'll fix that bug before you ship to customers. So assertions are great. Put them in your software.

[Transcript missing]

Now, what about ivars? Ivars bring up an interesting point, too. And I think we'll all accept that global variables are bad. And why are they bad? Because the scope is too broad. A change to that global variable can affect all of the code in your program. But I think that ivar scope can also be too broad, particularly in a large class.

If you have a big class, one of those singleton controller objects, an ivar can be really, really difficult to manage, and you may need to understand a lot about that ivar in order to make sure that you understand how the software is supposed to work. So I've got some rules of thumb for ivars. Generally speaking, have as few of them as possible.

Try to design your code so that you're not just squirreling away little bits of state in the scope of the whole class. I try to have simple life cycles, getters and setters. Don't have complicated life cycle transitions for ivars, if possible. And avoid tight relationships between multiple ivars. Don't have the state of ivar A depend on the state of ivar B, where possible.

Right, and avoid letting non-setter methods change ivars. I mean, this is tempting a lot of times because, you know, you may want a function that computes the value of an ivar, but since a C function can only return one value, you might be tempted to have a sort of update-state method which goes and changes a bunch of different ivars, and that can be difficult to understand, particularly later if you're trying to change that code or if you're trying to learn what that method actually does.

So, if you've got hard-to-manage ivar state already in your code, and maybe you're going to try to refactor, using a state machine might be a good idea. This is even a good idea if you're going to design a class from scratch. Use a state machine. UIGestureRecognizer in UIKit uses a state machine to implement its functionality. Of course, that's expressed to you in the API, how you wind up getting your callbacks.

You wind up asking the gesture what state it's in: possible, or began, or changed, or canceled, or ended. And it was such a good idea that that's how I implemented multitasking gestures in SpringBoard; there's no API for this, but that's how I implemented them, using a state machine. All right, so state machines, how do they help? Well, they help you to think things through.

And they help you to limit possibilities. If you've only got a specific number of states, there is no in-between. You can't read in between the lines. You've got to transition from one state to another. And states also help you to make assertions. One of those assertions you might add to your code is, well, this function can only be called if the class is in a certain state. If not, I'm asserting.

It's wrong to call it in the wrong state. And later, right, because this is all about change, right, so you implement the state machine and then later you realize it doesn't do everything that you want. You need to add a feature. Well, you've just made your life easier, because all you do then is add a state, and then you think through what all the transitions are between the existing states and the new state that you're adding. What makes sense? Can you get there from here? So state machines help you to manage change and make your code harder to use wrong, even in internal implementation.
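
A minimal sketch of the idea, loosely modeled on the gesture-recognizer states mentioned above; all of the names here are hypothetical:

    #import <Foundation/Foundation.h>

    typedef NS_ENUM(NSInteger, TrackerState) {
        TrackerStatePossible,
        TrackerStateBegan,
        TrackerStateChanged,
        TrackerStateEnded,
        TrackerStateCancelled
    };

    @interface GestureTracker : NSObject
    @property (nonatomic, readonly) TrackerState state;
    - (void)begin;
    - (void)update;
    - (void)end;
    @end

    @implementation GestureTracker {
        TrackerState _state;
    }

    - (TrackerState)state {
        return _state;
    }

    - (void)begin {
        // Assertions encode which transitions are legal: there is no in-between.
        NSAssert(_state == TrackerStatePossible, @"-begin is only valid from Possible");
        _state = TrackerStateBegan;
    }

    - (void)update {
        NSAssert(_state == TrackerStateBegan || _state == TrackerStateChanged,
                 @"-update is only valid while the gesture is in progress");
        _state = TrackerStateChanged;
    }

    - (void)end {
        NSAssert(_state == TrackerStateBegan || _state == TrackerStateChanged,
                 @"-end is only valid while the gesture is in progress");
        _state = TrackerStateEnded;
    }

    @end

Adding a new state later means revisiting each transition and its assertion, which is exactly the "think through what transitions make sense" exercise described above.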

So, wrapping up, 10 things to think about. Clear code. Your bug fixes should tell a story. Lazy initialization: Keep control of that where possible. It's good. You want quick-launching programs, but it's not magical. Refactor instead of rewriting. Don't overuse notifications. Use them for the right things. Keep your newest code easy to change.

Optimize your slowest and oldest code. Limit dependencies; it's not about not working and playing well with others, right? It's just about trying to keep your code easy to change. Don't mix different things, and make code that's hard to use wrong. So, 10 things to think about, and I hope those will help. And thank you for coming.