
WWDC06 • Session 413

Signed Applications

OS Foundations • 1:03:16

Mac OS X Leopard will provide support for identifying applications by digital signatures. Various OS features will make use of this identification to base trust decisions for the user. Come to this session to learn about the tools available on the system to sign applications, Apple's strategy for signed applications, and to see how features make use of this new capability.

Speaker: Perry "the Cynic" Kiehtreiber

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.

Hi, everybody. I'm here today to talk about a technology preview of application signing. It's a new feature for Leopard that we've never had before. So, why code signing? We're actually beginning to get a growing number of requests, both internally and externally, to identify client processes, not just by what user they're running as, but by what application is actually running.

Also, we're getting requests to be able to validate application integrity. Has this application been messed with? Has it been changed? Has somehow something within its bundle or within its code actually been altered? The requests have come both from outside parties, vendors, and also from internal parties that have a need to know who the client is that's asking them for a particular piece of data. Also, customers are becoming very interested in this as they want to tell whether or not their systems still have maintained integrity and are still secure.

So there's a little bit of history of actually identifying apps within the system. Keychains have always tried to actually identify their client apps. Anybody that's ever updated a system and ended up with a dialogue saying "Mail.app is trying to access this particular keychain item; perhaps it's been updated, do you approve?" has actually seen the keychain subsystem trying to re-identify an application that it once knew.

The problem was that back in that scheme, we were always running live hashes on applications as they were delivered on the system. And then we were involving users in trying to make the relationship of, "Hey, this is actually an application that Apple gave to me. Yeah, I really want to trust it." MCX, Managed Desktop and Parental Controls, for a long time tried using the CFBundleName to say, "Do you want to allow your children, or kids in a lab, or people within an organization, to run particular applications?" That ended up getting thwarted by eight-year-olds who were smart enough to understand they just needed to change the CFBundleName of their WoW application or something like that, and they could run it anyway. Some people have tried using paths. Everybody's trying to do various things, and really what we decided to do was try to solve this problem in a more generic fashion.

So PKI use, or X.509 certificates, have quite a history here at Apple, or a history over the Mac OS X timeframe, of being an actual system-provided piece of functionality. In Jaguar, the keychain subsystem included basic PKI and SSL functionality through our Secure Transport libraries. That was later used by our first client, which was Safari, to provide its SSL capabilities. In Panther, we enhanced things by including S/MIME capabilities. It was fairly rudimentary; it basically only allowed S/MIME access to one identity for a particular mail account. But basically, it all just plain worked, and it worked within the framework of our X.509 and PKI subsystem.

In Tiger, we started off with smart card integration. We'd actually been supporting smart cards since Jaguar or so through our PC/SC stack and the MuscleCard subsystem. But in Tiger, we decided our feature would be to support them as if they're keychains, so they fit into the way that we manage PKI capabilities on the system itself.

Other things that happened during Tiger, actually mid-release: the iChat folks included the capability for encrypted iChat, which once again exploited mostly our CMS libraries that were developed for S/MIME. VPN and AirPort also work with our PKI capabilities, and since smart cards are enabled as keychains, that functionality was there as well.

So now, the next step for us is to include signed applications. The ability to actually sign applications here at Apple, deliver them to a customer system, and when you or your mom are running Mail.app and it asks, for instance, for a keychain item, the system will be able to say: can Mail.app, signed by Apple, get this keychain item? And when we update that application, it'll still be Mail.app, signed by Apple, and the user won't be bugged with another question.

We'll be able to springboard this into other technologies; like I said, with MCX and parental controls we'll be able to solidly identify applications. And we have a lot of ideas for future use of this technology as well. So, my lead architect for the security group, Perry "the Cynic" Kiehtreiber, will be doing a presentation now on signed applications, how we did it, and everything else. Perry.

Give me that thing. Live? Good. OK. I just put up the big picture to scare you all. Don't worry about it. I'll talk you through it a little bit later. What am I going to tell you? I'm going to tell you what this is all about. I'm going to tell you what it can do for you and for the system. I'll tell you what it won't do for you and the system, which is probably just as important. I'll let you know what we expect you to do for Leopard and what you could get out of this, depending on your situation.

So what does it do? It essentially takes your code, your application, your bundles, your tools, and puts a digital seal on them that allows modifications to be detected. So think of it as sort of freezing your application in a container of sorts. It says here, "Provably seal it," because the system can figure out when the seal's broken, when your code has been modified. Then, we bind the seal to a digital identity.

That's your digital identity or the identity of whoever made the program. In addition, what the code signing system allows you to do or anybody else to do is to express requirements of various nature that can be placed on code. Basically, a set of conditions that need to be satisfied before the code is allowed to do something or get something or otherwise proceed with its operation. Now, on the other side of the game, of course, we have the functionality to verify the seal, to identify the digital identity that did the signing, and to evaluate these constraints, these conditions.

What doesn't it do? This is an identification facility. It helps everybody involved figuring out the identity of a piece of code and whether it is still intact or whether it has been modified or otherwise subverted. Using this API or signing your applications by itself doesn't give your code the ability to do anything that it isn't already able to do. In that, it's very much like the authorization APIs, if you've ever used those. They are not there to give you the capability to do something. They are there to validate whether a piece of code has the right to do something.

Code signing won't do anything about the bugs that are in the code, because it doesn't actually watch your code running. It just tells you whether that's the code that the manufacturer made. So just because something's signed doesn't mean that there aren't any bugs in it. I know that's a horrible surprise.

And code signing doesn't actually protect you against trusting the wrong code or trusting the wrong manufacturer. If you tell the system that you are willing to run this Mail.app made by Hacker Inc., then the system will go, "Okay, I'll run it for you. You said so." I've been told to tell you that this is not a copy protection solution, and in fact it isn't, sort of. It does help you protect the integrity of your code. So if your worry is that somebody will go in, crawl into your code, and hack it around to turn off stuff you put in there, code signing will help you.

It doesn't do anything about people taking that code, taking it elsewhere, and running it elsewhere. That's not its job, that's not what it's for. So if there is stuff in your code that you're worrying about people hacking around on, this is great for you. But copy protection, this isn't. And just like anything else we're doing, unfortunately, it's not bloody magic. It's not going to solve all of your problems. It's just one little piece, well, okay, one medium-sized piece, that's going to help make the world safer.

So let me walk you through the basic scenario. Here's what's happening. You take your code. Nobody cares whether this is Xcode, except, of course, Apple. You're supposed to use Xcode. But makefiles will work. Doesn't matter. And if you're making an application, you have an Info.plist. You have resources. You put them all together with your build tool. And eventually, you arrive at this beautifully bundled-up application with the right icon and the right configuration and all the localizations. And it's beautiful. That's what you'd normally ship.

Well, actually, in addition, you're going to take a digital identity in a keychain, because that's where they go. And you're going to feed this final code of yours and the identity into the codesign command. That's a new command. It's in your seed, so you can play around with it. It's a command line tool. I hope that doesn't scare you too much.

What codesign does is it adds information to your bundle. It creates a couple of files. It will also, at least in the final version, rewrite your executable somewhat. And the result is very much an application or a tool or a bundle, just like the input. It behaves exactly the same as far as the system is concerned. But it's now signed. So there is extra stuff in it. Then you take that and you ship it pretty much the way you've always done. And the code signing machinery absolutely does not care how you ship this.

You can, you know, just do it. You can ship installers. You can use online software updaters. You can use binary diffs and put it back together at the other end. The only thing we care about is that when you're done doing your delivery at the user system, it's exactly the same thing that you started with.

And then somebody, you hope, will run this on the user system. Now this is a really important point. Code signing is about running code. This is not about stuff sitting on the hard drive. Code signing will help you verify things on the disk, but that's not its primary function. The primary function of code signing functionality is to work with code that's actually running.

You make calls into a verification API, give it handles for the running code, the process, whatever else it is, and out pops an outcome which is pretty much either: yep, it's there, this is really its identity, these are the conditions that you placed on it, and they're fine; or you get an astonishing variety of error codes that tell you that things aren't so, and that this piece of code really can't be identified. Amen.

I'll try to show you what this really means. I hope nobody here is scared of Terminal and the command line. Demo system, please. Thank you. All right. So as I said, there's a new command called codesign. It has a man page, so if you don't mind that kind of thing, go read it. It'll explain everything to you. And I made a little program called CSTest, one of those things that only lives to demonstrate what's going on here.

Right now, since I just built it, CSTest isn't signed. So codesign -v does a verification of the code signature, and, well, it's not signed. So just for the few of you who have never been annoyed by keychain calls, let me show you what happens when a program isn't signed. Let's make a little keychain item here, call it foo, doesn't much matter. And one of the few things that CSTest knows how to do is it can retrieve a keychain item.
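
That verification step, sketched as a terminal exchange (CSTest stands in for any freshly built binary; the exact error wording varies between releases):

```
$ codesign -v ./CSTest
./CSTest: code object is not signed
```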

You've all seen this dialogue. This is the, hey, no, this CSTest program has never tried to access this item before. Do you really want to give it access? And if you click on always allow, then the system remembers that from now on, CSTest is allowed access to the keychain item.

And if you do this again, then you don't get the dialogue again, thankfully. And everything's cool. That is, until you want to change something. I mean, we all have been through this. This is the source of CSTest. And somebody just told you to change the program to, I don't know, put a disclaimer in.

So we've rebuilt CSTest. It now puts the disclaimer out. Well, isn't that a beautiful dialogue? This is the best the system can do in Tiger to tell you that, well, CSTest has changed. It's the same name that it used to have, but it's not the same program anymore, and what do you want to do about this? And there is this change all button that basically says, I know what I'm doing.

It's still the same program. And that tells the system to remember that the old one and the new one are sort of the same as far as you're concerned, and now we're okay again until we change the program again. Which is very annoying, particularly when it happens to 25 keychain items in your keychain.

Not good. So, what can we do? Well, we can sign CSTest. The -s takes an argument that is the identity that you're going to use. Since it's in the keychain, we have to unlock the keychain that happens to contain this. And now we can verify it. This is a Unix style command, so if it doesn't say anything, that's good. If that bothers you, add more -v options and it starts saying lots of things that just tell you that the world is good. So, let's try this again now with, just to be fair, a different keychain item. Let's call it test.
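
A sketch of that signing sequence, assuming the identity lives in a keychain named signing.keychain ("My Signing Identity" is a placeholder):

```
$ security unlock-keychain signing.keychain    # prompts for the keychain password
$ codesign -s "My Signing Identity" ./CSTest   # sign with the named identity
$ codesign -v ./CSTest                         # no output: the signature is valid
$ codesign -v -v ./CSTest                      # extra -v's narrate the result
./CSTest: valid on disk
```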

And now we're trying to access test. Yes, it's okay, it gets access. So here. And you know how these marketeers are. The disclaimer wasn't really disclaiming enough. They need a strict disclaimer. Okay, so here is CSTest, which now has the strict disclaimer. And since we're now living in the code signing world, let's sign it again. New version of your program. New signed version of your program. So let's access test, and, oh, no dialogue.

Let me tell you what really changed here. I mean, the dialogue went away and the user is happy. You don't get tech support calls from users saying, "What is that dialogue and does this mean that you are attacking my system?" But what really changed here under the hood in the system as a whole is something really fundamental.

In the old version, the system looked at the program on the user's system using a hash, but that's sort of an implementation detail. But it basically looked at the program on the user's system and remembered that program. And when you shipped an update, it was just a different program.

It sort of kind of knows that it's the same name, but that, of course, can be faked easily, so it can't rely on this. So in order to figure out whether that new program is really supposed to behave like the old program as far as security is concerned, it has to ask you. It doesn't know any better. In the new universe, with code signing, the signing is done when you make your program, before you ship it. So rather than comparing the programs on the user's system, what it's really doing is it's taking your word for it.

When you sign a program and you say this is CSTest, okay, what we remember is that a program called CSTest signed by you is allowed to access this keychain item. If you make an update and you sign it and you ship it, it's still a program signed by you called CSTest. There's no question here. There's no uncertainty. It's the same thing. It's the same program because you said so and you're the manufacturer. So this is really what's going on here.

Okay, well, we're Apple, so let's do something with an actual application that looks graphical. Again, this thing really doesn't do anything other than demonstrate things. This is freshly built, so of course it's not signed. But, you know, just like you can sign a tool, you can sign an application.

The checkboxes here are simply the program calling the verification API itself. Normally you don't do this because you generally assume that you yourself are okay, and if you aren't okay, you can't trust yourself figuring out whether you're okay. So in reality, you call this on other people's code. But for demonstration purposes, this is easy. What you have here is we signed it, it's validly signed, and well, this button here basically just does the same thing that the command line tool did.

It fetches a keychain item. And of course, since the CSTest application is a different program, we get the "always allow" dialogue, and okay, it worked, and, well, it still works because now we're on the access control list. So one of the things that happens, remember, code signing is about running code.

In addition to the files on the disk, there's a dynamic state of validity that can be cleared. A program can lose its identity when it does certain things. The kernel keeps track of this for processes, and the idea is that once you've lost your identity, you can't ever get it back. It's sticky.

Most of the time a program will actually do this to itself. "Hmm, I'm not sure if I should load this. I'm not quite sure if I'm still going to be myself when I load this, but okay, I'll tell the kernel that I'm no longer me, and then I can load this and run off, but the rest of the system will now know that I've lost it.

If you quit me and relaunch me, I'll be myself again until I do something questionable." So we can turn the valid bit off, which behind the scenes is making this, you know, "I'm no longer me" call. And at that point, well, you're back to this dialogue, because the system now doesn't think that CSTest really is CSTest anymore. It might be, but we're not sure.

And as I said, you can't get your dynamic validity back except by, well, relaunching yourself, at which point you're fine again. Now, what is it that we're protecting here? We're protecting against modification of the code itself. You've seen that in the command line tool. I could do the same disclaimer thing, of course, here. I could show you that I can modify the nib, and it would change the program. The nib's part of what's protected. But let's look at something perhaps a little bit more interesting.

The Info.plist is protected by code signing. So if you go into this Info.plist here and you maliciously try to change something, like making this thing accept application signatures that it shouldn't, see here: the verification API figured out that the code's changed. It's still signed, but it's not validly signed, and again, well, the system knows you're not really you anymore. The sort of cool thing is that it really doesn't much care how you got there, so if you change the Info.plist back, then it's valid again. Everything's fine.

Another thing that's being protected is resources. Come on. Do what I mean. There aren't many resources in here. Let's say I am adding a resource. It notices. Same thing happens if you remove a resource, or if I took the icon away, or if I modified it. Basically, your resources have to be exactly the resources that were there when the program was signed, or the signature's invalid.
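
A sketch of what that demo does in the terminal (hypothetical paths; the exact diagnostic wording varies by release):

```
$ codesign -v CSTest.app                            # valid so far
$ touch CSTest.app/Contents/Resources/extra.txt     # add a stray resource
$ codesign -v CSTest.app
CSTest.app: a sealed resource is missing or invalid
$ rm CSTest.app/Contents/Resources/extra.txt        # put things back
$ codesign -v CSTest.app                            # and it verifies again
```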

Okay, this is the basic integrity thing, and if you don't care about any of that other identification stuff, the one thing to take home is once you sign your programs, you'll never have to worry about people hacking around with your program and you not having any way of figuring out that that happened.

If you sign it, and somebody gives you a tech support call and says, "Your program's misbehaving," tell them to run codesign -v on your program, and if that gives an error message, it was modified. You won't necessarily know who modified it, whether it's the user himself or somebody made a hacked version of your program, but at least you know it isn't your program that's misbehaving, it's some mutant offspring of it. Okay, let me show you one more thing.

Here's the .Mac preferences, which we all know and love. I'm showing you .Mac because the .Mac password is such a loved password. Everybody wants to use it. There's like 16 different Apple applications now that all want to use your .Mac password. Mail wants to. iChat wants to. And half of those have little daemons in the background. So what really happens when you are creating a .Mac password with one of the Apple applications is there's a little SPI call that makes a really, really big access control list. It takes a little while because it's a really, really big access control list here.

And you can just tell this is a who's who of Apple applications. Now, all of these know that they want to work with your .Mac password, and there's basically this list of all of these applications. And if Apple ever makes a new one, and God knows we do all the time, it gets added to this list eventually. Meantime, you get one of those nice dialogues that says the Frobos background super app wants access to your .Mac password, and you're kind of going, "What is that?" Well, Apple invented that in the last software update.

Oh, and here's a new one. This is something that we added with code signing. Rather than just having a list of applications, each of them separately enumerated, we now have application groups. And any application in this group automatically has access to this item. So what I can actually do, if I want to, is I can take this list here and just take them all off the--

[Transcript missing]

How about Mail? This is my .Mac account and, no, I haven't actually checked my email there for a while, so it's probably a long list.

SyndicationAgent. That's exciting. But the idea is we're online here. It's still chewing through my email. As I said, I haven't checked this in a while, and it's full of spam. But Mail.app has access to this password, not because it's on some explicit list, but because it's in the .Mac application group.

So, how do you add something to an application group? Good question. Let's put CSTest into the .Mac application group. Actually, we're almost in here. What you do is you edit the Info.plist, which you've all done before, and it's really quite simple. You add an application group entry. And in the case of .Mac, the name of the application group happens to be .Mac. Of course, editing the Info.plist invalidated the signature, so if we now went and tried to access Perry, which happens to be my password, that doesn't work. You're not really you, are you? So let's sign it.
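
The edit itself would be a single dictionary entry along these lines; the exact key spelling isn't given in the session, so ApplicationGroup here is a stand-in:

```xml
<!-- Hypothetical key name; the seed's actual spelling may differ. -->
<key>ApplicationGroup</key>
<string>.Mac</string>
```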

I haven't used that keychain for five minutes. The -f, incidentally: if you're trying to sign an application that's already signed, codesign won't let you unless you say, "I know what I'm doing here." So that has regenerated the signature. And if we now try to access the .Mac password, it worked.
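
The re-signing step, sketched (same placeholder identity as before):

```
$ codesign -f -s "My Signing Identity" CSTest.app   # -f: replace the existing signature
$ codesign -v CSTest.app                            # the regenerated seal verifies
```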

Slides, please. And I know, this isn't quite as good as pulsing Aqua buttons. But we're the infrastructure group. OK. We're calling this code signing even though the session is called application signing, because this is more than just application signing. We've already talked about tools. What we're planning to do is extend this notion of code so you can sign pretty much anything that you intuitively think might be code.

Single executables, application bundles, but also plugin bundles, frameworks, libraries, scripts, applets, widgets. Pretty much anything you think is code, anything that gets run or runs on the system. Now, it's going to take time to get all of that covered under the umbrella, but the architecture is there to cover it all. So that's why we call it code signing.

If you're making universal binaries, and I hope you all do, signing happens on a per-architecture basis. So if you make something that's four-way universal, then you'll basically get four code signatures embedded there. The advantage is that if you run around and thin the executable, every single architecture that you pull out and stick somewhere will still be validly signed. So if you have installers that take a universal binary and, you know, just pull out the right architecture for some reason, that's okay. Code signing will still work.

Now here's one really important point. I'm going to tell you this three times throughout the presentation because it's really important. Your code needs to be immutable. It can't change on the user system. And I don't care how it changes. You can't have configuration files in there that the user is supposed to edit. You can't stick any preferences of any kind into your app.

You can't ask the user to stuff new icons into your application, whether directly or through some UI of yours. It needs to be immutable. Anything that changes on the user system needs to be somewhere else. And if you read the rules and regulations of Mac OS X, it'll tell you pretty explicitly where that somewhere else is.

Preferences go into Library/Preferences. Support files go into Library/Application Support, and so on. So your code needs to be immutable, or you're going to have trouble with this. It just won't work for you. Bad things will happen to you. You'll be unhappy. Your customers will be unhappy.
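
As a minimal sketch of doing this right, here's mutable state kept out of the bundle with CFPreferences, which writes under ~/Library/Preferences rather than into your signed application (the key name is made up for the example):

```c
#include <CoreFoundation/CoreFoundation.h>

/* Store a setting outside the signed bundle. CFPreferences puts it in
   ~/Library/Preferences/<bundle-id>.plist, so the app itself never changes. */
static void SaveDisclaimerSetting(Boolean enabled) {
    CFPreferencesSetAppValue(CFSTR("ShowDisclaimer"),   /* hypothetical key */
                             enabled ? kCFBooleanTrue : kCFBooleanFalse,
                             kCFPreferencesCurrentApplication);
    CFPreferencesAppSynchronize(kCFPreferencesCurrentApplication);
}
```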

Don't do that. So how do you sign code? Because that is what we ask you to do. Come Leopard, we want you to just have this extra step when you make your code, your program, your applications, before you ship them, we want you to sign them. And it's really easy. You saw.

You run codesign. You say what to sign, over there, and you say what with. And there's a couple of other optional things you can stuff in if the circumstances warrant it, but that's basically it. It will modify your application, your code; you take the result, and you shove it into your, excuse me, you insert it into your packaging process just like you always do.

I told you I'd tell you again: it's okay to thin your code after signing it, and conversely you can use lipo, if you've ever heard of that, to bunch more architectures into one file. Since we're signing things individually per architecture, the system really doesn't care.
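
For example (a sketch; the tool name and architecture are hypothetical):

```
$ lipo MyTool -thin ppc -output MyTool-ppc    # pull one architecture out
$ codesign -v MyTool-ppc                      # its embedded signature still verifies
```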

And once you've signed your code, you can do pretty much anything you want with it that doesn't change it. You can copy it, you can move it, you can stick it on a file server, you can put it in an installer, and we don't care if it's the Apple installer or a third party installer.

You can cut it down the middle, ship one half online and the other half on a CD, stick it back together on the other side. It doesn't matter as long as when you're done on the user system, it's the same files in the same order as you started with.

It's not okay to change your code, and that includes our excellent link-edit commands like strip and nmedit and all of the other ones. And it is not okay to play with the resources after you sign. The resources are all sealed. So if you have some kind of post-production arrangement with some other department or some other company, get that done before you sign.

In particular, it also means you can't take resources out. It means that if you are doing localization for some core language set and then send it off to some other company to add Korean and South Vietnamese and whatever else, when you get it back and they've added their resources, you need to resign.

When should you sign? Well, obviously, after you're done modifying it. I've already told you three times now. After you've gotten the resources exactly right, and this is sort of an important point, after you've decided that this is good, you can go off and sign almost anything. The machinery doesn't care.

I mean, you use your digital identity as long as you have it right there. You can sign your program. You can sign somebody else's program. You can sign something that erases the hard drive. You wouldn't want something that erases the hard drive to go out after you sign it. Because the signature basically says, "Yeah, we like this. This is us." No.

You may want to have an extra step designed in there where after you build something you test it. Yeah, that's the word. You test it and you actually make sure it behaves like something that you are proud of shipping and then you sign it. That's probably the best way to do it.

Now that doesn't mean that you can't sign stuff as part of your build process, if verifying signatures is something that's routinely done in your process. In which case, you might want to set up two digital identities, one for testing and one for shipment. And obviously you want to sign before you package it and ship it.

Again, there's nothing that keeps people from signing stuff on the end user system. That's perfectly okay. But it's not as good as signing at the point where you make your applications, because the sealing basically protects against modification from the point where you sign to the point where you verify. If you sign on the end user system, you're not protected against modifications during shipment: people hacking your installers, people misdirecting software updaters, that kind of thing.

So, you're sort of missing out on half of this immutability gig. Also, anybody who signs anything on the end user system is going to have to have the signing key on the end user system. An important point about these digital identity things is that you keep them on your computer where you work. The end user never gets to see them; the end user can't sign anything with your key because he's never got it. This is important.

So we've been talking about these identity things, and those of you who vaguely know what X.509 means will probably have figured out what this is by now. For those of you who have no idea what a digital certificate is, let me just give you the three-sentence summary.

It's a cryptographic key, which is sort of a binary bit bucket, and a digital certificate, which is another binary bit bucket, that you either make with something called Certificate Assistant, something we ship with the system, or, if you are big and corporate, you can ask VeriSign or other companies to make you one.

You stick those into a keychain, or you import them into a keychain if you already have them. And from that moment on, it just shows up as an identity, or, if you prefer, under My Certificates. And as long as you carry that keychain around and you have the password for the keychain handy when you try to use it, it'll just work. You can make your own code signing certificates. We don't insist that you get certificates from big companies that want lots of money from you. So that's your choice.

Hey, we think of you folks, we do. If you have a code signing certificate, say from VeriSign, one of those things that works with Microsoft Authenticode, it happens to also work with our stuff. So if you've already invested the big moolah for just the right kind of code signing cert, you don't need a separate one for us. Just use it. It's OK, we don't mind.

We don't sell these things to you. We're not planning on selling them to you. It's not really our business. Why bother? Do think a little bit, particularly if you're more than one employee in your company, think a little bit about who's got the authority to actually do the signing. Just as I said, only sign what you're proud of.

You may want to make sure that if you've got 2,000 employees that not anybody can just go and sign anything. How paranoid you want to be about this is really not for me to say. It's sort of a corporate decision. So if you actually have departments that worry about the legal side of these things, you may want to give them a ring and tell them that this is coming and tell them to think about it. Usually the legal folks think about these things for months. So if you think you might run into this, start talking to them now. Usually this takes way longer than fixing bugs or implementing features.

Okay, code requirements. If you remember, at the start I said one of the pieces of functionality that code signing has is it lets you express restrictions, constraints, conditions, requirements on code. It lets you say things like "must be called Mail.app" and "must be signed by Apple." As a matter of fact, if you're talking about Apple's Mail.app, that's pretty much the requirement you're going to use.

"Must be called mail.app and must be signed by Apple." Or if you prefer, "Apple must say that this is called mail.app." Now, what we've done is we've invented this thing called a code signing requirement, or just requirement. And we've given it a generic form that you can use using an API pretty much in all situations.

There's a binary form, which is just a binary blob with no pointers or anything inside. So you can store this anywhere you want. You can stick it in the database, put it in your own data structures, you know, wrap a CFData or NSData around it and stick it in a dictionary.

We don't care, just as long as you keep that blob from being modified until you feed it back into the API. There's a text form for it, which, as you'd expect, is a little bit geekish. It's a little programming language of sorts, a really simple, trivial programming language. And, you know, you can convert between the two, of course.

So, remember this picture? Let's go through it again, but this time let's talk about what requirements are. So we have the signed code on the end user system, and there is a way to derive requirements from the code. I'll talk a little bit later about how this works. It's called the designated requirement. If you happen to want to derive a requirement from the code back home where you made it, that's fine too. Remember, it's the same code, so you'll get the same requirement out of it anyway.

Usually, you want to store these somewhere in some configuration database, in some preferences file, configuration file of some sort. And as I said, you're supposed to store it as a binary blob, so it doesn't really matter how you do that. NSData, database entries, we don't care. Turn a requirement into a binary blob, store it anywhere you want, get it back when you need it, turn it back into an API type, and off you go.
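
In API terms, the round trip might look like this. This is a sketch using the requirement calls as they later shipped in Security.framework (SecRequirementCreateWithString, SecRequirementCopyData, SecRequirementCreateWithData); the preview's spellings may differ, and the requirement text is just an example:

```c
#include <Security/Security.h>

/* Compile a text requirement, then flatten it to a storable binary blob. */
CFDataRef CopyRequirementBlob(void) {
    SecRequirementRef req = NULL;
    CFDataRef blob = NULL;
    if (SecRequirementCreateWithString(
            CFSTR("identifier \"com.apple.mail\" and anchor apple"),
            kSecCSDefaultFlags, &req) == errSecSuccess) {
        SecRequirementCopyData(req, kSecCSDefaultFlags, &blob); /* binary form */
        CFRelease(req);
    }
    return blob; /* stash it anywhere; SecRequirementCreateWithData turns it back */
}
```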

And, well, the code gets run, as it always does, and I sort of half cheated, because when you are actually feeding it to the verification API, usually you also feed a requirement in. What you are verifying then is, as before, that the identity of the running code is still intact, that it hasn't been violated in some way, but also that the requirements that you pass to the API are satisfied. So, you know, the difference is that if there weren't any requirements, then feeding, say, a running Mail.app to the verification machinery would just verify that it's a properly signed application, which you could make.

You could call it Mail.app, nobody would know. But if you add a requirement that says it must be signed by Apple and it's called Mail.app, then you can't fake it anymore, because Apple won't give you their signing key, I hope. I hope. And the outcome now can include "doesn't satisfy requirements," and there you go.

Just to give you sort of an overview of what you can do with the requirements language, the most important one is constraints on the signing chain, the certificate chain. And most of the time what you do is you constrain the anchor of the certificate chain, which is essentially a constraint on the authority that built the digital identity that ended up signing this. There's a special one for Apple, of course. Hey, we're Apple. But any certificate can be expressed in the requirement language internally. It's stored as a hash of the certificate. So if you made your own, it'll just end up in there as a hash of the certificate authority.

Identifier. I didn't talk about this before, but when codesign goes off and builds the code signature data, it embeds an identifier string in the signature data. This identifier string is by default derived from the bundle identifier. So if you are following the rules and you're making an application bundle or a plug-in bundle or something, then you'll automatically get something reasonable like com.yourcompanyname.something.

If you are signing tools, by default what you'll get is the file name, which is usually not what you want. And there is an option to codesign to make up your own identifier, com.yourcompany.whateveryouwant, and feed that in. So the identifier is the other element of typical code constraints.
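
With the identifier option, that might look like this (placeholder identity and identifier):

```
$ codesign -s "My Signing Identity" -i com.example.mytool ./mytool
```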

As I said, "signed by Apple, and identifier is com.apple.mail" would be the canonical code signing constraint for the Mail application. Or in your case, it would be: signing anchor is your anchor, and name is whatever you said it is. You can check arbitrary contents of the Info.plist for anything you want, either something that's always in the Info.plist, like the bundle identifier (CFBundleIdentifier these days), or something that you decide to add to your Info.plist.

And as a matter of fact, that's how application groups are implemented. If you remember what I did editing the Info.plist, I just added an entry called application group and said the value must be .Mac. And that's all you do when you check for application group membership: you check that there's an application group entry and that its value is .Mac.

And you can of course combine them with logical operators like 'and'. We are going to add more elements to this language and if you feel like there is one that you think would be really cool, then let us know. Now is a good time because, well, we're still defining this.
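
To make that concrete, here are a few requirements in the text form of the language; the hash and the ApplicationGroup key are placeholders for illustration:

```
identifier "com.apple.mail" and anchor apple
anchor = H"0123456789abcdef0123456789abcdef01234567"
info [ApplicationGroup] = ".Mac"
```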

So these are requirements that typically the caller of an API imposes on running code made by someone else. We tend to call them external requirements because the guy who imposes the requirement is different from the guy who made the application. Either you want to check somebody else's application, or somebody else is trying to check your application.

Now there are situations where you, the application maker, actually have requirements that you want to embed in your code. A very typical example is if you can't trust your libraries, who can you trust? So you may want to say, "I really only want to be linked against genuine Apple libraries." Or, "I think I should only load my own plugins, the ones made by myself, because who knows what the other ones will do to me." These are called internal requirements because they are directly embedded in your code signing information.

They are automatically sealed by the same mechanism that seals the code and the Info.plist and the resources and everything else. If you have those, you feed them as one of those optional arguments to the codesign command; read the man page, it explains how you can do it. Another form of internal requirement that's just sort of hovering here as a teaser is hosts and guests.

Well, let me tell you that much. Remember how I said that code can be not just executables, Mach-O binaries, applications, but it can be scripts, other stuff that's being run by other code? Part of the code signing architecture is a model called the host and guest model, where you basically have one set of code running or interpreting or supervising another set of code. The supervisor is called a host, the code that's being supervised is a guest. This mechanism allows you, us, together, to extend the notion of code dynamically.

Anytime we want to add code as defined by a particular interpreter to also be code signable, all we have to do is add, we hope, a few little things to the interpreter or supervisor. And the script becomes code signable, and the entire machinery that I've described so far becomes available. Keychain items could directly be linked not just to the interpreter, but to the interpreted code.

And all the other applications that use the code signing APIs automatically extend to now be able to identify these scripts. Resources don't get handled by internal requirements. Resources are directly sealed with the signing identity that signs the rest of the code, because generally you don't want to be able to dynamically change your resources on the fly. Besides, it's a heck of a lot more efficient.

And there's this very, very special requirement that we call the designated requirement. One of the most common gestures in practice is the user pointing a program at something on disk and saying, "that program there." Think of, say, parental controls, where the user is browsing through the Applications folder and going, "Oh, Mail's okay. Safari is, eh, okay. Disk Utility, eh."

So when the user points at an application on disk and the API caller wants to remember that application, that thing, that Mail there, what it needs is a code requirement that can later, when passed to the verification API, verify whether this is the same thing that we're looking at right now. That's called a designated requirement because, from the point of view of the application, it's designating this particular requirement as: this is how you can identify me again. This is me.

And there is an API call, SecCopyDesignatedRequirement happens to be what it's called, that produces an API requirement object for a piece of code. The system will make one up on the fly, and the current form, which we don't promise will stay the same exactly, is simply: signed by the guy who signed the application, and the identifier is whatever the identifier of the code is.
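
A sketch of what calling it might look like, using the spellings that later shipped (SecStaticCodeCreateWithPath and SecCodeCopyDesignatedRequirement; the session's SecCopyDesignatedRequirement suggests the preview's name differed slightly):

```c
#include <Security/Security.h>

/* Ask code on disk for the requirement that identifies "the same program"
   in the future. Caller releases the result. */
SecRequirementRef CopyDesignatedReq(CFURLRef appURL) {
    SecStaticCodeRef code = NULL;
    SecRequirementRef dr = NULL;
    if (SecStaticCodeCreateWithPath(appURL, kSecCSDefaultFlags, &code)
            == errSecSuccess) {
        SecCodeCopyDesignatedRequirement(code, kSecCSDefaultFlags, &dr);
        CFRelease(code);
    }
    return dr;
}
```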

Which again, in the case of Apple's Mail.app, leads us to: the anchor is Apple, and the identifier is com.apple.mail. If for some reason you don't like that, and you want a different designated requirement, you can just explicitly stick one into your code, again as an optional argument to codesign, and that's what will be returned.

This is a really, really short glimpse into what the API looks like. If you are desperately scrambling through the system headers at this point, you won't find the code signing headers in your WWDC preview because we haven't decided whether to actually make them public yet. We are not sure how many of you will actually want to call the API as opposed to just sign your applications. Don't misunderstand me here.

You are all expected to sign your applications, no exceptions. But some of you may actually want to be on the other side of this gig. Some of you may want to identify applications, remember the identity of applications. If you want to do that, you need to call the API.

[Transcript missing]

We have, well, let's get to the next slide. The APIs are very much based on Core Foundation. As a matter of fact, our API handles are Core Foundation objects. So you can use CFRetain and CFRelease and stick them in CFDictionaries and all of this stuff. And, you know, they behave normally; they behave as by now you should expect them to behave. The code is part of the Security framework, so you'd be linking against Security.framework.

We have three API object types, a SecCodeRef for identifying running code. That's what you usually use. A SecStaticCodeRef is for code on the disk, sort of like a bundle, but includes things that aren't bundles like, you know, tools and scripts and other single file things. And a SecRequirementRef, which is a code requirement. And that's all I'm really going to tell you about the API. So if you want to know more, go ask, please.
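
Putting the three types together, a minimal verification sketch (again with the names as they later shipped; the preview API wasn't public, so treat this as an assumption, not the session's exact interface):

```c
#include <Security/Security.h>

/* Verify that code on disk is intact and satisfies a requirement. */
Boolean CheckCodeAtPath(CFURLRef url, CFStringRef requirementText) {
    SecStaticCodeRef code = NULL;
    SecRequirementRef req = NULL;
    Boolean ok = false;
    if (SecStaticCodeCreateWithPath(url, kSecCSDefaultFlags, &code) == errSecSuccess &&
        SecRequirementCreateWithString(requirementText, kSecCSDefaultFlags, &req)
            == errSecSuccess) {
        ok = (SecStaticCodeCheckValidity(code, kSecCSDefaultFlags, req)
                  == errSecSuccess);
    }
    if (code) CFRelease(code);
    if (req) CFRelease(req);
    return ok;
}
```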

A couple of notes on what's in the seed. This is a big thing. We're still implementing it. We're not done implementing it. We probably won't be done implementing it for a long time. But what you have on your DVDs is something that works. It's self-consistent. I mean, that was a WWDC seed that I was demonstrating on. The data format is going to change, so something you sign today will not verify when Leopard ships.

So don't worry about it. Just sign it again when you're getting ready to get real about this. One little warning: If you are signing individual single files, don't do this to files that have resource forks, because right now we're sort of stealing the resource fork to store the signing data. That's not a permanent condition. We're... That will be fixed soon.

Just until then, don't do that. One difference to the final Leopard version is that in the seed, and only in the seed, you can use any digital identity that can sign anything at all to sign code. We're basically not checking the marker that says "is valid for code signing." Primarily so you have an easier time playing around. If you don't have a code signing identity, well, if you've got one for signing emails, it'll work.

And one other caveat, for reasons of we're not quite done implementing it: if you're calling the dynamic verification API, you have to be root. Sorry, that'll get fixed. And the API is not there, officially. What will we have in Leopard? What you've seen, obviously. We already have the keychain machinery hooked up to the code signing APIs, so keychain items created or added by an application that is signed will remember the application by its code signature, which means that you're going to lose all of those precious dialogues you've been coming to expect. MCX and other parental control features will use this, and they intend to restrict what can launch based on it.

So one of the better reasons to get serious about signing your applications for Leopard is that you may otherwise run into situations where grandma or the IT manager simply won't allow anything to launch that isn't signed. So keep that in mind. There's also a firewall feature that assigns essentially port configurations based on an application's identity. That too is probably going to be much more painful if you are not signed.

Recognizing code, either your own code or somebody else's code, that's the other side of the game. That's where you would be calling the API. As I said, it's not currently decided whether the API will be officially released as an API for Leopard or not. If you're interested, talk to your developer evangelist. Hi, Craig.

So what do you do now? Well, let me tell you one last time: make sure your code is immutable. If your code doesn't work right when it's stuck on a read-only disk image or on a CD-ROM, then it's not right. Fix it. And that's not just for code signing.

There is a raftload of other technologies that don't take easily and happily to applications modifying themselves. I'm not the first one to tell you, I'm sure you've heard it before; if you're still sort of kind of dragging around and going, "Yeah, we'll get around to it, maybe next year," this would be a really good time to get around to it.

And in particular, configurations, preferences, and other mutable data really have well-defined, good places to go. Just, you know, read the documentation. It'll tell you where to put them. There are even functions all over the place in Core Foundation and in Cocoa that make it very easy to put them in the right spot. So please do.

If you feel like it, make yourself a digital identity. Go around, sign your programs, and, you know, just have fun. A signed program shouldn't behave any different from an unsigned program, except when it's making keychain calls, and it should behave somewhat better than an unsigned program. So, in particular, if you are calling the keychain APIs, go sign your programs and see if something weird happens. And I'm not talking about missing dialogues. I'm talking about stuff that isn't working. Let us know. You know, it's new.

And absolutely do plan to sign your code in the Leopard timeframe. There's no big bang here. We're going to let you know in the release notes of some developer seed when the format change happens. After that, you can go off and ship signed applications before Leopard ships, and Tiger just won't notice. It'll ignore the signatures, and your application is a little bit bigger, about half a percent bigger. But there's no reason why you have to wait until Leopard's in the stores and people salivate before you can actually go off and sign your applications.

And, well, let me tell you that one again. If you are a medium or large-sized company with departments that have responsibilities, then you probably want to find out what department has the responsibility for digital identities and tell them that they need to make a decision here or there. Because they'll probably go into kind of a shell-shocked state for a moment. And then they'll ask you a lot of questions. And then time passes, and then eventually you'll get your signing identity.

That's Leopard. This is a big feature. It's a really big feature. It's one of those features we could never do if we'd had to get it done in one release, so we won't. What are we planning to do going forward? What's sort of the bigger idea here? Again, we're trying to extend the notion of code to everything that reasonably could be considered code.

And it's a wavy line, it's a blurred line. Is an Emacs initialization file code? Turns out yes, because it's fed to a Lisp interpreter. Is your application's configuration file, your application's preferences, code? Probably not, but what do I know? It's your application. If it happens to have a language interpreter in it, maybe it is.

But what we will do is push that notion of code outwards and include more and more different kinds of code as the requirements are coming up. Obviously, the ones that are bigger security holes and the ones that are more interesting will get their codeness first. And maybe others, you know, will take five years, but we're definitely going in that direction.

Crunchy shell and chewy inside, yeah. Traditionally, security has been done with crunchy shells. You put sort of a layer out around your code and you defend yourself vigorously against the evil that comes through. You check all of your arguments at the APIs. And then in the rest of your code, you sort of assume that it's OK, because it went through the crunchy shell intact. And that works decently well. It works better than not checking at all, obviously. But it means that you only need to get one hole in the shell, and then you're on the inside, and ooh, I can do whatever I want in here.

We'd like, going forward, to instead have program systems be sort of groups of subsystems that actually identify and defend themselves, not just from the evil outside, but also from each other. So the chewy inside means that once you've actually intruded successfully into some piece of the whole puzzle, you don't automatically just bounce around and take the rest of them, because there's multiple sort of membranes between the different pieces of the system. And code signing actually helps you in that modifications of one piece can be detected by the next piece over.

So the classic attack of: great, I'm inside, I'm going to buffer overflow, I'm going to modify the code, and it'll just do something completely different. That'll be a bit harder.

[Transcript missing]

There's always more information. Here's Craig. He's waving his hand at me and telling me that I'm slightly exceeding my allotted time.

In case you're wondering what those code signing certificates are, RFC 2459 explains it to you in excruciating detail. There is one URL, the second one here, that explains how you can put Info.plists into your single-file executables. In case you didn't know that, you can do that. Yes, you can. And since code signing is based on things found in your Info.plist, you may want to do that, even if you don't right now.