
WWDC03 • Session 101

Security: Architectural Overview

Core OS • 1:05:27

This session explains how the Mac OS X security technology foundation is architected from the high-level interfaces, such as those found in Keychain, down to the lower levels, such as the cryptographic libraries in CDSA. We also discuss how the APIs at the various levels are related and how they are to be used with each other.

Speakers: Craig Keithley, Perry "the Cynic" Kiehtreiber

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Good afternoon. I'm Craig Keithley. I'm the security technology evangelist in Apple's Worldwide Developer Relations Group. I'm really pleased by this turnout. Security is a really important part of our architecture, and it should be an important part of your applications and solutions as well. Today's session, we're going to give you an overview of our security architecture. We're going to go into a bit of the structure and what pieces are there and what they're for and how you should use them. So without further ado, Perry the Cynic.

Okay, good. I'm glad somebody cares about security. Never know, five in the evening. Okay, security is different. Most features, you know, you know you want it, it'll wow the users, it'll sell lots of copies, that's why you're using a technology and you just want to use it right.

Security? There are a few applications where security actually is the main feature, you know, your typical encryption utilities and stuff like that. But for most of you, it's something you put into your applications so things don't go disastrously wrong and you get really bad press, you know, like a security violation and public disclosure, that kind of stuff.

So for most of you, security is sort of a defensive kind of thing. There are situations where it sort of adds to the value of a feature because you can make it work better than it could if you didn't have any way of determining who's who and who's allowed to get what they're supposed to be getting.

But I think for most of you, the experience is that security tends to get in the way of your application. And I've certainly seen a lot of application programmers that spent a lot of time just making that bad security get out of the way so they can get their feature to work.

One thing that you've probably all noticed at least once, and if you haven't, you will: security operates on the weakest-link effect. There is nothing that is absolutely secure, but if your stuff is the thing with the weakest security, the weakest application security-wise or the weakest library security-wise, the attackers are going to come after you because you're the easiest target. So like the old joke with the two people in the forest, which we've all heard, so I won't repeat it. Your task primarily is not to be the easiest mark when the bad hackers come calling.

So, what are we going to talk about? This is security architecture overview. So, you're not going to see any lines of APIs scrolling over the screen. That's not the level of detail. I am going to tell you what we've got in big brushstrokes. I'm going to tell you what to look out for, how these things are supposed to be used, and basically, if you have a problem, which document you go to first, which API header you go to first, to see if maybe you can solve your problem there. Well, technologies, what have we got? A lot of what we got is industry standard.

It's UNIX. We're all making a big deal out of how this is really UNIX, so you'll find UNIX security technologies, all the standard stuff. There is Mach 3 under there. Most of the time you won't notice, most of the time you don't have to know, but if you know something about Mach or if you run into something having to do with Mach ports, that's a part of the security here. You can certainly break an OS X system really wide open by doing stupid things with Mach ports. We use the common data security architecture API infrastructure that's an open group standard, and it's in there. That's how you do crypto.

We have pretty darn good support for Kerberos 5, this time around for Panther. As a matter of fact, we have a strategy to Kerberize just about every client server application. And you'll find a lot of the open source stuff in there: SSH, OpenSSL, all that good stuff. If you can't find it in the system out of the box, you can probably just compile it there. You may have heard of UNIX ports, which makes it pretty easy to get stuff onto OS X if it's reasonably portable. And let's just think.

Okay, well of course we can't just do the open source stuff. We want you to pay extra money. We want everybody to pay extra money, so we have Mac OS X specific stuff. We have Keychains. Everybody here know what a Keychain is? This is where you put your secrets so they don't leak out. We have something called the Authorization API, which you may not have heard of, but you should, because it's pretty darn useful, particularly if you make local client-server arrangements. It's basically a way to deal with authorizations in the system.

We have X.509 Certificate Support and Associated Trust Management. That's, we think, a heck of a lot better than what you get out of OpenSSL. I put directory services on here not because it's something that the security group does, although we talk to those folks once in a while, but if your security problem is how to look up users and find them and enumerate them and figure out what they're supposed to be allowed to do, directory services is probably the API that you want to go through. Alright, next level of detail. As I said at the beginning, as usual, keep your questions, write them down, you can ask them at the end. We'll stick around for as long as it takes to answer all of your questions.

Which one of you is not comfortable just dealing in Unix terms? Let's admit it. Raise your hands if this Unix thing just freaks you out and you're sort of used to the OS 9 way of doing things. All right. Thank you. So, very briefly for you, unlike OS 9, on UNIX, processes are hard separated with address spaces. Each process has its own address space, and that's really the main point of security in UNIX.

Within an address space, it's the OS 9 game all over again: everybody can look at each other's data if they can only figure out where it is. You can't really separate or keep secrets within one process. So, the UNIX trick, to the extent that it is a trick, is if you want to protect data, stick it in its own process, or stick it in its own file managed by its own process, and then that process can defend the data against everybody else who's sitting in a different process. What you have is user identities, user IDs, numbers, names, and groups.

You have the UNIX file system, which is basically your primary way of labeling the data with who owns it and who gets access to it.

And, uh, you have the magic root user, user number zero, who gets to do everything in the system that he wants. At that point, my presentation said, sort of, "the nuclear weapon of Unix," and they told me, "Don't do that. It's bad." You're not supposed to use scary words anymore.

And we have the setUID facility, which is essentially the one and only way that you can get access to a user ID if you don't already have access to a user ID. That's basically the one Unix mechanism for elevating privileges or getting access to stuff that you don't already have access to.

And cautionary words about root: always a good idea. Root is dangerous, root is omnipotent, root can do anything. Root can mess you up, can delete your data, completely corrupt the system. So be very afraid if you write code that runs with root privilege. The primary advice is don't do it if you don't have to. If you have to, then... Know that you are dancing in a minefield and either be an expert in how to do this right or hire yourself an expert.

Writing root-level code, if you're just sort of kind of understanding Unix, is just not a good idea and it's worthwhile getting yourself at least a consultant who understands this stuff if you're not comfortable with it. One principle about root code: don't make it big. Make it really, really simple.

Make it really, really small and make it really easy to understand. Ideally, the code that you write that runs with root privilege is about a page or two of source code. Of course, yeah, okay, so your application may have 200,000 lines of code, so what do you do? Remember process separation? This is how you do security in Unix.

Take your root code, make it as small as possible, stick it in a separate process so it runs as a process separate from your big application, and then let the two talk to each other in a secure way. I'll tell you how later. We call this factoring. It's a little bit like the factoring the OS 9 folks among you went through when you did AppleScript and you factored your application. Anybody remember that? It's about the same idea.

So, standard UNIX, marvelous UNIX. Not quite totally normal UNIX. There is a Mach microkernel in there. As I said, most of the time you don't notice, but it is in there and it is not separated from the UNIX kernel. The two are sort of sitting there like Siamese twins.

So, you have to understand that sometimes it is possible to get things out of UNIX going the Mach route that on a normal UNIX system you wouldn't be able to get. For example, there are ways to get root privilege by going through the Mach port facility. So, when you are doing security analysis, if this is the kind of thing you do, keep that in mind. Otherwise, for the most part, don't worry about it.

The root user doesn't actually exist by default. Well, okay, it exists, but it doesn't have a password, so you can't log in as the root user. An administrator can go in, turn it on, give it a password, it'll be a perfectly nice user, but by default, it doesn't exist. And that's by design.

We have invented a class of users called administrators, which technically is just those users that are in the admin group. And administrators can actually do a lot of things to your system. If you look around your Mac OS X system, Jaguar, Panther, earlier, doesn't matter, there's a lot of directories that are writable by group admin.

So admin users are almost root. Frankly, an admin user, if he knows what he's doing, can get to root pretty easily. So look out for that group. Make sure you don't accidentally create directories that belong to admin or are writable by admin unless you mean to, because that's basically opening up the system.

Just a few words about Mach. It's in there. You probably don't care about it. If for some reason you actually want to use a Mach interface, it's a different world. It's not like sort of Unix. So get yourself a book, you know, a nutshell book or whatever your preference is. Learn how to do this right. Basically, Mach ports, Mach message ports are the big thing in Mach security. A Mach message port is an access right. You can pass it around between processes and, well, this is how you do security.

One warning word, if you know about Mach and you want to play around with it in OS X, we are actually using the Bootstrap port facility quite extensively. We're using Mach Bootstrap subsets. Again, if you don't know what this means, never mind. But if you do, we are using this. Don't expect that every process in the system has the same Mach Bootstrap.

Common data security architecture. It's an open group standard, you can tell by the word. We implemented that for Mac OS X. It's a pretty complete implementation. This is not a port of the Intel reference platform. This is a completely new implementation. We've open sourced it. You can get it out of the CVS repository, look at it, play around with it, be impressed with how great it is.

It's C++, in case anybody cares. Basically, anything in the system, well, almost anything in the system that does cryptography is actually doing it through the CDSA APIs. So, whenever you see something doing encryption, whether it's disk images or SSL or anything else, chances are, ultimately, it's calling down into the CDSA layer. So, what you see is that many times you're going to end up using the CDSA layer implicitly by calling higher APIs.

So, it's really great. I mean, the open group standard is like 600 pages. I mean, you can spend many, many weekends just reading the standard. And it's a very, very powerful set of APIs. It's very flexible. It's all pluggable with plugins. It's also very verbose. I mean, it basically takes about 50 lines of C code to just start the thing up, you know, calling initialization and loading modules.

So, unless this is something that you really need to learn because your job is doing cryptography on OS X, my advice would be that you should try to use higher-level APIs. For example, if you want to do SSL, call the SSL APIs. They'll do all of this nasty stuff under the hood.

There are situations where, you know, there isn't a higher-level API or it doesn't quite do what you want. One of the features of our APIs, the Apple APIs, is that in almost all places, if you look, there is an API function that gets you CDSA data structures out from underneath.

So, if a higher-level API gets you 90% to where you want to go, you can get the CDSA module handles and attachment handles, make a couple of calls, get that special extra option you needed, and then go back up and continue on the higher-level API. So, that's really how you should look at this.

All right. Big building block number one: Keychains. So, you all have one. Well, at least if you have an OS X system that you've ever logged into, you all have a Keychain. At least one. Because the system makes one for you when you log in for the first time. A Keychain is a file in your home directory where you can stick secrets. Passwords, keys, all that stuff that you don't want just everybody to know.

You could put it on a sticky note or write it on a piece of paper. The nice thing about Keychains is that they actually encrypt the data. So if you log out and you walk away and somebody carries your system away in his car, they can't get at your secrets.

Because, as the line item here says, they're offline safe. That literally means that short of calculating for a couple of probably hundreds of thousands of years on the fastest known platform, there is really no way to get those secrets. So you can't get those secrets out of a Keychain if you're not around. That's assuming you picked a good password.

But I hope you all know about the importance of not picking your mother's maiden name as your password. The items in a Keychain are protected by access controls, specifically by CDSA access control lists. I'll talk about that in a couple of slides later, but keep that in mind, it's really powerful.

If you look at it at the CDSA layer, these things are databases. They're really actually databases complete with schemas. You have different item types. Each item type has a different schema. You have a set of typed attributes assigned to items of a particular type. And behind your back, this stuff is actually done by a system daemon.

Remember, again, Unix separation of address spaces? All of the good stuff, all of the secrets, are not actually sitting in your own address space. They're handled by the security server daemon. So even if some bad virus actually manages to grab ahold of your application, all is not lost. Some is lost, but not all.

Scalable APIs. Hmm. Well, there's a single API function for "Store this secret in my keychain somewhere, please." If all you want is some bag to stick your secret password into and then get it back out later, that's the only call you need. Just store that for me. You give an account name and a service name just so we can tell them apart, but that's all there is to it.

And there's one other call, which is, you know, get stuff out from my keychain and give it back to me. Cool. Not very much detail to this, of course. So if you actually have to deal with a situation where there's multiple keychains, yes, you can have multiple keychains, you can drop down to a somewhat more interesting API with a lot more arguments where you can say, you know, which keychain and under what circumstances. You can search through them and do all kinds of interesting stuff with it. If that's still too simple for you, if you really, really need to do the nitty-gritty detail, you can actually drop down all the way to the CDSA API level and manipulate keychains from there.

And that's probably hundreds of lines of code, but it lets you do anything that is physically possible to do with keychains. So this is your choice here. Of course, you know, the higher the API, the simpler the call, the lower the API, the more work. But at least you got a choice.

So, how is access to a Keychain and the items in the Keychain controlled? These things are UNIX files, at least right now, so if you don't have UNIX file access permissions, you can't get at them. So, if you want to make a Keychain that's just yours, you can use the usual UNIX commands or the finder, get info to make it just readable by you and not anybody else, and that's fine, that's security.

The next thing is a passphrase or some other secret that locks the whole thing. This is basically the key that encrypts the data in your Keychain. If that passphrase, that secret isn't around, nobody can really get at the contents. As long as you pick a good passphrase, that makes it, as I said, offline safe. You can feel pretty confident.

We're 99.99% sure that we didn't make a mistake there. That in particular means if somebody walks away with your PowerBook, which has a Keychain on it, which has your brokerage account password in it, that's okay. As long as you didn't set your Keychain to stay open forever, you're safe. That's a good feeling.

So we have the master unlock, what we call the passphrase, for a keychain. And then the next step: for each particular item, there is an access control list. Now, these access control lists can be as simple as "I don't care, anybody," or "put up a dialogue and confirm with me before you allow access to this item," or it can be a list of applications, like "let Mail.app use this, but ask me if anybody else tries."

That's actually more or less the default, the creating application gets free access and everybody else puts up a dialogue. The dialogue's there mostly so if you end up with a virus that tries to roam through your keychain while it's open, you get a chance to figure out that somebody is doing something weird here.

In particular, you don't need to trust the file system a Keychain is on, because the Keychain, as I said, is encrypted. The secrets on it are encrypted. That means that the only thing the file system ever sees is gobbledygook, as far as the real secrets are concerned. I mean, the structure of the Keychain is understandable. You can see the items, but you can't see the secret. Now, this means that it's actually, from a cryptographic point of view, completely safe to put your Keychain on your iDisk. even if you don't trust Apple.

Even if you think that Mac.com is run by evil alien infiltrators who read all the data that's going through Mac.com, you can still put your keychain there, because all that's getting on there is encrypted data. And the only one who gets to actually see the clear text data is the security server daemon on your system. Same thing, of course, with AFP servers and NFS servers or removable volumes. It doesn't matter. You don't have to trust the file system.

Yeah, well, advanced stuff, sort of. You can have any number of Keychains. You start with one. You can make new ones in a little utility application called Keychain Access. You can make as many of them as you want. Why would you want to do that? Well, perhaps you want to make one with some secrets in it that you carry around with you, you know, one of those little USB dongle things or a zip disk or whatever strikes your fancy. Maybe you're comfortable with carrying around some of them, but not all of them.

Generally, if you have more than one Keychain, things are arranged for you so that when you search for an item, you actually end up searching them all. There's a search list that is part of your preferences. You don't have to put all of your Keychains on that search list, but that's what you get by default, because that's what normal users want.

Portable does not mean that you can take it to your Windows box and it'll do anything useful there, but it does mean that if you take a Keychain, put it on your USB dongle or a Zip disk or wherever, you go to another OS X system and you stick it in there, it'll work. It will require your passphrase to actually access it, but there's nothing particularly specific to the system where you made the Keychain. It'll work on any OS X system.

If you want to store things in a Keychain and find that none of the defined item types really do it for you because you need very special attributes, for example, you can extend the schemas on a Keychain to add your own item types. That's definitely advanced stuff. I don't generally advise it because it's much easier to just shoehorn it into one of the existing data types. But if you really feel that that's what you need, call developer support. We can work with you. We can either tell you that that's not what you should be doing, or we can show you how to do it.

In addition to these user keychains, you know, the one that each of you has automatically, there are also keychains in the system, new for Panther, called system keychains. They're normal keychain files, except they don't belong to a user, they belong to the system, and since there isn't any user around, they're not actually unlocked with a secret that a user types in. They're unlocked in other ways.

The only time when you actually care about this is if you're writing system daemons, you know, things like PPP daemons or message servers or, you know, generally if you're thinking of writing something for OS X server, maybe you're a candidate for system keychains. Look it up under that term. The same APIs work for system keychains. If you're writing a system daemon, you will automatically work with them by default, because the rules are a little bit different.

Authorization. Another big building block of security, a lot of code that went in there. The catchphrase is: The Authorization API is about authorization, not authentication. Yeah, gee, what does that mean? It means that this is about whether to allow a privileged operation to proceed. It's not about who the requester is.

If that's sort of too subtle, I'll try to work out the difference a little bit as we're going along. There is, in your OS X system, and has been for quite a while, an authorization configuration database. It's currently in /etc/authorization, although eventually we may move it elsewhere. So forget I told you about that. And that's the place where an administrator can set authorization rules. Basically rules that determine under what circumstances the system lets you do certain things.

Authorization has the built-in capability to do what people usually call "single sign-on." Basically, what that means is that once you've typed in whatever is needed or otherwise proven that you're allowed access, you can remember that you did that and carry that credential over to other operations. You may have noticed, for example, if you're not an admin user, you open up preferences. There's this little lock icon.

You have to click that. It asks you, "Show me your admin password." If you then go over to a different preferences panel, it doesn't ask you again because even though that other panel does a separate authorization, it remembers that you just proved you're an admin. Well, you're probably still an admin.

This is a pluggable architecture. You can, if you need to, write plugins to add authorization methods. And if you're a Unix kind of person and you're wondering why we're not just using PAM for this stuff, since PAM is, you know, pluggable authentication: because we think that ours is a heck of a lot more flexible. But if you have your heart set on PAM, we're actually gatewaying both ways to and from PAM, meaning there is a PAM plugin that can trigger an authorization check and there is an authorization rule that can run a PAM chain. Thank you.

All very theoretical, I know. And this is Keynote, so I tried my hand at graphics. This is your program. This is some server. Your program would really, really very much like that server to do something for it. Unmount the CD-ROM, reboot the system, unlock the secrets of the universe, whatever.

How can this server trust you? I mean, who are you? Why do you want this? Why should you be allowed to do this? Okay. So, your program calls the Authorization API and makes an AuthorizationCreate call. There's a name in there, that's just a character string, and each of these strings, we call them right strings, mean something different.

They're just used by convention. And of course, we're using a dotted hierarchical notation here. The authorization API hands you back what we call an authorization ref. This is one of those opaque, you don't have to worry what's in them, don't ever look at them, kind of abstract handles.

You take this authorization ref and you hand it together with your actual request to the server. The server looks at your request and hands that authorization ref with a call called AuthorizationCopyRights back to the Authorization API and basically asks, "This guy over there that sent me this handle, is he supposed to be allowed to do this?" The Authorization API does something incredibly magical and decides whether you're supposed to be allowed to do this and either sends back, "Sure, go ahead," or "Eh-eh." The server basically does it or doesn't do it and sends its response back to your program. Simple.

Depending on which side you're on, that sort of means different things. If you're on the program side, what you're doing is you're creating an authorization and handing it over to a server. That's all you're doing. If you're writing a server, then what you're doing is you're taking authorization requests from your clients and then you're checking them.

If you're doing what I told you about dealing with root privilege, namely factoring your application into a little part that has root privilege and a large part that doesn't, you're actually going to do both things. Because that server will be your little factored program and, well, your program is the rest of your program.

I don't know if you actually care what's happening there, but I'll tell you anyway. When you are making these authorization calls, you're actually talking to the security server daemon in the system. And what it does, behind your back, is... I should have checked those slides one more time. Anyway, it is talking to a UI daemon that talks to the user behind your back.

So, the prototypical application of the authorization API, and remember the little lock icons in the preferences, is that a dialogue comes up and it says, you know, "Prove that you're an administrator. Type in the administrator password for, you know, some administrator account." This dialogue does not actually come from the application that you are working with. It doesn't come from preferences.app. It doesn't come from any kind of background server. This actually comes indirectly from security server. That's important because, you know, the application actually never sees that admin password.

You're talking, when you are typing in your admin password, you're talking to a system daemon that you better trust because it's part of the system. The result will be passed to some server, but not the secret that is actually being checked here. So again, Unix, different processes, keep things apart, good for security.

So, what ingredients have we got here? You have rights. These are these... They're actually just ASCII strings, no fancy Unicode or anything. We recommend that if you make these strings for your own use, you use the Java convention of basically taking your company DNS name and reversing it. So if your company happens to be froboss.com, then your right strings would be called com.froboss. and whatever's after that is up to you. We are also defining a bunch of these things for the system. They typically start with system. when we define them. There's a couple of other ones here, but for the most part, this is how we name them.

Each of these right strings has a different meaning. These meanings are there by convention. There's nothing magical about the characters in the strings. There's no automatic mapping to system services or anything. It's just that eventually, when you make authorizations based on these rights, they'll go to a server and the server will check for a particular right in order to determine whether it's supposed to do something for you or not.

If you're wondering which right strings to use as a client, look up the documentation of the server or the system service that you're calling. If you are writing a server, you're going to make up your own. As I said, our recommendation is that you use your reversed .com name and then just use something that's unique within your company.

Credentials. Things that you are, or that you have, or that you can prove. Again, the prototypical credential is: show me that you have an admin password. That happens through that dialogue that we've already talked about. There's other types of credentials. This is pluggable. If you have a particular kind of credential in mind that you thought would be really cool in the system, you can write a plugin and shove it in, and it will become available for authorization rules.

Credentials are shareable. As I said, we have the single sign-on capability in there. If you share a credential between authorizations, then you only have to enter it once, and it automatically carries over to other authorization rules that use the same credential. That's cool because you don't have to type in that administrator password over and over again. It's also potentially dangerous because if you're not really sure about who you're sharing it with, the user may accidentally authorize more than he thinks he does. Win a little bit, lose a little bit. Be careful.

They're persistent in that you can configure them to be remembered. Well, not forever; what it actually means is until the user logs out, or for a certain amount of time. This is another way of controlling the single sign-on thing. If you give it a five-minute lifetime, that means that it becomes available for satisfying other authorization requests for five minutes after you type in that password, and then you have to do it again.

And then there are rules. The administrative authorization configuration database, /etc/authorization, and now you all forget about this again, basically maps rights to credentials. So it says things like: in order to make a change to the network preferences, you have to prove you're an administrator. Or: in order to reboot the system, it's okay, anybody can do that. Or, you know, stuff like that. If you are wondering, /etc/authorization is a normal plist file. You can open it in the plist editor, there's a bunch of comments in there, you can, you know, play around with it.

New for Panther, there's actually an API for adding entries to this. That's really cool, particularly if you make up a new right for yourself, for example, to talk between your root-factored part and the rest of your application. Because, again, we don't really want you to know that the configuration is in this particular file in /etc.

What you can do now, starting with Panther, not in Jaguar, is you can call this new API, say, "I want to create this new authorization right." You also get to add some descriptive strings, and it can be localized. And you can do that in your post-install script or the first time you run your application. It will simply add a right to the system that you can then use for yourself and offer as a service to others.

So where is this stuff being used? Well, in your application soon, I know. But pretty much all of these little lock round button things that you click on that ask you for your admin password or not because you've already typed it before, those are all based on authorization.

There is a service called AuthorizationExecuteWithPrivileges, a very ugly name, intentionally, that effectively is a way of getting root access. And that we recommend under very specific circumstances, rather than as a generic facility to get root access. Let me say that one more time. This is for very specific circumstances, like, for example, third-party installers.

If you write an application that for some reason or other needs root access, then calling this is not the best way to deal with it. You actually are better off writing a little factored, very small setuid root application, well, tool, factoring your application, remember, and then using authorization to make sure that it's actually the rest of your application that's calling it.

New for Panther authorization: Bind Privileged Port. Those of you who do networking have probably figured out by now that in Unix you need to be root to bind to a TCP/IP port whose number is less than 1024, which at some point was considered to be a security feature by certain students in Berkeley.

But because of legacy and tradition and backward compatibility, it's still like that to this day. If wanting to bind to a low-numbered port is the only reason why you're considering gaining root privilege, starting with Panther, this call can save you the trouble. It's authorization-based, which means that, again, behind your back, not in your application, there might be a dialogue popping up asking the user to confirm that it's really okay.

Or if the administrator configured it differently, it'll just work. Or if the administrator is paranoid, it just won't work. But it cannot under any circumstances be any worse than the current situation, which is, if you're not root, you can't do it. So look at that one if you need low-numbered ports.

X.509 certificates. Can I have a show of hands on which one here actually knows what that is?

[Transcript missing]

For those of you who know what certificates, X.509 certificates, are, we have full support for them pretty much in the Panther system. Some of it is preliminary, but it's there.

That's a pleasant change from Jaguar, where there was a lot of promise in the system at that point. At the CDSA level, we support things like building certificate chains and evaluating them to make sure that they're cryptographically okay. That's part of the CDSA standard, and that's where it's implemented.

We have higher-level APIs for the stuff that you really care about. In particular, SSL is supported by an API called Secure Transport, which is our own implementation of SSL that's using CDSA for cryptography and doing a lot of useful, good stuff for you. In addition to the pure cryptographic verification stuff, one thing that you get if you call our certificate and trust support APIs is a little database hidden in the system, a per-user database, that allows the user to basically tag a certificate with a level of trust.

It allows the user to say things like, "Okay, this certificate's fine. I've looked at it. I'll trust it for network connections now, or I'll trust it for mail use now." Or, "That one there, I know it's cryptographically okay, but I hate this guy, so never let me use this." We call it the user trust database. It's persistent, it's per user and per policy. There is a session on Thursday that I think talks about that a little bit.

Actually, a lot. So look it up in your program. Oh, and yes, you can store certificates in your Keychains. There's an item type for certificates that lets them fit right into your Keychain files. And as a matter of fact, we recommend that that's how you store certificates. 'Cause it works, it's easy, and there's pretty good support for searching for certificates in Keychains.

Secure Transport, as I said, that's our SSL implementation. It's a pretty good SSL implementation. It implements the whole standard, client side and server side. It automatically ties into the user trust stuff that I just talked about. So by just using Secure Transport, you get that for free. I should give you one practical warning. There are some uses of SSL out there, particularly the OpenSSL-based ones, that are very, very lenient about how they interpret the rules of SSL. They pass a lot of stuff that they really shouldn't because otherwise the customers complain.

Some web browsers come to mind. By default, if you use secure transport, it will actually implement the standard, which means that if, for example, an SSL server has an expired certificate, it will not connect to that server. It will give you an error back saying, "I've got an expired certificate.

This is not working." There's a number of flags and options to Secure Transport to basically say, "It's okay. I know. Do it anyway." But these are not on by default. And we don't recommend that you just turn them on by default. Those of you transitioning from OpenSSL, it's a little bit tempting. Just set all those flags and it'll work. It's fine. But it's not the right answer because these are actually potentially problems of a security nature. So leave those flags off by default. If you must, give the user options to override with your favorite checkboxes and the like.

Better answer is to actually use our user trust APIs. We have canned UI that you can use to essentially present to the user the fact that something having to do with certificates failed and how and why. And that allows the user to then go in and express his opinion on whether to proceed or not. If you're using CFNetwork, the HTTPS protocol through CFNetwork, you're getting Secure Transport automatically because that's what it calls. So same things apply, same warnings and same congratulations. Very good, you're using the right solution.

This is what we provide for SSL usage. Preliminary. Not an official API. May change. Just to play around with. Okay, end of disclaimer. This is our preliminary implementation of CMS and S/MIME. CMS, the cryptographic message syntax, and S/MIME, secure MIME. It's basically the way you do encrypted email in the X.509 universe, and CMS is sort of a generic way of making encrypted bags of stuff. These APIs are actually ported and somewhat modified from what you will find in Mozilla. Some of you may be familiar with it.

As I said, it's not an official API yet, but this is what we're planning on making official next time around. So if this is your area of interest, if you're interested in encrypted email or otherwise making encrypted blobs of stuff, take a look at that, play around with it, give us feedback, tell us what's not working or what you think should be different. And again, of course, this stuff is using our user trust implementation and all of the good stuff automatically.

So, that's the major building blocks. Let me give you an example of something practical that you may have actually run into and how it's using those building blocks. Encrypted disk images. You all know what disk images are. If you make a disk image, there's actually a little pop-up that lets you say, "Make that encrypted." Cool, huh? Actually, very cool, because our cryptography is pretty darn good. Our implementation, we think, is secure, so you can actually trust that security, unlike certain third-party utilities I could think of.

Of course, the cryptography itself, the encrypting of the disk blocks is done through CDSA. You'd expect that, and that's what we do. The disk images are encrypted with keys based on a passphrase that you type in when you create the image. The passphrases are usually stored in your keychain, because that's where secrets go.

And therefore, as long as your Keychain is unlocked and your access controls are set up appropriately, you're actually not generally being bothered with having to type in the passphrase for your encrypted disk image again. It'll just fish it out of your Keychain. If you're being paranoid with your Keychain and you have it unlock itself after a couple of minutes, well, maybe you'll get a prompt to unlock your Keychain with an appropriate description that, you know, somebody wants to.

[Transcript missing]

Your encrypted disk images are actually pretty darn safe, even against physical removal. Because what happens when the thief opens the PowerBook? I mean, obviously if he reboots it, you're safe, because the mounted part of the disk image is gone and the thief needs to have your passphrase to decrypt it again.

But if he steals your PowerBook while it's asleep, he will still be asked for either the passphrase for your Keychain in order to reopen the Keychain, or directly for the passphrase for the encrypted disk image. If either of these fails, the image gets unmounted and your data is safe again.

And of course, when we actually go and we ask for the passphrase, either for the Keychain or for the disk image, that's again happening through a security server and the UI daemon that it uses. So the application that actually requests the mount or the other user interaction doesn't get to see your passphrase. So that's sort of how things get put together.

Having bombarded you with all of those technologies, I'm going to make up for that a little bit by telling you how to actually go and use them. This is sort of obvious. Find the right API for your job. Don't try to subvert some API that looks cool but isn't necessarily made to work.

[Transcript missing]

The fewer arguments you have to explicitly specify, the more defaults you get. And getting defaults is actually good, because if it turns out that one of the defaults is not such a great idea, we'll change them for you next time. If you explicitly specify every last option of an API call, then you are actually going to have to roll a new version of your application to make a change. So when in doubt, call the higher-level APIs.

I have seen a lot of creative use of APIs, security and otherwise. I mean, this is sort of an Apple developer tradition, you know, "Look, it's an API. How badly can we mangle it?" That's sort of fun if it's an API for drawing boxes on the screen or maybe for writing files, but if you're doing this to a security API, you are probably hurting the security of your program.

Because these APIs aren't just calls that you make and something wonderful happens magically and everything's cool. These APIs are part of a process. They're part of a way of working with your data. So when you go and start using a security API, actually read the part that doesn't just describe what the APIs are and what the parameters look like. Read the part on how to use them and then try as well as you can to actually use them that way.

Because the further away from the intended use you get, the more likely you are to get into some weird path that may not be as well tested as you'd hoped. And even worse, that may actually break the security model that you think you're using. So to the extent that you can, stay on the marked path and try to stay simple in your application of these APIs.

Security is not the kind of thing you put in the last two weeks of your development process. I hope you all know that. The best thing to do is to design it in. At least, you know, once you've scaffolded your program up, if you don't think you know that much about security, find somebody who does and have them look over it.

The longer you wait, the harder the wake-up sometimes is when you realize that you've done this wrong and there is no easy way to fix it short of rewriting major parts of your program. This is not Mac OS X specific, but I'll give it to you because you paid good money for this. Put the security in early. And if you don't think you're an expert, get yourself at least the advice of one. It's worth it.

Find the right API, he says. Yeah, what does that mean? If you need to store secrets, if you need to keep secrets, particularly if you need to keep secrets from other people, use Keychains, that's what they're for. That's where you stick secrets. If you need to provide some privileged service, either to the rest of your own program or to third parties, think authorization.

Authorization is specifically there to answer the question, "Am I supposed to do this now or not?" If you have a network connection, TCP/IP probably, to some other machine and you're worried about people snooping on it, use secure transport, that's SSL, use it right, and you basically don't have to worry about snoopers at that point.

Using it right is a little bit harder than it sounds. It's not as easy as just calling Secure Transport and saying, "We're cool now." But that's one level of detail that I'm not going to go into here. Read the documentation. It's actually got some good stuff in there. If you need to authenticate people over the network, go to directory services. That's its job. It deals with looking up users and figuring out what their attributes are.

If you work with X.509 certificates, then CMS is a good try. Again, not an official API yet, really. It can change. Preliminary, still. That's our answer for it. So, short of porting some third-party code, this is actually what you want to call. And if you need to do cryptography sort of at the base level, not for a particular purpose, but because your job calls for cryptography, then CDSA is what you're going to call.

There you are. And as it says here at the bottom, of course, life is never that easy. So you'll end up mixing and matching these things. But that is an initial map. Do's and don'ts. I love do's and don'ts. I get to tell other people what to do.

Wherever we recommend a particular API, like Secure Transport for SSL, use that. Don't go off and say, "Well, I've seen this cool open source thing over there, and I'm sure it's much better than this Apple stuff." So maybe there's a 0.1% chance that it actually is, but there's a 99-point-many percent chance that our stuff will actually work better on OS X than something that you port yourself.

In particular, whenever you find yourself in a situation where, "Am I supposed to do this now? I'm root and I'm doing something dangerous. Am I supposed to be?" That's authorization. Think authorization. That's what that's for. If you use that, you are actually slotting into a very interesting and fairly complicated machinery that allows an administrator to configure how permissive the security in his system is. If you do this on your own, you're going to end up re-implementing half of authorization and the other half is going to be missing. So try not to do that.

If you deal with X.509 certificates and you want to know whether a certificate chain validates, or whether to trust a certificate for a particular operation, the SecTrust APIs are what you really should be using rather than some generic X.509 certificate library like OpenSSL. Because, again, there's stuff in there that is OS X specific, for example, the user trust database, which if you call our APIs, you get for free. If you call, say, OpenSSL or some Mozilla library, nope, just the raw stuff.

Before you go off and use root code, try to find something in the system that already does what you need. For example, before you go off and become root in order to mount and unmount file systems, dig around a bit and you'll find that there's something called a volume manager that actually does these things for a living.

If you have to use root code, as I said, factor your application, make the amount of code that actually runs with root privilege as small as possible, and I really mean two pages. In rare cases, maybe five pages. If you find yourself writing ten pages of root code, you probably made a mistake. So go back and think about what you really need that root privilege for.

Of course, there are don'ts. I mean, there's never just dos. If you need to keep a secret, don't just stuff it in a file. Don't just stuff it in a file and make it readable and writable only by yourself. Put it in a keychain. It is safer. You can get into so much trouble trying to do your own secret storage. It's not worth it, believe me.

If you have communication between different processes because your client server or because you had to factor your program, don't try to come up with your own ways of authenticating the two pieces to each other. Use authorization. Again, that's what it's for. It's very good at that. You can basically get the code templates, stick them in your program, and they will, for the most part, work just fine for you. If you try to do this by hand, it's very hard to do it right. It will definitely take you longer to do it on your own than to do it with authorization.

If you are writing root code, and this is only a partial list of all the horrible things you're never supposed to be doing while you're root, don't ever load a plugin. Doesn't matter where you got it from, don't ever load a plugin. Don't link against any GUI libraries.

I've got nothing against GUIs. GUIs are wonderful, but GUIs are big. There's a lot of code in GUIs. GUIs have a nasty habit of loading plugins behind your back. Don't do that. Your root code should be a simple tool that the glorious GUI-enabled Cocoa or Carbon app of yours is going to talk to, using authorization.

Did I mention that? Good. And there's also a laundry list of dangerous system calls and library calls that any unique security textbook can tell you about that you are under no circumstances to call while you are root. Things like, you know, system and P open. There's about two dozen others.

Other things you shouldn't be doing: don't assume there's a single user on the system. This is an old assumption, particularly for those of you who come from the OS 9 side. I mean, there's always a user, you know? The user. After all, we're a personal operating system, right? No. There isn't always just one user. We've just introduced what's so nicely called fast user switching, which means that there can be any number of users on your system. And they're all logged in up there in the front. They typed in their password at some point. Their programs are still running.

One of them gets displayed in the front, but the other ones are alive too, and you have no idea which one of them might be the one that you want to talk to unless you actually belong to a particular user. And Keychains right now are files in the file system, single files. If you can avoid it, don't assume that will always be the case, because it won't.

Okay. If you actually have a little bit of spare time to think about how to make your code more future-proof or less in risk of having to be rewritten every time you release something, Use standard APIs. We have this fairly strong policy about continuing to maintain support for APIs once we call them APIs. So that's good for you if you call them.

[Transcript missing]

Don't assume that all access is based on passwords. That's already no longer true in the system today. There are some smart card interfaces and it'll become less true in the future.

And do test with different security configurations. Test with different authorizations, learn how to play around with the authorization database, and play around with the access controls on Keychain items. See if suddenly your application throws up dozens of dialogues. That would be embarrassing for users who care about security. I am running over, so I'm going to speed up like a dervish here.

Almost all of this stuff is Darwin. The only stuff that I've talked about that isn't Darwin is the actual GUI components. If you get Darwin from the open source server and install it, this is all in there. You've got CDSA, you've got the security server, you've got all this stuff. Anywhere where there's UI involved, you've got a stub that doesn't do anything. So, if you were really, really desperate, you could take Darwin code and, you know, hack it up to do UI again, but of course, why would you? It's only $129 to get the real stuff.

Quick list of what's new for Panther. A lot of the X.509 certificate support is new. The CMS and S/MIME support is new. You can now import certificate bundles, PKCS, if you don't know what that is, don't worry about it. System Keychains are new. There are two new frameworks called Security Foundation and Security Interface for Cocoa programmers that make nice canned user interface panes and views.

Authorization got the access to low-numbered ports call, and the APIs for programmatically adding authorizations are new. And you get more flexibility in controlling how the dialogues work that sometimes get issued. Keychain Access is a lot nicer now. Let's put it this way. Somebody stuck secure erase in. If you're the kind of guy who wants to erase something 20 times so the NSA can't get at your data, you can do it now. It takes a while. Okay. I am wrapping up now.

Somebody stuck this in, so this is the first time I've seen this. These are other things you may want to go to. Okay. Yes, they all sound really interesting. Session 109, Security Certificate APIs: if X.509 certificates ring your dinger or it's something you need for your program, then definitely go there. It'll discuss the APIs we have and how to use them. Kerberos, of course, if that's your area of interest. Oh yeah, security feedback. In case you didn't like my presentation, FF016, the security feedback forum, that's where you get to complain. Unless you want to do it now.

Craig Keithley, who officially is there to serve your every whim, and me. Yes, you can send me email, I don't mind. I don't promise to answer, but... There is, by the way, I should mention, a mailing list that Apple has; it's called [email protected]. It was originally created as a self-support and feedback kind of mailing list for CDSA users, but it's low-volume enough that if you have other security questions, it's probably okay to just ask them there. Use the usual rules for mailing lists: read the archive first, and be patient and nice. For more information, well, you know these places, of course.

This is where our documentation sits, and under security, you'll find what we've got. In particular, there's a section on the Keychain APIs and on authorization. There's the Apple CDSA mailing list. CDSA is an Open Group standard, so if you really want the 600 pages, you can get it from the Open Group.