Core OS • OS X • 45:39
Discover how you can use Lion's App Sandbox feature to protect your application's users from unintentional bugs or deliberate attempts to compromise security. Understand the App Sandbox security goals, how applications and user data are isolated from each other, and how to describe the system resources your application needs to get its work done.
Speaker: Ivan Krstić
Unlisted on Apple Developer site
Downloads from Apple
Transcript
This transcript was generated using Whisper and has known transcription errors. We are working on an improved version.
It's a great pleasure to be here today and introduce you guys to App Sandbox, a feature dear to my heart and the system that we believe is the most advanced desktop security system in the world. But before I do that, you'll have to bear with me because it's hard to appreciate just what we're doing and how we're going about it without understanding some of the history that's brought us to this point.
If you think about where we were yesterday, a figurative yesterday: the first virus that ever appeared was Creeper, on ARPANET in the '70s. And from then, it took about 30 years, until roughly 2004, for the count of known malware to reach about 100,000. And in another four years after that, it surpassed a million.
And today, if you talk to antivirus vendors and people that are in the business of tracking malware, they're seeing tens and tens of thousands of unique malware samples a day. Now, thankfully, Mac users were mostly spared from this situation, and App Sandbox is how we'd like to keep it that way, but I'm getting ahead of myself.
In 2005, there was almost no data theft malware spreading by email. This phenomenon just didn't use to exist. There was lots of malware spreading by email, but it was trying to do things to your computer, not steal your data. And then, sometime in 2005, the industry started seeing the first examples of this.
And everyone kept their eyes peeled, because the big question in the room was: how quickly would this grow? We had the opportunity to witness a new category of malware being created. And by 2007, the industry was registering over 5,000 different samples of data theft malware, malware just trying to steal user data, being sent around. And that's really sort of shocking, because the growth rate was in the hundreds and hundreds of percent and has kept going.
And you would think that maybe it was the little guys at home that were being attacked, and that if you were serious about protecting your networks and data, these attacks could do nothing. But actually, that's not true. No one has been able to really keep these attacks at bay.
And it's very hard to actually get good insight into what the situation is for governments, because they tend to be pretty secretive about it. But remarkably, just before President Obama took office, there was a government commission organized to write a report called "Securing Cyberspace for the 44th Presidency."
And it had key people from the military, from government, from academia, and the private sector. And they were basically trying to make some recommendations to the new presidency about what they should do with the cybersecurity threat. But to me, this report is fascinating because it's public. All of you can download it.
And because it actually gives a rare glimpse into some of the kinds of attacks and some of the numbers that the government has been experiencing. So in 2007, as an example, the Department of Defense, Department of State, Department of Homeland Security, Department of Commerce, and NASA all suffered what were called major intrusions from unknown foreign entities, including a compromise of the unclassified email of the Secretary of State.
And this kept going. The Department of State has basically gone on record as saying that they lost terabytes of information in these intrusions. The Department of Homeland Security has confirmed that the breaches extended to cover the TSA. And the situation at NASA was, in fact, so serious that they've actually seen launcher designs compromised, and they've imposed restrictions where, for several hours before a launch, they just pull the plug on their email system entirely.
What's going on here? This is not good. If you think about other situations in real life that have a security component, I think of driving, because that, if you look at the statistics, is not the safest activity. But people don't need physics degrees to understand how to drive cars safely. And somehow it seems like users need computer science degrees to drive their computers safely.
And even that's questionable, because lots of computer science people don't really understand security. But how did we get here? Why is this so different? So think about where we are with cars and car safety today. Before a car can be brought to market, there is basically mandatory government-imposed crash testing to check what the security situation looks like.
For all of the key systems involved in controlling the engine and transmission, there are redundancies in a number of cases. There are just backup computers that can take over if anything goes wrong. And there's this big understanding that when you have a system that's as complex and moving as quickly as a car on the freeway, eventually, even with the best engineering, things can go wrong, and you need to be prepared for that possibility. So, seat belts and airbags, right? Damage containment.
Even if you have the best damage avoidance systems on the market, systems that try to steer the car away if they sense an impending accident, ultimately accidents can still happen, and when they do, you want to have a containment mechanism like a seat belt or an airbag.
That's not the case with where we are today in computer security. The game we're playing is essentially that the defenders have to protect 100% of the system 100% of the time, whereas an attacker only needs one breach at one time. And because the industry has been emphasizing damage prevention but not really containment, that one breach at that one time means it's game over and the attacker wins. We've made some amazing progress, mind you, with damage prevention mechanisms, but the containment aspect has not received nearly as much attention.
So here, if you stop and really think about this hard, is what I'm going to be referring to as the unfortunate assumption, not just an unfortunate assumption, that has dominated desktop security for decades. And the assumption is this: every program that a user runs should run with the full privileges of that user. Or, if you want to put it differently: we should isolate different users on a system, but we should not isolate different programs run by the same user. And this assumption is really unfortunate.
And if you in fact try to figure out where it came from, it's kind of a fascinating exercise. This model was inherited by most modern operating systems from Unix, which came out in 1971. When Ken Thompson basically said, "Well, the program is the user, so we shouldn't really distinguish there," 1971 was seven years before we ever had an international packet-switched network. It was 12 years before we ever had a TCP/IP wide-area network, and it was 20 years before we ever had a Web. And today, that assumption is 40 years old.
What did untrusted code mean in 1971? The words had no meaning, because if you wanted to run code on a computer, you had to physically bring it to that computer on a punch card, or later a piece of magnetic tape, and run it. Today, every time you browse a website, you're executing untrusted code. So for 40 years we have had this unfortunate assumption in the industry about desktop security, an assumption that literally predates personal computing.
It is a model of sticks and stones that we're trying to use to defend ourselves today, and we need something better. We need a way to contain damage when it happens, regardless of ultimately why it happened. And not even just for malicious attacks, because even though that's what's usually at the forefront of all of our minds, programs can just sometimes run amok. There can be unintentional, perfectly benign coding errors and misbehavior in programs that can still have really bad effects. And we need a way to contain the damage that such programs can do.
So, how do we make that better wheel? Well, as an industry, we have a lot of experience from the last 20 years about what doesn't work. And we know chiefly that the unfortunate assumption of all programs running with the full user privilege doesn't work. We also know that security UI doesn't work, and there's this great quote in politics: "If you're explaining, you're losing." And this is true of security interfaces as well.
If you have to explain to the user how to be secure, you're losing. Here's what we show users day in and day out as an industry and expect them to do the right thing. We're basically conditioning our users to ignore all the security that we're trying to put in front of them. And thus far, the conditioning has been very successful.
Imagine, if you will, that you are zipping along at the speed limit down the 101, and there's a car that starts swerving, and you're about to hit it. And in your last moments before the impact, your eyes glance across the entertainment console, where there's a cheery question: "Hey, would you like us to deploy that airbag now? Would that be okay with you?" This is crazy. We can't keep doing this.
But we've also learned some things that do work. And chief among them is something called the principle of least privilege, also known as the principle of least authority, which is the idea that when you build systems, they should have no more privilege than they need to get their work done. In other words, there shouldn't be any ambient privilege that they get just because, for instance, they're being run by a particular user. Privilege should match what systems do, and there should not be any more of it.
So it's interesting, because it took us some time to reach this understanding. The PDP-10 actually had what was called a high-low memory segment system, which meant that the operating system would run independently from user code. But the feature just wasn't in demand, because the PDPs were giant computers used by highly trained people. For the most part, this didn't seem very valuable, but it cost money. So on the PDP-11, the protection was gone.
And Dennis Ritchie likes to tell the story of how, if you were sitting in a computer lab and everyone was using the same PDP-11, it was considered absolutely necessary as a courtesy to others that if you compiled a program and were about to run it, you had to yell, "a.out!" and then wait about a minute until everyone saved their work before you could run your program. Because once you ran your program, it could potentially stomp all over the operating system and just kill everyone's session.
So we did learn this lesson eventually as an industry, and we started separating out ever more privilege. We separated kernel land and user land. We separated user land into different little user lands for each user on the system. And basically, the state of the art in desktop operating systems around the time that Mac OS X and Windows NT came out had fully caught up with this idea that, at least when it comes to memory, we should actually have some real protection there.
And this story of iterations of less and less privilege is one that repeats over the decades. So on x86, we actually have CPU rings in hardware, and we have protected memory, and we have separated users, and we have processes that are separated and can't look at each other's memory. But this series of iterations of reduced privilege stopped once we hit the process boundary. In other words, today there's really no way, within a single process, to deliver different bits of privilege to different parts of that process.
Really, no mainstream operating system supports anything like that. And what this means is that the unfortunate assumption still holds. We still have all of a user's programs running with the full privileges of that user. And the reason this is bad, obviously, is that programs are growing: eventually, different parts of a single program need different bits of privilege.
But because there's no subdivision of privilege within a process, ultimately, if you have to give any part of a program some amount of privilege, really the entire program has it. And this should be clear to most of you, but it's easy to think through why it's impossible to try to separate out privilege within a single process: if you're using an unmanaged language like C, you can literally construct pointers to arbitrary parts of the memory space and go do whatever you want there.
So that was yesterday. Where are we today? The Internet really changed the game when it comes to acquiring software. We have tons and tons of software being written by tons and tons of vendors. It's become easier than ever before for computer users to actually download programs and run them.
And computers have gradually come to be almost always on. And so the challenge for the security industry has really become trying to isolate data between different programs, rather than trying to keep different users on the same machine isolated. In fact, a large number of machines only ever have one user.
So here is the challenge that we have with Mac OS. It is an incredibly powerful, incredibly rich platform. And it's a platform with a user experience that is centered around the file system. When you think about how you use your Mac, you think about using the Finder to find your documents, to find your data, and then you open those documents in whichever application you choose and you work with them. And apps have always relied on running with the full privilege of the user.
And because there has not been a damage containment mechanism, it meant that when you were writing apps for our platform, you could not tell us what those apps were really intended to do in some kind of machine-readable form, so that the operating system could construct a last line of defense and make sure that if something goes really, really wrong, the operating system can catch you.
So imagine the world's most boring application, called Watch Grass Grow. This application, which really shouldn't be able to do anything on your computer except maybe show some grass growing, can actually, if it were somehow to become exploited, steal all of your email and send it to Croatia. Which, by the way, is scary, because I'm Croatian, so take it from me.
Here's the reality we're up against, folks. We're building really complex systems. They always have vulnerabilities, and complexity is a tide that, thus far, the entire computer industry has not been successful in turning. Complexity is only growing, not shrinking, and we're already at the point where a single buffer overflow somewhere in your code really ruins your user's day.
And I don't necessarily even mean your code, because today, when you write an application, you're linking against frameworks and libraries, and potentially your actual code is just the tip of the iceberg of the code that's actually running. A single buffer overflow anywhere in any of that code is enough to ruin your user's day. And there is simply no limit on how that day can be ruined. There is really no limit on the kind of damage that Watch Grass Grow can do if it becomes exploited.
So I want to drive this point home, because I really think it's pretty remarkable. 1977 was the first time that a U.S. automaker, General Motors, put a piece of electronics into a car. It was an electronic spark plug controller on the Oldsmobile Toronado, a little purpose-built controller that really could do only one thing, and that was timing the electronic spark plugs. And they liked this idea so much that just four years later, there were up to about 50,000 lines of code running on different microprocessors in the cars they were putting on the market.
If you fast-forward a bit: in 2005, the F-22 Raptor fighter plane, still in service today and a key flying asset of the US Air Force, was running about 1.7 million lines of code just for core avionics, so really the mission-critical software. And the Joint Strike Fighter project, which was supposed to be completed last year, was up to about 6 million lines of code. This was, I think, about mid-2009 that they had this figure.
So I want you to look at this graph and the magnitudes of difference, because there's a punchline to it, which is that if you buy a car today, there's about 100 million lines of code in it. There are between 30 and 100 processors that total up to about 100 million lines of code. I hope all of you will be enjoying your drive home.
This situation can't keep going. And in very recent times, in the last few months, as I'm sure you've all heard, there's been a string of high-profile breaches and compromises across a number of different companies in the industry. And the kind of pain this is creating is not just for the companies being breached; it's personal information of users that's being exposed, leading to identity fraud and financial fraud.
Basically, it's users that are hurting, in terms of time, in terms of money, and really in terms of their ability to enjoy technology. Because they're learning very quickly that through just innocuous use of these things they like, they may be opening themselves up to nightmares like trying to recover their identity after it's been compromised, or having money stolen from their bank account.
We know that there is a better model for doing security, and here it is. If you think about how iOS works, there's been a Sandbox from day one with a very simple, very understandable set of rules. Applications on iOS cannot touch other applications. They cannot touch the system, which means that if there are mistakes in programming, if there are exploits that happen in these applications, the damage, the overall damage to the user and their device is quite limited.
And there's a nice side benefit, it also becomes very easy to uninstall applications because applications are isolated into their little containers, which are their own little spaces where they get to put their data. And because this story is so easy to understand and has such powerful benefits, the Sandbox on iOS has really become a key element of the overall security picture.
The way this has been implemented thus far is almost entirely through private interfaces, that is, interfaces that have not been available to developers. And on iOS, there's never been a need to make them available. But basically, there's a kernel enforcement mechanism called Sandbox that can gate a number of different things that programs do, especially as it relates to acquiring system resources. And we've used this to great effect, obviously on iOS, but also with daemons and system software on Mac OS.
But there is a problem, maybe more than one. To use Sandbox, you have to know ahead of time what resources an application is going to want to use so that you can create a security policy that the kernel can then enforce. And if you think about iOS again, this is pretty easy to do because applications are isolated in their own containers and aren't expected to be able to go and trudge around the file system.
What's more, Sandbox is just an enforcement mechanism. It doesn't really make it any easier for developers that want to be able to take bits of privilege from their program and somehow separate it out so that we don't have this issue where all the privilege is available to programs.
So the bottom line is, even though we have what we believe is a fantastic model in iOS, it simply wasn't possible for us to take it wholesale and somehow port it to Mac OS. We have to do more. And it's against that backdrop that I want to tell you about App Sandbox today.
The App Sandbox is a mechanism to aid in writing and securing graphical applications on Mac OS X. It is a mechanism that was designed especially with the Mac App Store in mind. And it is a damage containment mechanism. It is a mechanism that tries to limit the kind of exposure that a user's data has in the event that an application has become exploited or that there is an unintentional coding error or other misbehavior.
The way App Sandbox does this is by trying to control a number of features of the operating system, such as access to the file system and the network, with the goal of making it very hard for exploited applications to be able to steal, corrupt, or delete user data. App Sandbox uses the same sandbox mechanism as iOS for enforcement in the kernel, but adds to it a number of custom-tailored changes at almost every level of Mac OS to be able to provide a great security experience.
When we built App Sandbox, we had a number of design goals in mind. Knowing that popping up security UI is simply not an option, one of the key design goals was to try to find a way for the user's intent to somehow directly translate to security policy. If you only knew what the user was trying to do at any given time, you could create a perfect security policy around that.
But of course, we don't have mind reading machines yet. We wanted to make it really easy for you guys to have a technology that you can use to make it so that if your application becomes the one that's exploited, that the damage it can do is really limited.
And finally, and this is--this should be obvious, but it sort of bears repeating, we didn't set out to create a perfect security mechanism, a silver bullet to end all security problems of all time. No, we are not trying to do that, but we did set out to create a mechanism that would significantly elevate the bar for attackers that are trying to take advantage of applications on Mac OS.
Interestingly, when we enumerated these design goals and started building App Sandbox, we realized that if you look at what's prohibited by the Mac App Store policy today for submissions to the store, it turns out that App Sandbox adds enforcement for a lot of these already existing restrictions.
So in some ways, it's putting teeth behind restrictions that already exist, and you shouldn't think that this will somehow translate into completely new, completely draconian restrictions. These restrictions were already there. Those of you who have submitted to the Mac App Store have already followed them, and so App Sandbox is simply a way of enforcing some of those restrictions on a technical level.
Here are the key ideas. We want you, when you write your app, to tell us what your app is supposed to be able to do to get its job done. We're then going to take your app and put it in a container exactly like on iOS, meaning you're going to get your own scratch space for all of your preferences and caches and other things that are not user documents, that are just your application's data. And we're going to take that container and make it really only available to your app via your app's code identity. So there are no namespace collisions if two applications happen to somehow have the same name. No, really, your container is your own.
And for the tricky part: when it comes to actually accessing a user's documents, we're going to put control of that in the user's hands. We're going to make it so that only when a user chooses to open a document in your app does your app receive the ability to open that document. And that kind of access is not actually going to persist across relaunches of your application.
But we also wanted to do this in a way that the special cases that we all know and love, the Recent Items menu and drag and drop, should just automatically work. So the system we built has five key components, and I'm going to lead you briefly through all of them.
I already mentioned that you're going to be telling us what your application is supposed to be able to do. And the way you're going to do that is through what we call entitlements. The entitlements that you bind into your application's code signature tell Mac OS what it is that your application is supposed to be able to do. The entitlements are just a property list. This is not deep magic. And in fact, Xcode lets you edit these options graphically, so you don't even need to know, if you don't want to, that it's backed by a plist.
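To make that concrete, here is a minimal sketch of what the underlying entitlements plist might look like with just the sandbox opt-in turned on. The com.apple.security.app-sandbox key is the actual App Sandbox entitlement; in practice you would let Xcode generate and edit this file for you:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- Opts the application into App Sandbox -->
        <key>com.apple.security.app-sandbox</key>
        <true/>
    </dict>
    </plist>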
When you go to look at your targets in Xcode, you'll simply notice that on the Summary page you can turn on application sandboxing and choose which entitlements you'd like to give your application. The entitlements themselves are really, really simple, and even though we're not showing them to end users, we really want them to be so simple that a user could understand them if we did show them.
So if you keep that guideline in mind, that an end user should really be able to totally understand what these entitlements mean, and that if it's any more complex than that, it probably shouldn't be an entitlement, you'll realize that it's quite different from some other security models you may be thinking of, like Android's. Android has hundreds of permissions that you can choose for applications. We have fewer than 15 entitlements total in Lion. In fact, here they are.
This is it. You're looking at all of the entitlements that are made available. But I'll talk more about that later and especially in the session immediately after this one. Let's progress and talk about containers. I already mentioned that, like on iOS, we're going to, when you opt into App Sandbox, give your application its own scratch space bound to your application's code identity and make your application the king of that domain. And this is very simple.
There is no magic to containers. We actually set two environment variables, and that's it. Setting these two environment variables is enough to make it so that every Apple API you call to ask for the user's home directory, or for subfolders of the home directory like Library, resolves to your container instead. If the application now tries to directly call the open system call with a path to the user's real home directory, that access is just going to be denied by the Sandbox.
On the other hand, if you call an Apple API like NSHomeDirectory(), the result you're actually going to get back is within the container. It is the container. And of course, the container is within the Sandbox, so your application can read and write and happily do what it wanted to do.
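As a rough sketch of what that looks like from code running inside a sandboxed app (the /Users/me path and file name below are hypothetical, and the container location in the comment is where Lion places containers):

    #import <Foundation/Foundation.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, const char *argv[]) {
        @autoreleasepool {
            // Under App Sandbox, the Apple API resolves to the container,
            // e.g. ~/Library/Containers/<bundle id>/Data, not the real home.
            NSLog(@"Home: %@", NSHomeDirectory());

            // Going around the APIs with a raw path into the user's real
            // home directory is denied by the kernel sandbox.
            int fd = open("/Users/me/Documents/secret.txt", O_RDONLY);
            if (fd == -1) {
                NSLog(@"open() denied: %s", strerror(errno));
            } else {
                close(fd);
            }
        }
        return 0;
    }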
I already mentioned that we use the kernel enforcement mechanism from iOS called Sandbox. By default, it's really only the container and certain system locations that your application can access. The kernel will be enforcing that normally your application cannot get any access to the user's real home directory aside from your container.
So, I hear you ask, how is my application going to get at an actual document that the user has, then? Okay, I didn't actually hear anyone asking, but I thought it would be a good transition. We built a mechanism called PowerBox that tries to put this idea of translating user intent into security policy into practice.
Here's how it works. Think about the Cocoa open and save panels. We've all seen them a million times and probably never stopped to reflect deeply on them, but they're actually pretty remarkable, because they're as unambiguous and overt a declaration of user intent as any you can find.
If you have an open panel and the user chose some files in that panel, the user is unequivocally saying, "I want this application to open the files I just chose, and no other." So why don't we make it that way? Why don't we make it so that it's really only the files that a user chose in an open or save panel that are available to the application that showed the panel? So that's what we did. And to do that, we have a trusted system mediator called PowerBox, and I'll show you how this works. Here's your application, which links against AppKit. We're going to put it in a Sandbox. Your application is going to call NSOpenPanel.
Now, it so happens that if AppKit tried going to the user's real ~/Documents folder, the Sandbox would deny the access. This simply would not work. But instead, AppKit detects that you're running inside an App Sandbox, and instead of drawing the open panel from within your application, it actually reaches out to this trusted system mediator called PowerBox. PowerBox also links against AppKit, but it's not in the Sandbox, and so it actually has the ability to access the user's documents.
So it's PowerBox that will actually draw an open panel on your application's behalf. And because of some fantastic work that's gone into this, neither you nor the user will ever know that it's another process doing the drawing. The panel looks the same, it behaves the same, it is indistinguishable from being drawn by your application. But then, and here's the key part, it's only the files and directories that the user selects in this panel that are sent back into your sandbox and made available to your application.
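From your code's point of view, nothing changes; you call the same Cocoa API you always have. Here is a minimal sketch of a controller action, assuming a sandboxed application (the chooseFiles: method name is made up for the example):

    #import <Cocoa/Cocoa.h>

    // In one of your window or document controllers:
    - (IBAction)chooseFiles:(id)sender {
        NSOpenPanel *panel = [NSOpenPanel openPanel];
        [panel setAllowsMultipleSelection:YES];
        [panel beginWithCompletionHandler:^(NSInteger result) {
            if (result == NSFileHandlingPanelOKButton) {
                // Under App Sandbox, PowerBox drew this panel, and only
                // the URLs the user actually selected are pushed back
                // into this process's sandbox and made readable.
                for (NSURL *url in [panel URLs]) {
                    NSLog(@"Granted access to: %@", url);
                }
            }
        }];
    }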
Finally, I want to mention a system called XPC Services. We talked about how the iterations of reducing privilege stopped at the process boundary, and how today, if you have a process, you really can't somehow subdivide privilege within that process. Well, we wanted to make it much easier to divide privilege. And we decided not to do it by finding a way to subdivide privilege within one process, but instead to make it really, really easy to break applications up into different processes that carry different levels of privilege.
So XPC Services are a system that lets us do this. They're a mechanism by which you can take parts of your application, create a separate binary for that part of the application, and put that binary directly into your main app's bundle. And XPC, which is the new inter-process communication layer that we added in Lion, will actually manage the lifecycle of you talking to these parts of the application entirely for you. This means you don't have to install these services, and you don't have to set up launchd plists to get the service started. No, it's enough that the XPC services are in your app bundle.
And as soon as you try talking to them from your application, they will be spun up and made available only to your application and no other. Which means that you no longer have to write special code in your helpers to check whether it's really your application that's trying to talk to them, doing the security checking on your own. No, all of this is managed for you. It's incredibly simple to take parts of the application that need different bits of privilege from the main app and spin them out into different processes.
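To give a flavor of the shape of this, here is a hedged sketch of a tiny XPC service using the C API that XPC exposes in Lion. The service name and the "request"/"response" message keys are made up for illustration; the xpc_* calls are the real API:

    // main.c of an XPC service bundled inside YourApp.app
    #include <xpc/xpc.h>

    static void peer_event_handler(xpc_connection_t peer, xpc_object_t event) {
        // Reply to dictionary messages; ignore errors and other events here.
        if (xpc_get_type(event) == XPC_TYPE_DICTIONARY) {
            const char *req = xpc_dictionary_get_string(event, "request");
            xpc_object_t reply = xpc_dictionary_create_reply(event);
            if (reply != NULL) {
                xpc_dictionary_set_string(reply, "response", req ? req : "");
                xpc_connection_send_message(peer, reply);
                xpc_release(reply);
            }
        }
    }

    static void connection_handler(xpc_connection_t peer) {
        xpc_connection_set_event_handler(peer, ^(xpc_object_t event) {
            peer_event_handler(peer, event);
        });
        xpc_connection_resume(peer);
    }

    int main(void) {
        // launchd spins this service up on demand; nothing to install.
        xpc_main(connection_handler);
        return 0; // never reached
    }

From the main application, you would create a connection with xpc_connection_create("com.example.YourApp.SomeService", NULL), set an event handler, resume the connection, and send it dictionary messages.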
So that's a lot to take in, and I thought a good way to tie it all together would be to actually pick a real application, put it in an App Sandbox, and tell you what that looks like. So, I'm sure most of you are familiar with Adium.
It's a third-party, open source instant messaging client. Very popular on Mac OS, very full-featured; it tries to be a good citizen and uses a lot of platform functionality on Mac OS. And the main application is about 250 files of source, about 75,000 lines. Now, this is not counting any of their libraries or any of their frameworks. This is really just the main app.
Here's what we're going to do. Here's the simple process we're going to follow. We're going to pick some entitlements that we think are appropriate for Adium, and we're going to build it. The entitlements will get signed into the code signature. We're going to run it and check that it's now really running under App Sandbox. And then we're going to see if the Sandbox system logs violations, things that Adium tried doing but that the Sandbox mechanism prevented it from doing.
So here's my pick of some initial entitlements we want. The main one is the App Sandbox entitlement, which actually opts it into an App Sandbox. We want Adium to be able to access our address book, because it wants to be able to show names for people on our buddy list.
We want it to be able to both read and write files that the user selects through the trusted open and save panels, because we want to be able to receive files from buddies and send files to buddies. And because Adium needs to be able to receive files from the outside world, we have to make sure that it can act as a network server and receive incoming connections.
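As a sketch, the resulting entitlements plist for this initial pick would look something like the following; the key names shown are the documented Lion entitlement keys for these capabilities:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- Opt into App Sandbox -->
        <key>com.apple.security.app-sandbox</key>
        <true/>
        <!-- Address Book access, for buddy names -->
        <key>com.apple.security.personal-information.addressbook</key>
        <true/>
        <!-- Read/write access to files the user picks in open/save panels -->
        <key>com.apple.security.files.user-selected.read-write</key>
        <true/>
        <!-- Listen for incoming connections (file transfers) -->
        <key>com.apple.security.network.server</key>
        <true/>
    </dict>
    </plist>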
So here is what this looks like in Xcode, and I hope you can see it pretty well. I just picked the Adium target, and you can see that we just have this nice drop-down with entitlements, where I picked enabling App Sandbox, allowing incoming network connections, and allowing Address Book access. And that's it. In fact, if I didn't know that there was a plist backing this, it wouldn't matter. I just clicked some checkboxes and I was ready to go. So now, I'm going to build and run this.
And check whether Adium is really running under an App Sandbox. You can do that by starting up Activity Monitor; in the View menu, there's a column you can add called Sandbox. And you can check here, in our case, that Adium is showing up as Yes. Yes, it really is sandboxed.
But then it ran and no buddy list came up. So I opened up the Console app and looked for errors logged by the sandbox daemon to try to understand what went wrong. And here is an error that we see: Adium tried to make an outbound network connection. We see to what address and to what port.
And in fact, there's even a button that I want to draw your attention to, called Full Report, which, if I click it, shows a backtrace of exactly where things went wrong and where the kernel intervened and stopped the application. So in this case, we see that it's in libpurple, which is Adium's networking library.
And the operation that was denied is an outbound network operation, which makes sense, because even though I gave Adium the ability to listen for incoming connections, I forgot to give it the ability to also make outbound connections. So our process here is very simple: we're going to add the network client entitlement, which really just means adding one more checkbox in Xcode, hitting build again, and running the program.
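In plist terms, that checkbox presumably just adds one more key to the entitlements file sketched earlier:

    <!-- Make outbound network connections -->
    <key>com.apple.security.network.client</key>
    <true/>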
And the next time I run Adium, there are no more violations, things are happy, and it's running under an App Sandbox. Why does all of this matter? Because now you have to think about what happens if Adium gets exploited. Instant messaging clients have been a notorious target of exploits.
And the question now is: we've done this work, we've checked some checkboxes, what do we get for it? And what we get for it is that an attacker who has fully compromised Adium on your machine, when it's running under an App Sandbox, really has almost no ability to do anything to your system or your documents. If you happened, while Adium was running that particular time, to send or receive some files with your buddies, an attacker could get access to those.
If you haven't sent or received files, an attacker can't really get at any of your documents. And really, to be able to do anything more at this point, an attacker needs yet another exploit, not in Adium but probably in the operating system itself, usually the kernel, to be able to really do damage and bypass the containment that we've put upon them.
App Sandbox is a damage containment mechanism. It is the last line of defense against exploitation and against programming errors. It is not an antivirus system; it does not try to stop things that started out as malware. But it tries to drive security policy by user intent, and it tries to make it really easy for you to secure your applications in such a way that if they become exploited, or if there are benign programming errors, the operating system can impose a pretty strong bound on the kind of damage that can happen to a user's data and a user's system.
There is a guide called the Code Signing and Application Sandboxing Guide on the usual developer documentation sites that goes into quite a bit of detail about a lot of the things you heard. And in fact, we also made some sample code available, in which you can see not just App Sandbox in action, but also an application that we took and broke up into different pieces with XPC Services, giving each of those XPC services different entitlements and different security properties.
Here's the closing. This number was just unveiled yesterday: 14 billion downloads from the iOS App Store. The iOS sandbox protected every one of them. Each of those 14 billion downloads has run under the iOS sandbox. And as a result, users have enjoyed this beautiful, carefree experience where they can pick up their iOS devices, run an application, download a new application, and not worry, "Well, what if there's an error and it deletes my phone?" or "What if there's an exploit and everything goes wrong?" No, users have been carefree. See that kid up there? That's how your users should feel on Mac OS. And we hope App Sandbox will be one step in accomplishing that. Thank you.