General • 1:31:55
Speakers: Bertrand Serlet, Simon Patience, Peter Graffagnino, Andreas Wendker, Scott Forstall
Unlisted on Apple Developer site
Downloads from Apple
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Good afternoon. Last year at WWDC, if you remember, we had a little surprise for you. That's, of course, when we announced our switch to Intel processors. And after the conference, all my colleagues at Apple went back to Cupertino and worked really hard for a number of months to be able to surprise you again. And that was at Macworld, when Paul Otellini, the CEO of Intel, came on stage to hand the symbolic wafer to Steve.
And after that, back to work. And in the months that followed, with pretty good regularity, we turned the entire line, so that as of this morning, the transition is complete. Now, we could not have done such a transition in so little time if Paul Otellini had not been there, and if we didn't have a vertically integrated company where hardware and software work hand-in-hand to make it happen.
So we design the hardware, we design the software, and all together, that brings a tremendous user experience. That's our goal. That's our mission. So this integration of hardware and software is going to drive the agenda today for the State of the Union. We'll start with hardware topics, things like the CPU, memory, the GPU, all those things.
And from those topics, we'll move on to the software implications. So, at the heart of a computer... You have, of course, the CPU. And our latest addition is a 64-bit CPU. So to talk about the CPU, I'd like to welcome on stage Simon Patience, who is VP of CoreOS.
Good afternoon. Well, it seems appropriate that somebody from the core of the software operating system should be coming here to talk to you about the CPU, which is the core of the hardware of the machine. So I'd like to start this talk with a little trip down memory lane.
And processors originally had a word width of four bits, which was clearly not enough. And so fairly quickly we moved through the 8-bit processors. 16 seemed to be a reasonable amount, but that didn't last for that long until finally we settled on 32. And 32-bit processors have been in use now for 20 years and have done us well. But more recently, we've moved on to 64-bit processors with the introduction of the G5 and of the Xeon, as you heard this morning.
So why would we want to go to 64-bit? Well, there's a number of reasons. The first is computational speed. We have an instruction set that's optimized, that doesn't have the legacy baggage that it has to carry on, and so can be highly performant and fast. On Intel, we also have many more registers than we had previously, and that means fewer memory references, and that in itself is a major performance improvement. The second reason is that we have a much larger virtual address space. Now, for people who have large data sets, that's very important. There's no data window moving across your data set, which makes it more efficient because there's no remapping, but also it's a lot simpler.
In addition, with a large virtual address space comes a large physical address space, lots of RAM in our machines. And if we look at the historical trends of our machines, this is the maximal configuration that you can buy from Apple over the years. You can see we've been stuck underneath that red line. That's the 4 gigabyte or 32-bit physical boundary until we had the introduction of the G5 and now suddenly our physical memory configurations have leapt up.
So where are we? So in Tiger on PowerPC 64-bit, we have a 64-bit virtual address space for applications. We use the industry standard programming model LP64, that's 64-bit longs and pointers. And we introduced a single binary package that allowed you to take a 32-bit executable and a 64-bit executable, put them in the same package, which you could then install anywhere and it would run.
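To make the LP64 model concrete, here's a tiny illustrative C snippet (not from the session) showing the data-model difference a 64-bit build sees: int stays 32 bits while long and pointers grow to 64.

    /* lp64_sizes.c - illustrative only */
    #include <stdio.h>

    int main(void)
    {
        /* ILP32 build: 4 / 4 / 4.  LP64 build: 4 / 8 / 8. */
        printf("sizeof(int)    = %zu\n", sizeof(int));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }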
And we were targeting the Unix application or command line for 64 bits. So if we look at our software stack, then for Tiger, this is the target, it was Unix. Now that was for PowerPC. Now today we've announced the Intel Xeon in the Mac Pro, and we have the exact same functionality. We have the Unix functionality, 64-bit libraries for Intel and for the PowerPC.
So that's Tiger. So what about Leopard? Well, we go look at our software stack again. This is the software stack and we've extended the 64-bit support all the way up the software stack. So we now have full 64-bit application support including Carbon and Cocoa. Now just like the 64-bit processors have managed to lose a little bit of legacy in their instructions when they went to 64-bit, we're losing a little bit of legacy also.
So there's no 64-bit QuickDraw, but we do have the modern Quartz framework. There are no 64-bit low-level QuickTime APIs, but we have QTKit providing that functionality. No 64-bit Sound Manager? Core Audio does that for you. And finally, there's no 64-bit CFM, and we have no replacement, and we don't believe that's a problem.
So I'd like to dispel a little myth that is that 64-bit is always faster. Now, 32-bit has smaller fundamental data types. And so sometimes it can be faster because you have a smaller heap, a smaller footprint, less memory pressure on the system. So it's a trade-off, and you have to work out what is correct.
But we have tools for 64-bit ready to go for both PowerPC and for Intel. We have this new little pull-down panel here, which allows you to specify whether you want to build for a 64-bit or a 32-bit or both application. And so what I'd like to do now is to show you a little bit about what a 64-bit application could do for you.
Okay, I have this project here. Let's quickly look at the targets. You can see actually we're building both a 32-bit version and a 64-bit version. This is the new pull-down panel. So what I'll do now is I'll launch the 32-bit version of this application. So what this application does is it processes a four gigabyte file of DNA data and is actually using that data to render an image of the helix or various images of the helix.
Now this file is too big to fit into a 32-bit address space, so we're having to move an address window across it. And it's also very compute intensive and very memory intensive. So you can see it's pretty slow. So let's launch the 64-bit version of the same app.
So the 64-bit version, the entire file is able to be mapped into memory, so there's no remapping of a window. We're running on the Mac Pro on this with 8 gigabytes of memory, so we can actually put all of the file into physical memory, which is helping the 64-bit application, but doesn't appear to be helping the 32-bit version. And the compiler is taking full advantage of the additional registers to help speed the operations. So this will take about 47 seconds to do all the frames, all 27 frames, and the 32-bit version might get through two, if we're lucky, in the same time frame.
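The session doesn't show the demo's source, but the core technique it describes, mapping the whole multi-gigabyte file at once in a 64-bit address space, looks roughly like this hedged POSIX sketch (the file name and the processing step are illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical data file; a 32-bit process cannot map ~4 GB in one
           piece, so it has to slide a smaller window across the file instead. */
        int fd = open("dna_data.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

        /* In a 64-bit process the entire file fits in the virtual address space. */
        void *base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* ... walk the mapped data directly; the VM system pages it in ... */

        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }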
So the 64-bit version is benefiting from the optimized instruction set, from the large virtual address space being able to map the file, and the large physical address space to get the file into memory in one go. So there's the other one; we're not even past frame two yet. OK, so there's 64 bits.
So does this mean that you all want to rush out and convert your applications to 64 bits? Well, you really want to only consider 64-bit if you have a computationally intensive application and you have a large data set that you need to work on and you're targeting one of the 64-bit capable machines.
So that's a relatively small set of people that really need the 64-bit capabilities and you have to remember that we have a lot of 32-bit machines to target. So that's 64-bit. We have the 64-bit hardware in the Mac Pro available. We have 64-bit tools available. And we have the full 64-bit software stack available. And so now it's for you to get your 64-bit apps onto the platform.
So we move on to the next aspect of the CPU. This is also a revolutionary change in the industry. So now I'd like to talk about multicores. So this morning we announced the Mac Pro. This has a multicore Xeon processor. Now, multicore is an industry-wide trend, and so I'd like to talk a little bit about why we're doing this and what it means to us as software developers.
So, looking back, what we've been doing with single cores is just relying on increasing clock speed. Now this has been great for application developers because you don't have to do anything, right? You wait for the next processor to come along, the clock speed is faster, and your application, with no change to the binary, just runs faster too.
Unfortunately, in order to be able to increase the clock speed like this, there are some ramifications to that. So the way that we've been increasing the clock speed is through miniaturization. When you miniaturize, unfortunately, you increase the amount of power consumption, and when you increase the power consumption, you also increase the generation of heat, which you then have to get out of the machine. And this is going up exponentially and is causing significant problems.
So what's the solution? So if we look at a single processor, this is a single-core processor, fully clocked, the performance meter is set at 100, and our power consumption meter is also set at 100. So what we'll do is we'll increase the clock speed by 20% and see what happens.
So we've got a nice 13% increase in performance. That's good. We like increasing performance. But it comes at a cost of a 73% increase in power consumption, which is not good because there's all the heat that goes with that, too. So instead of doing that, instead of increasing the clock frequency, let's decrease it and see what happens.
So we've lost that 13% performance gain, and in fact we've lost more compared to the single-core processor. So that's bad, but look at the power consumption. Our power consumption has dropped to 49% of what the fully clocked processor would be. So what we'll do is we'll take this core, we'll add another one, and we'll make it a dual core and see what happens.
So now we have a 73% increase in computing throughput on a dual core processor with that clocked down chip compared to the fully clocked processor. And only a 2% increase in power. And this is ignoring any advances in microprocessor technology, which of course is happening. And so this is why we're going dual core. For small power changes, if any, we can get significant increases in computing throughput.
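The talk doesn't show the arithmetic behind those meters, but one plausible reading (an assumption on my part, not something stated in the session) is the usual rule of thumb that dynamic power scales roughly with the cube of clock frequency once supply voltage is scaled along with it:

    P \propto f^{3}:\qquad
      1.2^{3} \approx 1.73 \;\;(\text{about } +73\%\ \text{power for } +20\%\ \text{clock}),\qquad
      0.8^{3} \approx 0.51 \;\;(\text{about } 49\%\ \text{power for } -20\%\ \text{clock})

    \text{Two cores at the lower clock:}\quad
      \text{throughput} \approx 2 \times 0.87 \approx 1.73,\qquad
      \text{power} \approx 2 \times 0.51 \approx 1.02 \;\;(\text{about } +2\%)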
So how do we use this performance? I mean, it was simple in the single-core world, where we just do nothing and wait for the processor to speed us up. But there's a second processor in here. Well, we have options. We can do nothing, again. We can use threads. Or we can adopt some new APIs. So let's talk through those. First of all, let's talk about doing nothing.
But how can you do nothing? We've got an extra core. Well, the reason you can do nothing is because the OS has done the hard work for you. During Tiger, we went through the whole of the kernel, we put fine-grain locking throughout the kernel, so the kernel is now highly concurrent and parallel. And at the heart of Mac OS X, we have a fully SMP-capable Unix system.
In addition, while your application is running, there's some other system activity that, on a single processor, you are having to compete for. So there's various daemons and other utilities going on in the system. With a dual core, these can be going on in the second core, while your application now has the full core at its disposal.
Mac OS X also has a client server architecture. So services that you're requesting from the operating system are frequently done by daemons or other processes in the system. These can now run on the other core while your application is still making forward progress. We also have some threaded frameworks. So some of the APIs that you use are
[Transcript missing]
So even if you do nothing, you get an automatic performance improvement. It's not great, but it's good for nothing. It's good for doing nothing, I should say.
I knew I had to make one mistake. So that's the do-nothing option. The second option is to use threads. Many of you already have threaded applications, and many of you may be going towards threaded applications, so we'll talk about that for a little bit. But there's an art to multi-threading, and it's not a simple thing to do. A lot of applications are broken down using functional decomposition. You use one thread to perform one part of the activity, another thread to do something different.
But much of the concurrency is actually around access to data. And, of course, access to data in a multi-processor world is all about locks. And the big question is, how many locks do you use? Do you use a few big locks, a lot of small locks? And the answer is, there is no single answer. So it's all about profiling your application and looking to see where you're getting concurrency, where there's contention on the locks, and so forth. And it's a lot of work.
You also have to worry about deadlocks. You don't want deadlocks in your system, so you have to start planning lock hierarchies. And being able to debug those is also incredibly difficult. Other complexities include cancellation. If your thread is cancelled, you have to clean up. You have to release any resources that were allocated.
You have to free any locks that it was holding. And cleanup's easy to get wrong. There's also thread management. You have to work out how many threads you need to create in your application. If you create too many, it's inefficient. They're fighting with each other for the CPU resources. If you create too few of them, then you've wasted the concurrency that's available in the machine.
Now, we haven't actually solved any of these problems in a threaded world. That's very difficult to do. But what we have done is help you identify the work in your threads through prioritization. Now, we've had CPU prioritization for a long time now, but in Leopard, we've added two new interfaces to be able to control file system activity and networking activity.
So you can mark your thread as being a background thread, and all file system activity will take lower priority compared to other threads, and all your networking activity will take lower priority also. This allows you to distinguish between your foreground threads, which are doing, for example, the user interface, and the background threads, which are doing downloads or cache management or other housekeeping things in the background. So that's using threads. It's a manual process. You will get the performance out. It depends on how much work you want to put into it. But that's how you get that performance out of multicores using threads.
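The session doesn't name the calls behind this. Purely as an illustration, and with the caveat that it may be a later Darwin interface rather than the exact Leopard API being described, marking a thread as background at the BSD layer looks like this:

    #include <sys/resource.h>

    /* Hedged sketch: PRIO_DARWIN_THREAD with PRIO_DARWIN_BG marks the calling
       thread as background so its disk and network I/O take lower priority.
       Treat the mapping to the APIs mentioned in the talk as an assumption. */
    void become_background_thread(void)
    {
        setpriority(PRIO_DARWIN_THREAD, 0, PRIO_DARWIN_BG);
    }

    void become_foreground_thread(void)
    {
        setpriority(PRIO_DARWIN_THREAD, 0, 0);   /* back to normal */
    }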
So the last one is adopt new APIs. So we thought a lot about how we could make this process easier to get concurrency in your applications without you having to deal with all the complexities of using threads. And we looked around at what were successful patterns in life to be able to increase the workflow and be able to get things moving. And we discovered the queues. We were already using queues in things like run loops. And it's a very effective mechanism.
So what you need to do is to take your program, your application, and basically do top-down design kinds of approaches where you take your long operations and you break them down into smaller operations and queue them up to be done. Now, these work elements can also generate more work elements and queue those up as well, so it's an iterative process. And we have two new APIs in Leopard to help you do this, NSOperation and NSOperationQueue.
So what do these two guys do? So NSOperation actually manages your work element. It gives you information such as the status, is this work element running? And it allows you to specify to the operating system things like how parallel this work element is with the rest of your application, what the priority of this work element is, and what its dependencies are in case there are several that you have to do sequentially.
NSOperationQueue is the actual queue management itself; it takes the work elements and applies them onto a queue. And this will automatically fork the threads for you, the appropriate number of threads, depending on the hardware configuration that you're running on and also the amount of concurrency that you've specified in your application. And because this is queue-based, it fits in with the run loop. And therefore, it's a very familiar kind of model. And so what I'd like to do now is to show you a little demo about what you can do with NSOperation.
Okay, this is a demo program. Basically, there are two activities going on here, one of which is drawing a yellow picture in activity one, and the other one is drawing a blue picture in activity two. And each of these activities has been broken down into seven work elements.
So let's just start it and see what goes on. Now you can see this is just going on in a strict FIFO order. We put the work elements on the queue and they're just being pulled off and executed one by one, because we've specified nothing about the work elements.
So this is a dual core machine, this iMac. And so we can specify that actually these elements can be executed concurrently. So let's do that. And of course, you can now do them two at a time. Now, you have to remember that there's nothing changed about this program from the first one, except for the fact we've added a bit in each work element that says that they can be executed concurrently.
Now I personally think that the yellow flower should be drawn first. So what I'm going to do is I'm going to set this priority on all the yellow work elements to say that they're more important than the blue ones. So let's run that. And now you can see that we pull the yellow work elements off in preference to the blue ones. And again, there's been no change to this program other than the fact that as I put each work element onto the queue, we've specified a higher priority than the blue work elements.
So that's NSOperationQueue. It's a new way of getting concurrency without all the hard work of threads. So with NSOperation, you let the OS do the hard part. And because we're putting all the concurrency management in the library, it's applicable across all applications. And because we're working with the hardware, we can future-proof your application to take advantage of more and more concurrency as it becomes available, as long as you've specified that in your work elements. All you need to do for your applications is break the work down into units and hint to the OS the amount of parallelism you can tolerate, the priority, and any dependencies that you have.
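As a hedged sketch of that pattern (WorkItem, processor, and renderItem: are hypothetical names; the NSOperation and NSOperationQueue calls are the Leopard API described above):

    #import <Foundation/Foundation.h>

    /* Break the work into units, hint priority and concurrency, and let the
       queue decide how many threads to run for the hardware it's on. */
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [queue setMaxConcurrentOperationCount:2];          /* hint: tolerate two at a time */

    for (WorkItem *item in workItems) {                /* hypothetical model objects */
        NSInvocationOperation *op =
            [[NSInvocationOperation alloc] initWithTarget:processor
                                                 selector:@selector(renderItem:)
                                                   object:item];
        [op setQueuePriority:NSOperationQueuePriorityHigh];   /* e.g. the "yellow" elements */
        [queue addOperation:op];                       /* the queue retains the operation */
        [op release];
    }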
So that's adopting new APIs. And we'd really encourage you to go off and look at NSOperation and NSOperationQueue. So it doesn't matter which one of these options you choose, Mac OS X is the best software platform for being able to take advantage of multi-cores as they develop in the industry. And so I'd like to return to Bertrand to talk about the next item of hardware in the system.
Thanks, Simon. So Simon talked about the CPU, two aspects that are really important nowadays for the CPU, 64-bit and multi-core. But to have a computer, you need something more than a CPU. You need at least memory.
[Transcript missing]
What that means is that all Objective-C objects are collected. The old release method does nothing; you can still have it in your code. And it really integrates well with the other heaps, the malloc heap, or even Core Foundation objects, where you have done CFRetain and CFRelease.
We use a very modern garbage collector, one that's generational. It's not your granddad's kind of trace and sweep collector. So generational means that we take advantage of the fact that a young object, when created recently, tends to go away faster than the old objects that tend to sit there pretty much forever. So very modern garbage collector.
Now, I heard a little bit of clapping. And so I know that at least a number of you are really excited by the convenience of not having to worry about deallocation especially. If you are creating a new application, that can be great. OK, forget all that code that's error prone.
But in the back over there, I saw some folks who are a little worried. What is this going to do to my existing application? Maybe you've worked with a garbage collector in the past, and that wasn't kind of the best performance. Maybe you have a real-time application where you want to very carefully control the lifetime of your buffers. So-- Relax. You can have it both ways. Garbage collection is opt-in. We're not changing the meaning of your existing code.
So what does opt-in mean? It means that per application, you can specify with just a flag whether you want your application to be garbage collected or not. And we've made sure all our stack, all our frameworks, work both ways, collected or not, which is, I think, a first from a technology perspective.
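A hedged illustration of what opt-in looks like in practice (the flag names are as I recall them from that era's toolchain, so verify them): you flip one setting per target, and your existing memory-management calls keep compiling.

    /* Enable Objective-C garbage collection for the target:
         -fobjc-gc        GC supported (the code still contains retain/release)
         -fobjc-gc-only   GC required
       Under GC the collector owns object lifetimes, so -release is a no-op. */

    NSMutableArray *cache = [[NSMutableArray alloc] init];
    /* ... use the array ... */
    [cache release];   /* harmless when collected; still meaningful when GC is off */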
So this is a brand new feature in Objective-C. And actually, there will be others. But for that, you need to go to the next session, the one after this one. So that was garbage collection, Objective-C, a better way to fill all those memory chips. Now, let's move on to a new part of computers. Actually, this is something that all modern computers have and that contributes to creating an exceptional user experience. And that's...
[Transcript missing]
We're using the GPU today as kind of a metaphor for things we do in the computer to create cinematic experiences for our users. A cinematic experience is when the user opens your application or turns on the computer and just gets a really nice greeting, so that they feel right at home when they're using your application. My analogy for this is kind of like going out for a nice meal at a nice restaurant. You sit down, the lights are right, the tablecloth is pressed, the napkin's pressed.
Everything feels great, even before the meal hits your table. And so that's really the kind of impressions you want to create with a cinematic experience. And there's a bunch of ingredients to that experience that we're going to talk about. We're going to talk about computing power. We've got a lot of computing power in the platform. How do we take advantage of that? We're going to talk about better algorithms.
As you can tell, the computing power is coming in some sort of funky ways at us, and we've got to figure out how to use that. We're going to talk about making everything easier to program so that you can take your application to the next level without kind of going all the way back to square one. And we're going to talk about more pixels, both in space and time.
Let's talk about more computing power. So we heard about our new four burner processors this morning. But there's another processor in your box called the GPU. And the GPU is increasing at a rate of performance even faster than the CPUs. In fact, the latest engines are capable of 10 gigapixels or more in processing per second.
And it's not just about gigaflops for the graphics processor. The graphics processors are becoming more programmable, so they're starting to feel more like a CPU. They're getting more precision, up to full floating point precision. And also a lot more memory bandwidth; because the memory systems are closed and not necessarily expandable on the graphics cards, they typically have much greater bandwidth, 50, 60 gigabytes per second even.
But as we saw this morning again, the multi-core CPUs are giving it a run for its money, and they're not going to go down without a fight. And so the great thing about this is it creates this embarrassment of riches of computing power that we can take advantage of to create great cinematic experiences for users.
So the next thing that we want to talk about is how to take advantage of all of those processing engines. As we heard, it's about parallelism. And one way to think about parallelism is task parallelism. Task parallelism is where you divide work into chunks where, say, you might have a line chef and an expediter and someone doing the appetizers to try to get the throughput through the kitchen as much as possible.
And so that's a common way to break down problems on the computer as well. And we've done that with OpenGL. And we've even done more in Leopard with that. Conventional OpenGL is already a multi-threaded kind of task parallel architecture because you've got the CPU, the application running on the CPU creating commands, then being executed by the GPU, which is actually its own processing engine as well. You can think of it as running a thread. And receiving commands from the application. So you already have some concurrency going with conventional OpenGL.
And what we've done is gone to a multi-threaded OpenGL engine to add an extra thread in the middle. So the application thread can have a processor all to itself, call OpenGL, record that, return immediately, and then there's one thread whose sole job it is to take those commands from the application and keep the GPU fed. And this results in tremendous gains for us.
So here's a graph that we made of Doom 3 and World of Warcraft running on the new machines we announced this morning. You can see Windows XP. We can boot it to XP on the left there. And you can see Mac OS X in the middle without the multi-threaded engine.
And then on the right, we have Mac OS X with the multi-threaded engine. And you can see we get over a factor of two in some cases. And in fact, World of Warcraft is now faster on a Xeon machine booted into Mac OS X than it is on Windows. And we think that's pretty cool. So World of Warcraft: a factor of two faster by using the multi-threaded engine.
and I've asked Jeff Stahl to come up on stage and run World of Warcraft for us. You never know what you're going to see when you go live into World of Warcraft, but let's see where we're at. There's Jeff running around. You can see we've got a frame meter up on the right showing the frames per second. We've got a CPU meter there. It's taking a little bit over one CPU right now because World of Warcraft has some networking, some audio threads that it's doing to spread the load as well.
Right now we're in a common area called Ironforge, which is kind of a meeting area. You can see a bunch of characters running around. And there's Jeff looking pretty spiffy there. We've got the world appearance set to maximum. And right now with the multi-threaded engine off, we're at about 40, 43 frames per second. Jeff is going to flip a switch, turn on the multi-threaded engine, and we'll see what happens. The frame rate goes up by about a factor of two.
And you can see we're using over two CPUs worth of load, one three-quarters, two. So that's really pretty incredible. Why does frames per second matter in a game? Well, you know, in this scene you're going faster than monitor refresh, so it doesn't matter all that much. But if you get to a more complicated scene where you've got a high-level encounter with a lot of folks, the action's real fast.
You need to be able to respond quickly, and that's exactly the last time you want the machine to bog down. So having a lot of headroom is real important as the levels get harder. So I think it's really cool that we are able to show you this. To do this in your application, it's only one line of code. You just opt in to the multi-threaded engine, and you just turn it on real simple. So, Jeff, why don't you show us what you think about beating XP on a multi-threaded machine? Yeah.
Wait a minute, wait a minute. Jeff, Jeff, what do you really think about beating XP? Yeah, there you go. All right, say goodbye, Jeff. All right, thanks. That's great. So that's OpenGL running multi-threaded. There is actually a version of this that is shipping with the new Xeon machines, and we'll continue to improve it.
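The one line he's referring to is, to the best of my knowledge, the CGL enable for the multi-threaded engine; treat the exact constant as something to verify against the headers:

    #include <OpenGL/OpenGL.h>

    /* Opt the current OpenGL context into the multi-threaded GL engine. */
    CGLContextObj ctx = CGLGetCurrentContext();
    CGLError err = CGLEnable(ctx, kCGLCEMPEngine);
    if (err != kCGLNoError) {
        /* The engine may not be available on every renderer; fall back gracefully. */
    }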
And it's going to be in Leopard as a standard feature. So that's great. So let's talk about another kind of parallelism, data parallelism. So again, back to my food analogy, you can cook 100 French fries the same time you cook one. If you cook each one serially, it would probably take a little bit of time. So French fries to me are like pixels, so I think of Core Image. So Core Image is an engine that does the same thing to a bunch of pixels at the same time, and that's exactly what data parallelism is.
And so Core Image is actually a fairly general data-parallel API for constructing little kernels of code and tying them together in order to create a tree of operations that has to occur over an image or a bunch of images. So it's ready for these highly parallel architectures like GPUs, multi-core CPUs, et cetera.
And it has the capability of running both on the CPU or the GPU in order to take advantage of the parallelism. It's fully floating point precision, again, taking advantage of vector units running on the CPU and taking advantage of the floating point GPU capabilities. And it also has a really important feature called lazy evaluation: it doesn't produce pixels until you really need them.
and Core Image is actually a perfect match for the explosion in digital photography that's going on right now. These cameras generate megapixels of data, 10, 20 megapixels of data, a lot of data to process. And in fact, there's an explosion that goes on when dealing with raw camera images in particular.
So if you take a 24 megabyte sensor image, that's just a 16-bit sample per pixel. Then you have to reconstruct that and build the other two channels per pixel, which is going to explode that data out to 96 megabytes. And then if you wanted to adjust that maybe and save a floating point result, for example, that image could be 192 megabytes. And you probably want to save all of these because you may want to go back and you may want to undo. Of course, you want to save your original.
So that's a lot of data. And the beauty of Core Image when combined with this kind of imagery is that you don't have to save all of that derived data. So I can create a recipe to create that purple-tinted image and just save that along with my raw sensor data. And I've basically got that version of the image with very little extra cost. So you could imagine doing this for a bunch of different variations, a sepia tone, a black-and-white version, and I've only paid incrementally for another recipe for each of these.
So that's a real powerful concept about how to use parallel processing to kind of do things on the fly and not have to always save out intermediates. And we use this in the Aperture product, which was released at the end of last year, redefining kind of how to do digital photography by creating these variations of images very cheaply by just sort of saving the recipe rather than the actual bits.
So new in Leopard, we're exposing the RAW reconstruction via a Core Image filter. So RAW is totally integrated with the Core Image pipeline. You have control over the details of taking that sensor data and converting it into an exposed image. You can control the exposure, the temperature, and the tint. And I'm going to give you a demo of that right now.
So here I have an image that's shot a little bit overexposed, as you can see. And so up here on the right-hand side, I've got two blocks. I've got an image reconstruction block, which is controlling the sensor decoding. And I've got an image effects block to add some additional artistic effects.
So let's try to get this image properly exposed. So I'm going to move the exposure slider down. So Core Image is working to reconstruct the image with a little bit less of an exposure. You can see since it's dealing with the RAW, it's getting some detail back in from the highlights where the sensors were getting saturated. So we're able to pull detail back in from the highlights by using the raw sensor data. I can also change the color temperature or I can pick an area from the image that I want to be neutral.
So I get the image exposed the way I want it, then I can enable artistic effects like this EdgeWork filter that's in Core Image, where I can change the radius of that. So this whole processing pipeline, both the reconstruction and the effect, is all put together and globally optimized as one big GPU expression. And I'm going all the way from the sensor data to the final image in one computation. And I can turn everything off and see where I get to. So that's the new RAW features in Core Image.
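A hedged sketch of chaining filters the way the demo describes, exposure adjustment first, then the artistic effect, with nothing rendered until the final image is drawn. The actual RAW decode in Leopard comes through a dedicated Core Image filter; here a plain image source (photoURL is a placeholder) stands in for it:

    #import <QuartzCore/QuartzCore.h>

    CIImage *source = [CIImage imageWithContentsOfURL:photoURL];   /* stand-in for the RAW-decoded image */

    /* Pull the exposure down; CIExposureAdjust's inputEV is in f-stops. */
    CIFilter *exposure = [CIFilter filterWithName:@"CIExposureAdjust"];
    [exposure setValue:source forKey:@"inputImage"];
    [exposure setValue:[NSNumber numberWithFloat:-1.0f] forKey:@"inputEV"];

    /* Then the artistic effect from the demo. */
    CIFilter *edgeWork = [CIFilter filterWithName:@"CIEdgeWork"];
    [edgeWork setValue:[exposure valueForKey:@"outputImage"] forKey:@"inputImage"];
    [edgeWork setValue:[NSNumber numberWithFloat:3.0f] forKey:@"inputRadius"];

    /* Lazy evaluation: no pixels are produced until this result is actually drawn. */
    CIImage *result = [edgeWork valueForKey:@"outputImage"];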
So the next thing I want to talk about is making things easier to program. Ease of use is kind of about quality and convenience. I mean, you wouldn't necessarily go bake your own bread, but you're glad there are people that know how to do that really well. And so you can buy nice bread and you can buy nice greens.
And so as we develop APIs, we really want to have them out of the box be very useful for you and something you can build your solutions on top of. And so Core Animation, something we're introducing this year, is something we're positioning as a very easy-to-use API.
And one thing we've found while working with the development teams, both you guys and people internal at Apple, is developing layered animated user interfaces and user experiences is really very hard. Usually the first thing you need to do is learn OpenGL. OpenGL is a great API, but it is fairly low level and sophisticated. And if you just want to draw an image on the screen, perhaps not the best API to use for that.
Then you need to go manage some timers and threads. Maybe you start out doing your animation in your main run loop and you realize that doesn't really work very well. So you spawn off a thread. Now you've got all these concurrency issues that Simon was talking about that you have to deal with.
And now you have another problem with when you're animating the user presentation, you have to decide what's the truth as far as the application logic is concerned. If I say, is an item in the scene or not in the scene? And that's kind of a binary thing as far as an application is concerned. That object's there or it's not. But in the view, that may be in the process of animating off because you told it to go away. Maybe in the process of animating on because you told it to come on.
And so you need to manage this discrepancy between the data structures representing what the user is seeing at any instant and what the application logic is thinking. What menu am I on? What images are in the scene? And so that can be really hard. And there's usually replicated data structures with lots of concurrency and locking going on to handle that.
And then usually you have to manage layout for different sizes if you're different size of display or different size of final rendered image, like 4 by 3 for TV or 16 by 9 for HD. You want to make sure things stay on the left where they need to and stay on the right in the top and the bottom, et cetera. And that layout code is just something that you have to write. And if you're starting from OpenGL, from the baseline, you don't have the tool kits necessary like Cocoa to handle layout for you.
And then you have to fix all the bugs in all that code. And this is usually where my team gets a phone call just before some big keynote saying, we're using OpenGL. We've got multiple threads going. And how do we get out of it? And it's really tricky. It's really tricky.
And so core animation-- and again, this is just the rendering code. You haven't even written any application code yet. So we really want you guys to worry about your application code. And so core animation is the answer to get you to have dynamic user interfaces with very little work.
So behind the scenes, this is a dynamic layering engine with automatic property animation. You can just set a property, and it will animate to it. It's media agnostic. You can have 2D, 3D graphics, video, whatever you want on a layer. It has an asynchronous rendering thread that's actually running at vertical retrace intervals and drawing the scene for you. So you don't have to worry about that; it's always ready and responsive.
There's a timing engine inside the layer tree, and if you want to set up some finer control, that's all there too, and it's all wrapped in a very easy-to-use API. Probably the simplest way to use the API is what we call implicit animation. The way I describe it is the way Ron Popeil, the infomercial guy, would: just set it and forget it. Let's see how that works. We've got a little bit of code here and the result up on the right. I have a layer on screen that's backed by a Core Animation layer. If I execute a line of code like layer.opacity = 0.0, the result is that it fades away. If I execute layer.opacity = 1.0, it fades back in. So you just need to specify the goal state in your application logic, and the animation will begin asynchronously and attain your target goal. Now, if you need to change your mind because the user pressed another key, you can still change the model state, and the animation engine will then reacquire that new goal and start animating towards that.
There's also a batched transaction model if you want to do multiple things. I don't show transaction here because there's an implicit transaction around every event. So suppose I wanted to do things, set the opacity to zero of both the blue and the green layers. I can do that once through my run loop, then things will start to animate away.
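A hedged sketch of both forms, the implicit single-property animation and the batched transaction (blueLayer and greenLayer stand for CALayer references in your scene):

    #import <QuartzCore/QuartzCore.h>

    /* Implicit animation: just set the goal state; the engine animates to it. */
    blueLayer.opacity = 0.0f;

    /* Batched changes: group several property changes into one transaction. */
    [CATransaction begin];
    blueLayer.opacity  = 0.0f;
    greenLayer.opacity = 0.0f;
    [CATransaction commit];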
If I want to do something a little more complicated, like say, take the blue layer and set its opacity to zero so it fades out, set its size to zero so it shrinks, and take the green layer and send it off to the right, I can do that just by executing that code. So let me give you a demo. You saw one this morning. I will give you a little bit of a different one.
It's going to be in the same app, though. I have the executive version. So we'll go through the first couple of camera moves here until we get here. And I'll bring up my little menuing system that I happen to have. And this menuing system is all done in core animation as well. I can move forward and backwards.
I'll show you a little bit about the acquiring new goal states. If I hold down the infamous Shift key and move to the next menu, you can see if I change my mind halfway through, it will acquire the new state kind of on the fly. And that's just automatic. The application code is just setting which menu is current. So what did I want to show you here? Well, I wanted to show you the cityscape.
and I've got some preset camera views that I can show here. So here's the city all built. The city actually builds completely in about four or five seconds. And the animations we showed this morning were all different kinds of camera angles around that building animation.
So let me show you one building we called the carpet, which is this one. And if I start the timers again, you'll see it kind of folds out in front of you like that. And if I show the man behind the curtain, you can see him; you're looking up through the bottom of the building.
Another one is the folding building. The folding building is over here. It builds like that, which is kind of cool. Again, if I move back a little and run that, I can show you the city while the folding building builds. You can see the guys come in here at the end.
What else did I want to show? We said we could show video on a plane, but no one's actually proven that yet, so let me prove that to you. I'll go back to one of the outtake modes we didn't use in this morning's demo, but it's called Huge.
It's just like a big plane of your album art, and you can select in, and you can see now I do have some video on some layers here. There should be audio on this, but I don't know if they've turned it up. You can see I can move around.
Take control of the camera, kind of show it zooming out. Another view we had was the panel's view. The panel's view was like a bunch of playlists in 3D, kind of each with a little screen saver running on them, and that's kind of fun to move around as well. And then finally, I'll show you the city back with the ... some video on some textures here. That's kind of fun. So that's a little bit behind the scenes on the core animation demo from this morning.
So there's going to be sessions on that, so I think in the big hall here, so please go to those and get much more detail. The other thing we're doing with core animation, which is really important to us, is doing full integration up the Cocoa stack. So core animation is backed by a layering engine, which actually is hierarchical in the same way NSView hierarchy is. And so in Leopard, you're able to back an NSView hierarchy by a hierarchy of layers that are built with core animation. So that's really exciting.
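In Cocoa terms that integration is a hedged one-liner plus the animator proxy (contentView and photoView are illustrative view references):

    /* Back this NSView, and its whole subtree, with Core Animation layers. */
    [contentView setWantsLayer:YES];

    /* Animate view properties through the animator proxy instead of setting them directly. */
    [[photoView animator] setFrame:newFrame];
    [[photoView animator] setAlphaValue:0.0];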
And that allows you not only to have 2D, 3D graphics and video on layers, but Aqua controls as well with full interactivity, and that's great. And it's all built into the new interface builder. And to give you a look at that, I'd like to invite Andreas Wendker up to the stage to give you a demo. Thanks. Thank you.
As Peter just explained, we are building core animation capabilities right in our Cocoa frameworks into the new Leopard interface builder, so that you can easily take advantage of hardware acceleration and implicit animations. I'd like to show you an example of that. What you see here is a very simple photo viewer application. This application is hooked up to a core data database that contains the pictures you currently see here in the window.
The pictures themselves are displayed in a new class of view that we are adding to the Cocoa frameworks, NSGridView. And the content of this grid view is populated through a binding to the array of pictures loaded from the database. So since it's using a binding, the grid view will automatically reflect any changes in the underlying array, for example, if you reorder the pictures. To demonstrate that to you, I added a button to my application that will randomly reshuffle the pictures in the array. Let me show you how the grid view reacts to that.
So as you can see, instead of immediately jumping to the new state, the GridView will actually react with an implicit animation to it and smoothly slide the pictures into the new space. Let me do this again, this time sorting by name of the pictures. And I added a button to slow down the animations just so that you can see better what's going on.
And let me turn around the order of the pictures in the array. Oh, and of course, since this is a core data application, I can also undo this change. Actually, I can't. That's because I didn't make a change here. Sorry. Of course, I can go ahead and rename a picture just by clicking here in the grid view.
And as you can see, since we're using bindings again, the name change will result in an automatic reordering of the pictures in the array. And again, the grid will react with animation. And now I can undo. There we go. So let me show you how this application is built.
Here's the project in Xcode. All you find in this project is a very simple Core Data model, which as you can see just contains a simple entity. And you also find about 100 lines of code. None of this code deals directly with any of the view animations you saw. This is just the standard code you pretty much find in any Core Data template application.
It's just the code to set up the database, to configure the undo manager, and other things. The only code I had to add myself to this application is the code that randomly reshuffles the pictures in the array. So the entire behavior of the user interface is defined in the Nib file. Let me open that up for you in the new interface builder.
The large black area you see here is the NSGridView, which of course is empty here in interface builder. Now the grid view is a completely generic type of view. It's not specialized in displaying photos. In fact, it can display arbitrary types of objects. And to tell the grid view how to display these objects, it allows you to combine regular NS controls, things like text fields, buttons, sliders, into little compound views that become the prototype for each of the elements displayed in the grid view.
I know it's going to be hard to see here on the screen because of the color and transparency settings I'm using, but here's the little compound view that becomes the prototype for all the little pictures in my view. Now let me go ahead and extend the functionality of my application by adding filtering to it. For that, I'm going to go into the library window of Interface Builder.
The library window contains all the objects I can add to my Nib file, and it's one of the cool new features we added to the new interface builder. As you can see, it's organized in a hierarchy of groups. Right now I have the entire Cocoa group selected, but I can also drill down and just look at the common controls, or maybe what we call the extra controls, or maybe look at both of them at the same time. And yes, you might have guessed it, we are using a grid view here in the library window too. Another great feature we added to the new interface builder is the ability to edit toolbars right in the Nib file.
So I can just take a search field out of my library, drag it into my toolbar, make it part of the default set of items in the toolbar. Now all I have to do is set up a binding from the search field to the array controller in my Nib file to configure the filter. Let me go ahead and do that.
I'm going to configure the binding so that it filters in a case-insensitive way on the name of the picture. That's all I have to do. So I'm done here. Save the Nib file. Go back to Xcode, build and run. So here's my application again. I can still reorder the pictures, of course. Now I can also filter, for example, everything with an S. Let me slow this down for you.
So that's just one simple example of how you can make use of core animation when it's built right into Cocoa and interface builder. We hope that you will make extensive use of this. Thank you. Great. Thanks, Andreas. Very cool stuff. The thing I like about that demo is it really doesn't have anything to do with animation. I mean, it's really just about the data and about the application, and the fluid user experience just happens implicitly. So I think that's really cool.
So let's talk about pixels, more pixels. So we've got these great big 30-inch displays now, which is great. And so there's kind of a real estate boom going on, right? We've got lots of windows open. We can be real productive in our big screens. But what happens if we were to increase the density to 200 DPI on these displays? Well, maybe it's a little bit too much of a good thing. Can't quite see what I'm doing. My icons are real small. My text is real small. So what are we going to do about that? The solution is pretty obvious-- resolution-independent user interface.
So more pixels can provide more detail. So if you have a 200 DPI monitor, wouldn't it be great if it looked just like the 100 DPI monitor, except if you zoom in, instead of being faced with this at 100 DPI, you could have something like that.
[Transcript missing]
So what do you have to do? Well, avoid QuickDraw. We've been saying it for a while. This is now a great time to go get rid of those last vestiges of QuickDraw in your applications. Provide high resolution artwork. Again, those 512 by 512 icons.
Or consider using, say, PDF for some icons if they're fairly easily represented by vectors. Test with Quartz debug. This has been in the release for a little while. Be able to set the user interface resolution and run your app. And we're asking applications to be ready by 2008. That's when we expect to be ready.
So that's more pixels. So these are the ingredients of our cinematic experience. But why do we sort of do all of this? Well, the answer is, again, in the nice restaurant, the food is presented to you. If it looks great, before you even take a bite of it, you know it's going to be good. And so we have a lot of tools that we've presented to you for doing great, gorgeous user interfaces.
So even before the user starts typing in your app, or when they pick up that icon and drag it around, everything just looks and feels like you spent a lot of time. Each attention to detail is a little message to your user, saying that you care about them, and you're going to give them a great experience. So that's our trip through GPU land, using cinematic experiences as a metaphor. And we're going to go back to our tour of hardware and software integration with Scott Forstall. Thanks very much.
Okay, let's see what I get to talk about. The magical mosaic says cameras. So as you know, we've been building cameras into almost all of our new machines. And for the machines where we don't build a camera in, we provide the iSight, which you can go ahead and add as well.
One of the first things we do when you get your new machine is take your mug shot. And we use this picture as your login window picture. Once you've logged in, we use it in the address book. Now let's say you don't like your mugshot. We have a picture taker panel where you can take a new picture. Much better. So we're using this picture taker panel, both in the address book. We're using it in system preferences. We're using it in iChat for your buddy list. So we're using it all over the place. So we decided to make it a public API for Leopard.
This does not come complete with Nick Nolte. So the Picture Taker panel allows you to take picture snapshots. It's a consistent UI throughout the OS. You can use it everywhere. It has the automatic flash to fill in highlights. Once you take your picture, you can zoom it, you can crop it, you can rotate it. And really importantly, we handle the camera state for you.
This is especially important for an external iSight. We handle whether or not the iris is open or closed. We handle whether or not the person unplugs the cable. So you just call the panel. We return you a picture once they're happy with it. That's the Picture Taker panel.
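A hedged sketch of calling the panel from Image Kit; the class is IKPictureTaker in the Quartz framework, and the selector names here are from memory, so check them against the headers:

    #import <Quartz/Quartz.h>   /* Image Kit */

    - (IBAction)takePicture:(id)sender
    {
        /* Run the shared picture-taker panel; camera and iris state are handled for us. */
        IKPictureTaker *taker = [IKPictureTaker pictureTaker];
        [taker beginPictureTakerWithDelegate:self
                              didEndSelector:@selector(pictureTakerDidEnd:returnCode:contextInfo:)
                                 contextInfo:NULL];
    }

    - (void)pictureTakerDidEnd:(IKPictureTaker *)taker
                    returnCode:(NSInteger)returnCode
                   contextInfo:(void *)contextInfo
    {
        if (returnCode == NSOKButton) {
            NSImage *snapshot = [taker outputImage];   /* the zoomed/cropped result */
            /* ... use the picture ... */
        }
    }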
Next, of course the most fun way to take pictures is with Photo Booth. We are bundling Photo Booth in Leopard. It's fun because of all these crazy effects. I know my kids play with it for hours. In Leopard we're actually opening it up so you can create your own effects which Photo Booth will pick up and you can use as well. Just create a quartz composition, put it on the system and we'll pick it up. That's Photo Booth.
Now our cameras are really good not just for still photos, but also for capturing video. We use this in QuickTime Player, so with QuickTime Pro you can capture video. Of course iMovie allows you to capture video straight from the iSight. And we use it in iChat's video conferencing to capture video, and now in Leopard we also apply fun effects to those as well. So to make it as easy as possible for all of you to take advantage of all the built-in cameras, we're adding a new set of APIs called the QTKit capture APIs.
The QTKit capture APIs are an absolute pro-grade solution. They are meant to support everything from the highest end professional applications down to a consumer application. They give you frame accurate AV sync. So it syncs the frame with the audio perfectly. We support both the internal and the external iSights and a host of other cameras as well.
We allow you to capture directly to a file. You can capture to a stream. We have a really nice on-screen preview, so you can see the video as it's being captured. And this whole thing integrates in with the OpenGL pipeline, so you can do what you want with the video as it comes in. So that's the QTKit capture APIs.
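A hedged sketch of a minimal QTKit capture pipeline, default camera in, a movie file and an on-screen preview out (error handling is trimmed, and captureView stands for a QTCaptureView set up in the nib):

    #import <QTKit/QTKit.h>

    NSError *error = nil;
    QTCaptureSession *session = [[QTCaptureSession alloc] init];

    /* The default video device: the built-in or external iSight. */
    QTCaptureDevice *camera =
        [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
    [camera open:&error];

    QTCaptureDeviceInput *input = [[QTCaptureDeviceInput alloc] initWithDevice:camera];
    [session addInput:input error:&error];

    /* Capture directly to a file... */
    QTCaptureMovieFileOutput *fileOutput = [[QTCaptureMovieFileOutput alloc] init];
    [session addOutput:fileOutput error:&error];
    [fileOutput recordToOutputFileURL:[NSURL fileURLWithPath:@"/tmp/capture.mov"]];

    /* ...while the on-screen view previews the running session. */
    [captureView setCaptureSession:session];
    [session startRunning];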
Now in Leopard, we took iChat one step further and added sort of a virtual camera in this thing we call iChat Theater. And iChat Theater allows you to plug an application in to a video conference and present it across the conference. So in this case, we've already plugged in iPhoto, so you can do an iPhoto slideshow across the web. We've plugged in Keynote, so you can do Keynote presentations as well, and even QuickTime Player. But we've done iChat Theater in a public way. There's a set of APIs, so you can plug your applications into it as well.
The way you do this is you take and register with the instant message framework. That is the iChat framework. Your application controls all the content. So you control when you start playing, when you stop, any transitions. You control all the content. You'll get a series of callbacks. In each callback, you render a single frame of the animation into a core video buffer, and there's several of those.
You use Core Audio to provide the audio, and then you're done. We'll go ahead, we'll H.264-encode it, we'll worry about latency issues, we'll get it across the wire, and display it on the other side. So you just get a series of callbacks, provide the content, and you're done. That is iChat Theater.
We're doing a lot with cameras, but when I look at it, I think we're just still scratching the surface of what we can do. Cameras to me are a lot like a keyboard or a mouse. It's just another input device. And the camera's becoming a more and more ubiquitous input device, and it's really high bandwidth. There's a lot we can do. So I actually have been looking at a number of applications that you've been writing that I think are really cool. Here's one. It's called EvoCam. It's a video baby monitor.
There's Delicious Library. Delicious Library uses the camera to scan the barcodes of books and DVDs and movies and CDs and automatically add them to your library. There's this thing called iAlertU. It's a security device for your computer. So let's say you have your MacBook, you're at the cafe, you walk away to get some more coffee, and some criminal comes and screws with your machine.
You could light his breath on fire. It'll actually take a picture of them. And so you come back, you find out who screwed with it. If he steals your machine, it actually in the future is going to email you his picture, so you can go track him down in Malibu.
So that's iAlertU. There's iStopMotion. iStopMotion will let you be the next Nick Park. You can create Wallace and Gromit. It's great. It allows you to create stop motion movies by sort of translucently putting the last frame over the next one. You can watch it as you do your manipulations. And there's this thing called ToySight. ToySight's the best way to have fun and exercise and look very silly.
The camera watches where your hands are and basically puts virtual controls into the air. And as you move your hands around, it tracks them and you're actually controlling the game. So that's ToySight. So we're bringing the cameras to almost all of our machines. We're bringing more and more APIs, especially in Leopard, to make it really easy for you to take advantage of these cameras. I can't wait to see more and more applications of yours that take advantage of the camera. And that is the camera. Next.
Oh, mysterious mosaic. Hard drive, sweet. Now for the sexiest part of the demo: the hard drive. A few days ago, I was comparing the original Mac to our current standard 20-inch iMac. And I was comparing it on three different aspects: the CPU speed, the amount of RAM, and the size of the hard drive.
You look at CPU speed. The original one was an 8 megahertz 68000. Currently we're at 2 gigahertz. So just on clock speed alone, it's 250 times faster, right? There's a lot of other things, but on clock speed alone. You look at RAM, it was 128 kilobytes, non-upgradable. It was an appliance.
It was perfect. And the new one ships 512 megabytes standard. So that's 4,000 times larger. But if you look at the hard drive size, it actually blows all this away. We go from a 400 kilobyte removable disk to a standard 250 gigabyte drive, which is 625,000 times larger. And yet it's still full.
So with these large drives, we are filling it up with lots and lots of content. And once you have all this content, there's two things you want to do. The first thing you want to do is find what you're looking for, and that's what Spotlight's about. And the second thing you want to do is preserve what you have, and that's what Time Machine's all about. So let's start with Spotlight.
Spotlight's great because it allows you to very quickly search and find anything you're looking for on your machine. You can search by both file name, but also metadata and contents. And the way we do that is we provide a whole set of Spotlight importer plug-ins. These understand the file types, they read it in, and they hand the metadata and the contents to Spotlight, which does an index of it. But we still hear from people that there are certain file types where you can't search them in Spotlight based on metadata or contents.
And that's because it's a proprietary file format that we don't have access to. So if you're creating those, please write a Spotlight importer plug-in. We try to make this as simple as possible. There's one API, you already have the code that parses your file, and we even give you an Xcode project. So please write a Spotlight importer plug-in, and then people will be able to search for your files. There's a minimal sketch of what that one API looks like below.
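To make the shape of that API concrete, here is a minimal sketch of the importer entry point as it appears in the Xcode template; the attribute values below are placeholders, and the parsing step is where your existing file-format code would go.

```c
#include <CoreFoundation/CoreFoundation.h>
#include <CoreServices/CoreServices.h>

// Entry point a Spotlight importer plug-in implements.  Spotlight calls
// this with a mutable dictionary for you to fill with metadata.
Boolean GetMetadataForFile(void *thisInterface,
                           CFMutableDictionaryRef attributes,
                           CFStringRef contentTypeUTI,
                           CFStringRef pathToFile)
{
    // Hypothetical: reuse your existing parsing code here to pull out
    // whatever is worth indexing (title, author, body text, ...).
    CFStringRef title = CFSTR("Example Title");         // placeholder values
    CFStringRef body  = CFSTR("Full text to index...");

    // Standard Spotlight attribute keys from the Metadata framework.
    CFDictionarySetValue(attributes, kMDItemTitle, title);
    CFDictionarySetValue(attributes, kMDItemTextContent, body);

    return true;  // true means metadata was successfully extracted
}
```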
Once you've written the plug-in, the best place to distribute it is right inside your application package. If you put it in the application package, then when people get your application, they can immediately search for all of your file types on their machine. Also, tell us about it and we'll put it on our website; we have over 50 Spotlight importer plug-ins on our website that people can download. So once you've done that, now we want to make sure that we're preserving all this metadata.
When you have a file that has all this great metadata, and we're getting more and more as time goes by, you want to make sure that when you save the file, you don't lose the metadata. There's some metadata that you don't even know about: Finder comments, ACLs, a bunch of things on there that you might not even be aware of. So we asked you to do this last year, but now we're going to make it easy for you.
We're adding a new API. It's called the Replace Object API. It is a safe-save API. What you do is save your new file off to the side, call Replace Object, and it will swap it in for the original file, preserving exactly the metadata you want to preserve. So when you change the file and save it, you'll preserve all the metadata. It's the Replace Object API. Thank you.
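The shape of a safe save is worth seeing, so here's a minimal sketch of the write-to-the-side-then-swap pattern using plain POSIX calls. This is not the Replace Object API itself (the whole point of that API is that it also carries the metadata across for you); it only illustrates the flow that API slots into, and the path suffix is an arbitrary choice.

```c
#include <stdio.h>

/*
 * Sketch of the "safe save" shape: write the new version off to the side,
 * then atomically swap it in for the original.  With Replace Object you
 * would make that call in place of rename(), and the metadata on the
 * original (Finder comments, ACLs, and so on) would be preserved for you.
 */
static int safe_save(const char *path, const char *data, size_t len)
{
    char tmp[1024];
    snprintf(tmp, sizeof(tmp), "%s.inprogress", path);  /* the file on the side */

    FILE *f = fopen(tmp, "w");
    if (!f) return -1;
    fwrite(data, 1, len, f);
    fclose(f);

    return rename(tmp, path);  /* swap the new file in for the original */
}
```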
Okay, this next Spotlight feature is for the excessively lazy of us, myself included. Let's say you're sitting at home, in the living room, on the couch, working on your MacBook. And you're looking for a file, but it's not on that machine. It might be on the machine that's upstairs. It might be on the machine that's over in the den. But you don't want the enormous burden of walking upstairs to look for the file. You'd like to sit on your butt and do the search of the other machines in the house.
And in Leopard, we are encouraging couch potatoes. We are providing that. So you can sit on the couch, do this search with Spotlight of all the other machines in your house, and it'll find it. And then using personal file sharing, you can open it up, never leaving your butt. So that's our feature: searching other machines. Next: Quick Look.
Quick Look does a few things for us in Leopard. When you look at icon view in the Finder, you find a bunch of generic icons. The icon tells you what type of file it is, but tells you nothing about the contents. We actually have a feature called Show Icon Previews, and that will turn a few of those icons into rich previews. The Quick Look APIs in Leopard allow you to write a plug-in to support rich icon previews for everything. So please write one of these for your file types.
Now we took it a step beyond this as well. When you're in Spotlight, you can also use Quick Look to show a quick preview of the entire document. So you can quickly see: is that the photo I'm looking for? It works for other things like movies, so you can quickly look at a QuickTime movie. You can even look at presentations. So you can pull up Quick Look, see if that's the right presentation, and then open it.
It works both for Spotlight and it works in the Finder. So if you're in the Finder, you can quickly get a preview. But most importantly, this is the way you can get previews in Time Machine. So again, if you have a document type, write one of these plug-ins so we can look and see your document when going through time with Time Machine. That is Quick Look. Thank you.
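For reference, here's a minimal sketch of the generator entry point a Quick Look plug-in implements, following the Xcode template; the hard-coded HTML it hands back is just a stand-in for rendering your real file format.

```c
#include <CoreFoundation/CoreFoundation.h>
#include <CoreServices/CoreServices.h>
#include <QuickLook/QuickLook.h>

// Entry point a Quick Look generator plug-in implements.  Quick Look calls
// this when it needs a full preview of one of your documents.
OSStatus GeneratePreviewForURL(void *thisInterface,
                               QLPreviewRequestRef preview,
                               CFURLRef url,
                               CFStringRef contentTypeUTI,
                               CFDictionaryRef options)
{
    // Hypothetical: parse the file at `url` with your existing code and
    // produce something Quick Look can display (HTML, an image, PDF, ...).
    CFStringRef html = CFSTR("<html><body><h1>Preview</h1></body></html>");
    CFDataRef data = CFStringCreateExternalRepresentation(NULL, html,
                                                          kCFStringEncodingUTF8, 0);

    // Hand the rendered data back, telling Quick Look what it is.
    QLPreviewRequestSetDataRepresentation(preview, data, kUTTypeHTML, NULL);

    CFRelease(data);
    return noErr;
}
```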
So another thing we've done for Spotlight is add Spotlight to Help. When you look at Spotlight, one of the really good things about it is there's this menu right there: you can type into a search field and quickly see your results. And we've decided to do that exact same thing for the Help system in Leopard. So let's say I'm in TextEdit working on some document.
And I want to set what the default font size of a new document is. I can go up, click on the Help menu, and there's a search field. Zoom that in. Type a search in there and it shows you the help topics right there. You can choose one, it pops open Help right at the right spot. But you know, once we did this, we realized we can Spotlight-search more than just the help contents: we can Spotlight-index parts of your UI, your menus, and other parts of your app. So let me go ahead and show you that now.
So here we are. Let me bring up a Pages document. And I have a delicious cookie in the document, which I'd like to center. And I don't remember how to center items in Pages. So I can click on the Help menu, just type Center, and boom, here's Aligning Objects.
It's a really quick way to find exactly the help topic you're looking for. But better than that, I can actually arrow through these menu items, and we pop up the menu, even if it's a submenu, showing you exactly the item you want. I find Align Objects, Center, I hit return, and boom, the cookie's centered.
This is especially useful with pro applications that have lots and lots of menus. So this is an unchanged Photoshop. I go up here, and I want to change the contrast of this image. Type contrast. Boom. Go through here. Yep. Hit return. Brings up the panel. Change my contrast. Much better. Done. Just like that.
So that's Spotlight. We're doing a lot with Spotlight in Leopard. Once you have the contents, let's preserve it. That's what Time Machine is all about. As you heard this morning, Time Machine is absolutely the best way to automatically back up your Mac. If you change a file, we automatically back up the file.
We back up everything. We back up all the files. We back up your photos, your music, everything, applications, the entire OS. We allow you to restore everything. So if you lose your hard drive, even if someone steals your machine or the hard drive dies, you can buy a new hard drive, put it in, and be exactly where you were before that hard drive died.
You can restore a la carte, so you can go and restore a single file if you'd like. You can back up to a hard drive or a server. And of course, it allows you to go back in time. So I've had a lot of questions since this morning. How the heck do we go back in time? So let me go ahead and show you how time and space work.
We start with the entire file tree, the entire file hierarchy of your volume. And conceptually, as changes are made, we make a complete backup of that entire file tree for all the changes. Conceptually this is what we do. We don't actually copy every file every few minutes, but conceptually we have complete file tree hierarchies for every snapshot.
These snapshots are all read-only, so you can't accidentally delete them. And in fact, they all just use journaled HFS+. So we haven't created a whole new file system format; we use the standard one that we already use and invest in. And in fact, you can look at these snapshots on systems all the way back to Jaguar. Of course, Leopard has a really nice UI on it, but it's a standard file system.
So for each of these snapshots, the way it works is, when the Finder wants to show today, we point the Finder at the entire file system hierarchy as it stands today. So it shows you exactly what you have today. When we take the Finder back in time one day, we just point it to the root of the file system tree as it stood yesterday, because we have an entire file system tree as it stood yesterday. So it shows the files there.
Now, that file is an image that I imported two days ago and hadn't yet rotated two days ago. So when I go back two days, the Finder shows it not yet rotated, because it's just looking at that file system. It's not doing anything magic. We just point it at the entire file system as it stood two days ago. Now, this works with more than just the Finder. It works with your apps. It also works with things like Address Book. So the way this works with Address Book is that Address Book points exactly at its own database within this file system.
So when Address Book is asked to go back to yesterday, when I hadn't yet entered Sonia's address, the address disappears from the card, because Address Book is merely showing us its database, which it knows how to read, exactly as it stood yesterday. And if we go back two days, before that card even existed, again it doesn't appear in Address Book, because it's just looking at its database.
So the app doesn't need to know about how we do snapshots or anything else. It just needs to know about how it reads its own database or its own files, and we point it exactly to a snapshot of that as it stood in time. Now here's the way we actually do the snapshots.
We start with the entire file system, and the first backup is complete. For the first backup to the hard drive or the server, we copy every single file over. So now you have a complete snapshot. We have a low-priority backup daemon, which is listening for changes. When a change happens, like this file here being rotated, an event goes out. It's called an FSEvent, and it's a new file system notification mechanism in Leopard. Our daemon listens to these changes. Now every once in a while, and right now it's about every hour, the daemon will coalesce all the changes that have happened and make a snapshot.
It takes everything which hasn't changed at all and creates a series of hard links to it, which takes virtually no space whatsoever. In fact, we even have tricks on top of the normal hard links to make it take hardly any space at all. And then the file that has changed gets copied over to the backup.
So now you change a file again, and a few minutes later, an hour goes by, and we go ahead and copy that file into a new backup, where all the rest of that backup tree is just a series of hard links, taking no space. Now once a day, we coalesce all of those backups into one, so you have a snapshot per day going back in time. So that's the way it works.
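To make the notification side concrete, here's a minimal sketch of listening for those file system events with the FSEvents C API in Leopard. The watched path and the five-second latency are arbitrary choices for illustration, and the real daemon's coalescing and hard-link step is only hinted at in the comments.

```c
#include <stdio.h>
#include <CoreFoundation/CoreFoundation.h>
#include <CoreServices/CoreServices.h>

// Called whenever something changes under the watched paths.  A backup
// daemon like the one described would note the changed directories here
// and later coalesce them into a snapshot: hard-linking everything that
// didn't change, copying only what did.
static void changes_callback(ConstFSEventStreamRef stream,
                             void *info, size_t numEvents, void *eventPaths,
                             const FSEventStreamEventFlags flags[],
                             const FSEventStreamEventId ids[])
{
    char **paths = eventPaths;   // default flags deliver plain C string paths
    for (size_t i = 0; i < numEvents; i++)
        printf("changed: %s\n", paths[i]);
}

int main(void)
{
    CFStringRef path = CFSTR("/Users");   // hypothetical path to watch
    CFArrayRef paths = CFArrayCreate(NULL, (const void **)&path, 1,
                                     &kCFTypeArrayCallBacks);

    // Ask for events from now on, batched with a 5-second latency.
    FSEventStreamRef stream = FSEventStreamCreate(NULL, changes_callback, NULL,
                                                  paths,
                                                  kFSEventStreamEventIdSinceNow,
                                                  5.0, kFSEventStreamCreateFlagNone);
    FSEventStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(),
                                     kCFRunLoopDefaultMode);
    FSEventStreamStart(stream);
    CFRunLoopRun();                       // run forever, printing changes
    return 0;
}
```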
So one way we're saving space here is by using hard links. Another way we save space is by just not backing up things that don't need to be backed up. For instance, if we're in Safari browsing the web, this is what our file system looks like. We have a cache for the current page you're on. Actually, we have a cache for a number of pages.
If you're in Google, you do a search, you go to another page, we have another cache page there. You download a file, it goes to your desktop, we have a file for that. So instead of just taking and backing up all of this, there's no reason for us to back up any of these cache files over here.
And so there's an API for you to mark things as cache files or as not-to-be-backed-up files, and then we won't back them up, saving a lot of space. (There's a sketch of that call a little further below.) So there's really two ways for you to do this. One is to use the API and mark files not to be backed up.
The other is to put files into standard locations. So if it's a temporary file, put it in /tmp. If it's a cache file, we actually have a standard cache location, so please put it into the standard location. If you get only one thing out of this part of the talk about Time Machine, it's this: mark your files that don't need to be backed up as such, which will enable backup devices to store farther and farther back in time. There's another type of file that you should mark as not-to-be-backed-up.
And that's an index file: it's not a cache, it's not a temporary file, but it's an index that can be regenerated from the other data files that you have. Mark that as not-to-be-backed-up as well, because if we went back in time, you could just regenerate it instead.
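As far as I know, the Leopard call that corresponds to this marking is CSBackupSetItemExcluded from Backup Core; treat the specifics below, especially the choice of attaching the exclusion to the item rather than to the path, as an illustrative assumption rather than a recommendation.

```c
#include <stdio.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>
#include <CoreServices/CoreServices.h>

// Mark a cache or regenerable index file so the backup skips it.
// The second argument turns exclusion on; the third says whether the
// exclusion sticks to the path (true) or to the item itself (false).
static void exclude_from_backup(const char *path)
{
    CFURLRef url = CFURLCreateFromFileSystemRepresentation(NULL,
                        (const UInt8 *)path, strlen(path), false);
    OSStatus err = CSBackupSetItemExcluded(url, true, false);
    if (err != noErr)
        fprintf(stderr, "could not exclude %s (error %d)\n", path, (int)err);
    CFRelease(url);
}
```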
All right. That is Time Machine. We think it's absolutely the best way to preserve your data, to go back in time, and grab it back. And we think in Leopard, with all the great content we have on our machines, we have really good solutions for both finding that content and for preserving that content on these increasingly large hard drives. For the next section, I'd like to turn it back to Bertrand.
So we've talked about lots of different parts of a computer. Now let's talk about entire computers. Let's talk about the server. Our servers find their way into a lot of different places. They are present in some of the top supercomputers. You find them in small department servers or small enterprises. You find them on higher-ed campuses or in elementary schools. To span this really broad use of servers, we need to make our server easy to use and powerful, and we need to provide a rich collection of services.
So let me go in turn through each of those three aspects. Ease of use, of course, first. We've made the Leopard server even easier to use. We're adding a number of applications and polishing the applications that we have.
Two I want to mention: one is a server assistant that, right off the bat, will ask a few questions about how you want your server to be set up, and it will set up your server based on just those few questions. It will also set up the network so that when you bring in a new machine, your machine will automatically participate in the network. We also have a new Server Preferences application that's very similar to the System Preferences that you find on the client. The big difference is that it sets up the settings for a whole bunch of machines at once.
Now, in terms of the power, of course, the power derives from Unix. Unix powers the internet. It powers most of the servers that you find on the internet. And all the benefits that we talked about for the client are present, obviously, on the server because we are using the same kernel.
So you have 64-bit processes. You have SMP processes. And we will recompile all the server services in 64-bit mode as well as tune them for SMP in the Leopard Server product. Now, another important aspect of Unix is that it's open source. Everything we do at the Unix layer is open source. And by the way, we'll be posting the sources of our kernel both for PowerPC and for Intel.
Now, at the same time, open source is kind of a two-way thing with the community, and we are selecting the best packages that are available, whether it's commands, libraries, services, whatever, and adding them to the server. We have a couple hundred of those. Now, with those kinds of bricks, we make high-level services, which brings me to my third point: a rich collection of high-level services. We have a pretty good collection in Tiger; all the basic high-level services are covered, but we're adding more for Leopard. We're adding a calendar server that, of course, matches iCal. It's based on the CalDAV protocol, all industry-standard protocols.
We're adding a server that corresponds to Time Machine, a backup server. We're adding, of course, the Spotlight server that Scott just mentioned. We're adding a number of collaboration servers to do wikis, blogs, and to produce podcasts. And we are bundling Ruby on Rails. Ruby is a very trendy language.
So the way we conceive of the server is that it complements the client nicely. Now we don't do that by any kind of monopolistic tie-in; we do that by just using standards, open standards, industry standards. So this is the basic strategy for the server, and it will be talked about a lot more in the session after this. It's about ease of use, about power, and about a rich collection of services. All this you will find in Leopard Server. And of course, Leopard Server runs great on the latest machines that we announced this morning, whether it's a Mac Pro or the new Xserve.
So this concludes the little promenade we've had through hardware and software and the integration of the two. Now, why do we do integration of hardware and software? And of course, it's to bring the ultimate user experience to our collective users. But it has another important aspect. It lets us create an ideal development platform for you guys to build upon.
Why? Because by integrating hardware and software, we can bring technologies to market faster. We also insulate you from all the details of the hardware. So for example, you don't have to learn how to program a GPU. And it's tricky; there's a lot of tricky stuff there. And we protect you from all that with low-level frameworks and even higher-level frameworks like Core Animation. We also provide a dependable baseline, both in software and in hardware.
So you can count on having a camera on most Macs, and you can have innovative applications that use that. So we believe that Leopard is the ideal platform for innovation. This is a theme that we've had in previous years. This is a part of our mission, and that doesn't change. We want to create this great platform on which you can develop great applications like these ones.
Now, one final thought. Over the years, what makes a great application has changed. It used to be that all you needed was the idea of doing something useful that hadn't been done before. It was all based on function. And with the advent of the Macintosh, of course, things changed. Now you had to think in terms of ease of use. It was no longer sufficient to have a useful application; it had to be easy to use. You had to think in terms of making it fun for the end users to interact with the application.
I think what has happened over the last few years is that we've entered into a new stage. Now you must provide a wow factor. You must amaze your users. And I think you'll agree that this is something we've done with backup. I think this is really important. I think this is the future. And I look forward to lots of your innovative applications. Thank you.