
WWDC04 • Session 300

Development Technologies Overview

Development • 1:03:18

With rapid development technologies, a groundbreaking user interface, and powerful features like built-in performance optimization tools, Xcode can help you work faster and smarter. View this session to learn how using Xcode Tools can increase your productivity and help you deliver outstanding Mac OS X products. You will see a demonstration of the latest development technologies from Apple, and get an update on Apple's current plans and vision for programming tools on the Macintosh. This session is intended for all Mac OS X developers.

Speakers: Ted Goldstein, Rich Siegel, Gavriel State, Matt Firlik, Chris Espinosa, Steve Peters, Nathan Slingerland, Sanjay Patel, Todd Fernandez, Tim Bumgarner

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Ladies and gentlemen, please welcome Vice President of Development Technologies, Ted Goldstein. Good afternoon. What an amazing day it's been so far. I hope to keep this on a roll. We have a lot to talk about. I want to give you a quick agenda of where we're going.

Give you an update on what we've done this last year with the Xcode Tool Suite, and then talk about something everybody cares about: performance. And then where we're going with what's next, the future for Xcode, and the compiler and runtime, which is such an important part of making applications great on Mac OS X.

Performance tools are a critical, critical part of how you make your application faster, and we want to give you all the help, not just making the compiler produce great code for you, but also give you the assistance it takes to make your applications really perform. And then finally, how can you use Xcode with all the great frameworks and SDKs you saw this morning and this afternoon? So let's jump right in and give you a quick update on the Xcode Tool Suite.

So, in this last year, we put out three releases of Xcode: 1.0, 1.1, 1.2. And 1.0 was the feature release. It had predictive compilation, ZeroLink, Fix and Continue, distributed builds, smart groups, and Code Sense. A very rich build. In 1.1 and 1.2, we elaborated and enhanced both the performance capabilities of that, the stability and the robustness of it, and filled in a few of the missing details and workflow issues. Third-party tools quickly jumped in. BBEdit and SubEthaEdit and Perforce, almost immediately out of the gate, provided great third-party support to show that Xcode was not just an island by itself, but included access for other tools.

In the 1.1 timeframe, we added support for IBM's terrific XL compilers for C, C++, Fortran, and Objective-C. And that is a great benefit to the platform, working with IBM. They, of course, also helped develop the 970 CPU, the G5 processor that Apple ships. And the combination is incredible.

Much of the great scientific computing that's been done over the years has been done in Fortran. It's been, you know, I can remember as an undergraduate in school that we said, I don't know what language people will be using in the year 2000, but it will be named Fortran. Well, it's still true. It's still named Fortran, and it's a great language.

And there have been oodles of templates and other components and capabilities that add to Xcode. This includes conduits and AppleScript plugins and such that really round out the corners of making Xcode terrific. And you've responded with over 300,000 downloads of these tools. And for that, I want to thank you very much. I think that's an incredible show of support for the platform.

Of course, it's great to have a volume of users using the tools and it's also extremely important for us to be working with our important developer partners. Folks like Adobe are helping us to make Xcode and GCC terrific for their tools. Here's a quote from Fritz Haberman, lead architect and head of InDesign.

And we think it's just an incredible thing to be working with partners, because frankly a large code base such as InDesign takes a lot of resources to move to a new environment, and as well for us to help the compiler really encompass such an enormous code base. And we learn and improve the product for all of our developers with each step.

We also see this as well with Quark. And Quark has been a fabulous partner working with us to make Xcode great. And we find that it's so important. So important to be doing this process in partnership with these key developers. Of course, this morning we heard from Maya Engineering, from Bob Bennett, and we're very happy to say that Maya 6 and Maya HT are incredible, incredible applications developed with Xcode.

And using the benefit, the productivity benefit and the performance gains that Xcode and GCC provide, to deliver on G5 performance. So that has been an amazing thing. Now I want to invite Rich Siegel, CEO of Barebones Edit -- Bare Bones Software, sorry -- to talk a little bit about building BBEdit with Xcode.

Thank you very much, Ted. Good afternoon, everybody. It's great to be here. So when Apple announced Xcode a year ago, almost today, in fact, we were really intrigued. At Barebones Software, we've got a long history of, of course, not only experience using developer tools, but also some of us have worked on them.

and we know that a diversity of developer tools is a good thing to have. It's good for the ecosystem, it makes a better developer experience, and that makes a better platform. So when Xcode 1.0 shipped in October, we took a good close look at it, we kicked the tires, and we thought about what's really important to us when it comes to making better products to serve our customers. And it came down to really three simple things. The first was compiler quality. And here, great job in Xcode. GCC is an excellent compiler, and I'll get into that shortly.

The next thing was build productivity. Usually the first thing we do when there's a platform change, such as a new hardware platform or a new tools platform, is we'll sit down and we'll whip out a stopwatch and we'll see how long it takes to build our product line.

And finally, the really important thing for us was to be able to continue using as many of our existing tools as we could. So, on the first front, compiler quality. Well, GCC is a great compiler. Its language conformance is very good. And what we found when we turned on all the warnings, which Xcode makes really easy to do because it's a great GUI for GCC, was a few latent bugs and some good code quality improvements. So the result is we have a cleaner code base, it's more maintainable, and we can run it through even more compilers, such as XLC for example, if we choose.
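
As a rough illustration of the kind of latent bug that turning on all the warnings tends to surface (a hypothetical sketch, not code from BBEdit or from this session), building something like the following with gcc -Wall -Wextra flags both problems:

    #include <stdio.h>

    int main(void)
    {
        int count = 3;
        long total = 42;

        if (count = 0)                  /* -Wall: assignment used as truth value; '==' was intended */
            printf("empty\n");

        printf("total: %d\n", total);   /* -Wall/-Wformat: '%d' expects int, but 'total' is long */
        return 0;
    }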

Also, the generated code is favorably comparable to pretty much everything else we've used. We gave up nothing in code quality. The next thing, as I mentioned, was the productivity improvement of build times. And there, what we found was that with distributed builds, we were seeing about a 30% to 40% decrease in build times compared to building on a single machine.

So we have a little bit less time to get a cup of coffee while we're building, but in spite of that, it turns into a real productivity improvement. The beautiful thing about that, however, was that we were able to realize this improvement without any capital investment whatsoever. We didn't have to go out and drop $20,000 on a dedicated build farm.

Instead, we were able to use existing hardware in the office, administration workstations, engineering workstations, operations, marketing, even the machine that drives our CD duplicator as part of the in-house build farm. So no capital expense, and a free 30% to 40% reduction in build times. It was like having our cake and eating it, too. It just didn't get any better.

Finally, of course, was the ability to use our own tools, and here this was crucial. And the external editor support in Xcode was great. I mean, can you imagine us not using BBEdit? So, we were really pleased to be able to do that, and in fact, on the converse, not being able to do that would have been a real deal-breaker.

So, I'm happy to say that as a result of really meeting these three basic tests, compiler quality, productivity through build performance, and being able to use our existing tools, all of our product development, with the exception of maintaining current shipping versions, but all of our products, BBEdit, TextWrangler, Mailsmith, and Super Get Info, are being built with Xcode today. So, I just wanted to thank Ted and his team for delivering a great tool chain. Thank you, Rich.

It's so important. And we focus on performance in every part of the development cycle, both on the build time, the launch time, the runtime, and development and debug time. And each of these areas is critically, critically important. With Xcode 1.0, we placed an immense amount of emphasis on the batch build time, that which you do in the office. In this last year, though, a lot of people said to us, Ted, you know, that's great when I'm in the office and I can use distributed builds.

What about when I'm sitting in a cafe drinking that cup of coffee on my laptop? And so in this last year, we have turned our attention from the main office build time to the cafe build time, if you will. And so I'm happy to say that now Xcode build performance is 33% faster on laptops.

We also get another 18% speed improvement on dual processor machines as well. And so we think of these things as incredibly important, and we will continue to make improvements on each of the areas. I think that it's critically important to realize that GCC, as the heart of this, is already an amazingly fast compiler. We are innovating in multiple ways. One of the most important ways that we're doing this is something we think of as shortcut compilation.

And that is ignoring unnecessary declarations and short-circuiting unnecessary recompilations. What's an unnecessary recompilation? For example, when you hit a space or put in a comment. Really, the program meaning of the code hasn't changed; just the text, the white space if you will, of the program has changed. And so this is coming out. It didn't make your DVDs, but it will be in a future download beta coming along soon.

Now let's turn our attention over to launch performance. The time it takes for the program to start up. Mac OS X has seen continuous evolution of the Mach object file format. Among the things we typically advise people to do is to pre-bind their applications. Pre-binding is the act of setting up your application for a given release so that the addresses of the libraries are already located in the executable text segment.

But the feedback on pre-binding has not been good. This is a great comment from Arnold Sanity.org. I really hate the fact that Mac OS X requires pre-binding for Mac OS X applications. And there are whole websites. Pre-binding is painful. It's time consuming. It's error prone. And now I'd like to say that it's unnecessary for most applications.

Starting in Jaguar, continuing on in Panther, and then with some work that we finally put all together in 10.3.4, we made it so that you don't need pre-binding. Take Maya 6, for example, one customer that we worked with who was taking their application from CFM to Mach-O. Originally, in an un-pre-bound, dynamically bound state -- it's the opposite of pre-bound -- it takes about 48 seconds to launch. And that kind of impinges on the development cycle. In the pre-bound state, it takes about 12 seconds.

And in 10.3.4 and Tiger, it takes only five seconds in the dynamically bound, not pre-bound state. How could we do this so fast? It's because we put in important things like two-level namespaces and hints to help quickly find symbols, so that it's not searching all of your address space as the binding is going on, but is doing that very quickly and searching a very small subset. So this is an important technology, and I think it's very helpful to have.

In Tiger, we've even taken it one step further. We're studying applications like Safari, for example. These are some preliminary measurements where we have both the pre-bound and the dynamically bound states. There is no difference; we cannot tell the difference. Now, you know, this is a preview release. We want to hear your feedback, so please do some studies. But we believe that pre-binding is no longer necessary.

So that's pre-binding. So what's next for Xcode? With Xcode we have-- we're releasing-- I think Peter had already turned on the RSS feed, so we already saw some of the press releases hitting the Apple News site that there is going to be an Xcode 2.0. But in fact, there are two releases, an Xcode 1.5 for Panther and an Xcode 2.0 on Tiger.

And we've gotten here because you've given us the feedback that says, you know Ted, it's great for us to get the next release, but we don't want to wait that long, and we want to use the new tools and features that you have on the current operating system that we're shipping today. So Xcode 1.5 contains most of the goodies that 2.0 has -- certainly everything that can work in a Panther context; it doesn't take advantage of important frameworks and new features that are Tiger only.

So we're going to try to do this. I'm not going to make a firm commitment, but definitely we've heard you, that you want to have great tools at the same time as you have the forward-looking tools. So let's focus in first. And really, 1.5 is a feature release where we focused on the features you've asked for.

Principal among them is an important feature called dead code stripping, essentially removing the unnecessary code from a program. We also realized that even though we're doing this amazing formatting in Code Sense, you also want to have faster edit performance. And edit performance is improved in many cases by 10x, and even more for very large files.
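
As a minimal sketch of what dead code stripping removes (illustrative only; as far as I know, the Xcode setting maps to the linker's -dead_strip option underneath):

    #include <stdio.h>

    int never_called(void)    /* referenced nowhere: a candidate for the linker to strip */
    {
        return 99;
    }

    int used_function(void)   /* reachable from main: always kept in the executable */
    {
        return 7;
    }

    int main(void)
    {
        printf("%d\n", used_function());
        return 0;
    }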

One piece of feedback we heard was: we love Java and AppleScript, but where then is the Code Sense support in Xcode? And in 1.5 you've got it. You've got support for Code Sense and incremental compilation, and Ant, and many other build system features for Java and AppleScript. You'll see some of that later today.

And finally, Subversion. Gee, I thought I'd have to explain to you what Subversion was. So for the two people in the room who don't know, Subversion is a CVS-type source code management system, and many people believe it is the logical successor to CVS. It's done by the same folks who did WebDAV, so it has a terrific network orientation. And so now Subversion support is built into Xcode 1.5. We've also made a number of debug improvements. Now I want to bring up Chris Friesen and Gavriel State from Transgaming.

Thanks, Ted. So, at Transgaming we work on software portability tools for the entertainment software industry. Specifically what that means is that we work to provide tools and technologies to allow developers and publishers to take their games from one platform to another very quickly and cheaply. Now, we've been working on a title called Tron 2.0, which was recently released. In fact, you can actually go to CompUSA and pick it up in the stores.

And one of the things we run into a lot when we're dealing with porting games is the fact that a game, by its nature, takes a huge amount of CPU, huge amounts of RAM, and most importantly, wants to actually take over your entire desktop and lock your mouse into that window. So trying to debug a game on a single CPU can be extremely difficult and cumbersome.

You have to kind of change your rendering code to put it into a window and share the RAM, especially with huge symbols that you get with some of these large projects.

[Transcript missing]

So one of the features that Xcode 1.5 provides, which we've been playing with for the last week or so, is cross-machine debugging. What we're going to show you here is running the game, Tron 2.0, on one machine while debugging it on a completely different machine.

Now, Chris Friesen has been intensively training in the Tron 2.0 light cycle game over the course of the lunch break. And he's going to demonstrate the results of that training. Please, Chris, go ahead. We have sound on the game machine? Now, you know, I believe, I was watching Chris train, I believe this problem is not, it's not that Chris didn't train enough.

The problem here is really something very different. Something that you wouldn't necessarily expect on a light cycle. But I think Chris is going to go here, he's going to show you the character he was set up to play. And clearly the problem here is that his character had spiky hair.

It was impeding the airflow in the light cycle, it just wasn't working. So we're going to switch over to, actually we're not going to switch over quite yet. What I'm going to do here is I'm actually going to go on this machine and activate a breakpoint. All right.

You'll see that the game machine has stopped. And let's switch over to the debugger machine for a moment. And what we're going to see on the debugger machine is the actual code that's executing at this point, which is in the middle of Transgaming's proprietary Direct3D rendering technology, which allows us to take games written for Win32 and bring them to other platforms.

And we've got a little extra piece of code here that I added that can deal with the whole spiky hair problem. If a certain kind of rendering is happening, we're just going to mask it right out. All right, so let me go back to the breakpoints and disable that one.

And let's go back to the game machine.

[Transcript missing]

Remote debugging is also extremely important when you're on two machines. Was there one other thing? I'm sorry? Was there one other thing? There is one other thing. It's going to take us a second to get it ready. We'll run that right before your next demo. Great. So remote debugging, numerous other features in terms of globals support, being able to set arbitrary expressions, and a host of interesting features you'll see as well in the 2.0 demo. So let's go to Tron level 2-- I mean, Xcode level 2.

So, a very big part of what people care about is to be able to launch quickly into their application and quickly over to Xcode. And you want to actually see Xcode laid out in a way that's familiar and works well for you. And so we've added a number of important new layouts that are extremely familiar for people from both the Visual Studio and CodeWarrior worlds.

We also saw earlier today Andreas Venker give the modeling demo with the database. That same modeling technology is also available for object-oriented programming for Java, Objective-C, C, and C++. And we think that's going to be a very big help to you, even if you're not doing database programming. That's on the front end.

In the back end, we've added support for GCC 3.5 and 64-bit tools. GCC 3.5 is the next generation of GCC, and it has some amazing performance features, some of which are being innovated by Apple and IBM working together. And 64-bit support, so that you can generate code for and take advantage of the full address space. I'm going to spend some time talking about that, too. There are also important new performance tools and some incredible SDKs that Xcode is essentially the gateway to.

The traditional Xcode layout is all very nice and compact and very neat. And it has a number of great features. But one of the things, of course, is that to get to the debugger, you have to open up a separate window. And that kind of takes your mind away from it.

Here we're looking at the console window. Oftentimes to actually see it, you're then doing your own window management. And one of the things that's very popular on the Microsoft platform, Visual Studio World, is an all-in-one view where everything is nice and compact. And so we've added this now to Xcode.

You have a new all-in-one view that essentially puts the debugger, all of the tools embedded within the tiling frame of Xcode. But of course, it's important to have an easy way to switch. You'll see that there's a new toolbar up on the top to be able to rapidly switch function from one to the other.

The other important side of things is this idea of a condensed view. People familiar with CodeWarrior will have many, many windows spawned off of a central palette. And that also is a new part of what Xcode 2.0 supports. So these two incredible features, I think, help. And why don't we take a look at this and some other nifty features. Let's bring Matt Firlik and Chris Espinosa up to give a demo of Xcode 2.0.

Thanks, Ted. Xcode 1.5 and Xcode 2.0 are replete with features. We only have a short time today, so Chris and I are going to demo two of them. But we encourage you to go to the rest of the sessions this week to check out some of those additional features.

In addition to being a standard IDE, Xcode provides a number of facilities for dealing with large problems that you have. Let's stick on the whole concept of debugging for a moment. Global variables. Kind of a necessary evil when working. Everybody has them, and beyond the ones in your application, there are potentially thousands in all the shared libraries that your application loads as an application on Mac OS X.

The question is, how do you deal with those? How do you figure out the values? How do you keep track of those as part of your development work cycle? New in Xcode, we're providing the ability to look at global variables in your application in a very simple and easy manner.

What you'll see right now is that Chris has launched an application called Celestia. It's a freely downloadable application for looking at 3D renderings of, in this case, the universe. What we've done is set up a breakpoint under the history option here. So we're going to go ahead and hit that, and we're actually going to stop right in the middle of the debugger. Probably something you're all very familiar with.

But say at this point we wanted to actually go and look at something interesting, something in the sense of global variables. If you go and look up under the debug menu, you'll now see there's a tools menu, and we have something called the global variables browser. Chris is going to go ahead and make this window a little bit bigger so we can see some of the more information in here.

You see on the left-hand side, we have a list of all the shared libraries that this application knows about. And when Chris selects one, the right-hand side is populated with information from that shared library. We can see the name of the global variable, the file name, the value. These are all standard views, so we can go ahead and reorganize the columns and take the file name out of the way to look at something a little more interesting.

But you can obviously see in AppKit here, we have a lot of symbols, in this case 500 that we're looking at. Makes it kind of difficult to find the ones you want, so we've included standard features like a search field at the top. Chris could go ahead and type in something, for example, NSF, and go ahead and filter all those global variables down to the ones that start with a common prefix. Something relatively easy to do, but it makes it very easy to find something by name. Not necessarily as interesting, so let's go and look at something for the Celestia application. Chris can go ahead and click on that, and we're going to load the symbols for the Celestia application.

Once it's done populating, Chris is going to have to go ahead and remove the search field that he's typed in, so he can get the values back. Now that we see that, let's go ahead and search on something more interesting. What if you don't actually know the name of your global variable, but you know, for example, it's an int? We can go ahead and change the search to look based on type, as you see here, and Chris is going to go ahead and type in int, and now we can see, you can actually go find the global variables by type, making it very easy to go ahead and look at them.

So we made it very easy to deal with the concept of large amounts of global variables, but how do you get this into your workflow? Well, you'll see that there's a nice little column here called view, and you can go ahead and check those. Chris can go ahead and select a couple of them.

Once we do that, if you close the global variables browser and go back to the standard debugger view, you'll note in the variables display, there's an element called globals, but now when he expands it, you'll see the global variables. So we're giving the facility to take something as large as global variables and bring it down to a standard workflow.

On the same concept of this, let's talk about designs. Chris, you want to show them the new design features of Xcode? Sure. Thanks, Matt. Now, you got a taste of the design features this morning in Bertrand and Andreas' presentation when they did the core data model for the core data framework presentation. This is a new feature of Xcode where we are going to add design features throughout the workflow from the beginning of your application design all the way through deployment and debugging.

The first two plugins for design are here on your Xcode 2.0 CD. You've already seen the core data one this morning. We're going to show you the class plugin this afternoon. Now, what Matt's going to do is he's going to take this Celestia application, and he's going to create a new class model of some of the classes in the Celestia source code.

So he picks a class model, gives it a name. A class model is a project file like any other project file. It's stored in your project directory. It's committed to your Subversion or CVS repository. He's going to pick a set of sources, a folder, and he's going to make a tracking model. He's going to add that to his group.

And now he has a model of the sources in that folder. Now, these are the classes. Now, you've probably been familiar with other applications that give you a snapshot or a bird's eye view of the classes in your application. But it's pretty static. It's how the IDE communicates with you what the classes are.

What you can do with the design tools in Xcode is you can use the class model to communicate to other people what they are. So you can move these classes around. You can expand them or collapse them to show their methods and their functions. And you can make the design look like how you think the application works, save it with your project, and then when other people download or check out your project and open it up, they can explore it using the class model that you've created.

Now, this is, of course, wired to the project. What Matt's going to do is he's going to select a method in one class. He's going to go straight to the implementation. That's fairly easy and obvious. And he's going to go back and using the menu, he's going to go to the definition in the header file.

And also, because all of your classes may descend from classes in the Apple frameworks, which have documentation behind them, he can select a class and go to its documentation in the online documentation instantly. You can go either to the object itself or directly to the method or the property on the object. So these design features are all built in.

Now, you've seen what we can do with modeling your classes, but you're probably wondering, hey, why do I have to create a separate file and store it in the repository and in the project? And you're probably wondering about scalability, how it does on really big projects. Well, first what we're going to do is show that it's tracking. It's live. It's connected to your project. So as you change your project, the model changes with you. So what Matt's going to do now is add another folder of sources to the model, and there they go. The design automatically updates.

[Transcript missing]

Thanks, Matt and Chris. By the way, of course, a great use for that is to put it up on the wall in the hallway; your boss will think you're really doing something. You can find out more about that Friday at 9:00 in the Modeling and Design in Xcode session in North Beach. So this is, I think, one of the terrific new features. I think it's one of the starring great things in Xcode 2.0, and a reason to use both Xcode 2.0 and 1.5. Now let's turn our attention to the compiler and the runtime.

First, let's give a little bit of a road map, because we throw a lot of version numbers out, and if you're not on the GCC mailing list, you may lose track. GCC 2.95 is really the oldest compiler anybody is likely to be using, and it generated code for G3 and G4, but it didn't generate any of those 64-bit G5 instructions. 2.95 is a compiler that we keep around because it supports the original 10.1 text file format. But other than that, there really is no great reason to be using 2.95 anymore.

The last compiler, not the current compiler, but the last compiler, is GCC 3.1, and it too didn't generate code for the G5. The next compiler, the current compiler that everyone should be using, is GCC 3.3. And it has great support for optimization, great support for the G5. And so with this coming release, we're going to obsolete some compilers. Now, I'd like to obsolete both 2.95 and 3.1, but some people still want to ship 10.1 kexts.

So what we're going to do is we're going to get rid of the 3.1 compiler, and we're going to make GCC 2.95 download only. And we think that that's a reasonable compromise. So the current compiler, 3.3, is the best -- is the compiler we should be using for shipping products. And this makes room for GCC 3.5. This is the leading edge compiler. This compiler has terrific new optimization features and a brand new C++ parser.

It also has support for long doubles. And 3.5 really is where most of the attention is happening in the Free Software Foundation, the GCC community. And if you look at it, 3.5 is really something you want to start getting into. Give us some feedback on it now. We're putting it on the developer tools distribution so that you can begin working with it. But of course, it's only in preview mode.

The big lifting in 3.5 is in the performance improvements. And we're totally retrofitting the optimization technology with something called static single assignment. SSA form is really the professional and best way to do code generation and optimization. It's a little bit like register renaming, for those of you who are familiar with the hardware world.

And what it does is essentially treats every expression that is computable as a separate arithmetic expression inside the compiler. And the compiler can then do incredibly good scheduling and overlapping and such. That also gives you the ability to have very fine-grained control. And another terrific feature called auto-vectorization.
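
As a rough sketch of what SSA form means (illustrative only, not actual compiler output), here is a small C function with the compiler's internal renaming shown in comments:

    int blend(int a, int b, int c)
    {
        int x = a + b;   /* SSA: x1 = a0 + b0  -- every value is assigned exactly once */
        x = x * c;       /* SSA: x2 = x1 * c0  -- the reassigned x gets a fresh name   */
        return x + 1;    /* SSA: return x2 + 1                                         */
    }

Because each value has a single definition, the optimizer can track, reorder, and overlap independent expressions much more aggressively.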

We heard a lot this morning about the GPU processing. In fact, the PowerPC, of course, also has in it the AltiVec Velocity Engine processor. And that allows us to have this additional processing in parallel, in addition to the GPU, but on chip, this terrific capability. Now, previously, in order to take advantage of AltiVec, you had to handwrite your own intrinsics.

You had to write, essentially, very low-level instructions and do the layout of your data so that all of your data types began and ended on a 16-byte boundary. But with auto-vectorization, we essentially say, hey, let's let the compiler do it. The compiler can worry about those details. And so, instead of having to write the vectorized code yourself, the auto-vectorizer converts the code and operates on it.

And identifies your most common loops. It works best with memory-aligned data. So, if you can structure it so that your blocks of memory are on 16-byte-aligned chunks, that's terrific. But it doesn't have to be. And there are annotations to tell GCC to go and try to keep the beginning of a data structure on a 16-byte alignment.

But that's not always possible. The compiler will compensate and put in the correct prologue and epilogue code for those loops. So, what's the payoff? Well, at the time I made these slides, when you have these kinds of arithmetic expressions inside of a loop, we could see anywhere from a 4x improvement, with a vector of values being executed in parallel, up to 12 and even 14x.

In fact, some late-breaking data, some tests that I just read this morning, have it up to 20x performance improvement. So, really amazing, amazing improvement that you can get from auto-vectorization for some codes. And we think that you're going to like using this feature because it's really very simple to use. You just enable it with one checkbox. So, that's auto-vectorization.
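
A minimal sketch of the kind of loop an auto-vectorizer handles well, using GCC's aligned attribute for the 16-byte layout described above (the exact Xcode checkbox and compiler flag names are assumptions here, not taken from the session):

    #define N 1024

    static float a[N] __attribute__((aligned(16)));
    static float b[N] __attribute__((aligned(16)));
    static float c[N] __attribute__((aligned(16)));

    void add_arrays(void)
    {
        int i;

        /* Independent iterations over 16-byte-aligned floats: a vectorizing
         * compiler can process four elements per AltiVec operation here. */
        for (i = 0; i < N; i++)
            c[i] = a[i] + b[i];
    }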

Now, let's turn our attention to 64-bit. 64-bit is something that we had, in fact, on Panther. It's part of the G5: access to greater than 4 gigabytes of RAM. But we also gave you this flag in GCC, in Xcode, to compile for the G5 architecture. And that actually gives you, from C, C++, and Objective-C source code, 64-bit arithmetic operations, so long longs can already fit in one register. And we think that this is already something that you should be using on G5s, even today on Panther.
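
A small sketch of that point (the specific G5 compiler setting is an assumption, not quoted from the session): 64-bit integer math is already expressible from plain C with long long, and when built for the G5 each value can live in a single 64-bit register rather than a register pair.

    #include <stdio.h>

    int main(void)
    {
        long long population = 6400000000LL;       /* larger than 2^32, still one value */
        long long scaled     = population * 10000; /* a single 64-bit multiply          */

        printf("%lld\n", scaled);                  /* prints 64000000000000             */
        return 0;
    }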

But what do we actually need with 64-bit memory? Why do you want to break the 64, the 4-gigabyte barrier? And we think, in fact, that a great many applications, certainly those eight megapixel displays, are going to give you lots of opportunity to display all this great data, all this supervision.

So, all of a sudden now, new vistas of media, genomics, proteomics, medical, engineering, and one of my favorites, geospatial applications, become possible. Not to say that we didn't already have techniques for sliding through larger than four gigabytes of data, but it was cumbersome and awkward. So, I'd like to have Steve Peters give you a demo of a geospatial application. Can we have this demo machine?

Ted, we're looking at TerraVision. It's an open source app that I downloaded from the net. TerraVision is a visualization app for terrain data, and lets you see terrain data in 3D. It's a 32-bit application that takes advantage of a variety of Mac OS X graphics frameworks. Behind the scenes, there's a 64-bit service application, a service process. I built that using the Tiger Preview and the tool chain and Lib C, readied for 64 bits. Sure, let's give it a go.

So this is a computation, not a movie. And as we move through this data set, we'll encounter more and more complicated terrain pieces, going from topographical data, Yosemite shimmering off to the east, Landsat data around the Bay Area, and finally this patch of high altitude aircraft imagery data, which gets us down to a resolution of about a meter. Anybody recognize that feature in the Mid Bay? That's Stanford Stadium.

And we'll swing around to see... I think that's my house. Yeah, University Avenue, Palo Alto, you can see the Apple Store down there. I always wanted to visit Stanford, so we'll roll up this way and see the main academic quad and Palm Drive. A little run up to the dish, this is where the radio astronomy towers are sited.

And then we'll sprint down, actually sprint north, towards SLAC. In the foreground are the experimental halls, and then the linear accelerator running off two kilometers into the background. And there's this curious feature about SLAC: it passes under Highway 280. So I'd like to acknowledge the creators of TerraVision, Martin Reddy, Yvan Leclerc, and Lee Iverson.

They did this at SRI a few years ago. And the take-home message here is, with Tiger Preview, you can start to experiment with application architectures that employ 64-bit service processes like the ones we've shown you today. Check it out, see if your app would benefit from some 64-bit goodness, and enjoy the ride. Thanks. You bet, Ted.

64-bit addressing in Tiger essentially gives you access to much larger data sets. How large? Well, you can learn more about it, by the way, on Wednesday at 10:30 a.m. in Presidio with the Programming for Mac OS X session on 64-bit. We're doing a staged rollout. So initially, we're focusing on the libSystem functionality, the basic I/O capabilities that you need, and the capabilities to map from a 32-bit address space into a 64-bit address space.

and many more. 64-bit is an amazing number, and if you're a geek like me, you love big numbers. How big is 2 to the 64? Well, it's 16 exabytes. It's that large number, which I'm not going to read, or 18 times 10 to the 18th. And here, your scientific training in exponential numbers is going to come in very handy.

Really, it's -- having access to these much larger data sets really is going to change, to some extent, the way you think about data and data management. And what you just saw in the application that Steve demonstrated is essentially we have a 32-bit GUI front end, right? But it talks to a 64-bit service address space.

That service address space can do a lot of the heavy lifting. It can access a uniform memory environment. But the 32-bit address space essentially is mapping in just the data structures that are needed to present. Because, in fact, frankly, there really is no great benefit to doubling the size of a pointer that points to a window. There's only benefit to be derived from a pointer that points into the memory that has your actual data in it. And so we believe that this is a very convenient architecture. It's one that's extremely scalable.

And it's one that gives you the benefits of 64-bit without having to take the big plunge of porting your entire application from 32-bit to 64-bit. So this is what we mean by kind of a scalable architecture for 64-bit computing. And as you can see, we've already demonstrated that it actually works and does give great performance today.
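
A minimal sketch of that split, with a hypothetical helper name and protocol (nothing here is taken from TerraVision itself): the 32-bit front end delegates the memory-hungry work to a separate 64-bit service process and reads back only the small result it needs to display.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* "terrain_helper" stands in for a 64-bit service binary that might map
         * tens of gigabytes of data; the 32-bit front end never maps it itself. */
        FILE *helper = popen("./terrain_helper --tile 42", "r");
        if (helper == NULL) {
            perror("popen");
            return EXIT_FAILURE;
        }

        char summary[256];
        if (fgets(summary, sizeof summary, helper) != NULL)
            printf("helper says: %s", summary);   /* only the slice the GUI shows */

        pclose(helper);
        return EXIT_SUCCESS;
    }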

I want to actually talk a little bit about the future, because I believe that 64-bit has some incredibly interesting emergent properties. When we made the transition from 16-bit to 32-bit, we all of a sudden had the capability to do object-oriented programming and computer graphics in an incredibly intense way. That's not to say that you couldn't do object-oriented programming in a 16-bit address space. It just wasn't all that useful or necessary.

Applications didn't have to scale that large. The functionality wasn't all that impressive. Going from a 16-bit space to a 32-bit space, all of a sudden now, certain capabilities and technologies became very easy and uniform. And I believe the same thing will happen now. As we move from 32-bit space into 64-bit space, now we have amazing, amazing capabilities to access just a breadth of data.

And so I think of this as kind of a supervision, right? We have this ability now to look at scales of the universe and of data, to make seamless transitions between data sets so that, in fact, you know, your locality search, say, instead of Google being, you know, a zip code, you just pinpoint where you are on the globe, zoom in, there you are, okay, what's there? It's so, it's going to unlock incredible vistas. And I'd love to see the applications you folks are going to develop with the 64-bit tools we're providing to you today. That's 64-bit.

Let's talk about the next generation runtime. Again, this is a little forward-looking. We are working on garbage collection. Why do you want garbage collection? Well, it simplifies coding, especially in a multi-threaded world where events are generated both by the network, by the user, by perhaps other people, and it can improve correctness of your applications and improve the memory efficiency of what you're doing. So what we've been cooking up in the lab is semi-automatic memory management for Objective-C.

It's fast. It has very low latency. It's designed for interactive applications, and it's optional. You don't have to use it. On a per-application basis, you can either enable it or not enable it. It's binary compatible with the frameworks that we'll be shipping, and we believe that applications should begin experimenting with this. It's kind of a raw feature when it comes out in Tiger, so we really don't understand yet the complete balance between high-performance applications and the amount of memory footprint it has.

So this is something where we'll be making this technology available in the Tiger timeframe, and I think it's going to be very interesting, but this is not a journey that lasts in one year, right? This is something that we will explore together to figure out different ways to tune this technology, but I believe it's going to make a very big improvement to productivity and debugging of applications.

Instead of having to do sort of the retain/release style programming that we do today in Objective-C and C++, we will have techniques there that say, you know what, when memory is no longer being referenced by anyone, the garbage collector will just reclaim it. So this is cooking up in the labs, and I'm very excited by this. Thank you.

The other side of things is performance tools. So performance tools we provide help you make your application great. One of my favorite performance tools is Shark. What is Shark? Shark is a multi-purpose tool. We've enhanced it with functionality, merging in capabilities such as sampler into Shark, and what it does is it profiles everything, all the way from the device driver level to the kernel to the applications.

It has extremely low overhead, and it works both in a static and a dynamic way. It annotates the source code and the disassembly code, and it gives you optimization tips. So let's bring on Nathan Slingerland and Sanjay Patel to give us a quick demo of Shark. Thanks, Ted.

Good afternoon. I'm really proud to announce that we have a preview version of Shark on your Tiger preview disks, and a new version of Shark is also going to be available shortly off the FTP site. And what I'd like to do is just demo some of the features in Shark. I think the best way to do that is with an open source application that we think is rather cool. We found it out on the web. It's called Celestia.

[Transcript missing]

So here's Celestia running. And what you have to do if you're going to optimize for performance is add a little metric. So we decided to figure out how long it takes to run through this demo. And now if we launch Shark-- You can see that we've added lots of features to Shark, but the main thing we had to do was make sure that it was still very easy to use.

And the default workflow for most people is simply to hit the Start button. So right now we're actually taking samples of the system. So we're sampling not only Celestia but all the running processes on the system, by default, once per millisecond. When we hit Stop, we're now going to process all those samples that we took.

You can see by default we've gotten what we call the heavy view, and this is ordered from time spent in the topmost function down. Now you can also look and see that we've sampled the entire system. Celestia is taking up a little less than half of the CPU cycles on this system.

You can see we have a thread pop-up to show you all your threads of execution. Celestia is single-threaded. That explains why it's not getting all of the CPU resources on a dual-processor Mac. Now we can also look at Celestia in the more traditional tree view, so you can see your code executing all the way from main down.

New for Shark 4.0, you can actually view heavy and tree simultaneously. So that was a big request. We've added what we call data mining features as well. So we get a contextual menu, as well as this side drawer, which lets you filter out things that you don't want to see. For example, the system libraries.

You can also color your code, so it's very easy to figure out where you're spending time. Now one area we've really improved is the chart view. So this is a chronological view of your program's execution. In this case, we have two processors, so we have two charts. Now we know that Celestia is single-threaded at this point, so let's just focus in on its one thread of execution, chronologically. Now a big feature we added was this Zoom slider.

So you can zoom in and out. If you click on the stack, so along the x-axis we have time, and along the y-axis we have stack depth. And you can click or mouse around through here, and you can see your stack at any point during your program's execution.

So now if we go back to the profile view, probably the coolest feature, I think, of Shark is if you double click on a function, and we found this one function in particular called BigFix that we're going to look at. You see your source code. But you don't just see your source code. It's actually annotated, and it's highlighted. And the brighter the yellow, the more important, the more time-consuming, that line of source is.

And so you can see also Shark is going to offer you advice when it can. These are these exclamation points. So for example, we have this floating point to integer conversion. Now this is really bad for a G5 because it causes a serialization of the pipelines. Now new for Shark is the fact that we can look at both source and assembly simultaneously. So you can see exactly what the compiler generated for each line of source.
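
A sketch of the kind of pattern being flagged here (the names are illustrative, not Celestia's): a float-to-int conversion inside a hot loop, which on the G5 round-trips through memory between the floating-point and integer units and so can stall the pipeline; hoisting or batching such conversions is a common fix.

    void quantize(const float *samples, int *out, int n)
    {
        int i;

        for (i = 0; i < n; i++)
            out[i] = (int)samples[i];   /* the conversion Shark would highlight */
    }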

And when you highlight on a line of source, it automatically highlights and scrolls the lines of assembly that correspond to each line. Now, for me, I generally speak PowerPC as well as I do English, so we know that's not true for most people. So we have this PPC Help button down in the corner.

As you can see, it's actually updating as we scroll through. It updates and gives you a description in English of what each mnemonic is. So that's really cool. And so it's really great to be able to focus in on your performance hotspots right away. And we have this nice edit button now. And this will jump you right into Xcode. And the line that's highlighted in your Shark viewer is now highlighted in Xcode, so you can edit away.

So if we go back to Celestia, you can see that this demo has already been running for over four minutes, and this is the original code when we got it. We decided we should spend some time optimizing it based on what Shark had told us. And you can see we did nine steps. Now we don't have much time here. So let's just show you Warp3 for a second here. If we restart the demo, you can see that we're going to do a flight through the universe. We're going to stop around Saturn, go off to one of its moons.

And pretty soon here we'll be done. Now that's not bad. Originally this demo took over a thousand seconds to run. So we made some improvements at warp three. But we weren't done. We decided we should write AltiVec code, we should do some G5 optimizations, we should unroll some loops, we should schedule code better. And the sum of all that is what we call warp nine. And if we take a look at that, you can see we got the demo to be a little bit over 300 times faster from where we started, using Shark. Thanks.

I wanted to put up the URL. I think Celestia is a wonderful application by itself, so do spend more than five seconds looking at it. There are lots more tools, performance tools. Shark is terrific and such. But also, be able to look at, you know, memory allocations, and where they are going.

Very often, an application is slow not because it's algorithmically slow, but because it's taking up too much memory. Shark also has features to profile and track the events of when allocations happen. There are also standalone applications that you can leverage and work with without the GUI: MallocDebug, ObjectAlloc.

Quartz Debug, for looking to see your Quartz allocations. Sampler. Spin Control is a great thing when you see the pizza, the spinning cursor, to understand what the application is doing. A great, great thing. Thread Viewer. Accessibility inspectors and verifiers. Very, very important to make your application accessible to the unsighted.

It's a great new set of capabilities in Mac OS X, and it will give you the tools to help make your application accessible. So there are lots and lots of performance tools and analysis tools on the developer CD. It's worthwhile to explore them, and there are sessions on all these features.

Xcode is, in many ways, the gateway to the frameworks on the system. And you heard this morning from Bertrand about a number of the terrific SDKs and tools. Spotlight is a great new SDK. Dashboard, Core Data -- all of these SDKs, in many cases, have additional tools. For example, Quartz Composer, to better understand how the filters interact.

So it's a terrific thing to spend some time and leverage this. Core Data is a terrific SDK that unlocks the virtues of persistence and database for object-oriented programming. I think one of my favorites, though, is going to be the Automator. And Automator SDK, and let's bring on Tim Bumgarner and Todd Fernandez to give us a demo of the Automator Action SDK.

Thank you very much, Ted, and good afternoon, developers. I hope that you all got a chance to see Automator this morning in the keynote. While Sal did a great job of showing off Automator's user side, and Tim is taking advantage of it right now-- I'm sorry, it's already done-- running a workflow to set up our demo, we're here to show you the developer side. And the first thing we want to do is show you an action you did not see this morning, an action that allows you to create a new email.

And if you take a look at the Actions user interface, and Tim, please bring up the people picker, you'll see that something's missing. There's no BCC support. So in order to show you how easy it is to create automator actions, Tim and I are going to add that support for you now.

And to do that, there are three main steps. We need to update our action's user interface. We need to add the connections between the new user interface elements and our code. And finally, we need to add our source code in AppleScript and Objective-C. So Tim, please open the project in Xcode 2.0. And the first thing to notice here is that this is a native target. Native targets now support AppleScript. So let's go ahead and open the nib in Interface Builder.

And there's our new text field to hold the BCC addresses. And we've added a BCC button to the people picker, and we've already wired that up to a new addBCCItems method. Now what we have left to do is to use Cocoa Bindings to bind the value of that new text field for the BCC addresses to a new variable, called, strangely enough, BCC addresses.

And we've got that all set, so please save the nib, Tim, and let's get back to Xcode. So now that we have Cocoa Bindings setting this new variable, BCC addresses, we need to actually use it in our script, which is used to manage the action's user interface. Tim, go ahead and add a line of code to read the BCC addresses out of the parameters block, which is managed for us by Cocoa Bindings. And here we're showing off another great new feature in Xcode 2.0: we now have code completion for AppleScript.

Excellent, let's save that. And Tim, if you could just give us a brief tour. We already have the rest of the AppleScript code, analogous to what we already had for the To and CC addresses. And down at the bottom, we're taking advantage of Mail's great scriptability to add the new BCC recipients to the new email message that the action will create for us. All right, that's all we needed to do in our script, so let's please save that, Tim. And finally, what we need to do is open up an Objective-C class and take a look at that new addBCCItems method.

And really all we needed to do was copy and paste and change three CCs to BCCs to read the value from the people picker and add it to the text field. So great, that's all we needed to do, is add this new feature. So Tim, if you would hit build and go, please.

And we'll save our script. And what we're doing here is taking advantage of a custom Automator executable, which is a great Xcode feature, which has a launch argument which passes the newly built action to Automator when we launch it directly from Xcode. This gives us a very efficient development cycle and very much like building a normal application.

So let's add our new email action to the workflow. And we see it's got a BCC field, and the people picker has the button that allows us to add new addresses to that field. So Tim, if you'd BCC me please. And let's send a message to Ted, since he's been so gracious as to let us share his stage. And CC Tony. And let's tell them what a great combination Automator and Xcode 2.0 are.

It's kind of a boring email, so let's make it a little bit more interesting. Let's add an image to it. Strangely enough, an image of another great combination. And let's go ahead and run the workflow. And there's our email with the BCC field correctly filled out, and we've got the Venus transit from a couple weeks back.

So in a few short minutes, we've shown you how easy it is to use your development language of choice to create automator actions. And I hope that you'll all join me tomorrow morning at 10:30 in the Mission for a full session explaining just how easy it is to create a rich set of actions for your applications. Thanks very much, Ted. Thanks, Todd. Thanks, Tim.

So, yes, tomorrow morning, 10:30. I think Automator and connecting up applications is going to be a very incredible thing. We're asking all the application developers to really think about and expose-- it's very simple to expose an AppleScript interface and add these actions. You can actually add Automator actions in, really, the language of your choice.

It's very simple to do it with AppleScript, but it doesn't require AppleScript. There are many different ways to do this. It's really a very simple set of nibs that can be copied, dragged, and pasted into your project. And it's a terrific way to really enable this.

This technology grows out of work done to also enable Unix shell scripting and many kinds of things. So this is a foundational piece of technology, and I think you're going to see a lot of exciting things happening around it. And it's a terrific way for many applications to become very integrated and to leverage the capabilities from one application to another, for our customers' benefit.

So to summarize, Xcode continues to advance for Panther and Tiger. We think that Xcode is helping people to build faster applications in less time, improving the performance of the applications, both with compiled and generated code and by taking advantage of new performance capabilities. We think it's going to be extremely important to Automator-enable your application. And we think that Xcode is really the new foundation for a large number of terrific things.

Now, I know many of you have seen a few of us around here sporting these great shirts. I want to invite Rich Siegel and Gavriel State back up here and hand them some shirts for showing some of the first applications developed in Xcode. Hand them a shirt.

Thank you. Thanks so much, Ted. Thank you. And you have a little other demo to show? Yeah. So while Ted was getting everything ready and showing off that Automator demo, I thought, gee, you know, I really hope that Apple concentrates on security for that stuff, because it can be really important. You know, in the Tron game, if we get the sound there, that'd be great.

Get the sound on? Tron game, there's a little problem with that, so. Please, I've got to get back to my routing.

[Transcript missing]

Yeah, so that's the danger of some of this stuff, so I'm sure you guys will do a great job on that. Little email scripts can run amok, huh? Exactly. Thanks a lot, Gabriel.

So we have a few more shirts here, but in fact, when you come down to the Apple campus on Thursday, we're going to have a lot more. And if you bring your application running on a laptop and show us your application building in Xcode, then we'll have a shirt for you as well. So we're going to throw out a few now. So only a few today. You've got to show us your application building in Xcode. So thank you very much and have a great conference.