WWDC05 • Session 647

Java Virtual Machine Exposed

Enterprise IT • 40:42

Mac OS X Tiger supports Java 1.3.1, Java 1.4.2, and J2SE 5.0. This session gives you an opportunity to look under the hood of the Java Virtual Machine. You'll learn about new functionality available in J2SE 5.0. We'll discuss how the JVM optimizes your application, and provide tips on programming practices that make those optimizations more powerful.

Speakers: Roger Hoover, Mikey McDougall, Victor Hernandez

Unlisted on Apple Developer site

Transcript

This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.

Good morning. Welcome to the Java Virtual Machine Exposed. Don't panic. We didn't bring trench coats. For those of you who did come here out of voyeuristic tendencies, you can leave now. Who this talk is really for is developers who want to look under the hood at the Java VM engine on Tiger, and who want to kick the tires of the Java VM on the new Intel-based Macs.

So, what you're going to learn about Java on Mac OS X in this talk is the here, the now, and a preliminary look at the race ahead. The here is Tiger, what we delivered six weeks ago, namely J2SE 5.0. That's Java 1.5, for those of you on the old naming scheme. With the C1 compiler, that's the same compiler we've shipped in the past with earlier versions of Mac OS X.

And then, let's see, Mikey McDougall's gonna come up and fill you in on that. And then Victor Hernandez is going to come up and talk about what we just did right now, namely Tiger on the Intel Macs: how we have Java 1.4.2 and the C1 compiler on Java 1.5, and also he's gonna introduce the new C2 compiler that we're shipping with the developer kit on the new Intel-based Macs.

Actually, it's not on the developer DVD, but it's gonna be a separate download, just like 1.5 was with Tiger. And then I'm gonna come back, my name's Roger Hoover, after that, and give you some preliminary numbers on where we are on the development platform with the Intel box. So, Mikey? Thanks, Roger.

So, the Java Virtual Machine in Tiger. It's really sort of cool to be able to talk about this, because it's a really good VM. Tiger comes, and this is probably the third time you've heard this, but it comes with 1.4.2 and 1.3.1 installed. Every Mac's got them. And now some of your customers have got J2SE 5.0. This came out six weeks ago, same time that Tiger came out, and it's available as a software download. It turned out really well, and it's going to be a software update soon.

So, one of the things that people ask a lot about is, where are we going with Java? What's our support story? We're moving forward, we really are. J2SE 5.0 is really the focus of our efforts now, and the good news there is that it's a really good release, and all your 1.4 stuff still runs in J2SE 5.0. If you have any backwards-compatibility bugs that you're encountering, let us know. Let us know right now, because this is where we're going. And really, really: Java 1.3.1 is going away.

So this is some stuff that Sun said about their release of J2SE 5.0. Your application, it already works. The platform was subjected to a lot of testing, and Sun was really proud of their release, and rightly so. They did a great job. And I'm here to tell you that it's a really good release. If you're not on J2SE 5.0, you should download it. You should give it a try, because it's really solid.

So what's new? This really is the biggest delta for a Java release that has happened in the history of Java releases. There are language changes that I'm gonna go into briefly, and there are library and runtime changes. How many folks are running J2SE 5.0? I mean, how much of this is review material for you? Okay, good. So I won't burn through this, but I'll go through it relatively quickly.

So, language features. Now we get autoboxing: as a programmer, you can go back and forth between the primitive type int and the object Integer more or less interchangeably. We have support for generics, which really lets you provide more object-oriented, more portable code and libraries. Static import, very handy.

You get the enhanced for loop, which just makes iterators a little bit cleaner looking. You get varargs, which are very nice. And metadata. Metadata is really powerful. I mean, originally when I saw the whole Javadoc thing, I'm like, huh? But it's turned out to be just huge, and I think it's gonna get bigger.
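To make these concrete, here's a small, hypothetical Java 5 snippet touching several of the features just listed (generics, the enhanced for loop, autoboxing, varargs, static import, and an annotation); the class and method names are invented:

```java
import static java.lang.Math.max;

import java.util.ArrayList;
import java.util.List;

public class Java5Features {
    // Varargs: callers can pass any number of ints.
    static int largest(int... values) {
        int result = Integer.MIN_VALUE;
        for (int v : values) {          // enhanced for loop
            result = max(result, v);    // static import of Math.max
        }
        return result;
    }

    @Override                           // metadata (an annotation)
    public String toString() {
        List<Integer> xs = new ArrayList<Integer>();      // generics
        xs.add(42);                     // autoboxing: int -> Integer
        return "largest = " + largest(xs.get(0), 7);      // auto-unboxing
    }
}
```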

So, one thing you should know about all these language enhancements: it's all syntactic. It's all up in javac; it doesn't actually affect the bytecodes that get spit out. And they all really do things that make it so that you can write good, portable code. By and large, none of these changes affects how your application will perform.

With one exception that I'm going to talk about briefly, and that's autoboxing. Autoboxing is a little odd in that it does create objects for you. And so while this is generally good, because you were going to have to create the objects anyway, you can get yourself into a little bit of trouble.

So here's an example for those of you who took calculus in maybe high school or college. If you remember Taylor series and things like that, this is something that's going to calculate a quarter of pi by successive approximation, in a little loop. It runs through and builds the value up by going through this loop again and again and again. And, you know, if you're straining your brain to remember Taylor polynomials, you might write a slightly different version.

Now, the only difference between these two versions, I don't know if you can see it, is that the first example is using a Double with a capital D, and the second example is using a double with a lowercase d. If you push that through into what it means once autoboxing kicks in, you can see that you've got a fair amount of extra cruft there.
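The slides aren't preserved in the transcript, but here is a plausible reconstruction of the two versions being described, using the alternating series for pi/4; the method and variable names are invented for illustration:

```java
static double quarterPiBoxed(int terms) {
    // Version 1: the accumulator is the wrapper type Double.
    // Every += unboxes the current value, adds, and boxes a brand-new object.
    Double sum = 0.0;
    double sign = 1.0;
    for (int i = 0; i < terms; i++) {
        sum += sign / (2 * i + 1);   // autoboxing churn on each iteration
        sign = -sign;
    }
    return sum;
}

static double quarterPi(int terms) {
    // Version 2: identical except for the lowercase d.
    // The accumulator stays in a register; no objects are created.
    double sum = 0.0;
    double sign = 1.0;
    for (int i = 0; i < terms; i++) {
        sum += sign / (2 * i + 1);
        sign = -sign;
    }
    return sum;
}
```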

So, just be aware that this can happen. There's sort of a good news, bad news kind of thing here. The bad news is that it takes about 2.7 times longer to run. The good news is that it only took about 2.7 times longer to run, even with all that extra allocation. So, the good news is, the compilers are really good.

We've got a real high-precision timer available now. We've got a super-fast string builder; it's not thread-safe, but it lets what would have been older, slower string operations go much faster. The string plus operator now goes much faster. And if you were in the session before this one, you probably heard about some of the graphics improvements that are available, so you also get some graphics pipelining stuff.
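Presumably this refers to System.nanoTime() and java.lang.StringBuilder, both new in J2SE 5.0; a minimal sketch of the two together:

```java
// System.nanoTime() and StringBuilder are both new in J2SE 5.0.
long start = System.nanoTime();

// StringBuilder is the non-synchronized (not thread-safe) cousin of
// StringBuffer; javac also uses it to compile the string + operator.
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 1000; i++) {
    sb.append(i).append(' ');
}
String result = sb.toString();

long elapsed = System.nanoTime() - start;
System.out.println(result.length() + " chars in " + elapsed + " ns");
```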

So I want to talk about the virtual machine that is shipping right now in Tiger. This is the C1 compiler, something that's called the client compiler. It's a just-in-time compiler, so you're running along, doing bytecodes, bytecodes, bytecodes, and it says, "Oh, this is a hot method. "I'm gonna take it down and crank it right onto the machine." There's a couple things that you need to know about this. One is that it will go to whatever machine it's on. If it's on a G5, it'll go to a G5. If it's on a G4, it'll go to a G4. So it really uses the best instructions that it has available to it to do any particular job.

The internal representation of classes is basically static; they don't change. And so one of the other things we've managed to do to boost startup time is to sort of freeze-dry classes, and then share those between various instances of JVMs. And what you may not know is that you have a variety of garbage collector options available to you. So let's talk about the compiler itself first. The whole idea behind a compiler is that you want to run fast.

And you also don't want to spend a whole lot of time compiling. So we look at the hot methods, and those are the methods that we recompile. Now, generally, just having a JIT, a Just-In-Time Compiler, will make your stuff run 10 times faster. But there's a sort of balance that needs to be struck between compilation time and time that you actually spend doing work with the processors.

So, we'll talk later about more classic compilers, the stuff you'd see in C and C++. Those you have to crank pretty hard to get absolutely optimal code and optimal register allocation out of. The C1 compiler actually strikes a good balance there: it compiles quickly and still produces quite good code. The C1 compiler is the default compiler.

It's got a couple of things in it that are sort of interesting to know about. The first one, it took me a while to understand, is this on-stack replacement thing. So, you can imagine that you have a method that has a loop in it, and you're going to go around that loop, you know, 50,000 times.

Now, it'd be kind of a drag if you had to be interpreted for 50,000 iterations before you got up out of that loop and it could be compiled. So, the C1 compiler actually compiles it in place. It says, oh, you've been in this function a long time. Maybe I should compile it.

And so it compiles it, and it sort of inserts you into the compiled function on the stack, you know, live. And that's really quite powerful. The other thing it does: the VM knows essentially instantaneously which classes have and haven't been loaded. So, even if you have two classes with virtual methods that have the same name, if only one of them has ever shown up, it's able to say: oh, I can treat that as final. I can inline that. I can push that back in and optimize out all the call cost.
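To make on-stack replacement concrete, here's the kind of hypothetical method where it matters: it's only ever invoked once, so a compile-on-next-invocation policy alone would never help it.

```java
// A method like this is only ever invoked once, but its loop is hot.
// On-stack replacement lets HotSpot compile the loop mid-execution and
// swap the running interpreter frame for a compiled frame, instead of
// waiting for a next invocation that never comes.
static long sumOfSquares() {
    long total = 0;
    for (int i = 0; i < 50000; i++) {   // the 50,000-iteration loop
        total += (long) i * i;
    }
    return total;
}
```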

The other kind of cool thing that the C1 compiler does is, along with all this compilation and optimization, there's deoptimization. So, for instance, say you're in this function I was talking about before, cranking around the loop 50,000 times, and you put a breakpoint on it. The HotSpot VM realizes, oh, he wants to debug. And what it'll do is kind of unfold all of that compiled code and take you back into the bytecodes at the right spot.

It lets the debugger know: oh, this is the bytecode that you're on. And then you just kind of keep going in real time. That really sort of blew my mind the first time I saw it. It's really good. Also in the C1 compiler, they've got some other support for, basically, the fact that if you do object-oriented programming, you subclass things; these are optimizations to make it so that you can just write good object-oriented code and still expect it to go fast. And lastly, the code generation in the C1 compiler is optimized for PowerPC. So, you get AltiVec instructions, and you get 64-bit instructions for longs, floats, and doubles.

Let's go to this whole faster math thing. You may have seen this slide earlier today. On a G5, this thing turns into just a pair of loads off the stack into registers. On a G4, of course, you haven't got 64-bit registers, so dealing with longs takes a few more operations. In the context of this slide, this is a really nice, clean Java function. This guy is almost certainly going to get inlined. So on the G5, you're not even going to pay the branch through the link register, generally speaking.

Class data sharing. We're kind of proud of this, 'cause it's cool. What you do is, as I said before, you sort of freeze-dry that internal representation of a class, store it off on disk, and then this lets you very quickly load that class. There are ways to control this if you want to tightly control and profile your code and understand things; they're listed up here. We originally came up with this at Apple, and we gave it back to Sun, and Sun's like, wow, this is a cool idea. They've pushed it out through all the VMs now, so you'll be seeing this in J2SE 5.0 everywhere.
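The slide with the controls isn't preserved; to the best of my knowledge, the standard J2SE 5.0 switches for class data sharing are these (MyApp is a placeholder):

```
java -Xshare:off  MyApp   # run with class data sharing disabled
java -Xshare:on   MyApp   # require sharing; fail if the archive is unavailable
java -Xshare:dump         # regenerate the shared archive
```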

Garbage collection. So, one of the great things about Java is that you haven't got to do your own memory management. There are three garbage collectors available to you right now, and they meet different needs. The default garbage collector is the one that works with class data sharing. It's got a good mix of time spent paused for garbage collection versus total time spent garbage collecting over the length of your program.

There's another one available, Concurrent Mark and Sweep. This is sort of optimized so that over the life of your program, a long running program, you will spend as little time as possible actually doing the garbage collection. But you might have slightly longer pauses while it does that very efficiently. And then lastly, we've got another one that has sort of briefer pauses, so it's sort of more real time.

But over the life of your program, you're going to see more total time spent doing garbage collection. The take-home point from this whole slide is that you have choices available to you. Garbage collection is really tricky to model in your head exactly how it performs, and we don't even suggest that you try. What we suggest instead is: if you have performance-critical stuff, try each of these garbage collectors, see which one matches your needs best, and then use that one.
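As a sketch of how you'd actually try them: these are the usual HotSpot selection flags in J2SE 5.0. Which flag corresponds to the "more real-time" collector mentioned above isn't specified in the talk, and MyApp is a placeholder.

```
java MyApp                           # default collector (works with class data sharing)
java -XX:+UseConcMarkSweepGC MyApp   # concurrent mark-and-sweep collector
java -Xincgc MyApp                   # incremental mode: shorter, more frequent pauses
```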

And you shouldn't go to any Java sessions this week and not have us tell you about Shark, because Shark is so damn cool. It's a really good performance tool. You get it for free with your development package. I've just seen people get huge wins just doing some really straightforward optimizations. So, get Shark, run Shark, identify your execution bottlenecks. I mean, it's really great. So, knowing that there's a compiler under there actually has implications for your code.

There are a couple of things to know here: we're unable to hoist a method up and inline it if it's synchronized or if it's very large, and if a method has an exception handler in it, that also will not be inlined.

And, you know, it's the 21st century, so here are some general programming tips, knowing that you're working with the HotSpot compiler. Exceptions. Exceptions are objects, and they're not little featherweight objects; they're heavy objects. They've got a whole stack trace in them. So use them for error handling. Don't use them for control flow; don't use them to, like, terminate your loop or something like that. It's just very odd to see that.
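As an illustration of the anti-pattern being described, with a hypothetical process() method:

```java
static void process(int value) { /* hypothetical work */ }

static void bad(int[] data) {
    // Anti-pattern: an exception terminates the loop. Constructing the
    // exception fills in an entire stack trace, which is heavy.
    try {
        for (int i = 0; ; i++) {
            process(data[i]);
        }
    } catch (ArrayIndexOutOfBoundsException done) {
        // fell off the end of the array
    }
}

static void good(int[] data) {
    // Exceptions stay for errors; control flow is a plain bounds check.
    for (int i = 0; i < data.length; i++) {
        process(data[i]);
    }
}
```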

The other thing, and this is sort of more important, is: don't use object pools. The whole power of Java, in many ways, is that you don't have to worry about object management. Object creation, object deletion, it's very different from Cocoa that way. And we keep making the handling of objects faster and faster and faster. So all of these things are going on.

Garbage collection these days is very, very fast. new gets inlined along with everything else, so it's really fast. We get lock-free allocation on each thread. So stay away from these object pools. So: Java, Tiger, and you. J2SE 5.0: it's ready, it's there.

And generally speaking, when you're tuning for Java: don't optimize first. Write your code as clean as you can. Let us do the heavy lifting with the compiler, and only afterwards take it to something like Shark and do your optimization. And with that, I'd like to bring up Victor to talk about what's new on Intel.

So, this is what the picture looks like for JVM support in Tiger: we support 1.3.1, 1.4.2, and 5.0, basically all of the support that Mikey just described. How does this change on the Intel-based Macintosh computers? Well, there are two main changes. The first thing you'll see is that 1.3.1 has gone away. We've told you for a while that it was going away; now it's gone. But we are actually providing something new in its place, which is something you've been asking for for a while, and that's the server compiler.

It's available to you under Java 1.5 on the Intel-based Macintosh computers. When you sit down at one of these Intel-based Macs, what kind of Java support can you expect? You can expect basically equivalent support for 1.4 and 5.0 to what you find right now in Tiger. It is really robust, and we're really proud of the implementation that we have going. I've been working with developers all week on trying out their applications on the Intel-based Mac, and it's been really, really successful.

Not only are they running on Intel for the first time, but they're also sometimes going from 1.4 to 1.5 for the first time, and it's going really, really smoothly. As Mikey mentioned before, 5.0 is really the way to go, and it also applies on the Intel-based machines. Your Java application should continue to work.

There are some issues with having to upgrade your native libraries to universal binaries, but one of the key things is that the developer tools like Xcode will continue to work there. Also, a lot of the open-source things like Eclipse are basically one recompile away from working, and will be available for download for these systems very, very soon.

Okay, so I'm here mainly to talk about the one thing that we're adding, which is the C2 compiler, something we've heard a lot of developers say they're interested in using when deploying on Mac OS X. The main difference between C1 and C2 is the fact that C2 uses more advanced compilation techniques to generate faster code for your Java methods. It plugs into HotSpot just like C1, and uses a lot of the same dynamic features as C1, including the ones mentioned here that Mikey described.

Things like on-stack replacement, and the ability to dynamically deoptimize a compiled method back into the interpreter, either in support of aggressive inlining or for debugging purposes. All you have to do to get access to the C2 compiler is to invoke Java with the -server flag. I'm actually kind of curious: how many people here, on Mac OS X or on whatever Java platform they deploy on, use the -server flag? So you don't need to change your application; it will just work like that.

We've always supported the -server flag in previous versions of Mac OS X. There, it doesn't get you a different just-in-time compiler; it basically gets you a few different parameters. But now, on the Intel-based Macintosh computers, it's actually going to get you a different JIT compiler, and a completely different set of performance characteristics. In the developer kit, we don't currently support passing the -server flag in application bundles or in applets, but that will be supported very soon.
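So the switch between the two compilers, as a usage sketch (HelloServer is a placeholder class):

```
java -client HelloServer   # C1, the default client compiler
java -server HelloServer   # C2, the server compiler
```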

All right, so let's talk about the C2 compiler. It does generate better code than the C1 compiler. One of the things that you've got to keep in mind with the C2 compiler is that all of the things Mikey described earlier about how C1 generates optimized code, think of that as a subset of what C2 does. It simply does a better job. But to do a better job at compiling your hot Java methods, it unfortunately takes longer.

The general rule of thumb is that the C2 compiler takes 10 times as long to compile your Java method as C1. This can actually cause a visible pause in your application, similar to a visible pause you might see from garbage collection. So when is it actually useful for you to use a C2 compiler? It's mainly when you have a long-running application that can withstand those possibly longer compilation pauses.

And also when you have some hot methods that are very compute-intensive, so that the amount of time spent in them is important enough to justify getting the best code you possibly can. The other thing to keep in mind is that compilation scales very well on multi-processor systems, meaning that the pause is not so visible when you're running on multiple processors.

So let's actually go into detail on this compilation pause time, and how it can affect your application's performance. Let's say we have an application; the horizontal axis is the amount of time it takes to run the app. If you run it with the interpreter, it takes the green amount of time. As you can see, C1 and C2 both take less time.

Why does C1 take more time than C2 in this example? Well, as you can see, even though C1 takes less time to compile the three methods, the generated code is substantially slower than C2's, and it takes a lot longer to actually execute those methods. So in this case, C2 is faster than C1. Remember, that is not always true.

Here's a similar application with three methods. In this case, C2 is taking so long to compile these methods that it takes longer than C1 would take to even just execute them, and the overall time turns out to be shorter for C1. You can see this in a lot of GUI applications when running with the C2 compiler. You really just have to measure your application to find out which of these two scenarios it falls under.

All right, so how does C2 generate better code than C1? Well, one of the things that it's mainly doing is it's applying a lot more classic compiler optimizations to your Java method than C1 is. That's where it's spending all of the extra time and possibly using more memory to do so.

Here's a whole list of optimizations that C2 does, some of which are done by C1, but mainly it's things that only C2 does. Instead of describing them here with just a few bullet points, it's much better to illustrate them, so let's actually go through each of these in succession.

First of all, we've got strength reduction. The thing to keep in mind is that an optimizing compiler improves your code in two ways. Either it reduces the number of operations being done, or chooses an operation that's less costly; or it moves your code around in such a way that some other optimization becomes able to reduce the number of operations required. In this case, dividing by two is more expensive than shifting by one, and C2 is able to identify such a situation and replace it with the cheaper operation.
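The slide isn't preserved, but the classic form of this example looks something like the following sketch:

```java
static int halve(int x) {
    // Source form: an integer divide, a relatively expensive operation.
    return x / 2;
}

static int halveReduced(int x) {
    // What strength reduction effectively produces: a shift, plus a
    // one-instruction sign fixup, because x >> 1 rounds toward negative
    // infinity while x / 2 rounds toward zero.
    return (x + (x >>> 31)) >> 1;
}
```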

Here's another one, one where the compiler just moves your code around to let another optimization do its job. Basically, it can move values and variables around and simplify the expression through something like constant folding. Constant folding is powerful here because you've gone from two add operations to just one: the compiler knows the values of the constants once it has moved them next to each other. And even though your code might not have looked like this in the first place, it's able to do this for you.
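Again as a sketch, with invented names:

```java
static int before(int x) {
    return (x + 1) + 2;   // two add operations, as written
}

static int after(int x) {
    return x + 3;         // one add: constants reassociated, then folded
}
```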

Another optimization is constant propagation. Here, if you know the value of a particular variable, you can cascade that value down into later operations and simplify expressions further on. There's also common subexpression elimination. There, it's not just a value but a whole expression that you can propagate down and reuse. The key thing is that you do the plus operation only once and reuse the value.
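A sketch of both, with invented names:

```java
static int demo(int p, int q) {
    final int k = 10;     // constant propagation: k's value is known below
    int c = k * 2;        // folds to 20
    int d = c + k;        // folds to 30

    // Common subexpression elimination: p + q is computed once, reused twice.
    int e = (p + q) * c;
    int f = (p + q) * d;
    return e + f;
}
```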

Loop unrolling. This is a pretty interesting one. If you know that a loop, for example, is executed exactly four times, you may as well just execute the body four times in a row. It looks like this is doing the same amount of work, but there are two things being gained here.

One is that if you keep the for loop, there's actually a minor overhead in checking whether you've gotten to the end of the loop, and the code on the right-hand side doesn't have that problem. The other thing is that once you move all of this code and flatten it out, it enables a lot of other optimizations to happen, especially where you're able to inline foo, which we'll get to in a second.
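A sketch of the before and after, with foo standing in for the hot method on the slide:

```java
static void rolled() {
    for (int i = 0; i < 4; i++) {   // trip count known to be exactly 4
        foo(i);
    }
}

static void unrolled() {
    // No increment or end-of-loop test remains, and each call site can
    // now be inlined and optimized on its own.
    foo(0);
    foo(1);
    foo(2);
    foo(3);
}

static void foo(int i) { /* hypothetical hot method */ }
```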

There's also loop-invariant code motion. This one is very, very powerful, and C1 is not able to do it. If an expression is computed the same way on every single iteration of a loop, you may as well compute it once before the loop, and then reuse the value each time around. In this case it's x divided by y; divide is an expensive operation, although we do the best job we can with it. It's the same every time you go around the loop, so you may as well do it once before you go in.
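A sketch, with invented names:

```java
static void scale(int[] in, int[] out, int x, int y) {
    // Before: x / y is recomputed every iteration, though it never changes.
    for (int i = 0; i < in.length; i++) {
        out[i] = in[i] * (x / y);
    }

    // After loop-invariant code motion: the expensive divide happens once.
    int ratio = x / y;
    for (int i = 0; i < in.length; i++) {
        out[i] = in[i] * ratio;
    }
}
```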

There's also dead code elimination. Clearly, this expression can be simplified to just the body of the true branch of the if statement. You might say, well, my code never looks like this; why would you bother with this optimization? Well, because of other optimizations, like knowing constants and expression values, you can get into a scenario where your code actually does end up looking like this as far as the compiler is concerned, and you didn't have to refactor your code for this one scenario. Really, really powerful optimizations.
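A sketch, with doWork and neverRuns as hypothetical methods:

```java
static void example() {
    // What the optimizer may see once constant propagation has run:
    if (true) {
        doWork();
    } else {
        neverRuns();   // provably dead: removed entirely
    }
    // ...which dead code elimination reduces to a plain call to doWork().
}

static void doWork()    { /* hypothetical */ }
static void neverRuns() { /* hypothetical */ }
```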

Class hierarchy analysis. Mikey mentioned this one earlier; this is actually an optimization that C1 is able to do as well. The key thing here is that even though a method is declared virtual, because you wanted your code to be easy to reuse later on by overriding that method, if you actually haven't overridden it anywhere, we can treat it as if it had been declared final, and we know we can dispatch directly to it. That gives us the opportunity to remove the overhead of the virtual method lookup, and it also lets us do the following: if we know that method is the only destination for the virtual call, we can inline it.
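A hypothetical sketch of that situation:

```java
class Shape {
    // Virtual by default, so a subclass could override it...
    double area() { return 0.0; }
}

class Geometry {
    // ...but if no subclass of Shape has ever been loaded, class
    // hierarchy analysis lets the VM treat area() as if it were final:
    // the virtual dispatch becomes a direct call, which can then be
    // inlined. If a subclass loads later, this code is deoptimized.
    static double total(Shape[] shapes) {
        double sum = 0.0;
        for (Shape s : shapes) {
            sum += s.area();
        }
        return sum;
    }
}
```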

[Transcript missing]

All right, so you might say: with all of these specific optimizations, my code never looks like that. Well, the point I'm trying to make is that your code does end up looking like that once the compiler is able to move everything around. Here's an example. Basically, all we're doing is calling foo with two values, and you can reuse the foo method in many different places with many different values.

But in this specific case, what's really powerful is that once you know the values of n and m, you can optimize this thing like crazy. Once you know n and m, you know what x is. You can collapse the whole switch statement, because you know what x is. At that point you also know what y is, and you can do dead code elimination on the if statement. And at that point, all you have left is basically that return value, which you can obviously simplify down to a constant.
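The slide isn't preserved, so this is a hypothetical reconstruction in the same spirit; the body of foo and the constants are invented, but the chain of optimizations is the one described:

```java
static int foo(int n, int m) {
    int x = n + m;
    int y;
    switch (x) {
        case 7:  y = 6; break;
        default: y = 0; break;
    }
    if (y > 0) {
        return y * 7;
    }
    return -1;
}

// At a call site like foo(3, 4), inlining plus constant propagation
// tells the compiler that x == 7, the switch collapses to y = 6, dead
// code elimination removes the else path, and constant folding leaves
// the literal 42: System.out.println(foo(3, 4)) compiles to "print 42".
```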

So, this ends up being something as simple as 42. It's printing out 42. This is a real-world example that C2 genuinely does simplify down to printing 42; C1 is not able to do this. This is really, really powerful, and it extends to just about any kind of algorithmic, compute-intensive code you might put in your hot methods. So, as you can see, when you're doing a lot of things like this, C2 will be able to generate better code than C1. What happened there? Sorry, there's a delay here. All right, well, now I'm having trouble.

Wow. Yeah, it's compiling. This is, you know, one of those visible pauses. Maybe I'm compiling. Well, escape, and then press play. Let's try that. What else do you need to know about C2? There are actually a few more things you need to know about C2 beyond simply how it's able to optimize your code.

One of the cool things is that it has a really, really great register allocator. This is the sort of register allocator that's common in a lot of static compilers: a full graph-coloring allocator, able to use as many registers as it possibly can and limit the number of spills that happen.

It also uses a static single assignment intermediate representation. That's pretty heavy-duty compiler jargon; what you need to know about that intermediate representation is that it makes a lot of the optimizations we described earlier very easy to perform. You should also know that GCC 4 uses this style of intermediate representation, and that's one of the ways it's able to do better optimizations.

Also, there's null check elimination that C2 is able to do. You might know that null checks can be implemented very cheaply using hardware support, so it might seem there's no reason to get rid of them. Well, in fact, since a null check can have, as a side effect, the throwing of a Java exception, it can actually limit other optimizations, because a lot of control flow gets simplified once you remove that possible side effect. And so this is really powerful.

Just like C1, it's able to dynamically determine what the best instructions are on the given processor you're running on, for example, choosing to use SSE2 instructions. And it also has knowledge of the Java memory model, so if you're using a volatile variable, it knows how to inline code that handles exactly that.
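A minimal sketch of the volatile case, with invented names:

```java
class Worker implements Runnable {
    // volatile: the Java memory model requires writes from one thread
    // to become visible to reads in another. A JIT that knows the model
    // can inline the access with the right memory barriers instead of
    // calling out to a runtime helper.
    private volatile boolean stopRequested = false;

    public void requestStop() {
        stopRequested = true;
    }

    public void run() {
        while (!stopRequested) {
            // ... do a unit of work ...
        }
    }
}
```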

So, now that you know a lot about C2, and you know that C2 does basically a superset of the optimizations done by C1, the thing to keep in mind is that all the things Mikey described earlier as good practices for your code under C1 also apply to C2. C2 simply does a better job at optimizing your code. So just write your code as you would, good and object-oriented.

Let us do the optimizations, and keep your code clean. But I want to reiterate, once again, the tips for the 21st century that Mikey described: exceptions should be exceptional; there really is no need to optimize ahead of time by making your methods final; and object pools are only really needed in very specific scenarios. And with that, I want to invite Roger Hoover on stage to tell you about the kind of performance changes we've seen in the applications we've been running, which will also apply to your application.

Thank you, Victor. Okay, how does performance change with the new JVM? I'm gonna talk about how the C1 and C2 compilers compare on the new Intel-based Macs. And I'm gonna cover three measurements: startup time, memory footprint, and then I'll run through some benchmark numbers.

But first let me say, these are really preliminary results. We're running on hardware that's just a development system; the real machines are gonna be different. We just got this working, like, last week, so we really haven't spent any time performance-tuning this implementation. And specifically, there are known things that we know we have to fix performance-wise. So even though it seems to be working really stably, and we're really happy about that, things should improve and the numbers will change. And now I'm having this, there we go.

So C1 versus C2, just startup time. Well, we kind of told you what the goals were in the design of these two different compilers, and that really shows up on this slide here. You can see that for a simple little program that doesn't take much time at all, C1 is faster than C2 on HelloAWT.

But on a longer and more involved GUI-type example, C2 takes a lot longer: 62% slower at startup. So if your application looks like Java 2D, you're probably gonna want to just stay with C1. Memory footprint: there's a little bit of difference here, and you can see that on both of these examples, C2 takes slightly more memory.

Now, on to some benchmarks. SciMark is a benchmark that does a bunch of math computation, and you'd perhaps expect a more aggressively crunching compiler to do better here, and this shows you. These are the scores that SciMark reports, so the taller the bar, the better. And you can see that in all of these cases, C2 is outperforming C1; if you look at the composite score, it's 2.3 times faster. So, a big win for C2.

And on a well-known compiler benchmark, we're just measuring time here, so the shorter bar is faster. The C2 numbers are there; the C1 numbers are here. And you'll notice that C2 doesn't always win. On some of this benchmark suite, C2 wins; on some of it, C1 wins.

What does this say about your application? Well, just like we tried these benchmarks, you're going to have to try your applications, see just how well they perform under each, and pick the best solution for your deployment. So, another benchmark, a business-logic benchmark: C2 is a little bit faster than C1.

And VolanoMark. This is a chat room with a server and a client, lots of message sends. A fairly big win for C1. So again, measure your own applications; don't just assume that one compiler is going to win over the other. So, pretty exciting, we think. We get faster performance with C2 on many applications. I know people in the server space have been clamoring for this, and we expect it to work well.

We want your feedback about all of this stuff and other things. So later this afternoon, at 3:30, please come to the feedback forum. Tell us what you want and what you think about Java. If you want to give us praise, please make sure you're there. For contacts, we have Alan Samuel, who will be up here on stage in a little bit for some Q&A, and Francois Jouel, who is the manager of the group; he'll be here as well.