Java • 57:38
This session introduces the Java2D APIs as they relate to Quartz in Mac OS X. The unique features of Java2D in Mac OS X as well as performance tuning are discussed.
Speakers: Gerard Ziemski, Ken Russell
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it may contain transcription errors.
Welcome to the Java Graphics session. My name is Gerard Ziemski. I'm a Java classes engineer, and my responsibility is graphics. I would also like to welcome Vladimir Lotak. Vlad, tell us who you are. Hi there. I work on Java classes, and my specialty would be anything to do with events. Vlad will help me with some demos. So let's get started.
Session overview. In this session, we'll cover three topics. First of all, I'll give you a brief introduction to Java graphics on Mac OS X for the benefit of those of you who do not know that much about Java graphics. And specifically, I will focus on some of the features Mac OS X provides you.
Then I will talk about advanced topics. In that part, I will talk to you about things that you as a developer should be aware of, things that may help you understand how to make the most of your program, how to make it run as fast as possible, and if something goes wrong, possibly tell you why things go wrong. And specifically, I will focus on graphics performance and imaging.
And then I'll talk about the future. In that part, I'll tell you a bit about where we come from and what we are thinking of doing next with Java graphics, specifically on Mac OS X. So let's talk about Java graphics on Mac OS X. Again, here I will try to inject some of the implementation-specific details of the Mac OS X implementation of Java graphics.
First of all, you have to realize that our implementation of Java graphics on Mac OS X is based on the Quartz engine. When we started working on our Java 2D implementation some time ago, we had a choice. We could have gone with the Sun-provided source code that implemented the renderer in software. We could have done that. In this way, we would stay very closely matched with what Java graphics looks like and how it behaves on other platforms. We also had a choice to base our implementation of Java graphics on Quartz, and we decided to go with Quartz.
I'll talk in this part about three things. First, I'll briefly introduce the original graphics object and what you can do with it. Then I will talk in more detail about the Graphics2D features. And lastly, I'll talk to you about specific features that Mac OS X brings with its Java graphics implementation.
The original graphics object. The original graphics object was primarily designed to be fast; the key underlying concept in graphics was to make it fast, because all the primitives were supposed to go over the network. So the primitives were limited in scope. They were very basic, and that's what you have. With the original graphics object, you could draw rectangles, ovals, polygons, simple graphics primitives, but they were fast.
You had limited access to color. What you basically had was a solid color. You could choose whatever you wanted, but it had to be a solid color. Moreover, transparency, the alpha channel, was tricky to access. In Graphics2D, that feature has been added to make it much easier for you to play with transparency.
The original graphics object had very limited support for images. It had only one type of image, which in a way makes it a little bit simpler, because with Graphics2D you have -- I'll talk about this a little bit more -- several different buffered image types, and depending on the platform and depending on how you use them, things may not go as smoothly as you would want them to. So basic image support was there in the graphics object; you could do things like animation using the producer-consumer model. So you could do some interesting things with graphics, but Graphics2D goes far beyond that. And text: you also had very basic support for text. What you could do primarily was simply draw a string. That was about all you could do with graphics. So as you can see, with today's technology, that is clearly not sufficient. So then we got Graphics2D.
Graphics2D is, first of all, highly extensible. What that means is if there is something missing in Graphics2D that you wanted to have, there are interfaces provided for you that you can simply implement. You can create your own objects, and you can simply take them, plug them in, and everything else will just work.
It is also low level. What that means is it gives you access to things that really cover the basics of what graphics is. For example, with buffered images, you have direct access to the pixels. You can find information about the layout of the pixels. All of that is encapsulated in the raster object and the color model. You can look it up, find out information about it, and use it to your advantage. So you have access to the really low-level specifics of the graphics model. And lastly, it is fully featured. What that means is you basically have access to all the possible features that you can think of and that you would want to use with today's technology.
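As a small sketch of that low-level access (the image size and pixel values here are arbitrary), you can ask a BufferedImage for its raster and color model and read or write samples directly:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.WritableRaster;

public class PixelInspect {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_ARGB);
        WritableRaster raster = img.getRaster();   // direct access to the pixel storage
        ColorModel cm = img.getColorModel();       // describes how samples map to colors

        // Write a pixel directly through the raster; bands are R, G, B, A here
        raster.setPixel(0, 0, new int[] {255, 128, 0, 255}); // opaque orange

        System.out.println("bands: " + raster.getNumBands());
        System.out.println("bits per pixel: " + cm.getPixelSize());
        System.out.println("has alpha: " + cm.hasAlpha());
    }
}
```

Note that, as the speaker discusses later, touching the raster directly can defeat platform-side caching, so it is a tool to use deliberately.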
If you wanted to implement a modern web browser, you could do this. Java Advanced Imaging itself, which we don't have on Mac OS X, is implemented on top of Graphics2D. They didn't have to implement their own specific native hooks in order to do some fancy stuff. All of the underlying features are already there in Graphics2D.
Graphics2D -- in order to understand Graphics2D and be really successful using it, you have to realize and really understand three basic principles: what is a graphics context? What are the graphics context attributes? And what are the graphics objects? So what are those? The graphics context -- you can think of it as the surface. Whatever you're drawing -- a primitive, an image, a single pixel -- you have to draw it somewhere. That is represented to you as a graphics context.
Then the graphics context attributes. These let you control the features of the context itself: things like what kind of color you want to use, what kind of color model, whether you want to use transparency or not, whether you want to use a transformation matrix or not. Those are the graphics context attributes. You should be familiar with those: what is there, what is available, and how to use them.
Graphics objects, those refer to the actual objects that you can render. So those are basically shapes, images, and text. All of them are represented as objects, and you can do some interesting things with them because they're objects. Now, if you want to be really successful with Graphics2D, you have to go a little bit beyond that, and you should have a clear understanding of buffered images and transformations. And right here, I will talk to you a little bit more about buffered images and transformations.
These are basic concepts in graphics, and if you want to play with them and combine them, you need to realize what effects they have. It is also important to you as a developer because we do have some bugs. It would be helpful if you knew what the effect of a combination of transformations is supposed to be. If things go wrong, you should be able to realize that it is probably not because you're doing something wrong; it's probably because there is a bug in our implementation. Then just tell us about it, and we'll fix it.
So right here, I'll just give you three examples of the Graphics2D features. Because our implementation is based on Quartz, you can do all of that in Quartz. On the left, I show you the name of the concept as it is referred to in Java. On the right, I show you bits of information about what we at Apple had to do in order to implement this using Quartz. So first, shape objects. Shape objects are primarily defined as paths. If you look at Quartz, everything in there, even a line, is a path -- a line is not just a line, it is a path.
In Graphics2D, you have access to that. So what is a path? It is the concept of a virtual pen. You pick up a pen, you put it down, and you start drawing -- you start defining a path. The path can consist of line segments, and it can consist of cubic and quadratic curves.
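A minimal sketch of building such a path in Java 2D (the coordinates here are arbitrary), combining a line segment with a quadratic and a cubic curve:

```java
import java.awt.geom.GeneralPath;

public class PathDemo {
    public static void main(String[] args) {
        // Put the pen down, then draw a line, a quadratic curve, and a cubic curve
        GeneralPath path = new GeneralPath();
        path.moveTo(10, 10);
        path.lineTo(60, 10);
        path.quadTo(80, 40, 60, 70);           // one control point
        path.curveTo(40, 90, 20, 90, 10, 70);  // two control points
        path.closePath();

        System.out.println(path.getBounds());
    }
}
```

Any Shape built this way can then be filled, stroked, or, as shown next, used as a clip.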
So this is the basic concept behind shapes, but now we have shape objects in Graphics2D, and that means you can do some interesting things. For example, you can take a shape and use it not only to draw, but also, for example, to define a clip. So we have an example here where we had a glyph vector, and we simply asked the glyph vector for its shape. And once we had that shape object, we used it not only to draw on the surface, but to define a clip. This is an example.
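A sketch of that glyph-vector-as-clip trick (the font, sizes, and gradient are arbitrary choices for illustration): ask the glyph vector for its outline as a Shape, install it as the clip, and then paint through it.

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.GradientPaint;
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.font.GlyphVector;
import java.awt.image.BufferedImage;

public class TextClip {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(300, 100, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();

        Font font = new Font("Serif", Font.BOLD, 72);
        GlyphVector gv = font.createGlyphVector(g.getFontRenderContext(), "Clip");

        // Ask the glyph vector for its outline as a Shape, then use it as the clip
        Shape outline = gv.getOutline(20, 80);
        g.setClip(outline);

        // Anything drawn now is confined to the letter shapes
        g.setPaint(new GradientPaint(0, 0, Color.BLUE, 300, 0, Color.RED));
        g.fillRect(0, 0, 300, 100);
        g.dispose();
    }
}
```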
So this is an example of objects in Java. Now, we also have access to compositing. This is an interesting feature because nowadays -- and I'm sure you're familiar with the Aqua look and feel -- you can see how much transparency and anti-aliasing is involved in drawing a modern graphical user interface. You have access to all of that through the compositing model.
So for example, what you can do is set different transparency values and then obtain interesting effects. If you wanted to implement a nice drag-and-drop feedback action, you could simply render the object that you're about to drag as a translucent image, and use that to give feedback to the user. You can do that very easily with transparencies.
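A sketch of that drag-feedback idea using AlphaComposite (the icon and opacity here are placeholders): render the dragged object through a 50 percent SRC_OVER composite so it appears as a translucent ghost.

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class GhostImage {
    public static void main(String[] args) {
        // A stand-in for the object being dragged
        BufferedImage icon = new BufferedImage(32, 32, BufferedImage.TYPE_INT_ARGB);
        Graphics2D gi = icon.createGraphics();
        gi.setColor(Color.RED);
        gi.fillRect(0, 0, 32, 32);
        gi.dispose();

        BufferedImage canvas = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = canvas.createGraphics();
        // Draw the dragged object at 50% opacity as drag feedback
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g.drawImage(icon, 10, 10, null);
        g.dispose();
    }
}
```

In a real application you would do this in your component's paint code while a drag is in progress, rather than into an off-screen image.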
I skipped one. Strokes. This is another interesting feature in Java 2D. You can render a line or a path not only in a solid color with a line width of one single pixel. You can use a stroke object, which is a context attribute, to define a different way of drawing your primitives. So here we have an example of lines: it shows that you can set the line width to something other than one pixel. You can also set the ends of a line to something other than a simple rectangular edge. And we can also do dashes, which is a pattern. You can use a stroke to define an interesting pattern, and then when you draw your object, the shape, it will use that pattern.
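A minimal sketch of that stroke attribute (the width and dash lengths are arbitrary): a BasicStroke installed on the context changes how every subsequent primitive is outlined.

```java
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class Dashes {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(200, 60, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();

        // A 5-pixel-wide dashed stroke: 10 units on, 10 units off
        float[] dash = {10f, 10f};
        g.setStroke(new BasicStroke(5f, BasicStroke.CAP_BUTT,
                BasicStroke.JOIN_ROUND, 10f, dash, 0f));
        g.setColor(Color.WHITE);
        g.drawLine(10, 30, 190, 30);
        g.dispose();
    }
}
```

The same stroke applies to any Shape you draw, not just lines, so a dashed rectangle or curve needs no extra work.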
So beyond Java 2D: because our implementation is based on Quartz, what is important for you to realize is that on Mac OS X, windows are all double buffered. If you have a Swing application, there is a way to turn double buffering on or off in Swing. On Mac OS X, that operation is a no-op. You can try to set it to be double buffered, but because the windows on Mac OS X are already double buffered, we don't do anything. However, you may want to try to implement your own double buffering mechanism. In some instances, that might be necessary.
However, most of the time, you probably should not do that. What you will end up doing is simply triple buffering, and you will pay a penalty for that. So just try running your application on Mac OS X and see whether you really require your own double buffering mechanism. Probably you will not.
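If you decide the platform's window buffering is enough, one way to opt out of Swing's own buffering is through the RepaintManager. This is a sketch, not a recommendation for every application; note that the setting affects all components managed by that RepaintManager, not just the one passed in.

```java
import javax.swing.JPanel;
import javax.swing.RepaintManager;

public class NoDoubleBuffer {
    public static void main(String[] args) {
        JPanel panel = new JPanel();

        // Swing would normally render to an off-screen buffer first; since
        // the platform window is already buffered on Mac OS X, turning this
        // off avoids paying for a third buffer.
        RepaintManager.currentManager(panel).setDoubleBufferingEnabled(false);

        System.out.println("double buffering: "
                + RepaintManager.currentManager(panel).isDoubleBufferingEnabled());
    }
}
```

Measure before and after; as the speaker says, most applications will not need this.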
PDF underpinnings. Quartz itself uses PDF to describe a scene in a resolution-independent way. What that means is if your destination context is the screen, the pixels are actually baked in at a certain resolution, the resolution of the screen. And that's how it is presented on the final destination, the final context, which is your screen. However, you can render your scene to a printer, which is a different device accessible to you from Java 2D. If you do that, your scene is rendered in a resolution-independent way. What you could then do is specify, "I don't really want to print it. I want to save it, for example, as a PDF file." And what you will get, right from your Java application, is your scene rendered to a PDF file very, very easily.
So somebody asked me a question yesterday. They had a problem, and what they wanted to do was exactly this, except they probably didn't realize that this capability is already in there. What they tried to do was render their scene into an image, and because a printer has a much higher resolution than a screen, they tried to render their scene into a much bigger image, so that once they had rendered the scene into the image, they could simply send that image right to the printer. Now, the problem was they weren't sure what the resolution of the printer is. So that's one of the problems.
The second problem is that once you render to the image, what you're sending to the printer is this huge job. An image is simply a bunch of pixels. It's a bitmap. It's a lot of information to send to a printer. It would be much easier to simply create a PDF file and then send that to the printer.
Can we take this off, please? Go to the slides with this one. And font support. Now, again, when we were starting to work on Java 2D graphics on Mac OS X, we also had a choice to go with Sun's own renderer for the fonts. We decided not to. We decided to go with Quartz. Because of that, we get beautiful-looking, beautifully anti-aliased fonts.
That's what we have here on Mac OS X. So let's move on. So this is the second part of the session, advanced topics. I call it advanced, but it doesn't mean it's difficult. It simply means that I'll cover some topics here that you may not necessarily think of. You may not be aware of them, but you probably should be.
So I'll talk to you about rendering hints, the graphics context in more detail, and graphics hardware acceleration, and then I'll give you some hints and tips. Rendering hints, first of all, are optional to implement. On Mac OS X, we have support for only a few hints. We have support for text rendering hints and graphics rendering hints. Now, the default settings of these hints on other platforms are different from what they are on Mac OS X. That is because on Mac OS X, we have implemented our own Aqua look and feel, and we wanted that to match as closely as possible the native Aqua look and feel. What that means is we had to turn anti-aliasing on for the text. Otherwise, you would see the buttons with ugly non-anti-aliased text. It just didn't look good. So we had to do this. However, if you know that your application does not need to use the hints and the anti-aliasing, you can override that.
And you can use the runtime options for the text and for the graphics, and you can set them to false. Also, what you can do in your applications, once you have access to the Graphics2D object, is use the setRenderingHint method and set your hints to whatever you want.
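A sketch of the setRenderingHint call on a Graphics2D (the particular hint values chosen here are just examples):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class Hints {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();

        // Override the platform defaults for this context only:
        // text anti-aliasing off, shape anti-aliasing on
        g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING,
                           RenderingHints.VALUE_TEXT_ANTIALIAS_OFF);
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);

        System.out.println(g.getRenderingHint(RenderingHints.KEY_ANTIALIASING));
        g.dispose();
    }
}
```

Remember these are hints: a platform is free to ignore ones it does not support.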
And lastly, there are other rendering hints. For example, there is a rendering hint that lets you specify what kind of interpolation to use for images that you draw on screen when those images are scaled either up or down. The support for this hint exists in Quartz, and implementing it would not be difficult. However, that's one of the hints, for example, that we're missing. So yes, on Mac OS X, we don't implement all of the rendering hints yet, but we probably will.
The graphics context. Let's talk about this in more detail now; this is important for you to understand. First of all, conceptually, you have to understand that there are two different types of context depending on what kind of service they provide. You have a source context, an image, for example, whose purpose is simply to serve as a source of pixels. You simply grab pixels from the source context, and then you do whatever you want with them. And usually what that ends up being is you simply put them down on your destination context. So we have two types of context, source and destination. And then based on their image representation, they can also be of different types. Now, this comes up because in Java 2D you have buffered images. And buffered images were added because there was a need to represent platform-specific types of images. So you have types of images such as RGB or BGR, and here is where problems may occur on Mac OS X.
So let's talk about the flow of the pixels. If you think about your Java application, that would be this box on the right. What you usually do is you have a source of pixels, whatever it is -- the source context, or the graphics primitive object that you want to draw. They then end up being drawn on the graphics context, the destination, in your Java application. However, this is not the end of the life of the pixels. For them to be displayed on the screen, they have to be put natively, almost physically, on the screen. How does that happen? Well, natively, there is another context, the final destination context, which represents a window, which represents the screen.
In Quartz, that is the native window surface. Now, the problem here is that in Java, you have access to many different types of contexts based on their image representation and also on the color space model that you're using. However, when we had to implement this on Mac OS X, we were slightly limited. So suppose this is how many different context types you have in Java.
Then Quartz can only represent that many -- a subset of the original Java 2D context types -- as a source context. Then again you take a subset of that, and this is what Quartz can do as a destination context. And then yet another step: you take a subset of that, and only this is the subset of contexts that Quartz can use in order to draw on the final destination. So what is of interest to us is that natively there are only two context types that Quartz can handle, and those are RGB in a 32-bit integer representation and RGB in a 16-bit representation. So let's go to the demo. Can we turn this on? Right.
And let me show you what that means in practice. So, Vlad, what is the buffered image type we have set right now? We are running 16-bit RGB 555. So this is what we have set right now. The key point here is also that the monitor itself is set to thousands-of-colors mode. What this means, if you refer to this picture, is that all the contexts match. Why? Because the monitor is set to thousands-of-colors mode, which is 16-bit, and we use a buffered image which is 555, which is 16-bit. So there's no conversion that needs to happen anywhere on this path. The pixels just flow. So this is the best we can do. And what is the frame rate? Anywhere between 120 and 240 frames a second. All right, so let's remember that number. Now, let's switch to another type.
All right, you can see it's a little bit slower. What is the frame rate? And that's about half of what we were getting before, 70 roughly. And the type of the context right there, the image? We are looking at 32-bit alpha RGB pre-multiplied. So natively, this is a context that Quartz can actually handle. However, because the monitor is set to thousands-of-colors mode right now, it cannot handle that directly. So there needs to be a conversion that takes place, and that conversion takes place right here. It doesn't happen in Java. It happens in Quartz itself, but the conversion takes place, and we are losing some speed, as you can see. We only get half of the speed. Can we switch to another type? Sure.
Now we are looking at 32-bit ABGR pre-multiplied, and we are at about 25 percent of the original speed, 45 or so frames a second. So we are slower yet again. Now why? You have to realize the key point here: this is a BGR type. What that means is for every single frame, we have to walk through every single pixel, and we have to swap R and B. So this is an example where things go wrong right here, in this step.
This happens in Java code, and this is code that we're responsible for. Right now, we are not using things like AltiVec to try to optimize that. We could, and probably we will. So there are things that we can do to speed this up. However, right now, you just have to pay a penalty, and that penalty is never going to go away entirely. We can make it smaller by using smarter algorithms, but we cannot do things like, for example, cache the pixels once we swap R and B. Why not? Because if you have a buffered image, you have access to the data buffers. If you have access to the data buffers, we do not know when you may draw directly on those pixels. So we cannot detect that.
What that means is we have to assume that once you call getRaster, extract the data buffer, and start manipulating the pixels -- which is what you have to do for animation, for example -- we cannot cache that conversion. That conversion has to happen for every single frame. So there's a speed penalty, and it's never going to go away. It will get smaller with better implementations, and we can do that. We'll work on that. Thanks. Can we take the slides off here?
On java-dev, there was a thread about three weeks ago, and some of you asked: so what's the deal with the Plasma applet? Why is it so slow? And the reason was actually twofold. First of all, the Plasma applet uses the original image type, and it uses the producer-consumer model, which means that when we try to render that, we have to convert that image into a buffered image. So this is where the penalty happens, first of all, because we have to create a buffered image object for every single frame. And also, unfortunately, the Plasma applet uses the index color model. The index color model, back in the day, was used as a sort of compression algorithm: in order to represent an image with fewer bits, we had indexed color. A color was represented simply by an index, a small number, as opposed to, for example, a 32-bit value for a pixel. However, right now on Mac OS X, we don't support indexed images directly. So what happens is we have to convert the index color model to the direct color model, and that incurs a penalty.
However, there are things you can do about this as a developer. First of all, there are important things to realize. This is just a piece of the code from the original Plasma applet. What you can do is switch to using buffered images, and don't hard-code the types of the contexts, because you never know what may change in the underlying implementation. Quartz may decide to implement a different destination context that may be faster, and we may want to switch to it internally. So there's a way for you to obtain the best context without hard-coding any values. And this is a piece of code that shows you how to do that. So notice we don't hard-code anything there, especially the type of the context. We just ask the system questions: give me your graphics environment, give me your device, give me the configuration, and give me the default color model. We ask all those questions, and then we use that information to create a buffered image to represent our context.
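The lookup the speaker describes might look like the sketch below (the headless fallback is my addition so the example runs anywhere; in a real GUI application you would have a screen and take the GraphicsConfiguration path):

```java
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

public class BestImage {
    // Ask the system for the screen's best-matching image type
    // rather than hard-coding a BufferedImage constant.
    static BufferedImage createBestImage(int w, int h) {
        if (GraphicsEnvironment.isHeadless()) {
            // No screen to match; fall back to a common default
            return new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        }
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()   // give me your graphics environment
                .getDefaultScreenDevice()        // give me your device
                .getDefaultConfiguration();      // give me the configuration
        // The returned image uses whatever layout matches the screen best
        return gc.createCompatibleImage(w, h, Transparency.OPAQUE);
    }

    public static void main(String[] args) {
        BufferedImage img = createBestImage(640, 480);
        System.out.println("image type: " + img.getType());
    }
}
```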
And you're guaranteed that if you do this on Mac OS X, you'll get the best matching context. And keep in mind, if your monitor is set to 32-bit mode, millions of colors, it will end up being the integer ARGB pre-multiplied type. And if your monitor is set to 16-bit, thousands-of-colors mode, it will end up being the ushort 555 RGB type. But don't hard-code those values anywhere. Can we switch to demo one, please? I will show you now the Plasma applet as modified by me.
Okay, so Vlad is trying to get that ready, and I will show you that demo later, but in the meantime, let's go to another slide. Hardware acceleration. This is a topic I like to talk about. Hardware acceleration: introduction. You ready? Mm-hmm. Okay, do you guys want to see the Plasma demo? Just speed it up? Okay. Can we switch, please, to demo one again?
Where is it? It's on that iBook? So we're running this on the iBook, so the frame rates will not be as impressive. But relative to each other, this should still give you an idea. So right now we are running at about 34 frames per second. This is the modified version. And remember, the modification was pretty much only making sure we're using the correct context type. That's all. So this is the new one, modified. So we get 32, maximum 37. And now let's run the original one.
And this is the original one, running at around 22, 21, 20 frames per second. So there you have it -- 35, say, versus 22. So you get a nice increase in speed by not doing much at all, just making sure we're using the correct image type. That's all. Vlad, can you switch this? Can you take this computer off this?
So let's move on and talk about hardware acceleration. Here we have some demo machines. These are dual 1 gigahertz machines. Their performance is 15 gigaflops. One gigaflop is one billion floating-point operations per second. You may be surprised to know that there is actually an equally fast processor sitting right there in the same box, and that is the graphics video card. Those computers have a GeForce4 MX video graphics card, which has a performance of 15 gigaflops.
So let's take advantage of that. And this is where hardware acceleration comes in. Normally, this GPU, this graphics processing unit that sits right there on your video graphics card, is not doing much at all. So hardware acceleration is trying to actually utilize that extra processor, if you will.
Now, on Mac OS X, there's only one way to achieve hardware acceleration, and that is through OpenGL. You'll hear a lot about OpenGL. You've probably heard about Quartz Extreme. So if you are interested in graphics on Mac OS X, I would suggest that you pay attention to where OpenGL is moving, because it is going to be the underlying mechanism for our hardware acceleration efforts.
Because we're using OpenGL: OpenGL was designed with speed in mind, as opposed to Quartz, where those guys really pay attention to quality. You look at every single pixel, and it's almost hand-tweaked by those guys to look pretty. OpenGL doesn't care much about anti-aliased pixels. It cares about speed. So if you use hardware acceleration, you'll gain speed, but you will lose quality. You have to be aware of that.
Activation. Many of you were unhappy with our decision to use an opt-in mechanism for turning on hardware acceleration, because what that meant for you is that whenever we introduced a new computer, you had to update your application. We understood that, and we changed the mechanism to be opt-out. So I think you'll be happier with that. As you start playing with this, please give us feedback on it.
Heuristics and fallbacks. There are things we cannot do using hardware acceleration yet. In those cases, we have to fall back on the software renderer, on Quartz itself, to render the scene. So we are using heuristics in order to determine what we can do using hardware acceleration and what we can't. For example, drawing fonts at sizes larger than 24 is not going to be hardware accelerated. Why not? Because the way we hardware accelerate text is we render every single glyph off-screen to an image. And then when the time comes to actually draw the text, we simply extract those pieces of the image and blit them onto the screen. So this is very fast. However, if you increase the font size, at a certain point the image we would have to create in order to represent the glyphs is going to be just too big -- it would take too much memory to represent that. So the heuristic, for example, is the piece of code that determines when we can use hardware acceleration and when we cannot.
Now, UI framework implications. What this refers to is basically: do not double buffer. If you want to use hardware acceleration, do not double buffer, because OpenGL is a mechanism for accelerating drawing on screen. If you double buffer, it means that you're rendering your scene to a buffered image off-screen. That is not being hardware accelerated. We cannot do that yet. We will. So let me show you a demo of hardware acceleration. Can we switch to demo one? Give it another second, Gerard. Another sec? Okay. Can we go back to slides, please?
Now, let me tell you. I'll have two different demos showing two different issues where hardware acceleration helps. First of all, images. If you think of the flow of the pixels in a slightly different way, you have to realize that if you have a buffered image, for example, that represents your image, the pixels live in your RAM. In order to show them on the screen, they have to go through the CPU bus, they have to go to the video graphics card, and then from the video graphics card to the screen. That's a long path.
So what can we do about this? Well, with OpenGL -- and this is an implementation detail -- what we can do is represent static images as textures. So we upload the image once to the video graphics card. Then the image, all the pixels, live right there on the video graphics card itself. When you want to render that image, we simply say, "Render the texture with that number, with that index."
And then when you think about this, the flow of the pixel-- the path the pixels have to go through in order to be shown on the screen is much, much shorter. They simply--they already exist. They live on the video graphics card. So in order to be shown to the screen, they take very, very short path, right from the video RAM to the screen. Boom. That's it.
So this is one aspect of hardware acceleration. Now, another one: if you render simple primitives, like drawing lines and rectangles, this is what video graphics cards were designed for -- to speed up those operations. Those are not bus intensive. So for example, the same demo just drawing lines on my iBook versus on this dual 1 gigahertz machine is not going to be that different, because the call to draw a line is very simple. We just pass four parameters -- x, y, and another x, y -- and that's it.
And then most of the time is actually spent drawing the line. So the machine itself doesn't have to do that much. It simply says draw line, and it's the video graphics card that draws it. So there are two different aspects to hardware acceleration, and I have two different demos to show that off. Can we switch to the CPU one? The Image Tiles demo. This one will show off the images.
So we have an image here. Another thing to keep in mind is that with hardware acceleration, transformations are for free. If you think about a software renderer like Quartz, if you apply a transformation matrix, then every single pixel has to be remapped using an algorithm in software. Here, the hardware will do all that job for you. So it is a little bit of cheating, but, well, I'll show you. So we can do this. Let's reset that. And let's look at the frames-per-second rate. So we are getting around 300 frames per second. Now let me switch hardware acceleration off.
This is with hardware acceleration off. Now it is Quartz that is doing all of the work. 14 frames per second. So this is a demo showing you the images. Now let me show you a simple demo that shows you the performance of drawing simple graphics primitives, drawing lines. So here we have a very simple 3D model viewer.
So we have 63 frames per second and around 1500 lines every frame. We have this model. So this is how it looks with hardware acceleration on. Let me go back to a very simple model, cube. There's no perspective here, so that's why it looks a little weird. Let me turn hardware acceleration off.
So this is with hardware acceleration off. I'm not sure if you guys can tell whether this looks better or not, but this is hardware acceleration off. We're starting with a very small, very simple model, and look how slow things get. Now let me turn hardware acceleration on right here. Do you see any difference in the quality of the model? Let me turn it off again. This is Quartz, with hardware acceleration off, and this is OpenGL, with hardware acceleration on. Now, if your application needs to draw many, many lines per second, then you're probably not going to miss those pixels that are anti-aliased and look just slightly better. So there are certain applications which you simply cannot do if you do not have hardware acceleration. Can we have slides, please?
That was the demo. And now hints and tips. Use hardware acceleration if possible. Just turn it on and see if it helps. If it doesn't help, turn it off and leave it off. If it helps, fine, use it. Use appropriate image types; don't hard code them. I showed you a piece of code that finds out what the best representation of an image is on a specific platform.
It's not platform specific. There wasn't any hard coding there; we just asked questions. Use clipping, if possible. If you know there are parts of a scene you don't have to bother rendering, use clipping to make the job smaller and easier for the underlying mechanism. Cache fonts and objects. Creating font objects in Java on Mac OS X is a very expensive call, so if you have to work with fonts, cache them. Don't recreate them. Do not double buffer. And do not mix AWT and Swing. Thank you.
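For readers following along, here is a minimal sketch of two of those tips, using only standard Java2D API: asking the `GraphicsConfiguration` for an appropriate image type instead of hard coding one, and caching `Font` objects. The class and helper names are invented for this example, and the headless fallback is an assumption added so the snippet runs anywhere.

```java
import java.awt.Font;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;

public class HintsSketch {
    // Ask the default screen's GraphicsConfiguration for an image in its
    // preferred format instead of hard coding a BufferedImage type.
    static BufferedImage compatibleImage(int w, int h) {
        if (GraphicsEnvironment.isHeadless()) {
            // Fallback for environments with no screen device.
            return new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        }
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        return gc.createCompatibleImage(w, h);
    }

    // Font creation is expensive, so keep a cache instead of recreating.
    private static final Map<String, Font> FONT_CACHE = new HashMap<>();

    static Font cachedFont(String name, int style, int size) {
        return FONT_CACHE.computeIfAbsent(name + "/" + style + "/" + size,
                k -> new Font(name, style, size));
    }

    public static void main(String[] args) {
        System.out.println(compatibleImage(32, 32).getWidth() + " "
                + cachedFont("Dialog", Font.PLAIN, 12).getFontName());
    }
}
```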
Now the future. 1.4 is our future, and we're moving there. We started working on this, and you'll have a developer preview of our 1.4 that you can play with. And there are certain interesting features in 1.4 that we're looking forward to. For example, the pluggable Image I/O framework. This is an interesting feature. It means your application doesn't really have to understand what the underlying image representation is, whether it is PNG or JPEG or TIFF. Somebody else, a third party, can implement a plug-in, and you can simply use it in your code. Your application doesn't have to care, doesn't have to know what the actual representation of the image is.
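A short sketch of what that format independence looks like with the 1.4 `javax.imageio` API (not code from the session; the class name is invented). `ImageIO.read` sniffs the stream and dispatches to whichever registered reader plug-in matches, so the calling code never names the format when reading.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class ImageIOSketch {
    // Round-trip an image through the pluggable Image I/O framework.
    // The writer is chosen by format name; the reader is chosen by
    // sniffing the stream, so the reading side is format agnostic.
    static BufferedImage roundTrip(BufferedImage src, String format) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(src, format, out); // any registered writer: "png", "jpg", ...
        return ImageIO.read(new ByteArrayInputStream(out.toByteArray()));
    }

    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        BufferedImage back = roundTrip(img, "png");
        System.out.println(back.getWidth() + "x" + back.getHeight()); // prints 4x4
    }
}
```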
In 1.4, we also get better font and text support. And now, this is an interesting point: TrueType hinted fonts. You remember me mentioning that we're not using Sun's software font renderer; we're using Quartz to render fonts. So things like this are already there because we're using Quartz. Because Quartz supports TrueType hinted fonts, we support TrueType hinted fonts. So we already have that feature. The Unicode 3.0 bidirectional algorithm will help you if your application has to be internationalized. If you need to handle Hebrew or Arabic text, this algorithm will help you determine the layout of the text. So it's a nice feature. We're looking forward to having that.
And also in 1.4, you have full Porter-Duff compositing rules support, so you can play with transparencies and alphas in greater detail than was possible before. And on this slide, I put these two features together specifically because they're interesting from our point of view. First of all, the new pipeline architecture in 1.4. What that refers to is that it tries to cache the state of the graphics context attributes. So if you change one color to another, it will not invalidate the entire pipeline. It will try to be smart about this and invalidate just the piece of the code that represents the graphics attributes. Now, this is interesting to us because we already have that in our implementation. We already have the architecture that supports that. For those of you who were here last year, we talked to you about our pen model. This is something that John Burkey came up with, and I helped him implement it. Hey, John. And so we already have that.
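For reference, here is a minimal sketch of the Porter-Duff compositing API being described (illustrative only; the class and method names are invented). It blends a half-transparent blue fill over a red background with the `SRC_OVER` rule, giving roughly equal red and blue in the result.

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CompositeSketch {
    // Blend a blue fill over a red background using the SRC_OVER
    // Porter-Duff rule with a constant 0.5 alpha, then sample a pixel.
    static int blendedPixel() {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(0, 0, 10, 10);
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g.setColor(Color.BLUE);
        g.fillRect(0, 0, 10, 10);
        g.dispose();
        return img.getRGB(5, 5); // roughly half red, half blue
    }

    public static void main(String[] args) {
        System.out.printf("%06x%n", blendedPixel() & 0xFFFFFF);
    }
}
```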
Volatile images. This is what Sun does in order to hardware accelerate drawing primitives. Again, I just showed you the hardware acceleration demo; we are already doing this. Now, the difference here is that because Swing is double buffered, they decided to optimize off-screen drawing. So they came up with this volatile image object that lets you hardware accelerate your scene into an off-screen context. That's slightly different from our hardware acceleration; in our case, we only have the on-screen context. So that's pretty much the difference, but we can and we will do this as well. An interesting point is that we already went through much of this, so we already know how to do this. It will be pretty easy for us to actually implement volatile images on Mac OS X. And it's really very similar to what we have. They offer fast paths for images, and so do we, and I explained to you why those static images can be hardware accelerated and can be drawn very, very fast. They support basic 2D operations; we go all the way beyond that and try to optimize more primitives, where they are more limited. But we can certainly do what volatile images can do. And they also cache glyphs for high text performance, and I explained to you that we are already doing this as well and how that works. So these two features are interesting because you guys already actually have them.
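For readers, here is a sketch of the standard 1.4 `VolatileImage` usage pattern being described (not the session's code; the class name is invented, and the headless check is an added assumption). Because the backing VRAM surface can be lost at any time, the idiom is a validate/render loop that repeats until a stable frame is produced.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.VolatileImage;

public class VolatileSketch {
    // Typical validate/render loop for a VolatileImage: re-validate and
    // re-render until contentsLost() reports a stable frame.
    static void renderLoop(GraphicsConfiguration gc, int w, int h) {
        VolatileImage vimg = gc.createCompatibleVolatileImage(w, h);
        do {
            int status = vimg.validate(gc);
            if (status == VolatileImage.IMAGE_INCOMPATIBLE) {
                // Surface is unusable on this configuration; recreate it.
                vimg = gc.createCompatibleVolatileImage(w, h);
            }
            Graphics2D g = vimg.createGraphics();
            g.setColor(Color.BLACK);
            g.fillRect(0, 0, w, h); // ... scene drawing goes here ...
            g.dispose();
        } while (vimg.contentsLost());
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("headless: no screen device, skipping");
            return;
        }
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice().getDefaultConfiguration();
        renderLoop(gc, 64, 64);
        System.out.println("done");
    }
}
```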
So let's talk about 3D graphics. Java 3D, as you probably know, is something we are working on with Sun, and that's what I can tell you: we're just working on that. OpenGL for Java. So what can you do if you actually want to do 3D graphics in Java on Mac OS X? Well, the answer is OpenGL for Java. And to talk about this, I would like to invite Ken Russell.
Yeah, we're set. So this is the iBook? Okay, cool. Hi. So what we'd like to show you is another demonstration of a new feature in JDK 1.4 called New I/O. New I/O can help accelerate your graphical applications, although not the 2D graphical applications that you've seen today. It can actually help accelerate 3D graphical applications. And it can also accelerate non-graphical applications, like if you're doing high-throughput sound. Okay, so let's fire this up.
So this is not running off the iBook. Oh, it isn't. OK, that's fine. No problem. That's good. Okay, so what we've got here is a large data set. It's a terrain data set, and this is actually of the Grand Canyon in Arizona. It's a pretty large data set, about 300 megabytes in size, and it's too large to fit in main memory, typically, on most computers, although actually not on Mac OS X hardware. Okay, so what do we have to do? We have to take the data set and downsample it to a reasonable size that can actually be processed by the computer in real time. Now, what this application is showing off is how you can use New I/O to do all of the processing that's necessary to render this terrain data set in real time in the Java programming language.
Okay, so this demo uses New I/O in two ways. The first way is to get the data into memory to be operated upon. It's using the new file mapping support in JDK 1.4 to get the data off the disk and available to the application very quickly. It's not using a file input stream for each tile that represents this data set, because that would be way too inefficient. The second way this application is using New I/O is to get the data out to the graphics card. Now, we're using OpenGL for Java, as Gerard mentioned earlier. This is a free software OpenGL binding for the Java programming language that allows you basically to make raw OpenGL calls from Java. So you can create an OpenGL context, you can set up a vertex array, you can put your polygonal data into it, and you can then go and render it. Now, OpenGL for Java has built-in support for New I/O.
And this means that you can keep your data inside of a New I/O buffer, which is the new data type that's present in 1.4, and directly take that data and put it down on the graphics card. In previous versions of the JDK, before 1.4, in order to do an operation like this, you actually had to copy the data from some place where the JVM could operate upon it out to the graphics card. So basically you would have two data copies, one for reading the data in from the disk and the other going out to the graphics card. And basically, this was inefficient. And what we're going to show you is just how inefficient it is. So we've actually implemented the inner rendering loop of this application twice. The first is what you're seeing now, which is the New I/O version, the 1.4 version.
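For readers, a minimal sketch of the file-mapping half of what Ken describes, using the standard 1.4 `java.nio` API (not the demo's code; the class name, file contents, and helper are invented for this example). The `MappedByteBuffer` gives the application a zero-copy view of the file, and a binding like OpenGL for Java can consume such a buffer directly as vertex data.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.FloatBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NioSketch {
    // Memory-map a data file and view it as floats without copying.
    static float firstVertexComponent(Path dataFile) throws IOException {
        try (FileChannel ch = FileChannel.open(dataFile, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            FloatBuffer verts = map.asFloatBuffer(); // zero-copy float view
            return verts.get(0);
        }
    }

    public static void main(String[] args) throws IOException {
        // Write a tiny stand-in "terrain" file: two big-endian floats.
        Path p = Files.createTempFile("terrain", ".dat");
        ByteBuffer b = ByteBuffer.allocate(8);
        b.putFloat(1.5f).putFloat(2.5f);
        Files.write(p, b.array());
        System.out.println(firstVertexComponent(p)); // prints 1.5
        Files.delete(p);
    }
}
```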
And the other version is what this demo would look like if you were running it under JDK 1.3, and your only option for rendering to the graphics card was to copy the data out of the JVM. Here we go. This is 1.4. And this is what it looks like using the 1.3 APIs. Okay, let's be very clear about this. This is 1.3, and this is 1.4. All right.
Now, what's phenomenal about this is that you can develop on OS X with the great operating system, the great developer tools, and the great OpenGL implementation, and you can get really high performance out of it. You can approach C++ speeds. We're getting, frankly, in some of these demo applications, 90% to 100% of C++ speed at this point using the HotSpot JVM. And in addition to OS X, you can deploy on any platform that has a JDK 1.4 implementation, which is good for you, the developer, because you get more possible market penetration for your app, but you also get the advantage of developing on a great operating system with great hardware. That's really all I've got to say. Thanks, Ken. That was great.
Could you go back to slides, please? That was Ken, showing OpenGL for Java and New I/O. Now, where to go from here? Obviously, there are some interesting Java sessions that you probably should think about attending. But also, if you're interested in graphics, I would like to reiterate that you should pay close attention to OpenGL and what we are doing with it. There are many OpenGL sessions at this year's WWDC, and I would encourage you to at least check them out. And let's go to Q&A.