Information Technologies • 51:02
WebObjects is a powerful system to develop and deploy web applications and web services. Get the latest information on configuring and deploying your WebObjects applications on Mac OS X Server. Learn about tools and techniques for troubleshooting your WebObjects deployments. Find out how to make your transition to the latest version of Mac OS X Server as seamless as possible for your WebObjects applications.
Speaker: Mankit Sze
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper, it has known transcription errors. We are working on an improved version.
Good afternoon, everybody. Welcome to WebObjects Deployment for Leopard Server. My name is Mankit Sze; I'm a senior WebObjects engineer. Today I'm very happy to be here with you, and we're going to talk about some new features in the WebObjects runtime. I'm going to cover four brand-new WebObjects features for Leopard Server.
The first, and a very important one, is the performance boost enabled by using Java's new I/O (NIO). We're going to introduce a new threading model, show you what we mean by "performance boost," and let you experience it. The second feature is load balancing with the Apache 2.2 mod_proxy_balancer.
Starting in Leopard, we are deprecating the proprietary mod_WebObjects load-balancing module in favor of the standard Apache 2.2 mod_proxy_balancer. The third feature is load balancing with a Tomcat 5.5 adaptor. We are doing some experimentation there, exploring options, and we'll talk about that very soon.
The fourth feature, which is also very important — I'm very happy to tell you that starting in Leopard, WebObjects has built-in JMX support in every WebObjects application instance. I'm going to talk about some new APIs, show how you can leverage them, and do a demo. So, to begin with, let's talk about the performance analysis we did with JMeter.
What we did was introduce the new threading model, adopt Java NIO, and run a quick performance analysis back in Cupertino. We loaded the standard file-upload example that ships on the Xcode DVD and ran the server on a MacBook with a 1.83 GHz Intel Core Duo processor and 2 GB of RAM.
The interesting thing is that we generated the load from a Mac Pro with four processor cores and 4 GB of RAM. Why did we do that? Because we needed to generate a lot of load to try to kill the NIO server — and we just could not. So let's take a look at the numbers we got.
In terms of response-time performance, here's what we got with classic WO, the old model. The average response time was 61 milliseconds, the minimum response time was 47 milliseconds, and the 90% line — meaning 90% of the requests were handled by the WebObjects instance within that time — was 63 milliseconds. That was using WebObjects 5.3, which is what we shipped and what you already have.
With the new threading model and the NIO runtime, we got this. Hey — thank you. Our average response time was cut from 61 milliseconds to 11 milliseconds, our minimum response time was cut from 47 milliseconds to 7, and the 90% line was cut from 63 milliseconds to 12 milliseconds. Who can do the math? That's roughly six times faster.
Thanks. So after response time, what about scalability? When you develop applications, from a customer's perspective, your customers want responses really quickly — that's why we talked about response time first. But from an application administrator's perspective, you want a server that's super scalable, so you don't have to buy tons and tons of machines to serve your customers. So let's take a look at the throughput performance we measured back in Cupertino.
In terms of throughput performance, using the classic WO model — stream-based blocking I/O with the WebObjects 5.3 WOWorkerThread model — we were able to handle 30 requests per second. Using the new threading model with Java NIO, we get 119. So how many more requests can we handle with the new I/O? About four times as many requests per second.
So what's next? We've shown you the performance boost from NIO and the new threading model. The next thing to talk about is how that's possible: how does the new threading model work? Until now, we've all lived with WO worker threads.
Now I'm telling you those worker threads aren't relevant anymore? What's the deal? Why are we getting rid of them, and what's so special about NIO? With that, let's dive into the new threading model and the Java NIO architecture. To start with, in our use of Java NIO there's something called the connection watcher.
What it is, is a single thread that just watches for incoming connections. What's so special about that? In the classic model, you have one thread watching one socket. With Java NIO and the new threading model, one thread plays the role of connection watcher and watches hundreds or thousands of socket channels. That means you save a lot of server resources.
There's no more one-thread-per-socket, which was the case in the classic world. So what happens when the connection watcher discovers a ready socket channel? It hands the connection off to a worker thread pool. And what does the worker thread pool do? It selects one of its managed worker threads and distributes the connection to it.
And what about these worker threads? Three keywords. One, they work directly with clients. Two, they work independently — without supervision from the thread pool, so there's no overhead pinging back and forth. Three, they work in parallel. Later in the slides we'll see why direct, independent, parallel work matters so much: we need it to achieve maximum concurrency.
So now, as you see, all three worker threads — the one in purple, the one in red, the one in green — are all doing good work, serving clients, and the clients are on different networks. Everything seems to be fine. Now, what happens when trouble comes? Like that.
You see that? The worker thread in purple is servicing a client on a slow upstream network. What does that mean? Your server is on a very fast network; it's ready to receive lots and lots of data very quickly. But it's too bad:
your client is on a slow upstream network and cannot send you data quickly enough. So what happens in the classic world? The worker thread just sits there, standing still, blocked on read, consuming server resources and doing no useful work.
Because in the old model, once a request comes in, a worker thread is assigned to it. It grabs onto that request really tight, and even when the client is not ready, it holds on and never lets go.
But in the new world, what does it do? It returns itself to the worker thread pool. And what happens to the other two threads, the red and the green? Remember the three keywords? The first one was direct — I showed you in the previous slides that they were direct.
Take a look: they're working directly with clients. And now I'm going to show you that they work independently of each other. The first worker thread, in purple, has no more work to do, so it returns itself to the thread pool. But the other two continue.
They're handling requests, processing business logic. So how does the worker thread pool deal with this? It makes a note to the connection watcher and says, hey, you know what? Watch that client. We have to watch him because he's on a slow upstream network.
There's no more work for us right now. So as you see, the worker thread in purple goes back to the worker thread pool and gets ready to do useful work — unlike in the classic model, where the worker thread would just sit idle, blocked, consuming server resources and doing no useful work.
So now, what happens when the slow client finally catches up and is ready to send more data, but there's nobody home to receive it? The worker thread pool will select a worker thread. And what's special about the new threading model is that this could be the same worker thread that serviced the client before, or a totally different one.
Because, as I said before, unlike the old model — where a worker thread would hold on to a transaction until it completed — in the new model they don't. When there's no more work, the worker thread returns itself to the worker thread pool and gets ready to do useful work.
That means a worker thread could be reading one minute, writing a response the next, and processing a request's business logic the minute after that. And guess what? They're all independent, all swappable, and they can be serving three totally different transactions at different times, doing different things. That's how we achieve maximum concurrency: by breaking the tie between one thread and one transaction.
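To make the connection-watcher idea concrete, here is a minimal modern-Java sketch of the pattern — not WebObjects' actual implementation; the class name and pool size are illustrative:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One selector thread watches many channels and hands ready connections
// off to a pool -- the "connection watcher" pattern described above.
public class ConnectionWatcher implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;
    private final ExecutorService workerPool = Executors.newFixedThreadPool(16);

    public ConnectionWatcher(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.configureBlocking(false);                  // non-blocking accepts
        server.bind(new InetSocketAddress(port));
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    @Override
    public void run() {
        while (true) {
            try {
                selector.select();                        // one thread, many channels
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        // Hand off and go straight back to watching. In the
                        // model described above, a stalled client would be
                        // re-registered with the selector rather than block
                        // its worker thread (elided here).
                        workerPool.submit(() -> handle(client));
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private void handle(SocketChannel client) {
        // Read whatever is available, run business logic, write the response.
    }
}
```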
So now,
[Transcript missing]
It is on a slow downstream network. What does that mean? Your server, again, is on a very fast network; it wants to push data really, really fast to its client, but the client just cannot keep up — it doesn't receive data quickly enough. As a result, the client is not ready and the worker thread has nothing to do. With blocking I/O, the worker thread would just sit still, idling, blocked, consuming server resources.
With the new threading model and NIO, the worker thread returns itself to the thread pool, and the rest of the worker threads keep working independently, in parallel, without any interference. How does the worker thread pool handle that worker thread? It recycles it and gets it ready to handle a different client.
And while it's doing that, it makes a note to the connection watcher: "Hey, watch that guy. He's on a slow downstream network. When he becomes available, let us know, and we'll let the worker thread pool select one of its managed worker threads to handle him."
Now, finally, the client on the slow downstream network has become available, but there's nobody home to send him any data. So what's going to happen? The worker thread pool is going to select a worker thread. It could be the same thread; it could be a totally different one. In this case, it's a different thread — before it was red, now it's yellow.
After the pool selects the worker thread, it continues servicing the request in a direct, independent, parallel manner — writing the response. Now, notice that the green worker thread is fast both ways: fast upstream and fast downstream.
So of course it finishes first. When it's finished, it returns itself to the worker thread pool and the transaction completes. What about the other two? They continue working with their clients, and when they're done, they return themselves to the pool as well. And there we go — we've completed an illustration of what life looks like with the new threading model and Java NIO. Let me get some water.
So, we've shown you some performance analysis we did back at Cupertino headquarters, and we've talked about the theory and how things work. The next question you're going to ask me is: Mankit, is this relevant to me? Can I replicate this kind of performance enhancement in my own production environment? To enable you to do that, let's talk about the configuration settings — the things you actually need to do to turn this on.
So let's talk about how you turn it on. Turning on the NIO adaptor is really easy: you set WONioEnabled equal to true, and notice there's a -D in front of it. What does that mean? It means you can pass this on the command line using standard Java notation — the -D flag when you launch your Java application — or, if you're a long-time WebObjects fan, you can specify it in the WO Properties file instead. Both work. The moment you turn this WONioEnabled flag to true, you're running with the new threading model, you're running on NIO, and your WO worker threads become irrelevant.
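For example (the flag spelling here is as spoken in the session; the exact property name in the Leopard seed may differ):

```sh
# On the command line, as a standard -D Java system property:
./MyApp.woa/MyApp -DWONioEnabled=true

# Or equivalently, as a line in the application's Properties file:
#   WONioEnabled=true
```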
So there's no hybrid mode. You cannot say, "OK, I want NIO on, but I also want worker threads on at the same time." No, there's no such thing. The reason we don't do that is that if you turn NIO on, you have one thread running the connection watcher and watching hundreds or thousands of requests.
But if, at the same time, on the same server and the same instance, you're running blocking I/O, it's going to spawn a lot of worker threads up front, and that consumes a lot of server resources. That's something we want to avoid — and that's why there's no hybrid mode.
So it's a big switch: you either move over or you stick around. Next is WO NIO blocking enabled. That may seem a little confusing — what is it? NIO is basically two parts. One is doing non-blocking I/O with a multiplexer — in Java terminology, a selector watching the registered socket channels.
The other side of it is that NIO operates in terms of byte buffers. The reason I introduced this flag is that while developing this, I would get into situations where something went wrong: is it a bug in the connection watcher, or in the way it handles notifications? Remember, the worker pool notifies the connection watcher every time a client is not ready to transfer data.
If you set blocking enabled to true, the worker thread will just call Thread.yield() instead of notifying its thread pool, which would in turn notify the connection watcher to watch that slow client. So this is basically a debugging switch. I'm pretty sure some of you may run into this: you turn WO NIO enabled to true and something breaks.
Try setting blocking enabled to true and see if it fixes your problem — and please file bugs at bugreport.apple.com; we need your help to mature these frameworks. Okay, enough digression. Next, let's talk about WO NIO byte buffer factories. In Leopard, WebObjects 5.4, we introduced NIO byte buffer factories, and the most important flag is called direct byte buffer enabled. So what is a direct byte buffer? There are two types of buffer: the byte buffer and the direct byte buffer. A byte buffer consumes heap space;
it's managed by the VM. A direct byte buffer uses non-heap space. What's the benefit of that? Non-heap space is closer to the low-level stack, so when you do network I/O or file I/O operations using non-heap memory, it's a lot more efficient: because it's closer to the native system calls, the I/O is faster.
So what's the catch? The catch is that the creation cost of a direct byte buffer is larger than that of a traditional byte buffer. To address that, we introduced a second configuration setting: the WO NIO preheated byte buffer pool size. What is it? Say you're running a small business with an intranet, and at most you have only 10 clients accessing your application. You don't want to set this preheated byte buffer pool size to a really large number, because at most there will be 10 — and the consequence of setting it really big is that launch time increases accordingly.
So make sure you set this number to a reasonable size, so that you pay the construction cost up front. Then, once your server is up and running, users get quick response times, because you get the benefit of non-heap memory and its cost was taken care of at launch time.
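As an illustration of the preheating idea, here's a hedged sketch of a direct-buffer pool using the standard java.nio API — the pool class and the sizes are hypothetical, not the WebObjects implementation:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Pay the (relatively expensive) allocateDirect() cost once at launch,
// then check buffers in and out for the lifetime of the server.
public class DirectBufferPool {
    private final BlockingQueue<ByteBuffer> pool;

    public DirectBufferPool(int poolSize, int bufferSize) {
        pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++) {
            // Direct buffers live outside the Java heap, close to the
            // native I/O layer, so reads and writes avoid an extra copy.
            pool.add(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    public ByteBuffer checkOut() throws InterruptedException {
        return pool.take();          // blocks if all buffers are in use
    }

    public void checkIn(ByteBuffer buf) {
        buf.clear();                 // reset position/limit for reuse
        pool.offer(buf);
    }
}
```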
The next thing is the NIO default byte buffer size. The default is 8192. What is it? Picture a bucket and a water tap. In this metaphor, the water tap is your network
[Transcript missing]
The water is dripping. Now it's done. But take a look: the byte buffer you allocated — that you told the WebObjects runtime to allocate —
is huge, and it never gets used to its full capacity. The utilization is so low that you're wasting a lot of server resources. That's a way to kill scalability, so please don't do that. If you know you only have very small data coming in burst mode and your network is slow, use a small bucket. Then when the data comes, the water drips in and the buffer is used at full size. In terms of deployment performance, it's all about efficient use of server resources.
[Transcript missing]
As you've seen before, your bucket fills up really fast — and what happens when it fills up? Your network wants to push a lot of data to you, but you don't have a big enough byte buffer to hold it. So you have to stop the tap — pause the network — and replace the bucket. If you're on a fast network and expect lots of data coming, use a larger bucket: set the byte buffer size larger. Like that.
[Transcript missing]
Remember the connection watcher? The connection watcher is implemented by a class called WONioMultiplexer, and we introduced a configuration setting called the multiplexer timeout. What does it do, and why do we need it? Again, it's about efficient use of server resources. You set the timeout to, say, 30 minutes, so you can track idleness.
We're going to have a hook into the WONioMultiplexer for you, so you can do something like this: if within 30 minutes you don't see any ready socket channels — meaning no clients have hit your server for 30 minutes — then you may want to do something. Say, email the system administrator: hey, our server has been idling; maybe we should reuse this server, deploy different instances, and retire these ones.
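As a sketch of what such a hook could look like — the interface and method names below are hypothetical illustrations, not a published WebObjects 5.4 API:

```java
// Hypothetical callback for the multiplexer's idle timeout.
public interface IdleListener {
    // Called when no socket channel has become ready within the
    // configured timeout window (e.g., 30 minutes).
    void serverWasIdle(long idleMillis);
}

class AlertingIdleListener implements IdleListener {
    @Override
    public void serverWasIdle(long idleMillis) {
        // In a real deployment you might email the administrator or tell
        // your provisioning system to retire this instance.
        System.err.println("No client activity for " + (idleMillis / 60000)
                + " minutes; consider retiring this instance.");
    }
}
```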
Right. The next thing is the adaptor listen queue size. How many people here came to previous years' WebObjects performance-optimization sessions? Can you raise a hand? Last year we said the default listen queue size was four. That's not relevant anymore.
With the new threading model, we did some performance testing, and four doesn't seem to be optimal anymore. I encourage you to try it out, characterize it, and file a bug report — tell us how it works in your production and staging environments, and we'll mature these frameworks together. The default for the new NIO will be 128. Next, there's the preheated worker thread pool size. What does that mean? If you have a Mac Pro, you want to set this to at least 4, because you have four cores.
Basically, you want to consider how many concurrent requests you expect to handle. If you set it too big, again, you consume a lot of server resources; if you set it too small, you're not fully utilizing your CPUs. The default will be 16. That may change — this is just a developer preview — but the good news is that you can specify it yourself based on your needs.
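Pulling the settings from this section together, here's a hedged Properties-file sketch; treat the property spellings as approximations of what was said in the session, not confirmed names:

```properties
# Big switch: turn on the NIO threading model
WONioEnabled=true
# Debug fallback: worker threads yield instead of notifying the watcher
WONioBlockingEnabled=false
# Use non-heap (direct) byte buffers for network I/O
WONioDirectByteBufferEnabled=true
# Pay the direct-buffer allocation cost up front at launch
WONioPreheatedByteBufferPoolSize=16
# The "bucket" size, in bytes (default 8192)
WONioDefaultByteBufferSize=8192
# Connection-watcher idle timeout, in milliseconds (here, 30 minutes)
WONioMultiplexerTimeout=1800000
# Adaptor listen queue (new default 128; the old default of 4 is obsolete)
WOListenQueueSize=128
# Worker threads preheated at launch (default 16; at least one per core)
WONioPreheatedWorkerThreadPoolSize=16
```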
[Transcript missing]
I'm going to talk about two graphs: one is the throughput-versus-load graph, the other is the response-time graph. I encourage you to do something similar in your production environment. When you start adding load, the throughput increases. The response time increases too, but not by much, because your server is still at low utilization.
[Transcript missing]
Then, as you add even more load, the throughput decreases instead of increasing. Why? Because you've reached very high utilization, and you should do something — think about deploying more instances, using more servers. What happens if we don't do anything, and just keep monitoring it, saying "don't worry about this, Mankit, it will just work," crossing our fingers?
Then you hit the buckling point: when more users come in — boom, it's dead. Your throughput decreases exponentially and your response time increases exponentially. Basically, you're just thrashing, not doing any useful work. So after that, let's do a quick summary of what we've covered so far. One, we talked about the performance numbers. Two, we talked about the theory — how the new threading model and NIO work.
Three, we talked about configuration settings, and then about analyzing performance in a production environment. Next I'm going to talk about load balancing. As we said, once you hit the buckling point, your instances are dead. So you need to start thinking about load balancing when you hit your saturation point — or, at best, way before you hit it. With that, let's talk about load balancing with Apache 2.2.
Starting in Leopard, we are going to use mod_proxy_balancer instead of the Apple-proprietary mod_WebObjects. And I think it's a good thing that we're moving to the Apache community: you always pick up the newest Apache improvements, and it's actively being developed.
In Leopard — some of you have probably seen this slide in another session — we actually have a GUI in Leopard Server's Server Admin application, so you can specify your application cluster using mod_proxy_balancer. So yes, you can use an Apache front end to distribute load to WebObjects back ends. And by the way, WebObjects runs faster than Ruby on Rails and is much more scalable. WebObjects is still alive and moving forward.
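Under the hood, a mod_proxy_balancer cluster definition in httpd.conf looks roughly like this — a hand-written sketch; the application name and instance ports are hypothetical, and the GUI may generate something more elaborate:

```apache
# Standard Apache 2.2 proxy modules
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

# One BalancerMember per WebObjects instance
<Proxy balancer://myappcluster>
    BalancerMember http://127.0.0.1:2001 route=1
    BalancerMember http://127.0.0.1:2002 route=2
</Proxy>

# Round-robin WebObjects requests across the cluster, with sticky sessions
ProxyPass /cgi-bin/WebObjects/MyApp.woa balancer://myappcluster stickysession=JSESSIONID
```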
So we've talked about the GUI — what about the people who use the command line? Let's do a quick survey: how many people here would prefer using a GUI? Okay. And how many would say: I do all my system administration with the command line, I SSH into the data center?
Okay, so starting in Leopard Server, WebObjects takes care of both audiences. We have the GUI front end, and you can also configure and administer your applications from the command line over SSH. How do we do that? Five steps. Step one: use SCP to securely copy your WO application executable onto the remote host. Step two: generate the launchd.plist file.
Let's talk about launchd for a second. In WebObjects 5.3 there's something called wotaskd. What it does is keep track of the health of your applications: they send lifebeats, and if wotaskd doesn't get a lifebeat in a timely manner, it decides the instance is dead and recycles or restarts it, things like that. And Apple is pushing launchd, so we're getting along with them — we're jumping onto launchd.
The benefit of this is that you get rid of a lot of the problems you may have been experiencing with wotaskd.
[Transcript missing]
Your WebObjects instances start up and run, and if one dies, launchd picks it up and restarts it. There's no more wotaskd, which means no more CLOSE_WAIT problem if you're deploying on Apache.
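For illustration, a minimal launchd.plist for a WebObjects instance might look like this — the label, paths, and port are hypothetical; the KeepAlive key is what gives you the restart-on-death behavior:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.MyApp</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Library/WebObjects/Applications/MyApp.woa/MyApp</string>
        <string>-WOPort</string>
        <string>2001</string>
    </array>
    <!-- launchd relaunches the instance if it exits: the role wotaskd
         lifebeats used to play. -->
    <key>KeepAlive</key>
    <true/>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```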
Thanks. So it seems people do experience CLOSE_WAIT problems with WebObjects 5.3 on Apache 1.3. The good thing is that we're moving to Apache 2.2. And then the fourth step is to create an httpd-webobjects.conf file.
[Transcript missing]
Basically, if you have existing deployment environments that all deploy with wotaskd on the Apache 1.3 adaptor, there's one quick shortcut for moving to Apache 2.2. Fire up Java Monitor and click on the Migration tab. The moment you click it, the httpd-webobjects.conf file is generated — spit out for you on the web page — and you can copy it, save it to local disk, SCP it over, and use that file.
Thanks. There are also some other features in the Migration tab in Java Monitor: you can specify an SSH identity file, and it will actually move the files for you, making it even easier. The last step, the fifth step, is of course apachectl start — or, if Apache is already running, apachectl graceful. That concludes the Apache 2.2 load-balancing story. Let me get some water.
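Pulling the five steps together, the command-line flow might look roughly like this hedged sketch (host names and paths are hypothetical):

```sh
# 1. Copy the application bundle to the deployment host
scp -r MyApp.woa admin@deployhost:/Library/WebObjects/Applications/
# 2. Copy the generated launchd.plist into place
scp com.example.MyApp.plist admin@deployhost:/Library/LaunchDaemons/
# 3. Load it so launchd starts (and babysits) the instance
ssh admin@deployhost 'sudo launchctl load /Library/LaunchDaemons/com.example.MyApp.plist'
# 4. Install the generated Apache adaptor/balancer configuration
scp httpd-webobjects.conf admin@deployhost:/etc/apache2/other/
# 5. Start Apache, or restart it gracefully if it is already running
ssh admin@deployhost 'sudo apachectl graceful'
```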
So, what's next? Other load-balancing strategies. What about using a hardware load balancer? We're not going to talk about load balancing with a hardware load balancer, but it would certainly work. Instead, we're going to talk about load balancing with Tomcat 5.5.
Let's talk about that for a few minutes. So what is it, load balancing with Tomcat 5.5? Basically, it's a filter that sits in Tomcat — and it doesn't have to sit in Tomcat; it can sit in any J2EE container.
I know of customers who were trying to sell WebObjects to their management in a J2EE shop — say they use WebSphere or WebLogic — where management doesn't know about the power of WebObjects and how easy it is to use. If you want to develop just one part of the application with WebObjects, yes, you can do that. We are doing this as an experiment, so please tell us if you like it, and then we'll see what we can do.
One very important point my manager always reminds me of during rehearsal: make sure we're clear that this load-balancing adapter is not the same thing as servlet deployment. It is not the case that you develop your application, wrap it into a WAR bundle with every framework shoved inside, and run the whole WebObjects application as a servlet. No, that's not what we're experimenting with here. Instead, we have a filter, which talks to the different instances running outside the container.
How do we do that? Five steps, very similar to load balancing with Apache. Step one: use SCP to copy your WO bundle — the executable — onto the remote host. Step two: generate the launchd.plist file. How do you generate it? There's an open-source tool called Lingon, which gives you a GUI to configure it; or you can use vi; or you can take your WebObjects application instance and do this:
launch, say, HelloWorld with -DWOLaunchdEnabled, and if you launch it with the appropriate access privileges, it will generate the file for you and dump it in the correct place. It will even launchctl-load it for you — though I believe that portion of the code might be commented out in this preview.
So the third step is to call launchctl load with the plist. The fourth step is to create the adaptor config XML file, so the load-balancer filter knows how many instances are out there. Last step: start your container. With that, let's do a quick demo.
Demo machine, please. Okay, so what have we got here? We're going to show you how to use the Tomcat adapter to do round-robin load balancing with sticky-session support. With that, I'll bring up three terminals for my three instances and cd into that directory.
And then I launch them. Now watch this: do you see that -DjsessionId equal to a number? Okay, let me open up another terminal and show you something. Here — I'm sitting in the /Library/WebObjects/Configuration folder, and I'm going to cat this. You see that? That's an instance ID. When you launch your instances — let me bring this up —
the -DjsessionId=1 up on the top of the screen must match your adaptor config XML file, which says instance ID equals 1. If you specify 1 in one place and 10 in the other — I'm sorry, it's not that smart. WebObjects is cool, but not so smart that it can read your mind. You have to make sure they match.
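For reference, the kind of adaptor config the filter reads might look like the sketch below; the element and attribute names are illustrative, reconstructed from the demo rather than a published schema:

```xml
<!-- Hypothetical adaptor config: each instance id must match the
     -DjsessionId value the corresponding instance was launched with. -->
<adaptor>
    <application name="HelloWorld">
        <instance id="1" host="127.0.0.1" port="2001"/>
        <instance id="2" host="127.0.0.1" port="2002"/>
        <instance id="3" host="127.0.0.1" port="2003"/>
    </application>
</adaptor>
```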
Okay, let's quit this. And now I'm starting one instance — let's move it aside. I'm starting the second instance — move it aside. Now I'm going to start the third instance — move this aside. After I start the three instances... it's starting. Okay, we have some networking issues here. So let's go local.
[Transcript missing]
As you see, I started Tomcat, and now I try to access it — which is here. I start a transaction, and I want you to pay attention at the top. Let me bring it down. Pay attention right here: do you see that? jsessionid equal to 2. And then when I click this —
[Transcript missing]
Now, I've updated a few times, and it's going to instance one. So it's successfully doing round robin with sticky-session support. And that concludes my Tomcat demo. Slide, please.
So what's next? The next thing is JMX. That's part of the reason we did the Tomcat work: Tomcat is Java, we have built-in JMX support in WebObjects, and we can do a lot of interesting things if we move to Tomcat — such as using advanced class-loading mechanisms. We have a demo.
So let's talk about troubleshooting with JMX. In JMX we introduced some new API. The first new API we introduced, on WOApplication, is registerMBean. Why do we want that? In the past, you used all these lifebeat things to see whether your instance was healthy and running, and I believe we can do better than that.
You can define your own notion of instance health using JMX. You can specify how much memory your instance should have, and so on. You can track different users and set up different conditions — if a certain assertion fails and you want to log that transaction to JConsole, you can. We'll have a demo of that.
The next API we introduced is unregisterMBean — obvious: you have register, you have unregister. And then getMBeanServer: every WebObjects application instance has a built-in MBean server. That means you can do some interesting things. Say, for example, you have a web application with one page called...
[Transcript missing]
But that feature is not part of the Leopard preview disc. Another API we introduced is getJMXDomain and setJMXDomain. So let's have a demo with JMX.
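Before the demo, here is a hedged sketch of defining and registering a simple health MBean with the standard javax.management API. Presumably the WOApplication registerMBean/getMBeanServer calls wrap something like this; the bean class and the domain name below are hypothetical:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard-MBean naming rule: the interface must be named <Class>MBean.
interface AppHealthMBean {
    int getBadInputCount();
}

public class AppHealth implements AppHealthMBean {
    private volatile int badInputCount;

    public int getBadInputCount() { return badInputCount; }

    // Call this from request-handling code when a user submits something
    // outside the expected range, as in the demo that follows.
    public void recordBadInput() { badInputCount++; }

    public static void main(String[] args) throws Exception {
        // Here we register with the platform MBean server directly.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new AppHealth(),
                new ObjectName("com.example.myapp:type=AppHealth"));
        Thread.sleep(Long.MAX_VALUE);   // keep alive so JConsole can attach
    }
}
```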
OK. So with that, we're going to start the WO JMX demo, and Chuck, our intern who did the cool photo-viewer application yesterday, is going to help us demo this. Take a look — I want you to pay attention here: -Dcom.sun.management.jmxremote. If you're running Java 1.5, you want to turn this on. If you're running JDK 1.4.2, you want to turn it on too, plus download an optional package from Sun so that you can use JMX.
And with Java 1.6, when it ships, this flag won't be necessary. So, Chuck, can you launch this for us? The application is launching. Okay — we have some network problems here. So what does this application do? It's a simple application that asks users for input.
It expects numbers from 0 to 10. So Chuck, can you launch JConsole? You see, in JConsole you can connect locally or remotely; in this case, because we're running on the same machine, let's do it locally. Can we click Connect? Bottom. Lower. Yep, there you go.
[Transcript missing]
Here you see there's a heap-space meter and a non-heap-space meter. With the new threading model and NIO, you'll see non-heap space go up a lot while not using as much heap space — it's your application code that uses lots of heap memory. Can you click on Threads? Here's the interesting thing.
In this pane you can see all the worker threads and what they're doing. Can you click on one of them? As you see — before JMX, when your application hung or didn't respond as quickly as you expected, what did you do? You just guessed at what it was doing and built lots of logs.
But with JMX, you know exactly what state it's in, and you have a stack trace. So next time your application runs into issues, you can fire up JMX, look at all the threads, and know whether EOF — Enterprise Objects Framework — is deadlocking on you, or whether you have a problem with the Apache adaptor hanging all your WebObjects threads. Okay, can we go to Classes? This shows how many classes are loaded in your runtime. Now let's talk about MBeans — things will get more interesting.
So here we have three things. Thanks — make it bigger. Nothing seems exciting yet. Chuck, can you go back to the application, please? Let's start by entering some good numbers, like three or five. Okay, the user submitted a five; go back to JConsole. Nothing happened. Now try, say, number seven.
Okay, go back to JConsole. Nothing happened. Now, here's a badly behaved user. Let's give a number out of range — say, 11. Then look at JConsole: boom. You see it logged that as a transaction coming from that host — can we make this bigger, please? — showing that it actually submitted the number 11. In this simple application we're showing that with JMX you can do conditional registration. This trivial example illustrates how much you can do when it comes to EOF.
Say you have an EOF application where each user has a default editing context in the session, and you expect each user to fetch only, say, 10 rows. But all of a sudden a user is fetching all the rows in the database. With JMX, you can track this conditionally: you track only the users doing something you've specified as unexpected behavior. We encourage you to try this out — and that concludes the JMX demo. Slide, please.
Okay — oh, there's one little slide missing. That's okay; I'll talk through it. There are four new features we introduced in Leopard Server. One: NIO, the performance enhancement, and the new threading model. Two: load balancing with the Apache 2.2 adaptor using mod_proxy_balancer. Three: the experimental Tomcat adapter, which you can use in different J2EE containers. Again, as my manager always reminds me, let me make it explicit:
it is not the same thing as bundling your application into a WAR and running the entire WebObjects application as a servlet — that's heavyweight. With our experimental Tomcat adapter, a filter lives in the J2EE container, and all your WebObjects application instances run outside the container.