Enterprise • 39:41
Learn how to use fail-over and load-balancing technology from Emic Networks with Apache or MySQL applications running on Mac OS X. The Emic Application Cluster (EAC) includes performance scalability, load balancing, fault tolerance, and continuous availability with fast fail-over, as well as centralized cluster management—all running on Xserve technology. Web application developers, IT managers, and MySQL administrators will find this session useful and informative.
Speaker: Thomas Loran
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Good morning, and welcome to session 613, Creating Application Clusters for Apache and MySQL. It is my pleasure to introduce to you Thomas Loran from Emic Networks, a partner of ours that we've been working with for the last six months in providing this cluster technology. So without further ado, Thomas, come on up.
Good morning, everybody. Let's make sure that this thing works right here. Like Chris said, my name's Thomas Loran. I'm the principal systems engineer for Emic Networks. We're a San Jose-based company specializing in Apache and MySQL clustering. We do have engineering offices overseas in Oulu and Helsinki, Finland, so we're kind of an international company, but we are located here in the Bay Area.
What I'm going to talk to you about is application clustering, the concepts, the technologies for, like I said, Apache and MySQL. I'm going to talk about why we use load balancing and why we use clustering. I'm going to talk about the various types of clustering. I'm also going to talk about the various types of load balancing and how load balancing works.
Now, when you're talking to an audience like this, there's a wide range of experiences. So some of the early slides might be a little bit slow for people that are experienced in load balancing or some of the other cluster technologies. But I think it's important to establish a baseline so people understand what I'm talking about when I say load balancing, and what I'm talking about for clustering, because clustering can mean different things to different people.
So, why do we use clustering and load balancing? Well, basically, there's two reasons. The first one is redundancy. People have got SLAs that they need to meet. You've got to have uptime for your internal and external customers. Redundancy is one of the primary reasons that people use clustering and load balancing. The second one is scalability. A lot of times, people have hit performance walls on a single box, and they need to put more users on a particular application.
Well, how do you do that? How do you make sure that your incoming client requests go to multiple servers? That's the load balancing part. And then, how do you make sure that the various servers have the same data consistently on the back side? And databases have a particular challenge for that.
So, what are we talking about here? What we're talking about is a multi-tiered approach. Now, this is something that most everybody understands. You've got your web clients, you've got your application servers in the middle, and you've got your MySQL database servers on the back end. Now, sometimes, these layers can be blurred, and you'll see that throughout the slides. Like, a lot of times, I'll be combining your middle tier and your back end tier together, so bear with me as I go through these slides; several times I'm going to have to ask you to keep that in mind.
When you're talking about Apache, you're talking about the client being your web browser going to your application servers, but then sometimes when I'm talking about MySQL, you need to understand that the client in that particular case is your web app going to the back end. But for space and the way that the slides are laid out, this is really the only time that I'm going to put the three-tier architecture up there. But that's kind of what we're talking about.
Like I said, Apache clustering between your browsers and your application servers. MySQL clustering between your application servers and your database back end. And sometimes you can combine, you know, in Emic's case in particular, for very low-end SMB customers, you have the option of combining your application servers with your database servers.
One of the things that's pretty important is understanding what clustering is. Clustering means various things to different people. You've got computational clustering, application clustering, and data store clustering. I'm going to kind of mention what each one of those are and then focus primarily on application clustering and data store clustering. Computational clustering: there are a lot of sessions here this week about computational clustering. Basically, it's designed to seamlessly integrate resources like CPUs into a computational grid, sometimes called high-performance clustering. A typical application would be video rendering, scientific and technical environments.
A good example, as you can read up there, is Virginia Tech. That's not the focus of this presentation. There are other presentations that talk about computational clustering and HPC. This is traditional application clustering. And traditionally, what happens is—and hopefully you guys can see the little tiny dot here—you've got two applications running on two different servers here. And traditionally, there's an application heartbeat between those two servers.
I'm going to go back here to this one. The reason I hesitated for a second is normally on my other slides I had IP addresses. What would happen is your top server up here would have one IP address, like a 192.168.1.2. The one down here would have a 192.168.1.3. And in the center, you would have like a virtual IP address that people would connect to those two, and it would be like a 192.168.1.1.
The application heartbeat determines which one of these two applications is running. Generally, but not always, only one of those applications is going at a particular time. What makes application clustering different is the fact that traditionally—and once again, there's always exceptions; I'm talking about generally here—generally, there's a single data store.
In this particular case, I've got an Xserve RAID on the back end. That means—and there are exceptions to this—but generally speaking, only one of those applications can be active at a time because you can't have two applications writing to your same disk or mounting the same volume at the same time. So that's what happens with application clustering.
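To make the heartbeat idea concrete, here is a minimal sketch in Python of what a two-node heartbeat monitor might look like, assuming a UDP heartbeat on port 9999 and an ifconfig alias as the takeover step. The addresses and the takeover command are illustrative only, not how any particular product does it.

```python
# Minimal sketch of an application heartbeat between two cluster peers.
# The port, virtual IP, and the takeover action are illustrative only.
import socket
import subprocess

VIRTUAL_IP = "192.168.1.1"   # address clients actually connect to
MISS_LIMIT = 3               # missed beats before we take over

def claim_virtual_ip():
    # Hypothetical takeover: bind the shared address to our interface.
    # On OS X this might be an `ifconfig en0 alias ...` call; adjust to taste.
    subprocess.run(["ifconfig", "en0", "alias", VIRTUAL_IP], check=False)

def monitor_peer():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 9999))
    sock.settimeout(1.0)     # one heartbeat expected per second
    missed = 0
    while missed < MISS_LIMIT:
        try:
            sock.recvfrom(64)
            missed = 0       # peer is alive, reset the counter
        except socket.timeout:
            missed += 1
    claim_virtual_ip()       # peer presumed dead: become active

if __name__ == "__main__":
    monitor_peer()
```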
Now, data store clustering is kind of a different concept. In data store, what we're talking about is the physical data, the disks themselves. Sometimes it happens at the file level, and this really has no awareness of the application. So there are various volume replicators, there are some file system replicators, things like that, that will take your data that's physically on the disk and replicate it back and forth. No awareness whatsoever of the application. This requires data replication. The challenge with this is that applications hold data in memory.
Now, what that means, why that's a problem is, frequently when you're using data store clustering, your data underneath an application can change physically on the disk, but your application is physically pulling from what's cached in memory. So you can have a case where things are out of sync.
So when you're running data store clustering, it's very, very important that you have hooks that are built into your application.
That's where a company like Emic comes in, because we have hooks where we can do the application clustering, the data store clustering, and make sure that the application is aware that the data physically on a disk has changed. And once again, I'll talk more about this. It's particularly important with databases where you have the concept of order.
So, data store replication, traditional file or disk replication. One of the ways that you do this is a master-slave architecture. What happens is you've got read-writes that come down to a primary server. Then what that server does is it farms out the requests, or replicates those, over to other servers that are slaves. Those other servers sometimes, but not always, can be used as read-only replicas, but you can't write to them, and the reason being is data consistency. That's why you've only got a single place where you do your writes, and then you've got read-only on top of that.
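Here is a small sketch of the routing rule the master-slave picture implies: writes always go to the single primary, reads can rotate across the read-only slaves. The host names and the crude SQL-verb check are placeholders, not part of any real product.

```python
# Sketch of master-slave request routing: all writes go to the single
# master, reads can be spread across read-only slaves. Host names are
# placeholders, not part of any real deployment.
import itertools

MASTER = "db-master.example.com"
SLAVES = ["db-slave1.example.com", "db-slave2.example.com"]
_slave_cycle = itertools.cycle(SLAVES)

WRITE_VERBS = ("INSERT", "UPDATE", "DELETE", "REPLACE", "CREATE", "ALTER", "DROP")

def pick_server(sql: str) -> str:
    """Return the host that should execute this statement."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb in WRITE_VERBS:
        return MASTER             # writes only ever hit the master
    return next(_slave_cycle)     # reads rotate across the replicas

if __name__ == "__main__":
    for stmt in ["SELECT * FROM orders", "INSERT INTO orders VALUES (1)", "SELECT 1"]:
        print(stmt, "->", pick_server(stmt))
```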
When you're talking about databases, the concept that follows from that is lazy database replication. Lazy database replication does use the same master-slave architecture, and it uses log shipping via a peer-to-peer connection to maintain slave databases. One of the big advantages of that, of course, is it's easy to set up.
It's able to survive relatively high latency for replication, and the slaves are read-only. Now, the cons, of course, are the failover's not transparent for applications. It's got limited scalability. Data can be lost if the master becomes unavailable for extended periods of time. Those are some of the cons of that. Data store clustering, which is what we're talking about here, is actually masterless; peer-to-peer is not the correct term.
Masterless is the correct term for this. So what happens is each one of your nodes is able to handle both a read-write, and each one of your nodes takes your data and transports it to all the other ones. Now, when you're doing this, it's very, very important that you have some kind of a consistency protocol so that each one of your nodes has exactly the same data, because if you don't, what'll happen is one of your nodes will read one set of data that hasn't been transported over to the other side, and you've got inconsistent views. When you're dealing with a database, that could be very bad.
So, active database replication. Once again, this is not peer-to-peer, this is masterless. And what happens is, all database requests are sent to all nodes in a cluster. It ensures that the database requests are executed in the correct order. And I'll talk more about the ordering problem. The advantage here is the fail-over is absolutely seamless for the application. You get a single IP address where the applications go.
The nodes are consistent at all times. The cons are it's a little bit more complex to set up, and you need an additional network. At least in Emic's case, what you need is an additional network on the backside to handle replication, but I'll show that in another slide. Now, what I'm going to talk about is load balancing.
Now, load balancing, the problem that we're going to try to solve is, how do you spread the load between the various servers? Well, so you've got, in this particular case, what I've got here is some web clients that are trying to go to a generic cluster over here.
So the first request comes on in, it goes to one server, and you want to make sure that the next requests get distributed and go over to the other servers. Well, how do you do that? Well, make sure that...
[Transcript missing]
DNS round robin, which I sort of talked about before, hardware load balancing, software load balancing. So in a DNS round robin scenario, which is very, very common, what happens is—and this could be very elementary for some people here, but just bear with me so we have a level set here.
What happens is a web client does a request for, let's say, www.mysite.com. The request goes to a DNS server. The DNS server responds back with an address, in this case, 17.100.100.5. Then it takes that physical IP address, that 17.100.100.5, and ships it over to one of the servers.
The advantage for DNS round robin is it's free. It's easy. Let me back up here and talk a little bit more about this slide. So what would happen is the next request would do the very same thing. It would go up to www.mywebsite.com. It would get a different IP address. Instead of 17.100.100.5, it would get 17.100.100.4, which would take it over to a different server.
So, what happens is, in your DNS zone records, for www.mywebsite.com you would have four, five, six, seven different physical IP addresses. The advantage is, it's free, it's very easy to set up, it's very easy to administer. However, like I said before, the con is there's no awareness of the servers being up or down, and there's no inherent ability to separate out services.
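To illustrate that con, here is a toy round-robin resolver in Python: the name maps to several A records and each lookup blindly hands back the next one, with no idea whether that server is up. The zone contents are just the example addresses from the slide.

```python
# Toy DNS-round-robin resolver: the name maps to several A records and
# each lookup hands back the next one, with no idea whether that server
# is actually up. Addresses mirror the ones used on the slide.
import itertools

ZONE = {
    "www.mywebsite.com": ["17.100.100.4", "17.100.100.5", "17.100.100.6"],
}
_cycles = {name: itertools.cycle(addrs) for name, addrs in ZONE.items()}

def resolve(name: str) -> str:
    return next(_cycles[name])   # blind rotation: dead servers stay in the list

if __name__ == "__main__":
    for _ in range(4):
        print(resolve("www.mywebsite.com"))
```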
While you can use different names for different services, it has no way to say that an HTTP request goes here, an HTTPS request goes there, a database request goes here. So, it can't handle any of that kind of stuff. It's not very intelligent whatsoever. So, hardware load balancing requires a network device, not necessarily a network switch. They're very specialized devices to do this. And what happens is, you get an address that comes in, in this case, 17—say, 216.17.1.
And then, the load balancer applies some sort of a policy, and I'll talk about what the policies are on this. There are various parameters that the hardware load balancer will use to determine which one of these servers it goes to. It translates that address to a private address of one of these. It could go to 10.0.0.1, 10.0.0.2, 10.0.0.3, etc. Like I said, the first one comes in, it gets translated, goes to 10.0.0.1. And then, the second one comes in, it gets translated, goes to 10.0.0.2. For your next request, 10.0.0.3.
Let me back up here and talk about what some of the types of policies are. And when I say policies, typical ways that they do it are things like round robin. Now, it's different than DNS round robin because the hardware load balancer physically knows what server is up. And if a server is down, it simply drops it out of the rotation. One of the other policies is weighted round robin.
Here's a good example. Let's say you've got a new dual G5 running an application over here, and you've got a low-end Linux box running a Pentium 166 over here. What you can do is weight the G5 to be more preferable than the low-end Pentium box.
So, let's say for every one request that the Pentium box gets, your G5 gets five. That's one of the weighted ones. You can do things like simple first in, first out, or latency. But what you can't do, and where Emic comes in, is you can't query the server for things like server CPU utilization and parameters that are specific to an application that's running on that server. That's one of the policies that Emic has that's an advantage over a hardware load balancer.
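Here is what that weighted round robin looks like as a few lines of Python, using the five-to-one weighting from the G5-versus-Pentium example. The server names and weights are only illustrative.

```python
# Weighted round robin as described above: for every one request the
# low-end Pentium box gets, the dual G5 gets five. Names and weights are
# just the example from the talk.
import itertools

WEIGHTS = {"dual-g5": 5, "pentium-166": 1}

def weighted_rotation(weights):
    # Expand each server into `weight` slots and cycle through them.
    slots = [name for name, w in weights.items() for _ in range(w)]
    return itertools.cycle(slots)

if __name__ == "__main__":
    rotation = weighted_rotation(WEIGHTS)
    picks = [next(rotation) for _ in range(12)]
    print(picks)   # five G5 picks for every Pentium pick
```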
Like I said, they're very, very powerful. They're flexible. You can separate out the services, and they can handle special cases. Well, what do I mean by special cases? And I'm not going to go into a lot of detail about what special cases are, but they're things like SSL encryption, things like persistence. And what I mean by persistence is sometimes when you're running a web application, a financial application, you want to make sure that the exact same tuple or exact same client on the same port goes to exactly that same server before that gets committed.
That's called persistence. One of the other things that hardware load balancers can do is they can do things called global load balancing, where you may have a cluster in New York, and you may have a cluster in Los Angeles. How do you make sure that somebody in Chicago goes to the right one? Those are global load balancing things. These are things that hardware load balancers kind of specialize on. They're usually deployed in pairs, and they're very, very expensive.
They're $15,000 to $50,000 each, so $30,000 to $100,000 to get them set up. They're pretty complex to set up and administer. There's a lot of policies, rules, and things like that. And like I said, they don't have, as part of their policies, they don't have any direct awareness of what your application load is.
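As a rough illustration of the persistence idea described above, and not how any particular load balancer implements it, here is a sketch that hashes the client tuple so the same client and port always land on the same back-end server. The back-end addresses are placeholders.

```python
# Sketch of load-balancer persistence ("stickiness"): the same client
# tuple always lands on the same back-end server so an in-flight
# transaction isn't split across nodes. Purely illustrative.
import hashlib

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def sticky_backend(client_ip: str, client_port: int) -> str:
    key = f"{client_ip}:{client_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKENDS)
    return BACKENDS[index]    # same tuple -> same server, every time

if __name__ == "__main__":
    print(sticky_backend("203.0.113.7", 51515))
    print(sticky_backend("203.0.113.7", 51515))   # identical result
```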
So, software load balancing. An example is the Emic Application Cluster. What we do is we make sure that each one of our nodes sees each one of our sessions, and then, using our policies, we make sure that one of those nodes accepts it and the other two drop it. And in a second, I will go into that, and I'll explain how we drop those and how we actually run our flow. This is on the front, or the public network, and we're talking about the load balancing.
Later on in the presentation, I'm going to talk specifically about Emic and how we do—after we've load balanced that, how do we do total ordered replication of the database on the back end? Like I said, the next one would go to the next server. It gets pretty redundant. You guys kind of get the idea here.
So, the advantages for Emic application clusters, or software load balancing in our particular case: we're designed for HTTP and MySQL. It's policy based, based on server application load, and it's very easy to set up and easier to administer. Like I said, on the con side, it does require a separate network on the backside. It can be a virtual network, it can be a physical one. And also, it's designed to support only HTTP and MySQL, and it doesn't handle the special cases. Currently, in this version of code right now that we've got out, it doesn't handle the special cases.
So, let me switch gears. That was kind of a high-level overview of what load balancing is in general and what replication is in general. Very broad brushes. So, let me just move on from that. Let's talk about Emic application clustering. It's the same technology framework. And this is where I condense it. I talked before about a three-tier architecture.
Well, this will always be your clients here. And this is a cluster, and I deliberately left it out whether that's going to be like an Apache cluster or whether that's going to be a MySQL cluster. So, several times in this presentation, I'm going to talk about these web clients here. And remember that I'll try to make a specific reference as to whether that's a web browser or whether that's an application acting as your client that's talking to the database on the back end.
So, what Emic does is we apply Apache load balancing on the front end, then we can do MySQL load balancing, and then we can do MySQL replication. And you notice one of the things I didn't put up here was actually Apache replication. That's currently not in this particular product, because when you're talking about these kinds of websites, generally speaking, you're talking about static content for a lot of your web data. Your dynamic content is generally in your database.
It gets replicated. So, since your web content doesn't change that often, there are other mechanisms that you could use to replicate it between your various servers. But remember, to have a cluster, you've got to have load balancing, and you've got to make sure that your content is consistent. It's databases that change constantly, and databases have got the total order problem, and that's what we're going to talk about.
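As an aside on the static-content point above, one common mechanism for keeping mostly-static web content in sync is simply pushing the document root to the other servers, for example with rsync over SSH. The sketch below is just that generic approach; the paths and host names are placeholders, and this is not part of the Emic product.

```python
# One example of the "other mechanisms" mentioned above for keeping
# mostly-static web content in sync: push the document root to each
# peer with rsync over SSH. Paths and host names are placeholders.
import subprocess

DOCROOT = "/Library/WebServer/Documents/"
PEERS = ["web2.example.com", "web3.example.com"]

def push_static_content():
    for peer in PEERS:
        subprocess.run(
            ["rsync", "-az", "--delete", DOCROOT, f"{peer}:{DOCROOT}"],
            check=True,   # stop if a peer can't be updated
        )

if __name__ == "__main__":
    push_static_content()
```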
So, Emic Application Cluster: you get dynamic load balancing, you get active replication as opposed to lazy replication, and you get fault detection and isolation. So, let's take a walkthrough here. What we do is we configure our switches, and we're changing the way that we do this right now between various versions, but our interfaces and switches are configured to allow all packets from all clients to arrive on all nodes. So basically, everything gets flooded to each one of these servers.
Generally speaking, there will be a SYN packet at the front. And the SYN packet has got to match our policy. So the first server is set up so that it will accept SYN packet one, the second one is set up so that it accepts two, and the third one is set up so that it accepts three.
What happens is the other SYN packets are dropped so that we only have one session going. The initial SYN setup goes to all the servers, and then the other sessions are dropped. That's how it gets flooded, and then based on that policy, we decide whether we're going to accept that particular flow or not accept that flow.
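Here is a deliberately simplified model of that flood-and-filter idea: every node sees every incoming SYN, and a shared deterministic rule tells exactly one node to accept the flow while the others drop it. The hash rule below is an illustration only; the real policy, as described elsewhere in the talk, also takes server load into account.

```python
# Simplified model of the flood-and-filter idea: every node sees every
# incoming SYN, and a deterministic rule tells exactly one node to accept
# the flow while the others silently drop it. The real policy also weighs
# server load; this hash rule is only an illustration.
import hashlib

NODE_COUNT = 3   # nodes in the cluster
MY_NODE_ID = 0   # this node's position (0, 1 or 2)

def should_accept(src_ip: str, src_port: int, dst_port: int) -> bool:
    key = f"{src_ip}:{src_port}:{dst_port}".encode()
    owner = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NODE_COUNT
    return owner == MY_NODE_ID   # exactly one node returns True per flow

if __name__ == "__main__":
    print(should_accept("203.0.113.7", 51515, 3306))
```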
Let me give you an example here of the kind of scalability that we're talking about. The first slide right here is an example of a standalone MySQL server. Down here, these are transactions per second. These are fairly typical of what we're seeing over some low-end three-node clusters here, but they're relative depending on things like your network, your hardware, your application. On the back side here, this is, let's say, server units. So this is like half a server, one server, two server units.
You see when we put the Emic Application Cluster on a single MySQL node and we turn it on, we really don't take much of a performance hit whatsoever, and obviously we don't get any scaling because there's only one server. When we go to two servers here, you can see that we don't quite double the load, but we definitely get up there. And then when we go to three servers, you see that the transactions per second go up. It doesn't go up exactly one for one, but there is a near-linear relationship as we go up.
So, it's managed off of a Java application; it's Aqua-fied. This is an example of our client, and later on in the demo, I'm going to show you this. I'm going to connect up to a cluster. I'm going to bring a couple nodes online.
Each one of the nodes is represented by a particular glyph or an icon that has various colors for various meanings. It can be active, where it handles all our requests. It can be offline, where it does not handle any requests whatsoever—in other words, totally dead. It can be standby. Standby is kind of interesting, because what a standby server is doing is replicating all your database changes on the back end, but it's not servicing any read requests. So it can come online very, very quickly.
It doesn't have to synchronize any kind of a database, so it's ready to go. Transitional, it's sort of moving between the states. You'll see this glyph come up several times when you're actually moving between a standby and an active, or an offline and an active status. And maintenance mode is where you take the database down for maintenance.
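For reference, here are the five node states just described, written out as a small Python enum. The names and comments paraphrase the talk and are not Emic's actual API.

```python
# The five node states described above, as a small enum. The semantics in
# the comments paraphrase the talk; the names are not Emic's actual API.
from enum import Enum, auto

class NodeState(Enum):
    ACTIVE = auto()        # servicing reads and writes
    OFFLINE = auto()       # totally dead, handling nothing
    STANDBY = auto()       # replicating writes but serving no reads,
                           # so it can come online almost instantly
    TRANSITIONAL = auto()  # moving between states (e.g. standby -> active)
    MAINTENANCE = auto()   # database taken down deliberately for upkeep

if __name__ == "__main__":
    print([state.name for state in NodeState])
```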
So I'm going to switch over to a cluster here. So what I've got here is I've already hooked up to an application cluster. This is prototype software we're running here for an OS X port. And I should say that we're in version 2.0. We're built for Linux. We've got production customers running version 2.0 on Linux. But several months ago, we started on an OS X port.
So what I'm showing now is the OS X port of our product, which is prototype software. And I'll talk about when we're looking at releasing it. But it's not a new build, because that code is already out there. It's just a conversion over to a different OS.
What we're seeing here is Node 3 up here at the very top is running in an active status. Nodes 1 and 2 are in a cold status. And we've set up a Linux server that's running a script that's banging this for load. So if I remember the script right, it's set up with about 350 clients connecting to it. We've got a lot of artificial things in the script to generate 100 percent load here. Now what I'm going to do is I'm going to control-click on our first server here, and I'm going to move it from an offline status.
[Transcript missing]
And as it starts to move, what's happening now is the first server is sending its load over to the second one. It's actually copying the database physically, so you don't have to have that one set up. It's physically copying that database over, and now the second one is online.
Now, if you take a look at the chart here, it took a little bit of a dip, and then our CPU utilization should go right back up, but our transactions per second—let me move over here so we're not checking on load, we're checking on our request rate, our transactions per second. Our transactions per second went up here. Now, you notice that this doesn't have the same scalability that I showed in the other slide, because we're running prototype software, and our engineers are still working on it.
So, we're kind of tweaking the code. I just wanted to show that it does increase. It's not as large an increase as we have on our Linux version, but while we're still running 100% load on two servers now, because that's what our script is pushing, we're actually getting more throughput. These two servers—one's at 96%, one's at 98%—that's about as high as they'll get.
They usually don't go to 100%, but sometimes it takes a little while to level out the load because of the number of transactions. Some have got a timeout, and the load balancer shifts. Sometimes it'll take three to five minutes before they level out at the same utilization. Now what I'm going to do is I'm going to control-click on our third node.
I'm going to bring that online. Now, what's interesting here, and this is why we always recommend deploying these in threes, is that one of the servers is going to go into a standby mode—a special kind of standby mode—where it's not going to be servicing requests. It's going to freeze the database as a snapshot, copy it over to the new node that's coming online, and cache all the requests that are coming in.
Then, when both of them have the exact same copy of the database, what it's going to do is take the cached information that it was transmitting, verify that everything's consistent, and bring all three of them up at the same time. So, it's better always to have three than it is two, so that you can always have one running as you're doing this, because one does move into a maintenance mode or into standby mode. So, as you can see, what's happening is both of these have gone into a transition state. One of them is still active.
Now, this guy's gone offline, and while the top guy is still servicing all the requests, the guy on the right has now transferred his database over to the one on the left. He's gotten caught up, and then he takes the information that's cached, and he brings those together. And down here, this was running—the first line was a single box. The second one was two boxes, and now that's increased up to three.
So, while we're still running, our load—let me go back here and check. Yeah. Our load overall for the cluster is still running 97%. We've got the Linux box set up to just blast the box with as much as it can take. You'll see that our requests per second keep going up.
Don't pay that much attention to what the delta is between the steps, because that's code optimization that's specific to the OS X platform that we're still working on. So, that basically is the demo that I wanted to show you, and I'll go back to the presentation.
It's kind of interesting how one of them will go offline and then copy the database over while the first one is still servicing all your requests. Okay, can we cut back over to the slides? Done with the demo? All right, thanks. So, let's talk about how EAC does the active replication. And I told you that one of the problems that we had was a two-phase commit.
I don't think that was actually in the slide. When people traditionally did active replication, they had a concept of a two-phase commit. It means your database write has got to be committed on one server, and then it's got to be committed on the other server and acknowledged, before it actually becomes, let's say, a final commit. Well, that's very, very slow.
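To see where that slowness comes from, here is a toy two-phase commit coordinator: every write pays a prepare round trip to every node and then a commit round trip before the client gets an answer. The network calls are faked with a sleep; it is only meant to show the per-write coordination cost, not any real protocol implementation.

```python
# Toy two-phase commit coordinator, just to show where the latency comes
# from: every write pays a prepare round trip to every node, then a commit
# round trip, before the client gets an answer. Network calls are faked
# with a sleep standing in for one round-trip time.
import time

NODES = ["node1", "node2", "node3"]
ROUND_TRIP = 0.002   # pretend 2 ms per network round trip

def send(node: str, message: str) -> str:
    time.sleep(ROUND_TRIP)   # stand-in for the real network hop
    return "ack"

def two_phase_commit(write: str) -> bool:
    # Phase 1: every node must promise it can apply the write.
    if not all(send(n, f"PREPARE {write}") == "ack" for n in NODES):
        return False
    # Phase 2: only then is the write actually committed everywhere.
    return all(send(n, f"COMMIT {write}") == "ack" for n in NODES)

if __name__ == "__main__":
    start = time.time()
    two_phase_commit("INSERT INTO orders VALUES (1)")
    print(f"one write took {time.time() - start:.3f}s of pure coordination")
```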
So, what we wanted to do—the whole essence of why Emic was created and the problem that we were trying to solve was, how do we avoid this two-phase commit and still have a lot of speed? Well, we came up with our concept of active replication, and I'll explain how we do that.
So, what we do is we've got a group communication protocol, and we use a totally ordered, reliable multicast protocol. It's a token-based passing protocol. A lot of people joke with me and say, token ring. No, no, no. It's not token ring, okay? It's a token passing scheme that we use, and we use reliable multicast. And, of course, if you're a network guy, you say, well, wait a second. Reliable multicast is an oxymoron. Multicast isn't reliable.
Well, in our case it is, because we use the token scheme: each one of our requests gets a sequence number that's assigned to that particular request, and then we're able to keep everything in order. The token gives a particular node permission to transmit, and it doesn't wait for the request to actually execute on each one of the nodes. What we're trying to do is make sure that it gets queued in the proper order. So, that's what we're doing with a sequence number for each request.
We also have a mechanism for handling partitioned clusters. And what I mean by partitioned clusters: let's say I've got five nodes in a cluster. There's some kind of a partition on the back end, and what happens is they're still connected up on the front end, so it's being load balanced across the front, but it's partitioned on the back.
So, you've got two nodes over here in this group that think that they're servicing all the requests, and you've got three nodes over here determining that they're still servicing all the requests. Well, one of those two groups can't be right, because then you've got a partitioned cluster. We have a mechanism for handling that and shutting down the nodes that aren't in the quorum. The quorum, or the majority, continues to operate, and the other part actually drops off.
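The quorum rule itself is simple; here is a sketch of it for the five-node example, where the side that still sees a strict majority keeps operating and the minority shuts itself down.

```python
# Sketch of the quorum rule for a partitioned cluster: only the side that
# still sees a majority of the configured nodes keeps serving; the minority
# shuts itself down rather than diverge. Numbers match the 5-node example.
CLUSTER_SIZE = 5

def has_quorum(reachable_nodes: int) -> bool:
    # Strict majority: 3 of 5 wins, 2 of 5 must stop.
    return reachable_nodes > CLUSTER_SIZE // 2

if __name__ == "__main__":
    print(has_quorum(3))   # True  - this partition keeps operating
    print(has_quorum(2))   # False - this partition drops off
```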
The other thing is we're actually integrated with the MySQL lock manager, so the problem that we're trying to solve is how do we have total ordered consistency? Here's our front network. We're doing our standard load balancing that we had talked about before. We receive the tokens on a private network. We assign an order to each request, and then we multicast them out on the back side. You'll notice here that we've got a token.
Right here on the top, this just gives him permission to transmit, and then each one of your requests gets assigned a particular sequence number. So the sequence number that each one of these nodes gets is one, for example. Then we do our next operation. The load balancer goes over, let's say, to the middle server. He does a request. That request is number two, for example.
He gets the token, so he has permission to transmit that, and he multicasts that out. And then everybody says, well, the last one I saw was one. I'm getting a two, so I know that everything's in order. It's got to be sequential. Once again, the load balancer changes over to the third node. It's got a request that comes in, a database replication request. It assigns it number three.
It gets the token. It multicasts that out to each one of the nodes. So, as you can see, there's no two-phase commit here, and that's really what we're trying to do is solve that two-phase commit problem. Now, for your programmers out there, this makes a lot of sense.
Here's the problem that we're trying to solve. With databases, it makes a big difference the order that you receive your particular request in, because as you're well aware, you know, three plus four divided by five is a lot different than three divided by four plus five. So, with databases, you've got to make sure that the order is exactly consistent. Now, there are a couple of things that go into making up that order.
You've got network latency and things like that, but on each server, each one of your threads could be operating at a different rate. There could be different CPU load on each one of the servers. So, you need to make sure, even though the request gets to that particular server, how do you ensure that it gets ordered or executed in exactly the right order on different servers? This is a very challenging problem. And like I said, what we do is we assign sequence numbers to the requests, and we queue them for execution in exactly the same order. And as long as they're queued, then we can free up and go do the next operation.
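Here is a simplified model of the scheme just described, and not Emic's actual code: a single global sequence number travels with the token, every node queues incoming writes by that number, and each node applies them strictly in order, with no per-write acknowledgement.

```python
# Simplified model of token-based, totally ordered replication: only the
# node holding the token may assign the next global sequence number and
# multicast the write; every node queues writes by that number and applies
# them strictly in order. No per-write two-phase commit is needed.
import heapq

class Node:
    def __init__(self, name):
        self.name = name
        self.next_expected = 1
        self.pending = []       # min-heap of (seq, statement)
        self.applied = []

    def deliver(self, seq, statement):
        heapq.heappush(self.pending, (seq, statement))
        # Apply everything that is now contiguous with what we've seen.
        while self.pending and self.pending[0][0] == self.next_expected:
            _, stmt = heapq.heappop(self.pending)
            self.applied.append(stmt)
            self.next_expected += 1

class Cluster:
    def __init__(self, names):
        self.nodes = [Node(n) for n in names]
        self.next_seq = 1       # carried around with the token

    def replicate(self, statement):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        for node in self.nodes:     # "multicast" to every node
            node.deliver(seq, statement)

if __name__ == "__main__":
    cluster = Cluster(["node1", "node2", "node3"])
    cluster.replicate("UPDATE accounts SET balance = balance + 4")
    cluster.replicate("UPDATE accounts SET balance = balance / 5")
    print(cluster.nodes[2].applied)   # same order on every node
```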
So, I talked about fault detection and isolation. One of the challenges that we've got, on our back end we've got a secondary network that can be either real or virtual. What happens is we use a token loss timeout and a probe, which is ping-like. Ping is the best way to describe it.
Let's say we have a failure on the network here. We detect that the token has timed out. And by the way, the target that we're looking at to do this is 10 milliseconds. So we're not talking, you know, 10, 15, 20 seconds or a minute or two. 10 milliseconds is what we're looking at. We get a token loss, and then what we do is we'll send a probe down.
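A rough sketch of that detection logic follows, under the assumption that the probe is a plain TCP connect rather than whatever the product actually sends: if the token has not arrived within the timeout, probe the suspect node and drop it from the ring only if the probe also fails.

```python
# Sketch of the fault-detection idea: if the token hasn't come around
# within the timeout, probe the suspect node with something ping-like and
# drop it from the ring if the probe fails. Timeout mirrors the 10 ms
# target mentioned in the talk; the probe here is a plain TCP connect.
import socket
import time

TOKEN_TIMEOUT = 0.010   # 10 milliseconds

def probe(host: str, port: int = 3306, wait: float = 0.010) -> bool:
    """Ping-like reachability check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=wait):
            return True
    except OSError:
        return False

def check_token(last_token_seen: float, suspect_host: str) -> str:
    if time.monotonic() - last_token_seen <= TOKEN_TIMEOUT:
        return "ok"
    if probe(suspect_host):
        return "slow"    # node is alive, token was just delayed
    return "evict"       # node is unreachable: drop it from the ring

if __name__ == "__main__":
    print(check_token(time.monotonic() - 0.05, "192.168.2.3"))
```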
[Transcript missing]
So, kind of in summary there, we do MySQL and Apache clusters, not computational clusters. We're the hybrid approach where we do data store and application clustering, and we bring those two together. Our focus is accurate replication, where you need very, very tight replication between your various servers, where you need to scale your databases very solidly, very low total cost of operation. The other thing that I'm kind of looking to you guys for is our release candidate program.
Like I said, we're on version 2.0 on other platforms, but we're looking for beta testers and release candidate testers for our code to run on OS X. We're looking at releasing that sometime late Q3, early Q4, with production scheduled sometime around the end of the year or the early part of next year.
The product managers get very upset when I actually, you know, nail them down with dates. You guys are familiar with that. I mean, there could be a bug that, you know, somebody determines during the actual release candidate program to slip those dates. But anyway, we're looking at Q4 and early Q1 2005 for production code on OS X. So, if you're interested in that, somewhere up here it had my email address on there or you can go to our website at www.emicnetworks.com or my email address is thomas.loran at applicationcluster.com.
One of the things about having an early stage company is sometimes we really can't decide on what our name is. So my email address is actually thomas.loran, L-O-R-A-N, at applicationcluster.com. So if you guys want to be part of the release candidate program and test this as we're running our production release, I'd be happy to talk to you about getting this set up in your environment. So with that, I am going to work with Chris, who's going to handle any kind of Apple questions. I wanted to see if there are any questions from the audience here on MySQL and Apache clustering.