Enterprise • 1:18:35
View this session to learn how to combine Xserve RAID into configurations that meet demanding new storage needs with a high degree of reliability and redundancy. For example, we discuss best practices for building remote data replication and disk-to-disk-to-tape workflows. This session is for network planners, IT project managers, and network administrators.
Speakers: Alex Grossman, Ryan Klein, Stephen Terlizzi
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Please welcome our first presenter of the morning, Director of Hardware Storage, Alex Grossman. Good morning, everyone. Thank you very much for getting up early on a Friday morning after a really great night last night. I appreciate everybody being able to do that. We're going to start this morning. We've got a lot to cover, and hopefully it's going to be very useful to everyone.
I've got a pretty full agenda, and I've got some great guest speakers to talk to us about some real-world stuff that we're going to talk about. But before we do that, I want to go through some calisthenics in the morning. So we're going to use our right hand and our left hand a little bit.
So to get an idea, just to help us, give me an idea of how many people are familiar with the terms that I'm just going to relate here. First one is DAS, direct attached storage. How many people are deploying direct attached storage today? Okay, not that many. How many people are familiar with the term NAS, or network attached storage? And how many people are deploying network attached storage? Oh, a lot of people.
And how about the last one is SAN, storage area networks? And how many people are deploying those? Okay. Okay, that's great. So for the other hand, we've got to use the other hand for this one. How many people are familiar with fiber channel networking? Okay, great. And how about iSCSI? Not too many? OK, great.
Fantastic. So let's get started. Let's talk a little bit about the agenda and what we're going to cover. First thing I want to do is-- oh, last question is, how many people today have Xserve RAIDs? Cool. Okay. So first thing I'm going to do is give you a little quick refresher on the Xserve RAID, talk about what's unique about it.
A year or so ago, Apple was probably the only Tier 1 storage vendor out there with an ATA-based RAID system. We're seeing a lot more ATA out there now, but not all of them are created equal, and we think we have something unique in the Xserve RAID. Then we're going to talk just a little bit about RAID basics and where we see that whole market going and what's going on there, and then a lot about storage planning and what it's really like to plan a deployment in storage.
And then we have one of the biggest areas that we get questions in, and this is constant from people, is they really don't understand, what do I do in fiber channel infrastructure? What does it really look like? What should it look like? What's the future look like? You know, the costs have been really high, and we want to take a stab at what it's really going to look like. So we're really happy to have... Qlogic's SAN architect with us, Ryan Klein, so we're going to have him come up and talk about that.
And then we have a customer deployment, a real life example of where we actually had Apple and some partners get together and do a great deployment for one of our customers. Then we're going to wrap it up a little bit and talk a little bit about best practices and some Q&A.
Sound like fun? Good, so let's get started. First thing is let's talk about Xserve RAID and what it really is. So Xserve RAID is really a storage base. It's a storage building block. It's a high availability design in a 3U enclosure, basically. It can scale up to 3.5 terabytes, and it can scale up to nearly 400 megabytes per second sustained throughput. So in both read and write performance it's very high, and very competitive with systems that cost a lot more.
The other thing about Xserve RAID is that it's extremely versatile. So it can be used in a DAS deployment, and we'll call that our SCSI replacement strategy. It can be used with combining it with an Xserve G5 or an Xserve or even a Power Mac and use it in a NAS type of configuration. And it has capabilities in it that allow it to be used as standalone or combined with other systems to build great SAN infrastructure.
Probably the most important thing about Xserve RAID is the high availability design. And what I mean by that is that everything in the box is swappable. The design really borrows from systems that traditionally cost a lot more money, where the components are either completely redundant or easily swappable. And so there's usually no downtime, and if there ever is, it's usually extremely minimal.
And that comes from just looking at the front of the system where you see 14 hard drives that can easily be unplugged from the system. And in the back of the system, what you see is you see a clean design, an Apple design from the ground up. We didn't start with existing designs or use pieces or parts that were out there. We actually started clean to design a system that had high availability built in.
And we have redundant power, redundant cooling. We have hot swappable components throughout, warm swappable RAID controllers. We have a passive midplane for the data. And so what you end up with if you pull all the components out is you end up with a metal box with a midplane or a board in the middle that does passive signaling through it. So very easy to change and update the system as you need to. So this is one of the great points of the Xserve RAID.
Now, beyond the high availability design, one of the things that really sets Xserve RAID apart from a lot of the other systems out there is the management. We chose to go with a Java-based management tool for Xserve RAID, and it's something that we've updated constantly since we introduced the Xserve RAID. In fact, what you're going to see, and one of the things I can tell you about now, is you're going to see an update to the RAID admin utility that's actually happening next week.
And I was very fortunate last year to be able to show you a significant update. This one's a little smaller, but it's a performance update. So we're constantly updating the performance and scalability of the Xserve RAID. The beauty of the RAID admin utility is it allows you to do monitoring of one or even hundreds of systems from a single screen. And it also allows you to manage these systems very easily and do all the management tasks remotely. And that was mainly driven by people like me who just are lazy and, you know, don't want to get up early in the morning.
They want to find out what's going on in their systems, and they just want to do it from home. And the other thing is we've added more and more SAN capabilities and high availability features, like LUN masking and mapping, and things like being able to rebuild parity on the fly.
And we're doing a lot more of that. And when the new release comes out, you're going to see a lot more of those capabilities just built in. So this is really a tool that is built for every platform, yet it looks stunning on the Mac platform. It's really a phenomenal tool.
The other thing about Xserve RAID, and this was a request that we had here last year from just about everybody who attended, was that we had people deploying Xserve RAID on a number of platforms beyond the Mac, but we hadn't gone through the actual certification and compatibility. Yet it was kind of funny that our customers really drove us there. In fact, there was a website that went up last year, and it's still extremely active. It's called alienraid.org, and it's really exciting.
And alienraid.org, they were really the first people, it was a group of people who actually said, did you know that the Xserve RAID works on Windows, and that it works on Solaris, and it works on AIX? And they really went through the step-by-step features of actually showing you how you would install it. And most of those were, plug the cables in, turn it on. So it was really pretty simple overall. But what we did over the last year is we asked our customers, who were the infrastructure partners that they really wanted on Xserve RAID? And we're open.
If you have suggestions as to who else you'd like to see, we're very open to do that. And we chose what we felt were the best of the best. And those are people like Qlogic and Veritas and Emulex and Brocade and Candera, and people that you really, really care about. And then also the traditional Apple vendors, like ATTO Technology, where we really felt they would add a lot to the platform. But of course, we had to look at popular operating systems as well.
Now, I think everybody in here realizes that we work very well on Mac OS X, but we're also certified on Red Hat, their Advanced Server 2.1 and 3, also on Yellow Dog Linux, which actually runs on our platforms, and Novell NetWare 5, 6, and 6.5, Windows 2000, surprisingly, Windows 2003, and Windows XP Professional. So all those certifications have been done, and we're continuing to certify Xserve RAID on more.
So this way you have guaranteed compatibility for the system, not only in all-Mac installations, but also in heterogeneous installations. And I'd just like to get a show of hands. How many people have a heterogeneous installation of the Xserve RAID? Wow, that's a lot more than I've seen in a long time, so that's great.
So let's just go over a quick RAID basics. So everybody in here should be familiar with multiple levels of RAID. And I'll tell you, these were the levels that were initially defined by Randy Katz at Berkeley in 1987, and these are still what I call the pure RAID levels.
These can be combined with other RAID levels in here, and there are some fancy RAID levels that people are looking at today. But for the most part, these are the RAID levels that everybody does. And what is a RAID level? Well, obviously when you start lowest to highest in number, you actually increase in redundancy or availability and performance. If you start with RAID 0, it's striping.
Not really a true RAID level, but a lot of people have, or still are deploying striping today for speed, because it's one of the easiest ways to take a number, a number of disks, combine them together, and get performance out of them. And then probably the most popular RAID level that's used out there is mirroring. This is basically just taking either one hard drive or a group of hard drives and mirroring them, keeping the same data set between them.
So if one were to fail, you'd have another copy of the data. This is the photocopy way of doing things. It's not very efficient. It's kind of wasteful, right? If I have a copy of something, I make a photocopy, I wasted two pieces of paper. It's the same with mirroring.
If I have one hard drive, I mirror it, I used 100% of the second hard drive's space for mirroring. And then as you move up, really you get to what I would consider more efficient RAID levels. And the one that we focus on a lot, and the one we've optimized for, is RAID level five.
And the beauty of RAID level five is that it's a distributed parity scheme. And what we mean by that is we actually compute a piece of parity data on every one of the hard drives that is derived from the data on the rest of the hard drives.
And essentially what that means is that if you have a hard drive fail, we can instantly, virtually on the fly, recreate that data from the remaining hard drives. So you'd have to lose a large number of hard drives before you'd actually have a failure or lose data. And the problem with RAID five in the past has been that the performance has not been consistent.
So the read performance is very similar to that of RAID zero. It's like striping a bunch of drives. The performance is better than a single hard drive. But the write performance, especially in random writes, hasn't been. It's slower. So people who are doing things like database or online transaction processing, anybody do that type of work on their systems? So if you're doing that, you knew that RAID five was a bad way to go in the past. And also, if you're doing things like video, anybody doing video here, streaming video? If you were streaming video, RAID five was also terrible.
People went to things like RAID three. Or in most cases, they were just doing RAID zero striping. And with Xserve RAID, we really looked at that. And we put our team together and built some really sophisticated algorithms and some cache and caching schemes that really make RAID five faster in most cases than any of the other RAID levels, including RAID zero. So our performance is really quite good in RAID five. And the protection is really good.
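The distributed parity idea described above can be sketched in a few lines: with XOR parity, any one lost drive's data is the XOR of all the surviving blocks. This is a simplified illustration, not the Xserve RAID's actual implementation; real RAID 5 rotates parity across the drives and works on large stripes rather than tiny byte strings.

```python
from functools import reduce

def parity(blocks):
    """XOR same-sized blocks together, byte by byte, to form a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Three "drives" worth of data plus one parity block.
drives = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(drives)

# Simulate losing drive 1: rebuild its contents from the survivors plus parity.
rebuilt = parity([drives[0], drives[2], p])
assert rebuilt == drives[1]
```

This is why a RAID 5 set survives any single drive failure: the missing blocks are always recomputable on the fly, exactly as described above.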
So let's talk about storage planning a little bit. This is really the most important part of deployment and really talking about a best practices strategy. And it's really all common sense, but it's things that we don't think about on a daily basis. When we start there, we have to talk about the three different approaches to storage. And some of you may not realize you're actually deploying a number of these within your organization. The first one is direct attach. So when we talk about direct attach storage, what is that? And there's a lot of different ways that we look at this.
The first way to look at it is, a great example is the Xserve G5. It's locally attached hard drives. It's the hard drives that are in the Xserve G5 that become direct attached. And for the most part, this is the way traditional storage was done. You bought a server, you put hard drives in the server, and it was a done deal. And when you needed more performance, the servers didn't have a lot of performance, you added servers, and therefore you added storage. And that's the way things scaled. And it kind of looked like this.
You had a network down at the bottom, all your clients, and obviously I could draw hundreds of clients, but I'm not that good with drawing. And you have Ethernet switches, those are the gray lines, and then you have the two Xserve G5s. And in this case, you could have a terabyte and a half of storage online, assuming you left it all in either JBOD, just a bunch of disks, or a RAID 0 stripe. So it's quite a bit of storage.
But the problem is that it's never enough. And so you might take one of your servers, which is a, let's call it a high-use server. This could be your email server, for instance. And this could be an Xserve G5, a Mac OS X-based server. This could be a Windows server or a Linux server. And you're going to add some external storage. Well, that works.
And you might even find that with the Xserve RAID, because it's a dual-ported, dual-controller design, that you want to share half of the storage on one of the boxes and half the storage on the other. A little more efficient. You get the ability to centralize your storage so that you take advantage of the high availability of Xserve RAID.
Yet you get to share it over two servers. Or you might find that that's not enough and you just want to attach more storage to your individual servers. And this is still direct attach. And this is the way things have been done. It's truly a traditional approach. And it works really easily because today most people have LAN-based backup.
So you're backing up across the LAN. Now this was a really good idea when the data sets were small. How many people can remember a couple years ago when your entire organization ran on a couple hundred gigabytes? 100 gigabytes, right? I had an experience a few months ago when GarageBand first came out. And I was actually on a plane and I went to install GarageBand on my notebook. And I put the CD in and I went to install it. Actually the DVD.
And it said that I didn't have enough room. And this is my notebook. And I thought, don't I have an 80 gig hard drive in here? Well, I had a few Keynote presentations, so it was a little hard to do. But for the most part, backing up that amount of storage is really easy. Here, though, it's a lot of data.
In this case, we would have almost seven terabytes of data here. Imagine backing that up across the LAN. Anybody go to the backup session earlier this week? So you get an idea. For those of you who didn't go, and those of you who back up terabytes, it can really take, in an uncompressed environment, over 24 hours to back up a terabyte of storage. So, you know, the backup windows are shrinking. So this LAN-based backup doesn't seem to work very well in this situation where you have a lot of storage.
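That back-of-the-envelope backup math is easy to check. The sustained rate below (about 12 MB/s) is an assumed effective LAN-plus-tape throughput, chosen only because it reproduces the "over 24 hours per terabyte" figure quoted above; your real rate depends on your network, tape drives, and compression.

```python
def backup_hours(terabytes, mb_per_sec):
    """Hours needed to stream `terabytes` of data at a sustained MB/s rate."""
    mib = terabytes * 1024 * 1024   # one terabyte expressed in MiB
    return mib / mb_per_sec / 3600

# ~12 MB/s sustained works out to about a day per terabyte,
# and roughly a week for the 7 TB example above.
print(round(backup_hours(1, 12), 1))   # ≈ 24.3 hours
print(round(backup_hours(7, 12), 1))   # ≈ 169.9 hours
```

The point stands regardless of the exact rate: LAN-based backup scales linearly with capacity, so multi-terabyte data sets blow straight through any overnight backup window.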
So when you start looking at this, you go, well, one of the problems is I need to share the storage, but I have something like this where I'm pretty dedicated. That one Xserve RAID, let's call it 3.5 terabytes, is dedicated to that server. And the other Xserve RAID is dedicated to the other server. So my resource sharing is limited. So I better guess right as to how much storage I really need on each server.
Let's say one of those servers needed 6 terabytes and the other one needed 1 terabyte. In the direct-attached world, it's really hard to do, right? You just can't do what we'll call provisioning that storage between the two systems. So when you look at this and you look at that traditional approach, is it still viable in today's world? And a lot of people look at it and they say, sure, it's viable because it's a lower initial investment.
I don't have to do much planning. I know that I can just buy one and attach it. But there are problems with scalability and problems with longevity. Because a lot of times that storage is internal to your servers, and when it's time to replace that server, you end up not having storage that's compatible.
And that's the beauty of an Xserve RAID if you deploy it, because it's external, it's fiber channel, it's just going to plug in. But you also find out that you're pretty limited on that backup and that restoration. So it's not really the ideal approach. So what else do you do? Well, this is what most people do. Most people have moved to a network-attached storage model.
They start out with some type of direct attach, and usually it's many more than two servers that are out there. And they attach some type of NAS right onto the network. So it's attaching to that gray wire that's there indicating our network. And in my case, I chose to take an Xserve G5 and an Xserve RAID and use that as a NAS replacement. And that works.
It works and it gives you the ability to share that storage across the network to both servers. So you get some provisioning, because you don't have to dedicate those resources, you can leave them open. And it has expandability. But the expandability becomes limited, because what happens at this point is now the wire that's coming out of the Xserve G5, that single Ethernet, becomes the bottleneck.
And because of your backbone, unless you're building a 10 gigabit fiber backbone or just a 10 gigabit Ethernet backbone, you're really pretty limited in the overall performance it's going to deliver. How many people have a 10 gigabit Ethernet backbone out there? So I saw one hand. You're pretty much limited. How many people have a gigabit backbone? That's just about everybody out here.
So imagine that if each Xserve G5 can saturate that gigabit backbone, how is your network-attached storage going to keep up? And this is really what most people do. Most people have deployed a network-attached specialty appliance. And they're just an embedded NAS. So this is a very lightweight NAS server. And the reason they've done this is there's no client access licenses, just like an Xserve G5. And they're usually inexpensive. And usually what happens is you end up with this.
And you end up with a lot of different little appliances out there. And the problem with that is that while it gives you a heterogeneous approach, and while it has a lower investment, it is a management nightmare. And it's very difficult to manage that. And it's very difficult to know what's failed. And you find that you start plugging them in all over. And when you have an Ethernet problem, it's really a problem. It is a big problem because you lose accessibility to the storage.
And whether we like it or not, anybody here ever crimp an Ethernet cable? Okay, if you've crimped an Ethernet cable, you know that usually one out of three you're going to screw up. And usually it fails like six months later when it's hanging there. And this is part of the problem with hanging everything on the network. That network was made to deliver small packets. And it wasn't really made to deliver the performance and the reliability to a lot of different servers. It is truly a collision-based network.
But for most people, that works. And in fact, most people who start there tend to move to this, the NAS appliances. And we know who they are. They're generally extremely expensive in a per-gigabyte cost. But they have a lot of features. They do things like snapshotting, which means that you can replicate the data really quickly.
And they're appliance-like. They're generally easy deployment. They have a built-in file system. And for the most part, these very high-end appliances have an operating system and a file system. And they also have the downside of being a single-vendor lock-in. You're, for the most part, using their management tools, and you're having to buy them again and again and again. And it gets pretty expensive. And we all know who those people are. They're companies like Network Appliance and EMC. They build really nice products. And they still have the issue of having that single point of connectivity to Ethernet.
Now, they may have multiple Ethernet ports, but generally you're not going to have hundreds of Ethernet switches. You're going to have one or two very high-end switches, so you're really funneling everything through Gigabit. Has anybody here ever seen Gigabit Ethernet perform at Gigabit? So, that's one of the other issues that we run into all the time. So, what is the choice? The choice is to build a SAN. And it sounds like a lot of people out here have already taken that step, at least.
So, what does a basic SAN look like, and where does it scale? Well, most people who build a SAN start out again with that direct-attached model. They start out there and they add storage. And I'm going to put a fiber channel switch in here, because I knew I was building a SAN.
So, from a best practices standpoint, I want to start out with expandability and scalability already in there. And you can take an Xserve RAID today and you can use a tool called LUN masking. And you can map each address, almost like a MAC address, but we call it a fiber channel worldwide port name.
And we would map those worldwide port names very easily in the RAID admin utility to each server. So, we can have a provisioned storage to each server. So, the servers can't see each other's storage, but they see the storage they have. Really simple implementation. And I can add more.
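Conceptually, LUN masking is just a lookup from an initiator's worldwide port name to the set of LUNs the array will expose to it. The sketch below uses made-up WWPNs and is only an illustration of the idea, not RAID Admin's actual interface or data model.

```python
# Hypothetical WWPNs; real ones are burned into each Fibre Channel port,
# much like a MAC address on an Ethernet card.
lun_masks = {
    "50:06:0b:00:00:c2:62:00": {0, 1},   # mail server sees LUNs 0 and 1
    "50:06:0b:00:00:c2:62:04": {2},      # file server sees only LUN 2
}

def visible_luns(initiator_wwpn):
    """Return the set of LUNs the array presents to this initiator."""
    return lun_masks.get(initiator_wwpn, set())

assert visible_luns("50:06:0b:00:00:c2:62:04") == {2}
assert visible_luns("unknown-wwpn") == set()   # unlisted hosts see nothing
```

This is the "servers can't see each other's storage" behavior from the talk: each server's view of the array is restricted to exactly the LUNs provisioned to its port name.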
And as I add more, I don't degrade my performance because my back channel network is a specialized network called fiber channel in this case. And fiber channel is a non-blocking network infrastructure and my performance scales as my capacity scales. And we call that a SAN island. And then people will scale it out.
So, as you scale it out, the servers become more heterogeneous. The servers become Macs, PCs, Linux. Hopefully they're all Macs. But as you scale it out, now you can start to deploy more storage on more servers and you're not limited. And you can reprovision that storage as you need to. In some cases that's a manual process. In other cases, there are partners that can help us provision that storage instantaneously and without any interruption of service.
[Transcript missing]
If we look at that mission-critical storage environment, I picked one, an interesting one here, because I said I'm not going to pick one on Macs. I'm going to pick one on Windows. So this is a Windows 2003 advanced server with Microsoft clustering environment. In this case, it cost about $6 a gigabyte to deliver fully redundant mission-critical storage.
And this is something that today you'd have to spend an incredible amount of money to deliver this with other systems. And this is really made possible by Xserve RAID. So in this case, you've got a lot of storage to two servers, and it's really simple and easy to deploy.
Now here's your typical three-tier storage infrastructure the way it usually really looks. It's more than one storage device, it's more than one server, and it's heterogeneous. And so you do have a storage pool that's mission critical, a business critical pool, and a near-line pool. That's really the way it looks. And today, with Xsan, you can build this. Or at least when Xsan is released, I should say.
So it all comes down to which storage approach is best, and it really depends. The other thing that's interesting is that 50% of the managers consider heterogeneous storage to be a strategic goal for them. So when we talk about interoperability, this is important.
So if you don't have an all Apple infrastructure, you don't have an all Windows or an all Linux or an all Solaris infrastructure, you really need to be heterogeneous. And I think the most important thing is that 52% of the people surveyed, and I think they surveyed about 1,000 IT managers, they view the reduced maintenance costs as proof of a return on investment. So when you look at deploying tiered storage, probably in that high tier of storage that's 40 to $100 a gigabyte, you pay that same amount in maintenance every year.
And that's really where the cost is. So when you look at, well, I can tier this, and I can reduce my cost both in the initial cost of the system and in the maintenance cost, that's really what it's all about. And can I deliver those same services? We think so. So if you're doing storage planning, there's a few things to look at. It's really, well, what do I already have? What do I really need? How much is it going to cost? And so let's take a look at some of those.
The first one is existing infrastructure. There's two things that people don't look at here. The first one is, how old is the existing infrastructure? I hear this term all the time, and I guarantee everyone in this audience has used it in storage at least once. It's called legacy. They all say, I have legacy storage that I need to connect to my new storage.
Anybody ever say that? Yeah, we say it a lot, right? Legacy storage. Well, what does that mean? Does anybody here realize storage wears out? I mean, that's another thing that people don't realize. In the year 2000, I paid $2 million for this three-letter-acronym storage. And I have to amortize it over the next 10 years because I paid a lot of money for it.
Well, storage wears out. It's rotating media. It's not as bad as tires on a car, but it does wear out. So you have to plan on depreciating that storage over three to five years and getting it out of there. And when you do, are you going to buy that same monolithic storage you bought before, or are you going to look differently? The other thing to really look at is, have you considered a tiered approach, lowering the overall cost of storage, putting that expensive storage in your mission-critical areas and putting lower-cost storage in the business-critical and the near line? It's something to really consider.
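The tiering argument above can be put in rough numbers. Every figure below is illustrative, taken from the ranges quoted in this session ($40 to $100 per gigabyte at the high tier, with yearly maintenance roughly equal to the purchase price, versus around $6 per gigabyte for Xserve RAID-class storage); the 10% maintenance rate on the low-cost tier is my assumption, not a quoted number.

```python
def annual_cost(capacity_gb, price_per_gb, maint_rate=1.0):
    """First-year cost: purchase price plus maintenance as a fraction of it."""
    purchase = capacity_gb * price_per_gb
    return purchase + purchase * maint_rate

# 10 TB kept entirely on high-end storage at $70/GB, with yearly
# maintenance comparable to the purchase price:
monolithic = annual_cost(10_000, 70)

# Tiered: 2 TB stays mission-critical at $70/GB, 8 TB moves to
# ~$6/GB storage with an assumed 10% maintenance rate:
tiered = annual_cost(2_000, 70) + annual_cost(8_000, 6, maint_rate=0.1)

print(monolithic, tiered)   # monolithic ≈ $1.4M, tiered ≈ $333K
```

Even if the assumed rates are off by a wide margin, the shape of the result holds: most of the savings come from moving the bulk of the capacity, and its recurring maintenance, off the top tier.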
And then the other one is true capacity requirements. And since most of us don't really know what our requirements are going to be, we can take a guess. But, you know, who would have thought that from 1998 to 2003 that storage would have been growing, from a needs standpoint, 110 percent per year? Not a lot of people would have guessed that, and they would have guessed low.
So in deploying something like a storage area network, it allows you to actually grow with that storage. So you get scalability up, down, in, out, every way you can look at. So you can redeploy the storage and reprovision it. So you need to look at today and look at tomorrow.
I think the other important thing is throughput. So we talk about network attach versus storage area networks versus direct attach. In the storage area network world, in the direct attach world, the performance is going to be bottlenecked by the limitation of either the server or the storage. They're the limitations.
In the network attach model, the throughput or the storage performance is going to be limited by the network. So you have to determine what is my application. Is it megabytes per second? Is it IOs per second? And how many clients do I actually have out there? So you really have to look at that.
The other one is really availability requirements. Do I need it to really be up 24 hours a day, seven days a week with no downtime? Or can I have a reasonable two to four hour downtime? It could be five minutes, but let's just assume it's four o'clock in the morning and something happens and it's going to be two hours of downtime.
Is that reasonable? Is it business critical? Does it need to be archived? How often does it need to be archived? Can I use near line? These are all questions you have to ask and you have to answer these yourself because there's really no one who can tell you what your business model looks like because they vary so much. In fact, you'll find that most of the people selling very high end storage will dictate your business model to you.
And that's not necessarily the right way to do it. And the other one is really disaster recovery. So there's been a couple things that have really driven that. One, of course, was 9/11 and none of us really wanted that to happen and none of us wanted to have to bear what happened afterward and that was rethink our storage strategy.
Let's get this off site. And there's really two ways to do it. One is to deploy remote replication. Very expensive. Generally, it doubles your cost because not only do you usually have to replicate the storage, you have to replicate servers and infrastructure and everything. And the other one is an off site backup service.
And you can even carry it off site. Small companies, you know, the CFO carries the tapes home. And in large companies, there are companies like Iron Mountain that will come pick up your tapes and they'll even load them and unload them if you need to. So it's really a cost driven thing. And when you talk about that, you really need to talk about compliance.
Because one of the other things, anybody here that falls under Sarbanes-Oxley, or know that they do? So a lot of larger companies are doing this. So you can see that the government is taking this very seriously, and it's starting to move to Europe and it's starting to move farther throughout the world.
So this type of compliance that says you have to find every email for the last seven years within 24 hours and deliver it to the Justice Department, that's a pretty huge requirement. Especially, how many of you can go through the tapes you have today and find something from yesterday? It's usually a pretty hard thing to do. So it is good practice to have a budget. And I think the most important thing about a budget is that you have to have someone that you can trust that can give you the right advice and that's looking out for you. And if they're just telling you today that this is absolutely what it's going to cost and there's no other way of doing this, I think you need to think different.
So what I want to do with that is really talk, bring up Ryan Klein, who's a SAN architect at Qlogic, to talk to you about basically the one area that we hear all the time, and that is fiber channel best practices. If I'm going to deploy a network and basically a SAN, how do I do it and how do I lower the cost in it? And Ryan's going to tell us about some great exciting stuff here.
[Transcript missing]
When building small networks, starting as small as eight fiber channel ports and growing up to very large networks such as 64- and 128-port SANs, and deploying a very scalable, cost-effective architecture. And then we're going to talk a little bit about SAN interoperability. Alex touched on this as being a very important part of deploying a storage area network, and it's really key to making sure that all the componentry that you have works together, is supported, and is something you're not going to run into any issues with.
So we talked a little bit about all of these components that Qlogic makes. So how does this come to you? What are the strategies that allow you to make use of these products? Well, what we're going to start to see is that we integrate our products into the SANs that you go deploy. So you'll see Fiber Channel HBAs inside of the servers, taking our Fiber Channel ASICs, iSCSI ASICs, putting them on the motherboard. Most people today who have deployed servers have IP integrated on the motherboard.
You're probably familiar with SCSI on the motherboard. The same thing is happening here with Fiber Channel. We also integrate switches into the componentry that we have. So if you look at a lot of the bladed environments that are out there today, they've taken technology such as the SANbox 5200, which today is deployed as a boxed product, and integrated it right into the back end of those products. So moving forward, storage boxes like Xserve RAID and various others have the ability to integrate switching architectures into them. And there's a lot of things like that that will be coming out.
We're simplifying and lowering costs. And what does that really mean to you? So when Alex asked how many people here are storage administrators and have dedicated people deploying storage, I think I only saw one person raise their hand. So what does that really mean? Well, essentially, everybody here has a lot of different responsibilities and functions within their IT organizations in addition to storage.
So you're not necessarily a SAN expert. You know you have storage out there. You know that a storage area network makes sense for your deployment, but you don't necessarily want to have to know every single parameter and all of the detailed implementations to configure these types of things. So what we're doing is building intelligent software that configures these environments automatically, as well as providing ease of use so that you don't have to worry about all those detailed implementations.
The final thing that we're really doing here is we're delivering turnkey SAN infrastructures. So what this means is it gives you the ability to, from a single perspective, buy all the componentry that you need to deploy a storage area network. So today, you know, we need servers, we need the interconnects, and we need your storage.
Well, what are all the pieces and parts that you need to go deploy a SAN? If you're not very familiar with all the componentry, it could be somewhat overwhelming. So the idea is to provide a turnkey solution that allows you to purchase the storage networking switch, all of the optics and things like that that are required, all of the cabling, as well as the host bus adapters that go inside the servers, and to be able to do that in a heterogeneous environment. So these types of kits allow you to cross Windows, Linux, NetWare, Solaris, as well as OS X, to deploy heterogeneous environments and manage them from a single location.
Talk a little about expanded management, and this is what I just mentioned. We have something called our SANsurfer Management Suite, and this is a device management tool. It's Java-based and really complements the Xserve RAID GUI. So Alex talked about LUN mapping, LUN masking, the ability to point specific LUNs at specific servers. This software really complements that, and what you're looking at here is a picture of our brand-new, just-released OS X GUI for the SANbox 5200 switch.
What this allows you to do is configure the specific SAN ports, switches, and all the functionality in a heterogeneous environment, crossing all the applications that we show at the top there. It works out really well and complements all of the Apple tools, and is Java-based, as I mentioned.
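The LUN mapping and masking described here can be pictured as a simple access list: a LUN is only exposed to hosts whose WWPNs (World Wide Port Names) are on its list, and every other host's requests are denied. Here is a minimal illustrative sketch of that idea; the class, names, and WWPNs are all hypothetical and are not the SANsurfer or Xserve RAID API.

```python
# Illustrative model of LUN masking: each LUN carries an access list of
# host WWPNs; hosts not on the list simply never see the LUN.
# All names and WWPNs here are hypothetical, not a real vendor API.

class MaskedLun:
    def __init__(self, lun_id, allowed_wwpns):
        self.lun_id = lun_id
        self.allowed = set(allowed_wwpns)

    def visible_to(self, wwpn):
        """A host sees this LUN only if its WWPN is masked in."""
        return wwpn in self.allowed

# A RAID slice presented only to two specific servers:
video_lun = MaskedLun(0, {"21:00:00:e0:8b:00:00:01",
                          "21:00:00:e0:8b:00:00:02"})

print(video_lun.visible_to("21:00:00:e0:8b:00:00:01"))  # True: masked in
print(video_lun.visible_to("21:00:00:e0:8b:00:00:99"))  # False: masked out
```

The point of the GUI tooling is that this table is maintained for you per port and per server, rather than edited by hand.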
Let's talk a little bit about the switch market, what you've probably seen in the past, and where we believe switching is going. So here's a basic SAN. This is a four-switch mesh. You see a bunch of Xserve servers at the top, Xserve RAID at the bottom, some tape backup, as well as some heterogeneous environments.
And if you wanted to deploy this environment, or something similar to it, a few years ago, those are some rough numbers of what you'd see and what it would cost. The switching environment really stands out here. The four switches there cost roughly $20,000 apiece, going to about $80,000 total, so that's a large part of the overall SAN. It was cost prohibitive for a lot of people to put together storage area networks.
So what did we see? We saw most SANs being deployed at the large enterprise. And from a show of hands earlier this morning, most people here aren't deploying at large enterprises. They're more on the small to medium business side. So SANs have really been cost prohibitive. As we move forward, one of the strategies that we're working with Apple on is to be able to bring storage area networks, the functionality that Xserve RAID brings you, down to the small to medium business and be able to develop the platforms at the sub-$15,000 level for the entire solution, as well as still scaling all the way up to the enterprise.
So the SANbox2-64 switch is a chassis-based switch that allows you to be at the very high end here, as well as the SANbox 5500. The SANbox 5200, which we're going to talk about in a few minutes, really allows us to scale all the way from eight ports, through the small to medium business, all the way up to 64 ports.
[Transcript missing]
At the same time, the embedded switches start to come into play here. Taking that switching technology and putting it directly into the storage arrays or directly into bladed environments for servers and things like that, really reducing cost and complexity. So using the last two slides and comparing them to this one and the next one, in the past we had the chassis-based high availability directors, large port count, as well as we had the 8 and 16 port fixed switches.
So of the folks in the audience that raised their hand regarding having switch infrastructure, how many people here deploy chassis-based or director-class switches? I see one or two hands, so almost nobody. So everybody else here, by a show of hands, 8- or 16-port fixed-port switches? Okay, so right there, you're really locked into a strategy where if you want to scale that environment, you have to take another switch, connect it in via an inter-switch link, and start using up those valuable end-user ports to do that.
So where do we see this going? Stackable switch market. Stackable switch market allows you to scale an environment. You can still continue to leverage the existing eight and 16-port switches that you have, but you connect them directly into the stackable switch and scale that way. And then if you need the high availability, high port count switches, you start using the chassis-based switches, and you see how they complement each other, giving you a choice to scale from fixed port environment through stackable all the way up to the high port count chassis switches. to really be able to pick the right switch for the right application.
So here's a little bit of a view of the industry and the major players that are out there and some of the products that they have. So most of you are probably familiar with Cisco, small little company, McData, Brocade, as well as Qlogic. And you see that everybody out there really is offering fixed port switches, not giving you too much choice or scalability.
Qlogic has come along and really been disruptive, offering the SANbox 5200 as well as the blade switch in the bottom right. But one thing that you'll notice about stackable switches is that they offer all of the functionality that you would get from a fixed-port switch, as well as functionality that you'd see at the director class. So things like non-disruptive code load.
Everybody here uses patches all the time. Well, as we move forward, we provide updated software and functionality for the switch infrastructure, and you want to make sure that you're up on the latest and greatest code. But you probably don't want to bring down your environment to do that. The SANbox 5200 allows you to upgrade firmware dynamically without affecting your storage area network.
Other big things here are management software. This is really important because it allows you to manage the environment and doesn't cost additional money to you, as well as all of the features such as monitoring and performance and things like that. These are all things that are included in these environments where you may not get those in a fixed port environment.
So we're going to introduce the SANbox 5200 to you here. By a show of hands, how many people here are familiar with the 5200 switch? That's great, that's a great number of people. So the 5200 has 16 2G ports and 4 10G ports. The 16 2G ports can really be viewed as a fixed-port architecture. The 4 10G ports, which got cut off here on the right-hand side but I'll show you in a later slide, are used to interconnect those switches together.
It's a 1U box. It's managed just like an IP switch would be managed. It has an Ethernet port as well as an RS-232 port. It has something that we call configuration wizards; I mentioned this a little bit earlier. Configuration wizards allow you to step-by-step configure and deploy a storage area network without having to know all of the details required to do so. The 5200 can be deployed in about five minutes from a configuration standpoint, and you're ready to start plugging in Xserve RAID boxes, which are automatically discovered and configured. You don't have to worry about what's happening and why; it does it all for you.
IO StreamGuard. For the folks in here that do full stream video as well as backups, this is a really cool feature that I'll talk about in a minute. But it's the type of feature and functionality that Qlogic works with Apple on to ensure that we're serving all of the needs of the folks like yourselves.
10GIG ISLs, this is really a disruptive technology to the industry. We're the first switch to have 10GIG functionality, and we use it on the right-hand side there. You see those copper interconnect cables. Left-hand side here is a picture of a large 64-port mesh that you would have to create if you wanted fixed-port environments.
So if you wanted to scale that large, that's what the environment looks like. It requires 30 cables just to connect the infrastructure together. And by the way, those ports that you had to use, you can't use for your devices, your tape drives and your disk drives and your servers anymore.
In a 64-port stack using Qlogic on the right-hand side, you get to use all 64 ports. You don't lose those valuable user ports when deploying a 5200 solution. Not to mention the amount of cables and infrastructure mess that you would have to manage with 30 cables and the redundancy problems that you might have. The other thing to mention here is the 10GIG speed.
You'll be able to connect those switches together with a 10GIG bandwidth. And in a fixed-port environment, even if you started out with two switches and you scaled the third and the fourth, you're only having a 2GIG bandwidth between those switches. So that's one major aspect of the 5200 that really brings a lot of advantage.
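The port math behind this comparison is easy to check: in a full mesh of fixed-port switches, every inter-switch link (ISL) consumes one port on each of its two endpoints, while dedicated stacking ports leave all user ports free. A back-of-the-envelope calculator follows; it assumes a generic full-mesh topology, which is not necessarily the exact topology shown on the slide.

```python
def full_mesh(switches, ports_per_switch, links_per_pair=1):
    """Count ISL cables and remaining usable ports when fixed-port
    switches are fully meshed. Each ISL burns one port at each end."""
    isls = switches * (switches - 1) // 2 * links_per_pair
    usable = switches * ports_per_switch - 2 * isls
    return isls, usable

# Four 16-port switches, one link per pair:
isls, usable = full_mesh(4, 16)
print(isls, usable)  # 6 cables, and only 52 of the 64 ports are left for devices

# Eight 16-port switches full-meshed:
print(full_mesh(8, 16))  # (28, 72): 28 cables, 72 of 128 ports usable

# A stack whose interconnect uses dedicated 10G stacking ports keeps
# every user port: four 16-port switches still give 4 * 16 = 64 ports.
```

The larger the mesh, the worse the trade gets, which is the argument being made for dedicated high-speed stacking links.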
Breakthrough ease of use. So, you know, QLogic really looked at what Apple had done with Xserve RAID and the GUIs and the tools and the ease of use that's available today. You know, that's one of the biggest things that you hear about Apple software is how easy it is to use.
We took the lead there from them, and we were able to integrate that type of technology into the software that we have for the SANbox 5200, as well as the SANsurfer Management Suite that we've built. Really being able to provide a stackable architecture with all the value-added software that's easy to use. We really think that's important, and it's really looked at as a best practice in the industry because it provides you with ease of use.
So here's IO StreamGuard. I mentioned this before. This is a really cool feature with QLogic switches. It's exclusive to QLogic switches. And for those of you that aren't familiar with the process, when you bring a server up and down on a SAN or reboot a server on a SAN, something called an RSCN, a Registered State Change Notification, goes out. And what that is, is that server telling every other server on the SAN, hey, I'm here or I'm gone.
Well, when that happens, it's only a split second. But if you have server A talking to disk A and server B reboots and comes back up, and this RSCN goes around the SAN fabric, server A momentarily pauses. If you're in an OLTP environment running a database or something like that, not really a big deal. If you're streaming video or you're streaming a backup and all of a sudden there's a pause, what do you think happens to the screen? This is not a good thing if you're doing high-definition broadcast.
So this feature that we have called I/O Stream Guard allows that switch port to not acknowledge something called an RSCN. That way you can have continuous streaming video or streaming backup without having that port go down. This is exclusive to the 5200 and really plays well with Xserve RAID as well as the customers that use this type of technology.
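The RSCN behavior described above can be sketched as a toy fabric model: when a port's state changes, a notification is broadcast to every other registered port, which momentarily pauses I/O, except for ports where the guard feature is enabled. This is purely an illustration of the concept, assuming nothing about the actual switch firmware.

```python
# Toy model of RSCN (Registered State Change Notification) propagation.
# Ports with stream_guard=True never receive the notification, so their
# I/O stream is never paused. Illustrative only, not switch firmware.

class FabricPort:
    def __init__(self, name, stream_guard=False):
        self.name = name
        self.stream_guard = stream_guard
        self.pauses = 0

    def receive_rscn(self):
        self.pauses += 1  # models the momentary I/O pause on notification

class Fabric:
    def __init__(self, ports):
        self.ports = ports

    def broadcast_rscn(self, origin):
        for port in self.ports:
            if port is origin or port.stream_guard:
                continue  # guarded ports are not interrupted
            port.receive_rscn()

video = FabricPort("video-server", stream_guard=True)  # streaming HD video
oltp = FabricPort("database-server")                   # OLTP; a pause is harmless
server_b = FabricPort("server-b")
fabric = Fabric([video, oltp, server_b])

fabric.broadcast_rscn(origin=server_b)  # server B reboots and rejoins
print(video.pauses, oltp.pauses)  # 0 1 -- the video stream never paused
```

The database port pausing once is tolerable; the guarded video port skipping the notification entirely is what keeps the stream dropout-free.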
So best practices in SAN interoperability. SAN interoperability is something very important, and it's something you need to look at when you're deploying SAN solutions. Historically, interoperability really meant connecting product A with product B. Does it work? And it seemed like a game. You didn't really understand how it worked and what to do. And most of that has gone away, and a lot of it has to do with things like the SAN interoperability guide that QLogic put together.
This is something that's available on our website, and you can go to QLogic.com and download it. Really what it is, is a guide that allows you to build a SAN. It covers close to 60 different partners in the industry, from QLogic to Apple to backup companies like Veritas. All the ISVs and IHVs out there, multiple storage vendors, multiple software vendors.
And what it really is, is a guide that lets you know what works with what and how to put it together. It's really key that you use a document like this when you're building a SAN: everything from the infrastructure components all the way up to the application layer, to know what's out there and what works with what. Great document.
Switch interoperability is also something very important to Qlogic and something we work on continuously. Of the folks in the room, there were a number of people that had deployed 5200s. How many of you have deployed Brocade switches? Great, there's only a few of you. I like that. The idea here, though, is that there are going to be a lot of people that have used other vendors' switches. And sometimes you're going to hear somebody say you have to stay in a homogeneous environment: if you want to add another switch or grow to more ports, you have to buy another Brocade or another McData switch. That's really not the case anymore.
You know, there's a lot of advantages to using something like the SANbox 5200 as you continue to scale your environment. But at the same time, you don't want to forklift out the switches that you have. So this type of document is a best-practices switch interoperability document that gives you step-by-step procedures on how to configure new technologies like the SANbox 5200 with legacy technologies from companies like Brocade, the fixed-port architectures and things like that. So keep your existing technology, continue to scale your environment, and we give you step-by-step procedures on how to do that. It's something that really gives you an advantage when you're growing those environments.
So the other thing we like to do is really educate our users, our developers, and provide documentation. All the documents that I've been talking about fall under an umbrella called QLogic Press. And it's really an educational arm of QLogic that is designed to provide really great white papers and documentation about common SAN deployments. So as Alex described the various areas of NAS and DAS and SAN, of course QLogic is focused on SAN.
And what we like to do is provide procedures and step-by-step and deployment scenarios for SAN infrastructures. And this is a series of guides that we built. This one is an Xserve RAID-focused document. And it really gives you an idea of common topologies, how to configure, how to deploy. There was a lot of topology screens that Alex showed you. And those are in these types of guides. Plus, it tells you how to configure it.
So that can get very complex, very quickly. You know, lines everywhere and devices connected and multi-pathing and heterogeneous environments. I've got a NetWare server. I have a Linux server. How do I connect that in? How do I do LUN mapping? How do I do LUN masking? We try to take all of that complexity out of it by putting together guides like this, with step-by-step screenshots that give you the ability to go deploy effectively.
So in summary, Qlogic is an I/O technology leader. We want to provide you with the infrastructure to move your data from your server to your storage via a SAN network. We make fiber channel switches as well as fiber channel HBAs that cross heterogeneous environments. You can deploy in numerous solution environments as well as manage them from a central location.
In addition, we have new technologies like the SANbox 5200 that you're going to see continuously become easier to use, with more functionality, as well as prices being reduced to enable more people to deploy SANs and take advantage of all the aspects of shared storage: the ability to pool your storage, to more effectively back up your storage, and to use a SAN to better your business.
We touched on interoperability and how important interoperability is to your environment, making sure that when you build SAN environments, you take advantage of things like the SAN Interoperability Guide. That guide gives you all the visibility to what works with what and how to deploy, as well as taking advantage of things like the Xserve RAID Configuration Guide that we put together.
And finally, you're going to see Apple and QLogic working closer together to continue to bring you solutions documents and things like that, and to continue to build environments that take complexity out, as well as allow you to scale SAN environments, and really take a competitive advantage over your competitors, as well as building solutions that allow you to scale from small environments all the way up to large 64, 128 port count environments. Thank you.
That was great, Ryan. I brought one of those SAN configuration guides just to give you an idea how thick and complete this is. And I can tell you that with this guide, just about anybody is able to build a SAN with an Xserve RAID. Now, obviously, if you're doing something like deploying an Xserve RAID, an Xserve, and Xsan, the configuration part raises real questions. I mean, it really does. How do I set that switch? What do I do? If I've got video, do I need to turn something on, turn something off? And it's all in here, and these are available online from QLogic, so thank you, Ryan, for doing that.
What I'm going to do now is actually turn it over to Steve Terlizzi from Candera, who's going to talk to us about an interesting case study that we put together with Candera, where the customer is Chiat Day, a large ad agency. It just happens to be Apple's ad agency; that had nothing to do with it, actually.
But they were really facing something that everyone out there faces: a mission-critical storage environment and an issue they'd had for a long time with heterogeneity. And Candera, Apple, and QLogic all got involved, and we were able to really do something that's incredible. So, Steven, do you want to take the stage here, please? Sure.
Thank you. So, one of the interesting things when you start to look at what Apple, Qlogic, and Candera are talking about is addressing the needs of mission-critical storage, but not at the prices that the big players and the big vendors are pushing on the Fortune 200, Fortune 500 companies.
What they tend to miss is the fact that there's a large number of companies that need mission-critical storage, need the availability, the performance, the performance scalability that you could find in a monolithic storage device, but certainly not at the $40 per gigabyte plus that you typically see when you look at a monolithic device. So, today I'm going to talk about Chiat Day and how they built their mission-critical infrastructure on a combination of Qlogic and Candera.
So, let's start with the first part of the session, which is the Qlogic, Candera, and Apple products. Then talk a little bit about this new architecture, the ability to build this intelligent ATA, and then close it up with what that means. How much can you apply this to your infrastructure, whether you're an enterprise or a small to medium business? And, of course, the Qlogic.
[Transcript missing]
About 50? Over 100? It's actually over 100. Now, we scaled it down to about 40 by rationalizing it, but when we were developing it, it was over 100 megabytes. Now, imagine 500 professionals working daily with multiple copies of presentations of this level of quality, trying to do digital content, and you can see how quickly it can grow.
And this is their business. They're responding to very tight deadlines of clients that are paying a lot of money for advertising that don't want to see things missed. And they need the access all the time because creative people like that work very long hours, very vigorous hours. And so, Chiat Day, this facility in LA, needed 7 terabytes of storage to handle 500 clients. Their New York office needed 21 terabytes.
So what did they have originally? What was the before? The before was a heterogeneous server environment made up of Novell, Apple, HP servers, all with direct attached storage. So every server had its own storage, different departments had different projects on different servers, and sure enough, the server that had available storage, that department didn't need the storage, while the department that really needed the storage had a server that had no storage on it.
So they ended up having an environment that was very difficult to manage, very poor utilization. The other aspect to Chiat Day is this is not a Goldman Sachs that has a liberal IT department. It was a very limited IT staff and a limited budget. So they needed to focus on how can I build this infrastructure that I can manage most cost effectively.
The other thing is storage planning. How do I plan for the growth? How do I plan for the scaling? As they brought more and more services and more clients on, it's an unpredictable demand flow. They land a big client, they need more storage very quickly. How do they respond to that environment? And the other thing is with a DAS environment, backup is difficult. And in an environment where you have high availability, they need to be able to consistently backup and restore that data. In this DAS environment, you have different backup regimes.
So what we did is we recommended a SAN. We took a pair of 5200, Qlogic 5200s, and what we call a Candera Apple ATA appliance, which is a number of Apple Xserve RAIDs aggregated with Candera's network storage controller. And what that allowed them to do is deploy a very easy SAN storage.
The Qlogic SAN provided the connectivity. The Candera Apple ATA appliance provided storage that could be deployed in seconds and provisioned very quickly. The other aspect to this is the fiber channel level of reliability. When Chiat Day went out to look and say, "What can I buy?" they looked at fiber channel storage, modular and monolithic, because they believed that's what they needed.
When Candera and Apple came in and said, "No, you can do this using serial ATA technology," they looked at us and said, "You've got to be kidding." But we were able to provide the kind of active-active high availability, the kind of tracing and diagnostics that you typically find in a monolithic storage device with the Candera Apple solution.
Also, it allowed you to centralize the data assets. So I could have the physical storage here of the Apple Xserve RAIDs, and then with the virtualization and the centralized management provided by Candera, I could create virtual LUNs that I could very flexibly provision to the various hosts. And those hosts were heterogeneous hosts, including HP, Apple, Novell environments. The other key aspect to that is the error detection and correction. So when you look at a SAN environment, when there's a problem in the environment, maybe a flaky HBA or something that goes awry on the storage, it's difficult to diagnose.
The more complex the SAN gets, the more difficult. But with an aggregating element in there, you can then do very quick error detection and diagnosis. And the other aspect and the other benefit to this is, by moving from a DAS environment to a SAN environment, you de-link the storage from the hosts. Consequently, it's very easy to replace hosts, to add new hosts, and to buy smaller servers.
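The virtualization layer being described can be thought of as an address map: a virtual LUN is stitched together from extents on several physical Xserve RAID arrays, and the controller translates each virtual block address into a physical (array, block) pair. The sketch below is a hypothetical toy model of that idea, not Candera's actual controller logic; the extent size and array names are made up.

```python
# Toy block-address virtualization: a virtual LUN is a concatenation of
# fixed-size extents drawn from different physical arrays. Hypothetical
# sketch only; extent size and array names are invented for illustration.

EXTENT_BLOCKS = 1000  # blocks per extent (arbitrary for the example)

class VirtualLun:
    def __init__(self, extents):
        # extents: ordered list of (array_name, starting block on that array)
        self.extents = extents

    def translate(self, virtual_block):
        """Map a virtual block address to (physical array, physical block)."""
        index, offset = divmod(virtual_block, EXTENT_BLOCKS)
        array, start = self.extents[index]
        return array, start + offset

# A 3000-block virtual LUN spread over two Xserve RAIDs:
lun = VirtualLun([("xserve-raid-1", 0),
                  ("xserve-raid-2", 0),
                  ("xserve-raid-1", 5000)])

print(lun.translate(0))     # ('xserve-raid-1', 0)
print(lun.translate(1500))  # ('xserve-raid-2', 500)
print(lun.translate(2500))  # ('xserve-raid-1', 5500)
```

Because hosts only ever see the virtual addresses, physical arrays can be added or replaced behind the map, which is what de-links the storage from the hosts.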
So were they happy? So this is the quote from Chiat Day where they said, basically, we didn't think we could do our storage infrastructure on ATA, but Candera and Apple were able to deliver that at a fraction of the cost of monolithic or modular storage. They were so happy that after the evaluation with the LA system, they bought the 21 terabyte New York system at the same time. And they had planned a phased implementation and they bought everything up front.
So what is this partnership with Candera and Apple? Why are we together? What are we doing? The Candera Apple ATA appliance takes the components, takes the guts of what you'd find in a monolithic storage device by way of fine-grained virtualization and centralized management, and uses that to aggregate Apple's superior ATA technology.
So when you look at a monolithic device and pull back all the sheet metal, you typically find components that handle connectivity and virtualization and management, and components that provide RAID processing and those types of things. And they're able to scale because as you need performance and capacity, you add more controllers. As you need connectivity, you add more disk adapters and channel adapters, so you can scale the performance. Modular devices can't do that, but the combination of Candera plus Apple allows you to: it lets you scale both the performance and capacity together.
Also, enterprise class, the high availability, the ability to do active-active failover, the fault tolerance, those types of things are very important. When you look at environments, we talked about legacy storage and how messy it can be in there. What we allow you to do is start by building this ATA appliance and then start to bring in your legacy storage and provide one centralized approach. So interoperability becomes very easy.
And so when you look, it's now a new approach to a storage architecture. Monolithic approach was the first approach. Big, manageable scales, but very expensive. Modular starts small, but doesn't scale. Now you have this intelligent ATA approach that allows you to start small, but aggregate all those devices into one big virtual disk.
In fact, the combination of Candera and Apple allows you to build an architecture that matches that monolithic architecture. What does that mean? It means you can manage everything from one centralized approach and have a management GUI that will work, allow you to scale, and work like a monolithic device.
[Transcript missing]
Fantastic. Thanks, Steve. I appreciate it. You can see that today there's a lot of choices in storage. And the Candera and Apple solution is an interesting one. It's one where you can actually take high-performance storage, combine it with the feature sets that you get in very, very expensive monolithic storage, those three-letter acronym name companies, and you can really build a system that is very cost-effective and scales a lot better.
And so what I want to do really quickly is wrap up, and we're probably not going to have time for Q&A today, so what we're going to end up doing is talking to you afterward. But just to wrap it up real quickly, I think when you look at the summary of what we've talked about, the first thing is to remember is budget. Budget's going to dictate your infrastructure, and we think we have a building block storage product that allows you to do that. The second thing is that complexity is always going to equal cost. So the more complex it gets, the more it's going to cost.
And of course, scalability is something you need to address up front. And you have to really look, when you're talking about scaling it, is what is your real-world usage of the systems? And you don't want to overlook the backup needs that you have, because a lot of people say, well, I'm putting in RAID storage, I no longer need to backup. And nothing could be farther from the truth than that.
When you put in RAID storage, what you need to do is be more concerned about backup, because you get a false sense of security. And then the last thing is: consider tiered storage. Because if you look at tiered storage and these approaches, and don't spend that money on monolithic storage for everything you need, you can build an infrastructure that matches your needs going forward. And I can guarantee you one thing, and that's that we're going to continue to improve ease of use and drive the price of storage down. I appreciate everyone's time today. Thank you very much.