Enterprise • 56:22
Xsan is Apple's high-performance, easy-to-use SAN file system for Mac OS X and Mac OS X Server. Find out how Xsan can be deployed as a platform for workflow in demanding environments, as an affordable alternative for storage consolidation, and as centralized storage for a computational cluster. Also, learn how Xsan will create new opportunities for applications demanding high-speed access to shared data. Xsan deployments and major features are covered.
Unlisted on Apple Developer site
Transcript
This transcript was generated using Whisper; it has known transcription errors. We are working on an improved version.
Morning. Before we start this session, I want to talk a bit about the session that's coming after this. Because Xsan is a brand-new technology and we have tons of things we wanted to talk about, we split the content into two sessions. The first one is this one, Introducing Xsan. And the second one is the one right after this, at 10:30 in this room, Xsan In-Depth.
So if you're here to learn about what Xsan offers and what it takes to deploy and manage it, or to learn how to program for it, we highly recommend that you attend both sessions. And here, I just wanted to quickly review what we're going to talk about in these two sessions.
So in this session, we're going to start off by talking about why you may need a SAN file system and what its benefits are. And we're going to spend a pretty good chunk of time talking about all the key features of Xsan, everything from the basic file system to the management, to performance, to cross-platform support.
And we're going to cover what it takes to deploy Xsan. Basically, we're going to go over the basic hardware configuration with a few examples. And then to conclude the session, we're going to talk about the different software and hardware products Apple offers to build an Xsan and also to build different solutions on top of it.
So that should keep us busy for the next hour to hour and a half. And then at 10:30, we're going to talk about how Xsan works from an engineering perspective. So, for example, we're going to talk about things like what gets passed on the wire when you make a file system call.
And we're going to spend lots of time talking about the administration of Xsan. In fact, we're going to have a demo of the Xsan Admin software, using it to set up a SAN from scratch, or almost from scratch. So that should be pretty fun. And we're going to talk about different aspects of administration, such as the command-line tools.
Then we're going to talk about a few tips for developers, things they should watch out for when accessing Xsan volumes, such as scalability and locking and other issues. And lastly, we're going to talk about the Xsan developer API. We're going to have a demo of a couple of sample applications using that API, and it's pretty visual, so it should be pretty entertaining. So with that, I would like to introduce the speaker of the first session, Tom Goguen, Director of Server and Storage Software from Worldwide Product Marketing. Tom Goguen: Thanks, Kazu.
[Transcript missing]
Now, what came out and came into vogue a few years ago were NAS devices, network-attached storage. The reason these are very interesting is that these are purpose-built appliances for sharing storage over the network. So, in a sense, it's just like a direct-attached storage device, except that it's storage with an embedded file server. It's a good way to think about it.
So, it has some really great advantages, one of which is typically NAS devices are extremely easy to deploy, very quick to get them up and running. And that's because they are single-purpose devices. Their whole point is to share storage on the network. At the same time, they do suffer from some of the same performance and scalability limitations as direct-attached storage. Now, there's a whole range of very sophisticated things that some companies offer.
In terms of storage virtualization services, they can actually make multiple NAS devices appear on the network like a single device, or they can eliminate some of these single points of failure. But it is fairly complex to deploy the storage virtualization software to accomplish this.
Now, in a storage area network, multiple storage devices are attached to a Fibre Channel network. That's really what a storage area network is all about. That's Fibre Channel today, though you could also use a different type of network called iSCSI, which is the SCSI protocol over IP, over Ethernet.
But for all intents and purposes today, when I talk about a SAN, I'm going to be talking about a SAN based on Fibre Channel. So we have multiple storage devices connected to a Fibre Channel switch, and then we have multiple servers that are able to access those storage devices. Now, a key thing is that they essentially access the storage devices, and in a sense they share the storage devices, but they don't share the data on the storage devices.
And I'll get into a little bit of detail on that right now. So, why a SAN file system? Well, as I said, in a SAN you have multiple computers accessing the same storage device. In the example here, we've got an Xserve RAID connected to a Fibre Channel switch. The Xserve RAID presents two RAID controllers, hence the two Fibre Channel lines coming down from the top into the Fibre Channel switch, and from there to the three Xserves that are shown here.
These three Xserves are connected via Fibre Channel to the Fibre Channel switch, and each one of them can see the Xserve RAID. In fact, they could take that storage from the Xserve RAID and re-share it using network protocols such as AFP, SMB, or NFS across the network to these clients. So they can do the exact same thing that a NAS device or direct-attached storage can do.
However, the challenge with this, before you have a SAN file system, is that each RAID set is effectively a volume, and the volumes are connected separately to each one of the servers. So you have A, B, and C up there, denoted by the colors, and A, B, and C are each mounted by one of those servers. Each file system can only be mounted by one server. A server could mount multiple file systems, but any given file system, the A or the B or the C, can only be mounted by one server. It can't be mounted by two servers at the same time.
So, each RAID set is mapped to a server. That's a process called provisioning in the SAN world, and that's what that's all about. The reason a SAN file system is interesting is that you eliminate these data silos. Keep in mind that Server A over here only sees the data in Volume A, Server B only sees the data in Volume B, and Server C only sees the data in Volume C.
That's before a SAN file system. After a SAN file system, you can actually have data A, B, and C, as denoted up above, in a single volume. So this would be a single 3.5-terabyte volume on the Xserve RAID, and that volume can be seen by each of these servers.
So this is kind of a subtle difference between network-attached storage and SANs, in that the file system used for the SAN is shared by every computer attached to it, and that's what we're going to talk about today. Basically, the volumes created by that file system can be mounted directly, with direct block-level access, by each of the servers or desktops attached to the SAN.
So, multiple computers, all three of these computers, can read and write to the same volume at the same time, and this is block-level access. That's very, very different from a network file system, where a single file server shares the file system to multiple computers and all block-level access to the volume it's sharing is arbitrated by that one file server. This is very different. And, of course, each server could re-share that volume to all of these clients as well. So now we have three servers able to share the data that's on that volume out to the network.
Now, the nice thing about a SAN file system is that it's very, very easy to scale. Say you want to add more storage: you've gone past 3.5 terabytes, and you want to go to 5 or 6 terabytes. You literally add the storage in, and you can grow the file system.
And all of the systems that are attached directly to the file system simply see a larger volume with more space for more data. Now, the other thing you can do is add more servers to the SAN as well and share the data in the volume through more file servers.
So you can literally get more network performance out of the SAN than you would get with a single server sharing a volume, because now you have three, four, five, or six servers, as we show here, able to share the data that's on the same volume. So that's why a SAN file system is so important and so interesting. Now, a little overview of Xsan.
Xsan is for high-performance storage networking, so it's very, very fast. In fact, a properly configured Xsan system will be, and I don't say this lightly, much, much faster than the internal disk drive on your system. You will get whatever the storage architecture that you deploy can give you. Xsan is very, very lightweight in terms of its overhead on the storage infrastructure.
It has a high availability architecture. So you can imagine a system like this is going to be deployed in mission-critical environments. We've done a lot of things to make sure that when you deploy it that way, if any single failure happens somewhere on the system, we've got you covered. And I'm going to take you through some of that as well. Xsan provides some really, really interesting volume management tools. And that's where we get into some of the nomenclature I mentioned.
I'm going to walk you through a little bit of the volume management tools and show you what we're talking about there. And then later on, I know the guys are going to show you how to actually set up an Xsan volume.
[Transcript missing]
So from an HA perspective, we do a couple of things. There's a thing called a metadata controller, and I'm going to show you a little bit more about what that's all about in subsequent slides. The metadata controller is like the traffic cop for the SAN. What the metadata controller does is that any time a computer on the SAN wants to open a file, read a file, or write to a file, it first talks to the metadata controller.
And it says, you know, can I open that file? I'd like read/write access to this file XYZ. And the metadata controller checks and says, okay, no one else has read/write access right now, so here you go, you can have read/write access. And here's where the file is located.
But note the difference from a network file system. It doesn't say here's the file. It says here's where the file is located. And then the computer that requested the file goes in over Fibre Channel, directly to the storage at the block level, and reads or writes that file.
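To make that two-step flow concrete, here is a heavily simplified, self-contained sketch in C. All of the names, fields, and numbers are invented for illustration; this is not Xsan's actual wire protocol or API, just the shape of the idea: a small metadata exchange over IP, then bulk block I/O straight over Fibre Channel.

```c
#include <stdio.h>

/* Conceptual sketch of SAN file system access (names and numbers
   invented for illustration; not Xsan's actual protocol or API). */

typedef struct {
    long start_block;   /* where the file's extents live on the SAN */
    long block_count;
} Extent;

/* Step 1: a small request/response over the Ethernet metadata network. */
static int ask_metadata_controller(const char *path, Extent *out)
{
    printf("IP -> MDC: may I have read/write access to %s?\n", path);
    out->start_block = 4096;   /* stand-in answer: "here's WHERE it is" */
    out->block_count = 256;
    printf("IP <- MDC: granted; extents start at block %ld, %ld blocks\n",
           out->start_block, out->block_count);
    return 0;
}

/* Step 2: bulk data moves directly over Fibre Channel, at block level. */
static void read_blocks_direct(const Extent *e)
{
    printf("FC -> storage: read %ld blocks at %ld (no file server in the path)\n",
           e->block_count, e->start_block);
}

int main(void)
{
    Extent e;
    if (ask_metadata_controller("/Volumes/SanVolume/clip.mov", &e) == 0)
        read_blocks_direct(&e);   /* the MDC never touches the data itself */
    return 0;
}
```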
So that's what metadata controller failover is all about: providing a secondary metadata controller. You can imagine that if the metadata controller fails, you've got a real problem on your SAN, because nobody knows whether they're allowed to have access to a file, where the file's located, and so forth.
Xsan comes out of the box with support for metadata controller failover. So what happens in a metadata controller failover? If the metadata controller on your SAN fails, the computers running Xsan will get together and nominate another one of these computers to take over the duties of the metadata controller. They do that automatically, and it happens within seconds of a detected failure in the metadata controller.
So that's what metadata controller failover is all about. One other added note: because we ship Xsan with the metadata controller software built in, unlike a lot of other SAN vendors, any Xsan system connected to the SAN can be the metadata controller. So we support something called cascading failover.
This means that if the second metadata controller fails (say that person who stumbled into the first machine and kicked out the power cord stumbles into the next metadata controller and kicks out its power cord too), then the next system on the SAN will take over. And so as long as any one Xsan system is up and running, you have access to your data.
We think that's extremely important with these systems. Of course, the file system is journaled; you'd expect that. And that's why we can actually recover the file system within seconds of a failure: the other metadata controller takes over, replays the journal, and you're up and running within seconds.
We support something called Fibre Channel multipathing. So some of you may have purchased an Xserve, and you may have gotten the Fibre Channel card that goes along with it. We have a Fibre Channel host bus adapter, or HBA, that goes into the machine so that you can connect a Fibre Channel cable into the Xserve, or into a Power Mac G5.
The thing is, that card actually supports two connectors, and you can run both of those connectors to the Fibre Channel switch, where they can see the storage.
[Transcript missing]
From a volume management perspective, we support a range of things, and I'm going to walk you through this with some pictures. Storage pooling: we group multiple storage devices together, really multiple RAID devices typically, and you'll hear me and others refer to them as LUNs, logical unit numbers. A LUN is simply a logical grouping of storage that's hidden behind some form of RAID controller, typically.
And so, for example, with the Xserve RAID, as many of you know, it has two RAID controllers in it, with seven drives on either side attached to each of the RAID controllers. If you took the seven over here and did, say, RAID 5 across them all on that RAID controller, and the seven over here, RAID 5 across them on that controller, then when you looked at it across the Fibre Channel network with the Xsan admin tools, you would see two LUNs.
And these two LUNs can then be grouped together into something we call a storage pool. In fact, you can group multiple LUNs across multiple devices into multiple storage pools. So I'll show you how we can do that sort of thing and what that's all about. We can dynamically scale volumes, as I mentioned earlier, so you can grow these volumes.
We support things like volume mapping and affinities, and I'm going to go into a little bit of detail on this. You can decide with the Xsan admin tools which systems connected to the SAN are able to mount which volumes, and you can force that mount to occur right from your admin console. As I mentioned, the admin console works remotely, so you can actually run it on a PowerBook or an iBook, connect into your SAN remotely, and take care of that type of management.
Affinities I'll go through with a diagram; it becomes really, really clear what affinities are all about with a diagram. We do have offline defragmentation tools. Now, the file system is designed to minimize, if not eliminate, fragmentation. But there are, of course, certain operations out at the fringes that can cause some amount of fragmentation, so we include the defragmentation tools along with the product so that you can run them on the file system if necessary.
So let's talk about how we create a volume. I mentioned the Xserve RAID. I build a RAID set, as I mentioned, from, say, those seven drives. I format it, or initialize it I should say, as RAID 5, and I do that with the RAID Admin tools. We have to be very clear about which tools we use with Xsan, because you will not be using Disk Utility with Xsan.
Disk Utility is for creating HFS Plus or UFS volumes and doing software RAID. Xsan has its own volume format, so when you create an Xsan volume, you'll be using the Xsan admin tools to do it. So we create a RAID 5 LUN. We haven't got a volume yet, but we've got a LUN, and it shows up on Fibre Channel.
We can take multiple LUNs, as I mentioned, and put them into a storage pool. The storage pool allows you to group multiple LUNs that are similar to each other. So take multiple RAID 5 LUNs, each, say, one and a quarter terabytes in size, and group them together. What Xsan does is automatically stripe data across the storage pool. So now you have this logical grouping of storage, this logical pool of RAID 5 LUNs.
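For a feel of what striping buys you, here is the general round-robin striping arithmetic in a small C sketch. The stripe-unit size and LUN count are made-up parameters, and this is the textbook technique rather than Xsan's actual on-disk layout: consecutive stripe units rotate across the LUNs, so a large sequential transfer keeps every LUN in the pool busy at once.

```c
#include <stdio.h>

/* Generic round-robin striping arithmetic (illustrative only, not
   Xsan's actual layout). A pool of lun_count LUNs with a stripe unit
   of unit_blocks blocks: logical block n lands on exactly one LUN at
   a computable offset. */
static void map_block(long n, int lun_count, long unit_blocks,
                      int *lun, long *offset)
{
    long stripe = n / unit_blocks;        /* which stripe unit overall */
    long within = n % unit_blocks;        /* position inside that unit */
    *lun    = (int)(stripe % lun_count);  /* units rotate across LUNs  */
    *offset = (stripe / lun_count) * unit_blocks + within;
}

int main(void)
{
    int lun; long off;
    for (long n = 0; n < 8; n++) {
        map_block(n, 4, 2, &lun, &off);   /* 4 LUNs, 2-block stripe unit */
        printf("logical block %ld -> LUN %d, block %ld\n", n, lun, off);
    }
    return 0;
}
```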
You can actually create multiple pools and put them into a single volume. Now, in the example I'm showing here, we have a RAID 5 storage pool made up of one or more RAID 5 LUNs, and a RAID 1, or mirrored-set, storage pool made up of one or more RAID 1 mirrored sets.
Okay, so we've grouped the two of them together in a single volume. Now some of you out there are saying, "Well, that's kind of cool, but where does my data go? Does it go on the RAID 5 hardware, does it go on the RAID 1 hardware, just exactly where does it go?" And that's what's kind of interesting about affinities. Now, we all know the RAID 5, in theory at least, would be a little bit faster than the RAID 1.
And the RAID 1 is the most protected type of storage that you have; it's a mirrored set. So you have fastest and most protected. Now, what you can do through affinities is associate those storage pools with directories, folders in the Finder, in that volume. So if you had a couple of desktops hooked up, as I've shown here, you can literally provide the user with two folders. In this case, I've called them Fastest and Most Protected. You could call them Work in Progress and Final Work, for example, and the work in progress could go to the RAID 5, and the final work could go to the RAID 1.
So, in reality, affinities will really help you set up interesting workflows across the SAN, moving data around the SAN into the type or category or class of storage that is most appropriate for the kind of data you want to store in it. This is a really interesting feature of Xsan, and I invite you to check it out as well.
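Conceptually, an affinity is just a mapping from a folder to a storage pool that is consulted when new files are allocated. Here is a toy model of that lookup in C; the volume path, folder names, and pool names are all hypothetical, and in real life you would configure this with the Xsan admin tools rather than write any code.

```c
#include <stdio.h>
#include <string.h>

/* Toy model of an affinity table: folder prefix -> storage pool.
   (Illustrative only; Xsan configures affinities via its admin tools.) */
static const struct { const char *folder; const char *pool; } affinity[] = {
    { "/Volumes/SanVolume/Fastest/",       "RAID5-pool" },
    { "/Volumes/SanVolume/MostProtected/", "RAID1-pool" },
};

static const char *pool_for(const char *path)
{
    for (size_t i = 0; i < sizeof(affinity) / sizeof(affinity[0]); i++)
        if (strncmp(path, affinity[i].folder,
                    strlen(affinity[i].folder)) == 0)
            return affinity[i].pool;
    return "default-pool";   /* no affinity: any pool in the volume */
}

int main(void)
{
    /* New files land on the pool their folder has an affinity for. */
    printf("%s\n", pool_for("/Volumes/SanVolume/Fastest/take1.mov"));
    printf("%s\n", pool_for("/Volumes/SanVolume/MostProtected/final.mov"));
    return 0;
}
```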
From a remote administration perspective, as I mentioned, it's fairly easy to configure. We do real-time monitoring and event notifications; we'll do things like email you or page you in the event of a metadata controller failover, a Fibre Channel problem, or a quota problem. We support usage quotas, both soft and hard. And we do have integration with LDAP, and this is where I'll take a little digression, if you'll allow me.
From an Xsan perspective, the file system sits underneath the standard file system API set in Mac OS X and Mac OS X Server. So for all intents and purposes, it just looks like another locally attached, internal, if you will, volume or drive on your system. And this is really, really interesting for a couple of reasons, with a couple of caveats, too. That volume looks and behaves mostly like UFS: it follows POSIX locking semantics, and it is case-sensitive.
So all Xsan volumes will be case-sensitive. And we are relying on all of you to do the proper thing from a locking standpoint in your applications. So when you open a file for a user with read/write access, open it properly; typically in Carbon, this takes care of itself automatically.
But if you're doing a Cocoa app or a Unix application, open it and assert the POSIX lock for that operation. We will adhere to that and use it in Xsan to make sure that your data doesn't get trampled by some other user. So that's where Xsan sits.
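Here is a minimal sketch in C of the advice above, using only standard POSIX calls (the file path is hypothetical): open the file read/write, assert an advisory whole-file write lock with fcntl, do the work, and release the lock.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical path to a file on a mounted Xsan volume. */
    const char *path = "/Volumes/SanVolume/project/scene01.mov";

    int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Assert an advisory POSIX write lock over the whole file.
       Xsan honors these locks across the computers on the SAN. */
    struct flock lk = {0};
    lk.l_type   = F_WRLCK;
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;          /* 0 means "to end of file" */

    if (fcntl(fd, F_SETLKW, &lk) < 0) {   /* blocks until granted */
        perror("fcntl");
        close(fd);
        return 1;
    }

    /* ... read and write the file safely here ... */

    lk.l_type = F_UNLCK;      /* release the lock before closing */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}
```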
And the reason that location in the stack is interesting is that, on our systems, it means you can control access to the folders and files in Xsan using a directory server. So when a person logs into the machine, or when they remotely attach to a server and go in using, say, a file service, they have an identity, a UID, that has been given to them by the directory server at login.
And that UID is the one that's established at login time. All of the machines connected to the directory server, in other words, all the machines on the SAN connected to the directory server, recognize that UID, and we'll use that UID to determine whether or not that person has access to a folder or file on the Xsan volume.
And because Xsan is coming out in the fall, I'll jump ahead a little bit: right now we're adhering to POSIX permissions, but not ACLs, on Xsan. So we use the permissions model that's built into Panther for that. But that gives you control over who has access to data in Xsan, and it gives you full integration with an LDAP directory server, and for that matter with a Kerberos infrastructure as well, because we're sitting, as I said, below the file system API layer in the operating system.
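Because Xsan sits below the normal file system APIs and enforces the POSIX permissions model, an ordinary permission check behaves the same on an Xsan volume as on a local disk. A small sketch in C, with a hypothetical path:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical folder on a mounted Xsan volume. */
    const char *path = "/Volumes/SanVolume/FinalWork";

    /* The caller's UID came from the directory server at login;
       every machine on the SAN bound to the same LDAP directory
       resolves it identically. */
    printf("my uid: %d\n", (int)getuid());

    struct stat st;
    if (stat(path, &st) == 0)
        printf("owner uid: %d, mode: %03o\n",
               (int)st.st_uid, (unsigned)(st.st_mode & 0777));

    /* access() answers using the standard POSIX permission model,
       which is what Xsan enforces here (not ACLs). */
    if (access(path, R_OK | W_OK) == 0)
        printf("read/write access granted\n");
    else
        perror("access");
    return 0;
}
```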
So, I mentioned a complete SAN solution, and I mentioned that you need partners to really pull this off. From a hardware perspective, we offer the Xserve RAID, and Xsan will be qualified on the Xserve RAID. It will be qualified with the Xserve G4 and G5, and also with the Power Mac G5 and with Power Mac G4s back to the dual 800s or better.
We have our own Fibre Channel PCI card that we're qualifying the system with as well. It's going to be qualified with our professional software, including Mac OS X, Mac OS X Server, and many of our professional applications. We have worked with the Fibre Channel switch vendors, and we're testing with their switches. There's a list of switches on our website that we're testing with, from Brocade, QLogic, and Emulex, so check those out.
And from a data management and interoperability standpoint, we have worked with ADIC to ensure that you can add Windows, Linux, and other Unix systems to the SAN. ADIC is supporting Xsan with their StorNext File System (the compatibility is with their StorNext File System), and they're also supporting Xsan with their StorNext Management Suite.
Now, what that means is that you can use their data management tools as a foundation for something called information lifecycle management, which is policy-based storage management. They have some really, really great products to do this, and you'll be able to use those with Xsan. I think of it as backup on steroids: basically, you'll be able to migrate your data from Tier 1 to Tier 2 storage and all the way back to tape, and to automate a lot of that process.
So just a little view of how this can happen. You can literally put Xsan on, say, a Power Mac G5 or an Xserve G5 and have that connected in; it sees the SAN volume. There's a metadata controller, so Xsan is up there running on another Xserve G5 as well. But you could also take a couple of servers running Windows or Linux and have those servers connected into the SAN. Those machines would be running the StorNext File System from ADIC.
You can purchase it from them; it will install and run and allow those machines to see the Xsan volumes. And you could also use the StorNext storage management software to back up and maintain all of the data in your SAN. So this is a full and complete solution that we're offering here. Let me take a drink of water here, and then I want to take you through some example deployments quickly.
And in the walkthrough of those example deployments, I also want to show you specifically how you set up some of the infrastructure for Xsan. Now, in the later session, they're going to get right under the covers and show you more of this, but I want to give all of you an overview of how all of this works.
So when you're setting up an Xsan, you need a storage device, which is an Xserve RAID. You need a Fibre Channel switch. And you need, obviously, the Xsan systems, the systems that are going to access the storage; these will be servers or desktops. You run Xsan on each of these, and they all connect over Fibre Channel to the storage area network. One of these systems you're going to designate as your metadata controller; remember, that's the traffic cop for the SAN.
So, with a Fibre Channel network you get high speed: on each fiber you'll get roughly 2 gigabits per second, or about 200 megabytes per second. And we have numbers up on the website on what the Xserve RAID can actually give you sustained, but it's on the order of 180 to 190 megabytes per second sustained throughput.
Correct me, guys, if I'm wrong on that one, but that's my understanding of those numbers. So you get that out of the Xserve RAID, and you can, of course, provide that to each of those machines. Now, each of these systems, and I want to be very clear about this, needs to have a copy of Xsan running on it.
So you buy four copies of Xsan, install them on the four systems, get them configured using the Xsan admin tools, select one machine as the metadata controller, decide which of the other machines should be the backup or standby metadata controller if the primary fails, and you're on your way. Well, sort of.
There's a back-channel network that needs to be deployed with Xsan; we refer to it as the metadata network. Now, I don't want to confuse you with the metadata that's associated with the Spotlight technology. The metadata we're talking about here is file system metadata: file system locks and locations and all of that sort of stuff. The file system metadata is moved around on a separate metadata network, which is just a TCP/IP Ethernet network that you set up between all of these systems.
What it provides is out-of-band communications. And the reason this is important, and the reason it's architected this way, is that what you're sending back and forth as metadata is small packets, small requests: "I need to open up this file." "Okay, you can open up this file." Fibre Channel is really optimized for large chunks of data.
And so, if we were to take all the metadata operations that would happen on a large SAN and run them across Fibre Channel, we'd be bogging down your Fibre Channel network, where all your storage performance is, with metadata communications. So, to optimize around this, we put the data, the big chunks of information, on the storage network, on the Fibre Channel network, and we put the metadata on a separate network.
Now, the actual metadata lives on the SAN, but it's accessed and cached by the metadata controller, and access to it is controlled by the metadata controller. So, it's a separate, out-of-band network. Now, a lot of people have come up to me and said, so do I have to use a separate network for the metadata network? And the answer is, while it's IP, you don't strictly have to use a separate one.
But if you use your local area network, for example, and someone initiates a large FTP transfer across that network for whatever reason, that may inhibit some of the metadata traffic on that network, and that may slow down your file system. In fact, I can almost guarantee it'll slow down your file system, because it will delay those packets being sent to the metadata controller. So what we recommend, very strongly recommend, is that you use a separate network for this metadata network.
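One standard POSIX way to keep traffic on a dedicated network from a multi-homed machine is to bind the socket to that interface's address before connecting. The addresses and port below are hypothetical, and this is a generic illustration of out-of-band IP traffic, not how Xsan itself opens its connections:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    /* Bind to the local address of the dedicated metadata NIC
       (hypothetical private network 10.1.0.0/24) so this traffic
       never competes with the general-purpose LAN. */
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = 0;                    /* any local port */
    inet_pton(AF_INET, "10.1.0.2", &local.sin_addr);
    if (bind(s, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");
        close(s);
        return 1;
    }

    /* Connect to the metadata controller on the same private net
       (hypothetical address and port). */
    struct sockaddr_in mdc;
    memset(&mdc, 0, sizeof(mdc));
    mdc.sin_family = AF_INET;
    mdc.sin_port = htons(49152);
    inet_pton(AF_INET, "10.1.0.1", &mdc.sin_addr);
    if (connect(s, (struct sockaddr *)&mdc, sizeof(mdc)) < 0)
        perror("connect");

    close(s);
    return 0;
}
```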
Now, the good news is that you can do that and still share the data over Ethernet when you're using, for example, an Xserve G5, because we have two Gigabit Ethernet connectors on the back of the Xserve G5. So use one of them for the metadata network and use the second one for your local area network. And, as I mentioned, you can then share the data using common networking protocols to as many clients as you want over your local network.
Now, these machines, to be clear, the bottom ones, the desktops that are shown here, do not have to be running Xsan. You do not need an Xsan license for them. Of course, with Mac OS X Server, you get an unlimited-client license, so you can connect up as many clients for file sharing as you want. And we don't care where that data comes from. If it comes from the SAN, that's great.
So, as I mentioned, there's dual Gigabit Ethernet in the Xserve G5 to support this. Now you could also, in another sense, use this as the storage pool for a web service, whether that's a website, QuickTime Streaming Server, or something like that, and have Internet clients connected to it.
But in any case, there are two networks: the main network where the data transfer happens, and the metadata network, where the file system does all of its operations, with the exception of the actual block-level access to storage, which happens over the Fibre Channel network.
When we deploy Xsan in kind of a post-production environment, we deploy Xsan to desktops, and I'm sure this is familiar to those of you in the audience who do a lot of work in video. So you create a SAN volume using multiple Xserve RAIDs and a Fibre Channel switch; you need a metadata controller, and a server makes a nice metadata controller; and then you set up the desktops with access to the SAN.
You can also, on the same network, put a server there and have it re-share some of that information using QuickTime Streaming Server. So you can imagine a workflow where you're doing video editing, and, for example, at the end of the day or at the end of their time working on the project, an editor makes a QuickTime movie and posts it into a special folder where QuickTime Streaming Server can share it out to, say, a producer or a director who wants to just check in over the network, without having to come down to the editing bay. And, of course, you could put appropriate security and access control around all of that.
So this is an example of using Xsan in a post-production workflow. Now, the thing I'm not showing here that you do have to set up again is that private Ethernet network that connects all of these systems and the metadata controller. So there'll be a private Ethernet network. You can multi-home these systems, so you could use a second Ethernet connection.
You can put another Gigabit Ethernet card into the Xserve G5, for example, or you could even use an AirPort connection to connect to the Internet or to your local network. You want to reserve one Ethernet port for the metadata network.
Computational clusters are another area that we're looking at with Xsan. In this case, we're showing a very large cluster. For small clusters, as I said, up to 64 systems can connect to the SAN. So if you had a small cluster of 4, 16, or 32 systems, you could actually have them all connected directly to the SAN over Fibre Channel with block-level access to a single storage pool, which means they can all be using the exact same data, writing back to the exact same location, at the same time. Now, suppose you have a very large deployment; of course, we use the Virginia Tech example all the time, 1,100 G5s.
Now, of course, Xsan wasn't available then, so they weren't able to do this. But if they had chosen to do so, they could have created an environment where, of those 1,100, say 55 of them were connected into the SAN directly, and those 55 were the head nodes, each responsible for another 20 machines in the cluster.
So each head node connects at the block level over Fibre Channel directly into the SAN, and then it uses NFS, or AFP, to share the data to the other Xserves in the cluster, the 20 for which it's responsible. And you create a network between those 20 and their head node, and another 20 and their head node, and so on.
And now you've covered 1,100 machines. And 55 of them are getting data directly from the SAN at very high speed, and they're able to re-share it over, say, Gigabit Ethernet to another 20 each. I can tell you that beats trying to re-share that data over Gigabit Ethernet to all 1,100. So this is a way of more easily, or more quickly, distributing data around a computational cluster.
So I'm going to go through a few of our products here and run through some of that as soon as I get another drink of water. I'll run through some of this and give you an idea of the kind of products that we're offering in this space. And then I want to give you an idea of why we think we can be very successful in this space: in video, in the regular storage consolidation projects that I've shown you, and also in computational clustering, where price/performance is really what counts.
So, Mac OS X Server, qualified with Xsan, delivers all sorts of great services; I'm sure most of you caught the earlier sessions on what the next version of the server is all about. We'll be qualifying Tiger Server with Xsan as well, when Tiger Server is closer to availability. The Xserve G5: phenomenal performance in a 1U server, an amazing box, and I'm sure you've seen some of the sessions on that as well. Wow.
We do offer a Fibre Channel PCI card; many of you may not have realized that. This is a PCI card that goes into an Xserve or into a Power Mac and delivers dual ports at 2 gigabits per second on each channel. It's a really, really nice card, a 64-bit, 66 MHz PCI card. The neat thing is we actually include the cables, and these cables on the street would be something like $150 to $200 apiece, but we include them as part of the package for the PCI card. And we have the Xserve RAID.
This is a 3.5-terabyte solution in a 3U enclosure. Again, very, very high speed: if you stripe across the channels, as it says, over 336 megabytes per second striping across the two controllers. Very easy to set up; this is RAID setup in three easy steps. A very, very nice solution.
This one's actually qualified not only for Mac OS X, but also for Windows and Linux. And as we said, it'll be qualified for Xsan. And it's a nice solution at $3 per gigabyte. Just as a digression, I have this joke with the guy who runs our server and storage hardware. I said he should only have three slides on Xserve RAID. The first one should say $3 per gigabyte. The second one should say $3 per gigabyte. And the third one should say, did I mention $3 per gigabyte? It's a beautiful, beautiful box.
And Xsan, as I mentioned: concurrent access to shared volumes for up to 64 computers. A lot of great features in there from a high-availability, or HA, standpoint. It gives you direct block-level, high-speed access over Fibre Channel to the storage. It provides some really, really cool volume management capabilities. It comes with some great remote administration tools. And of course, we have great cross-platform support with full interoperability with the StorNext File System from ADIC. So a really, really complete solution there.
And we have some great server and storage support options; we got a lot of questions on this yesterday, and I just invite you to check out the latest offerings we have in that space. I do want to run you through the prices. The thing that's key here, and the reason I'm running through some of the marketing slides right now, is that a lot of people have said to us: look, we look at iSCSI, we look at Fibre Channel, we're not sure what's going to happen, but it sure looks like iSCSI is going to take over.
And when we look at it, we say, well, first of all, you'll never be doing video over iSCSI, not in the foreseeable future. In fact, it's very, very difficult to get any kind of high performance out of iSCSI, and when you do, you end up using really interesting proprietary hardware, which drives the expense way up. Most people want iSCSI because it's cheaper than Fibre Channel; that's the clear reason why they want that sort of thing. So we took a look at it, and we feel that Fibre Channel is actually ripe for price reduction. And so we're going to change this market entirely by offering these products at an incredibly low price compared to anything else on the market. Yes, I know. So: $999 for Mac OS X Server with the unlimited-client license. The Xserve G5 at $3,000, with the server included. The PCI card at $500, with the cables included. The Xserve RAID at that $11,000 price with everything you can put in it: 3.5 terabytes of storage, 128 MB of cache, etc., etc. (at $11,000 for 3.5 terabytes, that works out to roughly $3 per gigabyte). And then Xsan at under $1,000 per node, with unlimited capacity. So.
So as some of you know, from a software standpoint, that's easily a fifth to a tenth of the price of comparable solutions on the market today. And that's no exaggeration; you can check it out with other vendors. It's a fifth to a tenth the price of comparable solutions. And many solutions actually charge you on the basis of the amount of storage that you put in place. Many solutions charge you extra for the metadata controller software. All of that is built into Xsan.
So as I said, we're driving down the cost of this infrastructure. Xsan will be available in the fall of 2004, and we're going to have it out to you then. Xsan is in beta today; we made our first beta available at NAB. There's a program up on the site, apple.com/xsan, and there's a little link over in the corner referring to the beta program. So you can go to apple.com/xsan/beta if you want to sign up for the beta program.
Now, the great thing about this, in a sense, is that we've had several hundred people already sign up for the program in the couple of months we've been making it available. And I will tell you, to be part of the program (this is why I'm fascinated that that many people have signed up), you have to have an Xserve RAID, and you have to have a qualified Fibre Channel switch.
You also have to have two or more of either Xserve G5s, Xserve G4s, or Power Mac G5s, and you have to have our Fibre Channel PCI cards. So in reality, that's $30,000 to $40,000 worth of gear that you have to take offline and use for Xsan testing.
Yeah, I really appreciate it. I know there are a lot of you out there in the audience today who are doing that with us, and we really, really appreciate it. If you are interested in doing that, go sign up at the website, and we'll get you the bits. We are backlogged today in terms of requests; we've just been overwhelmed with them, but we really appreciate that.