[NTLUG:Discuss] Summit presentation on virtualization and clustering

Chris Cox cjcox at acm.org
Sat Jul 5 11:53:05 CDT 2008


Robert Pearson wrote:
...
> 
> Chris, what would you use instead of iSCSI?

Fibre Channel

> There seem to be configurations in the low cost area that demand iSCSI.

You're right... there are SOME configurations where iSCSI can
be useful.  In any of the following cases, for example:

1. You have NO money (and I mean NONE).

2. You don't need much more performance than what you get with
single-drive solutions (or even NAS... though NAS will be slower... it's
just that at some point, you don't care... right?  If you cared, you'd
want something better than iSCSI.)

3. The machines in your cluster are not close to each other
physically.

4. You want to route to storage over a WAN (risky, but possible).

5. You just HAVE to use your existing network infrastructure
for SAN (a VERY, VERY bad idea).

If the goal is high-speed, reliable storage that is VERY easy
to set up, give me fibre channel any day.  Steps:

Create a new LUN.
Zone it at the switch (if you have a switch).
Mask it (to keep other machines off it that are in the same zone).
Mount it.
Done.
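On the Linux host side, the last couple of those steps look roughly like this (the host number and device name are assumptions; your HBA driver and array will differ):

```shell
# After the LUN is zoned and masked to this host, rescan the HBA
# so the kernel sees the new device (host1 is an assumption):
echo "- - -" > /sys/class/scsi_host/host1/scan

# Confirm the new block device showed up:
cat /proc/scsi/scsi

# Make a filesystem and mount it (assuming it appeared as /dev/sdb):
mkfs.ext3 /dev/sdb
mkdir -p /mnt/san
mount /dev/sdb /mnt/san
```

The array-side LUN creation and switch-side zoning/masking are vendor-specific, so they're not shown here.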

IMHO, iSCSI isn't all that useful unless we're talking about
multi-terabyte RAID arrays... again, that's MY opinion...
And of course, once you go down the path to buy a multi-terabyte
RAID system... you just left mom & pop budget land :)

AND... once you're considering a multi-terabyte SAN unit...
you might as well go 4Gb fibre... it's simpler, faster,
longer runs, more reliable, etc.  But FC does cost more...
no doubt (UNLESS you consider 10gE for your iSCSI.. then
the tables flip flop at least in the short term).

iSCSI Example:

1 Nexsan 42TB Array (RAID6) (does either iSCSI or FC)
    $50K
1 Cisco 1Gb 32 port ethernet switch $10K (ok... Cisco
switches are VERY high priced, but if you're going to
do a SAN, a cheapy store-and-forward type switch is
probably NOT a wise idea)
32 cables $200
32 NICs $500

Total: 50 + 10 + .2 + .5 = $60.7K

Performance: ~100MB/sec (highly variable, 70MB/sec is typical)

FC Example:

1 Nexsan 42TB Array (RAID6) (does either iSCSI or FC)
    $50K
1 Cisco 32 port fibre switch with 32 4Gb SFPs $17K
32 fibre cables $500
32 HBAs $15K

Total: 50 + 17 + .5 + 15 = $82.5K

Performance: ~380MB/sec stable sustained
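Checking the arithmetic (totals in $K, per the line items above; the cost-per-throughput figures use the sustained numbers):

```shell
# Recompute the two build-out totals from the examples (all in $K):
iscsi_total=$(awk 'BEGIN { print 50 + 10 + 0.2 + 0.5 }')
fc_total=$(awk 'BEGIN { print 50 + 17 + 0.5 + 15 }')
echo "iSCSI build-out: \$${iscsi_total}K    FC build-out: \$${fc_total}K"

# Dollars per MB/sec of sustained throughput (100 vs 380 MB/sec):
awk -v i="$iscsi_total" -v f="$fc_total" 'BEGIN {
    printf "iSCSI: $%.0f per MB/sec   FC: $%.0f per MB/sec\n",
           i * 1000 / 100, f * 1000 / 380
}'
```

That works out to about $607 per MB/sec for the iSCSI build versus about $217 per MB/sec for FC, so by that measure the FC premium buys roughly 3x better cost per unit of throughput.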

Now... that may seem like a lot to pay for 4-5x
(probably at LEAST 5x, btw) the performance, but realize
that the cost of infrastructure goes down a bit since
you're not purchasing switches (ethernet or fibre) all
of the time.  It's more likely that you'll be adding
more SAN storage... and that makes the extra fibre cost,
as a percentage of the total cost, lower.

In both cases, you can increase single-host performance
by bonding interfaces together.  But, you'll have
to upgrade the storage array to something nicer to get
much above 500MB/sec.  Aggregation does NOT mean
single-app-to-disk throughput increases.
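On the ethernet side, bonding of that era looks something like this (the interface names, addresses, and mode are assumptions; check your distro's bonding docs):

```shell
# Load the bonding driver (mode/miimon values are assumptions):
modprobe bonding mode=balance-rr miimon=100

# Bring up the bond and enslave two GigE NICs to it:
ifconfig bond0 192.168.10.5 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
```

Even so, as noted above, a single application's stream to one target won't magically exceed what one path can carry.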

We don't have our iSCSI appliance anymore; we've gone
all 4Gb fibre... but if you want some comparisons
between DAS, NAS and FC... I can try to do some.

Here are some other IMHO ratings (best to worst):

DAS = Direct Attached Storage - e.g. a SATA/SCSI/SAS drive, or any
       of these in combination with RAID.
FC = Fibre Channel (assume 4Gb)
iSCSI = Internet SCSI (assume 1Gb)
NAS = Network Attached Storage (assume NFSv3)

Performance

1. DAS (if you've got the HW, 500MB/sec or more)
2. FC (380MB/sec+)
3. iSCSI (70-100MB/sec)
4. NAS (30-60MB/sec)

Flexibility

1. NAS (like a swiss army knife)
2. iSCSI (because it's routable)
3. FC (very long distances, stable)
4. DAS (eh... limited by many things)

Ease of use

1. DAS (plug it in)
2. FC (plug, zone, mask)
3. NAS (define, export)
4. iSCSI (aack.. see Thomas's article)
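For contrast with the FC steps earlier, the Linux open-iscsi side alone involves something like this (the portal address and IQN are made up purely for illustration):

```shell
# Discover targets on the array (portal IP is an assumption):
iscsiadm -m discovery -t sendtargets -p 192.168.10.50

# Log in to a discovered target (IQN is illustrative):
iscsiadm -m node -T iqn.1999-02.com.nexsan:target0 \
    -p 192.168.10.50 --login
```

...and that's before you deal with CHAP authentication, multipath, and making sure sessions come back in the right order at boot.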

Reliability

1. FC (very reliable)
2. DAS (#2 mostly because of termination issues and such)
3. NAS (tolerates a mess load of bad things)
4. iSCSI (I know, that may seem unfair, but
it's just that people don't understand that a
SAN makes a network FRAGILE; you can't do
"normal" ethernet network tricks on a network
housing iSCSI.  If you have a separate, isolated
iSCSI network AND you know what you can and cannot
do because of that... then maybe you can bump
this up a notch.)



