[NTLUG:Discuss] Re: looking for raid & controller advice -- "FRAID" card = "software RAID"
Kevin Brannen
kbrannen at pwhome.com
Sat Dec 4 14:06:54 CST 2004
Bryan,
Thanks for all the info! I almost got lost in it, but managed to hang
on. :-)
For those who like fun thought questions, feel free to jump straight to
the bottom and reply. :-) But for those who want to enjoy the journey...
Bryan J. Smith wrote:
>[ FYI, there is a further discussion of this in the 2004 April article
>of Sys Admin magazine entitled "Dissecting ATA RAID Options." ]
>
>On Sat, 2004-12-04 at 02:54, Kevin Brannen wrote:
>
>
>>I need to build a file server for my church. Redundancy is a must,
>>since I've lost several drives in the recent past (I'm not too keen on
>>the WD2000 right now--I might even have some used ones to sell soon).
>>I'm thinking a cheap way to solve this (as opposed to buying a NAS
>>solution) is to get a semi-low cost computer, add 1G of RAM for lots of
>>cache, and stick a 3ware 7506-4LP in it with 3 250G EIDE drives in a
>>RAID-5 config,
>>
>>
>
>Why not 4 drives for the same storage in RAID-0+1?
>It will be much, much faster.
>
>
Because speed is not the issue or the goal. Sorry, I really should have
mentioned that! The file server I'd build only has to serve 2
computers, over the Gb ethernet card as I mentioned. So the NIC will be
the bottleneck. Nevertheless, my goal is data safety and capacity. The
2 client machines control CD duplicators, but even burning at 48X
shouldn't tax the file server. Presently, when the 2 clients talk to
each other, they can transfer a 500MB image to the other in about 20s.
Since it takes almost 3 minutes to burn a full CD, you can see that
speed is not an issue -- even if both are going at once. They will be
consistently serving large files (500MB+), so read cache won't matter;
write cache may not either when you consider 500MB files, though they
will be doing reading much more than writing.
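A quick back-of-the-envelope check of those numbers (using the standard 150 KB/s per "X" CD speed rating):

```shell
# Rough throughput math from the figures above (integer arithmetic, MB/s).
# Observed client-to-client copy: 500 MB in ~20 s.
observed=$((500 / 20))                 # 25 MB/s
# A 48X CD burner needs 48 * 150 KB/s, i.e. about 7 MB/s.
burner=$((48 * 150 / 1024))            # ~7 MB/s
echo "copy: ${observed} MB/s, burner: ${burner} MB/s"
# Gigabit Ethernet tops out around 125 MB/s on the wire, so even both
# burners plus a file copy leave plenty of headroom.
```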
If I were to go with 0+1, which I don't think I need, I'd have to get
the -8 version of the card, because I want to be able to approach TB
capacity over the next couple of years. I've got almost 200GB now and
am growing faster than planned, so I am concerned about size.
Because of the reliability concerns, I'm thinking hard about doing
RAID-5 with a hot-spare; which seems wasteful to me initially, until I
remember that I've just lost 2 drives in the last week, and now will
have to spend a day or more reloading images from old CDs. Grrr!
>running Linux and serving the files out the Gb network
>port with a Samba server. (Yes, the 2 clients are Win2k, ugh!) So far
>so good. I can get all the parts new, including a spare 4th drive for
>$1500, maybe somewhat less.
>...
>
>Only the 9500S series now leverages _both_ SRAM + DRAM for the
>_ultimate_ performance _regardless_ of RAID level. But you'll pay for
>it.
>
>In a nutshell, _no_ sub-$500 RAID-5 uC+DRAM controller I've seen can
>match 3Ware 7000/8000 at RAID-0+1 in write performance. With the cost
>of ATA drives being so low, it's much more price/performance effective
>to go RAID-0+1 IMHO. Unless you are talking 8+ drives.
>
>
OK, let's ignore 0+1 for a minute and discuss RAID-5. :-)
A 7506-8 is in the $390 area, a 9500S-8 is in the $440 area (both sub $500
cards BTW :-). Is the 9500S worth the extra $50? If yes, that's
probably $50 well spent and within my budget. Your thoughts?
Also, if I were to go with the 9500S-8, I only see SATA versions. I
haven't heard any good SATA success stories on Linux yet. Not on any
newsgroups, from friends, anywhere. (maybe that means I don't read
enough :-) Does the 9500S deal with that and just present an interface
to the Linux kernel so I shouldn't care? But that is why I've been
focusing on EIDE controllers.
>>* It advertises Linux support,
>>
>>
>
>3Ware has had a _stock_ kernel support since 2.2.15 (yes, that's _2.2_,
>not 2.4).
>
>...
>
>
Excellent!
>>and software called Disk Manager. Does DM work under Linux?
>>
>>
>
>Yes, there is a specific version for Linux, along with a CLI (command
>line interface) version (the two are mutually exclusive). The regular
>(non-CLI) DM appears as a web server, and you can then pull up a web
>browser to it. It only allows local access as root by default.
>
>...more good stuff...
>
>
Cool!
>>Is it useful? Or do you just tell the card via a BIOS like tool to go
>>RAID-5 and the card handles it all automatically and Linux sees the
>>card as 1 big drive.
>>
>>
>
>_Both_. _All_ "intelligent" RAID cards have _both_ a BIOS _and_ an
>on-board intelligence. That's how they differ from the "FRAID" cards.
>
>...
>
Hmm, OK, but I think I definitely need someone to help me on the SATA
question above. :-)
>>* Will this card demand to be the "first drive"?
>>
>>
>
>That's a BIOS setup issue. It's up to your BIOS settings on how you let
>the 3Ware card take control of your Int10h functions. But yes, the
>3Ware card does have a BIOS.
>
>Again, you seem to be focused on the 16-bit, Int10h "BIOS" services
>aspects. They are _not_ used once the OS loads. _All_ off-chipset
>ATA/SCSI cards, RAID or not, offer a "BIOS" for booting. So there is
>_no_ difference between a "regular" ATA card, a "FRAID" ATA card or an
>"intelligent" RAID card -- they _all_ have BIOSes.
>
>
OK, thanks! I understand all the RAID concepts, the implementation
details are what I'm trying to learn quickly.
>>I've got an extra PCI EIDE card in my home computer that insists on being
>>hda-hdd. I could live with this but would prefer the MB drive be hda,
>>and these drives be hde-hdh.
>>
>>
>
>The first 3Ware array will be /dev/sda, the next /dev/sdb, etc...
>assuming you have _no_ other SCSI drives/arrays.
>
>If you are modifying an existing system, you will need to build an
>initrd (initial ram/root disk) with the SCSI module, 3Ware card and SCSI
>disk drivers. You may also need to tell GRUB to map BIOS disk 80h (C:)
>to /dev/sda, if the 3Ware card is booting.
>
>If you are installing a distro new on the 3Ware card, it should do all
>this for you.
>
>
Excellent! The RAID-5 array will not be the boot area, but the huge
data area. I'll probably do RAID-1 for the boot drive with a pair of
80G drives I already have, as the MB has that onboard.
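Just to make sure I follow, on a Red Hat-style setup I'm guessing the initrd/GRUB steps you describe would look roughly like this (the mkinitrd flags and paths are my guess, not something I've run):

```shell
# My guess at the steps Bryan describes -- untested, and the exact
# mkinitrd options vary by distro.
# Rebuild the initrd so the 3Ware driver loads before the root mount:
mkinitrd --with=3w-xxxx /boot/initrd-$(uname -r).img $(uname -r)
# Then make sure GRUB's device.map ties BIOS disk 80h (C:) to the array:
#   (hd0)   /dev/sda
```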
>>* It advertises hot-swap (ain't gonna do it!) and hot-spare.
>>
>>
>
>Yes. Using only 1 ATA drive per channel, this is _very_safe_.
>
>
Safe is good! :-)
>>How does it tell you when it has lost a drive?
>>
>>
>
>It beeps.
>...various notifications...
>
OK, sounds good. (pun not intended ;-)
>>* If in the future I want to add 1 more disk because I have room on the
>>controller, will the card naturally just "expand"? (if i have to tell
>>it in some setup tool, that's OK) Or will I have to save everything
>>off, rebuild the whole array, then restore the data? If the latter,
>>then maybe I need to add the 4th drive in up front. :-)
>>
>>
>
>DM2 is supposed to allow dynamic rebuilding of a new, expanded layout.
>I have not tried this, though. And I would _never_ do such a thing.
>
>I would create a 2nd array. It's faster and safer.
>
>
Understood, but since I want 3 data drives plus parity plus a hot spare
(or so I'm now thinking after sleeping on it), I think I want the
8-channel card. And if that leaves me room to expand, why not?
>>* On single drive systems, I like to use a journaling file system (I
>>prefer ReiserFS on Suse and ext3 on RH). For RAID-5, does a journaling
>>FS matter?
>>
>>
>
>No. Volume management is independent of journaling.
>
>Additionally, the 3w-xxxx driver _does_ do a "flush" on shutdown. If
>you've seen how newer kernels "flush" the ATA devices (because most ATA
>drives have 1-8MB of SDRAM buffer), the 3w-xxxx driver does a "flush" of
>its SRAM (and SRAM+SDRAM in the case of the 3w-9xxx driver) at that same
>point before shutdown.
>
>
>
>>Or because of the redundancy will the faster but potentially
>>less reliable ext2 do just fine?
>>
>>
>
>Ext2 is _not_ "less reliable." Journaling does _not_ increase
>reliability**, that is a common and poor assumption.
>
>Journaling _only_ improves recovery time when a filesystem is left
>"inconsistent" (like on a power failure or improper shutdown).
>
>
I meant "less reliable" in terms of being less likely to recover if
something goes wrong, which has been my experience and which you seem to
confirm. Or so I will infer from your statement. :-)
> ...
>
>>Since the 3ware is about $240 and the Promise is about $110,
>>the difference is almost the cost of my spare drive. Is there any
>>reason I should not go for the Promise card? (looking for good & bad
>>experiences)
>>
>>
>
>It's a FRAID card. They are basically considered "hell" for Linux
>because all of the "brains" are in its drivers and that's a GPL issue.
>They also differ *0* from a "regular" ATA card. In fact, there are
>often "hacks" to upload the Promise FastTrak BIOS into a $35 Promise
>"regular" ATA card.
>
>You're paying $75 extra for _software_, *0* hardware. And your system
>interconnect gets the added 2x transfer requirement for mirroring.
>
>
I was looking for a 4-8 channel EIDE controller card last night that
didn't do RAID and was having bad luck. If you know of any of this ilk,
that would be appreciated. I agree, there's no need to spend the extra
$75 for software I might not use.
>With an "intelligent" RAID card, you avoid all that. And since there is
>no "brains" in the driver, but on the card itself, there is a 100% GPL
>driver. Not only for 3Ware in its 3w-xxxx/9xxx (which are in the stock
>kernels), but for the Promise _SuperTrak_ as well.
>
>
>
>>Linux also gives me the option of using Software RAID, but that will
>>require a 4-channel EIDE card because of the number of drives I want to
>>use. Does anyone know if the Promise TX4000 will support a non-RAID
>>config; i.e. just be an EIDE controller and not impose HW-RAID on me?
>>
>>
>
>That's _exactly_ what a TX4000 is!
>
>It's a "regular" ATA card with some "trick" 16-bit BIOS and a "trick"
>32-bit OS driver. If you don't load the "trick" driver, it is a
>"regular" ATA card!
>
>In fact, that's what you'll get when you load Linux on it!
>
>
OK, so I could use that for software RAID if I want to go there. Still
not sure what I want to do there, but I've got a few days to read,
think, and plan.
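If I do end up on the software-RAID side, my understanding is the setup would be something like this with mdadm (the device names are guesses for drives hanging off the add-in card, and I haven't actually tried it):

```shell
# Hypothetical mdadm sketch: 4-drive RAID-5 plus a hot spare, assuming
# the drives show up as /dev/hde through /dev/hdi on the extra controller.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --spare-devices=1 /dev/hde /dev/hdf /dev/hdg /dev/hdh /dev/hdi
mkfs.ext3 /dev/md0                # journaling FS on top of the array
mdadm --detail /dev/md0           # check the state and that the spare shows up
```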
> ...
>
>In fact, it's better to use Linux LVM/MD "software RAID" than to use a
>"FRAID" card. Because Linux knows how to better and more optimally
>organize the data than the FRAID card.
>
>
Interesting.
One last question if you're still with me. :-)
For about the same money I'd spend on the fileserver, I could buy 2
7506-4LP cards with 6 disks, and put 1 controller & 3 disks in each
client computer as RAID-5, then just have them sync up every night
(which I was doing anyway), instead of building a file server for both
clients to hit like a NAS. [When it's time to grow, I could add the 4th
disk to machine A, copy all files from B->A, expand B, then copy from
A->B to grow the data set without data loss.] Oh, I mustn't forget to
mention that both clients run mswin2k because of the duplication control
software; hence the reason I mentioned Samba in the original post. If
it matters, all machines are left on 24x7.
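Either way, the Win2k boxes would hit the share over Samba; I'm picturing a minimal smb.conf stanza along these lines (share name, path, and user are made up):

```
# Hypothetical smb.conf fragment for the image area; the path, share
# name, and user are illustrative only.
[images]
    path = /srv/images
    read only = no
    valid users = duplicator
```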
Do the scales tip towards this setup (RAID-5 individually plus mirror
between machines)? Or towards the fileserver (RAID-5 plus HS)? [While
the file server would give me an opportunity for Linux advocacy, it's a
fringe benefit and not the goal, so don't let that enter into the equation.]
Such interesting things to think about. :-)
Kevin