[NTLUG:Discuss] Setting up a RAID
Chris Cox
cjcox at acm.org
Wed Apr 9 16:48:14 CDT 2008
Blue Dragon wrote:
> Rick Renshaw wrote:
>> --- Blue Dragon <thebluedragon at gmail.com> wrote:
>>
>>
>>> I am thinking about using 4 or 6 SATA drives on it and maybe having half
>>> of the drives for the main RAID array while the other half of the drives
>>> will be used to backup everything if possible. I am trying to determine
>>> what I will need for the case to get it running.
>>>
>>> Also if it is possible to use half of the hard drives to backup
>>> everything when needed how would I do that?
>>>
>> If you are using RAID 0+1 (1 to make a duplicate of every drive, and 0 to put the pairs together
>> into one large drive) then you can split the RAID 1 part of the array and remount the drive as
>> another device and back up that device. When the backup is done, rejoin the backup disks to the
>> array. Depending on the RAID controllers you are using (hardware and/or software) this may be
>> much easier said than done.
>>
>>
>>
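The split/backup/rejoin cycle Rick describes can be sketched
for Linux software RAID with mdadm. This is an illustrative
dry run only: the array, member device, mount point, and
backup target names are hypothetical, and it assumes the
old-style (0.90) metadata that mdadm wrote by default at the
time, which sits at the end of the member so a detached
mirror half can be mounted directly. Drop the "echo" wrapper
to run it for real (as root).

```shell
#!/bin/sh
# Sketch of splitting a RAID1 mirror half out for backup,
# then rejoining it. /dev/md0, /dev/sdc1, /mnt/backup and
# the rsync target are placeholders -- adjust for your setup.

run() { echo "$@"; }   # dry-run wrapper: prints instead of executing

# 1. Drop one half of the RAID1 mirror out of the array
#    (the array keeps running degraded on the other half)
run mdadm /dev/md0 --fail /dev/sdc1
run mdadm /dev/md0 --remove /dev/sdc1

# 2. Mount the detached half read-only and copy everything off
run mount -o ro /dev/sdc1 /mnt/backup
run rsync -a /mnt/backup/ /srv/backup-target/
run umount /mnt/backup

# 3. Rejoin the disk; the kernel resyncs it against the live half
run mdadm /dev/md0 --add /dev/sdc1
```

As Rick notes, hardware RAID controllers may or may not let
you do the equivalent; with Linux md it is at least scriptable.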
> I haven't bought any of the hardware yet. Any recommendations on a RAID
> controller? I am also thinking about building a server in a 1U or 2U
> case which will also manage the RAID.
I like RAID subsystems better. I've used a plethora of
internal RAID cards. They either all had some kind
of problem from the start, or didn't at first and
developed problems later as the (unsupported) Linux
drivers fell into disarray.
I've used internal cards primarily from LSI and Adaptec.
Exceptions to the rule: RAID boards/cards from HP (cciss)
and SOME Dell PERCs (the Adaptec ones, not the LSI). Those
are fairly well supported in Linux... mainly because Linux
support is of great interest to HP and Dell.
I'd avoid LSI like the plague, though at one time
it was my preferred choice. It's even problematic
on Dells with LSI PERC controllers.
RAID subsystems (even cheap ones) are of higher quality
than the RAID card solutions and work with almost
any configuration of Linux. Most subsystems deliver
performance comparable to, and in some cases considerably
better than, internal card solutions.
Specifically what I own/use today:
Arena Aiby 4bay tower PATA (RAID5) U160 SCSI
(love these... so small, so flexible)
Arena II 8bay tower PATA (RAID5) U160 SCSI
(note, this one was used when I bought it, it's
in REALLY bad physical shape AND it works GREAT!)
Arena Indy 6bay rack 2230 PATA (RAID5) U160 SCSI
Arena Indy 16bay rack 2600 PATA (RAID5) U160 SCSI
Accordance 2bay internal ARAID 2000 SATA (RAID1)
(inexpensive INTERNAL RAID0/1 subsystem, recommended!)
Nexsan 14bay rack ATAboy2 (RAID5) U320 SCSI
Nexsan 14bay rack ATAboy2f (RAID5) U320 SCSI
(actually WAS an ATAboy2, I bought the fibre
controllers and changed it into an ATAboy2f... have
two of these at home)
Nexsan 14bay rack ATAboy2x (RAID5) Fibre
(actually WAS an ATAboy2 gen 2, bought the fibre
controllers and changed it into an ATAboy2x)
Nexsan 42bay rack SATABeast (RAID5) Fibre
(we have two of these, one with 500G SATAs
and one with 1TB SATAs)
(the 42TB Nexsan will saturate a 4Gb Fibre
link with over 400M/sec throughput)
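As a sanity check on that figure (my arithmetic, not from
the Nexsan specs): 4Gb Fibre Channel runs at a 4.25 Gbaud
line rate with 8b/10b encoding, so only 8 of every 10 bits
on the wire are payload.

```shell
# Theoretical 4GFC payload bandwidth:
# 4.25e9 baud * 8/10 (8b/10b encoding) / 8 bits-per-byte
awk 'BEGIN { printf "%.0f MB/sec\n", 4.25e9 * 8/10 / 8 / 1e6 }'
# prints: 425 MB/sec
```

So "over 400M/sec" is essentially a saturated 4Gb link once
you allow for framing overhead.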
RAID cards I have used:
LSI Megaraid Elite 1600 (loved this one at
the time... no longer works well with Linux)
LSI Megaraid 320-2 (pretty slow contemporary
SCSI RAID board... so-so support in Linux)
LSI Megaraid 320-2x aka Intel SRCU42X (supposedly one of
the fastest boards out there... I'm NOT impressed)
Various Adaptec style PERC controllers on Dell (and
likewise some of the LSI ones, which are similar to
what I mentioned above)
HP cciss RAID controllers, these are the onboard or
system-provided RAID controllers used on HP's Proliant
server lines. Reliability of the onboard controller
on the DL380G2 line was a bit off... the rest have been
fine. Well supported in Linux... and where not, HP
supplied drivers (prior to cciss becoming part of
the Linux kernel).
If you're going to buy a 2U server, I highly recommend
the HP DL380G5 (or even the Opteron DL385) model. Very
well supported at least with Red Hat and SUSE. They
cost a bit more than a Dell (YMMV) but are about 10x
the quality. Performance? Not shabby... about 150M/sec
from their cciss RAID (across 4 drives I think is what
I tested).