cheap san was (RE: [NTLUG:Discuss] File Size Limit)

Alfred Dayton linux at adayton.com
Fri Nov 21 14:17:00 CST 2003


Hello Chris,

       Your response is encouraging.  The xfers will be taking place between
box A (ATA/100 drives and controller) and box B (ATA/66 drives and
controller, a legacy PC).  Box B is used for "bulk" file storage only,
hence the desire to limit spending to a gigabit-only solution.

According to rough calculations at 60% wire capacity it will take about 66
minutes on a 100 Mbit NIC/LAN, or about 6.6 minutes on a gigabit NIC/LAN, to
xfer each 25 gig file.  Seven minutes would be GREAT :)

In fact at present it IS taking approx one hour for each xfer on the
existing 100 Mbit LAN.
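
(Rough math behind those numbers, sketched in Python; it assumes "25 gig"
means 25 GiB and 60% effective wire utilization, so it lands in the same
ballpark as the figures above rather than on exact minutes:)

# Back-of-the-envelope transfer time: file size over effective link speed.
# Real numbers vary with protocol overhead and how "25 gig" is counted.
def xfer_minutes(file_bytes, link_mbits, efficiency=0.6):
    effective_bits_per_sec = link_mbits * 1e6 * efficiency
    return (file_bytes * 8) / effective_bits_per_sec / 60.0

file_size = 25 * 2**30              # 25 GiB video file
for link in (100, 1000):            # 100 Mbit vs gigabit
    print("%4d Mbit/s: %.1f minutes" % (link, xfer_minutes(file_size, link)))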

By putting in gigabit NICs and a gigabit switch, would there still be a
substantial increase in throughput between these boxes?  I would use only
the Intel NICs, as you recommend; these are also my preferred NICs.  What I
cannot determine is the practical limit/bottleneck of the ATA/66 hard drive
portion of this equation.  If that throughput were large enough, it would be
worth the present cost of two gigabit NICs plus a 4/8-port gigabit switch.
Future expansion would replace the legacy PC with a Linux PC that has a
faster disk subsystem.  I also understand there exists an embedded Linux
SAN appliance which, though slightly more money, would provide very high
throughput to support multiple workstations at this site.
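
(One way to pin down the ATA/66 side would be to time a big sequential read
on box B itself; a minimal Python sketch, where the file path is only a
placeholder and the file should be larger than RAM so the page cache doesn't
inflate the number:)

# Time a large sequential read to estimate practical disk throughput.
import time

path = "/data/big_render_file.avi"   # placeholder path only
chunk = 8 * 1024 * 1024              # 8 MB reads

start = time.time()
total = 0
with open(path, "rb") as f:
    while True:
        buf = f.read(chunk)
        if not buf:
            break
        total += len(buf)
elapsed = time.time() - start

mb = total / float(2**20)
print("read %.0f MB in %.1f s -> %.1f MB/s" % (mb, elapsed, mb / elapsed))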

-----Original Message-----
From: discuss-bounces at ntlug.org [mailto:discuss-bounces at ntlug.org]On Behalf
Of Chris Cox
Sent: Friday, November 21, 2003 11:11 AM
To: NTLUG Discussion List
Subject: Re: cheap san was (RE: [NTLUG:Discuss] File Size Limit)

Alfred Dayton wrote:
> Speaking of "cheap mans san", I am trying to find method for xfer large
> (25 gig) video files off a Windows rendering workstation across network
> (10/100/1000) to a pc/san/nas device.  Perhaps Linux would provide a
> speedy solution???  The problem is it takes a Loooooong time presently to
> xfer from windows 2000 to windows 2000 boxes across a 10/100 mbs lan.
> What are the limiting factors in this scenario? I.e., would changing to
> 1 gig lan improve xfers or is the bottleneck in

ABSOLUTELY!!  The network (100mbit) is slower than the disk performance by
a large factor.

You can do things to speed up local access... but in your case it's your
network speed that is the bottleneck.

With regards to moving to a true SAN device... that should be
fiber... so you should be ok there.

Now your SAN "network" and your IP network are two separate things
(ideally).  So it's conceivable that you have a gigabit NIC (possibly
fiber... could be copper) and your SAN HBA controller (most likely fiber)
in the same box.  However, if it's just disk-to-host performance you
need... then the SAN HBA is the main thing... the gigabit NIC would be
used just in the transition (though nice to have if you are able to).

So... IMHO... if you go with a true SAN disk array (Nexsan is good if
on the cheap), hook up your Windows or Linux box via a fiber HBA
device (this is NOT a NIC!) to the SAN (possibly through an intervening
fiber switch... which is generally what you need if multiple hosts are
going to use disks carved out of the SAN disk array).  Once you've
established your new SAN disk architecture... do a one-time xfer to
the new SAN device and start working.  The SAN disks show up as
local SCSI drives.  You MUST have some education on SANs though... I'd
look at getting the SAN/NAS book from O'Reilly to start with (though
it does contain a somewhat serious error with regards to zoning security).

In general a SAN will be faster, but more complicated and costly to set up.
You're looking at $500 for every HBA; your fiber switch could run
you $$$$ to $$$$$ depending on model (GBICs for ports on the switch
add to the cost... anywhere from $300 - $1000 per port); and the cost of
fiber cabling is MUCH higher than copper.  If you go with an
"enterprise" class storage array... well... there's not enough room
for the $'s here (!!).

NAS is cheaper and easier to understand.  If you go the NAS route,
then I'd move to gigabit NICs.  I'm using the Intel 1000 FX boards
right now and I'm easily seeing 894Mbit host to host.  My experience
with the cheaper gigabit NICs is usually less than 600Mbit host to host.
But YMMV.
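
(A crude way to check host-to-host numbers like that yourself is a
throwaway socket blaster; a minimal Python sketch, with the port and the
2 GB transfer size chosen arbitrarily:)

# Crude host-to-host throughput check: run "server" on one box,
# then "client <host>" on the other.
import socket, sys, time

PORT = 5001
TOTAL = 2 * 1024**3          # push 2 GB
CHUNK = b"\0" * (1 << 20)    # 1 MB buffer

if sys.argv[1] == "server":
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    start, got = time.time(), 0
    while True:
        data = conn.recv(1 << 20)
        if not data:
            break
        got += len(data)
    secs = time.time() - start
    print("%.0f Mbit/s" % (got * 8 / secs / 1e6))
else:
    cli = socket.socket()
    cli.connect((sys.argv[2], PORT))
    sent = 0
    while sent < TOTAL:
        cli.sendall(CHUNK)
        sent += len(CHUNK)
    cli.close()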

Of course if money is infinite... do both SAN and NAS!!

>
> The hard drive/pci bus area?  I googled and checked drive mfgs for some
> kind of chart or white paper/faq on the subject matter but found none.
> Compaq Server Div several years ago had an excellent white paper on this
> subject but I can not find it again.
>
>    I am guessing that a small hardware raid stripe (3-6 drives) on each
> box, whether windows or linux is going

Striping has to do with throughput on a high speed bus... until
you eliminate the network (100Mbit) bottleneck.. this won't buy
you anything.
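
(To put rough numbers on that, assuming a single ATA drive of that era
sustains something like 30 MB/s:)

# Why striping doesn't help yet: even one drive outruns a 100 Mbit LAN.
single_drive = 30.0                  # MB/s sustained, rough assumption
two_way_stripe = 2 * single_drive    # ideal 2-disk stripe
lan_100mbit = 100e6 / 8 / 1e6        # 12.5 MB/s
lan_gigabit = 1000e6 / 8 / 1e6       # 125 MB/s
print("disk (1 drive):  %5.1f MB/s" % single_drive)
print("disk (2-stripe): %5.1f MB/s" % two_way_stripe)
print("100 Mbit LAN:    %5.1f MB/s" % lan_100mbit)
print("gigabit LAN:     %5.1f MB/s" % lan_gigabit)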

>
> To be the only answer to achieving realstic thruput to move the files in
> an acceptably short time.  If so, since xfers are only intermittent,
> perhaps software raid strip would work to keep cost down.

Hmmmm.. I'm confused here.  "High speed" and "video" don't usually bring
to mind words like "intermittent".  I usually think of running
tasks generating gobs of data during that period of time.  Producing
25GB of data takes enough time even with lightning fast buses and
setups.

>
> Thanks for any help point me in right direction.

Get the O'Reilly book: Using SANs and NAS by W. Curtis Preston.
I would do a lot of research/education before implementing/deploying.

>
> Alfred
>
> -----Original Message-----
> From: discuss-bounces at ntlug.org [mailto:discuss-bounces at ntlug.org]On
> Behalf Of gan hawk
> Sent: Thursday, November 20, 2003 1:58 AM
> To: discuss at ntlug.org
> Subject: Re: [NTLUG:Discuss] File Size Limits
 - - - - - - -  deleted original  - - - - - - - -  -



