[NTLUG:Discuss] recommendations for tape drive
kbrannen@gte.net
Mon Mar 17 18:41:10 CST 2003
Greg Edwards wrote:
> kbrannen at gte.net wrote:
>
>>
>> All very easy if you know shell scripting, though it could be done in
>> Perl or even C (if you had to). If I had to do it again today, about
>> the only thing I might change would be to use scp instead of curl.
>> Though curl is still a very useful utility in its own right, scp is a
>> bit easier to use.
>>
>> HTH,
>> Kevin
>>
>
> I guess I'm really lazy ;)
Aren't we all? :-)
>
> I use find and a pipe through cpio -pdvm to an automounted (amd) nfs
> drive. Only copies modified and new files. When I want a fresh image I
> delete the tree on the backup server. My backup server currently has a
> 36G SCSI drive devoted to backups and doubles as my development box from
> an IDE drive. When the time comes I'll add SCSI drives to extend my
> capacity.
It just depends on your circumstances. Chris's are different still. NFS
wasn't something I wanted to do, or felt comfortable with, on our production
machines.
For those wondering how this might be done, here's the heart of my script:
backup_name="$host-$name-$tdate.tar.gz"
tar -cf - -T $file_list | \
    gzip | \
    curl -T /dev/stdin \
        ftp://$backup_id:$password@$backup_mach/$backup_dir/$backup_name
I expanded a variable into the Linux values above, because this script also
ran on Solaris machines, whose tar takes different args and forces gzip to
run as a separate process instead of using tar's "-z" option. So other than
a temp file to hold the filenames that need to be backed up (generated
earlier in the script), no other space was required on the server being
backed up.
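Since I mentioned I'd switch to scp today: scp can't portably read a stream
from stdin, so the usual trick is to pipe through ssh to a remote cat. Here's
a rough sketch; the host and variable names are hypothetical, and the
transport variable defaults to a local shell so the sketch runs without a
remote machine (set transport='ssh user@backuphost' for the real thing):

```shell
# Build some throwaway data and a file list, as the real script would.
workdir=$(mktemp -d)
echo "data" > "$workdir/f1"
file_list="$workdir/list"
printf '%s\n' "$workdir/f1" > "$file_list"

backup_name="demo-backup.tar.gz"
# Real use: transport='ssh user@backuphost' streams the archive over
# ssh to a remote cat.  The local default is only for demonstration.
transport=${transport:-sh -c}

# Same pipeline as before, with the ftp upload swapped for ssh.
tar -cf - -T "$file_list" | \
    gzip | \
    $transport "cat > $workdir/$backup_name"

ls -l "$workdir/$backup_name"
```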
Anybody else have any tricks to share? :-)
Kevin