[NTLUG:Discuss] recommendations for tape drive
Chris Cox
cjcox at acm.org
Mon Mar 17 13:31:41 CST 2003
kbrannen at gte.net wrote:
> Chris Cox wrote:
>
>> Greg Edwards wrote:
>>
> ...
>
>>> What backup tools would you recommend for the SC50? I've got my SC50
>>> installed but I haven't tried to do any backups yet and I'm
>>> considering BRU. I don't need a networked solution so their low end
>>> workstation solution would work for me. I run staggered cron jobs
>>> from my hosts to move data to my backup server now. I just need a
>>> solution that will move the data to tape unattended and allow a
>>> manual recovery.
>>
>>
>>
>> I don't have any preference. I've never found a backup utility that
>> I truly like. I usually make my own. If bru works for you.. I don't
>> think there is anything "wrong" with bru.
>>
>> Lately, I've been preferring a disk cached solution with archival
>> to tape. That way the users can quickly perform their own restores
>> of individual files or directories. Logical volume snapshots
>> are used for the daily incrementals (using LVM). We use rsync
>> to move data from the servers to the online disk cache. Using
>> mtx (for robotic tape units) and cpio, we take the weekly master backup
>> off of the cache to tape. We use this solution to handle terabytes of
>> data... overkill for small users.
>>
>> So for me, just the simple commands. Cron tasks are nice (and some of
>> our backups are managed this way), but can be
>> a problem when going to the same tape unit (when things don't happen
>> as scheduled). I prefer a single controlling program so that I know
>> everything is happening sequentially.
>
>
> Last time I was in this situation, I did almost the same. Each server
> had a specific time to copy its important dirs to the backup server.
> Cron would kick off a 30-40 line shell script to tar up the dirs, then
> use "curl" to copy the .tar.gz files to the backup server.
Yeah.. we couldn't do this because we don't have the local disk
space on each host to stage the tarballs. Doing a full copy across
the network to a remote storage host is also very expensive, so we
opted to go the rsync route to minimize the amount of time and
bandwidth used.
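The rsync step itself is nothing fancy. A rough sketch (the host
list, paths and cache location here are made up; the real scripts do
a lot more logging and error checking):

    #!/bin/sh
    # Pull the important dirs from each host into the disk cache.
    # Only changed files cross the wire, so the nightly run stays
    # cheap even over a slow link.
    CACHE=/backup/cache
    for host in host1 host2 host3; do
        rsync -a --delete -e ssh "$host:/export/data/" "$CACHE/$host/" ||
            echo "rsync from $host failed" | mail -s "backup warning" root
    done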
> Then after
> all those were done, the backup server had its own 30-line shell script
> to remove old archives (I kept a moving 4 day window online, i.e. <= 4
> days I could just grab it off the HD, > 4 days I had to go to tape), then
> copy all the new day's files to tape. It even emailed a report to me so
> I could verify it worked. The only manual things I had to do every day
> were to read the email and walk to the server room to change the tape
> (DLT).
Yes.. very similar. We use the LVM snapshot technique to minimize
storage (the snapshot only stores what changed) without having to
implement a lot of "code" to perform the feat.
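To give you the flavor of the snapshot part (the volume group name,
size and mount point are made up; size the snapshot for how much
change you expect before it gets dropped):

    #!/bin/sh
    # One snapshot of the cache volume per day.  The snapshot only
    # holds blocks that changed since it was taken, so a day's
    # incremental costs very little disk and users can browse it
    # themselves for restores.
    DAY=`date +%a`
    # drop last week's snapshot of the same name, then take today's
    umount /backup/days/$DAY 2>/dev/null
    lvremove -f /dev/vg00/cache_$DAY 2>/dev/null
    lvcreate -L 2G -s -n cache_$DAY /dev/vg00/cache
    mount -o ro /dev/vg00/cache_$DAY /backup/days/$DAY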
We've got a 73-tape library now :-)
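For the weekly master off the cache, the guts of it are roughly this
(the changer/drive device names and slot number are made up; the real
job walks the slots and logs which tape got what):

    #!/bin/sh
    # Load the tape from slot 1 into drive 0 of the changer, dump the
    # cache to it with cpio, then put the tape back in its slot.
    mtx -f /dev/sg1 load 1 0
    cd /backup/cache
    find . -depth -print | cpio -o -H crc > /dev/nst0
    mt -f /dev/nst0 offline
    mtx -f /dev/sg1 unload 1 0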
We will probably (though we haven't done this yet) publish the logs
to a web page.. shoot, we may even add something to generate the
config tables and even help a user do restores from a web interface.
>
> All very easy if you know shell scripting, though it could be done in
> Perl or even C (if you had to). If I had to do it again today, about
> the only thing I might change would be to use scp instead of curl.
> Though curl is still a very useful utility in its own right, scp is a
> bit easier to use.
We use keyed ssh to kick things off on the remote hosts from
the backup server.
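A hypothetical sketch of the kick-off (the key path, user and script
name are made up); the controlling script on the backup server walks
the hosts one at a time so nothing overlaps:

    for host in host1 host2 host3; do
        ssh -i /root/.ssh/backup_key root@$host /usr/local/sbin/pre_backup.sh
    done

and each host's authorized_keys ties that key to the one command:

    command="/usr/local/sbin/pre_backup.sh",no-pty ssh-rsa AAAA... backup@server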
Things have gotten easier over time (using stuff like rsync and LVM
instead of doing everything via script). We still need good scripts,
but they're much smaller and less complicated now.