Subject: dump/restore problems with large output files?
To: None <netbsd-users@netbsd.org>
From: Steve Bellovin <smb@research.att.com>
List: netbsd-users
Date: 05/02/2001 09:52:52
I use a desktop machine with a large IDE disk (~75G) as a backup
server for my laptop.  Twice in a row, I've been unable to use
the output of a level 0 dump of /usr; I'm wondering what the
problem might be.

The dump is done over the (switched, 100BaseT) network via ssh:

/sbin/dump -0 -u -f - /usr | ssh 135.207.228.197 "umask 077; cat >$Full/usr.$x"

When I did a trial restore on the server machine, things weren't
healthy -- restore said that it reached end-of-file in the middle of
a file, and that it couldn't finish processing many assorted files.

The dump was taken with the laptop in single-user mode, so the file
system was quite still.  But the output file is about 8G long, which
has me concerned.  Are there any file size limits in dump, i.e.,
something that's 32 bits long that should be longer?  How well-tested
is file system code with files that large?  For that matter, the
server has an ASUS A7Pro motherboard with VIA chips.  (But there were
no error messages during either the dump or the restore.)
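To put the 32-bit worry in concrete numbers: a signed 32-bit byte count wraps at 2 GiB and an unsigned one at 4 GiB, both well under the ~8 GiB output file, so any 32-bit offset or length field in the path would have overflowed. A quick shell arithmetic check:

```shell
# Compare the ~8 GiB dump file against 32-bit byte-count limits.
signed_max=2147483647                  # 2^31 - 1, largest signed 32-bit value
unsigned_max=4294967295                # 2^32 - 1, largest unsigned 32-bit value
dumpsize=$((8 * 1024 * 1024 * 1024))   # ~8 GiB, the observed output size

echo "exceeds signed 32-bit:   $((dumpsize > signed_max))"
echo "exceeds unsigned 32-bit: $((dumpsize > unsigned_max))"
```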

Any suggestions?  (I'm contemplating rewriting my dump script to
use the host:filename,filename... syntax and limiting each file to 2G
or so.  But dump wants the files to exist, which is annoying.)  I
may also try two consecutive dumps, and run cmp on the output files.
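One way to keep every output file under 2G without fighting dump's requirement that the files already exist is to split the stream on the server side instead. A sketch, assuming split(1) on the server accepts a byte-count size (the 2000m chunk size and output prefix are illustrative, the host and $Full follow my script above):

```shell
#!/bin/sh
# Hypothetical variant of the backup command: pipe the dump stream
# through split on the server so no single file crosses 2 GB.
/sbin/dump -0 -u -f - /usr |
    ssh 135.207.228.197 "umask 077; split -b 2000m - \$Full/usr.\$x."

# Restore would then concatenate the pieces back into one stream:
#   cat $Full/usr.$x.* | restore -r -f -
```

The same approach makes the cmp test easy: run two dumps through the same pipeline and cmp the corresponding pieces pairwise.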

Here is some sample output from 'restore -N' on last night's dump:

# nice -20 restore -N -r -f /usr/B*/berkshire/full/*usr*
Mount tape volume 2
Enter ``none'' if there are no more tapes
otherwise enter tape name (default: /usr/BACKUPS/berkshire/full/usr.laptop-dump.0.2001.05.01) none
bad entry: incomplete operations
name: ./home/smb/doc/carniv_final.pdf
parent name ./home/smb/doc
sibling name: ./home/smb/doc/capi.ps.gz
next hashchain name: ./home/smb/Mail/to-smb/4093
entry type: LEAF
inode number: 2143987
flags: NEW
abort? [yn]