Subject: gzip problem on 1.3.2
To: None <port-sparc@netbsd.org>
From: Eric McWhorter <emcwhorter@xsis.xerox.com>
List: port-sparc
Date: 06/24/1999 13:11:14
I have a production system (i.e. difficult to upgrade) running
NetBSD/i386 1.3.2.  I keep the most recent copy of dumps for the
system online on a large disk array.  Due to the size of these files,
I really need to store them on the server compressed.  To make the
backup run more quickly, I dump to a disk file on the server, then
run gzip on the NetBSD box over an NFS mount of the server disk.  The
last time I tried this, the resulting gzipped file was 30 bytes and
was not a valid dump file.  The original file was a little over 2
gigs.  In other words, gzipping the dump file on the NetBSD box
trashes it.  The previous dump was just under 2 gigs, so perhaps
there is a two-gig boundary somewhere.

If I gzip the dump file on a Solaris host, the resulting file is just
under a gig and is a complete and valid dump file (I did a test
restore just to be sure).

I'm using different versions of gzip on the two hosts, but I don't
think that's the problem since I have the same problem with
compress(1). 

Does anyone know if this problem exists on newer versions of
NetBSD/i386?  I would prefer to not upgrade NetBSD at this point if I
can avoid it.

Thanks!

-- 
Eric McWhorter
Xerox Special Information Systems
emcwhorter@xsis.xerox.com