Subject: Re: dump and tape block size
To: VaX#n8 <vax@linkdead.paranoia.com>
From: Brett Lymn <blymn@awadi.com.au>
List: current-users
Date: 05/24/1996 15:41:22
According to VaX#n8:
>
>Tape block size(512) is not a multiple of dump block size (1024)
>

You can work around this by using dd to reblock the data for you.  Try
doing something like:

dd if=/dev/tape_device bs=512b | restore ivf -

And see what happens.
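To see why the reblocking works, here is a rough sketch with an ordinary file standing in for the tape device (the device name and sizes are placeholders, not your actual setup): dd reads the input in whatever block size you give it, and the pipe delivers a plain byte stream, so restore on the other end is no longer at the mercy of the on-tape block size.

```shell
# Sketch only: a plain file stands in for the tape device here.
# Write a fake "tape" of four 512-byte blocks, then read it back
# through dd with a 1024-byte block size; the pipe downstream sees
# one contiguous 2048-byte stream regardless of the on-tape blocking.
dd if=/dev/zero of=fake-tape.img bs=512 count=4 2>/dev/null
dd if=fake-tape.img bs=1024 2>/dev/null | wc -c    # prints 2048
```

(Note that in the command above, dd's `b` suffix means 512-byte units, so `bs=512b` is a 256 KB read buffer — a large read that gets chopped into whatever restore asks for on the other side of the pipe.)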


>
>Other than that, it _seems_ okay, but I'm wary.  Anyone know what is
>going on?  I'm assuming dump expects to read() 1024-bytes at a time
>and isn't smart enough to do multi reads if it doesn't get everything
>it wants, but I'm not sure.  Any ideas?
>

dump is very dumb when it comes to blocking the data - if it does not
get exactly what it wants, it will spit the dummy.  As a rule, if I
use a non-standard blocking factor when writing a dump, I make sure
to use the same one when reading the tape.
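Something like the following, say - the device name /dev/nrst0 and the factor 32 here are just placeholders for whatever your setup uses; the point is only that the `b` argument matches on both ends:

```shell
# Sketch only: device name and factor are placeholders.
# Write the dump with an explicit blocking factor...
dump 0ubf 32 /dev/nrst0 /usr
# ...and read it back with the very same factor.
restore rbf 32 /dev/nrst0
```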


>And what is the "32"?
>

Probably the default blocking factor (like tar's is 20).
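If it is a blocking factor, the arithmetic is straightforward, since factors are counted in 512-byte units:

```shell
# Blocking factors count 512-byte units, so:
echo $((32 * 512))   # record size at factor 32: 16384 bytes (16 KB)
echo $((20 * 512))   # tar's default record size: 10240 bytes (10 KB)
```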

BTW:  It seems multi-tape dumps on my QIC150 (wt driver) are broken.
Restore complains about resyncing and either crashes or misses some
files.  I am not sure where the problem lies, but it used to work OK
back when the driver only supported a block size of 1.  I have not
tried this on current (still doing my guinea pig thing of quivering
in the corner of my box with my whiskers twitching, awaiting the
great ctm experiment to begin ;-) so I have not made much of it...

-- 
Brett Lymn, Computer Systems Administrator, AWA Defence Industries
===============================================================================
  "Upgrading your memory gives you MORE RAM!" - ad in MacWAREHOUSE catalogue.