Subject: Re: Tape access over net
To: Jukka Marin <jmarin@pyy.jmp.fi>
From: Luke Mewburn <lukem@wasabisystems.com>
List: netbsd-users
Date: 04/04/2001 00:20:09
On Tue, Apr 03, 2001 at 05:06:45PM +0300, Jukka Marin wrote:
> > > > tar cvf - <data to backup>|ssh host "dd bs=32k of=/dev/rst0"
> > > 
> > > Does dd do double buffering?  If not, the tape would stop streaming if
> > > there was a slight delay in network or on the other machine?  I wanted
> > > to use buffer because it does double buffering and simultaneous read/write
> > > to the buffer.
> > 
> > tape drive has its own buffer (mine about 1MB)
> 
> I'm trying with "dd bs=32k" now.  I have tried using "buffer" with bufsizes
> up to 8 MB and blocksizes of 10k, 32k, and 256k.  This is what I usually
> get:
> 
> buffer (writer): write of data failed: Input/output error
> bytes to write=262144, bytes written=-1, total written    3216384K
> Connection to tmp closed by remote host.
> 187.602u 94.750s 2:14:53.14 3.4%        0+0k 258+53io 75pf+0w
> 
> If I run tar on the system with the tape drive, I can fit much more data on
> the tape (same data, that is).  Weird..

A hint about doing backups this way. Instead of:
	dump .... | ssh host dd bs=32k of=/dev/rst0
use
	dump .... | ssh host dd obs=32k conv=osync of=/dev/rst0

("conv=osync obs=32k" instead of "bs=32k"). This should help prevent
various problems when restoring the volume.

I'm not sure if this will help with your tar problem, but it might.

I've used this conv=osync trick for years, even to the point of
porting the NetBSD dd to systems like Solaris to use it, and I've
generally been able to achieve the expected bandwidth to the tape,
taking into account if the network or local disk is the bottleneck.
E.g., on gigabit Ethernet writing to DLT-7000 drives I've seen between
5 MB/s and more than 10 MB/s, depending upon the compressibility of
the data.

Also, understand that ssh encryption has an effect on tape speed.
I was doing some testing earlier today between a Celeron 600 and a
PIII-600, and my figures for reading from /dev/rld0d and writing
to a remote /dev/null were something like:
    ssh v2
	3des-cbc		0.8 MB/s
	blowfish-cbc		3.3 MB/s
	cast128-cbc		2.5 MB/s
	arcfour			3.6 MB/s
    ssh v1
	blowfish		4.0 MB/s

FWIW: I was getting about 46 MB/s with "dd if=/dev/rld0d of=/dev/null bs=32K" !
This is using a 3ware Escalade 6400 IDE raid card with 4 x IBM DTLA
drives in a RAID-10 (AKA RAID 1+0 AKA a stripe of mirrors).

Luke.