Subject: Re: gunzip|dd causes dd to fail w/ new gzip
To: None <>
From: der Mouse <mouse@Rodents.Montreal.QC.CA>
List: tech-userlevel
Date: 07/08/2004 19:55:41
> ``Just because the writer issues a 16k write does not guarantee that
>   the reader will receive all the data in one read call.

Right.  Pipes are byte streams, not record streams.  (Worse, if the
data producer is slow compared to the machine, they can act enough like
record streams to confuse developers who do not really grok them.)

>   Changing gzip may improve the probability that it will work, but to
>   be certain, the proper arguments to dd should be specified.''

Or better yet, a tool better suited to the task than dd should be used.

Long ago, I wrote something called catblock, specifically designed to
turn a byte stream into a block stream; it looks as though[%] this is
exactly the sort of task it was designed for.

[%] Not "like" - doesn't _anyone_ know the difference any longer?!

> With the above command, the default is ibs=512.  Do pipes guarantee
> that at least 512 bytes will be read?

I don't think so; a read on a pipe blocks only until at least one byte
is queued, then returns whatever is there, so a 512-byte read can
easily come back short.  But does that matter?  If bs= (as opposed to
ibs= and/or obs=) isn't given, I think dd is supposed to reblock the
input into full output blocks; certainly the manpage I have at hand
says it does, but that's just one particular implementation.

/~\ The ASCII				der Mouse
\ / Ribbon Campaign
 X  Against HTML
/ \ Email!	     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B