Subject: mkfifo command - was Question on pipes (off-topic)
To: None <firstname.lastname@example.org>
From: John Maier <email@example.com>
Date: 08/28/2000 14:57:03
Humm, mkfifo is cool!
There is a note in the man page that "The mkfifo utility is expected to be
IEEE Std1003.2-1992 (``POSIX.2'') compliant."
The man page was written in '94; by now I would think we'd know whether it is
or not :-)
Think I should submit a PR?
Midamerica Internet Services
----- Original Message -----
From: "Kevin Cousins" <firstname.lastname@example.org>
To: "John Maier" <email@example.com>; <firstname.lastname@example.org>
Cc: "Mason Loring Bliss" <email@example.com>; "Todd Whitesel"
Sent: Monday, August 28, 2000 11:57 PM
Subject: Re: Question on pipes (off-topic)
Mason> Or, perhaps better, since NetBSD supplies this cool
Mason> functionality already:
Mason> tmpfile=`mktemp /tmp/whoisdata.XXXXX`
Mason> rm $tmpfile
Off topic, I know, but FIFOs can be a lot of fun (c.f. mkfifo(1)).
When faced with the prospect of dealing with prohibitively 'ken huge
(even when compressed) datasets, and just barely enough real storage
for compressed input and output, yet with significant preconditioning
and postconditioning to be done, I have been known to shuffle data
between processes with lengthy pipelines built from tee(1) and several
FIFOs:
mkfifo input-data intermediate-result.1 # ...
gunzip -c input.data.gz | tee input-data |
( first-processing-pipeline ) > intermediate-result.1 &
cat input-data |
( second-processing-pipeline ) > intermediate-result.2 &
cat intermediate-result.1 |
( last-processing-pipeline ) | gzip > output.data.gz &
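For anyone who wants to try the trick at a small scale, here's a minimal,
self-contained sketch of the same idea (the file names and the tiny test
input are made up for illustration): tee(1) sends one copy of the stream
down a FIFO for a second consumer while the first consumer reads tee's
stdout, so neither side ever needs a full temporary copy on disk.

```shell
#!/bin/sh
# Sketch only, assuming a POSIX shell, mktemp(1), and mkfifo(1).
dir=$(mktemp -d)
mkfifo "$dir/fifo"                         # named pipe for consumer 2

printf 'one\ntwo\nthree\n' > "$dir/input"  # stand-in for the real dataset

# Consumer 1 counts tee's stdout in the background;
# tee simultaneously feeds the FIFO for consumer 2.
tee "$dir/fifo" < "$dir/input" | wc -l > "$dir/count1" &
wc -l < "$dir/fifo" > "$dir/count2"        # consumer 2 drains the FIFO
wait

count1=$(tr -d ' ' < "$dir/count1")        # strip wc's BSD-style padding
count2=$(tr -d ' ' < "$dir/count2")
echo "$count1 $count2"
rm -r "$dir"
```

Note the ordering: tee blocks on opening the FIFO for writing until some
process opens it for reading, which is why one consumer has to run in the
background.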
Using this sort of approach, I was able to variously parse ~8 million
records (each 1K long), in several passes, in less than 15 minutes on
a 500MHz Alpha! The compressed input was ~100MB.
Still, this might be overkill for John's situation. I like Mason's