Subject: Re: remote job execution with make
To: Laine Stump <lainestump@rcn.com>
From: Todd Whitesel <toddpw@best.com>
List: current-users
Date: 01/03/2000 01:39:19
> I haven't tried using rsh, but my sense is that it would create a lot of
> overhead when each job is fairly short (such as compiling a .c file).

Having recently tried this on 20/25/40 MHz Suns and Macs, I disagree
with this as a general statement :)

For example, on a 20 MHz sun3, compiling a "typical" source file from
the kernel takes 2-3 minutes, about 2/3 of which is spent solely in
/usr/libexec/cc1. On a 40 MHz SPARC, running nroff over a "typical"
man page (to produce the cat page) takes 10 seconds.

Since both of those programs are filters in the best sense of the word,
remoting them to other machines is easy (as is building a cross cc1, it
turns out). I used a 233 MHz arm32 box (a SHARK) running NetBSD 1.4.1.
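
The per-file wrapper really is trivial, since rsh wires up stdin and
stdout for you; a pure filter remotes itself. A minimal sketch (the
host name "shark" is just a stand-in for whichever fast machine you
have):

    #!/bin/sh
    # Drop-in stand-in for /usr/libexec/cc1 on the slow machine.
    # rsh carries stdin (preprocessed C) out and stdout (assembler)
    # back, so a pure filter needs no extra plumbing.
    exec rsh shark /usr/libexec/cc1 ${1+"$@"}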

The average gcc -c rate for the sun3 tripled; the nroff rate for the
SPARC quintupled. I did some fidelity testing, too: cc1 output compares
equal to the local cc1's, but nroff has some niggling differences that
I suspect could be traced to differences in /usr/share between 1.4.1
and -current.

As for the rsh/ssh link: rsh is rock-solid but doesn't return the
remote command's exit code; ssh does, but depending on how I do the
stream forwarding it suffers occasional data loss (always the last
chunk of stdout) or outright hangs (observed only once so far).
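
A quick way to see the difference (transcript is illustrative, and
"shark" is again a stand-in host):

    $ rsh shark 'exit 42'; echo $?
    0                 # rsh only reports whether the connection worked
    $ ssh shark 'exit 42'; echo $?
    42                # ssh propagates the remote command's status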

As they say (somewhere), "research continues." My next idea is to wrap
rsh again, but spoof stderr a little to carry the exit code. It is of
course a hack, but that's pretty much a given here.
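
Something along these lines, say. It's purely a sketch: the @@EXIT
sentinel is made up and assumed never to appear in real diagnostics,
and stderr gets buffered in a temp file, so remote diagnostics only
show up once the job finishes (harmless enough for make output):

    #!/bin/sh
    # rrsh host cmd [args...] : run cmd remotely, recover its exit code.
    # The remote shell tags "@@EXIT=<n>" onto stderr; locally we peel
    # that line back off and pass everything else through untouched.
    host=$1; shift
    tmp=/tmp/rrsh.$$
    rsh $host "$* ; echo @@EXIT=\$? 1>&2" 2>$tmp
    grep -v '^@@EXIT=' $tmp 1>&2
    status=`sed -n 's/^@@EXIT=//p' $tmp`
    rm -f $tmp
    exit ${status:-1}      # no sentinel means rsh itself fell over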

Todd Whitesel
toddpw @ best.com