
Re: Welcome to tech-cluster



"Aaron J. Grier" <agrier%poofygoof.com@localhost> wrote:
> On Mon, Oct 20, 2003 at 11:07:36PM -0400, Jan Schaumann wrote:
> 
> > (For starters, I'll toss in the cluster I'm administering at work --
> > http://guinness.cs.stevens-tech.edu/~jschauma/hpcf/ -- if you have
> > questions regarding this setup, feel free to post here.)
> 
> how do you replicate the setup on multiple nodes?  a master HD image and
> g4u or something similar?

Actually, we're using rsync -- the nodes run rsyncd on the internal
interface.  The main file server has the image installed in
/usr/local/node, so we can easily upgrade by building into this
location.  For each node, only two files differ (/etc/rc.conf and
/etc/inetd.conf, since they contain the node's IP address); these are
rsynced in a subsequent pass.

Since the drives on the nodes are mounted read-only, the rsync script we
use has the following steps (sketched below):

- run any initial commands on the remote host, taken from a regular file
  if it exists
- re-mount all partitions read-write
- rsync everything
- rsync special etc files
- run any post-commands on the remote host, taken from a regular file if
  it exists
- re-mount all partitions read-only
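
In rough sketch form (the hostname argument, the pre/post command
files, the rsync module name, and the use of ssh for the remote
commands are illustrative here, not necessarily our exact setup):

    #!/bin/sh
    # Rough sketch only -- names and paths are assumptions.
    node=$1

    # initial commands on the remote host, if a pre-file exists
    [ -f "pre/$node" ] && ssh "$node" sh < "pre/$node"

    # re-mount read-write (the real script does all partitions)
    ssh "$node" "mount -u -o rw /"

    # rsync everything from the master image ...
    rsync -a --delete /usr/local/node/ "$node::image/"

    # ... then the per-node etc files that carry the IP address
    rsync -a "etc/$node/rc.conf" "etc/$node/inetd.conf" \
        "$node::image/etc/"

    # post-commands, then back to read-only
    [ -f "post/$node" ] && ssh "$node" sh < "post/$node"
    ssh "$node" "mount -u -o ro /"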

> I have visions of making a NetBSD equivalent to kickstart via a mix of
> netbooting, auto-install scripts, and cfengine, but am not sure where to
> start...

That would be interesting.  I've never used kickstart, but I guess you'd
start by booting a kernel from the network via dhcp/tftp, then either
nfs-mount the root filesystem or extract the sets from wherever if the
client has a disk.  Or something.
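
For instance, a netboot entry in ISC dhcpd.conf might look something
like this (MAC, addresses, and paths made up for illustration):

    host node01 {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 10.0.0.101;
        next-server 10.0.0.1;                    # TFTP server
        filename "pxeboot_ia32.bin";             # i386 network bootloader
        option root-path "/export/node01/root";  # NFS root
    }

From there an auto-install script could partition the disk and extract
the sets much like sysinst does, and cfengine could take over for the
per-host configuration.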

-Jan

-- 
A common mistake that people make when trying to design something completely
foolproof is to underestimate the ingenuity of complete fools.



