tech-cluster archive


Re: cluster install (was Re: Welcome to tech-cluster)



> 
> On Tue, 21 Oct 2003, MLH wrote:
> 
> > Sounds good. Do you know of a way to obtain a MAC address without
> > hooking up a keyboard and monitor or opening the box? The boxes
> > are sealed by M&A and the warranty is voided if we open them.
> 
> Ah, this wants a slightly different technique, but one that would still
> probably be faster than using a floppy, and, in fact, probably faster
> than the one I suggested earlier.
> 
> You need a "setup server" with a separate network for the machines
> that it's going to set up. You also need to have new boxes with no OS
> installed, so that they'll try to netboot right off. Your configuration
> server, when it sees a netboot request from an unknown MAC address, adds
> the MAC address to the database, assigns a new IP address for it (also
> adding this to the database), and then boots the machine and runs the
> install and configuration code with the appropriate setup for that new
> IP address. Assuming you have a private network for the compute nodes
> (a la Stevens), you'd just plug in a new node, power it on, and you're
> done.
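
For reference, ISC dhcpd gets most of the way toward that out of
the box: a dynamic pool that allows unknown clients will hand a
brand-new box an address on its first netboot attempt, and dhcpd
logs the MAC. A minimal sketch, with made-up addresses for a
private setup network:

    # /etc/dhcpd.conf -- hypothetical 10.0.0.0/24 setup network
    subnet 10.0.0.0 netmask 255.255.255.0 {
        allow unknown-clients;        # accept requests from new MACs
        range 10.0.0.100 10.0.0.200;  # dynamic pool for unregistered nodes
        next-server 10.0.0.1;         # tftp server holding the boot loader
        filename "pxeboot_ia32.bin";  # NetBSD PXE second-stage loader
    }

The "add it to the database" step would still be custom code, but
the discovery and address assignment come for free.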

What we do right now: for each subnet of cluster nodes, we have
a master node which controls access to that subnet. One
essentially logs into that node and runs jobs from there using
SGE, though jobs can also be started from one's desktop Sun. The
master node handles the NFS mounts for all of the nodes it is
responsible for, arbitrates CPU use, and manages the tasks and
incidental logistics - mostly using SGE.
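
In practice a run is just a qsub from the master node. A typical
invocation (script and output names are made up):

    # submit an analysis job; SGE picks an idle node in the subnet
    qsub -cwd -o run.out -e run.err ./run_analysis.sh
    qstat -f    # then watch which node it landed on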

We try to keep NFS traffic to a minimum, but sometimes it is simply
more efficient than copying the same files to all of the machines.
We do install almost all software (as much as is feasible) on each
node to keep network traffic down to just what is essential to run
the analysis jobs. We can't afford gigabit cards/switches in the
racks, but the main switches are gigabit. We run 24 boxes per rack
with one 100baseT switch per rack.

Some of what you recommend we have discussed, but we haven't had
the manpower/expertise to implement it.
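
On the original question of getting MAC addresses out of sealed
boxes: if the nodes DHCP or netboot at power-on, the setup network
itself will tell you. A sketch, assuming the server's interface is
fxp0:

    # watch for DHCP requests; -e prints the client MAC on each line
    tcpdump -n -e -i fxp0 port bootps

Once dhcpd is running, its syslog output also records the client
MAC with each DHCPDISCOVER, so grepping the log works too.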

We are using Intel PXE NICs, though:
fxp0 at pci2 dev 4 function 0: i82550 Ethernet, rev 16
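
The i82550s should PXE-boot NetBSD's pxeboot fine; the server side
is just tftpd plus the dhcpd entries sketched above. Roughly,
assuming /tftpboot as the tftp root:

    # copy the NetBSD PXE bootstrap into the tftp area
    cp /usr/mdec/pxeboot_ia32.bin /tftpboot/
    # enable tftpd in /etc/inetd.conf, chrooted to /tftpboot:
    #   tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /tftpboot
    kill -HUP $(cat /var/run/inetd.pid)   # have inetd reread its config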

I have two of the MP boxes here in my office - one I use for
building NetBSD release sets, pkg binaries, etc. (I'm trying to
figure out how to automate this), and another I'm using as a
development workflow server.
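
For the release-set half, build.sh makes most of it cron-able
already (the paths here are just examples):

    # unprivileged full release build; suitable for a nightly cron job
    cd /usr/src && ./build.sh -U -m i386 \
        -O /usr/obj -T /usr/tools -R /usr/release release

The pkg binaries are the harder part to automate cleanly; something
like pkg_comp (pkgtools/pkg_comp in pkgsrc) builds them inside a
chroot so the build host stays unpolluted.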

> If you can't get the machines without an OS installed, you could
> probably use a wee bit of trickery to make a CD or floppy that would
> boot to the point where it would do the rest off the network. The key is
> to let the server node do the assignment of an IP address and any other
> machine-specific information.

They all came with Solaris pre-installed. Plus I will likely be
using existing nodes to test NetBSD on (even though I'll be
trashing the drives). The next 500-600 CPUs will likely be
4-processor Opteron boxes with Linux pre-installed, so we will need
to trash that as well. But they all do have CD drives.
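
Since they all have CD drives, a stock NetBSD install CD gets far
enough that everything else can come off the network: sysinst will
fetch the sets via FTP or HTTP from the master node. By hand it is
roughly (server address made up):

    # from the install shell, after dhclient fxp0 and disklabel/newfs:
    cd /targetroot                           # wherever the new root is mounted
    for f in kern base etc comp; do
        ftp -a http://10.0.0.1/sets/$f.tgz   # NetBSD ftp(1) speaks HTTP
        tar xzpf $f.tgz
    done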



