Subject: Re: load balancing
To: None <newhouse@rockhead.com>
From: Johan Ihren <johani@pdc.kth.se>
List: current-users
Date: 07/22/1998 22:26:34
>>>>> "Paul" == Paul M Newhouse <newhouse@rockhead.com> writes:

    Paul>  Johan Ihren <johani@pdc.kth.se> wrote:
    >> I don't agree with this.

    Paul> I'm not sure what you don't agree with *8^)?  Chris was
    Paul> looking for "thoughts", which I took to mean "other
    Paul> possibilities".

Ok, I was a bit unclear. Sorry 'bout that. 

I don't intend to argue against HiPPI, on the contrary: it works, on
some machines (and for certain types of communication) it's really,
really fast and I absolutely look forward to GSN (aka
HiPPI6400). Neither do I argue against "other possibilities": that's
fine, and the more the merrier. However, I do argue that HiPPI is not
likely the best network interconnect for a PC cluster.

    Paul> Chris may well find HiPPI too pricey (I did mention it was
    Paul> more expensive).  If memory serves, each HiPPI NIC ran
    Paul> ~$2500 and a loaded switch was in the $20K-$25K range.  Not
    Paul> your typical SOHO choice/option *8^) but, it was something
    Paul> he could look into.

I.e. around $4K per complete port. That's quite a lot in the PC world.

    Paul> Chris hinted he wanted price/performance but, didn't give
    Paul> any clue as to his budget.  He might be able to tolerate a
    Paul> pricier option ...  maybe not.  I assumed he was looking for
    Paul> alternatives so he could make a rational analysis of his
    Paul> options.  I didn't tell him or anybody else that it was "the
    Paul> only way to go"!  *8)

Here's another problem. Although definitely pricier, HiPPI is not
necessarily "faster" than multiple fast ethernets. That depends on the
communication pattern of his application. But even if that's optimal,
you still won't see significantly more bits coming through the HiPPI
interface than four fast ethernets.
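To put some numbers on that claim, here's a back-of-envelope sketch using nominal link rates only (the rates are the standard figures; real throughput will be lower due to protocol overhead, and on a commodity PC the 32-bit/33 MHz PCI bus tops out around 1 Gbit/s in theory, so the practical gap shrinks further):

```python
# Nominal link rates in Mbit/s. These are the standard figures;
# actual application throughput is lower (protocol overhead, PCI
# bus limits, and the application's communication pattern).

HIPPI_MBPS = 800          # classic parallel HiPPI
FAST_ETHERNET_MBPS = 100  # 100BASE-TX, full duplex counted once

n_fe = 4  # e.g. one quad-port card or four single-port NICs
aggregate_fe = n_fe * FAST_ETHERNET_MBPS

print(f"HiPPI:             {HIPPI_MBPS} Mbit/s")
print(f"4 x fast ethernet: {aggregate_fe} Mbit/s")
print(f"nominal ratio:     {HIPPI_MBPS / aggregate_fe:.1f}x")
```

So even on paper HiPPI only buys a 2x edge over four fast ethernet ports, and a PCI-bus-limited host may not be able to keep even that.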

    Paul> I don't remember if NetBSD supports the Zynx card (? I think
    Paul> that's the right name?).  It has 4 fast ports per card, with
    Paul> the daughter card it acts as a switch.  Again, not as cheap
    Paul> as the NetGear 1 port 10/100 cards (under $30/card) at
    Paul> ~$1200 for 4 ports.  BUT, again something to look into.

Definitely. If only the Znyx people could start returning my
emails... ;-)

    Paul> It sounded to me like Chris wanted to present a spread of
    Paul> options not just one.  I am merely suggesting some fodder
    Paul> for the presentation.

Ok. Adding more fodder: if you absolutely need to buy more "bandwidth"
for a PC cluster than (multiple) fast ethernet can provide, the old
answer was in many cases Myrinet. The new answer (I believe) will be
gigabit ethernet. Right now we're right between the two, but each
month of delay works for gigabit ethernet and hence against Myrinet.

My point wrt HiPPI (or at least my opinion) is that a well designed
system typically exhibits some kind of balance among its parts. In the
case of PC clusters, I feel that a design where the network
interconnect costs significantly more than the individual nodes
(including CPU(s), memory and disk) is probably not balanced (except
possibly for a special class of applications).
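To make the balance point concrete, here's a hypothetical sketch. The $4K HiPPI port and $30 NetGear card figures are from earlier in the thread; the per-node price is an illustrative assumption, not a quote, and the fast ethernet figure omits any switch cost:

```python
# Hypothetical cost-balance sketch. node_cost is an ASSUMED figure
# for a commodity PC node (CPU, memory, disk); port prices are
# taken from the figures quoted in the thread.

node_cost = 2500   # assumed per-node price, illustration only
hippi_port = 4000  # NIC plus switch share, from the thread
fe_port = 30       # one NetGear 10/100 card; switch cost not included

def interconnect_fraction(port_cost, ports_per_node=1):
    """Interconnect cost as a fraction of total per-node cost."""
    net = port_cost * ports_per_node
    return net / (node_cost + net)

print(f"HiPPI:             {interconnect_fraction(hippi_port):.0%}")
print(f"4 x fast ethernet: {interconnect_fraction(fe_port, 4):.0%}")
```

Under those assumptions the HiPPI interconnect eats well over half of the per-node budget, while four fast ethernet ports stay under five percent; that's the imbalance I'm getting at.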

Basically I don't see this as a question of budget as much as a
question of balance. 

So, you presented another option and I presented some thoughts on that
particular technology. I see no real difference of opinion here ;-)

Regards,

Johan Ihrén, <johani@pdc.kth.se>, Center for Parallel Computers,
Royal Institute of Technology, S-100 44 Stockholm, Sweden

PS. I personally like HiPPI, although I argue against it for this
particular use.