Subject: Re: load balancing
To: None <current-users@NetBSD.ORG>
From: Paul Newhouse <newhouse@rockhead.com>
List: current-users
Date: 07/23/1998 01:30:06
Re: load balancing
Johan Ihren <johani@pdc.kth.se> wrote:
>However, I do argue that HiPPI is not likely the best network
>interconnect for a PC cluster.
Unless you have money to burn, and then why buy PCs?  I agree.
>Here's another problem. Although definitely pricier, HiPPI is not
>necessarily "faster" than multiple fast ethernets.
Can the PC drive the HiPPI sufficiently faster than some number of
fast ethernets?  Interesting question, and as you point out:
>That depends on the communication pattern of his application.
>But even if that's optimal, you still won't see significantly more
>bits coming through the HiPPI interface than four fast ethernets.
If both are running TCP/IP (v4), probably (almost assuredly) not.
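To put rough numbers on it (a toy sketch, not a measurement; the
400 Mbit/s host ceiling below is just an assumption to illustrate that
the PC's TCP/IP stack and PCI bus, not the link, set the limit):

    # Toy model: on a PC-class node the TCP/IP stack and the PCI bus cap
    # what the host can actually push, regardless of the link's wire rate.
    # HOST_CAP_MBPS is an assumed ceiling, picked only for illustration.
    HOST_CAP_MBPS   = 400      # assumed per-host TCP/IP throughput ceiling
    FAST_ETHER_MBPS = 100      # wire rate per fast ethernet port
    HIPPI_MBPS      = 800      # 800 Mbit/s parallel HiPPI link

    def host_throughput(link_mbps, n_links=1):
        # Limited by the links *and* by the host itself.
        return min(n_links * link_mbps, HOST_CAP_MBPS)

    print("4 x fast ether: %d Mbit/s" % host_throughput(FAST_ETHER_MBPS, 4))
    print("1 x HiPPI:      %d Mbit/s" % host_throughput(HIPPI_MBPS))

With a host-bound cap like that, both configurations land in the same
place, which is Johan's point.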
>Definitely. If only the Znyx people could start returning my
>emails... ;-)
Hope they do. It looked like a pretty interesting card.
>answer was in many cases Myrinet.
Oh, yeah I'd forgotten about them.
>The new answer (I believe) will be gigabit ethernet.
When it gets here, maybe.  Consider the following, as pointed out by an
associate (it's from www.ods.com - Essential):
    HIPPI currently holds a large lead on Gigabit Ethernet when we view
    both technologies from a performance standpoint.  Here are some
    comparative examples:
      - HIPPI has achieved throughput speeds of 720 Mbits/second vs.
        Gigabit Ethernet only reaching 400 Mbits/second.
      - HIPPI has an upgrade path to GSN vs. Gigabit Ethernet with no
        next-generation upgrade.
  *** - Typical CPU utilization of a HIPPI-attached host is <30% during
        a transfer vs. hosts which are Gigabit Ethernet attached operating
        at 100% CPU utilization when they are transferring across the network.
      - HIPPI is an established standard vs. Gigabit Ethernet, which will
        not have a ratified standard until later in 1998.
  *** - Additionally, the typical latency of a HIPPI switch is 500 ns vs.
        20 ms for a typical Ethernet switch, making connection time
        through a Gigabit Ethernet network a limiting factor as well.
Latency is probably more of an issue than bandwidth for many reasons
in a cluster.
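A quick sketch of why (the 500 ns and 20 ms switch figures are the ones
quoted above; the link rates are assumptions of mine): for the small
messages a parallel code typically exchanges, the fixed per-message
latency swamps the time on the wire, so a fatter pipe buys you almost
nothing.

    # Time to deliver one message = switch latency + serialization time.
    # 1 Mbit/s is 1 bit per microsecond, so the result comes out in us.
    def transfer_time_us(msg_bytes, latency_us, link_mbps):
        return latency_us + (msg_bytes * 8.0) / link_mbps

    cases = (("HiPPI, 500 ns switch", 0.5,     800),
             ("GigE,  20 ms switch ", 20000.0, 1000))
    for name, lat_us, mbps in cases:
        for size in (64, 4096, 65536):
            print("%s %6d bytes: %10.1f us"
                  % (name, size, transfer_time_us(size, lat_us, mbps)))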
OK, www.ods.com might not be the most unbiased source, but they make
some points worth looking into.
>My point wrt HiPPI (or at least my opinion) is that a well designed
>system typically exhibits some kind of balance among its parts. In the
>case of PC clusters, I feel that a design where the network
>interconnect costs significantly more than the individual nodes
>(including CPU(s), memory and disk) is probably not balanced (except
>possibly for a special class of applications).
Current HiPPI prices make the $'s unbalanced. As an associate also points out:
    Besides, I think that Essential could drop the bottom out of
    the price of the PCI cards if they wanted to.  Now that they are
    part of yet another larger networking company (ODS now instead of
    STK/Network Systems), maybe they can fold HiPPI R&D cost recovery
    into the general budget...  I think that HiPPI could cost as little
    as $1200/connection ($600 per NIC, $600 per port board).
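Just to make the "balance" point concrete (every number here except the
$1200/connection guess is a placeholder I made up; 1998 PC node prices
vary a lot):

    # Crude balance check: network cost per node as a share of that node's
    # total cost.  NODE_COST and the "current HiPPI" price are placeholders
    # I made up; $1200/connection is the guess quoted above.
    NODE_COST = 2000.0          # assumed $ for CPU(s) + memory + disk

    def network_share(interconnect_per_node):
        return interconnect_per_node / (interconnect_per_node + NODE_COST)

    for label, cost in (("current HiPPI (placeholder)", 3500.0),
                        ("HiPPI at $1200/connection  ", 1200.0)):
        print("%s %2.0f%% of the per-node budget"
              % (label, 100.0 * network_share(cost)))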
>PS. I personally like HiPPI, although I argue against it for this
>particular use.
Fair enough, I argue it should be considered even if it appears to be
too expensive. After all, it's just another point on the graph.
Paul