Subject: Re: NFS/RPC and server clusters
To: David Laight <>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-net
Date: 10/16/2003 16:58:21
In message <>, David Laight writes:
>On Thu, Oct 16, 2003 at 07:44:50AM -0400, William Allen Simpson wrote:


>I was talking about what happens once the 'slow start' time has finished,
>and the full window is being transmitted.  The slow start stuff shouldn't
>stop the full window being used even under these conditions.
>No matter what, using UDP saves you having to transmit any TCP acks.

And it saves you the cost of the TCP state machinery.
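To make that concrete, here is a minimal sketch (my illustration, not the NFS/RPC code under discussion) of a UDP request/response pair: there is no handshake, no transport-level ACK in either direction, and no per-connection state for either side to keep.

```python
# Hypothetical UDP "RPC" over loopback: one datagram out, one back.
# No SYN/ACK handshake, no ACK segments, no TCP state machine.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # kernel picks a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"READ req", addr)     # the request datagram
data, peer = server.recvfrom(2048)
server.sendto(b"READ reply", peer)   # the reply datagram; nothing else on the wire
reply, _ = client.recvfrom(2048)

server.close()
client.close()
```

Any reliability (retransmit on timeout, duplicate detection) has to live in the RPC layer above, which is exactly the trade NFS-over-UDP makes.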

>FWIW the tests I was talking about were done before NetBSD existed.
>But are relevant because they are a 'feature' of CSMACD.

It's good to know them for historical reasons, but (per Thor's
message), anyone interested in `server farms' can upgrade to
full-duplex switched Ethernet at near-trivial cost -- the same
price point 10/100 was at just under two years ago.

Given that, one should be asking oneself, rather pointedly, just how
relevant any half-duplex-specific lessons really are.

As a datapoint: it's fairly easy to tune NIC Rx interrupt mitigation so
that an *entire* *64 Kbyte* SMB/CIFS write RPC is handled with *one*
interrupt, triggered immediately as the last frame of that RPC arrives.
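A back-of-the-envelope sketch of what that mitigation threshold has to cover (the MTU and header sizes below are my assumptions for standard Ethernet/IPv4/TCP with no options, not figures from the traces mentioned here):

```python
# How many Ethernet frames carry one 64 KB write RPC, i.e. roughly how
# many received frames the NIC must coalesce into a single interrupt.
MTU = 1500          # standard Ethernet payload, bytes (assumed)
IP_HDR = 20         # IPv4 header, no options (assumed)
TCP_HDR = 20        # TCP header, no options (assumed)
payload_per_frame = MTU - IP_HDR - TCP_HDR   # 1460 bytes of RPC data

rpc_bytes = 64 * 1024
frames = -(-rpc_bytes // payload_per_frame)  # ceiling division
print(frames)  # -> 45
```

So the mitigation window needs to absorb on the order of 45 back-to-back frames before firing, with the interrupt delivered as soon as the final frame lands.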

I've personally captured tcpdump traces, augmented with high-resolution
timestamps, that demonstrate exactly that behaviour.