Subject: Re: NFS transport
To: Jonathan Stone <jonathan@DSG.Stanford.EDU>
From: David Laight <david@l8s.co.uk>
List: tech-kern
Date: 07/24/2002 00:37:03
> ... not that the Penguin-OS is on topic, but in the days of Linux'
> userland NFS server, UDP was probably better: it's cheaper to do IP
> reassembly and pay the copyout/contexsw cost only once per RPC, rather
> than once per TCP segment.  If we have arches where switching to an
> nfsd kthread requires a full-blown MMU context-switch, the same may apply.

Never mind the OS features: on a local LAN segment with network cards that
have adequate buffering, packets just don't get lost.
This means that NFS over UDP never ends up sending the unneeded acks
that NFS over TCP (probably) does.
The only network traffic is the NFS request and its reply; no other packets
are ever used.  NFS over TCP will generate extra acks due to window sizes
and delays in various places.
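A back-of-the-envelope sketch of that packet-count difference (my numbers,
not from any measurement: 1500-byte Ethernet MTU, an 8 KiB NFS read, and a
receiver that delay-acks every second TCP segment):

```python
# Rough frame counts for one 8 KiB NFS read, UDP vs TCP transport.
# Assumptions: 1500-byte MTU, 20-byte IP header (1480-byte fragment
# payload), 1460-byte TCP MSS, delayed ack every second segment.
import math

MTU_PAYLOAD = 1480   # IP fragment payload
MSS = 1460           # TCP segment payload
READ = 8 * 1024

# UDP: one request datagram plus the IP-fragmented reply; no acks at all.
udp_frames = 1 + math.ceil(READ / MTU_PAYLOAD)

# TCP: request segment, reply segments, plus a delayed ack per two segments.
reply_segments = math.ceil(READ / MSS)
acks = math.ceil(reply_segments / 2)
tcp_frames = 1 + reply_segments + acks

print(udp_frames, tcp_frames)   # UDP needs fewer frames on the wire
```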

I remember some tests done with large TCP (IIRC) windows.  The recipient
decides to ack every other packet; however, the sender has a 32k window
- so is sending about 22 full-sized packets out back to back.  None
of the acks gets onto the LAN until it is idle, by which time 11
are queued and all go out as back-to-back frames.
This means the sender has to process all of them - rather than just the last.
Additionally the (ethernet) exponential backoff means that none of these
frames is sent immediately after the last fragment is received.
All this piles up to reduce the throughput.

	David

-- 
David Laight: david@l8s.co.uk