Subject: Re: vmstat - is it reliable? Some think not... (fwd)
To: None <firstname.lastname@example.org, email@example.com>
From: Chris Torek <torek@BSDI.COM>
Date: 07/27/2000 18:52:37
>... if the a higher level interrupt _consistently_ happens right before
>the timer (say 1ms before), the timer assumes the past 10ms have been in
>the kernel. Hence it over-reports CPU usage.
This problem has a ten-dollar name: "isochronous behavior". (Okay,
maybe it is a $20 phrase instead of a $10 word. :-) ) You can find
more details in a paper Steve McCanne and I wrote for the Winter
1993 San Diego USENIX. Any process (in the queueing-theory and
random-sampling sense of the word "process", not the usual Unix
sense) that exhibits isochronous behavior can be systematically
mis-measured.
UDP (and in fact any networking protocol) is problematic because
it is partly driven off the software clock. The rest is run off
the software interrupt, and the software interrupt is not an actual
interrupt. I am not quite sure why NetBSD is *that* bad, since the
hardware interrupt that delivers the packet should be immediately
followed by the software interrupt that gets into ip_input and
thence to the UDP code, and these should not be synchronized with
the measurement clock. Only the timers should be fully synchronized
in this manner.
Anyway, there are two solutions:
- use a separate, randomized clock for the "random" samples (the
approach we used on the sparc)
- stop using a random-sample approach at all
The latter is available on machines with cycle counters, and is
probably the way to go today. You simply sample the counter at
each "point of interest" and subtract the previous sample to find
out how much time you spent doing whatever you were doing. Convert
from CPU cycles to consistent units (remember that the CPU MHz
changes as you go to lower-power modes) and sum these up in u_quad_t
counters for later presentation.