Subject: Re: diffs for UVM/UBC improvements available
To: None <>
From: Robert Elz <kre@munnari.OZ.AU>
List: tech-kern
Date: 05/22/2001 21:53:02
    Date:        Tue, 22 May 2001 15:32:30 +0100
    From:        Ben Harris <>
    Message-ID:  <>

  | >> -current:
  | >> 4852.751u 779.388s 2:50:07.17 55.1%     0+0k 153979+322668io 42565pf+0w
  | >> 
  | >> with these diffs:
  | >> 4855.911u 883.693s 2:29:39.24 63.9%     0+0k 16927+58502io 15800pf+0w
  | >
  | >oddly, cpu usage increases while walk-clock time decreases (and i/o
  | >decreases dramatically).  do you have any understanding of why this is
  | >the case?  
  | It'd seem consistent with clustering I/O better.  This gives you relatively
  | few large operations, instead of many small ones, and means the machine
  | spends less time waiting for I/O, and hence a larger fraction of its time
  | doing actual work.

If all you're looking at is the percentage, you could explain it that
way - but there are also 104 extra seconds of actual system time being
used (out of fewer than 800, so roughly 13% more system CPU consumed
than there was before).

I can think of just two (general) explanations for this.  It is possible
that the kernel is spending much more time optimising the I/O than it
was before - collecting together the pages, etc.   If that's true, then
as long as the CPU has lots of idle time it is a win - but if the code
is ever used where all the CPU time is being consumed for other purposes,
this would be a net loss to the system as a whole.

Or - it may just be that the CPU time measurements are being skewed:
the time was always being spent, but the accounting just wasn't
attributing it to the processes used in the "make build".   If more is
now being done by the (kernel parts of) those processes, and less in
other contexts - perhaps simply fewer interrupts arriving while the CPU
is in the idle loop - then this is just a bookkeeping artifact, and
doesn't really matter to anything at all.

It would certainly be interesting to know which it is.   The easy way to
find out is probably to run a background CPU hog (a niced infinite loop
will do) and see how much it gets done during (say) a 3 hour window in
which each of those "make build"s is executed.   If the system isn't
doing more work, the CPU hog should do exactly the same amount of "work"
(say, counting 64-bit numbers) in both cases.   If the system is taking
more CPU time than it was before, the hog will get less done.  (Of course
the hog is expected to get less done while the make build is actually
running - the build gets more of the CPU during that period, but for a
shorter period, with the new code - which is why the hog needs to run
for a fixed interval that starts before, and ends after, the make build
in both cases.)