Subject: Re: Disk-level Transaction Clustering
To: Chris Jepeway <jepeway@blasted-heath.com>
From: Chuck Silvers <chuq@chuq.com>
List: tech-perform
Date: 09/07/2002 12:54:04
hi,

hmm, that's interesting, could you find out what was in the blocks
that you were able to cluster?  I'd guess it's inode data, but it
could be something else.

it's kind of disappointing that there was no measurable improvement
in performance, though.  could you try experimenting with ccd or
raidframe and see if clustering helps noticeably in that context?  it'll
probably help if you use a machine with a slower CPU as well.  my point with trying
to see a performance improvement is that if we think there should be
a performance improvement but there isn't one, then maybe something
isn't working correctly.

-Chuck


On Sat, Sep 07, 2002 at 02:42:19AM -0400, Chris Jepeway wrote:
> For a simple benchmark, I used ssh/pax to copy a full-ish
> /usr/src/sys tree (it had the kernels from a release build
> in it) onto a test machine where sd clustering was enabled.
> About 99K total xfers were done to disk.  Of these, about
> 1300 were clusters built by the sd driver.  These 1300
> clusters held about 5100 buffers that would have been
> individually scheduled if the driver weren't combining
> them.  So, clustering saved about 3800 xfers, roughly a
> 4% savings.
> 
> I then built the GENERIC kernel with clustering disabled.
> About 12800 xfers were done during the build.  Building
> GENERIC again with clustering turned on did about 12200 xfers,
> where 1000 buffers or so were combined into 300 clusters.
> That's about a 5% savings.  CPU time and wall time for both
> compiles were comparable.
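
[The savings figures quoted above can be sanity-checked with a quick
back-of-the-envelope calculation.  This is a sketch of the arithmetic
only, using the approximate counts from the email; it is not code from
the sd driver, and the function name is made up for illustration.]

```python
# Sanity-check the clustering savings quoted in the benchmark numbers.
# Each cluster replaces the individual xfers of the buffers it holds
# with a single xfer, so:
#   xfers saved = buffers clustered - clusters built

def cluster_savings(total_xfers, buffers_clustered, clusters_built):
    """Return (xfers saved, fraction of total xfers saved)."""
    saved = buffers_clustered - clusters_built
    return saved, saved / total_xfers

# pax copy: ~99K total xfers, ~5100 buffers folded into ~1300 clusters
saved, frac = cluster_savings(99_000, 5_100, 1_300)
print(saved, round(frac * 100, 1))   # 3800 xfers saved, ~3.8% ("roughly 4%")

# GENERIC build: ~12800 xfers, ~1000 buffers combined into ~300 clusters
saved, frac = cluster_savings(12_800, 1_000, 300)
print(saved, round(frac * 100, 1))   # 700 xfers saved, ~5.5% ("about 5%")
```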