Subject: vfs.lfs.pagetrip--unexpected result
To: None <>
From: Blair Sadewitz <>
List: current-users
Date: 12/11/2006 00:15:16
I had an idea: measure the aggregate bandwidth of my ccd to calculate
the LFS segment size.  According to various tests with bonnie++ and
newfs_lfs -AN, the optimal setting came out to be 58805888 bytes (I
have three disks, and 58805888 is three times the value
newfs_lfs -A gave me for an individual disk).  Interestingly enough,
this scheme did somewhat better than taking the automatically chosen
seg size for one disk and dividing that by three to get the stripe
size.
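The arithmetic above can be sketched as follows.  The per-disk figure
here is a made-up placeholder; in practice you would use whatever
segment size newfs_lfs -AN reports for one component disk of your ccd.

```shell
# Hypothetical per-disk segment size, for illustration only --
# substitute the value "newfs_lfs -AN <component-disk>" reports.
per_disk_seg=19601408
ndisks=3

# Scale the single-disk segment size by the number of ccd components
# to get a candidate segment size for the whole ccd.
ccd_seg=$((per_disk_seg * ndisks))
echo "$ccd_seg"
```

You would then create the filesystem with that segment size and
compare bonnie++ runs against the single-disk default.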

Now, this is where it gets strange.  To arrive at a value for
vfs.lfs.pagetrip, I was told to divide the bandwidth of my disk (in
the case of the ccd, it is ~162000000 bytes/s) by 4096 and then divide
that by four, producing a result that should be bigger than your
chosen segment size.  However, if I simply divide the bandwidth by
4096 (the size of one page) and set vfs.lfs.pagetrip to, for example,
39950, the bonnie++ results tend toward one-third better write
performance, with other improvements all around.  If anyone has the
hardware to create and benchmark LFS filesystems, please investigate
this, especially on raidframe or ccd.

I can now easily get over 100000KB/s sustained write performance,
whereas before it was only 43-65KB/s.

So, say your disk bandwidth is about 50331648 bytes/s.  Divide that by
4096 (the page size on i386 and amd64; this varies from port to port)
to produce 12288.  Now, run bonnie++ with vfs.lfs.pagetrip=12288 and
again with vfs.lfs.pagetrip=3072 (12288 / 4, a 'fudge factor').  I
noticed significantly better performance on nearly all tests if I
omitted the division by four and used the intermediate number for my
vfs.lfs.pagetrip setting.
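The recipe above, spelled out (the mount point and bonnie++ arguments
in the comments are placeholders, not part of the original recipe):

```shell
# Figures from the worked example: ~50331648 bytes/s of disk
# bandwidth, 4096-byte pages (i386/amd64).
bandwidth=50331648
pagesize=4096

# Candidate 1: bandwidth in pages per second.
pagetrip=$((bandwidth / pagesize))
# Candidate 2: the same figure with the "fudge factor" of 4 applied.
fudged=$((pagetrip / 4))

echo "pagetrip candidates: $pagetrip $fudged"

# Then benchmark each candidate in turn, e.g.:
#   sysctl -w vfs.lfs.pagetrip=12288 && bonnie++ -d /mnt/lfs -u root
#   sysctl -w vfs.lfs.pagetrip=3072  && bonnie++ -d /mnt/lfs -u root
```

In my runs the undivided value (12288 here) was the better performer.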

Anyone care to test this out and report their findings?  I'm
interested to hear about others' outcomes with these strategies.



Support WFMU-FM: free-form radio for the masses!

91.1 FM Jersey City, NJ
90.1 FM Mt. Hope, NY

"The Reggae Schoolroom":