Subject: More on UFS performance
To: None <current-users@netbsd.org>
From: Thor Lancelot Simon <tls@cloud9.net>
List: current-users
Date: 11/30/1994 22:02:58
Having taken Kim's suggestion and changed my newfs values, I think I've now
made some empirical observations that suggest that the defaults for newfs
should definitely be changed.

With every disk I tested, -n 1 (which isn't even *documented!*) gave
greatly improved performance over all other values of -n.  I think that
with sector-addressed drives with complex physical geometries, rotational
position optimization is no longer a valid technique.
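
For anyone who wants to try this, the incantation is just something like
the following (the device name is only a placeholder; use your own
scratch partition):

    # -n 1 = one rotational position; /dev/rsd0e is just an example device
    newfs -n 1 /dev/rsd0e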

If _anyone_ has _any_ disk larger than 300MB or so (or even a small disk)
manufactured within the last few years for which larger values of -n produce
better performance than -n 1, I'm very curious to hear about it.  I'd be
particularly interested in any disk for which the default value produces
optimal results.

Increasing maxcontig seemed to improve write scores across the board, but
values of maxcontig above 16 seemed to have a noticeable _negative_ impact
on read performance.  On the disk in my machine at home, for example,
-a 512 yielded a peak write rate (4MB file, 8K record size) of 4.7MB/s,
much better than the 4.3MB/s I got with -a 64, but read performance
dropped from 2.6MB/s to 2.1MB/s.  I do not understand why this is the
case, and I'd love suggestions.
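
If you want to check this on your own disks, something along these lines
will show the trend.  The device names are only placeholders, and dd is
about the crudest possible test; any large sequential-I/O benchmark will
do as well:

    # use a scratch partition you can afford to wipe; sd0e is an example
    newfs -n 1 -d 0 -a 64 /dev/rsd0e
    mount /dev/sd0e /mnt
    # write a 4MB file in 8K records
    dd if=/dev/zero of=/mnt/testfile bs=8k count=512
    # unmount and remount to defeat the buffer cache, then read it back
    umount /mnt
    mount /dev/sd0e /mnt
    dd if=/mnt/testfile of=/dev/null bs=8k

Repeat with different -a values and compare how long the transfers take
(time(1) works if your dd doesn't print a rate itself).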

I believe that with rotational position optimization turned off (-n 1),
the value of the -r option is of no consequence.  The fact that the -r
option seemed to have little or no impact on performance even with the
default value of -n serves, I think, to demonstrate that rotational
optimization does not work correctly on modern drives.

The default value of the -d option also produces much worse results than
-d 0.  I was probably imprecise above; I believe that -n 1 -d 0 is what
turns off rotational position optimization entirely.  I'm all for it. :-)

I suggest that the defaults for newfs be changed to:

-n 1 -d 0 -a 16 -r 5400

The -r value is there just in case someone decides to try playing with
rotational position optimization for some incomprehensible reason.
Though actually, anyone with a disk where said optimization is a win
might want -r 3600 after all.
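
Spelled out as a full command line (the device name is only a
placeholder), that amounts to:

    newfs -n 1 -d 0 -a 16 -r 5400 /dev/rsd0e

i.e. no rotational position optimization at all, plus a modest maxcontig.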

If someone can explain why values of -a above 16 seem to negatively impact
read performance, I'm all for making -a very very large, like 512 or 1024 --
in this case the filesystem code will automatically limit maxcontig to the
maximum transfer size for a given controller/disk, right?  What are some
typical such sizes?  Why does -a 512 hurt read performance so much, and how
can it be fixed?  Judging by comments from Larry McVoy, a good
implementation of UFS with clustering should yield disk speed on writes,
and about 25% less on reads.
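
For a sense of scale (assuming the stock 8K filesystem block size, since
maxcontig is counted in filesystem blocks):

    -a 16:    16 * 8K = 128K per contiguous cluster
    -a 512:  512 * 8K =   4MB per contiguous cluster

so -a 512 only buys anything if the controller can actually move
multi-megabyte transfers in one request, hence my question about typical
transfer sizes.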

Right now, on my hardware at least, we seem to slightly _surpass_ the
raw disk device on writes, but on reads we lose big as the maxcontig
value goes up.  We seem to lose worst at large file/record sizes, where
the raw device delivers about 5MB/s in my case, but with -a 512 I get
only about 2.5MB/s under UFS.
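
If you want raw-device figures of your own to compare against, plain
sequential dd against the character device gives a rough number, along
these lines (placeholder device again, and note that the write test
destroys whatever is on that partition):

    # raw sequential read: 4MB in 64K transfers
    dd if=/dev/rsd0e of=/dev/null bs=64k count=64
    # raw sequential write -- this clobbers the partition!
    dd if=/dev/zero of=/dev/rsd0e bs=64k count=64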

If you can't guess, I'm incredibly curious as to why the value of -a
affects reads as much as it does, or at all, for that matter.

Still, we don't do so badly -- with -a 16, we pretty much hit Larry's
"good" value of 75% efficiency on reads, and we still just barely surpass
the raw device write figures.  (I am very, very, very curious as to how
this is possible at all.  Anyone?)

Comments?  Can one of the people able to do so change the defaults that
newfs uses?

Thor