Subject: Re: a thought about FFS parameters & disk performance
To: Curt Sampson <cjs@portal.ca>
From: Soren S. Jorvang <soren@t.dk>
List: current-users
Date: 03/31/1997 01:40:58
On Sun, 30 Mar 1997, Curt Sampson wrote:

> On Mon, 31 Mar 1997, Soren S. Jorvang wrote:
> 
> > > This strikes me as a bit odd; on my machines (486s, Pentiums,
> > > low-end Sparcs) with the standard 8K/1K block/fragment sizes, I
> > > normally get very similar speeds out of bonnie and iozone very
> > > close to what I get from dd if=/dev/rsdxx of=/dev/null.
> > 
> > Using what block size for dd? On the system I am writing this on,
> > the above 'dd' reads 0.6MB/sec from disk, while 'dd bs=64k' reads 7.1MB/sec
> > from the same disk.
> 
> Oh, usually 8K or so. My nicer P90 with 7200 RPM disks (Quantum
> Atlas) on a Buslogic controller gets about 6 MB/sec on the above
> tests.
> 
> > I am running -current. Before the MAXPHYS changes, my results were
> > pretty much the same as yours.

What I meant was that I used to get similar performance from 'dd bs=8k'
and 'dd bs=64k', as one would expect with the old, smaller MAXPHYS.
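The difference comes down to request size. A minimal sketch of the comparison (paths and device names are placeholders; a regular file goes through the buffer cache, so the gap will be far smaller than on the raw device the thread is measuring):

```shell
# /tmp/ddtest stands in for the raw disk device (e.g. /dev/rsd0c on a
# NetBSD box -- device name is only an example).  Raw devices bypass the
# buffer cache, so the bs= value maps directly onto each disk transfer.
dd if=/dev/zero of=/tmp/ddtest bs=64k count=256 2>/dev/null   # 16 MB test file

# Default bs is 512 bytes: one tiny transfer per read(2) call.
time dd if=/tmp/ddtest of=/dev/null 2>/dev/null

# bs=64k issues requests large enough to benefit from the bigger MAXPHYS.
time dd if=/tmp/ddtest of=/dev/null bs=64k 2>/dev/null

rm -f /tmp/ddtest
```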

> Hmm. I should think we need to have yet another look at the MAXPHYS
> stuff, then, if it's killing performance unless we increase our
> block sizes unreasonably. (Moving from 1K to 8K fragments on my
> development volume, for example, would waste about 90 MB of disk
> space.)

See above, my mistake. I don't see anything wrong with the larger MAXPHYS.

It seems only natural to me that newer disks are happier with large
requests. The larger MAXPHYS allows larger transfers, which is good.

For general use, I mostly use 4K fragments. I do not have separate
partitions for source, so I do not see the waste you describe.
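For what it's worth, a figure like the 90 MB you mention is about what one
would expect: each file wastes on average half a fragment of slack in its
last partial block. A rough sketch (the ~25,000-file count is my assumption,
chosen to match your number):

```python
# Expected slack from FFS fragment size: on average, each file's last
# partial block wastes about half a fragment.
def expected_waste_bytes(nfiles: int, frag_size: int) -> int:
    """Expected total slack, in bytes, for nfiles files."""
    return nfiles * frag_size // 2

# Hypothetical source tree of ~25,000 files, moving from 1K to 8K fragments:
extra = expected_waste_bytes(25_000, 8192) - expected_waste_bytes(25_000, 1024)
print(extra / 2**20)  # extra waste in MB -- roughly 85 MB
```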


-- 
Soren