
Re: 5.x filesystem performance regression



> [First figure missing is] a problem because that is what is required to show
> the effect of the buffer cache.
You mean, in order to compare it to the second figure?
I do remember it all took ages with 5.1. I'll go ahead and collect the missing
data. To speed things up, I won't test the influence of parity maps, and I'll
skip the 16k RAID 5 case.
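
For anyone wondering what toggling the parity map involves, a minimal sketch
with raidctl(8); the set name raid0 is just a placeholder for my setup:

	# show the current parity map status
	raidctl -m raid0
	# disable the parity map (what I'd do for the "without" runs)
	raidctl -M no raid0
	# and re-enable it afterwards
	raidctl -M yes raid0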

> I always reboot between such tests to ensure that the buffer cache has been
> cleared out.
Doesn't remounting do the same thing?
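
To be clear, by remounting I mean an unmount/mount cycle between runs; a
minimal sketch, assuming the test file system is /dev/raid0a on /mnt:

	# this should flush and invalidate the file system's cached buffers
	umount /mnt
	mount /dev/raid0a /mnt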

> For some reason, RAID 5 appears to be very slow and it needs looking at.
Yes, that's where I originally started off. This didn't begin as fs-on-RAID-5
performance measurements but as why-is-my-machine-so-slow.

> If we want to look at the second runs in order to work out why 5.1 looks so
> much worse in the second runs, we still only have enough data in the
> plain disk and RAID 5 32k columns.
I'll collect some of the data. Oh, I already have the 4.0.1 RAID 1 numbers
because of the time -c measurements: 19s/12s. The 4.0.1 16k RAID 5 data is
missing; I won't re-collect it.

> Try comparing the output of "sysctl vm" on the two versions of NetBSD.
- 4.0.1
+ 5.1
-vm.uspace = 20480
-vm.idlezero = 0
+vm.uspace = 12288
+vm.idlezero = 1
-vm.bufmem = 1117184
-vm.bufmem_lowater = 75479040
-vm.bufmem_hiwater = 603832320
+vm.bufmem = 1778688
+vm.bufmem_lowater = 75483648
+vm.bufmem_hiwater = 603869184
+vm.swapout = 1
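
For anyone who wants to compare their own systems, a trivial way to produce
such a diff (file names are placeholders):

	# capture the vm subtree on each machine
	sysctl vm > vm-4.0.1.txt	# on the 4.0.1 box
	sysctl vm > vm-5.1.txt		# on the 5.1 box
	# then compare the two captures
	diff -u vm-4.0.1.txt vm-5.1.txt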

> Once again RAID 5 appears to be very slow and it needs looking at. 
Yes.

Updated table, with only 32k bsize on RAID 5. The two columns per device are
the first and the second run. The ``r'' rows have been measured with
setcache r, i.e. with the write cache disabled (see the sketch after the
table).

                plain disc      RAID 1          RAID 5
4.0.1 softdep   64s     12s     19s     12s     54s     12s
4.0.1 softdep r 24s     18s     31s     10s     73s     14s
5.1 softdep     51s     42s     65s     60s     218s    250s
5.1 log         66s     30s     84s     25s     194s    190s
5.1 log r       201s    206s    249s    214s    201s    189s
5.99.52 log     26s     25s     90s     27s     368s    186s
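
The cache setting was toggled per disk; a minimal sketch with dkctl(8), the
disk name sd0 being a placeholder:

	# read cache only, i.e. write cache disabled (the ``r'' rows)
	dkctl sd0 setcache r
	# read and write cache back on (the normal rows)
	dkctl sd0 setcache rw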

Throughput on the raw/block devices (dd if=/dev/zero bs=64k count=10000)
        plain disc      RAID 5
        raw     block   raw     block
4.0.1   14M/s   6M/s    10M/s   33M/s
5.1     11M/s   6M/s    9M/s    32M/s

No, I did not mix up block and raw on the RAID 5. On the bare disc, the raw
device is faster (as expected); on the RAID, the block device is faster. Why?
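
To make the pairs above concrete, each is a write to the block and the raw
device node respectively; a sketch, with the device names being placeholders
for my setup (and of course this clobbers the start of the device, so scratch
data only):

	# block device: writes go through the buffer cache
	dd if=/dev/zero of=/dev/raid0d bs=64k count=10000
	# raw (character) device: bypasses the buffer cache
	dd if=/dev/zero of=/dev/rraid0d bs=64k count=10000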

Could anyone else please measure this on a RAID 5?

I sincerely hope someone can make sense of these hours and hours of testing.

