I have a netbsd-5 system with a RAID1 array (wd0, wd1).  Under load (cvs
update of pkgsrc) it becomes not particularly responsive.  I have in
sysctl.conf:

  vm.filemin=5
  vm.filemax=10
  vm.anonmin=5
  vm.anonmax=80
  vm.execmin=5
  vm.execmax=50

to try to avoid program pages being paged out in favor of the buffer
cache.

An interesting aspect of my RAID set is that I have two different disk
brands (in an attempt to avoid correlated failures; I am viewing disks
as nearly free and trying to avoid data loss), a Seagate and a Hitachi:

  wd0 at atabus2 drive 0: <ST2000DM001-9YN164>
  wd0: drive supports 16-sector PIO transfers, LBA48 addressing
  wd0: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
  wd0: 32-bit data port
  wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
  wd0(piixide1:0:0): using PIO mode 4, Ultra-DMA mode 6 (Ultra/133) (using DMA)

  wd1 at atabus3 drive 0: <Hitachi HDS723020BLA642>
  wd1: drive supports 16-sector PIO transfers, LBA48 addressing
  wd1: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
  wd1: 32-bit data port
  wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
  wd1(piixide1:1:0): using PIO mode 4, Ultra-DMA mode 6 (Ultra/133) (using DMA)

Using dd, the disks seem similar.  But in systat vmstat, I see:

  wd0     295  2747K  100
  wd1     296  2762K  10.8
  raid1   281  2766K  95.5

I don't really believe that wd1 is only 10% busy, but I can believe
that it is somewhat faster.  So my questions are:

  Do people believe the %busy in 'systat vmstat'?

  Does raidframe RAID-1 dispatch read operations to components in a
  near-optimal way if, say, one of the disks is twice as fast as the
  other?
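For reference, a sketch of how one might compare the raw sequential
read speed of the two components with dd (the read_mbps helper is
hypothetical, and the raw-partition paths are assumptions; adjust to
your disklabel, and run against the raw devices only, read-only):

```shell
#!/bin/sh
# Time a sequential read from a device and report rough MB/s.
# Read-only: dd never writes to the device under test.
read_mbps() {
    dev=$1
    mb=${2:-256}                      # megabytes to read (default 256)
    start=$(date +%s)
    # bs=1048576 is 1 MB spelled out, portable across dd variants
    dd if="$dev" of=/dev/null bs=1048576 count="$mb" 2>/dev/null
    end=$(date +%s)
    secs=$((end - start))
    [ "$secs" -eq 0 ] && secs=1       # avoid division by zero on fast reads
    echo "$dev: $((mb / secs)) MB/s"
}

# Hypothetical usage, as root (raw partition names depend on your setup):
#   read_mbps /dev/rwd0d
#   read_mbps /dev/rwd1d
```

This only measures streaming throughput from the start of the disk;
it says nothing about seek behavior under the mixed read load that a
cvs update generates, which is where the components could plausibly
differ.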