Subject: RaidFrame poor performance
To: None <current-users@netbsd.org>
From: Mihai CHELARU <kefren@netbsd.ro>
List: current-users
Date: 01/19/2005 10:55:27
Hello,

In short, RaidFrame performs very poorly on NetBSD 2.0. Here is my
configuration:

NetBSD raid 2.0 NetBSD 2.0 (GENERIC) #0: Mon Nov 29 14:09:58 EET 2004 root@proxy.girsa.ro:/disk2/netbsd-2-0/src/sys/arch/i386/compile/obj/GENERIC i386

# cat /etc/raid0.conf
START array
1 4 0
START disks
/dev/wd0e
/dev/wd1e
/dev/wd2e
/dev/wd3e
START layout
32 1 1 5
START queue
fifo 100
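
For clarity, the layout line `32 1 1 5` means 32 sectors per stripe unit, 1 stripe unit per parity unit, 1 stripe unit per reconstruction unit, RAID level 5 (per raidctl(8)). A quick sanity check of the implied stripe geometry (my arithmetic, not raidctl output):

```shell
#!/bin/sh
# Stripe geometry implied by "32 1 1 5" on a 4-disk RAID-5 set.
SECT_PER_SU=32              # sectors per stripe unit (from raid0.conf)
SECTOR_SIZE=512             # bytes per sector
DISKS=4                     # component count
DATA_DISKS=$((DISKS - 1))   # RAID-5: one disk's worth of parity per stripe

SU_BYTES=$((SECT_PER_SU * SECTOR_SIZE))
STRIPE_BYTES=$((DATA_DISKS * SU_BYTES))

echo "stripe unit: ${SU_BYTES} bytes"
echo "full stripe: ${STRIPE_BYTES} bytes"
```

So a full data stripe is 48 KB; writes smaller than that cost a parity read-modify-write.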

# dmesg | grep ^wd
wd0 at atabus0 drive 0: <ST3120026A>
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 111 GB, 232581 cyl, 16 head, 63 sec, 512 bytes/sect x 234441648 sectors
wd0: 32-bit data port
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
wd1 at atabus0 drive 1: <ST3120026A>
wd1: drive supports 16-sector PIO transfers, LBA48 addressing
wd1: 111 GB, 232581 cyl, 16 head, 63 sec, 512 bytes/sect x 234441648 sectors
wd1: 32-bit data port
wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
wd0(piixide0:0:0): using PIO mode 4, Ultra-DMA mode 5 (Ultra/100) (using DMA data transfers)
wd1(piixide0:0:1): using PIO mode 4, Ultra-DMA mode 5 (Ultra/100) (using DMA data transfers)
wd2 at atabus1 drive 0: <ST3120026A>
wd2: drive supports 16-sector PIO transfers, LBA48 addressing
wd2: 111 GB, 232581 cyl, 16 head, 63 sec, 512 bytes/sect x 234441648 sectors
wd2: 32-bit data port
wd2: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
wd3 at atabus1 drive 1: <ST3120026A>
wd3: drive supports 16-sector PIO transfers, LBA48 addressing
wd3: 111 GB, 232581 cyl, 16 head, 63 sec, 512 bytes/sect x 234441648 sectors
wd3: 32-bit data port
wd3: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
wd2(piixide0:1:0): using PIO mode 4, Ultra-DMA mode 5 (Ultra/100) (using DMA data transfers)
wd3(piixide0:1:1): using PIO mode 4, Ultra-DMA mode 5 (Ultra/100) (using DMA data transfers)
# disklabel raid0
# /dev/rraid0d:
type: RAID
disk: raid
label: Raid
flags:
bytes/sector: 512
sectors/track: 96
tracks/cylinder: 16
sectors/cylinder: 1536
cylinders: 449892
total sectors: 691034976
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

5 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
  c: 691034913        63     unused      0     0        # (Cyl.      0*- 449892*)
  d: 691034976         0     unused      0     0        # (Cyl.      0 - 449892*)
  e: 691034913        63     4.2BSD   2048 16384 29184  # (Cyl.      0*- 449892*)
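
One thing I noticed while preparing this mail (just a suspicion on my part, not a confirmed diagnosis): partition e starts at offset 63, which is not a multiple of the 32-sector stripe unit, so 16384-byte file-system blocks straddle stripe-unit boundaries and small writes may degenerate into extra parity read-modify-write cycles. A trivial check:

```shell
#!/bin/sh
# Is partition e aligned to the RAID stripe unit?
OFFSET=63        # e partition offset in sectors (from disklabel)
SECT_PER_SU=32   # stripe unit in sectors (from raid0.conf)

REM=$((OFFSET % SECT_PER_SU))
if [ "$REM" -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned by ${REM} sectors"
fi
```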


There is only one file system (UFS2), about 320 GB.


So, here is what happens:

1. Sometimes all processes that need disk access freeze for about 5 
seconds.

2. Filesystem performance is very slow.

Here are some tests:

1. bonnie

On the RAID system:
# bonnie
File './Bonnie.6204', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
           100  2817  2.4  4532  1.7  5818  2.3 125784 90.3 669657 99.9 3970.7 13.0


On a similar system, with a single hard disk identical to those used in the RAID array:
# bonnie
File './Bonnie.11220', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
               -------Sequential Output-------- ---Sequential Input-- --Random--
               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
           100 34311 30.4 65617 27.4 65750 26.2 80785 60.3 171872 28.9 13979.6 49.4


So, as you can see, writing is very slow, and so are the random seeks.


2. dd

RAID:
# dd if=/dev/rraid0d of=/dev/null bs=32k count=8k
8192+0 records in
8192+0 records out
268435456 bytes transferred in 4.424 secs (60677092 bytes/sec)

Other machine:
# dd if=/dev/rwd1d of=/dev/null bs=32k count=8k
8192+0 records in
8192+0 records out
268435456 bytes transferred in 5.258 secs (51052768 bytes/sec)

This looks OK.
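
A matching sequential write test would exercise the RAID-5 parity path that the raw read above bypasses. Something along these lines (the script writes to a mktemp file so it runs anywhere; on the array the output would be a scratch file on the RAID file system, a placeholder choice on my part):

```shell
#!/bin/sh
# Sequential write test through the file system; unlike the raw
# read above, this exercises the RAID-5 parity write path.
# OUT is a temporary file here; on the array, point it at a
# scratch file on the RAID file system instead.
OUT=$(mktemp /tmp/ddwrite.XXXXXX)
dd if=/dev/zero of="$OUT" bs=32k count=1024 2>&1 | tail -1
SIZE=$(wc -c < "$OUT" | tr -d ' ')
rm -f "$OUT"
echo "wrote ${SIZE} bytes"
```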


3. ftp from the LAN (no network problems, believe me)

ftp> get H.avi
local: H.avi remote: H.avi
229 Entering Extended Passive Mode (|||51388|)
150 Opening BINARY mode data connection for 'H.avi' (737063768 bytes).
100% |*******************************************************************|   702 MB    5.07 MB/s    00:00 ETA
226 Transfer complete.
737063768 bytes received in 02:18 (5.07 MB/s)

Meanwhile `systat vmstat 1` shows:
Disks: seeks xfers bytes %busy
    md0
    wd0         165 2622K  47.5
    wd1         305 2629K   100
    wd2         164 1908K  20.8
    wd3         119 1901K  11.9
  raid0          70 4499K   100


I'll provide any further information you need.

Thanks,
Mihai