Subject: Re: raidframe consumes cpu like a terminally addicted
To: Robert Elz <kre@munnari.OZ.AU>
From: Matthias Buelow <mkb@mukappabeta.de>
List: netbsd-users
Date: 04/30/2001 00:46:02
Robert Elz writes:

>To provide more help, the config of the raid is going to be needed,
>how many drives in the raid5, on what busses do they live, what's the
>raid config, what are the disklabels of the underlying drives, and of the
>raid (and perhaps more).

Well, it's 3 disks (IBM DNES 9.1GB UW) living on a single bus (not
optimal but shouldn't saturate the bus in normal usage).

The RAID is configured as follows:

START array
1 3 1

START disks
/dev/sd2e
/dev/sd3e
/dev/sd4e

#START spare
#/dev/sd5e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
32 1 1 5

START queue
fifo 100
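(For reference, a back-of-the-envelope sketch of the stripe geometry this layout implies; my own arithmetic, not raidctl output, assuming 512-byte sectors:)

```python
# Stripe geometry implied by the layout section above:
# 3-disk RAID 5, 32 sectors per stripe unit, 512-byte sectors.
SECTOR_BYTES = 512
SECT_PER_SU = 32
DISKS = 3
PARITY_UNITS = 1                # RAID 5: one parity unit per stripe

su_bytes = SECT_PER_SU * SECTOR_BYTES
data_units_per_stripe = DISKS - PARITY_UNITS
stripe_data_bytes = data_units_per_stripe * su_bytes

print(su_bytes)                 # 16384 bytes per stripe unit (16 KB)
print(stripe_data_bytes)        # 32768 bytes of data per full stripe (32 KB)
```

So a full-stripe write is 32 KB of data; anything smaller goes through the RAID 5 read-modify-write path.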

Each of the underlying disklabels looks like the following:

# /dev/rsd2d:
type: SCSI
disk: DNES-309170W
label: postbus2
flags:
bytes/sector: 512
sectors/track: 312
tracks/cylinder: 5
sectors/cylinder: 1560
cylinders: 11474
total sectors: 17916240
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0 

5 partitions:
#        size   offset     fstype   [fsize bsize   cpg]
  d: 17916240        0     unused        0     0         # (Cyl.    0 - 11484*)
  e: 17916240        0       RAID                        # (Cyl.    0 - 11484*)

And this is the label of the RAID5 set on top of them:

# /dev/rraid1d:
type: RAID
disk: raid
label: postbus-spool
flags:
bytes/sector: 512
sectors/track: 256
tracks/cylinder: 1
sectors/cylinder: 256
cylinders: 139970
total sectors: 35832320
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0 

5 partitions:
#        size   offset     fstype   [fsize bsize   cpg]
  d: 35832320        0    unknown                        # (Cyl.    0 - 139969)
  e: 35832320        0     4.2BSD     1024  8192    32   # (Cyl.    0 - 139969)
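(A side note on how the filesystem block size lines up with the stripe units; again my own arithmetic, not anything the tools report:)

```python
# FFS block size from the raid1e label vs. the RAID stripe unit.
BSIZE = 8192                    # bsize of the 4.2BSD partition above
SU_BYTES = 32 * 512             # 32 sectors per stripe unit
FULL_STRIPE = 2 * SU_BYTES      # two data units per 3-disk RAID5 stripe

blocks_per_su = SU_BYTES // BSIZE
blocks_per_full_stripe = FULL_STRIPE // BSIZE

print(blocks_per_su)            # 2: each 8 KB block sits inside one stripe unit
print(blocks_per_full_stripe)   # 4: it takes 4 blocks to fill a stripe
```

Individual 8 KB blocks never straddle two disks, but any write smaller than four blocks incurs the RAID 5 small-write parity update.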

The host adapter is Symbios-based Ultra-Wide; there are two other disks
on the bus (sd0 and sd1, forming a RAID1, but these don't get much traffic).

During the test (the ls -l on a directory with ~1000 entries) I watched
disk I/O with systat iostat and there was almost nothing (most of it was
in the buffer cache anyway), so insufficient bandwidth shouldn't be the
problem in this case, imho.

--mkb