Subject: Re: RaidFrame problems.
To: Andrea Franceschini <andrea@cs.tin.it>
From: Greg Oster <oster@cs.usask.ca>
List: port-i386
Date: 02/09/2001 21:27:01
Andrea Franceschini writes:
> Manuel Bouyer wrote:
> 
> > Ok, could you try to limit your IDE disks to UDMA mode 2 instead of 4 ?
> > This is with 'flags 0xa00' for each of your wd entries in the kernel config file.
> > 
> > I'm not convinced I got the VIA Ultra66 stuff right :(
> > 
> 
> Ok, I tried... without results :(
> 
> Anyway, the UDMA/66 support seems to perform well without RAID-5

On just one drive, touching only that drive, right?  Try something like:

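  # read 1000MB from the raw device of every drive at once (csh syntax):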
  foreach i (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16)
    dd if=/dev/rwd${i}d of=/dev/null bs=1m count=1000 &
  end

and see what the performance is like (I'm interested to see the numbers).
(substitute whatever drives you are using into the above...)

> Maybe a simple misconfiguration of the raid device?

We need to see your raid config file, disklabel(s), and dmesg output.
 
> I mean, using a wrong stripe size may lead to such a loss of performance?

Using so many drives certainly will... 

> > You may want something larger than 32 here.  This means 16k block
> > reads/writes to the disk.  You'd want something larger - 64 or 128 - to
> > decrease the interrupt load and increase the IDE bandwidth.
> > The effect of this can be more important at Ultra/66 than at Ultra/33.
> 
> As far as I can see, there is no agreement about the correct stripe size.

Correct stripe size depends on the number of disks in the array, the type of 
disks, and how well they actually perform... the best way to select the right 
size is to try a bunch, and see which one benchmarks the best under whatever 
benchmark most closely resembles what you want to use the filesystem for... 

> Everyone has their own idea about the 'right way' to do it.
> So I think it's better to try every reasonable value (16 to 64), but this
> may take a long time :(

Note that you don't need to re-build the parity just to do benchmarking...  
You also don't need to use a partition as large as the entire RAID set... 
Build a config file with one stripe size, config the RAID set, put a
valid disklabel on the RAID set, drop a small partition onto it, newfs it, 
and run your benchmark.  Repeat for all stripe sizes, and pick the size that 
performed the best.  (Note that changing the block/fragment sizes can improve 
performance too.)
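
To make that concrete, here's a rough, untested sketch of one iteration, 
assuming a 5-disk set with RAIDframe partitions on wd0e through wd4e 
(substitute your own partitions, config file name, and serial number).  The 
stripe size is the first number on the layout line - that's sectPerSU, in 
512-byte sectors, so 32 means 16k stripe units:

  START array
  # numRow numCol numSpare
  1 5 0

  START disks
  /dev/wd0e
  /dev/wd1e
  /dev/wd2e
  /dev/wd3e
  /dev/wd4e

  START layout
  # sectPerSU SUsPerParityUnit SUsPerReconUnit parityConfig
  32 1 1 5

  START queue
  fifo 100

and then something like:

  raidctl -C raid5.conf raid0    # force-configure (no parity rewrite needed)
  raidctl -I 20010209 raid0      # stamp the components with a serial number
  disklabel -e -I raid0          # edit in a small 'e' partition
  newfs /dev/rraid0e
  mount /dev/raid0e /mnt
  # ... run the benchmark, then:
  umount /mnt
  raidctl -u raid0               # unconfigure, bump sectPerSU, go around again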
 
> Any suggestion is welcome.

1) Don't put all 17 disks in a single stripe set.  
2) Do put each drive on its own IDE channel.
3) Split the drives into 3 sets of 5, make a RAID 5 set of each set of 5, 
and then a RAID 0 set of those 3 sets (see the sketch after this list).  
4) If you can't do 2), then at least try to divide up the drives such that 
the fewest channels have drives in the same RAID set.
5) Have a look at 'systat vmstat' while you are benchmarking, and see how many 
interrupts per second you are handling.
6) Send us the dmesg, disklabels, and RAID config info.
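
For 3): RAIDframe is quite happy to stack raid devices, so (as a rough, 
untested sketch, assuming raid0 through raid2 are the three RAID 5 sets, each 
with an 'e' partition) the RAID 0 layer is just another config file:

  START array
  # numRow numCol numSpare
  1 3 0

  START disks
  /dev/raid0e
  /dev/raid1e
  /dev/raid2e

  START layout
  # sectPerSU SUsPerParityUnit SUsPerReconUnit parityConfig
  128 1 1 0

  START queue
  fifo 100

where the '0' at the end of the layout line selects RAID 0, and the whole 
thing gets configured as (say) raid3 once raid0 through raid2 are up.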

Later...

Greg Oster