tech-kern archive


Re: 5.1 RAID5 write performance



More observations on my issue:

> Did you dd against the raw device, the block device, or against the file
> system on top of the raid?
I first tried against the filesystem. I have now added a scratch
partition and tried the devices directly: I get 40 to 50MB/s on the raw
device and a whopping 0.57MB/s on the block device! Nevertheless, when
writing to the block device, systat shows the raid being 100% busy and
each of the discs being 50-60% busy. Moreover, when writing to the
block device, dd's WCHAN is vnode!
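For reference, the comparison boils down to timing the same dd write
against both device nodes. A minimal sketch of how I measured it (a
temp file stands in for the target here, since the raid device names
from my setup won't exist elsewhere):

```shell
#!/bin/sh
# Time a fixed-size dd write and report MB/s. On the real system the
# interesting targets are /dev/rraid0d (raw) vs /dev/raid0d (block);
# the temp-file default is only so the sketch runs anywhere.
bench() {            # bench <target> <megabytes>
    target=$1; mb=$2
    start=$(date +%s)
    dd if=/dev/zero of="$target" bs=1048576 count="$mb" 2>/dev/null
    elapsed=$(( $(date +%s) - start ))
    [ "$elapsed" -lt 1 ] && elapsed=1   # avoid divide-by-zero on fast writes
    echo "$((mb / elapsed)) MB/s"
}

bench "$(mktemp)" 16
```

Running bench against /dev/rraid0d and then /dev/raid0d with the same
size is what produced the 40-50MB/s vs 0.57MB/s numbers above.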

> Did you try to match up the raid data stripe size with the filesystem
> block size?
No, I didn't, but it happens to be aligned anyway.

> What kind of performance do you get if you do the following:
> dd if=/dev/zero of=/dev/rraid0d bs=<data_stripe_size> count=100
On a scratch partition (raid2h), about 0.57MB/s.

> Do you get reasonable write performance for each of the component disk in
> the raid? I.e. is there any possibility of partially failing disk?
Yes, 21MB/s for each of them.

> I noticed you tried raid1, which only takes 2 disk, so is the 3rd disk okay?
The RAID1s are across three discs, too.


Two more random observations:

I continuously see ~3763 interrupts on ioapic0 pin 4, yet dmesg doesn't
show anything attached to that pin. Nevertheless, ~0% CPU time is spent
in interrupt mode.

When un-tar-ing src.tgz, the raid (plus its components) stays busy for
minutes after the command finishes.
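One way to put a number on that trailing activity is to time how long
sync blocks once tar has returned. A sketch (the demo archive below is
a throwaway stand-in, not the real src.tgz):

```shell
#!/bin/sh
# Time the trailing writeback after an untar: tar returns once the data
# is in the buffer cache; sync then blocks until it actually reaches
# the discs.
flush_time() {       # flush_time <archive.tgz> <dest-dir>
    tar xzf "$1" -C "$2"
    start=$(date +%s)
    sync
    echo "sync blocked for $(( $(date +%s) - start ))s after tar returned"
}

# self-contained demo with a tiny throwaway archive:
work=$(mktemp -d)
echo test > "$work/file"
tar czf "$work/demo.tgz" -C "$work" file
mkdir "$work/out"
flush_time "$work/demo.tgz" "$work/out"
```

With the real src.tgz on the raid, the sync time should roughly match
the minutes of post-command busy time seen in systat.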


Looks like something is seriously wrong here, especially the raw device
being roughly a hundred times faster than the block device.

