NetBSD-Users archive


Re: RAIDframe write performance below expectations on a RAID-1 of two magnetic disks on NetBSD/amd64 9.1



Matthias Petermann <mp%petermann-it.de@localhost> writes:

> On the RAID device (/dev/raid1 for me) I then created another GPT
> partition table and created a 4k-aligned partition in it as well:
>
> 	# gpt create raid1
> 	# gpt add -l data -a 4k -t ffs raid1
> 	# newfs -O 2 -b 16k -f 2k NAME=data
>
> This was formatted with an FFS filesystem (with the recommended
> parameters from [1]) and mounted with the mount option "log".
>
> However, the write throughput remains well below my expectations and I
> am despairing. When writing a 1 GB file, I achieve write rates of
> about 2 MB/s.

Long ago, I tried to measure performance on a Xen system, starting from
the single disk on the dom0, to the raid set on the dom0, to the file on
the dom0 that held the image, to the rxbd0 on the domU.  For reading, it
was basically about 10% slowdown from dom0 raid to dom0 file, and
another 10% from dom0 file to domU "disk".

Anecdotally, I have run many RAID1 sets over the years (with disklabel,
not GPT) and have never noticed a problem.

So, I would advise (and you've already done some of this):

  of course, keep written notes of everything

  measure the raw read and write speeds of your disks with dd to the raw
  device, perhaps with bs=1m.  Actually read the entire disks, and write
  zeros over them.

  create a filesystem on the disk and measure read and write speeds of
  large files, then destroy the filesystem

  create the raid, and measure the read and write speed of the entire
  raid virtual disk

  create a filesystem on the raid disk, and measure again

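A minimal sketch of that measurement sequence as shell helpers.  Everything
here is illustrative: the raw device names (/dev/rwd0d, /dev/rraid1d) are
examples for a typical NetBSD setup, and the write benchmark destroys
whatever is on its target.

```shell
#!/bin/sh
# Hedged sketch of the measurement steps above.  dd's final status line
# reports bytes/sec, which is the number to record in your notes.

bench_read() {
    # sequential read into the bit bucket, 1 MB blocks
    dd if="$1" of=/dev/null bs=1048576 2>&1 | tail -n 1
}

bench_write() {
    # sequential write of zeros, $2 blocks of 1 MB -- DESTROYS the target
    dd if=/dev/zero of="$1" bs=1048576 count="$2" 2>&1 | tail -n 1
}

# Safe demonstration against a scratch file rather than a raw device:
bench_write /tmp/bench.dat 64
bench_read  /tmp/bench.dat
rm -f /tmp/bench.dat

# On real hardware the calls would look like (do NOT run casually):
#   bench_read  /dev/rwd0d       # raw component disk
#   bench_read  /dev/rraid1d     # raid virtual disk, no filesystem
```

For the filesystem steps, the same helpers work on a large file inside the
mounted filesystem instead of a raw device.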

I think the big thing you have missed is testing the raid virtual disk
w/o a filesystem.

other bits of advice:

  Definitely look at all the partitioning and check the alignment; assume
  nothing about the creation process going correctly (even though probably
  it did).

  align to more than 8 sectors; use at least 32k (64 sectors).  I see
  your partition starts at sector 40.

  If you are going to mount with log (I do), then you might want to use
  "-s -64m" or similar at creation time.  I have a vague impression that
  this ends up better than having the filesystem allocate a log for
  itself from inside, but I am very hazy on this.

  There is complexity surrounding write caching, which is dangerous (but
  generally it is on), and filesystem safety.  This might relate to NCQ,
  and it also probably relates to wapbl.  There is also a relationship
  with fsync.  It is possible that adding the raid layer affects this.
  I am fuzzy on the details.
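On the alignment point above, a quick arithmetic check (a hypothetical
helper, not a NetBSD tool) shows why a partition starting at sector 40 is
4k-aligned but not 32k-aligned:

```shell
# Check whether a partition's start (in 512-byte sectors) falls on a
# given sector boundary.  Illustration only.
aligned() {
    if [ $(( $1 % $2 )) -eq 0 ]; then
        echo "sector $1: aligned to $2-sector boundary"
    else
        echo "sector $1: NOT aligned to $2-sector boundary"
    fi
}

aligned 40 8     # 8 sectors  = 4k:  aligned
aligned 40 64    # 64 sectors = 32k: not aligned
```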

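As a sketch of the "-s -64m" suggestion above (hedged: my understanding is
that NetBSD's newfs(8) accepts a negative -s as a size relative to the end
of the partition; the NAME=data wedge and mount point follow the quoted
commands and are examples):

```shell
# Sketch only: make the filesystem 64 MB smaller than the partition,
# on the theory that this leaves cleaner room for the WAPBL log than
# letting the filesystem carve one out internally.
newfs -O 2 -b 16k -f 2k -s -64m NAME=data
mount -o log NAME=data /data    # /data mount point is an example
```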


I have a test system running old NetBSD with a RAID1 of 2 x 400G drives,
and within that filesystems.  disklabel, no GPT, and 512-byte sectors.
The machine has only 2G of RAM.  This is very crufty, but I keep it
around for portability testing on things I help maintain.

The raw drives will read at about 45 MB/s (yes, they are very old).
With an 8G file (and hence not even close to fitting in RAM):

  dd if=/dev/zero of=ZERO bs=1m count=8192 

watching "systat vmstat", it seems to be writing to the disks at varying
speeds, 19 MB/s to 27 MB/s.  It finished at

  8589934592 bytes transferred in 342.770 secs (25060345 bytes/sec)

Reading the file back, also bs=1m, I get

  8589934592 bytes transferred in 225.276 secs (38130713 bytes/sec)
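Converting dd's bytes/sec figures to MB/s (decimal megabytes; the awk is
just for the floating-point division):

```shell
# Turn dd's "bytes transferred in secs" report into MB/s.
mbps() {
    awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f MB/s\n", b / s / 1000000 }'
}

mbps 8589934592 342.770    # the write run above -> 25.1 MB/s
mbps 8589934592 225.276    # the read run above  -> 38.1 MB/s
```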

I am unable to easily test with a filesystem on the raw disk.  But 25
MB/s filesystem write and 38 MB/s filesystem read vs 45 MB/s raw read
seems entirely reasonable.

Obviously something is not right with your setup.



