NetBSD-Users archive


Re: [RAIDframe] system hangs while manipulating large files

On Thu, 2 Jan 2014 09:48:40 +0100 (CET)
"Emile `iMil' Heitor" <imil%home.imil.net@localhost> wrote:

> 
> Hi and Happy New year fellow NetBSD users,
> 
> I'm setting up a RAID5 NAS using NetBSD 6.1.2/amd64. The RAID5 array
> serves as media storage and is composed of 3x2T SATA disks.
> I followed various on-line documents such as:
> 
> http://www.netbsd.org/docs/guide/en/chap-rf.html
> http://mail-index.netbsd.org/netbsd-users/2011/09/02/msg008979.html
> http://abs0d.blogspot.fr/2011/08/setting-up-8tb-netbsd-file-server.html
> http://pbraun.nethence.com/unix/sysutils_bsd/raidframe.html
> 
> I've copied about 10G of data composed of files of 1 to 100M each,
> and everything went smoothly. Then, in order to get some performance
> figures, I used `dd' like this:
> 
> $ dd if=/dev/zero of=./test bs=1m count=5000
> 
> Within a couple of seconds, the whole system becomes unresponsive
> and hangs completely, but does not panic. I have reproduced this
> behaviour a couple of times; the system hangs every time.
> 
> Here's the RAID setup:
> 
> $ cat /etc/raid1.conf
> START array
> # numRow numCol numSpare
> 1 3 0
> 
> START disks
> /dev/wd1a
> /dev/wd3a
> /dev/wd5a
> 
> START layout
> # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
> 32 1 1 5
> 
> START queue
> fifo 100
> 
> and here's how I did the setup:
> 
> dd if=/dev/zero of=/dev/rwd1d bs=8k count=1
> dd if=/dev/zero of=/dev/rwd3d bs=8k count=1
> dd if=/dev/zero of=/dev/rwd5d bs=8k count=1
> disklabel -r -e -I wd1 # s/4.2BSD/RAID/
> disklabel -r -e -I wd3 # s/4.2BSD/RAID/
> disklabel -r -e -I wd5 # s/4.2BSD/RAID/
> raidctl -v -C raid1.conf raid1
> raidctl -v -I `date +%s` raid1
> raidctl -v -A yes raid1
> raidctl -i raid1
> gpt add -b 128 raid1
> dkctl raid1 addwedge export 128 7814058015 ffs
> newfs -O2 -b64k dk0
> tunefs -m0 raid1

Any particular reason why you set minfree to 0 instead of leaving it at
the default?  Especially given that the man-page for tunefs says:

 This value can be set to zero, however up to a factor of three
 in throughput will be lost over the performance obtained at a 5%
 threshold.
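
Something like this should put it back to the default of 5% (an
untested sketch, assuming the filesystem is on dk0 as in your newfs
line; tunefs wants the filesystem unmounted):

 # umount /export
 # tunefs -m 5 /dev/rdk0
 # mount -o rw,log,async,noatime /dev/dk0 /export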

Losing a factor of 3 in throughput in addition to the RAID5 write
penalty seems pretty expensive, performance-wise :(  (You might also
get better write performance with '64 1 1 5' in this configuration,
instead of '32 1 1 5')
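
For reference, that layout stanza would look like this (sketch only;
note that changing sectPerSU means re-creating and re-initializing
the set with raidctl, which destroys its contents, so back up first):

 START layout
 # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
 64 1 1 5

The reasoning: with 512-byte sectors, 64 sectors per stripe unit is
32k, and with two data disks plus one parity disk that makes a 64k
full stripe, matching the 64k filesystem blocks from your newfs line,
so full-block writes can avoid RAID5's read-modify-write cycle.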

Later...

Greg Oster

> mount -o rw,log,async,noatime /dev/dk0 /export
> 
> Any hint? Has anyone already faced this issue?
> 
> Thanks
> 
> ------------------------------------------------------------------
> Emile `iMil' Heitor .°. <imil@{home.imil.net,NetBSD.org,gcu.info}>
>                                                                  _
>                | http://imil.net        | ASCII ribbon campaign ( )
>                | http://www.NetBSD.org  |  - against HTML email  X
>                | http://gcu.info        |              & vCards / \


