NetBSD-Users archive


Re: Beating a dead horse



On 11/25/15 16:05, Greg Oster wrote:
On Thu, 26 Nov 2015 04:41:02 +0700
Robert Elz <kre%munnari.OZ.AU@localhost> wrote:

     Date:        Wed, 25 Nov 2015 14:57:02 -0553.75
     From:        "William A. Mahaffey III" <wam%hiwaay.net@localhost>
     Message-ID:  <56561F54.5040207%hiwaay.net@localhost>


   |   f: 1886414256  67110912       RAID                     # (Cyl. 66578*- 1938020)

OK, 67110912 is a multiple of 2^11 (2048) which is just fine.
The size is a multiple of 2^4 (16) so that's OK too.

   |           128  7545656543      1  GPT part - NetBSD FFSv1/FFSv2

The 128 is what I was expecting, from the dk0 wedgeinfo, and that's
fine.  The size is weird, but I don't think it should cause a problem.
Greg will be able to say what happens when there's a partial stripe
left over at the end of a raidframe array.
RAIDframe truncates to the last full stripe.

If you do ever decide to redo things, I'd make that size be a
multiple of 2048 too (maybe a multiple of 2048 - 128).  Wasting a
couple of thousand sectors (1 MB) won't hurt (and that's the max).
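The alignment arithmetic above can be checked directly in the shell
(numbers taken from the disklabel quoted earlier; units are 512-byte
sectors):

```shell
# RAID partition: offset 67110912, size 1886414256 (from the disklabel above)
echo $(( 67110912 % 2048 ))     # 0 -> offset is 2048-sector (1 MB) aligned
echo $(( 1886414256 % 16 ))     # 0 -> size is a multiple of 16 sectors
echo $(( 1886414256 % 2048 ))   # 1456 -> size is NOT 2048-aligned, hence
                                #         the suggestion to round it down
```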


But overall I think your basic layout is fine, and there's no need to
adjust that.  The one thing that you need to do (if you really need
better performance, rather than just think you should have it - that
is, if you need it enough to re-init the filesystem) would be to
change the ffs block size, or change the raidframe stripe size, so
standard size block I/O turns into full size stripe I/O.

Doing that should improve performance.  Nothing else is likely to
help.
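As a sketch of what "full size stripe I/O" means here: the array
geometry isn't shown in this message, so the disk count and stripe-unit
size below are hypothetical, purely to show the arithmetic.

```shell
# Hypothetical 5-disk RAID-5 set: 4 data components per stripe.
# With sectPerSU=32 (32 sectors = 16 KiB per stripe unit), one full
# data stripe is 4 * 32 * 512 bytes:
echo $(( 4 * 32 * 512 ))    # 65536, i.e. 64 KiB
# An FFS block size matching that full stripe would be created with
# something like:  newfs -O 2 -b 65536 /dev/rdk0
# (or, going the other way, pick sectPerSU in the raidctl config so
# the stripe matches the existing FFS block size)
```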
The first thing I would do is test with these:

  time dd if=/dev/zero of=/home/testfile bs=64k count=32768
  time dd if=/dev/zero of=/home/testfile bs=10240k count=32768

so that at least you're sending 64K chunks to the disk... After that,
64K blocks on the filesystem are going to be next, and that might be
more effort than it's worth, depending on the results of the above
dd's...

Later...

Greg Oster


4256EE1 # time dd if=/dev/zero of=/home/testfile bs=64k count=32768
32768+0 records in
32768+0 records out
2147483648 bytes transferred in 166.255 secs (12916806 bytes/sec)
      167.20 real         0.12 user         8.85 sys
4256EE1 #
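For reference, dd's bytes/sec figure converts to the MB/s figures being
discussed like so:

```shell
# dd reported 12916806 bytes/sec; truncate to whole MiB/s
echo $(( 12916806 / 1048576 ))    # 12 -> roughly the "13-ish MB/s"
```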

The other command is still running; by my count it will write out 320 GB. Is that as intended, or a typo :-) ? If it's as wanted, I will leave it going & report back when it is done. BTW, I see much more of the above 13-ish MB/s than the 24-ish reported earlier. When I posted a few weeks ago I think I had about 18 MB/s, but 12-15 is much more common, if that is nominally as expected .... Thanks & TIA for any more insight ....

--

	William A. Mahaffey III

 ----------------------------------------------------------------------

	"The M1 Garand is without doubt the finest implement of war
	 ever devised by man."
                           -- Gen. George S. Patton Jr.


