NetBSD-Users archive


Re: I/O question



On 09/09/15 09:02, Ian Clark wrote:
On 8 September 2015 at 14:39, William A. Mahaffey III <wam%hiwaay.net@localhost> wrote:
On 09/08/15 03:13, Ian Clark wrote:
[snip]

Thanks for your reply. This RAID is created from partitions of the
underlying drives, not from whole drives. The resulting raid is mounted as
/home. I attach the disklabel info for the underlying drive0 (all 6 are
identically sliced) & the header for the /home FS. Below is the fdisk info
for drive0:

4256EE1 # fdisk wd0
Disk: /dev/rwd0d
NetBSD disklabel disk geometry:
cylinders: 1938021, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
total sectors: 1953525168

BIOS disk geometry:
cylinders: 1024, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
total sectors: 1953525168

Partitions aligned to 16065 sector boundaries, offset 63

Partition table:
0: NetBSD (sysid 169)
     start 2048, size 1953523120 (953869 MB, Cyls 0/32/33-121601/80/63),
Active
1: <UNUSED>
2: <UNUSED>
3: <UNUSED>
Bootselector disabled.
First active partition: 0
4256EE1 #

That looks fine, as does your attached disklabel. This would suggest
the misalignment is at the RAID level. Can you provide the
disklabel/partition info for your raid device(s)?


Glad to; what exactly do I need to do to get that info?



All 6 drives are fdisked into 1 large partition, then each of those partitions
is sliced into 3 slices: two 16 GiB slices plus the rest of the drive. Of the
first 16 GiB slices, 2 are used for root (RAID1) & the other 4 for /usr
(RAID0); the second 16 GiB slices are used for swap (all 6 drives); & the rest
of each drive goes to /home (RAID5). Those slices are then RAID'ed. If you need
anything else, *please* do not hesitate to ask. TIA & thanks again.
All your partitions look to be aligned to 4K boundaries, so it looks
like your underlying disk setup is fine. This leaves the problem
likely to be with the partitioning on the raid device itself, or
potentially the FS block size of the raid.
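For what it's worth, the 4K check there is just whether each partition's start
sector is a multiple of 8 (8 x 512-byte sectors = 4096 bytes). A minimal sketch
of the arithmetic for your partition 0, which starts at sector 2048:

# a remainder of 0 means the partition start sits on a 4K boundary
echo $(( 2048 * 512 % 4096 ))
0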

Very well, what info should I provide to assess those issues?
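Presumably something along the lines of the commands below would show it; the
raid device name (raid2) and partition letter are guesses on my part, so
substitute whatever your RAID5 set is actually configured as:

# label/partition layout of the RAID5 device
disklabel raid2
# RAIDframe status and the layout it is currently configured with
raidctl -s raid2
raidctl -G raid2
# FFS parameters (block size, frag size) of the filesystem on it
dumpfs /dev/rraid2a | head -n 20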


Additionally (and I'm sure someone will correct me here if I'm
wrong), I'm not sure you're going to get great performance from 6
drives in RAID5. The FS block size is a power of two, which means you
ideally want an FS block to split evenly over the data drives in a
RAID5, so that one block fills a full stripe exactly. Because one
drive's worth of each stripe goes to parity, you need to take that
into account when counting drives.


My 6 drives/partitions are used as 4 data drives, 1 parity drive, 1 spare, I *think*. At least, that is my understanding, FWIW :-/ ....



For example, imagine you had a RAID5 array with 3 disks and a
filesystem block size of 32K. If you arrange your raid so that each
drive gets 16K, you can complete the write of a block to all 3
drives at once: 16K goes to the first drive, 16K to the second,
and 16K of parity to the third.

However, if you have 4 disks you would need to write 48K blocks to
avoid having to do a read/rewrite, but 48K is not a power of two (and
therefore not settable as an FS block size), so you either go with 32K
or 64K, both of which will result in a certain amount of write-related
reads (as the RAID has to read back data to recalculate the parity).
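To put rough numbers on that (a sketch only, assuming 512-byte sectors and a
stripe unit of 32 sectors per component; the actual values on your set may
differ):

# full-stripe data size = (number of disks - 1) * sectors-per-stripe-unit * 512
# 3-disk RAID5: (3 - 1) * 32 * 512 = 32768 bytes (32K) -> matches a 32K FS block
# 4-disk RAID5: (4 - 1) * 32 * 512 = 49152 bytes (48K) -> no power-of-two FS block fits
echo $(( (3 - 1) * 32 * 512 )) $(( (4 - 1) * 32 * 512 ))
32768 49152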


I do indeed have 4 data disks (I think), w/ 1 parity disk & 1 spare. I thought that the parity data was/is generated by the RAID software, i.e. if I write 48K of data to my 4 data drives, it would get split up into four 12K blocks, 1 for each data disk, then another 12K of parity data written to the parity disk. Thus, I write 48K of (my) data, & the RAID system writes a total of 60K (in my case) of data, no? Otherwise, I am lost & need to do more study to understand what's going on here. I *thought* I had followed all of the BPP recommendations for best performance, but obviously I missed something.



Try creating your RAID5 array from 5 of the 6 disks and see if
performance improves; if it does, you could just leave one disk out of
the RAID5 array, or try RAID6 (with 2 disks' worth of parity).
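As a sketch of what that might look like, lined up so that 4 data components
carry one 64K FS block per full stripe; every component name and number below
is an illustrative assumption, not your current config:

START array
# numRow numCol numSpare
1 5 1

START disks
/dev/wd0g
/dev/wd1g
/dev/wd2g
/dev/wd3g
/dev/wd4g

START spare
/dev/wd5g

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
# 4 data components * 32 sectors * 512 bytes = 64K per full stripe
32 1 1 5

START queue
fifo 100

# and a matching FFS (this would of course wipe the existing /home):
newfs -b 65536 -f 8192 /dev/rraid2a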

I *think* that's what I have now, w/ 1 spare, i.e. 4 active drives, 1 parity drive, & 1 spare.


Also, it's probably worth making sure you've got a backup of all your
configuration information, as whilst RAID5 is redundant and can
tolerate a disk failure, RAID0 can't (so you would lose the contents
of /usr on a drive failure).

Cheers,

Ian


Yeah, I'm clear on the risks of /usr; I wanted performance there, & I am backing up the most critical system stuff on another server, so I think I am good there. I've heard folks on other lists say that RAID0 for /usr is OK, since most of what's there can be recreated fairly easily, so it is (slightly) less critical.


--

	William A. Mahaffey III

 ----------------------------------------------------------------------

	"The M1 Garand is without doubt the finest implement of war
	 ever devised by man."
                           -- Gen. George S. Patton Jr.


