NetBSD-Users archive


Re: Prepping to install



On 12 May 2015 at 16:01, William A. Mahaffey III <wam%hiwaay.net@localhost> wrote:
> On 05/12/15 02:32, David Brownlee wrote:
>>
>> On 11 May 2015 at 23:46, William A. Mahaffey III <wam%hiwaay.net@localhost> wrote:
>>
>> If you are using RAID5 I would strongly recommend keeping to
>> "power-of-two + 1" components, to keep the stripe size as a nice power
>> of two, otherwise performance is... significantly impaired.
>
> Hmmmm .... Could you amplify on that point a bit ? I am intending to
> maximize available storage & have already procured the mbd & 6 drives, but I
> could rethink things if my possibly hasty choices would be too burdensome

For RAID5 to perform efficiently, data should be written in units which
are aligned with the RAID stripes and are a multiple of the stripe size;
otherwise a simple write turns into a read of the stripe, modification
of the affected part, and then a write back.

Filesystems tend to have sectors and blocks which are powers of two,
so the easiest way to arrange this for ffs is for the filesystem block
size to be a multiple of the stripe size ("1" is a fine multiple in
this case).
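The "power-of-two + 1" advice can be sketched with some quick arithmetic. This is illustrative only: the 32 KiB per-component stripe unit is an assumption, not a figure from the thread.

```python
# With an N-component RAID5, N-1 components hold data, so the data
# stripe is (N-1) * stripe_unit bytes. Filesystem blocks are powers of
# two, so they can only align cleanly with a power-of-two data stripe.
stripe_unit = 32 * 1024  # assumed 32 KiB per-component stripe unit

def data_stripe(components):
    """Bytes of data (excluding parity) per full RAID5 stripe."""
    return (components - 1) * stripe_unit

def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

for n in (5, 6):
    s = data_stripe(n)
    # 5 components: 128 KiB data stripe, a power of two, so ffs blocks align.
    # 6 components: 160 KiB data stripe, not a power of two, so writes
    # fall into the read-modify-write path.
    print(n, s // 1024, is_power_of_two(s))
```

So six components in one RAID5 set gives a stripe that no power-of-two ffs block size can be a clean multiple of, which is why five (4+1) components behave so much better.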

This is similar to the issue with disks which have 4K physical sectors
but present them as 512-byte sectors - if a filesystem is not 4K
aligned then write performance suffers horribly.
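That alignment effect can be sketched in a few lines (the offsets here are illustrative, not from the thread):

```python
# A filesystem block that does not start on a 4 KiB boundary straddles
# two physical sectors, and each straddled sector costs the drive a
# read-modify-write internally.
PHYS = 4096  # physical sector size of a 512e/4Kn drive

def phys_sectors_touched(offset, length):
    """Number of 4 KiB physical sectors a write at byte offset covers."""
    first = offset // PHYS
    last = (offset + length - 1) // PHYS
    return last - first + 1

print(phys_sectors_touched(0, 4096))    # aligned 4K write: 1 sector
print(phys_sectors_touched(512, 4096))  # misaligned 4K write: 2 sectors
```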

>> If you do not need to maximise the space you will always get better
>> performance from RAID1 (or RAID10). For that I would RAID1 the disks
>> in pairs, then RAID0 two of them to give a 1TB and a fast 2TB storage
>> unit, which would then be partitioned up as needed. It also makes it
>> simpler to later replace the pair or the four disks while leaving the
>> other set.
>
> I do want to maximize storage space, will go RAID5 almost certainly for my
> largest storage pool, /home.

>> As long as you are below the 2TB limit for any given component you can
>> use disklabels which are much simpler than gpt with wedges. NetBSD is
>> moving more to wedges, but netbsd-6 is probably not the version to do
>> it on by choice :) On that note I would probably put a netbsd-7 BETA
>> on the box and just update when the full release comes out.
>
> I want the (possibly perceived) reliability of the 6.1.5 over anything
> called BETA :-/ ....

NetBSD is very conservative about shipping releases. I think the BETA
is very close to what will be released and I've found it very stable.

>> If you want to maximise space with some redundancy then as you say,
>> RAID5 is the way to go for the bulk of the storage.
>>
>> A while back I setup a machine with 5 * 2TB disks with netbsd-6, with
>> small RAID1 partitions for root and the bulk as RAID5
>> http://abs0d.blogspot.co.uk/2011/08/setting-up-8tb-netbsd-file-server.html
>> (wow, was that really four years ago) - in your position I might keep
>> one 1TB as a scratch/build space and then RAID up the rest.
>>
>> If you have time definitely experiment, get a feel for the different
>> performance available from the different options.
>
> *Wow*, another fabulous resource. Your blog documents almost verbatim what I
> have in mind. I am going w/ 6 drives (already procured, 6 SATA3 slots on the
> mbd, done deal), but philosophically very close to what you describe. 1
> question: if you were doing this again today, would it be fdisk or GPT ?

If I had >2TB drives it would be gpt :) If not, I would still stick
with fdisk. The complexity of gpt setup and wedge autoconfiguration is
still greater than fdisk and disklabel. I know I'm going to have to
move to it at some point, but I'm going to hold off until I need to.

> I think I am looking at 4 partitions per drive, ~16 GB for / (RAID1, 2 drives)
> & /usr (4 drives, RAID10), 16 GB for swap (kernel driver, all 6 drives), 16
> - 32 GB for /var (RAID5, all 6 drives), & the rest for /home (RAID5, all 6
> drives). TIA & thanks again.

I would definitely hold off on RAID5 for everything except the large
/home. RAID1 is much simpler and gives better write performance. I
would also try to avoid configuring multiple RAID5s across overlapping
sets of disks; while that theoretically provides more IO bandwidth, it
will have to compete with all the other filesystems and swap usage on
the system.

If you wanted to use all six disks:
- 32G(RAID1 root+usr) 910G(non raid scratch space)
- 32G(RAID1 root+usr) 910G(RAID5 home)
- 32G(RAID1 var) 910G(RAID5 home)
- 32G(RAID1 var) 910G(RAID5 home)
- 32G(RAID1 swap+spare) 910G(RAID5 home)
- 32G(RAID1 swap+spare) 910G(RAID5 home)
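The five 910G components above would be described to RAIDframe with a config file along these lines. This is only a sketch: the device names, partition letters, stripe unit, and file name are assumptions, not details from the thread.

```
# Hypothetical /etc/raid1.conf for the five-component RAID5 /home set
START array
# numRow numCol numSpare
1 5 0
START disks
/dev/wd1e
/dev/wd2e
/dev/wd3e
/dev/wd4e
/dev/wd5e
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
32 1 1 5
START queue
fifo 100
```

It would then be configured and initialised with raidctl(8), e.g. `raidctl -C /etc/raid1.conf raid1` followed by initialising the component labels and parity.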

32GB space notes:
- This gives you three 32GB RAID1 'pools' to allocate everything
outside of /home
- The 32G can be adjusted up or down before partitioning, but all
should be the same size
- In the suggestion, root+usr are kept on the same RAID (and could be
a single partition), so that the system can have all of the userland
available with only one disk attached, and a 'spare' partition is left
in case of later moderate additional space needs - maybe an extra
partition for /usr/pkg, or for /var/pgsql, etc.
- Obviously allocate usage within pools to taste - /usr could be put
on a separate RAID to provide more IO bandwidth for root & /usr

910GB space notes:
- This gives a 5 * 910GB RAID5, which provides 4 * 910G (or 3640G) of space
- One disk is not included in the RAID5. This could be saved as a
spare for a RAID5 component failure (though a better approach might be
to have a disk on the desk next to the machine :), or used as
non-RAIDed scratch space. If it will not be active, then it is
probably best to put it on the same disk as the most heavily used, or
the most important, 32G set
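The capacity figure above follows from RAID5 dedicating one component's worth of space to parity:

```python
# Capacity arithmetic for the layout above (sizes in GB, from the mail)
components = 5                        # five 910 GB partitions in the set
part_gb = 910
usable = (components - 1) * part_gb   # one component's worth goes to parity
print(usable)  # 3640 GB usable for /home
```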

Note in the above that IO to /home will hit (almost) all disks, and
will affect all of the 32GB pools, so if you have heavy IO to /home do
not expect blistering performance from any filesystem. On the other
hand, when /home has very light IO you should get relatively nice
multi-spindle performance from the other filesystems.

Having said all that, if I had the time to play I would install onto a
USB key, then script up the building and partitioning of the system in
many different forms and then chroot into the result and run some
tests to see how it performs.
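A trivial throughput smoke test along these lines could be dropped into such a script and run once per trial layout (plain dd; the target directory argument is an assumption):

```shell
#!/bin/sh
# Minimal write/read smoke test for whatever filesystem holds $1.
# Compare the transfer rates dd reports across the different layouts.
dir=${1:-.}
dd if=/dev/zero of="$dir/ddtest" bs=1048576 count=16   # sequential write
dd if="$dir/ddtest" of=/dev/null bs=1048576            # sequential read
rm -f "$dir/ddtest"                                    # clean up
```

For real comparisons a proper benchmark (bonnie++, fio, etc.) would give more useful numbers, but this is enough to spot gross misalignment.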

