NetBSD-Users archive


Re: Prepping to install



On 06/09/15 17:36, William A. Mahaffey III wrote:
On 06/09/15 09:00, William A. Mahaffey III wrote:
On 06/09/15 08:56, Martin Husemann wrote:
On Tue, Jun 09, 2015 at 08:50:12AM -0453, William A. Mahaffey III wrote:
Thanks for the reply. My RAID1 raid[1,2] devices are defined from 16 GiB partitions of the underlying HDD's, 2 each per raid device. They are not
intended to be subdivided, AFAIK. Therefore, I'm guessing
/dev/raid[1,2]a, right :-) ?
Not subdivided == use the raw partition, so probably /dev/raid[1,2]d

Martin


Gads, this pilot's all over the sky :-/ ....

Would the eventually/hopefully created RAID10 device be autoconfigurable during boot ? TIA & thanks again.


Well, a more careful re-read of the raidctl online man page informs me that a RAID10 is in fact *not* autoconfigurable, so I switched to a 4-device RAID0 (the same 4 X 16 GiB partitions that I was going to make into a RAID10) for /usr. I also redid the parameters of my RAID5 configuration, which I had chosen poorly/invalidly before, & it's now initializing its parity for about the next 5 hours. I post this for anyone who might follow the thread in the future. I'll be off to disklabel-ing the 3 RAIDs tomorrow & (hopefully) installing ....
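
(For anyone following along later: a 4-component RAID0 config in the same style as the examples in the notes further down might look like the sketch below - the wd*f partition names are purely illustrative, not the actual layout on this box.)

# RAID0 across four 16 GiB partitions - config sketch, partition names illustrative
START array
1 4 0
# row col spare

START disks
/dev/wd0f
/dev/wd1f
/dev/wd2f
/dev/wd3f

START layout
64 1 1 0
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level

START queue
fifo 100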


OK, I'm up to the (attempted) install, & hit a minor snag. I prepped my various filesystems closely following the attached notes, posted earlier in this thread. In particular, I prepped the root filesystem to be bootable. I then rebooted the box & removed the USB key, expecting to come up in an installable environment. Instead, I get an endless string of messages:

init: can't exec getty (/usr/libexec/getty) for port /dev/console: no such file or directory.


I rebooted again (hit the reset button) & inserted the USB installer back into the USB port. It was acknowledged during boot, & I have the BIOS set to try to boot from the USB device 1st, then try hard drives next. Nonetheless, it still apparently tries to boot from the HDD's, & returns to that endless string of init messages. More pilot error, I assume, but how do I get around this ? Any clues appreciated. TIA & have a good one.

--

	William A. Mahaffey III

 ----------------------------------------------------------------------

	"The M1 Garand is without doubt the finest implement of war
	 ever devised by man."
                           -- Gen. George S. Patton Jr.

	
Aug 10: Setting up an 8TB NetBSD file server
If anyone reading this is hoping for a new chapter in my Orange Customer Services Experience, they will be sorely disappointed - this post is horribly geeky - just stop reading now.

For anyone else, this will have the slight flavour of a HOWTO - just adjust to taste.

I set this system up a few weeks ago & recorded these notes - I've only just got around to posting them due to time constraints :)

Prologue...

Time to update my home fileserver. Budget: under £500; minimise power usage, noise & size; maximise disk space.


With current prices that means 2TB disks - five is the natural number, giving a power-of-two data stripe with RAID5 (four data components plus one parity per stripe). So some quick ebuyer.com browsing later:

    One HP MicroServer (~£140 after rebate)
    5 * 2TB disks (~£55 each)
    3.5 to 5.25 mounting adaptor (~£2)
    PCI-Express network card (~£8)

Assemble the box - HP still love their bizarre internal hex-headed screws, but at least in this case the door holds all the screws you need in a neat row, complete with the hex-head allen key - nice.

Next, install NetBSD to a cheap USB key - just boot a standard install CD, run through the steps, and set things up to allow remote root ssh. Alternatively download a live ISO image - either way the goal is to have something you can boot and then work on in a laptop window from the comfort of the sofa.
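
A rough sketch of the ssh part (assuming the stock rc framework and OpenSSH config paths on the installed key):

onyx# echo "sshd=YES" >> /etc/rc.conf
onyx# echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
onyx# /etc/rc.d/sshd restart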

Filesystem overview

The plan is to have the majority of the disks in a RAID5, leaving 40GB unused at the start of each disk. This will give:

    7.3 TB RAID5 /home/media (disks 0,1,2,3,4)
    40GB RAID1 / (disk 0,1)
    40GB RAID1 /home (disk 2,3)
    10GB swap (disk 4)
    30GB /tmp (disk 4)

Everything except swap and /tmp is raided to avoid data loss in the event of a single disk failure. Multiple disk failure is outside the scope of this page, but setting up a second, similar server at a remote location is always a good call :)

This is a pretty generic layout - the reserved space can easily be adjusted up or down at this stage to cater for different usage. If you are paranoid about resilience then swap should be on RAID1, and if you plan on hitting /tmp and swap heavily at the same time then shuffle things around.

The first 40GB of disk4 is actually set up as a single-device RAID0 so we can take advantage of raidframe's autoconfigure - if the disks are shuffled around, everything will still work.

Setup

Anyway, onto the setup. We shall assume you are logged in via a USB install or Live image:

First let's just confirm we have some disks:

onyx# sysctl hw.disknames
hw.disknames = wd0 wd1 wd2 wd3 wd4 sd0

or the more verbose

onyx# grep ^wd /var/run/dmesg.boot
wd0 at atabus0 drive 0
wd0: <WDC WD20EARS-00MVWB0>
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd0(ahcisata0:0:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd1 at atabus1 drive 0
wd1: <WDC WD20EARS-00MVWB0>
wd1: drive supports 16-sector PIO transfers, LBA48 addressing
wd1: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd1(ahcisata0:1:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd2 at atabus2 drive 0
wd2: <WDC WD20EARS-00MVWB0>
wd2: drive supports 16-sector PIO transfers, LBA48 addressing
wd2: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd2: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd2(ahcisata0:2:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd3 at atabus3 drive 0
wd3: <Hitachi HDS5C3020ALA632>
wd3: drive supports 16-sector PIO transfers, LBA48 addressing
wd3: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd3: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd3(ahcisata0:3:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
wd4 at atabus4 drive 1
wd4: <Hitachi HDS5C3020ALA632>
wd4: drive supports 16-sector PIO transfers, LBA48 addressing
wd4: 1863 GB, 3876021 cyl, 16 head, 63 sec, 512 bytes/sect x 3907029168 sectors
wd4: 32-bit data port
wd4: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd4(ixpide0:0:1): using PIO mode 4, Ultra-DMA mode 6 (Ultra/133) (using DMA)


That's maybe a little more information than we really wanted to know... but anyway.

Creating MBR partitions with fdisk

Start by using fdisk to create a single NetBSD partition on each disk, and to make the partition bootable on the first two disks (which have the root filesystem).


onyx# fdisk -iau0 wd0

fdisk: primary partition table invalid, no magic in sector 0
Disk: /dev/rwd0d
NetBSD disklabel disk geometry:
cylinders: 3876021, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
total sectors: 3907029168

BIOS disk geometry:
cylinders: 1023, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
total sectors: 3907029168

Partitions aligned to 2048 sector boundaries, offset 2048

Do you want to change our idea of what BIOS thinks? [n] [RETURN]

Partition 0:
<UNUSED>
The data for partition 0 is:
<UNUSED>
sysid: [0..255 default: 169] [RETURN]
start: [0..243201cyl default: 2048, 0cyl, 1MB] [RETURN]
size: [0..243201cyl default: 3907027120, 243201cyl, 1907728MB] [RETURN]
bootmenu: [] [RETURN]
Do you want to change the active partition? [n] y[RETURN]
Choosing 4 will make no partition active.
active partition: [0..4 default: 0] [RETURN]
Are you happy with this choice? [n]  y[RETURN]
Update the bootcode from /usr/mdec/mbr? [n] y[RETURN]

We haven't written the MBR back to disk yet. This is your last chance.
Partition table:
0: NetBSD (sysid 169)
start 2048, size 3907027120 (1907728 MB, Cyls 0-243201/80/63), Active
PBR is not bootable: All bytes are identical (0x00)
1: <UNUSED>
2: <UNUSED>
3: <UNUSED>
Bootselector disabled.
First active partition: 0
Should we write new partition table? [n]  y[RETURN]

    Run fdisk -iau0 wd1, giving the same answers
    Run fdisk -u0 wd2 - similar, except it skips the active & bootcode questions
    Run fdisk -u0 wd3 and fdisk -u0 wd4
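
To double-check any disk before moving on, fdisk with no options just prints the current partition table:

onyx# fdisk wd1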

Disklabels

Next come the disklabels. Decide how much you want to keep out of the RAID5 (I chose 40GB) and how much of that to use for swap and /tmp (10GB and 30GB respectively). Ideally this amount should be the same on all disks. If it's not, the RAID5 will just use the smallest remaining value and waste the extra on the other disks.

I labelled my disks "disk0" to "disk4" to match how they show up in the NetBSD autoconfig. This is absolutely not required - you could even shuffle the SATA cables between every boot, since NetBSD automatically assembles the RAID components based on identifiers on each disk - but it pacifies the slight OCD tendency in me.

The first partition is offset by 1m. This matches the default in newer NetBSD fdisk and also avoids the old 63-sector insanity which causes misaligned accesses on 4K-sector disks.

This uses partition a for the RAID1 (/ and /home) and RAID0 (/tmp & swap) components, and partition e for the RAID5.

The total size of your disks may be different from the values here. This does not matter as long as the disks are 2TB or less. If they are over 2TB then you need to use GPT rather than MBR partitions, but we do not cover that here.


onyx# disklabel -i wd0
Enter '?' for help

partition> N[RETURN]
Label name [fictitious]: disk0[RETURN]
partition> a[RETURN]
Filesystem type [?] [unused]: raid[RETURN]
Start offset ('x' to start after partition 'x') [0c, 0s, 0M]: 1m[RETURN]
Partition size ('$' for all remaining) [0c, 0s, 0M]: 40g[RETURN]
 a:  83886080      2048       RAID   # (Cyl.      2*-  83222*)
partition> e[RETURN]
Filesystem type [?] [4.2BSD]: raid[RETURN]
Start offset ('x' to start after partition 'x') [2.0317461490631103515625c, 2048s, 1M]: a[RETURN]
Partition size ('$' for all remaining) [3876019c, 3907027120s, 1907728.125M]: $[RETURN]
 e: 3823141040  83888128       RAID   # (Cyl.  83222*- 3876020)
partition> P[RETURN]
5 partitions:
#          size    offset     fstype [fsize bsize cpg/sgs]
 a:    83886080      2048       RAID              # (Cyl.      2*-  83222*)
 c:  3907027120      2048     unused      0     0 # (Cyl.      2*- 3876020)
 d:  3907029168         0     unused      0     0 # (Cyl.      0 - 3876020)
 e:  3823141040  83888128       RAID              # (Cyl.  83222*- 3876020)
partition> W[RETURN]
Label disk [n]? y[RETURN]
Label written
partition> Q[RETURN]

Then repeat for disklabel -i wd1, disklabel -i wd2, disklabel -i wd3, and disklabel -i wd4, changing only the "disk0" label name each time.
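
Reading a label back with no options is a quick sanity check at any point:

onyx# disklabel wd3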

Creating the RAID partitions

Onto the raid setup. For this we just create four config files, raid0.conf to raid3.conf - keeping them in /root/ will be fine:

For the root filesystem

# raid0.conf - RAID1 on two disks, for 32K block size
START array
1 2 0
# row col spare

START disks
/dev/wd0a
/dev/wd1a

START layout
64 1 1 1
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level

START queue
fifo 100

For /home

# raid1.conf - RAID1 on two disks, for 32K block size
START array
1 2 0
# row col spare

START disks
/dev/wd2a
/dev/wd3a

START layout
64 1 1 1
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level

START queue
fifo 100

For swap and /tmp

# raid2.conf - RAID0 on one disk, for 32K block size
START array
1 1 0
# row col spare

START disks
/dev/wd4a

START layout
64 1 1 0
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level

START queue
fifo 100

For /home/media

# raid3.conf - RAID5 on five disks, for 64K block size
START array
1 5 0
# row col spare

START disks
/dev/wd0e
/dev/wd1e
/dev/wd2e
/dev/wd3e
/dev/wd4e

START layout
32 1 1 5
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level

START queue
fifo 100

Next we create the four raids, give them all unique serial numbers (-I), and tell them to autoconfigure on boot (-A). We'll come back to making raid0 automatically be the root filesystem later - doing it now would be annoying if we rebooted before creating its filesystems.

onyx# raidctl -C raid0.conf raid0 ; raidctl -I 10 raid0 ; raidctl -A yes raid0
onyx# raidctl -C raid1.conf raid1 ; raidctl -I 11 raid1 ; raidctl -A yes raid1
onyx# raidctl -C raid2.conf raid2 ; raidctl -I 12 raid2 ; raidctl -A yes raid2
onyx# raidctl -C raid3.conf raid3 ; raidctl -I 13 raid3 ; raidctl -A yes raid3
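
At this point the status output should show each raid with all components optimal, but with the parity still dirty:

onyx# raidctl -s raid0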

Moving on, we need to initialise the parity on each raid. We could just set them all going at once, but it's probably better to start the first three and, when they are done, start the final one (to avoid disk contention). raidctl -S displays the rebuild progress and only returns when the rebuild is complete. You can continue working while the parity is initialising, and even reboot (in recent netbsd-5 and later) and have it carry on afterwards, but it does mean operations are slower, and the system is not protected from disk failure until the parity is complete.

onyx# raidctl -i raid0 ; raidctl -i raid1 ; raidctl -i raid2
onyx# raidctl -S raid0 ; raidctl -S raid1 ; raidctl -S raid2
onyx# raidctl -i raid3 ; raidctl -S raid3


Creating the partitions

The default disklabels for raid0 and raid1 (one large 'a' partition) are probably fine for us, so we can just get them written to the disks. There are other ways to do this, but to re-use the 'disklabel -i' command:

onyx# disklabel -i raid0
Enter '?' for help
partition> W[RETURN]
Label disk [n]? y[RETURN]
Label written
partition> Q[RETURN]

Then repeat for raid1. For raid2 we want to set up a 30GB /tmp and a 10GB swap, so:

onyx# disklabel -i raid2
Enter '?' for help
partition> a[RETURN]
Filesystem type [?] [4.2BSD]: [RETURN]
Start offset ('x' to start after partition 'x') [0c, 0s, 0M]: [RETURN]
Partition size ('$' for all remaining) [327679.75c, 83886016s, 40959.96875M]: 30GB[RETURN]
 a:  62914560         0     4.2BSD      0     0     0  # (Cyl.      0 - 245759)
partition> b[RETURN]
Filesystem type [?] [unused]: swap[RETURN]
Start offset ('x' to start after partition 'x') [0c, 0s, 0M]: a[RETURN]
Partition size ('$' for all remaining) [0c, 0s, 0M]: $[RETURN]
 b:  20971456  62914560       swap                     # (Cyl. 245760 - 327679*)
partition> W[RETURN]
Label disk [n]? y[RETURN]
Label written
partition> Q[RETURN]

Since raid3 is larger than 2TB (more or less the whole point of this exercise), we need to set up a GPT table to handle it:

onyx# gpt create raid3
onyx# gpt add -b 128 raid3
[ this will indicate the size of the wedge, in my case 15292563679 - use that number below]
onyx# dkctl raid3 addwedge media 128 15292563679 4.2BSD

This gives us an (automatically created on boot) dk0 device which is around 7.2TB in size. Unfortunately it will not show up as type 4.2BSD until the next boot, so we will have to give newfs the -I flag when we create its filesystem (or reboot).
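
You can confirm the wedge from the running system by asking dkctl to list the wedges on the device:

onyx# dkctl raid3 listwedges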


Creating filesystems

We will go with FFSv2 filesystems. The RAID5 was created with 32-sector (16K) stripe units per component, and with four data components per stripe that makes a 64K full stripe - so it is important to use a 64k block size there to avoid writes suffering an expensive read/modify/write cycle. The other raids fit a 32k block size nicely, so:

onyx# newfs -O2 -b32k raid0a
onyx# newfs -O2 -b32k raid1a
onyx# newfs -O2 -b32k raid2a
onyx# newfs -O2 -b64k -I dk0 

Installing

Now that we have all these wonderful raid filesystems, it would be nice to have an operating system to use them. (Unless you have the social life of a kumquat in which case just creating them may be goal enough in itself.)

First we mount them - during the install we can use "-o async" to maximise write speed, since at this point we have no data to lose in the event of a crash. Once the install is complete we'll use "-o log" for data security. Note also mount_ffs is used for dk0, as we have not yet rebooted to "fix" its type issue. Mounting /tmp is not strictly needed, but it's a nice test:

onyx# mount -o async /dev/raid0a /altroot
onyx# mkdir /altroot/home ; mount -o async /dev/raid1a /altroot/home
onyx# mkdir /altroot/tmp ; mount -o async /dev/raid2a /altroot/tmp

onyx# mkdir /altroot/home/media ; mount_ffs -o async /dev/dk0 /altroot/home/media
A quick df -h to see how much space we have:

onyx# df -h
Filesystem        Size       Used      Avail %Cap Mounted on
/dev/sd0a          14G       6.0G       7.7G  43% /
tmpfs             905M       4.0K       905M   0% /tmp
tmpfs             905M       4.0K       905M   0% /var/tmp
/dev/raid0a        39G       8.0K        37G   0% /altroot
/dev/raid1a        39G       8.0K        37G   0% /altroot/home
/dev/raid2a        30G       4.0K        28G   0% /altroot/tmp
/dev/dk0          7.1T       8.0K       6.7T   0% /altroot/home/media


Next, extract NetBSD to /altroot - if you've booted from the USB key and are happy to use that install as a base then just run:

onyx# cd / ; pax -rw -pe -X / /altroot

Alternatively, extract the NetBSD release *.tgz set files into /altroot.
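
A sketch of the sets route - /path/to/sets is a placeholder for wherever the release files live. Note that unlike the pax copy above, the sets do not populate /dev with device nodes, so create those too with the MAKEDEV script the sets install into /dev:

onyx# cd /altroot
onyx# for f in /path/to/sets/*.tgz; do pax -rz -pe -f $f; done
onyx# cd /altroot/dev && sh MAKEDEV all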

Set up /altroot/etc/fstab - a sample might be:

# /etc/fstab
/dev/raid0a   /           ffs    rw,log 1 1
/dev/raid1a   /home       ffs    rw,log 1 2
/dev/raid2a   /tmp        ffs    rw,log 1 2
/dev/raid2b   swap        swap   sw     0 0
/dev/dk0      /home/media ffs    rw,log 1 3
/proc         /proc       procfs rw
kernfs        /kern       kernfs rw
ptyfs         /dev/pts    ptyfs  rw
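
If /proc and /kern are missing on the new root, create them now, or those fstab entries will fail at boot:

onyx# mkdir -p /altroot/proc /altroot/kern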

Install boot blocks - we need to do this on *both* wd0 and wd1 so the system can still boot in the event of a single disk failure:

onyx# cd /altroot ; cp usr/mdec/boot .
onyx# installboot /dev/rwd0a usr/mdec/bootxx_ffsv2
onyx# installboot /dev/rwd1a usr/mdec/bootxx_ffsv2

Finally, set up raid0 to automatically configure as the root filesystem:

onyx# raidctl -A root raid0

... and we're done. Set up apache to serve webdav for your xbmc machines, samba, netatalk, and nfs as required :)
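
As one example, a minimal NFS export of the media filesystem needs little more than an exports entry and the server daemons enabled in rc.conf (the network numbers here are illustrative):

# /etc/exports - adjust the network to taste
/home/media -network 192.168.1.0 -mask 255.255.255.0

# /etc/rc.conf additions
rpcbind=YES
mountd=YES
nfs_server=YES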


