Current-Users archive


RAIDframe use on -current


I am new to RAIDframe on NetBSD (I usually use ZFS RAIDZx or hardware
RAID where required). A machine turned up lately that had previously
run FreeBSD with a hardware HPT1520-based mirror. As this didn't
appear to be supported by NetBSD (which was the original reason, many
years ago, to provision FreeBSD on that box in the first place), I
decided to try RAIDframe, following Ch. 16 of the manual.

The guide states that after creating the fake mirror on the second
disk, populating it with a copy of the system from the first disk and
writing the boot blocks, one should boot from the second disk and then
proceed with the further preparation of the first disk as a spare.
This boot did not work for me under -current - I got a message from
the bootloader that /boot cannot be found. However, when I booted
again from the first disk, I found that my root was actually the
desired single-disk RAID set, and not the primary NetBSD installation.
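For context, this is roughly the sequence the guide describes, as I
understood it - a sketch from memory, so the serial number and the
device name of the second disk (wd1 here) are illustrative:

```shell
# Single-component "fake mirror" on the second disk, per the guide.
cat > /var/tmp/raid0.conf <<'EOF'
START array
# numRow numCol numSpare
1 2 0

START disks
absent
/dev/wd1a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100
EOF

raidctl -C /var/tmp/raid0.conf raid0   # -C forces creation despite the absent component
raidctl -I 2015052601 raid0            # initialize component labels (arbitrary serial)
raidctl -A root raid0                  # autoconfigure and use as root at boot
```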

During the first resilver, the system froze with a message about an
interrupt from the HPT controller (this may be a completely different
matter - I suspect my power supply wasn't very well secured to the
motherboard, and the box was unstable). I replaced the HPT controller
with a Promise PDC20375 and reinstalled the system on the first disk
from a USB installation image - I could not boot from either disk at
this stage - in order to repeat the setup and create the RAID. I was
surprised to find out that the second disk still contained a previous
copy of the system - even after a 'dd if=/dev/zero of=/dev/rwd5d bs=8k
count=1' as per the manual - so I completed the process at this stage,
did an fsck of /dev/rraid0a, rewrote the boot blocks etc. and tried to
boot from that disk again.
This failed for the same reason - /boot could not be opened. So I
rebooted again from the first disk - only to find myself with root on
/dev/raid0a, using the original pax-ed contents of the first
installation... I then proceeded to clean the first disk, add it to
the set as a spare and resilver the set, which completed; I then
rewrote the boot blocks following the manual. After that I was not
able to boot from either of the disks, with the same message - /boot
not found, Error (2) - so I decided to do a clean install of
everything, thinking that I had made some mistake. I was pretty
surprised to find, again, that when I booted the installation image
from the USB stick, I found myself on a perfectly mirrored RAID1
disk...
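In case it matters, this is approximately how I cleaned the first disk
and re-added it. The extra dd over the start of the old RAID partition
is my own addition, on the assumption (possibly wrong) that the
RAIDframe component label survives a wipe of only the first 8 KB of
the raw disk, since the partition starts at an offset:

```shell
# Wipe and re-add a former component (wd5 here), then reconstruct.
dd if=/dev/zero of=/dev/rwd5d bs=8k count=1    # clears the disklabel, as in the manual
dd if=/dev/zero of=/dev/rwd5a bs=8k count=16   # my addition: clear the old component label area
raidctl -a /dev/wd5a raid0                     # add the disk as a hot spare
raidctl -F component0 raid0                    # fail the absent component; reconstruct to spare
raidctl -s raid0                               # watch the reconstruction status
```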

It seems to me that it doesn't matter where GENERIC comes from - when
there is a RAID set with Autoconfigure and Root set, it will switch
the root to that. This is fine, but I can't figure out how to avoid
using a USB stick to boot - i.e. where and how to install the boot
blocks on the two RAID members. The manual says 'installboot ...
/dev/rwd?a /usr/mdec/bootxx_ffsv2', on condition that 'file -s
/dev/rwd?a' finds FFSv2, or 'dumpfs -s /dev/rwd?a' finds the same;
in my case I get:

uksup1# disklabel wd4
# /dev/rwd4d:
type: ESDI
disk: ST3120827AS
label: fictitious
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 232581
total sectors: 234441648
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

5 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a: 234441585        63       RAID                     # (Cyl.      0*- 232580)
 c: 234441585        63     unused      0     0        # (Cyl.      0*- 232580)
 d: 234441648         0     unused      0     0        # (Cyl.      0 - 232580)

uksup1# file -s /dev/rwd4a
/dev/rwd4a: x86 boot sector

(dumpfs skips it.)
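My guess - and it is only a guess - is that file and dumpfs see no FFS
on the bare component because the file system actually starts inside
the RAID set, past the RAIDframe component label, so the same checks
might instead be run against the raid device (device names as in my
setup):

```shell
# Check for FFS on the inner raid device rather than the raw component:
file -s /dev/rraid0a
dumpfs -s /dev/rraid0a
# while installboot still targets the bare component, as the manual shows:
installboot -v /dev/rwd4a /usr/mdec/bootxx_ffsv2
```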

On the RAID device I get:

uksup1# disklabel raid0
# /dev/rraid0d:
type: RAID
disk: raid
label: fictitious
bytes/sector: 512
sectors/track: 128
tracks/cylinder: 8
sectors/cylinder: 1024
cylinders: 228946
total sectors: 234441472
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0

4 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a: 213961472         0     4.2BSD      0     0     0  # (Cyl.      0 - 208946*)
 b:  20480000 213961472       swap                     # (Cyl. 208946*- 228946*)
 d: 234441472         0     unused      0     0        # (Cyl.      0 - 228946*)


The question to -current users is: are there any recent changes in
RAIDframe, or in the system in general, which lead to a different
setup for a mirrored root? Or perhaps there is some other means of
doing it altogether... Otherwise this query should have been directed
at netbsd-users.


Chavdar Ivanov

