Current-Users archive


Re: configuring raid using gpt, large drives



On Sun, Feb 02, 2020 at 01:10:33PM -0500, MLH wrote:
[...]

> $ mount NAME=raid@ZFN15G3N /mnt
> WARNING: autoselecting nfs based on : or @ in the device name is deprecated!
> WARNING: This behaviour will be removed in a future release
> mount_nfs: no <host>:<dirpath> or <dirpath>@<host> spec
> $ mount /dev/raid0a /mnt
> mount_ffs: /dev/raid0a on /mnt: Device not configured

It is rather a long shot and I'm not entirely sure about it, but as a first
step I suggest removing the '@' sign from GPT partition names and wedge
names. It conflicts with the NFS naming scheme, at least in the case
mentioned above, and I'm afraid there are other places where it may fail.
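If renaming is an option, the label can be changed in place with gpt(8)
and the wedge re-created with dkctl(8). A rough sketch; the device name,
wedge index, wedge name, and new label below are all examples, and the
exact gpt(8) option syntax may differ between releases:

```shell
# Rename the GPT partition at index 1 on wd0 so the label no
# longer contains '@' (index and label are examples):
gpt label -i 1 -l raid_ZFN15G3N wd0

# Re-create the wedges so dk(4) picks up the new name:
dkctl wd0 delwedge dk4
dkctl wd0 makewedges
```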

Personally, I have used GPT drives and wedges since 7.0_BETA, and RAID
setups on GPT partition tables from 8.0 through 9.0_RC (now).

My setup consists of the following components:

=============================================================================
1. wd0 and wd1 - both GPT partitioned, each with a small EFI boot
   partition, previously used for an EFI-compatible bootloader.
   There was a bug with memory mapping there, so I use grub now
   (the bug is probably fixed, but I haven't verified it so far):

#  gpt show -al wd0

   # for grub

        2048        4096      1  GPT part - BIOS Boot
                                 Type: bios
                                 TypeID: 21686148-6449-6e6f-744e-656564454649
                                 GUID: 4b409213-4d0d-3448-8835-e25ba083dad9
                                 Size: 2048 K
                                 Label: bios1
                                 Attributes: None

   ...

   # for EFI (efi1 on wd0 and efi2 on wd1)
   293607424     1126400      4  GPT part - EFI System
                                 Type: efi
                                 TypeID: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
                                 GUID: a0f50318-d84d-da4f-be1a-bed65411c09e
                                 Size: 550 M
                                 Label: efi1
                                 Attributes: None

   294733824    83886080      5  GPT part - NetBSD RAIDFrame component
                                 Type: raid
                                 TypeID: 49f48daa-b10e-11dc-b99b-0019d1879648
                                 GUID: 6609f622-26ac-e049-a10a-4e90fc9e0cfd
                                 Size: 40960 M
                                 Label: netsys1
                                 Attributes: None

   ...

=============================================================================
2. three raid sets, created from three pairs of gpt partitions

[    10.990214] raid0: Components: /dev/dk4 /dev/dk12
[    10.990214] raid1: Components: /dev/dk5 /dev/dk13
[    11.010222] raid2: Components: /dev/dk6 /dev/dk14
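Each set is an ordinary two-component RAID-1 over a pair of wedges. A
minimal configuration sketch for raid0; the component names match my
dmesg above, but the layout and queue parameters and the serial number
are illustrative:

```shell
# raid0.conf: RAID-1 (level 1) over two wedges
cat > /tmp/raid0.conf <<'EOF'
START array
# numRow numCol numSpare
1 2 0
START disks
/dev/dk4
/dev/dk12
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1
START queue
fifo 100
EOF

raidctl -C /tmp/raid0.conf raid0   # force initial configuration
raidctl -I 2020020201 raid0        # set the component serial number
raidctl -iv raid0                  # initialize parity
raidctl -A yes raid0               # autoconfigure the set at boot
```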


=============================================================================
3. There is a GPT partition table created on each raid set(!), see:

# gpt show -al raid0

        40   2097152      1  GPT part - NetBSD FFSv1/FFSv2
                                 Type: ffs
                                 TypeID: 49f48d5a-b10e-11dc-b99b-0019d1879648
                                 GUID: 8dda46b0-c667-4314-9d6e-8470bf03330e
                                 Size: 1024 M
                                 Label: root
                                 Attributes: None
   2097192   4194304      2  GPT part - NetBSD FFSv1/FFSv2
                                 Type: ffs
                                 TypeID: 49f48d5a-b10e-11dc-b99b-0019d1879648
                                 GUID: 1fd5c641-51f9-4aca-998a-258b613dea2a
                                 Size: 2048 M
                                 Label: var
                                 Attributes: None
  ...


# gpt show -al raid2

         40  419430192      1  GPT part - NetBSD FFSv1/FFSv2
                                 Type: ffs
                                 TypeID: 49f48d5a-b10e-11dc-b99b-0019d1879648
                                 GUID: fc736297-4256-4bf7-934e-0d614e8509ff
                                 Size: 200 G
                                 Label: home
                                 Attributes: None

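Creating the inner table on a raid device works the same way as on a raw
disk. A sketch with sizes and labels taken from the listing above; the
dk number for newfs is an example and should be read from dmesg:

```shell
gpt create raid0
gpt add -i 1 -s 2097152 -t ffs -l root raid0   # 1024 M root
gpt add -i 2 -s 4194304 -t ffs -l var raid0    # 2048 M var
# The new wedges show up as /dev/dkN; make the filesystems on them:
newfs -O2 /dev/rdk16
```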

=============================================================================
4. All partitions are mounted by name:

#
NAME=root  /      ffs  rw     1 1
NAME=var   /var   ffs  rw,log 1 2
NAME=usr   /usr   ffs  rw,log 1 2
NAME=home  /home  ffs  rw,log 1 2
NAME=swap  none   swap sw     0 0

NAME=efi1  /mnt/efi1 msdos rw,noauto 0 0
NAME=efi2  /mnt/efi2 msdos rw,noauto 0 0


=============================================================================
5. The boot process looks like the following. As you can see, the system
is able to assemble the raid devices and scan them for GPT partition
tables.

[     2.817072] dk3 at wd0: "efi1", 1126400 blocks at 293607424, type: msdos
[     2.817072] dk4 at wd0: "netsys1", 83886080 blocks at 294733824, type: raidframe
[     2.817072] dk5 at wd0: "netswap1", 18874368 blocks at 378619904, type: raidframe
[     2.817072] dk6 at wd0: "nethome1", 419430400 blocks at 397494272, type: raidframe
...
[     2.867092] dk11 at wd1: "efi2", 1126400 blocks at 293607424, type: msdos
[     2.867092] dk12 at wd1: "netsys2", 83886080 blocks at 294733824, type: raidframe
[     2.867092] dk13 at wd1: "netswap2", 18874368 blocks at 378619904, type: raidframe
[     2.867092] dk14 at wd1: "nethome2", 419430400 blocks at 397494272, type: raidframe

[    10.990214] raid0: Components: /dev/dk4 /dev/dk12
[    10.990214] dk16 at raid0: "root", 2097152 blocks at 40, type: ffs
[    10.990214] dk17 at raid0: "var", 4194304 blocks at 2097192, type: ffs
[    10.990214] dk18 at raid0: "usr", 77594416 blocks at 6291496, type: ffs

[    10.990214] raid1: Components: /dev/dk5 /dev/dk13
[    11.010222] dk19 at raid1: "swap", 18874160 blocks at 40, type: swap

[    11.010222] raid2: Components: /dev/dk6 /dev/dk14
[    11.050238] dk20 at raid2: "home", 419430192 blocks at 40, type: ffs


=============================================================================
6. Warning: there was a problem with too small a number of /dev/dk*
device nodes; in my case I have:

# ls /dev/dk* | wc -l
      25
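If a wedge fails to attach or mount because its dk node does not exist,
additional nodes can be created with MAKEDEV. A sketch; the node names
are examples:

```shell
# Create additional wedge device nodes:
cd /dev
sh MAKEDEV dk25 dk26 dk27
```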

I hope that helps.

Regards,
-- 
Piotr 'aniou' Meyer

