Port-amd64 archive


Re: raidframe on > 2TB disks, revisited



On 06/05/16 11:33, Greg Troxel wrote:
> John Klos <john%ziaspace.com@localhost> writes:

>> It still seems absurdly difficult to set up a mirror with two disks
>> larger than 2 TB.
> Agreed, and it would be great to fix this.

>> Two, some part of newfs seems to be broken or my understanding of it
>> is broken:
>>
>> # newfs -O2 -b 65536 -f 8192 -F -s 7772092288 /dev/rraid0a
>> /dev/rraid0a: 3794967.0MB (7772092288 sectors) block size 65536,
>> fragment size 8192
>>         using 1160 cylinder groups of 3271.56MB, 52345 blks, 103936 inodes.
>> wtfs: write error for sector 7772092287: Invalid argument
> As I understand it, the basic problem is that disklabels can only
> represent 2T.  So you can't use disklabels for big disks at all.
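The 2T ceiling falls out of the on-disk format: disklabel stores partition offsets and sizes as unsigned 32-bit sector counts, so with 512-byte sectors nothing past 2 TiB is representable. A quick arithmetic sketch, using the sector count from the transcript above:

```shell
# disklabel holds sizes as unsigned 32-bit sector counts; with 512-byte
# sectors the largest representable size is therefore 2 TiB
max_sectors=$(( 1 << 32 ))            # 4294967296 sectors
max_bytes=$(( max_sectors * 512 ))    # 2199023255552 bytes = 2 TiB
raid_sectors=7772092288               # the raid set from the transcript
echo "disklabel limit: ${max_bytes} bytes"
echo "sectors past the limit: $(( raid_sectors - max_sectors ))"
```

Which is why the write to sector 7772092287 (the set's last sector, well past the 32-bit boundary) fails.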

>> So, in order to have a bootable, mirrored set of drives larger than 2
>> TB, one has to:
>>
>> 1) Create gpt wedges on both drives for a small ffs filesystem, swap,
>> and a large RAID
> Can raid autoconfig from gpt?   I wonder about two raids, one moderate
> for root/swap and whatever else you want, and one very large, each in
> gpt.  Use disklabel in the small raid and gpt in the large one.
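The two-raid layout might be set up roughly like this on each component disk (a sketch only: the device name wd0, the 64 GiB small-raid size, and the wedge labels are assumptions, not from the thread):

```shell
# on each component disk (wd0 shown; repeat for wd1) -- untested sketch
gpt create wd0
gpt add -s 134217728 -t raid -l raidsmall wd0  # 64 GiB (in 512-byte sectors)
gpt add -t raid -l raidbig wd0                 # rest of disk for the big raid
# then configure the small raid over the raidsmall wedges and the large
# raid over the raidbig wedges with raidctl, put a disklabel inside the
# small raid and a gpt inside the large one
```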

>> newfs -F -s (the number of sectors in raid0) -O2 /dev/rraid0d
> Do you really need to give -s, when I'd expect newfs to figure out the
> size of rraid0d?
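If newfs does size the filesystem from the device itself, the step above might reduce to the following (same flags as the transcript earlier in the thread; an untested sketch):

```shell
# sketch: omit -s and -F, letting newfs take the size from rraid0d
newfs -O2 -b 65536 -f 8192 /dev/rraid0d
```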

>> It seems a bit convoluted. It'd be nice if we could:
>>
>> 1) have RAIDframe autoconfigure something other than raid0a
> yes, but I think the next approach is better and renders this unnecessary
>
>> 2) use GPT wedges in raid0 and have that work with autoconfigure
> this would be great, and it might not be that hard.

>> 3) compile a kernel with a dk wedge set as root
>> or
>> 4) simply boot a kernel in GPT which is in RAIDframe which is in GPT.
> This may not be that hard either (and I think it's entirely separate
> from autoconf of root in gpt).  But it is probably more involved than
> the disklabel method, which I think relies on the inside-raid a
> partition being the one to use and starting at 0 in the raid virtual
> disk.  One would have to skip the raid header and then recursively read
> another gpt, and add back in the offset to the start.
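The sector translation the boot code would need is simple addition: RAIDframe reserves 64 sectors (RF_PROTECTED_SECTORS) at the front of each component for its label, so a sector inside the raid virtual disk maps to the component's start plus that reservation plus the inner sector. A sketch (the partition start of 2048 is a made-up example, not from the thread):

```shell
# map a sector inside the raid virtual disk to a real-disk sector
RF_PROTECTED_SECTORS=64   # RAIDframe component label reservation
component_start=2048      # assumed GPT wedge start on the physical disk
inner=34                  # sector wanted inside the raid (e.g. inner GPT)
outer=$(( component_start + RF_PROTECTED_SECTORS + inner ))
echo "read real-disk sector ${outer}"   # 2048 + 64 + 34 = 2146
```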


> Another approach for users, while not really reasonable to recommend
> because of these issues, is to have a RAID pair of moderate-sized SSDs
> (eg. 256G) for root/swap/var/usr and then a pair of 4T for /home.
>
> Another observation is that it would be really nice if ZFS was up to
> date and worked.


.... or LVM, no ?

-- 

	William A. Mahaffey III

 ----------------------------------------------------------------------

	"The M1 Garand is without doubt the finest implement of war
	 ever devised by man."
                           -- Gen. George S. Patton Jr.

