Subject: Large disk support in NetBSD, is it hard to do and is anyone working on it?
To: None <current-users@netbsd.org>
From: Brian Buhrow <buhrow@lothlorien.nfbcal.org>
List: current-users
Date: 04/23/2006 20:35:58
	Hello folks.  I've just spent the day building a new RAID array made
up of seven 500GB disks.  It's a RAID 5 array, so I'm proposing to build
something close to a 3TB logical disk.  After configuring all the disks,
initializing the array, and calculating parity on it, I discovered that the
NetBSD disklabel can only count up to 2^32 sectors on any given disk.
Looking at the current sources on cvsweb, it appears that -current still
has this limitation.
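	If I'm reading <sys/disklabel.h> right, the sector counts in the
label (d_secperunit and the per-partition p_size) are u_int32_t fields,
which is where the ceiling comes from.  Here's a quick back-of-the-envelope
sketch in C, just to show the arithmetic; the numbers come from the script
below, and nothing here is meant as a fix:

/*
 * Back-of-the-envelope for the 2^32-sector ceiling in the disklabel.
 * Assumes 512-byte sectors and 32-bit sector fields, which is what I
 * believe <sys/disklabel.h> uses for d_secperunit and p_size.
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	uint64_t max_sectors = (uint64_t)1 << 32;
	uint64_t max_bytes = max_sectors * 512;

	printf("label ceiling: %llu bytes (about 2.2TB)\n",
	    (unsigned long long)max_bytes);

	/* Six data columns of wd1-sized components -- the same product
	 * the dc run at the end of the script shows. */
	uint64_t array_sectors = 976773105ULL * 6;
	printf("array needs:   %llu sectors (overflows 32 bits: %s)\n",
	    (unsigned long long)array_sectors,
	    array_sectors > UINT32_MAX ? "yes" : "no");
	return 0;
}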
	Does anyone have suggested workarounds for creating logical disks
larger than 2TiB (2^32 512-byte sectors, about 2.2TB)?  Also, should I file
a bug on this one, or is it a well enough known problem that it will get
taken care of sooner or later?

	I'm happy to test fixes if folks want to try them out, but I'm not
sure I'm comfortable proposing a mechanism for supporting a new disk layout
myself.
	Any thoughts or suggestions on what might be needed to fix things,
or on how to start cobbling patches together to solve this problem once and
for all, would be greatly appreciated.
-thanks
-Brian

Script started on Sun Apr 23 19:06:23 2006

fserv1# disklabel wd1
# /dev/rwd1d:
type: unknown
disk: wd5000ks
label: 
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 969021
total sectors: 976773168
rpm: 10000
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0           # microseconds
track-to-track seek: 0  # microseconds
drivedata: 0 

4 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a: 976773105        63       RAID                     # (Cyl.      0*- 969020)
 c: 976773105        63     unused      0     0        # (Cyl.      0*- 969020)
 d: 976773168         0     unused      0     0        # (Cyl.      0 - 969020)

fserv1# cat /etc/raid0.conf
#Raid Configuration File 
#Brian Buhrow
#Describe the size of the array, including spares
START array
#numrow numcol numspare
1 7 1

#Disk section
START disks
/dev/wd1a
/dev/wd2a
/dev/wd3a
/dev/wd4a
/dev/wd5a
/dev/wd6a
/dev/wd7a

#Layout section.  We'll use 64 sectors per stripe unit, 1 stripe unit per
#parity unit, 1 stripe unit per reconstruction unit, and RAID level 5.
#(The byte arithmetic is sketched just after this file.)
START layout
#SectperSu SusperParityUnit SusperReconUnit Raid_level
64 1 1 5

#Fifo section.  We'll use 100 outstanding requests as a start.
START queue
fifo 100

#Spare section
#One hot spare.
START spare
/dev/wd8a
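
	The layout line in that file works out to a 32KB stripe unit and
192KB of data per full stripe across the six data columns.  Here's the same
arithmetic spelled out in C, assuming 512-byte sectors and that one of the
seven columns' worth of space goes to parity:

/*
 * What the layout line in /etc/raid0.conf works out to, assuming
 * 512-byte sectors and one of the seven columns' worth of parity.
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	uint64_t sec_size = 512;
	uint64_t sec_per_su = 64;	/* SectPerSU from the config */
	uint64_t columns = 7;		/* numcol from the config */
	uint64_t data_cols = columns - 1;

	printf("stripe unit:     %llu KB\n",
	    (unsigned long long)(sec_per_su * sec_size / 1024));
	printf("data per stripe: %llu KB\n",
	    (unsigned long long)(data_cols * sec_per_su * sec_size / 1024));

	/* Roughly six components' worth of usable space, less what
	 * RAIDframe reserves for its component labels. */
	printf("approx usable:   %.2f TB\n",
	    (double)(data_cols * 976773105ULL) * sec_size / 1e12);
	return 0;
}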

fserv1# dmesg |tail
raid0: There were fatal errors
raid0: Fatal errors being ignored.
raid0: RAID Level 5
raid0: Components: /dev/wd1a /dev/wd2a /dev/wd3a /dev/wd4a /dev/wd5a /dev/wd6a /dev/wd7a
raid0: Total Sectors: 1565670656 (2861639 MB)
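
	Note that dmesg line: the MB figure looks about right for the array,
but the sector count doesn't.  The two are consistent with the sector count
having been truncated to 32 bits while the MB value was computed from the
full 64-bit total (a guess on my part, I haven't chased it through the
RAIDframe code):

/*
 * Sanity check on the dmesg line: treat the printed sector count as a
 * 64-bit total that lost one 2^32 wrap and see if the MB figure agrees.
 * (A guess about the output, not about RAIDframe's internals.)
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	uint32_t printed_sectors = 1565670656U;	/* from dmesg */
	uint32_t printed_mb = 2861639U;		/* from dmesg */

	uint64_t real_sectors = (uint64_t)printed_sectors + ((uint64_t)1 << 32);
	uint64_t real_mb = real_sectors / 2048;	/* 512-byte sectors -> MB */

	printf("reconstructed: %llu sectors = %llu MB (dmesg said %u MB)\n",
	    (unsigned long long)real_sectors,
	    (unsigned long long)real_mb, printed_mb);
	return 0;
}

	The fictitious raid0 label below picks up that same wrapped sector
count.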
fserv1# disklabel raid0
# /dev/rraid0d:
type: RAID
disk: raid
label: fictitious
flags:
bytes/sector: 512
sectors/track: 384
tracks/cylinder: 28
sectors/cylinder: 10752
cylinders: 545074
total sectors: 1565670656
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0		# microseconds
track-to-track seek: 0	# microseconds
drivedata: 0 

4 partitions:
#        size    offset     fstype [fsize bsize cpg/sgs]
 a: 1565670656         0     4.2BSD   4096 16384 21024  # (Cyl.      0 - 145616*)
 d: 1565670656         0     unused      0     0        # (Cyl.      0 - 145616*)

fserv1# dc
976773105
6
*
p
5860638630
q

fserv1# exit
Script done on Sun Apr 23 19:07:16 2006