Current-Users archive


Re: RAID - raidframe vs hardware/controller




        As to the complexity of raidframe(4), the raidctl(8) man page is long,
but provides a pretty complete description of how to set things up.  In
general, the steps are:

1.  Disklabel your disks as you normally would, but instead of labeling a
partition as a 4.2BSD partition, label it as a raid partition.

2.  Write yourself a raidx.conf file, as documented in raidctl(8), and put
it in /etc on your mini-root, or on the root disk of the system on which you
want to build a raid.

3.  Run raidctl -C <path to file you just created in step 2> raidx, where x
is the number of the raid set you want to configure, e.g. raid0 or raid1.

4.  Run raidctl -I <some serial number; I usually use today's date> raidx

5.  Run raidctl -i raidx
        This initializes parity, does the mirroring, etc.

6.  Disklabel your newly created raid disk.

7.  If you want the raid set auto-configured at boot, run: 
raidctl -A yes raidx

8.  If you want the raid set to be root, run:
raidctl -A root raidx

9.  Run installboot on both components of your raid1 set, i.e.

installboot -v /dev/rwd0a bootxx_ffsv1 /mnt/boot
installboot -v /dev/rwd1a bootxx_ffsv1 /mnt/boot
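
For concreteness, here's the whole procedure condensed into one example
session for a two-disk mirror.  The device names (wd0/wd1), the raid0 name,
and the serial number are just illustrative; substitute your own:

disklabel -e wd0                   # step 1: change the partition type to RAID
disklabel -e wd1
vi /etc/raid0.conf                 # step 2: see the sample file below
raidctl -C /etc/raid0.conf raid0   # step 3: configure the set
raidctl -I 20110418 raid0          # step 4: any serial number; I use the date
raidctl -i raid0                   # step 5: initialize parity/mirroring
disklabel -e raid0                 # step 6: label the new raid disk
raidctl -A root raid0              # steps 7-8: -A root also turns on
                                   # autoconfiguration
installboot -v /dev/rwd0a bootxx_ffsv1 /mnt/boot   # step 9
installboot -v /dev/rwd1a bootxx_ffsv1 /mnt/boot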

Here's a copy of the /etc/raid0.conf file I use on my raid1 systems around
here.
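
If you want to sanity-check the finished set (again assuming the raid0 name
from the example above), raidctl(8) can report on it directly:

raidctl -s raid0          # show component and parity status
raidctl -p raid0          # check whether the parity is clean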

Hope this helps.
-Brian


#Raid Configuration File for woody.zocalo.net (06/28/2001)
#Raid for root partition.
#Brian Buhrow
#Describe the size of the array, including spares
START array
#numrow numcol numspare
1 2 0

#Disk section
START disks
/dev/wd0a
/dev/wd1a

#Layout section.  We'll use 64 sectors per stripe unit, 1 stripe unit per
#parity unit, 1 stripe unit per reconstruction unit, and raid level 1.
START layout
#SectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
64 1 1 1

#Fifo section.  We'll use 100 outstanding requests as a start.
START queue
fifo 100

#spare section
#We have no spares in this raid.
#START spare

On Apr 18, 11:45am, Paul Goyette wrote:
} Subject: Re: RAID - raidframe vs hardware/controller
} Well, I was talking about relatively modern southbridge chips, as found 
} on many motherboards.  BIOS settings exist for three operating modes:
} 
}       IDE (compatibility?)
}       AHCI
}       RAID
} 
} I just sort of assumed that setting them to RAID would do all the work 
} and wouldn't require any additional driver support in NetBSD.  Maybe I'm 
} being naive?
} 
} My biggest fear with raidframe(4) is that it has taken me about 10 years 
} to get almost comfortable with simple fdisk/disklabel stuff.  I've seen 
} many threads over the years about having additional considerations for 
} the raidframe component label.  But I don't think I've ever seen a 
} single how-to cookbook.
} 
} It would be great if there was a simple step-by-step procedure written 
} down somewhere.  (If there is one, I haven't found it.)
} 
} Oh, to answer an earlier question, I'm looking only at RAID-1 (mirror) 
} for now.  I don't think I have any plans to move on to RAID-5 or -10.
} 
} 
} On Mon, 18 Apr 2011, Brian Buhrow wrote:
} 
} >     hello.  If what you mean by hardware raid is the pseudo ataraid
} > driver as recognized by many Promise and other ATA and SATA cards, then the
} > differences between hardware and software raid are minimal in terms of
} > performance, since ataraid(4) is merely a software raid1 driver which
} > honors the labeling conventions of these ATA and SATA cards.
} >     In terms of raid(4) versus the hardware raid management on cards like
} > the 3ware cards and the like, the answers to your questions, are, of
} > course, it depends.
} >     In general, I prefer using raid(4) over hardware raid controllers
} > because while it's possible that performance may not be as good, I've found
} > it to be more robust in the face of marginal disks, as well as easier to
} > manage while the system is still running if things do go wrong.  If the
} > hardware raid card you're thinking of using works well with the bioctl(8)
} > utility, then I think managing disks in errored states and the like will be
} > comparable with raidctl(8), but if not, you'll have to bring the system
} > down whenever you want to do maintenance work and work from the card's
} > BIOS utilities.  In my view, that negates one of the major advantages of
} > raid systems.
} >     Just my 2 cents.
} >     Just thoughts from a happy raid(4) user who's used it since
} > NetBSD-1.6.
} 
} -------------------------------------------------------------------------
} | Paul Goyette     | PGP Key fingerprint:     | E-mail addresses:       |
} | Customer Service | FA29 0E3B 35AF E8AE 6651 | paul at whooppee.com    |
} | Network Engineer | 0786 F758 55DE 53BA 7731 | pgoyette at juniper.net |
} | Kernel Developer |                          | pgoyette at netbsd.org  |
} -------------------------------------------------------------------------
>-- End of excerpt from Paul Goyette



