Source-Changes-HG archive


[src/trunk]: src/sbin/raidctl * add -G, which lists the configuration of the ...



details:   https://anonhg.NetBSD.org/src/rev/c6e66cbef46c
branches:  trunk
changeset: 512568:c6e66cbef46c
user:      lukem <lukem%NetBSD.org@localhost>
date:      Tue Jul 10 01:30:52 2001 +0000

description:
* add -G, which lists the configuration of the given RAID set in the
  same configuration format that -c and -C use.
  This is useful if you're using autoconfig and you've misplaced the
  /etc/raidXXX.conf files.
* "filesystem" -> "file system", and other man page cleanups.
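
[Editor's note: the output of -G is a configuration file in the format
accepted by -c and -C, so it can be saved and reused directly, e.g.
"raidctl -G raid0 > /etc/raid0.conf". A minimal sketch of that format,
for a hypothetical two-component RAID 1 set on wd0e and wd1e (device
names and layout numbers are illustrative, not from this commit):

```
START array
# numRow numCol numSpare
1 2 0

START disks
/dev/wd0e
/dev/wd1e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100
```

See the "Configuration file" section the patch adds to raidctl.8 for the
full treatment of this format.]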

diffstat:

 sbin/raidctl/raidctl.8 |  75 ++++++++++++++++++++++++++----------------
 sbin/raidctl/raidctl.c |  86 ++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 129 insertions(+), 32 deletions(-)

diffs (truncated from 390 to 300 lines):

diff -r 604fe3f65cb2 -r c6e66cbef46c sbin/raidctl/raidctl.8
--- a/sbin/raidctl/raidctl.8    Tue Jul 10 00:52:29 2001 +0000
+++ b/sbin/raidctl/raidctl.8    Tue Jul 10 01:30:52 2001 +0000
@@ -1,4 +1,4 @@
-.\"     $NetBSD: raidctl.8,v 1.23 2001/06/05 11:22:52 wiz Exp $
+.\"     $NetBSD: raidctl.8,v 1.24 2001/07/10 01:30:52 lukem Exp $
 .\"
 .\" Copyright (c) 1998 The NetBSD Foundation, Inc.
 .\" All rights reserved.
@@ -60,7 +60,7 @@
 .\" any improvements or extensions that they make and grant Carnegie the
 .\" rights to redistribute these changes.
 .\" 
-.Dd November 6, 1998
+.Dd July 10, 2001
 .Dt RAIDCTL 8
 .Os
 .Sh NAME
@@ -94,6 +94,9 @@
 .Fl g Ar component Ar dev
 .Nm ""
 .Op Fl v 
+.Fl G Ar dev 
+.Nm ""
+.Op Fl v 
 .Fl i Ar dev
 .Nm ""
 .Op Fl v 
@@ -145,7 +148,7 @@
 Make the RAID set auto-configurable.  The RAID set will be
 automatically configured at boot 
 .Ar before
-the root filesystem is
+the root file system is
 mounted.  Note that all components of the set must be of type RAID in the
 disklabel.
 .It Fl A Ic no Ar dev
@@ -188,6 +191,13 @@
 the reconstruction process if a component does have a hardware failure.
 .It Fl g Ar component Ar dev
 Get the component label for the specified component.
+.It Fl G Ar dev
+Generate the configuration of the RAIDframe device in a format suitable for
+use with
+.Nm
+.Fl c
+or
+.Fl C .
 .It Fl i Ar dev
 Initialize the RAID device.  In particular, (re-write) the parity on
 the selected device.  This 
@@ -195,7 +205,7 @@
 be done for 
 .Ar all 
 RAID sets before the RAID device is labeled and before
-filesystems are created on the RAID device.
+file systems are created on the RAID device.
 .It Fl I Ar serial_number Ar dev
 Initialize the component labels on each component of the device.  
 .Ar serial_number 
@@ -248,6 +258,7 @@
 for the i386 architecture, and /dev/rraid0c
 for all others, or just simply raid0 (for /dev/rraid0d).
 .Pp
+.Ss Configuration file
 The format of the configuration file is complex, and
 only an abbreviated treatment is given here.  In the configuration
 files, a 
@@ -394,7 +405,7 @@
 .Sh EXAMPLES
 
 It is highly recommended that before using the RAID driver for real
-filesystems that the system administrator(s) become quite familiar
+file systems that the system administrator(s) become quite familiar
 with the use of
 .Nm "" ,
 and that they understand how the component reconstruction process
@@ -622,7 +633,7 @@
 .Xr newfs 8 ,
 or
 .Xr fsck 8
-on the device or its filesystems, and then to mount the filesystems
+on the device or its file systems, and then to mount the file systems
 for use.
 .Pp
 Under certain circumstances (e.g. the additional component has not
@@ -680,7 +691,7 @@
 is used.  Note that re-writing the parity can be done while
 other operations on the RAID set are taking place (e.g. while doing a
 .Xr fsck 8
-on a filesystem on the RAID set).  However: for maximum effectiveness
+on a file system on the RAID set).  However: for maximum effectiveness
 of the RAID set, the parity should be known to be correct before any
 data on the set is modified.
 .Pp
@@ -734,7 +745,7 @@
 and the 
 .Sq Parity status
 line which indicates that the parity is up-to-date.  Note that if
-there are filesystems open on the RAID set, the individual components
+there are file systems open on the RAID set, the individual components
 will not be 
 .Sq clean
 but the set as a whole can still be clean.
@@ -995,19 +1006,22 @@
 .Ed
 .Pp
 RAID sets which are auto-configurable will be configured before the
-root filesystem is mounted.  These RAID sets are thus available for
-use as a root filesystem, or for any other filesystem.  A primary
+root file system is mounted.  These RAID sets are thus available for
+use as a root file system, or for any other file system.  A primary
 advantage of using the auto-configuration is that RAID components
 become more independent of the disks they reside on.  For example,
 SCSI ID's can change, but auto-configured sets will always be
 configured correctly, even if the SCSI ID's of the component disks
 have become scrambled.
 .Pp
-Having a system's root filesystem (/) on a RAID set is also allowed,
+Having a system's root file system
+.Pq Pa /
+on a RAID set is also allowed,
 with the 
 .Sq a
-partition of such a RAID set being used for /.
-To use raid0a as the root filesystem, simply use:
+partition of such a RAID set being used for
+.Pa / .
+To use raid0a as the root file system, simply use:
 .Bd -unfilled -offset indent
 raidctl -A root raid0
 .Ed
@@ -1019,9 +1033,9 @@
 Note that kernels can only be directly read from RAID 1 components on
 alpha and pmax architectures.  On those architectures, the 
 .Dv FS_RAID
-filesystem is recognized by the bootblocks, and will properly load the
+file system is recognized by the bootblocks, and will properly load the
 kernel directly from a RAID 1 component.  For other architectures, or
-to support the root filesystem on other RAID sets, some other
+to support the root file system on other RAID sets, some other
 mechanism must be used to get a kernel booting.  For example, a small
 partition containing only the secondary boot-blocks and an alternate
 kernel (or two) could be used.  Once a kernel is booting however, and
@@ -1039,29 +1053,32 @@
 .It
 wd1a - also contains a complete, bootable, basic NetBSD installation.
 .It 
-wd0e and wd1e - a RAID 1 set, raid0, used for the root filesystem.
+wd0e and wd1e - a RAID 1 set, raid0, used for the root file system.
 .It
 wd0f and wd1f - a RAID 1 set, raid1, which will be used only for
 swap space. 
 .It
-wd0g and wd1g - a RAID 1 set, raid2, used for /usr, /home, or other
-data, if desired.
+wd0g and wd1g - a RAID 1 set, raid2, used for
+.Pa /usr ,
+.Pa /home ,
+or other data, if desired.
 .It 
 wd0h and wd0h - a RAID 1 set, raid3, if desired.
 .El
 .Pp
 RAID sets raid0, raid1, and raid2 are all marked as
-auto-configurable.  raid0 is marked as being a root filesystem.
-When new kernels are installed, the kernel is not only copied to /, 
+auto-configurable.  raid0 is marked as being a root file system.
+When new kernels are installed, the kernel is not only copied to
+.Pa / , 
 but also to wd0a and wd1a.  The kernel on wd0a is required, since that
 is the kernel the system boots from.  The kernel on wd1a is also
 required, since that will be the kernel used should wd0 fail.  The
 important point here is to have redundant copies of the kernel
 available, in the event that one of the drives fail.
 .Pp
-There is no requirement that the root filesystem be on the same disk
+There is no requirement that the root file system be on the same disk
 as the kernel.  For example, obtaining the kernel from wd0a, and using
-sd0e and sd1e for raid0, and the root filesystem, is fine.  It 
+sd0e and sd1e for raid0, and the root file system, is fine.  It 
 .Ar is
 critical, however, that there be multiple kernels available, in the
 event of media failure.
@@ -1110,7 +1127,7 @@
 .It
 IO bandwidth
 .It
-Filesystem access patterns
+file system access patterns
 .It 
 CPU speed
 .El
@@ -1155,7 +1172,7 @@
 sizes are small enough that a
 .Sq large IO
 from the system will use exactly one large stripe write. As is seen
-later, there are some filesystem dependencies which may come into play
+later, there are some file system dependencies which may come into play
 here as well.
 .Pp
 Since the size of a 
@@ -1167,13 +1184,13 @@
 empirical measurement will provide the best indicators of which
 values will yeild better performance.
 .Pp
-The parameters used for the filesystem are also critical to good
+The parameters used for the file system are also critical to good
 performance.  For 
 .Xr newfs 8 , 
 for example, increasing the block size to 32K or 64K may improve
 performance dramatically.  As well, changing the cylinders-per-group
 parameter from 16 to 32 or higher is often not only necessary for
-larger filesystems, but may also have positive performance
+larger file systems, but may also have positive performance
 implications.
 .Pp
 .Ss Summary
@@ -1225,13 +1242,13 @@
 .Ed
 .Pp
 .It 
-Create the filesystem: 
+Create the file system: 
 .Bd -unfilled -offset indent
 newfs /dev/rraid0e 
 .Ed
 .Pp
 .It
-Mount the filesystem: 
+Mount the file system: 
 .Bd -unfilled -offset indent
 mount /dev/raid0e /mnt
 .Ed
@@ -1251,7 +1268,7 @@
 Certain RAID levels (1, 4, 5, 6, and others) can protect against some
 data loss due to component failure.  However the loss of two
 components of a RAID 4 or 5 system, or the loss of a single component
-of a RAID 0 system will result in the entire filesystem being lost.
+of a RAID 0 system will result in the entire file system being lost.
 RAID is 
 .Ar NOT
 a substitute for good backup practices.
diff -r 604fe3f65cb2 -r c6e66cbef46c sbin/raidctl/raidctl.c
--- a/sbin/raidctl/raidctl.c    Tue Jul 10 00:52:29 2001 +0000
+++ b/sbin/raidctl/raidctl.c    Tue Jul 10 01:30:52 2001 +0000
@@ -1,4 +1,4 @@
-/*      $NetBSD: raidctl.c,v 1.26 2001/02/19 22:56:22 cgd Exp $   */
+/*      $NetBSD: raidctl.c,v 1.27 2001/07/10 01:30:52 lukem Exp $   */
 
 /*-
  * Copyright (c) 1996, 1997, 1998 The NetBSD Foundation, Inc.
@@ -66,6 +66,7 @@
 static  void rf_configure __P((int, char*, int));
 static  const char *device_status __P((RF_DiskStatus_t));
 static  void rf_get_device_status __P((int));
+static void rf_output_configuration __P((int, const char *));
 static  void get_component_number __P((int, char *, int *, int *));
 static  void rf_fail_disk __P((int, char *, int));
 static  void usage __P((void));
@@ -97,6 +98,7 @@
        char name[PATH_MAX];
        char component[PATH_MAX];
        char autoconf[10];
+       int do_output;
        int do_recon;
        int do_rewrite;
        int is_clean;
@@ -108,12 +110,13 @@
 
        num_options = 0;
        action = 0;
+       do_output = 0;
        do_recon = 0;
        do_rewrite = 0;
        is_clean = 0;
        force = 0;
 
-       while ((ch = getopt(argc, argv, "a:A:Bc:C:f:F:g:iI:l:r:R:sSpPuv")) 
+       while ((ch = getopt(argc, argv, "a:A:Bc:C:f:F:g:GiI:l:r:R:sSpPuv")) 
               != -1)
                switch(ch) {
                case 'a':
@@ -159,6 +162,11 @@
                        strncpy(component, optarg, PATH_MAX);
                        num_options++;
                        break;
+               case 'G':
+                       action = RAIDFRAME_GET_INFO;
+                       do_output = 1;
+                       num_options++;
+                       break;
                case 'i':
                        action = RAIDFRAME_REWRITEPARITY;
                        num_options++;
@@ -287,7 +295,10 @@
                check_status(fd,1);
                break;
        case RAIDFRAME_GET_INFO:
-               rf_get_device_status(fd);
