NetBSD-Bugs archive


bin/44947: Multiple issues with LVM, lvremove



>Number:         44947
>Category:       bin
>Synopsis:       Multiple issues with LVM, lvremove
>Confidential:   no
>Severity:       serious
>Priority:       high
>Responsible:    bin-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Tue May 10 04:45:00 +0000 2011
>Originator:     Ben C.
>Release:        5.99.51 Current
>Organization:
>Environment:
NetBSD  5.99.51 NetBSD 5.99.51 (XEN3_DOM0-LVM) #2: Mon May  9 13:14:11 CDT 2011 
 root@:/usr/mybuild/obj/sys/arch/amd64/compile/XEN3_DOM0-LVM amd64

The kernel is the XEN3_DOM0 kernel from src/sys/arch/amd64/conf/XEN3_DOM0, except 
that Xen memory ballooning is enabled and debugging/diagnostics were turned off.
>Description:
This issue exists in -current as of today, as well as in the code base I was 
using, which was about a month old.

There is some kind of issue with lvm lvrename: the old device names are not 
removed properly.

The main issue I've found is that lvremove seems to be completely broken.  At 
least two other reports on the current-users mailing list seem to confirm this:

http://mail-index.netbsd.org/current-users/2011/04/15/msg016368.html
http://mail-index.netbsd.org/current-users/2011/04/21/msg016467.html

One of the reports above also involves RAIDframe, which is why I included my 
RAID details in the How-To-Repeat section below.  I do not know whether 
RAIDframe is related.

Adam H. stated on May 8th that he was informed there might be some problem with 
lvm + libdm, and that he needed debug output.

Unfortunately, I do not know what kind of debugging output he means, nor how to 
obtain it without being told exactly which commands to run; otherwise I would 
have included such details.

Any attention to this would be appreciated.


>How-To-Repeat:

# raidctl -s /dev/rraid0
Components:
           /dev/wd0a: optimal
           /dev/wd1a: optimal
No spares.
Component label for /dev/wd0a:
   Row: 0, Column: 0, Num Rows: 1, Num Columns: 2
   Version: 2, Serial Number: 7, Mod Counter: 190
   Clean: No, Status: 0
   sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
   Queue size: 100, blocksize: 512, numBlocks: 976770944
   RAID Level: 1
   Autoconfig: Yes
   Root partition: Yes
   Last configured as: raid0
Component label for /dev/wd1a:
   Row: 0, Column: 1, Num Rows: 1, Num Columns: 2
   Version: 2, Serial Number: 7, Mod Counter: 190
   Clean: No, Status: 0
   sectPerSU: 128, SUsPerPU: 1, SUsPerRU: 1
   Queue size: 100, blocksize: 512, numBlocks: 976770944
   RAID Level: 1
   Autoconfig: Yes
   Root partition: Yes
   Last configured as: raid0
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
# lvm pvadd /dev/rraid0i
  No such command.  Try 'help'.
# lvm pvcreate /dev/rraid0i
  Physical volume "/dev/rraid0i" successfully created
# lvm vgcreate xenpool /dev/rraid0i
  /dev/xenpool: already exists in filesystem
  New volume group name "xenpool" is invalid
  Run `vgcreate --help' for more information.
# lvm vgs
# rm -rf /dev/xen
/dev/xencons /dev/xenevt  /dev/xenpool
# rm -rf /dev/xenpool/
/dev/xenpool/rscratch /dev/xenpool/scratch 
# rm -rf /dev/xenpool/
/dev/xenpool/rscratch /dev/xenpool/scratch 
# rm -rf /dev/xenpool/
/dev/xenpool/rscratch /dev/xenpool/scratch 
# rm -rf /dev/xenpool/
# lvm vgs
# lvm vgcreate xenpool /dev/rraid0i
  Volume group "xenpool" successfully created
# lvm lvcreate --name test --size 1M xenpool
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "test" created
# lvm lvremove xenpool/test
Do you really want to remove active logical volume test? [y/n]: y
  Unable to deactivate logical volume "test"
# lvm lvcreate --name test --size 100M xenpool
  Logical volume "test" already exists in volume group "xenpool"
# lvm lvcreate --name test2 --size 100M xenpool
  Logical volume "test2" created
# lvm lvremove xenpool/test2
Do you really want to remove active logical volume test2? [y/n]: y
  Unable to deactivate logical volume "test2"
# lvm lvremove xenpool/test2 
# lvm lvremove -h
  lvremove: Remove logical volume(s) from the system

lvremove
        [-A|--autobackup y|n]
        [-d|--debug]
        [-f|--force]
        [-h|--help]
        [--noudevsync]
        [-t|--test]
        [-v|--verbose]
        [--version]
        LogicalVolume[Path] [LogicalVolume[Path]...]

# lvm lvremove -dvf xenpool/test2
    Using logical volume(s) on command line
    Archiving volume group "xenpool" metadata (seqno 3).
    Found volume group "xenpool"
  Unable to deactivate logical volume "test2"
# lvm lvremove -dvft xenpool/test2
  Test mode: Metadata will NOT be updated.
    Using logical volume(s) on command line
    Test mode: Skipping archiving of volume group.
    Found volume group "xenpool"
    Found volume group "xenpool"
    Releasing logical volume "test2"
    Test mode: Skipping volume group backup.
  Logical volume "test2" successfully removed
    Test mode: Wiping internal cache
    Wiping internal VG cache
# lvm lvs
  LV    VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  test  xenpool -wi-a-   4.00m                                      
  test2 xenpool -wi-a- 100.00m                                      
# lvm lvremove -vdf xenpool/test
    Using logical volume(s) on command line
    Archiving volume group "xenpool" metadata (seqno 3).
    Found volume group "xenpool"
  Unable to deactivate logical volume "test"



# lvm lvrename xenpool/test xenpool/new_test
  Renamed "test" to "new_test" in volume group "xenpool"
# lvm lvs
  LV       VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  new_test xenpool -wi-a-   4.00m                                      
  test2    xenpool -wi-a- 100.00m    
# ls -la /dev/mapper/
total 93
drwxr-xr-x  2 root  wheel        512 May  9 17:28 .
drwxr-xr-x  6 root  wheel      93696 May  9 17:15 ..
crw-rw----  1 root  operator  194, 0 Mar 25 06:02 control
crw-r-----  1 root  operator  194, 0 May  9 17:28 rxenpool-new_test
crw-r-----  1 root  operator  194, 1 May  9 17:14 rxenpool-test
crw-r-----  1 root  operator  194, 2 May  9 17:15 rxenpool-test2
brw-r-----  1 root  operator    0, 0 May  9 17:28 xenpool-new_test
brw-r-----  1 root  operator  169, 1 May  9 17:14 xenpool-test
brw-r-----  1 root  operator  169, 2 May  9 17:15 xenpool-test2
# ls -la /dev/xenpool/
ls: /dev/xenpool/: No such file or directory

>Fix:
The only fix I have found is to edit /etc/rc.conf and set lvm=NO, reboot, run 
dd if=/dev/zero of=/dev/rraid0i bs=32m count=1, set lvm=YES again, start 
/etc/rc.d/lvm, and make sure to remove the left-behind device nodes in /dev/.
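The workaround above, spelled out as a command sequence.  This is only a sketch 
of the steps described in this report: it assumes the physical volume is 
/dev/rraid0i and the volume group is xenpool, as in the transcript, and the dd 
step destroys all LVM metadata and data on that volume.

```shell
# 1. Disable LVM at boot, then reboot so that no logical volumes are active.
vi /etc/rc.conf            # change lvm=YES to lvm=NO
shutdown -r now

# 2. After the reboot, zero out the start of the physical volume.
#    WARNING: this destroys the LVM metadata (and data) on /dev/rraid0i.
dd if=/dev/zero of=/dev/rraid0i bs=32m count=1

# 3. Re-enable LVM and start it again.
vi /etc/rc.conf            # change lvm=NO back to lvm=YES
/etc/rc.d/lvm start

# 4. Remove the device nodes that lvremove left behind
#    (names taken from the transcript above).
rm -rf /dev/xenpool
rm -f /dev/mapper/xenpool-* /dev/mapper/rxenpool-*
```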


