NetBSD-Bugs archive


bin/51116: resize_ffs has problems with non-zero filled expansion of an ffsv2 filesystem



>Number:         51116
>Category:       bin
>Synopsis:       resize_ffs has problems with non-zero filled expansion of an ffsv2 filesystem
>Confidential:   no
>Severity:       serious
>Priority:       high
>Responsible:    bin-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Thu May 05 20:25:00 +0000 2016
>Originator:     Brad Spencer
>Release:        NetBSD 7.0_STABLE
>Organization:
Home
>Environment:
System: NetBSD anduin.eldar.org 7.0_STABLE NetBSD 7.0_STABLE (ANDUIN) #0: Thu Feb 25 12:38:29 EST 2016 brad%gimli.nat.eldar.org@localhost:/usr/src/sys/arch/amd64/compile/ANDUIN amd64
Architecture: x86_64
Machine: amd64
>Description:

I had noticed something messy at times when I would expand the
underlying LVM size of a filesystem for a Xen guest VM.  After
lvextending the volume, presenting it to the VM, and running
resize_ffs in the guest, the filesystem would show inconsistencies,
sometimes quite severe.  I initially attributed this to having WAPBL
enabled on the filesystem before expanding it, but I have found a
test case that illustrates the problem without WAPBL enabled.

I narrowed the issue down to FFSv2 filesystems that are expanded in a
non-zero-filled manner.

>How-To-Repeat:

Sorry for the length of this; I wanted to provide a couple of test cases that illustrate the issue.

First, create some images that will be used for the test: two random
blobs, plus a zero-filled blob to show that the problem does not
occur in that case.

% dd if=/dev/urandom of=random.fs bs=1048576 count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 0.300 secs (34952533 bytes/sec)
% dd if=/dev/urandom of=morerandom.fs bs=1048576 count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 0.263 secs (39869809 bytes/sec)
% dd if=/dev/zero of=zero.fs bs=1048576 count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 0.040 secs (262144000 bytes/sec)

Second, make an FFSv2 filesystem 10MB in size and resize it to 20MB:

# cp random.fs test_ffsv2.fs   
# vnconfig vnd0 test_ffsv2.fs
# disklabel -e vnd0
# disklabel vnd0 | grep '^ a'
 a:     20480         0     4.2BSD   4096 32768    16  # (Cyl.      0 -      9)
# newfs -O2 vnd0a
/dev/rvnd0a: 10.0MB (20480 sectors) block size 4096, fragment size 512
        using 4 cylinder groups of 2.50MB, 640 blks, 1136 inodes.
super-block backups (for fsck_ffs -b #) at:
144, 5264, 10384, 15504,
# dumpfs -s vnd0a
file system: /dev/rvnd0a
format  FFSv2
endian  little-endian
location 65536  (-b 128)
magic   19540119        time    Thu May  5 15:46:37 2016
superblock location     65536   id      [ 572ba31d 72225649 ]
cylgrp  dynamic inodes  FFSv2   sblock  FFSv2   fslevel 5
nbfree  2244    ndir    1       nifree  4541    nffree  14
.
.
.

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 17966 free (14 frags, 2244 blocks, 0.1% fragmentation)

Expand it by concatenating another 10MB blob:

# vnconfig -u vnd0
# cat morerandom.fs >> test_ffsv2.fs
# vnconfig vnd0 test_ffsv2.fs 
# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 17966 free (14 frags, 2244 blocks, 0.1% fragmentation)

# disklabel -e vnd0
# disklabel vnd0 | grep '^ a'
 a:     40960         0     4.2BSD   4096 32768    16  # (Cyl.      0 -     19)

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 17966 free (14 frags, 2244 blocks, 0.1% fragmentation)

# resize_ffs /dev/rvnd0a
It's required to manually run fsck on file system before you can resize it

 Did you run fsck on your disk (Yes/No) ? Yes


The "damage" that fsck reports will vary depending on the random
bytes present in the newly expanded space.  Sometimes files will be
put in lost+found that are hard to remove, as they can have all
sorts of flags set, including schg.  Further, in one instance it
required another fsck after the files were unlinked to completely
clean the filesystem.

Note that fsck said that the file system was clean, when it obviously
wasn't.

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
UNKNOWN FILE TYPE I=4544
CLEAR? yes

UNKNOWN FILE TYPE I=4545
CLEAR? yes

.
.
.

UNKNOWN FILE TYPE I=7981
CLEAR? yes

UNKNOWN FILE TYPE I=7982
CLEAR? yes

PARTIALLY ALLOCATED INODE I=7983
CLEAR? yes

** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 36078 free (14 frags, 4508 blocks, 0.0% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****

The file system is clean now, and probably OK.  I am not aware of
losing any data when this happens.

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 36078 free (14 frags, 4508 blocks, 0.0% fragmentation)

# mount /dev/vnd0a /mnt
# ls -l /mnt
# df /mnt
Filesystem    1K-blocks       Used      Avail %Cap Mounted on
/dev/vnd0a        18039          0      17137   0% /mnt


Now do the same thing with a zero-filled blob:

# cp zero.fs test_ffsv2_zero.fs
# vnconfig vnd0 test_ffsv2_zero.fs
# disklabel -e vnd0
# newfs -O2 vnd0a
/dev/rvnd0a: 10.0MB (20480 sectors) block size 4096, fragment size 512
        using 4 cylinder groups of 2.50MB, 640 blks, 1136 inodes.
super-block backups (for fsck_ffs -b #) at:
144, 5264, 10384, 15504,

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 17966 free (14 frags, 2244 blocks, 0.1% fragmentation)

# vnconfig -u vnd0
# cat zero.fs >> test_ffsv2_zero.fs
# vnconfig vnd0 test_ffsv2_zero.fs
# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 17966 free (14 frags, 2244 blocks, 0.1% fragmentation)
# disklabel -e vnd0
# disklabel vnd0 | grep '^ a'
 a:     40960         0     4.2BSD   4096 32768    16  # (Cyl.      0 -     19)
# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 17966 free (14 frags, 2244 blocks, 0.1% fragmentation)
# resize_ffs /dev/rvnd0a
It's required to manually run fsck on file system before you can resize it

 Did you run fsck on your disk (Yes/No) ? Yes

Note that the file system is clean after this resize, which suggests
that something is not getting zeroed out by resize_ffs in the
random-blob case.
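To see why the two cases behave differently, the appended region of each image can be inspected directly.  The following is an illustrative sketch (file names are hypothetical, not from the test above) that simulates both expansions on plain files and counts the non-zero bytes in the appended 10MB region:

```shell
# Build a 10MB random base image, then grow one copy with zeros and
# one copy with more random data (hypothetical file names).
dd if=/dev/urandom of=base.fs bs=1048576 count=10 2>/dev/null
cp base.fs zerogrow.fs
cp base.fs randgrow.fs
dd if=/dev/zero bs=1048576 count=10 2>/dev/null >> zerogrow.fs
dd if=/dev/urandom bs=1048576 count=10 2>/dev/null >> randgrow.fs

# Count non-zero bytes in the appended region (skip the first 10MB).
# The zero-grown image reports 0; the random-grown image reports
# close to 10485760.
dd if=zerogrow.fs bs=1048576 skip=10 2>/dev/null | tr -d '\0' | wc -c
dd if=randgrow.fs bs=1048576 skip=10 2>/dev/null | tr -d '\0' | wc -c
```

Any stale non-zero bytes that resize_ffs leaves uninitialized in the new space are later misread by fsck as inode data, which matches the UNKNOWN FILE TYPE errors above.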

# fsck -fy /dev/rvnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 36078 free (14 frags, 4508 blocks, 0.0% fragmentation)


Final test: use the same random blobs and try this with an FFSv1 filesystem:

# cp random.fs test_ffsv1.fs
# vnconfig vnd0 test_ffsv1.fs
# disklabel -e vnd0
# disklabel vnd0 | grep '^ a'
 a:     20480         0     4.2BSD   4096 32768    16  # (Cyl.      0 -      9)
# newfs vnd0a
/dev/rvnd0a: 10.0MB (20480 sectors) block size 4096, fragment size 512
        using 4 cylinder groups of 2.50MB, 640 blks, 1216 inodes.
super-block backups (for fsck_ffs -b #) at:
32, 5152, 10272, 15392,
# dumpfs -s /dev/rvnd0a
file system: /dev/rvnd0a
format  FFSv1
endian  little-endian
magic   11954           time    Thu May  5 15:58:21 2016
superblock location     8192    id      [ 572ba5dd 3831964d ]
cylgrp  dynamic inodes  4.4BSD  sblock  FFSv2   fslevel 4

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 19134 free (14 frags, 2390 blocks, 0.1% fragmentation)

# cat morerandom.fs >> test_ffsv1.fs
# vnconfig vnd0 test_ffsv1.fs
# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 19134 free (14 frags, 2390 blocks, 0.1% fragmentation)
# disklabel -e vnd0
# disklabel vnd0 | grep '^ a'
 a:     40960         0     4.2BSD   4096 32768    16  # (Cyl.      0 -     19)

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 19134 free (14 frags, 2390 blocks, 0.1% fragmentation)

# resize_ffs /dev/rvnd0a
It's required to manually run fsck on file system before you can resize it

 Did you run fsck on your disk (Yes/No) ? Yes


Note that these are the exact same random blobs used in the FFSv2
case, yet here the filesystem stays clean.

# fsck -fy vnd0a
** /dev/rvnd0a
** File system is already clean
** Last Mounted on 
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
1 files, 1 used, 38302 free (14 frags, 4786 blocks, 0.0% fragmentation)


>Fix:
As a workaround, albeit an unreasonable one, one could always
zero-fill the space before adding it to the filesystem.
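For the vnd-image case above, that workaround amounts to appending zeros instead of a non-zeroed blob before growing the label and running resize_ffs.  A minimal sketch (file name is illustrative):

```shell
# Start from a 10MB image (stands in for test_ffsv2.fs above).
dd if=/dev/urandom of=test.fs bs=1048576 count=10 2>/dev/null

# Workaround: extend the image with zeros rather than concatenating a
# blob of unknown contents.  The new second half is guaranteed to be
# zero-filled, so resize_ffs should leave a clean filesystem.
dd if=/dev/zero bs=1048576 count=10 2>/dev/null >> test.fs

wc -c test.fs   # 20971520 bytes, i.e. 20MB
```

For the original LVM/Xen scenario the equivalent step would be zeroing the newly lvextended space from the host before handing the device to the guest, which is impractical for large volumes; hence "unreasonable".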


