Subject: kern/13868: mount_ffs -o softdep causes kernel: page fault trap, code=0
To: None <>
From: None <>
List: netbsd-bugs
Date: 09/04/2001 22:11:36
>Number:         13868
>Category:       kern
>Synopsis:       mount_ffs -o softdep of blocksize & fragsize = 16384 fails
>Confidential:   no
>Severity:       critical
>Priority:       medium
>Responsible:    kern-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Tue Sep 04 20:07:00 PDT 2001
>Originator:     Tracy Di Marco White
>Release:        NetBSD 1.5.2
System: NetBSD lyra 1.5.2 NetBSD 1.5.2 (LYRA) #4: Sun Sep 2 00:50:07 CDT 2001 i386

# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
32 1 1 5

newfs -a 5 -b 16384 -f 16384 -c 407 /dev/rraid3a
Warning: 2720 sector(s) in last cylinder unallocated
/dev/rraid3a:   9255520 sectors in 2411 cylinders of 24 tracks, 160 sectors
        4519.3MB in 6 cyl groups (407 c/g, 763.12MB/g, 12032 i/g)
super-block backups (for fsck -b #) at:
      32, 1563072, 3126112, 4689152, 6252192, 7815232,
mount -o softdep /dev/raid3a /mnt
uvm_fault(0xc03053c0, 0xc406d000, 0, 1) -> 2
kernel: page fault trap, code=0
Stopped in mount_ffs at memcpy+0x1a:    repe movsl      (%esi),%es:(%edi)
db> t
memcpy(c9b310ec,c069b600,c9bdc65c,80000000,c9bf5c4c) at memcpy+0x1a
ffs_mount(c069b600,bfbfdcf5,bfbfdb7c,c9cdce7c,c9bdc65c) at ffs_mount+0x425
sys_mount(c9bdc65c,c9cdcf80,c9cdcf78,0,2) at sys_mount+0x496
syscall() at syscall+0x1d8
--- syscall (number 21) ---
db> reboot
syncing disks... done
unmounting /proc (procfs)...
unmounting /kern (kernfs)...
unmounting /usr (/dev/wd0e)...
uvm_vnp_terminate(0xc9bf5aac): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9bd2aa0): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9bd2690): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9bde75c): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9bc75b4): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9bc6bc8): terminating active vnode (refs=2)
uvm_vnp_terminate(0xc9bc6548): terminating active vnode (refs=2)
uvm_vnp_terminate(0xc9ba6204): terminating active vnode (refs=2)
uvm_vnp_terminate(0xc9bbc7ac): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9bbc39c): terminating active vnode (refs=6)
uvm_vnp_terminate(0xc9bb9bb8): terminating active vnode (refs=2)
uvm_vnp_terminate(0xc9bb9a18): terminating active vnode (refs=2)
uvm_vnp_terminate(0xc9bb9398): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9bb7124): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9bb5600): terminating active vnode (refs=2)
uvm_vnp_terminate(0xc9bb286c): terminating active vnode (refs=6)
uvm_vnp_terminate(0xc9bb26cc): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9bb245c): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9b8f868): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9b8f6c8): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9b8f528): terminating active vnode (refs=5)
uvm_vnp_terminate(0xc9b865f4): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9b88e0c): terminating active vnode (refs=10)
uvm_vnp_terminate(0xc9b88c6c): terminating active vnode (refs=2)
uvm_vnp_terminate(0xc9b8892c): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9b8878c): terminating active vnode (refs=4)
uvm_vnp_terminate(0xc9b80c64): terminating active vnode (refs=24)
uvm_vnp_terminate(0xc9b809f4): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9b096c0): terminating active vnode (refs=44)
uvm_vnp_terminate(0xc9b09380): terminating active vnode (refs=22)
unmounting / (/dev/wd0a)...
uvm_vnp_terminate(0xc9c602cc): terminating active vnode (refs=1)
uvm_vnp_terminate(0xc9bbc46c): terminating active vnode (refs=1)
panic: lockmgr: draining against myself
Stopped in mount_ffs at cpu_Debugger+0x4:       leave
db> t
cpu_Debugger(c9c068dc,10407,0,c9c14920,c017af62) at cpu_Debugger+0x4
panic(c02bb520,c9c0683c,c9c15010,1,0) at panic+0x64
lockmgr(c9c068dc,10007,c9c068d8,c9c1496c,c01a2511) at lockmgr+0x7d6
genfs_lock(c9c14960) at genfs_lock+0x1f
vclean(c9c0683c,8,c9c15010) at vclean+0x71
vgonel(c9c0683c,c9c15010) at vgonel+0x3b
vflush(c0609600,0,2,c0609600,0) at vflush+0x6f
ffs_flushfiles(c0609600,2,c9c15010,c0609600,0) at ffs_flushfiles+0x2c
softdep_flushfiles(c0609600,2,c9c15010,c0609600,0) at softdep_flushfiles+0x50
ffs_unmount(c0609600,80000,c9c15010,c0609600,c0609690) at ffs_unmount+0x2e
dounmount(c0609600,80000,c9c15010) at dounmount+0xd9
vfs_unmountall(c9c15010,0,0,c02e7008,c9c15010) at vfs_unmountall+0x5b
vfs_shutdown(0,c9c14b0c,c011bc9c,0,0) at vfs_shutdown+0x23b
cpu_reboot(0,0,0,c9c14bb4,c011b974) at cpu_reboot+0x3b
db_sifting_cmd(10,0,0,c9c14b3c,0) at db_sifting_cmd
db_command(c02e7008,c02e6e28,c02a8642) at db_command+0x1ec
db_command_loop(c02a2e72) at db_command_loop+0x82
db_trap(6,0,1,c4e4d000,0) at db_trap+0xee
kdb_trap(6,0,c9c14c6c) at kdb_trap+0xc0
trap() at trap+0x1ac
--- trap (number 6) ---
memcpy(c9b310ec,c0695600,c9c15010,80000000,c9c0683c) at memcpy+0x1a
ffs_mount(c0695600,bfbfdcf5,bfbfdb7c,c9c14e7c,c9c15010) at ffs_mount+0x425
sys_mount(c9c15010,c9c14f80,c9c14f78,0,2) at sys_mount+0x496
syscall() at syscall+0x1d8
--- syscall (number 21) ---

newfs as above, then: mount -o softdep /dev/raid3a /mnt
Mounting without softdep has the same result.

Don't use those numbers, make newfs reject them, or make mount_ffs
work with them.