NetBSD-Bugs archive


kern/45946: Kernel locks up in VMEM system



>Number:         45946
>Category:       kern
>Synopsis:       Kernel locks up in VMEM system
>Confidential:   no
>Severity:       critical
>Priority:       high
>Responsible:    kern-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Wed Feb 08 00:30:01 +0000 2012
>Originator:     tron%zhadum.org.uk@localhost
>Release:        NetBSD 5.99.64 2012-02-04 sources
>Organization:
Matthias Scheler                                  http://zhadum.org.uk/
>Environment:
System: NetBSD lyssa.zhadum.org.uk 5.99.64 NetBSD 5.99.64 (LYSSA) #0: Sat Feb 4 
20:02:22 GMT 2012 tron%lyssa.zhadum.org.uk@localhost:/src/sys/compile/LYSSA i386
Architecture: i386
Machine: i386
>Description:
I'm running NetBSD-current on the following VMware Fusion virtual machine:

Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 5.99.64 (LYSSA) #0: Sat Feb  4 20:02:22 GMT 2012
        tron%lyssa.zhadum.org.uk@localhost:/src/sys/compile/LYSSA
total memory = 3071 MB
avail memory = 3015 MB
timecounter: Timecounters tick every 10.000 msec
timecounter: Timecounter "i8254" frequency 1193182 Hz quality 100
VMware, Inc. VMware Virtual Platform (None)
mainbus0 (root)
cpu0 at mainbus0 apid 0: Intel(R) Xeon(R) CPU           X5365  @ 3.00GHz, id 
0x6f7
cpu1 at mainbus0 apid 1: Intel(R) Xeon(R) CPU           X5365  @ 3.00GHz, id 
0x6f7
cpu2 at mainbus0 apid 2: Intel(R) Xeon(R) CPU           X5365  @ 3.00GHz, id 
0x6f7
cpu3 at mainbus0 apid 3: Intel(R) Xeon(R) CPU           X5365  @ 3.00GHz, id 
0x6f7
cpu4 at mainbus0 apid 4: Intel(R) Xeon(R) CPU           X5365  @ 3.00GHz, id 
0x6f7
cpu5 at mainbus0 apid 5: Intel(R) Xeon(R) CPU           X5365  @ 3.00GHz, id 
0x6f7
ioapic0 at mainbus0 apid 6: pa 0xfec00000, version 11, 24 pins
acpi0 at mainbus0: Intel ACPICA 20110623
acpi0: X/RSDT: OemId <INTEL ,440BX   ,06040000>, AslId <VMW ,01324272>
acpi0: SCI interrupting at int 9
timecounter: Timecounter "ACPI-Fast" frequency 3579545 Hz quality 1000
MBRD (PNP0C02) at acpi0 not configured
PIC (PNP0001) at acpi0 not configured
attimer0 at acpi0 (TIME, PNP0100): io 0x40-0x43 irq 0
pcppi0 at acpi0 (SPKR, PNP0800): io 0x61
spkr0 at pcppi0
sysbeep0 at pcppi0
pckbc0 at acpi0 (KBC, PNP0303) (kbd port): io 0x60,0x64 irq 1
pckbc1 at acpi0 (MOUS, PNP0F13) (aux port): irq 12
HPET (PNP0103) at acpi0 not configured
lpt0 at acpi0 (LPTB, PNP0400): io 0x378-0x37f irq 7
com0 at acpi0 (COMA, PNP0501-1): io 0x3f8-0x3ff irq 4
com0: ns16550a, working fifo
com1 at acpi0 (COMB, PNP0501-2): io 0x2f8-0x2ff irq 3
com1: ns16550a, working fifo
fdc0 at acpi0 (FDC, PNP0700): io 0x3f0-0x3f5,0x3f7 irq 6 drq 2
fdc0: failed to evaluate _FDE: AE_NOT_FOUND
EXPL (PNP0C02) at acpi0 not configured
ACAD (ACPI0003) at acpi0 not configured
attimer0: attached to pcppi0
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard
pms0 at pckbc0 (aux slot)
pckbc0: using irq 12 for aux slot
wsmouse0 at pms0 mux 0
pci0 at mainbus0 bus 0: configuration mode 1
pci0: i/o space, memory space enabled, rd/line, rd/mult, wr/inv ok
pchb0 at pci0 dev 0 function 0: Intel 82443BX Host Bridge/Controller (rev. 0x01)
ppb0 at pci0 dev 1 function 0: Intel 82443BX AGP Interface (rev. 0x01)
pci1 at ppb0 bus 1
pci1: i/o space, memory space enabled
pcib0 at pci0 dev 7 function 0: Intel 82371AB (PIIX4) PCI-ISA Bridge (rev. 0x08)
piixide0 at pci0 dev 7 function 1: Intel 82371AB IDE controller (PIIX4) (rev. 
0x01)
piixide0: bus-master DMA support present
piixide0: primary channel configured to compatibility mode
piixide0: primary channel interrupting at ioapic0 pin 14
atabus0 at piixide0 channel 0
piixide0: secondary channel configured to compatibility mode
piixide0: secondary channel interrupting at ioapic0 pin 15
atabus1 at piixide0 channel 1
piixpm0 at pci0 dev 7 function 3: Intel 82371AB (PIIX4) Power Management 
Controller (rev. 0x08)
timecounter: Timecounter "piixpm0" frequency 3579545 Hz quality 1000
piixpm0: 24-bit timer
piixpm0: SMBus disabled
VMware Virtual Machine Communication Interface (miscellaneous system, revision 
0x10) at pci0 dev 7 function 7 not configured
vga0 at pci0 dev 15 function 0: VMware Virtual SVGA II (rev. 0x00)
wsdisplay0 at vga0 kbdmux 1: console (80x25, vt100 emulation), using wskbd0
wsmux1: connecting to wsdisplay0
drm at vga0 not configured
mpt has not been converted to device_t
mpt0 at pci0 dev 16 function 0: Symbios Logic 53c1020/53c1030 (rev. 0x01)
mpt0: applying 1030 quirk
mpt0: interrupting at ioapic0 pin 17
scsibus0 at mpt0: 16 targets, 8 luns per target
ppb1 at pci0 dev 17 function 0: VMware PCI Bridge (rev. 0x02)
pci2 at ppb1 bus 2
pci2: i/o space, memory space enabled, rd/line, wr/inv ok
uhci0 at pci2 dev 0 function 0: VMware product 0x0774 (rev. 0x00)
uhci0: interrupting at ioapic0 pin 18
usb0 at uhci0: USB revision 1.0
wm0 at pci2 dev 1 function 0: Intel i82545EM 1000BASE-T Ethernet (rev. 0x01)
wm0: interrupting at ioapic0 pin 19
wm0: 32-bit 66MHz PCI bus
wm0: 256 word (8 address bits) MicroWire EEPROM
wm0: Ethernet address 00:0c:29:xx:xx:xx
makphy0 at wm0 phy 1: Marvell 88E1011 Gigabit PHY, rev. 3
makphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 
1000baseT-FDX, auto
eap0 at pci2 dev 2 function 0: Ensoniq AudioPCI 97 (rev. 0x02)
eap0: interrupting at ioapic0 pin 16
eap0: ac97: Crystal CS4297A codec; no 3D stereo
audio0 at eap0: full duplex, playback, capture, mmap, independent
ehci0 at pci2 dev 3 function 0: VMware product 0x0770 (rev. 0x00)
ehci0: interrupting at ioapic0 pin 17
ehci0: EHCI version 1.0
usb1 at ehci0: USB revision 2.0
ppb2 at pci0 dev 21 function 0: VMware PCI Express Root Port (rev. 0x01)
ppb2: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci3 at ppb2 bus 3
pci3: i/o space, memory space enabled, rd/line, wr/inv ok
ppb3 at pci0 dev 21 function 1: VMware PCI Express Root Port (rev. 0x01)
ppb3: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci4 at ppb3 bus 4
pci4: i/o space, memory space enabled, rd/line, wr/inv ok
ppb4 at pci0 dev 21 function 2: VMware PCI Express Root Port (rev. 0x01)
ppb4: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci5 at ppb4 bus 5
pci5: i/o space, memory space enabled, rd/line, wr/inv ok
ppb5 at pci0 dev 21 function 3: VMware PCI Express Root Port (rev. 0x01)
ppb5: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci6 at ppb5 bus 6
pci6: i/o space, memory space enabled, rd/line, wr/inv ok
ppb6 at pci0 dev 21 function 4: VMware PCI Express Root Port (rev. 0x01)
ppb6: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci7 at ppb6 bus 7
pci7: i/o space, memory space enabled, rd/line, wr/inv ok
ppb7 at pci0 dev 21 function 5: VMware PCI Express Root Port (rev. 0x01)
ppb7: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci8 at ppb7 bus 8
pci8: i/o space, memory space enabled, rd/line, wr/inv ok
ppb8 at pci0 dev 21 function 6: VMware PCI Express Root Port (rev. 0x01)
ppb8: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci9 at ppb8 bus 9
pci9: i/o space, memory space enabled, rd/line, wr/inv ok
ppb9 at pci0 dev 21 function 7: VMware PCI Express Root Port (rev. 0x01)
ppb9: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci10 at ppb9 bus 10
pci10: i/o space, memory space enabled, rd/line, wr/inv ok
ppb10 at pci0 dev 22 function 0: VMware PCI Express Root Port (rev. 0x01)
ppb10: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci11 at ppb10 bus 11
pci11: i/o space, memory space enabled, rd/line, wr/inv ok
ppb11 at pci0 dev 22 function 1: VMware PCI Express Root Port (rev. 0x01)
ppb11: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci12 at ppb11 bus 12
pci12: i/o space, memory space enabled, rd/line, wr/inv ok
ppb12 at pci0 dev 22 function 2: VMware PCI Express Root Port (rev. 0x01)
ppb12: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci13 at ppb12 bus 13
pci13: i/o space, memory space enabled, rd/line, wr/inv ok
ppb13 at pci0 dev 22 function 3: VMware PCI Express Root Port (rev. 0x01)
ppb13: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci14 at ppb13 bus 14
pci14: i/o space, memory space enabled, rd/line, wr/inv ok
ppb14 at pci0 dev 22 function 4: VMware PCI Express Root Port (rev. 0x01)
ppb14: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci15 at ppb14 bus 15
pci15: i/o space, memory space enabled, rd/line, wr/inv ok
ppb15 at pci0 dev 22 function 5: VMware PCI Express Root Port (rev. 0x01)
ppb15: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci16 at ppb15 bus 16
pci16: i/o space, memory space enabled, rd/line, wr/inv ok
ppb16 at pci0 dev 22 function 6: VMware PCI Express Root Port (rev. 0x01)
ppb16: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci17 at ppb16 bus 17
pci17: i/o space, memory space enabled, rd/line, wr/inv ok
ppb17 at pci0 dev 22 function 7: VMware PCI Express Root Port (rev. 0x01)
ppb17: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci18 at ppb17 bus 18
pci18: i/o space, memory space enabled, rd/line, wr/inv ok
ppb18 at pci0 dev 23 function 0: VMware PCI Express Root Port (rev. 0x01)
ppb18: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci19 at ppb18 bus 19
pci19: i/o space, memory space enabled, rd/line, wr/inv ok
ppb19 at pci0 dev 23 function 1: VMware PCI Express Root Port (rev. 0x01)
ppb19: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci20 at ppb19 bus 20
pci20: i/o space, memory space enabled, rd/line, wr/inv ok
ppb20 at pci0 dev 23 function 2: VMware PCI Express Root Port (rev. 0x01)
ppb20: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci21 at ppb20 bus 21
pci21: i/o space, memory space enabled, rd/line, wr/inv ok
ppb21 at pci0 dev 23 function 3: VMware PCI Express Root Port (rev. 0x01)
ppb21: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci22 at ppb21 bus 22
pci22: i/o space, memory space enabled, rd/line, wr/inv ok
ppb22 at pci0 dev 23 function 4: VMware PCI Express Root Port (rev. 0x01)
ppb22: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci23 at ppb22 bus 23
pci23: i/o space, memory space enabled, rd/line, wr/inv ok
ppb23 at pci0 dev 23 function 5: VMware PCI Express Root Port (rev. 0x01)
ppb23: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci24 at ppb23 bus 24
pci24: i/o space, memory space enabled, rd/line, wr/inv ok
ppb24 at pci0 dev 23 function 6: VMware PCI Express Root Port (rev. 0x01)
ppb24: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci25 at ppb24 bus 25
pci25: i/o space, memory space enabled, rd/line, wr/inv ok
ppb25 at pci0 dev 23 function 7: VMware PCI Express Root Port (rev. 0x01)
ppb25: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci26 at ppb25 bus 26
pci26: i/o space, memory space enabled, rd/line, wr/inv ok
ppb26 at pci0 dev 24 function 0: VMware PCI Express Root Port (rev. 0x01)
ppb26: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci27 at ppb26 bus 27
pci27: i/o space, memory space enabled, rd/line, wr/inv ok
ppb27 at pci0 dev 24 function 1: VMware PCI Express Root Port (rev. 0x01)
ppb27: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci28 at ppb27 bus 28
pci28: i/o space, memory space enabled, rd/line, wr/inv ok
ppb28 at pci0 dev 24 function 2: VMware PCI Express Root Port (rev. 0x01)
ppb28: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci29 at ppb28 bus 29
pci29: i/o space, memory space enabled, rd/line, wr/inv ok
ppb29 at pci0 dev 24 function 3: VMware PCI Express Root Port (rev. 0x01)
ppb29: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci30 at ppb29 bus 30
pci30: i/o space, memory space enabled, rd/line, wr/inv ok
ppb30 at pci0 dev 24 function 4: VMware PCI Express Root Port (rev. 0x01)
ppb30: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci31 at ppb30 bus 31
pci31: i/o space, memory space enabled, rd/line, wr/inv ok
ppb31 at pci0 dev 24 function 5: VMware PCI Express Root Port (rev. 0x01)
ppb31: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci32 at ppb31 bus 32
pci32: i/o space, memory space enabled, rd/line, wr/inv ok
ppb32 at pci0 dev 24 function 6: VMware PCI Express Root Port (rev. 0x01)
ppb32: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci33 at ppb32 bus 33
pci33: i/o space, memory space enabled, rd/line, wr/inv ok
ppb33 at pci0 dev 24 function 7: VMware PCI Express Root Port (rev. 0x01)
ppb33: PCI Express 2.0 <Root Port of PCI-E Root Complex>
pci34 at ppb33 bus 34
pci34: i/o space, memory space enabled, rd/line, wr/inv ok
isa0 at pcib0
npx0 at isa0 port 0xf0-0xff
npx0: reported by CPUID; using exception 16
acpicpu0 at cpu0: ACPI CPU
acpicpu0: C1: HLT, lat   0 us, pow     0 mW
acpicpu1 at cpu1: ACPI CPU
acpicpu2 at cpu2: ACPI CPU
acpicpu3 at cpu3: ACPI CPU
acpicpu4 at cpu4: ACPI CPU
acpicpu5 at cpu5: ACPI CPU
timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
ERROR: 16096 cycle TSC drift observed
scsibus0: waiting 2 seconds for devices to settle...
fd0 at fdc0 drive 0: 1.44MB, 80 cyl, 2 head, 18 sec
fd1 at fdc0 drive 1: density unknown
atapibus0 at atabus0: 2 targets
cd0 at atapibus0 drive 1: <VMware Virtual IDE CDROM Drive, 0100000000000000000, 
0000000> cdrom removable
cd0: 32-bit data port
cd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 2 (Ultra/33)
wd0 at atabus0 drive 0
wd0: <VMware Virtual IDE Hard Drive>
wd0: drive supports 64-sector PIO transfers, LBA addressing
wd0: 32768 MB, 71014 cyl, 15 head, 63 sec, 512 bytes/sect x 67108864 sectors
uhub0 at usb0: VMware UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhub1 at usb1: VMware EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub1: 6 ports with 6 removable, self powered
wd0: 32-bit data port
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 2 (Ultra/33)
wd0(piixide0:0:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA)
cd0(piixide0:0:1): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA)
uhidev0 at uhub0 port 1 configuration 1 interface 0
uhidev0: VMware VMware Virtual USB Mouse, rev 1.10/1.02, addr 2, iclass 3/0
ums0 at uhidev0: 16 buttons, W and Z dirs
wsmouse1 at ums0 mux 0
uhidev1 at uhub0 port 1 configuration 1 interface 1
uhidev1: VMware VMware Virtual USB Mouse, rev 1.10/1.02, addr 2, iclass 3/0
ums1 at uhidev1: 16 buttons, W and Z dirs
wsmouse2 at ums1 mux 0
uhub2 at uhub0 port 2: vendor 0x0e0f VMware Virtual USB Hub, class 9/0, rev 
1.10/1.00, addr 3
uhub2: 7 ports with 7 removable, self powered
Kernelized RAIDframe activated
boot device: wd0
root on wd0a dumps on wd0b
root file system type: ffs
wsdisplay0: screen 1 added (80x25, vt100 emulation)
wsdisplay0: screen 2 added (80x25, vt100 emulation)
wsdisplay0: screen 3 added (80x25, vt100 emulation)
wsdisplay0: screen 4 added (80x25, vt100 emulation)
wsdisplay0: screen 5 added (80x25, vt100 emulation)
wsdisplay0: screen 6 added (80x25, vt100 emulation)
wsdisplay0: screen 7 added (80x25, vt100 emulation)
/export/scratch: replaying log to disk
vmt0 at cpu0: Workstation

Since I upgraded from 5.99.60 to 5.99.64, running "/etc/daily" always locks
up: the "find" process gets stuck and becomes unkillable. Here is the
backtrace of the "find" process:

sleepq_block(0,0,c052ac17,c058b150,1000,40000,d790926c,c0450606,c061d20c,1000) 
at c02667df
cv_wait(c05e68e0,c05e68ec,c05e5a70,c0392a29,0,0,ffffffff,1001,d7909328,d79092e0)
 at c023ab40
vmem_xalloc(c05e68e0,40000,1000,0,0,0,ffffffff,1001,d79093a8,d32b7770) at 
c0391cda
vmem_alloc(c05e68e0,40000,1001,d79093a8,0,0,d79093cc,c0300664,dffc3000,c0504f58)
 at c0392532
vmem_xalloc(c05e5a70,40000,1000,0,0,0,ffffffff,1001,d7909470,ffffffff) at 
c03920aa
vmem_alloc(c05e5a70,40000,1001,d7909470,c3aae900,ffffffff,d790945c,c05e9580,c05e9400,c05e94fc)
 at c0392532
qc_poolpage_alloc(c05e9400,1,d79094bc,c0269837,c059651c,c061d20c,d79094ec,c044c459,c061d1e0,2)
 at c0392b24
pool_grow(c05e9474,0,0,c044d48e,c061d1e0,d7909540,d790955c,c0300664,d524f000,c05e9478)
 at c0389ed1
pool_get(c05e9400,1,1,0,d524e,0,3,c061d20c,c3aae900,c0596518) at c0389844
pool_cache_get_slow(0,1,d79095bc,c0450606,3f,c3e82c00,d79095fc,c0300f8e,d524f000,0)
 at c038a92b
pool_cache_get_paddr(c05e9400,1,0,c02fcdaa,bff54940,a6e50103,0,c50a2540,c05e2d00,c05e2d00)
 at c038c0e9
vmem_alloc(c05e5a70,1000,1001,d7909670,4000000,c3a66240,d790969c,c038993a,c3a662b4,0)
 at c03924e0
uvm_km_kmem_alloc(c05e5a70,1000,1001,d79096b0,0,d524ff40,1,c0457038,c061d45c,c0389f44)
 at c0441c7d
pool_page_alloc(c3a6f480,1,0,d7909720,c3a6f9fc,ffffffff,d79096ec,c3a66240,c3a663c0,0)
 at c0388ab1
pool_grow(c3a6f4f4,ffffffff,d790973c,d524ee78,0,d524ff40,d790975c,c0252a6f,d524ff40,c3a6f4f8)
 at c0389ed1
pool_get(c3a6f480,1,d790979c,d524ee78,d524ee78,0,d79097cc,c04616c9,c4273024,c05520d0)
 at c0389844
pool_cache_get_slow(0,1,d790980c,c046c816,c4273000,1,0,c03f07a9,c4f1d440,4) at 
c038a92b
pool_cache_get_paddr(c3a6f480,1,0,0,d790987c,a2710,200,7ff,0,0) at c038c0e9
ffs_vget(c4273000,d8f9a,0,d790995c,d7909960,0,c0577fd0,cb36d300,cb36d2c8,600) 
at c01a8082
ufs_lookup(d79099b4,d520e844,d79099cc,c047ee1d,d79099bc,c50a2710,0,d520e844,20000,0)
 at c03f18e1
VOP_LOOKUP(d520e844,d7909a20,d7909bc4,2,c3a94160,d520e844,d7909a3c,c046f17f,d520e844,2)
 at c047dd2d
lookup_once(d7909aec,d7909ae8,4,0,20,d7909b80,0,d524ef28,d7909acc,ffffffff) at 
c045ec0a
namei_tryemulroot(0,1,0,0,c058cd60,c058a8ac,d7909cf4,c045f250,bb92b634,c43b2400)
 at c045f5ee
namei(d7909ba0,d7909be0,d7909bec,c02604e2,c3ab0800,7f,c0586f40,c0260686,c3ab1000,c3aafd00)
 at c0460d09
do_sys_stat(bb92b634,0,d7909c08,600,0,0,0,81b4,d8f99,0) at c04699fc
sys___lstat50(c50a2540,d7909cf4,d7909d1c,0,c02f0010,c4ee0030,10,10,c02fe6ae,c509fd38)
 at c0469abc
syscall(d7909d48,bb9000b3,ab,bfbf001f,bb92001f,bb92b5e0,bb92b640,bfbfe858,bbbb1598,bb92b5e0)
 at c03a505d

At the same time "top -t" reports that "pgdaemon" is very busy:

load averages:  0.07,  0.05,  0.02;               up 0+00:29:43         19:10:10
116 threads: 26 idle, 1 runnable, 83 sleeping, 6 on CPU
CPU0 states:  0.0% user,  0.0% nice, 26.3% system,  0.0% interrupt, 73.7% idle
CPU1 states:  0.0% user,  0.0% nice, 17.0% system,  0.0% interrupt, 83.0% idle
CPU2 states:  0.0% user,  0.0% nice, 59.7% system,  0.0% interrupt, 40.3% idle
CPU3 states:  0.0% user,  0.0% nice,  9.2% system,  0.0% interrupt, 90.8% idle
CPU4 states:  0.0% user,  0.0% nice,  3.4% system,  0.0% interrupt, 96.6% idle
CPU5 states:  0.0% user,  0.0% nice, 13.0% system,  0.0% interrupt, 87.0% idle
Memory: 22M Act, 14M Inact, 13M Wired, 8608K Exec, 9572K File, 2544M Free
Swap: 4096M Total, 4096M Free

  PID   LID USERNAME PRI STATE      TIME   WCPU    CPU NAME      COMMAND
    0    70 root     126 CPU/2      3:37 50.34% 50.34% pgdaemon  [system]
    0     7 root     127 xcall/0    0:43 22.80% 22.80% xcall/0   [system]
    0    22 root     127 RUN/1      1:40 16.50% 16.50% xcall/1   [system]
    0    46 root     127 xcall/5    0:14 11.52% 11.52% xcall/5   [system]
    0    28 root     127 xcall/2    1:01 10.45% 10.45% xcall/2   [system]
    0    40 root     127 xcall/4    0:13  3.37%  3.37% xcall/4   [system]
    0    34 root     127 xcall/3    0:29  2.69%  2.69% xcall/3   [system]
  272     1 root      85 vmem/0     0:19  0.00%  0.00% -         find
    0    71 root     124 syncer/1   0:01  0.00%  0.00% ioflush   [system]
 1079     1 root      43 CPU/4      0:00  0.00%  0.00% -         top
    0     2 root       0 CPU/0      0:00  0.00%  0.00% idle/0    [system]
    0    17 root       0 CPU/1      0:00  0.00%  0.00% idle/1    [system]
    0    41 root       0 CPU/5      0:00  0.00%  0.00% idle/5    [system]
    0    29 root       0 CPU/3      0:00  0.00%  0.00% idle/3    [system]
    0    11 root     125 cacheg/1   0:00  0.00%  0.00% cachegc   [system]
    0    72 root     125 aiodon/0   0:00  0.00%  0.00% aiodoned  [system]
    0    61 root     125 vmem_r/5   0:00  0.00%  0.00% vmem_reha [system]
    0     1 root     125 uvm/5      0:00  0.00%  0.00% swapper   [system]
    0     8 root     125 mod_un/0   0:00  0.00%  0.00% modunload [system]
    0     9 root     125 vdrain/4   0:00  0.00%  0.00% vdrain    [system]
    0    10 root     125 vrele/5    0:00  0.00%  0.00% vrele     [system]
    0    73 root     123 physio/5   0:00  0.00%  0.00% physiod   [system]
    0    62 root      96 unpgc/2    0:00  0.00%  0.00% unpgc     [system]
    0    74 root      96 nfsiod/4   0:00  0.00%  0.00% nfsio     [system]
    0    63 root      96 usbevt/5   0:00  0.00%  0.00% usb0      [system]
    0    64 root      96 usbtsk/5   0:00  0.00%  0.00% usbtask-h [system]
    0    75 root      96 nfsiod/2   0:00  0.00%  0.00% nfsio     [system]
    0    65 root      96 usbtsk/5   0:00  0.00%  0.00% usbtask-d [system]
    0    66 root      96 usbevt/5   0:00  0.00%  0.00% usb1      [system]
    0    76 root      96 nfsiod/2   0:00  0.00%  0.00% nfsio     [system]
    0    77 root      96 nfsiod/1   0:00  0.00%  0.00% nfsio     [system]
    0    67 root      96 crypto/5   0:00  0.00%  0.00% cryptoret [system]
    0    12 root      96 nfssil/0   0:00  0.00%  0.00% nfssilly  [system]
    0    13 root      96 sopend/0   0:00  0.00%  0.00% sopendfre [system]
    0    14 root      96 pmfeve/0   0:00  0.00%  0.00% pmfevent  [system]
    0    15 root      96 pmfsus/0   0:00  0.00%  0.00% pmfsuspen [system]
    0    16 root      96 smtask/0   0:00  0.00%  0.00% sysmon    [system]
    0    60 root      96 sccomp/0   0:00  0.00%  0.00% atapibus0 [system]
    0    50 root      96 sccomp/0   0:00  0.00%  0.00% scsibus0  [system]
    0    49 root      96 atath/0    0:00  0.00%  0.00% atabus1   [system]
    0    47 root      96 pmsres/0   0:00  0.00%  0.00% pms0      [system]
    0    48 root      96 atath/0    0:00  0.00%  0.00% atabus0   [system]
  678     1 tron      85 ttyraw/4   0:00  0.00%  0.00% -         zsh
  228     1 root      85 select/1   0:00  0.00%  0.00% -         amd
  185     1 root      85 select/4   0:00  0.00%  0.00% -         ypbind
  184     1 root      85 select/5   0:00  0.00%  0.00% -         rpcbind
  174     1 root      85 kqueue/5   0:00  0.00%  0.00% -         syslogd
    1     1 root      85 wait/5     0:00  0.00%  0.00% -         init
  254     1 root      85 select/5   0:00  0.00%  0.00% -         mountd
  803     1 tron      85 select/4   0:00  0.00%  0.00% -         sshd

I have got a crash dump. If somebody can tell me how to get the stack trace
of "pgdaemon" using "crash" I can provide it.
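I assume a session along these lines would work, based on ddb(4)-style
commands (the dump file names and the "bt/a" modifier are guesses on my
part, not verified):

# crash -M /var/crash/netbsd.0.core -N /var/crash/netbsd.0
crash> ps                  # find the address of the pgdaemon LWP
crash> bt/a <lwp-address>  # back trace that LWP by address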

>How-To-Repeat:
Run "/bin/sh /etc/daily 2>&1 | tee /var/log/daily.out | sendmail -t"
in a root shell.

>Fix:
Not known.


