Paul Goyette <paul%whooppee.com@localhost> writes:

> # netstat -m
> 553 mbufs in use:
>         541 mbufs allocated to data
>         11 mbufs allocated to packet headers
>         1 mbufs allocated to socket names and addresses
> 98 calls to protocol drain routines
> # vmstat -mW | grep '^[MNm]'
> Memory resource pool statistics
> Name    Size Requests Fail Releases   InUse  Avail Pgreq Pgrel Npage PageSz Hiwat Minpg  Maxpg Idle Flags  Util
> mbpl     512    32661    0    31465    1196     44   391   236   155   4096   287     2    inf     3 0x000 96.5%
> mclpl   2048    27746    0    26731    1015      9  1644  1132   512   4096   865     4 524274     4 0x000 99.1%
> mutex     64  4642962    0  2951894 1691068 604715 36471    30 36441   4096 36471     0    inf     1 0x040 72.5%

mbuf and mbuf cluster are not the same thing.

I thought mbufs were 256 bytes, but they seem to be 512 here (they are
256 on my netbsd-6/i386 box). Either way, they have a header and can
hold some data.

mbuf clusters are 2048 bytes. These are for data only, and are attached
to a regular mbuf so that the 2K space is used instead of the
"512 - header" bytes. Clusters can be used when data does not fit in a
regular mbuf. Many ethernet interfaces pre-allocate clusters and stage
them in the receive ring buffer so that the interface can just DMA the
data, so having a bunch of clusters in use is pretty normal. I'm a
little fuzzy on the details, but I think some drivers have clusters
preallocated and then get mbufs themselves to attach to them when
processing the receive interrupt.

I see no fails counted. Why do you think you are out of clusters? Are
you seeing that in dmesg, or is it just a possible explanation for the
lockup? Please describe the lockup symptoms more precisely.

Also, look in vmstat -m for anything with fail != 0. You might also
save vmstat -m to a file every 5 minutes, and compare the snapshots
from before and after the next lockup.
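As a side note, the Util column in the quoted output is consistent with
simply counting in-use items against total item slots. A quick check,
assuming Util = InUse / (Npage * (PageSz / Size)) — this formula is my
inference from the numbers, not documented in the thread:

```python
# Sanity-check the Util column from the quoted `vmstat -m` output.
# Assumption (inferred, not from the mail): each pool page of PageSz bytes
# holds PageSz // Size items, and Util = InUse / total item slots.
pools = {
    # name: (Size, InUse, Avail, Npage, PageSz, reported Util %)
    "mbpl":  (512,  1196, 44, 155, 4096, 96.5),
    "mclpl": (2048, 1015,  9, 512, 4096, 99.1),
}

for name, (size, inuse, avail, npage, pagesz, reported) in pools.items():
    capacity = npage * (pagesz // size)
    # every slot should be either in use or available
    assert inuse + avail == capacity
    util = 100.0 * inuse / capacity
    print(f"{name}: {inuse}/{capacity} slots = {util:.1f}% (reported {reported}%)")
```

For mclpl that is 1015 of 1024 cluster slots in use (99.1%), which again
looks like normal receive-ring preallocation rather than exhaustion,
given Fail is 0.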
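The periodic-snapshot idea could be scripted along these lines. This is
just a sketch, not something from the thread: the log path, the 5-minute
interval, and the assumption that Fail is the 4th column of the pool
table are all placeholders you may need to adjust:

```shell
#!/bin/sh
# Sketch: snapshot `vmstat -m` periodically so pool stats can be compared
# before/after a lockup, and flag any pool with a nonzero Fail column.
# LOG and STATS_CMD are placeholders, overridable from the environment.
LOG=${LOG:-/var/tmp/vmstat-m.log}
STATS_CMD=${STATS_CMD:-"vmstat -m"}

snapshot() {
    # append a timestamped copy of the full pool statistics
    printf '=== %s ===\n' "$(date '+%Y-%m-%d %H:%M:%S')" >>"$LOG"
    $STATS_CMD >>"$LOG" 2>&1
}

check_fails() {
    # print only lines whose 4th field (assumed: Fail) is a nonzero number
    $STATS_CMD 2>/dev/null | awk '$4 ~ /^[1-9][0-9]*$/ { print "FAIL:", $0 }'
}

# usage (run in the background, or from cron without the loop):
#   while :; do snapshot; check_fails; sleep 300; done &
```

After the next lockup, diffing consecutive snapshots in the log should
show which pool, if any, was growing or failing.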