Subject: Re: Heavy swapping causes deadlock in uvm_unlock_fpageq
To: Rick Byers <RickB@BigScaryChildren.net>
From: Chuck Silvers <chuq@chuq.com>
List: current-users
Date: 07/05/2001 08:11:24
hi,
most likely what's happening is that you're getting really close to running
out of swap and running into the problems with the out-of-swap detection
in the 1.5-branch code. top showing 100 MB of swap free doesn't mean much,
since it's entirely possible that a process allocating enough memory to
use up that 100 MB of swap will prevent top from having the memory it needs
to run.
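if you want a number you can trust more than top's snapshot, "swapctl -l"
will show per-device usage, or you can query it directly with swapctl(2).
something along these lines should do it (untested sketch, so double-check
the struct swapent fields against the man page):

/* untested sketch: query swap usage directly via swapctl(2) */
#include <sys/param.h>
#include <sys/swap.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <err.h>

int
main(void)
{
	struct swapent *sep;
	long used = 0, total = 0;
	int i, n;

	/* number of configured swap devices */
	n = swapctl(SWAP_NSWAP, NULL, 0);
	if (n < 1)
		errx(1, "no swap devices configured");

	sep = malloc(n * sizeof(*sep));
	if (sep == NULL)
		err(1, "malloc");

	/* fill in per-device statistics */
	n = swapctl(SWAP_STATS, sep, n);
	if (n == -1)
		err(1, "swapctl(SWAP_STATS)");

	for (i = 0; i < n; i++) {
		total += sep[i].se_nblks;	/* DEV_BSIZE blocks */
		used += sep[i].se_inuse;	/* blocks in use */
	}

	printf("swap: %ld KB used of %ld KB\n",
	    used * DEV_BSIZE / 1024, total * DEV_BSIZE / 1024);
	return 0;
}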
if this is the case then the only thing you can do is use less memory or
add more swap space. I've been working for a while now on some changes to
-current that make the system kill a process instead of getting stuck,
but they depend on other work that's only in -current and the sum is too
big to backport to 1.5.
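roughly, the idea is just to have the pagedaemon give up and kill something
when there's nothing left to page out and nowhere to put it. the fragment
below is purely illustrative (a compilable toy, not the actual -current
code, and all the names in it are made up):

/* toy sketch of the "kill a process instead of hanging" idea */
#include <stdio.h>

static int free_pages;		/* pretend free-page count */
static int swap_free_blocks;	/* pretend free swap */

static int
scan_inactive_queue(void)
{
	/* pretend the scan couldn't clean or free anything */
	return 0;
}

static void
kill_largest_process(void)
{
	/* the real kernel would pick a victim process and signal it */
	printf("out of swap: killing the largest process\n");
	free_pages = 1000;	/* its pages return to the free list */
}

int
main(void)
{
	/* one iteration of a pagedaemon-style loop */
	if (free_pages == 0 && scan_inactive_queue() == 0 &&
	    swap_free_blocks == 0) {
		/*
		 * nothing reclaimable and no swap left: killing a
		 * process frees memory instead of deadlocking here.
		 */
		kill_largest_process();
	}
	printf("free pages now: %d\n", free_pages);
	return 0;
}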
-Chuck
On Thu, Jul 05, 2001 at 09:07:11AM -0400, Rick Byers wrote:
> Hi,
> I just started doing more memory intensive stuff (i.e. lots of swapping)
> on my NetBSD-1.5.1/i386 machine, and have been having problems with it
> hanging. I can still switch virtual consoles and break into the debugger,
> but nothing else works. Here is the stack trace:
> uvm_unlock_fpageq (+0x13)
> uvmpd_scan_inactive
> uvmpd_scan
> uvm_pageout
> start_pagedaemon
>
> So it looks like there is some sort of deadlock in the paging code. I
> left top running this last time, and when it froze, top wasn't indicating
> that swap space was almost exhausted or anything (in fact I think it said
> 100 MB free) - although there were about 6 perl processes running.
>
> Anyway, my question is this: I haven't noticed anything like this on my
> NetBSD-current/i386 machine (although it probably doesn't do as much
> swapping). Does anyone know of anything that may have been fixed in
> -current but not in -release that could cause this?
>
> My -release is built from June 26 sources - I'm updating now just in case.
> I've got a crash dump, so if there is anything I can do to help track this
> down - let me know. It's happened twice now in the last 12 hours, so I'm
> sure it'll happen again...
>
> Thanks,
> Rick