Subject: Re: deadlock with sched_lock in SA code
To: Eric Haszlakiewicz <email@example.com>
From: Chuck Silvers <firstname.lastname@example.org>
Date: 08/29/2005 09:53:09
On Sun, Aug 28, 2005 at 10:50:17PM -0500, Eric Haszlakiewicz wrote:
> On Sun, Aug 28, 2005 at 10:35:04PM -0400, Nathan J. Williams wrote:
> > YAMAMOTO Takashi <email@example.com> writes:
> > > > allocating pages from UVM can call wakeup(), so we must avoid that
> > > > while holding sched_lock. one way to do this would be to call
> > > > sadata_upcall_alloc() before acquiring sched_lock and passing the
> > > > resulting pointer to sa_switch(), instead of calling that in
> > > > sa_switch() itself. does anyone have any better suggestions?
> > > > if not, I'll fix it that way.
> > > >
> > > > -Chuck
> > >
> > > i thought there was an effort to allocate upcall data on the new lwp's stack.
> > There was. I still have patches for that in one of my development
> > trees. It kind of stalled out when I reached sparc/sparc64 and tried
> > to wrap my head around how those stacks are handled.
> I think it'd still be possible to run into the same problem: we
> might need to allocate a page for the lwp's stack, and uvm_pagealloc
> would kick the pagedaemon when memory is low.
> The new lwp we get from the cache isn't guaranteed to already have
> its stack page(s) allocated, is it?
from a brief glance, it looks like we create new cached SA LWPs with
kernel stacks allocated, but I'm not sure that the stack couldn't be freed
at some point later.
it would seem a bit silly to require that cached LWPs always have kernel
stacks allocated when we allow in-use LWPs to have their stacks
swapped out and freed. if we're that concerned about allowing kernel stack
memory to be reused, then we shouldn't exempt cached LWPs. on the other hand,
if we don't care so much about reusing that memory, then it would be good
to remove all the code that implements swap-out of kernel stacks.