NetBSD-Bugs archive
Re: kern/58666: panic: lock error: Reader / writer lock: rw_vector_enter,357: locking against myself
The following reply was made to PR kern/58666; it has been noted by GNATS.
From: Havard Eidnes <he%NetBSD.org@localhost>
To: riastradh%NetBSD.org@localhost
Cc: gnats-bugs%NetBSD.org@localhost, netbsd-bugs%NetBSD.org@localhost, chs%NetBSD.org@localhost
Subject: Re: kern/58666: panic: lock error: Reader / writer lock: rw_vector_enter,357: locking against myself
Date: Sun, 08 Sep 2024 19:48:41 +0200 (CEST)
>> 5. RAM is currently short so uvm_km_alloc would have to sleep and wait
>> for the pagedaemon to free pages before it can return, in which
>> case it returns null instead of sleeping because pmap_pdp_alloc
>> didn't pass UVM_KMF_WAITVA.
>
> Correction: kernel virtual address space, not RAM, is short, so
> uvm_km_alloc would have to sleep and wait for the pagedaemon to find
> some kva pages to free. By passing UVM_KMF_WAITVA, pmap_pdp_alloc
> would wait for kva instead of returning null.
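 If I understand the correction right, the fix would amount to something
 like the following in pmap_pdp_alloc() (a sketch based on my reading of
 the uvm_km(9) and pool(9) interfaces; the actual code in
 sys/arch/x86/x86/pmap.c may well differ in its surrounding context):

```c
/*
 * Sketch only: how pmap_pdp_alloc() might pass UVM_KMF_WAITVA so that
 * uvm_km_alloc() sleeps waiting for kernel virtual address space
 * instead of returning 0 when KVA is short.  The exact map, size and
 * flags here are assumptions, not copied from the tree.
 */
#include <sys/pool.h>
#include <uvm/uvm_extern.h>

static void *
pmap_pdp_alloc(struct pool *pp, int flags)
{
	/*
	 * If the caller can sleep (PR_WAITOK), wait for KVA with
	 * UVM_KMF_WAITVA; otherwise fail fast with UVM_KMF_NOWAIT.
	 */
	return (void *)uvm_km_alloc(kernel_map, PAGE_SIZE, 0,
	    UVM_KMF_WIRED | UVM_KMF_ZERO |
	    ((flags & PR_WAITOK) ? UVM_KMF_WAITVA : UVM_KMF_NOWAIT));
}
```

 With that, a PR_WAITOK allocation would block until the pagedaemon
 frees some KVA rather than tripping the null-return path.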
I was going to say something like "probably not short of RAM".
So this turns into: What can I as a system manager (and we as a
project) do to
a) monitor the state of KVA
b) reduce the KVA pressure on the i386 port (or better manage KVA
pressure situations)
For a), I suspect we don't really have anything that can be
readily used(?)
For b), I recall having reduced maxvnodes on my trusty old
i386-running T60, but I'm not sure I've done that on this
particular host, so that's at least something to try. With 9.3
that was never necessary, even though the host has seen its fair
share of abuse from "-j 3" rust builds as part of testing /
validation, and also firefox builds with the same -j 3 setting.
However, it would be nice if the default parameter choices were
not so prone to causing a wedge.
Hmm, is the default maxvnodes choice based solely on "available
RAM"? As you may recall, this host has 10G of physical memory,
but a much smaller kernel virtual address space.
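For reference, the current value can be inspected with
"sysctl kern.maxvnodes" (and lowered with sysctl -w). Programmatically
it would be something like this (a NetBSD-specific sketch using
sysctlbyname(3); I haven't compiled this exact program):

```c
/*
 * Sketch: read the current kern.maxvnodes value via sysctlbyname(3).
 * NetBSD-specific; only builds where <sys/sysctl.h> provides
 * sysctlbyname().
 */
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int maxvnodes;
	size_t len = sizeof(maxvnodes);

	if (sysctlbyname("kern.maxvnodes", &maxvnodes, &len, NULL, 0) == -1)
		err(1, "sysctlbyname");
	printf("kern.maxvnodes = %d\n", maxvnodes);
	return 0;
}
```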
It also seems to me that the pagedaemon isn't managing to
actually do anything to remedy the situation when KVA either
becomes fragmented or comes under pressure. Is there something
we can do (to the code / KVA usage / KVA management) to prevent
this from turning into a wedge / hang situation, too easily
triggered with the default parameter choices?
Best regards,
- Håvard