Re: Ramblings about sparc64 pmap locking
On Mon, Mar 22, 2010 at 10:35:22AM +1100, matthew green wrote:
> > Concurrent access purely via activity on the V->P side is unlikely to
> > happen due to the vm_map lock being held write-locked during these types
> > of operations (however I'm not 100% sure on that *). Anyway, these
> > activities all deal with managed mappings, so there are data structures to
> > manage in the pmap, and interference between P->V and V->P activity, so
> > the pmap_lock is taken for most of the interfaces. This stuff will only
> > ever happen in process context (even kthread, proc0) but never from
> > interrupt level or a softint. So this is why pmap_lock is at IPL_NONE.
> >
> > * pmap_extract() is likely to be a renegade

> can you expand on this point at all?  i'm trying to understand what
> pmap_extract() does that needs a lock.  right now (except for DEBUG)
> all it does is a single pseg_get(), and that should run without a lock
> (and currently it is locked via pseg_lock, which i'm still not entirely
> sure how it helps avoid "impossible" pseg_set() failures.)

I can envision places where we might loop over a range and do pmap_extract()
to see if anything needs doing. It's just a hunch and I may well be wrong.
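
As a concrete example of the kind of caller I have in mind - the helper
and the range are made up, only pmap_extract() is the real interface - a
walk over a VA range that uses pmap_extract() to decide which pages need
any work at all:

#include <sys/param.h>
#include <uvm/uvm_extern.h>

static void
scan_range(struct pmap *pm, vaddr_t sva, vaddr_t eva)
{
	vaddr_t va;
	paddr_t pa;

	for (va = sva; va < eva; va += PAGE_SIZE) {
		if (!pmap_extract(pm, va, &pa))
			continue;	/* nothing mapped here; skip it */
		/* ... inspect or update the mapping at (va, pa) ... */
	}
}

Each pmap_extract() call in a loop like that is only a snapshot unless
something else keeps V->P teardown out of the way, which is really the
worry.
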
> my testing so far has shown that it doesn't need to take pmap_lock,
> but your comments above give me pause.

The pseg_get() thing I'm not sure about, since I don't know the data
structures that sparc64 uses to manage mappings on the hardware side of
things. As an example of somewhere it's not safe to be unlocked as-is,
take x86. :-) There, the hardware data structures can be torn down by,
say, pmap_page_remove() while we're inspecting them with pmap_extract()
(interference between V->P and P->V ops), so in that particular case an
unlocked pmap_extract() is unsafe. pmap_collect() also had the potential
to be problematic, but I don't know if that even exists any more.
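
To make that concrete: if the sparc64 lookup did turn out to need the same
treatment, I'd expect it to end up shaped roughly like this. This is only
a sketch - I'm assuming pseg_get() hands back the TTE data as an int64_t
with 0 meaning "no mapping", and that pseg_lock is (or becomes) a
kmutex_t; the function name is made up:

#include <sys/param.h>
#include <sys/mutex.h>

extern kmutex_t pseg_lock;			/* assumption about the lock's type */
int64_t pseg_get(struct pmap *, vaddr_t);	/* assumed prototype */

static bool
pseg_lookup_serialized(struct pmap *pm, vaddr_t va, int64_t *datap)
{
	mutex_enter(&pseg_lock);	/* keep pseg_set()/teardown out of the window */
	*datap = pseg_get(pm, va);
	mutex_exit(&pseg_lock);

	return *datap != 0;
}

Whether that serialization actually buys anything on sparc64 is exactly
your question; on x86 the equivalent walk does need it.
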
> > 3) Looking at sparc64 pmap_activate(), it seems the lock is needed for
> > context management. I'm told the management of said is per-CPU, so is
> > there any reason we need a lock at all?

> it's not 100% per-cpu.  allocate is, but frees can come from anywhere.
> i've replaced these usages of pmap_lock with a new pmap_ctx_lock at
> IPL_VM, and reverted pmap_lock back to IPL_NONE.  that seems to work
> just great.

Just throwing out an idea because I don't understand the workings, but if
what you say is true, could the lock be per-CPU and taken from a "foreign"
CPU only on free? That would give pretty good concurrency, scaling as the
number of CPUs increases, and would avoid cache effects if the locks are
padded out so they don't share cache lines - roughly the sketch below.
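
Something like this is what I mean - every name here is invented for
illustration, and in reality the lock would more likely live in struct
cpu_info with the array size coming from the real CPU count:

#include <sys/param.h>
#include <sys/mutex.h>
#include <sys/intr.h>

#define PMAP_CTX_NCPUS	64	/* placeholder for the real CPU limit */

/*
 * One context lock per CPU, padded so the locks sit in separate cache
 * lines.  The owning CPU takes its own lock to allocate a context; a
 * remote CPU takes another CPU's lock only to free a context owned by
 * that CPU.
 */
struct pmap_ctx_cpu {
	kmutex_t	pcc_lock;
	/* per-CPU context allocation state would follow */
} __aligned(COHERENCY_UNIT);

static struct pmap_ctx_cpu pmap_ctx_cpu[PMAP_CTX_NCPUS];

static void
pmap_ctx_percpu_init(void)
{
	int i;

	for (i = 0; i < PMAP_CTX_NCPUS; i++) {
		/*
		 * IPL_VM to match your current pmap_ctx_lock; see below
		 * for whether IPL_SCHED would be a better fit.
		 */
		mutex_init(&pmap_ctx_cpu[i].pcc_lock, MUTEX_DEFAULT, IPL_VM);
	}
}
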
One last thing - IPL_VM seems wrong for the lock. I know it's a component
of the VM system, but it'll get taken at IPL_SCHED while we're switching.
The existing IPL will be higher than the IPL of the lock being taken. That
in itself won't be a problem, since mutex_spin_enter() handles that case OK.
The inversion (IPL_VM vs IPL_SCHED) just seems wrong, although I can't point
to how it would cause a deadlock in this situation. :-)
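
Concretely, the only change I'd suggest is at initialisation time (again
just a sketch; the init function name and the declaration are made up to
stand in for whatever you have now):

#include <sys/param.h>
#include <sys/mutex.h>
#include <sys/intr.h>

static kmutex_t pmap_ctx_lock;	/* your new lock, as I understand it */

static void
pmap_ctx_lock_init(void)
{
	/*
	 * IPL_SCHED rather than IPL_VM: pmap_activate() gets called while
	 * we're switching, at IPL_SCHED, so initialising the spin mutex at
	 * that level makes the lock's IPL match the highest context it is
	 * taken from and removes the apparent inversion.
	 */
	mutex_init(&pmap_ctx_lock, MUTEX_DEFAULT, IPL_SCHED);
}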