tech-kern archive


Patch: reorg UVM object locking to make pmap concurrency simpler



Although significantly faster than 4.0, exit() is still quite slow in
-current. Most of the time is spent in the pmap, traversing VA space and
tearing down mappings. Most of the time spent in the x86 pmap on an MP
system is eaten by the high cost of atomic instructions used to do locking,
and a large source of this locking is the per-page locks in the x86 pmap.

Background: the pmap must work in two directions, pmap->page and
page->pmap (see the pmap interface). Implementing fine grained locking for
this is very difficult, as locks must be taken in a well defined order
which cannot be reversed at will.

The x86 pmap does a very good job of operating concurrently by making use of
per-page locks and atomic, retryable updates to the hardware's paging
structures. This is computationally very expensive due to locking overhead,
but on an MP system is still a massive win over maintaining a global pmap
lock or the old scheme of reader-writer locking between pmap->page and
page->pmap operations.

So, one of the requirements for this fine grained system is per-page locks,
which are used to protect tracking of P->V mappings (which pmaps have which
pages mapped). It turns out that these locks can be easily eliminated,
because in nearly every place that the pmap is called for a particular page,
we already hold a mutex on the object that the page is associated with. By
ensuring that an object's lock is always held when calling into the pmap, we
can eliminate the per-page locks because we can be certain that no other
thread will try to operate on the same set of pages at the same time.
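The invariant can be illustrated with a small userland sketch. Pthread
mutexes stand in for kernel mutexes, and the structure layouts plus
page_locked_p() are invented for illustration (the real check in the patch
is uvm_page_locked_p(), used from KASSERTs in the pmap):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Invented, simplified stand-ins for the real UVM structures. */
struct uvm_object {
	pthread_mutex_t vmobjlock;	/* protects all pages of the object */
};

struct vm_page {
	struct uvm_object *uobject;	/* owning object, or NULL */
	/* note: no per-page lock any more */
};

/*
 * Debug check in the spirit of uvm_page_locked_p(): the pmap may assert
 * that the caller holds the owning object's lock.  Modelled here with
 * trylock on a non-recursive mutex, which only proves that *somebody*
 * holds the lock -- good enough for a KASSERT-style sanity check.
 */
static bool
page_locked_p(struct vm_page *pg)
{
	if (pg->uobject == NULL)
		return true;	/* no owner: nothing to assert */
	if (pthread_mutex_trylock(&pg->uobject->vmobjlock) == 0) {
		/* The lock was free, so the caller did not hold it. */
		pthread_mutex_unlock(&pg->uobject->vmobjlock);
		return false;
	}
	return true;
}
```

A pmap entry point would then open with an assertion like this instead of
taking and dropping a per-page lock.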

The below patch:

- Acquires VM object locks when calling into the pmap, where they were not
  already taken.

- Modifies amap and anon locking such that amaps and anons share locks.  If
  you lock an amap, you lock its anons as a side effect. If you lock an
  anon, you lock any amaps it is associated with, as a side effect.
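One way to realize such lock sharing is a reference-counted lock object
that both structures point at. Everything below (struct shared_lock,
anon_share_lock(), the field names) is invented for this sketch, not the
patch's actual implementation:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Invented sketch of a lock shared between an amap and its anons. */
struct shared_lock {
	pthread_mutex_t mtx;
	int refcnt;		/* number of structures pointing here */
};

struct vm_anon {
	struct shared_lock *an_lock;	/* == owning amap's am_lock */
};

struct vm_amap {
	struct shared_lock *am_lock;
};

static struct shared_lock *
shared_lock_create(void)
{
	struct shared_lock *l = malloc(sizeof(*l));

	pthread_mutex_init(&l->mtx, NULL);
	l->refcnt = 1;
	return l;
}

/*
 * Make the anon adopt the amap's lock: from then on, taking the amap's
 * mutex also covers the anon, and vice versa.
 */
static void
anon_share_lock(struct vm_anon *an, struct vm_amap *am)
{
	am->am_lock->refcnt++;
	an->an_lock = am->am_lock;
}
```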

Notes:

- Some objects provide pages that they do not directly manage. For example,
  tmpfs vnodes source pages from a uao. Changes are needed so that
  uvm_objects can share locks in order to handle this case, but the patch
  does not do this. Additional benefits are that highly-contended vnodes
  get an external lock, and we can share vnode locks with other structures
  (for example, name cache entries) in order to reduce the number of locks
  needed in critical paths (for example, cache_enter()).

- The patch does not add code for arbitrary managed mappings, for example
  those made via pmap_enter() calls resulting from a write to /dev/mem.

- There are a few oddball cases that need to be looked at. For example,
  the NFS putpages code seems to write pages not associated with the
  vnode. These can be found with KASSERT(uvm_page_locked_p(foo)) in the
  pmap module.

- pmap_collect() needs re-implementing as a largely MI function. The MI
  function would traverse the vm_map and remove all mappings for non-wired
  pages, acquiring the needed locks as it goes. The existing pmap_collect()
  call would be converted to do "whatever else needs doing" in the pmap
  module. On x86 this means it would do nothing.
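  A rough sketch of that MI traversal, with all structure layouts invented
  and a recording stub standing in for pmap_remove() (the real code would
  walk the map's entry list or tree, and would take the backing object's
  lock before each pmap call, per the rest of this patch):

```c
#include <assert.h>
#include <stddef.h>

/* Invented, minimal stand-ins for the map structures. */
struct vm_map_entry {
	unsigned long start, end;	/* VA range of the entry */
	int wired_count;		/* > 0: mapping must stay */
	struct vm_map_entry *next;
};

struct vm_map {
	struct vm_map_entry *entries;
};

/* Bookkeeping so the sketch can be checked: ranges torn down. */
static unsigned long removed_start[8], removed_end[8];
static int nremoved;

/* Recording stand-in for pmap_remove(pmap, sva, eva). */
static void
fake_pmap_remove(unsigned long sva, unsigned long eva)
{
	removed_start[nremoved] = sva;
	removed_end[nremoved] = eva;
	nremoved++;
}

/*
 * Sketch of an MI pmap_collect(): walk the map and tear down the
 * mappings of every non-wired entry, leaving wired entries alone.
 */
static void
mi_pmap_collect(struct vm_map *map)
{
	struct vm_map_entry *e;

	for (e = map->entries; e != NULL; e = e->next) {
		if (e->wired_count > 0)
			continue;	/* wired: leave it mapped */
		fake_pmap_remove(e->start, e->end);
	}
}
```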

- With the patch and the changes noted above, it would be much simpler
  to make a pmap module fully concurrent. This would be of benefit to
  architectures like sparc64 that currently use a global lock around the
  pmap ('code path' locking).

Here's the patch:

        http://www.netbsd.org/~ad/cleanout/uvm.diff

Comments?

Thanks,
Andrew

