Re: pool_cache_invalidate(9) (wrong?) semantic -- enable xcall invalidation
On Wed, 9 Nov 2011 08:57:08 -0500, Thor Lancelot Simon wrote:
On Wed, Nov 09, 2011 at 11:49:10AM +0100, Jean-Yves Migeon wrote:
This is the idea. One shortcoming, though: the pool_cache(9) contents
are not synchronously invalidated; a CPU's cached objects are only
depleted the next time that CPU calls pool_cache_get() for a new object.
From an API perspective, is that acceptable for everyone? In my case,
I will have to xcall(9) a pool_cache_get() on each CPU to force-deplete
the pools, because of a Xen shortcoming: the hypervisor tracks page
types and will raise an error when pages are not freed at suspension.
I guess I don't quite understand what's happening on suspension. You
are talking about suspending the Xen VM? Or taking a CPU offline?
When a page is allocated from the pmap_pdp_cache, it gets
constructed and pinned as an L2 page. The type associated with a page
is only destroyed in the dtor.
Upon suspend, Xen saves the state of all pages. When resuming, the
hypervisor deduces each page's type from its contents.
Unfortunately, it handles our recursive mappings poorly and tries to pin
as L1 a page already pinned as L2. This leads to an instant kill of the domU.
To avoid this, I remove all recursive mappings from the pmaps on suspend
and reinstate them on resume. However, pmaps "cached" in the
pool_cache(9) are only known to the pool internals, so I cannot fix them
directly; the only way is to invalidate/drain the pool to force the dtor
calls. Hence, if I use a per-CPU serial flag, the pool will only get
drained upon the next pool_cache_get() on that CPU. So I have to
xcall(9) a _get() to effectively destroy the per-CPU cached items.
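To make the proposal concrete, here is a minimal sketch of the
force-depletion step, written against the xcall(9) and pool_cache(9)
interfaces. It assumes the per-CPU-serial invalidation scheme discussed
above (a pool_cache_get() after invalidation flushes that CPU's cached
objects and runs their dtors); deplete_pc() and force_deplete() are
hypothetical names, and this is kernel code, not a drop-in
implementation:

```c
#include <sys/pool.h>
#include <sys/xcall.h>

/*
 * Run on every CPU via xcall(9): a get/put cycle after the cache
 * has been invalidated forces this CPU's stale cached objects to
 * be released and their dtors called (assumed semantics).
 */
static void
deplete_pc(void *arg1, void *arg2 /* unused */)
{
	pool_cache_t pc = arg1;
	void *obj;

	obj = pool_cache_get(pc, PR_NOWAIT);
	if (obj != NULL)
		pool_cache_put(pc, obj);
}

/*
 * Hypothetical helper: synchronously deplete the per-CPU caches,
 * e.g. before a Xen suspend.
 */
static void
force_deplete(pool_cache_t pc)
{
	uint64_t where;

	pool_cache_invalidate(pc);	/* mark cached contents stale */
	where = xc_broadcast(0, deplete_pc, pc, NULL);
	xc_wait(where);			/* every CPU has run deplete_pc() */
}
```

xc_wait() is what gives the synchronous guarantee the plain
invalidation lacks: once it returns, no CPU still holds a constructed
pmap in its local cache.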