tech-kern archive


Re: pool_cache_invalidate(9) (wrong?) semantic -- enable xcall invalidation

On 18.11.2011 00:16, Mindaugas Rasiukevicius wrote:
Jean-Yves Migeon <> wrote:
On 25.09.2011 01:50, Jean-Yves Migeon wrote:
I would like to turn back on the xcall(9) block found in
pool_cache_invalidate(), now that rmind@ has implemented high priority
cross calls [1].

FWIW, I am still struggling with this, without really having an idea on
how to fix this once and for all.

I have only investigated two solutions, and each has its own share of problems. Please let me know if you see other possibilities:

1 # adding an "invalidate flag" to the pool cache, for each CPU.<...>

It is not really desirable to have extra (even if very small) overhead in
pool_cache_get() for a case which is particularly rare.


The real problem is the synchronisation in interrupt context.  How about looking for a solution which avoids interrupt context?

I am still looking into it, but have noticed a few shortcomings; how to fix them largely depends on what is acceptable to modify to make this work.

The problem is the following:
- pool_cache_invalidate() destroys all objects currently cached in a pool_cache(9). The current implementation does this only for the global cache, which is by far the easiest part to handle, because it is protected by a mutex.

- per-CPU caches can only be invalidated by the CPU that "owns" them (there is no interlock, to keep overhead very low). This means that we have to schedule an xcall for the invalidation.

Even a high priority xcall(9) cannot be issued from interrupt (or softint(9)) context, because of the serialization that happens around xc_high_pri (required to pass arguments down to the xcall). Only one xcall(9) can be submitted at a time, hence the condvar(9) sleep implemented in xc_highpri().

From my PoV, there are multiple solutions, although I am unable to estimate the level of effort each one requires:

- avoid all sorts of pool_cache_invalidate(9) calls in interrupt or softint context. Dunno if that is acceptable, as pool_cache_invalidate(9) gets used in tricky places, especially for pool reclaims. I fear that these get used in interrupt context, e.g. in the networking stack or in pmap_growkernel().

- force all xcall(9) API consumers to pass dynamically allocated arguments, a bit like workqueue(9) enqueues work items. Scheduling of xcalls would then be managed through a SIMPLEQ() of requests.

- extend the softint(9) API so that we can pass arguments to it, as well as the targeted CPU(s) (as an optional argument).

The last two points make me think that the softint(9), workqueue(9) and xcall(9) APIs have some potential for unification; they are all somewhat redundant, as each one schedules/signals/dispatches work to other threads, albeit under different conditions.

Jean-Yves Migeon
