Source-Changes-HG archive


[src/trunk]: src/sys/kern As pool reclaiming is unlikely to happen at interru...



details:   https://anonhg.NetBSD.org/src/rev/8438c652382d
branches:  trunk
changeset: 779621:8438c652382d
user:      jym <jym%NetBSD.org@localhost>
date:      Tue Jun 05 22:28:11 2012 +0000

description:
As pool reclaiming is unlikely to happen in interrupt or softint
context, re-enable the portion of code that allows invalidation of
CPU-bound pool caches.

Two reasons:
- with CPU-cached objects also invalidated, the probability of fetching an
obsolete object from pool_cache(9) is greatly reduced. This speeds up
pool_cache_get() considerably, as it no longer has to keep destroying
objects until it finds an up-to-date one while an invalidation is in
progress.

- for situations where we have to ensure that no obsolete object remains
after a state transition (canonical example: pmap mappings across a Xen VM
restore), invalidating all pool_cache(9)s is the safest way to go; see the
caller-side sketch below.
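
For illustration, here is a minimal caller-side sketch of that second case,
assuming a hypothetical cache named "mapping_cache" and a hypothetical
resume hook; pool_cache_invalidate() is the real interface, the surrounding
names are made up:

#include <sys/pool.h>

/*
 * Hypothetical cache of mapping objects, created elsewhere with
 * pool_cache_init(9).
 */
static pool_cache_t mapping_cache;

/* Hypothetical hook run after the state transition, in thread context. */
static void
example_resume_hook(void)
{
	/*
	 * Safe here: we are neither in interrupt nor softint context,
	 * so the xcall(9) broadcast performed by pool_cache_invalidate()
	 * may sleep without trouble.
	 */
	pool_cache_invalidate(mapping_cache);
}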

As pool_cache_invalidate() uses xcall(9) to broadcast the execution of
pool_cache_transfer(), it cannot be called from interrupt or softint
context (scheduling an xcall(9) can put an LWP to sleep).

Rename pool_cache_xcall() to pool_cache_transfer() to reflect its use.

Invalidation being a costly process (thousands of objects may be
destroyed), all places where pool_cache_invalidate() may be called from
interrupt/softint context will now be caught by the new KASSERT() and
will have to be fixed. Ping me when you see one.
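
A small hedged sketch of the guard a caller can apply when it is not sure
of its own context; the helper name is hypothetical, while cpu_intr_p(),
cpu_softintr_p() and pool_cache_invalidate() are the existing kernel
interfaces:

#include <sys/types.h>
#include <sys/cpu.h>
#include <sys/pool.h>

/*
 * Hypothetical helper: invalidate only when the context allows it,
 * otherwise report failure so the caller can defer the work to a thread.
 */
static bool
example_try_invalidate(pool_cache_t pc)
{
	if (cpu_intr_p() || cpu_softintr_p())
		return false;	/* calling here would trip the KASSERT() */
	pool_cache_invalidate(pc);
	return true;
}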

Tested under i386 and amd64 by running the ATF suite within 64MiB HVM
domains (tried triggering the pgdaemon a few times).

No objection on tech-kern@.

XXX a similar fix has to be pulled up to NetBSD-6, but with a more
conservative approach.

See http://mail-index.netbsd.org/tech-kern/2012/05/29/msg013245.html

diffstat:

 sys/kern/subr_pool.c |  38 +++++++++++++++++++++++---------------
 1 files changed, 23 insertions(+), 15 deletions(-)

diffs (101 lines):

diff -r 151df54bf3b3 -r 8438c652382d sys/kern/subr_pool.c
--- a/sys/kern/subr_pool.c      Tue Jun 05 20:51:36 2012 +0000
+++ b/sys/kern/subr_pool.c      Tue Jun 05 22:28:11 2012 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: subr_pool.c,v 1.195 2012/05/05 19:15:10 rmind Exp $    */
+/*     $NetBSD: subr_pool.c,v 1.196 2012/06/05 22:28:11 jym Exp $      */
 
 /*-
  * Copyright (c) 1997, 1999, 2000, 2002, 2007, 2008, 2010
@@ -32,7 +32,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: subr_pool.c,v 1.195 2012/05/05 19:15:10 rmind Exp $");
+__KERNEL_RCSID(0, "$NetBSD: subr_pool.c,v 1.196 2012/06/05 22:28:11 jym Exp $");
 
 #include "opt_ddb.h"
 #include "opt_lockdebug.h"
@@ -191,7 +191,7 @@
 static void    pool_cache_cpu_init1(struct cpu_info *, pool_cache_t);
 static void    pool_cache_invalidate_groups(pool_cache_t, pcg_t *);
 static void    pool_cache_invalidate_cpu(pool_cache_t, u_int);
-static void    pool_cache_xcall(pool_cache_t);
+static void    pool_cache_transfer(pool_cache_t);
 
 static int     pool_catchup(struct pool *);
 static void    pool_prime_page(struct pool *, void *,
@@ -1425,7 +1425,7 @@
        /* If there is a pool_cache, drain CPU level caches. */
        *ppp = pp;
        if (pp->pr_cache != NULL) {
-               *wp = xc_broadcast(0, (xcfunc_t)pool_cache_xcall,
+               *wp = xc_broadcast(0, (xcfunc_t)pool_cache_transfer,
                    pp->pr_cache, NULL);
        }
 }
@@ -2007,31 +2007,39 @@
  *     Note: For pool caches that provide constructed objects, there
  *     is an assumption that another level of synchronization is occurring
  *     between the input to the constructor and the cache invalidation.
+ *
+ *     Invalidation is a costly process and should not be called from
+ *     interrupt context.
  */
 void
 pool_cache_invalidate(pool_cache_t pc)
 {
+       uint64_t where;
        pcg_t *full, *empty, *part;
-#if 0
-       uint64_t where;
+
+       KASSERT(!cpu_intr_p() && !cpu_softintr_p());
 
        if (ncpu < 2 || !mp_online) {
                /*
                 * We might be called early enough in the boot process
                 * for the CPU data structures to not be fully initialized.
-                * In this case, simply gather the local CPU's cache now
-                * since it will be the only one running.
+                * In this case, transfer the content of the local CPU's
+                * cache back into global cache as only this CPU is currently
+                * running.
                 */
-               pool_cache_xcall(pc);
+               pool_cache_transfer(pc);
        } else {
                /*
-                * Gather all of the CPU-specific caches into the
-                * global cache.
+                * Signal all CPUs that they must transfer their local
+                * cache back to the global pool then wait for the xcall to
+                * complete.
                 */
-               where = xc_broadcast(0, (xcfunc_t)pool_cache_xcall, pc, NULL);
+               where = xc_broadcast(0, (xcfunc_t)pool_cache_transfer,
+                   pc, NULL);
                xc_wait(where);
        }
-#endif
+
+       /* Empty pool caches, then invalidate objects */
        mutex_enter(&pc->pc_lock);
        full = pc->pc_fullgroups;
        empty = pc->pc_emptygroups;
@@ -2415,13 +2423,13 @@
 }
 
 /*
- * pool_cache_xcall:
+ * pool_cache_transfer:
  *
  *     Transfer objects from the per-CPU cache to the global cache.
  *     Run within a cross-call thread.
  */
 static void
-pool_cache_xcall(pool_cache_t pc)
+pool_cache_transfer(pool_cache_t pc)
 {
        pool_cache_cpu_t *cc;
        pcg_t *prev, *cur, **list;


