Source-Changes-HG archive


[src/trunk]: src/sys/kern Allow only one pending call to a pool's backing all...



details:   https://anonhg.NetBSD.org/src/rev/395fb6716c3e
branches:  trunk
changeset: 827443:395fb6716c3e
user:      riastradh <riastradh%NetBSD.org@localhost>
date:      Sat Oct 28 17:06:43 2017 +0000

description:
Allow only one pending call to a pool's backing allocator at a time.

Candidate fix for problems with hanging after kva fragmentation related
to PR kern/45718.

Proposed on tech-kern:

https://mail-index.NetBSD.org/tech-kern/2017/10/23/msg022472.html

Tested by bouyer@ on i386.

This makes one small change to the semantics of pool_prime and
pool_setlowat: they may now fail with EWOULDBLOCK instead of ENOMEM
if there is a pending call to the backing allocator in another thread
but we are not actually out of memory.  That is unlikely in practice,
because these routines are nearly always used during initialization,
when the pool is not yet in use.

XXX pullup-8
XXX pullup-7
XXX pullup-6 (requires tweaking the patch)
XXX pullup-5...
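Since the grow path can now return ERESTART after a wait, callers are
expected to retry from the top rather than treat it as failure.  A
self-contained sketch of that caller-side loop; grow_stub is a
hypothetical stand-in, not the kernel's code, which reports ERESTART
once (another thread finished the grow while we waited) and then
succeeds.

```c
#include <errno.h>

/*
 * Hypothetical stand-in for the grow path: returns ERESTART on the
 * first call, 0 afterwards.
 */
static int calls;

static int
grow_stub(void)
{
	return (calls++ == 0) ? ERESTART : 0;
}

/*
 * Caller retries from the top on ERESTART, re-checking its free
 * lists before growing again, as the allocation path must after
 * this change.
 */
static int
alloc_with_retry(void)
{
	int error;

	do {
		/* ... re-check free lists here before growing ... */
		error = grow_stub();
	} while (error == ERESTART);
	return error;
}
```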

diffstat:

 sys/kern/subr_pool.c |  37 +++++++++++++++++++++++++++++++++----
 1 files changed, 33 insertions(+), 4 deletions(-)

diffs (70 lines):

diff -r 84ef446810f7 -r 395fb6716c3e sys/kern/subr_pool.c
--- a/sys/kern/subr_pool.c      Sat Oct 28 16:09:14 2017 +0000
+++ b/sys/kern/subr_pool.c      Sat Oct 28 17:06:43 2017 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: subr_pool.c,v 1.208 2017/06/08 04:00:01 chs Exp $      */
+/*     $NetBSD: subr_pool.c,v 1.209 2017/10/28 17:06:43 riastradh Exp $        */
 
 /*-
  * Copyright (c) 1997, 1999, 2000, 2002, 2007, 2008, 2010, 2014, 2015
@@ -33,7 +33,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: subr_pool.c,v 1.208 2017/06/08 04:00:01 chs Exp $");
+__KERNEL_RCSID(0, "$NetBSD: subr_pool.c,v 1.209 2017/10/28 17:06:43 riastradh Exp $");
 
 #ifdef _KERNEL_OPT
 #include "opt_ddb.h"
@@ -1051,6 +1051,23 @@
 {
        struct pool_item_header *ph = NULL;
        char *cp;
+       int error;
+
+       /*
+        * If there's a pool_grow in progress, wait for it to complete
+        * and try again from the top.
+        */
+       if (pp->pr_flags & PR_GROWING) {
+               if (flags & PR_WAITOK) {
+                       do {
+                               cv_wait(&pp->pr_cv, &pp->pr_lock);
+                       } while (pp->pr_flags & PR_GROWING);
+                       return ERESTART;
+               } else {
+                       return EWOULDBLOCK;
+               }
+       }
+       pp->pr_flags |= PR_GROWING;
 
        mutex_exit(&pp->pr_lock);
        cp = pool_allocator_alloc(pp, flags);
@@ -1062,13 +1079,25 @@
                        pool_allocator_free(pp, cp);
                }
                mutex_enter(&pp->pr_lock);
-               return ENOMEM;
+               error = ENOMEM;
+               goto out;
        }
 
        mutex_enter(&pp->pr_lock);
        pool_prime_page(pp, cp, ph);
        pp->pr_npagealloc++;
-       return 0;
+       error = 0;
+
+out:
+       /*
+        * If anyone was waiting for pool_grow, notify them that we
+        * may have just done it.
+        */
+       KASSERT(pp->pr_flags & PR_GROWING);
+       pp->pr_flags &= ~PR_GROWING;
+       cv_broadcast(&pp->pr_cv);
+
+       return error;
 }
 
 /*


