NetBSD-Bugs archive


PR/45718 CVS commit: src/sys/kern



The following reply was made to PR kern/45718; it has been noted by GNATS.

From: "Taylor R Campbell" <riastradh%netbsd.org@localhost>
To: gnats-bugs%gnats.NetBSD.org@localhost
Cc: 
Subject: PR/45718 CVS commit: src/sys/kern
Date: Fri, 20 Oct 2017 14:48:43 +0000

 Module Name:	src
 Committed By:	riastradh
 Date:		Fri Oct 20 14:48:43 UTC 2017
 
 Modified Files:
 	src/sys/kern: kern_exec.c
 
 Log Message:
 Carve out KVA for execargs on boot from an exec_map like we used to.
 
 Candidate fix for PR kern/45718: `processes sometimes get stuck and
 spin in vm_map', a problem that has been plaguing all our 32-bit
 ports for years.
 
 Since we currently use large (256k) buffers for execargs, and since
 nobody has stepped up to tackle breaking them into bite-sized (or at
 least page-sized) chunks, after KVA gets sufficiently fragmented we
 can't allocate new execargs buffers from kernel_map.
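 
 As a hypothetical illustration (not the pre-fix code itself) of
 what each allocation amounts to, the wrapper below is invented,
 while uvm_km_alloc, kernel_map, and the UVM_KMF_* flags are real:
 
 #include <sys/param.h>
 #include <uvm/uvm_extern.h>
 
 /*
  * One execargs buffer is one contiguous NCARGS-sized (256k)
  * allocation of pageable KVA from the shared kernel_map.  Once
  * kernel_map is fragmented, no contiguous 256k range may exist,
  * and UVM_KMF_WAITVA makes the caller sleep until one appears --
  * possibly forever.
  */
 static void *
 execargs_alloc_fragile(void)
 {
 	return (void *)uvm_km_alloc(kernel_map, NCARGS, 0,
 	    UVM_KMF_PAGEABLE | UVM_KMF_WAITVA);
 }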
 
 Until 2008, we always carved out KVA for execargs on boot with a
 uvm submap exec_map of kernel_map.  Then ad@ found that the
 uvm_km_free call used to discard each buffer when done cost about
 100us, a cost that a pool avoided:
 
 https://mail-index.NetBSD.org/tech-kern/2008/06/25/msg001854.html
 https://mail-index.NetBSD.org/tech-kern/2008/06/26/msg001859.html
 
 ad@ _simultaneously_ introduced a pool _and_ eliminated the reserved
 KVA in the exec_map submap.  This change preserves the pool, but
 restores exec_map (with less code, by putting it in MI code instead
 of copying it in every MD initialization routine).
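 
 As a hedged sketch of the shape of the change (uvm_km_suballoc,
 pool_init, and the pool_allocator hooks are the real APIs; the
 function name, variable names, and sizing constant are
 illustrative assumptions, not necessarily the committed code):
 
 #include <sys/param.h>
 #include <sys/pool.h>
 #include <uvm/uvm_extern.h>
 
 static struct vm_map *exec_map;
 static struct pool exec_pool;
 
 /* Back exec_pool with NCARGS-sized pages carved from exec_map. */
 static void *
 exec_pool_alloc(struct pool *pp, int flags)
 {
 	return (void *)uvm_km_alloc(exec_map, NCARGS, 0,
 	    UVM_KMF_PAGEABLE | UVM_KMF_WAITVA);
 }
 
 static void
 exec_pool_free(struct pool *pp, void *addr)
 {
 	uvm_km_free(exec_map, (vaddr_t)addr, NCARGS, UVM_KMF_PAGEABLE);
 }
 
 static struct pool_allocator exec_pool_allocator = {
 	.pa_alloc = exec_pool_alloc,
 	.pa_free = exec_pool_free,
 	.pa_pagesz = NCARGS,
 };
 
 /* Illustrative boot-time setup, in MI code. */
 static void
 exec_map_init_sketch(void)
 {
 	vaddr_t vmin = 0, vmax;
 	u_int maxexecs = 16;	/* assumed sizing, for illustration */
 
 	/* Reserve KVA for maxexecs concurrent execargs buffers. */
 	exec_map = uvm_km_suballoc(kernel_map, &vmin, &vmax,
 	    maxexecs * NCARGS, VM_MAP_PAGEABLE, false, NULL);
 
 	pool_init(&exec_pool, NCARGS, 0, 0, PR_NOALIGN|PR_NOTOUCH,
 	    "execargs", &exec_pool_allocator, IPL_NONE);
 }
 
 Because the submap holds exactly maxexecs * NCARGS bytes of KVA
 reserved at boot, the pool's allocations never depend on how
 fragmented kernel_map later becomes.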
 
 Patch proposed on tech-kern:
 https://mail-index.NetBSD.org/tech-kern/2017/10/19/msg022461.html
 
 Patch tested by bouyer@:
 https://mail-index.NetBSD.org/tech-kern/2017/10/20/msg022465.html
 
 I previously discussed the issue on tech-kern before I knew of the
 history around exec_map:
 https://mail-index.NetBSD.org/tech-kern/2012/12/09/msg014695.html
 
 The candidate workaround I proposed of using pool_setlowat to force
 preallocation of KVA would also force preallocation of physical RAM,
 which is a waste not incurred by using exec_map, and which is part of
 why I never committed it.
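 
 For comparison, that rejected workaround would have looked roughly
 like this (pool_setlowat is the real API; the watermark value is
 an assumption for illustration):
 
 #include <sys/pool.h>
 
 extern struct pool exec_pool;
 
 #define EXEC_POOL_LOWAT	8	/* assumed value, for illustration */
 
 /*
  * Keep at least EXEC_POOL_LOWAT execargs buffers cached in the
  * pool at all times.  Raising the low watermark makes the pool
  * fill up to it immediately, so this pins physical RAM for idle
  * buffers as well as KVA -- the waste noted above.
  */
 static void
 execargs_prealloc_sketch(void)
 {
 	pool_setlowat(&exec_pool, EXEC_POOL_LOWAT);
 }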
 
 There may remain a general problem: if thread A calls pool_get
 and the pool tries to service that request with a uvm_km_alloc
 call that hangs because KVA is scarce, then a pool_put in thread B
 will not notify the pool_get in thread A that it no longer needs
 to wait for KVA, and so thread A may continue to hang in
 uvm_km_alloc (see the sketch after (b) below).  However,
 
 (a) That won't apply here, because there is exactly as much KVA
 available in exec_map as exec_pool will ever try to use.
 
 (b) It is possible that this may not even matter in other cases,
 as long as the page daemon eventually tries to shrink the pool,
 which will cause a uvm_km_free that can unhang the hung
 uvm_km_alloc.
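 
 For concreteness, the general hazard can be sketched as two
 routines (pool_get, pool_put, and PR_WAITOK are real; the
 functions and the pool involved are hypothetical):
 
 #include <sys/pool.h>
 
 /*
  * Thread A: with the pool empty, pool_get invokes the pool's
  * backing allocator, which may sleep in uvm_km_alloc(...,
  * UVM_KMF_WAITVA) until KVA is returned to the map -- not merely
  * to the pool.
  */
 static void *
 thread_a(struct pool *pp)
 {
 	return pool_get(pp, PR_WAITOK);
 }
 
 /*
  * Thread B: pool_put only caches the item on the pool's free
  * list; it does not call uvm_km_free, so the map-level KVA wait
  * in thread A is never satisfied.
  */
 static void
 thread_b(struct pool *pp, void *item)
 {
 	pool_put(pp, item);
 }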
 
 XXX pullup-8
 XXX pullup-7
 XXX pullup-6
 XXX pullup-5, perhaps...
 
 
 To generate a diff of this commit:
 cvs rdiff -u -r1.447 -r1.448 src/sys/kern/kern_exec.c
 
 Please note that diffs are not public domain; they are subject to the
 copyright notices on the relevant files.
 

