Source-Changes-HG archive


[src/netbsd-8]: src/sys/kern Pull up following revision(s) (requested by kami...



details:   https://anonhg.NetBSD.org/src/rev/ad2ecdb26b11
branches:  netbsd-8
changeset: 434808:ad2ecdb26b11
user:      martin <martin%NetBSD.org@localhost>
date:      Sun Apr 01 08:45:43 2018 +0000

description:
Pull up following revision(s) (requested by kamil in ticket #679):

        sys/kern/kern_proc.c: revision 1.211

Make sysctl_doeproc() more predictable

Swap the order of looking into the zombie and all-process lists, starting
now with the zombie one. This prevents a previously observed race in which
the same process could be detected on both lists during a single polling
call.
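
As a rough illustration of the traversal order, here is a minimal userland
sketch using the <sys/queue.h> LIST macros, with hypothetical "dead" and
"alive" lists standing in for zombproc and allproc; it shows only the
control-flow shape this revision adopts, not the kernel code itself:

#include <sys/queue.h>

#include <stdbool.h>
#include <stdio.h>

struct entry {
        int id;
        LIST_ENTRY(entry) link;
};

LIST_HEAD(elist, entry);

/*
 * Walk the "dead" list first and fall through to the "alive" list,
 * mirroring the zombproc-before-allproc order adopted by this revision.
 */
static void
walk_both(struct elist *dead, struct elist *alive)
{
        struct entry *e, *next;
        bool on_dead = true;            /* plays the role of mmmbrains */

        for (e = LIST_FIRST(dead);; e = next) {
                if (e == NULL) {
                        if (on_dead) {
                                e = LIST_FIRST(alive);
                                on_dead = false;
                        }
                        if (e == NULL)
                                break;
                }
                next = LIST_NEXT(e, link);
                printf("%s entry %d\n", on_dead ? "dead" : "alive", e->id);
        }
}

int
main(void)
{
        struct elist dead = LIST_HEAD_INITIALIZER(dead);
        struct elist alive = LIST_HEAD_INITIALIZER(alive);
        struct entry a = { .id = 1 }, b = { .id = 2 };

        LIST_INSERT_HEAD(&dead, &a, link);
        LIST_INSERT_HEAD(&alive, &b, link);
        walk_both(&dead, &alive);
        return 0;
}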

While there:
 - Short-circuit break for KERN_PROC_PID once a pid has been found.
 - Removal of the redundant "if (kbuf)" and "if (marker)" checks.
 - Updated comments on the potential optimization, explaining why we do not
   want to do it for now: the performance gain of a direct lookup over
   iterating the list is negligible on a regular system.
 - Return ESRCH when no results have been found. This makes it easier to
   implement a retry-or-abandon algorithm (see the sketch after this list).
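
A hedged sketch (not part of the commit) of how a userland caller might use
the new ESRCH result with the existing KERN_PROC2/KERN_PROC_PID sysctl; the
retry count, sleep interval and the lookup_proc() helper name are arbitrary
choices for illustration:

#include <sys/param.h>
#include <sys/sysctl.h>

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Look up one process by pid, retrying briefly while the kernel says ESRCH. */
static bool
lookup_proc(pid_t pid, struct kinfo_proc2 *kp)
{
        int mib[6] = { CTL_KERN, KERN_PROC2, KERN_PROC_PID, pid,
            (int)sizeof(*kp), 1 };
        size_t len;
        int tries;

        for (tries = 0; tries < 5; tries++) {
                len = sizeof(*kp);
                if (sysctl(mib, __arraycount(mib), kp, &len, NULL, 0) == 0)
                        return len == sizeof(*kp);      /* found */
                if (errno != ESRCH)
                        return false;           /* real error: abandon */
                usleep(1000);                   /* no such pid: retry */
        }
        return false;
}

int
main(void)
{
        struct kinfo_proc2 kp;

        if (lookup_proc(getpid(), &kp))
                printf("pid %d: %s\n", (int)kp.p_pid, kp.p_comm);
        return 0;
}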

This corrects races observed in the existing ATF ptrace(2) tests, related
to await_zombie(). That function is expected to check whether a process
has been transformed into a zombie, but it occasionally caused crashes by
overflowing the return buffer when the same pid was returned twice: once
from the allproc list and a second time from the zombproc one.
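
For context, an await_zombie()-style poller built on the same sysctl could
look roughly like the sketch below; treating kinfo_proc2's p_stat field and
the LSZOMB constant from <sys/lwp.h> as the zombie indicator is an assumption
of this sketch, not something quoted from the tests:

#include <sys/param.h>
#include <sys/sysctl.h>
#include <sys/lwp.h>

#include <stdbool.h>
#include <unistd.h>

/* Poll until the given pid is reported as a zombie, or give up. */
static bool
await_zombie_sketch(pid_t pid)
{
        struct kinfo_proc2 kp;
        int mib[6] = { CTL_KERN, KERN_PROC2, KERN_PROC_PID, pid,
            (int)sizeof(kp), 1 };
        size_t len;
        int tries;

        for (tries = 0; tries < 1000; tries++) {
                len = sizeof(kp);
                if (sysctl(mib, __arraycount(mib), &kp, &len, NULL, 0) == -1)
                        return false;   /* failed, e.g. ESRCH once reaped */
                /* Assumed: a zombie shows up with p_stat == LSZOMB. */
                if (kp.p_stat == LSZOMB)
                        return true;
                usleep(1000);           /* still alive: poll again */
        }
        return false;
}

int
main(void)
{
        pid_t child = fork();

        if (child == 0)
                _exit(0);       /* child exits and lingers as a zombie */
        return await_zombie_sketch(child) ? 0 : 1;
}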

Fix suggested by <christos>
Short-circuit break suggested by <kre>

Discussed on tech-kern.

Sponsored by <The NetBSD Foundation>

diffstat:

 sys/kern/kern_proc.c |  48 +++++++++++++++++++++++++++++-------------------
 1 files changed, 29 insertions(+), 19 deletions(-)

diffs (109 lines):

diff -r 4ad486a949f4 -r ad2ecdb26b11 sys/kern/kern_proc.c
--- a/sys/kern/kern_proc.c      Sat Mar 31 11:22:06 2018 +0000
+++ b/sys/kern/kern_proc.c      Sun Apr 01 08:45:43 2018 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: kern_proc.c,v 1.206.6.1 2018/01/01 18:58:32 snj Exp $  */
+/*     $NetBSD: kern_proc.c,v 1.206.6.2 2018/04/01 08:45:43 martin Exp $       */
 
 /*-
  * Copyright (c) 1999, 2006, 2007, 2008 The NetBSD Foundation, Inc.
@@ -62,7 +62,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_proc.c,v 1.206.6.1 2018/01/01 18:58:32 snj Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_proc.c,v 1.206.6.2 2018/04/01 08:45:43 martin Exp $");
 
 #ifdef _KERNEL_OPT
 #include "opt_kstack.h"
@@ -1675,12 +1675,16 @@
        marker->p_flag = PK_MARKER;
 
        mutex_enter(proc_lock);
-       mmmbrains = false;
-       for (p = LIST_FIRST(&allproc);; p = next) {
+       /*
+        * Start with zombies to prevent reporting processes twice, in case they
+        * are dying and being moved from the list of alive processes to zombies.
+        */
+       mmmbrains = true;
+       for (p = LIST_FIRST(&zombproc);; p = next) {
                if (p == NULL) {
-                       if (!mmmbrains) {
-                               p = LIST_FIRST(&zombproc);
-                               mmmbrains = true;
+                       if (mmmbrains) {
+                               p = LIST_FIRST(&allproc);
+                               mmmbrains = false;
                        }
                        if (p == NULL)
                                break;
@@ -1705,17 +1709,17 @@
                }
 
                /*
-                * TODO - make more efficient (see notes below).
-                * do by session.
+                * Handling all the operations in one switch, at the cost of
+                * algorithmic complexity, is on purpose. Splitting this
+                * function into several similar copies would add maintenance
+                * burden and code growth, for a negligible speedup in practice.
                 */
                switch (op) {
                case KERN_PROC_PID:
-                       /* could do this with just a lookup */
                        match = (p->p_pid == (pid_t)arg);
                        break;
 
                case KERN_PROC_PGRP:
-                       /* could do this by traversing pgrp */
                        match = (p->p_pgrp->pg_id == (pid_t)arg);
                        break;
 
@@ -1821,10 +1825,20 @@
                        rw_exit(&p->p_reflock);
                        next = LIST_NEXT(p, p_list);
                }
+
+               /*
+                * Short-circuit break quickly!
+                */
+               if (op == KERN_PROC_PID)
+                       break;
        }
        mutex_exit(proc_lock);
 
        if (where != NULL) {
+               if (needed == 0) {
+                       error = ESRCH;
+                       goto out;
+               }
                *oldlenp = dp - where;
                if (needed > *oldlenp) {
                        error = ENOMEM;
@@ -1834,10 +1848,8 @@
                needed += KERN_PROCSLOP;
                *oldlenp = needed;
        }
-       if (kbuf)
-               kmem_free(kbuf, sizeof(*kbuf));
-       if (marker)
-               kmem_free(marker, sizeof(*marker));
+       kmem_free(kbuf, sizeof(*kbuf));
+       kmem_free(marker, sizeof(*marker));
        sysctl_relock();
        return 0;
  bah:
@@ -1848,10 +1860,8 @@
  cleanup:
        mutex_exit(proc_lock);
  out:
-       if (kbuf)
-               kmem_free(kbuf, sizeof(*kbuf));
-       if (marker)
-               kmem_free(marker, sizeof(*marker));
+       kmem_free(kbuf, sizeof(*kbuf));
+       kmem_free(marker, sizeof(*marker));
        sysctl_relock();
        return error;
 }


