Source-Changes-HG archive


[src/trunk]: src/sys/kern Make sysctl_doeproc() more predictable



details:   https://anonhg.NetBSD.org/src/rev/c974ae867e1a
branches:  trunk
changeset: 321348:c974ae867e1a
user:      kamil <kamil%NetBSD.org@localhost>
date:      Tue Mar 13 02:24:26 2018 +0000

description:
Make sysctl_doeproc() more predictable

Swap the order of looking into the zombie and all-process lists, starting now
with the zombie one. This prevents a previously observed race in which the
same process could be detected on both lists during a single polling call.
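
A condensed sketch of the reordered traversal (the authoritative change is in
the diff below); proc_lock, allproc, zombproc, p_list and mmmbrains are the
existing names from sys/kern/kern_proc.c, and the per-entry filtering and
marker handling are only hinted at by a comment:

	mutex_enter(proc_lock);
	mmmbrains = true;			/* walking zombproc first */
	for (p = LIST_FIRST(&zombproc);; p = next) {
		if (p == NULL) {
			if (mmmbrains) {
				/* zombies exhausted, switch to the live list */
				p = LIST_FIRST(&allproc);
				mmmbrains = false;
			}
			if (p == NULL)
				break;		/* both lists exhausted */
		}
		next = LIST_NEXT(p, p_list);
		/* ... filter on op/arg and copy the matching entry out ... */
	}
	mutex_exit(proc_lock);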

While there:
 - Short-circuit break for KERN_PROC_PID, once a pid has been detected.
 - Removal of redundant "if (kbuf)" and "if (marker)" checks.
 - Update of comments regarding a potential optimization, explaining why we
   don't want to do it as of now. The performance gain of a lookup call vs.
   iteration over a list is negligible on a regular system.
 - Return ESRCH when no results have been found. This makes it easier to
   implement a retry or abandon algorithm (see the sketch after this list).
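
As a rough illustration of the retry-or-abandon pattern that the ESRCH return
enables, a userland caller could poll as in the sketch below. The mib layout
is the standard sysctl(3) KERN_PROC2/KERN_PROC_PID one; the helper name,
retry count and sleep interval are invented for the example and are not part
of this change:

	#include <sys/param.h>
	#include <sys/sysctl.h>

	#include <errno.h>
	#include <stdbool.h>
	#include <unistd.h>

	/*
	 * Hypothetical helper: poll for a pid, retrying while the kernel
	 * reports ESRCH (no matching process yet), abandoning on any other
	 * error.
	 */
	static bool
	wait_for_pid(pid_t pid, int max_tries)
	{
		struct kinfo_proc2 kp;
		size_t len;
		int mib[6];

		mib[0] = CTL_KERN;
		mib[1] = KERN_PROC2;
		mib[2] = KERN_PROC_PID;
		mib[3] = pid;
		mib[4] = sizeof(kp);
		mib[5] = 1;

		for (int i = 0; i < max_tries; i++) {
			len = sizeof(kp);
			if (sysctl(mib, 6, &kp, &len, NULL, 0) == -1) {
				if (errno != ESRCH)
					return false;	/* unexpected error */
			} else if (len > 0)
				return true;	/* got exactly one entry */
			usleep(1000);		/* not found yet, retry */
		}
		return false;			/* gave up: abandon */
	}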

This corrects races observed in the existing ATF ptrace(2) tests, related
to await_zombie(). That function checks whether a process has been
transformed into a zombie, but it was causing occasional crashes because it
overflowed the return buffer when the same pid was returned twice: once from
the allproc list and a second time from the zombproc one.

Fix suggested by <christos>
Short-circuit break suggested by <kre>

Discussed on tech-kern.

Sponsored by <The NetBSD Foundation>

diffstat:

 sys/kern/kern_proc.c |  48 +++++++++++++++++++++++++++++-------------------
 1 files changed, 29 insertions(+), 19 deletions(-)

diffs (109 lines):

diff -r 00c5ef26d5e3 -r c974ae867e1a sys/kern/kern_proc.c
--- a/sys/kern/kern_proc.c      Tue Mar 13 02:23:28 2018 +0000
+++ b/sys/kern/kern_proc.c      Tue Mar 13 02:24:26 2018 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: kern_proc.c,v 1.210 2018/03/11 15:13:05 kre Exp $      */
+/*     $NetBSD: kern_proc.c,v 1.211 2018/03/13 02:24:26 kamil Exp $    */
 
 /*-
  * Copyright (c) 1999, 2006, 2007, 2008 The NetBSD Foundation, Inc.
@@ -62,7 +62,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_proc.c,v 1.210 2018/03/11 15:13:05 kre Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_proc.c,v 1.211 2018/03/13 02:24:26 kamil Exp $");
 
 #ifdef _KERNEL_OPT
 #include "opt_kstack.h"
@@ -1674,12 +1674,16 @@
        marker->p_flag = PK_MARKER;
 
        mutex_enter(proc_lock);
-       mmmbrains = false;
-       for (p = LIST_FIRST(&allproc);; p = next) {
+       /*
+        * Start with zombies to prevent reporting processes twice, in case they
+        * are dying and being moved from the list of alive processes to zombies.
+        */
+       mmmbrains = true;
+       for (p = LIST_FIRST(&zombproc);; p = next) {
                if (p == NULL) {
-                       if (!mmmbrains) {
-                               p = LIST_FIRST(&zombproc);
-                               mmmbrains = true;
+                       if (mmmbrains) {
+                               p = LIST_FIRST(&allproc);
+                               mmmbrains = false;
                        }
                        if (p == NULL)
                                break;
@@ -1704,17 +1708,17 @@
                }
 
                /*
-                * TODO - make more efficient (see notes below).
-                * do by session.
+                * Handling all the operations in one switch at the cost of
+                * algorithm complexity is on purpose. Splitting this function
+                * into several similar copies would grow the maintenance
+                * burden and code size, while the speedup is negligible in
+                * practical systems.
                 */
                switch (op) {
                case KERN_PROC_PID:
-                       /* could do this with just a lookup */
                        match = (p->p_pid == (pid_t)arg);
                        break;
 
                case KERN_PROC_PGRP:
-                       /* could do this by traversing pgrp */
                        match = (p->p_pgrp->pg_id == (pid_t)arg);
                        break;
 
@@ -1820,10 +1824,20 @@
                        rw_exit(&p->p_reflock);
                        next = LIST_NEXT(p, p_list);
                }
+
+               /*
+                * Short-circuit break quickly!
+                */
+               if (op == KERN_PROC_PID)
+                       break;
        }
        mutex_exit(proc_lock);
 
        if (where != NULL) {
+               if (needed == 0) {
+                       error = ESRCH;
+                       goto out;
+               }
                *oldlenp = dp - where;
                if (needed > *oldlenp) {
                        error = ENOMEM;
@@ -1833,10 +1847,8 @@
                needed += KERN_PROCSLOP;
                *oldlenp = needed;
        }
-       if (kbuf)
-               kmem_free(kbuf, sizeof(*kbuf));
-       if (marker)
-               kmem_free(marker, sizeof(*marker));
+       kmem_free(kbuf, sizeof(*kbuf));
+       kmem_free(marker, sizeof(*marker));
        sysctl_relock();
        return 0;
  bah:
@@ -1847,10 +1859,8 @@
  cleanup:
        mutex_exit(proc_lock);
  out:
-       if (kbuf)
-               kmem_free(kbuf, sizeof(*kbuf));
-       if (marker)
-               kmem_free(marker, sizeof(*marker));
+       kmem_free(kbuf, sizeof(*kbuf));
+       kmem_free(marker, sizeof(*marker));
        sysctl_relock();
        return error;
 }


