Source-Changes-HG archive


[src/trunk]: src/sys/kern kern_descrip.c: Use atomic_store_relaxed/release fo...



details:   https://anonhg.NetBSD.org/src/rev/303212cf9d69
branches:  trunk
changeset: 373653:303212cf9d69
user:      riastradh <riastradh%NetBSD.org@localhost>
date:      Thu Feb 23 02:58:40 2023 +0000

description:
kern_descrip.c: Use atomic_store_relaxed/release for ff->ff_file.

1. atomic_store_relaxed in fd_close avoids the appearance of a race in
   sanitizers (minor bug).

2. atomic_store_release in fd_affix is necessary because the lock
   activity was not, in fact, enough to guarantee ordering (a real bug
   on some architectures like aarch64).

   The premise appears to have been that the mutex_enter/exit earlier
   in fd_affix is enough to guarantee that initialization of fp (A)
   happens before use of fp by a user once fp is published (B):

        fp->f_... = ...;                // A

        /* fd_affix */
        mutex_enter(&fp->f_lock);
        fp->f_count++;
        mutex_exit(&fp->f_lock);
        ...
        ff->ff_file = fp;               // B

   But actually mutex_enter/exit allow the following reordering by
   the CPU:

        mutex_enter(&fp->f_lock);
        ff->ff_file = fp;               // B
        fp->f_count++;
        fp->f_... = ...;                // A
        mutex_exit(&fp->f_lock);

   The only constraints they imply are:

        1. fp->f_count++ and B cannot precede mutex_enter
        2. mutex_exit cannot precede A and fp->f_count++

   They imply no constraint on the relative ordering of A, B, and
   fp->f_count++ amongst each other, however.

   This affects any architecture that has a native load-acquire or
   store-release operation in mutex_enter/exit, like aarch64, instead
   of explicit load-before-load/store and load/store-before-store
   barriers.

No need for atomic_store_* in fd_copy or fd_free because we have
exclusive access to ff as is.

XXX pullup-9
XXX pullup-10

diffstat:

 sys/kern/kern_descrip.c |  14 ++++----------
 1 files changed, 4 insertions(+), 10 deletions(-)

diffs (49 lines):

diff -r 1c7054a17a0c -r 303212cf9d69 sys/kern/kern_descrip.c
--- a/sys/kern/kern_descrip.c   Thu Feb 23 02:58:28 2023 +0000
+++ b/sys/kern/kern_descrip.c   Thu Feb 23 02:58:40 2023 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: kern_descrip.c,v 1.252 2023/02/23 02:58:28 riastradh Exp $     */
+/*     $NetBSD: kern_descrip.c,v 1.253 2023/02/23 02:58:40 riastradh Exp $     */
 
 /*-
  * Copyright (c) 2008, 2009 The NetBSD Foundation, Inc.
@@ -70,7 +70,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_descrip.c,v 1.252 2023/02/23 02:58:28 riastradh Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_descrip.c,v 1.253 2023/02/23 02:58:40 riastradh Exp $");
 
 #include <sys/param.h>
 #include <sys/systm.h>
@@ -624,7 +624,7 @@
         * will prevent them from adding additional uses to this file
         * while we are closing it.
         */
-       ff->ff_file = NULL;
+       atomic_store_relaxed(&ff->ff_file, NULL);
        ff->ff_exclose = false;
 
        /*
@@ -1152,12 +1152,6 @@
 
        /*
         * Insert the new file into the descriptor slot.
-        *
-        * The memory barriers provided by lock activity in this routine
-        * ensure that any updates to the file structure become globally
-        * visible before the file becomes visible to other LWPs in the
-        * current process; otherwise we would set ff->ff_file with
-        * atomic_store_release(&ff->ff_file, fp) at the bottom.
         */
        fdp = p->p_fd;
        dt = atomic_load_consume(&fdp->fd_dt);
@@ -1170,7 +1164,7 @@
        KASSERT(fd >= NDFDFILE || ff == (fdfile_t *)fdp->fd_dfdfile[fd]);
 
        /* No need to lock in order to make file initially visible. */
-       ff->ff_file = fp;
+       atomic_store_release(&ff->ff_file, fp);
 }
 
 /*


