NetBSD-Bugs archive
Re: kern/56170: NFS+gcc-ASAN-related: panic: lock error: Mutex: mutex_vector_enter,543: locking against myself
The following reply was made to PR kern/56170; it has been noted by GNATS.
From: Taylor R Campbell <riastradh%NetBSD.org@localhost>
To: gnats-bugs%NetBSD.org@localhost
Cc: woods%planix.ca@localhost
Subject: Re: kern/56170: NFS+gcc-ASAN-related: panic: lock error: Mutex: mutex_vector_enter,543: locking against myself
Date: Sat, 15 May 2021 09:05:13 +0000
The disgusting hack is for a different issue.
The problem here is that the sysctl vm.proc.$pid.<VM_PROC_MAP> handler
holds p->p_lock across the call to vnode_to_path, which is forbidden.
It should instead hold a reference that prevents the proc from going
away, but not p_lock, the way procfs_rw does in procfs_subr.c.
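Schematically, the dance it needs is: find the proc locked, pin it by
taking a reference on one of its lwps, then drop p_lock before calling
down. A condensed sketch of that shape (the helper name is mine, not a
real kernel function; the real logic is in the attached patch):

/*
 * Sketch only: pin a proc via an lwp reference so p_lock can be
 * dropped before calling into code that may need it again.
 */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/lwp.h>
#include <sys/mutex.h>
#include <sys/proc.h>

static int
grab_proc_then_drop_lock(struct lwp *l, pid_t pid, struct proc **pp,
    struct lwp **l1p)
{
	struct proc *p;
	struct lwp *l1;
	int error;

	/* Returns with p->p_lock held, except pid == -1 => curproc. */
	if ((error = proc_find_locked(l, &p, pid)) != 0)
		return error;

	/* Pin the proc by referencing any non-zombie lwp. */
	LIST_FOREACH(l1, &p->p_lwps, l_sibling) {
		if (l1->l_stat != LSZOMB)
			break;
	}
	if (l1 == NULL) {
		if (pid != -1)
			mutex_exit(p->p_lock);
		return ESRCH;
	}
	lwp_addref(l1);

	/*
	 * The reference keeps p from going away, so p_lock can be
	 * dropped before calling vnode_to_path & co., which may sleep
	 * or need to take p_lock themselves.
	 */
	if (pid != -1)
		mutex_exit(p->p_lock);

	*pp = p;
	*l1p = l1;	/* caller calls lwp_delref(l1) when done */
	return 0;
}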
The attached patch tries to address the issue, and should be pullable
up to netbsd-9 if it works, but I haven't tested it.
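If someone wants to give it a spin, the path can be exercised from
userland with a direct sysctl(3) call. Untested sketch -- I'm assuming
the MIB layout {CTL_VM, VM_PROC, VM_PROC_MAP, pid, elem_size} that
sysctl_vmproc decodes, with VM_PROC/VM_PROC_MAP coming from
<uvm/uvm_param.h>:

/* Dump vm.proc.<pid>.map for the current process (sketch, untested). */
#include <sys/param.h>
#include <sys/sysctl.h>
#include <uvm/uvm_param.h>

#include <err.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	struct kinfo_vmentry *vme;
	size_t len, n;
	int mib[5] = { CTL_VM, VM_PROC, VM_PROC_MAP, getpid(),
	    sizeof(struct kinfo_vmentry) };

	/* First call sizes the buffer, second call fills it. */
	if (sysctl(mib, 5, NULL, &len, NULL, 0) == -1)
		err(1, "sysctl(size)");
	if ((vme = malloc(len)) == NULL)
		err(1, "malloc");
	if (sysctl(mib, 5, vme, &len, NULL, 0) == -1)
		err(1, "sysctl(fill)");
	for (n = 0; n < len / sizeof(*vme); n++)
		printf("%#" PRIx64 "-%#" PRIx64 " %s\n",
		    vme[n].kve_start, vme[n].kve_end, vme[n].kve_path);
	free(vme);
	return 0;
}

Running that in a loop against a process whose text lives on the NFS
mount from the PR should poke the vnode_to_path path that panics.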
(That said, this code is grody, and we should factor out the common
parts so we can do this less grodily. It looks like sysctl
proc.$pid.cwd might have the same issue -- fill_cwd in
sys/kern/kern_proc.c; see the sketch below.)
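If fill_cwd does have the bug, the same lwp-reference dance ought to
cure it. Purely hypothetical sketch, reusing the
grab_proc_then_drop_lock helper sketched above (also hypothetical; the
getcwd call itself is elided):

static int
fill_cwd_sketch(struct lwp *l, pid_t pid)
{
	struct proc *p;
	struct lwp *l1;
	int error;

	/* Pin the proc via an lwp reference and drop p_lock. */
	if ((error = grab_proc_then_drop_lock(l, pid, &p, &l1)) != 0)
		return error;

	/*
	 * Safe now: the getcwd machinery may take vnode locks and
	 * sleep, but we no longer hold p->p_lock.
	 */
	/* ... getcwd_common(...) on the proc's cwd, as fill_cwd does ... */

	lwp_delref(l1);
	return error;
}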
Attachment: sysctlvmmap.patch
diff -r 461e169ca34e sys/uvm/uvm_map.c
--- a/sys/uvm/uvm_map.c Thu Apr 29 09:27:29 2021 +0000
+++ b/sys/uvm/uvm_map.c Sat May 15 01:06:25 2021 +0000
@@ -5217,6 +5217,7 @@ fill_vmentries(struct lwp *l, pid_t pid,
     size_t *oldlenp)
 {
 	int error;
+	struct lwp *l1 = NULL;
 	struct proc *p;
 	struct kinfo_vmentry *vme;
 	struct vmspace *vm;
@@ -5224,6 +5225,7 @@ fill_vmentries(struct lwp *l, pid_t pid,
 	struct vm_map_entry *entry;
 	char *dp;
 	size_t count, vmesize;
+	bool unlock = false;
 
 	if (elem_size == 0 || elem_size > 2 * sizeof(*vme))
 		return EINVAL;
@@ -5238,15 +5240,47 @@ fill_vmentries(struct lwp *l, pid_t pid,
 	} else
 		vmesize = 0;
 
+	/*
+	 * Find the process by pid and lock it -- unless pid is -1, in
+	 * which case we get curproc, unlocked.
+	 */
 	if ((error = proc_find_locked(l, &p, pid)) != 0)
 		return error;
-
+	KASSERT(pid == -1 || mutex_owned(p->p_lock));
+	unlock = true;
+
+	/*
+	 * Initialize the loop state now, so we can back out what we've
+	 * done if need be.
+	 */
+	vm = NULL;
 	vme = NULL;
 	count = 0;
 
+	/* Get a reference to the VM space. */
 	if ((error = proc_vmspace_getref(p, &vm)) != 0)
 		goto out;
 
+	/*
+	 * Find a non-zombie lwp in the process so we can grab a
+	 * reference to it and release p->p_lock while we work -- that
+	 * way it won't go away but we can reach into subsystems that
+	 * need to take the lock (and we don't hoard the lock for long
+	 * durations).
+	 */
+	LIST_FOREACH(l1, &p->p_lwps, l_sibling) {
+		if (l1->l_stat != LSZOMB)
+			break;
+	}
+	if (l1 == NULL) {
+		error = ESRCH;
+		goto out;
+	}
+	lwp_addref(l1);
+	if (pid != -1)
+		mutex_exit(p->p_lock);
+	unlock = false;
+
 	map = &vm->vm_map;
 	vm_map_lock_read(map);
 
@@ -5263,11 +5297,15 @@ fill_vmentries(struct lwp *l, pid_t pid,
 		}
 		count++;
 	}
+
 	vm_map_unlock_read(map);
-	uvmspace_free(vm);
 
 out:
-	if (pid != -1)
+	if (vm != NULL)
+		uvmspace_free(vm);
+	if (l1 != NULL)
+		lwp_delref(l1);
+	if (unlock && pid != -1)
 		mutex_exit(p->p_lock);
 	if (error == 0) {
 		const u_int esize = uimin(sizeof(*vme), elem_size);