tech-kern archive
Re: racy access in kern_runq.c
On 06/12/2019 at 07:52, Kengo NAKAHARA wrote:
> Hi,
>
> There are some racy accesses in kern_runq.c detected by KCSAN. The
> reports for these races are so frequent that they drown out other
> messages, so I want to fix them. They can be fixed by the following patch.
>
> ====================
> diff --git a/sys/kern/kern_runq.c b/sys/kern/kern_runq.c
> index 04ff1732016..bb0815689cd 100644
> --- a/sys/kern/kern_runq.c
> +++ b/sys/kern/kern_runq.c
> @@ -627,7 +627,7 @@ sched_balance(void *nocallout)
> /* Make lockless countings */
> for (CPU_INFO_FOREACH(cii, ci)) {
> spc = &ci->ci_schedstate;
> -
> + spc_lock(ci);
> /*
> * Average count of the threads
> *
> @@ -643,10 +643,11 @@ sched_balance(void *nocallout)
> hci = ci;
> highest = spc->spc_avgcount;
> }
> + spc_unlock(ci);
> }
>
> /* Update the worker */
> - worker_ci = hci;
> + atomic_swap_ptr(&worker_ci, hci);
>
> if (nocallout == NULL)
> callout_schedule(&balance_ch, balance_period);
> @@ -734,12 +735,15 @@ sched_idle(void)
> spc_unlock(ci);
>
> no_migration:
> + spc_lock(ci);
> if ((spc->spc_flags & SPCF_OFFLINE) != 0 || spc->spc_count != 0) {
> + spc_unlock(ci);
> return;
> }
>
> /* Reset the counter, and call the balancer */
> spc->spc_avgcount = 0;
> + spc_unlock(ci);
> sched_balance(ci);
> tci = worker_ci;
> tspc = &tci->ci_schedstate;
> ====================
It just so happens we were talking about this yesterday.
worker_ci indeed needs to be a real atomic, but that means we also have
to fetch it atomically here:
744 tci = worker_ci;
For the other variables, my understanding was that we don't care about
the races themselves, but only about making sure the accesses are not
split (torn), so it seems to me atomic_load_relaxed()/atomic_store_relaxed()
would suffice.
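To make that concrete, here is a minimal sketch of how the relaxed
accesses might look; it assumes the atomic_load_relaxed()/atomic_store_relaxed()
macros from <sys/atomic.h> and glosses over the surrounding
sched_balance()/sched_idle() logic, so treat it as an illustration
rather than a tested patch:
====================
/*
 * Illustrative sketch only (assumptions: spc_avgcount is a u_int and
 * worker_ci is a plain struct cpu_info pointer).
 */
#include <sys/atomic.h>

	/* sched_balance(): lockless counting; the values may be stale,
	 * but the relaxed loads guarantee they are not torn. */
	for (CPU_INFO_FOREACH(cii, ci)) {
		spc = &ci->ci_schedstate;
		u_int avg = atomic_load_relaxed(&spc->spc_avgcount);
		if (avg > highest) {
			hci = ci;
			highest = avg;
		}
	}

	/* Publish the chosen worker atomically ... */
	atomic_store_relaxed(&worker_ci, hci);

	/* ... and fetch it atomically in sched_idle(), around line 744. */
	tci = atomic_load_relaxed(&worker_ci);
====================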