NetBSD-Users archive


Re: pthreads condition variables



On Thu, 19 Nov 2009 11:28:50 +0000, raymond.meyer%rambler.ru@localhost wrote:
> On Wed, 18 Nov 2009 22:48:06 +0100
> Jean-Yves Migeon <jeanyves.migeon%free.fr@localhost> wrote:
>> 
>> You mean a function to wake up different sleeping threads without
>> relying on mutex protection? Seems hard, especially when you have to
>> handle spurious wakeups.
> 
> No, not really. What I mean is this:
> 
> Currently Posix 'pthread_cond_wait' is defined as:
> 
> int pthread_cond_wait(
>       pthread_cond_t *cond, pthread_mutex_t *mutex);
> 
> I think it would be better to redesign this function as:
> 
> int pthread_cond_wait(
>       pthread_cond_t *cond, pthread_mutex_t *mutex, int lock_mutex);
> 
> When you call this function and pass 'lock_mutex' set to 1, then it
> behaves just like a currently implemented Posix function, i.e. when the
> thread wakes up, the mutex will be automatically locked.
> 
> However when you pass 'lock_mutex' set to 0, then when the thread wakes
> up, the mutex will be UNLOCKED.

You are breaking the contract. Condvars are used to signal modification of
a predicate. If you allow a thread to check the predicate without first
protecting it with the mutex, the result is undefined.

BTW, setting lock_mutex to 0 is equivalent to calling mutex_unlock() just
before returning from cond_wait(). You are just moving the call elsewhere,
but it's still there. Remember that you must call pthread_cond_wait() with
the mutex locked, so you are serializing your calls to pthread_cond_wait()
too.
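
To make the equivalence concrete, here is a minimal sketch of the usual
pattern (the names work_ready, mtx and cv are made up for the example).
Dropping the mutex right after pthread_cond_wait() returns has exactly the
effect your lock_mutex == 0 variant would have:

#include <pthread.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;
static int work_ready;                  /* the predicate */

static void
wait_for_work(void)
{
    pthread_mutex_lock(&mtx);
    while (!work_ready)                 /* copes with spurious wakeups */
        pthread_cond_wait(&cv, &mtx);
    pthread_mutex_unlock(&mtx);         /* same effect as lock_mutex == 0 */

    /* ... work that does not touch the shared predicate ... */
}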

> The rationale for this is to avoid the
> 'thundering herd' problem and to wake up as many threads in parallel as
> possible. If you have 256 threads sleeping, and your algorithms and
> data structures are designed in such a way that threads don't modify
> shared data when woken up

Then you do not need a condvar. Better to design your system around
sched_yield with a test-and-set operation. See sched_yield(3) and
atomic_ops(3) (especially compare-and-swap).

Something like:
while (atomic_cas(...there_is_nothing_to_do...))
    sched_yield();
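
A rough, more concrete version of that sketch, using atomic_cas_uint() from
atomic_ops(3) and sched_yield(3) (the flag name and the consume-on-wakeup
semantics are just assumptions for the example):

#include <sched.h>
#include <sys/atomic.h>

static volatile unsigned int work_available;    /* set to 1 by a producer */

static void
wait_for_flag(void)
{
    /* atomic_cas_uint() returns the old value: while the flag is still 0
     * there is nothing to do, so hand the CPU back to the scheduler. */
    while (atomic_cas_uint(&work_available, 1, 0) == 0)
        sched_yield();
    /* Flag consumed atomically (1 -> 0); go do the work. */
}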

>, then it would be much better to broadcast
> condition and have 256 threads wake up WITHOUT blocking on the same
> mutex in user space. The kernel would still probably have to lock the
> mutex when moving threads from sleep queue to run queue etc, but at
> least this would avoid unnecessary context switching in user space.

As I said, with adaptive mutexes a context switch only happens when the
thread that holds the mutex is sleeping and another thread is waiting for
it. IMHO, that is fairly improbable in your case.

-- 
Jean-Yves Migeon
jeanyves.migeon%free.fr@localhost



