tech-userlevel archive


Re: ld.elf_so locking



On Mon, Mar 14, 2011 at 04:37:21PM +0100, Joerg Sonnenberger wrote:

> Comments?

Only a few right now..

> There are currently two situations left where recursion would result in
> a deadlock: a signal handler or constructor using __tls_get_addr
> requiring memory allocation when already in ld.elf_so, and a signal
> handler occurring during exclusively locked calls like dlopen trying to
> acquire a shared lock, e.g. to lazily resolve a relocation.
> 
> The second can mostly be addressed by using sigprocmask in the

For stuff like that, one idea is to have the signal mask directly in the
_lwp_ctl area, so that sigprocmask() isn't any kind of performance concern.
The signal mask is thread-private state in the kernel with a couple of
exceptions: one can be worked around, and I think the other relates to SA
(and so can die).  As a peripheral benefit this could be a nice boost to
some workloads, since sigprocmask() is called a lot.
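
As a sketch of what I mean (lc_sigmask is purely hypothetical; only
lc_curcpu and lc_pctr exist in <sys/lwpctl.h> today):

#include <signal.h>

struct lwpctl {
	volatile int	lc_curcpu;
	volatile int	lc_pctr;
	sigset_t	lc_sigmask;	/* proposed: consulted by the
					 * kernel at delivery time */
};

/* Blocking signals around a critical section becomes plain stores: */
static void
critical(struct lwpctl *lc)
{
	sigset_t oset;

	oset = lc->lc_sigmask;
	(void)sigfillset(&lc->lc_sigmask);
	/* ... critical section ... */
	lc->lc_sigmask = oset;
}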

On a related note, is there a thread-private area that we can reserve to
point to the _lwp_ctl block?

> +static volatile bool _rtld_mutex_may_recurse;
> +

Prefer "int" or sigatomic_t as bool may be sub-word sized and thus cause
atomicity problems.  Maybe this is not a concern (although volatile
suggests it is).
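
That is, something like:

#include <signal.h>

/* sig_atomic_t can be read and written atomically even in a handler. */
static volatile sig_atomic_t _rtld_mutex_may_recurse;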

> +#ifdef __HAVE_FUNCTION_DESCRIPTORS
> +#define      lookup_mutex_enter()    _rtld_exclusive_enter()
> +#define      lookup_mutex_exit()     _rtld_exclusive_exit()
> +#else
> +#define      lookup_mutex_enter()    _rtld_shared_enter()
> +#define      lookup_mutex_exit()     _rtld_shared_exit()
> +#endif

Support for named lock objects would be nice (e.g. rtldlock_t *lock as
parameter, or "int lock" if the former would cause additional relocs).
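
Roughly this shape, say (names and layout are just an illustration):

typedef struct {
	volatile unsigned int	rl_state;	/* reader count + exclusive bit */
	volatile lwpid_t	rl_waiter;	/* LWP to wake on release */
} rtldlock_t;

void	_rtld_shared_enter(rtldlock_t *);
void	_rtld_shared_exit(rtldlock_t *);
void	_rtld_exclusive_enter(rtldlock_t *);
void	_rtld_exclusive_exit(rtldlock_t *);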

> +void
> +_rtld_shared_enter(void)
> +{
> +     unsigned int cur;
> +     lwpid_t waiter, self = 0;
> +
> +     membar_enter();

membar_enter() should come after the lock acquire point, i.e. after the
successful CAS.  Suggest wrapping it in #ifndef __HAVE_ATOMIC_AS_MEMBAR.
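
I.e., roughly:

		if ((cur & RTLD_EXCLUSIVE_MASK) == 0) {
			if (atomic_cas_uint(&_rtld_mutex, cur, cur + 1) != cur)
				continue;
#ifndef __HAVE_ATOMIC_AS_MEMBAR
			membar_enter();	/* acquire; after the CAS */
#endif
			return;
		}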

> +     for (;;) {
> +             cur = _rtld_mutex;
> +             /*
> +              * First check if we are currently not exclusively locked.
> +              */
> +             if ((cur & RTLD_EXCLUSIVE_MASK) == 0) {
> +                     /* Yes, so increment use counter */
> +                     if (atomic_cas_uint(&_rtld_mutex, cur, cur + 1) != cur)
> +                             continue;
> +                     return;
> +             }
> +             /*
> +              * Someone has an exclusive lock.  Put us on the waiter list.
> +              */
> +             if (!self)
> +                     self = _lwp_self();
> +             if (cur == (self | RTLD_EXCLUSIVE_MASK)) {
> +                     if (_rtld_mutex_may_recurse)
> +                             return;
> +                     _rtld_error("deadlock detected");
> +                     _rtld_die();
> +             }
> +             waiter = atomic_swap_uint(&_rtld_waiter_shared, self);
> +             /*
> +              * Check for race against _rtld_exclusive_exit before sleeping.
> +              */
> +             if ((_rtld_mutex & RTLD_EXCLUSIVE_MASK) ||
> +                 _rtld_waiter_exclusive)
> +                     _lwp_park(NULL, -1, __UNVOLATILE(&_rtld_mutex), NULL);
> +             /* Try to remove us from the waiter list. */
> +             atomic_cas_uint(&_rtld_waiter_shared, self, 0);
> +             if (waiter)
> +                     _lwp_unpark(waiter, __UNVOLATILE(&_rtld_mutex));
> +     }
> +}

This makes me a bit nervous since it's sort of hard to follow what's going
on.  Some further comments would be nice.  A comment about the realtime
restriction, that we can't just do a dumb spin on the lock, would be good
too (as described in pthread_mutex.c).
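
For instance, something like this above the park loop:

	/*
	 * We must not simply spin here: under real-time scheduling, a
	 * SCHED_FIFO thread spinning on a lock held by a lower priority
	 * thread on the same CPU would keep the holder from ever running
	 * and releasing the lock (see the discussion in libpthread's
	 * pthread_mutex.c).  Queue ourselves and sleep in _lwp_park()
	 * instead.
	 */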

