On Wed, Apr 30, 2008 at 10:09:04AM +0100, Richard Earnshaw wrote:
> The compiler never directly generates SWP, so this must be an in-line
> assembler statement.  Since there's no SWP instruction in Thumb, the
> code that generates this will need reworking to push the required
> instructions out-of-line.
Ok, I tried to find the culprit asm code but had overlooked
sys/arch/arm/include/lock.h:
...
#if defined(_KERNEL)
static __inline int
__swp(int __val, volatile unsigned char *__ptr)
{

	__asm volatile("swpb %0, %1, [%2]"
	    : "=&r" (__val) : "r" (__val), "r" (__ptr) : "memory");
	return __val;
}
#else
static __inline int
__swp(int __val, volatile int *__ptr)
{

	__asm volatile("swp %0, %1, [%2]"
	    : "=&r" (__val) : "r" (__val), "r" (__ptr) : "memory");
	return __val;
}
#endif /* _KERNEL */
...
static __inline void __attribute__((__unused__))
__cpu_simple_lock(__cpu_simple_lock_t *alp)
{

	while (__swp(__SIMPLELOCK_LOCKED, alp) != __SIMPLELOCK_UNLOCKED)
		continue;
}

static __inline int __attribute__((__unused__))
__cpu_simple_lock_try(__cpu_simple_lock_t *alp)
{

	return (__swp(__SIMPLELOCK_LOCKED, alp) == __SIMPLELOCK_UNLOCKED);
}
...
I don't know ARM assembly that well, but it seems an atomic
equivalent of swp is needed here for Thumb. Is this correct?