tech-kern archive

Re: mutexes, locks and so on...



On Thu, Nov 11, 2010 at 05:22:03PM +0100, Johnny Billquist wrote:

> The mutex implementation in place now is nice in many ways, as it
> is rather open to different implementations based on what the
> hardware can do. However, I found that only one platform (hppa) is
> currently using this. All others rely on the __HAVE_SIMPLE_MUTEXES
> implementation, which utilizes a CAS function. Obviously the VAX does
> not have a CAS, and it is rather costly to simulate, so I'm
> working on getting away from this. (Do all the other platforms
> really have a CAS?)
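
For context, the __HAVE_SIMPLE_MUTEXES fast path boils down to claiming
the owner word with a single compare-and-swap, roughly along the lines of
the sketch below.  The type and function names here are made up for
illustration, assuming the owner field doubles as the lock word;
atomic_cas_ulong() is the atomic_ops(3) primitive.

#include <sys/types.h>
#include <sys/atomic.h>

struct simple_mutex_sketch {
        volatile unsigned long  mtx_owner;      /* 0 when unowned */
};

static inline bool
simple_mutex_tryenter(struct simple_mutex_sketch *mtx, unsigned long owner)
{

        /* atomic_cas_ulong() returns the value observed before the swap. */
        return atomic_cas_ulong(&mtx->mtx_owner, 0, owner) == 0;
}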

What

> With mutex_spin, you instead store the original spl at the first
> mutex_spin_enter, and later calls to mutex_spin_enter can only
> possibly raise the ipl further. At mutex_spin_exit, we do not lower
> the spl again until the final mutex_spin_exit, which resets the spl
> to the value it had before any mutex was held.
> This causes a slightly different behaviour, as the spl can remain
> very high even though you are only holding a low-ipl mutex. While
> it obviously doesn't cause a system to fail, it can introduce
> delays which might not be necessary, and could in theory cause
> interrupts to be dropped that needn't be.
> 
> Is this a conscious design? Do we not expect/enforce mutexes to be
> released in the reverse order they were acquired?
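
To put the behaviour being described in concrete terms, here is what the
nesting case looks like with the standard mutex(9) calls; "vm_mtx" and
"sched_mtx" are hypothetical spin mutexes initialized at IPL_VM and
IPL_SCHED respectively.

#include <sys/mutex.h>

void
nesting_example(kmutex_t *vm_mtx, kmutex_t *sched_mtx)
{

        mutex_spin_enter(vm_mtx);       /* SPL raised to IPL_VM, old SPL saved */
        mutex_spin_enter(sched_mtx);    /* SPL raised further, to IPL_SCHED */
        mutex_spin_exit(sched_mtx);     /* SPL stays at IPL_SCHED */
        mutex_spin_exit(vm_mtx);        /* last exit: original SPL restored */
}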

It was a conscious decision.  I did some profiling on SPL usage
during the newlock2 cycle.  We can nest SPLs like this, but it happens
infrequently enough that it doesn't matter.  There are a number of factors
that mitigate it anyway.  First, if nesting happens in an interrupt,
the original SPL will be restored on EOI.  Second, there are no spin mutexes
at soft level (IPL_SOFT*) - all that synchronization is handled by adaptive
mutexes.  Third, the number of priority levels was collapsed and made into
a hierarchy.  We didn't need all the hard levels.  So the most complicated
it gets on that front is having IPL_VM, IPL_SCHED and IPL_HIGH to worry
about.  On many ports these map to two or even just one real IPL at the
hardware level.  Fourth, where we would previously have gone for a two-level
ISR (soft+hard), this is now usually handled with an adaptive mutex
at the soft level, or some clever mechanism like a queue.
The IPL is raised and lowered a whole lot less than it used to be.
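
For reference, a minimal sketch of the save/restore bookkeeping described
above, assuming a per-CPU nesting count.  The structure and function names
are made up; splraiseipl(), makeiplcookie() and splx() are the usual
spl(9) interfaces.

#include <sys/types.h>
#include <sys/intr.h>

struct cpu_info_sketch {
        int     ci_spin_depth;          /* spin mutexes currently held */
        int     ci_saved_spl;           /* SPL before the first one */
};

void
spin_enter_sketch(struct cpu_info_sketch *ci, int ipl)
{
        int s = splraiseipl(makeiplcookie(ipl));        /* only ever raises */

        if (ci->ci_spin_depth++ == 0)
                ci->ci_saved_spl = s;   /* remember the original SPL once */
        /* ... acquire the lock word ... */
}

void
spin_exit_sketch(struct cpu_info_sketch *ci)
{

        /* ... release the lock word ... */
        if (--ci->ci_spin_depth == 0)
                splx(ci->ci_saved_spl); /* drop back only at the last exit */
}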
 
> Moving on to locks:
> the simple lock defined in lock.h is easy enough, and I haven't
> found any problems with it. The rwlock, however, is written with the
> explicit assumption that there is a CAS function. It would be great
> if that code were a little more like the mutex code, so that
> alternative implementations could be done for architectures where
> you'd like to do it in other ways. Is the reason for this not being
> the case just an oversight, a lack of time and resources, or is
> there some underlying reason why this could not be done?

It was deliberate.  rwlocks are only effective in situations where the
codepath is heavyweight.  So I felt that while it is worthwhile optimising
them if possible, an all-out jihad is just not warranted (as it might be
for mutexes).  So I made the decision to have them rely on CAS, so that the
implementation is transparent and easier to prove and maintain.
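
For comparison, the kind of CAS-based fast path the MI rwlock code assumes
looks roughly like the reader-acquire loop below.  This is a simplified
sketch with made-up names; the real kern_rwlock.c packs the owner, reader
count and wait/write bits into a single word.

#include <sys/types.h>
#include <sys/atomic.h>

#define RW_SKETCH_WRITE_HELD    1UL     /* low bit set: a writer owns it */

struct rwlock_sketch {
        volatile unsigned long  rw_word;        /* reader count in upper bits */
};

static bool
rw_tryenter_read_sketch(struct rwlock_sketch *rw)
{
        unsigned long old, new;

        do {
                old = rw->rw_word;
                if (old & RW_SKETCH_WRITE_HELD)
                        return false;   /* writer active; caller must block */
                new = old + 2;          /* add one reader above the flag bit */
        } while (atomic_cas_ulong(&rw->rw_word, old, new) != old);
        return true;
}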

> Also, there are a few places in the kernel code where atomic_cas
> is called explicitly. Wouldn't it be better if such code were
> abstracted out, so we didn't depend on specific cpu instructions
> in the MI code?

It is difficult to say without a good example. :-)

But I can say that the system is now most definitely designed for a world
where CAS is available.  These days that means even embedded hardware.
That too was a conscious decision, i.e. while it's nice that we run on the
VAX and m68k etc., let's design for today/tomorrow (CAS, 64-bit, etc.) and
bring the others along for the ride unless somehow impossible.
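
To make that concrete, explicit call sites of this sort tend to look like
the made-up lock-free list push below.  On a machine without a hardware
CAS, atomic_cas_ptr() has to be emulated, which is what makes such MI code
painful on the VAX.

#include <sys/atomic.h>

struct node {
        struct node     *n_next;
};

/* Insert "n" at the head of a singly linked list without taking a lock. */
void
lockfree_push(struct node *volatile *head, struct node *n)
{
        struct node *old;

        do {
                old = *head;
                n->n_next = old;
        } while (atomic_cas_ptr(head, old, n) != old);
}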
 

