
Re: 4.x -> 5.x locking?



On Wed, Nov 09, 2011 at 09:52:13AM -0600, Eric Haszlakiewicz wrote:
 > > I don't think it guarantees it by itself. That is, if you want to access
 > > the data on a different CPU, you either need to take the mutex (and the
 > > read barrier in mutex_enter) or issue an explicit barrier.
 > 
 > I'm a bit unclear still.  Do you mean that in this sequence:
 > 1: CPU A: mutex_enter(mtx)
 > 2: CPU A: x = 1
 > 3: CPU A: mutex_exit(mtx)
 > 4: CPU B: mutex_enter(mtx)
 > 5: CPU B: dostuff(x);
 > 
 > The value changed in step #2 is only guaranteed to be available on
 > CPU B after Step #4 runs?  i.e. the mutex_enter call does
 > "something" to ensure that all changes from all other CPUs are
 > visible?

Yes. However, since we aren't talking about non-cache-coherent
architectures (which require even more manual manipulation), it's only
about access reordering in the memory hierarchy.

So it's not so much that the read barrier in mutex_enter causes the
data to become visible; rather, it makes sure the read takes place at
that point and not earlier (via prefetch, compiler optimization,
etc.), where it could happen before CPU A unlocks the mutex and
flushes the new value of x out.
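
To make that concrete, here is a minimal sketch of your sequence
written against the kernel's kmutex interface. The writer/reader
function names and the dostuff() prototype are just for illustration,
and mutex_init() is assumed to have run at setup:

#include <sys/mutex.h>

void dostuff(int);		/* stand-in for the consumer of x */

static kmutex_t mtx;		/* mutex_init(&mtx, MUTEX_DEFAULT, IPL_NONE) */
static int x;			/* shared data, protected by mtx */

void
writer(void)			/* runs on CPU A */
{
	mutex_enter(&mtx);
	x = 1;
	mutex_exit(&mtx);	/* release: the store to x is ordered
				   before the lock is seen as free */
}

void
reader(void)			/* runs on CPU B */
{
	mutex_enter(&mtx);	/* acquire: the read of x cannot be
				   hoisted above this point */
	dostuff(x);		/* sees x = 1 once writer has unlocked */
	mutex_exit(&mtx);
}

The point is that neither side needs explicit barriers: the
acquire/release semantics of the mutex_enter/mutex_exit pair supply
the ordering.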

The other problem is that because every processor architecture (and
sometimes, every model!) has its own set of rules, implicit or
explicit, about which barrier instructions are required and in what
contexts, it's very difficult to be sure you've got all the right
incantations in place, or to reason about the ordering portably.
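
As an illustration of the "explicit barrier" route mentioned earlier,
here is a sketch of the classic flag-passing pattern using the membar
primitives from sys/atomic.h. The variable and function names are
invented for the example; this is not code from the tree:

#include <sys/atomic.h>

void dostuff(int);		/* stand-in consumer */

static int data;		/* payload, written before the flag */
static volatile int ready;	/* set once data is valid */

void
producer(void)
{
	data = 42;
	membar_producer();	/* order the store to data before
				   the store to ready */
	ready = 1;
}

void
consumer(void)
{
	while (!ready)
		continue;	/* spin until the flag is visible */
	membar_consumer();	/* order the load of ready before
				   the load of data */
	dostuff(data);
}

Getting even this simple pattern right requires knowing that
membar_producer orders stores against stores and membar_consumer
orders loads against loads, which is exactly the kind of
per-primitive reasoning that is hard to carry across architectures.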

Memory barriers are a very unsatisfactory paradigm, but they're what
we've got.

-- 
David A. Holland
dholland@netbsd.org

