tech-kern archive


Re: kernel condvars: how to use?



> Date: Fri, 8 Dec 2017 09:04:25 -0500 (EST)
> From: Mouse <mouse%Rodents-Montreal.ORG@localhost>
> 
> > Where do you set LPT_RF_WAITING?
> 
> Hm, now that you mention it, I think I don't.

Note that a flag like this is not normally necessary, or even
particularly helpful -- cv_broadcast already has essentially the same
optimization internally.

If anything, you should instead test whether you changed from a state
that would block a read to a state in which a read can make progress.
In this case, you might test whether sc->rbptr is newly less than
sc->rbfill.  But I'd skip the test altogether and always broadcast
whenever sc->rbfill changes.

> > It's unclear to me why you have splhigh in lptopen.
> 
> To prevent sc->sc_state from being changed by anything else while the
> open-testing logic runs.
> 
> > splhigh should be reserved for very special purposes that are clearly
> > justified.
> 
> What is the correct way to interlock against anyone else changing
> sc_state, then?

Who else changes sc_state?  Whoever does so must agree with lptopen
about how to exclude access.

In new code, this should generally be done with a mutex that all the
callers agree on, since the callers might run on any CPU in parallel.
E.g., you might just use a general kmutex_t sc_lock in lpt_softc which
is configured for the highest-IPL interrupt handler that needs access
to it.
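A minimal sketch of what that might look like, assuming the lpt
interrupt handler runs at IPL_TTY (the IPL is an assumption here; use
whatever the highest-IPL handler touching the softc actually needs):

```c
/* Sketch only, not taken from the driver. */
#include <sys/mutex.h>

struct lpt_softc {
	kmutex_t sc_lock;   /* covers sc_state, rawbuf, rbptr, rbfill, ... */
	/* ... existing members ... */
};

/* At attach time: initialize for the highest-IPL interrupt
 * handler that needs access to the softc. */
	mutex_init(&sc->sc_lock, MUTEX_DEFAULT, IPL_TTY);

/* In lptopen, instead of splhigh(): */
	mutex_enter(&sc->sc_lock);
	/* ... test and update sc_state ... */
	mutex_exit(&sc->sc_lock);
```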

Only in legacy code running under the giant lock is it sufficient to
raise the IPL without any interprocessor locks, and only if all the
other callers are also giant-locked.

> > In this case, everything will run giant-locked, but if you're writing
> > new code you should avoid relying on that, and definitely avoid
> > volatile voodoo.
> 
> volatile is hardly voodoo.  It is required by C for data accessed by
> both the main line and the interrupt line and mutated by at least one
> of them.  (In current implementations, the interrupt line doesn't need
> it, but that's not the same as the language promising it will work.)

In the uniprocessor model of interrupts, and in special cases that we
know are limited to it (like threads bound to a CPU synchronizing with
interrupt handlers bound to the same CPU), that's roughly right.
However, even in those cases, where we aren't using mutexes or splfoo
for some reason, we still normally use __insn_barrier to enforce
ordering where it is important.
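For illustration, __insn_barrier() is (roughly) a compiler-only
barrier, as defined in NetBSD's <sys/cdefs.h>: it stops the compiler
from reordering or caching memory accesses across it, but emits no CPU
fence, which is why it only suffices in the uniprocessor/same-CPU
cases above.  A self-contained sketch (the macro is redefined locally
here so the example stands alone):

```c
#include <assert.h>

#define insn_barrier()	__asm __volatile("" ::: "memory")

static int data;
static int ready;

/* "Main line" side: publish data, then the flag. */
static void
publish(int v)
{
	data = v;
	insn_barrier();		/* data must be stored before ready */
	ready = 1;
}

/* "Interrupt" side (same CPU): observe the flag, then the data. */
static int
consume(void)
{
	if (!ready)
		return -1;
	insn_barrier();		/* ready must be loaded before data */
	return data;
}
```

On a multiprocessor this is not enough; there you need the ordering
guarantees of mutex(9) or explicit memory barriers.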

In the multiprocessor model, volatile is basically always a red
herring.

> > For instance, in lpt_raw_read_tick, I advise that you unconditionally
> > acquire the lock and use it for all access to the new fields you
> > added: rawbuf, rflags, rbfill, rbptr, laststat.
> 
> Yes, I daresay I should.  But, locked or not, they still need to be
> accessed through volatile names, or the new values could be cached
> somewhere not visible to other concurrent accesses.  (If the mutex
> implementation uses hooks into the C implementation such that volatile
> is guaranteed unnecessary, fine, but that really needs to be clearly
> spelled out in the documentation - and probably would be very fragile,
> as a new compiler, or a new compiler version, could break it.)

mutex(9) guarantees that: mutex_enter and mutex_exit include the
memory barriers needed to make data protected by the lock visible to
the next holder, so volatile is unnecessary for fields accessed only
under the lock.

> > Can you write down all the states you expect this thing to be in, and
> > the state transitions that each action -- lpt_raw_read_tick,
> > lpt_raw_read -- can take, and in what states lpt_raw_read must wait
> > for a tick to transition to another state before it can complete?
> 
> I think so, yes:
> 
> (1) Idle, not open
> (2) Open, no read() posted
> (3) Someone inside read(), not sleeping
> [...]

That's helpful, but I should have phrased my question a little more
clearly: Can you write down the states that the lpt_softc data
structure can be in?  Given a diagram of something like this:

                +------+-------+------+-----+
sc_state        | OPEN | OBUSY | INIT | RAW |
                +------+-------+------+-----+

                +--------------------------------+
rawbuf          | data...                        |
                +----|---------------|-----------+
                     \ rbptr         \ rbfill    \ LPT_RAWBUFSIZE

What are all the possible classes of tuples of values that sc_state,
rbptr, rbfill, &c., can have?  Each class is a discrete state.  In
each state, what can read do, and what state does it transition to?
In which states must read block?  In each state, what can read_tick
do, and what state does it transition to?  Which state transitions
does read_tick cause that may unblock read?
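One way to enumerate the discrete states of the (rbptr, rbfill) pair,
as a sketch; the classification and the names are hypothetical, not
taken from the driver.  read must block in RB_EMPTY; read_tick's
append is exactly the RB_EMPTY -> RB_PARTIAL transition that may
unblock read, and in RB_FULL a tick can no longer append:

```c
#include <assert.h>
#include <stddef.h>

#define RAWBUFSIZE 64

enum rb_state {
	RB_EMPTY,	/* rbptr == rbfill: read must block */
	RB_PARTIAL,	/* rbptr < rbfill < RAWBUFSIZE: read can progress */
	RB_FULL		/* rbfill == RAWBUFSIZE: read can progress,
			 * but read_tick cannot append */
};

static enum rb_state
rb_classify(size_t rbptr, size_t rbfill)
{
	/* Invariant assumed: 0 <= rbptr <= rbfill <= RAWBUFSIZE. */
	assert(rbptr <= rbfill && rbfill <= RAWBUFSIZE);
	if (rbptr == rbfill)
		return RB_EMPTY;
	if (rbfill == RAWBUFSIZE)
		return RB_FULL;
	return RB_PARTIAL;
}
```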

> Back in the days of using spl*() for mutual exclusion, the only harm
> from using too-high an spl was that you might block more than you need
> to (and, in some cases, can lock out whatever would normally wake you
> up).  Based on that, I would expect the use of a too-high IPL when
> creating a mutex to be a relatively benign mistake.  Is that inference
> wrong?

Just from the perspective of the program's function, you're mostly
right.  From the perspective of performance, using spin locks
unnecessarily wastes CPU time when the lock is contended, which
probably isn't really an issue in this case.

But from the perspective of the next human to read your code, you're
expressing a confusing message to them: `This code MUST exclude ALL
interrupts on the current CPU!'

When I read that, I wonder, `Huh, what IPL_HIGH interrupts could this
possibly be trying to communicate with?'  In this case, I think you're
actually trying to exclude _other_ lptopen and lptclose -- and, in
fact, _no_ interrupts.  Using a mutex would make that much clearer.
That's why I suggest a single sc_lock in struct lpt_softc that covers
~everything.

> > With LOCKDEBUG, if you know the lock address you can use `show lock'
> > in ddb.
> 
> ddb's lock == kernel's kmutex_t?  Or do I need to work out the address
> of the mutex's mtx_lock, or what?

ddb's lock == kernel's kmutex_t.  Pass the address of the kmutex_t
itself; you don't need to work out the address of its internal
mtx_lock.

