
Re: pserialized reader/writer locks



Taylor R Campbell <riastradh%NetBSD.org@localhost> wrote:
> While reading the fstrans(9) code a few months ago and trying to
> understand what it does, I threw together a couple of simple
> pserialize(9)-based reader/writer locks -- one recursive for nestable
> transactions like fstrans(9), the other non-recursive.
> 
> Unlike rwlock(9), frequent readers do not require interprocessor
> synchronization: they use pool_cache(9) to manage per-CPU caches of
> in-use records which writers wait for.  Writers, in contrast, are very
> expensive.
> 
> The attached code is untested and just represents ideas that were
> kicking around in my head.  Thoughts?  Worthwhile to experiment with?
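If I read the proposal correctly, the reader fast path amounts to
something like this (invented names and a deliberate simplification,
not the attached code):

#include <sys/types.h>
#include <sys/cdefs.h>
#include <sys/pool.h>
#include <sys/pserialize.h>

struct psrw {
	pool_cache_t	prw_cache;	/* per-CPU cache of reader records */
	volatile bool	prw_writer;	/* a writer is pending/active */
	/* plus a pserialize_t, mutex and condvar for the writer side */
};

void *psrw_read_enter_slow(struct psrw *);	/* blocking path, not shown */

void *
psrw_read_enter(struct psrw *prw)
{
	void *record;
	int s;

	s = pserialize_read_enter();
	if (__predict_false(prw->prw_writer)) {
		/* A writer is waiting: take the expensive blocking path. */
		pserialize_read_exit(s);
		return psrw_read_enter_slow(prw);
	}
	/*
	 * CPU-local allocation, no interprocessor synchronisation.
	 * The record would go onto a per-CPU in-use list (not shown)
	 * which a writer later waits to drain; a NULL return would
	 * also need handling.
	 */
	record = pool_cache_get(prw->prw_cache, PR_NOWAIT);
	pserialize_read_exit(s);
	return record;
}

That is, the entire read side is pserialize_read_enter() plus a
CPU-local allocation, and every other cost is pushed onto the writer.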

I would expect a better problem statement even if it is a brain dump (one
sentence would have been enough).  Are you trying to solve the
sleepable-reader problem?  If so, bolting quasi-read/write-lock semantics
onto pserialize(9) in an ad-hoc way is the wrong approach.  If you are
building a read-optimised *lock*, then it should be designed and
implemented as such a mechanism rather than as an ad-hoc wrapper.

Basically, there are two approaches:

a) Implement a read-optimised / read-mostly lock.  Years ago ad@ wrote an
implementation of rdlock(9).  It was never published, but he added a BSD
license, so I guess it is okay to post here:

http://www.netbsd.org/~rmind/kern_rdlock.c

Alternatively, there is FreeBSD's rmlock(9):

http://nxr.netbsd.org/source/xref/src-freebsd/sys/kern/kern_rmlock.c
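The core of such a lock is CPU-local reader accounting with a writer
that makes itself visible and then waits for the readers to drain.  A
rough sketch (invented names, much simplified from what rdlock(9) or
rmlock(9) actually do; the necessary memory barriers are reduced to
comments):

#include <sys/types.h>
#include <sys/cpu.h>
#include <sys/mutex.h>
#include <sys/percpu.h>
#include <sys/proc.h>
#include <sys/systm.h>

struct rmrw {
	percpu_t	*rm_count;	/* percpu_alloc(sizeof(long)) */
	kmutex_t	 rm_lock;	/* serialises writers */
	volatile bool	 rm_writer;	/* a writer is pending/active */
};

void
rmrw_read_exit(struct rmrw *rm)
{
	long *ctr;

	/* May run on a different CPU than the enter: counts are signed. */
	ctr = percpu_getref(rm->rm_count);
	(*ctr)--;
	percpu_putref(rm->rm_count);
}

void
rmrw_read_enter(struct rmrw *rm)
{
	long *ctr;

	for (;;) {
		ctr = percpu_getref(rm->rm_count);	/* CPU-local */
		(*ctr)++;
		percpu_putref(rm->rm_count);
		/* Need a store-before-load barrier here. */
		if (__predict_true(!rm->rm_writer))
			return;		/* the writer will see our count */
		/* Lost a race with a writer: back out and wait. */
		rmrw_read_exit(rm);
		while (rm->rm_writer)
			kpause("rmrwr", false, 1, NULL);
	}
}

static void
rmrw_sum(void *v, void *cookie, struct cpu_info *ci)
{
	*(long *)cookie += *(long *)v;
}

void
rmrw_write_enter(struct rmrw *rm)
{
	long total;

	mutex_enter(&rm->rm_lock);
	rm->rm_writer = true;
	/* Make rm_writer globally visible, e.g. a no-op xc_broadcast(9). */
	for (;;) {
		total = 0;
		percpu_foreach(rm->rm_count, rmrw_sum, &total);
		if (total == 0)
			break;
		kpause("rmrww", false, 1, NULL);
	}
}

void
rmrw_write_exit(struct rmrw *rm)
{
	rm->rm_writer = false;
	mutex_exit(&rm->rm_lock);
}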

b) Implement a grace-period-based synchronisation mechanism which allows
readers to sleep, such as SRCU.  There are some dangers here:

http://lists.freebsd.org/pipermail/freebsd-arch/2014-June/015435.html
http://lists.freebsd.org/pipermail/freebsd-arch/2014-June/015454.html

However, it can provide a less expensive writer side and more granular
garbage collection of objects.  I think this approach is worth considering.
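A toy sketch of the epoch-flipping idea (invented names, not Linux SRCU
itself; a real implementation would use per-CPU counters rather than
shared atomics):

#include <sys/types.h>
#include <sys/atomic.h>
#include <sys/proc.h>
#include <sys/systm.h>

struct ssync {
	volatile unsigned	ss_epoch;	/* current epoch: 0 or 1 */
	volatile unsigned	ss_count[2];	/* readers in each epoch */
};

unsigned
ssync_read_enter(struct ssync *ss)
{
	unsigned e;

	for (;;) {
		e = ss->ss_epoch;
		atomic_inc_uint(&ss->ss_count[e]);
		membar_enter();		/* publish the count, then re-check */
		if (__predict_true(ss->ss_epoch == e))
			return e;	/* may now sleep freely */
		/* The epoch flipped under us: back out and retry. */
		atomic_dec_uint(&ss->ss_count[e]);
	}
}

void
ssync_read_exit(struct ssync *ss, unsigned e)
{
	membar_exit();			/* finish our reads, then retire */
	atomic_dec_uint(&ss->ss_count[e]);
}

/* Writer side: callers must serialise with each other, e.g. mutex(9). */
void
ssync_synchronize(struct ssync *ss)
{
	unsigned old = ss->ss_epoch;

	ss->ss_epoch = old ^ 1;		/* new readers use the other epoch */
	membar_sync();
	while (ss->ss_count[old] != 0)	/* wait out the old epoch's readers */
		kpause("ssync", false, 1, NULL);
}

A writer unlinks an object, calls ssync_synchronize(), and only then
frees it; unlike with pserialize(9), the readers being waited for may
block in the middle of their read sections.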

-- 
Mindaugas

