tech-kern archive


Re: Locking strategy for device deletion (also see PR kern/48536)



paul%whooppee.com@localhost (Paul Goyette) writes:

>On Wed, 8 Jun 2016, Michael van Elst wrote:

>> paul%whooppee.com@localhost (Paul Goyette) writes:
>>
>>>> See miscfs/specfs/spec_vnops.c::spec_close().
>>
>>> yes, that would certainly explain the situation.  It does, however, make
>>> it rather difficult to maintain a valid ref-count!
>>
>> specfs does the open refcounting. The device only has a single bit, open
>> sets it and close clears it. That bit is added to a common counter
>> that is used for other references.

>Hmmm.  Would it be valid, then, for my close() routine to reset the 
>ref-count to zero rather than simply decrementing?  Does the close() 
>only get called if there are _NO_ outstanding open()s for _any_ process?


specfs maintains a reference counter; when that counter drops to zero,
it calls the driver's close function.
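The close-on-last-reference behaviour can be sketched in plain C like
this (a minimal model, not the actual specfs code; the struct and
function names are invented for illustration):

```c
#include <assert.h>

/* Hypothetical model of specfs-style open counting: the driver's
 * d_close only runs when the last reference goes away. */
struct specnode {
	int	sn_refcnt;	/* open references */
	int	sn_closecalls;	/* times the driver's close ran */
};

static void
drv_close(struct specnode *sn)
{
	sn->sn_closecalls++;	/* stands in for the driver's d_close */
}

void
spec_ref(struct specnode *sn)
{
	sn->sn_refcnt++;
}

void
spec_unref(struct specnode *sn)
{
	/* Only the transition to zero reaches the driver. */
	if (--sn->sn_refcnt == 0)
		drv_close(sn);
}
```

This is also why resetting the count to zero in the driver's close
routine would be wrong: by the time the driver's close is called, the
counter has already reached zero on its own.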

So you have two phases for device access.

Before the device is made available through specfs, you can open and
close it yourself. Since you should be the only user at that point,
you don't need to count anything. This is used, for example, to read
the disk label.

Later you must go through specfs, which allows concurrent access with
reference counting, and specfs will do the right thing.

The wedge code has to handle this. The first wedge that is opened
for a device also opens the raw device through specfs (which is then
busy). There is a separate reference counter for this, so that
the raw device is closed when the last wedge is closed.

The function dkwedge_read (used to scan for things like GPT) has
to handle both cases. If there is an open wedge, it uses the
already open raw device (and bumps the reference counter).
If there is no open wedge yet, it opens the raw device through
specfs.
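The wedge bookkeeping described above can be sketched as follows (a
simplified model of the dk(4) logic; the struct, field, and function
names here are invented, not the actual dkwedge identifiers):

```c
#include <assert.h>

/* Hypothetical model: a counter of open wedges on a parent disk,
 * plus a flag for whether the raw device is open through specfs. */
struct disk_parent {
	int	dp_rawopens;	/* wedges holding the raw device open */
	int	dp_specfs_open;	/* 1 while raw device is open via specfs */
};

static void
raw_open(struct disk_parent *dp)  { dp->dp_specfs_open = 1; }
static void
raw_close(struct disk_parent *dp) { dp->dp_specfs_open = 0; }

void
wedge_open(struct disk_parent *dp)
{
	if (dp->dp_rawopens++ == 0)
		raw_open(dp);	/* first wedge opens the raw device */
}

void
wedge_close(struct disk_parent *dp)
{
	if (--dp->dp_rawopens == 0)
		raw_close(dp);	/* last wedge closes it */
}

/* Models dkwedge_read's two cases. */
void
wedge_read(struct disk_parent *dp)
{
	if (dp->dp_rawopens > 0) {
		dp->dp_rawopens++;	/* reuse the open raw device */
		/* ... read ... */
		wedge_close(dp);
	} else {
		raw_open(dp);		/* temporary open via specfs */
		/* ... read ... */
		raw_close(dp);
	}
}
```

Either way, the raw device ends up closed exactly when no wedge (and
no in-flight read) still needs it.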

-- 
                                Michael van Elst
Internet: mlelstv%serpens.de@localhost
                                "A potential Snark may lurk in every tree."

