Subject: Re: CVS commit: src
To: Gordon Waidhofer <>
From: Bill Studenmund <>
List: tech-kern
Date: 06/22/2004 10:07:25
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 22, 2004 at 01:26:54AM -0700, Gordon Waidhofer wrote:
> > >
> > > ... it as a requirement that file
> > > systems do real, hard-core locking. And given the state of things when I
> > > started, that was a very good thing.
> >
> > why do you think that exposing a lock is a requirement?
> > why do you think that exposing a lock is a requirement?
> I'd like to ask the same question differently.
> Suppose a file system's VOP_LOCK() and VOP_UNLOCK()
> are no-ops, and the file system can be trusted to
> do the right thing (not really that hard) for the
> primary VOPs (LOOKUP, READ, WRITE, etc). What
> semantics would break?

Whatever access callers of the file system expected to be serialized now
wouldn't be; i.e., any case where a caller called VOP_LOCK() and expected
an exclusive lock to be in place, especially if it expected that lock to
be held across a call to ltsleep().
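To make the failure mode concrete, here is a small user-space sketch (plain Python, not NetBSD kernel code; the class and function names are invented for illustration). A caller takes what it believes is an exclusive lock, sleeps mid-operation (standing in for ltsleep()), and assumes the vnode is unchanged when it wakes. If the file system's lock is a no-op, a second caller can slip in during the sleep:

```python
import threading

class NoopLock:
    """A file system whose VOP_LOCK()/VOP_UNLOCK() do nothing."""
    def acquire(self, blocking=True):
        return True
    def release(self):
        pass

class Vnode:
    def __init__(self, lock):
        self.lock = lock          # real lock or NoopLock
        self.size = 100

def op_with_sleep(vn, entered, resume, result):
    """Caller reads state under 'the lock', sleeps, then rechecks it."""
    vn.lock.acquire()
    observed = vn.size            # state read while "holding" the lock
    entered.set()                 # simulate going to sleep mid-operation,
    resume.wait()                 #   as with ltsleep()
    result.append(vn.size != observed)  # True => state changed under us
    vn.lock.release()

def run(lock):
    vn = Vnode(lock)
    entered, resume, result = threading.Event(), threading.Event(), []
    t = threading.Thread(target=op_with_sleep,
                         args=(vn, entered, resume, result))
    t.start()
    entered.wait()
    # A second caller tries to modify the vnode while the first sleeps.
    if vn.lock.acquire(blocking=False):   # succeeds only if the lock is fake
        vn.size = 0
        vn.lock.release()
    resume.set()
    t.join()
    return result[0]
```

With a real `threading.Lock`, the second caller's non-blocking acquire fails and the sleeper's view stays consistent; with `NoopLock`, the vnode is truncated out from under it, which is exactly the serialization callers were relying on VOP_LOCK() to provide.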

Also, for things like delete and rename, would it be so easy? Or file

> From VOP_LOCK(9):
>     VOP_LOCK() is used to serialise access to the
>     file system such as to prevent two writes to
>     the same file from happening at the same time.
> Why? Is this a semantic of the file model? Or is
> this a context to make things "easier" on the
> underlying file system?

It is file system semantics. A call to write(2), barring errors, is
supposed to be atomic. Thus if you have two write calls that overlap, the
overlapping data are to have come from one call or the other, not some mix
of both.
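A toy model (again plain Python, not kernel code) of what that guarantee means: two overlapping writes, each applied as a single indivisible operation, can only produce one of the two serialized results. A byte-by-byte interleaving of the two buffers is precisely what the serialization rules out.

```python
def apply_write(buf, offset, data):
    """Apply one write as a single indivisible operation."""
    return buf[:offset] + data + buf[offset + len(data):]

def serialized_outcomes(buf, w1, w2):
    """All final contents permitted by atomic write semantics."""
    return {
        apply_write(apply_write(buf, *w1), *w2),  # w1 then w2
        apply_write(apply_write(buf, *w2), *w1),  # w2 then w1
    }

base = b"----------"              # a 10-byte file
w1 = (0, b"AAAAAA")               # 6 bytes written at offset 0
w2 = (3, b"BBBBBB")               # overlapping write at offset 3

outcomes = serialized_outcomes(base, w1, w2)
# Only two results are legal; in the overlapping region one write's
# bytes win outright:
#   b"AAABBBBBB-"  (w1 then w2)
#   b"AAAAAABBB-"  (w2 then w1)
```

Anything like `b"AABABBAB..."` in the overlap would mean the two writes interleaved mid-call, violating the atomicity that VOP_LOCK() is there to preserve.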

Take care,

