Subject: Re: mp->mnt_vnodelist change
To: Ignatios Souvatzis <>
From: Bill Studenmund <>
List: tech-kern
Date: 10/19/2006 15:54:06
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 19, 2006 at 08:44:48PM +0200, Ignatios Souvatzis wrote:
> On Thu, Oct 19, 2006 at 11:19:37AM -0700, Bill Studenmund wrote:
> > This hits on the real problem. We need to be able to issue all of the
> > writes as async and wait for all of them to complete. As SANs become more
> > and more accessible, we need to be able to have lots of i/o outstanding.
> But we can't do this when we want write A to complete before B is
> attempted, can we?
> (Sorry, I missed the start of discussion; if this isn't relevant here
> just mention it.)

This is kinda relevant.

The start of the discussion regarded the fact that on UDF, changing the
order in which we flush vnodes GREATLY improved the speed of the flush.
The problem was that we only had one vnode's worth of i/o outstanding at
once, so we couldn't sort i/o. Also, the i/o's happened in reverse-seek
order, which is VERY SLOW for CDs.

My thought is that if we could flush in parallel, we could get all the i/o
into an elevator sort, and thus get better performance.

You're discussing the fact that certain metadata update sequences
(updating a vnode) need to be handled in a specific order.

I would LOVE to handle lots of metadata updates in parallel. Obviously
each individual node would see a sequence of updates, but as many of the
sequence steps as possible would be outstanding at once. I'm not sure how
to do this though, as each of these updates requires a thread context, and
such a context can only handle one at a time.

Take care,

