Subject: Smoother writing for LFS
To: tech-kern@netbsd.org
From: Thor Lancelot Simon <tls@rek.tjls.com>
List: tech-kern
Date: 10/23/2006 19:06:15
I've been thinking a bit about smoothing out the write bursts caused by
the interaction between LFS and our current "smooth" syncer.  I think I
might have a fairly simple solution.

1) Rather than a global estimate, maintain an estimate per-filesystem
   of the current number of dirty pages.  I'm not sure how hard this
   would be, and would appreciate feedback.

2) Maintain, per filesystem, minimum and maximum "target write sizes".

3) Once per second, traverse the list of filesystems, and for any
   filesystem with more than the minimum outstanding, clean until there's
   nothing left or we hit the maximum.  (A rough sketch of all three
   pieces follows this list.)
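
Something like this, with all of the names made up for illustration
(none of this is existing code, just the shape of the idea):

#include <stddef.h>
#include <stdint.h>

/*
 * Per-filesystem bookkeeping for items 1 and 2.  A real version
 * would presumably hang off struct mount.
 */
struct fs_writestate {
	struct fs_writestate *ws_next;	/* list of mounted filesystems */
	uint64_t ws_dirty;	/* dirty-page estimate, in bytes (item 1) */
	uint64_t ws_minwrite;	/* minimum target write size (item 2) */
	uint64_t ws_maxwrite;	/* maximum target write size (item 2) */
};

/* Hypothetical hook: clean up to 'limit' bytes; returns bytes written. */
uint64_t fs_clean_some(struct fs_writestate *, uint64_t limit);

/*
 * Item 3: run once per second.  Any filesystem holding more than its
 * minimum gets cleaned until nothing is left or we hit its maximum.
 */
void
writer_tick(struct fs_writestate *all)
{
	struct fs_writestate *ws;
	uint64_t done;

	for (ws = all; ws != NULL; ws = ws->ws_next) {
		if (ws->ws_dirty <= ws->ws_minwrite)
			continue;
		done = fs_clean_some(ws, ws->ws_maxwrite);
		ws->ws_dirty -= (done > ws->ws_dirty) ? ws->ws_dirty : done;
	}
}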

The sizes in #2 would also be useful for teaching NFS server write
gathering that LFS prefers to write a minimum of one segment at a time.
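
For instance, the nfsd write path could keep gathering small client
writes and only flush once a full segment has accumulated.  A
hypothetical check, just to show where the minimum would plug in:

#include <stdint.h>

/*
 * Hypothetical: keep gathering client writes until at least one
 * LFS segment's worth is pending, then issue the write.
 */
int
nfsd_should_flush(uint64_t pending_bytes, uint64_t fs_minwrite)
{
	return pending_bytes >= fs_minwrite;
}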

It is easy for LFS to track the disk's current write bandwidth, so we
could set the maximum size such that the disk is never more than X% busy
with background writes in any given second.
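
The arithmetic for that cap is trivial; something like this (again
just a sketch -- the bandwidth estimate itself would come from LFS's
existing segment accounting):

#include <stdint.h>

/*
 * Per-second maximum: never let background writes consume more than
 * 'pct' percent of the disk's measured write bandwidth.
 */
uint64_t
fs_max_write(uint64_t disk_bytes_per_sec, unsigned pct)
{
	return disk_bytes_per_sec * pct / 100;
}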

The only problem is this: we don't have any way to track (or clean) only
the set of pages whose backing store is on a particular filesystem.  And
I don't know what a good interface for that might look like, or what it
would cost -- again, this is an area where I'd appreciate suggestions.
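
To make the question concrete, one strawman shape for such an
interface -- per-mount accounting plus a per-mount clean call -- might
be the following (none of this exists in UVM today, and I have no idea
yet what it would cost):

#include <stdint.h>

struct mount;	/* the filesystem's mount point, opaque here */

/* Charge/uncharge dirtied pages to their backing filesystem. */
void	uvm_page_dirty_account(struct mount *, int npages);

/* Clean up to 'maxbytes' of dirty pages backed by 'mp'. */
uint64_t uvm_clean_mount(struct mount *mp, uint64_t maxbytes);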

However, we could quite possibly implement this first for metadata
buffers, where it might address some of the issues with our syncer by
reducing the amount of outstanding data it handles.
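
Metadata buffers already know their vnode, and hence their mount, so a
per-mount pass over dirty buffers is easy to picture.  Very roughly
(the list and bwrite_async() here are stand-ins for the buffer cache's
real queues and locking):

#include <stddef.h>
#include <stdint.h>

struct buf {
	struct buf *b_next;	/* next dirty buffer on this mount */
	uint64_t b_bcount;	/* valid bytes in this buffer */
};

void bwrite_async(struct buf *);	/* schedule the write, don't wait */

/* Push out dirty metadata for one mount until the target is met. */
uint64_t
flush_mount_metadata(struct buf *dirtylist, uint64_t maxbytes)
{
	struct buf *bp, *next;
	uint64_t written = 0;

	for (bp = dirtylist; bp != NULL && written < maxbytes; bp = next) {
		next = bp->b_next;
		bwrite_async(bp);
		written += bp->b_bcount;
	}
	return written;
}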

-- 
  Thor Lancelot Simon	                                     tls@rek.tjls.com

  "We cannot usually in social life pursue a single value or a single moral
   aim, untroubled by the need to compromise with others."      - H.L.A. Hart