Subject: Re: softdep?
To: Mason Loring Bliss <mason@acheron.middleboro.ma.us>
From: Thor Lancelot Simon <tls@rek.tjls.com>
List: current-users
Date: 03/25/1999 12:39:47
On Thu, Mar 25, 1999 at 07:54:33AM -0500, Mason Loring Bliss wrote:
> On Thu, Mar 25, 1999 at 12:04:05AM -0600, Brian C. Grayson wrote:
> 
> > Look at the SEE ALSO section for mount_lfs -- it gives
> > references for some Ousterhout and Seltzer papers (4 total).
> > The McKusick book also discusses the in-tree version (modulo all
> > the recent changes) in some detail.
> 
> Hm. I guess I'll look for online versions of these... If you have any
> pointers to electronic versions, they'd be appreciated.

For papers on the BSD implementation, see:
http://www.eecs.harvard.edu/~margo/papers/

(a link to Ousterhout's rebuttal of some of Seltzer's performance criticism
is there, too)

For papers on the original Sprite implementation, see:
ftp://ftp.cs.berkeley.edu/pub/sprite/sprite.papers.html

A fascinating paper on how to improve LFS performance, much of which is
relevant to our LFS (the cleaner policy changes, though not the adaptive
selection of segment sizes, which would be Hard) can be found at:
http://feeleymac.cs.ubc.ca/SOSP%2016/PAPERS/NEEFE/NEEFE.HTM

The two big issues with LFS performance (not that other filesystems don't
have their own, mind you!) are described and addressed in the last paper.
The first is poor performance when the disk is very full -- and research
has shown that for optimal performance, "very" means "more than 80%".  This
is because the cleaner has to read a huge number of blocks to find enough
free space to enable writing to continue.  They give a "hole-plugging"
method that can ameliorate this, and it could probably be inserted into
our cleaner without too much pain.  The second is that for random read
workloads, particularly when the disk is large compared to the cache (and
remember, we have a comparatively puny buffer cache, particularly on
the x86 port), LFS can provide very poor read performance, and cleaning can
make things worse by clumping unrelated data together.  They propose an
extension of Rosenblum's original file-coalescing cleaner idea which
reorganizes data to optimize for reads, somewhat like FFS does.  If you're 
cleaning, you have to move the data anyway, so why not move it where it's
cheapest to read?
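
To make the hole-plugging trade-off concrete, here's a toy cost model in
Python.  The formulas and the random-write penalty factor are my own
illustrative assumptions, not the paper's actual analysis and not anything
in our cleaner:

```python
# Toy cost model: traditional LFS segment cleaning vs. the
# "hole-plugging" policy from the paper above.  All formulas and
# constants are illustrative guesses, not the paper's analysis.

def clean_cost(u):
    """I/O (in whole-segment transfers) to produce one empty segment
    by traditional cleaning at disk utilization u: read 1/(1-u)
    segments, write back u/(1-u) segments' worth of live data."""
    if u >= 1.0:
        return float("inf")  # no dead blocks to reclaim
    return (1.0 + u) / (1.0 - u)

def plug_cost(u, random_write_penalty=12.0):
    """I/O-equivalent cost to empty one segment by hole-plugging:
    read its live fraction u, then write those blocks individually
    into free holes elsewhere.  Scattered single-block writes are
    charged a penalty relative to one sequential segment write."""
    return u + random_write_penalty * u

def choose_policy(u):
    """Pick the cheaper way to make an empty segment (only meaningful
    once the disk is full enough that cleaning is needed at all)."""
    return "hole-plug" if plug_cost(u) < clean_cost(u) else "clean"
```

With these made-up constants the crossover lands a little above 80%
utilization, which is at least consistent with the "very full" threshold
described above.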

That'd be harder to implement, but it would be So Cool if someone would
do it...
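
A minimal sketch of what "move it where it's cheapest to read" could mean:
when writing out live blocks salvaged from victim segments, group them by
file and logical offset, so a later sequential read of any one file hits
contiguous disk blocks.  This is my toy illustration, not the algorithm
from the paper:

```python
from collections import namedtuple

# A live block salvaged during cleaning, identified by the file
# (inode) it belongs to and its logical offset within that file.
Block = namedtuple("Block", ["inode", "offset"])

def coalescing_clean(live_blocks, seg_size):
    """Repack live blocks into new segments grouped by file and
    ordered by logical offset, so sequential reads of a single file
    touch contiguous blocks (roughly the locality FFS layout gives
    you).  Returns the list of new segments to write out."""
    ordered = sorted(live_blocks, key=lambda b: (b.inode, b.offset))
    return [ordered[i:i + seg_size]
            for i in range(0, len(ordered), seg_size)]
```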

-- 
Thor Lancelot Simon	                                      tls@rek.tjls.com
	"And where do all these highways go, now that we are free?"