Subject: Re: LFS (was Thank you NetBSD)
To: Jonathan Stone <jonathan@dsg.stanford.edu>
From: Thor Lancelot Simon <tls@rek.tjls.com>
List: netbsd-users
Date: 02/23/2005 12:39:51
On Wed, Feb 23, 2005 at 07:56:19AM -0800, Jonathan Stone wrote:
> 
> Um. Didn't Margo Seltzer et al. claim that BSD-LFS storing inodes in
> the ifile reduced hairiness (relative to Mendel's Sprite-LFS) by
> eliminating much special-case handling for inodes?  Maybe in the 1993
> Winter USENIX paper?

Yes, they claimed that -- but, so far as I can tell, it was just a
claim, backed up by no data and not even by any kind of substantive
argumentation.

I have lost most, if not all, of the respect I ever had for the series of
Seltzer et al. LFS papers because even those results that should be
reproducible do not seem to be, and many measurements are plainly
interpreted "constructively", to be polite.  Ousterhout has written
detailed criticisms on some of these points and over time I find
them both more credible and more generally applicable to the BSD-LFS
work as a whole.

I once attempted to obtain and compile the version of the BSD-LFS
code used to obtain the results in the last Seltzer et al. LFS
paper.  I found the exercise rather frustrating.  I am not sure that
measurements of a filesystem that does not reliably meet its purported
consistency guarantees should be taken as demonstrating anything; if
the code scrambles your disk by the time it's done, why believe that
it wrote (or read) what the benchmark said it should in the first place?

> I seem to recall a claim that BSD-LFS putting inodes into the ifile
> (readonly to userspace, written into the log by the kernel like any
> other file, iirc) reduces seek activity measurably, relative to
> Sprite-LFS with inodes in a separate, fixed on-disk datastructure.

That's the claim.  There are a number of counterclaims, of which
Jesse's remark was to me the most persuasive.  I'll start with that
one: the complexity engendered by storing inodes in the log can make
cleaning intractable.  A related criticism that appears in a number
of later papers on cleaning (it's not treated in the Sprite papers
because the Sprite LFS _did_ use a separate structure) is that not
only cleaning but also _every update to a given file, and thus every
mtime update and inode rewrite,_ causes the inode to migrate away from
the oldest data in the file.  As a result, many common access patterns,
for example appending to a file which is then read from its
beginning by another process, generate heavy seek loads on a busy
disk once the filesystem matures.
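
To make that failure mode concrete, here is a throwaway C model; it is
not BSD-LFS code, and the block counts in it are invented.  It models a
log whose head only moves forward, with every append also rewriting the
file's inode at the head, and it prints how far a reader who opens the
file and starts at its beginning must seek to get from the inode back to
the first data block:

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        long head = 0;          /* next free block at the log head */
        long first_data = -1;   /* where the file's first block landed */
        long inode_loc = -1;    /* where the newest copy of the inode lives */
        int i;

        for (i = 1; i <= 100000; i++) {
                /* append one data block at the log head */
                if (first_data == -1)
                        first_data = head;
                head++;
                /* the mtime changed, so the inode is rewritten at the head */
                inode_loc = head++;

                if (i % 20000 == 0)
                        printf("after %6d appends: inode at block %7ld, "
                            "first data at block %ld, span %ld blocks\n",
                            i, inode_loc, first_data,
                            labs(inode_loc - first_data));
        }
        return 0;
}

No cleaning, no other files, no realistic disk geometry; the only point
is the shape of the access pattern, with the span growing as the file
ages.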

Sprite kept all the inodes at the middle of the disk.  This adapts
very well to hardware implementing virtual disks where some sectors
can be kept in NVRAM; or one can envision reserved "inode segments"
scattered throughout the disk, with the inode for a given file placed in
a given segment according to some criterion computed from in-core data.
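
For whatever it's worth, here is one hypothetical placement rule in C for
those imagined inode segments.  Nothing like this exists in our LFS, and
the names and constants (inoseg_for(), NSEG, and so on) are made up:
space the reserved segments evenly across the disk and put a file's
inode in the segment whose span holds its oldest live data block, which
the kernel can know from in-core state without touching the disk.

#include <stdio.h>

#define DISKBLOCKS      1000000L        /* made-up disk size, in blocks */
#define NSEG            64              /* made-up number of inode segments */
#define SEGSPAN         (DISKBLOCKS / NSEG)

/* block address of the middle of reserved inode segment 'n' */
static long
inoseg_addr(int n)
{
        return SEGSPAN * n + SEGSPAN / 2;
}

/* pick the inode segment whose span contains the file's oldest data block */
static int
inoseg_for(long oldest_data_blk)
{
        int n = (int)(oldest_data_blk / SEGSPAN);

        return n >= NSEG ? NSEG - 1 : n;
}

int
main(void)
{
        long oldest = 123456;   /* pretend this came from in-core state */
        int seg = inoseg_for(oldest);

        printf("oldest data at block %ld -> inode segment %d (near block %ld)\n",
            oldest, seg, inoseg_addr(seg));
        return 0;
}

A real policy would also have to worry about segment fullness and about
cleaning; the sketch only shows that the kernel already has what it
needs to make the choice.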

The most damning criticism, to me, though, is that the seek load
can generally be entirely eliminated simply by incoring all the
metadata while the filesystem runs.  As I understand it, that is
precisely what the current HFS+ implementation does.
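
My mental model of that, and it is only my reading and not anything taken
from Apple's code, looks roughly like the sketch below: read the inode
file once, sequentially, at mount time, keep the whole table in memory,
and serve every later metadata lookup from core so it never costs a seek.
The structure and function names here are invented.

#include <stdio.h>
#include <stdlib.h>

struct mem_inode {
        long    size;
        long    mtime;
        long    first_blk;      /* where the file's data starts on disk */
};

static struct mem_inode *itable;        /* the whole inode table, in core */
static long nino;

/* at mount: one sequential pass over the inode file would fill this in */
static void
load_inodes(long count)
{
        nino = count;
        itable = calloc(nino, sizeof(*itable));
        if (itable == NULL)
                abort();
}

/* every later lookup is a memory reference, never a seek */
static struct mem_inode *
iget(long ino)
{
        return (ino >= 0 && ino < nino) ? &itable[ino] : NULL;
}

int
main(void)
{
        load_inodes(100000);
        printf("inode 42: size %ld, first block %ld\n",
            iget(42)->size, iget(42)->first_blk);
        free(itable);
        return 0;
}

Writes still go through the log exactly as before; only the read path
changes.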

One thing that really, really doesn't lend credence to the BSD-LFS
claims about the ifile reducing seek load is that until Konrad
fixed it in our code, every single access to a file caused an
update to its inode, thus an extra seek.  This makes the Seltzer
group's measurements of overall seek load seem very questionable,
to me, just as the "sort blocks backwards" bug made their throughput
measurements seem very questionable to Ousterhout.
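
In caricature (this is not our kernel code, and I am not describing
Konrad's actual change, only the shape of the problem), the difference is
whether a plain read marks the inode dirty and thereby drags an inode
write, and the seek that comes with it, into the next segment write:

#include <stdio.h>
#include <time.h>

struct toy_inode {
        time_t  atime;
        int     dirty;
};

static long inode_writes;

/* the bug: every access dirties the inode */
static void
access_always_dirty(struct toy_inode *ip)
{
        ip->atime = time(NULL);
        ip->dirty = 1;
}

/* the alternative: remember the new atime in core, write it out later */
static void
access_deferred(struct toy_inode *ip)
{
        ip->atime = time(NULL);         /* in-core only */
}

/* what the next segment write would have to include */
static void
sync_inode(struct toy_inode *ip)
{
        if (ip->dirty) {
                inode_writes++;
                ip->dirty = 0;
        }
}

int
main(void)
{
        struct toy_inode ino = { 0, 0 };
        int i;

        for (i = 0; i < 1000; i++) {
                access_always_dirty(&ino);
                sync_inode(&ino);
        }
        printf("always dirty: %ld inode writes for 1000 reads\n", inode_writes);

        inode_writes = 0;
        for (i = 0; i < 1000; i++) {
                access_deferred(&ino);
                sync_inode(&ino);
        }
        printf("deferred:     %ld inode writes for 1000 reads\n", inode_writes);
        return 0;
}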

-- 
 Thor Lancelot Simon	                                      tls@rek.tjls.com

"The inconsistency is startling, though admittedly, if consistency is to be
 abandoned or transcended, there is no problem."		- Noam Chomsky