Subject: Re: LFS (was Thank you NetBSD)
To: None <netbsd-users@netbsd.org>
From: Jesse Off <joff@embeddedARM.com>
List: netbsd-users
Date: 02/18/2005 16:54:13

I worked a little on LFS a few years ago with Konrad Schroeder. One of the
basic issues with LFS is that the cleaner (aka garbage collector) has to
move live filesystem blocks out of a segment into another segment to make
space for the log to grow into. The core problem is that when you move file
blocks around, their indirect and inode blocks (which may live in another
segment) get dirtied as a side effect and have to be rewritten themselves.
The end result is that the cleaner cannot guarantee to make any forward
progress at all in any reasonably bounded amount of time or number of
operations, due to the potential interdependencies of the indirect and
inode blocks. In fact, it can even make the filesystem's free-space
condition worse. The cleaner is for the most part very efficient at
reclaiming space, but simply reserving 25% of the disk does not
necessarily make it invulnerable, either, especially under atypical
filesystem write activity.
Indeed, the worst-case scenario is pretty bad: take for instance a segment
being cleaned in which every non-obsolete block is a large-offset file
block belonging to a different inode. Moving each such block means
rewriting two indirect blocks plus the file's inode block. If those
metadata blocks aren't in the segment being cleaned, LFS now has to find
space for 3 extra meta-data blocks just to move this one data block. In
test cases, I've seen this result in a vicious cycle that makes the
filesystem unusable at well under 75% full. (I can't remember the exact
tests I used; it's been a few years...)
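To make the arithmetic concrete, here's a back-of-envelope sketch in
Python. This is my own illustration, not LFS code: the segment size and
live-block counts are made-up numbers, and the model ignores that the
superseded metadata copies elsewhere eventually become dead space too.

```python
# Toy model (hypothetical numbers, not NetBSD code) of the worst case
# described above: every live block in the segment being cleaned belongs
# to a different file at a large offset, so moving it dirties two
# indirect blocks plus the inode block, all resident in OTHER segments.

SEG_BLOCKS = 128  # blocks per segment (assumed for illustration only)

def clean_segment(live_blocks, meta_per_block=3):
    """Return (blocks_reclaimed_now, blocks_written).

    live_blocks    -- live data blocks in the segment being cleaned
    meta_per_block -- metadata blocks dirtied per moved data block
                      (2 indirect + 1 inode in the worst case above)
    """
    blocks_written = live_blocks * (1 + meta_per_block)
    blocks_reclaimed_now = SEG_BLOCKS  # the cleaned segment is freed whole
    return blocks_reclaimed_now, blocks_written

# A segment that is 77/128 (about 60%) live in the worst case:
reclaimed, written = clean_segment(live_blocks=77)
print(reclaimed, written)   # 128 308 -- frees one segment, writes 2.4x that
print(reclaimed - written)  # -180 -- the cleaner has lost clean space
```

With 4x write amplification, cleaning only gains ground immediately when a
segment is less than about 25% live, which is why the 25% reserve isn't a
hard guarantee. The old metadata copies do become obsolete, but they are
scattered across segments that must themselves be cleaned: the cycle.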
//Jesse Off
> I would recommend using LFS on a desktop machine which doesn't have to be
> ultra-reliable, for transient, often-written data such as temporary CVS
> checkouts and object directories. Just don't put anything precious there
> and be prepared to newfs the partition.