Subject: Re: LFS frailty vs. datestamping [Was Re: /dev/clock pseudodevice]
To: Timothy E. Denehy <firstname.lastname@example.org>
From: Bill Studenmund <email@example.com>
Date: 07/30/2001 14:03:53
On Mon, 30 Jul 2001, Timothy E. Denehy wrote:
> > LFS can do much better than FFS _on_certain_work_loads_. LFS is great for
> > data which are written all at once and not modified much. Infrequent
> > modification isn't bad, as the cleaner can come along and fix things. But
> > for a file where the same blocks get modified over and over, FFS will do
> > better than LFS. FFS just changes the existing data blocks. The only
> > possible metadata change would be m & c-time updates. LFS, though, has to
> > write each block out each time.
> If the same blocks are modified repeatedly at a sufficient rate, the updates
> will be absorbed by the buffer cache, and LFS will not write out each block
> each time. Furthermore, if the modified blocks are selected randomly from the
> file, FFS will perform a number of random writes when the blocks are flushed
> from the cache, whereas LFS will coalesce these blocks into a sequential write,
> and therefore LFS write performance will be much better.
> > I think something like a database receiving lots of updates would be a bad
> > traffic pattern for LFS. You can get parts of the files updated tens to
> > hundreds of times per second (depending on what the update rate is). While
> > a bunch of those writes could get coalesced (can LFS do that?), you're
> > still generating segments at a good rate. :-)
> The same argument applies here.
I think you missed my point. My point is not that LFS can or can't write
said blocks out faster. The point is that certain traffic patterns will,
for LFS, generate lots of stale or partially stale segments, with
partially stale being the worst.
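To make the stale-segment effect concrete, here is a toy model (my own sketch, not NetBSD's LFS code; the segment size, file size, and churn rate are made-up numbers): every overwrite of a block is appended at the log head, leaving a dead copy of that block behind in an older segment.

```python
# Toy log-structured disk: overwrites append at the log head, so a
# churning file leaves dead block copies scattered through old segments.
import random

SEG_BLOCKS = 8          # blocks per segment (hypothetical size)
FILE_BLOCKS = 16        # size of the churning file, in blocks

random.seed(0)
location = {}           # file block -> (segment, slot) of its live copy
segments = []           # each segment is a list of file-block numbers

def append(block):
    """Write a block at the log head, superseding any older copy."""
    if not segments or len(segments[-1]) == SEG_BLOCKS:
        segments.append([])
    segments[-1].append(block)
    location[block] = (len(segments) - 1, len(segments[-1]) - 1)

# Initial sequential write of the whole file.
for b in range(FILE_BLOCKS):
    append(b)

# Churn: repeatedly overwrite random blocks of the same file.
for _ in range(32):
    append(random.randrange(FILE_BLOCKS))

# A block in a segment is live only if it is still the newest copy.
def live_count(seg_idx):
    return sum(1 for slot, b in enumerate(segments[seg_idx])
               if location[b] == (seg_idx, slot))

for i, seg in enumerate(segments):
    print(f"segment {i}: {live_count(i)}/{len(seg)} blocks live")
```

Running it shows the older segments ending up partially live, partially dead: exactly the partially stale segments the cleaner then has to copy live data out of before the space can be reused.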
Said another way, my point is that LFS is optimized for certain things
(see quote following).
To quote McKusick et al. in the 4.4BSD book, section 8.3, page 285: "The
LFS is optimized for writing, and no seek is required between writes,
regardless of the file to which the writes belong. It is also optimized
for reading files written in their entirety over a brief period of time."
If the usage pattern doesn't match what LFS was optimized for, it won't do
well. Reclaiming free space, and reading a file that was not written in its
entirety over a brief period, are not things LFS optimizes for. With a
continually-churning file, you can generate a usage pattern LFS wasn't
optimized for, and in extreme cases LFS can behave very poorly.
This assertion doesn't mean that LFS is bad, or shouldn't be developed.
Just that everyone should realize it's good for some things and not others.