Subject: RE: JFS
To: None <>
From: Bill Studenmund <>
List: tech-kern
Date: 02/24/2003 10:10:24
On Thu, 20 Feb 2003 wrote:

> The points I meant to make are:
> 1. embedded systems need to be able to save data that is non-volatile
> gathered at run-time.
> 2. A standard file system is an easy interface to save such data
> 3. This standard file system needs to have a slightly different behavior
> when using flash as a storage medium (fewer writes, compression, etc.)
> 4. JFFS fits part of these requirements
> 5. NetBSD doesn't have flash block device support (at least in 1.6) now.
> 6. Embedded systems sometimes go down "uncleanly" making a more fault
> tolerant FS a better option.  JFFS being a "log-based" file system is more
> fault tolerant.
> 7. JFFS would be a good start at supporting flash better in NetBSD.

I disagree with 6) and 7).

7) mainly because the typical journal size (say 16 MB) is on the order of the
size of most flash! Also, all metadata writes are written to the disk and
to the journal, so the journal is going to be written a lot. I think
that'll blow through your block usage & wear leveling.

6) is wrong because log-based file systems aren't journalized file
systems. A journalized file system is an otherwise-unchanged file system
with different metadata writing semantics. When changing metadata, you
actually note the change in an in-kernel log. When the log gets big enough
(or a timeout trips), you write that blob of the log to the (on-disk)
journal. When the journal writes have completed, you then write all of the
blocks in question. When those writes have completed, you add an entry to
the journal indicating that those entries have been completed.
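The ordering above can be sketched as a toy in-memory model (purely
illustrative; the class and field names are made up, this is not any real
journal implementation):

```python
# Toy sketch of journaled metadata writes (hypothetical names, not real code).
# Ordering: 1) note the change in an in-kernel log, 2) flush that blob to the
# on-disk journal, 3) write the actual metadata blocks, 4) mark the journal
# record complete.

class JournaledFS:
    def __init__(self):
        self.pending = []   # in-kernel log of not-yet-committed metadata changes
        self.journal = []   # on-disk journal: committed blobs
        self.disk = {}      # metadata blocks in their home locations

    def change_metadata(self, block, value):
        # Step 1: only note the change in the in-kernel log.
        self.pending.append((block, value))

    def commit(self):
        # Step 2: write the accumulated blob to the on-disk journal.
        blob = list(self.pending)
        self.journal.append({"entries": blob, "done": False})
        self.pending.clear()
        # Step 3: write the real metadata blocks.
        for block, value in blob:
            self.disk[block] = value
        # Step 4: mark the journal record complete.
        self.journal[-1]["done"] = True

    def replay(self):
        # Crash recovery: re-apply any journal blob not marked complete.
        for rec in self.journal:
            if not rec["done"]:
                for block, value in rec["entries"]:
                    self.disk[block] = value
                rec["done"] = True
```

A crash before step 2 simply loses the change (the file system is untouched);
a crash between steps 2 and 4 is repaired by replaying the journal; after
step 4 no recovery work is needed.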

That way a change either: hasn't happened (died while writing journal), is
in the journal (died while writing blocks), or has fully happened. While
you can still lose data, you won't corrupt the file system, and
transactions will either have happened or not.

LFS is a log-based file system. It writes EVERYTHING, metadata and data,
to the disk like a log file. Among other things, this means that inode
metadata moves around. While it has the advantage that simultaneous writes
to different files can be consolidated into the same log segment, LFS has
very different full-filesystem behaviors. You can end up having to write
data to be able to free data. If your file system is sufficiently full/
fragmented, you can end up not being able to free up space, and so the
only option is to dump & newfs.
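A crude model of that log-structured behavior (again purely illustrative,
with made-up names, not LFS internals): every write appends to the current
segment, so the latest copy of a block moves around, and reclaiming a dirty
segment means first *writing* its live blocks forward into the log.

```python
# Toy log-structured store. Everything is appended to the tail segment, so
# block locations move on every write, and the cleaner must copy live blocks
# into a free segment before a dirty segment can be reclaimed. With no clean
# segment left to copy into, you cannot free space -- the full/fragmented
# failure mode described above.

SEG_SIZE = 4  # blocks per segment (toy value)

class LogFS:
    def __init__(self, nsegs):
        # Each segment is a list of (key, value, live) tuples.
        self.segs = [[] for _ in range(nsegs)]
        self.cur = 0
        self.imap = {}  # key -> (segment, index) of the latest copy; it moves

    def write(self, key, value):
        seg = self.segs[self.cur]
        if len(seg) >= SEG_SIZE:
            self.cur += 1  # roll the log forward to the next clean segment
            if self.cur >= len(self.segs):
                raise IOError("log full: no clean segment to write into")
            seg = self.segs[self.cur]
        if key in self.imap:  # the old copy becomes dead, but stays on disk
            s, i = self.imap[key]
            old_key, old_val, _ = self.segs[s][i]
            self.segs[s][i] = (old_key, old_val, False)
        seg.append((key, value, True))
        self.imap[key] = (self.cur, len(seg) - 1)

    def clean(self, segno):
        # Freeing a segment requires writing its live blocks elsewhere first.
        for key, value, live in self.segs[segno]:
            if live:
                self.write(key, value)
        self.segs[segno] = []  # now the segment is clean
```

Note that `clean()` calls `write()`: if the log is already full, cleaning
itself fails, which is exactly the dump & newfs corner.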

Take care,