Subject: Re: File system performance on i386
To: Christoph Hellwig <hch@ns.caldera.de>
From: Charles M. Hannum <root@ihack.net>
List: tech-perform
Date: 02/23/2001 13:52:29
I don't know where you got this silly idea you're arguing so vehemently,
but I really suggest that you read even a little bit of the 30+ years of
file system research that's readily available.

In ANY SYSTEM WITH ASYNCHRONOUS I/O -- which includes both Linux ext2fs
and NetBSD's ffs (unless you use MNT_SYNC) -- there is a possibility
that file data will not have been synced when the machine goes down.
There is NO way to avoid this unless you write everything synchronously
(i.e. MNT_SYNC) or use a hardware backup mechanism (e.g. PrestoServe).
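
To make the window concrete, here is a sketch (hypothetical code, not
from either kernel): write(2) merely dirties a buffer in the cache, and
only fsync(2) -- or opening with O_SYNC, the per-descriptor analogue of
MNT_SYNC -- guarantees the bits are on the platter before the call
returns.

    #include <fcntl.h>
    #include <unistd.h>

    int
    save(const char *path, const char *buf, size_t len)
    {
            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);

            if (fd == -1)
                    return -1;
            if (write(fd, buf, len) != (ssize_t)len) {
                    /* Data is at most in the buffer cache here;
                     * a crash now loses it silently. */
                    (void)close(fd);
                    return -1;
            }
            if (fsync(fd) == -1) {
                    /* Without this, the kernel flushes whenever
                     * it pleases -- that's asynchronous I/O. */
                    (void)close(fd);
                    return -1;
            }
            return close(fd);
    }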

The problem with a fully asynchronous file system is that you can lose a
lot of information about *which file* a piece of data belongs to.  E.g.,
if one file is deleted and another created, a particular block may be
reused.  With asynchronous I/O, the on-disk metadata may already show
the new file claiming that block even though the block's new contents
were never written; after a crash, the new file holds the *old* file's
data.  This is actually a serious SECURITY problem, as it gives private
data away.  A system with ordered writes (either
softdep or the old ffs synchronous metadata mechanism) NEVER has this
type of problem.
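
If that's not clear, here is a toy simulation of the failure order
(obviously hypothetical -- a one-block `disk' inside a C program,
nothing like the real kernel code):

    #include <stdio.h>

    /* Block N still holds deleted file A's contents on disk. */
    static char disk_block[16] = "A's secret data";
    /* On-disk inode: does B claim block N yet? */
    static int  inode_b_claims_n = 0;

    int
    main(void)
    {
            /* B's new contents are pending in the buffer cache. */
            char pending[16] = "B's new data";

            /* Fully async: the kernel may flush B's inode first. */
            inode_b_claims_n = 1;

            /* --- CRASH before block N's data write happens --- */
            /* (would have been: copy pending into disk_block) */
            (void)pending;

            /* After reboot and fsck, B is a perfectly valid file,
             * but reading it yields A's old, private data. */
            if (inode_b_claims_n)
                    printf("B contains: %s\n", disk_block);
            return 0;
    }

Ordered writes simply force block N's new contents out before the inode
that points at it, which is exactly what softdep and the old
synchronous-metadata code guarantee.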

But it's worse than that.  With a long chain of removal and creation
(say, deleting your Linux kernel source tree and unpacking a new one),
it's possible to make such a hash of your file system metadata that
e2fsck(8) just loses its lunch.  Since I've personally had this happen,
I can vouch that it's nowhere near as `reliable' as you claim.