Subject: Re: FFS reliability problems
To: None <email@example.com>
From: der Mouse <mouse@Rodents.Montreal.QC.CA>
Date: 05/15/2002 22:15:31
>> When I tried fsck with deliberately orphaned inodes with nonzero
>> size (I used clri on the containing directory), it put them in
> Well, that'll teach me not to put in what went by, but I was so
> incredulous myself at having seen it that I didn't catch it all.
I've done more experiments, and it appears that fsck torches
unreferenced files, rather than putting them in lost+found, when their
link count is zero (i.e., they were still around only because they were
open when the crash happened).
> Needless to say, it was not pleasant to watch my work files
> (recoverable!) vapourise.
But understandable, since an unreferenced file with zero link count is
usually a file that "doesn't exist" as far as the filesystem namespace
goes, referenceable only by processes that have it open somehow.
I believe I could add an option to fsck - or at least fsck_ffs - that
would make it treat zero-link-count files the same as any other, at
least if they have nonzero size. Based on a quick look at diffs
between my tree's fsck_ffs and -current fsck_ffs, the result would
probably even drop into -current fairly painlessly.
Annoyingly, even running fsck by hand didn't help - when I said no to
CLEAR? it didn't offer me a RECONNECT? option. If I'd really wanted
the file contents, I would have had to use dumpi or equivalent, or
patch a directory entry by hand to point to it. That I definitely
think needs fixing.
/~\ The ASCII der Mouse
\ / Ribbon Campaign
X Against HTML firstname.lastname@example.org
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B