Subject: Re: Large filesystems, yet again
To: None <tech-kern@netbsd.org>
From: der Mouse <mouse@Rodents.Montreal.QC.CA>
List: tech-kern
Date: 01/30/2006 14:43:58
I wrote,

> Now I'm very suspicious.  It looks as though even a slightly-under-2T
> filesystem doesn't really work right.

One thing I forgot to say - I was using FFSv1, since I was staying
under v1's 2T (2^32-sector) limit.

I have now investigated further.  The problem lies somewhere with
indirect blocks.  When looked at on disk (e.g., by fsck after
unmounting), all indirect blocks are filled with 0x00s.  Of course,
this leads to rather severely corrupt content, and also explains why
fsck thinks the files claim more blocks than they should.

Oddly, this didn't cause trouble in my tests of multi-gigabyte files,
even though those tests wrote many times a moby of data, which I would
expect would push everything necessary out of core.  (They didn't write
enough data for me to be certain they pushed all indirect blocks out of
core; it's possible that all indirect blocks could have stayed cached
between reading and writing.)

I'm looking further - for example, I don't yet know whether the correct
indirect block content is actually present on disk anywhere....

/~\ The ASCII				der Mouse
\ / Ribbon Campaign
 X  Against HTML	       mouse@rodents.montreal.qc.ca
/ \ Email!	     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B