Subject: Re: lfs
To: Konrad Schroder <>
From: Sean Davis <>
List: tech-kern
Date: 02/27/2003 13:31:30
On Thu, Feb 27, 2003 at 10:10:03AM -0800, Konrad Schroder wrote:
> On Wed, Feb 26, 2003 at 04:32:07PM -0500, Thor Lancelot Simon wrote:
> > Clearly the underlying problems here should be fixed, but in the interim,
> > how about simply reserving some more space so that even root can't write
> > into it?  It seems to me that max(5%, segsz * 10) should suffice to avoid
> > the kind of deadlock that gets us into this mess...
> Yes, we could do that.  My guess is that it would have to be more like
> 20%, though.  Lowering lfs_bfree or raising lfs_minfreeseg should have
> this effect, though when I tried the former yesterday it uncovered another
> bug in the error case of lfs_truncate.
> On Wed, 26 Feb 2003, Sean Davis wrote:
> > I get a panic dumping a > 2GB filesystem to an LFS volume, maybe these
> > two cases are related? The panic doesn't happen until the amount of data
> > dumped (watched by running ls -l on the dump file over and over)
> > approaches 2GB..
> Completely unrelated :^)  This was a 32-bit arithmetic problem that I
> found and fixed last night.

Ah, cool. I really like LFS and want to use it, but I also really like my system
only rebooting when I ask it to ;-)

And what about the umount panic I uncovered?
panic: lfs_unmount: still dirty blocks on ifile vnode
That shouldn't be hard to fix, should it? I deleted a big file (the one that
caused the near-2GB-file panic) right before trying to umount, so I guess
lfs_cleanerd hadn't had time to notice, or something. Couldn't lfs_unmount sync
things properly before it lets the filesystem unmount, or something along those
lines?


/~\ The ASCII
\ / Ribbon Campaign                   Sean Davis
 X  Against HTML                       aka dive
/ \ Email!