Subject: Re: LFS partition limitations
To: Tracy J. Di Marco White <gendalia@iastate.edu>
From: Manuel Bouyer <bouyer@antioche.lip6.fr>
List: current-users
Date: 10/02/2000 15:57:57
On Mon, Oct 02, 2000 at 05:02:54AM -0500, Tracy J. Di Marco White wrote:
> 
> }Could you recompile fsck_lfs with '-g' ('make CFLAGS=-g' in fsck_lfs),
> }and run the resulting binary under gdb ?
> }(gdb ./fsck_lfs
> }run /dev/raid0a
> })
> 
> (gdb) run /dev/raid0d
> Starting program: /stuff/NetBSD/src/sbin/fsck_lfs/./fsck_lfs /dev/raid0d
> ** /dev/rraid0d (NO WRITE)
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0x8054138 in bzero ()
> (gdb) bt
> #0  0x8054138 in bzero ()
> #1  0xbfbfd66c in ?? ()
> #2  0x804cae3 in checkfilesys (filesys=0x809e020 "/dev/rraid0d", mntpt=0x0, 
>     auxdata=0, child=0) at main.c:194
> #3  0x804c9f8 in main (argc=0, argv=0xbfbfdbac) at main.c:140
> #4  0x80481c5 in ___start ()

I'll let those more familiar with LFS look at this :)
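
In the meantime, a bit more detail from the crashing frame could help
whoever picks it up. If you still have the gdb session around, something
like this (plain gdb commands; the frame number is taken from your
backtrace above) would show the locals in checkfilesys():

(gdb) frame 2
(gdb) info locals
(gdb) list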

> 
> }In theory, you shouldn't need to fsck an LFS filesystem, so you can put
> }'0 0' in the fstab for it (it's what I have on my machine). 
> 
> Cool.
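
For reference, a matching fstab line might look like this (device and
mount point taken from your df output; the last two fields are the dump
frequency and the fsck pass number, both 0 here):

/dev/raid0d  /stuff  lfs  rw  0 0
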
> 
> I copied the src, xsrc & pkgsrc trees to this partition, and I don't
> understand what df is doing.
> 
> During copy:
> # while 1
> while? df /stuff
> while? sleep 20
> while? end
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54479268  7915360 46019115    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54478533  7919316 46014432    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54482146  7923200 46014125    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54481456  7927177 46009464    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54484326  7931477 46008006    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54483835  7935400 46003597    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54487336  7939321 46003142    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54486354  7943328 45998162    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54489896  7947247 45997751    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54488865  7951287 45992690    14%    /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54491814  7955288 45991608    14%    /stuff
> 
> Things finished copying, and I deleted some stuff, then:
> # df /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  54828170  7558761 46721128    13%    /stuff
> 
> 5 hours later:
> # df /stuff
> Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
> /dev/raid0d  55417900  7595273 47268448    13%    /stuff
> 
> This started out near 85GB; I'm confused.

Hmm, maybe there's a real problem here too.
The fact that the total size changes is normal: LFS reserves no fixed
space for metadata, so the space available for data blocks depends on
how much is currently used by metadata (that is, on the number of
files). When the filesystem is idle, the cleaner can eventually
garbage-collect metadata and free some space.
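
If it helps to see where df's numbers come from, here is a minimal
sketch (untested, assuming the 4.4BSD statfs(2) interface that df
itself uses):

#include <sys/param.h>
#include <sys/mount.h>

#include <err.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
        struct statfs sf;

        if (argc != 2)
                errx(1, "usage: %s mount_point", argv[0]);
        if (statfs(argv[1], &sf) == -1)
                err(1, "statfs %s", argv[1]);

        /*
         * The same numbers df reports, scaled to 1K blocks. On LFS,
         * f_blocks itself moves as metadata is written and as the
         * cleaner reclaims segments, which is why the "1K-blocks"
         * column is not constant.
         */
        printf("1K-blocks %lld used %lld avail %lld\n",
            (long long)sf.f_blocks * sf.f_bsize / 1024,
            (long long)(sf.f_blocks - sf.f_bfree) * sf.f_bsize / 1024,
            (long long)sf.f_bavail * sf.f_bsize / 1024);
        return (0);
}

Run it in a loop while the filesystem is idle and you should see the
total creep back up as the cleaner reclaims segments, much like your
5-hours-later df output shows.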

--
Manuel Bouyer, LIP6, Universite Paris VI.           Manuel.Bouyer@lip6.fr
--