Subject: lfs_mountfs: please consider increasing BUFPAGES to at least 954
To: None <firstname.lastname@example.org>
From: Paul Mather <email@example.com>
Date: 06/02/2003 09:43:57
I've been trying out LFS for scratch, write-intensive filesystems like
/usr/obj, to see whether it has improved since I last used it.
When I rebooted one of my NetBSD/alpha 1.6T-CURRENT systems after a
new kernel build, I was surprised to see the following message appear
on the console during boot:
lfs_mountfs: please consider increasing BUFPAGES to at least 954
This made me wonder several things, but primarily: what is BUFPAGES,
and what is it currently set to on my system? I went looking for a
sysctl variable that might tell me, but could not find one. Is this a
hard-coded kernel tunable, like BUFCACHE?
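For what it's worth, on the systems I've looked at, BUFPAGES and BUFCACHE appear to be compile-time kernel options (i.e. "options" lines in the kernel config), which would explain why no sysctl reports them. A rough sketch of how one might check, using a hypothetical config fragment standing in for a real config under /sys/arch/alpha/conf/ (the path, filename, and BUFCACHE value below are illustrative, not from my system):

```shell
# BUFPAGES/BUFCACHE are assumed here to be compile-time options, so we
# search the kernel config rather than querying sysctl.  Create a stand-in
# config fragment (illustrative only) and grep it the same way you would
# grep your real config file:
cat > /tmp/MYKERNEL.frag <<'EOF'
options BUFCACHE=10	# percentage of RAM to use for the buffer cache
EOF
grep -E 'BUFCACHE|BUFPAGES' /tmp/MYKERNEL.frag
```

If the grep turns up nothing in your real config, the kernel's built-in defaults presumably apply.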
As for LFS, I basically stopped using it, as I found it still too
unstable. After a while I would get what seemed to be a deadlock in
the VM or FS layer. The rest of the machine would be running, but,
for example, issuing a "sync" command would never complete, and it was
not possible to umount any file systems or effect a clean shutdown.
After some reset-button induced shutdowns, the LFS file system would
be corrupted to the point that fsck_lfs could not salvage any of it.
Certain console errors complaining about inconsistent inodes in the
LFS filesystem during normal operation lead me to believe, however,
that the LFS corruption was not all attributable to a hard reset.
"Without music to decorate it, time is just a bunch of boring production
deadlines or dates by which bills must be paid."
--- Frank Vincent Zappa