Subject: Re: Calculating frag size for LFS
To: Sean Davis <firstname.lastname@example.org>
From: Konrad Schroder <email@example.com>
Date: 04/25/2002 14:58:02
On Wed, 24 Apr 2002, Sean Davis wrote:
> What data does one need to calculate the best frag size / block size to use
> for a new LFS partition? I assume the exact size of the disk, from disklabel
> or whatever, but I'm not sure how to calculate the best frag size. I just
> recreated the partition I use for building NetBSD as lfs, and have noticed a
> good bit of speed improvement, I'm considering converting another filesystem
> used for non-critical data (stuff that won't kill the machine if it gets
> hosed :) to LFS, and want to try and figure out a better frag size than what
> newfs_lfs does with -A. Any tips?
First, a nit ... I think you mean "segments" instead of "fragments". In
particular, the -A flag tries to optimize your segment size. Good block
and fragment sizes depend on how large the average file on your filesystem
is going to be; the historical 8k/1k default seems reasonable, though if
you are going to have lots of very large files you might want to bump up
the block size. Fragment size on LFS is not very important.
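To make the "depends on average file size" point concrete, here is a quick
back-of-the-envelope sketch (mine, not anything newfs_lfs computes): the tail
of a file is stored in fragments, so the space lost per file is the unused
part of its last fragment. This is a simplified FFS-style model, and the
sizes are just illustrative assumptions.

```python
# Toy model: bytes wasted in the partially filled tail of one file,
# for a given block size and fragment size. Illustrative only.

def tail_waste(file_size, block_size, frag_size):
    """Bytes wasted in the last, partially filled block of one file.

    The tail is rounded up to whole fragments; the unused remainder of
    the last fragment is the waste.
    """
    tail = file_size % block_size        # bytes in the final partial block
    if tail == 0:
        return 0
    frags = -(-tail // frag_size)        # ceiling division: fragments needed
    return frags * frag_size - tail

# With the historical 8k/1k default, a 10000-byte file has a 1808-byte
# tail, which occupies two 1k fragments, wasting 240 bytes:
print(tail_waste(10000, 8192, 1024))   # -> 240
```

With many small files, a large block size only costs you up to one fragment's
worth of waste per file under this model, which is why fragment size matters
far less on LFS than getting the block size roughly right.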
What constitutes a good *segment* size depends on your usage pattern.
Larger segments mean that you can have longer sections of contiguous file
data, which in theory means you can write more quickly and, in both theory
and practice (though less noticeably), read more quickly.
Smaller segments tend to make the cleaner's job easier, because they're
more likely to be completely empty instead of containing some blocks still
in use that have to be moved elsewhere; any time the cleaner can save
itself work you will be happier.
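A toy calculation (again, mine, and deliberately oversimplified) shows why
smaller segments help the cleaner: if you pretend each block in a segment is
live independently with some probability, the chance that an entire segment
is dead, and can be reclaimed for free, drops off very fast as the segment
grows. The live fraction and sizes below are arbitrary assumptions.

```python
# Toy model: probability that a whole segment contains no live blocks,
# assuming each block is live independently with probability live_fraction.
# Real block liveness is correlated, so this overstates the effect, but
# the direction of the trade-off is right.

def empty_segment_prob(live_fraction, seg_size, block_size):
    blocks = seg_size // block_size
    return (1.0 - live_fraction) ** blocks

# 10% live data, 8k blocks: a 128k segment vs a 1M segment
small = empty_segment_prob(0.10, 128 * 1024, 8192)    # 16 blocks
large = empty_segment_prob(0.10, 1024 * 1024, 8192)   # 128 blocks
print(small, large)   # the small segment is far more often completely empty
```

So the larger segment almost never comes up completely empty, and the cleaner
has to copy its surviving blocks elsewhere before reclaiming it, which is
exactly the work you want to avoid.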