tech-kern archive
Re: allocating files with indirect blocks (was: choosing the file system block size)
Date: Mon, 14 May 2012 13:35:54 +0200
From: Edgar Fuß <ef%math.uni-bonn.de@localhost>
Message-ID: <20120514113554.GF12078%trav.math.uni-bonn.de@localhost>
| > For files large enough to need indirect blocks,
| > (a) the size is rounded up to the block size, not the frag size
| Why is that so?
I'm not going to try to speak for Kirk (ask him if you want a more
authoritative answer), but ...
Basically the idea is to avoid excessive fragmentation - note that no-one
(that I know of) has ever seen the need to implement a defragmenter for ffs
(unlike some other notable filesystems). Much of the reason for this is
that considerable effort was taken to manage the use of fragments to avoid
them gradually cluttering the filesystem and needing to be moved around by
some maintenance tool.
Part of that is this very rule. Fragments exist at all as a space-saving
measure (earlier attempts to build filesystems faster than the original
v7/32V one had simply used bigger blocks, but required much more space).
But that really matters only for small files: allocating 8KB, rather than
1KB, for a 200-byte file is a huge penalty, and since there can be large
numbers of such small files, the overall cost is enormous (or was back in
the 1980s; these days discs are so big that it matters much less).
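To put rough numbers on that, here is a minimal C sketch (not ffs code; the
8KB/1KB sizes and the 200-byte file are just the example figures from above)
that compares the overhead of rounding up to a fragment versus a full block:

    /* Overhead of rounding a small file up to a fragment vs. a full block.
       The sizes are the example figures used above, not values read from
       any real filesystem. */
    #include <stdio.h>

    int
    main(void)
    {
            const long bsize = 8192;        /* block size */
            const long fsize = 1024;        /* fragment size */
            const long filesize = 200;      /* a small file */

            long frag = ((filesize + fsize - 1) / fsize) * fsize;
            long blk = ((filesize + bsize - 1) / bsize) * bsize;

            printf("fragment rounding: %ld bytes, %.0f%% overhead\n",
                frag, 100.0 * (frag - filesize) / filesize);
            printf("block rounding:    %ld bytes, %.0f%% overhead\n",
                blk, 100.0 * (blk - filesize) / filesize);
            return 0;
    }

For the 200-byte file this prints roughly 400% overhead when rounding to a
fragment and roughly 4000% when rounding to a full block, which is the kind
of penalty being avoided.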
But, once the file is big enough that it needs indirect blocks (for an 8KB
block size that's > 96KB, since ffs has 12 direct blocks per inode), wasting
4KB on average matters much less; it's just a few percent, rather than the
several hundred percent it can be for small files.
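If it helps, here is a hedged sketch of that rule in C. It is not the
kernel's actual ffs_balloc() logic, just an illustration; NDIRECT, the
8KB/1KB sizes and the example file sizes are assumptions matching the
figures above:

    /* How much space the last chunk of a file gets under the rule above:
       fragments are only used while the file still fits in its direct
       blocks; once indirect blocks are needed, a full block is allocated. */
    #include <stdio.h>

    #define NDIRECT 12              /* direct block pointers per inode */

    static long
    last_chunk(long filesize, long bsize, long fsize)
    {
            long tail = filesize % bsize;

            if (tail == 0)
                    return bsize;   /* last block is full anyway */
            if (filesize > NDIRECT * bsize)
                    return bsize;   /* indirect blocks: round to a block */
            return ((tail + fsize - 1) / fsize) * fsize;    /* fragments */
    }

    int
    main(void)
    {
            printf("200 byte file: %ld bytes for its only chunk\n",
                last_chunk(200, 8192, 1024));
            printf("100KB file:    %ld bytes for its last chunk\n",
                last_chunk(100L * 1024, 8192, 1024));
            return 0;
    }

The 200-byte file gets a single 1KB fragment; the 100KB file, which already
needs an indirect block, gets a full 8KB block for its 4KB tail.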
Not allowing fragments to be used there, and simply wasting that space,
means fewer fragments used overall, and so fewer fragmentation issues to
deal with.
kre