Subject: Re: ffs fragmentation
To: None <tech-kern@netbsd.org>
From: Mike Cheponis <mac@Wireless.Com>
List: tech-kern
Date: 05/14/1999 13:27:54
On Fri, 14 May 1999, Eduardo E. Horvath wrote:
> On Fri, 14 May 1999, Jaromir Dolecek wrote:

>> Guenther Grau wrote:
>> > Might make sense? I don't know. How large is the fragmentation
>> > using ffs? I don't think it's usually worth running a daemon
>> > cleaning it up. Might make more sense for different filesystems, though.
>> 
>> For me - 3% after about a half a year; the NetBSD portion
>>...

> The ffs design is such that it does not suffer from fragmentation, or
> looking at it another way, forces a low level of fragmentation on
> everything.  So running a defrag utility is redundant, useless, and
> possibly dangerous since you could potentially lose data.

It's purely performance-related; if the performance is good enough, then
sure, why bother with a defragger.  (However, losing data is only possible
with badly designed/implemented s/w or broken h/w.)
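The "forces a low level of fragmentation on everything" point comes from FFS's block/fragment scheme: a file's body is stored in full blocks and only its tail in sub-block fragments, so internal waste per file is bounded by a fragment, not a block. A minimal sketch (assuming the classic 8 KiB block / 1 KiB fragment sizes; actual values depend on how the filesystem was created):

```python
# Sketch of FFS-style block/fragment allocation.
# Assumed sizes: 8 KiB blocks split into 1 KiB fragments (the classic
# 8:1 ratio); real filesystems may use other values.
BLOCK = 8192
FRAG = 1024

def allocated_bytes(file_size: int) -> int:
    """Bytes actually allocated: full blocks for the body, and only
    fragments (not a whole block) for the tail."""
    full_blocks, tail = divmod(file_size, BLOCK)
    tail_frags = -(-tail // FRAG)  # ceiling division
    return full_blocks * BLOCK + tail_frags * FRAG

# A 10000-byte file takes one 8 KiB block plus two 1 KiB fragments.
print(allocated_bytes(10000))  # 10240
# Worst-case internal waste is just under FRAG per file, not BLOCK.
print(allocated_bytes(1))      # 1024
```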

> This also means that you can't really store swap blocks near the
> associated file blocks since the file blocks are scattered in large
> clumps across the disk.  

I'm not certain I understand this; isn't this actually relative to the
size of the file(s)? (I'll look in the 4.3 and 4.4 books about this in the
meantime, tho.)

> Ideally if you were going to do this you would want to allocate the swap
> blocks as if they were part of a normal file to minimize any performance
> degradation.  But if you did, there would be no real benefit over using a
> standard swap file.

The advantages of DynaSwap are:

(1) You don't need to have a separate swap partition

(2) You don't need to know in advance how much disk will be used for
    "swap" - the OS sizes it dynamically for you

(3) Arrays can grow large (up to the size of the unallocated disk
    space), enabling huge datasets.
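Point (2) can be illustrated with a toy model: grow the swap backing store on demand rather than reserving a fixed partition up front. Everything here (the name grow_swap, the 4 KiB chunk size) is hypothetical illustration, not a real NetBSD interface:

```python
# Toy model of on-demand swap growth. The function name and chunk
# size are assumptions for illustration, not a real kernel API.
import os
import tempfile

SWAP_CHUNK = 4096  # grow in page-sized increments (assumption)

def grow_swap(path: str, needed: int) -> int:
    """Extend the backing file so it can hold `needed` bytes,
    rounding the new size up to a whole chunk; return the size."""
    current = os.path.getsize(path)
    if needed <= current:
        return current
    new_size = ((needed + SWAP_CHUNK - 1) // SWAP_CHUNK) * SWAP_CHUNK
    with open(path, "r+b") as f:
        f.truncate(new_size)
    return new_size

fd, path = tempfile.mkstemp()
os.close(fd)
print(grow_swap(path, 10000))  # 12288: rounded up to three 4 KiB chunks
os.unlink(path)
```

The point of the sketch is only that the reservation happens when demand appears, which is what removes the need to size a swap partition ahead of time.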

-Mike