Subject: Re: Support for ACLs
To: None <tech-kern@netbsd.org>
From: Robert Elz <kre@munnari.OZ.AU>
List: tech-kern
Date: 03/10/2001 16:24:09
I replied to Bill...

  |     Date:        Fri, 09 Mar 2001 15:28:24 -0500
  |     From:        Bill Sommerfeld <sommerfeld@orchard.arlington.ma.us>
  |     Message-ID:  <20010309202830.19F942A2A@orchard.arlington.ma.us>
  | 
  |   | I would hope that any such scheme would also include support for
  |   | efficiently allocating very small files -- perhaps ones as small as 50
  |   | bytes or less.
  | 
  | That's an orthogonal change to the filesystem.   It is certainly one
  | that would be useful to have,

I should really have said that it is probably one that would be
useful to have...

My personal preference would be to add the associated files stuff
(assuming that the community in general agrees that it is a worthwhile
thing to try, and that I, or someone, finds the time to code it), then
add ACLs using that if desired, and then measure the filesystem to see
whether there really would be enough gain from special handling of tiny
files to make it worth the effort.
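Purely to illustrate the kind of measurement I mean (this is a rough
sketch, not NetBSD code - the bucket boundaries are just the sizes
mentioned in this thread), something like this run over a live
filesystem would give a first approximation of how many files an
inline-data scheme could actually catch:

#define _XOPEN_SOURCE 500	/* for nftw() on some systems */
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

static const long bound[] = { 60, 128, 512, 1024 };
#define	NBOUND	(sizeof(bound) / sizeof(bound[0]))
static unsigned long count[NBOUND + 1];

static int
tally(const char *path, const struct stat *sb, int type, struct FTW *f)
{
	size_t i;

	(void)path; (void)f;
	if (type != FTW_F)		/* regular files only */
		return 0;
	for (i = 0; i < NBOUND; i++)	/* find the first bucket */
		if (sb->st_size <= bound[i])
			break;
	count[i]++;			/* last bucket == "bigger" */
	return 0;
}

int
main(int argc, char **argv)
{
	size_t i;

	if (nftw(argc > 1 ? argv[1] : ".", tally, 32, FTW_PHYS) == -1)
		return 1;
	for (i = 0; i < NBOUND; i++)
		printf("<= %4ld bytes: %lu files\n", bound[i], count[i]);
	printf(" > %4ld bytes: %lu files\n",
	    bound[NBOUND - 1], count[NBOUND]);
	return 0;
}

A real measurement would want to scan the raw inodes rather than walk
the tree, and weight by wasted fragment space, but the shape of the
histogram is the interesting part.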

This is a filesystem optimisation technique, and as with all
optimisation, guessing that it will help produces the wrong answer at
least as often as the right one.

Assuming the measurements do show that there would be advantages (as
Kirk McKusick's measurements showed that fragments would improve
utilisation of traditional filesystems once blocks became bigger - and
improve it enough to be worth the extra complexity and overheads), they
should also show what size of "tiny" it is worth optimising.

That is, is handling the special case "file <= 60 bytes" (so all of the
data could be layered on the block pointers) sufficient, or would that
miss enough files between 60 and (say) 128 bytes that some other
technique (frags of frags perhaps) would be a better solution?
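For reference, the 60 comes from the on-disc UFS1 inode: 12 direct and
3 indirect 32-bit block pointers, 15 * 4 == 60 bytes.  The layering
amounts to no more than something like this (a sketch only, not the
real struct dinode; a flag bit somewhere in the inode saying which arm
of the union applies is assumed):

#include <stdint.h>

#define	NDADDR	12	/* direct block pointers in a UFS1 inode */
#define	NIADDR	3	/* indirect block pointers */

union din_addrs {
	struct {
		int32_t	di_db[NDADDR];	/* usual case: disc addresses */
		int32_t	di_ib[NIADDR];
	} blks;
	char	di_inline[(NDADDR + NIADDR) * sizeof(int32_t)];
					/* tiny file: data held in-inode */
};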

Until the measurements are done, we won't know.

kre