Subject: Re: tuning TB RAID-backed filestore to reduce inode/superblock
To: Luke Mewburn <lukem@NetBSD.org>
From: George Michaelson <ggm@apnic.net>
List: current-users
Date: 09/07/2004 12:10:59
On Tue, 7 Sep 2004 12:06:32 +1000 Luke Mewburn <lukem@NetBSD.org> wrote:

>On Tue, Sep 07, 2004 at 11:36:38AM +1000, George Michaelson wrote:
>  | Is there a HOWTO for BSD which explains what is 'reasonable'
>  | overhead to work towards in constructing a 1.5Tb filestore on RAID?
>  |
>  | A co-worker just complained that the consumed space for 1.5Tb looked
>  | to be 100Gb, and that seems a very high overhead -- no tunefs, no newfs
>  | options, no RAID tuning. Out of the box.
>
>That sounds rather high.

That's what I thought. I suspect it's metadata overhead in the cylinder-group
and blocksize area. The thing is that from a naive perspective, the block/frag
size doesn't really want to change: the file mix in the exposed UFS space hasn't
changed, so the natural block/frag size doesn't seem to me to need to change.
That suggests that the inode:file ratio and the cylinders-per-group ratio are
the only tuning knobs available, apart from any RAID-level striping and 'lower
layer' block/frag sizes for RAID metadata (if there is such a beast).

>
>
>  | Any simple guidance would be appreciated.
>
>
>It depends on how many files (inodes) that you expect to store on 
>the volume.
>
>You can gain more space back by increasing:
>	-b	block size
>	-f	frag size
>	-i	bytes per inode
>and by decreasing the
>	-m	free space %
>
>
>You can also get an idea of the metadata overhead of the cylinder
>groups and inodes by newfs'ing a file (newfs -F -s somesize ...)
>and du-ing the result.  Don't forget to rm the test file between
>runs.  (Just don't use -Z to pre-zero all blocks during testing :)
>
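As a sanity check on those numbers, here's a back-of-envelope sketch of the
inode-table overhead alone. The 8192 bytes-per-inode density and the 128-byte
UFS1 on-disk inode size are assumptions -- both vary by release, so check
newfs(8) on your system for the actual defaults:

```python
# Rough estimate of the space pre-allocated FFS inode tables consume.
# Assumed numbers: 128-byte UFS1 inodes, default density of one inode
# per 8192 bytes -- both are release-dependent, so treat as a sketch.

def inode_overhead(volume_bytes, bytes_per_inode=8192, inode_size=128):
    """Approximate bytes consumed by the inode tables alone."""
    n_inodes = volume_bytes // bytes_per_inode
    return n_inodes * inode_size

TB = 1000 ** 4          # disk-vendor terabyte
vol = int(1.5 * TB)

print("default -i 8192:  %4.1f GB" % (inode_overhead(vol) / 1e9))
print("with -i 65536:    %4.1f GB" % (inode_overhead(vol, 65536) / 1e9))
```

So raising -i (if you know the volume will hold mostly large files) recovers
roughly 20GB here; and if the -m free-space reserve defaults to 5% on your
release, that's another ~75GB that df won't report as available, even though
it isn't strictly consumed by metadata.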

Thanks. I'll pass this on.

-george

>
>Cheers,
>Luke.
>


-- 
George Michaelson       |  APNIC                 |  See you at APNIC 18
Email: ggm@apnic.net    |  PO Box 2131 Milton    |  Nadi, Fiji
Phone: +61 7 3858 3150  |  QLD 4064 Australia    |  31 Aug -3 Sep 2004
  Fax: +61 7 3858 3199  |  http://www.apnic.net  |  www.apnic.net/meetings/18