tech-userlevel archive


Re: Filesystem limits



On Sun, Jan 17, 2010 at 02:17:47PM +0000, Sad Clouds wrote:
> 
> Maybe I should have rephrased my question, i.e. not "filesystem limits",
> but "per directory limits".
>
> I can't create more than 32766 subdirectories in a given directory. If I
> run a test program, which creates subdirectories (1, 2, 3...N) in a loop,
> after 32766 it exits with:
>
> p3smp$ ./a.out
> mkdir() error: : Too many links
> i = 32766
>
> Where is this "Too many links" coming from? Is it a filesystem limit, or
> a dynamically adjustable sysctl limit? Is there a way to increase this
> limit, or maybe it will substantially decrease performance when doing
> directory lookups?

As stated by others, this is because the inode link count is a 16 bit
value, and traditionally signed - giving a limit of 2^15 - 1 (LINK_MAX,
32767). Every subdirectory carries a ".." entry that counts as a link on
its parent, so mkdir() fails with EMLINK ("Too many links") once the
parent's link count would go over that limit. The count is a 16 bit field
in the on-disk inode, so it is a filesystem limit, not something a sysctl
can raise.
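
If you want to check the limit from a program, pathconf(2) reports
_PC_LINK_MAX for the filesystem in question. Below is a minimal sketch
along the lines of the quoted test program; the "sub%d" names and the use
of the current directory are just illustrative choices, not anything from
the original test.

/* Sketch: report _PC_LINK_MAX, then create subdirectories until the
 * parent's link count runs out and mkdir() fails with EMLINK. */
#include <sys/stat.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	char name[64];
	long linkmax = pathconf(".", _PC_LINK_MAX);
	int i;

	/* LINK_MAX for the filesystem holding the current directory. */
	printf("_PC_LINK_MAX = %ld\n", linkmax);

	for (i = 1; ; i++) {
		snprintf(name, sizeof(name), "sub%d", i);
		if (mkdir(name, 0755) == -1) {
			if (errno == EMLINK)
				printf("EMLINK after %d subdirectories\n", i - 1);
			else
				perror("mkdir");
			break;
		}
	}
	return 0;
}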

You want to be glad that NetBSD actually checks the limit.
Solaris doesn't (or didn't until fairly recently) which led to one of
the bank payment systems exploding when the number of clients exceeded
32766.

Directory lookups (on ufs) are linear searches (possibly sped up by
in-kernel hashing), so very large directories should be avoided.

This applies whether the entries are files or subdirectories - you really
want to keep the number of entries in any one directory down.

The usual solution is to use a separate subdirectory for each initial
letter of the wanted filename - see the SYSV/Solaris terminfo database.
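
As a rough sketch of that kind of layout (the fanout_path() helper, the
"db" base directory and the "vt100" name below are made up for
illustration): the one-letter bucket is created on demand and the entry
lives underneath it.

/* Sketch of terminfo-style fan-out: store "name" under base/<letter>/name. */
#include <sys/stat.h>
#include <errno.h>
#include <limits.h>
#include <stdio.h>

static int
fanout_path(const char *base, const char *name, char *buf, size_t buflen)
{
	char dir[PATH_MAX];

	/* One-letter bucket taken from the first character of the name. */
	snprintf(dir, sizeof(dir), "%s/%c", base, name[0]);
	if (mkdir(dir, 0755) == -1 && errno != EEXIST)
		return -1;
	snprintf(buf, buflen, "%s/%s", dir, name);
	return 0;
}

int
main(void)
{
	char path[PATH_MAX];

	/* Create the base directory first (hypothetical "db"). */
	if (mkdir("db", 0755) == -1 && errno != EEXIST)
		return 1;

	/* e.g. "vt100" ends up as db/v/vt100, much like the terminfo layout. */
	if (fanout_path("db", "vt100", path, sizeof(path)) == 0)
		printf("%s\n", path);
	return 0;
}

With that sort of split each bucket stays well below the link-count limit
and individual directories stay small enough that the linear lookup
doesn't hurt.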

        David

-- 
David Laight: david%l8s.co.uk@localhost

