tech-kern archive
Re: Max. number of subdirectories dump
On Sun, Aug 18, 2013 at 12:24:12PM -0400, Mouse wrote:
> A directory may contain entries other than subdirectories. Since there
> is no enforced ordering of entries in a directory, the whole directory
> must be read to find all the subdirectories (unless 32767 subdirs are
> found first, I suppose, which is unlikely in practice); there could
> always be a subdirectory lurking out at the end of the directory.
> Since directories can be arbitrarily large (adding things other than
> subdirectories does not increase the directory's link count), they may
> involve double indirect blocks. (In theory, even triple indirect
> blocks, though that would mean a directory over 4299210752 bytes long
> (or more if filesystem blocks are bigger than 4K), which is difficult
> to test.)
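(For illustration, the early-stop scan the first quoted paragraph alludes to
looks roughly like the following. This is a hypothetical userland sketch, not
code from dump: it relies on the ffs convention that a directory's link count
is 2 plus its number of subdirectories, so the scan can quit once that many
have been seen.)

/*
 * Hypothetical sketch: list the subdirectories of argv[1], stopping as
 * soon as st_nlink - 2 of them have been found.  On ffs a directory's
 * link count is 2 plus its number of subdirectories, so in the normal
 * case the whole directory need not be read.  (d_type is filled in by
 * ffs; other filesystems may return DT_UNKNOWN.)
 */
#include <sys/stat.h>
#include <dirent.h>
#include <err.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char *argv[])
{
	struct stat sb;
	struct dirent *de;
	DIR *d;
	nlink_t left;

	if (argc != 2)
		errx(1, "usage: %s dir", argv[0]);
	if (stat(argv[1], &sb) == -1)
		err(1, "stat %s", argv[1]);
	left = sb.st_nlink - 2;		/* expected subdirectory count */

	if ((d = opendir(argv[1])) == NULL)
		err(1, "opendir %s", argv[1]);
	while (left > 0 && (de = readdir(d)) != NULL) {
		if (de->d_type != DT_DIR)
			continue;
		if (strcmp(de->d_name, ".") == 0 ||
		    strcmp(de->d_name, "..") == 0)
			continue;
		printf("%s\n", de->d_name);
		left--;
	}
	closedir(d);
	return 0;
}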
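(And the 4299210752 figure itself, for reference: assuming UFS1's 12 direct
block pointers and 4-byte on-disk block pointers, a 4K indirect block holds
1024 of them, so the largest file reachable without a triple indirect block
is (12 + 1024 + 1024*1024) * 4096 bytes. A trivial sketch of the arithmetic:)

#include <stdio.h>

int
main(void)
{
	unsigned long long bsize = 4096;	/* filesystem block size */
	unsigned long long nindir = bsize / 4;	/* pointers per indirect block */
	unsigned long long ndaddr = 12;		/* direct blocks in the inode */

	/* direct + single indirect + double indirect coverage, in bytes */
	printf("%llu\n", (ndaddr + nindir + nindir * nindir) * bsize);
	/* prints 4299210752; anything bigger needs the triple indirect block */
	return 0;
}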
That is not difficult to test, just annoying, given that directory ops
in ffs require linear searches, and linear searches on directories of
that size are a bit painful.
There's another problem, though. From ufs/ufs/dir.h:
/*
 * Theoretically, directories can be more than 2Gb in length; however, in
 * practice this seems unlikely. So, we define the type doff_t as a 32-bit
 * quantity to keep down the cost of doing lookup on a 32-bit machine.
 */
#define doff_t          int32_t
I have no idea what will happen if you exceed this limit, but a quick
scan a couple of weeks ago suggested that it isn't really enforced... so
I wouldn't advise trying it except on a crash machine.
(And if someone does try it and it does blow up, please file a bug report.)
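(For the benefit of anyone with a crash machine to spare: here is a rough,
untested sketch of the kind of thing needed to push a directory past 2^31
bytes. It uses regular files rather than subdirectories, so LINK_MAX doesn't
stop you first, and names near the 255-character maximum so "only" around
nine million entries are needed. The filename pattern and entry count are
just illustrative; newfs the scratch filesystem with plenty of inodes first.)

/*
 * Hypothetical test sketch: fill a directory with ~9 million regular
 * files with 254-character names.  Each entry takes 8 + roundup(255, 4)
 * = 264 bytes of struct direct, so the directory ends up around 2.4GB,
 * comfortably past the 2^31 doff_t limit.  Run only on a scratch
 * filesystem created with enough inodes.
 */
#include <sys/stat.h>
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	char name[256];
	long i;
	int fd;

	if (argc != 2)
		errx(1, "usage: %s targetdir", argv[0]);
	if (chdir(argv[1]) == -1)
		err(1, "chdir %s", argv[1]);

	for (i = 0; i < 9000000; i++) {
		/* 254-character name: zero padding plus a unique counter */
		snprintf(name, sizeof(name), "%0245ld_%08ld", i, i);
		fd = open(name, O_CREAT | O_EXCL | O_WRONLY, 0600);
		if (fd == -1)
			err(1, "open %s", name);
		close(fd);
	}
	return 0;
}

Expect it to take a while, since (per the above) each create rescans the
ever-growing directory.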
--
David A. Holland
dholland@netbsd.org