Niels Dettenbach <nd%syndicat.com@localhost> writes:

> Is anyone here who could bring some light into this topic? Why is
> this "low" limit still fixed in the kernel? Why can it not be changed
> at runtime (as on other systems, and on NetBSD only within a small
> window) or at least at boot time, so that a recompilation is required
> to get this limit up?
>
> What side effects does a higher nmbclusters / mbuf setting have?
>
> It seems that FreeBSD allows a larger range of nmbclusters settings
> at runtime than NetBSD, and that the default number of nmbclusters is
> calculated at boot relative to the available physical RAM.
> http://rerepi.wordpress.com/2008/04/19/tuning-freebsd-sysoev-rit/

What's hard about this is that allocating lots of memory can lead to
running out of memory, and to running out of kernel virtual address
space. So on i386 (or any machine with 32-bit pointers) this is
scarier. NetBSD runs on many systems, and much of the code is MI
(machine-independent, shared across all architectures), which leads to
choices that are suboptimal on some platforms. Clearly this should be
improved, but it's harder than it seems at first.

So the 16384 upper limit may be there because it's not clearly safe to
go higher, and it's rare to need more clusters than that. On a system
with known memory and workload, you can certainly tweak settings. I'd
try NMBCLUSTERS=32768 and see how that goes.

Also, it's possible that you have a leak. Does this happen more after
long uptimes? The following may be helpful:

  vmstat -m | egrep 'Memory|Name|mcl'

which will show the number of denied allocations. Basically, watch
"requests" and "fail", and check whether requests minus releases keeps
growing.

One thing to keep in mind is that some network interfaces (e.g. bnx)
use a vast number of clusters, because there are 512 receive slots and
at idle each is filled with an mbuf cluster waiting for an arriving
packet. So a system with 8 bnx interfaces will use 4096 clusters even
with zero traffic.
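The requests-minus-releases check above can be scripted; here is a
rough sketch. The sample line and the column positions (Name, Size,
Requests, Fail, Releases, ...) are assumptions based on NetBSD's pool
statistics layout, not output captured from a real machine:

```shell
#!/bin/sh
# Rough sketch: spot a possible mbuf cluster leak by diffing the
# "Requests" and "Releases" columns of the mcl pool. On a live
# system the input would come from:
#
#   vmstat -m | egrep 'Memory|Name|mcl'
#
# The sample line below stands in for real output; the column order
# (Name Size Requests Fail Releases ...) is an assumption.
sample='mclpl 2048 123456 17 120000'
echo "$sample" | awk '{
    printf "pool=%s fail=%s outstanding=%d\n", $1, $4, $3 - $5
}'
# An "outstanding" count that keeps climbing across long uptimes,
# or a nonzero and growing "fail", would point at a leak.
```

Running this periodically (e.g. from cron) and comparing the
"outstanding" number over time is one cheap way to answer the
long-uptime question.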