tech-kern archive
Re: Problem with lots of (perhaps too much?) memory
> On 17. May 2017, at 08:59, Paul Goyette <paul%whooppee.com@localhost> wrote:
>
> On Wed, 17 May 2017, J. Hannken-Illjes wrote:
>
>>>> Chances are very high that you are hitting the free page lock contention
>>>> bug.
>>>
>>> Further observation:
>>>
>>> 1. A run of lockstat shows that there's lots of activity on
>>> mntvnode_lock from vfs_insmntque
>>>
>>> 2. The problem can be triggered simply by running 'du -s' on a
>>> file system with lots of files (for example, after having run
>>> 'build.sh release' for ~40 different architectures, and having
>>> kept ALL of the obj/*, dest/*, and release/* output files).
>>>
>>> 3. The ioflush thread _never_ finishes. Even 12 hours after the
>>> trigger, and after an 8-hour sleep window doing nothing (other
>>>     than receiving a couple dozen emails), the ioflush thread is
>>> still using 5-10% of one CPU core/thread.
>>>
>>> 4. If I umount the trigger file system, ioflush time goes to near-
>>> zero. I can remount without problem, however shortly after
>>> re-running the 'du -s' command the problem returns.
>>>
>>> There was a comment on IRC that yamt@ had been working on
>>> a problem described as "ioflush wastes a lot of CPU time
>>> when there are lots of vnodes", which seems to match this
>>> situation.  Unfortunately, it seems that yamt never finished
>>> working on the problem.  :(
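[A minimal, self-contained sketch of the trigger in observation 2,
assuming any directory with many files behaves the same way at small
scale; the real trigger was ~40 'build.sh release' trees.  The paths
and file counts below are illustrative, not taken from the thread.]

```shell
# Create a throwaway tree with many small files, then stat every file
# with 'du -s'; each file touched faults one vnode into the cache.  At
# the poster's scale (millions of files) this kind of walk left ioflush
# spinning.  On NetBSD, wrapping the du run with lockstat(8) should show
# the mntvnode_lock contention from observation 1.
d=$(mktemp -d)
for i in $(seq 1 1000); do : > "$d/f$i"; done
du -s "$d" > /dev/null && echo "walked $(ls "$d" | wc -l | tr -d ' ') files"
rm -rf "$d"
```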
>>
>> Two questions:
>>
>> - What is the size of your vnode cache (sysctl kern.maxvnodes)?
>
> kern.maxvnodes = 6706326
What happens if you lower it to 1258268, for example?
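[A sketch of how the suggested experiment could be run, assuming root
on the affected machine; the values are the ones quoted in the thread,
and the exact invocations are per sysctl(8), not from the thread.]

```shell
# Inspect the current vnode cache limit (6706326 on the poster's box):
sysctl kern.maxvnodes
# Shrink it to the suggested smaller value; this takes effect at once
# and should cause the kernel to start reclaiming cached vnodes:
sysctl -w kern.maxvnodes=1258268
# To make the setting persist across reboots, add to /etc/sysctl.conf:
#   kern.maxvnodes=1258268
```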
--
J. Hannken-Illjes - hannken%eis.cs.tu-bs.de@localhost - TU Braunschweig (Germany)