tech-kern archive
Re: lockstat from pathological builds Re: high sys time, very very slow builds on new 24-core system
- To: Thor Lancelot Simon <tls%panix.com@localhost>
- Subject: Re: lockstat from pathological builds Re: high sys time, very very slow builds on new 24-core system
- From: Andrew Doran <ad%netbsd.org@localhost>
- Date: Fri, 1 Apr 2011 20:48:50 +0000
On Fri, Apr 01, 2011 at 05:47:59PM +0000, Andrew Doran wrote:
> The global freelist is another area of concern.
Thinking about some first steps. We have a bunch of global counters and
statistics stored in struct uvmexp: for example pga_zerohit, cpuhit,
colorhit, etc.
While they might seem like small change, I think these should become per-CPU
as a first step, because they cause all kinds of cacheline bouncing and will
get in the way of whatever sort of scheme we go with to make the allocator
scalable.
So my suggestion is that where currently we have something like:

	atomic_inc_uint(&uvmexp.pga_colorhit);

.. it would change to something like:

	KPREEMPT_DISABLE(l);
	l->l_cpu->cpu_data.cpu_uvmexp.pga_colorhit++;
	KPREEMPT_ENABLE(l);
Anyplace we need to use these values, we'd call a function to sum them
across all CPUs, maybe at splhigh() so that the sum is "reasonably" quick
and thereby "reasonably" accurate. I don't foresee a performance problem
on the reader side since it's rare that we read these values.
Then there are filepages, execpages, anonpages, zeropages, etc. The same
could naively be done for these as a short-term step, since in the places
they are used we don't need a completely accurate view. In the longer term
we may want these to become per-NUMA-node or whatever.
I don't have a good suggestion for uvmexp.free since we check that one quite
regularly. I think that one is quite closely tied to whatever scheme is
chosen for the allocator.
Thoughts?