tech-kern archive
Re: lockstat from pathological builds Re: high sys time, very very slow builds on new 24-core system
- To: Andrew Doran <ad%netbsd.org@localhost>
- Subject: Re: lockstat from pathological builds Re: high sys time, very very slow builds on new 24-core system
- From: Thor Lancelot Simon <tls%panix.com@localhost>
- Date: Fri, 1 Apr 2011 13:05:02 -0400
On Thu, Mar 31, 2011 at 04:32:12PM -0400, Thor Lancelot Simon wrote:
> On Thu, Mar 24, 2011 at 12:04:02PM +0000, Andrew Doran wrote:
> >
> > Try lockstat as suggested to see if something pathological is going on. In
> > addition to showing lock contention problems it can often highlight a code
> > path being hit too frequently for some reason.
>
> I have attached build.sh and lockstat output from 24-way and 8-way builds
> of the amd64 GENERIC kernel on this system. Also dmesg and mount -v output
> so you can see what's mounted how, etc.
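(For anyone who wants to reproduce this: a lockstat run of that sort
looks roughly like the one below. The output file name and build flags
here are illustrative, not necessarily what I used; see lockstat(8)
for the full set of reporting options.)

    # as root: trace kernel lock activity for the duration of the
    # build and write the report to a file for later comparison
    lockstat -o lockstat-24way.txt ./build.sh -m amd64 -j 24 kernel=GENERIC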
Mindaugas spent quite a while looking at this with me. The slowdown
appears to be largely due to contention on the page queue lock.
Some of the contention is caused by the idle loop page zeroing code.
Mindaugas has dramatically improved its performance on large systems,
but I would still recommend turning it off on systems with more than
12 CPUs.
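For reference, the knob is the vm.idlezero sysctl mentioned below;
turning the zeroer off looks like this:

    # disable idle-loop page zeroing immediately
    sysctl -w vm.idlezero=0
    # and keep it off across reboots
    echo vm.idlezero=0 >> /etc/sysctl.conf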
I would genuinely like to know for what workload, on what hardware,
the idle loop zeroer is actually beneficial. I'm sure such workloads
exist, but clearly they are not ones I've tried recently.
His improvement shaves about 20 seconds off the build time. We now
scale nearly linearly up to about 12 build jobs and see a very slight
performance improvement from 12 to 16 jobs. With vm.idlezero=0 set,
that slight improvement becomes a little less slight.
I believe he's currently working on hashing the page queue lock to see
if that gives us some performance back for CPUs 12 through 24.
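For anyone unfamiliar with the technique: "hashing the lock" means
replacing the single page queue lock with an array of locks indexed by
a hash of the page, so CPUs working on unrelated pages rarely collide.
A rough sketch in the spirit of the kernel's mutex(9) API follows; this
is my illustration of the idea, not Mindaugas's patch, and all the
names are made up:

    /*
     * Sketch only: an array of locks indexed by a hash of the
     * vm_page address.  All names here are illustrative.
     */
    #include <sys/types.h>
    #include <sys/mutex.h>
    #include <uvm/uvm_page.h>

    #define PQ_NLOCKS   64      /* power of two */

    static kmutex_t pq_locks[PQ_NLOCKS];

    void
    pq_lock_init(void)
    {
            for (int i = 0; i < PQ_NLOCKS; i++)
                    mutex_init(&pq_locks[i], MUTEX_DEFAULT, IPL_NONE);
    }

    static kmutex_t *
    pq_lock(struct vm_page *pg)
    {
            /* hash on the page address; low bits are alignment */
            uintptr_t h = (uintptr_t)pg >> 6;

            return &pq_locks[h & (PQ_NLOCKS - 1)];
    }

    void
    pq_example_dequeue(struct vm_page *pg)
    {
            kmutex_t *lk = pq_lock(pg);

            mutex_enter(lk);
            /* ... unlink pg from its paging queue ... */
            mutex_exit(lk);
    }

The cost, of course, is that anything that has to walk a whole queue
(the pagedaemon, say) now has to take the locks one at a time.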
Thor