Current-Users archive
Re: Build time measurements
Hi Andreas,
On Fri, Mar 27, 2020 at 10:39:44AM +0200, Andreas Gustafsson wrote:
> On Wednesday, I said:
> > I will rerun the 24-core tests with these disabled for comparison.
>
> Done. To recap, with a stock GENERIC kernel, the numbers were:
>
> 2016.09.06.06.27.17 3321.55 real 9853.49 user 5156.92 sys
> 2019.10.18.17.16.50 3767.63 real 10376.15 user 16100.99 sys
> 2020.03.17.22.03.41 2910.76 real 9696.10 user 18367.58 sys
> 2020.03.22.19.56.07 2711.14 real 9729.10 user 12068.90 sys
>
> After disabling DIAGNOSTIC and acpicpu, they are:
>
> 2016.09.06.06.27.17 3319.87 real 9767.39 user 4184.24 sys
> 2019.10.18.17.16.50 3525.65 real 10309.00 user 11618.57 sys
> 2020.03.17.22.03.41 2419.52 real 9577.58 user 9602.81 sys
> 2020.03.22.19.56.07 2363.06 real 9482.36 user 7614.66 sys
Thanks for repeating the tests. For the sys time to still be that high
relative to user time, there must be some other limiting factor. Does that
machine have a tmpfs /tmp? Is NUMA enabled in the BIOS? Differing node
numbers for the CPUs in dmesg on recent kernels would be a good clue. Is it
a really old source tree? I would be interested to see lockstat output from
a kernel build at some point, if you're so inclined.
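To make "sys still high relative to user" concrete, the second table above
can be reduced to sys/user ratios with a quick awk one-liner (the data below
is copied verbatim from that table):

```shell
# Compute the sys/user ratio for each run in the second table above.
# Columns: source date, real, user, sys (all times in seconds).
cat <<'EOF' |
2016.09.06.06.27.17 3319.87 9767.39 4184.24
2019.10.18.17.16.50 3525.65 10309.00 11618.57
2020.03.17.22.03.41 2419.52 9577.58 9602.81
2020.03.22.19.56.07 2363.06 9482.36 7614.66
EOF
awk '{ printf "%s  sys/user = %.2f\n", $1, $4 / $3 }'
```

This prints a ratio of roughly 0.43 for the 2016 kernel but around 0.80-1.13
for the 2019/2020 kernels, i.e. the newer kernels still spend about as much
time in the kernel as in user space even with DIAGNOSTIC and acpicpu disabled.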
Cheers,
Andrew
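For readers wanting to gather the lockstat data mentioned above: on NetBSD,
lockstat(8) traces kernel lock contention while a given command runs. The
sketch below is illustrative only, not a verified invocation; the build.sh
flags, target architecture, and output path are assumptions, so check
lockstat(8) and BUILDING on your own system before use.

```shell
# Hedged sketch: trace kernel lock contention for the duration of a
# kernel build and save the report. Flags and paths are illustrative.
lockstat ./build.sh -j 24 -m amd64 kernel=GENERIC > /tmp/lockstat-build.txt
```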