NetBSD-Users archive


Re: top(1) behavior



On Fri, Aug 11, 2023 at 11:25 PM Martin Husemann <martin%duskware.de@localhost> wrote:
>
> On Fri, Aug 11, 2023 at 07:04:01PM -0700, Kevin Bowling wrote:
> > A real example is doing a -j 8 kernel build.  Up in the top of top(1),
> > I see global CPU usage in the high 90%.  Down in the process list, I
> > see a few processes in the 1-10% range that do not add up to 800%.  I
> > see some of the cc1 processes with 0% WCPU%/CPU%.
>
> I am not sure I see what you mean, for me typically there are a few gcc
> processes that take all the cpu, or later (close to the end) a few
> xz or gzip processes. At a few spots ld wins and hogs at least one cpu
> completely.
>
> At least it does not look completely off: the top area displays statistics
> for aggregated intervals, while many processes (besides the ones mentioned)
> are very short-lived - so the display is mostly what I'd expect it to be.
>
> If you do "top -b" (so the list is not truncated) and save it to a file,
> do you get a snapshot that is still unreasonable from your POV? Can you
> share a concrete example?

Here's a sample. This one is a bit better since the cc1plus processes
stick around a bit longer, but it still shows the per-process WCPU% not
adding up to anything near the global CPU stats.  I can annotate it as an
image if it is still not clear.

load averages:  6.15,  3.21,  1.67;               up 0+04:56:56   12:37:26
142 threads: 2 runnable, 123 sleeping, 10 zombie, 7 on CPU
CPU0 states: 92.6% user,  0.0% nice,  7.4% system,  0.0% interrupt,  0.0% idle
CPU1 states: 91.6% user,  0.0% nice,  8.2% system,  0.0% interrupt,  0.2% idle
CPU2 states: 91.6% user,  0.0% nice,  8.4% system,  0.0% interrupt,  0.0% idle
CPU3 states: 89.4% user,  0.0% nice, 10.0% system,  0.0% interrupt,  0.6% idle
CPU4 states: 57.9% user,  0.0% nice, 10.2% system,  0.0% interrupt, 31.9% idle
CPU5 states: 99.8% user,  0.0% nice,  0.2% system,  0.0% interrupt,  0.0% idle
CPU6 states: 96.2% user,  0.0% nice,  3.4% system,  0.0% interrupt,  0.4% idle
CPU7 states: 97.0% user,  0.0% nice,  3.0% system,  0.0% interrupt,  0.0% idle
Memory: 7213M Act, 5392K Inact, 110M Wired, 105M Exec, 5933M File, 7165M Free
Swap: 16G Total, 16G Free / Pools: 1017M Used

 PID   LID USERNAME PRI STATE       TIME   WCPU    CPU NAME      COMMAND
1598  1598 kev009    25 CPU/5       0:26 98.03% 72.66% -         cc1plus
6906  6906 kev009    25 CPU/7       0:26 96.51% 71.53% -         cc1plus
29633 29633 kev009    25 CPU/2       0:01 64.58%  6.15% -         cc1plus
6016  6016 kev009    25 CPU/6       0:01 75.00%  3.66% -         cc1plus
5636  5636 kev009    25 CPU/1       0:01 50.00%  2.44% -         cc1plus
2855  2855 kev009    85 poll/1      1:14  0.59%  0.59% -         xfce4-terminal
1867  1867 kev009    85 poll/3      2:00  0.00%  0.00% -         X
2041  2041 kev009    85 poll/0      0:19  0.00%  0.00% -         xfwm4
1867  1629 kev009    85 poll/3      0:14  0.00%  0.00% -         X
2016  2016 kev009    85 poll/7      0:07  0.00%  0.00% -         xfce4-panel
2099  2099 kev009    85 poll/1      0:03  0.00%  0.00% -         xfdesktop
2239  2239 kev009    85 select/0    0:02  0.00%  0.00% -         xscreensaver
 598   598 root      85 poll/2      0:01  0.00%  0.00% -         dhcpcd
 599   599 _dhcpcd   85 poll/3      0:01  0.00%  0.00% -         dhcpcd
2016  2089 kev009    85 poll/0      0:01  0.00%  0.00% gmain     xfce4-panel
 774   774 kev009    85 poll/0      0:01  0.00%  0.00% -         gvfsd-trash
18154 18154 kev009    43 CPU/3       0:00  0.00%  0.00% -         top
3737  3737 kev009    25 CPU/0       0:00  0.00%  0.00% -         cc1plus
15778 15778 kev009    25 RUN/3       0:00  0.00%  0.00% -         cc1plus
9044  9044 kev009    25 RUN/3       0:00  0.00%  0.00% -         cc1plus
3365  3365 kev009    86 wait/0      0:00  0.00%  0.00% -         su
   1     1 root      85 wait/3      0:00  0.00%  0.00% -         init
23460 23460 kev009    85 poll/1      0:00  0.00%  0.00% -         nbmake
1402  1402 kev009    85 poll/2      0:00  0.00%  0.00% -         nbmake
19515 19515 kev009    85 ttyraw/0    0:00  0.00%  0.00% -         sh
 545   545 kev009    85 poll/6      0:00  0.00%  0.00% -         thunar
2016  2087 kev009    85 poll/1      0:00  0.00%  0.00% gdbus     xfce4-panel
2044  2205 kev009    85 poll/2      0:00  0.00%  0.00% gdbus     xfsettingsd
2044  2019 kev009    85 poll/2      0:00  0.00%  0.00% gmain     xfsettingsd
2044  2044 kev009    85 poll/2      0:00  0.00%  0.00% -         xfsettingsd
2041  2267 kev009    85 poll/7      0:00  0.00%  0.00% gdbus     xfwm4
2041  1886 kev009    85 poll/5      0:00  0.00%  0.00% gmain     xfwm4
2036  2036 kev009    85 poll/3      0:00  0.00%  0.00% -         ssh-agent
1617  2037 kev009    85 poll/4      0:00  0.00%  0.00% gdbus     at-spi2-registry
1617  2021 kev009    85 poll/6      0:00  0.00%  0.00% gmain     at-spi2-registry
1617  1617 kev009    85 poll/1      0:00  0.00%  0.00% -         at-spi2-registry
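
For reference, summing the per-process CPU column of a saved snapshot and
holding it against the aggregate is a quick way to see the gap.  A rough
sketch, assuming the column layout above and a hypothetical file top.out
holding the `top -b` output:

```shell
# Sum the CPU column ($8 in the layout above); awk's numeric coercion
# drops the trailing "%".  Lines whose first field is not a PID (the
# header, CPU-state lines, etc.) are skipped.
awk '$1 ~ /^[0-9]+$/ { sum += $8 + 0 }
     END { printf "per-process CPU total: %.2f%%\n", sum }' top.out
```

On the snapshot above this totals roughly 157%, nowhere near the ~770%
non-idle the per-CPU state lines report.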



> For FreeBSD showing the values more like you expect I'd blame that on clang/
> llvm being slow, but I'll be shot for this remark :-)

There's probably something to that.

> But still speculating without data: one thing that could make a serious
> difference is the device you are running your build.sh on having a
> suboptimal driver spending a lot of cpu in the kernel for the IO the
> build produces.

> Martin

