Subject: Re: sun4c slowness again
To: Izumi Tsutsui <tsutsui@ceres.dti.ne.jp>
From: Andreas Hallmann <hallmann@ahatec.de>
List: port-sparc
Date: 10/11/2006 19:03:00
Have you tried modifying the VM page allocation targets?

Setting

vm.filemin=0
vm.filemax=1

in /etc/sysctl.conf

or manually via

sysctl -w vm.filemin=0
sysctl -w vm.filemax=1

helps a lot on machines that are short of memory (by today's standards).

You can watch the effect with top.
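In case it helps, here is roughly the whole sequence; just a sketch, the
default values shown by the first command differ between releases:

# check the current targets first
sysctl vm.filemin vm.filemax

# apply the new values on the running system
sysctl -w vm.filemin=0
sysctl -w vm.filemax=1

# and make them survive a reboot
echo "vm.filemin=0" >> /etc/sysctl.conf
echo "vm.filemax=1" >> /etc/sysctl.conf

If I remember correctly, vm.filemin/vm.filemax are the minimum and maximum
percentage of RAM kept for file-cache pages, so with 0/1 the file cache is
capped at about 1% and nearly all memory stays available for anonymous and
exec pages. In top you should see the "File" figure in the Memory line
shrink and "Free"/"Act" grow accordingly.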

AHA

Izumi Tsutsui wrote:

> I'm trying to run "build.sh tools" against the -current tree on my SPARCstation,
> but gcc runs very slowly, apparently due to a PMEG shortage:
> 
> ---
> output of top:
> 
> load averages:  2.11,  1.67,  1.41                  up 2 days,  4:44   02:34:32
> 41 processes:  1 runnable, 38 sleeping, 1 stopped, 1 on processor
> CPU states:  4.8% user,  0.0% nice, 94.3% system,  1.0% interrupt,  0.0% idle
> Memory: 25M Act, 13M Inact, 13M Wired, 6752K Exec, 14M File, 1260K Free
> Swap: 127M Total, 28M Used, 99M Free
> 
>   PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
>  3974 root      59    0    10M   19M RUN       20:20 84.81% 84.81% cc1
>  6286 tsutsui   18    0   664K 1284K pause      0:05  2.05%  2.05% tcsh
> 16995 tsutsui    2    0   252K 1224K CPU        0:00  9.04%  2.00% top
> 19144 root       2    0   156K  784K poll       5:53  1.12%  1.12% rlogind
>   365 root      18    0  1088K 3932K pause     37:21  1.07%  1.07% ntpd
>   210 root       2    0   156K  304K select    35:56  0.98%  0.98% ypbind
>   246 root       2    0   456K 9556K select    14:45  0.63%  0.63% amd
>     4 root      18    0     0K 7404K syncer     4:10  0.00%  0.00% [ioflush]
> 20576 root      10    0  4116K  424K wait       2:10  0.00%  0.00% <nbgmake>
>     3 root     -18    0     0K 7404K pgdaemon   2:04  0.00%  0.00% [pagedaemon]
>   597 root       2    0   260K  448K select     2:03  0.00%  0.00% master
>   171 root       2    0   196K  444K kqread     1:42  0.00%  0.00% syslogd
>     5 root     -18    0     0K 7404K aiodoned   1:33  0.00%  0.00% [aiodoned]
>   183 root       2    0   324K  296K poll       1:32  0.00%  0.00% rpcbind
>   652 root      10    0   304K  360K nanoslee   1:05  0.00%  0.00% cron
>   425 root       2    0   292K  220K select     1:04  0.00%  0.00% <sshd>
>   605 postfix    2    0   336K  440K select     0:25  0.00%  0.00% <qmgr>
>  1653 root      10    0  2100K  308K wait       0:22  0.00%  0.00% <nbmake>
> 
> ---
> output of systat vmstat:
> 
>     2 users    Load  2.19  1.73  1.44                  Wed Oct 11 02:35:02
> 
> Proc:r  d  s  w     Csw    Trp    Sys   Int   Sof    Flt      PAGING   SWAPPING
>      1     8         20    393     30   202    93    385      in  out   in  out
>                                                         ops
>   91.9% Sy   5.7% Us   0.0% Ni   2.4% In   0.0% Id    pages
> |    |    |    |    |    |    |    |    |    |    |
> ==============================================>>>%                        forks
>                                                                           fkppw
>            memory totals (in kB)            1440 Interrupts               fksvm
>           real  virtual     free              92 lev1                     pwait
> Active   25784    54680      912                 lev3                     relck
> All      59272    88168   102512               2 lev5                     rlkok
>                                                  lev6                     noram
> Namei         Sys-cache     Proc-cache       100 clock                    ndcpy
>     Calls     hits    %     hits     %           lev12                    fltcp
>         9        9  100                      100 prof                     zfod
>                                                3 vcfl pg                  cow
> Disks:   fd0   sd0  nfs0  nfs1  nfs2         381 vcfl seg              64 fmin
>  seeks                                           vcfl ctx              85 ftarg
>  xfers                                           vcfl rng                 itarg
>  bytes                                       381 mmu stln pmgs       3407 wired
>  %busy                                       381 mmu pagein               pdfre
>                                                  zs1 intr                 pdscn
>                                                  esp0 intr
> 
> ---
> There seem to be far too many "mmu stln pmgs" (stolen PMEGs) from pmap.c:me_alloc().
> 
> Is there any tuning variable to avoid this?
> ---
> Izumi Tsutsui