tech-kern archive


Re: lookup on memory shortage



On Tue, Sep 30, 2008 at 12:10:34PM +0200, Manuel Bouyer wrote:
> [...]
> db> show uvmexp
> Current UVM status:
>   pagesize=4096 (0x1000), pagemask=0xfff, pageshift=12
>   125114 VM pages: 70679 active, 34527 inactive, 1520 wired, 9 free
>   pages  84819 anon, 18159 file, 3748 exec
>   freemin=256, free-target=341, wired-max=41704
>   faults=1230798242, traps=1231391590, intrs=19787899, ctxswitch=69011216
>   softint=27506896, syscalls=658831806, swapins=490, swapouts=530
>   fault counts:
>     noram=18668, noanon=0, pgwait=8, pgrele=0
>     ok relocks(total)=176116(176124), anget(retrys)=210791151(137139), amapcopy=126345300
>     neighbor anon/obj pg=330698213/1683147083, gets(lock/unlock)=418364813/38984
>     cases: anon=153914740, anoncow=51459898, obj=342346991, prcopy=76017815, przero=596820944
>   daemon and swap counts:
>     woke=32248, revs=13625, scans=3718773, obscans=2574210, anscans=511975
>     busy=4666, freed=2786331, reactivate=248373, deactivate=5131980
>     pageouts=243529, pending=88830, nswget=136855
>     nswapdev=1, swpgavail=65535
>     swpages=65535, swpginuse=65535, swpgonly=56406, paging=0
> 
> This raises several questions. First, I have trouble parsing the
> swap section of "show uvmexp": what do swpgavail, swpages and
> swpginuse mean? Is my swap full?

Here's what top shows while the box is running (whole list of processes,
and not wedged yet):

load averages:  0.10,  0.06,  0.01;               up 0+06:42:38        18:40:08
50 processes: 48 sleeping, 1 zombie, 1 on CPU
CPU states:  5.8% user,  0.0% nice,  4.4% system,  0.0% interrupt, 89.8% idle
Memory: 281M Act, 137M Inact, 5964K Wired, 14M Exec, 76M File, 1308K Free
Swap: 256M Total, 256M Used, 324K Free

  PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
 5076 root      85    0  6528K  333M biowait    3:39  7.42%  7.42% cc1plus
  220 root      85    0   904K 5408K pause       ???  0.00%  0.00% ntpd
  431 bouyer    43    0    96K 1184K CPU        0:03  0.00%  0.00% top
 3953 root      83    0    76K    4K wait       0:02  0.00%  0.00% <pbulk-build
 9403 bouyer    85    0   344K 2956K select     0:00  0.00%  0.00% sshd
   34 bouyer    85    0   392K 1348K select     0:00  0.00%  0.00% screen-4.0.3
  351 root      85    0    56K  516K nanoslp    0:00  0.00%  0.00% cron
  109 root      85    0    72K  408K kqueue     0:00  0.00%  0.00% syslogd
  445 root      85    0  1176K    4K pause      0:00  0.00%  0.00% <tcsh>
10169 bouyer    85    0  1168K    4K pause      0:00  0.00%  0.00% <tcsh>
  405 bouyer    85    0  1092K    4K pause      0:00  0.00%  0.00% <tcsh>
  391 bouyer    85    0  1092K    4K pause      0:00  0.00%  0.00% <tcsh>
28280 bouyer    85    0   380K    4K pause      0:00  0.00%  0.00% <screen-4.0.
 1315 root      85    0   344K    4K netio      0:00  0.00%  0.00% <sshd>
  240 root      85    0   284K    4K select     0:00  0.00%  0.00% <sshd>
  343 postfix   85    0   264K    4K kqueue     0:00  0.00%  0.00% <qmgr>
  434 root      85    0   240K    4K pause      0:00  0.00%  0.00% <ksh>
  538 postfix   85    0   208K    4K kqueue     0:00  0.00%  0.00% <pickup>
21183 root      85    0   192K    4K wait       0:00  0.00%  0.00% <gmake>
29765 root      85    0   164K    4K wait       0:00  0.00%  0.00% <sh>
 3944 root      85    0   160K    4K wait       0:00  0.00%  0.00% <sh>
  473 root      85    0   160K    4K wait       0:00  0.00%  0.00% <sh>
 3860 root      85    0   160K    4K wait       0:00  0.00%  0.00% <sh>
  332 root      85    0   156K    4K kqueue     0:00  0.00%  0.00% <master>
  362 root      85    0    60K    4K nanoslp    0:00  0.00%  0.00% <getty>
  318 root      85    0    60K    4K nanoslp    0:00  0.00%  0.00% <getty>
  363 root      85    0    60K    4K nanoslp    0:00  0.00%  0.00% <getty>
  344 root      85    0    56K    4K kqueue     0:00  0.00%  0.00% <inetd>
  326 root      85    0    52K    4K ttyraw     0:00  0.00%  0.00% <getty>
    1 root      85    0    44K    4K wait       0:00  0.00%  0.00% <init>
29957 root      85    0    36K    4K kqueue     0:00  0.00%  0.00% <tail>
  227 root      85    0    28K    4K kqueue     0:00  0.00%  0.00% <powerd>
17668 root      81    0   164K    4K wait       0:00  0.00%  0.00% <sh>
 1432 root      81    0   164K    4K wait       0:00  0.00%  0.00% <sh>
 1878 root      81    0   156K    4K wait       0:00  0.00%  0.00% <g++>
 4901 root      76    0   196K    4K wait       0:00  0.00%  0.00% <gmake>
29853 root      76    0   192K    4K wait       0:00  0.00%  0.00% <gmake>
29281 root      76    0   192K    4K wait       0:00  0.00%  0.00% <gmake>
29493 root      76    0   192K    4K wait       0:00  0.00%  0.00% <gmake>
  521 root      76    0   192K    4K wait       0:00  0.00%  0.00% <gmake>
28691 root      76    0   192K    4K wait       0:00  0.00%  0.00% <gmake>
27064 root      76    0   172K    4K wait       0:00  0.00%  0.00% <make>
28112 root      76    0   168K    4K wait       0:00  0.00%  0.00% <make>
29180 root      76    0   164K    4K wait       0:00  0.00%  0.00% <sh>
 1809 root      76    0   164K    4K wait       0:00  0.00%  0.00% <sh>
14914 root      76    0   164K    4K wait       0:00  0.00%  0.00% <sh>
18164 root      76    0   164K    4K wait       0:00  0.00%  0.00% <sh>
29726 root      76    0   164K    4K wait       0:00  0.00%  0.00% <sh>
 3163 root      76    0   160K    4K wait       0:00  0.00%  0.00% <sh>

The system has 512MB of RAM and 256MB of swap. I'm not sure how we can get
to a state where almost all virtual memory is used, when the largest process
is only 330MB and all the others shouldn't sum to more than a few tens of MB.

-- 
Manuel Bouyer, LIP6, Universite Paris VI.           
Manuel.Bouyer%lip6.fr@localhost
     NetBSD: 26 ans d'experience feront toujours la difference