Subject: Re: Kernel resident memory size is quite large
To: None <port-amd64@netbsd.org>
From: None <antiright@gmail.com>
List: port-amd64
Date: 03/30/2006 07:57:54
Date: Wed, 29 Mar 2006 16:06:46 -0500
From: jefbed
To: Havard Eidnes <he@netbsd.org>
Subject: Re: Kernel resident memory size is quite large
Message-ID: <20060329210646.GA7762@antiright.dyndns.org>
References: <20060329105225.GA18685@antiright.dyndns.org> <20060329.152028.04464439.he@uninett.no> <20060329134932.GA24749@beta.martani.repy.czf> <20060329.160858.06082150.he@uninett.no>
In-Reply-To: <20060329.160858.06082150.he@uninett.no>
User-Agent: Mutt/1.4.2.1i
On Wed, Mar 29, 2006 at 04:08:58PM +0200, Havard Eidnes wrote:
> > > > Is there any reason why the memory used by the kernel is so large?
> > > > Top is showing that kernel processes use 233MB of memory.
> > > > The system has 1GB, so the amount used by the kernel is a large percentage.
> > > > Does this memory include the buffer cache, or is it occupied by the kernel
> > > > itself?
> > >
> > > First I have to ask how you read top's output to come to the
> > > conclusion you do. E.g. you can't simply add the "RES" sizes of
> > > the processes with [] around their names; as far as I know, these
> > > processes all share the same virtual address space.
> > >
Yes, understood.
> > > However, with that said, it is quite normal for the kernel's data
> > > structures to occupy a major portion of physical memory, since this
> > > includes the area for buffering file data, vnodes etc. etc.
> > ^^^^^^^^^^^^^^^^^^^
> >
> > I don't think that buffered file data will show as the RES size of kernel
> > threads.
>
Buffered file data is accounted separately by top; the "File" figure on its
memory line shows 484M of cached file data on my system.
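For anyone who wants to look at those figures without going through top,
here is a small, untested sketch that reads the same VM counters top uses
for its memory line.  It assumes NetBSD's CTL_VM/VM_UVMEXP2 sysctl and the
struct uvmexp_sysctl field names (pagesize, active, inactive, wired,
execpages, filepages, free), so treat it as an illustration rather than
anything polished:

/*
 * Sketch: print a breakdown similar to top's "Memory:" line.
 * Assumes the VM_UVMEXP2 sysctl and struct uvmexp_sysctl from
 * <uvm/uvm_extern.h>; field names may vary between releases.
 */
#include <sys/param.h>
#include <sys/sysctl.h>
#include <uvm/uvm_param.h>
#include <uvm/uvm_extern.h>

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        struct uvmexp_sysctl u;
        size_t len = sizeof(u);
        int mib[2] = { CTL_VM, VM_UVMEXP2 };

        if (sysctl(mib, 2, &u, &len, NULL, 0) == -1) {
                perror("sysctl VM_UVMEXP2");
                return EXIT_FAILURE;
        }

        /* All counters are in pages; convert to megabytes. */
#define MB(pages) ((long long)((pages) * u.pagesize / (1024 * 1024)))

        printf("Act %lldM, Inact %lldM, Wired %lldM, "
            "Exec %lldM, File %lldM, Free %lldM\n",
            MB(u.active), MB(u.inactive), MB(u.wired),
            MB(u.execpages), MB(u.filepages), MB(u.free));

        return EXIT_SUCCESS;
}

The "File" figure printed there is the buffered file data, which, as noted
above, is not what shows up in the kernel threads' RES.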
> No, I agree, but that's why I specifically asked how he came to the
> conclusion that "kernel processes use 233MB of memory", and since no
> raw data was given to support this conclusion, it's worth double-
> checking and asking for that raw data, e.g. a cut+paste from top's
> output.
>
Paste of top output, as requested, sorted by RES:
load averages: 0.14, 0.16, 0.16 16:05:55
72 processes: 1 runnable, 69 sleeping, 1 stopped, 1 on processor
CPU states: 1.5% user, 0.0% nice, 0.5% system, 0.0% interrupt, 98.0% idle
Memory: 452M Act, 151M Inact, 5252K Wired, 21M Exec, 484M File, 54M Free
Swap: 512M Total, 512M Free
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND
11 root 18 0 0K 230M syncer 3:30 0.00% 0.00% [ioflush]
2 root 10 0 0K 230M usbevt 0:00 0.00% 0.00% [usb0]
3 root 10 0 0K 230M usbtsk 0:00 0.00% 0.00% [usbtask]
4 root 10 0 0K 230M usbevt 0:00 0.00% 0.00% [usb1]
5 root -6 0 0K 230M atath 0:00 0.00% 0.00% [atabus0]
6 root -6 0 0K 230M atath 0:00 0.00% 0.00% [atabus1]
7 root -6 0 0K 230M atath 0:00 0.00% 0.00% [atabus2]
8 root -6 0 0K 230M atath 0:00 0.00% 0.00% [atabus3]
9 root -6 0 0K 230M sccomp 0:00 0.00% 0.00% [atapibus0]
0 root -18 0 0K 230M schedule 0:00 0.00% 0.00% [swapper]
10 root -18 0 0K 230M pgdaemon 0:00 0.00% 0.00% [pagedaemon]
12 root -18 0 0K 230M aiodoned 0:00 0.00% 0.00% [aiodoned]
399 jefbed 2 0 61M 152M select 267:18 0.15% 0.15% XFree86
497 jefbed 2 0 4928K 8640K select 0:35 0.00% 0.00% fluxbox
861 jefbed 2 0 2812K 6948K select 0:58 0.05% 0.05% xterm
303 root 10 0 200K 6716K mfsidl 0:00 0.00% 0.00% mount_mfs
8516 jefbed 2 0 644K 4724K poll 0:00 0.00% 0.00% arshell
538 root 2 0 572K 4604K poll 0:00 0.00% 0.00% arshell
563 root 18 0 1180K 3944K pause 0:03 0.00% 0.00% ntpd
491 jefbed 2 0 408K 3208K select 0:00 0.00% 0.00% ssh
1168 jefbed 2 0 368K 3140K select 0:00 0.00% 0.00% ssh
659 jefbed 2 10 644K 3024K poll 0:03 0.00% 0.00% xscreensaver
141 jefbed 2 0 2048K 2960K select 0:06 0.00% 0.00% screen-4.0.2
7762 jefbed 10 0 1060K 2880K wait 0:00 0.00% 0.00% mutt
17326 jefbed 2 0 568K 2816K netio 0:00 0.00% 0.00% fetchmail
> "RES" coloumn, so if he is indeed seeing 233M for each and every
I do recognize that this memory is shared amongst the processes,
so I'm not implying that *each* uses that amount.
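As a quick sanity check on that reading (just arithmetic on the figures in
the paste above, nothing system-specific): the twelve bracketed kernel
threads each show 230M of RES, and adding those up would come to well over
the 1GB of RAM in this machine, so the figure can only be one shared kernel
address space reported once per thread.

/*
 * Back-of-the-envelope check using the numbers from the top paste above.
 * If each bracketed kernel thread owned its own 230M, the total would
 * exceed physical memory; since it can't, the 230M must be shared.
 */
#include <stdio.h>

int
main(void)
{
        const int nkthreads = 12;    /* bracketed [] entries in the paste */
        const int res_mb = 230;      /* RES shown for each of them */
        const int physmem_mb = 1024; /* the machine has 1GB of RAM */

        int naive_sum = nkthreads * res_mb;

        printf("naive sum of kernel-thread RES: %dM\n", naive_sum);
        printf("physical memory:                %dM\n", physmem_mb);
        if (naive_sum > physmem_mb)
                printf("=> the 230M is a single shared figure\n");

        return 0;
}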
-Jeff