Re: Linux compat and swap
[Please forgive me for gathering the supplementary information for all
the answers, including earlier ones, into a single reply. And thank you
to all for answering.]
To be more precise:
The node is not a desktop node but a server one (so there is no X11
running, hence no firefox or the like), with two main functions:
1) A file server;
2) A processing node for (GIS) data manipulations: since there can be
huge amounts of data to process, it is more efficient to process the data
where it is stored, given the network capabilities.
The node's uptime is 225 days (since some disk replacements). During
office hours it serves files to 2 to 6 nodes, and it processes data
(sometimes during office hours; generally outside them).
It has 3862 MB of available memory.
When using a program to convert coordinates, this program being a Linux
binary, the program bailed out, three times in a row, after processing
20M records. By splitting the input data below this amount, I managed to
process it all.
=> This is why I jumped to the conclusion that the program had a memory
leak: the processing should be one line of input at a time, so whatever
the number of lines, I fail to see why it would need more memory for
more records.
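As an aside, the workaround can be sketched with split(1). The file
names, the 15M-line chunk size, and the "converter" command are
placeholders for illustration, not the actual commands used:

```shell
# Keep each chunk below the ~20M-record mark where the converter
# was killed, then process the chunks one by one.
split -l 15000000 records.txt part.
for f in part.*; do
    # "./converter" stands in for the actual Linux binary
    ./converter < "$f" >> converted.txt
done
```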
I found, afterwards, in /var/log/messages, that this program had been
killed because it was "out of swap".
=> Since it was the only program I had seen doing this, and since it was
running under emulation, I jumped to the conclusion that it had something
to do with allocating in the swap.
=> This conclusion was false because, after sending the message,
I looked in /var/log/messages* to see whether such a message had appeared
before for other programs. I found (the logs go back to 2014...) that it
had appeared a couple of times with another program: postgres. So the
emulation has indeed nothing to do with it. [Postgres is not used for
its network capabilities, and the instances that were killed were doing
statistics and internal administration, so it went unnoticed. Postgres
is overkill for the application and will soon be replaced, when a query
language is necessary, with sqlite3 files---and with basic files when
only key indexing, and no query language, is necessary.]
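For reference, the check through the rotated logs can be done along
these lines (assuming the usual messages rotation; the pattern simply
matches the "out of swap" wording quoted above):

```shell
# Look for "out of swap" kills in the current log...
grep 'out of swap' /var/log/messages
# ...and in the rotated logs; zgrep reads them whether or not
# they are compressed.
zgrep 'out of swap' /var/log/messages.*
```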
Concerning my sentence "RAM is not exhausted": my assumption (another
one...) was that when memory for processing was getting scarce, file
caching would be reduced, since there seems to be a natural "paging"
device for file pages: the file itself on disk. So for me, as long as
there was still a huge amount of memory used for file caching, there
was still a huge amount of memory available as spare for programs, at
the detriment of file caching and hence of speed.
Am I totally wrong about this? (I was monitoring other processes with
top(1); that is why I have seen the memory statistics.)
On Thu, Apr 23, 2020 at 08:45:34AM +0200, ignatios%cs.uni-bonn.de@localhost wrote:
> On Wed, Apr 22, 2020 at 08:46:08PM +0200, tlaronde%polynum.com@localhost wrote:
> > an exhaustion of the NetBSD swap partition
> > the RAM is not exhausted...
> I beg your pardon? What exactly is happening? Does swapctl -l
> claim it's full? What does vmstat say?
> also: i think you should be able to see with
> fstat -p insertthepidhere
> what the open files are (well, Inode number and file system).
> Any of them on a file system that's a tmpfs? But still, ram would
> be a problem, too.
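The suggested diagnostics, collected in one place (NetBSD commands,
guarded with command -v so the snippet is a no-op on systems lacking
them; $pid is a placeholder for the converter's pid):

```shell
pid=$$                     # placeholder: substitute the converter's pid
command -v swapctl >/dev/null && swapctl -l        # per-device swap usage
command -v vmstat  >/dev/null && vmstat -s         # VM statistics, incl. swap activity
command -v fstat   >/dev/null && fstat -p "$pid"   # open files (inode + filesystem)
```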
Thierry Laronde <tlaronde +AT+ polynum +dot+ com>
Key fingerprint = 0FF7 E906 FBAF FE95 FD89 250D 52B1 AE95 6006 F40C