Subject: Re: memory (re-)allocation woes
To: theo borm <>
From: Greg A. Woods <>
List: tech-kern
Date: 11/27/2004 14:56:48
[ On Saturday, November 27, 2004 at 16:05:29 (+0100), theo borm wrote: ]
> Subject: memory (re-)allocation woes
> As a follow-up to my previous post, I checked if the same
> problem (user program memory allocation freezing/rebooting
> a 1.6.2 system) does occur in simpler set-ups. I ended up
> adding local root and swap disks to some of the cluster
> nodes and did fresh installs of 1.6.2 on those.
> I am sad to say that it does.

No problem with the slightly more portable (architecture-wise) version
attached below on my AlphaServer 4000 (2x400MHz, 1.5GB RAM, 2GB swap).

14:32 [703] $ /sbin/swapctl -lk
Device      1K-blocks     Used    Avail Capacity  Priority
/dev/ld0b     2048256   452360  1595896    22%    0

14:32 [704] $ ulimit -a
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         1048576
stack(kbytes)        32768
lockedmem(kbytes)    262144
memory(kbytes)       2048000
nofiles(descriptors) 13196
processes            4116

14:29 [702] $ ./trealloc 1048576 1048576 
[[ .... ]]
step 342: reallocating 359661568 bytes... trealloc: realloc() failed: Cannot allocate memory

The system got a little sluggish, and my emacs processes got swapped out
because I wasn't using them (I was just playing Spider to keep a small
program running to get a good feel for system response :-), but
otherwise it just chugged along fine.

It was interesting to watch it in top as it grew and shrank and then
grew larger again with each step.  :-)

I think the problem might be that the default rlimits are too large for
the resources available on the systems you're using.  On BSD systems,
which over-commit memory resources, you cannot allow user processes to
get out of hand and soak up too much of the system's resources.

Note how the program failed on my system when it did reach the maximum
data size limit imposed by my rlimits.

(That the program thought it was only trying to use 702464 KB at the
time only reveals, I think, the wastage in our malloc() implementation.)

						Greg A. Woods

+1 416 218-0098                  VE3TCP            RoboHack <>
Planix, Inc. <>          Secrets of the Weird <>

/*
 * trealloc.c: originally posted:
 * From: theo borm <>
 * Message-ID: <>
 * Date: Sat, 27 Nov 2004 16:05:29 +0100
 * To:
 */

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
	char *mem1;
	char *mem2;
	int i = 0;
	unsigned long int ulngint;
	size_t size;
	size_t increment;

	if (argc != 3) {
		fprintf(stderr, "Usage: trealloc <initial-size> <increment>\n");
		exit(1);
	}
	if (sscanf(argv[1], "%lu", &ulngint) != 1)
		errx(1, "Invalid size");
	size = ulngint;

	if (sscanf(argv[2], "%lu", &ulngint) != 1)
		errx(1, "Invalid increment");
	increment = ulngint;

	printf("first allocation (%lu bytes)... ", (unsigned long int) size);
	if ((mem1 = malloc(size)) == NULL)
		err(1, "malloc() failed");
	printf("done\n");

	while (1) {
		size += increment;
		printf("step %d: reallocating %lu bytes... ",
		    i, (unsigned long int) size);
		if ((mem2 = realloc(mem1, size)) == NULL)
			err(1, "realloc() failed");
		mem1 = mem2;
		printf("done\n");
		i++;
	}
	/* NOTREACHED */
}