Subject: Re: segmentation fault on fclose?
To: der Mouse <mouse@Collatz.McRCIM.McGill.EDU>
From: Chris G. Demetriou <>
List: current-users
Date: 09/06/1994 15:48:27
> > So, now that I'm bothering you gurus... why does *this* fail?
> > #include <stdio.h>
> > main()
> > {
> > char    buf[512 * 1024];
> > }
> > Is this some kernel size limitation?
> Essentially, yes.  The thing is, the kernel has to have some way to
> tell the difference between stack growth and wild pointers.

in a word, "huh"?

In all probability, this has nothing to do with detecting stack growth
vs. wild pointers.

What it has to do with is the stack size limit.  Normally (on the
i386; i'm not sure about other ports) the stack size limit is 512k.

You can't have more than 512k of stack, whether you grow it in small
chunks, or in one large one.

if you unlimit your stack size (or raise the limit), it should
work fine.  for instance:

27 [sun-lamp] tmp % cat > xxx.c
char buf[512*1024];
28 [sun-lamp] tmp % cc xxx.c
29 [sun-lamp] tmp % limit stacksize
stacksize       512 kbytes
30 [sun-lamp] tmp % a.out
Segmentation fault (core dumped)
31 [sun-lamp] tmp % limit stacksize 516k
32 [sun-lamp] tmp % a.out

In other words, you can't allocate 512k on the stack and still expect
to run in 512k of stack space.

The reason that the stack size limit is there isn't to help detect
wild pointers, it's there to keep the stack from growing too large,
just as the data size limit is there to keep the heap from growing too
large.

'unlimit' or setting specific limits will help.  Generally, however,
well-written programs work fine in the default limits.  (the obvious
exception is when a program is doing an _AWFUL_ lot... 8-)