Subject: increasing a process's max data size
To: None <port-sparc@netbsd.org>
From: Ray Phillips <r.phillips@jkmrc.uq.edu.au>
List: port-sparc
Date: 09/05/2003 17:08:26
Dear port-sparc:

Last December I started a thread on this list [1] about allowing a 
process to use more than NetBSD's built-in limit of 256 MB of RAM. 
Since then I've built a custom kernel for my SS10 by adding these 
lines to /usr/src/sys/arch/sparc/conf/GENERIC:

options DFLDSIZ=408944640
options MAXDSIZ=408944640
options NMBCLUSTERS=1024
options NKMEMPAGES=4096

This change does allow squid to use more memory, and everything is 
fine except, that is, when it's using more than 256 MB of RAM and I 
send it a signal such as

   # squid -k reconfigure

in which case the machine crashes and has to be power cycled.  If 
squid is using less than 256 MB, sending it a signal causes no 
problems, so I've obviously missed something important.  Could you 
tell me what, please?

I've tried this on a NetBSD/i386 machine and found the same problem 
exists there.


Ray


[1] which finished with this posting
     http://mail-index.netbsd.org/port-sparc/2002/12/06/0003.html




This kernel gives these values:

# limit
cputime         unlimited
filesize        unlimited
datasize        399360 kbytes
stacksize       512 kbytes
coredumpsize    unlimited
memoryuse       360384 kbytes
memorylocked    120128 kbytes
maxproc         80
openfiles       64
# limit -h
cputime         unlimited
filesize        unlimited
datasize        399360 kbytes
stacksize       399360 kbytes
coredumpsize    unlimited
memoryuse       360384 kbytes
memorylocked    360384 kbytes
maxproc         532
openfiles       1772