Subject: increasing a process's max data size
To: None <port-sparc@netbsd.org>
From: Ray Phillips <r.phillips@jkmrc.uq.edu.au>
List: port-sparc
Date: 12/05/2002 22:05:31
I'm running NetBSD/sparc 1.5.2 on an SS10. The machine's main task
is to run squid, and that process has begun to restart once a day or
so, generating the error
xcalloc: Unable to allocate 1 blocks of 4104 bytes!
which the FAQ says is due to the process's maximum data segment size
being reached. Is that value set by MAXDSIZ in
/usr/src/sys/arch/sparc/include/vmparam.h
(256 MB by default), and am I free to make it whatever I like, then
build a new kernel?
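(I'm guessing that, rather than editing the header, the value could be
overridden from the kernel config file, assuming the sparc vmparam.h
wraps the definition in the usual #ifndef guard. Something like the
following, where the 512 MB figure is only an example:

	options 	MAXDSIZ="(512*1024*1024)"	# example: 512 MB data ceiling

followed by re-running config and building the new kernel.)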
Is there a way to raise the limit just for the squid process? I
experimented with sysctl (squid's pid was 18116):
ap0# sysctl proc.18116.rlimit.datasize.hard \
proc.18116.rlimit.datasize.soft
proc.18116.rlimit.datasize.hard = 268435456
proc.18116.rlimit.datasize.soft = 268435456
ap0# sysctl -w proc.18116.rlimit.datasize.soft=373293056
proc.18116.rlimit.datasize.soft: 268435456 -> 373293056
ap0# sysctl -w proc.18116.rlimit.datasize.hard=373293056
proc.18116.rlimit.datasize.hard: 268435456 -> 373293056
ap0# sysctl proc.18116.rlimit.datasize.soft \
proc.18116.rlimit.datasize.hard
proc.18116.rlimit.datasize.soft = 268435456
proc.18116.rlimit.datasize.hard = 268435456
Why didn't the changes take effect?
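(In case it clarifies what I'm after: the per-process effect I want is
what I'd get by raising the data-size limit in the shell that starts
squid, assuming the kernel's ceiling had already been raised. The squid
path below is only an example, and I believe sh's ulimit -d takes the
value in kilobytes:

	ulimit -d 364544	# 364544 KB = 373293056 bytes, the value I tried above
	/usr/pkg/sbin/squid

But that still needs the hard limit to allow it, which brings me back
to the question above.)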
Ray