Subject: Re: increasing a process's max data size
To: None <port-sparc@netbsd.org>
From: Bernd Sieker <bsieker@freenet.de>
List: port-sparc
Date: 12/05/2002 16:18:07
On 05.12.02, 22:05:31, Ray Phillips wrote:
> I'm running NetBSD/sparc 1.5.2 on an SS10. The machine's main task
> is to run squid and that process has begun to restart once a day or
> so, generating the error
>
> xcalloc: Unable to allocate 1 blocks of 4104 bytes!
>
Is that a really heavily loaded squid on a machine with more
than 256 MB of RAM?
Otherwise I would rather restrict squid's memory usage. (It makes
no sense to have squid cache more data in "RAM" than you have actual
RAM; squid itself is probably better at writing unused data to
disk than the VM system, for this particular task.)
Note that the actual squid process may become a lot bigger than
what you set in the cache_mem variable. I use cache_mem 16 MB,
and the actual process size is usually around 40-45 MB. This is a
squid for a small LAN (10 machines sharing one ADSL 768/128 line).
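For illustration, the relevant squid.conf lines might look like
this (the cache_mem value is the one I use; the cache_dir path
and size are just made-up examples, adjust them to your disks):

    # keep squid's in-memory object cache small; the process
    # will still grow well beyond this (index, buffers, ...)
    cache_mem 16 MB

    # disk cache: 1000 MB under /var/squid/cache (example values)
    cache_dir ufs /var/squid/cache 1000 16 256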
> which the FAQ says is due to the process's maximum data segment size
> being reached. Is that value set by MAXDSIZ in
You also have to make sure the new maximum data size is actually
applied as the data size limit for the process. You can set the
limits using "limit", "ulimit" or "unlimit" (depending on the
shell), for example:
Or you can set the default data size limit (as opposed to the
maximum limit) in the kernel configuration as well:
options DFLDSIZ=bytes
(see options(4))
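For instance, both limits might appear together in a kernel
config file like this (the numbers are only examples, not
recommendations):

    # hard ceiling for any process's data segment (what the
    # FAQ entry refers to)
    options MAXDSIZ=0x10000000      # 256 MB
    # default (soft) data size limit processes start with
    options DFLDSIZ=0x04000000      # 64 MB

You then need to build and install the new kernel and reboot
for the change to take effect.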
>
> Ray
Regards,
Bernd
--
Bernd Sieker
NetBSD: Use the ENTIRE computer!
-- Andrew Gillham