Subject: Re: 24 meg limit still in effect?
To: Chuck McManis <cmcmanis@mcmanis.com>
From: John <john@sixgirls.org>
List: port-vax
Date: 07/20/2001 03:16:25
> Well, I've run 1.5 on a 4000/60 with 104MB and it worked fine. It was my
> build machine until I switched over to a 4000/90. To track down your
> panic, several bits of information are necessary:
>          1) What kernel and userland are you running?

1.5.1 (1.5-release tree from 2-July)

>          2) Where did the kernel panic?
>          3) What part of the kernel build process was it in?

Good question; I didn't have the serial port plugged in, and it isn't
usually physically accessible, so I couldn't see what the in-kernel
debugger said. I've since made sure savecore runs on restart.
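Concretely, that amounts to these two lines (variable names assuming the
stock NetBSD 1.5 /etc/defaults/rc.conf; adjust if yours differs):

    savecore=YES                 # copy the dump out of swap at boot
    savecore_dir="/var/crash"    # where netbsd.N / netbsd.N.core end up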

>          4) Did you try filling memory some other way to see if that
> panic'd it?

I'm running (or trying to run) sieving software (SNFS, the Special Number
Field Sieve), which allocates a 64 meg block and walks all over it. It
doesn't swap well, and it's a really good way to test memory.
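The access pattern boils down to something like this little test (a
minimal sketch, not the actual SNFS code; only the 64 meg figure comes
from the real thing):

    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK (64UL * 1024 * 1024)      /* the 64 meg block */

    int
    main(void)
    {
            unsigned char *p;
            unsigned long i;

            if ((p = malloc(BLOCK)) == NULL) {
                    perror("malloc");       /* datasize limit too low? */
                    return 1;
            }
            /* walk the whole block: write a pattern, then verify it */
            for (i = 0; i < BLOCK; i++)
                    p[i] = (unsigned char)(i ^ (i >> 8));
            for (i = 0; i < BLOCK; i++)
                    if (p[i] != (unsigned char)(i ^ (i >> 8))) {
                            printf("mismatch at byte %lu\n", i);
                            return 1;
                    }
            free(p);
            printf("64MB walked cleanly\n");
            return 0;
    }

If the datasize limit is too low, the malloc() fails up front; if memory
is marginal, the verify pass catches it.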

>          5) How confident are you in your RAM?

Very. I've rebuilt the source tree at least half a dozen times with no
compiler errors (except when I've forgotten to unlimit).
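(A note for the archives, since that one bites people: the fix is raising
the datasize limit before the build. Shell syntax from memory; check your
shell's manual:

    unlimit datasize         # csh/tcsh
    ulimit -d unlimited      # sh/ksh, value in kilobytes otherwise
)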

>          6) Do you have the latest toolchain?

2-July!

I guess I should look at a system core before I delve into trying to fix
the problem, but I thought I'd get some feedback about the current
practical limit on data size.
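Once a dump lands in /var/crash, my plan is roughly this, where netbsd.0
and netbsd.0.core are whatever savecore(8) wrote (the kcore target is
what I understand NetBSD's gdb supports; treat the exact incantation as
an assumption on my part):

    # gdb netbsd.0
    (gdb) target kcore netbsd.0.core
    (gdb) bt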

Thanks,
John Klos