Re: 4.0.1/sun3 broken?
On Feb 13, 2011, at 11:59 AM, der Mouse wrote:
I don't _recall_ seeing anything go past that would bear on this, but
perhaps I missed it.
I'm trying to run an at least vaguely modern NetBSD on one of my
Sun-3s, in response to a ping from someone interested in historical
So I set up netboot and untarred the 4.0.1 distribution sets. It runs
apparently fine single-user diskless. But rather than try to do
everything diskless, I wanted to move stuff to the disk I have on it.
I got it labeled fine. But when I try to newfs it....
Sun3# newfs /dev/rsd0a
/dev/rsd0a: 4212.0MB (8626176 sectors) block size 16384, fragment
using 23 cylinder groups of 183.14MB, 11721 blks, 23168 inodes.
super-block backups (for fsck_ffs -b #) at:
trap type=0x0, code=0x145, v=0x5a972c66
kernel: Bus error trap
pid = 17, lid = 1, pc = 0E0F053C, ps = 2004, sfc = 1, dfc = 1
[...register dump...kernel stack dump...]
This appears to be repeatable; I rebooted and tried again and it
crashed with the same values printed except for the pid and a few
fragments of the kernel stack.
ddb's stack trace says the call chain is syscall, syscall_plain,
sys_pwrite, dofilewrite, vn_write, VOP_WRITE, nfsspec_write,
spec_write, sdwrite, physio, vmapbuf, uvm_km_alloc, uvm_map,
uvm_map_prepare, uvm_km_va_drain, callback_run_roundrobin, trap,
About all this says to me is that it's in a part of the kernel I don't
know - and that it (probably) isn't something like broken hardware
causing the sd driver to go insane.
The kernel is the stock sun3 4.0.1 GENERIC kernel, MD5
3e31d29f198ba236eafd8692b6ed9d4f. Full dmesg appears after my
I'm actually running into something very similar in all installers
from 4.0.1 onward. I haven't been able to get NetBSD installed on my
3/60 (24Mb) via an installer in quite a while. I see the same thing you
do: the newfs causes a kernel panic. And, yes, I know I should've said
something sooner... I found this when I tried to do a new install of
5.0 on my 3/60, ran into the issue, and then started
troubleshooting... and ran out of time.
What I've narrowed it down to is that it seems to have issues with
partitions over 1Gb. I've not tried 5.1 yet, but I have a feeling that
I'll run into the same thing as I last did with 5.0.2. What I did was
use the same drive & partitions to try to install everything from 3.0
up to 5.0.2. 3.0 can newfs without issue to all of my partitions.
4.0.1, 5.0, 5.0.1, and 5.0.2 installers can newfs my smaller
partitions (/ and /var, coming in around 64Mb apiece). /dev/rsd0g,
which comes in around 1400Mb, will consistently panic on newfs. I did
a little bit of messing with partition sizes and found that I can
newfs with 4.0.1+ as long as the partition is less than 1024Mb. Once
you go over, the newfs will cause the kernel panic.
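Since disklabel and newfs both report partition sizes as counts of 512-byte sectors, one quick sanity check before running newfs is to convert the sector count to Mb and compare it against that threshold. A minimal sketch (the helper name is mine, and 1024Mb is only the empirical cutoff observed above, not a documented limit):

```shell
#!/bin/sh
# Convert a 512-byte sector count to megabytes:
# 2048 sectors * 512 bytes = 1Mb.
sectors_to_mb() {
    echo $(( $1 / 2048 ))
}

# The rsd0a partition from the newfs output above, 8626176 sectors:
sectors_to_mb 8626176   # prints 4212 -- well over the ~1024Mb cutoff

# A partition right at the apparent threshold:
sectors_to_mb 2097152   # prints 1024
```

Anything at or above roughly 2097152 sectors would be suspect under 4.0.1+ until the underlying uvm_map/vmapbuf problem is tracked down.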
Not sure what additional information folks would want/need. Stack