Subject: Re: Old mail (but relevant to SCSI drivers/Jaz/Zip disks?)
To: David A. Gatwood <marsmail@globegate.utm.edu>
From: Colin Wood <cwood@ichips.intel.com>
List: port-mac68k
Date: 08/18/1997 16:36:16
David A. Gatwood wrote:
> 
> On Mon, 18 Aug 1997, Colin Wood wrote:
> > Anyway, for some systems (i.e. those with Quantum drives and the ncr5380
> > chip) the sbc driver appears to be more stable than the ncrscsi driver.
> > This seems to be the case for most systems using a Jaz/Zip drive as well.
> > However, there are a few systems where the sbc driver will totally hork
> > the filesystem and the ncrscsi driver works just fine (yours seems to
> > qualify).  Unfortunately, other than rather general statements like those
> > I've just given, there is no real way to determine which system is which
> > before actually running a kernel and seeing how it goes :-(
> 
> hork?  Never seen that one before....

I wish I could say I made it up, but whatever its origins, 'horked' has
always sounded like the perfect word to describe the filesystem while I'm
sitting there watching 'fsck -y' run and the list of munged inodes go
scrolling by ;-)  It really is a great word, tho.  Try using it sometime:
"My machine was running just fine until the nightly daily script ran and
_horked_ the machine" or "Every time I try to do a kernel build these
days, the system _horks_ itself..."  See? ;-)

> Systems known to have trouble with
> both drivers include the PB145 (which gets a little fs glitchiness with
> ncrscsi, and which panics when mounting the fs or in fsck, I forget which,
> with sbc).  Submitting a trace when you get a panic might help in finding 
> the glitch, depending on what the glitch is.  Dunno.  Anybody?

Anything is better than nothing?  It might turn out to be very helpful,
actually.  Perhaps Scott or Allen will recognize some kind of pattern in a
trace...
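
If the kernel was built with "options DDB", a panic should drop you into
the in-kernel debugger rather than just hanging, and you can pull a
backtrace right there.  Roughly (I'm going from memory, so take the exact
prompt and commands with a grain of salt):

	db> trace
	db> ps

Copy the output down by hand if you have to -- even a partial trace gives
Scott or Allen a lot more to go on than just the panic message.
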
> 
> > Well, if you're hosed, you're hosed :-(  I've had to do a complete
> > reinstall, I think, twice in the last 2 years.  Normally, I keep a copy
> > of my most recent tar files sitting around just in case things fall
> > apart.
> 
> Once a week.  ;-)

Ouch!  That's painful!  I hope you keep anything important backed up.
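
Even something quick and dirty is better than nothing -- say, an
occasional tar of /etc and your home directory (the paths below are just
an example, point it at whatever you actually care about):

	tar cvf /mnt/backup/home-`date +%Y%m%d`.tar /etc /home

That's usually enough to turn a reinstall into an annoyance instead of a
disaster.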

> > > >
> > > > Kinda strange....did you run 'fsck' and then look, or did you just go and
> > > > 'newfs' it?
> > > 
> > > I ran 'fsck' first.  It didn't change anything. 'df' still showed several
> > > hundred thousand blocks allocated until after I 'newfs'ed it.
> > 
> > Hmmmmmmm...I still don't know why this is happening.
> 
> They weren't by any chance marked bad by mkfs, were they?  Does it even
> have that capability?  Maybe it's just generating buggy inodes on that
> drive.  Have you tried using it for MacOS?  It could be either bad blocks
> or a flaky SCSI bus (termination problems come to mind).  Another
> possibility might be that the driver bugs are causing slightly bad reads
> (flakiness) during the fsck check.  Normally, there should not be any fs
> damage during an initial fsck after mkfs....  And since Apple's MacOS scsi
> drivers tend to be less error-prone than the ones in NetBSD-mac68k, I'd
> tend to blame the SCSI code rather than mkfs, _but_ that's only a guess.

Probably so.  Since Mkfs uses the MacOS SCSI drivers as it's running,
chances are it's the NetBSD side (or perhaps the drive itself) that's the
problem.

> > > >> In case anyone needs to know, I'm running this on a PB160 w/ 12MB RAM on
> > > >> an external 800MB Quantum hard disk with additional adventures on an
> 
> Aha!  It's a PB160.  Same motherboard (essentially) as the 145, or at
> least my 145 says PowerBook 145/160 on it....  So it's _not_ just mine....
> ;-)  Sounds like something inherent to the design of that particular model
> that's not behaving well with either driver. 

I guess I should probably note this in the FAQ next time I try it :-)

> 
> > > The boot fails right after the [preserving x bytes of netbsd
> > > symbol table] thing.  Right where every kernel fails on the PB160 ;).
> 
> Try zapping PRAM.  That fixes hangs there for some people.  Not sure why.

Hmmm...this is worth a try, too, although I hear that doing it continually
is _really_ bad for the system (I know of at least one system which now
has a bad motherboard b/c the user zapped PRAM pretty much every time he
booted).

Later.

-- 
Colin Wood                                 cwood@ichips.intel.com
Component Design Engineer - MD6                 Intel Corporation
-----------------------------------------------------------------
I speak only on my own behalf, not for my employer.