Subject: UDMA downgrading
To: None <current-users@netbsd.org>
From: Thomas Hertz <thomas@hz.se>
List: current-users
Date: 10/15/2003 20:14:13
I'm running four IDE UDMA/100 disks in a RAID 5 array, one disk per 
channel, hanging off two Promise Ultra100 cards (using pdcide(4), but 
the problem showed up with pciide(4) as well). After a few hours under 
load, the disks start downgrading to UDMA/33, printing messages like 
these:

wd1(pdcide0:0:0): using PIO mode 4, Ultra-DMA mode 2 (Ultra/33) (using DMA data transfers)
wd1a: DMA error reading fsbn 118858432 of 118858432-118858463 (wd1 bn 118858432; cn 117915 tn 1 sn 49), retrying
wd1: soft error (corrected)
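
For context, the array is configured more or less like this; the 
partition letters and layout parameters below are illustrative, not 
the exact file:

    START array
    # numRow numCol numSpare
    1 4 0

    START disks
    /dev/wd0e
    /dev/wd1e
    /dev/wd2e
    /dev/wd3e

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 5

    START queue
    fifo 100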

Strangely enough, I haven't been able to reproduce even ONE of these 
messages when the disks are not in a RAID array! I realize that 
RAIDframe puts a heavier load on the disks, but the downgrade has a 
huge impact on performance once they are running at UDMA/33!

What is the problem here? I've gotten suggestions ranging from a bad 
mainboard to bad disks, but none of them seems likely. Would it be 
better to put the disks on the end connectors of the IDE cables 
instead of the middle ones? Is RAIDframe somehow to blame here? I'm 
confused.

Secondly, is it possible to make the driver not downgrade the 
transfer mode when it hits one of these DMA errors once in a while? 
They really only occur about once an hour, and the retry already 
corrects them, so that shouldn't be a problem.
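
As far as I can tell from reading the source, the downgrade decision 
is made by ata_dmaerr() in sys/dev/ata/ata.c, which calls 
ata_downgrade_mode() once an error counter crosses a threshold. A 
local kernel hack along the lines below ought to pin the mode; the 
counter and macro names are from my reading of -current and may not 
match other trees exactly:

    /*
     * sys/dev/ata/ata.c -- sketch of a local hack, not a tested patch.
     * ata_dmaerr() normally downgrades the transfer mode after enough
     * DMA errors within a short window of transfers; skipping the
     * downgrade call keeps UDMA/100 and relies on the driver's retries
     * for the occasional error.
     */
    void
    ata_dmaerr(struct ata_drive_datas *drvp, int flags)
    {
            drvp->n_dmaerrs++;
            if (drvp->n_dmaerrs >= NERRS_MAX && drvp->n_xfers <= NXFER) {
    #if 0       /* local hack: never downgrade, just reset the counters */
                    ata_downgrade_mode(drvp, flags);
    #endif
                    drvp->n_dmaerrs = 0;
                    drvp->n_xfers = 0;
            }
    }

Is patching something like this out a terrible idea, or is there a 
cleaner knob I'm missing?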

--
Thomas Hertz