Subject: Re: Many PCI IDE cards in one box, is this possible?
To: Greg Oster <oster@cs.usask.ca>
From: Brian Buhrow <buhrow@zocalo.net>
List: port-i386
Date: 01/19/2001 05:56:34
	Actually, I put 11 drives in a single RAID 5 array and made the
12th drive a hot spare.  The parity calculation seems to take about 24
hours, and while it's running it takes half an hour to log in.  This is
on an Intel motherboard with a 733MHz CPU and 256MB of memory.  I'm
hoping this is just the pain of initialization, but if it keeps up this
way, I don't know how useful it will be.  We're currently waiting for
the second attempt at parity initialization to complete.
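
For the record, the config file I'm feeding to raidctl looks roughly
like this (I'm typing the wd partition names and the stripe size from
memory, so treat the exact values as illustrative):

    START array
    # numRow numCol numSpare
    1 11 1

    START disks
    /dev/wd0e
    /dev/wd1e
    /dev/wd2e
    /dev/wd3e
    /dev/wd4e
    /dev/wd5e
    /dev/wd6e
    /dev/wd7e
    /dev/wd8e
    /dev/wd9e
    /dev/wd10e

    START spare
    /dev/wd11e

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 5

    START queue
    fifo 100
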
I've also been making the stripe size smaller, in an effort to lower
the per-disk I/O request size and keep individual I/O errors from the
too-long IDE cables from spoiling the party.  So far, so good.
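
Concretely, the knob I'm turning is sectPerSU in the layout section:
dropping it from 32 to 16, say, shrinks each stripe unit from 16KB to
8KB (assuming 512-byte sectors), so no single drive should be handed
more than 8KB per request -- those particular numbers are just an
example.  After editing the file, the reconfigure dance looks roughly
like this (the config path and serial number below are arbitrary):

    raidctl -C /etc/raid0.conf raid0   # force-configure with the new layout
    raidctl -I 2001011900 raid0        # stamp fresh component labels
    raidctl -iv raid0                  # re-write parity, with a progress meter
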
Unfortunately, Manuel says that the limit for UDMA100 cables is 18
inches.  Our cables are 24 inches, and we have no physical way of
shortening them while still installing these disks in a box that meets
the cooling requirements for running so many drives.  So, other than
living with the slower disk performance over these longer cables, is
there anything we can do?
-Brian

On Jan 19,  1:41am, Greg Oster wrote:
} Subject: Re: Many PCI IDE cards in one box, is this possible?
} Brian Buhrow writes:
} > 	Thanks for that, I didn't know.  On a related note, while I have 
} > your attention, I notice that when I try to configure 12 of these drives
} > into an array, and start the parity calculation process, the machine
} > becomes incredibly slow, and the load goes up to 13 or 14.  ps shows that
} > raidframe_parity is the all-consuming process, and occasionally I see
} > messages like:
} > pciide2:1:1: bogus intr
} > The drives have all stepped down to mode pio 4, so I don't think they're
} > running that fast, but something seems saturated.  Any notion of what it
} > might be? 
} 
} have a look at 'systat vmstat' and, in particular, at the number of interrupts 
} being served... 'systat iostat' might be interesting as well...
} 
} > Will this problem resolve itself when we begin using the raid5
} > array in a light manner?
} 
} Do you have all 12 drives in a single RAID5 array?  "ouch".  
} I think you'll be ok once you start 'normal operation' (depending on what your 
} RAID configuration, disklabels, newfs parameters, etc. look like..)
} 
} Later...
} 
} Greg Oster
} 
} 
>-- End of excerpt from Greg Oster