Subject: Re: recommended systems
To: None <tmm@net2.mcci.com>
From: Luke Mewburn <lukem@wasabisystems.com>
List: port-i386
Date: 04/20/2001 00:32:59
On Thu, Apr 19, 2001 at 04:24:51PM +0200, wojtek@3miasto.net wrote:
> > My gut feel is that if you can afford it, stick with LVD SCSI and a
> > hardware RAID card, because of the simplicity of cabling, and the card
> 
> it's all true. but not when one big SCSI drive costs more than the whole
> machine without drives.

maybe in europe. here in australia, 18GB scsis are about A$600,
which is a bit cheaper than a 60GB ide. as i said, *if you can
afford it*, which usually means a company or university, not home.


> > For a home system or small server, however, I think you can't go wrong
> > with the 2 device Escalade to mirror 2 drives, or the 4 drive version
> > to do RAID1+0 (RAID1 mirror then RAID0 stripe the mirrors), especially
> > if you put the drives in hot-plug ide carriers (which cost ~ U$15 here
> > in Australia).
> 
> for up to 4 drives, wouldn't a software solution be OK?

nope. been there, done that. the performance sucked under transactional
load (e.g., untar or cvs in one window, while trying to use the same
filesystem via the shell in another), even with a separate ata66 or
ata100 controller per drive (each drive as master). my much older scsi
disks under the same raidframe config did not suck. single threaded
benchmarks such as dd off the disk or bonnie showed s/w IDE raid as
being `faster', but in practice it was not. note that various
`hardware raid' IDE cards suffer the same problem (promise, adaptec
aaa); i believe the 3ware is one of the few ide raid cards that
doesn't suck like this.
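
to illustrate the kind of workloads i mean (device names, paths and
sizes below are just placeholders, adjust for your own setup):

	# single threaded sequential read straight off the raid set;
	# the sort of test that makes s/w ide raid look `fast'
	dd if=/dev/rraid0d of=/dev/null bs=64k count=16384

	# vs. a transactional mix on the same filesystem: in one shell
	tar -zxf /tmp/pkgsrc.tar.gz
	# ... and at the same time, in another shell
	cvs -q update -dP
	ls -lR . > /dev/null

the second case is where the s/w ide raid fell apart for me.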

plus, it was a lot more stuffing around getting the system to boot off
raidframe raid1 mirroring; see my previous posts in the mail archives
about this.  with h/w raid, it just appears like a bios-provided
60GB disk (if you have 4 x 30GB in raid1+0).
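
for reference, a minimal raidframe raid1 config looks something like
this (component partitions and the serial number are just examples;
see raidctl(8) and my earlier posts for the boot block details):

	# /etc/raid0.conf
	START array
	# numRow numCol numSpare
	1 2 0

	START disks
	/dev/wd0e
	/dev/wd1e

	START layout
	# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
	128 1 1 1

	START queue
	fifo 100

then roughly:

	raidctl -C /etc/raid0.conf raid0
	raidctl -I 2001042001 raid0	# initialise component labels
	raidctl -iv raid0		# initialise parity (mirror copy)
	raidctl -A root raid0		# mark for root autoconfiguration

getting boot blocks onto each component so the bios can still load the
kernel is the stuffing-around part i was referring to.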