Port-alpha archive


Re: Is this list alive?



On Sun, Dec 16, 2012 at 08:01:49PM -0500, Dave McGuire wrote:
> I'm amazed that the 1000A is still running.  They're the only Alphas
> I've ever found to be less reliable than the Multia.  I'm glad to hear
> at least ONE of them was decent.

I've never met anybody else with a 1000A -- any suggestions on things to
look out for, common failures, etc.?

>   But...why move away from SCSI?

roughly ordered: capacity, performance, availability, physical space,
and power consumption / waste heat.

capacities of non-SAS SCSI drives seem to be capped at 300GB, and it's
unclear to me whether anybody still manufactures non-SAS SCSI drives at
all.  sure, I can stockpile 36GB drives, but a few spare 2TB drives make
a much smaller stack for the same storage space...

theoretically it should be possible to bridge between "classic" SCSI and
SAS (and, by superset, SATA), but I don't know where to find such
devices.  clues welcome.

> I'll bet even older SCSI drives will last a good bit longer than the
> consumer-grade drek flooding the market now.

certainly a lot of the old drives keep going!  I have an RZ56 and an
RZ57 in my DECstation that just refuse to die.  unfortunately for those
drives, longevity isn't the only metric I'm concerned about.

> And non-consumer-grade SATA/SAS drives cost a fortune.

so do high-capacity SCSI and SCA drives.  :)

> It'll take a good long while to recoup that kind of cost in reduced
> power consumption.

pulling out my kill-a-watt, shuffling some power cords, and exercising
redundant power supplies, I see that the external disk tray on my 1000A
is pulling a bit over 120W while idle.  over the course of a year, that
costs me roughly $125 to keep running at $0.12/kWh.  that certainly
covers the cost of a gig-E card, and I have 80GB of RAIDed free space on
another machine that I could export via iSCSI... (now, could I get it
all configured for only $80 of my time investment?  hee hee.)
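for the curious, the back-of-the-envelope math: 120W around the clock
is 120W * 24h * 365 = ~1051 kWh/year, and 1051 kWh * $0.12/kWh works
out to about $126.

sketching the iSCSI side from memory (the device name, size, and
netblock below are made-up placeholders, and the exact syntax is worth
checking against the man page): NetBSD's bundled iscsi-target(8) reads
/etc/iscsi/targets, which looks roughly like

  # extent   file or device   start   length
  extent0    /dev/rwd1e       0       80GB
  # target   flags   storage   allowed initiators
  target0    rw      extent0   192.168.1.0/24

start the target daemon on the machine with the free space, and the
storage is reachable over the gig-E link (assuming an initiator on the
alpha end, which I haven't verified for 6.x).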

> Ugh, why [suck it up and migrate all the services to amd64]?  I parse
> "commode" in "commodity".  [amd64] hardware is crap...

where else to go?

the same concerns as storage apply: capacity, performance, availability,
physical space, and power consumption / waste heat.

I don't recall how long it took the last time I built the NetBSD world
on my 1000A, but on a five-year-old x86 system at work, it takes about
20 minutes.

for kicks, I shuffled some disk, added some swap, and kicked off a
niced native build of NetBSD-6 on my 1000A.  hopefully it'll take less
than a day.  :)
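(if the native build really does take a day, the obvious cheat is to
cross-build from the fast x86 box instead -- build.sh does that out of
the box.  the object/release paths and -j value below are placeholders
for whatever the machine has to spare:

  $ cd /usr/src
  $ ./build.sh -U -m alpha -O ../obj.alpha -R ../release.alpha -j8 release

that produces a complete alpha release without needing root, which the
1000A can then be upgraded from.)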

> unless you buy really high-end stuff, but you can buy a new car for
> what some of those machines cost.

a lot of DEC gear cost as much as a new car when it was new, too.  a
better question is whether I can find anything (from any architecture)
with comparable reliability.

> And in the end you still have to deal with crap like the BIOS.

EFI is indeed a horror show, but is it bad enough to forgo the other
benefits?  for me the answer is no, so long as I don't forget just how
much it sucks, and remember to give sh*t to the responsible co-workers
when the opportunities arise...  :)

>   My next major infrastructure upgrade will be from UltraSPARC-III+ to
> UltraSPARC-T2.  Going 2-3 years between reboots, I'm not in too much of
> a rush to even do that.

that's definitely how it should be, and something the computing industry
seems to have forgotten.  flaky hardware?  virtualize it across multiple
systems.  flaky software?  design your application for redundancy.

damn kids don't know what a lawn is anymore.  :)

-- 
  Aaron J. Grier | "Not your ordinary poofy goof." | agrier%poofygoof.com@localhost

