Subject: Re: URLs for bonnie and iozone?
To: Rob Healey <email@example.com>
From: David Greenman <davidg@Root.COM>
Date: 04/13/1996 06:09:42
>> >One's curiosity gets lowered when logging into an ftp site that proclaims
>> >that you're 789 of a possible 1250 users, and the performance is like
>> >watching slugs fornicate.
>> You're sitting off of MCI's backbone, which is suffering from severe
>> packet loss these days. ftp.cdrom.com still has CPU and bandwidth to burn
>> (it does around 12-14Mbps to the net right now out of a possible 100Mbps) and
>> your poor performance has nothing to do with the machine's load. The
>> situation should improve once MCI finishes their backbone upgrade in the
>> next few months.
> In MCI's defense I think it's the crossover points, NAPS, that
> are the real problem.
No need to defend MCI. In this case, the problem really is with their
network. This isn't something "I think"; it's something I've carefully
analyzed over several months, reported to various network engineers, and
subsequently gotten independent confirmation of.
> When I stay within MCI I get pretty close to full bore performance.
It's good to hear that at least *part* of their network hasn't melted down
(or isn't melting anymore), but unfortunately that's not the case in the Bay
Area.
> It's only when I have to cross over to another provider that things
> turn to "slugs fornicating"...
There used to be a problem with MAE-west at Ames - the shared FDDI was
overloaded. That's not a problem anymore since they upgraded the shared FDDI
to a DEC Gigaswitch. CRL (wcarchive's service provider) doesn't send traffic
for MCI through there anyway, so this isn't an issue. The MCI traffic goes
through a DS3 circuit to the PacBell NAP.
> It seems popular to blame MCI these days for the NAP situation but
> it isn't their fault per se. A good chunk of their internal
> network is now OC3, 155Mbit ATM, and you can tell from the
Not sure what you mean by "the NAP situation", unless perhaps you're
suggesting that MCI's pipes to the various NAPs are too small. Most of the NAPs
use switched FDDI or (RSN) switched ATM, and congestion within the NAP is no
longer a problem. There used to be congestion at MAE-east, but that seems to
have largely been eliminated since they installed a Gigaswitch. Traffic for
CRL/wcarchive doesn't usually go through MAE-east, so this wouldn't be an
issue here anyway.
The problem that was originally reported here (if you can call an insult a
"report") was caused by congestion in MCI's core routers in San Francisco
and/or Sacramento. This is the most highly congested part of their network.
During peak traffic times of the day, it's not been unusual to see >60% packet
loss going through the SF core router. The problem is so bad that rlogin/telnet
sessions will timeout due to excessive retries. Let me tell you first hand:
it's nearly impossible to edit files this way. :-)
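For what it's worth, a loss figure like that is easy for anyone to check for
themselves. Here's a minimal sketch (illustrative only -- the host name and
counts are placeholders, and this is not the tooling behind the analysis above)
that drives the system ping and pulls the loss percentage out of its summary
line:

```python
import re
import subprocess


def parse_loss(ping_output: str) -> float:
    """Extract the packet-loss percentage from ping's summary line."""
    # BSD and Linux ping both print a summary line like:
    #   "100 packets transmitted, 40 packets received, 60.0% packet loss"
    m = re.search(r"([\d.]+)% packet loss", ping_output)
    if m is None:
        raise ValueError("could not find a packet-loss figure in ping output")
    return float(m.group(1))


def measure_loss(host: str, count: int = 100) -> float:
    """Ping `host` `count` times and return the reported loss percentage."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True)
    return parse_loss(out.stdout)
```

Running something like measure_loss("ftp.cdrom.com") repeatedly at different
times of day is enough to see whether the loss tracks peak traffic hours.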
MCI was supposed to start their move to ATM/OC-3c this weekend. I've seen
sporadic testing of ATM on various circuits, but it usually only lasted for a
day or two. As of right now, all of the traffic going into and out of the Bay
Area is still plain-old DS3 hooked to Cisco 7000's. What I heard was that only
*one* of the DS3 circuits was going to be upgraded each week, with the first
one being the SF<->Denver circuit, and that this was supposed to begin "April
12th". MCI usually does upgrades early Sunday morning, so this date seems
strange to me.
Core-team/Principal Architect, The FreeBSD Project