tech-net archive


Re: ifconfig v2

On Jun 12, 2013, at 06:05 , Mouse <mouse%Rodents-Montreal.ORG@localhost> wrote:

> Personally, I don't think the traffic-rate stuff belongs in the kernel.
> I'd prefer to see that implemented in netstat by sampling stats twice
> with a measured delay and doing the arithmetic there.  In aid of doing
> this with very short delays, I'd say it would be good to make network
> stats come back from the kernel with a timestamp attached.  "Mechanism,
> not policy."

Uh, "load average"?
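For what it's worth, the arithmetic Mouse proposes is trivial once the kernel hands back counters with a timestamp attached. A minimal sketch in C, assuming a hypothetical `if_sample` structure carrying the kernel-supplied timestamp (not any existing NetBSD API):

```c
#include <stdint.h>

/* Hypothetical snapshot of interface counters, returned by the
 * kernel together with a timestamp taken when the counters were
 * read -- the "mechanism, not policy" part. */
struct if_sample {
	uint64_t bytes;		/* total bytes so far */
	uint64_t packets;	/* total packets so far */
	uint64_t time_ns;	/* kernel timestamp, nanoseconds */
};

/* Bits per second between two samples.  Because the timestamps
 * record the real interval, the delay between the two sysctl-style
 * fetches need not be accurate or even particularly short. */
static double
rate_bps(const struct if_sample *then, const struct if_sample *now)
{
	double dt = (now->time_ns - then->time_ns) / 1e9;

	if (dt <= 0.0)
		return 0.0;
	return (now->bytes - then->bytes) * 8.0 / dt;
}
```

netstat would fetch one sample, sleep briefly, fetch another, and print `rate_bps()` of the pair.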

An example "show interface" output from a Cisco route server:

Ethernet1/0 is up, line protocol is up 
  Hardware is AmdP2, address is 0050.73d0.cd1c (bia 0050.73d0.cd1c)
  Description: "mdf001ffisxs0003.lax1 e4/24"
  Internet address is
  MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec, rely 255/255, load 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Queueing strategy: fifo
  Output queue 0/40, 0 drops; input queue 0/75, 2 drops, 184 flushes
  5 minute input rate 2000 bits/sec, 3 packets/sec
  5 minute output rate 2000 bits/sec, 3 packets/sec
     144626276 packets input, 3744307030 bytes, 0 no buffer
     Received 100821 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 input packets with dribble condition detected
     157988942 packets output, 624419973 bytes, 0 underruns
     0 output errors, 108111 collisions, 0 interface resets
     0 babbles, 0 late collision, 393433 deferred
     0 lost carrier, 0 no carrier
     0 output buffer failures, 0 output buffers swapped out

Cisco has used a five-minute average for their interface rates for as long as I 
can remember. It's akin to Unix's three load averages (1, 5, and 15 minutes), 
which the kernel maintains from its clock interrupt handler.
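That style of average is an exponentially decayed moving average updated at a fixed tick. A minimal sketch, assuming a 5-second tick against a 5-minute time constant (the real load-average code uses precomputed fixed-point decay factors rather than floating point, for the same reason this sketch avoids calling exp() at update time):

```c
/* Decay factor for one 5-second tick against a 300-second (5 minute)
 * time constant: approximately exp(-5.0/300.0).  Precomputing it
 * keeps the per-tick work down to a multiply and an add, which is
 * cheap enough for a clock interrupt handler. */
#define EWMA_5MIN_W	0.98347

struct ewma {
	double avg;	/* smoothed rate, e.g. bits/sec */
};

/* Fold in the instantaneous rate observed over the last tick. */
static void
ewma_update(struct ewma *e, double rate)
{
	e->avg = e->avg * EWMA_5MIN_W + rate * (1.0 - EWMA_5MIN_W);
}
```

Held at a constant input rate, the average converges to that rate with a 5-minute characteristic time, which is exactly the behavior of the "5 minute input rate" lines in the Cisco output above.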

Given that Unix still doesn't distinguish between output queue limit drops and 
mbuf exhaustion drops (see PR kern/7285), reporting I/F queue drops in the 
netstat -i display seems prudent too, as you see above.

        Erik <>
