NetBSD-Users archive
Re: Networking: Lots of collisions?
I see something similar, but I'm running NetBSD 1.6.2. I tried first with a
packet capture size of 96 bytes, then with the entire packet captured.
tcpdump -i wm0 -w cap.cap not port 23
tcpdump: listening on wm0
^C
90383 packets received by filter
41278 packets dropped by kernel
tcpdump -i wm0 -w cap.cap -s 0 not port 23
tcpdump: listening on wm0
^C
97789 packets received by filter
20619 packets dropped by kernel
ls -al cap.cap
-rw-r--r-- 1 root wheel 79545505 Oct 9 20:46 cap.cap
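The "packets dropped by kernel" counter is the in-kernel BPF buffer
overflowing before tcpdump can drain it. A possible mitigation on NetBSD,
assuming your kernel exposes the net.bpf sysctls (newer releases do, I
haven't checked 1.6.2), would be something like:

sysctl -w net.bpf.maxbufsize=4194304    # allow bpf descriptors up to 4 MB
tcpdump -i wm0 -s 96 -w cap.cap not port 23

Newer tcpdump builds also accept -B to request a larger capture buffer
explicitly, and writing the file to a faster disk helps, since the drops
happen while tcpdump is busy writing.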
FreeBSD 6.2 had more success with the packet capture size at 96 bytes:
tcpdump -i em0 -w cap.cap not port 23
tcpdump: listening on em0, link-type EN10MB (Ethernet), capture size 96 bytes
82210 packets captured
92548 packets received by filter
10335 packets dropped by kernel
But it had less success when the entire packet was captured:
tcpdump: listening on em0, link-type EN10MB (Ethernet), capture size 65535 bytes
9075 packets captured
82811 packets received by filter
73731 packets dropped by kernel
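On FreeBSD the kernel-side buffer can be grown through the net.bpf sysctls.
A rough sketch of what I would try before repeating the full-size capture
(the values are illustrative, not tested on 6.2):

sysctl net.bpf.maxbufsize=4194304    # raise the per-descriptor ceiling
sysctl net.bpf.bufsize=1048576       # raise the default buffer size
tcpdump -i em0 -s 0 -w cap.cap not port 23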
But I have trouble capturing packets even under Linux (Ubuntu 10.04.1):
tcpdump -i eth1 -s 0 -w cap.cap not port 23
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
73723 packets captured
103186 packets received by filter
29463 packets dropped by kernel
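On Linux the drops mean the kernel-side capture buffer filled up before
tcpdump could read it. If the tcpdump there is new enough to support -B
(it sets the capture buffer size in KiB; I haven't checked what 10.04
ships), something like this might reduce them:

tcpdump -i eth1 -s 0 -B 4096 -w cap.cap not port 23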
So from my point of view it looks more like a VMware issue.
Jason M.
> Thanks for tip Jason,
>
> I did as you said, and it seems to have removed the collisions.
> However, I'm not sure the problem/bottleneck is gone; tcpdump still loses
> most of the packets.
>
> 7 packets captured
> 2733 packets received by filter
> 1872 packets dropped by kernel
>
> /P
>
> On Oct 9, 2010, at 12:15 AM, jmitchel%bigjar.com@localhost wrote:
>
>> I forgot to mention that you don't have to reinstall when you change the
>> OS type in VMware; it's just that VMware only offers the e1000 adapter if
>> the OS type is "Other (64-bit)". I don't know why that is.
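Another way to get the e1000 device without changing the guest OS type is
to edit the VM's .vmx file directly while the VM is powered off. I haven't
re-checked this on the VMware version in question, so treat it as a sketch;
"ethernet0" assumes the first/only virtual NIC:

ethernet0.virtualDev = "e1000"

After that the guest should see the Intel gigabit device (wm0 on NetBSD,
em0 on FreeBSD) instead of pcn0.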
>>
>> Jason M.
>>
>>> On Oct 8, 2010, at 11:12 PM, Fredrik Pettai wrote:
>>>> Hi,
>>>
>>> Forgot to mention that it's NetBSD/i386.
>>>
>>> Looking at vmstat, I can see some things that stand out more than
>>> others:
>>>
>>> # vmstat
>>>  procs    memory      page                        disks   faults      cpu
>>>  r b w    avm    fre  flt re pi po fr sr  f0 c0   in   sy   cs  us sy id
>>>  1 0 0 316088 158332  208  1  0  0 13 53   0  0  836 1763 1678   0  4 96
>>>
>>> One or two processes are occasionally in the run queue, and during those
>>> bursts many page faults surface:
>>>
>>> [...]
>>>  0 0 0 316092 158336    0  0  0  0  0  0   0  0 1538 3006 3144   0  6 94
>>>  2 0 0 316092 158328 1261  0  0  0  0  0   0  0 1576 3039 3032   8  8 84
>>>  0 0 0 316092 158328    0  0  0  0  0  0   0  0 1527 2900 3120   0  3 97
>>>
>>>> I just installed netbsd-5-1-RC4 as a DNS server, and I see a lot of
>>>> collisions:
>>>>
>>>>               pcn0 in             pcn0 out                    total in            total out
>>>>      packets errs     packets errs   colls       packets errs     packets errs   colls
>>>>     39428897    0    22000706    0 5500180      39428897    0    22001100    0 5500180
>>>>         3227    0        1892    0     474          3227    0        1892    0     474
>>>>         3373    0        2060    0     514          3373    0        2060    0     514
>>>>         3168    0        1926    0     482          3168    0        1926    0     482
>>>>
>>>> Now, since it's running in VMware, one could guess that it's an
>>>> underlying problem (in VMware or maybe even in the physical
>>>> infrastructure).
>>>> But I also have virtualized Linux machines that are quite busy, and they
>>>> don't show this kind of networking problem. (They run on the same VMware
>>>> hardware.)
>>>>
>>>> Trying to do a tcpdump shows that the NetBSD system doesn't handle that
>>>> very well either:
>>>>
>>>> # tcpdump -i pcn0
>>>> [...]
>>>> ^C
>>>> 5 packets captured
>>>> 2585 packets received by filter
>>>> 1726 packets dropped by kernel
>>>>
>>>> Doing it on the Linux machine works fine:
>>>>
>>>> # tcpdump -i eth0
>>>> [...]
>>>> ^C
>>>> 2844 packets captured
>>>> 2845 packets received by filter
>>>> 0 packets dropped by kernel
>>>>
>>>> To that I might add that the servers don't have any noticeable CPU load,
>>>> etc.:
>>>>
>>>> # top -o cpu
>>>> load averages:  0.59,  0.65,  0.65;    up 0+12:32:18                23:05:05
>>>> 24 processes: 23 sleeping, 1 on CPU
>>>> CPU states:  0.0% user,  0.0% nice,  2.0% system,  2.0% interrupt, 96.0% idle
>>>> Memory: 306M Act, 2852K Inact, 6040K Wired, 7980K Exec, 117M File, 155M Free
>>>> Swap: 256M Total, 256M Free
>>>>
>>>>  PID USERNAME PRI NICE   SIZE   RES STATE   TIME   WCPU    CPU COMMAND
>>>> 3929 user      85    0    94M   91M netio  20:49  2.69%  2.69% [dns process]
>>>>
>>>> Has anybody else seen something similar (in VMware)?
>>>> Any hints on how to tune the networking stack? It's currently just running
>>>> the defaults.
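On the tuning question: I don't think the defaults are the real problem
here, but if you want to experiment once the virtual NIC is sorted out, the
usual NetBSD knobs are sysctls along these lines (values are illustrative
only and assume these sysctls exist on 5.1; change one at a time):

sysctl -w kern.sbmax=1048576               # max socket buffer size
sysctl -w net.inet.udp.recvspace=131072    # bigger UDP receive buffer for the DNS traffic
sysctl -w net.bpf.maxbufsize=4194304       # bigger bpf buffers, helps the tcpdump drops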
>>>>
>>>> /P
>>>
>>>
>>
>
>