Subject: Re: pcn interface under VMware issue?
To: Jason Thorpe <thorpej@shagadelic.org>
From: Peter Eisch <peter@boku.net>
List: netbsd-users
Date: 10/31/2006 06:26:37
On 10/30/06 11:27 PM, "Jason Thorpe" <thorpej@shagadelic.org> wrote:
>
> On Oct 25, 2006, at 1:19 PM, Peter Eisch wrote:
>
>> On 10/23/06 8:31 PM, "Thor Lancelot Simon" <tls@rek.tjls.com> wrote:
>>
>>> On Mon, Oct 23, 2006 at 07:59:40PM -0500, John Darrow wrote:
>>>> On 20 Oct 2006 17:21:45 -0500, Peter Eisch <peter@boku.net> wrote:
>>>>>
>>>>> Oct 20 15:15:14: vcpu-0| NOT_IMPLEMENTED /build/mts/release/bora-29996/pompeii2005/bora/devices/net/vlance_shared.c:698
>>>>
>>>> Yes, we have encountered this.  According to gkm (who works/worked
>>>> at VMware), it's a VMware bug: the vlance emulation only supports
>>>> "15 sg elements in esx".  We were also seeing output hangs from
>>>> individual NetBSD VMs (e.g. an ssh session would hang when hit
>>>> with lots of output).
>
> Someone please send me dmesg from a NetBSD system running in VMware
> with the patched pcn driver.  I will change the pcn driver to use a
> different number of Tx segments based on whether or not we are
> running under VMware.
>
[done]
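
If it helps, one plausible way to do the runtime check (just my guess
at the approach, not what you have in mind) is to key off VMware's PCI
subsystem vendor ID, 0x15ad, at attach time:

    /* Sketch only: the names and placement here are my assumptions,
     * not the actual driver change.  Inside pcn_attach(), after the
     * normal PCI probe, shrink the per-packet Tx segment limit when
     * the emulated chip advertises VMware as its subsystem vendor. */
    int ntxsegs = PCN_NTXSEGS;

    if (PCI_VENDOR(pci_conf_read(pa->pa_pc, pa->pa_tag,
        PCI_SUBSYS_ID_REG)) == 0x15ad)      /* VMware */
            ntxsegs = 4;    /* vlance emulation mishandles long s/g lists */
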
I can confirm that running with '4' makes it usable.  I've been running
email filtering since Saturday without a glitch (23k connections/day
and load averages up to 10).
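
For anyone applying this by hand: the quoted patch below is cut off
after the new define, so here is my reconstruction of how the complete
conditional presumably reads (an assumption, not the verbatim patch;
the stock driver defines PCN_NTXSEGS as 16):

    /* reconstruction: gate the smaller VMware-safe limit behind the
     * option while keeping the stock value as the default */
    #ifdef PCN_VMWARE_SGLIMIT
    #define PCN_NTXSEGS     4
    #else
    #define PCN_NTXSEGS     16
    #endif

I enabled it by adding 'options PCN_VMWARE_SGLIMIT' to my kernel
config before rebuilding.
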
peter
>>>>
>>>> The included patch seems to be handling things for us (we haven't
>>>> seen crashes or hangs since it was put in place).
>>>>
>>>> --- /current/src/sys/dev/pci/if_pcn.c_v1.32	2006-10-16 19:28:36.000000000 -0500
>>>> +++ /current/src/sys/dev/pci/if_pcn.c	2006-10-18 02:17:04.000000000 -0500
>>>> @@ -126,7 +126,11 @@
>>>>   * DMA segments, but only allocate the max of 512 descriptors.  The
>>>>   * transmit logic can deal with this, we just are hoping to sneak by.
>>>>   */
>>>> +#ifdef PCN_VMWARE_SGLIMIT
>>>> +#define PCN_NTXSEGS 4
>>>
>>> I think you want "15" here, not "4".  More is better, and
>>> according to the information above, 15 should be safe, no?
>>>
>>> It's bogus for the thing to claim to be a pcnet if it in fact is
>>> not quite a pcnet, and since the number of supported s/g elements
>>> is documented in the pcnet and ilacc databooks... sigh.
>>>
>>> If you don't want to patch your kernel, it might be possible to
>>> run the thing in 24-bit-descriptor mode instead by removing pcn
>>> from your kernel so it matches le@pci.  I wonder if that will show
>>> the same VMware bug?
>>>
>>
>> The le@pci doesn't work at all.  It is detected, configures, and
>> shows link, but any I/O fails.
>>
>> I'll try the source suggestion next.
>>
>> peter
>
> -- thorpej
>
>
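
P.S. For the archives, where the limit actually bites: PCN_NTXSEGS is
the nsegments argument handed to bus_dmamap_create(9) for each Tx DMA
map, so an mbuf chain that loads into more segments than VMware's
vlance can chain trips the NOT_IMPLEMENTED assertion above.  Roughly
(paraphrased from memory of if_pcn.c, so treat the exact arguments as
approximate):

    /* one DMA map per pending Tx job; nsegments caps how long a
     * scatter/gather list a single packet may occupy */
    error = bus_dmamap_create(sc->sc_dmat, MCLBYTES, PCN_NTXSEGS,
        MCLBYTES, 0, 0, &txs->txs_dmamap);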