Subject: Re: NetBsd as A File Server
To: None <>
From: Jonathan Stone <>
List: current-users
Date: 12/28/1999 12:46:16
>Sam wrote:
>> wrote:
>> >
>> > >We are trying to build the best File Server we can possibly make with
>> > >NetBsd on intel architecture. We built servers under FreeBSD but we are
>> > >not satisfied with the NFS perfs, and Free Bsd in general. (PIII 500, 4
>> > >U2W 18Go stripped HD, and 3coms/digital NIC).
>> >
>> > The NFS implementation in both systems -- in any *BSD system, AFAIK --
>> > is derived from the 4.3-Reno NFS implementation, done by Rick Macklem
>> > and others.  I have not tried FreeBSD's NFS, but I wouldn't expect
>> > major gains going from one NFS implementation to the other.
>> >
>> Actually, we could only use our disks as 80M/s on FreeBSD, with the U2W
>> Adaptec Card,
>> cause we haven't got any card that works with Netbsd yet (we should get
>> Advansys', and Qlogic's soon).

I have an Adaptec U2W on my desk.  I have a paper deadline in January,
and I'm planning an oral defense, so I don't have many cycles, but I
can find a few.  What can I do to help get support for that card into
NetBSD?

>> On free BSD, we are faster than the NetApp on big files, but much slower
>> on small ones.
>> That's why we think the problem comes from nfs or FreeBSD file system,
>> and why we want
>> to try different file systems on NetBSD.
>Moreover FreeBSD supports only FFS, we would like to try LFS on NetBSD.
>Cause it should be better on small files.
>Maybe Ext2fs is worth a try too. Is there any "newfs_ext2" that would 
>create ext2 file system on our stripped U2W disks?

Avoid ext2. Avoid it like the plague.  The ext2 fsck times after a
crash and at mount-check time are appalling. The internal structure
(based on benchmarks I've seen, back to Kevin Lai's Usenix 96 paper)
suggests it has little-to-no gain over FFS with unordered async
writes.  Don't use ext2fs, ever. Use FFS with softdeps instead,
or use LFS.  You'll be much happier.
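For what it's worth, on a -current kernel built with soft dependencies
(options SOFTDEP), turning them on is just a mount option.  A rough
sketch -- the device name and mount point below are made-up placeholders
for your stripped array:

```shell
# Hedged sketch: device (ccd0e) and mount point (/export) are assumptions.
# Requires "options SOFTDEP" in the kernel config.
mount -o softdep /dev/ccd0e /export

# Or persistently, via an /etc/fstab entry:
#   /dev/ccd0e  /export  ffs  rw,softdep  1  2
```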

>We were wondering too what exactly was the collision that Free and Net
>report.

A collision on the cat-5 twisted pair. If the drivers say there
was a collision, then there really was a collision.  The usual cause
is either (a) cable overload, or (b) a mismatch in full-duplex
settings between your NIC and the switch port.

Monitor them with
	netstat -I <interfacename> -w 1
	(e.g., netstat -I ex0 -w 1)

If the collision rate is significantly above 0, and you have a port
direct into a switch, then the probable cause is a full-duplex
mismatch between your card and the switch.
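If you want to watch for bursts mechanically, here is a rough sketch of
a filter for that netstat output; the column position of the collisions
field ($5) is an assumption, so check it against your netstat header
first:

```shell
# Sketch: read `netstat -I <if> -w 1` output on stdin and flag any
# one-second interval with a nonzero collisions column (assumed $5).
# Usage:  netstat -I ex0 -w 1 | sh colls.sh
awk 'NR > 2 && $5 ~ /^[0-9]+$/ && $5 > 0 { print "interval", NR-2 ": " $5 " collisions" }'
```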

>Cause our servers are directly linked to cisco switchs, so collisions as
>know it should never happend. Our Alpha servers do not report collision
>at all 
>linked to the same switchs..
>What do you think?

Same as I thought the first time.  Some (okay, most) of the early
Cisco switches did not implement Nway properly.  (They used PHY chips
that didn't do Nway, way back around 1995/1996.)  There is no way,
__none at all__, that those switches can autodetect full-duplex.  The
hardware simply cannot do it.  The same is apparently true on some
newer Cisco blades for (e.g.) Catalyst switches.
I haven't seen it with the 29xx line or the 35xx line, myself;
those autosense full-duplex just fine.

If your NICs are autosensing to 100base-TX (no FDX), then hardwire
the NICs to 100base-TX-FDX. Conversely, if the NICs are autosensing to
full-duplex, hardwire them to half-duplex.  Use whatever combination
makes the collisions go away.  Then, once everything is working with
negligible collisions, go back and make the necessary changes to the
port media settings on the Cisco and/or the NIC, so that both are in
agreement.
 (On NetBSD you can do that via
		 ifconfig <int> media 100baseTX mediaopt full-duplex
		 ifconfig <int> media 100baseTX

I have no idea about FreeBSD.)
Then, confirm that you get a collision rate of exactly zero.
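Once you've settled on a media setting, you can make it stick across
reboots; on NetBSD that's an /etc/ifconfig.<interface> file.  A sketch
(the interface name, address, and netmask below are placeholders):

```shell
# Sketch of /etc/ifconfig.ex0 -- the NetBSD rc scripts pass each line
# to ifconfig at boot.  Address, netmask, and interface are made up:
#
#   inet 192.168.1.10 netmask 255.255.255.0 media 100baseTX mediaopt full-duplex
```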

One caveat is that (depending on how recent your NetBSD drivers and
kernel are), you may have to physically break contact to get things to
work, after changing media. I've only seen that when forcing changes
between 10Mbit and 100Mbit, though.

Also, the cisco will force a 30-second outage (wait) anytime
it loses its peer signal, so be prepared for that.

Hope that helps.