Subject: Re: Article
To: Ross Patterson <Ross.Patterson@CatchFS.Com>
From: Andrew Brown <atatat@atatdot.net>
List: current-users
Date: 01/11/2003 11:24:39
>> Anyone read this ?
>> http://www.eweek.com/article2/0,3959,809353,00.asp?kc=EWTH102099TX1K0100487
>
>Ofir Arkin posted an update to Bugtraq this afternoon 
>(http://online.securityfocus.com/archive/1/306110/2003-01-07/2003-01-13/0), 
>essentially telling people not to dismiss this "Etherleak" problem out of 
>hand.  He asserts that NetBSD was tested and found to exhibit the problem, 
>although that's the only NetBSD reference in his note or the original paper 
>(http://www.sys-security.com/archive/papers/atstake_etherleak_report.pdf).  

i won't pretend to have read the paper very closely, but i did skim
it, and i do think i understand the issue at hand.  i also think the
closing quip of "just how secure is your vlan" is a silly question.
the answer, of course, is "not as secure as my physical lan".
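
(for the record: the issue is that an ethernet frame with less than 46
bytes of payload must be padded out to the 60-byte minimum, and some
drivers pad with whatever stale bytes happen to be sitting in the
transmit buffer instead of zeros.  here's a toy sketch of the two
behaviors -- pure illustration, no real driver looks like this:)

```python
ETH_MIN_FRAME = 60  # minimum ethernet frame size, excluding the FCS

def pad_frame_buggy(txbuf: bytearray, frame: bytes) -> bytes:
    """copy the frame into a reused transmit buffer WITHOUT clearing
    the tail -- whatever the buffer held before goes out as pad."""
    txbuf[:len(frame)] = frame
    return bytes(txbuf[:max(len(frame), ETH_MIN_FRAME)])

def pad_frame_correct(txbuf: bytearray, frame: bytes) -> bytes:
    """same copy, but explicitly zero the pad region first."""
    txbuf[:len(frame)] = frame
    if len(frame) < ETH_MIN_FRAME:
        txbuf[len(frame):ETH_MIN_FRAME] = bytes(ETH_MIN_FRAME - len(frame))
    return bytes(txbuf[:max(len(frame), ETH_MIN_FRAME)])

# pretend the buffer previously carried "sensitive" bytes
txbuf = bytearray(b"S" * 1514)      # stale contents from an earlier packet
short = b"\x00" * 20                # a 20-byte frame: needs 40 bytes of pad
leaky = pad_frame_buggy(bytearray(txbuf), short)
clean = pad_frame_correct(bytearray(txbuf), short)
print(leaky[20:])                   # stale 'S' bytes leak onto the wire
print(clean[20:])                   # zeroed pad, nothing leaks
```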

>That said, I agree with the other posters - it seems like a tempest in a 
>teapot to me.

yea, verily.  i have some (mostly rhetorical) questions about the
testing.

(1) how do you know the machine that was doing the sniffing isn't
inserting the padding?

(2) how do you know the nic being used on the sniffer machine isn't
doing the padding?

(3) how do you determine that the padding is being done by the remote
operating system and not by the remote nic?

(4) how do you determine that the intermediate networking equipment
(hubs and/or switches) isn't doing the padding?

(5) how do you determine that the data is "sensitive" and not merely
an accidental resending of data that was already picked up from the
network?
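
one check that bears on (1) through (4) is to extract the padding
independently at each sniffer, straight from the captured bytes: for
an ipv4 frame, the ip total-length field says how much the remote
stack meant to send, so anything past 14 + that is pad that somebody
added along the way.  a sketch, run against a synthetic frame with
made-up bytes:

```python
import struct

ETH_HDR = 14  # dst mac (6) + src mac (6) + ethertype (2)

def extract_padding(frame: bytes) -> bytes:
    """return the trailing pad of an ethernet frame carrying ipv4.
    the ip total-length field (bytes 2-3 of the ip header) gives the
    intended datagram size; anything captured beyond it is padding."""
    if struct.unpack("!H", frame[12:14])[0] != 0x0800:
        return b""  # not ipv4; can't tell where the real data ends
    ip_total_len = struct.unpack("!H", frame[16:18])[0]
    return frame[ETH_HDR + ip_total_len:]

# a synthetic 60-byte frame: 14-byte ethernet header, a 28-byte ip
# datagram (20-byte header + 8 bytes of payload), and 18 bytes of pad
ip_pkt = b"\x45\x00" + struct.pack("!H", 28) + b"\x00" * 24
frame = b"\x00" * 12 + b"\x08\x00" + ip_pkt + b"\xaa" * 18
print(extract_padding(frame))  # -> the 18 bytes of 0xaa pad
```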

i suppose one could control for this by using four or more machines
of different varieties (one windows 98 or windows xp, one macos, one
netbsd, one openbsd, etc) so that operating system differences would
be spread out, and by having all of them sniffing at all times so
that the captures could be correlated and one could tell whether they
were all seeing the "extra" data.

using several 10mbit hubs, so that no sniffer would *miss* any data,
would also be a good idea, as would swapping the hub for another one
(or two or three) so as to eliminate *it* as a source of bad data.
rotating a large supply of cards between the machines would also
help.
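
once every sniffer has pulled out the pad bytes for the same frame,
a trivial comparison localizes the culprit: if all the independent
captures agree, the padding was on the wire before any local nic or
capture path touched it; a mismatch points at whichever path differs.
a sketch with made-up capture data:

```python
def padding_consistent(observations: dict[str, bytes]) -> bool:
    """observations maps sniffer name -> pad bytes captured for one
    frame.  identical pads across independent sniffers mean the pad
    was already on the wire; a mismatch means somebody local added it."""
    return len(set(observations.values())) == 1

# made-up captures of the same frame from three sniffer boxes
caps = {
    "netbsd-box":  b"\xde\xad\xbe\xef",
    "openbsd-box": b"\xde\xad\xbe\xef",
    "win98-box":   b"\x00\x00\x00\x00",  # this capture path re-padded
}
print(padding_consistent(caps))  # -> False: suspect the win98 path
```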

i didn't see any evidence in the paper of any attempt to cleanse and
control the testing in this manner.  it seemed to concentrate mainly
on issues with some linux drivers, and merely gestured at the idea
that other operating systems might be vulnerable as well.

-- 
|-----< "CODE WARRIOR" >-----|
codewarrior@daemon.org             * "ah!  i see you have the internet
twofsonet@graffiti.com (Andrew Brown)                that goes *ping*!"
werdna@squooshy.com       * "information is power -- share the wealth."