Subject: Re: Adding TTL information to gethostbyname() and friends
To: NetBSD Networking Technical Discussion List <tech-net@NetBSD.ORG>
From: Greg A. Woods <woods@weird.com>
List: tech-net
Date: 06/03/2003 15:39:08
[ On June 3, 2003 at 09:16:49 (-0700), Ian Lance Taylor wrote: ]
> Subject: Re: Adding TTL information to gethostbyname() and friends
>
> Wrong.  The browsers right now cache DNS entries for the wrong amount
> of time.  The browsers right now do the wrong thing.

Yes and no.  Effectively they just do something that's not necessary in a
properly implemented, RFC 1123-compliant environment.  This is no worse
a botch than SunOS 'nscd'.

I would like to see evidence of just how wrongly implemented existing
browser DNS caches are.  For example, do they really violate the de facto
minimum TTL of 300 seconds?

>  Any plan for
> fixing this problem must include a plan for changing the browsers.

Well, that depends on what you mean.  They will be "fixed" if the bad
hacks they have now are no longer necessary, and especially if the
problems with those hacks become more apparent than their current benefit.

> > If indeed you care less about non-free environments then I would guess
> that you can see that attempts to make non-standard changes to an API
> used across free and non-free environments are even more futile since
> > doing so can only complicate things for both OS vendors as well as
> > application authors and maintainers.  Only strong standards will likely
> > be implemented in non-free environments, but I'm sure you know already
> > that standards which try to invent new APIs from whole cloth, or even
> > just API extensions, are often prone to failure.
> 
> I know nothing of the sort.  I see many APIs which come from free
> software implementations adopted into proprietary implementations.

I think you didn't read all of what I wrote there, or if you did then
you're ignoring the lessons of history.

There can be only so much "pull" from application authors wanting
vendors to support new APIs, and when those APIs are hacks that avoid
fixing the real problem, vendors may be more inclined to fix the real
problem than to implement the hacks.

> Wrong.  If the browser is too slow, the user blames the browser.  The
> browser writers may be willing to use a local cache, but they will
> require some way to tell that there is a local cache.

If the system the browser runs on uses fully RFC 1123-compliant DNS
resolvers, then the browser will not be slow from doing too many
long-latency DNS lookups, because there won't be too many.  The browser
authors will effectively be using a "local" cache whenever the
underlying platform is fully RFC 1123 compliant.

> Wrong.  The browser will want to either cache NIS information, or will
> want to know that there is a working nscd implementation to cache it
> locally.

You have violated so many layers it's just not funny.  PLEASE try to
read the relevant bits of RFC 1123 and understand that this caching
issue _MUST_ be kept transparent to all applications.  This is NetBSD
we're talking about here, and with NetBSD the motto is that if it's
worth doing then it's worth doing "Right(tm)".

If we make DNS caching work properly, by default, in at least NetBSD,
then we'll be well on the way to showing how it can be done the right
way for every system, and sooner or later application authors will come
to trust that they don't have to go to the extent of hacking together
their own DNS cache implementations just because their target
platform(s) don't have proper DNS caching enabled by default.

Why do you want to make it easier for OS vendors to force more work and
more hacks on every application instead of following the original design
and doing it right at the right levels in the first place?  Why do you
want to condone the current hacks of those applications which have
already violated the cache layering laid out by RFC 1123?

> The last change was November 5, 2002.

Yes, but if you recall, the old address will be kept in service for
quite some time yet (the phrase "for the foreseeable future" was used in
the announcement).

I went and looked and found that the previous change before that was
way back on November 16, 1995.  Even that one wasn't a critical change.

>  I haven't updated the OS on my
> server since May, 2000, and I haven't updated the resolver since
> January 2001.

Then you're still way ahead of the game.

Regardless, any properly functioning resolver need only know the address
of one lone functioning and reachable root server in order to re-prime
its root cache.  Your argument that the root cache needs frequent
updating is a straw man -- updates by OS vendors are more than
sufficient even given the tendency of ordinary people to not upgrade
their systems very often.  There will likely never ever be a critical
change to the root cache.

-- 
								Greg A. Woods

+1 416 218-0098;            <g.a.woods@ieee.org>;           <woods@robohack.ca>
Planix, Inc. <woods@planix.com>; VE3TCP; Secrets of the Weird <woods@weird.com>