Subject: Re: Making file-based getXent quicker
To: Thor Lancelot Simon <tls@rek.tjls.com>
From: Brian Ginsbach <ginsbach@NetBSD.org>
List: tech-userlevel
Date: 03/20/2006 10:59:02
On Mon, Mar 20, 2006 at 11:57:28AM -0500, Thor Lancelot Simon wrote:
> On Mon, Mar 20, 2006 at 09:12:50AM -0600, Brian Ginsbach wrote:
> > 
> > Yes, it may be cool to mmap files and all that, but there are probably
> > many unforeseen consequences.  Seems like effort could be better
> > spent elsewhere.  I think it took SGI a while to get UNS, nsd(1M),
> > right.  There may still be problems with it as I don't really follow
> > IRIX that closely any longer.
> 
> There sure are still problems with it.  As of three years ago -- a *long*
> time after SGI first made nsd mandatory -- I still found it necessary
> to kill and restart nsd on my Irix fileserver three or four times a year,
> to resolve issues either of complete failure to respond to requests, or
> extreme and inexplicable response latency.

I figured as much.  I was in no way advocating for anything like nsd.
I just see what Darren is proposing as another form of nsd.  But maybe
I misunderstood his original proposal.
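
(For concreteness, my reading of "mmap files and all that" is roughly the
sketch below: map the database file read-only and search the mapping in
place instead of reading it through stdio.  This is only an illustration
of the general idea, not Darren's actual code; the file name is just an
example.)

/*
 * Illustrative sketch only: map a flat database file and leave the
 * actual entry search to the caller.
 */
#include <sys/mman.h>
#include <sys/stat.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	struct stat st;
	char *base;
	int fd;

	if ((fd = open("/etc/passwd", O_RDONLY)) == -1)
		err(1, "open");
	if (fstat(fd, &st) == -1)
		err(1, "fstat");
	base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (base == MAP_FAILED)
		err(1, "mmap");
	/* ... scan base[0 .. st.st_size - 1] for the wanted entry ... */
	munmap(base, (size_t)st.st_size);
	close(fd);
	return 0;
}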

> 
> The experience made me very, very skeptical of the idea of making all
> requests to any critical system database rely upon any such daemon
> process.

Exactly.

> 
> One other thing I'm highly curious about here is why we should think
> that most requests from these databases don't hit in every relevant cache
> along the way to being resolved.  Is there really much performance benefit
> to looking an entry up in shared memory when compared to retrieving it from
> the filesystem cache, with the filename lookup handled by the name cache?
> 
> It seems intuitively obvious to me that a daemon like nsd is likely to
> be _worse_ than either the "just let it hit the cache" or the "use shared
> memory" solutions.
> 

I agree.  Maybe I'm being obtuse, but I'd lump the "use shared memory"
approach advocated by Darren in with the worse of those options.
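
If it helps frame the cache question, a rough micro-benchmark along the
lines of the sketch below (untested; the account name and loop count are
arbitrary) would show what repeated file-based lookups cost once the
namei and buffer caches are warm.  If that number is already small,
there is not much left for a shared-memory copy to win.

#include <sys/time.h>

#include <err.h>
#include <pwd.h>
#include <stdio.h>

int
main(void)
{
	struct timeval start, end, diff;
	struct passwd *pw;
	int i;

	gettimeofday(&start, NULL);
	for (i = 0; i < 100000; i++) {
		/* Each call goes through the normal file-based backend. */
		pw = getpwnam("root");
		if (pw == NULL)
			errx(1, "getpwnam failed");
	}
	gettimeofday(&end, NULL);
	timersub(&end, &start, &diff);
	printf("%lld.%06ld s for 100000 getpwnam() calls\n",
	    (long long)diff.tv_sec, (long)diff.tv_usec);
	return 0;
}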