Subject: Re: Replacing the sysctl() interface.
To: Simon Burge <simonb@netbsd.org>
From: Aidan Cully <aidan@kublai.com>
List: tech-kern
Date: 06/27/2000 21:05:06
On Wed, Jun 28, 2000 at 12:11:30AM +1000, Simon Burge wrote:
> erh@nimenees.com wrote:
> > 	This is a bit delayed, and I'm probably sounding like a broken
> > record, but each time someone mentions this I'm liking the special 
> > socket type idea more and more.
> 
> Ok, I'm mostly ignorant of the socket idea - so you open a socket, set
> some particular socket option, and read back the data, right?

The way I understood it, the info read back is based on the address
you bind the socket to, rather than any socket option...  e.g.,
struct sockaddr_skern {
	u_int8_t skern_len;
	u_int8_t skern_family;
	int      skern_mib[32];
	int      skern_miblen;
};
where skern_mib corresponds to the 'mib' argument to sysctl().

> Doesn't this
> mean that the complete reply needs to be buffered in the kernel?  Take
> the kern.proc sysctl case - you copyout as much data as requested and
> you're finished, nothing left for a later syscall to finish handling.
> 
> Am I missing something, or is the socket approach not suited to large
> amounts of data transfer?

I've given this approach a fair amount of thought...  I've come up with
a solution, for the proc case, which would restrict the amount of data
you'd need buffered to a list of pids, an index to the current pid, and
a struct kinfo_proc2.  (Another method, which meant buffering just one
pid, also occurred to me, but it required sorting the lists in the
pidhashtbl, and behaved a bit strangely with respect to allowing new
processes to be returned to userland.)  When the current kinfo_proc2 has
been copied out, we attempt to fetch the structure for the next process
in our list, until we run out of processes or we successfully get the
kinfo_proc2.  The list of processes is a snapshot generated when the
socket is bound.

I don't know what other sysctl()s want large data transfers...

--aidan