
Re: revivesa status 2008/07/09



On Jul 23, 2008, at 7:44 PM, SODA Noriyuki wrote:

> On Thu, 24 Jul 2008 02:18:56 +0000,
>      David Holland <dholland-tech@NetBSD.org> said:
>
>>> I just don't believe that there are very many apps that actually behave
>>> like this.
>
>> Right now there aren't, overall, very many apps that are threaded such
>> that the threading buys much in the way of performance. There also
>> aren't, overall, very many apps that make 10,000 user threads, because
>> it doesn't work, at least not without using Erlang or some other
>> similar environment with its own threading code.
>
> That's just not right.
> See the graph attached at Message-ID:
> <18567.57418.474635.710749@srapc2586.sra.co.jp>
>
> NetBSD SA could run 7,000 threads on Celeron 400MHz with only 128MB RAM
> even 5 years ago.
> With today's RAM size, 70,000 threads or even 700,000 threads must be
> possible.

Just because there aren't many apps that you know of doesn't mean there aren't any at all, nor that there is no potential for growth in this area. There are a lot of apps that use state threads, which are essentially a non-preemptive analog to scheduler activations. This model is very useful for applications that have to handle a lot of small tasks organized as work pipelines, where each step in the pipeline involves waiting for some event to occur: either another thread finishing what it's doing, or an I/O completing.
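To make the pipeline shape concrete, here is a toy sketch written against plain POSIX ucontext(3). It is not the NetBSD SA interface or any real state threads API - the names and the round-robin loop are made up for the example - but it shows the property that matters: two stages hand work to each other, and every switch between them happens entirely in userland.

/*
 * Toy cooperative ("state"-style) thread pipeline on POSIX ucontext(3).
 * Each stage runs until it has to wait for the other, then yields; no
 * kernel involvement is needed for any of the switches.
 */
#include <stdio.h>
#include <ucontext.h>

#define STACKSZ (64 * 1024)

static ucontext_t sched_ctx, producer_ctx, consumer_ctx;
static int item, item_ready;                   /* one-slot "pipeline" */
static int producer_done, consumer_done;

static void
yield_to_scheduler(ucontext_t *me)
{
	swapcontext(me, &sched_ctx);           /* pure userland switch */
}

static void
producer(void)
{
	for (int i = 1; i <= 3; i++) {
		while (item_ready)             /* wait for consumer to drain */
			yield_to_scheduler(&producer_ctx);
		item = i;
		item_ready = 1;
		printf("produced %d\n", i);
		yield_to_scheduler(&producer_ctx);
	}
	producer_done = 1;
}

static void
consumer(void)
{
	for (int i = 1; i <= 3; i++) {
		while (!item_ready)            /* wait for producer to fill */
			yield_to_scheduler(&consumer_ctx);
		printf("consumed %d\n", item);
		item_ready = 0;
		yield_to_scheduler(&consumer_ctx);
	}
	consumer_done = 1;
}

int
main(void)
{
	static char pstack[STACKSZ], cstack[STACKSZ];

	getcontext(&producer_ctx);
	producer_ctx.uc_stack.ss_sp = pstack;
	producer_ctx.uc_stack.ss_size = sizeof(pstack);
	producer_ctx.uc_link = &sched_ctx;     /* return to scheduler on exit */
	makecontext(&producer_ctx, producer, 0);

	getcontext(&consumer_ctx);
	consumer_ctx.uc_stack.ss_sp = cstack;
	consumer_ctx.uc_stack.ss_size = sizeof(cstack);
	consumer_ctx.uc_link = &sched_ctx;
	makecontext(&consumer_ctx, consumer, 0);

	/* Dumb round-robin scheduler, entirely in userland. */
	while (!producer_done || !consumer_done) {
		if (!producer_done)
			swapcontext(&sched_ctx, &producer_ctx);
		if (!consumer_done)
			swapcontext(&sched_ctx, &consumer_ctx);
	}
	return 0;
}

A real library would replace the dumb round-robin loop with a run queue and park stages on condition variables instead of busy-yielding, but the cost model is the same: a switch is a swapcontext(), not a trip through the kernel.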

It might seem like there's no benefit to the I/O completion bit, but the fact is that if you have a ton of I/O going on, it's likely that many I/Os will be complete at any given time. If you are using something like select or poll, or even a kernel event queue like kqueue, only a single context switch back from the kernel is required to signal all of those completions. So although you do not reduce your thread I/O-related context switches to zero in this case, you do cut them roughly in half. And it's actually *easier* to code for than the inside-out event loops you have to write for straight event-driven I/O.
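To put a number on "a single context switch": the following standalone sketch (again, just an illustration, not code from any threading library) parks several pretend waiters on pipe read ends and then makes one poll(2) call. A single kernel entry and exit reports every descriptor that became ready; a userland thread scheduler would resume one blocked thread per ready descriptor before it ever went back into the kernel.

/*
 * One poll(2) call -- one trip into the kernel and back -- can report
 * many completed I/Os at once.  A userland scheduler would resume one
 * blocked thread per ready descriptor; this program just prints them.
 */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define NCHAN 4

int
main(void)
{
	int pipes[NCHAN][2];
	struct pollfd pfd[NCHAN];

	for (int i = 0; i < NCHAN; i++) {
		if (pipe(pipes[i]) == -1)
			return 1;
		pfd[i].fd = pipes[i][0];       /* read side: a "blocked thread" */
		pfd[i].events = POLLIN;
	}

	/* Pretend three of the four pending I/Os complete at the same time. */
	(void)write(pipes[0][1], "x", 1);
	(void)write(pipes[2][1], "x", 1);
	(void)write(pipes[3][1], "x", 1);

	/* One kernel entry/exit reports every completion at once. */
	int nready = poll(pfd, NCHAN, 0);
	printf("poll returned %d ready descriptors from one call\n", nready);

	for (int i = 0; i < NCHAN; i++)
		if (pfd[i].revents & POLLIN)
			printf("  would resume the thread waiting on fd %d\n",
			    pfd[i].fd);
	return 0;
}

The same structure works with select or kqueue; the point is that the number of kernel crossings scales with the number of poll calls, not with the number of completed I/Os.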

It may be that there are no applications that you know of that make use of this capability (and, to be honest, I am not sure that the NetBSD SA implementation itself even supports it). But there is major performance winnage to be had in multiplexing I/O using userland threads, and it certainly *could* be taken advantage of by a state threads implementation.

Being a big fan of the whole OLPC/Eee PC small-computing revival, things that make multithreading really efficient on slow uniprocessor machines are important to me, but I realize that this is not the case for everybody, and I'm not trying to make a recommendation about which path NetBSD should take. I'm just sayin', there is serious value in userland threads. They're certainly doing their part to pay my bills. And no, not with Erlang - with plain old C.


