tech-kern archive


Re: revivesa status 2008/07/09



On Thu, Jul 24, 2008 at 11:07:36AM +0100, Mindaugas Rasiukevicius wrote:
> Jason Thorpe <thorpej%shagadelic.org@localhost> wrote:
> > 
> > On Jul 12, 2008, at 10:07 AM, Gary Thorpe wrote:
> > 
> > > The advantages of SA are supposed to be on *I/O* bound workloads  
> > > because it can reduce overhead due to kernel context switches by  
> > > doing userland switches where appropriate (according to previous  
> > > benchmarks, research etc.), but I could be wrong on this.
> > 
> > No.  I/O bound switching in SA requires a kernel context switch  
> > (because the thread blocks in the kernel).
> 
> Not only a context switch: it also needs a new LWP (kernel thread) when
> blocking. In the case of a new pthread (userland) or a cold "LWP-cache"
> (actually, it is a pool) that means: creation of a new LWP + a context
> switch. Also, at some point the LWP should be destroyed or put back into
> the "LWP-cache", which means more overhead, or, well... wasted memory.
> 
> Imagine a case of 1000 (new) pthreads which block - that would mean:
> 1000 * (LWP creation + SA context switch) operations. Plus, LWPs for VPs...

That is exactly the same LWP usage a 1:1 threading model would give. The 
SA process spends the time creating LWPs spread out across blocking 
events, while the 1:1 process created all of the same LWPs up front, at 
initial thread creation time.

Yes, the SA process also has an extra LWP sitting around per VP, but that 
is a constant, so it really should only count against process startup.

One other thing to consider is how long different context switches take. 
The two important ones are intra-process-same-space switches (inter-LWP in 
the kernel and inter-thread in SA userland) and user-kernel switches. When 
I was starting the Wasabi iSCSI target, before we settled on (SA) pthreads 
to implement it, I asked a number of NetBSD threading folks about exactly 
this.

The answer I was given was that user-kernel switches are NOTABLY more 
expensive. Like 10x. Their numbers, not mine. So while SA is adding extra 
steps, they are steps that aren't the most expensive thing around.

What I don't understand, though, is why we're discussing this issue like 
this. I don't see what the NetBSD kernel loses by having both 1:1 AND SA 
threading support. While the SA code is a fresh port, it is a fresh port 
of the NetBSD 4 code. So it actually is something we're familiar with as a 
project. People on this list have shown that SA does better on some 
workloads, and other people have shown (quite spectacularly) that 1:1 performs 
stunningly.

Yes, we had a nasty discussion when 1:1 was brought into current. But 
looking back, I think most of the nastiness was due to the fact that it 
was presented as an either-or proposition. We now have an entirely 
different case. Re-adding SA does NOT mean losing 1:1.

Take care,

Bill



