Subject: Re: Moving scheduler semantics from cpu_switch() to kern_synch.c
To: Jason Thorpe <>
From: Daniel Carosone <>
List: tech-kern
Date: 09/22/2006 12:36:17
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Fri, Sep 22, 2006 at 11:47:56AM +1000, Daniel Carosone wrote:
> Such programs are of course also most likely to be the ones that
> will hit the case you mentioned - they're probably threaded this way
> precisely to try and use all available CPUs on a parallelisable
> problem.
> I don't know how best to recognise this in something that a scheduler
> can use, though hints from the program will be a big factor, where
> given.

It occurs to me that, from within the kernel, SA and libpthread's
default behaviour is already a fairly big hint.  If the process has
multiple threads, but many of them are blocked in syscalls and other
things such that it's only using a small number of active lwps, then
its behaviour is probably of the kind where processor/cache affinity
helps most: threading is being used for programmer convenience more so
than to exploit brute parallelism.

If it has more lwps, it has more active parallelism, and those lwps
are probably better off not competing for shared resources.

So maybe the problem is easier after all: put a process's lwps in the
same processor set to start with, up to a maximum equal to the number
of processor elements in the set.  As soon as this number is exceeded,
or if some other evidence of real contention is seen, switch to
spreading the process's lwps across sets rather than within one.
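To make that concrete, here's a rough sketch in C of the placement
rule I mean.  The names (cpuset_pick, the contended flag) are made up
for illustration, not real NetBSD kernel interfaces:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative sketch only: cpuset_pick() and struct proc_sched are
 * hypothetical names, not existing NetBSD kernel APIs.
 */
#define NSETS           4       /* number of processor sets */
#define CPUS_PER_SET    2       /* processor elements per set */

struct proc_sched {
        int     home_set;       /* set the process's lwps start in */
        int     nlwps_placed;   /* lwps placed so far */
        bool    contended;      /* evidence of real contention seen */
};

/*
 * Pick a set for a new lwp: keep lwps together in the home set
 * (cache affinity) until the set is full or contention is seen,
 * then spread round-robin across all sets (parallelism).
 */
static int
cpuset_pick(struct proc_sched *ps)
{
        int set;

        if (!ps->contended && ps->nlwps_placed < CPUS_PER_SET)
                set = ps->home_set;             /* stay together */
        else
                set = ps->nlwps_placed % NSETS; /* spread out */
        ps->nlwps_placed++;
        return set;
}
```

The first CPUS_PER_SET lwps land in the home set; further lwps (or
any lwp placed after contention is detected) get spread across sets.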
