Subject: Re: splx() optimization [was Re: SMP re-entrancy in "bottom half" drivers]
To: Jonathan Stone <jonathan@dsg.stanford.edu>
From: Daniel Carosone <dan@geek.com.au>
List: tech-kern
Date: 06/10/2005 15:13:34
On Thu, Jun 09, 2005 at 08:59:44PM -0700, Jonathan Stone wrote:
> Well, the more I think about it, the more I agree with Stefan's
> observation, that the first item to attack is to get a device
> interrupt on CPU 0 to schedule a softint on a _different_ CPU.
Yes, not least because of that key word (almost :) 'schedule'. This
also directly implies that the softint code is running (largely)
outside the kernel_lock.
> Otherwise, (as I think Jason also commented today), since we take all
> interrupts on the first CPU, and (as Stefan observed) the biglock
> implies we run softints on the same CPU as the hard interrupt which
> triggered them.... all we're likely to do is pay dramatically more
> overhead, yet most of the time, we'll still run the networking code
> (hardints and softints) on the one CPU.
Yes, and we have another problem that tends to show up under other
workloads as well: all the other CPUs can be spinning on the kernel
lock trying to service syscalls from user processes. This can be
pretty dramatic when the code running inside the kernel_lock is LFS
over cgd(4) over RAIDframe R5, and the code running outside is making
lots of vnode-related syscalls on the fs.
--
Dan.