Re: status of NetBSD SMP support
Like Bartosz, I would like to add another question about scalability on multi-core hardware: where does NetBSD stand relative to, for example, Solaris?
Having been designed with the SPARC architecture in mind, Oracle Solaris presumably fits a multi-core environment well and can divide its workload efficiently between the cores. Is NetBSD able to do the same? If not, does NetBSD need to?
> Sent: Wednesday, August 10, 2016 at 10:35 AM
> From: "Bartosz Marcinkiewicz" <bartoszmarc%gmail.com@localhost>
> To: "Erik Fair" <fair%netbsd.org@localhost>, "David Holland" <dholland-tech%netbsd.org@localhost>
> Cc: "Thor Lancelot Simon" <tls%panix.com@localhost>, "Cherry G. Mathew" <cherry.g.mathew%gmail.com@localhost>, "NetBSD Symmetric Multi-Processing" <tech-smp%netbsd.org@localhost>
> Subject: Re: status of NetBSD SMP support
> Pardon me for jumping in, but I have a similar question from a slightly
> different angle: which areas of the NetBSD kernel could be improved so
> they scale better on SMP machines, and what could be implemented or
> researched?
> BR, bm.
> On 10/08/16 05:16, Erik Fair wrote:
> > To be clear: I know the kernel isn’t fully parallel ((S)MP) - otherwise the recent work on the networking stack (e.g. making ARP caches per interface) wouldn’t be necessary.
> > What I was looking for was a general status report of the NetBSD kernel (and userland) on SMP systems (more commonly known as “multicore” in au courant parlance): how multithreaded is it? How much lock contention? Is Big Lock gone - devolved into lots of small locks?
> > What’s the deal?
> > It’s one thing to boot and throw processes at cores (processors) - it’s another to have the kernel properly parallel so that when those processes make system calls, they don’t contend with each other (much).
> > All this in full cognizance of Amdahl’s Law and the non-parallel bits we can do nothing about (e.g. single I/O paths to sole devices). I just want some idea of how much more parallelism we can pull out of the currently non-parallel code, to reduce the serial time term in Amdahl’s Law to a minimum. The hardware guys are going to continue to throw ever more cores at us for lack of any better idea of what to do with the chip area that Moore’s Law has given them (and us), and just as Amdahl predicted, that serial term will dominate as core counts go up.
> > I bet the SIMD engines are going to get fancier, too. Just look at GPUs.
> > Erik <fair%netbsd.org@localhost>
> >> On Aug 9, 2016, at 11:17, David Holland <dholland-tech%netbsd.org@localhost> wrote:
> >> On Mon, Aug 08, 2016 at 07:36:05PM -0400, Thor Lancelot Simon wrote:
> >>>> Last I remember anyone reporting hard results, the scaling worked to
> >>>> ~16 but not to ~32 and the uvm page queue lock was the chief culprit.
> >>>> Dunno what if anything's been done about that...
> >>> There's a fragmentary discussion of it from around 2010 in the mailing list
> >>> archives, but something must have been done as that particular limitation
> >>> seems to have gone away. Our build cluster nodes run happily with 12
> >>> cores, 24 threads, and I do not see the scaling issues we observed between
> >>> 16 and 20 cores in my tests years ago.
> >> In that case we definitely need someone to collect some hard numbers :-)
> >> --
> >> David A. Holland
> >> dholland%netbsd.org@localhost