Re: Using NET_MPSAFE
On Sat, Aug 5, 2017 at 3:25 AM, Brian Buhrow <buhrow%nfbcal.org@localhost> wrote:
> hello. I'm excited to see the development of the MP-safe network
> stack in NetBSD. Now that some progress has been made in that regard and
> there are MP-safe drivers and stack components to use, I have some
> questions. I'm interested in using options NET_MPSAFE in NetBSD-8.0_BETA
> and the eventual netbsd-8 release. Here are my questions. I apologize if
> some of them seem obvious, but I don't want to make any assumptions when
> trying this new stuff.
First of all, the primary target of this work is routers, so if you use
NetBSD as a client or server, you may not gain much benefit from it.
For such users, we're looking for someone to work on an MP-safe Layer 4 :)
> 1. If I enable NET_MPSAFE in the kernel, will non-MP-ify'd components work
> in that kernel using the kernel lock? In other words, if I enable
> NET_MPSAFE and use the wm(4) driver, I'll get MP performance out of the
> network stack. However, what if I try to use a non-MP-ify'd component on
> that same machine, i.e. agr(4) or pf(4)? It looks to me like things should
> work, but traffic through the non-MP-ify'd components will be single
> threaded. Is this correct?
Nope, unfortunately. Non-MP-safe components need to be protected somehow
(probably by taking KERNEL_LOCK at the component's entry points)
if NET_MPSAFE is enabled. That's why NET_MPSAFE is not enabled by default.
We're looking for someone to work on those tasks too.
Nonetheless, some non-MP-safe components happen to work even if NET_MPSAFE
is enabled. For example, CARP isn't MP-safe yet, but it runs quite
stably with NET_MPSAFE thanks to the big lock for the network stack
(softnet_lock). My dogfooding router has been running with it for several
months without any issues.
FYI: you can check the lists of MP-safe/non-MP-safe components at:
> 2. Am I correct that when NET_MPSAFE is turned on, the network stack is
> running as an LWP inside the kernel?
> And, am I correct that this means that
> even if a particular network component is single-threaded, it's able to
> execute on any CPU, thus reducing CPU congestion on CPU0 as happens on the
> stock NetBSD kernels?
NetBSD doesn't have dedicated threads for network components (except for
timers). For transmission from a userland program, the network stack runs
in an LWP of that program. For reception, the network stack runs in
software interrupt contexts. In either case, the big locks (KERNEL_LOCK and
softnet_lock) prevent those contexts from running in parallel. The
NET_MPSAFE option gets rid of (some of) the big locks, so the network stack
can run in parallel on multiple CPUs.
NET_MPSAFE doesn't remove the big locks for transmissions from userland,
so sending packets doesn't run in parallel. For packet reception and
forwarding, NET_MPSAFE removes the big locks and packet processing runs
in parallel. If you use an MP-safe network device driver such as
wm(4), NET_MPSAFE enables the hardware multi-queue feature and incoming
packets are distributed to multiple CPUs. If you use a non-MP-safe driver,
all packets are delivered to CPU0 and no packet processing runs in
parallel, even with NET_MPSAFE enabled.
> 3. How stable is the NET_MPSAFE stack? Is anyone using it in any sort of
> production environment?
> the BSDCAN paper I read suggests it's pretty stable, but I'm wondering if
> anyone can report their experience.
We (IIJ) are working on making the network stack with NET_MPSAFE stable
enough for production use.
What I can say now is that if you use only MP-safe network components,
it should be stable.