tech-net archive
Re: npf: nbuf_ensure_contig and large options
Maxime Villard <max%m00nbsd.net@localhost> wrote:
> > I do not see that as an NPF problem. It expects the network stack to
> > support certain operations and, in this case, it is a limitation of the
> > mbuf interface and m_ensure_contig().
>
> No it's not, the mbuf interface provides a way to read the packet entirely
> (via mbuf chains), and npf already handles mbuf chains correctly
> everywhere except in nbuf_ensure_contig. Here the limitation comes from
> NPF only, not from the kernel.
It is, because the mbuf interface also provides a promise (albeit a weak
one) that it can re-arrange/re-allocate the buffer chain into a contiguous
block of a requested length (MHLEN being just an arbitrary implementation
limit).
In fact, the mbuf API has always had such routines exactly for the purpose
of operating on the protocol headers, because the network stack, very much
like NPF, prefers to operate on a contiguous memory block when dealing
with the TCP/IP headers. The payload (data) is a different story (and
NPF, via BPF, is happy to handle a chain of buffers of any size).
NPF could copy the headers into local structs, but that would be inefficient.
So, it seems we agree that if m_ensure_contig() fails, for now we just
say -- "tough". If it becomes a real problem, then we will revise this.
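To make the contiguity point concrete, here is a minimal userland sketch of
what such a routine promises. Everything here -- struct seg, ensure_contig(),
the 64-byte capacity -- is a hypothetical illustration, not the real mbuf(9)
API or layout:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Toy stand-in for an mbuf chain. ensure_contig() mimics what an
 * m_ensure_contig()-style routine does: make the first `len` bytes
 * of the chain land in one contiguous segment, or fail.
 */
struct seg {
	struct seg *next;
	size_t len;
	char data[64];	/* fixed segment capacity, akin to the MHLEN limit */
};

/*
 * Return 0 if the head segment now holds at least `len` contiguous
 * bytes, or -1 if the request cannot be satisfied (the "tough" case).
 */
static int
ensure_contig(struct seg *head, size_t len)
{
	if (len > sizeof(head->data))
		return -1;	/* request exceeds the implementation limit */

	while (head->len < len) {
		struct seg *n = head->next;
		size_t take;

		if (n == NULL)
			return -1;	/* chain is shorter than `len` */
		take = len - head->len;
		if (take > n->len)
			take = n->len;
		/* Pull bytes forward into the head segment. */
		memcpy(head->data + head->len, n->data, take);
		head->len += take;
		memmove(n->data, n->data + take, n->len - take);
		n->len -= take;
		if (n->len == 0) {	/* drop the emptied segment */
			head->next = n->next;
			free(n);
		}
	}
	return 0;
}
```

The failure cases mirror the discussion: a request larger than the segment
capacity simply returns an error, rather than the routine trying to grow
the segment arbitrarily.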
> > The mbuf(9) API and the underlying structuring could be improved (it is
> > really dated by now). While mbufs of an arbitrary size can already be
> > allocated, it is very much desirable to have a fixed-size pool cache of
> > them (MLEN = 512 is also sensitive to performance).
>
> Fixed-size pools are already 99% of the mbufs allocated by the kernel;
> either sizeof(struct mbuf) (normal) or MCLBYTES (cluster). Arbitrary
> sizes are rare, only come from some drivers, and they have good reasons
> for using them (optimization, avoid reallocating mbufs and copying the
> driver buffers all the time).
I didn't say they weren't. I just pointed out that there are good reasons
why we currently use pool_cache(9)ed buffers of a particular size (there
was also quite a bit of benchmarking regarding MLEN = 512 back in the day,
although things might have changed with modern CPUs).
So, that was exactly my point -- this is not just a matter of changing
mbufs to use arbitrarily sized buffers or bumping the constants. If we
need to add support for long header chains, then more thought should go
into all of this at the network stack level. But we seem to be in agreement.
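For illustration, the fixed-size split being discussed can be sketched as a
simple size-class choice. TOY_MLEN and TOY_MCLBYTES are hypothetical
constants (512 matches the MLEN figure mentioned above; 2048 is just an
example), and toy_bufclass() is not a real kernel function:

```c
#include <stddef.h>

/* Hypothetical size classes, in the spirit of small mbuf vs. cluster. */
#define TOY_MLEN	512
#define TOY_MCLBYTES	2048

/*
 * Return the fixed, pool-cached class size that would serve a request
 * of `want` bytes, or 0 for the rare arbitrary-size path.
 */
static size_t
toy_bufclass(size_t want)
{
	if (want <= TOY_MLEN)
		return TOY_MLEN;	/* small fixed-size pool cache */
	if (want <= TOY_MCLBYTES)
		return TOY_MCLBYTES;	/* cluster pool cache */
	return 0;			/* arbitrary-size allocation */
}
```

The point of such fixed classes is that pool_cache(9) can hand out and
recycle same-sized buffers cheaply, which is exactly what would need
rethinking if long header chains demanded larger contiguous blocks.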
--
Mindaugas