Port-arm archive


Re: Increasing amount of cached (unflushed/un-invalidated) file pages causing kernel panics

On Sun, 7 Nov 2021 23:00:28 -0000 (UTC)
mlelstv%serpens.de@localhost (Michael van Elst) wrote:

> fgsdfg4t534tf3%onet.pl@localhost (Marcin Kaminski) writes:
> >I'm wondering what is the default value of kern.maxvnodes based on,
> >as I've noticed, it varies from one board to another.  
> It is based on the MAXUSERS value (a kernel constant) but scaled up
> according to memory size.
> >Nonetheless one issue persists. Kernel panic  
> >[ 781.6927155] panic: kernel diagnostic assertion "len <= buflen"
> >failed: file "/.../usr/src/sys/kern/uipc_mbuf.c", line 1822
> >[ 781.6927155] fp ffffc000b015fa80 vpanic() at ffffc0000056a4dc
> >netbsd:vpanic+0x14c
> >[ 781.7027167] fp ffffc000b015fae0 kern_assert() at ffffc000007cade8
> >netbsd:kern_assert+0x58
> >[ 781.7027167] fp ffffc000b015fb70 m_align() at ffffc00000598f44
> >netbsd:m_align+0x114
> >[ 781.7027167] fp ffffc000b015fba0 m_split_internal() at
> >ffffc00000599e48 netbsd:m_split_internal+0xc8
> >[ 781.7127197] fp ffffc000b015fbf0 nfsrv_getstream() at
> >ffffc000004713fc netbsd:nfsrv_getstream+0xac
> >[ 781.7127197] fp ffffc000b015fc40 nfsrv_rcv() at ffffc000004716e8
> >netbsd:nfsrv_rcv+0x1c8  
> That looks like a bug in the NFS server code, not sure how that could
> be related to memory usage. The panic is triggered by receiving an NFS
> request.
> What is the NFS client ?

It was Linux 4.19 amd64

Encouraged to look at NFS rather than the operating system, I revised
the parameters used on both the client and the server side.

There isn't much room for customization on the server side, as far as
I can tell (man nfsd). The possibilities known to me:
sysctl: vfs.nfs.iothreads
nfsd parameters: -n1 -4 -u
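For reference, a minimal sketch of applying these server-side knobs on
NetBSD. The flags are the ones listed above; the rc.conf variable is an
assumption based on the usual NetBSD rc.conf conventions, and the
iothreads value is illustrative:

```shell
# Tune the number of NFS server I/O threads at runtime
# (value 4 is illustrative, not a recommendation)
sysctl -w vfs.nfs.iothreads=4

# Start nfsd with one server thread, NFSv4 support, and UDP transport
# (the flag combination from the message above; see nfsd(8))
nfsd -n1 -4 -u

# To persist the flags across reboots, something like this in
# /etc/rc.conf (variable name assumed):
#   nfs_server=YES
#   nfsd_flags="-n1 -4 -u"
```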

It turns out that with the '-u' parameter there are no more panics, but
nfsd behaves somewhat strangely (in my estimation).

On both Linux and FreeBSD clients, when I write a single file to the
NFS mount point, the transfer starts at ~10 MiB/s (the maximum rate on
that link), then drops to a few MiB/s or even below 1 MiB/s. It doesn't
look like buffering, as the link activity LEDs blink without pause -
some performance issue on the server side, I suppose. Under mixed load
the server behaves as if it were detached for a moment.

Neither the server-side customization possibilities mentioned above nor
the client mount options seem to affect the problem. vfs.nfs.iothreads
was tested with values 1-6, and -n with values 1-6.
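For completeness, the kind of client-side mounts tested looked roughly
like this. Host and export names are placeholders, and the option
values are illustrative, not recommendations:

```shell
# Linux client, NFSv3 over UDP (the server only stays up with -u);
# rsize/wsize values are illustrative
mount -t nfs -o vers=3,proto=udp,rsize=32768,wsize=32768 \
    server:/export /mnt/nfs

# FreeBSD client, roughly equivalent options (see mount_nfs(8)):
# mount_nfs -o nfsv3,udp server:/export /mnt/nfs
```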

The server doesn't crash, but not all clients accept UDP as a transport
protocol. Omitting the transport protocol options (-t and -u)
altogether makes the panics come back.

It's still a Rock64 (4 GB RAM), aarch64, with kern.maxvnodes=5000.
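A sketch of how such a kern.maxvnodes setting is applied, assuming the
standard sysctl mechanism and the usual /etc/sysctl.conf boot-time
processing on NetBSD:

```shell
# Cap the vnode cache at runtime (value from the message above)
sysctl -w kern.maxvnodes=5000

# Persist across reboots via /etc/sysctl.conf, which is read at boot
echo 'kern.maxvnodes=5000' >> /etc/sysctl.conf
```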

Best regards,
Marcin Kaminski
