ATF t_mlock() babylon5 kernel panics
Apologies for the multi-list posting, but I think this needs a wide
audience - please respect the Reply-To and send replies only to
current-users@
I have been looking into this, a little.
First, while the t_mlock() test is most likely broken, it
should never cause a kernel panic (or even a kernel diagnostic
printf, which it is also doing) - so I do not propose fixing
it until the kernel gets fixed.
I would ask that no-one else fix it either.
It also seems to have changed behaviour since jemalloc() was
updated - which is also revealing, but again, that's a good
thing, so it should keep using the updated jemalloc().
The first issue I noticed is that t_mlock() apparently believes
the malloc(3) man page, which states:
The malloc() function allocates size bytes of uninitialized memory. The
allocated space is suitably aligned (after possible pointer coercion) for
storage of any type of object.
and in particular, those last few words. The "any type of object" that
t_mlock wants to store is a "page" - that is, a hardware page. It obtains
the size of that using:
	page = sysconf(_SC_PAGESIZE);
and then does
	buf = malloc(page);
and if buf is not NULL (which it does check) assumes that it now
has a correctly page-aligned, page-sized block of memory in which
it can run mlock() related tests.
Something tells me that the "any type of object" does not include this
one, and that t_mlock should be using posix_memalign() instead to allocate
its page, so it can specify that it needs a page aligned page.
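By way of illustration only, here is roughly what the allocation would
look like using posix_memalign() - just a sketch, not a proposed patch,
and the error handling is only a placeholder:

	#include <err.h>
	#include <stdlib.h>
	#include <unistd.h>

	long page = sysconf(_SC_PAGESIZE);
	void *buf;

	/*
	 * posix_memalign() returns 0 on success and an error number on
	 * failure; unlike malloc(), it lets the caller ask for page
	 * alignment explicitly, so buf really would start on a page
	 * boundary.
	 */
	if (posix_memalign(&buf, (size_t)page, (size_t)page) != 0)
		errx(EXIT_FAILURE, "posix_memalign failed");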
Again, I am not proposing fixing the test until the kernel issues
are corrected, but it would be good for someone who knows what alignment
malloc() really promises to return (across all NetBSD architectures)
to rewrite the man page to say something more specific than "any type of
object"!
NetBSD's mlock() rounds down, so regardless of the alignment of the
space allocated, the mlock() tests should be working (the locking might
not be exactly what the test is expecting, but all it is doing is keeping
pages locked in memory - exactly which pages, this test does not really
care about).
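To make "rounds down" concrete, for a power-of-two page size (which is
the case on every port I can think of) the address mlock() actually
starts from is just (a sketch, the name is mine):

	/* the first address affected by mlock(buf, len), rounded down */
	uintptr_t start = (uintptr_t)buf & ~((uintptr_t)page - 1);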
On my test setup, the kernel did not panic. It does however experience
some other weirdness, some of which is also apparent in the babylon5
tests, and some of which might be.
My test system is an amd64 XEN DomU - configured with no swap space, and
just 1GB RAM. It typically has a tmpfs mounted limited to 1/2 GB
(actually slightly more than that - not sure where I got the number from,
there may have been a typo... the -s param is -s=537255936 in fstab).
That oddity should be irrelevant.
The first thing I noticed was that when I run the t_mlock test in this
environment, it ends up failing when /tmp has run out of space. And I
mean really run out of space, in that it is gone forever, and nothing I
have thought of so far to try gets any of that space back again.
I assume that jemalloc() (aka malloc() in the test) is doing some kind
of mmap() that is backed by space on /tmp and continually grabbing more
until it eventually runs out, and that the kernel never releases that
space (even after the program that mapped it has exited). That seems
sub-optimal, and needs fixing in the kernel: anonymous mmap()s (or whatever
kind jemalloc() is doing) need to be released when there are no more
processes that can possibly use them.
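For reference, what I am assuming jemalloc() does under the covers is
roughly the following - an assumption on my part, I have not gone and
checked the jemalloc source:

	#include <err.h>
	#include <stdlib.h>
	#include <sys/mman.h>

	/*
	 * A private anonymous mapping (one page here; jemalloc would of
	 * course map much more at a time).  There is no file behind it,
	 * the pages come from anonymous memory, which presumably competes
	 * with tmpfs for the same resource.
	 */
	void *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED)
		err(EXIT_FAILURE, "mmap");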
I did not try umount -f (easier to just reboot...) but a regular umount
failed (EBUSY) even though there was nothing visibly using anything on
/tmp (and I killed every possible program, leaving only init - and yes,
that did include the console shell I use to test things).
Umounting the tmpfs before running the t_mlock test worked fine (which also
illustrates that none of the very few daemon processes, nor the shell, etc,
from my login, are just happening to be using /tmp - and that it is the
results of the malloc() calls from t_mlock that must be the culprit).
(While ATF is running, it would be using /tmp as both its working
directory, and for output file storage, but after the test fails, all of
those processes are gone).
With an unmounted /tmp that issue (overflowed /tmp) was gone - but that
might just be because the (single) filesystem that everything lives on
in this test environment is fairly large, and very little of it is used
by the fairly bare-bones NetBSD that is installed in it (it is plenty
big enough to store all of the temporaries and results of a release build
when I use it to run that kind of test. That hasn't been needed (by me)
for a while, and I clean it out and do whole new fresh builds from time
to time, so right now it is only about 3% occupied).
Next issue: the kernel prints lots of
[ 41.2801543] pmap_unwire: wiring for pmap 0xffffc98002f6b5c0 va 0x7f7ff7ef1000did not change!
(note I do not yet have gson's recent change to insert the missing space...)
That happens during the mlock_clip subtest, which passes.
After allocating a buffer (as above), this test is, in its entirety:
	for (size_t i = page; i >= 1; i = i - 1024) {
		(void)mlock(buf, page - i);
		(void)munlock(buf, page - i);
	}
which isn't really much of a test, as it doesn't even attempt to
check the results of the mlock/munlock. "page" is the page size
(4KB, 8KB, 16KB, or whatever it is on an x86_64 (Intel branded processor)).
From the number of kernel messages, I'm assuming one from each of
those mlock/munlock calls (possibly excepting the first, but because
a similar buffer was malloc()/free()'d in the previous sub-test, it
is conceivable that the page started out locked - that is, if malloc()
returned the same page as last time).
The buffer is free()'d after that loop ends (of course).
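A version of that loop which at least reported what the mlock()/munlock()
calls return would need no more than this (a sketch only, assuming the
obvious headers):

	for (size_t i = page; i >= 1; i -= 1024) {
		if (mlock(buf, page - i) != 0)
			fprintf(stderr, "mlock(buf, %zu): %s\n",
			    (size_t)(page - i), strerror(errno));
		if (munlock(buf, page - i) != 0)
			fprintf(stderr, "munlock(buf, %zu): %s\n",
			    (size_t)(page - i), strerror(errno));
	}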
First question is why the kernel feels a need to printf in this
situation, and what that printf actually means. Anyone?
Next issue: the mlock_err() test fails - this is a test of mlock()
and munlock() calls that are supposed to fail. All the simple ones do
(attempting to mlock(0) or mlock(-1), for example, or to munlock() those).
Then the test tries something that POSIX says may fail, but which we
do not cause to fail:
	mlock((void *)(((uintptr_t)buf) + page/3), page/5);
	munlock((void *)(((uintptr_t)buf) + page/3), page/5);
The test includes code to verify that both of those return 0. That
has recently started failing (ie: this test used to work).
My guess (and that is all it is for now) is that "buf + page/3" might,
with the new malloc(), no longer be within the same hardware page as "buf"
and so when rounded down to a page boundary, is giving an address in
the next page, rather than the "one true page" that the test is assuming
it has allocated, and that that page is outside the space that has been
mmap()'d by jemalloc(), and consequently does not exist in the process
address space, resulting in an error. That error should be ENOMEM but
the test doesn't bother to reveal that info. Anyway, for this one I
am assuming it is just a test bug (a bad assumption about malloc()) which
will go away when the test is fixed eventually.
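If anyone wants to check that guess quickly, comparing which page each
address falls in is enough - a sketch, with names of my own invention:

	uintptr_t mask = ~((uintptr_t)page - 1);
	uintptr_t buf_page = (uintptr_t)buf & mask;
	uintptr_t off_page = ((uintptr_t)buf + page / 3) & mask;

	/*
	 * If these differ, buf + page/3 has crossed into the next
	 * hardware page, which may not be part of what jemalloc mapped.
	 */
	if (buf_page != off_page)
		fprintf(stderr, "buf + page/3 is in the next page\n");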
Next test is the mlock_limits() test - as a guess (and again, this is
just a guess) it looks as if this might be where the loss of space in
the tmpfs happens.
This test does the standard buf = malloc(page), then forks (so both
processes should have that "page" allocated - via non-shared mmap()'d
space, I am presuming). The parent does nothing but wait for the
child, which uses its exit status to communicate whether the test
passed or failed (and the parent gets back "passed", ie: exit(0), from
the child). The parent then does free(buf) and is finished.
The child manipulates its RLIMIT_MEMLOCK, constantly setting it to 1
byte less than the amount it is going to mlock(), in a loop like:
	struct rlimit res;	/* declared before this loop in the test */

	for (ssize_t i = page; i >= 2; i -= 100) {
		/* allow one byte less than we are about to try to lock */
		res.rlim_cur = i - 1;
		res.rlim_max = i - 1;
		if (setrlimit(RLIMIT_MEMLOCK, &res) != 0)
			_exit(EXIT_FAILURE);
		/* this mlock() is expected to fail, with EAGAIN */
		if (mlock(buf, i) != -1 || errno != EAGAIN) {
			(void)munlock(buf, i);
			_exit(EXIT_FAILURE);
		}
	}
(in which for brevity here, I have eliminated a call to fprintf()
and setting errno to 0 before the mlock(), though that should not
be necessary - errno is only tested if mlock() fails, in which case
it should have been set to the reason why it failed. EAGAIN (according
to mlock(2)) means the locking "would exceed either the system or
per-process limit for locked memory" - where here the latter is intended
to happen.)
Since this test passes, one can assume that every call of mlock() does
return -1, and in every case, errno gets set to EAGAIN. That's good.
Note that the test does no munlock() calls, nor does it free the page
buffer ("buf") - it simply exits when all has succeeded (or failed,
though if that happened, it would munlock() - or attempt to).
Also note that nowhere does the test ever write to, or for that matter,
read from, the "buf" that was allocated in the parent, then inherited
over the fork() (there is just one fork() - no loops around that).
That is, there would be no reason for the kernel to have made a copy
of the pages via CoW. I'm assuming (guessing, in fact, a very very wild
guess) that this is significant in the "full tmpfs" issue - as it would
be quite rare, I presume, for a process with mmap()'d anonymous pages
to fork, and then never touch the pages at all (just attempt to manipulate
the kernel state for them via mlock()). Whether the mlock() actually does
anything at all (changes any kernel state) is unknown - it shouldn't.
I am presuming that this is where the tmpfs is all consumed (though I
have no idea why it would be all - and it is not impossible that it happened
in one of the earlier tests) as the next test, mlock_mmap, doesn't use the
normal paradigm for allocating memory; instead it does
	buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE | MAP_WIRED, -1, 0);
(written slightly differently, but that is what happens).
After the jemalloc() changes, this mmap() call fails (the test
goes on to check buf != MAP_FAILED, and that check fails).
This should indicate that there is no more available anon memory.
What the rest of this test would do is irrelevant, as it doesn't do it.
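When the test does eventually get fixed, just reporting errno at that
point would help - something like (again, only a sketch):

	buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE | MAP_WIRED, -1, 0);
	if (buf == MAP_FAILED)
		/* ENOMEM here would support the "no more anon memory"
		   theory; anything else would point somewhere different */
		fprintf(stderr, "mmap: %s\n", strerror(errno));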
The final test case is mlock_nested, which does the normal buf = malloc(page),
which succeeds (presumably returning the same page as every other time).
The test then runs a loop (100 times) doing mlock(buf, page)
(without changing the parameters, or doing munlock()) each time.
This verifies that a locked page is allowed to be locked again
(after the first iteration, which initially locks it).
If one of those mlock() calls fails, the test fails, and does not
munlock() the page presumably locked the first time around, nor
does it free the buffer. But since this is the end, the whole
test will exit at this point.
This test (after the jemalloc() update) also fails, though again the test does
not tell us what error it received, just that mlock() did not return 0.
It also does not reveal which iteration of the loop it failed upon.
While I will not fix the bugs in the test that might alter the way
it interacts with the kernel, I will update the test to get more
information from the tests that fail (and check results that are
currently not checked, and make them available - but, so as not to
alter what happens, merely by output to stderr).
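For the mlock_nested case, for example, the sort of extra reporting I
have in mind amounts to no more than this (a sketch of the idea, with
the output going to stderr as just described):

	for (int i = 0; i < 100; i++) {
		if (mlock(buf, page) != 0) {
			/* say which iteration failed, and why */
			fprintf(stderr, "mlock_nested: iteration %d: %s\n",
			    i, strerror(errno));
			break;
		}
	}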
I'd appreciate it if someone who understands how the mappings work
could make the worst possible assumptions about what jemalloc() is
actually doing, or not doing, and see if the kernel bugs can be
explained, including the crash that happens during the mlock_clip()
test.
There, in the i386 tests, the kernel writes just one of the
printf messages about unwiring, then panics...
[ 6821.7909726] pmap_unwire: wiring for pmap 0xc1b9a000 va 0xac284000 did not change!
[ 6821.7909726] panic: kernel diagnostic assertion "pg->wire_count > 0" failed: file "/tmp/bracket/build/2019.03.12.18.46.20-i386/src/sys/uvm/uvm_page.c", line 1594
[ 6821.7909726] cpu0: Begin traceback...
[ 6821.7909726] vpanic(c107f760,c9867d30,c9867d44,c08e21ce,c107f760,c107f69f,c11314a1,c11341b8,63a,c9867de4) at netbsd:vpanic+0x12d
[ 6821.8109974] kern_assert(c107f760,c107f69f,c11314a1,c11341b8,63a,c9867de4,c9867d80,c08cfb14,c183ee98,ac284000) at netbsd:kern_assert+0x23
[ 6821.8310116] uvm_pagewire(c183ee98,ac284000,6d44000,3,33,3,ac284000,0,c1b9a000,c9867e01) at netbsd:uvm_pagewire+0x80
[ 6821.8511062] uvm_fault_upper_enter.isra.5(c2479490,c183ee98,c2479490,1,c08e5405,c183ee98,c9867e40,cfa787f1,c9867e28,c9867e28) at netbsd:uvm_fault_upper_enter.isra.5+0x1dd
[ 6821.8511062] uvm_fault_internal(c221a9f4,ac284000,3,3,c221aa04,c22949ac,c221aa24,ac285000,c9867f0c,c08d8ab6) at netbsd:uvm_fault_internal+0x1500
[ 6821.8710371] uvm_fault_wire(c221a9f4,ac284000,ac285000,3,1,0,58,c221aa24,c22949ac,1000) at netbsd:uvm_fault_wire+0x4f
[ 6821.8710371] uvm_map_pageable(c221a9f4,ac284000,ac285000,0,0,ac285000,c2391d00,c9867fa8,c138899c,c9867f9c) at netbsd:uvm_map_pageable+0x1bc
[ 6821.8911479] sys_mlock(c2391d00,c9867f68,c9867f60,c21985e4,c9865000,c9867f60,cb,c9867f68,0,0) at netbsd:sys_mlock+0xb9
[ 6821.9110634] syscall() at netbsd:syscall+0x143
[ 6821.9110634] --- syscall (number 203) ---
[ 6821.9310749] ac2d0257:
[ 6821.9310749] cpu0: End traceback...
(Aside: note this is a recent test run and includes gson's missing
space replacement fix!)
Also, an explanation of what might be hanging onto all that tmpfs memory
after everything should have exited.
Thanks all,
kre