Subject: Re: pmap l1pt allocation strategy
To: None <email@example.com>
From: Richard Earnshaw <firstname.lastname@example.org>
Date: 11/28/2002 09:52:15
> I've increased PMAP_STATIC_L1S to 128 and still get one or three per evening.
That's quite a lot of active processes (though not necessarily
unreasonable). How much RAM do you have?
> Wouldn't the wiser strategy be:
> 1. check whether there's a free one on the l1pt free list
> 2. if this fails, try to allocate a new one
> 3. if this still fails, go to the static list?
> This way, we only go to the static list if memory is fragmented or if we
> run out of memory...
> Would this fail during bootstrap? As the machine in question is my production
> server, I don't want to waste time trying stupid things...
> The only alternative to make this pmap stable would be a VAX-like
> strategy of 16kB VM pages backed by 4kB MMU pages, which naturally
> ensures that we never run out of aligned 16kB blocks.
It might make things a little more stable, though it would probably only
delay the onset of resource starvation slightly; and it would mean that
the static pool of L1s would normally be dead memory -- which argues for
making it a *much* smaller number. Another alternative is to implement
page-table stealing, so that we nick the page tables off another process
(since they can be recreated from other info); but then we have to watch
out for wired entries.
We should also add page-table reclamation on process swap-out.
But the fundamental problem is that there is no way to ask the paging
system to free up a 16K-aligned block of memory -- all you can ask for is
four individual 4K pages, which need not be contiguous or aligned, so we
end up in the situation where the paging system thinks there are enough
free resources, but the vm system doesn't.