tech-kern archive


Re: kernel memory allocators



Here is a new version that makes the initialization order of the
kernel_maps and the pool allocators explicit.
It also fixes a bug in the previous version where not all pool_reclaim
callbacks were installed.

This removes the deferred initialization queue for the pool_allocators
from the pool code.
No pool could make allocations before the kernel maps were up anyway;
as far as I can see, this holds true even for the arm32 pmap.

This behavior is now enforced.

The next step I will try is to give the pool subsystem a set of
differently sized pool_allocators to choose from (if no allocator is
passed in), to reduce the space lost when only a few items, or just a
single item, fit into a pool page.
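The selection idea can be sketched in a few lines of userland C. The
candidate page sizes and the waste threshold below are purely
illustrative, not taken from the patch:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical sketch: given an item size, pick the smallest candidate
 * allocator page size that either fits a reasonable number of items or
 * wastes less than 1/8 of the page.  Falls back to the largest
 * candidate.  All names and thresholds here are made up for
 * illustration.
 */
static size_t
pick_alloc_pagesize(size_t itemsize, const size_t *cand, size_t ncand)
{
	for (size_t i = 0; i < ncand; i++) {
		size_t nitems = cand[i] / itemsize;
		size_t waste = cand[i] - nitems * itemsize;

		/* accept if at least 8 items fit or waste is under 1/8 */
		if (nitems >= 8 || waste * 8 < cand[i])
			return cand[i];
	}
	return cand[ncand - 1];
}
```

For example, a 3000-byte item wastes more than a quarter of a 4 KiB or
8 KiB page, so this sketch would move it to a 16 KiB allocator page.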

On Fri, Jan 21, 2011 at 11:51 AM, Lars Heidieker
<lars.heidieker%googlemail.com@localhost> wrote:
>> Do you have your changes available for review?
>
> The kmem patch includes:
>
> - enhanced vmk caching in the uvm_km module, not only for page-sized
> allocations but also for low integer multiples of the page size.
>  (changed for rump as well)
> - a changed kmem(9) implementation using these new caches (it does
> not use vmem; see the note below)
> - removed the malloc(9) bucket system and made malloc(9) a thin
> wrapper around kmem(9), just like in the yamt-kmem branch.
>  (changed vmstat to cope with the no longer existing symbol for the malloc buckets)
>
> - pool_subsystem_init is split into pool_subsystem_bootstrap and
> pool_subsystem_init: after bootstrap, statically allocated pools can
> be initialized, and after init, allocation is allowed.
> The only instances (as far as I found them) that do static pool
> initialization earlier are some pmaps; those are changed accordingly.
> (Tested on i386 and amd64 so far.)
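The two-phase startup described in the last item can be sketched in
userland C. This is an illustration of the ordering constraint, not
the actual NetBSD code; all `sketch_` names are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative sketch of the split: after the bootstrap phase,
 * statically allocated pools may be initialized; allocations through
 * the pools are only legal once the init phase has completed.
 */
enum pool_phase { POOL_PHASE_NONE, POOL_PHASE_BOOTSTRAP, POOL_PHASE_INIT };

static enum pool_phase pool_phase = POOL_PHASE_NONE;

static void
sketch_pool_subsystem_bootstrap(void)
{
	pool_phase = POOL_PHASE_BOOTSTRAP;
}

static void
sketch_pool_subsystem_init(void)
{
	pool_phase = POOL_PHASE_INIT;
}

/* In kernel code these checks would be KASSERT-style assertions. */
static bool
sketch_pool_init_allowed(void)
{
	return pool_phase >= POOL_PHASE_BOOTSTRAP;
}

static bool
sketch_pool_alloc_allowed(void)
{
	return pool_phase >= POOL_PHASE_INIT;
}
```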
>
> vmem:
> Status quo:
> The kmem(9) implementation used vmem for its backing, with a
> pool_allocator for each size, which is unusual for caches.
> The vmem(9) arena backing kmem(9) uses a quantum size of the machine
> alignment, so 4 or 8 bytes; therefore the quantum caches of the vmem
> are very small, and kmem extends these to larger ones.
> The import functions for vmem do this on a page-sized basis, and the
> uvm_map subsystem is in charge of controlling the virtual address
> layout, so vmem is just an extra layer.
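The quantum-cache dispatch described above (modeled on the Bonwick
vmem design) can be sketched as follows; the constants are
illustrative, with the quantum set to the machine alignment mentioned
in the mail:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of quantum-cache dispatch: allocations of up to QCACHE_MAX
 * quanta are served from a per-size cache; larger ones fall through
 * to the general arena.  Returns the cache index, or -1 for a
 * general-arena allocation.  Constants are illustrative only.
 */
#define QUANTUM     8	/* machine alignment, e.g. 8 bytes */
#define QCACHE_MAX  16	/* caches cover sizes up to 16 quanta */

static int
qcache_index(size_t size)
{
	size_t q = (size + QUANTUM - 1) / QUANTUM;  /* round up to quanta */

	if (q == 0 || q > QCACHE_MAX)
		return -1;                          /* general arena */
	return (int)(q - 1);                        /* cache for q quanta */
}
```

With a quantum this small, the caches only cover allocations up to
128 bytes, which is why kmem had to layer its own larger size classes
on top.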
>
> Questions:
> Shouldn't vmem provide the pool caches with pages for import into the
> pools, and shouldn't the quantum caches of vmem provide these pages
> for the low integer multiples of the page size? That is the way I
> understand the idea of vmem and its implementation in Solaris.
> But this only makes sense if vmem(9), rather than the uvm_map system,
> is in charge of controlling, say, the kmem map; slices of this submap
> would then be described by vmem entries and not by map entries.
>
> With the extended vmk caching for the kernel_map and kmem_map I
> implemented the quantum caching idea.
>
> Results on an amd64 four-core 8gb machine:
>
> Sizes after building a kernel with make -j200, du /, and
> ./build.sh -j8 distribution:
>
>                        current          changed kmem
> pool size:             915mb / 950mb    942mb / 956mb
> pmap -R0 | wc          2700             1915
>
> Sizes after stressing the memory system with several instances of the
> Sieve of Eratosthenes, each consuming about 540mb, to shrink the
> pools:
>
>                        current          changed kmem
> pool size:             657mb / 760mb    620mb / 740mb
> pmap -R0 | wc          4280             3327
>
>
> Those numbers are not precise at all (especially the latter ones),
> but they do hint in a direction.
> Keep in mind that allocations that go to malloc in the current
> implementation go to the pool in the changed one.
> The runtime of the build process was the same to within a few seconds.

