tech-kern archive


Re: kernel memory allocators



Yes, that makes sense for trying my changes with pool_allocators of
different pool_page sizes...
I think the initialization order still has to be tested on bare metal,
though, doesn't it?

On Fri, Jan 21, 2011 at 12:29 PM, Antti Kantee <pooka%cs.hut.fi@localhost> 
wrote:
> btw, just in case you're interested, you can easily use rump for userspace
> development/testing of the kmem/vmem/pool layers.  src/tests/rump/rumpkern
> has examples on how to call kernelspace routines directly from user
> namespace.
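
A minimal sketch of what such a userspace test could look like, assuming
the rump_init()/rump_schedule()/rump_unschedule() entry points from
<rump/rump.h>; the hand-declared kmem(9) prototypes and the KM_SLEEP
value below are assumptions for illustration, not something copied from
src/tests/rump/rumpkern:

    /*
     * Hypothetical userland test exercising kmem(9) through rump.
     * Roughly: cc test.c -lrump (plus -lrumpuser/-lpthread if needed).
     */
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    #include <rump/rump.h>

    /* kernel-namespace prototypes, declared by hand for this sketch */
    void	*kmem_alloc(size_t, int);
    void	 kmem_free(void *, size_t);
    #define	KM_SLEEP	0x00000001	/* assumed to match sys/kmem.h */

    int
    main(void)
    {
    	void *p;

    	if (rump_init() != 0)
    		abort();

    	rump_schedule();		/* enter the rump kernel context */
    	p = kmem_alloc(128, KM_SLEEP);
    	kmem_free(p, 128);
    	rump_unschedule();

    	printf("kmem_alloc/kmem_free through rump: ok\n");
    	return 0;
    }
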
>
> On Fri Jan 21 2011 at 11:51:08 +0100, Lars Heidieker wrote:
>> > Do you have your changes available for review?
>>
>> The kmem patch includes:
>>
>> - enhanced vmk caching in the uvm_km module, not only for page-sized
>> allocations but also for low integer multiples of the page size.
>>   (changed for rump as well)
>> - a changed kmem(9) implementation (using these new caches); it does
>> not use vmem, see the note below.
>> - removed the malloc(9) bucket system and made malloc(9) a thin
>> wrapper around kmem(9), just like in the yamt-kmem branch; see the
>> sketch after this list.
>>   (changed vmstat to cope with the no-longer-existing symbol for the
>> malloc buckets)
>>
>> - pool_subsystem_init is split into pool_subsystem_bootstrap and
>> pool_subsystem_init: after bootstrap, statically allocated pools can
>> be initialized, and after init, allocation is allowed.
>> The only instances (as far as I found them) that do static pool
>> initialization earlier are some pmaps; those are changed accordingly.
>> (Tested on i386 and amd64 so far.)
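
As a rough illustration of what "thin wrapper" means above (a sketch
only; the header layout and names are assumptions, not the actual patch
or the yamt-kmem code):

    /*
     * Sketch: malloc(9) on top of kmem(9).  kmem_free() needs the size,
     * so a small header in front of each allocation remembers it.  A
     * real implementation would pad the header to the maximum alignment.
     */
    #include <sys/kmem.h>
    #include <sys/malloc.h>
    #include <sys/systm.h>

    struct malloc_header {
    	size_t	mh_size;		/* total size handed to kmem_alloc() */
    };

    void *
    malloc(unsigned long size, struct malloc_type *type, int flags)
    {
    	struct malloc_header *mh;
    	const size_t allocsize = sizeof(*mh) + size;

    	mh = kmem_alloc(allocsize,
    	    (flags & M_NOWAIT) ? KM_NOSLEEP : KM_SLEEP);
    	if (mh == NULL)
    		return NULL;
    	mh->mh_size = allocsize;
    	if (flags & M_ZERO)
    		memset(mh + 1, 0, size);
    	return mh + 1;
    }

    void
    free(void *addr, struct malloc_type *type)
    {
    	struct malloc_header *mh = (struct malloc_header *)addr - 1;

    	kmem_free(mh, mh->mh_size);
    }
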
>>
>> vmem:
>> Status quo:
>> The kmem(9) implementation uses vmem for its backing, with a
>> pool_allocator for each size, which is unusual for caches.
>> The vmem(9) arena backing kmem(9) uses a quantum size of the machine
>> alignment, i.e. 4 or 8 bytes, so the quantum caches of that vmem are
>> very small and kmem extends them to larger ones.
>> The import functions for vmem import memory on a page-sized basis,
>> and the uvm_map subsystem is in charge of controlling the virtual
>> address layout; vmem is just an extra layer.
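
For reference, the quantum-cache mechanism in a nutshell (a generic
sketch in the spirit of the Bonwick vmem paper, not NetBSD's actual
subr_vmem.c; all names here are made up for illustration):

    /*
     * Generic sketch of vmem-style quantum caching.  Allocations of up
     * to qcache_max bytes are rounded up to a quantum multiple and
     * served from a per-size object cache; only larger requests touch
     * the arena's segment lists.  With a quantum of 4 or 8 bytes and a
     * small qcache_max, only tiny sizes are cached -- which is why
     * kmem(9) currently layers its own larger caches on top.
     */
    #include <stddef.h>

    struct qcache;				/* fixed-size object cache */

    struct arena {
    	size_t		 quantum;	/* smallest unit, e.g. 8 bytes */
    	size_t		 qcache_max;	/* largest cached size */
    	struct qcache	**qcache;	/* one cache per quantum multiple */
    };

    /* hypothetical helpers */
    void	*qcache_get(struct qcache *);
    void	*arena_seg_alloc(struct arena *, size_t);

    void *
    arena_alloc(struct arena *a, size_t size)
    {
    	if (size <= a->qcache_max) {
    		size_t idx = (size + a->quantum - 1) / a->quantum - 1;
    		return qcache_get(a->qcache[idx]);
    	}
    	return arena_seg_alloc(a, size);
    }
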
>>
>> Questions:
>> Shouldn't vmem provide the pool caches with pages for import into
>> the pools, with the quantum caches of vmem providing those pages for
>> the low-integer-multiple sizes? That's the way I understand the idea
>> of vmem and its implementation in Solaris.
>> But this only makes sense if vmem(9) is in charge of, let's say, the
>> kmem map and not the uvm_map system; slices of that submap would
>> then be described by vmem entries and not by map entries.
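
In Solaris terms that arrangement would look roughly like this (a sketch
following the vmem API as described in the Bonwick paper; NetBSD's
vmem(9) prototypes differ, so every name, signature and flag below is an
assumption made for illustration only):

    /*
     * Sketch only: a page-quantum vmem arena owning the kmem submap's
     * VA range.  Slices of the submap are then described by vmem
     * segments instead of uvm_map entries, and the pools import their
     * 1..n page slabs from the arena's quantum caches.
     */
    #include <stddef.h>

    typedef struct vmem vmem_t;			/* opaque arena handle */
    #define	VM_SLEEP	0		/* paper-style flag, value illustrative */
    #define	KMEM_PAGE_SIZE	4096		/* stand-in for PAGE_SIZE */

    /* Solaris-style entry points, per the paper */
    vmem_t	*vmem_create(const char *name, void *base, size_t size,
    	    size_t quantum, void *(*afunc)(vmem_t *, size_t, int),
    	    void (*ffunc)(vmem_t *, void *, size_t),
    	    vmem_t *source, size_t qcache_max, int vmflag);
    void	*vmem_alloc(vmem_t *vmp, size_t size, int vmflag);

    /* hypothetical backing functions that map/unmap pages via uvm */
    void	*uvm_backend_alloc(vmem_t *, size_t, int);
    void	 uvm_backend_free(vmem_t *, void *, size_t);

    static vmem_t *kmem_va_arena;		/* hypothetical arena name */

    void
    kmem_va_arena_init(void *base, size_t size)
    {
    	kmem_va_arena = vmem_create("kmem_va", base, size,
    	    KMEM_PAGE_SIZE,			/* quantum: one page */
    	    uvm_backend_alloc, uvm_backend_free,
    	    NULL,				/* no parent arena */
    	    8 * KMEM_PAGE_SIZE,			/* cache 1..8 page slabs */
    	    VM_SLEEP);
    }

    /* A pool_allocator page-import backend could then simply be: */
    void *
    kmem_poolpage_alloc(size_t npages)
    {
    	return vmem_alloc(kmem_va_arena, npages * KMEM_PAGE_SIZE, VM_SLEEP);
    }
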
>>
>> With the extended vmk caching for the kernel_map and kmem_map I
>> implemented the quantum caching idea.
>>
>> Results on an amd64 four-core 8 GB machine:
>>
>> Sizes after building a kernel with make -j200, du /, and
>> ./build.sh -j8 distribution:
>>
>>                        current          changed kmem
>> pool size:             915mb / 950mb    942mb / 956mb
>> pmap -R0 | wc          2700             1915
>>
>> Sizes after pushing the memory system with several instances of the
>> Sieve of Eratosthenes (each one consuming about 540mb) to shrink the
>> pools:
>>
>>                        current          changed kmem
>> pool size:             657mb / 760mb    620mb / 740mb
>> pmap -R0 | wc          4280             3327
>>
>>
>> Those numbers are not precise at all (especially the latter ones),
>> but they do hint at a direction.
>> Keep in mind that allocations that go to malloc in the current
>> implementation go to the pool in the changed one.
>> The runtime of the build process was the same to within a few
>> seconds.
>>
>> kind regards,
>> Lars
>
>
>
> --
> don't lose your hopefulness, there will surely be enough rags and scraps
>



-- 
Mystical explanations:
Mystical explanations are considered deep;
the truth is that they are not even superficial.
   -- Friedrich Nietzsche
   [ Die Fröhliche Wissenschaft, Book 3, 126 ]

