
Re: vmem problems [was: Re: extent-patch and overview of what is supposed to follow]




On 04/09/11 05:09, Mindaugas Rasiukevicius wrote:
> Lars Heidieker <lars%heidieker.de@localhost> wrote:
>>>> this is a part of the changes to the kernel memory management.
>>>> It changes subr_extent to use kmem(9) instead of malloc(9),
>>>> essentially removing the MALLOC_TYPE from it.
>>> Why start from this end, instead of converting extent(9) uses to vmem(9)
>>> and then just retire extent(9) subsystem?
>>>
>> There are problems with vmem in the general case, as David Young pointed
>> out: http://mail-index.netbsd.org/tech-kern/2009/12/03/msg006566.html
>>
>> Therefore my idea is to have a resource allocator that combines the
>> properties of both; nothing I have started on yet, except for thinking
>> about it. E.g. vmem returns null in the error case, which causes
>> problems when the resource range should include 0 and forces one to
>> make offset wrappers...
>
> Just to come back to this...
>
> The third problem, i.e. vmem(9) relying on malloc(9), is something that
> needs fixing, yes. VMEM_ADDR_NULL being 0 does not look like a major
> problem; a simple offset would work around it, but perhaps
> (vmem_addr_t)-1 would help as well (if its users do not need the whole
> range, but do need a start at 0). And this relates to problem 2, about
> ~(vmem_addr_t)0 being the maximum value in the range - is there a need
> for a wider space?
>
> Therefore, I would say the requirements of potential vmem(9) users should
> be investigated and understood first. Our vmem(9) API is now compatible
> with Solaris. If these are indeed real problems, we can decide to diverge.
>

The changed kmem implementation I made does not rely on vmem, so vmem
can then use kmem instead of malloc. This solves the third problem.
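
To illustrate, a minimal sketch of that fix, assuming vmem's internal
allocation wrappers (the xmalloc/xfree helpers in subr_vmem.c) and an
approximate flag mapping; note that kmem_free(9) needs the allocation
size, so the callers have to pass it along:

static void *
xmalloc(size_t sz, vm_flag_t flags)
{

	/* kmem(9) instead of malloc(9); no malloc type needed */
	return kmem_alloc(sz, (flags & VM_SLEEP) ? KM_SLEEP : KM_NOSLEEP);
}

static void
xfree(void *p, size_t sz)
{

	kmem_free(p, sz);
}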

I'm fine with the offset even though it's not elegant ;-)
The problem of not being able to use the entire range does bite some
extent users: a quick grep for extent_create shows quite a few extents,
mostly for bus spaces, that span the entire range.
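
A rough sketch of the offset workaround (the wrapper name and VA_SHIFT
are mine, not an existing API): the arena gets created with its base
shifted up by one, so an allocation whose real address is 0 can never
be confused with VMEM_ADDR_NULL. The error indication has to move out
of band, and the last address ~(vmem_addr_t)0 still cannot be covered,
which is problem 2:

#define	VA_SHIFT	1	/* arena base = real base + VA_SHIFT */

static int
shifted_vmem_alloc(vmem_t *vm, vmem_size_t size, vm_flag_t flags,
    vmem_addr_t *addrp)
{
	vmem_addr_t va;

	va = vmem_alloc(vm, size, flags);
	if (va == VMEM_ADDR_NULL)
		return ENOMEM;
	*addrp = va - VA_SHIFT;		/* translate to the real range */
	return 0;
}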

The reason for changing kmem this way is that vmem is not really in
charge of the address space it allocates from: it imports slices from
the uvm_km maps, mostly at page-size granularity, and gives them back
to uvm_km once a page (or pages) becomes free.
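
Roughly, the import side looks like this (a sketch; the function name
is hypothetical and the importfn signature approximate):

static vmem_addr_t
km_import(vmem_t *vm, vmem_size_t *sizep, vm_flag_t flags)
{
	vsize_t sz = round_page(*sizep);
	vaddr_t va;

	/* pull a page-granular slice out of a uvm_km map */
	va = uvm_km_alloc(kernel_map, sz, 0,
	    UVM_KMF_WIRED | ((flags & VM_SLEEP) ? 0 : UVM_KMF_NOWAIT));
	if (va == 0)
		return VMEM_ADDR_NULL;
	*sizep = sz;
	return (vmem_addr_t)va;
}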

There is no vmem_create(heap_start, heap_size ....), as this would
require vmem to be in charge of the entire "span"; that would be
possible for a submap like kmem_map, but it won't mix with other map
allocations.

a) Either kmem should not rely on vmem; that layer can then be removed,
with larger allocations going to uvm_km directly. For all common
allocation sizes there is a pool_cache in kmem, with virtual address
caches at the map layer.
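
As a sketch (the threshold and table names are hypothetical, not the
real kmem internals), option a) boils down to:

void *
kmem_alloc(size_t size, km_flag_t kmflags)
{

	if (size <= KMEM_CACHE_MAXSIZE) {
		/* common sizes: per-size pool_cache, VA cached at map layer */
		return pool_cache_get(kmem_cache_table[KMEM_INDEX(size)],
		    (kmflags & KM_SLEEP) ? PR_WAITOK : PR_NOWAIT);
	}
	/* large allocations: straight to uvm_km, no vmem in between */
	return (void *)uvm_km_alloc(kernel_map, round_page(size), 0,
	    UVM_KMF_WIRED | ((kmflags & KM_SLEEP) ? 0 : UVM_KMF_NOWAIT));
}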

b) Or we define a heap that is entirely controlled by vmem. Then we
need to stack different arenas: one controlling the entire heap, using
PAGE_SIZE as its quantum (with quantum caching), which provides virtual
addresses (not backed by memory); arenas importing from it (backed with
memory) to back kmem and, ideally, the pools as well; and special
arenas for vmem's own needs, which are tricky. One special arena
providing the vmem boundary tags would be required, with a low
watermark and recursion in order to allocate more boundary tags, where
diving into the reserve is allowed only for restocking. This arena
requires its import size to be bigger than the maximum quantum-cache
size of the base heap arena, to avoid recursion through pool
restocking. This would make vmem self-supporting for heap usage.
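
The arena stack for b) would look roughly like this (vmem_create()
arguments are approximate, the import/release helpers hypothetical):

static vmem_t *heap_arena, *kmem_arena, *bt_arena;

static void
heap_arenas_init(vaddr_t heap_start, vsize_t heap_size)
{
	/* owns the whole heap; hands out virtual addresses only */
	heap_arena = vmem_create("heap", heap_start, heap_size, PAGE_SIZE,
	    NULL, NULL, NULL, 8 * PAGE_SIZE /* qcache_max */,
	    VM_SLEEP, IPL_NONE);

	/* imports VA from heap_arena and backs it with memory */
	kmem_arena = vmem_create("kmem", 0, 0, PAGE_SIZE,
	    kmem_import, kmem_release, heap_arena, 0, VM_SLEEP, IPL_NONE);

	/*
	 * Boundary tags: the import size must exceed heap_arena's
	 * qcache_max, so a bt import bypasses the quantum caches and
	 * cannot recurse through pool restocking.
	 */
	bt_arena = vmem_create("bt", 0, 0, PAGE_SIZE,
	    bt_import, bt_release, kmem_arena, 0, VM_SLEEP, IPL_NONE);
}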

The principle of quantum caching that vmem has can be applied to the
kernel maps: there is already a page-sized quantum cache, the
vmk_vacache pool, for the kernel maps. This can be extended to low
integer multiples of the page size (done in my changed kmem).
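
The extension is essentially an array of VA pools instead of the single
one, something like this sketch (the array and its hookup are
hypothetical; only the vmk_vacache name exists today):

#define	VACACHE_NSIZES	4	/* cache VA allocations of 1..4 pages */

static pool_cache_t vmk_vacache[VACACHE_NSIZES];	/* per-size VA pools */

static vaddr_t
vacache_alloc(vsize_t size)
{
	vsize_t npgs = round_page(size) >> PAGE_SHIFT;

	if (npgs <= VACACHE_NSIZES)
		return (vaddr_t)pool_cache_get(vmk_vacache[npgs - 1],
		    PR_WAITOK);
	return 0;	/* fall back to a regular map allocation */
}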

Within Solaris, according to the Bonwick paper, they replaced the
rmalloc-controlled heap (an extent-like resource allocator?) with a
vmem-controlled one. Within NetBSD the situation seems to be different:
from my point of view, we have a more elaborate uvm_map system, in
terms of maps with map_entries controlling small parts... So option a)
seems to be less intrusive, simpler, and easier to maintain, without
any disadvantages; performance should be the same, as nearly all
scalability comes from the caches, and with quantum caches at the map
layer, fragmentation in the kernel maps goes way down, especially when
all allocations use these caches...

Opinions?

Lars

-- 
------------------------------------

Mystical explanations:
Mystical explanations are considered deep;
the truth is that they are not even superficial.

   -- Friedrich Nietzsche
   [ The Gay Science, Book 3, 126 ]