tech-kern archive

Re: kmem-pool-uvm



hi,

> Hi,
> 
> On 05/26/11 03:51, YAMAMOTO Takashi wrote:
>> hi,
>> 
>>>> Findings after running the system for a while, with about 1.1 GiB
>>>> in the pool(9)s:
>>>> Option a: about 30000 allocated kernel map_entries (not in the map but
>>>> allocated)
>>>> Option b: about 100000 allocated boundary tags.
>>>> Option c: about 400000 allocated boundary tags.
>>>>
>>>> With boundary tags being about half the size of vm_map_entries, the
>>>> vmem version uses slightly more memory, but not much more.
>> 
>> why did you use different numbers for heap_va_arena's qcache_max
>> (8 * PAGE_SIZE) and VMK_VACACHE_MAP_QUANTUM (32 * PAGE_SIZE)?
>> 
>> if I read your patches correctly, the number of map entries/boundary tags
>> will be smaller if these constants are bigger, right?
>> 
> 
> I chose 8 * PAGE_SIZE for qcache_max because the quantum caches are
> pool_caches: if only two or three allocations of a particular size are
> made, by different CPUs, we keep two or three times that VA in the pool
> caches, with a lot of VA wasted.

in that case, the "two or three allocations" will likely be served by
a single pool page, won't they?  i.e. the waste is the same as with
direct use of pool.
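
for illustration, a quantum-cached VA arena along these lines could be
set up as in this minimal sketch against the vmem(9) interface
(heap_va_init is a hypothetical helper, and the actual patch may differ):

    #include <sys/param.h>
    #include <sys/vmem.h>

    /*
     * Sketch: a kernel-VA arena whose allocations of up to 8 * PAGE_SIZE
     * are served from per-size quantum caches (pool_caches under the
     * hood).  Import/release functions are left out for brevity.
     */
    vmem_t *heap_va_arena;

    void
    heap_va_init(vmem_addr_t base, vmem_size_t size)
    {
        heap_va_arena = vmem_create("heap_va", base, size,
            PAGE_SIZE,          /* quantum: allocation granularity */
            NULL, NULL, NULL,   /* importfn, releasefn, source */
            8 * PAGE_SIZE,      /* qcache_max: largest quantum-cached size */
            VM_SLEEP, IPL_NONE);
    }

every request of qcache_max bytes or less then goes through a per-CPU
pool_cache rather than the arena's segment lists, which is where the
per-CPU VA retention discussed above comes from.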

> This might or might not be a real issue, but it was the motivation to
> start with a lower value.
> If the size is increased, the number of boundary tags goes down a bit
> further; these caches have a strong influence on the control-structure
> allocation count.
> 
> One could argue that the vmk_vacaches should be pool_caches as well,
> for scalability reasons (I tried that; switching them is no problem),
> but then they would face the same wastage argument.
> Currently, having these as pool_caches doesn't buy us much, as they
> have to be backed with physical memory, a process that most likely
> serializes the allocation anyway... But this is no different between
> the two options ;-)
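
as a sketch of that pool_cache variant (hypothetical code; a real VA
cache would need a custom pool_allocator handing out unbacked kernel VA
instead of the default wired-page allocator):

    #include <sys/param.h>
    #include <sys/pool.h>

    /*
     * Sketch: one vmk_vacache recast as a pool_cache of 32 * PAGE_SIZE
     * VA chunks.  The NULL pool_allocator argument is a placeholder for
     * an allocator that would hand out raw, unbacked kernel VA.
     */
    pool_cache_t vmk_vacache;

    void
    vmk_vacache_init(void)
    {
        vmk_vacache = pool_cache_init(32 * PAGE_SIZE, PAGE_SIZE, 0, 0,
            "vmkvacache", NULL, IPL_VM, NULL, NULL, NULL);
    }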
> 
>>>> Both versions use a modified kmem(9) that interfaces either with vmem
>>>> or the extended kva caches, and that provides page-aligned memory for
>>>> allocations of page_size and larger, and cache-line-aligned memory for
>>>> allocations between cache_line size and page_size.
>>>> This should resolve some problems Xen kernels have.
>> 
>> does the original (Solaris) version of kmem_alloc provide aligned
>> allocations?
>> 
> 
> Yes, it does: it switches to cache_line alignment for allocations of
> at least cache_line size, and to page_size alignment for allocations of
> at least page_size.
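
in other words, a size-based alignment policy roughly like this
(an illustrative sketch only, not the actual Solaris or patch code;
COHERENCY_UNIT is NetBSD's cache-line size constant):

    #include <sys/param.h>

    /* pick an alignment from the allocation size, as described above */
    static size_t
    kmem_choose_align(size_t size)
    {
        if (size >= PAGE_SIZE)
            return PAGE_SIZE;           /* page-aligned */
        if (size >= COHERENCY_UNIT)
            return COHERENCY_UNIT;      /* cache-line aligned */
        return ALIGNBYTES + 1;          /* basic C alignment only */
    }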

kmem_alloc(9F) says:

        The allocated memory is at least double-word aligned, so it can
        hold any C data structure. No greater alignment can be assumed.

% uname -sr
SunOS 5.10

so I don't think it's guaranteed API-wise.
IMO it's better to use a low-level allocator (e.g. uvm_km_alloc) for
alignment-sensitive users.
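
for example (a minimal sketch; alloc_page_aligned is a hypothetical
helper, not an existing kernel function):

    #include <sys/param.h>
    #include <uvm/uvm_extern.h>

    /*
     * Get page-aligned, wired kernel memory straight from UVM instead
     * of relying on alignment behaviour that kmem(9) doesn't guarantee.
     */
    static void *
    alloc_page_aligned(size_t size)
    {
        vaddr_t va;

        va = uvm_km_alloc(kernel_map, round_page(size), PAGE_SIZE,
            UVM_KMF_WIRED | UVM_KMF_CANFAIL);
        return (va != 0) ? (void *)va : NULL;
    }

the memory is later released with uvm_km_free(kernel_map, va,
round_page(size), UVM_KMF_WIRED).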

YAMAMOTO Takashi

> 
>> YAMAMOTO Takashi
>> 
> 
> Lars

