tech-kern archive


Re: kmem-pool-uvm



hi,

On 06/08/11 04:11, YAMAMOTO Takashi wrote:
> hi,
> 
>> Hi,
>>
>> On 05/26/11 03:51, YAMAMOTO Takashi wrote:
>>> hi,
>>>
>>>>> Findings after having run the system for a while, with about 1.1 GB
>>>>> in the pool(9)s:
>>>>> Option a: about 30000 allocated kernel map_entries (allocated, but not
>>>>> currently in the map)
>>>>> Option b: about 100000 allocated boundary tags.
>>>>> Option c: about 400000 allocated boundary tags.
>>>>>
>>>>> With boundary tags being about half the size of vm_map_entries, the
>>>>> vmem version uses slightly more memory, but not by much.
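
(To put rough numbers on that comparison: if a boundary tag is about half
the size of a struct vm_map_entry, option b's ~100000 tags amount to the
memory of roughly 50000 map entries, versus ~30000 map entries for option a,
so on the order of 1.5-2x; option c would be roughly 200000
map-entry-equivalents.)
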
>>>
>>> why did you use different numbers for heap_va_arena's qcache_max
>>> (8 * PAGE_SIZE) and VMK_VACACHE_MAP_QUANTUM (32 * PAGE_SIZE)?
>>>
>>> if i read your patches correctly, the number of map entries/boundary tags
>>> will be smaller if these constants are bigger, right?
>>>
>>
>> I chose 8 * PAGE_SIZE for qcache_max because the quantum caches are
>> pool_caches, so if we have only two or three allocations of a particular
>> size made by different CPUs, we have 2 or 3 times the VA sitting in the
>> pool caches, with a lot of VA wasted.
> 
> in that case, the "two or three allocations" will likely be served by
> a single pool page, won't it?  i.e. the waste is the same as with direct
> use of pool.
> 

True, the wastage will only become larger if more puts and gets happen,
since constructed objects are kept in the caches. However, I don't think
this is a problem at all.
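
For reference, a minimal sketch of what such a VA arena with quantum
caches looks like at creation time. The names, constants and the exact
vmem(9)/vmem_create() signature below are assumptions for illustration
(my reading of sys/vmem.h), not taken from the patch itself:

/*
 * Sketch: a kernel VA arena where allocations up to qcache_max bytes are
 * rounded to a multiple of the quantum and served from per-size quantum
 * caches (pool_caches underneath); larger requests go to the arena and
 * its boundary tags directly.
 */
#include <sys/param.h>
#include <sys/vmem.h>
#include <sys/intr.h>

#define HEAP_VA_QCACHE_MAX	(8 * PAGE_SIZE)	/* the value discussed above */

static vmem_t *heap_va_arena;

void
heap_va_init(vmem_addr_t base, vmem_size_t size)
{

	heap_va_arena = vmem_create("heap_va", base, size,
	    PAGE_SIZE,			/* quantum */
	    NULL, NULL, NULL,		/* no import/release fn, no source */
	    HEAP_VA_QCACHE_MAX,		/* requests <= this use quantum caches */
	    VM_SLEEP, IPL_NONE);
}

There is roughly one such pool_cache per quantum multiple up to qcache_max,
so 8 of them here; bumping qcache_max to 32 * PAGE_SIZE means
correspondingly more per-size caches, each potentially holding constructed
VA per CPU, which is the wastage discussed above.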

Lars

