
Re: kmem-pool-uvm




On 04/10/11 04:24, Mindaugas Rasiukevicius wrote:
> Concerns/questions:
>
> - What are the effects of replacing vmem(9) on fragmentation? Do you
> have some numbers showing that kmem_cache_big_sizes is doing a good job?
>
> - The patch decreases the variety of small-sized caches (e.g. no more
> 8, 24, 40, 56, etc). On what basis? Was it measured? It might be
> sensitive! See here:
> http://mail-index.netbsd.org/tech-kern/2009/01/11/msg003989.html
The basis was to reduce the number of caches: allocations that would
have gone to a smaller, less populated cache now go to the next bigger
one. This is subject to tuning.
The performance problems mentioned in that thread came from there
being no cache larger than 128 bytes; the caches up to page_size are
unchanged here, so that issue is not reintroduced.
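To illustrate the rounding (untested sketch; KMEM_TINY_SHIFT == 4,
i.e. 16-byte steps, is my assumption based on the sizes that went
away):

	/*
	 * A request is served by the smallest remaining cache that
	 * fits it, so e.g. a 24-byte allocation now comes from the
	 * 32-byte cache.
	 */
	index = (size - 1) >> KMEM_TINY_SHIFT;
	pc = kmem_cache_tiny[index];
	p = pool_cache_get(pc, PR_WAITOK);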

Whether it is worth having the large caches is a good question. There
are some allocations of 8kb and below served by the larger pools, but
they are few and they seem to be static, so it might be a good idea to
live without them, especially with the more generalized
uvm_km_alloc_poolpage* functions (see below).

> - Can you try regress/sys/kern/allocfree/allocfree.c with your patch?
I've tested different sizes up to page_size; both kmem and pool_cache
scale linearly (on my 4-core machine).
> If yamt@ says OK for replacing the use of vmem(9), as he probably has the
> best understanding of fragmentation issues, then you have a green light
> for the patch! :)
>
I've further changed uvm_km_alloc_poolpage_cache /
uvm_km_free_poolpage_cache to call uvm_km_alloc_poolpage /
uvm_km_free_poolpage when the requested size is larger than the
largest VMK_VACACHE for the map.

This enables kmem to use uvm_km_alloc_poolpage_cache for allocations
that are larger than the largest cache.
Essentially this makes uvm_km_alloc_poolpage_cache a function that can
allocate arbitrarily sized chunks of memory utilizing the VA cache,
while uvm_km_alloc_poolpage does the same without the cache.
(These functions should probably get more general names, as they
allocate a slab of memory for higher-level memory allocators.)
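Untested sketch of the fallback (VMK_VACACHE_MAX() is a made-up
accessor for the largest per-map VA cache; the KASSERT on size you
ask for below is included):

	vaddr_t
	uvm_km_alloc_poolpage_cache(struct vm_map *map, vsize_t size,
	    bool waitok)
	{
		KASSERT((size & PAGE_MASK) == 0);

		/* too big for the largest VA cache: bypass it */
		if (size > VMK_VACACHE_MAX(map))
			return uvm_km_alloc_poolpage(map, size, waitok);

		/* ... serve the request from the matching VA cache ... */
	}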

vmem could use these changed functions as its back end too; it
currently uses uvm_km_alloc / uvm_km_free directly, which should
result in higher fragmentation of the kernel map.

> Also, I suppose by mistake, patch has removed kmem_poison_check() and the
> following checks:
>
> - KASSERT(!cpu_intr_p());
> - KASSERT(!cpu_softintr_p());
>
> These are must-have, critical mechanisms for catching bugs!
>
Yes, definitely. I dropped them when I made kmem interrupt-safe and
forgot to bring them back.

> uvm_km_alloc_poolpage_cache() and uvm_km_alloc_poolpage() now take size
> argument. Please KASSERT() that size is divisible by PAGE_SIZE. Also,
> in uvm_km_free_poolpage_cache(), the following was removed:
>
> -#if defined(DEBUG)
> - pmap_update(pmap_kernel());
> -#endif
>
> Since interaction between layers here is already a bit confusing, this
> definitely needs a comment i.e. why leaving a stale entry is OK. Which
> is because of KVA reservation (as it is KVA cache), where TLB flush would
> be performed in km_vacache_free(). (A case of KVA starvation, normally?)
>
>>
>> http://ftp.netbsd.org/pub/NetBSD/misc/para/kmem-pool-uvm-extent.patch
>>
> Few other comments on the patch.
>
> Please use diff -p option when generating the patches. Also, splitting
> the mechanical and functional parts into separate diffs would ease review,
> and e.g. malloc to kmem conversions can/should be committed separately.
>
> - hd = malloc(sizeof(*hd), M_DEVBUF, M_NOWAIT);
> + hd = kmem_alloc(sizeof(struct hook_desc), KM_NOSLEEP);
>
> All such M_NOWAIT cases should be audited to verify there are no uses
> from interrupt context, and if there are none - changed to KM_SLEEP
> (the check for NULL can be removed too). Generally, there should be
> no KM_NOSLEEP uses without a very good reason.
>
> - t = malloc(len + sizeof(struct m_tag), M_PACKET_TAGS, wait);
> + t = kmem_alloc(len + sizeof(struct m_tag),
> + wait ? KM_SLEEP : KM_NOSLEEP);
>
> mbuf tags are created from interrupt context, so this is a bug (one
> that those removed KASSERT()s would have caught)!
>
I'll recheck those malloc -> kmem changes and separate them out.
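For the call sites that turn out to be non-interrupt, the shape would
be (sketch):

	/* before */
	hd = malloc(sizeof(*hd), M_DEVBUF, M_NOWAIT);
	if (hd == NULL)
		return ENOMEM;

	/* after: KM_SLEEP cannot fail, the NULL check goes away */
	hd = kmem_alloc(sizeof(*hd), KM_SLEEP);
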
> - free(corename, M_TEMP);
> + kmem_free(corename, strlen(corename) + 1);
>
> Hmm, these are a bit inefficient. I have been thinking about this..
> For nearly all cases the size is either constant or is normally saved
> in some structure; strings are the only real, justified case for
> storing the size within the allocation, e.g. in a byte or word before
> the actual string - perhaps add kmem_stralloc() and kmem_strfree()?
>
It's not efficient ;-) I think we need a malloc/free-like function
that encodes the allocated size in front of the memory anyway; there
are libraries like libz that require such functions, and it could be
used for strings as well. But besides those libraries there are only
strings.
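Untested sketch of such a pair (the names are placeholders):

	void *
	kmem_salloc(size_t size, km_flag_t kmflags)
	{
		size_t *hdr;

		hdr = kmem_alloc(sizeof(size_t) + size, kmflags);
		if (hdr == NULL)
			return NULL;
		*hdr = size;		/* remember the request size */
		return hdr + 1;
	}

	void
	kmem_sfree(void *p)
	{
		size_t *hdr = (size_t *)p - 1;

		kmem_free(hdr, sizeof(size_t) + *hdr);
	}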

> + size += REDZONE_SIZE + SIZE_SIZE;
> + if ((index = ((size - 1) >> KMEM_TINY_SHIFT)) < (KMEM_TINY_MAXSIZE >> KMEM_TINY_SHIFT)) {
> + pc = kmem_cache_tiny[index];
> + } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) < (KMEM_BIG_MAXSIZE >> KMEM_BIG_SHIFT)) {
> + pc = kmem_cache_big[index];
> + } else {
> + uvm_km_free(kernel_map, (vaddr_t)p, round_page(size), UVM_KMF_WIRED);
> + return;
> + }
>
> Please KNF, clean-up (the whole patch). Lines should be no longer than 80
> characters. When wrapping long lines, second level indents have four extra
> spaces (not a tab). There are many places in the patch with trailing,
> added, or missing whitespace, etc.
>
> - * to only a part of an amap). if the malloc of the array fails
> + * to only a part of an amap). if the kmem of the array fails
>
> In such places use word "allocation", no need to be allocator-specific. :)
>
> rw_enter(&swap_syscall_lock, RW_WRITER);
>
> - userpath = malloc(SWAP_PATH_MAX, M_TEMP, M_WAITOK);
> + userpath = kmem_alloc(SWAP_PATH_MAX, KM_SLEEP);
> ..
> - free(userpath, M_TEMP);
> + kmem_free(userpath, SWAP_PATH_MAX);
> rw_exit(&swap_syscall_lock);
>
> Opportunity to move the kmem_*() calls outside the lock, as it is safe!
>
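Right, the allocation does not need the lock; roughly (sketch):

	userpath = kmem_alloc(SWAP_PATH_MAX, KM_SLEEP);
	rw_enter(&swap_syscall_lock, RW_WRITER);
	/* ... */
	rw_exit(&swap_syscall_lock);
	kmem_free(userpath, SWAP_PATH_MAX);
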
> - sdp = malloc(sizeof *sdp, M_VMSWAP, M_WAITOK);
> - spp = malloc(sizeof *spp, M_VMSWAP, M_WAITOK);
> + sdp = kmem_alloc(sizeof *sdp, KM_SLEEP);
> + spp = kmem_alloc(sizeof *spp, KM_SLEEP);
> memset(sdp, 0, sizeof(*sdp));
>
> KNF: sizeof *sdp -> sizeof(*sdp). Also, -memset(), +kmem_zalloc().
>
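Ack, e.g.:

	sdp = kmem_zalloc(sizeof(*sdp), KM_SLEEP);	/* alloc + zero */
	spp = kmem_alloc(sizeof(*spp), KM_SLEEP);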

KNFing in progress.



