tech-kern archive
Re: kmem-pool-uvm
On 08/15/11 04:21, YAMAMOTO Takashi wrote:
> hi,
>
>> Hi,
>>
>> i uploaded a new version of the kmem-pool-vmem-uvm patch:
>> ftp://ftp.netbsd.org/pub/NetBSD/misc/para/kmem-pool-vmem-uvm.patch
>>
>
> thanks for working on this.
>
> can you provide a patch with diff -up?
>
> have you done benchmarks? e.g. src/regress/sys/kern/allocfree. i'm a
> little concerned about IPL_VM mutex overhead for kmem_alloc.
>
> YAMAMOTO Takashi
>
Hi,
these are the results of allocfree for different allocation sizes:
current-32:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
32       1      68    68    66     56
32       2     384    67   417     54
32       3     727    67   594     53
32       4    1083    68   870     53

current-128:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
128      1      68    68    64     51
128      2     340    67   375     52
128      3     634    68   657     52
128      4     963    68   920     53

current-256:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
256      1      68    67    69     52
256      2     319    68   395     52
256      3     602    67   615     52
256      4     978    67   934     51

current-1k:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
1024     1      67    63    68     52
1024     2     310    65   460     52
1024     3     622    64   463     52
1024     4     964    61   917     52

current-4k:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
4096     1      69    59    75     52
4096     2     324    61   333     52
4096     3     503    63   608     52
4096     4     973    63   963     51

vmem-heap-32:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
32       1      63    57    65     55
32       2      61    57   418     55
32       3      73    57   585     55
32       4      71    57   866     54

vmem-heap-128:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
128      1      63    58    65     55
128      2      63    58   269     55
128      3      63    58   656     55
128      4      63    58   932     54

vmem-heap-256:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
256      1      63    57    69     55
256      2      62    57   408     56
256      3      63    58   653     55
256      4      63    58   935     56

vmem-heap-1k:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
1024     1      62    57    70     54
1024     2      65    60   455     54
1024     3      62    58   460     54
1024     4      62    58   915     56

vmem-heap-4k:
SIZE  NCPU  MALLOC  KMEM  POOL  CACHE
4096     1    1929    57   121     56
4096     2    2552    58   355     55
4096     3    3350    56   631     54
4096     4    4466    57   957     55
The pool and cache allocation strategies perform very much the same.
The changed kmem(9) is slightly faster despite the use of IPL_VM; I
think this is mainly due to setting PR_NOTOUCH only for pools smaller
than CACHE_LINE_SIZE.
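For illustration, a minimal sketch (not the actual patch code; the size
table, loop and function name are assumptions made up for this example)
of how per-size kmem(9) pool_caches could be created so that PR_NOTOUCH
is only set for the small, cache-line sized pools:

#include <sys/param.h>
#include <sys/pool.h>

/*
 * Hypothetical sketch: one pool_cache per kmem size class.  PR_NOTOUCH
 * is set only for caches smaller than the cache line size, so the pool
 * keeps its free-item bookkeeping outside the (hot) objects themselves.
 */
static const size_t kmem_cache_sizes[] = {
        32, 64, 128, 256, 512, 1024, 2048, 4096
};
static pool_cache_t kmem_caches[__arraycount(kmem_cache_sizes)];

static void
kmem_caches_init(void)
{
        size_t i, sz;
        int flags;

        for (i = 0; i < __arraycount(kmem_cache_sizes); i++) {
                sz = kmem_cache_sizes[i];
                /* keep item headers out of small objects */
                flags = (sz < CACHE_LINE_SIZE) ? PR_NOTOUCH : 0;
                kmem_caches[i] = pool_cache_init(sz, CACHE_LINE_SIZE, 0,
                    flags, "kmemsz", NULL, IPL_VM, NULL, NULL, NULL);
        }
}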
At 4k we cross the 4k boundary for malloc(9) (4k + header), so there is
no pool_cache for this size and larger, while the current malloc(9)
implementation does have buckets up to 64k(?).
It might be good to have larger pool_caches in kmem(9), e.g. for 8k,
12k and 16k.
If allocations of exactly 4k, 2k or 1k go to malloc, the malloc wrapper
solution gets inefficient, because with the header added the allocation
ends up just a bit too large and spills into the next bucket (see the
sketch below).
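To make the boundary effect concrete, here is a small user-level sketch
of the rounding involved (the 16-byte header and the power-of-two
buckets are simplifying assumptions, not the real malloc(9) internals):

#include <stdio.h>

#define HEADER_SIZE 16  /* assumed per-allocation header */

/* round payload + header up to the next power-of-two bucket */
static size_t
bucket_for(size_t payload)
{
        size_t need = payload + HEADER_SIZE;
        size_t bucket = 32;

        while (bucket < need)
                bucket <<= 1;
        return bucket;
}

int
main(void)
{
        /* 4096 + 16 = 4112 -> 8192-byte bucket, almost half wasted */
        printf("4096 -> bucket %zu\n", bucket_for(4096));
        /* 1024 + 16 = 1040 -> 2048-byte bucket */
        printf("1024 -> bucket %zu\n", bucket_for(1024));
        return 0;
}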
I'll gather some statistics about this over the next few days... this
will also shed some light on whether it is worth having larger cache
sizes in kmem(9).
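One simple way to gather that (a sketch only, with the names made up
for the example) would be a per-size-class counter bumped on every
kmem_alloc() request:

#include <sys/types.h>
#include <sys/atomic.h>

#define KMEM_NSIZES 16  /* power-of-two classes up to 32k */

static uint64_t kmem_size_counts[KMEM_NSIZES];

/* count which power-of-two class a request falls into */
static void
kmem_count_size(size_t size)
{
        u_int idx = 0;
        size_t class = 1;

        while (class < size && idx < KMEM_NSIZES - 1) {
                class <<= 1;
                idx++;
        }
        atomic_inc_64(&kmem_size_counts[idx]);
}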
Real-world benchmarks are almost the same; a release build, e.g.:

            real   user  system
current:    2735   8422    1390
vmem-heap:  2716   8419    1383

Variations between runs are easily larger than this.
Lars