
Patch: optimize kmem_alloc for frequent mid-sized allocations



kmem_alloc() performs very well on single and multi-processor systems when
the allocation request can be satisfied with the quantum cache, as
demonstrated by the 'allocfree' kernel module:

        http://www.netbsd.org/~ad/cleanout/kmem-128.png
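
For reference, the call pattern being measured looks something like the
fragment below. This is only an illustration of the kmem(9) interface, not
the actual 'allocfree' module; the function name and the 64-byte size are
made up for the example.

/*
 * Illustration only: a small allocation that stays within the quantum
 * cache, so both the alloc and the free take the fast path.
 */
#include <sys/kmem.h>

static void
example_small_alloc(void)
{
	void *p;

	p = kmem_alloc(64, KM_SLEEP);	/* 64 <= quantum cache limit */

	/* ... use the buffer ... */

	/* kmem_free() needs the original size passed back in. */
	kmem_free(p, 64);
}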

It breaks down when the allocation size exceeds what the quantum cache can
satisfy - typically 128 bytes:

        http://www.netbsd.org/~ad/cleanout/kmem-1024.png

This is understandable and acceptable for large allocations, because:

- kmem_alloc() is more space efficient than malloc() and so has more work
  to do.

- It returns unused memory to the system, unlike malloc(), which holds
  onto allocated memory.

It is a problem for "mid-sized" allocations of up to PAGE_SIZE, because these
occur frequently. The patch below introduces an additional level of caching,
covering sizes from the quantum cache's maximum up to PAGE_SIZE. It also adds
debug code to check that the size passed to kmem_free() matches the size
originally allocated.

        http://www.netbsd.org/~ad/cleanout/kmem.diff
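
To make the idea concrete, the sketch below shows one way such a cache
could be organized: an array of pool caches indexed by the rounded-up
request size, sitting between the quantum cache and the backing arena.
This is only my reading of the approach, not an excerpt from kmem.diff;
all of the names (kmem_midcache, KMEM_MID_SHIFT, and so on) are
hypothetical, and initialization of the caches is omitted.

/*
 * Hypothetical sketch of a mid-size cache layer; not code from kmem.diff.
 * Requests above the quantum cache limit and up to PAGE_SIZE are served
 * from an array of pool caches, one per 256-byte bucket.  The caches
 * would be created with pool_cache_init() at boot (not shown).
 */
#include <sys/param.h>
#include <sys/kmem.h>
#include <sys/pool.h>

#define	KMEM_MID_SHIFT	8	/* 256-byte buckets (example value) */
#define	KMEM_MID_MAX	4096	/* PAGE_SIZE, assuming 4KB pages here */

static pool_cache_t kmem_midcache[KMEM_MID_MAX >> KMEM_MID_SHIFT];

static void *
kmem_mid_alloc(size_t size, km_flag_t kmflags)
{
	/* Round up to the containing bucket and allocate from its cache. */
	size_t bucket = (size - 1) >> KMEM_MID_SHIFT;

	return pool_cache_get(kmem_midcache[bucket],
	    (kmflags & KM_NOSLEEP) ? PR_NOWAIT : PR_WAITOK);
}

static void
kmem_mid_free(void *p, size_t size)
{
	size_t bucket = (size - 1) >> KMEM_MID_SHIFT;

	/*
	 * A DEBUG kernel could verify here that 'size' matches the size
	 * recorded at allocation time - the allocated == freed size check
	 * mentioned above.
	 */
	pool_cache_put(kmem_midcache[bucket], p);
}

Whether the real patch builds on pool caches or extends the vmem quantum
caching directly, the point is the same: allocations in this size range
are served from a cache rather than going to the backing arena each time.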

The patch is against 5.0 and so may not apply cleanly to -current.

Comments?

Thanks,
Andrew

