tech-kern archive
Re: making kmem more efficient
On Thu, 1 Mar 2012, Lars Heidieker wrote:
> On 03/01/2012 06:04 PM, Eduardo Horvath wrote:
> > On Thu, 1 Mar 2012, Lars Heidieker wrote:
> >
> >> Hi,
> >>
> >> this splits the lookup table into two parts, for smaller
> >> allocations and larger ones. This has the following advantages:
> >>
> >> - smaller lookup tables (less cache line pollution)
> >> - makes large kmem caches possible, currently up to
> >>   min(16384, 4*PAGE_SIZE)
> >> - smaller caches allocate from larger pool-pages if that reduces
> >>   the wastage
> >>
> >> any objections?
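For readers following the thread: below is a minimal sketch of what such a
split size-to-cache lookup can look like. All names, shifts, and limits are
illustrative assumptions, not the patch's actual values.

#include <sys/param.h>
#include <sys/pool.h>

/* Illustrative constants -- not the patch's actual values. */
#define	SMALL_SHIFT	4		/* 16-byte steps */
#define	SMALL_MAX	1024		/* top of the fine-grained table */
#define	BIG_SHIFT	10		/* 1024-byte steps */
#define	BIG_MAX		16384		/* cf. min(16384, 4*PAGE_SIZE) */

/* Fine-grained table for small sizes, coarse table for large ones. */
static pool_cache_t small_cache[SMALL_MAX >> SMALL_SHIFT];	/* 64 slots */
static pool_cache_t big_cache[BIG_MAX >> BIG_SHIFT];		/* 16 slots */

/* Map an allocation size (>= 1) to the cache that serves it. */
static pool_cache_t
size_to_cache(size_t size)
{

	if (size <= SMALL_MAX)
		return small_cache[(size - 1) >> SMALL_SHIFT];
	if (size <= BIG_MAX)
		return big_cache[(size - 1) >> BIG_SHIFT];
	return NULL;	/* too big: caller falls back to page allocation */
}

Both tables together occupy 80 pointer slots here, small enough to stay
cache-hot, which is the cache-line-pollution point above.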
> >
> > Why would you want to go larger than PAGE_SIZE? At that point
> > wouldn't you just want to allocate individual pages and map them
> > into the VM space?
> >
> > Eduardo
> >
>
> Allocations larger than PAGE_SIZE are infrequent (at the moment), that's
> true.
> Supporting larger pool-pages makes some caches more efficient, e.g. the
> 320- and 384-byte ones, and makes some sizes possible at all, like 3072
> bytes.
How does it make this more efficient? And why would you want to have a
3KB pool? How many 3KB allocations are made?
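To put numbers on the efficiency claim: with 4096-byte pool pages a
320-byte cache fits 12 objects and wastes 256 bytes per page (6.2%), while
a 16384-byte pool page fits 51 objects and wastes only 64 bytes (0.4%); a
3072-byte object wastes a quarter of a 4096-byte page but packs exactly
into a 12288-byte one. A throwaway userland sketch of that arithmetic
(purely illustrative):

#include <stddef.h>
#include <stdio.h>

/* Print per-page wastage for one object size in one pool-page size. */
static void
wastage(size_t objsize, size_t pagesize)
{
	size_t nobj = pagesize / objsize;
	size_t waste = pagesize - nobj * objsize;

	printf("%4zu-byte objs in %5zu-byte page: %2zu objs, "
	    "%4zu bytes wasted (%.1f%%)\n",
	    objsize, pagesize, nobj, waste, 100.0 * waste / pagesize);
}

int
main(void)
{
	wastage(320, 4096);	/* 12 objs,  256 wasted,  6.2% */
	wastage(320, 16384);	/* 51 objs,   64 wasted,  0.4% */
	wastage(3072, 4096);	/*  1 obj,  1024 wasted, 25.0% */
	wastage(3072, 12288);	/*  4 objs,    0 wasted,  0.0% */
	return 0;
}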
> With these allocators in place, the larger-than-PAGE_SIZE caches are a
> trivial extension.
That's not really the issue. It's easy to increase the kernel code size.
The question is whether the increase in complexity and code size is offset
by a commensurate performance improvement.
> All caches that are multiples of PAGE_SIZE come for free: they don't
> introduce any additional memory overhead in terms of footprint, on
> memory pressure they can always be freed from the cache, and with them
> in place
But you do have the overhead of the pool itself.
> you save the TLB shoot-downs caused by mapping and unmapping them, and
> the allocation/deallocation of the page frames.
> So they are more than an order of magnitude faster.
Bold claims. Do you have numbers that show the performance improvement?
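For reference, the mechanism behind the claim: freeing memory that was
mapped per-allocation has to unmap kernel VA, and on a multiprocessor that
unmap broadcasts TLB invalidations, whereas an object served from an
already-mapped, cached pool page never touches the pmap. A simplified
NetBSD-flavoured sketch of the two paths (the APIs are real, but the code
is illustrative, not taken from the patch):

#include <sys/pool.h>
#include <uvm/uvm_extern.h>

/*
 * Mapped path: every allocation maps fresh kernel VA and every free
 * unmaps it again, so each free ends in pmap updates and, on MP,
 * cross-CPU TLB shoot-downs.
 */
static void *
alloc_mapped(vsize_t sz)
{

	return (void *)uvm_km_alloc(kernel_map, sz, 0,
	    UVM_KMF_WIRED | UVM_KMF_WAITVA);
}

static void
free_mapped(void *p, vsize_t sz)
{

	uvm_km_free(kernel_map, (vaddr_t)p, sz, UVM_KMF_WIRED);
}

/*
 * Cached path: the pool page stays mapped for the life of the pool;
 * get/put just move the object on a per-CPU free list, with no pmap
 * or TLB traffic at all.
 */
static void *
alloc_cached(pool_cache_t pc)
{

	return pool_cache_get(pc, PR_WAITOK);
}

static void
free_cached(pool_cache_t pc, void *p)
{

	pool_cache_put(pc, p);
}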
> Lars
>
>
> Just some stats of a system (not up long) with those changes:
> collected with "vmstat -mvWC"
>
> > kmem-1024 1024 5789 0 5158 631 73 584 408 176 4096 584 0 inf 3 0x800 89.6%
> > kmem-112   112 1755 0  847 908 28  27   1  26 4096  26 0 inf 0 0x800 95.5%
Interesting numbers. What exactly do they mean? Column headers would
help decipher them.
Eduardo