Subject: Re: Research paper. Re: Multiple page sizes and the pmap API
To: Artem Belevich <tech-kern@netbsd.org>
From: David Laight <David.Laight@btinternet.com>
List: tech-kern
Date: 12/07/2001 22:33:24
Yes - an interesting paper.
They seem to have missed a couple of points!

1) memory sizes are MUCH larger now than when 4k pages were 'selected'
   (as are program working sets).  So using larger pages for demand-paged
   memory may, in itself, give a larger benefit than attempting to
   aggregate small pages that have already been allocated - especially
   if you have to copy the data in order to aggregate it (see the
   sketch after this list).

2) certain mmu/cache controllers will let you do a data copy using a
   temporary buffer in the controller.  This can speed up page copying
   significantly.  (I don't know whether our mmu/cache guru ever tried
   to make bcopy() use this feature, but he thought about it.)
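
To make point 1 concrete, here is a minimal sketch of where the time
goes when you aggregate scattered 4k frames into a 64k superpage.
None of these names come from our pmap - promote_by_copy, SMALL_PG
and NSMALL are made up for illustration:

    /*
     * Hypothetical sketch only: aggregating 16 scattered 4k frames
     * into one physically contiguous 64k superpage means copying
     * the whole 64k first.
     */
    #include <string.h>

    #define SMALL_PG    4096
    #define NSMALL      16      /* 16 x 4k = one 64k superpage */

    void
    promote_by_copy(void *contig_64k, void *small_frames[NSMALL])
    {
        int i;

        for (i = 0; i < NSMALL; i++)
            memcpy((char *)contig_64k + i * SMALL_PG,
                small_frames[i], SMALL_PG);
        /*
         * ...then the VA range gets remapped onto contig_64k with
         * a single large-page mapping.  The 64k of memcpy above is
         * pure overhead relative to taking the fault on a 64k page
         * in the first place.
         */
    }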

It seems to me that they are actually trying to push their 'Impulse'
memory controller (with its own set of page tables and TLBs?) rather
than discussing the merits of larger pages as such.

I also can't decide from their graphs whether 'more is better'!
Far too many of the benchmarks come out slower than the original case.

What they do show is that you need as many TLB entries as the cpu
manufacturer can squeeze in.
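
If anyone wants to see that effect for themselves, here is a rough
userland probe.  Illustrative only - the page counts, rep counts and
PGSIZE are guesses, and cache misses get mixed in with the TLB
misses:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define PGSIZE 4096

    /*
     * Touch one byte per page.  Once the number of pages exceeds
     * the TLB entry count, every reference eats a table walk and
     * the per-access time jumps.
     */
    static double
    walk(char *buf, unsigned long npages, int reps)
    {
        volatile char sink = 0;
        unsigned long i;
        clock_t t0;
        int r;

        /* Fault everything in first: time TLB misses, not paging. */
        for (i = 0; i < npages; i++)
            buf[i * PGSIZE] = 1;

        t0 = clock();
        for (r = 0; r < reps; r++)
            for (i = 0; i < npages; i++)
                sink += buf[i * PGSIZE];  /* one reference per page */
        (void)sink;

        /* nanoseconds per reference */
        return (double)(clock() - t0) / CLOCKS_PER_SEC
            / ((double)reps * npages) * 1e9;
    }

    int
    main(void)
    {
        unsigned long small = 32, large = 8192; /* under/over TLB reach */
        char *buf = malloc(large * PGSIZE);

        if (buf == NULL)
            return 1;
        printf("%lu pages: %.1f ns/ref\n", small,
            walk(buf, small, 100000));
        printf("%lu pages: %.1f ns/ref\n", large,
            walk(buf, large, 500));
        free(buf);
        return 0;
    }

The gap between the two numbers is (roughly) the table-walk cost;
larger pages widen TLB reach and push that cliff out.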

    David

> Subject: Research paper. Re: Multiple page sizes and the pmap API


> FWIW, here's one interesting research paper on multiple page sizes and
> the ways to deal with them:
> 
> http://citeseer.nj.nec.com/fang01reevaluating.html