Subject: Re: Multiple page sizes and the pmap API
To: David Laight <David.Laight@btinternet.com>
From: Wojciech Puchar <wojtek@chylonia.3miasto.net>
List: tech-kern
Date: 12/07/2001 23:44:13
> >
> > it would make GREAT speedup.
>
> One big benefit is that you use a lot fewer TLB entries, cutting down

it would be a BIG benefit for programs like squid, which allocates >100MB
of data and uses it quite randomly.
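
A rough calculation (assuming a 64-entry data TLB and 4KB base pages,
typical x86 figures, and a 4MB large page):

    100MB / 4KB = 25600 pages  - far more than 64 TLB entries, so
                                 random access misses almost every time
    100MB / 4MB = 25 pages     - the whole working set stays in the TLB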

> on the number of page table walks.  Never mind the memory saved by
> not having lower level page tables.
>
> The cost of faulting in the entire 4MB might slow things down - especially
> if you are doing (generally) sequential accesses and some 'look ahead'
> scheme gets the next page in before you fault on it...

4MB is too much, but 64kB or even 256kB isn't on modern hardware (disks
which read/write >100kB in the time of one seek).
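
For example, assuming ~8ms per seek and ~15MB/s sustained transfer
(ordinary figures for a current disk; adjust for your hardware):

    seek:              ~8 ms
    64kB  at 15MB/s:   ~4 ms  - cheaper than the extra seek it may save
    256kB at 15MB/s:  ~17 ms  - still only about two seeks' worth
    4MB   at 15MB/s: ~270 ms  - dozens of seeks' worth of transfer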

> >
> > > With this infrastructure in place, it would be pretty short work to
> > > get the devpager using large pages.  Chuq and I also discussed some
> > > ideas for using large pages for other types of mappings (vnode and
> > > anonymous memory), but they still depend on having this infrastructure
> >
> > what about mapping regular data this way?
>
> One problem, not to be overlooked, is that you need to find contiguous
> physical memory to map the 'large' page.
>
> You also have to swap it in/out as a single unit.
>
> This means it is (probably) only really advantageous to use very large pages
> for items that are going to be resident (eg kernel code and data, maybe
> memory backing kernel malloc())
>
> One possibility would be to leave the system using 4/8k pages, but avoid
> generating the 2nd (3rd?) level page tables until you fault on the higher
> one.  The system can then decide whether to map 1MB of contiguous memory
> for the superpage, or allocate the lower page table and a single entry.
>
> All rather complex!
but worth thinking about.
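
A minimal sketch of that fault-time decision, as toy user-space C - the
struct layout, the names and the policy test are all made up, nothing
below is the existing pmap API:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PDE_LARGE	0x1u	/* top-level entry maps 4MB directly */
#define L2_ENTRIES	1024

struct pde {
	unsigned flags;
	uintptr_t pa;		/* superpage frame, if PDE_LARGE */
	uintptr_t *l2;		/* 2nd-level table, allocated lazily */
};

/* Policy stub: is a contiguous 4MB frame available and worth it here? */
static int
wants_superpage(uintptr_t va)
{
	return (va & 1) == 0;	/* arbitrary stand-in policy */
}

/*
 * Fault on an address whose top-level entry is still empty: either
 * promote the whole 4MB region to one large mapping (no 2nd-level
 * table, one TLB entry on real hardware) or lazily allocate the
 * 2nd-level table and enter a single 4KB page.
 */
static void
fault(struct pde *pd, uintptr_t va)
{
	if (pd->l2 == NULL && !(pd->flags & PDE_LARGE)) {
		if (wants_superpage(va)) {
			pd->pa = 0x400000;	/* pretend contiguous 4MB */
			pd->flags |= PDE_LARGE;
			return;
		}
		pd->l2 = calloc(L2_ENTRIES, sizeof(uintptr_t));
		if (pd->l2 == NULL)
			return;
	}
	if (!(pd->flags & PDE_LARGE))
		pd->l2[(va >> 12) & (L2_ENTRIES - 1)] = 0x1000; /* 4KB frame */
}

int
main(void)
{
	struct pde pd = { 0 };

	fault(&pd, 0x1000);	/* promoted to a superpage by the stub */
	printf("large mapping: %d\n", (pd.flags & PDE_LARGE) != 0);
	return 0;
}

The interesting part is only the if/else in fault(): the 2nd-level table
simply never exists for regions that end up as superpages, which is
where the memory saving mentioned above comes from.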