tech-kern archive


Re: pmap_extract(9) (was Re: xmd(4) (Re: XIP))



On Fri, 5 Nov 2010, Masao Uebayashi wrote:

> On Mon, Nov 01, 2010 at 03:55:01PM +0000, Eduardo Horvath wrote:
> > On Mon, 1 Nov 2010, Masao Uebayashi wrote:
> > 
> > > I think pmap_extract(9) is a bad API.
> > > 
> > > After the MD bootstrap code detects all physical memory, it gives
> > > all the information to UVM, including available KVA.  At this
> > > point UVM knows all the available virtual/physical address
> > > resources.  UVM is responsible for managing all of these.
> > 
> > This is managed RAM.  What about I/O pages?
> 
> To access MMIO device pages, you need a physical address.  Physical
> address space is a single, linear resource on all platforms.  I wonder
> why we can't manage it in an MI way.

I suppose that depends on your definition of "linear".  But that's beside 
the point.

I/O pages have no KVA until a mapping is done.  UVM knows nothing about 
those mappings since they are managed solely by pmap.  I still don't see 
how what you're proposing here will work.

> 
> > 
> > > Calling pmap_extract(9) means that some kernel code asks pmap(9)
> > > to look up a physical address.  pmap(9) is only responsible for
> > > handling the CPU and MMU.  Using it as a lookup database is an
> > > abuse.  The only reasonable use of pmap_extract(9) is for
> > > debugging purposes.  I think that pmap_extract(9) should be
> > > changed to:
> > > 
> > >   bool pmap_mapped_p(struct pmap *, vaddr_t);
> > > 
> > > and allow it to be used for KASSERT()s.
> > > 
> > > The only right way to retrieve a P->V translation is to look it
> > > up in the vm_map (== the fault handler).  If we honour this
> > > principle, the VM and I/O code will be much more consistent.
> > 
> > pmap(9) has always needed a database to keep track of V->P mappings(*) as 
> > well as P->V mappings so pmap_page_protect() can be implemented.  
> 
> pmap_extract() accesses the page table (per address space).
> pmap_page_protect() accesses the PV list (per page).  I think they're
> totally different...

The purpose of pmap(9) is to manage MMU hardware.  Page tables are one 
possible implementation of MMU hardware.  Not all machines have page 
tables.  Some processors use reverse page tables.  Some just have TLBs.  
And if you read section 5.13 of 
_The_Design_and_Implementation_of_the_4.4BSD_Operating_System_, 
it says that pmap is allowed to forget any mappings that are not wired.  
So, in theory, all you need to do is keep a linked list of wired mappings 
to insert in the TLB on fault and forget everything else.  Of course, that 
doesn't seem to work so well with UVM.

Anyway, please keep in mind that not all machines are PCs.  I'd really 
hate to see a repeat of the Linux VM subsystem, which directly manipulated 
x86 page tables even on architectures that don't have page tables, let 
alone something compatible with x86.  pmap(9) is an abstraction layer for 
good reason.

Eduardo

