Re: CVS commit: src/sys/uvm



On Tue, Dec 21, 2010 at 11:29:01AM -0800, Matt Thomas wrote:
> 
> On Dec 6, 2010, at 8:19 AM, Masao Uebayashi wrote:
> 
> > On Thu, Nov 25, 2010 at 11:32:39PM +0000, YAMAMOTO Takashi wrote:
> >> [ adding cc: tech-kern@ ]
> > 
> > The basic idea is straightforward; always allocate a vm_physseg for
> > memory and device segments.  If a vm_physseg is used as general
> > purpose memory, you allocate vm_page[] (as vm_physseg::pgs).  If
> > it's potentially mapped as cached, you allocate pvh (as
> > vm_physseg::pvh).
> 
> Ewww.  How p->v is managed needs to be kept out of the MI code.

Could you elaborate on why that should be the case?

I've already proven that __HAVE_VM_PAGE_MD pmaps don't need struct
vm_page *.
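
To make the idea quoted above concrete, the layout I'm describing
would look roughly like this (field names are illustrative, not the
actual structure in the tree):

	/* One vm_physseg per RAM or device segment. */
	struct vm_physseg {
		paddr_t start;		/* first page frame of segment */
		paddr_t end;		/* last page frame + 1 */
		struct vm_page *pgs;	/* vm_page[]; only allocated for
					   general purpose memory */
		void *pvh;		/* MD pv tracking; only allocated
					   if the segment can be mapped
					   cached */
		/* ... */
	};

A device segment that is never mapped cached gets neither pgs nor
pvh, which is why registering it is cheap.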

> There can be a common implementation of it for use by MD code but
> that isn't the same as MI.
> 
> > Keep the vm_physseg * + off_t array on the stack.  If a UVM object
> > uses vm_page (e.g. vnode), its pager looks up vm_page -> vm_physseg *
> > + off_t *once* and caches it on the stack.
> 
> off_t is not the right type.  psize_t would be more accurate.

Probably.

> 
> >> any valid paddr_t value will belong to exactly one vm_physseg?
> > 
> > That's the idea.  This would clarify the mem(4) backend too.
> > 
> > Note that allocating vm_physseg for device segments is cheap.
> 
> that depends on how much more expensive finding the physseg gets.

"Finding physseg" == "(reverse) lookup of vm_page -> vm_physseg".
It is done only once (for each page) in pagers that use vm_page.
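
In code, a pager would do something like this (uvm_physseg_find() is
a stand-in name for whatever the lookup routine ends up being
called):

	struct vm_physseg *seg;
	psize_t off;

	/*
	 * Resolve vm_page -> (vm_physseg *, offset) once, up front,
	 * and keep the pair on the stack for the rest of the
	 * operation instead of passing struct vm_page * around.
	 */
	seg = uvm_physseg_find(VM_PAGE_TO_PHYS(pg), &off);
	/* ... */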

Is the biggest concern lookup cost?  Then I'd point out that the
uvm_pageismanaged() calls in pmap_enter() should die.  Cacheability
is decided by how the VA is mapped, not by whether the PA happens to
be managed, so those uvm_pageismanaged() calls are both inefficient
and wrong.
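
To be concrete (PMAP_NOCACHE here stands in for whatever per-mapping
flag a given port uses; the point is only that the caller decides):

	/*
	 * The caller already knows whether this VA should be mapped
	 * cached, so it says so via flags, instead of pmap_enter()
	 * trying to guess from uvm_pageismanaged(pa).
	 */
	error = pmap_enter(pmap, va, pa, prot,
	    flags | (cacheable ? 0 : PMAP_NOCACHE));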

