Subject: Re: pageable kernel pmap entries
To: Jonathan Stone <jonathan@DSG.Stanford.EDU>
From: Chuck Silvers <chuq@chuq.com>
List: tech-kern
Date: 05/04/1999 07:49:54
Jonathan Stone writes:
> 
> > > And was there ever a response for the downside that this'd have on
> > > ports like Alpha and mips?
> >
> >What downside are you talking about?
> 
> more TLB misses, more TLB thrashing, ....
> 
> I did *ask* for some justification of why we were doing this, why it
> was necessary, what the costs were.... In short, what the performance
> tradeoffs were.
> 
> I never saw an answer.

dang, I skip reading my mail for a few hours and miss all the fun...

I hadn't thought about this issue, but we can play around with it some
and see what kind of impact changing the amount of virtual space used
for cached mappings has.  the reason I wanted non-wired mappings on TLB-only
architectures is to avoid having pmap design considerations limit the size
of the mapping cache.  if it turns out that having the cache be too big
causes problems, we can adjust the size on a per-platform basis.
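
to give a concrete example of what such a per-platform knob might look
like (the names here are placeholders I'm making up for illustration,
not the actual UBC option names):

/*
 * Illustrative only: UBC_WINSHIFT and UBC_NWINS are placeholder names
 * for a per-platform mapping-cache size knob.  A port that suffers
 * from TLB pressure could shrink UBC_NWINS in its machine-dependent
 * headers or kernel config.
 */
#ifndef UBC_WINSHIFT
#define	UBC_WINSHIFT	13			/* 8KB mapping windows */
#endif
#ifndef UBC_NWINS
#define	UBC_NWINS	1024			/* number of cached windows */
#endif

#define	UBC_WINSIZE	(1 << UBC_WINSHIFT)
/* total kernel virtual space consumed by the mapping cache */
#define	UBC_KVASIZE	(UBC_NWINS * UBC_WINSIZE)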

some people have expressed a desire to partition system memory into
page-cache memory vs. other uses (anonymous memory mostly) in a static
fashion, to mimic the memory usage of the current buffer cache.
(I haven't implemented this yet, but it's on the list.)  in the same
way, it might be desirable to partition TLB entries on platforms where
that's possible.  if you'd like to experiment with that in the mips pmap,
I'd love to hear what you find.  the changes to the UBC code to allow
this kind of experimentation should be trivial: there's just one pmap_enter()
call that you'd need to adjust.
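
to make that concrete, the shape would be something like the sketch
below.  this is not the actual UBC code: ubc_map_page() and
UBC_WIRE_MAPPINGS are invented names, and it assumes the pmap_enter()
variant that takes a boolean "wired" argument.

/*
 * Rough sketch of where the per-platform wired/pageable decision
 * could go.  ubc_map_page() and UBC_WIRE_MAPPINGS are made-up names
 * used purely for illustration.
 */
#include <sys/param.h>
#include <uvm/uvm.h>

void
ubc_map_page(struct vm_page *pg, vaddr_t va, vm_prot_t prot)
{
#ifdef UBC_WIRE_MAPPINGS
	/* ports that want to bound TLB pressure can wire the mapping */
	boolean_t wired = TRUE;
#else
	/* default: pageable, so the pmap may drop it under pressure */
	boolean_t wired = FALSE;
#endif

	pmap_enter(pmap_kernel(), va, VM_PAGE_TO_PHYS(pg), prot,
	    wired, prot);
}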

btw, I never saw your original question, and I couldn't find it in
the tech-kern archives either.  maybe the mail got lost?

-Chuck