Subject: Re: More on pmap_kenter*()
To: Charles M. Hannum <>
From: Eduardo E. Horvath <>
List: tech-kern
Date: 03/27/1999 09:33:44
On Sat, 27 Mar 1999, Charles M. Hannum wrote:

> So I had a look at how and why pmap_kenter() is currently used.  For
> the most part, it's used because it's supposed to be `faster'.  It
> gets this `speed' by way of not dealing with R/M information at all.
> But let's take a look at where it's actually used:
> * Mapping file system buffers, the message buffer, and device
>   registers.  These generally only happen at boot time (or module load
>   time), and are thus not realistic performance issues.
> * Mapping pages into the kernel for pagers and DMA.  Unfortunately, it
>   seems to me to be canonically wrong to use pmap_kenter*() in these
>   cases, precisely because it does throw away the M information.

If you're doing DMA, why bother mapping the buffers into the CPU's address
space in the first place?  When I map in a DVMA segment, the mapping exists
only in the IOMMU and cannot be used by the CPU.

> * Mapping kmem pages.  This might be more realistic, except that some
>   of these (particularly PCBs) actually do require M information
>   because they're paged (unless you just always assume they're
>   modified, which is sort of lame).  The only time it's safe to use it
>   for kmem is, e.g., to allocate pool pages for objects that are never
>   paged.  This isn't really a performance issue either.
> So, I think pmap_kenter*() is pretty bogus, and should probably be
> nuked.

There's still pmap_kenter_pgs(), which allows cache flushing to be
optimized across a group of pages rather than done per page, and opens
the possibility of using large PTEs to map contiguous pages.

Eduardo Horvath
	"I need to find a pithy new quote." -- me