tech-kern archive


Re: [Milkymist port] virtual memory management



Thank you for your answer, Matt.

On 09/02/14 19:49, Matt Thomas wrote:
On Feb 9, 2014, at 10:07 AM, Yann Sionneau <yann.sionneau%gmail.com@localhost> 
wrote:

This seems like the easiest thing to do (because I won't have to think about 
recursive faults) but then if I put physical addresses in my 1st level page 
table, how does the kernel manage the page table entries?
BookE always has the MMU on and contains fixed TLB entries to make sure
all of physical ram is always mapped.
My TLB hardware is very simple and does not give me the option to "fix" a TLB entry, so I won't be able to do that. The lm32 MMU is turned off automatically upon an exception (a TLB miss, for instance); I can then turn it back on if I want, and in any case it is re-enabled upon return from the exception.

Since the kernel runs with MMU on, using virtual addresses, it cannot 
dereference physical pointers then it cannot add/modify/remove PTEs, right?
Wrong.  See above.
You mean that the TLB contains entries which map a physical address to itself, e.g. 0xabcd.0000 mapped to 0xabcd.0000? Or do you mean all RAM is always mapped, but to the (0xa000.0000 + physical_pframe) kind of virtual address you mention later in your reply?
Note that on BookE, PTEs are purely a software
construction and the H/W never reads them directly.
Here my HW is like BookE: I don't have a hardware page tree walker, PTEs exist only for the software to reload the TLB when there is an exception (a TLB miss), and the TLB will never read memory to find a PTE in my lm32 MMU implementation.
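Since the hardware never walks the page table, the miss handler has to do the lookup and the TLB write itself. Below is a minimal model of that refill path, purely as a sketch: the single-level PTE array, the `tlb[]` array standing in for the hardware TLB, and the flag/layout constants are all hypothetical placeholders, not the real lm32 registers (the real handler would run with the MMU off and write a TLB update CSR instead).

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define NTLB       16          /* hypothetical TLB size */
#define PTE_VALID  0x1         /* hypothetical valid bit */

/* Model of one software-loaded TLB entry. */
struct tlb_entry {
	uint32_t vpn;          /* virtual page number */
	uint32_t pte;          /* pfn << PAGE_SHIFT | flags */
	int      valid;
};

static struct tlb_entry tlb[NTLB];

/*
 * TLB-miss refill: look the faulting VA up in a software page
 * table and load the entry into the TLB, picking the slot by
 * simple index hashing.  Returns 0 on success, -1 if the PTE is
 * invalid (i.e. a genuine page fault must be taken instead).
 */
static int
tlb_refill(uint32_t va, const uint32_t *ptes, size_t npte)
{
	uint32_t vpn = va >> PAGE_SHIFT;

	if (vpn >= npte || !(ptes[vpn] & PTE_VALID))
		return -1;     /* real fault, not just a miss */

	struct tlb_entry *e = &tlb[vpn % NTLB];
	e->vpn = vpn;
	e->pte = ptes[vpn];
	e->valid = 1;
	return 0;
}
```

The point of the sketch is only the control flow: on a miss the handler indexes the software table, and only if the PTE is invalid does it escalate to the generic fault path.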

I'm sure there is some kernel internal mechanism that I don't know about which 
could help me getting the virtual address from the physical one, do you know 
which mechanism it would be?
Look at __HAVE_MM_MD_DIRECT_MAPPED_PHYS and/or PMAP_{MAP,UNMAP}_POOLPAGE.
For now I have something like that:

vaddr_t
pmap_md_map_poolpage(paddr_t pa, vsize_t size)
{
  /* RAM starts at PA 0x4000.0000; kernel VAs start at 0xc000.0000. */
  const vaddr_t sva = (vaddr_t)pa - 0x40000000 + 0xc0000000;
  return sva;
}

But I guess it only works for accessing the contents of the kernel ELF (text and data), not dynamic runtime kernel allocations, right?


Also, is it possible to make sure that everything (in kernel space) is mapped 
so that virtual_addr = physical_addr - RAM_START_ADDR + virtual_offset
In my case RAM_START_ADDR is 0x40000000 and I am trying to use virtual_offset 
of 0xc0000000 (everything in my kernel ELF binary is mapped at virtual address 
starting at 0xc0000000)
If I can ensure that this formula is always correct I can then use a very simple macro to 
translate "statically" a physical address to a virtual address.
Not knowing how much ram you have, I can only speak in generalities.
I have 128 MB of RAM.
But in general you reserve a part of the address space for direct-mapped
memory and then place the kernel above that.

For instance, you might have 512MB of RAM which you map at 0xa000.0000
and then have the kernel's mapped va space start at 0xc000.0000.
So if I understand correctly, the first page of physical RAM (0x4000.0000) is mapped at virtual address 0xa000.0000 *and* at 0xc000.0000? Isn't it a problem that a physical address is mapped twice in the same address space (here the kernel's)?
My caches are VIPT; couldn't that generate cache alias issues?
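Whether the double mapping actually aliases depends on the cache geometry: in a VIPT cache, two virtual mappings of the same physical page can only land on different cache lines if they differ in the index bits that lie above the page offset. A small check, with the way size and page size as hypothetical parameters (not the real lm32 values):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Two virtual addresses mapping the same physical page can alias
 * in a VIPT cache only if they differ in the index bits above the
 * page offset, i.e. in (way_size - 1) & ~(page_size - 1).
 */
static bool
vipt_alias_possible(uint32_t va1, uint32_t va2,
    uint32_t way_size, uint32_t page_size)
{
	uint32_t alias_mask = (way_size - 1) & ~(page_size - 1);

	return ((va1 ^ va2) & alias_mask) != 0;
}
```

In particular, if the way size is no larger than the page size the mask is zero and aliasing is impossible; otherwise keeping the two mapping bases aligned with respect to the way size avoids it.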

Then conversion from PA to VA is just adding a constant, while getting
the PA from a direct-mapped VA is just a subtraction.
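With the constants from this thread (RAM at PA 0x4000.0000, kernel VAs at 0xc000.0000), that add/subtract pair can be sketched as two inline helpers; the names are made up for illustration:

```c
#include <stdint.h>

/* Constants from this thread: RAM starts at PA 0x4000.0000 and
 * the kernel maps it starting at VA 0xc000.0000. */
#define RAM_START_PA	0x40000000u
#define KERNEL_VA_BASE	0xc0000000u

/* PA -> direct-mapped kernel VA: just add a constant. */
static inline uint32_t
pa_to_kva(uint32_t pa)
{
	return pa - RAM_START_PA + KERNEL_VA_BASE;
}

/* Direct-mapped kernel VA -> PA: just subtract it again. */
static inline uint32_t
kva_to_pa(uint32_t va)
{
	return va - KERNEL_VA_BASE + RAM_START_PA;
}
```

The two helpers are exact inverses, which is what lets pmap code translate "statically" without consulting the page table.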

Then I have another question, who is supposed to build the kernel's page table? 
pmap_bootstrap()?
Some part of MD code.  pmap_bootstrap() could be that.

If so, then how do I allocate pages for that purpose? using 
pmap_pte_pagealloc() and pmap_segtab_init() ?
Usually you use pmap_steal_memory to do that.
But for mpc85xx I just allocate the kernel's initial segmap in the .bss.
The page tables, however, were allocated using uvm, since uvm can do
prebootstrap allocations.
Are you referring to the following code?

  /*
   * Now actually allocate the kernel PTE array (must be done
   * after virtual_end is initialized).
   */
  const vaddr_t kv_segtabs = avail[0].start;
  KASSERT(kv_segtabs == endkernel);
  KASSERT(avail[0].size >= NBPG * kv_nsegtabs);
  printf(" kv_nsegtabs=%#"PRIxVSIZE, kv_nsegtabs);
  printf(" kv_segtabs=%#"PRIxVADDR, kv_segtabs);
  avail[0].start += NBPG * kv_nsegtabs;
  avail[0].size -= NBPG * kv_nsegtabs;
  endkernel += NBPG * kv_nsegtabs;

  /*
   * Initialize the kernel's two-level page table.  This only wastes
   * an extra page for the segment table and allows the user/kernel
   * access to be common.
   */
  pt_entry_t **ptp = &stp->seg_tab[VM_MIN_KERNEL_ADDRESS >> SEGSHIFT];
  pt_entry_t *ptep = (void *)kv_segtabs;
  memset(ptep, 0, NBPG * kv_nsegtabs);
  for (size_t i = 0; i < kv_nsegtabs; i++, ptep += NPTEPG) {
    *ptp++ = ptep;
  }
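Once a segment table like the one above is populated, finding a PTE is just two array indexings. A toy model of that lookup follows; the shift values are assumptions for illustration (the real SEGSHIFT/PGSHIFT come from the port's pmap headers):

```c
#include <stddef.h>
#include <stdint.h>

#define PGSHIFT   12	/* 4 KiB pages (assumed) */
#define SEGSHIFT  22	/* 4 MiB per segment (assumed) */
#define NPTEPG    (1 << (SEGSHIFT - PGSHIFT))	/* PTEs per segment */

typedef uint32_t pt_entry_t;

struct segtab {
	pt_entry_t *seg_tab[1 << (32 - SEGSHIFT)];
};

/* Two-level lookup: segment table first, then the PTE page. */
static pt_entry_t *
pmap_pte_lookup(struct segtab *stp, uint32_t va)
{
	pt_entry_t *ptep = stp->seg_tab[va >> SEGSHIFT];

	if (ptep == NULL)
		return NULL;	/* segment not yet populated */
	return &ptep[(va >> PGSHIFT) & (NPTEPG - 1)];
}
```

This is the lookup the TLB-miss path relies on: the quoted bootstrap code above pre-populates the kernel's segments so that the miss handler never finds a NULL segment pointer for kernel VAs.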

FYI I am using those files for my pmap:

uvm/pmap/pmap.c
uvm/pmap/pmap_segtab.c
uvm/pmap/pmap_tlb.c

I am taking inspiration from the PPC Book-E (mpc85xx) code.
Regards,

--
Yann

