tech-kern archive
Using emap for i386/amd64 early during boot
Hi list,
I finished porting Jeremy's patch for PAE to GENERIC. Most of the patch
had to be rewritten, as I had to bring it closer to the way port-xen
already handles PAE (initial support was made for Xen, thanks to
manuel@), without interfering too much with the current code of
port-i386. See [1] for further details about that.
The patch introduces i386_cpu_switch_pmap(), used to hide the logic
behind pmap switching between PAE and non-PAE cases. This is motivated
by the fact that I had to implement a way of switching pmaps without
relying on %cr3 reload, while staying relatively easy to call within
asm; see bioscall() and kvm86_call() in [1].
i386_cpu_switch_pmap() uses emap to track TLB flushes (whether done via
tlbflush or lcr3). Unfortunately, bioscall() and kvm86_call() may be
called early during boot (before uvm_init()), in which case the emap
calls will fail badly.
As I have yet to understand the inner workings of emap, I'd like to know
whether it is possible to wrap the uvm_emap functions around
i386_cpu_switch_pmap(), like this:
[...]
u_int gen = uvm_emap_gen_return();
i386_cpu_switch_pmap(pmap);
uvm_emap_update(gen);
[...]
instead of handling it inside the function. For the !PAE case, this
remains the same; but for PAE, uvm_emap_gen_return() would be called
before the loop that modifies the L3 entries, which spans a splvm(), a
loop over 4 entries, and a splx().
What are the constraints on uvm_emap_gen_return/uvm_emap_update? Quick
grepping reveals examples where they are used immediately before and
after tlbflush/lcr3, but that won't be the case here.
I am open to suggestions... I would prefer to avoid using
predict_false/predict_true here to conditionally skip these calls when
emap is not yet initialized.
[1] http://www.netbsd.org/~jym/pae.diff
For convenience, here's i386_cpu_switch_pmap:
/*
* Switches pmap for the current CPU. Hides the implementation
* differences between the PAE and non-PAE cases.
*/
void
i386_cpu_switch_pmap(struct pmap *pmap)
{
#ifdef PAE
	int i;
	int s = splvm(); /* just to be safe */
	struct cpu_info *ci = curcpu();
#ifdef XEN
	paddr_t l3_pd = xpmap_ptom_masked(ci->ci_l3_pdirpa);
	/* don't update the kernel L3 slot */
	for (i = 0; i < PDP_SIZE - 1; i++) {
		xpq_queue_pte_update(l3_pd + i * sizeof(pd_entry_t),
		    xpmap_ptom(pmap->pm_pdirpa[i]) | PG_V);
	}
#else /* XEN */
	pd_entry_t *l3_pd = ci->ci_l3_pdir;
	for (i = 0; i < PDP_SIZE; i++) {
		l3_pd[i] = pmap->pm_pdirpa[i] | PG_V;
	}
#endif /* XEN */
	splx(s);
	u_int gen = uvm_emap_gen_return();
	tlbflush();
	uvm_emap_update(gen);
#else /* PAE */
	u_int gen = uvm_emap_gen_return();
	lcr3(pmap_pdirpa(pmap, 0));
	uvm_emap_update(gen);
#endif /* PAE */
}
Thanks,
--
Jean-Yves Migeon
jeanyves.migeon%free.fr@localhost