Port-amd64 archive


Re: kpreempt, pmap_load() and copy*



On Sun, Jun 19, 2011 at 06:38:40PM +0200, Manuel Bouyer wrote:
> Hello,
> I have a question about the kernel copy* functions vs. lazy pmap switching
> and kernel preemption.
> On amd64, lazy pmap switching is used: pmap_activate() just sets a per-cpu
> variable, ci_want_pmapload, to 1; the pmap is really loaded on the cpu
> just in time (i.e. when returning to userland, or when something in the
> kernel needs it).
> copyin/copyout and friends check ci_want_pmapload and call do_pmap_load()
> before doing the work. do_pmap_load() disables kernel preemption
> before calling pmap_load(), re-enables it afterwards, and lets kernel
> preemption occur if needed. Before returning, do_pmap_load() checks
> ci_want_pmapload again and loops back to the beginning.
> 
> Now, what happens if preemption and pmap switching occur after that, while
> the copy* functions are working? What makes sure that the right
> pmap is loaded again before returning to the interrupted copy* function?
> Either the check before return in do_pmap_load() is not needed, or
> we can potentially copy data to/from the wrong user process here ...
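
Rendered as C for clarity, the control flow Manuel describes looks roughly
like this. The real do_pmap_load() on amd64 is assembly in
sys/arch/amd64/amd64/copy.S; the calls below (kpreempt_disable(),
kpreempt_enable(), curcpu(), pmap_load()) are the existing kernel
primitives used here as stand-ins for what the assembly actually does,
so treat this as a sketch, not the implementation:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/cpu.h>

    /*
     * Sketch of do_pmap_load() as described in the question above.
     */
    static void
    do_pmap_load(void)
    {
            do {
                    kpreempt_disable();     /* no preemption while loading */
                    pmap_load();            /* install curlwp's pmap on this cpu */
                    kpreempt_enable();      /* a pending preemption may run here */
                    /*
                     * A preemption right after kpreempt_enable() could have
                     * switched pmaps again, so re-check and retry.  (The
                     * answer below explains why this loop turns out to be
                     * unnecessary.)
                     */
            } while (curcpu()->ci_want_pmapload);
    }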

It is cpu_kpreempt_exit() doing the work: on the way out of a kernel
preemption it checks where the preemption happened and reloads the pmap
if necessary.
And indeed the loop in do_pmap_load() is not needed, because the copy
functions (and do_pmap_load() itself) lie between x86_copyfunc_start
and x86_copyfunc_end.
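
A sketch of that check, assuming the symbol names above (the real
routine lives in the x86 machdep code; "where" is the kernel PC at
which the preemption occurred):

    #include <sys/param.h>
    #include <sys/cpu.h>

    /* Linker symbols bracketing the kernel copy functions (set in copy.S). */
    extern char x86_copyfunc_start, x86_copyfunc_end;

    void
    cpu_kpreempt_exit(uintptr_t where)
    {
            /*
             * If we preempted a copy function (or do_pmap_load() itself),
             * the thread that ran in between may have loaded another pmap.
             * Reload the right one before resuming the interrupted copy.
             */
            if (where >= (uintptr_t)&x86_copyfunc_start &&
                where < (uintptr_t)&x86_copyfunc_end)
                    pmap_load();
    }

So the preemption exit path itself restores the pmap whenever the
preempted PC falls inside the copy-function range, which is why the
re-check loop in do_pmap_load() is redundant.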

-- 
Manuel Bouyer <bouyer%antioche.eu.org@localhost>
     NetBSD: 26 years of experience will always make the difference

