Re: evtchn_upcall_mask/pending synchronization
Hi Taylor,
I'm responding from memory here - bouyer@ helped me with the
synchronisation bits, so he may be able to comment better, but I'll
try to answer some of the questions/comments.
Taylor R Campbell <riastradh%NetBSD.org@localhost> writes:
> NetBSD/xen currently handles access to the Xen struct vcpu_info
> evtchn_upcall_mask, evtchn_upcall_pending members with expensive
> memory barriers -- variously lfence, mfence, or locked add, either
> with inline asm or via x86_lfence or xen_mb=membar_sync.
>
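(For context, the pattern being criticised looks roughly like this -
paraphrased from memory, not the exact in-tree code:

        /* mask upcalls, current style */
        vci->evtchn_upcall_mask = 1;
        xen_mb();       /* membar_sync: a locked instruction or mfence */

i.e. a full fence on every mask update, as if another CPU could race
on the field.)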
> But the Xen API contract promises that these members are written only
> on the same CPU that is hosting the VCPU in question. So no
> multiprocessor synchronization -- lfence, mfence, locked instruction,
> &c. -- is ever needed:
>
> * 'evtchn_upcall_pending' is written non-zero by Xen to indicate
> * a pending notification for a particular VCPU. It is then cleared
> * by the guest OS /before/ checking for pending work, thus avoiding
> * a set-and-check race. Note that the mask is only accessed by Xen
> * on the CPU that is currently hosting the VCPU. This means that the
> * pending and mask flags can be updated by the guest without special
> * synchronisation (i.e., no need for the x86 LOCK prefix).
>
> https://nxr.netbsd.org/xref/src/sys/external/mit/xen-include-public/dist/xen/include/public/xen.h?r=1.1#661
>
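(A quick sketch of the consumption protocol that comment describes,
with names as in your snippets below: the guest clears the flag before
scanning, so an event arriving mid-scan re-raises it rather than being
lost.)

        while (vci->evtchn_upcall_pending) {
                vci->evtchn_upcall_pending = 0;
                __insn_barrier();  /* order the clear before the scan */
                /* ... scan and handle pending event channels ... */
        }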
> In any context where curcpu() is stable, under this contract, it
Can you elaborate on what you mean by "stable"? If you mean that the
VCPU executing the current critical section will not be rescheduled
onto a different physical CPU, I don't think there is any such explicit
guarantee. The only contract is that hypervisor code managing the mask
synchronisation will access it (using IPIs if needed) on the physical
CPU executing the correspondingly scheduled VCPU. So curcpu() could
indeed switch inside the critical section in your snippet below,
because nothing explicitly guarantees otherwise. I may have missed
something here, but this is my understanding from reading the
hypervisor handler code some years ago while working on xensmp. ISTR
bouyer@ took care of the interrupt MP synchronisation stuff.
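To make the concern concrete, a minimal sketch (ci_vcpu is
illustrative - whatever per-CPU pointer the port uses to reach this
VCPU's vcpu_info):

        struct cpu_info *ci = curcpu();
        volatile struct vcpu_info *vci = ci->ci_vcpu;

        vci->evtchn_upcall_mask = 1;
        __insn_barrier();
        /* critical section */
        /*
         * If curcpu() has meanwhile become a different VCPU, the
         * mask set above protects the wrong vcpu_info, and this
         * section can race with upcalls on the VCPU we now run on.
         */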
>
> suffices to mask and restore upcalls as follows (provided any
> interrupt handlers that touch the mask have balanced mask/restore
> pairs):
>
> /* mask upcalls */
> mask = vci->evtchn_upcall_mask;
> vci->evtchn_upcall_mask = 1;
> __insn_barrier();
>
> /* critical section */
>
> /* restore upcall mask */
> __insn_barrier();
> vci->evtchn_upcall_mask = mask;
> __insn_barrier();
> if (mask == 0 && vci->evtchn_upcall_pending)
> /* handle pending upcall */
>
> or to unmask rather than restore:
>
> __insn_barrier();
> vci->evtchn_upcall_mask = 0;
> if (vci->evtchn_upcall_pending)
> /* handle pending upcall */
>
> This should be considerably cheaper than the logic with barriers for
> interprocessor synchronization.
>
> If curcpu() is not stable, then the logic doesn't work irrespective of
> what type of barrier is involved.
This is interesting - definitely worth review - but see below:
> It's not clear to me if, e.g., x86_disable_intr is always run in a
> context where curcpu() is stable.
ISTR being similarly confused (and also very annoyed) - I don't think
the Xen documentation makes this guarantee explicit, but I think that
is how it behaves at the moment.
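For reference, the current definition has roughly this shape
(paraphrased from memory - the in-tree version may differ in detail):

        void
        x86_disable_intr(void)
        {
                curcpu()->ci_vcpu->evtchn_upcall_mask = 1;
                x86_lfence();
        }

If curcpu() is not stable here, the wrong vcpu_info can be picked, and
no fence fixes that.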
> If it is, maybe it should assert the fact; if not, then the definition
> in sys/arch/xen/x86/xen_intr.c is currently broken, and it needs to
> use kpreempt_disable/enable.
I'm not sure how kpreempt_disable/enable will help in the case where
curcpu() could randomly be switched by the hypervisor to reflect a VCPU
being rescheduled onto a different physical CPU?
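For completeness, what I understand you to be proposing is something
like this - a sketch, assuming kpreempt_disable() is what pins the LWP
to its current cpu_info:

        volatile struct vcpu_info *vci;

        kpreempt_disable();     /* no migration to another cpu_info */
        vci = curcpu()->ci_vcpu;
        vci->evtchn_upcall_mask = 1;
        __insn_barrier();
        /* critical section */
        __insn_barrier();
        vci->evtchn_upcall_mask = 0;
        if (vci->evtchn_upcall_pending) {
                /* handle pending upcall */
        }
        kpreempt_enable();

That keeps us on one cpu_info from the kernel's point of view; whether
that is the stability you need is exactly the question above.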
--
~cherry