Port-xen archive


evtchn_upcall_mask/pending synchronization



NetBSD/xen currently guards access to the evtchn_upcall_mask and
evtchn_upcall_pending members of the Xen struct vcpu_info with
expensive memory barriers -- variously lfence, mfence, or a locked
add, issued either as inline asm or via x86_lfence or xen_mb
(= membar_sync).

But the Xen API contract promises that these members are written only
on the same CPU that is hosting the VCPU in question.  So no
multiprocessor synchronization -- lfence, mfence, locked instruction,
&c. -- is ever needed:

    * 'evtchn_upcall_pending' is written non-zero by Xen to indicate
    * a pending notification for a particular VCPU. It is then cleared
    * by the guest OS /before/ checking for pending work, thus avoiding
    * a set-and-check race. Note that the mask is only accessed by Xen
    * on the CPU that is currently hosting the VCPU. This means that the
    * pending and mask flags can be updated by the guest without special
    * synchronisation (i.e., no need for the x86 LOCK prefix).

https://nxr.netbsd.org/xref/src/sys/external/mit/xen-include-public/dist/xen/include/public/xen.h?r=1.1#661

In any context where curcpu() is stable, under this contract, it
suffices to mask and restore upcalls as follows (provided any
interrupt handlers that touch the mask have balanced mask/restore
pairs):

	/* mask upcalls */
	mask = vci->evtchn_upcall_mask;
	vci->evtchn_upcall_mask = 1;
	__insn_barrier();

	/* critical section */

	/* restore upcall mask */
	__insn_barrier();
	vci->evtchn_upcall_mask = mask;
	__insn_barrier();
	if (mask == 0 && vci->evtchn_upcall_pending)
		/* handle pending upcall */

or to unmask rather than restore:

	__insn_barrier();
	vci->evtchn_upcall_mask = 0;
	if (vci->evtchn_upcall_pending)
		/* handle pending upcall */

This should be considerably cheaper than the current logic, which
pays for full interprocessor barriers that the contract makes
unnecessary.

If curcpu() is not stable, then the logic doesn't work irrespective of
what type of barrier is involved.  It's not clear to me whether, e.g.,
x86_disable_intr is always run in a context where curcpu() is stable.
If it is, maybe it should assert the fact; if not, then the definition
in sys/arch/xen/x86/xen_intr.c is currently broken, and it needs to
use kpreempt_disable/enable.

