Port-xen archive


Re: call for testing: xen 4.1 packages



On Fri, Apr 01, 2011 at 12:17:15AM +0200, Christoph Egger wrote:
> On 01.04.11 00:12, Thor Lancelot Simon wrote:
> > On Thu, Mar 31, 2011 at 11:16:39PM +0200, Manuel Bouyer wrote:
> >>
> >> As I understood it, backend drivers have moved to userland (in qemu-dm)
> >> even for PV guests.
> > 
> > Oof!  Doesn't this cause a quadruple context-switch for every I/O?
> 
> I don't know. Can you show me some numbers?

I cannot show you measured performance numbers, no.  I would hope the Xen
team could!

However, that does not mean the question can't be analyzed a priori.

If in fact all the backend drivers have moved to userland, then to do one I/O
from an application in a PV guest, the situation before was:

        * context-switch to PV guest kernel !
        * context-switch to hypervisor
        * context-switch to dom0 kernel
        * context-switch to hypervisor
        * context-switch to PV kernel
        * context-switch to guest application !

I have marked with '!' the context switches which seem to me likely to be
expensive, since hardware virtualization support can't help save/restore
state for them.  I count 6 context switches with the old way.  If I
understand the new way, it's:

        * context-switch to PV guest kernel !
        * context-switch to hypervisor
        * context-switch to dom0 kernel
        * context-switch to dom0 qemu-dm !
        * context-switch to dom0 kernel !
        * context-switch to hypervisor
        * context-switch to PV kernel
        * context-switch to guest application !

So, 8 context switches, and both of the 2 extra ones are of the kind I
suspect is more expensive.
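
To make the a priori comparison concrete, here is a rough back-of-the-envelope
model in C.  The per-switch costs in it are purely illustrative placeholders
(I'm assuming a '!' switch costs a few times more than the others); only real
measurements on an actual Xen 4.1 setup could give meaningful numbers, but
the sketch shows how the two extra expensive switches would dominate the
added cost:

#include <stdio.h>

int
main(void)
{
        /* Hypothetical per-switch costs in microseconds -- placeholders
         * only, not measurements. */
        double cheap = 1.0;     /* switches hw virt support can help with */
        double expensive = 3.0; /* '!' switches: full state save/restore */

        /* Old path: 6 switches, 2 of them expensive.
         * New path: 8 switches, 4 of them expensive. */
        double old_path = 2 * expensive + 4 * cheap;
        double new_path = 4 * expensive + 4 * cheap;

        printf("old path: %.1f us per I/O\n", old_path);
        printf("new path: %.1f us per I/O (+%.0f%%)\n",
            new_path, 100.0 * (new_path - old_path) / old_path);
        return 0;
}

With those made-up costs the new path comes out roughly 60% more expensive
per I/O; the real figure obviously depends entirely on what the per-switch
costs actually are.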

It may be worse for network I/O with something like zero-copy send
(or sendfile) if that too has been forced through qemu-dm: before, you had
none of the expensive user-to-kernel context switches; now you have 2 of
them.
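
For illustration, with the same made-up cost of ~3us per user/kernel
crossing, those 2 extra crossings would add on the order of 6us to every
packet send -- hypothetical numbers again, but they show where the new
cost would appear.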

Thor

