Port-xen archive


Re: [PATCH] xen: add persistent grants to xbd_xenbus

On 10/11/12 11:45, Manuel Bouyer wrote:
> On Sun, Nov 04, 2012 at 11:23:50PM +0100, Roger Pau Monne wrote:
>> This patch implements persistent grants for xbd_xenbus (blkfront in
>> Linux terminology). The effect of this change is to reduce the number
>> of unmap operations performed, since they cause a (costly) TLB
>> shootdown. This allows the I/O performance to scale better when a
>> large number of VMs are performing I/O.
>> On startup xbd_xenbus notifies the backend driver that it is using
>> persistent grants. If the backend driver is not capable of persistent
>> mapping, xbd_xenbus will still use the same grants, since this is
>> compatible with the previous protocol and reduces code complexity in
>> xbd_xenbus.
>> Each time a request is sent to the backend driver, xbd_xenbus will
>> check whether there are free grants already mapped and will try to use
>> one of those. If there are not enough grants already mapped, xbd_xenbus
>> will request new grants, and they will be added to the list of free
>> grants when the transaction is finished, so they can be reused. Data
>> has to be copied from the request (struct buf) to the mapped grant, or
>> from the mapped grant to the request, depending on the operation being
>> performed.
>> To test the performance impact of this patch I've benchmarked the
>> number of IOPS performed simultaneously by 15 NetBSD DomU guests on a
>> Linux Dom0 that supports persistent grants. The backend used to
>> perform these tests was a 1GB ramdisk.
>>                      Sum of IOPS
>> Non-persistent            336718
>> Persistent                686010
>> As seen, there's roughly a 2x increase in the total number of IOPS
>> performed. As a reference, using exactly the same setup, Linux DomUs
>> are able to achieve 1102019 IOPS, so there's still plenty of room for
>> improvement.
> I'd like to see a similar test run against a NetBSD dom0. Also, how big
> are your IOPs?

I've run some tests on a NetBSD Dom0, but it is not a big box; it is
only 8-way. I've run 7 NetBSD DomUs on a NetBSD Dom0, and here are the
results (again using a block size of 4k and an md-based backend):

Persistent frontend: 297688 IOPS
Non-persistent frontend: 326497 IOPS

This is consistent with the Linux graph, which shows that there's a
performance improvement only when using 8 or more guests. I will also
work on a persistent implementation for the NetBSD xbd backend, but I
don't think it's going to make a difference until we get an MP Dom0 (so
it might be better to work on getting an MP Dom0 first).

