xbd 32k transfer limit
I know the comment in XEN3_DOMU says that xbd can't transfer > 32k and
that MAXPHYS is set to 32768. What is it that's limiting it? If there
is such a limit, has that changed with Xen 4.1?
Grepping around I found that MAXPHYS is used to define
XENSHM_MAX_PAGES_PER_REQUEST as (MAXPHYS >> PAGE_SHIFT) in xen_shm.h.
That define in turn is used to size an array of grant table entries, if
my reading doesn't fail me. My sense is that the limit is related to the
number of shared memory pages that can be granted per request when
moving data between domains, but I'm not sure.
My question is related to a RAIDframe setup. I found that for my set of
3 disks, setting the sectors per stripe unit to 64, which translates to
128 data sectors per stripe, yields the best sequential transfer rate
(also because FFS is limited to a maximum block size of 64k).
Unfortunately, if you set up a domU on top of that configuration, write
performance really suffers, e.g. ~10-15MB/sec using dd, which I suspect
is due to xbd being limited to 32k transfers. However, setting the raid
to a 32k data stripe unit and FFS to 32k blocks (for both dom0 and
domU) yields reasonable write performance over xbd, around 50-65MB/sec.
Btw, my kernel is NetBSD 5.99.45/amd64 and Xen is 4.1.
I appreciate any insight. In the meantime I'll try bumping MAXPHYS to
64k and see what blows up :-).