Port-xen archive


extremely delayed i/o when dom0 is busy



Yesterday I was updating the dom0 to the latest netbsd-6, doing a FreeBSD
ports extraction on one virtual machine, and installing some software on
another. Disk I/O in the guests was extremely delayed; here's some example
output from a SIGINFO in one of the FreeBSD virtual machines:

load: 0.00  cmd: csh 35575 [vnread] 56.03r 0.00u 0.00s 0% 10768k
^C
load: 0.00  cmd: csh 35575 [vnread] 116.48r 0.00u 0.00s 0% 10768k
load: 0.00  cmd: csh 35575 [vnread] 142.93r 0.00u 0.00s 0% 10768k

As I recall, it took roughly 5 minutes to load /usr/bin/top's image into
core, which is obviously far too long for a 53 KB binary.

Now I suspect that this is related to the fact that the qemu device-model
processes in dom0 need to run to do I/O for these HVM+PV domains, and the
buildworld is being scheduled more often than they are. If that's right, it
should be less of a problem for fully PV domains like NetBSD, but I haven't
tested it.
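
One way I'd try to check this (just a sketch; the device-model processes are
usually called something like qemu-dm, but the name and the ps keywords may
differ on your dom0) is to watch the qemu processes in dom0 while a guest is
stuck in vnread:

  # In dom0: show nice value, priority, CPU usage and wait channel of
  # anything qemu-ish while the guest is blocked on I/O.
  ps -axw -o pid,ni,pri,pcpu,wchan,command | grep '[q]emu'

If those processes barely accumulate CPU time while the buildworld is
running, that would support the scheduling theory.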

Does that sound reasonable? And how would you go about working around it?

I recently read an article about stub domains in Xen, which should avoid this
problem because the device model runs in its own stub domain rather than as a
dom0 process. Another option might be renicing the qemu processes (or running
the intensive tasks in dom0 at a higher nice value).
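
In case it helps, this is roughly what I had in mind for the renice approach
(untested, and the qemu process name and the build command are only examples
from my setup):

  # In dom0, as root: give the device-model processes a scheduling boost...
  renice -n -5 -p `pgrep -f qemu`
  # ...and/or run the heavy job at the lowest priority.
  nice -n 19 ./build.sh -j2 release

For the stub-domain route, newer Xen with the xl toolstack apparently lets
you move qemu out of dom0 with device_model_stubdomain_override = 1 in the
guest config, but I haven't tried that on a NetBSD dom0, so treat it as a
pointer rather than a recipe.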


Have you noticed this same behavior, and if so, how did you deal with it?

