Port-xen archive


Re: Xen kernel & tools recommendations



On Nov 25,  2:15pm, Greg Troxel wrote:
} Erik Fair <fair%netbsd.org@localhost> writes:
} 
} > If one were setting up a system with Xen to virtualize multiple NetBSD
} > guest instances and no other OS, what version of the Xen hypervisor
} > kernel, and which xen management tools are best, where "best" is:
} 
} A somewhat tough question.
} 
} I will assume you are talking netbsd-6 for the dom0.  netbsd-7 is almost
} certainly ok, too, but I tend to be conservative about dom0 code.

     With the fix for xsave/xrstor, I would expect netbsd-7 to be
okay.  But, personally, I would probably go with a release kernel
for dom0 unless you have a specific need for a newer one.

} I recommend amd64 for the dom0, partially due to xentools42 building
} issues (below) and partly because I think that's the normal path these
} days.

     I tend to just go with amd64 on modern hardware.

} Note that modern xen is all PAE for i386.  So you can use the
} i386 XEN3PAE_DOMU kernel, or amd64.
} 
} PCI passthrough is a recurring problem.  If you don't need it, don't
} worry.
} 
} In the howto, note that boot.cnf lets you boot xen with our bootloader,
} and you don't need to deal with grub.  man boot.cfg has an example.

     The entries I use look like this:

menu=Boot Xen with 1GB for dom0:load /netbsd.xen0 console=pc;multiboot /xen44-kernel/xen.gz dom0_mem=1GB dom0_max_vcpus=1 dom0_vcpus_pin

menu=Boot Xen with dom0 in single-user mode:load /netbsd.xen0 -s;multiboot /xen44-kernel/xen.gz dom0_mem=1GB dom0_max_vcpus=1 dom0_vcpus_pin

menu=Boot GENERIC Xen with 1GB for dom0:load /netbsd.xen0gen console=pc;multiboot /xen44-kernel/xen.gz dom0_mem=1GB dom0_max_vcpus=1 dom0_vcpus_pin

Since a NetBSD dom0 isn't SMP, I explicitly limit it to one vCPU
and pin that vCPU to a physical CPU (dom0 is involved in all I/O,
so I want to give it every speed advantage I can).  I also cap
the dom0 memory to leave more available for the domUs.

} > They work.
} >
} > They're being actively worked on for bugs/improvements, not abandon ware.
} 
} The versions in pkgsrc seem to be a bit behind what the upstream xen
} project is using.  However, 4.1 and 4.2 are getting security bugfixes
} (via backporting, sometimes).

     4.2 is still supported upstream; however, I have no idea for
how long.  Everything prior to that has reached end-of-life.

} xen 3.1 and 3.3 are really only of historical interest at this point, or
} for people that are still running them and haven't upgraded.
} 
} Given all that, I would say that you should choose from
} 
}   pkgsrc/sysutils/xenkernel41
}   pkgsrc/sysutils/xenkernel42
} 
} and then install the matching xentools.  On 4.1, I am using "xm", but
} that is deprecated and "xl" is recommended; it may be the only way on 4.2.

     xm is still available, but xl is recommended.  From the user's
viewpoint, it is pretty much a case of s/xm/xl/ (i.e. instead of
typing "xm list", type "xl list").  xm is still around in Xen 4.4,
but using it gives warnings about it being deprecated.  I'm guessing
that means it will be gone after 4.4.
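
     For day-to-day use the commands map over directly.  A few
typical ones (the domain name and config path below are made-up
examples; pkgsrc's xentools keeps domU configs under
/usr/pkg/etc/xen by default, as far as I recall):

xl list                                # what "xm list" used to do
xl create /usr/pkg/etc/xen/mydomu.cfg  # start a domU from its config file
xl console mydomu                      # attach to the domU's console
xl shutdown mydomu                     # ask the domU to shut down cleanly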

} I have found that xenkernel42 does not build on netbsd-6 i386 due to
} compiler issues with the included qemu.  (It needs qemu to support HVM
} mode.)
} 
} For a new install, I would recommend 4.2.

     Unless there is a particular reason to use something else, I
would definitely agree with this.
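
     For completeness, building from pkgsrc is just the usual
make install in the two package directories (a sketch; it assumes
a pkgsrc tree in /usr/pkgsrc, and the exact directory that xen.gz
lands in under /usr/pkg may differ, so check before copying):

cd /usr/pkgsrc/sysutils/xenkernel42 && make install
cd /usr/pkgsrc/sysutils/xentools42 && make install
# put the hypervisor where the bootloader can find it, matching
# the path used in boot.cfg (cf. /xen44-kernel/xen.gz above)
cp -R /usr/pkg/xen42-kernel /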

} > They perform well (i.e. the hypervisor and associated toolset impose
} > minimum overhead on the guest OSes).
} 
} That hasn't really been a big issue; overheads are reasonable and not
} particularly different from version to version.  If you care about
} overhead, I'd recommend running PV guests.
} 
} The biggest performance issue will be how you provide storage on the
} dom0 to hand to domU via xbd.  I tend to have files in the dom0, which
} has some overhead vs raw disk (< 10% usually), and then there is some
} speed loss from e.g. domU rxbd0e to the dom0 file, again < 10%.

     On production systems, I tend to use LVM (now if only we had
a filesystem that supported on-line resizing).  For other systems,
I often use file-backed storage.
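
     To illustrate, the difference in the domU config file is just
the disk line (the paths, volume group, and device names below are
made-up examples, and the exact disk syntax should be checked
against the xl.cfg documentation for the tools version in use):

# file-backed storage: a plain image file living in the dom0
disk = [ 'file:/home/xen/domu1.img,0x0,w' ]

# or an LVM logical volume handed through as a raw device
# disk = [ 'phy:/dev/mapper/vg0-domu1,0x0,w' ]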

} Your other likely big issue will be having enough memory so that each
} domU has enough not to page and you don't run out.  But if you stay away
} from that problem, it should work well.
} 
}-- End of excerpt from Greg Troxel

