Port-xen archive


Re: linux dom0 versus NetBSD dom0



On Wed, Jan 11, 2012 at 01:37:47PM -0800, Brian Buhrow wrote:
> 4.  Are there folks who have experience running Xen with NetBSD and Linux
> who would care to comment on their experience with each?


I used NetBSD/Xen way back in the NetBSD 3.1/Xen 2 days.  It was
actually pretty solid, much more so than the Linux dom0s of the era.
The problem back then was that it was i386/non-PAE only, which caused
all kinds of problems.  Also, the ext2 utilities did not work well
with ext3 filesystems.

We switched to Linux dom0s before PAE was stable, because the lack of
PAE is a showstopper when you want hardware with more than 4G of RAM.
We haven't switched back yet.  (You can run an i386-PAE dom0 on an
x86_64 hypervisor, but if you want a non-PAE dom0 you must have a
non-PAE i386 hypervisor.  I think the xen.org people stopped supporting
the non-PAE i386 hypervisor some time ago.  But in '04-'05, that's what
I was using in production, and other than the RAM limits it was solid,
and customers seemed to be much happier than on my previous FreeBSD
jails.)

Of course, now NetBSD supports PAE and x86_64, so I've been looking
back every now and then.  One big problem I still see is that NetBSD
doesn't have anything like LVM (oh man, if NetBSD had stable ZFS, I'd
switch back in a nanosecond), so you are stuck with loopback-mounted
files, which might be okay; I mean, that's what I used back in the day
and it was fine.  There is a really old benchmark showing that Xen
guests backed by loopback-mounted files were much faster on NetBSD
than on Linux, but on Linux you'd use something like tap:aio, which as
far as I know NetBSD lacks, so you wouldn't use loopback-mounted files
on Linux anyhow, so it's not really a fair comparison.  (That benchmark
might predate tap:aio, so it very well could have been a fair
comparison at the time.)

Anyhow, loopback-mounted files do work.  They aren't great, but they
work.  You will be surprised how shitty the I/O is no matter what you
use; spinning disk shares poorly, even when there is no virtualization
overhead.
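For reference, the loopback-file setup above looks something like the
following sketch; the image path and size here are hypothetical
examples, not anything from a real config:

```shell
# Create a sparse 8G image to back a guest disk (path is a made-up example).
dd if=/dev/zero of=/tmp/guest1.img bs=1 count=0 seek=8G
# In the guest config, attach it via the file: (loopback) backend:
#   disk = [ 'file:/tmp/guest1.img,xvda,w' ]
# On a Linux dom0 with blktap, you'd use the tap:aio: backend instead:
#   disk = [ 'tap:aio:/tmp/guest1.img,xvda,w' ]
```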

I hear there are also problems with suspend/resume and migrate,
but in my experience those things depend a whole lot on the DomU
kernel, too.  Many versions of Debian/Xen will choke on restore;
I used to suspend/restore guests on reboot, but half the time, after
being restored, the Debian guests would throw a "time went backwards"
error and run really poorly until the guest was rebooted (this with
the xen.org dom0 kernel), so I've gone back to gracefully shutting
down the guests before shutting down the dom0.  As far as I can tell,
if you allow arbitrary kernels, suspend/restore is not really an
option in general.

I've been thinking about setting up some sort of per-user 'suspend or 
shutdown' setting, but I haven't done it yet.  
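If I did build that per-user setting, it would probably look something
like this hypothetical sketch (guest names, the settings format, and
the save path are all made up; it assumes the classic xm toolstack and
only echoes the command it would run):

```shell
# Print the shutdown-time command for one guest, given its setting.
# "suspend" means save the guest's state; anything else means a clean
# shutdown.  This is a sketch -- it echoes rather than executes.
action_for() {
    guest=$1 setting=$2
    case $setting in
        suspend)  echo "xm save $guest /var/lib/xen/save/$guest" ;;
        *)        echo "xm shutdown -w $guest" ;;
    esac
}

action_for web1 suspend    # -> xm save web1 /var/lib/xen/save/web1
action_for db1 shutdown    # -> xm shutdown -w db1
```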

Oh man.  I am unhappy in general with Debian/Xen.  It doesn't work
properly with pvgrub, either; you can't boot a Debian i386-PAE guest
with a gigabyte or more of RAM with pvgrub, even under the xen.org
Linux dom0 kernel.  The whole point of my setup is to let users run
their own kernels, and pvgrub is the only way to do that that isn't
dangerous.  There have been several guest-breakout exploits in the
kernel decompression code that pvgrub protected me from.  And i386
really does work a lot better under Xen than x86_64, due to how Xen
uses memory segments, so it's very disappointing.
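For what it's worth, the pvgrub setup itself is simple.  A minimal
guest-config sketch follows; the pv-grub binary path varies by distro
and Xen build, and the disk path is a made-up example, so treat both
as assumptions:

```
# Boot the guest with pvgrub instead of a dom0-supplied kernel.
kernel = "/usr/lib/xen/boot/pv-grub-x86_32.gz"
# Tell pvgrub where the guest's own grub menu lives:
extra  = "(hd0,0)/boot/grub/menu.lst"
# Hypothetical file-backed guest disk:
disk   = [ 'file:/tmp/guest1.img,xvda,w' ]
```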

It's extra irritating because, without seeing those problems, Debian
looks like a pretty good choice for the dom0 - but if it doesn't
even work properly as a guest, I have a very hard time trusting it
to work as a dom0.  (At prgmr.com, the discussions on this point
can get... heated.  That opinion is mine; others within the company
disagree rather strongly, so make of that what you will.)

