NetBSD-Users archive


Re: NetBSD Jails



At Sun, 17 May 2020 21:46:39 +0100, Sad Clouds <cryintothebluesky%gmail.com@localhost> wrote:
Subject: Re: NetBSD Jails
>
> Your main gripe about jails/zones/containers is added complexity, well
> guess what, with Xen/VMware/VirtualBox the complexity is still there,
> you just pushed it over to the hypervisor vendor.

Actually that's not true at all, at least not for a type-1 (bare metal)
hypervisor like Xen.  The hypervisor is a micro-kernel -- well contained
and actually quite simple (even more so since 4.11).  Things do get a
bit more hairy if you have to add qemu to the stack, but that's the
exception for poorly conceived environments, and not (supposed to be)
the rule.  Speaking of qemu, PVH dom0 will remove a whole lot more
complexity from the control domain as well.

Xen is also less complex at all the layers above the hypervisor, since
you can use the exact same OS(es), the exact same APIs, and so on, to
build each VM instance and the application programs that run within it,
as well as for the control domain, e.g. for network and storage
configuration and assignment (though of course VM instantiation and
control over some hardware resources still requires a unique API for
each type of virtualization, i.e. Xen, VirtualBox, VMware, etc.).
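
For instance, in a Xen dom0 an entire guest can be defined by a short
xl(1) config file.  A minimal sketch (the name, paths, and bridge here
are illustrative assumptions, not from any real setup):

    # www0.cfg -- hypothetical NetBSD PV guest
    name   = "www0"
    kernel = "/netbsd-XEN3_DOMU"                 # PV guest kernel
    memory = 512                                 # MiB
    vcpus  = 2
    disk   = [ "file:/home/xen/www0.img,xvda,w" ]
    vif    = [ "bridge=bridge0" ]

Then "xl create www0.cfg" boots it and "xl console www0" attaches to
its console; the bridge, the disk image, and everything inside the
guest are managed with the same standard OS tools you'd use on bare
metal.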

Xen further reduces complexity under the hood for the upper layers too,
e.g. with para-virtual device drivers in guest systems that use simple
shared-memory ring protocols to communicate with the hypervisor and
dom0.

So I would say a good hypervisor offers a clear structure that controls
and eliminates complexity and also "reduces attack surface" (to use that
favourite phrase of security researchers).


> If you run multiple instances of the same OS version in Xen/VMware,
> that is a pretty inefficient way to partition your application domains.

Yes, this is true, but there is some debate.  E.g. one study on ARM,
comparing Linux under KVM and Xen against Docker, concluded:

    "a slightly better performance for containers in CPU bound workloads
    and request/response networking; conversely, thanks to their caching
    mechanisms, hypervisors perform better in most disk I/O operations
    and TCP streaming benchmark."

    https://doi.org/10.1109/AIEEE.2015.7367280

(and what always dominates performance?  I/O dominates!)

(Other studies I've scanned suggest there is even less performance
difference than most people seem to assume must be there.)

I still think the security and complexity issues with containers are a
much bigger concern than the pure efficiency losses of running full
VMs.  When it's all hidden behind a single command ("docker pull
nginx") then it's too easy to ignore the problems, and so that's what
people do -- they take the easy street.
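
(If you do want to peek at what that one command drags in, the image
tooling will at least enumerate the layers, e.g.:

    docker pull nginx
    docker history nginx           # one line per image layer
    docker image inspect nginx     # full metadata: env, ports, entrypoint

though that still tells you nothing about how any given layer was
built, by whom, or whether it's been audited.)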

Nevertheless, even VMs are not as secure and simple as bare hardware.
Those who have a good handle on their application software and customer
requirements, and who are able to keep things more uniform, are able to
bundle similar services together on the same bare metal all with one OS
kernel and one shared application configuration, with no loss of
efficiency and with no added complexity.

The really interesting things w.r.t. performance and efficiency with
shared hardware and without total loss of isolation (i.e. allowing
multi-tenant applications) happen when you start to look at
application-specific unikernels in combination with a hypervisor.  Of
course this is all still a little bit like vaporware, even for Xen on
NetBSD, though Rumprun[1] shows great promise.

[1] https://github.com/rumpkernel/rumprun
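
(Going from memory of the Rumprun tutorials here, so treat the exact
command names and flags as assumptions: the workflow is roughly to
cross-compile an ordinary POSIX-ish program with the Rumprun toolchain,
"bake" it together with just the rump kernel components it needs, and
boot the result directly as a guest:

    x86_64-rumprun-netbsd-gcc -o httpd httpd.c   # ordinary C source
    rumprun-bake xen_pv httpd.bin httpd          # link in rump kernel
    rumprun xen -di -M 128 httpd.bin             # boot it as a Xen domU

No general-purpose guest OS, no init, no shell -- just the application
plus the drivers it actually uses.)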

Personally I think some combination of virtualization, unikernels,
capabilities, and old-made-new again single-level-store techniques are
the brightest future for shared computing, i.e. multi-tenant clouds.

(BTW, from what I understand -- third-hand, so it may or may not be
true -- both Google and Amazon are actually running each "container"
inside a full VM just to ensure better isolation and security.  There's
even open-source software originally from Intel that does this (Clear
Containers, since folded into the Kata Containers project).  Now where
did the efficiency go?)
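
(If I understand the packaging correctly -- an assumption, I haven't
run it myself -- that software plugs in as just another OCI runtime,
so the per-container VM is invisible to the user:

    docker run --runtime kata-runtime -d nginx   # container gets its own tiny VM

which rather neatly proves the point about everything being hidden
behind a single command.)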

(Also, if you really want to see some crazy way to do things, have a
look for "hypervisor-based containers" -- i.e. "let's keep all the crazy
extra APIs and kernel complexity and also run it together with the
application code all in a unikernel VM environment!")


> Also forget about chroot, it is not an enterprise solution.

Well, I guess that depends on what requirements one has.

If by "enterprise" one means:  "it has a clicky-GUI driving console",
well, no, of course not.
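
(For the record, a useful chroot doesn't take much either.  A minimal
sketch for a statically linked daemon -- the paths and binary name are
purely illustrative:

    mkdir -p /var/chroot/httpd/etc
    cp /usr/local/sbin/httpd-static /var/chroot/httpd/
    cp /usr/local/etc/httpd.conf /var/chroot/httpd/etc/
    chroot /var/chroot/httpd /httpd-static -f /etc/httpd.conf

Dynamically linked programs need their shared libraries copied in too,
and it's not a substitute for real privilege separation, but for many
sets of requirements that's plenty.)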

--
					Greg A. Woods <gwoods%acm.org@localhost>

Kelowna, BC     +1 250 762-7675           RoboHack <woods%robohack.ca@localhost>
Planix, Inc. <woods%planix.com@localhost>     Avoncote Farms <woods%avoncote.ca@localhost>



