tech-kern archive


Re: CVS commit: src/tests/net/icmp



On Thu Jul 08 2010 at 23:22:44 +0200, Thomas Klausner wrote:
> [redirected from source-changes-d to a hopefully more suitable mailing
> list]
> 
> On Mon, Jul 05, 2010 at 12:26:17AM +0300, Antti Kantee wrote:
> > I'm happy to give a more detailed explanation on how it works, but I need
> > one or two questions to determine the place where I should start from.
> > I'm planning a short article on the unique advantages of rump in kernel
> > testing (four advantages by my counts so far), and some questions now
> > might even help me write that one about what people want to read instead
> > of what I guess they'd want to read.
> 
> I looked at the tests some more (tmpfs race, and the interface one
> from above). I think I can read them, but am unclear on some of the
> basic properties of a rump kernel.

Hi, good questions.

> For example:
> 1. Where is '/'? Does it have any relation to the host systems '/'? Is
> it completely virtual in the memory of the rump kernel?

From a practical perspective, it's in the same place as '/' on e.g.
a qemu instance or xen domu: "somewhere".  By default it's in memory,
but you can mount any file system as '/' over rumpfs (default rootfs).

Of course this is partially a trick question, since a rump kernel does
not necessarily have a '/' at all.  Running a configuration without file
systems at all can save quite a bit of memory, and can be the difference
between 50k and 100k nodes in a virtual network (I've only tested up
to a few hundred nodes on my scrawny laptop, but I've done calculations
... I'm sure you can appreciate calculations ;).  In that case any rump
system calls attempting to use VFS will fail with ENOSYS.
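
To make that concrete, here's a rough, untested sketch of what the
application side looks like when no file system components are linked
into the rump kernel.  I'm assuming the rump_sys_* wrappers report
errors via errno just like the native syscalls do:

/*
 * Sketch: a rump kernel bootstrapped without any file system
 * components linked in, so VFS-using calls fail with ENOSYS.
 */
#include <rump/rump.h>
#include <rump/rump_syscalls.h>

#include <errno.h>
#include <stdio.h>

int
main(void)
{

        rump_init();            /* bootstrap the rump kernel */

        /* no VFS in this configuration, so file system calls fail */
        if (rump_sys_mkdir("/tmp", 0777) == -1 && errno == ENOSYS)
                printf("no file system support, as expected\n");

        return 0;
}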

> 2. Do I understand correctly that for e.g. copying a file from the
> host file system into a rump kernel file system, I would use read and
> rump_sys_write?

Well, yes and no.  It depends on which namespace you are making the
calls from.  If you are in the host namespace (i.e. not inside the rump
kernel), you can do that.  The paths given to rump_sys_open() are
relative to the rump kernel '/' (or whatever you've chrooted to inside
the rump kernel), and then you just use the file descriptor as usual.
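
So a host-side copy loop is just the usual read/write loop, with the
destination descriptor coming from the rump kernel.  A rough, untested
sketch (error handling trimmed, rump_init() assumed to have been called
already):

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

#include <fcntl.h>
#include <unistd.h>

/* copy a file from the host '/' into the rump kernel '/' */
static int
copyin_file(const char *hostpath, const char *rumppath)
{
        char buf[8192];
        ssize_t n;
        int hostfd, rumpfd;

        hostfd = open(hostpath, O_RDONLY);      /* host namespace */
        rumpfd = rump_sys_open(rumppath,        /* rump kernel namespace */
            O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (hostfd == -1 || rumpfd == -1)
                return -1;

        while ((n = read(hostfd, buf, sizeof(buf))) > 0)
                rump_sys_write(rumpfd, buf, n);

        close(hostfd);
        rump_sys_close(rumpfd);
        return 0;
}

Copying in the other direction is the same with the prefixes swapped,
i.e. rump_sys_read() feeding write().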

If you are inside the rump kernel, you can access the host file system
namespace with "etfs", extra terrestrial file system, with which you can
establish mappings from the rump kernel namespace to the host namespace.
For example, the rump_foofs utils use this to configure a virtual block
device pointing to the host, so when I type

        rump_ffs /home/pooka/ffs.img /mount

even though VFS_MOUNT() operates inside the rump kernel, the device file
for the mount is still used from the host (and etfs can also report it
as a block device, so you don't need any of the vnconfig nonsense).
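
Programmatically the etfs part of that looks roughly like the sketch
below.  I'm writing the rump_pub_etfs_register() usage from memory, so
double-check <rump/rump.h> for the exact prototype and enum names; the
mount itself is the normal mount(2) dance with struct ufs_args, and the
relevant components need to be linked in (something like -lrumpfs_ffs
-lrumpvfs -lrump):

#include <sys/types.h>
#include <sys/mount.h>

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

#include <ufs/ufs/ufsmount.h>

#include <err.h>
#include <string.h>

/* map a host image into the rump kernel namespace and mount it */
static void
mount_image(const char *hostimg)
{
        struct ufs_args args;

        rump_init();

        /*
         * /dev/harddisk inside the rump kernel now points to the
         * host image and is reported as a block device.
         */
        if (rump_pub_etfs_register("/dev/harddisk", hostimg,
            RUMP_ETFS_BLK) != 0)
                errx(1, "etfs registration failed");

        rump_sys_mkdir("/mount", 0777);

        memset(&args, 0, sizeof(args));
        args.fspec = __UNCONST("/dev/harddisk");
        if (rump_sys_mount(MOUNT_FFS, "/mount", 0, &args,
            sizeof(args)) == -1)
                err(1, "mount");
}

That's more or less what rump_ffs does for you from the command line.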

> 3. Similarly for network interfaces -- open a socket with socket(2) or
> rump_socket(or so) and copy bytes with read/rump_sys_write?

I'm not quite sure what you want to copy from and where.  If you
connect() to a network service inside the rump kernel, you access it
from the host with read/write (or send/recv) just like any other peer.
If you rump_sys_connect(), you use rump_sys_read/rump_sys_write().
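
In other words, the rump kernel side of such a peer is regular sockets
code with the rump_sys_ prefix tacked on.  A rough, untested sketch
(rump_init() and interface/address configuration not shown):

#include <sys/types.h>
#include <sys/socket.h>

#include <netinet/in.h>

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

/* connect via the rump kernel's sockets and read a greeting */
static int
fetch_greeting(const struct sockaddr_in *sin, char *buf, size_t buflen)
{
        ssize_t n;
        int s;

        s = rump_sys_socket(PF_INET, SOCK_STREAM, 0);
        if (s == -1)
                return -1;

        if (rump_sys_connect(s, (const struct sockaddr *)sin,
            sizeof(*sin)) == -1) {
                rump_sys_close(s);
                return -1;
        }

        n = rump_sys_read(s, buf, buflen - 1);
        if (n >= 0)
                buf[n] = '\0';

        rump_sys_close(s);
        return n >= 0 ? 0 : -1;
}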

I probably should point out that rump has two different networking
configurations: a full networking stack and what I call "sockin".
The former is exactly what you'd expect: an interface, tcp/ip, sockets and a
unique IP (or other) address.  This can be a hassle sometimes when you
want to use networking from the rump kernel and do not have a separate
IP address or simply just don't have root privileges to configure a
tap interface.  sockin registers at the protocol layer in the kernel
and pretends to be an inet domain.  What it does is simply map requests
to the host sockets.  So e.g. PRU_CONNECT does connect() _on the host_.
This is helpful for cases where you need networking (e.g. rump_nfs and
rump_smbfs), but do not want the hassle and administrative boundary of
configuring a separate address.
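
To illustrate the sockin case: with the sockin component linked in
instead of the full stack (the library is called something like
librumpnet_sockin; check the build for the exact component set), the
sketch below should reach a service on the host's loopback without any
interface or address configuration, because the connect is proxied to
the host.  Again from memory and untested:

#include <sys/types.h>
#include <sys/socket.h>

#include <netinet/in.h>
#include <arpa/inet.h>

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

#include <string.h>

/* with sockin, this connect ends up being done by the host kernel */
static int
connect_to_host_loopback(in_port_t port)
{
        struct sockaddr_in sin;
        int s;

        rump_init();

        memset(&sin, 0, sizeof(sin));
        sin.sin_len = sizeof(sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        s = rump_sys_socket(PF_INET, SOCK_STREAM, 0);
        if (s == -1)
                return -1;

        /* no interface configuration needed anywhere before this */
        return rump_sys_connect(s, (struct sockaddr *)&sin, sizeof(sin));
}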

> 4. Could you NFS export the rump kernel file system to the host?
> (Probably better to a second rump kernel...)

Yes.  When I make changes which affect nfs, I test them by running one
rump kernel with the nfs server and one instance of rump_nfs (the latter
using sockin, i.e. effectively the rump kernel's nfs server exports to
the host).
This way I get a two-machine illusion -- naturally, since the nfs client
is quite finicky, I don't want to use mount_nfs for testing on my desktop.

nfs itself presents one of the unsolved issues with rump: the division
between the rump kernel and the host kernel is done at the syscall level:
foo() or rump_sys_foo().  However, for libraries "foo" is already
hardcoded.  This is especially problematic for libc, since even
LD_PRELOAD will not help.  There are a few different things I've been
playing around with for this, but I'll try not to detour into verbose
explanations of them in this email.
The whole issue is explained here (and generally in the thread):
http://mail-index.netbsd.org/tech-kern/2009/10/16/msg006276.html

The above also contains instructions on how to use rump nfsd.  Notably,
these days you do not need to build the nfs server component on i386,
but you can use the kernel module readily available from /stand instead.

For those with a high threshold for pain: my first version of nfsd
inside a rump kernel used a forked puffs null mount to access the host
file system namespace so that it could be exported.
(if that didn't make any sense, don't worry.  it didn't make sense to
me either ... but it worked ;).

Now the default is to serve the file system from inside the rump
kernel namespace.  This, of course, can come from an image on the host if
etfs is used.

> 5. I think I read that rump (or some other part) can now talk USB. How
> would one attach a USB device on the host system to a USB controller
> inside the rump kernel?

It works by probing the host /dev/ugen{0...n} and faking a host controller
if it is able to open the device.  The kernel USB stack then operates
on the rump-faked host controller as it would, and the host controller
proxies the requests to /dev/ugen.  The whole thing was pretty much the
topic of my AsiaBSDCon paper this year.

There is no magic beyond rumpusbhc<n> probing /dev/ugen<n>.
Plenty of examples for different USB device drivers are present in
src/share/examples/rump.

> Is that a good start for questions? :)

Yes.  Hopefully I did even half as well in answering them.

  - antti

