tech-userlevel archive


Re: process resource control (was Re: How Unix manages processes in userland)



On 2013-12-10 dyoung%pobox.com@localhost wrote:

> We can say, we need more control because there are security issues.  Or
> we can say, we have security issues because we lack control.  It's more
> useful and truthful to say the latter, I think: we lack control over
> the resources programs use.  If a program gets to use any of a user's
> resources, it gets to use them all, so we have to be very careful what
> programs we run.

That makes sense to me too. I think the two perspectives should amount to the same thing, but they aren't necessarily the same in practice, so IMO it would make sense to consider particular issues both ways and try to solve for both :).

> If I also could limit the number of unique disk blocks a program could
> use, and the number of pages of virtual memory, and if I could restrict
> the directories where it could link or unlink files, then maybe I could
> depend on some untrusted program to apply some useful algorithm to its
> standard input and write the result on its standard output, or else
> crash trying to exceed its power, storage, network, or memory limits.

Yes, like that. Consider a restricted process that acts like a web server and that I access through a web browser (via the service framework, so the process itself doesn't deal with how it gets connected to the browser). I think there is both an access control issue (the process only gets to interact with its own data) and a resource control issue (how much data can it store, how much processor time and memory does it get to use, etc.). The web browser would filter interaction with the rest of the system, and might internally use restricted processes to help achieve that reliably.
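To make the resource-control half a bit more concrete, here's a rough, untested sketch that uses the limits setrlimit(2) already gives us (RLIMIT_CPU, RLIMIT_AS, RLIMIT_FSIZE, RLIMIT_NOFILE) before exec'ing an untrusted stdin-to-stdout filter. It only approximates what the quoted paragraph asks for (there's no limit on unique disk blocks, and no restriction on where it may link or unlink files), and the program name and the numbers are invented for illustration:

#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

static void
cap(int resource, rlim_t lim)
{
	struct rlimit rl;

	rl.rlim_cur = rl.rlim_max = lim;
	if (setrlimit(resource, &rl) == -1)
		_exit(1);
}

int
main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* child: limit ourselves, then run the untrusted filter */
		cap(RLIMIT_CPU, 5);                    /* seconds of CPU time */
		cap(RLIMIT_AS, 64UL * 1024 * 1024);    /* virtual memory, bytes */
		cap(RLIMIT_FSIZE, 1UL * 1024 * 1024);  /* largest file it may write */
		cap(RLIMIT_NOFILE, 8);                 /* open descriptors */
		execl("./untrusted-filter", "untrusted-filter", (char *)NULL);
		_exit(127);                            /* exec failed */
	} else if (pid > 0) {
		int status;

		waitpid(pid, &status, 0);  /* it finishes or dies at a limit */
	}
	return 0;
}

The child either does its work within those limits or fails trying, which is roughly the property that would let you run it without trusting it.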

In some cases the server process might internally need additional services (it might make use of a particular web API, for example). By providing these services via a separate local process (through the service framework), their use can be filtered by trusted applications.
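Here's a rough illustration of that filtering arrangement using nothing fancier than socketpair(2) and fork(2); this is my own sketch, not the service framework itself, and the one-line request format is made up. The untrusted child holds one end of the socket pair, and the trusted parent reads what comes across it and decides whether to act on it:

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int
main(void)
{
	int sv[2];
	char buf[256];
	ssize_t n;

	if (socketpair(AF_LOCAL, SOCK_STREAM, 0, sv) == -1)
		return 1;

	if (fork() == 0) {
		/* untrusted side: its only way out is sv[1] */
		close(sv[0]);
		write(sv[1], "GET /status\n", 12);
		_exit(0);
	}

	/* trusted side: read the request, decide, act on the child's behalf */
	close(sv[1]);
	n = read(sv[0], buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		if (strncmp(buf, "GET /status", 11) == 0)
			printf("allowed: %s", buf);  /* would forward upstream */
		else
			printf("denied: %s", buf);   /* drop it */
	}
	return 0;
}

The interesting part is only that the child's sole channel to the outside is the descriptor the parent chose to give it, so everything it asks for can be inspected first.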

This example also brings up that the networking API and Plan 9 per-process file systems are, to some extent, different ways of solving the same underlying issue: separating what you are connecting to, or providing, from the distinct data streams themselves. That underlying functionality is useful even without network access.

I would also eventually s/web browser/well-designed rendering console/, such that a web browser becomes a filter that converts sloppy XML or HTML into whatever well-formed format the rendering console uses. That type of model would also make good use of an "everything is a resource" perspective.

People have been running untrusted binaries for a while now, and web browsers are currently the main environment attempting to deal with this (not with binaries exactly, but the lack of a way to handle the problem effectively for actual executables means that higher-level attempts keep failing, in addition to being slower than they need to be). This is so much of what people actually want to do with a personal computer that web browsers are literally turning into operating systems (not wildly popular ones yet, but it is still a fairly new thing). I don't think they are actually dealing with the fundamental problem that well, but I predict that if no OS deals with it well, then people will end up using OSes that deal with it badly but at least attempt to deal with it.

-Matt

