kern/37744: setuid doesn't enforce RLIMIT_NPROC



>Number:         37744
>Category:       kern
>Synopsis:       setuid doesn't enforce RLIMIT_NPROC
>Confidential:   no
>Severity:       serious
>Priority:       low
>Responsible:    kern-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Fri Jan 11 08:25:00 +0000 2008
>Originator:     Ed Ravin
>Release:        3.1
>Organization:
Public Access Networks Corp
>Environment:
NetBSD panix5.panix.com 3.1_RC3 NetBSD 3.1_RC3 (PANIX-35) #0: Wed Oct 18 22:28:22 EDT 2006  root@juggler.panix.com:/devel/netbsd/3.1-RC3/src/sys/arch/i386/compile/PANIX-35 i386
>Description:
The per-user process limit (RLIMIT_NPROC) can be bypassed when a user logs in 
via ssh (or starts processes via cron / at).  This is because the new processes 
are created by root, which then calls setuid() and exec() to run them as the 
desired user; the limit is only checked at fork() time, against the creating 
(root) user's process count, so the target user's limit is never consulted.
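
For illustration, here is a minimal sketch of that pattern; the target UID and 
the command are made up, not code taken from sshd or cron:

  #include <err.h>
  #include <sys/types.h>
  #include <unistd.h>

  /*
   * Sketch of the privilege-dropping pattern described above.  The
   * parent runs as root, so fork()'s RLIMIT_NPROC check is made
   * against root's process count; setuid() then switches to the
   * target UID without consulting that user's limit at all.
   */
  int
  main(void)
  {
      uid_t target = 1000;    /* hypothetical unprivileged user */
      pid_t pid;

      pid = fork();           /* process limit is checked here, as root */
      if (pid == -1)
          err(1, "fork");
      if (pid == 0) {
          if (setuid(target) == -1)   /* no RLIMIT_NPROC check today */
              _exit(1);
          execl("/bin/sh", "sh", "-c", "sleep 100", (char *)NULL);
          _exit(1);
      }
      return 0;
  }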

See discussion on tech-security:

  Subject: Re: enforcing RLIMIT_NPROC in setuid() ?

for more information.

To summarize, Christos suggested having exec() check the user's process limit 
and fail there, since programs might ignore the return value of setuid() if the 
check were done in setuid().  I suggested doing the check in setuid() anyway: 
setuid() would still perform the operation but return an error if the new UID 
is over its process limit.  Well-behaved programs will check the return value 
and exit (see the sketch after the quote below); poorly behaved programs will 
be no less unsafe than before.  Michael Richardson added this caveat:

> The whole RLIMIT_NPROC stuff is bizarre as a limit, as you can actually
> (as I recall) have different process trees under the same UID that have
> different limits.  The tree that has a higher limit goes on its way,
> fork()ing for fun and profit, and the tree with the lower limit just
> runs out when the total exceeds its limit.  (unless this has changed
> recently)
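
For what it's worth, if setuid() did return an error in that situation, a 
well-behaved program would only need to check the return value and bail out, 
roughly as in this caller-side sketch (the helper name and UID are made up):

  #include <err.h>
  #include <sys/types.h>
  #include <unistd.h>

  /*
   * Caller-side sketch of the proposed setuid() semantics: if the
   * kernel refused the switch because the target UID is already at
   * its process limit, exit rather than carry on with root
   * privileges.
   */
  static void
  drop_privileges(uid_t target)
  {
      if (setuid(target) == -1)
          err(1, "setuid(%lu)", (unsigned long)target);
  }

  int
  main(void)
  {
      drop_privileges(1000);      /* hypothetical target UID */
      execl("/bin/sh", "sh", "-c", "id", (char *)NULL);
      err(1, "execl");
  }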

>How-To-Repeat:
Log in as a user via ssh and run "ulimit -p" (max user processes; "ulimit -u" in bash) to see your process limit.

Assuming you've got enough ptys, log in again via ssh until the number of 
sessions you have running exceeds that process limit.

You should also be able to exceed the limit by queueing multiple "at" jobs that 
start around the same time, or by creating a crontab with multiple entries for 
the same minute that each start one new process, such as "/bin/sh -c 'sleep 100'".
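
To see where the limit actually kicks in from a normal login, a rough test 
program along these lines can be used; it simply fork()s children that sleep 
briefly until fork() fails:

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  /*
   * Fork until the kernel refuses (EAGAIN once the process limit is
   * hit) and report how many extra processes could be created.  Each
   * child just sleeps and exits; the count also reflects processes
   * you already have running, such as your shell.
   */
  int
  main(void)
  {
      int n = 0;
      pid_t pid;

      for (;;) {
          pid = fork();
          if (pid == -1) {
              printf("fork failed after %d extra processes\n", n);
              break;
          }
          if (pid == 0) {         /* child */
              sleep(30);
              _exit(0);
          }
          n++;
      }
      while (wait(NULL) > 0)      /* reap the children as they exit */
          continue;
      return 0;
  }
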
>Fix:



