tech-kern archive


Re: lwp resource limit



In article <20120603194407.72F5514A47D%mail.netbsd.org@localhost>,
Mindaugas Rasiukevicius  <rmind%netbsd.org@localhost> wrote:
>
>maxlwp should be __read_mostly.  However, why is it (and all sysctls)
>in init_main.c?  I suppose you just followed the current way (just historic
>code), but I think it is a bad practice.  We should move such globals,
>as well as sysctls, to the modules they belong to and make them static.
>In this case it would be kern_lwp.c module.

Yes, ok. I'll make that change.
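
Roughly what I have in mind, as a sketch only (whether maxlwp can end up
fully static depends on what else references it, and MAXLWP is the default
discussed below):

    /* kern_lwp.c (sketch): moved out of init_main.c, marked __read_mostly */
    #include <sys/cdefs.h>
    #include <sys/lwp.h>

    int maxlwp __read_mostly = MAXLWP;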

>1024 seems too small to me.  How about 2048 (if not 4096)?  Please make
>it a #define somewhere (perhaps lwp.h?).  When would the MD code override
>the value?

I just followed the pattern with maxproc. I will bump the limit, but I
think that putting the value in a header is a mistake, because code might
use the constant assuming that it always reflects the actual runtime limit.
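
If a default has to live somewhere, I would keep it private to kern_lwp.c
rather than export it from lwp.h, along these lines (a sketch; the MAXLWP
name and the MD override hook are illustrative, not the final patch):

    /* kern_lwp.c (sketch): default kept out of the public header */
    #ifndef MAXLWP              /* an MD header could predefine another default */
    #define MAXLWP  2048
    #endif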

>If there are MD limitations, I tend to think that we should restructure
>the code a little and allow lwp_create() to fail..
>
>Variable 'enforce' should be bool, but do we really need it?  We can know
>whether it is the first LWP, i.e. a fork() call, or the creation of a kthread
>from inside.  Also, why not account simple processes too?  fork() is still
>the creation of an LWP.  Do you know how QNX accounts for this?

The problem I had was that it is too late to fail the lwp_create() call
inside the fork() code. I don't know how QNX accounts for it, but one
way to fix this is to check and charge the lwp limit at the same place
where the proc limit is incremented, and fail there. I will do that
instead.
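
Roughly, against fork1() in kern_fork.c (a sketch only; the exact check and
the error path are illustrative, not the committed change):

    /* fork1() (sketch): charge the initial LWP next to the proc accounting */
    uid_t uid;
    int count;

    uid = kauth_cred_getuid(l1->l_cred);
    count = chglwpcnt(uid, 1);
    if (__predict_false(count > maxlwp)) {
            /* Undo the charge and fail while fork1() can still return. */
            (void)chglwpcnt(uid, -1);
            return EAGAIN;
    }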

>For allowing process creation when the maxlwp limit is exhausted, I would
>say this problem is better solved with the maxprocperuid you mentioned.

Yes, perhaps. I think that the process/lwp limits need to be re-thought.


>> +    uid = kauth_cred_getuid(l1->l_cred);
>> +    count = chglwpcnt(uid, 1);
>
>There is no need to perform chglwpcnt(), which has the cost of an atomic
>operation, if we are not checking against the limit (the kthread case).

Ok, I can fix that if it is performance-critical, but is it? In reality,
how frequently do we create kernel threads, and is this the place where
we spend most of our time? I'd rather not make the code more complicated.
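
If it really matters, the shape would be something like this (a sketch;
using "is this an LWP of proc0" as the kthread test is my assumption here,
not necessarily how the final patch will detect it):

    /* lwp_create() (sketch): skip the per-uid charge for kernel threads */
    if (p2 != &proc0)
            (void)chglwpcnt(kauth_cred_getuid(l1->l_cred), 1);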

>> +            int nlwps = p->p_nlwps;
>> +            (void)chglwpcnt(kauth_cred_getuid(ncred), -nlwps);
>> +            (void)chglwpcnt(r, nlwps);
>> +
>
>You need to acquire p->p_lock around this.  I suppose this is the
>"atomicity issue" mentioned above. :)

Ok.
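
I.e., wrapping the quoted lines (sketch; 'ncred', 'r' and 'p' are as in the
patch):

    mutex_enter(p->p_lock);
    int nlwps = p->p_nlwps;
    (void)chglwpcnt(kauth_cred_getuid(ncred), -nlwps);
    (void)chglwpcnt(r, nlwps);
    /* ... rest of the credential switch ... */
    mutex_exit(p->p_lock);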

>> @@ -805,6 +807,7 @@ lwp_exit_switchaway(lwp_t *l)
>>      LOCKDEBUG_BARRIER(NULL, 0);
>>  
>>      kstack_check_magic(l);
>> +    chglwpcnt(kauth_cred_geteuid(l->l_cred), -1);
>>  
>
>Since this was done in lwp_exit(), why again?

Yes, this is one of the two bugs I've fixed since.

>> +#define _MEM(n) { # n, offsetof(struct uidinfo, ui_ ## n) }
>> +            _MEM(proccnt),
>> +            _MEM(lwpcnt),
>> +            _MEM(lockcnt),
>> +            _MEM(sbsize),
>> +#undef _MEM
>
>I would avoid the macro; it is only four entries. :)

I find it tidier.
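
For the record, the unrolled equivalent is just the macro spelled out four
times:

    { "proccnt", offsetof(struct uidinfo, ui_proccnt) },
    { "lwpcnt",  offsetof(struct uidinfo, ui_lwpcnt) },
    { "lockcnt", offsetof(struct uidinfo, ui_lockcnt) },
    { "sbsize",  offsetof(struct uidinfo, ui_sbsize) },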

>> +                    node.sysctl_data = &cnt;
>> +                    uip = uid_find(kauth_cred_geteuid(l->l_cred));
>> +
>> +                    *(uint64_t *)node.sysctl_data = 
>> +                        *(u_long *)((char *)uip + nv[i].value);
>
>Potentially unaligned access?  Use memcpy()?

This will not happen here, but note that one size is 64 bits and the other
can be 32, so I would need to go through an intermediate variable anyway.
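
Something like this (a sketch; 'nv', 'uip', 'cnt' and 'node' are as in the
patch being reviewed):

    /* sysctl handler (sketch): widen through an intermediate u_long */
    u_long ucnt;

    memcpy(&ucnt, (char *)uip + nv[i].value, sizeof(ucnt));
    cnt = ucnt;                 /* u_long may be 32-bit; cnt is uint64_t */
    node.sysctl_data = &cnt;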

>Use CTL_CREATE instead of a static entry (KERN_MAXLWP)?  I thought we were
>trying to convert everything to dynamic creation/lookup and not add more
>hardcoded sysctl nodes.

Sure, I will fix that.
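
I.e., something along these lines (a sketch; the description string and the
sysctl_kern_maxlwp handler name are illustrative):

    sysctl_createv(clog, 0, NULL, NULL,
        CTLFLAG_PERMANENT | CTLFLAG_READWRITE,
        CTLTYPE_INT, "maxlwp",
        SYSCTL_DESCR("Maximum number of LWPs per user"),
        sysctl_kern_maxlwp, 0, NULL, 0,
        CTL_KERN, CTL_CREATE, CTL_EOL);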

christos


