Source-Changes-D archive


Re: CVS commit: src/lib/libperfuse



On Sun, Oct 23, 2011 at 05:13:13PM +1100, matthew green wrote:
...
> > perfuse memory usage can grow quite large when using a lot of vnodes,
> > and the amount of data memory involved is not easy to forecast. We therefore
> > raise the limit to the maximum.
...
> this seems like the wrong answer.  if rlimits aren't enough, then the
> *user* should be increasing them, not the system.

This again brings up the issue of the default 'rlimit' values.
The current 'hard' limits are based on some global system limits [1].
Letting a user process get near these limits is not a good idea,
and 'root' can increase its hard limit anyway.
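For reference, raising the soft limit to the hard limit is something
the process can do for itself, which is presumably what the commit
does; a minimal sketch (RLIMIT_DATA is my assumption about which
limit is involved):

    /*
     * Sketch: raise a resource's soft limit to its hard limit.
     * This is all an unprivileged process may do for itself;
     * only root can raise the hard limit.
     */
    #include <sys/resource.h>
    #include <err.h>

    int
    main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_DATA, &rl) == -1)
            err(1, "getrlimit");
        rl.rlim_cur = rl.rlim_max;    /* soft = hard */
        if (setrlimit(RLIMIT_DATA, &rl) == -1)
            err(1, "setrlimit");
        return 0;
    }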

I'm not entirely sure there shouldn't also be a limit on the
amount of non-pageable kernel memory a process can use.

If a program sets its 'soft' limits to the 'hard' ones, there are
a lot of 'local user DoS' attacks available...

For things like open files, I'm not even sure there should be
a global kernel limit; I suspect that dates from the days when
there was a statically allocated array of them.
Possibly open() should be able to fail due to a likely lack of
kernel memory (especially for non-root), but a global count
is rather pointless. Knowing the current value and the highest
value reached is probably useful.
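The global limit, at least, is already visible from userland; a
minimal sketch via sysctl(3) (kern.maxfiles is a real node; the
matching current/high-water counters are what is being wished for
above, not something I'm claiming exists):

    /*
     * Sketch: read the global open-file cap discussed above.
     * The "current value" / "highest value" counters suggested
     * in the text would sit next to kern.maxfiles, but are not
     * assumed to exist here.
     */
    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        int maxfiles;
        size_t len = sizeof(maxfiles);

        if (sysctlbyname("kern.maxfiles", &maxfiles, &len,
            NULL, 0) == -1)
            err(1, "sysctlbyname");
        printf("kern.maxfiles = %d\n", maxfiles);
        return 0;
    }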

        David

[1] The 'memory' limits are based on the amount of physical memory;
the relevant system limit would actually include the amount of swap.
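To make that concrete, the "physical plus swap" figure is easy to
compute from userland; a minimal sketch using hw.physmem64 and
swapctl(2), with error handling kept minimal:

    /*
     * Sketch: physical memory plus configured swap, the figure
     * the footnote argues the 'memory' rlimits should really be
     * compared against.
     */
    #include <sys/param.h>
    #include <sys/sysctl.h>
    #include <sys/swap.h>
    #include <err.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        uint64_t physmem, swapbytes = 0;
        size_t len = sizeof(physmem);
        struct swapent *sep;
        int i, n, nswap;

        if (sysctlbyname("hw.physmem64", &physmem, &len,
            NULL, 0) == -1)
            err(1, "sysctlbyname");

        nswap = swapctl(SWAP_NSWAP, NULL, 0);
        if (nswap > 0) {
            sep = calloc(nswap, sizeof(*sep));
            if (sep == NULL)
                err(1, "calloc");
            n = swapctl(SWAP_STATS, sep, nswap);
            /* se_nblks is in DEV_BSIZE (512 byte) blocks */
            for (i = 0; i < n; i++)
                swapbytes += (uint64_t)sep[i].se_nblks * DEV_BSIZE;
            free(sep);
        }
        printf("physmem + swap = %llu bytes\n",
            (unsigned long long)(physmem + swapbytes));
        return 0;
    }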

-- 
David Laight: david%l8s.co.uk@localhost

