Subject: Re: new pid allocation code
To: None <>
From: David Laight <>
List: tech-kern
Date: 03/17/2003 21:02:58
> In short terms:
> big-O: f(n) = O(g(n)) iff 0 <= f(n) <= c * g(n) for all n >= k, for
> some fixed constants c and k.
> little-o: f(n) = o(g(n)) iff for every c > 0, 0 <= f(n) <= c * g(n)
> for all n >= k. k must not depend on n, but may depend on c.

or (from the same reference):
    f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n)
    f(n) = O(g(n)) means it is less than some constant multiple of g(n)
as n approaches infinity. 

> So my question should have been: "What's the dependence relation
> between k and c?" Or, better, "Do you really mean that you're O(1),
> because if so that rocks, no matter what the old algorithm was."

Yes, I did mean O(1), and the constants are not stupid!

> It's all pretty academic, though. I'd be surprised if there were a
> wide variance in your constants and the existing algorithms, or you
> wouldn't be touting {o,O}(n^2) versus {o,O}(1) (because you wouldn't
> have measurements that said anything like that).

Measurements? Ah yes, ruler on .c file :-)

> If the largest cost in your new way is constant, and *any* cost in
> the existing way is n^2, you win. :^>

I could write one that wasn't :-)

For the existing code to show its O(n^2) behaviour, you need to:
1) have allocated over PID_MAX processes
2) have a spread of pids allocated to long lived processes

Basically when the next sequential pid is that of an existing process
(which would have to be a long-lived one), the full list of processes
has to be scanned in order to find the next block of free pids.
So a pass of the table (which actually generates fewer than PID_MAX pids)
has a cost of about nproc * '# long-lived procs', i.e. roughly n^2.
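To make the O(n^2) behaviour concrete, here is a user-space sketch of a
sequential allocator of that shape (not the actual kernel code; PID_MAX
is shrunk, and a boolean array stands in for walking the real process
list on every collision):

```c
#include <assert.h>
#include <stdbool.h>

#define PID_MAX 8   /* tiny for illustration; the real value is much larger */

/* Illustrative stand-in for "is there a live process with this pid?". */
static bool pid_in_use[PID_MAX];

/*
 * Old-style allocator sketch: hand out the next sequential pid, and
 * every time it collides with a live (long-lived) process, keep
 * scanning for a free one.  In the real code each collision costs a
 * scan of the whole process list, so a pass that hands out O(n) pids
 * with p long-lived processes spread through the range costs about
 * n * p, i.e. roughly n^2, scans.
 */
static int alloc_pid_scan(int *next)
{
    for (int tries = 0; tries < PID_MAX; tries++) {
        int pid = *next;
        *next = (*next + 1) % PID_MAX;
        if (!pid_in_use[pid]) {        /* free slot found */
            pid_in_use[pid] = true;
            return pid;
        }
        /* collision with a live process: keep scanning */
    }
    return -1;                         /* table full */
}
```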

In any case, removing a number from a linked list is cheaper!
(although slightly more expensive than incrementing a number).
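A minimal sketch of the free-list idea (names and layout are
illustrative, not the committed code): each table slot holds the index
of the next free pid, so allocation pops the head and freeing pushes it
back, both in O(1).

```c
#include <assert.h>

#define PID_MAX 8   /* tiny for illustration */

/* next_free[i] chains free slots together; free_head is the list head. */
static int next_free[PID_MAX];
static int free_head;

static void pid_init(void)
{
    for (int i = 0; i < PID_MAX - 1; i++)
        next_free[i] = i + 1;
    next_free[PID_MAX - 1] = -1;       /* end of free list */
    free_head = 0;
}

/* O(1): unlink the head of the free list. */
static int pid_alloc(void)
{
    int pid = free_head;
    if (pid != -1)
        free_head = next_free[pid];
    return pid;
}

/* O(1): push the slot back on the free list. */
static void pid_free(int pid)
{
    next_free[pid] = free_head;
    free_head = pid;
}
```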

pfind and pgfind also end up O(1), not O(n).  In this case the new
code is always better.
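The reason pfind() drops from O(n) to O(1) is that with pids allocated
out of a table, lookup is just a bounds check plus an array index
instead of a list walk.  A simplified sketch (the real code also has to
map a pid to its table slot, e.g. by masking off generation bits, which
this ignores):

```c
#include <assert.h>
#include <stddef.h>

#define PID_MAX 8   /* tiny for illustration */

struct proc {
    int p_pid;
};

/* pid_table[pid] points at the live process with that pid, or is NULL. */
static struct proc *pid_table[PID_MAX];

/* O(1) lookup: no walk over the process list. */
static struct proc *pfind_sketch(int pid)
{
    if (pid < 0 || pid >= PID_MAX)
        return NULL;
    return pid_table[pid];
}
```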


David Laight: