Subject: Re: Floating point in the kernel
To: Greg Hudson <ghudson@MIT.EDU>
From: Jukka Marin <>
List: tech-kern
Date: 09/22/1998 09:44:16
On Mon, Sep 21, 1998 at 06:24:42PM -0400, Greg Hudson wrote:
> > How is this different from blowing a response deadline because
> > another scheduled real-time task ate up so many system resources
> > that the first one didn't get to execute by its deadline? As I
> > understand it, if your CPU is too slow, it's too slow, and there's
> > nothing you can do about it. I'm just proposing artifically `slowing
> > down' the CPU.
> The problem is that restricting a process from taking up more than 80%
> of the CPU is not the same as slowing down the CPU.  If, say, you do
> it over a one-second interval with a 100MHz CPU, then what an
> application sees is a 100MHz CPU for 0.8 seconds and nothing for 0.2
> seconds, not an 80MHz CPU.  If the aforementioned robot arm takes 0.1
> seconds to punch through the wall and kill someone, then that's pretty
> bad.

Hmm.  If the system limits the CPU usage of any real-time process to 80%
of maximum, it doesn't need to be the same as "process runs 800 ms and
is forced to sleep 200 ms".  If the process uses less than 80% of the CPU
time _averaged_, the system will not restrict its CPU usage at all, right?
And if 80% (or 90% or 95%, whatever the limit is) of the CPU time is not
enough for the real-time process, then the system is too slow and it's
not the fault of the scheduler.  After all, if the real-time process
required 100% or 110% of the CPU time, it would make itself non-real-time
already, because there's just not enough raw power to get things done in
time.

The 80% limit should only be a safeguard, not something that is actually
being hit when things are running normally.
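To make the averaging idea concrete, here is a minimal sketch (the class
and names are hypothetical, just for illustration; a real kernel would
account in scheduler ticks, not Python objects).  The point is that the
limit only bites when usage averaged over a sliding window exceeds it:

```python
from collections import deque

class CpuBudget:
    """Sliding-window CPU limiter sketch: 1 = process ran this tick,
    0 = it didn't.  The process is throttled only while its windowed
    average usage exceeds the limit."""

    def __init__(self, limit, window_ticks):
        self.limit = limit                          # e.g. 0.8 for 80%
        self.history = deque(maxlen=window_ticks)   # recent run/idle ticks

    def may_run(self):
        # Don't throttle until a full window of history exists.
        if len(self.history) < self.history.maxlen:
            return True
        return sum(self.history) / len(self.history) <= self.limit

    def account(self, ran):
        self.history.append(1 if ran else 0)

# A greedy process that wants the CPU every tick settles to roughly
# 80% of the ticks; a process that naturally uses 50% is never
# throttled at all -- the limit stays a pure safeguard.
```

So under normal load the limiter is invisible, and only a runaway
real-time process ever sees the cap.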

About hardware interrupts being disabled while a real-time process is
running... I don't think this could be done.  First, if the real-time
processes use measurable amounts of CPU time, they would block critical
interrupts (which has been a problem even without real-time processes,
like serial interrupts or Ethernet interrupts with small amounts of
buffer RAM on the Ethernet card) and make the system unusable.  Second,
I would think many real-time processes actually depend on the interrupts
themselves - either they are getting data from an external device via
interrupts or simply timing their operations using a hardware timer.

I don't think an OS like NetBSD could ever provide real-time services
accurate to 1 us (or even close) without letting the process create an
interrupt routine for that purpose.  IMHO, the goal should be a bit more
realistic. :-)

> Now, if you prohibit a process from using more than 80% of the CPU
> over a very short interval of time--1 microsecond, say--and your
> real-time guarantees are only supposed to be accurate to 1 microsecond
> anyway, then you're golden.  But that seems unlikely.

I guess the accounting period should be longer... something like <100 ms
if the system must stay responsive to other things even when the
real-time processes go crazy, or 1000 ms if that is considered "very
unlikely".
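To put rough numbers on that tradeoff: with a hard cap over an accounting
window, the worst case is that the process burns its whole budget at the
start of the window and then sits blocked for the remainder.  A tiny
sketch of that arithmetic (the function name is made up for illustration):

```python
def worst_case_stall_ms(budget_fraction, window_ms):
    # Worst case: the process uses its entire budget at the start of
    # the accounting window and is then blocked for the rest of it.
    return (1.0 - budget_fraction) * window_ms

# 80% cap over a 1000 ms window: up to ~200 ms of continuous blackout
# (Greg's robot-arm scenario).  The same cap over a 100 ms window caps
# the blackout at ~20 ms, at the cost of throttling short bursts sooner.
```

So the window length is a direct knob between "behaves like a slower CPU"
(short window) and "long blackouts when the cap is hit" (long window).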