Subject: Improving NetBSD porting documentation.
To: None <tech-kern@netbsd.org>
From: Donald Gillies <gillies@cs.ubc.ca>
List: tech-kern
Date: 08/20/2001 11:36:25

I have a suggestion for how to improve the documentation on porting
BSD to a new architecture. There is a section in the documentation
that discusses how to assign interrupt priorities.

There is an optimal algorithm for static-priority real-time
scheduling, known as the rate-monotonic algorithm; interrupts
basically kick off static-priority functions [Liu and Layland 1973,
JACM]. The Liu-Layland rule is: the higher the (maximum) frequency of
an interrupt, the higher its priority should be. This means, for
example, that a 115 kbaud serial link (about 11,500 interrupts per
second, at one interrupt per character) should have higher priority
than a 1000 Hz real-time clock (1000 interrupts per second).

On the other hand, with a 16550 UART that interrupts only after 15
characters have been received, the rate would be at most 11,500 / 15
= 766 interrupts per second, so it would get a lower priority than
the 1000 Hz real-time clock.
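
To make this concrete, here is a minimal sketch of a rate-monotonic
assignment over the three examples above, in C. The struct, the
device names, and the "higher number = higher priority" convention
are my own illustrative assumptions, not any particular port's actual
interface:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical interrupt source: a name and its worst-case rate. */
    struct intr_src {
            const char *name;
            unsigned int max_rate;      /* max interrupts per second */
    };

    /* Sort highest worst-case rate first. */
    static int
    by_rate(const void *a, const void *b)
    {
            const struct intr_src *x = a, *y = b;

            return (y->max_rate > x->max_rate) - (y->max_rate < x->max_rate);
    }

    int
    main(void)
    {
            /* Worst-case rates from the text above; the two serial
               entries are alternative configurations, listed together
               only to show the effect of the FIFO. */
            struct intr_src src[] = {
                    { "1000 Hz real-time clock",      1000 },
                    { "115 kbaud serial, no FIFO",    11500 },
                    { "115 kbaud serial, 16550 FIFO", 11500 / 15 },
            };
            int i, n = sizeof(src) / sizeof(src[0]);

            /* Rate-monotonic: priority order is just rate order. */
            qsort(src, n, sizeof(src[0]), by_rate);
            for (i = 0; i < n; i++)
                    printf("priority %d: %s (%u intr/s)\n",
                        n - i, src[i].name, src[i].max_rate);
            return 0;
    }
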
You can deviate from this algorithm for policy reasons (e.g., you
mention that the clock used for time slicing should have higher
priority than any kernel thread, such as the disk driver, so that
heavy disk access can never slow down the clock).

Just be aware that the more you deviate from this optimal priority
assignment, the bigger the chance that your CPU could be sitting at
50% utilization and still missing some low-priority interrupts.

Liu and Layland proved that this assignment is at least as good as
any other static assignment. They also proved that, in the worst
case, it can soak up 69.3% of the CPU (ln 2, as the number of
interrupt sources grows) without missing any interrupts. Later work
at CMU showed that the Liu-Layland rate-monotonic assignment yields
about 85% CPU utilization in the average real-time system before
interrupts start to be missed.
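
For reference, the Liu-Layland worst-case bound for n interrupt
sources is U(n) = n * (2^(1/n) - 1). A throwaway computation (nothing
NetBSD-specific, just the formula) shows it falling toward ln 2:

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
            int n;

            /* Liu-Layland worst-case schedulable utilization for n
               sources: U(n) = n * (2^(1/n) - 1) -> ln 2 ~= 0.693. */
            for (n = 1; n <= 10; n++)
                    printf("n = %2d: U = %.4f\n",
                        n, n * (pow(2.0, 1.0 / n) - 1.0));
            printf("limit:  ln 2 = %.4f\n", log(2.0));
            return 0;
    }
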
Don Gillies - gillies@graviton.com - Graviton, Inc.
EE Faculty, University of British Columbia, Vancouver, B.C. V6T 1Z4,
http://www.ece.ubc.ca/~gillies