Subject: delay()
To: None <port-i386@NetBSD.ORG>
From: Bakul Shah <bakul@netcom.com>
List: tech-kern
Date: 03/30/1995 16:07:24
Seems to me that a delay() based countdown loop would be
more accurate down to a microsecond or so (the loop count
multiplier to account for the CPU speed can be calibrated at
bootstrap time).

Instead, the /sys/arch/i386/isa/clock.c:delay(n) routine seems
to go through all sorts of gyrations.  Is there a strong
technical reason why the simpler method is not used?

I notice that the alpha, sun3 & hp300 ports use the
calibrated-multiplier trick, the sparc port is similar to the
i386 port, and pc532 does its own thing.  I didn't check
the rest.

My opinion is that getting delay() or DELAY() working
down to a microsecond or so matters much more than its
absolute accuracy, as the most common use for delay() is
handling hardware-related delays.  If greater accuracy is
needed for sound or other realtime uses, perhaps a separate
precise_delay() routine should be defined and used.

Comments?

-- bakul