Subject: Re: problems with ahc vs. format command
To: Manuel Bouyer <firstname.lastname@example.org>
From: Robert Elz <kre@munnari.OZ.AU>
Date: 06/12/2001 23:06:58
Date: Mon, 11 Jun 2001 21:54:21 +0200
From: Manuel Bouyer <email@example.com>
| OK, there are integer overflow problems in both drivers, for such a long
| (6 hours) timeout. ncr53c9x.c tried to deal with this, but it fails anyway
| for long timeouts, for hz <= 100.
| Can you try the attached patches ?
Leaving aside whether the patches fix the problem reported, I question
whether sticking 64 bit arithmetic into relatively heavily used drivers
in order to handle a once-a-month request (if that) is really the right
way to go about it.
Rather than that, how about just accepting that if someone is asking for
a timeout of more than an hour or so, the chances that they really want it
to be highly accurate are vanishingly small (if you ask for an hour, whether
the timeout goes off in 59:58 or 60:02 is unlikely to bother anyone), and
so instead of doing 64 bit arithmetic (which almost certainly means
compiler-invoked library routines on most architectures), do ...
(newtimeout > 3*1000*1000 ?
((newtimeout / 1000 + 1) * hz) : (newtimeout * hz) / 1000),
and so on? Even then the 3M could be lots shorter with reasonable
safety, to accommodate really high values of hz.  If you prefer to
round to nearest, instead of the slightly safer up for timeouts, then
make (newtimeout / 1000 + 1) * hz be ((newtimeout + 500) / 1000) * hz.
That could perhaps be made into a macro that expands differently on
64 bit processors, where just doing the 64 bit arithmetic avoids the test
(and costs nothing).