Port-amd64 archive
Re: Changing x87 precision to full 63bit as default
On 7 Nov, 2013, at 07:52, Greg Troxel <gdt%ir.bbn.com@localhost> wrote:
> as discussed a while ago, I would like to change the initial x87
> configuration to the system default, aka long double precision.
> This makes it possible to get working long double. A review of the libm
> assembler routines will follow to make sure they do correct rounding.
>
> Does this change affect just i386, or also amd64? If I follow
> correctly, it will change the behavior of programs on amd64 that use x87
> instructions rather than SSE, but that's an odd case, and therefore the
> behavior of almost all actual programs on amd64 will not change.
> Further, the floating point results on i386 will, post-patch, match the
> results on amd64. Explaining the above (correctly, which I may not have
> done) belongs in the commit message.
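For concreteness, the "initial x87 configuration" being changed is the
precision-control field (bits 8 and 9) of the x87 control word. Below is
a minimal sketch of what a program currently has to do for itself to get
the full 64-bit significand, which is what the patch would make the
default. This is illustrative only, not taken from the patch, and the
helper name is made up:

#include <stdio.h>

/*
 * Illustrative only: read the x87 control word, set the precision
 * control field (bits 8-9) to 11b, i.e. a full 64-bit significand,
 * and write it back.  The patch changes what this field starts out
 * as, so programs would no longer need to do this themselves.
 */
static void
x87_use_extended_precision(void)
{
    unsigned short cw;

    __asm__ __volatile__("fnstcw %0" : "=m" (cw));
    cw |= 0x0300;               /* PC = 11b: 64-bit significand */
    __asm__ __volatile__("fldcw %0" : : "m" (cw));
}

int
main(void)
{
    volatile long double one = 1.0L, tiny = 0x1p-60L;

    /* Under the current 53-bit startup default this prints 0 ... */
    printf("%Lg\n", (one + tiny) - one);

    x87_use_extended_precision();
    /* ... and this prints 2^-60, the mathematically exact result. */
    printf("%Lg\n", (one + tiny) - one);
    return 0;
}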
The change will affect amd64 programs which need to use long double, and
for those it will be a large improvement. I don't believe the change
will make i386 results more closely match those of the same program run
on amd64; in fact it may do the opposite. I can't think of a reason to
care about this, though, since in the rare case where the results of
running a program on amd64 and on i386 do differ significantly, the
change makes it yet more likely that the results produced by the i386
version are the arithmetically correct ones. Indeed I'm not even fond
of the fact that this change applies only to programs which are
recompiled, since if the change did cause an existing program to produce
significantly different results it is almost certainly because the
results it was producing without the change were wrong. I would rather
that programs compiled for different machines sometimes produce correct
answers, even if this is inconsistent between machines, than produce
identical, consistently incorrect, answers on all machines. Carrying
extra precision in intermediate computations is pretty much guaranteed
to raise the probability that the result of the computation is
meaningful.
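To make that last point concrete, here is a tiny example (a sketch, not
anything from the patch). The exact value of (1e16 + 1.5) - 1e16 is 1.5,
but 1e16 + 1.5 is not representable in a 53-bit significand (the spacing
of doubles near 1e16 is 2.0), so strict double evaluation rounds the
intermediate sum and the program prints 2. If the intermediate is
carried in an x87 register with the full 64-bit significand the program
prints the exact 1.5. (Whether the compiler actually keeps the
intermediate in a register depends on optimization and spilling, so
treat this as illustrative.)

#include <stdio.h>

int
main(void)
{
    /* volatile keeps the compiler from folding this at compile time */
    volatile double big = 1e16, frac = 1.5;
    double r;

    /*
     * Exact answer is 1.5.  Rounding the intermediate sum to 53 bits
     * gives 1e16 + 2, so r becomes 2.0; carrying it with a 64-bit
     * significand keeps it exact, so r becomes 1.5.
     */
    r = (big + frac) - big;
    printf("%g\n", r);
    return 0;
}

The amd64/SSE answer (2) and the extended-precision x87 answer (1.5)
disagree, and it is the latter that is arithmetically correct, which is
exactly the trade-off described above.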
In reality, if the details of a particular machine's floating point
implementation cause a program's results to change significantly, you
probably shouldn't be trusting any of those results and should instead
be looking at the program to understand why that is happening. When I
used to do this kind of work more regularly we would often seek out a
Vax or an IBM mainframe for test runs of programs intended to run
primarily on something else, explicitly to try to find and understand
problems like this, until we figured out that IEEE 754 rounding modes
could often provide the same check. Trying to make all machines behave
the same (which usually means making them behave like some lowest
common denominator, e.g. perhaps no support for long double precision
beyond double) is counter-productive to the extent that it makes it
harder to notice that you are computing (always identical) nonsense.
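That rounding-mode check is cheap to do on anything with C99 <fenv.h>:
run the same computation with the rounding direction forced to nearest,
up, and down, and if the spread between the answers is bigger than the
accuracy you need you have learned something about the computation.
Here is a rough sketch; the naive single-precision sum is just a
stand-in for whatever the real computation is, and depending on the
compiler you may need -frounding-math or the FENV_ACCESS pragma for the
mode changes to be respected reliably:

#include <fenv.h>
#include <stdio.h>

/* Stand-in for the real computation: a naive single-precision sum. */
static float
harmonic(int n)
{
    float sum = 0.0f;
    int i;

    for (i = 1; i <= n; i++)
        sum += 1.0f / (float)i;
    return sum;
}

int
main(void)
{
    const int modes[] = { FE_TONEAREST, FE_UPWARD, FE_DOWNWARD };
    const char *names[] = { "to-nearest", "upward", "downward" };
    volatile int n = 1000000;   /* volatile so the calls aren't merged */
    unsigned int i;

    /*
     * If the spread between these three results is larger than the
     * accuracy you need, the rounding of intermediates matters and
     * the algorithm deserves a closer look.
     */
    for (i = 0; i < sizeof(modes) / sizeof(modes[0]); i++) {
        fesetround(modes[i]);
        printf("%-10s %.8f\n", names[i], harmonic(n));
    }
    fesetround(FE_TONEAREST);
    return 0;
}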
> Have you run paranoia (with no math options in CFLAGS) with the patch?
> Is it happy with everything? If not, can you explain what it objects
> to, and why it is wrong?
I'll be happy to run it once the patch goes in. I'm not sure it is
necessary to run it before then, since the patch simply makes NetBSD
on Intel machines behave the way other operating systems already do.
Note that the original author of paranoia is also the floating point
arithmetic expert who helped Intel with the design of the x87, which
was intended to work the way this patch is now allowing it to, so it
would be surprising if paranoia had any objections to the change.
Dennis Ferguson