Port-i386 archive


Re: Changing the default i387 precision

On 6 May, 2013, at 04:59 , Greg Troxel <gdt%ir.bbn.com@localhost> wrote:
> When removing -ffloat-store on netbsd-6/i386 (and netbsd-5),
> paranoia complains about many things.

I don't know what this is about, but it is quite possible that what
you are seeing are symptoms of the problem that the x87 precision change
he wants to make will fix.  The behaviour of long double arithmetic
on NetBSD at run time matches neither the description of the type
in <float.h> nor the assumptions of NetBSD compilers (gcc and clang,
at least).  The compilation environment assumes that arithmetic
in the long double type behaves like the IEEE754 10-byte extended format,
but the actual run time behaviour of the type does not, on either of
the x86 architectures.

For amd64 this is only a problem for programs which actually
use the long double type.  For the default i386 behaviour, however,
it affects all floating point arithmetic: on the i386 all floating
point arithmetic is evaluated with long double precision, so the
fact that long double is broken potentially affects everything.

> I'm not really sure what your goal is.  I can see the appeal of having
> "long double" have greater precision, but I think it's far more
> important to have IEEE754-compliant behavior for programs that don't try
> to set anything.   Can't a user set rounding/precision modes
> intentionally if they are a long double user?

Without wanting to put words in his mouth I suspect a good goal might be
basic C99 compliance (I would even say C89 compliance, since long
double was required by that standard as well, but I no longer have
a copy so I don't know exactly what was required).  Support for the
long double type isn't optional; portable programs can rely on its
existence.  The C ABIs for both i386 and amd64 specify that the long
double type for those architectures is the 80-bit IEEE754 extended format,
and while NetBSD's long double looks like that too, its run time arithmetic
in the type does not behave like that IEEE754 format.  The latter is not
a problem for C99 compliance since C99 (prefers but) doesn't require
IEEE754 behaviour.  What is required by C99, however, is that <float.h>
describe the actual behaviour of the type, but on NetBSD it does not.
LDBL_EPSILON claims a precision more than 3 orders of magnitude better
than the run time math delivers, while LDBL_MIN, LDBL_MAX, LDBL_MIN_EXP
and LDBL_MAX_EXP are probably all similar lies.  C99 also requires that
the compiler evaluate constant expressions in a way which produces the
same result that a run time evaluation of the same expression would,
but since gcc and clang both apparently believe that long double math
is done with the full precision of the 80-bit format compile time
math produces different results than run time math.

This is broken.  Either <float.h> and the compilers need to be changed
to match the run time behaviour of the type, or NetBSD's run time behaviour
needs to be changed to match the assumptions of the compile environment.

There's one other interesting manifest constant defined in <float.h>,
that being FLT_EVAL_METHOD.  The C standard (in the 2005 draft I'm
looking at) describes it thus:

  8 The values of operations with floating operands and values subject
    to the usual arithmetic conversions and of floating constants are
    evaluated to a format whose range and precision may be greater than
    required by the type. The use of evaluation formats is characterized
    by the implementation-defined value of FLT_EVAL_METHOD:

    -1 indeterminable;

    0 evaluate all operations and constants just to the range and precision
      of the type;

    1 evaluate operations and constants of type float and double to the range
      and precision of the double type, evaluate long double operations and
      constants to the range and precision of the long double type;

    2 evaluate all operations and constants to the range and precision of
      the long double type.

For amd64, the compiler sets FLT_EVAL_METHOD to 0 (it has SSE instructions).
Assuming this isn't a lie, this means that the problem with long double
only affects programs which explicitly use long double.  It also should
mean that changing the x87 precision will fix the long double problem but
have no effect on existing programs which don't use the type.  This is a
no-brainer change; it should just be fixed.

For i386, unfortunately, the default compilation sets FLT_EVAL_METHOD to
2.  Since all math in all three floating point types is evaluated with long
double precision, this means the compilers potentially screw up constant
expression evaluation (at least) for all floating point types.  (If you
look at the paranoia.c program you may notice there is quite a bit in there
that can be done at compile time by a good compiler; with a FLT_EVAL_METHOD
of 2, anything computed by the compiler in any of the 3 types may be
inconsistent with the NetBSD run time behaviour.)  Since everything is
potentially screwed up here I think it is even more imperative that this be
fixed, either by making the compilers and <float.h> match the actual run
time behaviour or by making the run time behaviour match the expectations
of the compiler, and of these choices changing the x87 precision seems like
the best thing to do as well.  It is way easier than diddling the
compilers, it provides an IEEE754-compliant long double type, and it makes
NetBSD match what some of the bigger users of those compilers (e.g.
Linux and FreeBSD) already do.  This should get done as well.

Note that Apple compilers set FLT_EVAL_METHOD to 0 on i386.  Apple can do this
since they've never shipped a product with an Intel CPU which lacked SSE
instructions.  NetBSD's default binaries, on the other hand, should run even on
CPUs too old to have SSE instructions, so NetBSD's compilers have no default
choice other than to do all floating point math with the x87.  Apple also
apparently uses a non-standard ABI for the i386 (I know sizeof(long double)
there is 16, while the standard ABI document says it should be 12), another
thing that NetBSD probably shouldn't do.

Finally, just to be clear, the fact that the i386 does type-promoted
floating point expression evaluation (except when compiled with
-ffloat-store, probably) is not a bug, is explicitly allowed by the C
standard, and says nothing about IEEE754 compliance or lack thereof.  If
anything it is something of a feature if it doesn't cost anything, and if
you are forced to use x87 instructions for arithmetic it actually doesn't
cost anything you aren't already paying.  If you skip to the bottom of
this essay you'll see
    I wish the makers of  MATLAB,  recalling their roots,  would rededicate
    attention to their numerical underpinnings.  Among improvements needed:

          Declarable  4-byte,  8-byte  and  (10 or 16)-byte  variables   *
          but use old  Kernighan-Ritchie C  rather than  Java/Fortran
          rules for expression-evaluation.

Note that the difference he's drawing in the last bit relates to the fact
that Java and Fortran require that all floating point operations be done
with the precision of the type (i.e. a FLT_EVAL_METHOD of 0) while K&R C
did type-promoted floating point arithmetic exclusively (i.e. a FLT_EVAL_METHOD
of 1 or 2; they're probably equivalent in this context since K&R C had no
long double); he's asking for MATLAB to do type-promoted arithmetic.  He also
seems to think that having a working extended floating point format is useful.
The author, William Kahan, was primarily responsible for the design of IEEE754
and I think was an original author of paranoia.c.

The only problem i386/amd64 NetBSD has with IEEE754 compliance is the
behaviour of its long double type (since you express concern about the
former I don't know why you don't seem concerned about the latter).  I
think the problems with NetBSD C floating point have nothing to do with
IEEE754 compliance and everything to do with the difference between how the
compiler environment expects long double math to behave and how it actually
behaves at run time; for the i386 this problem may infect all floating
point arithmetic.  I believe both these things can be usefully fixed by
changing the x87 precision, and I don't think you should have to call a
non-standard, platform-specific function to fix it.  I think the x87
default should change, the function can be used by people who think there's
something good about the current behaviour (though I can't imagine what
that would be).

Dennis Ferguson

