Port-vax archive


Re: VAX floating point formats & tuning GCC for CPU models (was Re: Some more patches for GCC...)



On Wed, 6 Apr 2016, Jake Hamby wrote:

> > A conversion between the VAX and IEEE floating point is non-trivial 
> > because of the infinities and qNaN FP data (sNaN can be mapped to the VAX 
> > reserved FP data, although even this is not an exact match).  There's the 
> > issue of accuracy loss with denormals too.  The small range difference 
> > between F-floating and IEEE single and then G-floating and IEEE double is 
> > probably the least of the trouble.
> > 
> > All this makes a good conversion implementation a tad of a challenge, 
> > although for programs which only rely on what some language standards 
> > (e.g. ISO C) provide, with no regard to IEEE floating-point specifics, 
> > you can get good results.
> > 
> > A while ago I actually tried to get a GCJ port running with the VAX 
> > floating point format used in native (compiled) code by implementing the 
> > necessary converters.  I got as far as being able to build classpath; 
> > however, I couldn't verify the result at run time because I chose the 
> > wrong ;) OS (i.e. VAX/Linux), which never got far enough to make that 
> > possible.  I might be able to dig that stuff out if there's interest; all 
> > of this was pretty much generic, suitable for any VAX OS.
> 
> You covered all the major differences between the two formats. The only 
> detail you missed was that in addition to the lack of infinity, denormal 
> numbers, and qNaN support, VAX FP also doesn't make a distinction 
> between +0 and -0 (which on VAX is the reserved FP data you mentioned).

 Indeed, good catch!  Some FP algorithms do rely on the presence of 
negative zero for their correct operation (some even rely on the sign of 
qNaN data, even though that is not guaranteed by IEEE 754-1985; it was 
only with the IEEE 754-2008 update that the semantics of the sign of qNaN 
data were clarified for some operations such as negation or modulus).
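
 Just to illustrate with a trivial example of my own (not from the 
original discussion): on an IEEE-conforming C implementation atan2() is 
specified to return +pi or -pi depending on the sign of a zero first 
argument when the second argument is negative, a distinction a 
single-zero format like VAX FP cannot make:

	#include <math.h>
	#include <stdio.h>

	int
	main (void)
	{
	  /* With IEEE signed zeros these two calls give different results.  */
	  printf ("%f\n", atan2 (0.0, -1.0));	/* prints  3.141593 */
	  printf ("%f\n", atan2 (-0.0, -1.0));	/* prints -3.141593 */
	  return 0;
	}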

> Other than those differences, you can convert between IEEE single and 
> VAX F_float, or IEEE double and VAX G_float, by word-swapping the 16-bit 
> words within the value (VAX uses big-endian 16-bit words for its FP 
> formats, a PDP-11 artifact), and by adding or subtracting 2 from the 
> exponent bias (can be done with integer add/subtract on the word 
> containing the exponent).

 Correct; then you need to take care of range errors of course.  That's 
what I did for the GCJ/classpath port -- it wants IEEE FP for bytecode 
interpretation, but obviously you need VAX FP for native code, unless you 
switch to soft-float, that is.
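
 For the record, here's a rough sketch (mine, not the actual code from 
that port; the name is made up) of the F-floating -> IEEE single bit 
conversion described above, assuming the VAX datum has been read into a 
32-bit integer on a little-endian machine, i.e. with the sign/exponent 
word in the low half:

	#include <stdint.h>

	/* Convert VAX F-floating bits to IEEE single bits.  */
	uint32_t
	vaxf_to_ieee_single (uint32_t v)
	{
	  uint32_t x = (v >> 16) | (v << 16);	/* swap the 16-bit words */
	  uint32_t exp = (x >> 23) & 0xff;	/* VAX excess-128 exponent */
	  uint32_t sign = x & 0x80000000;

	  if (exp == 0)
	    /* Zero, or the reserved operand if the sign is set; mapping
	       the latter to a qNaN is just one possible choice.  */
	    return sign ? 0x7fc00000 : 0;
	  if (exp <= 2)
	    /* Would need an IEEE denormal; crudely flushed to zero here.  */
	    return sign;
	  return x - (2 << 23);			/* adjust the exponent bias */
	}

The opposite direction additionally has to catch infinities, NaNs, 
denormals and exponents too large for F-floating, which is where the 
range errors come in.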

> It took me several hours of confusion before I finally figured out why 
> the VAX docs say they use "excess 128" and excess 1024 encoding for the 
> exponent, when you have to add/subtract 129 or 1025 to get the real 
> exponent (IEEE uses excess 127 for singles and 1023 for doubles). On 
> VAX, the mantissa range is from 0.5 to 0.9999..., and on IEEE, the range 
> is from 1.0 to 1.9999..., with a hidden one bit in both cases. So if you 
> interpret the mantissa in IEEE terms, the "real" exponent bias for VAX 
> FP formats is 129 (F and D_float) or 1025 (G_float).

 Yes, this is a bit confusing -- both formats imply a hidden leading 
mantissa (significand in IEEE-speak) bit of 1, but in IEEE formats it sits 
to the left of the binary point (1.fff...) whereas in VAX formats it sits 
to the right (0.1ff...), which is why the documented excess-128/1024 bias 
becomes 129/1025 once you read the mantissa the IEEE way.  The bit becomes 
visible in Intel's 80-bit extended format, where it is stored explicitly.
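
 To make that concrete (numbers of my own, just as an illustration): 1.0 
is 0.1 (binary, i.e. 0.5) x 2^1 in VAX terms, so F-floating stores an 
exponent field of 128 + 1 = 129 (0x81) with an all-zero fraction, which 
reads as 0x40800000 after the word swap; IEEE single stores it as 
1.0 x 2^0 with an exponent field of 127 (0x7f), i.e. 0x3f800000.  The 
exponent fields differ by exactly 2, which is the bias adjustment 
mentioned above.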

 BTW, there's a nice paper on the DEC vs Intel battle for IEEE 754 here: 
<http://www.eecs.berkeley.edu/~wkahan/ieee754status/754story.html>.  It 
explains how G-floating was conceived; it's no coincidence its range is 
so close to IEEE double's.  It was actually denormals that tipped the 
balance in favour of Intel in the end (note that the i8087 wasn't fully 
IEEE-compliant either, nor was the original i80287 implementation).

  Maciej

