On Tue, 22 Oct 2013, Matt Thomas wrote:
>>> return __rand48_seed[2] * 32768 + (__rand48_seed[1] >> 1);
>>>
>>> And all casts go away.  The multiply promotes everything to
>>> unsigned int.
>>
>> Here, I think the multiply will be performed using signed int (in the
>> common case that int is larger than 16 bits),
>
> No, it's unsigned due to __rand48_seed being unsigned.  (Confirmed by
> checking the resultant code.)
I meant signed int in the C abstract machine. On a real machine, the compiler is free to use a shift, or anything else that gives the correct result.
My reasoning was: __rand48_seed[2] is unsigned short (e.g. 16 bits). 32768 is signed int (e.g. 32 bits). 32-bit signed int is large enough to hold all possible values of 16-bit unsigned short, so the unsigned short is promoted to signed int (per C99 section 6.3.1.1 paragraph 2) and the multiply is done in signed int.
>> I think it's much better to use explicit fixed-width types.
>
> Well, it's an ancient interface.  I had considered changing it to use
> 64-bit types internally (at least for LP64) but given how often it's
> used, I didn't think it was worth the effort.
What you have is probably fine on all platforms where int is at least 32 bits wide, but I find that the burden of reasoning about what would happen on unusual platforms (if you don't use fixed-width types) outweighs the burden of rewriting the code to use fixed-width types.
--apb (Alan Barrett)