tech-userlevel archive


Re: int vs. long in test(1)




On Jun 19, 2008, at 2:21 PM, Quentin Garnier wrote:

> On Thu, Jun 19, 2008 at 02:09:37PM -0700, James Chacon wrote:

>> On Jun 19, 2008, at 12:22 PM, Joerg Sonnenberger wrote:

>>> On Thu, Jun 19, 2008 at 11:56:43AM -0700, James Chacon wrote:
>>>> Isn't that wrong then for 64-bit machines, where int is 32 bits and
>>>> the spec says "signed long" is what should be used here?

>>> The wording of the standard means that you can support more, but you
>>> don't have to. It is valid to use multi-precision math, for example.


>> Umm... it doesn't say "signed long as defined on a 32-bit machine"; it
>> just says signed long.
>>
>> That implies to me that on a given architecture you must support the
>> full range of signed long here, which would mean that on LP64, chopping
>> it at the maximum int is incorrect.

> Maybe I'm missing something, but isn't intmax_t defined as the largest
> integer type a given machine can manipulate? That has to be at least as
> large as long, on any machine.
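
For reference, C99 requires intmax_t to be able to represent any value of
any standard signed integer type, so it can never be narrower than long.
A minimal compile-time check (assuming a C11 compiler for _Static_assert)
illustrates the guarantee:

#include <limits.h>
#include <stdint.h>

/* intmax_t must cover the full range of long on any conforming
 * implementation, so these assertions hold everywhere. */
_Static_assert(INTMAX_MAX >= LONG_MAX, "intmax_t narrower than long");
_Static_assert(INTMAX_MIN <= LONG_MIN, "intmax_t narrower than long");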


Hmmm... my confusion. For some reason I read that as just the integer maximum (implying the int type).

Yes, conversion to intmax_t should do it. It would be nice to be consistent across all platforms, though. Do all of them define this as "long long"/64-bit?
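
As an illustration, a minimal sketch of such a conversion, assuming C99
strtoimax() from <inttypes.h> and the BSD err(3) interface; this is not
the actual test(1) source:

#include <err.h>
#include <errno.h>
#include <inttypes.h>
#include <stdio.h>

/* Parse a numeric operand into intmax_t.  strtoimax() accepts the
 * full range of intmax_t, which is at least as wide as long on
 * every platform, so LP64 values are not chopped at the int limit. */
static intmax_t
getnum(const char *s)
{
	char *end;
	intmax_t v;

	errno = 0;
	v = strtoimax(s, &end, 10);
	if (end == s || *end != '\0')
		errx(2, "%s: not a valid number", s);
	if (errno == ERANGE)
		errx(2, "%s: out of range", s);
	return v;
}

int
main(int argc, char *argv[])
{
	if (argc > 1)
		printf("%jd\n", getnum(argv[1]));
	return 0;
}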

James


