Subject: Re: bin/578: cc's -Wformat doesn't grok q modifier
To: None <firstname.lastname@example.org>
From: der Mouse <mouse@Collatz.McRCIM.McGill.EDU>
Date: 11/18/1994 20:46:58
>> [cc -Wformat versus %qd]
> I've wondered some since I began hearing about "long longs" and
> "quads". Wouldn't it make more sense for gcc to do:
> char 1 byte
> short 2 bytes
> int 4 bytes
> long 8 bytes
> rather than inventing long longs?
Perhaps. But I think inventing something like __quad__ would have been
better still. I don't think we will see 64-bit longs until 64-bit
CPUs become common. Whether we _should_ is debatable; in the absence
of the huge body of code that more or less assumes int==long or
long==32bit, I think we should. Pragmatically, we can't really, yet.
> As far as I know, this doesn't violate any C standard (I think the
> guarantee is
> char < short <= int <= long
I think it's actually char <= short <= int <= long.
> and we seemed to survive pretty well back in V7 days on the PDP11
> with int = 2 bytes and long = 4 bytes.
Well, sort of. The user base was also a lot smaller. :-)
> I realize this would break a lot of code (including much of my own,
> I'm sure), but it seems it will have to be done eventually. Is there
> any other reason for not doing it?
As mycroft mentioned, the efficiency penalty most current machines
would pay. Since one can't count on int being more than 16 bits, one
has to use long for 32 bits...which would mean a factor of two space
penalty and much slower arithmetic. Perhaps when 32-bit CPUs are as
common as 16-bit ones are now, int and long can and will go to 64 bits.
(I would _much_ rather gcc had invented something like __sized_int(), a
parameterized type, so that one could write something like
"__sized_int(32) foo;" to make "foo" a 32-bit integer or
"__sized_int(77) foo;" to make "foo" a 77-bit integer...with
appropriate performance penalties when you ask for peculiar sizes, of
course. Oh well.)