Subject: Re: C Language Standard(s)
To: Chris G Demetriou <Chris_G_Demetriou@balvenie.pdl.cs.cmu.edu>
From: John F. Woods <jfw@jfwhome.funhouse.com>
List: current-users
Date: 12/22/1995 08:28:07
> of course, if you fix the size of int and long to 32- and 64-bits,
> respectively:
> 	(1) you're going to lose as machines move to 64-bits.  in
> 	    not _that_ long, there will be desktop machines on which
> 	    32-bit ops _are_ more expensive than 64-bit ops.  "what
> 	    do you do then?"  (That's already a problem on e.g.
> 	    the cray, but there it's not '64', it's another weird
> 	    number.)

The sizes shouldn't be _fixed_ as such (i.e. no software should depend
on the number of bits), but two things are worth noting.  First, the
tons and tons of badly written software that expects 32-bit quantities
to be efficient put back pressure on hardware designers against letting
that cost difference grow large.  (Note that the Cray machines where
this is an issue are not marketed to people planning to run legacy C
programs; I think they also have a model with sizes 8 and 64, in
addition to the 60-bit beast.)  Second, even if 32-bit quantities cost
_somewhat_ more than 64-bit quantities, there are lots of applications
where they still work out cheaper: anything that deals with large
arrays of "ints".  If you're memory-bandwidth limited, you will happily
spend lots of extra CPU cycles shovelling half as many megabytes.
(That was one of the arguments that helped justify wimping out at KSR
and making int 32 bits, though there the cost difference was only gates
and not time, making the choice absolutely reasonable.)
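
To put the bandwidth argument in concrete terms, here's a toy sketch
(mine, purely illustrative -- the type names, the array size, and the
assumption that "long" is wider than "int" are not anything the
standard promises).  The arithmetic is identical in both loops, but the
narrow-element array drags half as many bytes through the memory
system:

    #include <stdio.h>

    /* Toy demonstration of the bandwidth argument: same work, but the
     * narrow-element array moves half as many bytes.  Type names and
     * the array size are illustrative only. */

    typedef int  narrow_t;          /* typically 32 bits */
    typedef long wide_t;            /* twice as wide on LP64 machines */

    #define N (1L << 20)            /* ~1M elements */

    static narrow_t a_narrow[N];
    static wide_t   a_wide[N];

    int main(void)
    {
        long i;
        unsigned long sum_n = 0, sum_w = 0;

        for (i = 0; i < N; i++) {
            a_narrow[i] = (narrow_t)i;
            a_wide[i]   = (wide_t)i;
        }
        for (i = 0; i < N; i++)     /* scans sizeof(a_narrow) bytes */
            sum_n += a_narrow[i];
        for (i = 0; i < N; i++)     /* scans twice that on LP64 */
            sum_w += a_wide[i];

        printf("narrow: %lu bytes/element, %lu KB scanned\n",
               (unsigned long)sizeof(narrow_t),
               (unsigned long)(sizeof(a_narrow) >> 10));
        printf("wide:   %lu bytes/element, %lu KB scanned\n",
               (unsigned long)sizeof(wide_t),
               (unsigned long)(sizeof(a_wide) >> 10));
        return (sum_n == sum_w) ? 0 : 1;
    }

On a machine that really is bandwidth-limited, that factor of two in
bytes scanned is the whole ballgame, whatever the per-operation cost.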

>	(2) you're going to lose when 128 bit systems come around.
>	    It will happen eventually, and NetBSD may even be
>	    around when it does.  You thought the 32->64 transition
>	    was hard?

128-bit systems will be awfully interesting, one way or another.  There
aren't enough type words in C (char, short, int, long) to express all
the word sizes available on a 128-bit system (unless the addressing
granularity is 16 bits), so
already you're stuck.  Also, consider what the driving forces behind
the expansion to 64 bits were:  (1) too little address space, especially
for sparse applications, and (2) performance demanded a wide memory bus,
so why not make the ALU that wide, too?  I predict it will be a _long_ time
before we see 2^64 bytes of semiconductor memory on a system; it will be
hard enough to put 2^64 bytes of disk storage on a system.[*]  Any further
address expansions will be driven by the desire for very sparse and coded
addresses, and these desires don't require a 128-bit address ALU.  (It's
a lot easier to ensure that you don't wrap across a 2^64 byte segment than
a 2^16 byte segment!)  As for the memory bus issue, real estate problems
will push back real hard on the next doubling -- board space is expensive,
and chip space for busses is EXPENSIVE.  (Yeah, this only delays the
inevitable, but I think it will delay it for a while.)
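
To make the type-words point concrete, here's a trivial sketch (mine,
not part of the standards discussion; it assumes a pre-"long long"
compiler) that prints the whole ladder of standard integer type names
C gives you to map onto a machine's word sizes:

    #include <stdio.h>
    #include <limits.h>

    /* The complete ladder of standard integer type names in pre-"long
     * long" C: four names.  A 128-bit machine with 8-bit bytes has five
     * natural widths -- 8, 16, 32, 64, 128 -- so one of them goes
     * nameless; with 16-bit addressing granularity (CHAR_BIT == 16) the
     * four names just barely cover 16/32/64/128. */

    int main(void)
    {
        printf("char : %3d bits\n", (int)(sizeof(char)  * CHAR_BIT));
        printf("short: %3d bits\n", (int)(sizeof(short) * CHAR_BIT));
        printf("int  : %3d bits\n", (int)(sizeof(int)   * CHAR_BIT));
        printf("long : %3d bits\n", (int)(sizeof(long)  * CHAR_BIT));
        return 0;
    }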


And even if I'm way off on my pessimism here, that STILL wouldn't justify
making long 64 bits on current 32 bit machines.  After all, ONE DAY
common computers may have 512 bit datapaths!  Let's make char 512 bits!
Surely one minute of runtime on one of THOSE babies will justify the tens
of quadrillions of wasted cycles in all the years previous...


[*] There exist, today, computers with more than 4GB of semiconductor memory;
if DRAM prices ever resume dropping, that might even become affordable soon.
But prices have to drop by a factor of 4 billion before someone fills up
64 bits of address with genuine memory (and maybe a factor of 4 million before
they can page that much space to disk).  *That* kind of jump requires
revolution, not evolution.
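
(Restating the footnote arithmetic, my gloss, using the baselines those
factors work out to -- roughly 4 GB of memory and roughly 4 TB of
pageable disk per system: 2^64 / 2^32 = 2^32, about 4.3 billion, and
2^64 / 2^42 = 2^22, about 4.2 million.)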