tech-userlevel archive


Re: why cast to char* through void*

On Jul 19, 2009, at 4:00 PM, David Holland wrote:


Yes; the problem is that things we think may be defined by the
platform (because of the function call ABI or characteristics of the
CPU or whatever) are, in fact, not, and fail silently with the next
compiler release. This has happened before with gcc and will doubtless
happen again.

This area is, IME, where most arguments about C and the C standard
arise: we know what happens on our CPUs when signed integer arithmetic
overflows, so we expect to be able to access that behavior by doing
computations on "int". Only, because it's still not actually defined,
the compiler is free to cause something else to happen, and sometimes
for one reason or another it will.
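
[Editorial aside, not part of Holland's message: a minimal sketch of the kind of silent failure he is describing. The check below looks portable, but because signed overflow is undefined, an optimizer may assume x + 1 never wraps and fold the whole test to 0. The function name is made up for illustration.]

#include <stdio.h>
#include <limits.h>

static int
will_wrap(int x)
{
	/*
	 * Looks like a portable overflow test, but when x == INT_MAX
	 * the expression x + 1 overflows a signed int, which is
	 * undefined behavior.  A modern gcc or clang at -O2 is
	 * therefore entitled to assume x + 1 > x always holds and
	 * compile this function as "return 0;".
	 */
	return x + 1 < x;
}

int
main(void)
{
	/*
	 * At -O0 this typically prints 1 (the CPU wraps); at higher
	 * optimization levels it may print 0, with no diagnostic.
	 */
	printf("%d\n", will_wrap(INT_MAX));
	return 0;
}
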

Right. Many years ago, I used a system where the result of an overflow when adding two ints was 0. This was in C, on 4.3bsd -- it just didn't happen to be 4.3bsd on a VAX. Not surprisingly, some software broke, but since that behavior was in accordance with the C standard of the day it was acceptable. (Of course, the problem was that the same thing happened when adding two unsigned ints, which was not in accordance with the language reference manual. They wouldn't listen to me, even when I pointed at the text, so I appealed to authority and asked Dennis Ritchie's opinion. They were willing to listen to him...)
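
[Again not from the original message: a short sketch of the distinction being drawn here. Unsigned arithmetic is required by the language to wrap modulo 2^N, so the zero result the poster describes was non-conforming for unsigned ints, while for signed ints the implementation was within its rights.]

#include <stdio.h>
#include <limits.h>

int
main(void)
{
	unsigned int u = UINT_MAX;

	/*
	 * Unsigned arithmetic is defined to be performed modulo
	 * UINT_MAX + 1, so this must print 0 on every conforming
	 * implementation; a system where it prints anything else is
	 * simply broken, no matter what the hardware does for signed
	 * overflow.
	 */
	printf("%u\n", u + 1u);

	return 0;
}
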
