Subject: Re: a new KNF (and some comments)
To: Peter Seebach <seebs@plethora.net>
From: Noriyuki Soda <soda@sra.co.jp>
List: tech-misc
Date: 01/21/2000 17:52:08
>>>>> On Fri, 21 Jan 2000 00:53:02 -0600,
	seebs@plethora.net (Peter Seebach) said:

> In message <200001210641.PAA05868@srapc342.sra.co.jp>, Noriyuki Soda writes:
>> But what I'm talking about is "keeping the ABI".

> Yes, but the ABI is crufty and inefficient.

As far as I can tell, it is not so inefficient.
And keeping a solid ABI is what an OS vendor should do.

As I said in my previous message, I don't care about functions which
don't affect the ABI; i.e. using ANSI style for applications and
static functions is no problem for me.

> I really think we have to be willing to make a little progress here;
> the ABI is spending a lot of time converting things around for no
> good reason.

Keeping a solid ABI is quite an important point.

>> But for global functions which are related to the ABI, we should not
>> use "short" and "char" arguments, since "short" and "char" arguments
>> have ABI problems.

> Well, if we don't use them at all, then we might as well use the ANSI
> definition, because there's no difference in ABI.

> We've changed ABI's in the past (e.g., a.out vs. ELF), I don't see
> it mattering *that* much.

But we've always kept functionality and backward compatibility.

If we define function prototypes like "int foo(short)", then

(a) The function cannot be called safely from 3rd-party programs
  which don't include the prototype header correctly (see the
  sketch after this list).

	I agree that such 3rd-party programs are broken.
	But fixing such programs increases the maintenance cost of pkgsrc.

	IMHO, keeping K&R ABI compatibility is a lot easier than
	continuously fixing 3rd-party programs, because we cannot
	control 3rd parties, but we can control our ABI.

(b) The function cannot be called safely from a K&R compiler.

	So a K&R compiler can never be used for NetBSD any more,
	although this probably doesn't matter these days.
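
Here is a minimal sketch of problem (a), with hypothetical file and
function names; whether the call actually breaks depends on how a
given platform's ABI passes short vs. int arguments:

	/* libfoo.c -- library side, ANSI definition with a short parameter */
	int
	foo(short s)
	{
		return s + 1;
	}

	/* thirdparty.c -- 3rd-party caller that fails to include the header */
	extern int foo();	/* no prototype: default promotions apply */

	int
	bar(void)
	{
		short s = 42;

		return foo(s);	/* s is promoted to int; if the platform
				 * passes an int argument differently from
				 * a short one, this call is undefined */
	}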

>> And if we don't use "short" and "char" arguments, the "inefficiency"
>> issue doesn't matter, because a K&R style function produces exactly
>> the same performance as an ANSI style one.

> But by the same token, they produce the same code, so we might as well
> give the compiler the more standard specification.

Mm, I missed your point.
If K&R and ANSI are the same in efficiency, isn't it better to have
compatibility?

>> Yes, thus, the prototype should be defined as follows (as Chris said):
>> int foo __P((int));

> Ahh, but that's not what we normally do - and it's misleading, because it
> implies that the function will understand the normal range of ints, and it
> won't.

Please look at <stdio.h>.
There is the following definition:
	int putchar __P((int));
The reason that putchar() takes int rather than char is exactly the
reason I am claiming.

So both the ANSI/ISO committee and the authors of the NetBSD library
normally keep the K&R ABI.
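
A minimal sketch of why this works (file names hypothetical): because
the prototype already uses the promoted type int, a caller compiled
with the prototype in scope and an old K&R caller compiled without it
pass the argument identically.

	/* ansi_caller.c -- sees the prototype from <stdio.h> */
	#include <stdio.h>

	void
	emit(char c)
	{
		putchar(c);	/* c is converted to int per the prototype */
	}

	/* knr_caller.c -- old code with no prototype in scope */
	int putchar();		/* K&R style declaration */

	void
	emit(c)
		char c;
	{
		putchar(c);	/* c is promoted to int by the default
				 * argument promotions; same convention */
	}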

>> If you try to define it as follows:
>> int foo __P((short));
>> then *YOU HAVE MADE AN ABI PROBLEM*.

> No, then the idea that our modern systems should handle argument passing
> based on the characteristics of early PDP and VAX systems has created an
> ABI problem.

> ABI problems are like which side of the road you drive on; no one side is
> intrinsically more right than the other.  As it happens, there's a standard,
> so it's probably best if we conform to the standard.

Mmm, I missed your point here, too.
The ABI problem is not a `which side of the road' matter.

The library author of an operating system should know what integral
promotion means. And as long as he knows that, both K&R programs and
ANSI programs can be used with the library.
If the library author doesn't know that, only ANSI programs can be
used with the library.

Which is better?
It seems obvious to me.
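
For reference, integral promotion here means that in a call with no
prototype in scope (and in a K&R function definition), char and short
arguments are passed as int; f is a hypothetical function:

	extern int f();		/* no prototype: default promotions apply */

	int
	caller(void)
	{
		char c = 'x';
		short s = 7;

		return f(c, s);	/* both c and s are promoted to int
				 * before the call */
	}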

As I wrote previously, I do not object to our applications following
the ANSI style.

>>> If, elsewhere, you say
>>> int foo(short);
>>> you are allowed to be using different calling conventions.

>> And that breaks ABI compatibility.

> It's just as accurate to claim that the functions declared in the K&R style
> are breaking ABI compatibility.  They're both "wrong".

> As it happens, gcc currently does something very interesting; if you do
> 	int foo(short);

> 	int foo(s)
> 		short s;
> 	{
> 	}

> it pretends you always did it in the ANSI style, as I recall.

Perhaps you are confusing the usage of the argument s inside the
function foo with the calling convention for the argument s?

It is OK for gcc to use `s' as a short inside the function foo, but
the calling convention for the argument `s' should keep K&R ABI
compatibility.
Could you provide the platform name and assembler output which show a
calling convention incompatible with K&R?

And if gcc produced code like you described, then *GCC IS BROKEN*.
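
For reference, the C90 semantics of a K&R definition with a short
parameter are, in effect, the following rewriting (promoted_s is just
a name I made up for illustration):

	/* the K&R definition: */
	int
	foo(s)
		short s;
	{
		return s;
	}

	/* behaves, for calling convention purposes, as if it were: */
	int
	foo(int promoted_s)
	{
		short s = (short)promoted_s;	/* narrowed on entry */

		return s;
	}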

>> So, it is better to use K&R style for global functions which are
>> related to the ABI (e.g. functions declared in /usr/include, and
>> kernel functions which can be called from device drivers and
>> 3rd-party filesystems), because the K&R style automatically detects
>> ABI problems like the above.

> No, it automatically ignores them, and/or creates them.

No. Please try "gcc -pedantic" with the above C source.
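
For example, compiling the quoted source with -pedantic should produce
a diagnostic roughly like this (the exact wording and line numbers
depend on the gcc version):

	$ gcc -pedantic -c foo.c
	foo.c:5: warning: promoted argument `s' doesn't match prototype
	foo.c:1: warning: prototype declaration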

> ANSI allows you to tell, looking at the declaration of a function, what
> arguments it takes.  K&R doesn't.

I never said that we should not use ANSI prototypes.
What I said is to use an ANSI prototype declaration + a K&R function
definition for functions which affect the ABI, as sketched below.
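
Concretely, the combination looks like this (foo is a hypothetical
name); the prototype uses only the promoted type int, so the
declaration, the K&R definition, and old prototype-less callers all
agree on the calling convention:

	/* foo.h -- ANSI prototype, using only promoted types */
	int	foo __P((int));		/* __P() from <sys/cdefs.h> */

	/* foo.c -- K&R definition matching the prototype */
	int
	foo(c)
		int c;
	{
		return c & 0xff;
	}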

I love ANSI prototypes and ANSI C, as you can see from my use of
-pedantic. :-)

> We *MUST* provide prototypes.  They are not optional.

I never objected to this.
I completely agree: prototypes are not optional.

> If you are providing a
> correct prototype, you can't use a K&R definition unless the default
> promotions don't change anything.

No. See above.
--
soda