Subject: Re: [ColdFire] Re: [RFC] Type of long double on ColdFire
To: ColdFire Mailing List <ColdFire@lists.wildrice.com>
From: Paul Brook <paul@codesourcery.com>
List: port-m68k
Date: 12/10/2005 02:46:01
On Saturday 10 December 2005 00:52, Aaron J. Grier wrote:
> On Fri, Dec 09, 2005 at 07:32:38PM +0000, Paul Brook wrote:
> > On Friday 09 December 2005 18:07, Aaron J. Grier wrote:
> > > On Fri, Dec 09, 2005 at 09:00:32AM +0100, David Brown wrote:
> > > > The only reason I can think of for having longer doubles in the
> > > > m68k gcc port is for older Macs, and I'd be doubtful if removing
> > > > support would be noticed by anyone.
> > >
> > > I'm sure the m68k hackers running NetBSD would...
> >
> > I'm not suggesting changing the m68k definition of long double, only
> > ColdFire.
>
> Ahh, OK.  After doing some reading, things are a little clearer.
> Could it be switchable?  i386 has -m96bit-long-double and
> -m128bit-long-double.

I'm not keen on this idea. IMHO ABI-breaking options tend to be of very
little practical use, especially on targets like Linux and *BSD where
binaries are expected to be portable between different configs/machines.
The conversation usually goes something like:
"I compiled with -mfoo and my program broke"
"Yes. You also need to recompile the rest of your system/libc with -mfoo"
"Meh. maybe I'll not bother".

> > AFAIK NetBSD doesn't support ColdFire, and even if it did, it would be
> > separate from the existing m68k port. Am I missing something?
>
> NetBSD has separate kernel ports for the various 68k machines, but they
> are binary compatible at the application level.  If I had a v4 eval
> board with FPU at home, I'd certainly be attempting a port, if for no
> other reason than to do bulk builds of 68k binaries at 200+MHz rather
> than 50.
>
> In general I'm a bit grumpy about the current orthogonality of
> ColdFire support being added to gcc without updating support for
> existing 68k processors.  I'd like to see a common 68k target that
> would be least-common-denominator compatible across all 68k and
> ColdFire variants, not for any particular application, but as a proof
> of concept that the changes being made to gcc are portable across the
> various 68k implementations, and that the necessary flexibility to
> handle the variants is being built in rather than bolted on.

Well, to be honest m68k and ColdFire are fairly different architectures.
The basic instruction format is the same, but the supported addressing
modes, FPU, MAC, MMU and exception model are all different.
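
That FPU difference is the root of the long double question: the 68881
family has a native extended-precision type, while the ColdFire FPU only
handles single and double precision. Code that cares about the actual
format can check <float.h> instead of hard-coding it. A minimal sketch
(the values in the comments are my expectations for the configurations
being discussed, not verified output):

  /* probe.c -- report what this toolchain's "long double" really is. */
  #include <float.h>
  #include <stdio.h>

  int main(void)
  {
      /* 68881-style extended precision gives LDBL_MANT_DIG == 64;
       * if long double is just double, it would be 53. */
      printf("sizeof(long double) = %u\n", (unsigned)sizeof(long double));
      printf("LDBL_MANT_DIG       = %d\n", LDBL_MANT_DIG);
      printf("LDBL_MAX_EXP        = %d\n", LDBL_MAX_EXP);
      return 0;
  }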

There have been suggestions (though AFAIK no actual patches) that 68k and
ColdFire should actually be two separate gcc ports, rather than trying to
support them both in the same port.

Running 68k code on ColdFire may be theoretically possible (I haven't
checked all the details), but it would require trapping and emulating a
*lot* of instructions.  I wouldn't be surprised if your 200MHz ColdFire
ends up going slower than your 50MHz 68k.  I don't know if it's even
possible to run ColdFire binaries on a 68k machine.

[Getting off-topic now]

> I realize I'm in the minority in this.  I've heard a lot of whining from
> Bernie and Peter on this point, but I still see it as an issue that
> needs a better answer than "nobody uses the old stuff, just ignore it",
> which leaves gcc support for older (still shipping!) 68k processors
> stagnant, and I'd hate to see the same thing being repeated in the
> future.  (Oh, nobody uses v2 cores anymore...)

The only way to avoid that is to provide the resources (i.e. programmers,
or money to hire programmers) to maintain support for the "older stuff".
Whining just irritates the people you want to help you :-)

Paul