floating point speed
Hi there,
I use a self-written C program, which does a lot of floating point arithmetic,
on my A4000 with a CyberStorm MK II 060/50. I have noticed that this program
runs between 2 and 2.5 times faster on AmigaOS (depending on whether
CyberPatcher from phase5 is used or not) than on NetBSD-1.2. In both cases
it was compiled with gcc 2.7.2 with the options '-O -m68040 -m68881'.
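To give something concrete to try, here is a small self-contained loop. It is
not my actual program, just the same kind of double arithmetic, together with
the compile line I use (the file name is of course arbitrary):

    /* fptest.c -- only a sketch of the kind of arithmetic involved, */
    /* not my real program                                           */
    #include <stdio.h>
    #include <math.h>

    int
    main(void)
    {
        double sum = 0.0, x;
        long i;

        for (i = 1; i <= 2000000; i++) {
            x = (double)i * 1e-6;
            sum += x * x + sqrt(x);     /* plain mul/add plus sqrt */
        }
        printf("sum = %f\n", sum);
        return 0;
    }

    /* compiled on both systems as:                    */
    /*   gcc -O -m68040 -m68881 -o fptest fptest.c -lm */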
On NetBSD, compiling only with '-O' makes the program about 5% slower, the
same as with '-O -m68020 -m68881' (I suppose the compiler defaults to these
processor options). Using '-m68060' instead of '-m68040' makes no significant
difference.
On AmigaOS, there is also a runtime increase of about 5% if the program is
compiled with '-O -m68020 -m68881' instead of '-O -m68040 -m68881'. Since
that compiler is configured to generate generic 68000 code by default, using
only '-O' makes the program about 3 times slower.
I cannot say how much of the runtime is consumed by which kind of operation,
because the program was never meant to be a benchmark. It is part of a data
analysis process, so it is used for "real work".
Can anybody explain to me where this significant difference in program
runtimes between NetBSD and AmigaOS comes from?
Are some libraries perhaps compiled with debugging code instead of
optimization (which would cost performance for all applications)? Or is the
emulation code for the coprocessor instructions that are not implemented in
hardware on the 68040/68060 less efficient?
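If it helps to narrow this down, a split test along the following lines
(again only a sketch) should show whether the slowdown follows the basic
arithmetic or the functions that end up in software on the 68040/68060:

    /* fpsplit.c -- sketch: time plain add/mul separately from       */
    /* transcendental functions (sin/exp), which the 68040/68060 FPU */
    /* does not implement in hardware, so they are handled in        */
    /* software one way or another                                   */
    #include <stdio.h>
    #include <math.h>
    #include <time.h>

    int
    main(void)
    {
        double s = 0.0, x;
        long i;
        clock_t t0, t1, t2;

        t0 = clock();
        for (i = 1; i <= 1000000; i++) {      /* hardware add/mul only */
            x = (double)i * 1e-6;
            s += x * x + 0.5 * x;
        }
        t1 = clock();
        for (i = 1; i <= 1000000; i++) {      /* sin/exp path */
            x = (double)i * 1e-6;
            s += sin(x) + exp(x);
        }
        t2 = clock();

        printf("add/mul: %ld ticks, sin/exp: %ld ticks (s = %f)\n",
            (long)(t1 - t0), (long)(t2 - t1), s);
        return 0;
    }

    /* gcc -O -m68040 -m68881 -o fpsplit fpsplit.c -lm */

If the second loop is where AmigaOS and NetBSD differ most, that would point
at the emulation or library code rather than at the compiler output.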
Regards,
Stefan Hensen