NetBSD-Users archive
Re: NetBSD macros
On 18.12.2009, at 22:25, Greg A. Woods wrote:
> No, actually you don't, usually -- and that's my point.  You only use
> the system-specific identification macros for system-specific features,
> i.e. those features found _only_ on one (1) type of system.
And these system-specific features are either different per OS (many
#ifdefs) or available on many systems (long #ifdefs). Both lead to bad
code and should be avoided by testing for the specific feature instead,
which will always be smaller and more efficient if you are trying to be
portable.
> If the feature is found on multiple types of systems, but not all, and
> for some reason you must still use it, then you should identify it by a
> unique identifier and find some other way to enable that identifier,
> other than messing about with system-specific identifiers.
And this is what autoconf does.
> (However, BTW, often the version checking is irrelevant because the
> feature is, for this example say, a common BSD feature that was found in
> the original code from which all the various modern variants have been
> derived -- a wee bit of history goes a long way!)
This is only true for BSD features, but it is already false when it
comes to POSIX features. Just think of getaddrinfo, for example, which
is still not thread-safe on OpenBSD, and was not thread-safe on NetBSD
3.x and FreeBSD 5.x either. So you _HAVE_ to check versions here.
> Also, all that said, I'd rather see a few snarly #ifdef lines in one
> header file that can sort out a few necessary conflicting features than
> to have to use Autoconf.
Then you prefer bad design.
> Most configure.ac files are large and result in huge scripts with many
> tests because if you truly wish to use Autoconf properly to make your
> code as ultra-portable as possible then you really must include a very
> large number of tests (and also all the necessary #ifdef's in your
> code), even for quite simple programs.
Badly written configure.ac files are not autoconf's fault -- most
configure.ac files check things they never use anyway, or at least they
check them and never pay attention to the result. It is quite possible
to write short configure.ac files that do the checks you want far more
efficiently than your #ifdef hell.
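A short configure.ac really can stay short -- a minimal sketch (the
feature names checked are examples, not a recommended list):

```sh
AC_INIT([demo], [1.0])
AC_PROG_CC

# Check only for the features the code actually uses ...
AC_CHECK_FUNCS([strlcpy getaddrinfo])
AC_CHECK_HEADERS([sys/socket.h])

# ... and let config.h carry the HAVE_* results into the code.
AC_CONFIG_HEADERS([config.h])
AC_OUTPUT
```

Ten lines of input; every check feeds a HAVE_* macro that the code
actually consults.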
> I.e. they often do know what they're doing -- it's just futile to do it
> sometimes.
No, checking for a feature and never paying attention to the result is
not knowing what you are doing -- and a lot of configure.ac files do
exactly that, because they are often just copied and pasted from another
configure.ac file without really being understood.
> That's the problem with doing feature tests and relying on them solely
> to keep your code portable.
>
> I do agree that some people do write too many feature tests, but unless
> you're able to go out and re-test the program on every platform it has
> already been tested on, it's sometimes really difficult to prove that
> any given feature test can be safely removed.  Not always, but
> sometimes!
Well, as many people write code that only works with gcc anyway, you can
leave out a lot of the compiler checks. You then only need to test gcc
for C99 compatibility as a whole, for example, instead of checking for
each C99 construct separately.
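A coarse whole-compiler check along those lines might look like this
(a sketch of the idea; autoconf's own AC_PROG_CC_C99 macro does the real
work):

```sh
# Compile one representative C99 program instead of probing each construct.
cat > conftest.c <<'EOF'
#include <stdbool.h>
int main(void)
{
    for (int i = 0; i < 3; i++) {   /* C99: declaration in for loop */
        bool ok = true;             /* C99: <stdbool.h> */
        (void)ok;
    }
    return 0;
}
EOF
if ${CC:-cc} -std=c99 -o conftest conftest.c 2>/dev/null; then
    have_c99=yes
else
    have_c99=no
fi
rm -f conftest conftest.c
echo "C99 compiler: $have_c99"
```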
> The right way to do things is to write code that's portable in the first
> place.  Don't ever make use of _any_ system-specific features unless
> absolutely necessary!  Use template header files to adapt to the few
> features which often do vary between systems.
Well, it almost always is absolutely necessary -- even for simple tasks
like using sockets.
> There are of course classes of programs which must make use of features
> which are not standardised sufficiently and thus they must have
> alternate implementations to supplement systems without, etc.
Which is basically almost every application doing more than simple
calculations.
> Writing good portable code does require one to be a good historian of
> the relevant systems and standards, and also to know when supporting a
> given system, or class of systems, or even a standard, has become
> irrelevant.
Who are you to decide whether a system is irrelevant or not? You can't
know whether it's still out there in production use. It's not that long
ago that they shut down the last Multics system…
> I suppose a first-time programmer who knows little beyond the system he
> or she is initially working with, can achieve some degree of portability
> of their early code by simply following the Autoconf guide and/or book.
>
> However even that can come later -- if they simply write good clean code
> that works properly on the one system they have access to, but which is
> as much as possible written to use the _standard_ APIs that their system
> happens to support (POSIX, ISO C, etc.), then any issues with porting to
> another system can be addressed at a later time, perhaps with the help
> of someone more experienced in porting code.  I.e. I would not recommend
> one try to make one's first big/largish program portable by immediately
> starting with the likes of GNU Autoconf!
Yeah, right -- if only the standard APIs were fully supported by at
least one system. Neither GNU nor the BSDs fully support POSIX, and gcc
does not fully support ISO C99 either.
> The point is that the system does not (usually) change!  Every test that
> you do twice or more is a waste of resources.
This is especially false for dependencies. Dependency libraries often
change, for example.
> IFF the system does change, eg. you upgrade to a new version, and IFF
> the system's APIs have changed incompatibly, then you need to throw away
> the cache and start fresh.  You know when you've changed the system, so
> you know how to manage your cache file!
So you want to regenerate the entire cache each time you change only a
little bit of the system? Great, now _THAT_'s a waste of checks! And
what if I decide to compile a single application with a different
compiler? What if I decide to cross-compile? What if I changed some
flags with which some features are not available?
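For the record, the cache workflow being argued about is roughly this
(a sketch assuming a generated configure script is present):

```sh
./configure -C            # -C stores test results in config.cache for reuse
# ... later, after upgrading the system or switching compilers:
rm -f config.cache        # the cached answers may now be wrong
CC=clang ./configure -C   # re-run every test once against the new setup
```

Note that the cache is per build tree, which is exactly why a different
compiler or different flags mean starting over.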
> Ideally Autoconf would come with a standard boiler-plate configure.ac
> file which could be run once on a target system to generate a cache for
> all standard tests.
I can already see the bug reports caused by broken caches. Yeah, really
something I want to spend my time on: closing bugs in bug trackers and
telling people to regenerate their cache…
>> This is why abstraction was invented - and as most GUI applications
>> use Qt or GTK, they come with a framework as well that takes care of
>> portability for you.
> I would claim that's not why many abstractions were invented, especially
> not for GUIs.  They are not primarily portability layers -- though
> sometimes they can be made to do that job as well.  Even those which did
> start out as simple portability layers have almost always gone well
> beyond that level.  Most abstraction layers are designed to hide
> complexity and to (hopefully) share common code.
Well, the main purpose of glib and Qt4Core is to provide functions for
commonly used tasks (like, for example, opening a file or building a
path) on every system they work on. That is what I'd call a portability
layer.
--
Jonathan