tech-net archive


Re: Removing ARCNET stuffs



On Sun, May 31, 2015 at 09:24:48PM -0400, Andrew Cagney wrote:
 > On 30 May 2015 at 19:09, David Holland <dholland-tech%netbsd.org@localhost> wrote:
 > > The reason I floated the idea of forking is that an OS that's
 > > specifically intended to be a high-quality Unix for older hardware can
 > > make a different set of decisions (most notably, it can let C++ go
 > > hang) and this allows avoiding a wide and growing range of problems
 > > that currently affect NetBSD on old hardware. Meanwhile, it would also
 > > (one hopes) allow retaining a critical mass of retrocomputing
 > > enthusiasts in one place, as opposed to having them all gradually
 > > drift away in different directions.
 > 
 > Perhaps there are several, for want of a better phrase, "niche" plays here:
 > 
 > - remove C++ from base; Since when was UNIX's system compiler C++

While killing off C++ would be a great thing on all kinds of grounds,
it isn't practical. If we removed C++ from base, one of the first
things nearly everyone would have to do is build a C++ compiler from
pkgsrc, which is pointless. Also, since nowadays both gcc and clang
require C++ to build, it's highly problematic: one would need to begin
by downloading bootstrap binaries.

Too much 3rd-party software (including high-demand things like
Firefox) is written in C++.

An OS that's specifically not meant to host such things can blow that
off, but NetBSD per se is not such an OS and is very unlikely to ever
become one.

 > (oh and please delete C++ groff,  just replace it with that AWK script)

which awk script? :-)

(Quite seriously, I've been looking for a while for an alternative to
groff for typesetting the miscellaneous articles in base.)

 > - focus more on, and sorry for the marketing term, "scalability"
 > 
 > That is being able to run things on smaller and larger systems.  This
 > means more focus to algorithms and choices, and less focus on burning
 > ram like no tomorrow.

That's not what "scalability" means out in the real world; there it
means "running on as many CPUs as possible".

But ignoring that -- who (other than apparently the gcc development
team) is focusing on burning ram? As I've already said a couple of
times, I'm sure there are places in NetBSD that are gratuitously slow
for no reason other than that the slowness is imperceptible on the
hardware most developers use for development. I at least am happy to
fix these when they're
found; but I'm not in a position to find them and nobody in general
seems to be looking.

Similarly, it's certainly the case that the system's been expanding
over the years and that some of this is bloat; however, finding actual
instances of bloat is not so easy. Finding features that have been
added is easier; but most of those were added for a reason and/or
someone is using them, so simply pruning them back isn't feasible.

A system whose specific goal is to make like 1990s Unix (or 1980s
Unix) can make fundamentally different decisions about things like
this -- as an obvious example, it probably isn't necessary for such a
system to scale past, say, 16 processors, so a lot of the complicated
machinery that currently exists in pursuit of scaling could be
reverted. That alone would make a noticeable difference.

 > - look at, cough, simulators, cough, as a way to test and develop for
 > less mainstream architectures
 > 
 > The build process is sick, figuring out how to get it running on QEMU,
 > say, isn't.  Can this work, consistently, across all platforms:
 > 
 >     qemu-system-XXX -nographic -kernel
 > sys/arch/XXX/compile/QEMU/netbsd -cdrom releasedir/... -sd0 disk.img

No, it can't. But it would certainly be helpful to have top-level
rules to set up and boot test images. Currently you can use "anita
install" and "anita interact" for this, sort of, on some targets, but
it doesn't integrate too well with the build system.
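
(For concreteness, the anita workflow looks roughly like this; the
release URL is only an example:

   # download the release sets and install them into a fresh disk image
   anita install http://ftp.netbsd.org/pub/NetBSD/NetBSD-6.1.5/i386/
   # boot the installed image and get a console on it
   anita interact http://ftp.netbsd.org/pub/NetBSD/NetBSD-6.1.5/i386/

anita drives an emulator -- typically qemu -- under the hood, which is
part of why it only covers some targets.)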

Being able to do
   ./build.sh release
   ./build.sh live-image
   ./build.sh boot-live-image

would be nice. Or since live-image may have usage constraints that
make it not what one wants for hacking,
   ./build.sh release
   ./build.sh test-image
   ./build.sh boot-test-image

and maybe 
   ./build.sh test-image-kernel=TEST
   ./build.sh boot-test-image

to make for shorter build/test cycles. (At least for x86 and qemu,
this involves using a second disk image to hold the kernel -- qemu's
-kernel option only works for Linux kernels. I have found that
keeping a third disk image around for scratch files that persist
across image regeneration is worthwhile too.)
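
For anyone wanting to replicate this, the invocation I have in mind
looks roughly like the following; the image names are made up, and it
relies on the boot blocks on the first disk to load the kernel off the
second:

   qemu-system-x86_64 -nographic \
       -drive file=root.img,format=raw \
       -drive file=kernel.img,format=raw \
       -drive file=scratch.img,format=raw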

I don't think there's any reason this can't be done; it just hasn't
been. It needs a fair amount of target-specific logic to be useful, as
the emulator of choice often isn't qemu; but getting that logic into a
central place instead of dispersed among private scripts belonging to
individual developers would be a step forward.
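
To illustrate, the shared part might be little more than a per-target
dispatch table; something like this (entirely hypothetical, and the
port/emulator pairings are only examples):

   # sketch: pick the emulator for build.sh boot-test-image to run
   case "$MACHINE" in
   amd64)   emu="qemu-system-x86_64" ;;
   i386)    emu="qemu-system-i386" ;;
   sparc)   emu="qemu-system-sparc" ;;
   hpcmips) emu="gxemul" ;;   # qemu isn't the emulator of choice here
   *)       echo "no emulator configured for $MACHINE" >&2; exit 1 ;;
   esac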

Developing mostly on emulators, though, has a way of leading to the real
hardware not actually working. This has already happened a few times. :-|

-- 
David A. Holland
dholland%netbsd.org@localhost

