tech-pkg archive


Re: Distributed bulk building for slow machines


I presented some ideas here and in wip/distbb's
/usr/pkg/share/doc/distbb/README

Thank you. I wasn't familiar with distbb. It may just be everything I need.

2) A list of priority packages which is made before a bulk (old
fashioned or pbulk) build is started

What's the reason to build priority packages and the rest in two steps?
They can all be built in parallel. See 2).

It's been suggested, and I generally agree, that some packages are pretty universally used and therefore it would be good to make sure they're available ASAP. Examples might be shells (tcsh, bash) and perl.
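As a minimal sketch of that idea (the list file and the echoed build step are placeholders, not actual pbulk or distbb configuration): keep a short list of high-value packages and walk it before kicking off the tree-wide build.

```shell
#!/bin/sh
# Hypothetical priority list. The package paths are real pkgsrc
# directories, but the file name and build command are illustrative.
PRIORITY_LIST=${PRIORITY_LIST:-/tmp/priority.txt}

cat > "$PRIORITY_LIST" <<'EOF'
shells/bash
shells/tcsh
lang/perl5
EOF

# Build each priority package before the full bulk build starts.
# Replace "echo" with the real per-package step, e.g.
# (cd /usr/pkgsrc/$pkg && make package).
while read -r pkg; do
    echo "building $pkg"
done < "$PRIORITY_LIST"
```

Once the loop finishes, the normal bulk build can run over the whole tree; the priority packages are then already uploaded and available.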

3) A method to use a fast machine to create a dependency tree which is
used after the priority package list is finished

Here, if you mean scanning the pkgsrc tree to gather information about
packages, then this step is also parallelized/distributed in distbb and
doesn't allow fast machines at all.

I'm not sure I understand how something can exclude fast machines...
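For what the scan step produces, the concept is simple: it yields a dependency graph, and any topological order of that graph is a valid build order, with each "ready" batch buildable in parallel on different machines. A toy sketch (the package names and dependencies are illustrative, not a real pkgsrc scan result):

```python
from graphlib import TopologicalSorter

# Toy dependency graph: package -> set of packages it depends on.
deps = {
    "shells/bash": {"devel/gettext"},
    "lang/perl5": set(),
    "devel/gettext": set(),
    "www/curl": {"devel/gettext"},
}

ts = TopologicalSorter(deps)
ts.prepare()
order = []
# Every package in a "ready" batch has no unbuilt dependencies, so a
# scheduler could hand the whole batch to different build machines at once.
while ts.is_active():
    batch = sorted(ts.get_ready())
    order.append(batch)
    ts.done(*batch)

print(order)
# → [['devel/gettext', 'lang/perl5'], ['shells/bash', 'www/curl']]
```

The scan itself (running make to extract dependency information per package) is what distbb distributes; the graph walk afterwards is cheap.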

I don't see any problem in setting up this kind of bulk build at all:
wip/distbb + rsync + ssh. So, yes, I can help configure it.

Thank you. I'd appreciate that.

Also, if any developers have fast VAXen (relatively speaking,
obviously), m68k machines, sh3 (sh4; either endianness), StrongARM, or
big endian MIPS which you'd like to volunteer, please let me know.

It makes sense to run hardware emulators for this zoo.

I'm in the process of setting up a nice 8 core system with 16 gigs of memory for this. The only problem I see is that since NetBSD doesn't run on EFI systems (it's an Xserve), I have to run the emulators under OS X.

Aside from simh for VAX and ARAnyM or UAE for m68k, what is recommended for MIPS, StrongARM, and sh3 / sh4?
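For reference, a rough mapping of emulators to the ports above, from general knowledge rather than tested configurations; the supported machine models should be checked against each emulator's documentation:

```shell
# Emulator candidates per port (a sketch, not tested setups):
#   VAX        -> simh's "vax" simulator, runs NetBSD/vax
#   m68k       -> ARAnyM (NetBSD/atari) or UAE (NetBSD/amiga)
#   MIPS       -> gxemul (DECstation, NetBSD/pmax) or qemu-system-mips;
#                 QEMU ships both big- and little-endian MIPS targets
#   StrongARM  -> gxemul's CATS model (SA-110, NetBSD/cats)
#   sh3/sh4    -> qemu-system-sh4 / qemu-system-sh4eb for the two
#                 endiannesses
```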

