
Re: Tuning pbulk concurrency for large packages



Hi,

Jonathan Perkin mentioned using these tools:

> For anyone interested, I've been running this patch for around a decade:
>
> https://github.com/TritonDataCenter/pkgsrc/commit/2d2a83a6a93b46839a971a4fba22805915de50c0
>
> and then have this tool:
>
> https://github.com/TritonDataCenter/pkgbuild/blob/master/scripts/gen-make-jobs
>
> to analyse a MAKE_JOBS=1 run and generate a suitable file for inclusion that tunes MAKE_JOBS for the bigger packages. It could probably be adapted for PBULK_WEIGHT, though it would be better to also log build start times and perform analysis based on that.
>
> With MAKE_JOBS tuned well I'm not currently bottlenecked on a long-tail, so I haven't looked into PBULK_WEIGHT yet.
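
For illustration, I imagine the file such a tool generates is a make fragment along these lines; this is my guess at the shape, not necessarily gen-make-jobs's actual output:

    # included from mk.conf; give the big builds more jobs
    .if ${PKGPATH} == "lang/rust"
    MAKE_JOBS=      8
    .elif ${PKGPATH} == "www/firefox"
    MAKE_JOBS=      6
    .endif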

I'd like to see about using this tool to figure out the best -j to use for certain packages on memory-limited machines (aarch64eb on a six-core machine with 4 GB of RAM), plus perhaps record maximum resident set size (such as from time -l) for dividing up work between machines with different amounts of memory.
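
As a rough sketch of the RSS side, relying on time -l as above (the wrapper, paths, and log layout are all made up):

    # Build one package with MAKE_JOBS=1 and capture time -l's rusage
    # report; the build's own stderr lands in the same log, so grep
    # out the line we want afterwards.
    pkg="$1"                                  # e.g. lang/rust
    log="/tmp/rss/$(echo "$pkg" | tr / _)"
    mkdir -p /tmp/rss
    ( cd "/usr/pkgsrc/$pkg" &&
      /usr/bin/time -l make MAKE_JOBS=1 package > /dev/null 2> "$log" )
    grep 'maximum resident set size' "$log"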

Has anyone done anything like this? Perhaps data for a given quarter's packages would be helpful to others.

There are other things I'd like to do, like tracking which packages have .xz distfiles and using a custom extract step so that VAXen and m68k don't have to unxz (or sometimes xz) them on their own, but these aren't the kinds of things people with fancy machines need to worry about.
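
For that, even something as simple as pre-decompressing on a faster machine and letting the slow ones fetch the plain .tar might do; a sketch, with an illustrative distfiles path:

    # Keep decompressed copies alongside the .xz distfiles so the
    # slow machines never have to run unxz themselves.
    for f in /usr/pkgsrc/distfiles/*.xz; do
        [ -e "${f%.xz}" ] || unxz -k "$f"     # -k keeps the original .xz
    done

The custom extract would then need to prefer the .tar when present, and the distinfo checksums would need handling somehow.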

John

