tech-pkg archive


pkgsrc stability vs. updates



Sorry, I had misread the statement; I didn't want to argue about
dates.

I agree that fallout from changes is bad and should be avoided. We
can discuss the cases that happened recently in detail, if you want,
but that's not my point here.

I see this as a cooperative project, so if there is fallout (on
NetBSD), I ask the committers to fix it, and more often than not I
try to step in and help.

I see this (value?) discussion as a weighing of two extremes:

One can just stay at pkgsrc-2024Q4 forever. The advantage is
stability: nothing breaks (that isn't already broken). On the other
hand, you don't get security fixes, updates, or new packages.[1]

Or you can follow HEAD, where you get the latest and greatest, but
packages will also stop building from time to time. Sometimes it's
carelessness, which is what I think angers most people here;
sometimes it's upstream bugs; and sometimes it's just pkgsrc or
operating system "weirdness", since we're not Debian Linux.
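
(For concreteness: as far as I remember, the only difference between
the two is which tag you check out over anoncvs; the pkgsrc guide has
the authoritative commands, but roughly:

  # latest stable branch
  cvs -q -z2 -d anoncvs@anoncvs.NetBSD.org:/cvsroot checkout -r pkgsrc-2024Q4 -P pkgsrc

  # HEAD / -current
  cvs -q -z2 -d anoncvs@anoncvs.NetBSD.org:/cvsroot checkout -P pkgsrc

Please check the guide for the exact flags before relying on these.)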

I also agree that getting commits into a shape where "everything
still works" is a good goal to aim for.

On the other hand, if we set the requirements too high, people will
just stop updating packages, because they don't want to deal with the
fallout. Just as one example, I don't think it's reasonable to expect
one person to fix all packages using boost when updating it; this is
where (not just for boost, but in general) I hope for a common
effort. (I think I have fixed more than my fair share of boost
fallout, which is why I dare use it as an example :-) ).

Another requirement that I think is too high is what dreckly claims
to aim for: no regressions on any platform in CI. I, for one, don't
have the setup to test on or fix seven or more operating systems, and
I don't know anyone else who does (with perhaps the exception of
Jonathan). CI is nice for telling you when something breaks, but it's
bad for actually fixing things, because all you can do is submit
"guess-fixes" and wait for the next run to tell you whether they
solved the problem. That is highly inefficient.

Compare that to the trouble we sometimes have fixing packages even
for NetBSD 10/x86_64 when they fail in bulk builds but build fine on
the committer's machine (not trying to pick on anyone, but a recent
example would be fltk).

So even if some people don't want to hear it: if an operating system
is supposed to work well with pkgsrc, we need at least one person who
is actively working on fixing packages for that system. And yes, that
will most often mean fixing updates, because they were "only" tested
on other operating systems.

Perhaps we should reduce our portability claim to only mention the
operating systems where we have such people (I think NetBSD, macOS,
Illumos, some other Solaris?).

This is also why I think it's so important to feed patches back
upstream. That way upstream knows there is interest in their software
on that platform, might even set up their own CI, and we don't have
to maintain the patches in pkgsrc when updating packages. From my own
experience, this has become much, much easier in recent years, and
many upstreams are quite happy to merge patches quickly, even for
weird operating systems like NetBSD.

Anyway, just a brain dump from my side.
 Thomas

[1] I'd be interested to hear what the problems are with developing
infrastructure changes on the latest stable branch: mk/ doesn't
change that much in day-to-day commits, so it should be stable to
develop against and easy to pull up to HEAD.
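
(In case it helps, my naive sketch of that workflow, with made-up
directory names, would be to keep a checkout of the stable branch
next to one of HEAD, develop the mk/ change in the former, and carry
the same diff over, roughly:

  cd pkgsrc-2024Q4 && cvs -q diff -u mk > /tmp/mk-change.diff
  cd ../pkgsrc-current && patch -p0 < /tmp/mk-change.diff

i.e. the pull-up to HEAD should usually be a plain patch(1)
application, as long as mk/ hasn't diverged much in the meantime.)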

