tech-net archive


Re: Experiments with npf on -current



Alistair Crooks wrote:
On Tue, Nov 22, 2011 at 10:55:09PM -0600, Jeremy C. Reed wrote:
interest of progress. Remember that this is -CURRENT, where things like
this are *supposed* to happen?

As for me, I was glad Darren pointed this out. (In fact, I was quite surprised to read the follow-up acknowledging known buggy code living in -current.)

-current should not have broken code (note that current-users list now has automated complaints on failures).

We should strive for a higher standard. We should encourage, or better yet require, that unit tests and/or behaviour tests accompany commits. (Was there ever a public core announcement saying that when code is added or a bug is fixed, the developer should consider adding ATF or regression tests for it?) (I'd like to extend this to include security audit tests as applicable, documentation requirements, and peer review requirements too.)

We should suggest, and even require, that code known to be broken be reverted. (I think this is already the rule, but it isn't happening?) (This will be easier once we have a better revision control system, so that more people can work comfortably on branches.)

It is this kind of short-term thinking that depresses me.  People do
not (typically) coordinate changes to the repo, and so there is
invariably some fallout, some things need to be fixed up, etc.  In
addition to that, I don't know anyone who has every single
architecture, let alone every single platform.  So some platforms go
untested.

I think it is completely unrealistic to expect -current to compile at
any one time, and I have been fixing some compilation issues in -current
on and off for a number of years now. Castigating people for checking
in things which are not 100% will have the marvellous effect of encouraging
people not to commit anything, rather than encouraging them to commit 100%
functional work.

Nowadays, we have people running automatic build tests, and the anita
runs are superb (thanks, gson!), along with some very enthusiastic
builders (bch and htodd, to name but two), and havard builds for some
of the more unorthodox architectures.  Which leads me to say:  I don't
know where you're coming from with this; in fact, I don't remember
your being active in this area, but I may have overlooked something
just recently.

So, yes, laudable aim - completely unworkable in practice.

Let me outline what my development process is for doing a
merge when importing ipfilter:

1) sync up a local copy of the repo with rsync to netbsd
2) checkout a copy of that repo
3) do a build of that checked out copy over nfs
4) create zfs snapshots for both the local repo and the
  local build on the server

This creates my baseline.
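
As a rough illustration, the four baseline steps might be
scripted something like this. Every name here (pool, paths,
rsync source, build flags) is my invention rather than the
actual setup, and the "run" wrapper only echoes and records
commands instead of executing them:

```shell
#!/bin/sh
# Dry-run sketch of the baseline steps above; all paths and
# dataset names are assumptions for illustration.
run() { echo "+ $*"; LOG="${LOG}+ $*
"; }

run rsync -az rsync://anoncvs.NetBSD.org/cvsroot/ /tank/cvsroot/  # 1) sync local repo
run cvs -d /tank/cvsroot checkout -d /tank/work src               # 2) checkout a copy
run sh /tank/work/build.sh -m amd64 release                       # 3) build over NFS
run zfs snapshot tank/cvsroot@baseline                            # 4) snapshot repo...
run zfs snapshot tank/work@baseline                               #    ...and build tree
```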

1) import local changes into the local repo, update from
  vendor branch to head and checkout
2) apply any required patches
3) update the checked out copy
4) run a build (without doing a "cleandir")
5) if there are any issues, add changes to required patches,
  rollback zfs snapshots, go back to (1)

Steps 1, 2, 3 and 4 are all scripted.
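
The retry cycle could be sketched roughly like this. Again a
dry-run sketch: module and tag names are invented, and since
"run" only echoes, the "build" here always succeeds and the
loop exits on the first attempt:

```shell
#!/bin/sh
# Dry-run sketch of the import/patch/build/rollback loop above.
run() { echo "+ $*"; LOG="${LOG}+ $*
"; }

attempt=1
while :; do
    echo "== attempt $attempt =="
    run cvs import -m 'import ipf' src/external/bsd/ipf IPFILTER v5-x  # 1) import to vendor branch
    run sh apply-patches.sh                                            # 2) required patches
    run cvs update -d                                                  # 3) refresh checkout
    if run sh build.sh -u release; then                                # 4) build, no cleandir
        break                                  # build OK: ready to commit upstream
    fi
    run zfs rollback tank/cvsroot@baseline     # 5) failure: roll back both
    run zfs rollback tank/work@baseline        #    snapshots and start over
    attempt=$((attempt + 1))
done
```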

I now have something that should let me update netbsd's
cvs without causing anyone grief. If there are any problems,
they're likely to be minor and quickly fixed.

At this point I also have something that is easily tested.
In the past I relied on ipftest as the only means of
regression testing (it processes the packets entirely in
a user-space application that embodies all of the kernel
functions); I've now expanded that to allow packets to be
tested in the kernel as well. The next steps along that
path are to use dedicated test NICs (so that I can test
the path into and out of IP) and, further, to use
different hosts.

It goes without saying that the above use of ZFS snapshots
means that my server isn't running NetBSD, but the extra
time spent building over NFS is recovered by being able
to do snapshot rollbacks.

I haven't tried the above with NetBSD's UFS snapshots, so
I don't know whether they would work. If someone needs a reason to
keep pushing for ZFS to work well on NetBSD, consider the
above benefits for the commit cycle. For someone doing
test builds, you could do an "update; snapshot; incremental
build" quite easily.
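
That "update; snapshot; incremental build" cycle for a test
builder might be as small as the following dry-run sketch
(dataset name and build flags are illustrative; "run" only
echoes and records the commands):

```shell
#!/bin/sh
# Dry-run sketch of an "update; snapshot; incremental build" cycle.
run() { echo "+ $*"; LOG="${LOG}+ $*
"; }

run cvs update -d                              # pull the latest -current
run zfs snapshot "tank/work@$(date +%Y%m%d)"   # checkpoint before the build
run sh build.sh -u -U release                  # incremental build (-u: skip cleandir)
```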

In Solaris, all commits to the internal Hg repo kick off
a similar sequence of events, and committers get an automatic
email if their commit resulted in any new build or lint
errors/warnings. ZFS is the primary enabler here.
Occasionally there are commits where an incremental build
is known to be problematic, but this is called out in advance
by the committer(s).

Darren


