Current-Users archive


failing regression tests (please fix them!)



Hi,

We still have 24 failing regression tests in the standard "anita test" run:

Failed test cases:
    ipf/t_ipf:bpf1, ipf/t_ipf:bpf_f1, ipf/t_ipf:n1, ipf/t_ipf:n11,
    ipf/t_ipf:n2, ipf/t_ipf:n4, ipf/t_ipf:n5, ipf/t_ipf:n6, ipf/t_ipf:ni10,
    ipf/t_ipf:ni11, ipf/t_ipf:ni12, ipf/t_ipf:ni19, ipf/t_ipf:ni20,
    ipf/t_ipf:ni5, kernel/t_umount:umount, lib/libevent/t_event:kqueue,
    lib/libevent/t_event:poll, lib/libevent/t_event:select,
    net/sys/t_connect:low_port, util/df/t_df:normal,
    util/grep/t_grep:basic, util/grep/t_grep:file_exp,
    util/sh/t_expand:strip, util/sh/t_set_e:all

Most of them look like broken or out-of-date tests.  (There is also a
bunch of tests that are skipped for incorrect reasons.)

As should be well known, at least among developers, the details are
available on Andreas' most excellent automatic build/install/test page at:
http://www.gson.org/netbsd/bugs/build/

To establish a clean "0 broken tests" baseline from which we can start
monitoring regressions effectively, in a few weeks I will mark all of
the remaining failing tests as skipped with the tag "suspected broken".
This is because we have no proof that those tests ever worked -- I do
trust that the person who initially committed them made sure they
worked, but I have noticed several times myself that a test which worked
on my desktop failed in the "anita test" run due to slightly different
environments.
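For reference, skipping a test case in an atf-sh test usually looks
something like the sketch below.  The test-case name and description are
illustrative only, not a patch against any of the tests listed above;
atf_skip is the standard atf-sh call for reporting a case as skipped
with a reason.

```shell
# Sketch: marking a suspected-broken atf-sh test case as skipped.
# This fragment only runs under the ATF framework (via atf-run);
# it is not a standalone script.

atf_test_case n1
n1_head() {
	# hypothetical description -- the real one stays as committed
	atf_set "descr" "Checks ipf rule parsing"
}
n1_body() {
	atf_skip "suspected broken"
	# original test body would follow here, unreached
}
```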

Meanwhile, please fix any tests you feel ownership of, or whose covered
feature you want to see receive good regression testing.
(And add new ones!)

