Subject: Re: Plans for importing ATF
To: David Holland <firstname.lastname@example.org>
From: Bill Stouder-Studenmund <email@example.com>
Date: 11/09/2007 15:09:21
Content-Type: text/plain; charset=us-ascii
On Tue, Nov 06, 2007 at 05:51:48PM -0500, David Holland wrote:
> On Tue, Nov 06, 2007 at 10:58:33AM +0100, Julio M. Merino Vidal wrote:
> > >>As a matter of fact, there are a couple of sh(1) regression tests
> > >>that fail at the moment in current, and I bet they remain unfixed
> > >>because no one actually executed the test suite to discover them
> > >>(which is understandable because it's not trivial to do).
> > >
> > >>Enh, what's so hard about "cd /usr/src/regress/bin/sh && make
> > >>regress"?
> > Doing that is not hard. But how do you collect the results of the run?
> > [...]
> I'm not saying what you've done is bad or not needed, just quibbling
> with the point you chose to raise. Running the tests for sh is not
> hard. :-)
How exactly is this quibbling supposed to help? Also, you seem to have
ignored part of what jmmv said. As a reminder, he said:
> First of all, I think it's an excellent tool for developers. Every
> time you touch a specific piece of code, say sh(1), you can easily go
> to the tests tree, run the tests and ensure that you have not broken
> [...]
He didn't say that running the tests now is hard. He said that with ATF it
will be easy to run the tests and ensure that you have not broken
anything.
The second half of that is key. Among other things, it adds interpreting
the results of the regression tests to the mix. That's where I find the
real difficulty.
I personally find something that says "PASS" "PASS" "FAIL" a lot clearer
than what I've seen in the pthread regression tests (I haven't looked at
the sh tests so I don't know how clear they are or aren't). As best I can
tell, if we don't hang, we pass. :-) Only one of them actually says
"PASS" on success.
> But improving the mechanism alone won't necessarily help the problem.
> It *isn't* hard to run the regression tests for sh, so why don't more
> people do so? I think the primary reason is that we tend to forget
> they're there. So it's important to increase the visibility. Doing a
> nightly test run and sending the results to current-users would be a
> good start.
> (In fact, one can start right away with something as simple as
> cd src/regress
> make regress >& LOG
> rcsdiff -u LOG | mail -s 'Nightly regress run' current-users
> ci -l -m `date +%Y%m%d` LOG
> until you're ready to commit the new stuff. There are problems doing
> it this way of course, but it works surprisingly well in practice.)
I think if we get to where we have a summary output, then we can think
about sending it to current-users. Sending the current output of, for
instance, the libpthread regression tests would be so useless to the
majority of people as to be spam. I see you would only send the diffs,
which is a lot less info. The problem there is that if someone misses
a post, they miss a new error.
I think auto-reporting is good! I think we just need to print some sort of
quick summary of problem areas ("lib/libpthread reports failures"), then
more info below ("test lib/libpthread/X failed"), then a URL so folks can
go see the exact results.
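As a sketch of that three-level report (the results-file format and the
URL are purely illustrative; this is not anything ATF produces today):

```shell
#!/bin/sh
# Hypothetical sketch: condense per-test results (lines such as
# "lib/libpthread/cond1: FAIL") into a short problem summary, then
# per-test details, then a pointer to the full logs.
format_report() {
    results=$1        # file of "area/suite/test: PASS|FAIL" lines
    url=$2            # where the complete output lives (illustrative)
    # Summary: one line per top-level area with failures
    awk -F/ '/FAIL/ { print $1 "/" $2 }' "$results" | sort -u |
        while read -r area; do
            echo "$area reports failures"
        done
    echo
    # Details: the individual failing tests
    grep FAIL "$results" | while read -r line; do
        echo "test ${line%%:*} failed"
    done
    echo
    echo "Full results: $url"
}
```

Someone scanning the mail sees the one-line summary; anyone who cares
can read the details or follow the URL.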
> I did just discover a problem: src/regress/Makefile is missing any
> reference to src/regress/bin. Maybe that's part of why the tests for
> bin get no attention.