Subject: Re: Plans for importing ATF
To: Hubert Feyrer <firstname.lastname@example.org>
From: Julio M. Merino Vidal <email@example.com>
Date: 11/05/2007 14:46:15
On 05/11/2007, at 13:49, Hubert Feyrer wrote:
> On Sun, 4 Nov 2007, Julio M. Merino Vidal wrote:
>> Please raise your concerns quickly. I probably won't be able to
>> do all this for the following two weeks, but who knows, maybe I'll
>> find time ;-)
> No concerns, but: what do we do with it?
First of all, I think it's an excellent tool for developers. Every
time you touch a specific piece of code, say sh(1), you can easily go
to the tests tree, run the tests and ensure that you have not broken
anything. I have found this to be invaluable when working on projects
that have extensive test suites that can easily be executed at will.
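For instance, the workflow could be as simple as this (the path is
illustrative, and ATF's atf-run(1) and atf-report(1) tools are assumed
to be installed):

```shell
# Run the sh(1) tests and turn the raw results into a readable report.
# The tests directory shown here is hypothetical.
cd /usr/tests/bin/sh
atf-run | atf-report
```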
Then, it'll also be useful to end users. For example, after you have
successfully installed a release on a production machine, you can
easily verify that it passes some basic tests, ensuring it 1) remains
stable and 2) behaves as expected. At the moment most people just
"build a release" as a stress test, but the idea is that you can
have something more accurate and powerful. This is especially
interesting on platforms that see less testing; we all know that
there is much more (obvious) breakage on them than on, say, i386.
As a matter of fact, there are a couple of sh(1) regression tests
that fail at the moment in -current, and I bet they remain unfixed
because no one actually executed the test suite to discover them
(which is understandable, because it's not trivial to do). Some other
tests (the ones for df(1)) pass on i386 but fail on amd64. I expect
people will feel more pressure to fix those problems if they can be
notified of them early enough (or notified at all).
> How does this tie into the NetBSD release process? Will it be run
> as part of the daily builds, or is there a dedicated machine that
> runs this, and posts reports? While I understand that this is
> technically better than src/regress, how do we solve the problem of
> actually getting those tests run on a regular basis?
There are no concrete plans for this yet. To get started, I'd say
"don't branch/release if any test fails". That'd be something for
releng to do, but I'd really like to see some machine running the
test suites periodically, and on as many platforms as possible.
(That's basically the point of the HTML reports: we could collect
them on a single machine and easily show them to interested
developers.)
We can sort this out later, once people have experimented with the new
framework, raised concerns about it, and we have more tests in the
tree.
> Oh, and from the 'documentation' department: Is there some "intro"
> text that shows a software author how to write tests for his software?
I added some examples to the web site, but I haven't had the time to
write documentation. (There are manual pages for all the tools,
though.) That said, I think writing tests is very easy, so using
existing ones as examples should not be too problematic for now. A
lame excuse for the lack of formal documentation, I know, and I'll
eventually get around to writing some.
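To give a feeling for it, a minimal atf-sh test program might look
something like this (the test case name and the command being checked
are made up for illustration; atf-sh is assumed to be installed):

```shell
#! /usr/bin/env atf-sh
# A minimal ATF shell test program (illustrative sketch).

atf_test_case echo_works
echo_works_head() {
    atf_set "descr" "Checks that echo prints its argument"
}
echo_works_body() {
    # Fail the test case with a message if the check does not hold.
    [ "$(echo hello)" = "hello" ] || atf_fail "echo did not print hello"
}

atf_init_test_cases() {
    atf_add_test_case echo_works
}
```

The head defines metadata, the body performs the checks, and
atf_init_test_cases registers the test cases the program provides.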
> (And I'm not going to ask if we're going to make regression tests
> mandatory for all code that we import in the future now :-)
We should! :-) But again, this is a policy that can be enforced
later on, once people get used to the framework and see the value of
it.
Julio M. Merino Vidal <firstname.lastname@example.org>