Subject: Re: Plans for importing ATF
To: David Holland <>
From: Julio M. Merino Vidal <>
List: tech-userlevel
Date: 11/06/2007 10:58:33
On 06/11/2007, at 3:47, David Holland wrote:

> On Mon, Nov 05, 2007 at 02:46:15PM +0100, Julio M. Merino Vidal wrote:
>> As a matter of fact, there are a couple of sh(1) regression tests
>> that fail at the moment in current, and I bet they remain unfixed
>> because no one actually executed the test suite to discover them
>> (which is understandable because it's not trivial to do).
> Enh, what's so hard about "cd /usr/src/regress/bin/sh && make regress"?

Doing that is not hard.  But how do you collect the results of the run?  There is no unified log.  How do you configure tests that may require manual configuration before running?  How do you know which ones these are?  Why do you need a source tree to do that?  And on another order of things, what is the correct way to write those test programs?  How should results be reported?  Etc.
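To make the "no unified log" point concrete, this is roughly the sort of ad-hoc wrapper one has to write by hand today.  It is only a sketch: the directory names, the simulated results, and the log format are invented for illustration, and the real invocation would be the commented-out `make regress` per directory.

```shell
#!/bin/sh
# Hypothetical sketch (not existing infrastructure): hand-rolled
# aggregation of scattered per-directory regression runs into a
# single log.  Directory names and log format are made up.
LOG="${TMPDIR:-/tmp}/regress.log"
: > "$LOG"
for dir in bin/sh bin/ed lib/libc; do
    # In real use this would be:
    #   (cd /usr/src/regress/$dir && make regress)
    # Here we fake each suite's result so the sketch is self-contained.
    case $dir in
        bin/ed) status=1 ;;   # pretend one suite fails
        *)      status=0 ;;
    esac
    if [ "$status" -eq 0 ]; then
        echo "PASS: $dir" >> "$LOG"
    else
        echo "FAIL: $dir" >> "$LOG"
    fi
done
cat "$LOG"
```

Every developer who wants an overview ends up reinventing something like this, which is exactly the gap a real test framework should close.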

> I think the problem is that most of the sh tests check things
> sufficiently obscure that most people aren't willing to venture an
> opinion regarding whether it's sh or a failing test that's wrong.
> Then, the set of people willing to fiddle inside sh probably isn't
> that large either.

sh was just an example.  I assume that whoever added the tests in the first place ensured that they passed at that time.  (And in fact, IIRC, I tried some of them in NetBSD 4 and they raised no problems, whereas they did fail in -current.  But I may be misremembering.)  So if they fail now, they are exposing regressions.  But that's no excuse to claim that there are no problems in the code!  The problems are indeed there, and they should be fixed.

But leaving sh aside due to its complexity, that's why I want tests during development.  When I add them, I ensure that the corresponding functionality passes them, so that when they later fail I know I've broken something.
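A minimal sketch of the kind of self-checking test I mean (the checked behaviour and the names here are invented for illustration, not taken from the tree): the test exercises some functionality, and a runner such as "make regress" only needs the function's exit status to know whether something broke.

```shell
#!/bin/sh
# Hypothetical minimal regression test.  The feature exercised
# (basic printf(1) behaviour) is just an example stand-in.
check_printf() {
    expected='abc'
    actual=$(printf '%s' 'abc')   # exercise the functionality under test
    [ "$actual" = "$expected" ]   # exit status signals pass/fail
}

if check_printf; then
    echo "ok: printf basic"
else
    echo "not ok: printf basic"
fi
```

The value is not in the individual check but in running all such checks automatically after every change, so a regression shows up the moment it is introduced.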

Julio M. Merino Vidal <>