ATF-devel archive


Re: Test interdependences, and globals

On Tue, Jul 6, 2010 at 9:11 PM, Cliff Wright <> wrote:
> One of the aspects of atf that I struggle with is the lack of globals. For 
> instance I have a test that generates a unique value that I need to use for 
> all other tests. I currently store this in /tmp, and it needs to be cleaned 
> up later. Also if the test that generates this value fails, then all other 
> tests should not run. I handle this by generating a global use file (in /tmp) 
> in the first test called previous_passed (that I set the executable bit on). I 
> then use atf_set require.progs /tmp/previous_passed on all the subsequent 
> tests. This file then gets deleted if I want further tests to not run. A lot 
> of playing around to create the missing feature.
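The hack described above amounts to roughly this sketch. The steps are paraphrased from the description, and in real use each piece would live in a separate atf-sh test case rather than one script:

```shell
#!/bin/sh
# Sketch of the marker-file workaround: the first test drops an
# executable marker in /tmp, and later tests list it under
# require.progs so atf skips them once the marker is gone.
marker=/tmp/previous_passed

# In the first test's body, on success: create the marker and make it
# executable, because require.progs only accepts executables.
touch "$marker"
chmod +x "$marker"

# Every later test would then declare, in its head():
#   atf_set "require.progs" "$marker"

# Deleting the marker stops the remaining tests from running.
rm -f "$marker"
[ ! -e "$marker" ] && echo "marker removed"
```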

The whole point of the test case isolation provided by atf (and many
other testing frameworks, for that matter) is to make test cases
independent from each other.  Test cases must be self-contained so
that their side effects are minimized and their results are easier to
reproduce.  What you are doing by creating files in /tmp side-steps
all of this -- and if atf could, it would forbid you from doing so.

Can you provide some specific examples of why you are trying to do
this?  Why can't you recreate that unique value as a setup step in
every test case?
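In other words, each test case would regenerate the value in its own body instead of reading it from a shared file.  A minimal plain-sh sketch of the idea, where gen_unique_value is a hypothetical stand-in for whatever the real tests compute:

```shell
#!/bin/sh
# Each test case regenerates the value in its own body instead of
# sharing it through /tmp; no test depends on another having run.
gen_unique_value() {
    echo "value-$$"
}

test_one_body() {
    v=$(gen_unique_value)       # setup step, local to this test
    [ -n "$v" ] && echo "test_one used $v"
}

test_two_body() {
    v=$(gen_unique_value)       # recreated independently: no shared state
    [ -n "$v" ] && echo "test_two used $v"
}

test_one_body
test_two_body
```

If the generation is expensive, it can still live in a helper function sourced by every test case; the point is that no test relies on a file another test left behind.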

> Also I do a lot of GUI testing, so a lot of temporary images get created, 
> which requires me to set use.fs in every test; I would like to set this 
> once, globally, for all tests.

That would be nice to have, yes.
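The repetition being described looks roughly like this.  atf_set is stubbed here so the sketch runs standalone (real atf-sh provides it), and the test names are made up:

```shell
#!/bin/sh
# Today, every test case has to declare use.fs in its own head().
# Stub of atf_set so this sketch runs outside atf-sh:
atf_set() { echo "atf_set $*"; }

snapshot_head() {
    atf_set "descr" "renders a window and compares the image"
    atf_set "use.fs" "true"     # repeated in every test case
}

resize_head() {
    atf_set "descr" "resizes the window and compares the image"
    atf_set "use.fs" "true"     # ...and again here
}

snapshot_head
resize_head
```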

> Without using my hacks, it would be nice to say run test 5 only if tests 2 
> and 3 pass. Again some kind of global usage is needed for this. Maybe another 
> temp dir that is created while all the test suites are running? Maybe another 
> for the test suite itself?

Adding test dependencies would be possible, albeit tricky, but it
won't happen without a real and convincing use case for it.  Can you
elaborate?  If the first test fails, what's the problem with running
the others?  They'll just report failure, which is correct because
they *are* actually failing.  If you add dependencies, you hide real
test results on the assumption that their failures won't be helpful
(and in most cases such results *are* helpful because they provide
additional data points about why things failed).

Julio Merino
