ATF-devel archive


Re: Test interdependences, and globals



On Tue, Jul 6, 2010 at 2:55 PM, Cliff Wright <cliff%snipe444.org@localhost> 
wrote:
> On Tue, 6 Jul 2010 21:40:27 +0100
> Julio Merino <jmmv%NetBSD.org@localhost> wrote:
>
>
>> Can you provide some specific examples of why you are trying to do
>> this?  Why can't you recreate that unique value as a setup step for
>> every test case?
>
> The testing we are doing is far too complicated to do as a single test
> program. The concept of levels (e.g. test suite, test program) works very
> well for us. In the early setup, multiple test programs are run in a
> specific sequence to get a GUI into the right state to display a page of
> buttons, one of which generates the unique value that will never be
> generated again. Combining all these steps into a single test program
> would complicate our ability to test each little step (does the window
> exist? did the menu pop up? is the button green?). So even if I could
> regenerate the unique value (and I can't), as soon as I run the next test
> (which might be: find and press the yellow button that has this unique
> label) the unique value from the previous test is lost. This breakup of
> an immense test into smaller tests (and then into even smaller tests) is
> very important to us.
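
One workaround today, rather than an ATF feature, is to have the step that
generates the value record it in a file at a fixed absolute path, which a
later test program reads back. A rough atf-c sketch follows; the path and
both test cases are invented for illustration, the path must be absolute
because ATF runs each test case in its own scratch directory, and the two
cases could just as well live in separate test programs:

#include <atf-c.h>

#include <stdio.h>
#include <string.h>

/* Invented fixed path shared by the test programs.  It must be
 * absolute: ATF runs every test case in its own scratch directory. */
#define SHARED_VALUE_PATH "/tmp/gui-tests.unique-value"

ATF_TC(generate_value);
ATF_TC_HEAD(generate_value, tc)
{
    atf_tc_set_md_var(tc, "descr", "Press the button and record the "
        "unique value it generates");
}
ATF_TC_BODY(generate_value, tc)
{
    const char *value = "12345"; /* stand-in for the GUI-generated value */
    FILE *f = fopen(SHARED_VALUE_PATH, "w");

    ATF_REQUIRE(f != NULL);
    fprintf(f, "%s\n", value);
    fclose(f);
}

ATF_TC(use_value);
ATF_TC_HEAD(use_value, tc)
{
    atf_tc_set_md_var(tc, "descr", "Drive the GUI using the recorded value");
}
ATF_TC_BODY(use_value, tc)
{
    char value[128];
    FILE *f = fopen(SHARED_VALUE_PATH, "r");

    if (f == NULL)
        atf_tc_skip("no recorded value; the generating step did not run");
    ATF_REQUIRE(fgets(value, sizeof(value), f) != NULL);
    fclose(f);
    value[strcspn(value, "\n")] = '\0';
    /* ... find and press the button labeled with 'value' ... */
}

ATF_TP_ADD_TCS(tp)
{
    ATF_TP_ADD_TC(tp, generate_value);
    ATF_TP_ADD_TC(tp, use_value);

    return atf_no_error();
}

This keeps each little step its own test case while still letting a later
program see the value that only gets generated once.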
>
>> Adding test dependencies would be possible, albeit tricky, but it
>> won't happen without a real and convincing use case for it.  Can you
>> elaborate?  If the first test fails, what's the problem with running
>> the others?  They'll just report failure, which is OK because they
>> *are* actually failing.
>
> The problem, particularly with a GUI, is that the following steps might
> affect the GUI (e.g. the wrong window is now open), so that the following
> suite (group) of tests might now fail when they otherwise would have
> succeeded. We will be running full tests with thousands of steps, so
> having usable results for individual groups is very important.
>
>> If you add dependencies, you are hiding real tests
>> on the assumption that their failures won't be helpful (and in most
>> cases such results are helpful because they provide additional data
>> points about why things failed).
> I can't get specific, so here is an example. Say I want to run a suite of
> tests on xcalc: one suite with it in rpn mode, and one suite with it not
> in rpn mode. Say I find a major error in rpn mode. I would now like to
> skip all the other rpn tests and run the non-rpn tests. Any rpn tests
> that ran anyway would have to be ignored even if they passed (the results
> could be bogus). When running very large tests that can have thousands of
> steps, skipping bad groups of tests affects both the running time and our
> ability to interpret the results.
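
For the record, one way to approximate that group skipping today, without
framework support, is to have the first rpn test drop a sentinel file when
it hits a fatal error and have every other rpn test skip itself when it
sees the sentinel. A rough atf-c sketch; the sentinel path and the two
helper functions are invented for illustration:

#include <atf-c.h>

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Invented sentinel, dropped once a fatal rpn-mode error is seen. */
#define RPN_BROKEN_SENTINEL "/tmp/xcalc-tests.rpn-broken"

/* Called first in every rpn test: once the sentinel exists, the rest
 * of the group turns into explicit skips instead of bogus results. */
static void
require_rpn_ok(void)
{
    if (access(RPN_BROKEN_SENTINEL, F_OK) == 0)
        atf_tc_skip("rpn mode known broken; skipping dependent test");
}

static void
mark_rpn_broken(void)
{
    FILE *f = fopen(RPN_BROKEN_SENTINEL, "w");

    if (f != NULL)
        fclose(f);
}

ATF_TC(rpn_addition);
ATF_TC_HEAD(rpn_addition, tc)
{
    atf_tc_set_md_var(tc, "descr", "Add two numbers in rpn mode");
}
ATF_TC_BODY(rpn_addition, tc)
{
    bool ok;

    require_rpn_ok();
    ok = true; /* stand-in for driving xcalc and checking the display */
    if (!ok) {
        mark_rpn_broken(); /* poison the remaining rpn tests */
        atf_tc_fail("rpn addition gave the wrong result");
    }
}

ATF_TP_ADD_TCS(tp)
{
    ATF_TP_ADD_TC(tp, rpn_addition);

    return atf_no_error();
}

The dependent tests then show up as skips in the report rather than as
bogus passes or failures, which speaks to both the time concern and the
interpretation concern.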

Just out of curiosity, how have other monkey test[ suite]s achieved this?

-Garrett

