Re: atf for libcurses
On Thu, Nov 04, 2010 at 09:09:24PM -0700, Paul Goyette wrote:
> I guess this is where I disagree. The test, in my opinion, should be
> verifying that the desired results - visual appearance - are still
As I have pointed out earlier in this thread, there may be output in
the stream that has a nil visual effect - by your argument that would
be fine, because things look the same. I dispute this: if there is
unnecessary output in the stream, then curses is not doing its job
properly, regardless of whether the end appearance looks right. What
you are suggesting is no better than what we had before - the
developer running something like nvi, having a look, and saying "well,
that looks ok to me". I know from experience that this is not a good
test.
> My understanding of regression testing is that you have a test that
> verifies that correct _results_ are generated, even if you change, fix,
> modify, or otherwise update the code that produces those results.
Right. As far as I am concerned, the _results_ are the sequence of
commands that curses sends to the terminal for processing. What you
are trying to do is conflate the output of curses and the processing
done by the terminal into one lump. I don't think that is a good idea,
because then we cannot separate the curses bugs from the terminfo
nits. It also means there has to be some sort of "golden" terminal
standard to emulate, which becomes messy fast.
Actually, it occurs to me that people seem to be forgetting that the
intention of curses was to be terminal independent. The
terminal-dependent parts are encoded in termcap/terminfo. What I have
done is ensure that curses will emit a defined set of terminfo
commands in response to a stimulus.
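The approach described - drive the library with a fixed stimulus and compare the emitted command stream byte-for-byte against a stored expectation - is essentially golden-file testing of the output stream. A minimal sketch of the idea in Python (the function name `draw_hello` and the escape sequence are hypothetical stand-ins, not the actual libcurses test harness):

```python
# Golden-stream check: capture the exact bytes a drawing routine emits,
# then compare them against the "golden" expected stream stored with
# the test. Any extra or changed bytes fail the test, even if a
# terminal would render the same picture.
import io

def draw_hello(out):
    # Hypothetical stand-in for "curses responding to a stimulus":
    # a cursor-address sequence followed by the text to draw.
    out.write(b'\x1b[2;5H')
    out.write(b'hello')

captured = io.BytesIO()
draw_hello(captured)

expected = b'\x1b[2;5Hhello'   # the golden stream checked in with the test
assert captured.getvalue() == expected
```

If the library later emits different (or additional) bytes for the same stimulus, the comparison fails and the change has to be examined - which is exactly the follow-up step argued for below.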
> the test should not need to be updated to keep in synch with changes in
> the library's implementation.
If a change affects what is output, then that change in output needs
to be analysed. If the effect is unintentional, then the testing has
worked - someone introduced a bug that they need to fix. If the new
output is an intended result of the change, then the expected output
should be updated to reflect it.
> Definitely not suggesting that the library will randomly decide which
> sequences to use from run to run. But I _AM_ suggesting that developers
> should be free to change their decision of which sequences to use from
> version to version, and appropriate regression tests should be able to
> determine that, regardless of which sequences are used, the resulting
> visual appearance has not changed.
Sorry, you really need to get over the hangup that if it looks right
it must be ok. By that argument it would be perfectly fine to clear
and redraw the entire screen for every change - what's the problem?
The screen looks the same, so it must be ok.
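The clear-and-redraw point can be made concrete: two quite different command streams can produce identical final screens, so a test that compares only the rendered appearance cannot distinguish an efficient update from a wasteful one. A toy model (the mini "terminal" below is an illustrative sketch, not a real terminal emulator):

```python
# Sketch: two different command streams render identical screens, so a
# purely visual comparison misses the wasteful output. Commands are
# modelled as tuples; a real stream would be terminfo byte sequences.

def render(commands, rows=3, cols=10):
    """Apply a list of (op, *args) commands to a blank screen buffer."""
    screen = [[' '] * cols for _ in range(rows)]
    for op, *args in commands:
        if op == 'clear':
            screen = [[' '] * cols for _ in range(rows)]
        elif op == 'put':            # put text at (row, col)
            r, c, text = args
            for i, ch in enumerate(text):
                screen[r][c + i] = ch
    return [''.join(row) for row in screen]

# Efficient update: touch only the cell that changed.
minimal = [('put', 0, 0, 'hello'), ('put', 0, 0, 'H')]
# Wasteful update: clear and redraw everything for the same change.
redraw  = [('put', 0, 0, 'hello'), ('clear',), ('put', 0, 0, 'Hello')]

assert render(minimal) == render(redraw)   # screens look identical...
assert minimal != redraw                   # ...but the streams differ
```

A visual-appearance test passes both streams; a stream comparison flags the second as a regression, which is the distinction being argued here.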
> Changing the test procedure to keep it in lock-step with the specific
> implementation of that-which-is-being-tested does not, in my mind,
> generate meaningful test results.
Changing the expected output - NOT the test procedure. If the test
produces different output, then that needs to be followed up like any
other test failure would be. If unit tests do not pick up a change in
output from a constant set of inputs, then they are not very good
unit tests. In fact, unit tests should also be used to prove that a
bug has been fixed.