Re: atf for libcurses
On Thu, Nov 04, 2010 at 05:56:43PM +0200, Antti Kantee wrote:
> So you understand the point of automated tests, but based on some
> conversations I've had not everybody does.
Oh yes, I have had a fair bit of experience with this in my working life.
> Can you expand on what kind of bugs you are worried about?
Well, curses testing has pretty much been based on "if it looks right
then it must be ok", but this does not take into account the fact that
curses may be emitting sequences that, cumulatively, have no effect on
the end terminal state. For an instance of this, the last refresh in
the curses_timeout test emits a sequence that enters standout mode,
outputs a control-C and then exits standout mode. Why it does this I
don't know; I have not yet dug into the curses code to find out what
it thinks it is doing here. It may be a quirk for some ancient
terminal, but it may be a bug.
> Doesn't that capture the bugs you were worried about in the same
It can, if you are not careful, but the same is true of any testing.
When I was creating some of my first tests I thought I had found a bug
in curses: it was always using the absolute cursor positioning
sequence when I thought it should have been using a couple of
single-character motion sequences (terminfo cup vs. cub1, for
example). Curses weighs the cost of absolute cursor positioning
against a combination of single-character motions, and I knew the
motion should have been done with a couple of single motion commands,
but it wasn't. It turned out that curses was doing the right thing -
in my terminfo database I had defined the cub1 capability as the
literal string "cub1", so curses counted up the characters for a
couple of cub1 sequences, came up with 8 characters, found that a cup
was shorter, and emitted that instead. Once I shortened the single
motion commands to single characters, curses suddenly did what I
expected.
I guess what I am saying is that you really have to understand what
you are testing before deciding whether the output is right or not.
It does occur to me that the check files could simply be reviewed
later; how likely that review is to happen is the sticking point.
> I seriously doubt people will read the result especially for more
> complicated tests where the terminfo line noise is several lines or
> even pages long. Massive "golden" files might be common in $application
> test cases.
I agree - I believe this is the major problem with the scheme. I was
hoping to rope someone in under a GSoC project to help generate the
files. It is grunt work; it does need a good understanding of curses,
so there is a bit to be learnt, but it is not very exciting,
unfortunately.
> Ok, sounds easy enough. IIRC someone noticed that nvi developed a
> regression where it left the cursor at the wrong place and wanted to
> write a test for it. So in your scheme it would be a matter of sending
> the right commands to vi with "input" and figuring out what the expected
> output is.
Yes, I think so.