Subject: Re: Automate Regression Framework - Google Summer of Code
To: Chetan Patil <>
From: Martin Husemann <>
List: tech-toolchain
Date: 06/13/2005 11:59:55
On Mon, Jun 13, 2005 at 02:35:22AM -0700, Chetan Patil wrote:
> Two main files: results.log and regress.log (contains all the info as
> the test is executed). A test will have a regress.log entry only if it
> fails.

That is fine - and generate a summary at the end, with an explicit list of
failed tests.
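To make the suggestion concrete, a minimal POSIX sh sketch of such a summary, assuming the results.log/regress.log split from the proposal and a hypothetical "FAILED: <name>" line format in regress.log (the sample data below just stands in for a real run):

```shell
#!/bin/sh
# Sample data standing in for a real run (format is an assumption):
printf 'PASS: t_basic\nFAIL: t_signal\nPASS: t_pipe\n' > results.log
printf 'FAILED: t_signal\n' > regress.log

# One line per executed test in results.log, one per failure in regress.log.
total=$(wc -l < results.log)
failed=$(grep -c '^FAILED:' regress.log)

echo "==== regression summary ===="
echo "tests run:    $total"
echo "tests failed: $failed"
echo "failed tests:"
sed -n 's/^FAILED: /  /p' regress.log
```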

> By the way, what is SMOP?

try "wtf smop" on any NetBSD system ;-)
SMOP: simple matter of programming

> (2) Ordinary user vs. root user.
> Run all the tests as an ordinary user.

Well, the point is more that we need to clearly mark all tests as
"needs root", "needs non root", "doesn't care" or "needs to run both as
unprivileged and root".

> Some tests requiring root user permissions should be elevated to root
> and then executed. So we will essentially end up writing a utility to
> give root permissions.

I'm fine with borrowing "just in time su{do}" from pkgsrc, or with running
the script twice, once as root and once as a non-privileged user.
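A hypothetical sketch of the "just in time" hand-off in POSIX sh; NEED_ROOT, SUDO and the re-exec line are illustrative names, not an existing interface:

```shell
#!/bin/sh
# Decide whether this test needs privilege elevation before running.
NEED_ROOT=${NEED_ROOT:-no}
SUDO=${SUDO:-sudo}

if [ "$NEED_ROOT" = yes ] && [ "$(id -u)" -ne 0 ]; then
    # A real harness would re-exec itself: exec $SUDO "$0" "$@"
    decision="re-exec via $SUDO"
else
    decision="run directly"
fi
echo "privilege decision: $decision"
```

The same check is what would let the driver implement the "run twice" alternative: one invocation as root, one unprivileged, with each test skipping the pass it does not care about.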

> (3). I don't have much idea.

We can identify such cases and run the test twice - once with a dynamic
link and once with LDSTATIC=-static. The framework part would be just a
"TESTSTATIC=YES" or whatever in the individual makefile, and some lines in
the global "" to handle this.

> 1. Do we have hanging tests? How about if they are run as a spawned
> process of, say, SpawnIT, and set parameters in SpawnIT to kill a test
> after a certain time interval, plus other features as deemed necessary.
> (Doable, I had done this for class work with limited features.)

Up to now, no test should do that in the "success" case ;-)
I wouldn't say this is a required feature - many times failing tests will
crash the machine (via a kernel panic), so at this level we won't get
far anyway.
> 2. Are we tight on CPU cycles?

Not really.

> How about running more than one thread:
> Single CPU --> run tests as 2 threads
> Single CPU with dual core --> run tests as 3 threads

I wouldn't do that, since we cannot be sure about the kernel interactions
and their influence on the tests.

> 3. How about flagging tests with ids such that 64-bit(IA64, AMD64/EM64T)
> tests are not run for x86. (I haven't paid close attention to
> EM64T/AMD64 but I am assuming they are compatible).

If tests are excluded/don't make sense for certain platforms (like SA related
tests for archs that do not yet implement them), the upper level makefile
just excludes those subdirectories. This works fine as is.

> 4. There should be one main regress script. (My ex-boss was of the
> opinion that the harness must not be modelled on the basis of OOAD.)

Yes, and I would make it call the existing "make regress" or variants
thereof (for the relevant passes, i.e. non-static, root).
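The skeleton of such a driver could be as small as this (the directory list and the run_dir placeholder are illustrative; run_dir stands in for `cd regress/<dir> && make regress`):

```shell
#!/bin/sh
# Top-level driver: loop over test subdirectories, invoke the existing
# per-directory "make regress" target, and collect failures.
TESTDIRS="bin/sh lib/libc sys/kern"

run_dir() {
    # A real driver would do: ( cd "regress/$1" && make regress )
    echo "===> regress/$1"
}

failed=""
for d in $TESTDIRS; do
    run_dir "$d" || failed="$failed $d"
done

if [ -n "$failed" ]; then
    echo "failed directories:$failed"
else
    echo "all directories passed"
fi
```

The root and LDSTATIC passes would then just be extra iterations of the same loop with different environment settings.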

> Other suggested features for the regress-main script are:
> 1. Execute "n" number of tests (max or min)

Not sure about this.

> 2. Intermediate log files for failing tests. Say
> usr/local/kernel_4.c fails, so create an intermediate log in the same
> directory.

A must.

> 3. A way to resume testing if it halts for some reason (core dump)

Yes, but see "kernel crashes" above, and not mission-critical.
The ".build_done" pkgsrc way of doing this might be enough (in the OBJDIR),
the script would blast away OBJDIR on full restart.
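A sketch of those pkgsrc-style stamp files in OBJDIR (paths and names are illustrative): a test whose stamp exists is skipped on resume, and a full restart just removes OBJDIR.

```shell
#!/bin/sh
# Stamp-file bookkeeping for resumable runs.
OBJDIR=${OBJDIR:-objdir}
mkdir -p "$OBJDIR"

run_test() {
    stamp="$OBJDIR/.done_$1"
    if [ -f "$stamp" ]; then
        echo "$1: stamp found, skipping"
        return 0
    fi
    echo "$1: running"
    # ... real test body would run here ...
    touch "$stamp"
}

run_test t_example      # first call runs the test
run_test t_example      # resumed call is skipped via the stamp
```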

> 4. I could not do this. (Handle core dumps. I purposely created a
> core-dump situation and tried to handle it, but I couldn't; the script
> would die. This will come in very handy, as it is not uncommon to see
> core dumps.) OR maybe handle kernel panics (that would be an awesome
> feature).

Yeah - fatal failure would need manual intervention. I would just ignore
this problem, as long as the "everything still works" test can be
done automatically.

> 5. Pattern matching to execute a group of tests. Say the developers moved
> a lot of code in some kernel libs and touched nothing else.

Not sure if this is worth the effort - as long as a complete regression run
can easily be done overnight.

> 6. Work on architecting and organizing the test suite such that we
> should be able to use this test suite for further phases of testing,
> say code coverage testing.

Maybe keep this in mind, but don't waste too much time on it right now.