Test Integration Frameworks for Linaro Platforms

Registered by Paul Larson

References:
Abrek wiki document: https://wiki.linaro.org/QA/AutomatedTestingFramework
Abrek improvements blueprint: https://blueprints.launchpad.net/abrek/+spec/linaro-platforms-o-abrek-improvements

Discuss improvements needed in Abrek

Improvements from user point of view
----------------------------------------------

General comments:
  Any general comments before looking at the commands themselves.

Installation command:
* capture output (stdout, stderr) from installation scripts and write it to log files
* offline installation procedures - Bug #724091
* Add support for debian repositories - Bug #776367

Uninstallation command:
* Add option to automatically remove dependencies and packages

Test definition:
* add support for storing multiple measurements per test case, such as min, max, and average values

Run command:
* Support passing arguments to the test - Bug #691505

Result command:
* Add support for regular expressions to remove more than one file at a time.
  example: abrek result remove pybench*
* Add support for listing where the result files are stored - full path, not just the file name

Distribution:
* Make a Debian package of Abrek for easy installation

Other:
???

Improvements of current implementation
----------------------------------------------------

Blueprint information

Status: Complete
Approver: Paul Larson
Priority: Medium
Drafter: Le Chi Thu
Direction: Approved
Assignee: Le Chi Thu
Definition: Obsolete
Series goal: Accepted for linaro-11.11
Implementation: Unknown
Milestone target: None
Completed by: Le Chi Thu

Whiteboard

User Stories:

1. As a test developer, I would like to be able to define tests out of tree, so that I can easily modify them without touching the execution framework code
 - would be nice to point it at a URL to get the test definition from, which would, in turn, tell it where to get all the other pieces
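
A minimal sketch of what fetching an out-of-tree definition from a URL could look like; the JSON field names used here (name, install, run) are illustrative assumptions, not the actual Abrek definition format:

    # Sketch only: fetch an out-of-tree test definition from a URL. The JSON
    # field names (name, install, run) are assumptions, not the real format.
    import json
    import urllib.request

    def load_test_definition(url):
        """Download a JSON test definition and return it as a dict."""
        with urllib.request.urlopen(url) as response:
            definition = json.loads(response.read().decode("utf-8"))
        # The definition itself points at everything else that is needed
        # (tarballs, dependencies, parse rules), so only the URL is required.
        for field in ("name", "install", "run"):
            if field not in definition:
                raise ValueError("test definition is missing %r" % field)
        return definition

    # e.g. load_test_definition("http://example.org/tests/pybench.json")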

2. As a tester, I would like to easily install the latest released version of the framework with a package

3. As a tester, I would like to be able to override the default arguments passed to the test, so that I can modify the way the test runs without creating a new test definition.
https://blueprints.launchpad.net/lava-test/+spec/linaro-platforms-o-override-test-options
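
A minimal sketch, assuming a hypothetical default-options string stored with the test, of how a tester-supplied override could replace the defaults at run time without a new test definition:

    # Sketch only: override the default arguments passed to a test.
    # DEFAULT_OPTIONS and the test command are assumed examples.
    import shlex

    DEFAULT_OPTIONS = "--iterations 10 --size small"

    def build_run_command(test_cmd, override_options=None):
        """Return the argv list for the test, preferring tester overrides."""
        options = override_options if override_options is not None else DEFAULT_OPTIONS
        return shlex.split(test_cmd) + shlex.split(options)

    # e.g. an override flag on the run command could end up here:
    print(build_run_command("./pybench.sh", "--iterations 50"))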

4. As an Android test developer, I would like a test automation framework that works with Android, so that I can use it to run Android test suites, collect results, and push them to the LAVA dashboard
 - Could possibly be an extension of the existing Linaro test framework, but it may make more sense to have an Android-specific one.
 - 0xBench is suitable for some benchmarks but not for the overall goal of having a test framework; however, 0xBench could be one of the things that this framework runs

5. As a LAVA user, I want to see the output of my test as it is running, so that I can watch the progress and better debug when something goes wrong.
https://blueprints.launchpad.net/lava-test/+spec/linaro-platforms-o-lava-test-stream-output
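
A minimal sketch of streaming a test's output line by line as it runs while still keeping a log of it; the script name and log file are illustrative:

    # Sketch only: echo the test's output as it is produced, while also
    # appending it to a log file for later debugging.
    import subprocess
    import sys

    def run_and_stream(cmd, logfile):
        with open(logfile, "a") as log:
            proc = subprocess.Popen(
                cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
            )
            for line in proc.stdout:
                sys.stdout.write(line)   # live progress for the user
                log.write(line)          # permanent record
            return proc.wait()

    # e.g. run_and_stream(["./run_test.sh"], "test-output.log")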

6. As a test developer, I would like to see the output when the test installs, so that I can log it and review it for problems.

7. As a tester, I would like to install tests from a cached location so that I do not have to download them.

8. As a test developer, I would like to be able to install packages from Debian repositories.

[More to come]

Work Items:

Installation command:

* capture output (stdout, stderr) from installation scripts and write it to log files (see the sketch below)
   - this should be available as a command-line option; if it is not,
   ACTION: submit bug - ChiThu
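
A minimal sketch of capturing an install step's stdout and stderr into log files; the script and log file names are illustrative:

    # Sketch only: run one installation step with stdout/stderr redirected
    # into log files instead of being lost.
    import subprocess

    def run_install_step(cmd, log_dir="."):
        with open(log_dir + "/install-stdout.log", "w") as out, \
             open(log_dir + "/install-stderr.log", "w") as err:
            result = subprocess.run(cmd, stdout=out, stderr=err)
        if result.returncode != 0:
            raise RuntimeError("install step %r failed, see install-stderr.log" % cmd)

    # e.g. run_install_step(["./install.sh"])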

* offline installation procedures - Bug #724091
   - add a fetch step that gets dependencies and test code and caches them for later installation
   - or have a local download cache with everything we could need, and check that cache before going online to download (see the sketch below)
  ACTION: investigate
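
A minimal sketch of the second option, assuming a hypothetical cache directory: copy from the cache when the file is already there, otherwise download it and populate the cache:

    # Sketch only: check a local download cache before going online, so a
    # pre-populated cache allows fully offline installation.
    import os
    import shutil
    import urllib.request

    CACHE_DIR = os.path.expanduser("~/.cache/abrek/downloads")  # assumed location

    def fetch(url, dest):
        os.makedirs(CACHE_DIR, exist_ok=True)
        cached = os.path.join(CACHE_DIR, os.path.basename(url))
        if not os.path.exists(cached):
            urllib.request.urlretrieve(url, cached)   # online path, fills the cache
        shutil.copy(cached, dest)                     # offline path uses the cache
        return dest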

* Add support for Debian repositories - Bug #776367
 - export the PPA and packages in the manifest (see the sketch below)
  ACTION: investigate
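
A minimal sketch, assuming hypothetical manifest fields for the PPA and package list, of installing dependencies with the standard apt tools:

    # Sketch only: install dependencies declared in a (hypothetical) manifest
    # from a PPA / Debian repository using the standard apt tools.
    import subprocess

    manifest = {
        "ppa": "ppa:linaro-maintainers/tools",    # illustrative PPA
        "deb_packages": ["stream", "lmbench"],    # illustrative packages
    }

    def install_from_manifest(manifest):
        if manifest.get("ppa"):
            subprocess.run(["sudo", "add-apt-repository", "-y", manifest["ppa"]], check=True)
            subprocess.run(["sudo", "apt-get", "update"], check=True)
        if manifest.get("deb_packages"):
            subprocess.run(
                ["sudo", "apt-get", "install", "-y"] + manifest["deb_packages"],
                check=True)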

Uninstallation command:
* Add option to automatically remove dependencies and packages. Not needed, because the test machine will be clean after each run.

Test definition:
* add support for storing multiple measurements per test case, such as min, max, and average values - see user story 1 above.
- no - extra measurements should get a new test case name instead
- work on a good parser class (see the sketch below)
ACTION: investigate how the parser can produce multiple test results, one per measurement
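
A minimal sketch of a parser that follows that decision: each measurement (min, max, avg) becomes its own test result with its own test case name. The output line format and result fields are illustrative assumptions:

    # Sketch only: emit one test result per measurement rather than several
    # measurements per test case. Line format and field names are assumed.
    import re

    LINE = re.compile(
        r"^(?P<test>\w+):\s+min=(?P<min>[\d.]+)\s+max=(?P<max>[\d.]+)\s+avg=(?P<avg>[\d.]+)")

    def parse(output_lines, units="ms"):
        results = []
        for line in output_lines:
            match = LINE.match(line)
            if not match:
                continue
            for key in ("min", "max", "avg"):
                results.append({
                    "test_case_id": "%s_%s" % (match.group("test"), key),
                    "measurement": float(match.group(key)),
                    "units": units,
                    "result": "pass",
                })
        return results

    # e.g. parse(["pybench: min=1.2 max=3.4 avg=2.1"]) yields three results.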

Result command:
* Add support for regular expressions to remove more than one file at a time (see the sketch below).
  example: abrek result remove pybench*
ACTION: implement it - ChiThu
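
A minimal sketch of the removal: expand a shell-style pattern such as pybench* against the stored result bundles and delete every match, reporting full paths. The results directory used here is an assumption:

    # Sketch only: remove every stored result bundle matching a shell-style
    # pattern. The results directory is an assumed location.
    import fnmatch
    import os

    RESULTS_DIR = os.path.expanduser("~/.abrek/results")

    def remove_results(pattern):
        removed = []
        for name in os.listdir(RESULTS_DIR):
            if fnmatch.fnmatch(name, pattern):
                path = os.path.join(RESULTS_DIR, name)
                os.remove(path)
                removed.append(path)    # report the full path, not just the name
        return removed

    # e.g. "abrek result remove pybench*" could call remove_results("pybench*")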

Distribution:
* Make a Debian package of Abrek for easy installation
ACTION: implement it - Zyga

Abrek documentation:
* we could convert the Abrek wiki page (link above) into a man page
ACTION: Do it

Out of tree tests:
* It's not clear how to support custom parsers in the tree (for out-of-tree tests there is no issue, but once a test is
  merged, its parser clutters the test code). See the sketch below.
  ACTION: investigate
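
A minimal sketch of one possible approach, assuming the test definition names its parser as "module:ClassName": the framework imports the parser from wherever the out-of-tree test ships it, instead of carrying it in its own tree:

    # Sketch only: load a custom result parser named in the test definition,
    # so out-of-tree tests can ship their own parser. The "parser" field is
    # an assumed definition key.
    import importlib

    def load_parser(definition):
        spec = definition.get("parser")
        if not spec:
            return None                  # fall back to the built-in parser
        module_name, _, class_name = spec.partition(":")
        module = importlib.import_module(module_name)
        return getattr(module, class_name)

    # e.g. {"parser": "pybench_support:PybenchParser"} imports pybench_support
    # and returns its PybenchParser class.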

* Decide how to support Android tests (do we extend Abrek or create an abrek-android?)
  ACTION: investigate

Other work items:

* Merge outstanding and approved merge proposals: ChiThu

(?)
