QA Requirements for submitting automated tests

Registered by Patrick Wright on 2011-11-02

QA Requirements for submitting automated tests into the QA lab
- Test outputs
- Resources
- Jenkins

Blueprint information

Pete Graner
Patrick Wright
Canonical Platform QA Team
Series goal:
Accepted for precise
Milestone target:
Started by Pete Graner on 2011-11-18
Completed by Gema Gomez on 2012-01-23

Work Items for precise-alpha-2:
[hggdh2] Define submission criteria for automated tests: DONE
[hggdh2] Define toolset and output standards for automated test suites: DONE
[patrickmwright] Define test suite rejection criteria: DONE

Session created in response to upstream development practices outlined by Rick Spencer and Jason Warner.
Following on from that session, we created this impromptu session to explain what requirements developers should expect from QA.

White box testing
* Unit tests
- Teams/developers will be assigned persistent, on-demand VMs with 24/7 access in the QA lab
- If tests require a fresh install or a pre-loaded image, this will be handled with Orchestra
- Jenkins access will be granted upon approval
-- Requires review of the Jenkins guidelines (document does not exist yet)
-- Job configurations are reviewed before tests are run
- All test frameworks must generate output in a standard format. If a framework cannot emit that format natively, a transform is required (developers are responsible for maintaining it)
- We haven't settled on an output format yet; JUnit is the leading contender, and we may support others as well.
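As an illustration of the kind of transform a team might maintain, here is a minimal sketch, using only the Python standard library, that converts a list of test results into a JUnit-style XML report. The element and attribute names follow the commonly used JUnit XML layout; the exact schema the lab will accept is still undecided, and the helper name is our own.

```python
# Sketch: emit a minimal JUnit-style XML report from a list of results.
# Element/attribute names follow the common JUnit XML layout; the exact
# schema expected by the QA lab is still to be confirmed.
import xml.etree.ElementTree as ET

def to_junit_xml(suite_name, results):
    """results: list of (test_name, error_message_or_None) pairs."""
    failures = sum(1 for _, err in results if err is not None)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err is not None:
            # Failed cases carry a <failure> child with the message.
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")

print(to_junit_xml("example-suite",
                   [("test_boot", None), ("test_login", "timed out")]))
```

A framework that emits TAP or plain text would run its own output through a transform like this before handing the file to Jenkins.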

Black box testing
* Examples: functional, GUI, smoke, integration, system, and performance tests
- These tests often require unique configurations and environments; submissions will be discussed with the team to obtain details.
- Well-documented frameworks will certainly be expedited
-- This includes the use of physical hardware

Submission Details
- Criteria will be published for developers to submit requests to add tests to the QA infrastructure, most likely in Launchpad.
- Submission does not ensure your tests will be added quickly; many test suites are already in the queue and we are working through them. We are simply preparing developers for what to expect when we start integrating their tests.

What Jenkins is NOT:
- An AI or bot that figures out how to test your projects
- Just a scheduler
- A reporting tool

What Jenkins is:
- A Continuous Integration Build Tool
- Executor for frameworks that contain your tests
- Although it has scheduling capabilities, it can also poll for changes, and jobs can be triggered remotely
- A repository for the artifacts that test runs generate; it can also interpret test results provided in a standard format (JUnit)
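To make the last point concrete, here is a hedged sketch of what "interpreting results in a standard format" amounts to: parsing a JUnit-style artifact and tallying passes and failures. The sample XML and the `summarize` helper are illustrative, not lab tooling or Jenkins internals.

```python
# Sketch: the kind of interpretation Jenkins performs on a JUnit result
# artifact -- parse the XML and tally passed vs. failed cases.
# The sample document below is illustrative, not a real lab artifact.
import xml.etree.ElementTree as ET

SAMPLE = """<testsuite name="smoke" tests="3" failures="1">
  <testcase name="boot"/>
  <testcase name="network"/>
  <testcase name="suspend"><failure message="hung on resume"/></testcase>
</testsuite>"""

def summarize(junit_xml):
    """Return (total cases, names of failed cases) for one testsuite."""
    suite = ET.fromstring(junit_xml)
    cases = suite.findall("testcase")
    failed = [c.get("name") for c in cases if c.find("failure") is not None]
    return len(cases), failed

total, failed = summarize(SAMPLE)
print(f"{total - len(failed)}/{total} passed; failed: {failed}")
```

Anything a suite can express in this format, the infrastructure can aggregate and report on without knowing how the tests were run.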

- Can we test across releases? Yes, we can run tests against any available release; the upgrade tests do that now, and there has been some work around that regarding SRUs
- How notifications will work hasn't been decided. As we grow we'll need more mailing lists specific to each topic, perhaps discussion on IRC when results are available, etc. (this seems like a topic for the next UDS)
- Are we going to review, on an ongoing basis, the tests people run, to avoid danger to the datacenter or other people's runs? We'll apply some controls, but jobs will run in as isolated a manner as possible, and when things go wrong we'll recreate the machines
- DX have already added test cases to the lab. We set them up with their own VMs, which they control as needed. An agent runs on the VMs; Jenkins clones their bzr branch and then runs make check to execute the tests. Many of their test cases are failing, so now they can start fixing them. When they add new test cases, they commit them to their branch and the new tests are executed
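The DX flow above (clone the bzr branch, run make check, report pass/fail) can be sketched as a generic step runner. `run_step` is a hypothetical helper of ours, not lab tooling; the bzr/make commands shown in the comments come straight from the notes, while the demo command is a stand-in.

```python
# Sketch of a DX-style job: run a test command in a working tree and
# record pass/fail from its exit status. run_step is a hypothetical
# helper, not part of the lab's actual tooling.
import subprocess

def run_step(cmd, cwd=None):
    """Run one job step; return (succeeded, captured output)."""
    proc = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

# A real job would chain steps roughly like this:
#   run_step(["bzr", "branch", "lp:some-project", "worktree"])
#   run_step(["make", "check"], cwd="worktree")
ok, out = run_step(["echo", "make check would run here"])
print("PASS" if ok else "FAIL")
```

The exit status of `make check` is what the agent ultimately reports back, so any suite wired up this way gets pass/fail tracking for free.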
- LDDP is running on some ubiquity tests
- How do we send requests? Send them to <email address hidden> (the exact address to be confirmed later)
- Is our jenkins configuration public so that people can have it set up on their own labs? Not at the moment.
- Should we organise Jenkins training? Probably better to train teams as they start using the tool

- QA to set up a wiki page explaining how to submit requests for including test cases in our lab; it will be published in the blueprint
- Mesa tests to be added to the QA infrastructure; they need some work from the dev team to be fully automated, and may need to be packaged
- Certification tests to be looked into for addition to the QA infrastructure (sleep/resume test cases)
- Talk to Robert about the LightDM output transform

