Piloting a new test case management tool

Registered by Gema Gomez

Find an appropriate test case management tool and install it, and find a way to integrate it with http://testcases.qa.ubuntu.com/. One cycle should be enough to install the tool and bring across the existing test cases that are worth keeping; we can also pilot the tool within QA before moving the community over to it.

Blueprint information

Status:
Complete
Approver:
Pete Graner
Priority:
Essential
Drafter:
Gema Gomez
Direction:
Approved
Assignee:
Canonical Platform QA Team
Definition:
Approved
Series goal:
Accepted for precise
Implementation:
Informational
Milestone target:
None
Started by
Gema Gomez
Completed by
Gema Gomez

Whiteboard

    Patrick tested TestLink (http://www.teamst.org/) during the Natty cycle and built a prototype.

    Another option is the Litmus test case management tool (MPL licensed, https://wiki.mozilla.org/Litmus). Just by registering on the web and doing some basic Mozilla testing, it comes across as a very adequate tool for the job.

    We have tried Testopia, and it is a very unfriendly and unusable tool; the Litmus frontend looks much better.

    Requirements for the tool:

    Allow separation of manual and automated testing

    Allow the addition and review of new test cases through an API (see the sketch after this list)

    Test case execution and easy/consistent results reporting

    Support several testing configurations and keep track of the tester's environment/config

    Track who ran each test and how many runs there were per test case/configuration
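
    A minimal sketch of the API requirement above (adding a new test case for review programmatically), assuming a hypothetical REST endpoint: the URL, token, field names and response format below are illustrative only and do not correspond to the actual Litmus or Case Conductor API.

        import requests  # third-party HTTP library

        # Hypothetical endpoint and token; the real tool's API will differ.
        API_URL = "https://testcases.example.com/api/testcases"
        API_TOKEN = "changeme"

        def add_test_case(title, steps, expected, automated=False):
            """Submit a new test case for review via the (hypothetical) API."""
            payload = {
                "title": title,
                "steps": steps,
                "expected_result": expected,
                "automated": automated,      # separation of manual/automated
                "status": "pending-review",  # new cases start in review
            }
            resp = requests.post(
                API_URL,
                json=payload,
                headers={"Authorization": "Token %s" % API_TOKEN},
            )
            resp.raise_for_status()
            return resp.json()["id"]  # assume the tool assigns a stable ID

        if __name__ == "__main__":
            case_id = add_test_case(
                "Live CD boots to desktop",
                ["Boot the ISO", "Select 'Try Ubuntu'"],
                "The desktop session starts without errors",
            )
            print("Created test case", case_id)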

    Actions

    Decide which tool to use

    Install it

    Write a script to export our current test cases/metadata into the tool (see the sketch after this list)

    Get the pilot up and running with some people using it (to gather feedback on missing or wrong things)

    Establish good practices for reviewing and creating new test cases and get buy-in from the community, probably by sending the practices out for review and agreeing on something suitable

    Correct the problems

    Evaluate the tool while IS carries out its review of it

    Document the process and the tool

    OpenID? ISO tracker? User definition?

    results-tracker (have a look at it)

    Versioning of test cases, handling of test case IDs

    API access to import/export test cases and results from the database

    Figure out how test cases, test case IDs and test code relate to package versions (!)

    ISO testing tracker result reporting (?)

    Relation between images and results should be maintained
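
    The export action above ("Write a script to export our current test cases/metadata into the tool") could start along the lines of the sketch below. It assumes a MoinMoin-style raw export on http://testcases.qa.ubuntu.com/ and a hypothetical bulk-import endpoint in the new tool; the page name, URLs and response format are illustrative only. Recording the old-page-to-new-ID mapping also ties in with the test case ID/versioning point above.

        import csv
        import re
        import requests

        WIKI_URL = "http://testcases.qa.ubuntu.com/"
        # Hypothetical bulk-import endpoint in the new tool.
        IMPORT_URL = "https://testcases.example.com/api/import"

        def fetch_wiki_page(name):
            """Download the raw wiki text of one existing test case page
            (assumes a MoinMoin-style ?action=raw export)."""
            resp = requests.get(WIKI_URL + name + "?action=raw")
            resp.raise_for_status()
            return resp.text

        def parse_case(raw):
            """Very rough parse: the first heading becomes the title and
            the full wiki text is kept as the body."""
            match = re.search(r"^=+\s*(.+?)\s*=+\s*$", raw, re.MULTILINE)
            title = match.group(1) if match else "Untitled"
            return {"title": title, "body": raw}

        def export_cases(page_names, mapping_file="id_mapping.csv"):
            """Push each wiki page into the new tool and record the
            old-page -> new-ID mapping so IDs stay traceable."""
            with open(mapping_file, "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(["wiki_page", "new_id"])
                for name in page_names:
                    case = parse_case(fetch_wiki_page(name))
                    resp = requests.post(IMPORT_URL, json=case)
                    resp.raise_for_status()
                    writer.writerow([name, resp.json()["id"]])

        if __name__ == "__main__":
            # The page name is only an example.
            export_cases(["Testing/Cases/UbiquityInstaller"])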

Case Conductor (now MozTrap) deployment schedule, or where we stand

We have been working with the Mozilla QA team to bring Case Conductor up to the level we need before we start using it. They have agreed to deliver some of the features we need, but it is taking longer than either they or we anticipated. This cycle was about piloting the tool, agreeing on whether to use it or not, and deciding how to integrate it into our workflow, and things are looking clearer on that front:
- For manual testing we are currently using Checkbox until Case Conductor is usable; we will then evaluate the benefits and disadvantages of moving to it.
- For automated testing we are putting in place a new test harness that will make developers' and testers' lives easier. It will define how to write automated tests, how to document the test cases, and how to run any test with the harness in standalone mode. This harness will also let us consolidate all our automated testing (the tests currently running on Jenkins) and will allow people to run those tests on their own hardware.
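
As a minimal sketch of what a self-documenting automated test could look like under such a harness: the metadata fields and test case ID below are assumptions for illustration, not the harness's actual interface. The point is that one file documents the test case, records the environment, and still runs standalone on a tester's own hardware with plain unittest.

    #!/usr/bin/env python
    """One automated test, documented in-place and runnable standalone."""

    import platform
    import sys
    import unittest

    # Metadata the harness could collect when reporting results
    # (field names are hypothetical).
    TEST_METADATA = {
        "test_case_id": "TC-0001",
        "description": "Check that the running system identifies as Linux",
        "automated": True,
    }

    class ReleaseIdentificationTest(unittest.TestCase):
        def test_platform_reports_linux(self):
            # Record the environment so results can be tied to a
            # tester configuration, per the requirements above.
            sys.stderr.write("environment: %s\n" % platform.platform())
            self.assertEqual(platform.system(), "Linux")

    if __name__ == "__main__":
        # Standalone mode: run this file directly on any machine,
        # outside Jenkins, with the standard unittest runner.
        unittest.main()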

After the pilot, and seeing the amount of work needed to bring MozTrap to where we need it to be and to integrate it nicely with the tracker, we have decided to discuss at UDS-Q the option of extending the tracker to manage test cases itself.

Work Items

Work items for precise-alpha-1:
[jibel] Install litmus on one of our servers and make it available to the team: DONE
[jibel] Debug litmus to make it usable: DONE
[gema] Raise an RT to get the tool security reviewed and installed on litmus.qa.ubuntu.com: DONE

Work items for precise-alpha-2:
[gema] Experiment with the temporary litmus install and come up with a sensible way of using and configuring it: DONE
[gema] Branch litmus for our own use and customize for Ubuntu: DONE
[gema] Talk to Mozilla team regarding collaboration on litmus: DONE
[gema] Find out about the new tool they are putting together (Case Conductor) and gather requirements for it to fit our needs: DONE
[gema] Remove the ticket for litmus, since we are going with Case Conductor: DONE
[gema] Follow up with the Mozilla QA team regarding Case Conductor Requirements for Ubuntu and Beta Testing: DONE

Work items for ubuntu-12.04-beta-1:
Start using the tool and adding test cases to it: POSTPONED
[gema] Case Conductor deployment schedule (including test case definition, test tool deployment, phases of the migration): POSTPONED

Work items for ubuntu-12.04-beta-2:
[jorge] Create a juju charm for moztrap (was Case Conductor): POSTPONED