Open source project coverage
Test Pythoscope on a popular open source library or tool. Build an environment with which we could easily publish and track quality of Pythoscope over time in regards to the chosen open-source project.
Blueprint information
- Status: Started
- Approver: Michal Kwiatkowski
- Priority: Medium
- Drafter: Michal Kwiatkowski
- Direction: Needs approval
- Assignee: None
- Definition: Approved
- Series goal: None
- Implementation: Beta Available
- Milestone target: 0.5-usability
- Started by: Michal Kwiatkowski
- Completed by:
Whiteboard
Implemented as a script (see tools/gather-). The script takes a source tarball of Reverend, unpacks it, runs pythoscope --init, adds two points of entry, and tells Pythoscope to generate test cases for the reverend/thomas.py module. Once generation is done, nose is run on the generated test cases. The number of generated test cases is reported along with coverage info.
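The workflow described above could be sketched roughly as follows. This is not the actual script; the tarball name, working directory, and point-of-entry file names are placeholders:

```shell
#!/bin/sh
# Rough sketch of the gather workflow; names below are placeholders,
# not the real script's values.
set -e

TARBALL=Reverend-0.4.tar.gz   # assumed tarball name
rm -rf work && mkdir work && cd work
tar xzf ../"$TARBALL"
cd Reverend-*

# Initialize Pythoscope's metadata directory for this project.
pythoscope --init .

# Points of entry are plain Python scripts that exercise the library;
# Pythoscope picks them up from its points-of-entry directory.
cp ../../poe-*.py .pythoscope/points-of-entry/

# Generate test cases for the module under test.
pythoscope reverend/thomas.py

# Run the generated tests under nose with coverage reporting.
nosetests --with-coverage --cover-package=reverend tests/
```

Keeping the script check-pointed in the repository means a report can be regenerated for any revision later.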
Results for the reverend-fixes branch:
- 35 test cases:
  - 19 passing
  - 0 failing
  - 16 stubs
- 69% coverage
The points of entry have been taken verbatim from Reverend's README and home page.
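For reference, the kind of snippet Reverend's README shows, which works as-is as a point of entry, looks roughly like this (the exact training strings are from memory and may differ from the real README):

```python
# Hypothetical point of entry, modeled on the usage example in
# Reverend's README; the training data here is illustrative.
from reverend.thomas import Bayes

guesser = Bayes()
guesser.train('french', 'le la les du un une je il elle de en')
guesser.train('english', 'the it she he they them are were to')

# Exercising guess() lets Pythoscope capture real return values,
# which it can turn into assertions in the generated test cases.
print(guesser.guess('they went to el cantina'))
```

Since Pythoscope traces whatever the point of entry executes, a snippet that touches more of the library's API directly translates into broader generated coverage.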
I'm not sure how to go about tracking those reports over time.
First, the current Reverend test is pretty narrow, so some more work is needed here. A more involved point of entry could be written that would touch more pieces of the project. There's also a Tk user interface that could be used to drive more exploratory test generation. Another idea is to look at a different open source project altogether.
Second, I don't think I need a fancy reporting app; I'm fine with running this manually from time to time. Since everything is in a repository anyway, we'll be able to generate historical data once we need it.