OpenERP requires a continuous integration test server

Registered by Raphaël Valyi - http://www.akretion.com

The current OpenERP testing environment is not what one would expect from a critical application such as an ERP, especially one written in a dynamic language such as Python, where not even a compiler checks the code. The current situation leads to lots of weekly regressions. In the absence of a good test suite, I would advise freezing the current features and only bugfixing/refactoring carefully. IMHO, new developments for v5.2/6 should only be undertaken after a good test suite is in place.

Yes, OpenERP does have a few unit tests that run at database installation, and this is good, but it is not enough. Some (myself included) suspect they don't even cover 10% of the codebase, and they run only when somebody installs the module with demo data, which is not after every commit, especially for little-used modules.

We can improve the situation a lot:
1) tests should run after each commit, or at least on a daily basis
2) all modules should have test data and should be tested
3) test coverage should be reported. I think a coverage of 50% would already be a good goal. Multithreading could be disabled for tests if it is too much of a hassle.
4) test failures should be reported publicly and to the committer whose commit broke the test.
5) if the base_module_quality module ever works properly, then I think any commit lowering the quality of a module should be reported publicly and to the guilty committer.
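To make points 4 and 5 concrete, here is a minimal sketch of the reporting logic: map each failing module back to the committer whose commit broke it, and build both a public report and a per-committer notification list. All names here (TestResult, build_report) are hypothetical illustrations, not an existing OpenERP API.

```python
# Hypothetical sketch of the failure-reporting step of the proposed CI
# server: collect test results, publish failures, and group them by the
# committer to notify.
from collections import namedtuple

TestResult = namedtuple("TestResult", "module passed committer error")

def build_report(results):
    """Return (public_report_lines, {committer: [failing modules]})."""
    lines = []
    by_committer = {}
    for r in results:
        if r.passed:
            continue
        lines.append("FAIL %s (last commit by %s): %s"
                     % (r.module, r.committer, r.error))
        by_committer.setdefault(r.committer, []).append(r.module)
    return lines, by_committer

if __name__ == "__main__":
    results = [
        TestResult("account", True, "alice", None),
        TestResult("stock", False, "bob", "KeyError in _compute_qty"),
    ]
    report, notify = build_report(results)
    print("\n".join(report))  # the public report
    print(notify)             # who to mail about which module
```

In a real server the results would come from the test runner and the committer from the branch history; the grouping and publishing logic would stay the same shape.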

To achieve 50% test coverage, we would need to add some more test fixtures.
But I also think a lot can easily be done with fully automated tests.

Automated tests could be:
1) test at least every module from addons (and some from extra-addons, for instance those with a quality certificate) automatically. Roll back the database between tests. Also test a few basic combinations, such as each module together with every profile.
2) automatically test the fields_view_get method for all views, against all model test/demo fixtures; that way we would at least verify that clicking on any view with the demo data doesn't generate a stacktrace. Possibly also test those views through eTiny.
3) automatically test all the defined reports for all model test/demo fixtures. That way we ensure that clicking on a report won't break with a stacktrace.
4) I don't know if this is possible, but maybe we could also call all existing on_change methods with fake data to ensure basic functioning.
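The view smoke test from point 2 can be sketched as a simple loop: call fields_view_get for every registered view and record any traceback instead of aborting. In the sketch below, `execute` stands in for an OpenERP XML-RPC `execute(model, method, *args)` proxy and is injected as a parameter so the loop can be exercised without a live server; the function name and signature are illustrative assumptions, not an existing API.

```python
# Hypothetical smoke test: render every view via fields_view_get and
# collect (model, view_id, error) tuples for any view that raises.
def smoke_test_views(views, execute):
    """views: iterable of (model, view_id, view_type) triples.

    execute: callable standing in for an OpenERP RPC proxy, i.e.
    execute(model, method, *args). Returns the list of failures.
    """
    failures = []
    for model, view_id, view_type in views:
        try:
            execute(model, "fields_view_get", view_id, view_type)
        except Exception as exc:  # any stacktrace counts as a failure
            failures.append((model, view_id, str(exc)))
    return failures
```

Against a running server, `execute` would wrap an XML-RPC connection to the object service and the view list would be read from ir.ui.view; the loop itself would not change.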

I believe that setting up such a test infrastructure and server is rather easy. Having all those automated tests run daily/after each commit will, I think, save goodwill community members many hours. All of that freed-up workforce could then focus on more functional testing, eventually completing the test suite.
To make it easier to maintain a large test suite, I strongly suggest that test/demo fixtures use a YAML format instead of XML. YAML is much easier to read and type, and I don't think XML adds anything here.
See related blueprint:
https://blueprints.launchpad.net/openobject-server/+spec/pyyaml-yaml-data-import
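To illustrate the readability argument, here is a hypothetical demo-data record in a possible YAML syntax, with the current XML equivalent shown in the comment. The exact schema would be for the linked blueprint to define; this is only a sketch of the idea.

```yaml
# Equivalent of the current XML fixture
#   <record model="res.partner" id="partner_demo">
#     <field name="name">Demo Partner</field>
#     <field name="customer" eval="True"/>
#   </record>
# in one possible YAML form (illustrative only):
- record:
    model: res.partner
    id: partner_demo
    fields:
      name: Demo Partner
      customer: true
```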

Once you have good coverage with that test suite, you can safely move OpenERP forward with any necessary refactoring/re-design/new features, because you'll be sure you won't regress to a point where you discourage community efforts.

Thanks.

Blueprint information

Status:
Complete
Approver:
None
Priority:
Undefined
Drafter:
None
Direction:
Needs approval
Assignee:
None
Definition:
Discussion
Series goal:
None
Implementation:
Implemented
Milestone target:
None
Started by:
Raphaël Valyi - http://www.akretion.com
Completed by:
Fabien (Open ERP)

Sprints

Whiteboard


Work Items