Tempest-based benchmark scenario

Registered by Boris Pavlovic on 2014-02-04

The major goal is to reuse the large number of already existing Tempest tests for benchmarking purposes.

So we should add something like a TempestScenario(base.Scenario) class.

And inside it a couple of methods:

def all(self):
    # run all Tempest tests one by one
    pass

def set(self, set_name):
    # run tests one by one from the given set (sets are hardcoded inside Rally)
    pass

def random_test_from_set(self, set_name):
    # run a random test from the set named set_name
    pass

def specific_regex(self, regex):
    # run all tests whose names match the regexp
    pass

def single_test(self, test_name):
    # run the test named test_name
    pass

def list_of_test(self, test_names):
    # run tests from test_names one by one
    pass

def random_test_from_list(self, test_names):
    # run a random test from test_names
    pass

Blueprint information

Status: Complete
Approver: Boris Pavlovic
Priority: High
Drafter: Boris Pavlovic
Direction: Approved
Assignee: Andrey Kurilin
Definition: Approved
Series goal: None
Implementation: Implemented
Milestone target: None
Started by: Boris Pavlovic on 2014-03-22
Completed by: Boris Pavlovic on 2014-07-28

Whiteboard

Gerrit topic: https://review.openstack.org/#q,topic:bp/benchmark-scenarios-based-on-tempest,n,z

Addressed by: https://review.openstack.org/86337
    Add benchmark for tempest. Part 1

Addressed by: https://review.openstack.org/86836
    Add benchmark for tempest. Part 2

Work Items

Work items:
TempestContext implementation: DONE
TempestScenario implementation: DONE
- all: DONE
- set: DONE
- specific_regex: DONE
- single_test: DONE
- list_of_test: DONE

