API Performance Testing and Improvements
Establish immediate (13.10) performance goals for Juju Core API support, particularly with regard to watching the system (AllWatcher and similar). Consider system size, frequency of churn, and number of observers.
Establish metrics and testing approaches for evaluating those goals.
Iterate as (and if) necessary on API approach to meet the desired goals.
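To make the goals above concrete, the metric we most likely care about is end-to-end delivery latency when one state change is fanned out to many observers. The sketch below is a minimal, self-contained simulation, not the real AllWatcher: the observer and churn counts are arbitrary assumptions, and channels stand in for API watcher streams. It only illustrates the shape of the measurement (worst publish-to-delivery latency as observers and churn grow).

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// event carries the time it was published, so each observer can
// measure publish-to-delivery latency. This simulates, very loosely,
// a delta flowing through an AllWatcher-style stream.
type event struct{ sent time.Time }

func main() {
	const (
		observers = 50  // concurrent watcher clients (assumed value)
		churn     = 200 // state changes published (assumed value)
	)

	// One buffered channel per observer stands in for a watcher stream.
	streams := make([]chan event, observers)
	for i := range streams {
		streams[i] = make(chan event, churn)
	}

	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		worst time.Duration
	)

	// Each observer drains its stream, tracking the worst latency seen.
	for _, s := range streams {
		wg.Add(1)
		go func(s chan event) {
			defer wg.Done()
			for ev := range s {
				d := time.Since(ev.sent)
				mu.Lock()
				if d > worst {
					worst = d
				}
				mu.Unlock()
			}
		}(s)
	}

	// Publisher: fan each change out to every observer.
	for i := 0; i < churn; i++ {
		ev := event{sent: time.Now()}
		for _, s := range streams {
			s <- ev
		}
	}
	for _, s := range streams {
		close(s)
	}
	wg.Wait()

	fmt.Printf("observers=%d churn=%d worst latency=%v\n", observers, churn, worst)
}
```

Sweeping `observers` and `churn` in a harness like this (but against a real API server) is the kind of evaluation the goals call for.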
We should be able to communicate clearly to users our current practical performance limits (upper bounds on number of units, number of observers, etc.).
We should know what our goals are in this regard, and meet them.
A devop or IS engineer wants to deploy N units, and would like to use the GUI to monitor and configure them. Is that supported? Is it likely to go well?
A salesperson talks to a client about Juju for a task. Is it fit for purpose? What reassurances can be offered?
Juju Core and GUI developers want to avoid a firedrill over a supported use case that we could have identified as problematic and addressed earlier, preserving sanity, health, and interpersonal relations.
- If the current state is deemed unacceptable, this will spawn far-reaching new tasks:
- Juju core needs new approach
- UX may need new approach
- GUI will need change for both
- Scope thus has possibility of exploding.
- How do we determine current goals, in terms of number of units, acceptable system resources, and so on?
- Who will build testing tools?
- Establish acceptable parameters
- Build or find tools to gauge fitness
- Run tools and evaluate (may require many resources)
- NICE TO HAVE? establish CI for running these fitness tests on Juju Core regularly
- Adjust Juju Core and GUI to meet acceptable parameters, if necessary.
[OUT OF SCOPE]