Autoscaling library independent of resources

Registered by Christopher Armstrong on 2014-01-09

Autoscaling logic should be refactored into a separate Python module to be used by the Autoscaling service.

This logic should:
- generate templates with scaled resources
- manage state in separate tables explicitly set up for autoscaling (such as cooldown tracking and rolling update status, or anything else that can't be inferred from previously generated templates / stacks and new requests)

This logic should *not*:
- manipulate stacks (that will be the responsibility of another component, e.g. the autoscaling service)
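To make the boundary concrete, here is a minimal sketch of what such a library function could look like: a pure function that derives the group's member definitions from the previous members plus a target size, and never touches real stacks. All names here are hypothetical and not Heat's actual API.

```python
# Hypothetical sketch of a resource-independent scaling helper.
# It is pure: given the previous (name, definition) pairs and a target
# size, it returns the new set of pairs. Applying the resulting template
# to a stack is left entirely to the caller (the autoscaling service).

def member_definitions(old_resources, definition, num_resources):
    """Return (name, definition) pairs for the scaled group.

    old_resources: list of (name, definition) pairs from the previously
        generated template; reused where possible so existing members
        survive a resize.
    definition: the snippet to use for any newly created members.
    num_resources: the desired group size.
    """
    kept = list(old_resources)[:num_resources]
    existing = set(name for name, _ in kept)
    new_names = []
    i = 0
    while len(new_names) < num_resources - len(kept):
        name = 'resource-%d' % i
        if name not in existing:
            new_names.append(name)
        i += 1
    return kept + [(name, definition) for name in new_names]
```

Because the function only maps old state plus a request to new templates, it needs no database access and is trivially unit-testable, which is the point of pulling it out of the engine.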

Blueprint information

Status:
Complete
Approver:
Steve Baker
Priority:
Medium
Drafter:
Qiming Teng
Direction:
Approved
Assignee:
Christopher Armstrong
Definition:
Obsolete
Series goal:
None
Implementation:
Good progress
Milestone target:
ongoing
Started by
Christopher Armstrong on 2014-01-13
Completed by
Qiming Teng on 2016-11-23

Whiteboard

Work items (not using the work-items textarea below because it complains about the syntax)

1. Pass the name and template between functions instead of Instance objects. Change variable names where it makes sense as part of this change (e.g. instances -> resources)

2. Decorate _create_template() and _replace() with @staticmethod. Wherever self was used, replace it with something passed in as a parameter (e.g. self.update_with_template() becomes a callback function). Rename other variables where it makes sense, and rename the functions themselves if that minimises changes to other code, since new methods will be needed to call these with the correct parameters.

3. Move the two new library functions to the separate module, renaming if necessary.

4. (Optional) Do any refactoring of rolling_update if you must.
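Step 2 above amounts to turning methods into pure functions and passing the stack-touching operations in as callbacks. A hedged sketch of the shape this could take (names are illustrative, not the actual Heat code):

```python
# Illustrative sketch of the callback-passing refactor in step 2.
# Instead of a method that reads self and calls self.update_with_template(),
# the logic becomes a module-level function that receives everything it
# needs as parameters, including a callable for the side-effecting part.
# (Member-name collisions on scale-down/up are ignored for brevity.)

def resize(current_templates, definition, new_size, update_with_template):
    """Build the scaled group template and hand it to the caller's updater.

    current_templates: list of (name, definition) pairs currently in the group.
    definition: snippet for any newly added members.
    new_size: target number of members.
    update_with_template: callback that actually applies the template
        (e.g. a bound method on the group resource); the library itself
        never manipulates stacks.
    """
    kept = list(current_templates)[:new_size]
    needed = new_size - len(kept)
    kept += [('member-%d' % i, definition) for i in range(needed)]
    template = {'Resources': dict(kept)}
    update_with_template(template)
    return template
```

The group resource in the engine would then become a thin wrapper that calls resize() with its own update_with_template bound method, which is why renaming the functions minimises churn elsewhere.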

Just going to write down some notes from figuring out how this could be implemented.

- Why do very few resources have a "self.properties =" assignment in handle_update?
  1. At least one I found (the one in InstanceGroup) isn't required by the tests.
  2. Some of the other ones are required for the tests to pass, at least according to my superficial experimentation.
  3. Why isn't it done implicitly for all resources?

- Rolling update is a very... interesting design constraint.
  1. I'm pretty sure the current implementation will forget about an in-progress rolling update if the heat engine is restarted. This is bad; the progress state should definitely be stored in the database.
  2. If we ultimately want rolling updates to be a feature of the autoscale engine, and not the heat engine, then the autoscale engine could basically trickle in template updates over time with individual resources updated.
  3. In addition, if we do the trickle-in-template-updates design, then rolling updates will no longer be constrained by the stack update time limit. I think that's good?
  4. Alternatively, we could let the Heat engine still be responsible for rolling updates. I could imagine a design where we leave ResourceGroup in Heat, and move the rolling update support into it. Then the AS engine would only need one template update, containing a ResourceGroup.
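The trickle-in design in point 2 can be sketched as the AS engine generating a sequence of full group templates, each moving one more batch of members to the new definition, and applying them one stack update at a time. Everything below is hypothetical, not existing Heat code:

```python
# Hypothetical sketch of the trickle-in-template-updates rolling update.
# Each yielded template is a complete group template; the autoscale
# engine applies them sequentially. Since each step is a normal stack
# update, progress survives an engine restart (resume from the last
# applied template) and no single update hits the stack time limit.

def rolling_update_templates(names, old_definition, new_definition, batch_size):
    """Yield successive full group templates for a rolling update."""
    updated = 0
    while updated < len(names):
        updated = min(updated + batch_size, len(names))
        resources = {}
        for i, name in enumerate(names):
            resources[name] = new_definition if i < updated else old_definition
        yield {'Resources': resources}
```

For example, updating three members with a batch size of two yields two templates: the first replaces members one and two, the second replaces the third.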

- ResourceGroup probably needs support for Metadata and UpdatePolicy, regardless of all these other points.

Gerrit topic: https://review.openstack.org/#q,topic:bp/as-lib,n,z

Addressed by: https://review.openstack.org/71143
    Refactor _create_template to not rely on instances

Addressed by: https://review.openstack.org/71168
    Move resource_templates to heat.scaling

Addressed by: https://review.openstack.org/71399
    Add unit tests for heat.scaling.template

Gerrit topic: https://review.openstack.org/#q,topic:bp/reorg-asg-code,n,z

Addressed by: https://review.openstack.org/137356
    Extract group functions into a utility module.

Addressed by: https://review.openstack.org/143605
    Move LB reload logic into scaling library


