Two step scaling with Heat engine

Registered by Andrew Lazarev on 2015-02-25

When a user requests cluster scaling, Sahara creates a new Heat template and updates the stack with rollback_on_failure=True. In case of failure, Sahara will try to update one more time with all affected instances removed, but no new ones added. So, the process looks like this:
1. Sahara runs Hadoop node decommissioning
2. Sahara runs a Heat stack update with both added and removed nodes and rollback_on_failure=True
3. If step 2 fails, Heat restores all deleted nodes
4. Sahara runs a Heat stack update with removed nodes only
5. Heat removes the nodes one more time
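The wasted work in the current flow can be illustrated with a minimal simulation. This is not Sahara code; `single_update` is a hypothetical stand-in for one Heat stack update with rollback enabled, modeling the stack as a set of node names.

```python
# Illustrative simulation of the current single-update flow (not Sahara's
# real API): when the combined add/remove update fails, Heat's rollback
# recreates the already-removed nodes, which a later update deletes again.

def single_update(stack, remove, add, fail=False):
    """Stand-in for one Heat stack update with rollback_on_failure=True."""
    if fail:
        # Rollback: the stack reverts entirely, so deleted nodes come back.
        return set(stack)
    return (set(stack) - set(remove)) | set(add)

# Step 2 fails while adding 'new1'; rollback brings 'old2' back.
stack = {'old1', 'old2'}
stack = single_update(stack, remove={'old2'}, add={'new1'}, fail=True)
assert 'old2' in stack  # wasteful: 'old2' was recreated by the rollback

# Step 4: a second update has to remove it all over again.
stack = single_update(stack, remove={'old2'}, add=set())
assert stack == {'old1'}
```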

So, at step 3 Heat restores nodes that will be deleted later anyway. This could be avoided by changing Sahara's logic. This blueprint proposes to split step 2 into two separate Heat updates. The first update will only remove nodes; the second will only add them. This avoids recreating deleted nodes if something goes wrong during cluster extension.
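The proposed flow can be sketched as follows. The helper names (`heat_update`, `scale_cluster`) are illustrative assumptions, not Sahara's or Heat's real API; the stack is again modeled as a set of node names.

```python
# Hypothetical sketch of the proposed two-step scaling flow.

def heat_update(stack, remove=(), add=(), rollback_on_failure=True):
    """Stand-in for one Heat stack-update call; returns the new node set."""
    return (set(stack) - set(remove)) | set(add)

def scale_cluster(stack, nodes_to_remove, nodes_to_add):
    """Scale a cluster with two separate Heat stack updates.

    The first update removes decommissioned nodes only; the second adds
    the new nodes. If the second update fails and Heat rolls it back,
    the rollback can no longer recreate the nodes removed in the first
    update, because they were dropped in an already-completed update.
    """
    if nodes_to_remove:
        # First update: template without the removed nodes, nothing added.
        stack = heat_update(stack, remove=nodes_to_remove,
                            rollback_on_failure=True)
    if nodes_to_add:
        # Second update: template with only the new nodes added.
        stack = heat_update(stack, add=nodes_to_add,
                            rollback_on_failure=True)
    return stack
```

A failed second update now rolls back only the add phase, leaving the removals from the first phase in place.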

Blueprint information

Status:
Not started
Approver:
Sergey Lukjanov
Priority:
Medium
Drafter:
Andrew Lazarev
Direction:
Approved
Assignee:
None
Definition:
Approved
Series goal:
None
Implementation:
Not started
Milestone target:
None

Related branches

Sprints

Whiteboard

Gerrit topic: https://review.openstack.org/#q,topic:bp/heat-two-step-scaling,n,z

Addressed by: https://review.openstack.org/159278
    Two step scaling with Heat engine


Work Items
