Node placement control
HDFS provides robust storage by placing data replicas on different machines. The cloud introduces an additional level of abstraction: virtual machines running on physical hosts. This can lead to a situation where the servers holding all replicas of the same data are located on a single compute node. If that compute node fails, the data is lost, which is not acceptable for production systems. One solution is to place the VMs of a cluster on different compute nodes. We should therefore add functionality to Savanna that schedules a group of instances on different physical hosts, providing a reliable Hadoop deployment on OpenStack. This is achieved using anti-affinity scheduling.
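A minimal sketch of the underlying mechanism, assuming the Nova "different_host" scheduler hint with the DifferentHostFilter enabled in nova-scheduler; the credentials and IDs below are placeholders, and this is not Savanna's actual code:

    # Sketch: boot Hadoop data nodes so each lands on a separate
    # physical host, by hinting the scheduler to avoid every host
    # that already runs one of our instances.
    from novaclient.v1_1 import client

    USER = 'admin'                      # placeholder credentials
    PASSWORD = 'secret'
    TENANT = 'demo'
    AUTH_URL = 'http://keystone:5000/v2.0/'
    IMAGE_ID = '<hadoop-image-uuid>'    # placeholder image
    FLAVOR_ID = '2'                     # placeholder flavor

    nova = client.Client(USER, PASSWORD, TENANT, AUTH_URL)

    booted_ids = []
    for i in range(3):
        server = nova.servers.create(
            name='hadoop-datanode-%d' % i,
            image=IMAGE_ID,
            flavor=FLAVOR_ID,
            # DifferentHostFilter: do not co-locate with these instances.
            scheduler_hints={'different_host': booted_ids},
        )
        booted_ids.append(server.id)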
Blueprint information
- Status: Complete
- Approver: Sergey Lukjanov
- Priority: High
- Drafter: Alexander Kuznetsov
- Direction: Approved
- Assignee: Alexander Kuznetsov
- Definition: Approved
- Series goal: Accepted for 0.2
- Implementation: Implemented
- Milestone target: 0.2a1
- Started by: Alexander Kuznetsov
- Completed by: Alexander Kuznetsov
Whiteboard
Gerrit topic: https:/
Addressed by: https:/
Adding user key to the cluster
Each cluster will have its own private key for passwordless login.
It is possible to schedule data nodes on different hosts.
implements: blueprint node-placement-control
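A hedged sketch of the per-cluster key idea from the commit message above, using paramiko; the function name and key size are illustrative assumptions, not the actual Savanna implementation:

    # Sketch: generate an RSA keypair a cluster could use for
    # passwordless SSH between its nodes.
    import io

    import paramiko

    def generate_cluster_keypair(bits=2048):
        key = paramiko.RSAKey.generate(bits)
        private_buf = io.StringIO()
        key.write_private_key(private_buf)
        public_key = '%s %s' % (key.get_name(), key.get_base64())
        return private_buf.getvalue(), public_key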