Procedure for removing Ceph-OSD nodes
Removing successfully deployed nodes, or cleaning up nodes whose deployment failed, without following the proper draining and removal procedure may result in DATA LOSS. Do not let the cluster reach its full ratio when removing an OSD: taking OSDs out reduces available capacity, so removing them can push the cluster to or past its full ratio.
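As a quick way to perform this check, the following sketch lists the standard commands; treat it as a sketch rather than an exact recipe, since output formats vary between Ceph releases:

```bash
# Check capacity headroom before removing anything.
ceph osd tree                    # locate the OSDs hosted on the node
ceph df                          # overall raw usage and availability
ceph pg dump | grep full_ratio   # the configured full ratio
```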
The node removal procedure should be as follows (a consolidated shell sketch appears after the list):
1. Using the output of 'ceph osd tree' and 'ceph pg dump', verify that the cluster will not reach its full ratio once the node's OSDs are removed.
2. Take the node's OSDs out of the cluster by running 'ceph osd out {osd-num}' for each of them.
3. Poll the cluster with 'ceph pg stat' until all placement groups are in the 'active+clean' state.
4. Stop the OSD daemons on the node.
5. Remove the OSDs from the CRUSH map: 'ceph osd crush remove {name}'.
6. Remove each OSD's authentication key: 'ceph auth del osd.{osd-num}'.
7. Remove the OSDs from the OSD map: 'ceph osd rm {osd-num}'.
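For illustration, here is a minimal bash sketch of steps 2 through 7 for a single OSD. The OSD id (3) is a hypothetical example value, and the sysvinit command used to stop the daemon in step 4 is an assumption; the correct invocation depends on the deployment's init system:

```bash
#!/bin/sh
# Sketch: remove one OSD from the cluster (steps 2-7).
# OSD_ID=3 is a hypothetical example value.
set -e

OSD_ID=3
OSD_NAME="osd.${OSD_ID}"

# Step 2: mark the OSD out so Ceph rebalances its data elsewhere.
ceph osd out "${OSD_ID}"

# Step 3: wait until all placement groups are active+clean again.
# This parse is naive; 'ceph pg stat' output varies across releases.
until ceph pg stat | grep -Eq 'pgs: [0-9]+ active\+clean;'; do
    sleep 10
done

# Step 4: stop the OSD daemon on its host (sysvinit assumed here).
/etc/init.d/ceph stop "${OSD_NAME}"

# Step 5: remove the OSD from the CRUSH map.
ceph osd crush remove "${OSD_NAME}"

# Step 6: delete the OSD's authentication key.
ceph auth del "${OSD_NAME}"

# Step 7: remove the OSD from the OSD map.
ceph osd rm "${OSD_ID}"
```

Repeat for every OSD hosted on the node; only once all of its OSDs have been removed should the node itself be taken out of the deployment.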
Blueprint information
- Status: Not started
- Approver: None
- Priority: Undefined
- Drafter: Mykola Golub
- Direction: Needs approval
- Assignee: None
- Definition: New
- Series goal: None
- Implementation: Unknown
- Milestone target: None
- Started by:
- Completed by: