Separated calculated deployment
Split the deployment into several stages with calculated timeouts.

For example:
fuel master with eth0 = 10G ($FME = 10 Gb/s)
3 controllers
3 mongo
400 computes + ceph-osd (10 OSDs per node)

Approximate data written per node:
bootstrap image ~300 MB
1 controller - ~7 GB of disk space ($CtrlDS)
1 mongo - ~4 GB of disk space ($MoDS)
1 compute + ceph-osd - ~3 GB for the system plus OSD# * 2 GB for journals; with 10 OSDs per node, 3 GB + 10 * journal size will be written ($CmpDS)
Let's calculate:
3 controllers: bootstrap 3 * 300 MB = 900 MB ($BS), deploy 3 * 7 GB = 21 GB
3 mongo: bootstrap 900 MB, deploy 3 * 4 GB = 12 GB
400 computes + ceph-osd (10 OSDs per node): bootstrap 400 * 0.3 GB = 120 GB, deploy 400 * $CmpDS
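
A rough back-of-the-envelope sketch of the same totals in Python; the per-node sizes and the 2 GB journal size are the assumptions from the example above, not measured values:

    # Assumed per-node transfer sizes from the example above (GB).
    BS_GB = 0.3          # bootstrap image ($BS per node)
    CTRL_DS_GB = 7.0     # $CtrlDS
    MO_DS_GB = 4.0       # $MoDS
    CMP_SYS_GB = 3.0     # system part of $CmpDS
    OSDS_PER_NODE = 10
    JOURNAL_GB = 2.0     # assumed journal size per OSD

    controllers, mongos, computes = 3, 3, 400

    bootstrap_total = (controllers + mongos + computes) * BS_GB
    deploy_total = (controllers * CTRL_DS_GB
                    + mongos * MO_DS_GB
                    + computes * (CMP_SYS_GB + OSDS_PER_NODE * JOURNAL_GB))

    # Roughly 122 GB of bootstrap images and several TB of deployment data
    # have to leave the Fuel master, which is why batching is needed.
    print("bootstrap: %.0f GB, deploy: %.0f GB" % (bootstrap_total, deploy_total))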
Start the deployment:
provision all nodes step by step, divided into batches of 10-50 nodes
add variables time-to-
calculate the maximum number of compute nodes that can be deployed in parallel (a sketch follows below)
1 compute node with ceph-osd takes ~15-20 minutes
use $FME (Mb/s) to calculate how much data the Fuel master can transfer in 15-20 minutes
10 Gb/s is about 1.25 GB/s, but an HDD can only sustain ~100-150 MB/s, so run a test to measure the real speed ($FuelDiskTransfer)
20 min * 60 = 1200 s
1 compute node needs ~3 GB, limited by $FuelDiskTransfer
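
A minimal sketch of the batch-size calculation, assuming $FuelDiskTransfer is measured in MB/s and each compute node pulls about 3 GB from the Fuel master during provisioning:

    def max_parallel_nodes(fuel_disk_transfer_mb_s, node_gb=3.0,
                           window_min=20, safety=1.25):
        """How many compute nodes the Fuel master can feed within one
        15-20 minute deployment window, limited by its measured
        disk/network throughput ($FuelDiskTransfer, MB/s)."""
        window_s = window_min * 60           # e.g. 20 * 60 = 1200 s
        node_mb = node_gb * 1024 * safety    # data per node, with 1.25 margin
        return int(fuel_disk_transfer_mb_s * window_s // node_mb)

    # Example: ~120 MB/s measured on the Fuel master gives batches of ~37 nodes,
    # which matches the 1-40 node batches used below.
    print(max_parallel_nodes(120))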
calculate the timeout for that number of nodes
start to deploy compute nodes 1-40
check progress every 3 minutes to make sure the nodes become ready (`fuel nodes | grep ….`)
when nodes 1-40 are ready, recalculate the timeout for the next batch, start deploying nodes 41-80, and continue in batches up to node 400 (see the loop sketch below)
$CmpDS, $MoDS, $CtrlDS and $BS should be determined empirically and multiplied by 1.25
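
A sketch of the batch loop described above. `start_deploy` and `all_ready` are hypothetical helpers (the real readiness check would wrap something like `fuel nodes | grep …` or the Fuel API), and the timeout is derived from the amount of data each batch has to pull:

    import time

    CHECK_INTERVAL_S = 3 * 60  # check progress every 3 minutes

    def batch_timeout_s(batch_size, fuel_disk_transfer_mb_s,
                        node_gb=3.0, safety=1.25):
        """Timeout for one batch, derived from the data it has to transfer."""
        total_mb = batch_size * node_gb * 1024 * safety
        return total_mb / fuel_disk_transfer_mb_s

    def start_deploy(batch):
        """Hypothetical helper: kick off provisioning/deployment of a batch."""
        raise NotImplementedError

    def all_ready(batch):
        """Hypothetical helper: True once every node in the batch is 'ready'."""
        raise NotImplementedError

    def deploy_in_batches(node_ids, batch_size, fuel_disk_transfer_mb_s):
        for i in range(0, len(node_ids), batch_size):
            batch = node_ids[i:i + batch_size]
            start_deploy(batch)
            deadline = time.time() + batch_timeout_s(len(batch),
                                                     fuel_disk_transfer_mb_s)
            while not all_ready(batch):
                if time.time() > deadline:
                    raise RuntimeError("batch starting at node %s timed out" % batch[0])
                time.sleep(CHECK_INTERVAL_S)

    # Example: 400 compute nodes in batches of 40 at ~120 MB/s
    # deploy_in_batches(list(range(1, 401)), 40, 120)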
Using this methodology can help us deploy huge clusters.
Blueprint information
- Status: Not started
- Approver: None
- Priority: Undefined
- Drafter: Denis Klepikov
- Direction: Needs approval
- Assignee: None
- Definition: New
- Series goal: None
- Implementation: Unknown
- Milestone target: 7.0
- Started by:
- Completed by: