Nova improvement: allow more than 26 attached volumes
In the current implementation, a Nova instance can only handle 26 volumes (vda-vdz).
This blueprint raises that limit.
# create an instance with a single volume, named sles15rc
# openstack server add volume sles15rc vol2
# openstack server add volume sles15rc vol3
:
# openstack server add volume sles15rc vol26
# openstack server add volume sles15rc vol27
Unexpected API Error. Please report this at http://
<class 'NovaException_
#
There are two problems in Nova's implementation.
CAUSE:
1) The device limit is 26.
get_
The magic number 26 is the alphabet range ('a'-'z').
2) nova:find_
length characters(
vda .. vdz
sda .. sdz
TODO:
1) Increase the number of allowed volumes attached per instance beyond 26.
26 -> ???
2) Fix Nova so it can handle device names of greater length universally,
not only under 26 volumes. like
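Generating names past vdz means continuing vdaa, vdab, … (bijective base-26, like spreadsheet columns). A minimal sketch of such a generator; `index_to_dev_suffix` is a hypothetical helper for illustration, not Nova's actual function:

```python
def index_to_dev_suffix(i: int) -> str:
    """Map a 0-based device index to a drive suffix: 0->'a', 25->'z', 26->'aa'.

    Bijective base-26 (spreadsheet-column style), so names can continue
    past vdz (vdaa, vdab, ...) without any fixed-length limit.
    """
    suffix = ""
    i += 1                      # shift to 1-based for bijective base-26
    while i > 0:
        i -= 1
        suffix = chr(ord("a") + i % 26) + suffix
        i //= 26
    return suffix

# index 0 -> vda, 25 -> vdz, 26 -> vdaa, 32 -> vdag
for i in (0, 25, 26, 32):
    print("vd" + index_to_dev_suffix(i))
```

Note that index 32 maps to vdag, matching the /dev/vdag device seen in the hot-add experiment described later on this whiteboard.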
ML:
The openstack-dev mailing list has started discussing this topic.
http://
Blueprint information
- Status:
- Complete
- Approver:
- melanie witt
- Priority:
- Low
- Drafter:
- Tsuyoshi Nagata
- Direction:
- Approved
- Assignee:
- melanie witt
- Definition:
- Approved
- Series goal:
- Accepted for stein
- Implementation:
- Implemented
- Milestone target:
- stein-3
- Started by
- Tsuyoshi Nagata
- Completed by
- Matt Riedemann
Related branches
Related bugs
Bug #1770527: add volume fails over 26vols and returns 500 API error with libvirt driver | Fix Released |
Bug #1773941: Not able to attach more than 25 volumes using virtio-scsi | Opinion |
Sprints
Whiteboard
Gerrit topic: https:/
Addressed by: https:/
[nova] increasing the number of allowed volumes attached per instance > 26
Gerrit topic: https:/
Addressed by: https:/
Fix nova can handle device name length more widely for universally.
Other information
(jichenjc)
googled and find this:
https:/
https:/
(Tsuyoshi Nagata)
I explain the KVM hypervisor implementation.
vd is the virtio_blk device, assigned on the PCI bus.
PCI addressing is as follows
(complex:
each PCI bus has 32 devices.
the maximum number of PCI buses is 256.
a complex is matched to a CPU socket.
currently, Intel's PCI complex is built on the QPI connection.
the QPI CPU connection count is now 32.)
then the current maximum virtio_blk count is
32 x 256 x 32 = 262144
get_dev_
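The arithmetic above can be checked directly; the per-bus, per-complex, and complex-count figures are the ones quoted in this note, not values queried from real hardware:

```python
# Theoretical virtio_blk ceiling from the figures quoted above.
DEVICES_PER_BUS = 32      # devices on one PCI bus
BUSES_PER_COMPLEX = 256   # maximum PCI buses per complex
COMPLEXES = 32            # QPI CPU connections

max_virtio_blk = DEVICES_PER_BUS * BUSES_PER_COMPLEX * COMPLEXES
print(max_virtio_blk)  # 262144
```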
*NOTE*
I have already tested that a live KVM instance can only handle 32 virtio devices,
but a shelved (stopped) instance can handle more than 32 virtio devices.
('$ openstack server stop sles15rc' followed by
'$ openstack server add volume sles15rc vol32' hot-adds a new PCI bus and new volumes (/dev/vdag).)
(melanie witt)
so it seems we can follow that?
I was thinking no larger than 1024, for example.
[1] https:/
get_dev_
(Tsuyoshi Nagata)
I tested attaching volumes to my instances. Provisioning an instance with more than 256 volumes takes too long to be useful.
I decided this number should be smaller.
get_dev_
(melanie witt)
Note bug https:/
What is the practical application of this? We shouldn't make changes just because we can.
(Tsuyoshi Nagata)
My application is auto-testing Ceph on OpenStack; a ceph-osd node has many volumes.
If each instance can handle many volumes, I can provision SDS on OpenStack
(without buying real disks).
N = 64
It seems OK.
N = 100
my KVM (SOC7) environment seems OK.
N = 200
my KVM (SOC7) environment seems OK.
boot time gets longer. (> 6 min)
N = 256
my KVM (SOC7) environment seems OK.
boot time gets even longer. (> 13 min)
N = 512
my KVM (SOC7) environment seems NG.
VNC shows a black display; an instance with 512 volumes never boots up. (> 1 day)
(sahid)
virtio-blk uses one PCI slot per disk. Since there is a limit of 32 slots per machine (for Q35 it's different) and some of them are already used in our default guest configuration for networking devices, the memory balloon device, the USB controller, etc., my thinking is we should probably keep the limit of 26.
For virtio-scsi it's different: we have one controller that uses one PCI slot. Nova currently does not support having more than one virtio-scsi controller. One controller can have 256 targets, and each target supports 1 to 2^14 logical devices (LUNs). So that's about 4194304, which does not really make sense. Limiting it to 128, or perhaps to the number of targets, seems good.
For the native QEMU SCSI implementation I think a target can only handle one LUN, meaning 256 devices.
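As a quick check on the virtio-scsi figures in the comment above (using only the numbers quoted there):

```python
TARGETS_PER_CONTROLLER = 256   # targets on one virtio-scsi controller
LUNS_PER_TARGET = 2 ** 14      # up to 2^14 logical units per target

virtio_scsi_ceiling = TARGETS_PER_CONTROLLER * LUNS_PER_TARGET
print(virtio_scsi_ceiling)  # 4194304
```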
(Tsuyoshi Nagata)
I think the bus limit should be a specific limit, not a convenient number.
I'll fix this with a "specification-based limitation" in the next patch set!
Gerrit topic: https:/
virtio=
(Zhenyu Zheng) this seems really unrealistic to me...
(Chen) Agreed, unless a source indicating this limit is provided.
virtio=1000
(Stephen Finucane)+2
(melanie witt)-1
(zhaixiaojun)+1
Addressed by: https:/
Propose configurable maximum number of volumes to attach
The spec for this was merged on 20180919, so this is approved for Stein. -- melwitt 20180924
I updated the name of this blueprint to conf-max-
Gerrit topic: https:/
Addressed by: https:/
WIP Add configuration of maximum disk devices to attach
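The option that came out of this work is `max_disk_devices_to_attach` in the `[compute]` group of nova.conf, where the default of -1 means unlimited. A minimal sketch of capping it on a compute node (the value 64 is just an example, not a recommendation from this blueprint):

```ini
[compute]
# Maximum number of disk devices allowed to attach to a single server.
# -1 (the default) means unlimited; here we cap it at 64 as an example.
max_disk_devices_to_attach = 64
```

The limit is enforced per compute host, so attach requests beyond it are rejected instead of failing deep in the driver.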
Addressed by: https:/
Propagate exception message from _prep_block_device
Addressed by: https:/
Add method to generate device names universally
Addressed by: https:/
Add method to generate device names universally
Addressed by: https:/
Add method to generate device names universally
Addressed by: https:/
Raise 403 instead of 500 error from attach volume API
Gerrit topic: https:/
Addressed by: https:/
Amend "Configure max number of volumes to attach" spec