IP aware scheduling using Placement

Registered by Kevin Rasmussen

Subnet IP address availability should be treated as a capacity in the same way RAM, CPU, and Disk are.

We should ensure a VM cannot land on a hypervisor whose subnet has no IP addresses available.

The proposed solution is to leverage Placement to track IP capacity the same way we do RAM, CPU, and Disk.

I believe this can be accomplished by:

adding IP information in resources_from_request_spec() in nova/scheduler/utils.py

Or otherwise adding to the "resources" in SchedulerManager.select_destinations() in nova/scheduler/manager.py, via resources._add_resource(), before the self.placement_client.get_allocation_candidates() call is made here:

```
resources = utils.resources_from_request_spec(spec_obj)
res = self.placement_client.get_allocation_candidates(ctxt,
                                                      resources)
```
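As a rough illustration of the second option, here is a minimal, self-contained sketch. The ResourceRequest stand-in below is a simplification (Nova's real class in nova/scheduler/utils.py has request-group handling and a different signature), and the one-IP-per-requested-network rule is an assumption for illustration:

```python
# Simplified stand-in for nova.scheduler.utils.ResourceRequest.
# The real class groups resources by request-group suffix; this
# sketch keeps a single flat group just to show the idea.
class ResourceRequest:
    def __init__(self):
        self.resources = {}

    def _add_resource(self, rclass, amount):
        # Accumulate requested amounts per resource class.
        self.resources[rclass] = self.resources.get(rclass, 0) + amount


def resources_from_request_spec(spec):
    """Build a resource request from a (simplified) request spec."""
    req = ResourceRequest()
    req._add_resource('VCPU', spec['vcpus'])
    req._add_resource('MEMORY_MB', spec['memory_mb'])
    req._add_resource('DISK_GB', spec['root_gb'])
    # Proposed addition: one IP_ALLOCATION unit per requested network.
    req._add_resource('IP_ALLOCATION', len(spec.get('requested_networks', [])))
    return req


req = resources_from_request_spec(
    {'vcpus': 2, 'memory_mb': 2048, 'root_gb': 20,
     'requested_networks': ['net-a']})
print(req.resources)
# {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 20, 'IP_ALLOCATION': 1}
```

With IP_ALLOCATION in the request, the existing get_allocation_candidates() call would filter on it like any other resource class, with no scheduler-side special casing.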

This seems to require the same work as https://review.opendev.org/#/c/656885/:
> This tries to translate that into taking the requested networks during server create (via the request network id or port id), get the segment IDs per requested network, and then for each segment get their aggregates (maybe the segments in the same network are all in the same resource provider aggregate, I'm not sure). Once we have all that crap, shove it in the request spec and translate it to required provider aggregates in the scheduler during GET /allocation_candidates.
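The translation that quote describes can be sketched as follows. The network, segment, and aggregate IDs below are invented; in reality the mappings would come from Neutron's segments API and from Placement resource provider aggregates:

```python
# Hypothetical data standing in for Neutron/Placement lookups.
network_segments = {'net-a': ['seg-1', 'seg-2']}   # network -> segment IDs
segment_aggregates = {'seg-1': 'agg-x',            # segment -> provider aggregate
                      'seg-2': 'agg-y'}


def required_aggregates(requested_networks):
    """Translate requested networks into the set of provider
    aggregates to require during GET /allocation_candidates."""
    aggs = set()
    for net in requested_networks:
        for seg in network_segments.get(net, []):
            aggs.add(segment_aggregates[seg])
    return aggs


print(required_aggregates(['net-a']))  # {'agg-x', 'agg-y'}
```

Whether all segments of one network share a single aggregate (as the quote wonders) would change whether this returns one aggregate or several per network.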

I am unsure whether IP usage for segments is functioning with Placement on master; it doesn't seem to be working in Pike (what I am running), and I haven't identified where/if this has changed.

We need IP usage reported to Placement for get_allocation_candidates to really do anything for us here.
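A toy model of the Placement-side check makes the dependency concrete: a provider only remains a candidate if every requested resource class, including the proposed IP_ALLOCATION, has enough free inventory. The providers and numbers below are invented:

```python
# Invented providers: hv1's segment has no free IPs, hv2's does.
providers = {
    'hv1': {'inventory': {'VCPU': 16, 'IP_ALLOCATION': 4},
            'usage':     {'VCPU': 4,  'IP_ALLOCATION': 4}},  # subnet full
    'hv2': {'inventory': {'VCPU': 16, 'IP_ALLOCATION': 4},
            'usage':     {'VCPU': 8,  'IP_ALLOCATION': 1}},
}


def allocation_candidates(requested):
    """Return providers whose free capacity covers every requested
    resource class (a simplification of GET /allocation_candidates)."""
    def fits(p):
        return all(p['inventory'].get(rc, 0) - p['usage'].get(rc, 0) >= amt
                   for rc, amt in requested.items())
    return [name for name, p in providers.items() if fits(p)]


print(allocation_candidates({'VCPU': 2, 'IP_ALLOCATION': 1}))  # ['hv2']
```

Without the usage side (the allocations against IP_ALLOCATION), every provider would always appear to have free IPs, which is why inventory reporting alone is not enough.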

Blueprint information

Status: Not started
Approver: None
Priority: Undefined
Drafter: Kevin Rasmussen
Direction: Needs approval
Assignee: None
Definition: New
Series goal: None
Implementation: Unknown
Milestone target: None


Whiteboard

Discussed at the Train PTG:

https://etherpad.openstack.org/p/ptg-train-xproj-nova-neutron

> I am unsure if the usage of IPs for segments is functioning with Placement on Master, it doesn't
> seem to be working in Pike (What I am running) and I havn't identified where/if this has changed.

The answer for that is 'no' since nothing is reporting IP_ALLOCATION resource class inventory or creating allocations (usage) against it. That would be work in neutron to report the inventory. I'm not sure who would create the allocations (probably nova during scheduling). Anyway, lots of details to work out, sounds like there might need to be a neutron spec (and maybe a nova spec). I've also linked in the old prep-for-network-aware-scheduling blueprint for context. -- mriedem 20190503
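To illustrate the inventory-reporting work mriedem describes, here is a sketch of the body Neutron could PUT to the Placement API at /resource_providers/{uuid}/inventories for a segment's provider. The field names follow Placement's inventory schema; the capacity numbers (a /24 with a couple of reserved addresses) and the generation value are illustrative:

```python
# Sketch of a Placement inventory payload reporting per-segment IP
# capacity under the IP_ALLOCATION resource class. Values are invented.
inventory_payload = {
    'resource_provider_generation': 1,  # must match the provider's current generation
    'inventories': {
        'IP_ALLOCATION': {
            'total': 254,            # usable addresses in the segment's subnet
            'reserved': 2,           # e.g. gateway and DHCP ports
            'min_unit': 1,
            'max_unit': 1,           # one IP consumed per allocation
            'step_size': 1,
            'allocation_ratio': 1.0, # IPs cannot be oversubscribed
        },
    },
}
```

The open question from the comment remains who writes the matching allocations against this inventory (likely Nova during scheduling), since inventory without allocations never shows any usage.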


