Outbound RPC version control

Registered by Dan Smith

Right now, a version N+1 node can receive and handle version N messages, but can only send version N+1 messages. Letting an admin lock down the outbound RPC version until all nodes have been upgraded enables graceful cluster-wide upgrades.

An example of locking down a version is setting:

  [rpc_api_client_caps]
  conductor = 1.48

in nova.conf on compute nodes as part of a Grizzly->Havana upgrade, before you upgrade the compute nodes. This ensures that Havana compute nodes will operate correctly with a Grizzly nova-conductor.
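A minimal sketch of how such a client-side cap could work, assuming "major.minor" version strings: a message may be sent only if its major version matches the cap's and its minor version does not exceed the cap's. All names here (version_is_compatible, CappedClient) are illustrative, not the actual oslo/nova implementation.

```python
# Hypothetical sketch of client-side RPC version capping.
# Names are illustrative; the real helper lives in oslo-incubator.

def version_is_compatible(cap, version):
    """Return True if `version` may be sent under a cap of `cap`.

    Versions are "major.minor" strings; a message is sendable when
    its major matches the cap's major and its minor is <= the cap's.
    """
    cap_major, cap_minor = (int(p) for p in cap.split('.'))
    ver_major, ver_minor = (int(p) for p in version.split('.'))
    return cap_major == ver_major and ver_minor <= cap_minor


class CappedClient:
    """RPC client stub that refuses to send messages newer than its cap."""

    def __init__(self, version_cap=None):
        self.version_cap = version_cap

    def can_send_version(self, version):
        # With no cap configured, any version may be sent.
        if self.version_cap is None:
            return True
        return version_is_compatible(self.version_cap, version)


# With conductor capped at 1.48, a 1.49-only message must be avoided:
client = CappedClient(version_cap='1.48')
client.can_send_version('1.48')  # True
client.can_send_version('1.49')  # False: newer than the cap
```

The caller would then pick an older message format (or fail clearly) whenever can_send_version() returns False, rather than sending a message the remote side cannot decode.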

Blueprint information

Status:
Complete
Approver:
Russell Bryant
Priority:
High
Drafter:
None
Direction:
Approved
Assignee:
Russell Bryant
Definition:
Approved
Series goal:
Accepted for havana
Implementation:
Implemented
Milestone target:
2013.2
Started by
Russell Bryant
Completed by
Russell Bryant

Whiteboard

This may require some work in the rpc library. If so, a blueprint should be created in openstack-common for that part. --russellb

---

Is this where a client of a certain API also serves that API? A compute node talking to another compute node? And does this apply to both major and minor versions?

I've seen something similar before in oVirt which they called a "cluster compatibility level" - until an entire cluster had been upgraded and known to support a newer version of the API, all nodes in the cluster use the older version of the API.

I wonder, could all nodes report their supported version number in the DB, and something (e.g. conductor) could decide when they can all move to sending with the new version number?

-- markmc
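The idea above could be sketched as follows: each node reports the highest RPC version it supports, and the safe cluster-wide send version is simply the minimum of those reports. This is an illustration of the suggestion, not code from Nova; cluster_send_version is a hypothetical name.

```python
# Sketch of markmc's suggestion: pick the highest version every
# node supports, i.e. the minimum of the reported maximums.

def cluster_send_version(reported_versions):
    """Given each node's max supported "major.minor" version string,
    return the lowest one -- the version safe to send cluster-wide."""
    if not reported_versions:
        return None
    # Compare numerically so '1.9' sorts below '1.10'.
    return min(reported_versions,
               key=lambda v: tuple(int(p) for p in v.split('.')))


# A mixed Grizzly/Havana cluster stays pinned to the oldest version:
cluster_send_version(['1.54', '1.48', '1.54'])  # -> '1.48'
```

Once the last old node is upgraded and re-reports its version, the computed minimum rises and the cluster can start sending the newer format without operator intervention.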

Gerrit topic: https://review.openstack.org/#q,topic:bp/rpc-version-control,n,z

Addressed by: https://review.openstack.org/32720
    Sync can_send_version() helper from oslo-incubator.

Addressed by: https://review.openstack.org/32721
    Add rpc client side version control.

