a shared volume can be accessed by many instances

Registered by Rongze Zhu

Provide the ability to attach a single volume to multiple instances simultaneously. Doing this read/write raises a number of issues around data corruption, so as a first pass it would be very useful to introduce a read-only option that could be specified during attach and used to allow simultaneous attachment to multiple instances.

Most of this will require work in Nova/Compute, but Cinder will need some awareness of it as well, and the ability to mark a volume as read-only might also be useful.

These R/O volumes could be especially useful for things like images and even disk-to-disk (D2D) backups.
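As a rough sketch of the first-pass semantics described above (class and method names here are purely illustrative, not actual Cinder code): any number of read-only attachments could coexist, while a read/write attachment would demand exclusive access.

```python
class AttachError(Exception):
    pass


class Volume:
    """Illustrative model of first-pass attach semantics (not Cinder code)."""

    def __init__(self, name):
        self.name = name
        self.attachments = {}  # instance_id -> mode ("ro" or "rw")

    def attach(self, instance_id, mode="rw"):
        if mode not in ("ro", "rw"):
            raise ValueError("mode must be 'ro' or 'rw'")
        # R/W attach requires exclusive access: any existing attachment blocks it.
        if mode == "rw" and self.attachments:
            raise AttachError("volume already attached; R/W attach must be exclusive")
        # R/O attach is only safe while no writer is present.
        if mode == "ro" and "rw" in self.attachments.values():
            raise AttachError("volume attached R/W; cannot add R/O attachment")
        self.attachments[instance_id] = mode

    def detach(self, instance_id):
        self.attachments.pop(instance_id, None)


vol = Volume("golden-image")
vol.attach("vm-1", mode="ro")
vol.attach("vm-2", mode="ro")      # simultaneous R/O attach is fine
try:
    vol.attach("vm-3", mode="rw")  # rejected: readers are present
except AttachError as e:
    print(e)
```

This captures why the read-only path is the safe first step: multiple readers never conflict, and the writer case stays exclusive exactly as it is today.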

There's also a need for FC environments to multi-attach in general.

Blueprint information

Status:
Complete
Approver:
John Griffith
Priority:
Medium
Drafter:
Rongze Zhu
Direction:
Needs approval
Assignee:
None
Definition:
Superseded
Series goal:
None
Implementation:
Not started
Milestone target:
None
Completed by
Sean McGinnis

Related branches

Sprints

Whiteboard

Something like this, or perhaps an extension of this? https://blueprints.launchpad.net/nova/+spec/nova-sharedfs
______________________________________________

I'd encourage you to have a look at the File Share Service submission... perhaps there's some commonality in concept.

---------------------------------------------------------------------------------
Indeed, both of the items you mention could be part of the overall solution. However this particular item as discussed at the summit was to add the capability to attach a volume (block/cinder volume) to multiple instances simultaneously. Currently Cinder does not allow this, the BP here is strictly to enable that for existing block devices, regardless of FS/backend-type etc.

---------------------------------------------------------------
While there is no detailed spec up yet, I would strongly request that multi-attach be a capability specified at volume creation time. This 1) makes it hard to multi-attach by accident, which is data corruption waiting to happen, and 2) makes implementation much easier for certain backends -- DuncanT
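DuncanT's suggestion could be sketched like this (again, an illustrative model only, with hypothetical names): the multi-attach capability is fixed at creation time, so a second attachment to an ordinary volume is always rejected.

```python
class AttachError(Exception):
    pass


class Volume:
    """Sketch of a creation-time multi-attach capability (illustrative only)."""

    def __init__(self, name, multiattach=False):
        self.name = name
        self.multiattach = multiattach  # decided at creation, never changed later
        self.attached_to = set()

    def attach(self, instance_id):
        # A second attachment is only allowed if the volume was created
        # as multi-attach, so sharing by accident is impossible.
        if self.attached_to and not self.multiattach:
            raise AttachError(f"{self.name} is not a multi-attach volume")
        self.attached_to.add(instance_id)


shared = Volume("quorum-disk", multiattach=True)
shared.attach("vm-1")
shared.attach("vm-2")  # allowed: created as multi-attach

plain = Volume("boot-disk")
plain.attach("vm-1")
try:
    plain.attach("vm-2")  # rejected: ordinary volume
except AttachError as e:
    print(e)
```

Making the flag immutable after creation is what gives backends the easier implementation path: they know at provisioning time whether the volume may ever be shared.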

------------------------------------------------------------------------------
---Dmitry Russkikh--- I've moved the read-only feature into a separate blueprint: https://blueprints.launchpad.net/cinder/+spec/read-only-volumes

------------------------------------------------------------------------------
---Julian Sternberg--- That's an important feature I'm missing as well, and it can be handled with GlusterFS/NFS. There is no need to block and handle I/O operations on the Cinder side, since that is managed by the GlusterFS/NFS stack itself.

Another useful case is clustering webserver instances that share the same web-directory content.

Usually you would mount the GlusterFS/NFS volume from the client side
using an /etc/fstab entry: HOSTNAME-OR-IP:/VOLNAME /MOUNT/TARGET/DIR glusterfs defaults,_netdev 0 0

But there is a better solution when using libvirt in combination with GlusterFS and/or NFS.

libvirt has a nice new feature for this task: the network filesystem pool (see: http://libvirt.org/storage.html).

An example config in each libvirt instance's XML would then look like this:
<pool type="glusterfs">
  <name>somedatadir</name>
  <source>
    <host name="HOSTNAME-OR-IP"/>
    <dir path="/VOLNAME"/>
  </source>
  <target>
    <path>/MOUNT/TARGET/DIR</path>
  </target>
</pool>

If anyone could implement this, including a Horizon dashboard option under the Volumes tab, it would be great!

------------------------------------------------------------------------------
-- Caitlin Bestler -- Is there a proposal on how usage of a shared volume is tracked? Obviously a 'status' field will no longer be adequate.
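One illustrative answer to this question (a sketch under assumed names, not an actual Cinder design): the single status string could be replaced by a list of per-attachment records, with the overall state derived from that list.

```python
from dataclasses import dataclass, field


@dataclass
class Attachment:
    instance_id: str
    host: str
    mountpoint: str


@dataclass
class Volume:
    """Sketch: per-attachment records instead of a single 'status' field."""
    name: str
    attachments: list = field(default_factory=list)

    @property
    def status(self):
        # The old single-valued field becomes a derived summary.
        return "in-use" if self.attachments else "available"

    def attach(self, instance_id, host, mountpoint):
        self.attachments.append(Attachment(instance_id, host, mountpoint))

    def detach(self, instance_id):
        self.attachments = [a for a in self.attachments
                            if a.instance_id != instance_id]


vol = Volume("shared-data")
vol.attach("vm-1", "compute-1", "/dev/vdb")
vol.attach("vm-2", "compute-2", "/dev/vdb")
print(vol.status, len(vol.attachments))  # in-use 2
vol.detach("vm-1")
vol.detach("vm-2")
print(vol.status)                        # available
```

The point is that "in-use" stops being a stored flag and becomes a summary over the attachment records, so detaching one of several instances never clobbers the state seen by the others.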

---------------------------
-- Scott Brightwell -- +1 to the value of implementing shared block storage. -1 to the requirement of managing block locking, etc.; it's the tenant's responsibility to add a block-locking mechanism to avoid corruption. +1 to the idea of a read-only flag. +1 to a flag allowing/disallowing a shared volume on creation.

I think there is no need for Cinder to consider the problem of avoiding corruption: Cinder should only provide the ability for multiple VMs to attach a shared volume, and the guest OS application (e.g. a clustered database app such as MSCS) should guarantee only one writer at a time. -- ling-yun
-------------------------
-- Ronen Kofman -- +1 on Scott's comment. Shared block storage is critical for any clustering software, and the clustering software is responsible for managing access to the shared block device; that is what clustered file systems are for. Any database clustering software and most application-level high-availability techniques require shared block storage for quorum, voting, etc.
OpenStack does not provide HA capabilities, and since virtualization-level HA (even if it were available in OpenStack), which is basically a VM restart, is not sufficient in many cases, application-level HA is even more critical. An example that comes to mind is Oracle RAC, which has to have shared devices to work; without them we cannot scale the database beyond one instance.
The requirement on Cinder should simply be to allow designating volumes as "shared" (upon creation), which would mean those volumes could be attached to multiple instances. The guest had better know how to handle a shared device, but this is no different from the physical world, where a block device can be connected to multiple servers; the responsibility is on the guest.


Work Items