Comment 5 for bug 1209199

Revision history for this message
Josh Durgin (jdurgin) wrote :

I'm not so worried about performance degrading from many clones based on the same snapshot - it's unlikely that all the clones will be accessing the same objects all at once, and if they do, the shared objects will be in the OSD's page cache, so it will take longer to become a bottleneck.

I'm fine with (c) - the issue with using copy-on-write for cinder's clone_volume() is handling the snapshot the rbd driver needs to create (which the user shouldn't see). Since snapshots in rbd prevent the volume from being deleted, this hidden snapshot used for cloning would require the original volume to stick around while clones of it still exist, even after the user has requested that the original volume be deleted. If we don't have auto-flattening as well, this would let a user consume extra, unaccounted-for space by creating a clone and then deleting the original volume. With one clone per volume, they could use up to twice their quota.
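The quota-vs-actual-space gap can be illustrated with a small accounting sketch. This is not Cinder code - the function names and the worst-case model (every clone fully diverging from its parent) are hypothetical, just to make the "twice their quota" arithmetic concrete:

```python
# Hypothetical accounting sketch: a clone keeps its hidden parent snapshot
# alive, so deleting the original volume does not free its space on the
# backend while clones remain, but quota only sees user-visible volumes.

def backend_space_used(volume_size_gb, num_clones, original_deleted):
    """Worst case: every clone has fully diverged from the parent."""
    # The hidden parent sticks around as long as any clone references it,
    # even if the user deleted the original volume.
    parent = volume_size_gb if (num_clones > 0 or not original_deleted) else 0
    return parent + num_clones * volume_size_gb

def quota_charged(volume_size_gb, num_clones, original_deleted):
    """Quota only counts the volumes the user can see."""
    visible = 0 if original_deleted else volume_size_gb
    return visible + num_clones * volume_size_gb

# One 10 GB clone per volume, original deleted: the user is charged 10 GB
# of quota but the backend may hold up to 20 GB on their behalf.
print(backend_space_used(10, 1, True))  # 20
print(quota_charged(10, 1, True))       # 10
```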

If we automatically flatten all children of a volume when the parent snapshot (or volume with a hidden snapshot from cloning) is deleted, we could overload the backend with too many flattens at once. We could introduce a complex system of long-running operations and scheduling for flattens, or we could use a simple heuristic like 'only flatten all children when the number of clones is fewer than N'. Since any parent left un-flattened is then shared by at least N clones, this kind of rule would bound the extra space usage per clone to 1/N of a volume, and help avoid too many flattens at once.
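The heuristic above can be sketched as follows. This is an illustrative in-memory model, not the actual rbd driver - Volume, flatten(), delete_volume(), and FLATTEN_THRESHOLD are all made-up names standing in for the real clone bookkeeping:

```python
# Sketch of the 'only flatten when there are fewer than N clones' rule.

FLATTEN_THRESHOLD = 5  # N: flatten children only when fewer than N exist

class Volume:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent       # hidden clone source, if any
        self.children = []         # copy-on-write clones of this volume
        self.deleted = False       # user-requested deletion
        if parent:
            parent.children.append(self)

    def flatten(self):
        # Copying all data from the parent makes the clone independent,
        # so the parent no longer needs to stick around for its sake.
        if self.parent:
            self.parent.children.remove(self)
            self.parent = None

def delete_volume(vol):
    """Mark vol deleted and decide whether its clones get flattened now."""
    vol.deleted = True
    if len(vol.children) < FLATTEN_THRESHOLD:
        # Few clones: flatten them all so vol's space can be reclaimed,
        # without risking a large burst of flatten operations.
        for child in list(vol.children):
            child.flatten()
        return True   # backing data can be removed immediately
    # Many clones: keep them sharing vol's data; the hidden parent is
    # shared by >= N clones, so per-clone overhead is at most 1/N.
    return False
```

With N = 5, deleting a volume that has 3 clones flattens them and frees the parent, while deleting one with 6 clones leaves the sharing in place.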