Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

Bug #967832 reported by Lars Erik Pedersen
This bug affects 20 people
Affects                        Status        Importance  Assigned to     Milestone
Glance                         Confirmed     Undecided   Unassigned
OpenStack Compute (nova)       Won't Fix     Medium      Ryan Hallisey
OpenStack Dashboard (Horizon)  Won't Fix     High        Unassigned
OpenStack Identity (keystone)  Fix Released  High        Dolph Mathews

Bug Description

If you have running instances in a tenant, then remove all the users, and finally delete the tenant, the instances are still running. This causes serious trouble, since nobody has access to delete them. It also affects the "Instances" page in Horizon, which breaks if this scenario occurs.

Tags: ops blueprint
description: updated
description: updated
Revision history for this message
Gabriel Hurley (gabriel-hurley) wrote :

This is actually tracked in the blueprint tenant-deletion (https://blueprints.launchpad.net/horizon/+spec/tenant-deletion), but I'll leave the bug as a reminder.

Changed in horizon:
importance: Undecided → Medium
status: New → Confirmed
importance: Medium → High
Revision history for this message
Joseph Heck (heckj) wrote :

This needs some discussion at the Folsom design summit about how resources should be treated from a UX point of view. My inclination would be to destroy all resources owned by a tenant when it's removed, but that may be too harsh, and it doesn't take into account future states where trust may be delegated.

Regardless, that also implies an inversion of control where Keystone would potentially need to reach into other systems with administrative access to impact resources, which I'm not sure I like.

tags: added: blueprint
Changed in keystone:
status: New → Confirmed
importance: Undecided → High
Revision history for this message
Jay Pipes (jaypipes) wrote :

For the record, I agree that the action should be to terminate and delete all resources attached to a tenant when the tenant is deleted...

Tom Fifield (fifieldt)
Changed in nova:
status: New → Confirmed
Thierry Carrez (ttx)
Changed in nova:
importance: Undecided → High
Michael Still (mikal)
tags: added: ops
Revision history for this message
Matt Joyce (matt-nycresistor) wrote :

I agree that, when a tenant is deleted, the running instances associated with it should be terminated.

Revision history for this message
Tong Li (litong01) wrote :

The easier and non-destructive approach to this problem may be to check the resources (such as images, instances, and volumes) under the tenant and make sure nothing is left hanging around that nobody can access once the tenant is deleted. When any such resource is detected, the tenant deletion should not proceed; the user should be told that the tenant cannot be deleted until those resources are handled. This also forces the operator to deal with the resources first. In many cases, this approach might be the safer way to handle deletion.
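
A minimal sketch of that guard, assuming the standard Python clients; client construction is simplified, and the filter names (cinder's project filter in particular) vary by client version:

from cinderclient import client as cinder_client
from glanceclient import client as glance_client
from novaclient import client as nova_client

def tenant_is_empty(session, tenant_id):
    """Return True only if no instances, images, or volumes remain."""
    nova = nova_client.Client('2', session=session)
    if nova.servers.list(search_opts={'all_tenants': 1,
                                      'tenant_id': tenant_id}):
        return False
    glance = glance_client.Client('2', session=session)
    if any(glance.images.list(filters={'owner': tenant_id})):
        return False
    cinder = cinder_client.Client('3', session=session)
    if cinder.volumes.list(search_opts={'all_tenants': 1,
                                        'project_id': tenant_id}):
        return False
    return True

def guarded_delete(keystone, session, tenant_id):
    """Refuse the keystone delete while the tenant still owns anything."""
    if not tenant_is_empty(session, tenant_id):
        raise RuntimeError('Tenant %s still owns resources; clean them up '
                           'before deleting the tenant.' % tenant_id)
    keystone.projects.delete(tenant_id)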

Revision history for this message
Stephan Fabel (sfabel) wrote :

With regard to #5, for what it's worth, I would like a choice... let me delete all resources, but confirm with me first.

Revision history for this message
Craig Hadix (info-jjcftv6wldnzq84cskygyvhqqb9qwjfcq0yfnwzcca0ux8ircw2a3om624q2ycdp941uw5474gcdbi2qtcnliiwmmp1l) wrote :

I'd like a verification step here: both an 'are you sure?' prompt and an optional password or other validated check. That would prevent the last user from being deleted while resources are still active; however, if the override is approved, then the instances should be blown away along with the last user.

Revision history for this message
Gabriel Hurley (gabriel-hurley) wrote :

Closing this as a bug as it is superseded by the blueprint https://blueprints.launchpad.net/horizon/+spec/tenant-deletion

Changed in horizon:
status: Confirmed → Won't Fix
Revision history for this message
Joe Gordon (jogo) wrote :

This won't be fixed in Nova in Grizzly; it's a cross-project issue that needs to be hashed out at the Summit.

Changed in nova:
importance: High → Medium
Adam Young (ayoung)
Changed in keystone:
milestone: none → havana-1
yelu (yeluaiesec)
Changed in keystone:
assignee: nobody → yelu (yeluaiesec)
Revision history for this message
Joshua Harlow (harlowja) wrote :

Wouldn't a good way be to have keystone broadcast user/tenant/role creation and deletion events and let downstream systems consume those messages? Then nova can react however it wants, glance too, and so can any other downstream system that wants to take some kind of action when users/tenants/roles are added or deleted...
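
That is roughly the design keystone eventually adopted. A hedged sketch of the consumer side using oslo.messaging (which post-dates this comment); the identity.project.deleted event type and resource_info payload key match keystone's eventual basic notification format, and the transport URL is illustrative:

import oslo_messaging
from oslo_config import cfg

class TenantEventEndpoint(object):
    """React to identity events published by keystone."""
    filter_rule = oslo_messaging.NotificationFilter(
        event_type='identity.project.deleted')

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        project_id = payload.get('resource_info')
        # A service like nova could reap (or quarantine) the project's
        # resources here instead of just printing.
        print('project deleted: %s' % project_id)

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [TenantEventEndpoint()], executor='threading')
listener.start()
listener.wait()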

yelu (yeluaiesec)
Changed in keystone:
assignee: yelu (yeluaiesec) → nobody
Thierry Carrez (ttx)
no longer affects: nova/havana
Revision history for this message
Thierry Carrez (ttx) wrote :

<dolphm> that's being worked in bp notifications
<dolphm> https://blueprints.launchpad.net/keystone/+spec/notifications
<ttx> so for havana-2 ?
<dolphm> which is blocked by https://blueprints.launchpad.net/keystone/+spec/unified-logging-in-keystone ... which we just added to m1 and is nearly complete
<dolphm> ttx: yes, looks to be on pace for m2

Changed in keystone:
milestone: havana-1 → havana-2
Revision history for this message
Adam Young (ayoung) wrote :

The RPC mechanism assumes eventlet, and many people run Keystone in HTTPD without eventlet. Could we implement the notification mechanism using straight AMQP and punt on 0MQ for a first implementation? We only need to publish notifications, not accept them, so Keystone does not need the full RPC stack.
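
A publish-only sketch along those lines, using plain AMQP via kombu rather than the RPC stack; the exchange name and 'notifications.info' routing key are assumptions modeled on common OpenStack notification topics, not keystone's actual wiring:

from kombu import Connection, Exchange

def publish_project_deleted(project_id,
                            url='amqp://guest:guest@localhost:5672//'):
    """Emit a deletion notification with no consumer machinery at all."""
    with Connection(url) as conn:
        exchange = Exchange('keystone', type='topic')
        producer = conn.Producer(serializer='json')
        producer.publish(
            {'event_type': 'identity.project.deleted',
             'payload': {'resource_info': project_id}},
            exchange=exchange,
            routing_key='notifications.info',
            declare=[exchange])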

Revision history for this message
Dolph Mathews (dolph) wrote :

Removed m2 target because bp notifications has some blockers; it looks like it won't land during Havana at this point.

Changed in keystone:
milestone: havana-2 → none
Adam Young (ayoung)
Changed in keystone:
assignee: nobody → Victoria Martínez de la Cruz (vkmc)
Changed in keystone:
assignee: Victoria Martínez de la Cruz (vkmc) → nobody
Dolph Mathews (dolph)
Changed in keystone:
assignee: nobody → Dolph Mathews (dolph)
milestone: none → havana-3
Revision history for this message
Dolph Mathews (dolph) wrote :

keystone now emits notifications when projects/tenants are deleted, as part of https://blueprints.launchpad.net/keystone/+spec/notifications

Changed in keystone:
status: Confirmed → Fix Committed
Thierry Carrez (ttx)
Changed in keystone:
status: Fix Committed → Fix Released
Revision history for this message
Dolph Mathews (dolph) wrote :
summary: - Instances are still running when a tenant are deleted
+ Resources owned by a project/tenant are not cleaned up after that
+ project is deleted from keystone
Revision history for this message
Iccha Sethi (iccha-sethi) wrote :

I feel this is the responsibility of something external that performs clean tenant deletion, like Horizon (https://blueprints.launchpad.net/horizon/+spec/tenant-deletion); that is what should be doing the image-members cleanup.

Thierry Carrez (ttx)
Changed in keystone:
milestone: havana-3 → 2013.2
Revision history for this message
Mark Washenberger (markwash) wrote :

I don't think it makes sense to look at this as a glance issue.

I agree some work may be needed in glance, as in other projects, to respond to tenant deletion events in a sensible way. But glance shouldn't be innovating that independently. This sounds like a cross-cutting concern for all projects.

Changed in glance:
status: New → Won't Fix
Assaf Muller (amuller)
Changed in neutron:
status: New → Confirmed
Assaf Muller (amuller)
Changed in neutron:
assignee: nobody → Assaf Muller (amuller)
Revision history for this message
Dolph Mathews (dolph) wrote :

This was brought up again (specifically for nova) in bug https://bugs.launchpad.net/nova/+bug/1288230

Revision history for this message
Openstack Gerrit (openstack-gerrit) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/92599

Changed in neutron:
status: Confirmed → In Progress
Revision history for this message
Openstack Gerrit (openstack-gerrit) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/92600

Revision history for this message
kesten broughton (dathomir) wrote : Re: [Bug 967832] Re: Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

Not just the instances, but likely the subnet, router, gateway, etc. Once the tenant is gone it is very difficult to get the IDs you need. If you do a quantum (neutron) port-list you will likely see lots of floating and fixed IPs related to it as well.
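
For illustration, a hedged sketch of hunting down those leftovers with python-neutronclient, assuming the old tenant ID was recorded before the tenant was deleted (session construction is elided):

from neutronclient.v2_0 import client as neutron_client

def list_leftovers(session, tenant_id):
    """Find networking resources still tagged with a deleted tenant's ID."""
    neutron = neutron_client.Client(session=session)
    return {
        'ports': neutron.list_ports(tenant_id=tenant_id)['ports'],
        'routers': neutron.list_routers(tenant_id=tenant_id)['routers'],
        'subnets': neutron.list_subnets(tenant_id=tenant_id)['subnets'],
        'floatingips':
            neutron.list_floatingips(tenant_id=tenant_id)['floatingips'],
    }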


Ryan Hallisey (rthall14)
Changed in nova:
assignee: nobody → Ryan Hallisey (rthall14)
Revision history for this message
Assaf Muller (amuller) wrote :

Ryan, I *just* resumed working on this issue today (in Neutron). I think we should have a summit design session about an OpenStack-wide solution. I hope you're coming to Paris :)

Revision history for this message
Ryan Hallisey (rthall14) wrote :

Hey Assaf, I also picked this up again only about a week ago. It would be awesome if we could do a summit design session about this solution! I submitted a presentation related to OpenStack and SELinux, so hopefully it will get accepted and I will be headed to Paris!

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/115964

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/115965

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/115966

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron (master)

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/92600
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/115964
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/115965
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/115966
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Assaf Muller (<email address hidden>) on branch: master
Review: https://review.openstack.org/92599

Revision history for this message
Matt Riedemann (mriedem) wrote :

Did anything happen at the Kilo summit in Paris about this? Are there any mailing list threads on this? If it hasn't come to a summit yet, we should talk about it in the cross-project sessions in Vancouver.

A simple approach without listening for event notifications (which would be the slickest way) would be a periodic task that checks each resource's tenant against keystone and, if the tenant no longer exists, reaps the resource (like we have for orphaned instances).
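
A rough sketch of that periodic reaper, assuming admin credentials and the standard Python clients; the function name and interval are illustrative:

import time

from keystoneclient.v3 import client as ks_client
from novaclient import client as nova_client

def reap_orphaned_instances(session, interval=3600):
    """Periodically delete instances whose project is gone from keystone."""
    keystone = ks_client.Client(session=session)
    nova = nova_client.Client('2', session=session)
    while True:
        live_projects = {p.id for p in keystone.projects.list()}
        for server in nova.servers.list(search_opts={'all_tenants': 1}):
            if server.tenant_id not in live_projects:
                # Orphaned: its owning project was deleted from keystone.
                server.delete()
        time.sleep(interval)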

Revision history for this message
Dolph Mathews (dolph) wrote :

I agree with Mark's assertion in comment #17 that Glance shouldn't be pursuing a solution alone, but this certainly affects Glance.

Revision history for this message
Ian Cordasco (icordasc) wrote :

This also certainly affects Horizon.

Changed in glance:
status: Won't Fix → Confirmed
Assaf Muller (amuller)
Changed in neutron:
assignee: Assaf Muller (amuller) → nobody
status: In Progress → Confirmed
no longer affects: neutron
Revision history for this message
Sean Dague (sdague) wrote :

The Tokyo Summit resolution here was that this should be done via an osc plugin. There are really dramatic issues with automatically deleting resources when a project is deleted from keystone. Many sites need an archive process. Nova itself soft-deletes many resources, and even has the ability to set an undo time on some of them.

This shouldn't be an automatic process in the cloud; it should be deliberate, just as you wouldn't delete all the files on your system owned by a user simply because you removed that user from /etc/passwd.
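
For illustration, a hedged sketch of what such an osc plugin command might look like using osc-lib; the command name, scope, and setup.cfg entry-point wiring are assumptions, though the later ospurge project and the openstack project purge command took roughly this deliberate, operator-driven shape:

import logging

from osc_lib.command import command

LOG = logging.getLogger(__name__)

class PurgeProject(command.Command):
    """Delete every resource owned by a project, then the project itself."""

    def get_parser(self, prog_name):
        parser = super(PurgeProject, self).get_parser(prog_name)
        parser.add_argument('project_id', help='ID of the project to purge')
        parser.add_argument('--dry-run', action='store_true',
                            help='Report what would be deleted; delete nothing')
        return parser

    def take_action(self, parsed_args):
        compute = self.app.client_manager.compute
        identity = self.app.client_manager.identity
        servers = compute.servers.list(
            search_opts={'all_tenants': 1,
                         'tenant_id': parsed_args.project_id})
        for server in servers:
            LOG.info('Deleting instance %s', server.id)
            if not parsed_args.dry_run:
                server.delete()
        # Volumes, images, networks, etc. would be handled the same way
        # before finally removing the project from keystone.
        if not parsed_args.dry_run:
            identity.projects.delete(parsed_args.project_id)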

Changed in nova:
status: Confirmed → Won't Fix