diff -Nru nova-2014.1.3/AUTHORS nova-2014.1.5/AUTHORS --- nova-2014.1.3/AUTHORS 2014-10-02 23:38:45.000000000 +0000 +++ nova-2014.1.5/AUTHORS 2015-06-18 22:33:13.000000000 +0000 @@ -6,6 +6,7 @@ Adalberto Medeiros Adam Gandelman Adam Gandelman +Adam Gandelman Adam Johnson Adam Spiers Adelina Tuvenie @@ -16,6 +17,7 @@ Adrien Cunin Adrien Cunin Ahmad Hassan +Akash Gangil Akihiro MOTOKI Akira Yoshiyama Akira Yoshiyama @@ -81,6 +83,7 @@ Avinash Prasad Avishay Traeger Balazs Gibizer +Bartosz Fic Belmiro Moreira Ben McGraw Ben Nemec @@ -160,6 +163,7 @@ Darren Worrall Davanum Srinivas Dave Lapsley +Dave McCowan Dave Walker (Daviey) David Besen David Hill @@ -185,6 +189,7 @@ Devdeep Singh Devendra Modium Devin Carlen +Dheeraj Gupta Dima Shulyak Dina Belova Dirk Mueller @@ -224,6 +229,7 @@ Evgeny Fedoruk Ewan Mellor Facundo Maldonado +Fei Long Wang Fei Long Wang Fengqian Gao Flaper Fesp @@ -261,6 +267,7 @@ Hendrik Volkmer Hengqing Hu Hirofumi Ichihara +Hiroyuki Eguchi Hisaharu Ishii Hisaki Ohara Hyunsun Moon @@ -358,6 +365,7 @@ Ken Pepple Ken'ichi Ohmichi Keshava Bharadwaj +Kevin Benton Kevin Benton Kevin Bringard Kevin L. Mitchell @@ -378,6 +386,7 @@ Leandro I. Costantino Li Chen Liam Kelleher +Liam Young Lianhao Lu Likitha Shetty Lin Hua Cheng @@ -399,6 +408,7 @@ Mana Kaneko Mandar Vaze Mandell Degerness +Marcio Roberto Starke Marco Sinhoreli Marcos Lobo Maris Fogels @@ -418,6 +428,7 @@ Matt Odden Matt Riedemann Matt Stephenson +Matt Symonds Matt Thompson Matthew Booth Matthew Gilliard @@ -435,6 +446,7 @@ Michael Kerrin Michael Still Michael Wilson +Michal Jura Miguel Lavalle Mike Bayer Mike Lundy @@ -592,6 +604,7 @@ Sumit Naiksatam Sunil Thaha Svetlana Shturm +Sylvain Bauza Takaaki Suzuki Takashi Natsume Takashi Sogabe @@ -669,6 +682,7 @@ Yuriy Taraday Yuriy Zveryanskyy Yuzlikeev Eduard +ZHU ZHU Zaina Afoulki Zane Bitter Zed Shaw @@ -684,6 +698,7 @@ ZhuRongze Ziad Sawalha Zoltan Arnold Nagy +abhishekkekane alexpilotti andrewbogott armando-migliaccio @@ -748,7 +763,9 @@ mkislinska msdubov pengyuwei +pkholkin pmoosh +pranali pyw rackerjoe ruichen diff -Nru nova-2014.1.3/ChangeLog nova-2014.1.5/ChangeLog --- nova-2014.1.3/ChangeLog 2014-10-02 23:38:43.000000000 +0000 +++ nova-2014.1.5/ChangeLog 2015-06-18 22:33:11.000000000 +0000 @@ -1,11 +1,87 @@ CHANGES ======= +2014.1.5 +-------- + +* Use ebtables to isolate dhcp traffic +* VMware: fix AttributeError: TaskInfo instance has no attribute 'name' +* libvirt: partial fix for live-migration with config drive +* Updated from global requirements +* Type conflict in trusted_filter.py using attestation_port default value +* Use instance.uuid instead of instance +* Make test_version_string_with_package_is_good work with pbr 0.11 +* Moves trusted filter unit tests into own file +* Use hypervisor hostname for compute trust level +* Recover from POWERING-* state on compute manager start-up +* Avoid referring to juno-era exception type +* libvirt: Make sure volumes are well detected during block migration +* libvirt: avoid changing UUID when redefining nwfilters +* delete python bytecode before every test run +* Drop use of oslo.utils in nova +* Eventlet green threads not released back to pool +* Bump stable/icehouse next version to 2014.1.5 + +2014.1.4 +-------- + +* Websocket Proxy should verify Origin header +* Fix kwargs['instance'] KeyError in @reverts_task_state decorator +* Revert "Eventlet green threads not released back to pool" +* Compute: Catch binding failed exception while init host +* Sync strutils from oslo-incubator for mask_password fix +* 
Updated from global requirements
+* Make tests use sha256 as openssl default digest algorithm
+* Eventlet green threads not released back to pool
+* Fix image metadata returned for volumes
+* Check min_ram and min_disk when boot from volume
+* Extends use of ServiceProxy to more methods in HostAPI in cells
+* Allow instances to attach to shared external nets
+* Fix libvirt watchdog support
+* Updated from global requirements
+* Remove usage of self.__dict__ for message var replacement
+* only emit deprecation warnings once
+* Updated from global requirements
+* Updated from global requirements
+* Fix disconnecting necessary iSCSI sessions issue
+* Fix connecting unnecessary iSCSI sessions issue
+* Fix wrong command for _rescan_multipath
+* Fix unsafe SSL connection on TrustedFilter
+* Fix SecurityGroupExists error when booting instances
+* Update "num_instance" during delete instance
+* Fix nova evacuate issues for RBD
+* Fix nova-compute start issue after evacuate
+* Add _security_group_ensure_default() DBAPI method
+* Run build_and_run_instance in a separate greenthread
+* Fixes DOS issue in instance list ip filter
+* Make the block device mapping retries configurable
+* Retry on closing of luks encrypted volume in case device is busy
+* HyperV Driver - Fix to implement hypervisor-uptime
+* Add @_retry_on_deadlock to _instance_update()
+* Nova api service doesn't handle SIGHUP properly
+* Fix XML UnicodeEncode serialization error
+* postgresql: use postgres db instead of template1
+* share neutron admin auth tokens
+* VMware: validate that VM exists on backend prior to deletion
+* VMWare: Fix VM leak when deletion of VM during resizing
+* Sync process utils from oslo
+* VMware: prevent race condition with VNC port allocation
+* Fixes Hyper-V volume mapping issue on reboot
+* Fix CellStateManagerFile init to failure
+* Raise descriptive error for over volume quota
+* Fixes missing ec2 api address disassociate error on failure
+* Bump stable/icehouse next version to 2014.1.4
+* Fix instance cross AZ check when attaching volumes
+* Ignore errors when deleting non-existing vifs
+
 2014.1.3
 --------
 
 * Adds tests for Hyper-V VM Utils
 * Removes unnecessary instructions in test_hypervapi
+* libvirt: Handle unsupported host capabilities
+* libvirt: Make `fakelibvirt.libvirtError` match
+* Add _wrap_db_error() support to SessionTransaction.commit()
 * Fixes a Hyper-V list_instances localization issue
 * Adds list_instance_uuids to the Hyper-V driver
 * Add _wrap_db_error() support to Session.commit()
diff -Nru nova-2014.1.3/debian/changelog nova-2014.1.5/debian/changelog
--- nova-2014.1.3/debian/changelog 2015-01-16 20:31:43.000000000 +0000
+++ nova-2014.1.5/debian/changelog 2017-09-13 18:45:42.000000000 +0000
@@ -1,12 +1,232 @@
-nova (1:2014.1.3-0ubuntu2.1) trusty-security; urgency=medium
+nova (1:2014.1.5-0ubuntu1.7) trusty-security; urgency=medium
 
-  * SECURITY UPDATE: denial of service via ip filter
-    - debian/patches/CVE-2014-3708.patch: filter locally in
-      nova/compute/api.py, added test to
+  * SECURITY UPDATE: DoS via instance deletion during migration
+    - debian/patches/CVE-2015-3241-1.patch: check for resize path on
+      libvirt instance delete in nova/tests/virt/libvirt/test_libvirt.py,
+      nova/virt/libvirt/driver.py.
+    - debian/patches/CVE-2015-3241-2.patch: sync process utils from oslo in
+      nova/openstack/common/processutils.py.
+    - debian/patches/CVE-2015-3241-3.patch: kill rsync/scp processes before
+      deleting instance in nova/tests/virt/libvirt/test_libvirt.py,
+      nova/tests/virt/libvirt/test_libvirt_utils.py,
+      nova/virt/libvirt/driver.py, nova/virt/libvirt/instancejobtracker.py,
+      nova/virt/libvirt/utils.py.
+    - CVE-2015-3241
+  * SECURITY UPDATE: DoS via instance deletion during resize
+    - debian/patches/CVE-2015-3280.patch: delete orphaned instance files
+      from compute nodes in nova/compute/manager.py,
+      nova/tests/compute/test_compute_mgr.py.
+    - CVE-2015-3280
+  * SECURITY UPDATE: DoS via crafted disk image
+    - debian/patches/CVE-2015-5162-1.patch: add prlimit parameter to
+      execute() in nova/openstack/common/prlimit.py,
+      nova/openstack/common/processutils.py,
+      nova/tests/openstack_common/test_processutils.py.
+    - debian/patches/CVE-2015-5162-2.patch: add support for missing process
+      limits in nova/openstack/common/prlimit.py,
+      nova/openstack/common/processutils.py,
+      nova/tests/openstack_common/test_processutils.py.
+    - debian/patches/CVE-2015-5162-3.patch: set address space & CPU time
+      limits when running qemu-img in nova/virt/images.py,
+      nova/tests/virt/libvirt/test_libvirt.py,
+      nova/tests/virt/libvirt/test_image_utils.py,
+      nova/tests/virt/libvirt/test_libvirt_utils.py.
+    - CVE-2015-5162
+  * SECURITY UPDATE: arbitrary file read via snapshot
+    - debian/patches/CVE-2015-7548-1.patch: fix format detection in libvirt
+      snapshot in nova/tests/virt/libvirt/fake_libvirt_utils.py,
+      nova/tests/virt/libvirt/test_image_utils.py,
+      nova/tests/virt/libvirt/test_libvirt_utils.py,
+      nova/virt/libvirt/driver.py, nova/virt/libvirt/utils.py.
+    - debian/patches/CVE-2015-7548-2.patch: fix format conversion in
+      libvirt snapshot in nova/tests/virt/libvirt/test_libvirt.py,
+      nova/virt/images.py, nova/virt/libvirt/imagebackend.py.
+    - debian/patches/CVE-2015-7548-3.patch: fix backing file detection in
+      libvirt live snapshot in nova/tests/virt/libvirt/test_libvirt.py,
+      nova/tests/virt/libvirt/fake_libvirt_utils.py, nova/virt/images.py,
+      nova/virt/libvirt/driver.py, nova/virt/libvirt/utils.py.
+    - debian/patches/CVE-2015-7548-4.patch: disable live snapshot for
+      rbd-backed instances in nova/virt/libvirt/driver.py.
+    - CVE-2015-7548
+  * SECURITY UPDATE: restriction bypass via security group changes
+    - debian/patches/CVE-2015-7713.patch: don't expect meta attributes in
+      object_compat that aren't in the db obj in nova/compute/manager.py,
       nova/tests/compute/test_compute.py.
-    - CVE-2014-3708
+    - CVE-2015-7713
+  * SECURITY UPDATE: password disclosure via xen log files
+    - debian/patches/CVE-2015-8749.patch: mask passwords in volume
+      connection_data dict in nova/virt/xenapi/volume_utils.py.
+    - CVE-2015-8749
+  * SECURITY UPDATE: arbitrary file read via crafted qcow2 header
+    - debian/patches/CVE-2016-2140-1.patch: always copy or recreate
+      disk.info during a migration in nova/virt/libvirt/driver.py,
+      nova/tests/virt/libvirt/test_libvirt.py.
+    - debian/patches/CVE-2016-2140-2.patch: fix processing of libvirt
+      disk.info in non-disk-image cases in nova/virt/libvirt/driver.py,
+      nova/tests/virt/libvirt/test_libvirt.py.
+    - debian/patches/CVE-2016-2140-3.patch: decode disk_info before use in
+      nova/tests/virt/libvirt/test_libvirt.py, nova/virt/libvirt/driver.py.
+    - CVE-2016-2140
+  * Thanks to Red Hat for the backports many of these patches are based on.
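The CVE-2015-5162 entries above describe capping the resources qemu-img may consume while parsing an untrusted image. As a rough standalone illustration of that idea, using only the Python standard library rather than the packaged prlimit helper (the function names and limit values here are invented for the sketch):

import resource
import subprocess

def _cap_resources():
    # Illustrative limits: 1 GiB of address space, 8 s of CPU time.
    resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
    resource.setrlimit(resource.RLIMIT_CPU, (8, 8))

def qemu_img_info(path):
    # preexec_fn runs in the child between fork() and exec(), so the
    # limits apply only to qemu-img, not to the calling service.
    return subprocess.check_output(['qemu-img', 'info', path],
                                   preexec_fn=_cap_resources)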
- -- Marc Deslauriers Fri, 16 Jan 2015 15:30:14 -0500
+ -- Marc Deslauriers Wed, 13 Sep 2017 14:30:17 -0400
+
+nova (1:2014.1.5-0ubuntu1.6) trusty; urgency=medium
+
+  * Allow evacuate for an instance in the Error state (LP: #1298061)
+    - d/p/remove_useless_state_check.patch remove unnecessary task_state check
+    - d/p/evacuate_error_vm.patch Allow evacuate from error state
+
+ -- Liang Chen Fri, 09 Sep 2016 17:41:48 +0800
+
+nova (1:2014.1.5-0ubuntu1.5) trusty; urgency=medium
+
+  * Fix live migration usage of the wrong connector (LP: #1475411)
+    - d/p/Fix-live-migrations-usage-of-the-wrong-connector-inf.patch
+  * Fix wrong used ProcessExecutionError exception (LP: #1308839)
+    - d/p/Fix-wrong-used-ProcessExecutionError-exception.patch
+  * Clean up iSCSI multipath devices in Post Live Migration (LP: #1357368)
+    - d/p/Clean-up-iSCSI-multipath-devices-in-Post-Live-Migrat.patch
+  * Detach iSCSI latest path for latest disk (LP: #1374999)
+    - d/p/Detach-iSCSI-latest-path-for-latest-disk.patch
+
+ -- Billy Olsen Fri, 29 Apr 2016 15:35:01 -0700
+
+nova (1:2014.1.5-0ubuntu1.4) trusty; urgency=medium
+
+  * Protect against possible rpcapi mismatch on upgrade (LP: #1506257)
+    - d/p/protect-against-upgrade-rpc-ver-mismatch.patch
+
+ -- Edward Hope-Morley Thu, 22 Oct 2015 10:00:29 -0500
+
+nova (1:2014.1.5-0ubuntu1.3) trusty; urgency=medium
+
+  * Attempting to attach the same volume multiple times can cause
+    bdm record for existing attachment to be deleted. (LP: #1349888)
+    - d/p/fix-creating-bdm-for-failed-volume-attachment.patch
+
+ -- Edward Hope-Morley Tue, 08 Sep 2015 12:32:45 +0100
+
+nova (1:2014.1.5-0ubuntu1.2) trusty; urgency=medium
+
+  * Add rsyslog retry support (LP: #1459046)
+    - d/p/add-support-for-syslog-connect-retries.patch
+  * Add vm clean shutdown support (LP: #1196924)
+    - d/p/clean-shutdown.patch
+
+ -- Edward Hope-Morley Thu, 16 Jul 2015 11:55:57 +0100
+
+nova (1:2014.1.5-0ubuntu1.1) trusty; urgency=medium
+
+  [ Edward Hope-Morley ]
+  - d/nova-compute.upstart: Fix (another) race between nova-compute
+    and neutron-ovs-cleanup (LP: #1471022)
+
+ -- Edward Hope-Morley Wed, 08 Jul 2015 09:44:18 -0500
+
+nova (1:2014.1.5-0ubuntu1) trusty; urgency=medium
+
+  * Resynchronize with stable/icehouse (08b5d48) (LP: #1467533):
+    - [74295ed] Use ebtables to isolate dhcp traffic
+    - [a83eb5f] VMware: fix AttributeError: TaskInfo instance has no attribute 'name'
+    - [8876294] libvirt: partial fix for live-migration with config drive
+    - [b77c188] Type conflict in trusted_filter.py using attestation_port default value
+    - [378a8d4] Use instance.uuid instead of instance
+    - [c12f21d] Make test_version_string_with_package_is_good work with pbr 0.11
+    - [1668178] Moves trusted filter unit tests into own file
+    - [4812617] Use hypervisor hostname for compute trust level
+    - [d8853ee] Recover from POWERING-* state on compute manager start-up
+    - [0784b0c] Avoid referring to juno-era exception type
+    - [f513a28] libvirt: Make sure volumes are well detected during block migration
+    - [68ec684] libvirt: avoid changing UUID when redefining nwfilters
+    - [cc86ef5] delete python bytecode before every test run
+    - [3501ec2] Drop use of oslo.utils in nova
+    - [392dc22] Eventlet green threads not released back to pool
+    - [1e03160] Sync strutils from oslo-incubator for mask_password fix
+    - [7292c02] Allow instances to attach to shared external nets
+    - [dbc348d] Fix libvirt watchdog support
+    - [08b5d48] HyperV Driver - Fix to implement hypervisor-uptime
+  * d/p/drop-oslo-utils-usage.patch: Dropped; Fixed upstream.
+  * d/p/recover-from-power-state-on-compute.patch: Dropped; Fixed upstream.
+  * d/p/fix-requirements.patch: Rebased.
+
+ -- Corey Bryant Mon, 22 Jun 2015 10:15:07 -0400
+
+nova (1:2014.1.4-0ubuntu2.1) trusty; urgency=medium
+
+  * Ensure that compute manager restarts during instance power
+    operations don't leave instances stuck in transitional task
+    states (LP: #1304333):
+    - d/p/recover-from-power-state-on-compute.patch
+      Cherry pick backport of upstream fix from OpenStack >= Juno.
+
+ -- Edward Hope-Morley Wed, 22 Apr 2015 09:51:28 +0100
+
+nova (1:2014.1.4-0ubuntu2) trusty; urgency=medium
+
+  [ Edward Hope-Morley ]
+  * Fixed race between nova-compute and neutron-ovs-cleanup (LP: #1420572)
+
+  [ Corey Bryant ]
+  * d/control: Set minimum python-six dependency to 1.5.2 (LP: #1403114).
+
+ -- Corey Bryant Mon, 30 Mar 2015 09:28:30 -0400
+
+nova (1:2014.1.4-0ubuntu1) trusty; urgency=medium
+
+  * Resynchronize with stable/icehouse (cac6472) (LP: #1432608):
+    - [0ff6742] Websocket Proxy should verify Origin header
+    - [c70e1fb] Fix kwargs['instance'] KeyError in @reverts_task_state decorator
+    - [07ec12c] Revert "Eventlet green threads not released back to pool"
+    - [e9cf07b] Compute: Catch binding failed exception while init host
+    - [e275961] Make tests use sha256 as openssl default digest algorithm
+    - [a657582] Eventlet green threads not released back to pool
+    - [4b46a86] Fix image metadata returned for volumes
+    - [58a6393] Check min_ram and min_disk when boot from volume
+    - [c5411d2] Extends use of ServiceProxy to more methods in HostAPI in cells
+    - [1e2abd6] Remove usage of self.__dict__ for message var replacement
+    - [54f9225] only emit deprecation warnings once
+    - [52103be] Fix disconnecting necessary iSCSI sessions issue
+    - [cca94d0] Fix connecting unnecessary iSCSI sessions issue
+    - [ac9f5c7] Fix wrong command for _rescan_multipath
+    - [d7c8e93] Fix unsafe SSL connection on TrustedFilter
+    - [9ecc468] Fix SecurityGroupExists error when booting instances
+    - [33be7d7] Update "num_instance" during delete instance
+    - [3de3f10] Fix nova evacuate issues for RBD
+    - [fe289fb] Fix nova-compute start issue after evacuate
+    - [f781656] Add _security_group_ensure_default() DBAPI method
+    - [8812672] Run build_and_run_instance in a separate greenthread
+    - [b6a080b] Fixes DOS issue in instance list ip filter
+    - [5ab0421] Make the block device mapping retries configurable
+    - [0695e14] Retry on closing of luks encrypted volume in case device is busy
+    - [dffa810] Add @_retry_on_deadlock to _instance_update()
+    - [f086ca3] Nova api service doesn't handle SIGHUP properly
+    - [7cdb643] Fix XML UnicodeEncode serialization error
+    - [98a6c1e] postgresql: use postgres db instead of template1
+    - [155664f] share neutron admin auth tokens
+    - [3e80433] VMware: validate that VM exists on backend prior to deletion
+    - [d71445c] VMWare: Fix VM leak when deletion of VM during resizing
+    - [56b62b7] Sync process utils from oslo
+    - [ddd62ff] VMware: prevent race condition with VNC port allocation
+    - [4174130] Fixes Hyper-V volume mapping issue on reboot
+    - [bfeae68] Fix CellStateManagerFile init to failure
+    - [5ec3cd3] Raise descriptive error for over volume quota
+    - [f9fad7a] Fixes missing ec2 api address disassociate error on failure
+    - [64ec1bf] Fix instance cross AZ check when attaching volumes
+    - [698c821] Ignore errors when deleting non-existing vifs
+    - [8141e7a] libvirt: Handle unsupported host capabilities
+    - [df9ead9] libvirt: Make `fakelibvirt.libvirtError` match
+    - [cac6472] Add _wrap_db_error() support to SessionTransaction.commit()
+  * d/p/drop-oslo-utils-usage.patch: Added to override new oslo.utils dep.
+  * d/p/disable-websockify-tests.patch: Added to disable websockify tests.
+  * d/p/block-device-mapping-config.patch: Dropped. Fixed upstream in [5ab0421].
+  * d/p/libvirt-Handle-unsupported-host-capabilities.patch: Dropped. Fixed
+    upstream in [8141e7a] and [df9ead9].
+  * d/p/cells-json-store.patch: Dropped. Fixed upstream in [bfeae68].
+  * d/p/fix-requirements.patch: Rebased.
+  * d/p/update-run-tests.patch: Run tests with default concurrency.
+
+ -- Corey Bryant Fri, 20 Mar 2015 07:27:23 +0000
 
 nova (1:2014.1.3-0ubuntu2) trusty; urgency=medium
diff -Nru nova-2014.1.3/debian/control nova-2014.1.5/debian/control
--- nova-2014.1.3/debian/control 2014-11-17 19:55:19.000000000 +0000
+++ nova-2014.1.5/debian/control 2016-09-09 09:41:48.000000000 +0000
@@ -47,7 +47,7 @@
  python-pycadf (>= 0.1.9),
  python-routes,
  python-setuptools,
- python-six (>= 1.4.1),
+ python-six (>= 1.5.2),
  python-sphinx (>> 1.0),
  python-sqlalchemy-ext ( >= 0.7.8-1~) | python-sqlalchemy,
  python-stevedore (>= 0.12),
@@ -97,7 +97,7 @@
  python-pycadf (>= 0.1.9),
  python-routes,
  python-simplejson,
- python-six,
+ python-six (>= 1.5.2),
  python-sqlalchemy-ext ( >= 0.7.8-1~) | python-sqlalchemy (<< 0.6.3-2),
  python-stevedore (>= 0.12),
  python-suds,
diff -Nru nova-2014.1.3/debian/nova-compute.upstart nova-2014.1.5/debian/nova-compute.upstart
--- nova-2014.1.3/debian/nova-compute.upstart 2014-11-17 19:55:19.000000000 +0000
+++ nova-2014.1.5/debian/nova-compute.upstart 2016-09-09 09:41:48.000000000 +0000
@@ -7,6 +7,8 @@
 
 chdir /var/run
 
+env MAX_STATUS_CHECK_RETRIES=20
+
 pre-start script
     mkdir -p /var/run/nova
     chown nova:root /var/run/nova/
@@ -20,6 +22,36 @@
     if status libvirt-bin; then
         start wait-for-state WAIT_FOR=libvirt-bin WAIT_STATE=running WAITER=nova-compute
     fi
+
+    # If installed, wait for neutron-ovs-cleanup to complete prior to starting
+    # nova-compute.
+    if status neutron-ovs-cleanup; then
+        # See LP #1471022 for explanation of why we do like this
+        retries=$MAX_STATUS_CHECK_RETRIES
+        delay=1
+        while true; do
+            # Already running?
+            s=`status neutron-ovs-cleanup`
+            echo $s
+            `echo $s| grep -qE "\sstart/running"` && break
+            if retries=`expr $retries - 1`; then
+                # Give it a push
+                echo "Attempting to start neutron-ovs-cleanup"
+                start neutron-ovs-cleanup || :
+                # Wait a bit to avoid hammering ovs-cleanup (which itself may be waiting
+                # on dependencies)
+                echo "Recheck neutron-ovs-cleanup status in ${delay}s"
+                sleep $delay
+                if _=`expr $retries % 2`; then
+                    delay=`expr $delay + 2`
+                fi
+            else
+                echo "Max retries ($MAX_STATUS_CHECK_RETRIES) reached - no longer waiting for neutron-ovs-cleanup to start"
+                break
+            fi
+        done
+    fi
 end script
 
 exec start-stop-daemon --start --chuid nova --exec /usr/bin/nova-compute -- --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf
+
diff -Nru nova-2014.1.3/debian/patches/add-support-for-syslog-connect-retries.patch nova-2014.1.5/debian/patches/add-support-for-syslog-connect-retries.patch
--- nova-2014.1.3/debian/patches/add-support-for-syslog-connect-retries.patch 1970-01-01 00:00:00.000000000 +0000
+++ nova-2014.1.5/debian/patches/add-support-for-syslog-connect-retries.patch 2016-09-09 09:41:48.000000000 +0000
@@ -0,0 +1,115 @@
+From fa2a6c6b6aee59b1a98fa7b93f55405457449bf0 Mon Sep 17 00:00:00 2001
+From: Edward Hope-Morley
+Date: Thu, 18 Jun 2015 13:38:58 +0100
+Subject: [PATCH] Add support for syslog connect retries
+
+If we have requested logging to syslog and syslog is
+not yet ready we should allow for retry attempts. This
+patch provides a new option syslog-connect-retries to
+allow for retries with a 5 second interval between
+each retry.
+
+Closes-Bug: 1459046
+Co-authored-by: Liang Chen
+Conflicts:
+	nova/openstack/common/log.py
+
+Change-Id: I88269a75c56c68443230620217a469aebee523f8
+---
+ nova/openstack/common/log.py | 58 +++++++++++++++++++++++++++++++++++---------
+ 1 file changed, 46 insertions(+), 12 deletions(-)
+
+diff --git a/nova/openstack/common/log.py b/nova/openstack/common/log.py
+index cdc439a..71700b7 100644
+--- a/nova/openstack/common/log.py
++++ b/nova/openstack/common/log.py
+@@ -34,7 +34,9 @@ import logging.config
+ import logging.handlers
+ import os
+ import re
++import socket
+ import sys
++import time
+ import traceback
+ 
+ from oslo.config import cfg
+@@ -118,6 +120,10 @@ logging_cli_opts = [
+                help='Use syslog for logging. '
+                     'Existing syslog format is DEPRECATED during I, '
+                     'and then will be changed in J to honor RFC5424'),
++    cfg.IntOpt('syslog-connect-retries',
++               default=3,
++               help='Number of attempts with a five second interval to retry '
++                    'connecting to syslog. 
(if use-syslog=True)'), + cfg.BoolOpt('use-syslog-rfc-format', + # TODO(bogdando) remove or use True after existing + # syslog format deprecation in J +@@ -490,18 +496,6 @@ def _setup_logging_from_conf(): + for handler in log_root.handlers: + log_root.removeHandler(handler) + +- if CONF.use_syslog: +- facility = _find_facility_from_conf() +- # TODO(bogdando) use the format provided by RFCSysLogHandler +- # after existing syslog format deprecation in J +- if CONF.use_syslog_rfc_format: +- syslog = RFCSysLogHandler(address='/dev/log', +- facility=facility) +- else: +- syslog = logging.handlers.SysLogHandler(address='/dev/log', +- facility=facility) +- log_root.addHandler(syslog) +- + logpath = _get_log_file_path() + if logpath: + filelog = logging.handlers.WatchedFileHandler(logpath) +@@ -548,6 +542,46 @@ def _setup_logging_from_conf(): + logger = logging.getLogger(mod) + logger.setLevel(level) + ++ if CONF.use_syslog: ++ retries = CONF.syslog_connect_retries ++ syslog_ready = False ++ while True: ++ try: ++ facility = _find_facility_from_conf() ++ # TODO(bogdando) use the format provided by RFCSysLogHandler ++ # after existing syslog format deprecation in J ++ if CONF.use_syslog_rfc_format: ++ syslog = RFCSysLogHandler(address='/dev/log', ++ facility=facility) ++ else: ++ syslog = logging.handlers.SysLogHandler(address='/dev/log', ++ facility=facility) ++ log_root.addHandler(syslog) ++ syslog_ready = True ++ except socket.error: ++ if CONF.syslog_connect_retries <= 0: ++ log_root.error(_('Connection to syslog failed and no ' ++ 'retry attempts requested')) ++ break ++ ++ if retries: ++ log_root.info(_('Connection to syslog failed - ' ++ 'retrying in 5 seconds')) ++ retries -= 1 ++ else: ++ log_root.error(_('Connection to syslog failed and ' ++ 'max retry attempts reached')) ++ break ++ ++ time.sleep(5) ++ else: ++ break ++ ++ if not syslog_ready: ++ log_root.error(_('Unable to add syslog handler. 
Verify that ' ++ 'syslog is running.')) ++ ++ + _loggers = {} + + +-- +1.9.1 + diff -Nru nova-2014.1.3/debian/patches/block-device-mapping-config.patch nova-2014.1.5/debian/patches/block-device-mapping-config.patch --- nova-2014.1.3/debian/patches/block-device-mapping-config.patch 2014-11-17 19:55:19.000000000 +0000 +++ nova-2014.1.5/debian/patches/block-device-mapping-config.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,122 +0,0 @@ -Description: Make the block device mapping retries configurable -Author: Akash Gangil -Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1376927 -Forwarded: https://review.openstack.org/#/c/129276/ -diff --git a/nova/compute/manager.py b/nova/compute/manager.py -index 260a2b7..45e143c 100644 ---- a/nova/compute/manager.py -+++ b/nova/compute/manager.py -@@ -120,6 +120,10 @@ compute_opts = [ - cfg.IntOpt('network_allocate_retries', - default=0, - help="Number of times to retry network allocation on failures"), -+ cfg.IntOpt('block_device_allocate_retries', -+ default=180, -+ help='Number of times to retry block device' -+ ' allocation on failures') - ] - - interval_opts = [ -@@ -153,7 +157,11 @@ interval_opts = [ - cfg.IntOpt('instance_delete_interval', - default=300, - help=('Interval in seconds for retrying failed instance file ' -- 'deletes')) -+ 'deletes')), -+ cfg.IntOpt('block_device_allocate_retries_interval', -+ default=1, -+ help='Waiting time interval (seconds) between block' -+ ' device allocation retries on failures') - ] - - timeout_opts = [ -@@ -1135,24 +1143,21 @@ class ComputeManager(manager.Manager): - instance) - return network_info - -- def _await_block_device_map_created(self, context, vol_id, max_tries=180, -- wait_between=1): -+ def _await_block_device_map_created(self, context, vol_id): - # TODO(yamahata): creating volume simultaneously - # reduces creation time? - # TODO(yamahata): eliminate dumb polling -- # TODO(harlowja): make the max_tries configurable or dynamic? 
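The syslog patch above boils down to a retry loop around SysLogHandler creation. A condensed, standalone sketch of the same shape, with illustrative names and defaults (the real code also honours use-syslog-rfc-format and facility selection):

import logging
import logging.handlers
import socket
import time

def add_syslog_handler(log_root, retries=3, interval=5):
    # Keep trying to reach the syslog socket; give up after `retries`
    # extra attempts, sleeping `interval` seconds between them.
    while True:
        try:
            log_root.addHandler(
                logging.handlers.SysLogHandler(address='/dev/log'))
            return True
        except socket.error:
            if retries <= 0:
                log_root.error('Unable to add syslog handler')
                return False
            retries -= 1
            time.sleep(interval)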
- attempts = 0 - start = time.time() -- while attempts < max_tries: -+ while attempts < CONF.block_device_allocate_retries: - volume = self.volume_api.get(context, vol_id) - volume_status = volume['status'] - if volume_status not in ['creating', 'downloading']: - if volume_status != 'available': - LOG.warn(_("Volume id: %s finished being created but was" - " not set as 'available'"), vol_id) -- # NOTE(harlowja): return how many attempts were tried - return attempts + 1 -- greenthread.sleep(wait_between) -+ greenthread.sleep(CONF.block_device_allocate_retries_interval) - attempts += 1 - # NOTE(harlowja): Should only happen if we ran out of attempts - raise exception.VolumeNotCreated(volume_id=vol_id, -diff --git a/nova/tests/compute/test_compute.py b/nova/tests/compute/test_compute.py -index f1e334d..81765b4 100644 ---- a/nova/tests/compute/test_compute.py -+++ b/nova/tests/compute/test_compute.py -@@ -34,6 +34,8 @@ from oslo import messaging - import six - from testtools import matchers as testtools_matchers - -+from eventlet import greenthread -+ - import nova - from nova import availability_zones - from nova import block_device -@@ -378,6 +380,8 @@ class ComputeVolumeTestCase(BaseTestCase): - lambda *a, **kw: None) - self.stubs.Set(self.compute.volume_api, 'check_attach', - lambda *a, **kw: None) -+ self.stubs.Set(greenthread, 'sleep', -+ lambda *a, **kw: None) - - def store_cinfo(context, *args, **kwargs): - self.cinfo = jsonutils.loads(args[-1].get('connection_info')) -@@ -438,7 +442,9 @@ class ComputeVolumeTestCase(BaseTestCase): - mock_get_by_id.assert_called_once_with(self.context, 'fake') - self.assertTrue(mock_attach.called) - -- def test_await_block_device_created_to_slow(self): -+ def test_await_block_device_created_too_slow(self): -+ self.flags(block_device_allocate_retries=2) -+ self.flags(block_device_allocate_retries_interval=0.1) - - def never_get(context, vol_id): - return { -@@ -449,13 +455,15 @@ class ComputeVolumeTestCase(BaseTestCase): - self.stubs.Set(self.compute.volume_api, 'get', never_get) - self.assertRaises(exception.VolumeNotCreated, - self.compute._await_block_device_map_created, -- self.context, '1', max_tries=2, wait_between=0.1) -+ self.context, '1') - - def test_await_block_device_created_slow(self): - c = self.compute -+ self.flags(block_device_allocate_retries=4) -+ self.flags(block_device_allocate_retries_interval=0.1) - - def slow_get(context, vol_id): -- while self.fetched_attempts < 2: -+ if self.fetched_attempts < 2: - self.fetched_attempts += 1 - return { - 'status': 'creating', -@@ -467,9 +475,7 @@ class ComputeVolumeTestCase(BaseTestCase): - } - - self.stubs.Set(c.volume_api, 'get', slow_get) -- attempts = c._await_block_device_map_created(self.context, '1', -- max_tries=4, -- wait_between=0.1) -+ attempts = c._await_block_device_map_created(self.context, '1') - self.assertEqual(attempts, 3) - - def test_boot_volume_serial(self): diff -Nru nova-2014.1.3/debian/patches/cells-json-store.patch nova-2014.1.5/debian/patches/cells-json-store.patch --- nova-2014.1.3/debian/patches/cells-json-store.patch 2014-11-17 19:55:19.000000000 +0000 +++ nova-2014.1.5/debian/patches/cells-json-store.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,76 +0,0 @@ -Description: Fix nova cells failing with a json topology file bug -Author: Liam Young -Bug: https://bugs.launchpad.net/nova/+bug/1314677 -Forwarded: https://review.openstack.org/#/c/124811/ -diff --git a/nova/cells/state.py b/nova/cells/state.py -index b9112bd..1e12450 100644 ---- a/nova/cells/state.py -+++ 
b/nova/cells/state.py
-@@ -152,10 +152,7 @@ class CellStateManager(base.Base):
-         cells_config = CONF.cells.cells_config
- 
-         if cells_config:
--            config_path = CONF.find_file(cells_config)
--            if not config_path:
--                raise cfg.ConfigFilesNotFoundError(config_files=[cells_config])
--            return CellStateManagerFile(cell_state_cls, config_path)
-+            return CellStateManagerFile(cell_state_cls)
- 
-         return CellStateManagerDB(cell_state_cls)
- 
-@@ -450,8 +447,11 @@ class CellStateManagerDB(CellStateManager):
- 
- 
- class CellStateManagerFile(CellStateManager):
--    def __init__(self, cell_state_cls, cells_config_path):
--        self.cells_config_path = cells_config_path
-+    def __init__(self, cell_state_cls=None):
-+        cells_config = CONF.cells.cells_config
-+        self.cells_config_path = CONF.find_file(cells_config)
-+        if not self.cells_config_path:
-+            raise cfg.ConfigFilesNotFoundError(config_files=[cells_config])
-         super(CellStateManagerFile, self).__init__(cell_state_cls)
- 
-     def _cell_data_sync(self, force=False):
-diff --git a/nova/tests/cells/test_cells_state_manager.py b/nova/tests/cells/test_cells_state_manager.py
-index 1c29927..4841e14 100644
---- a/nova/tests/cells/test_cells_state_manager.py
-+++ b/nova/tests/cells/test_cells_state_manager.py
-@@ -16,12 +16,16 @@
-  Tests For CellStateManager
-  """
- 
-+import mock
-+import six
-+
- from oslo.config import cfg
- 
- from nova.cells import state
- from nova import db
- from nova.db.sqlalchemy import models
- from nova import exception
-+from nova.openstack.common import fileutils
- from nova import test
- 
- 
-@@ -78,6 +82,19 @@ class TestCellsStateManager(test.TestCase):
-                          state.CellStateManager)
-         self.assertEqual(['no_such_file_exists.conf'], e.config_files)
- 
-+    @mock.patch.object(cfg.ConfigOpts, 'find_file')
-+    @mock.patch.object(fileutils, 'read_cached_file')
-+    def test_filemanager_returned(self, mock_read_cached_file, mock_find_file):
-+        mock_find_file.return_value = "/etc/nova/cells.json"
-+        mock_read_cached_file.return_value = (False, six.StringIO({}))
-+        self.flags(cells_config='cells.json', group='cells')
-+        self.assertIsInstance(state.CellStateManager(),
-+                              state.CellStateManagerFile)
-+
-+    def test_dbmanager_returned(self):
-+        self.assertIsInstance(state.CellStateManager(),
-+                              state.CellStateManagerDB)
-+
-     def test_capacity_no_reserve(self):
-         # utilize entire cell
-         cap = self._capacity(0.0)
-
diff -Nru nova-2014.1.3/debian/patches/clean-shutdown.patch nova-2014.1.5/debian/patches/clean-shutdown.patch
--- nova-2014.1.3/debian/patches/clean-shutdown.patch 1970-01-01 00:00:00.000000000 +0000
+++ nova-2014.1.5/debian/patches/clean-shutdown.patch 2016-09-09 09:41:48.000000000 +0000
@@ -0,0 +1,502 @@
+commit 879bbcf902c7a8ba0b3c58660b461f5b4918834e
+Author: Phil Day
+Date: Fri Jan 24 15:43:20 2014 +0000
+
+    Power off commands should give guests a chance to shutdown
+
+    Currently in libvirt operations which power off an instance such as stop,
+    shelve, rescue, and resize simply destroy the underlying VM. Some
+    GuestOS's do not react well to this type of power failure, and so it would
+    be better if these operations followed the same approach as soft_reboot
+    and give the guest a chance to shutdown gracefully.
+
+    The shutdown behavior is defined by two values:
+
+    - shutdown_timeout defines the overall period a Guest is allowed to
+    complete its shutdown. The default value is set via nova.conf and can be
+    overridden on a per image basis by image metadata allowing different types
+    of guest OS to specify how long they need to shutdown cleanly.
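A minimal sketch of the per-image override just described: read the key from system metadata, coerce it to the wanted type, and fall back to the nova.conf default on missing or malformed data. The dict here stands in for the real instance object; the key name mirrors the patch:

def get_typed(metadata, key, to_type, default):
    # Coerce the stored string to the wanted type; fall back to the
    # configured default when the key is missing or malformed.
    try:
        return to_type(metadata[key])
    except (KeyError, TypeError, ValueError):
        return default

system_metadata = {'image_os_shutdown_timeout': '30'}
timeout = get_typed(system_metadata, 'image_os_shutdown_timeout', int, 60)
assert timeout == 30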
+ + - shutdown_retry_interval defines how frequently within that period + the Guest will be signaled to shutdown. This is a protection against + guests that may not be ready to process the shutdown signal when it + is first issued. (e.g. still booting). This is defined as a constant. + + This is one of a set of changes that will eventually expose the choice + of whether to give the GuestOS a chance to shutdown via the API. + + This change implements the libvirt changes to power_off() and adds + a clean shutdown to compute.manager.stop(). + + Subsequent patches will: + - Add clean shutdown to Shelve + - Add clean shutdown to Rescue + - Convert soft_reboot to use the same approach + - Expose clean shutdown via rpcapi + - Expose clean shutdown via API + + Partially-Implements: blueprint user-defined-shutdown + Closes-Bug: #1196924 + DocImpact + + Conflicts: + nova/compute/manager.py + nova/tests/virt/test_ironic_api_contracts.py + + Change-Id: I432b0b0c09db82797f28deb5617f02ee45a4278c + (cherry picked from commit c07ed15415c0ec3c5862f437f440632eff1e94df) + +diff --git a/nova/compute/manager.py b/nova/compute/manager.py +index 990b92f..e27103f 100644 +--- a/nova/compute/manager.py ++++ b/nova/compute/manager.py +@@ -183,6 +183,10 @@ timeout_opts = [ + default=0, + help="Automatically confirm resizes after N seconds. " + "Set to 0 to disable."), ++ cfg.IntOpt("shutdown_timeout", ++ default=60, ++ help="Total amount of time to wait in seconds for an instance " ++ "to perform a clean shutdown."), + ] + + running_deleted_opts = [ +@@ -575,6 +579,11 @@ class ComputeManager(manager.Manager): + + target = messaging.Target(version='3.23') + ++ # How long to wait in seconds before re-issuing a shutdown ++ # signal to a instance during power off. The overall ++ # time to wait is set by CONF.shutdown_timeout. 
++ SHUTDOWN_RETRY_INTERVAL = 10 ++ + def __init__(self, compute_driver=None, *args, **kwargs): + """Load configuration options and connect to the hypervisor.""" + self.virtapi = ComputeVirtAPI(self) +@@ -2137,6 +2146,25 @@ class ComputeManager(manager.Manager): + instance=instance) + self._set_instance_error_state(context, instance['uuid']) + ++ def _get_power_off_values(self, context, instance, clean_shutdown): ++ """Get the timing configuration for powering down this instance.""" ++ if clean_shutdown: ++ timeout = compute_utils.get_value_from_system_metadata(instance, ++ key='image_os_shutdown_timeout', type=int, ++ default=CONF.shutdown_timeout) ++ retry_interval = self.SHUTDOWN_RETRY_INTERVAL ++ else: ++ timeout = 0 ++ retry_interval = 0 ++ ++ return timeout, retry_interval ++ ++ def _power_off_instance(self, context, instance, clean_shutdown=True): ++ """Power off an instance on this host.""" ++ timeout, retry_interval = self._get_power_off_values(context, ++ instance, clean_shutdown) ++ self.driver.power_off(instance, timeout, retry_interval) ++ + def _shutdown_instance(self, context, instance, + bdms, requested_networks=None, notify=True): + """Shutdown an instance on this host.""" +@@ -2308,16 +2336,23 @@ class ComputeManager(manager.Manager): + @reverts_task_state + @wrap_instance_event + @wrap_instance_fault +- def stop_instance(self, context, instance): ++ def stop_instance(self, context, instance, clean_shutdown=True): + """Stopping an instance on this host.""" +- self._notify_about_instance_usage(context, instance, "power_off.start") +- self.driver.power_off(instance) +- current_power_state = self._get_power_state(context, instance) +- instance.power_state = current_power_state +- instance.vm_state = vm_states.STOPPED +- instance.task_state = None +- instance.save(expected_task_state=task_states.POWERING_OFF) +- self._notify_about_instance_usage(context, instance, "power_off.end") ++ ++ @utils.synchronized(instance.uuid) ++ def do_stop_instance(): ++ self._notify_about_instance_usage(context, instance, ++ "power_off.start") ++ self._power_off_instance(context, instance, clean_shutdown) ++ current_power_state = self._get_power_state(context, instance) ++ instance.power_state = current_power_state ++ instance.vm_state = vm_states.STOPPED ++ instance.task_state = None ++ instance.save(expected_task_state=task_states.POWERING_OFF) ++ self._notify_about_instance_usage(context, instance, ++ "power_off.end") ++ ++ do_stop_instance() + + def _power_on(self, context, instance): + network_info = self._get_instance_nw_info(context, instance) +diff --git a/nova/compute/utils.py b/nova/compute/utils.py +index 119510c..ced00eb 100644 +--- a/nova/compute/utils.py ++++ b/nova/compute/utils.py +@@ -267,6 +267,25 @@ def get_image_metadata(context, image_service, image_id, instance): + return utils.get_image_from_system_metadata(system_meta) + + ++def get_value_from_system_metadata(instance, key, type, default): ++ """Get a value of a specified type from image metadata. ++ ++ @param instance: The instance object ++ @param key: The name of the property to get ++ @param type: The python type the value is be returned as ++ @param default: The value to return if key is not set or not the right type ++ """ ++ value = instance.system_metadata.get(key, default) ++ try: ++ return type(value) ++ except ValueError: ++ LOG.warning(_("Metadata value %(value)s for %(key)s is not of " ++ "type %(type)s. 
Using default value %(default)s."), ++ {'value': value, 'key': key, 'type': type, ++ 'default': default}, instance=instance) ++ return default ++ ++ + def notify_usage_exists(notifier, context, instance_ref, current_period=False, + ignore_missing_network_data=True, + system_metadata=None, extra_usage_info=None): +diff --git a/nova/tests/api/ec2/test_cloud.py b/nova/tests/api/ec2/test_cloud.py +index 00ea03e..9d037cf 100644 +--- a/nova/tests/api/ec2/test_cloud.py ++++ b/nova/tests/api/ec2/test_cloud.py +@@ -2449,7 +2449,8 @@ class CloudTestCase(test.TestCase): + + self.stubs.Set(fake_virt.FakeDriver, 'power_on', fake_power_on) + +- def fake_power_off(self, instance): ++ def fake_power_off(self, instance, ++ shutdown_timeout, shutdown_attempts): + virt_driver['powered_off'] = True + + self.stubs.Set(fake_virt.FakeDriver, 'power_off', fake_power_off) +diff --git a/nova/tests/compute/test_compute.py b/nova/tests/compute/test_compute.py +index b126a52..cb680f3 100644 +--- a/nova/tests/compute/test_compute.py ++++ b/nova/tests/compute/test_compute.py +@@ -2064,7 +2064,8 @@ class ComputeTestCase(BaseTestCase): + + called = {'power_off': False} + +- def fake_driver_power_off(self, instance): ++ def fake_driver_power_off(self, instance, ++ shutdown_timeout, shutdown_attempts): + called['power_off'] = True + + self.stubs.Set(nova.virt.fake.FakeDriver, 'power_off', +diff --git a/nova/tests/compute/test_compute_utils.py b/nova/tests/compute/test_compute_utils.py +index 2304e95..7415f46 100644 +--- a/nova/tests/compute/test_compute_utils.py ++++ b/nova/tests/compute/test_compute_utils.py +@@ -711,6 +711,28 @@ class ComputeGetImageMetadataTestCase(test.TestCase): + self.assertThat(expected, matchers.DictMatches(image_meta)) + + ++class ComputeUtilsGetValFromSysMetadata(test.TestCase): ++ ++ def test_get_value_from_system_metadata(self): ++ instance = fake_instance.fake_instance_obj('fake-context') ++ system_meta = {'int_val': 1, ++ 'int_string': '2', ++ 'not_int': 'Nope'} ++ instance.system_metadata = system_meta ++ ++ result = compute_utils.get_value_from_system_metadata( ++ instance, 'int_val', int, 0) ++ self.assertEqual(1, result) ++ ++ result = compute_utils.get_value_from_system_metadata( ++ instance, 'int_string', int, 0) ++ self.assertEqual(2, result) ++ ++ result = compute_utils.get_value_from_system_metadata( ++ instance, 'not_int', int, 0) ++ self.assertEqual(0, result) ++ ++ + class ComputeUtilsGetNWInfo(test.TestCase): + def test_instance_object_none_info_cache(self): + inst = fake_instance.fake_instance_obj('fake-context', +diff --git a/nova/tests/virt/libvirt/test_libvirt.py b/nova/tests/virt/libvirt/test_libvirt.py +index 2478e8e..ed1c8e8 100644 +--- a/nova/tests/virt/libvirt/test_libvirt.py ++++ b/nova/tests/virt/libvirt/test_libvirt.py +@@ -5608,6 +5608,82 @@ class LibvirtConnTestCase(test.TestCase): + conn._hard_reboot(self.context, instance, network_info, + block_device_info) + ++ def _test_clean_shutdown(self, seconds_to_shutdown, ++ timeout, retry_interval, ++ shutdown_attempts, succeeds): ++ info_tuple = ('fake', 'fake', 'fake', 'also_fake') ++ shutdown_count = [] ++ ++ def count_shutdowns(): ++ shutdown_count.append("shutdown") ++ ++ # Mock domain ++ mock_domain = self.mox.CreateMock(libvirt.virDomain) ++ ++ mock_domain.info().AndReturn( ++ (libvirt_driver.VIR_DOMAIN_RUNNING,) + info_tuple) ++ mock_domain.shutdown().WithSideEffects(count_shutdowns) ++ ++ retry_countdown = retry_interval ++ for x in xrange(min(seconds_to_shutdown, timeout)): ++ mock_domain.info().AndReturn( ++ 
(libvirt_driver.VIR_DOMAIN_RUNNING,) + info_tuple)
++            if retry_countdown == 0:
++                mock_domain.shutdown().WithSideEffects(count_shutdowns)
++                retry_countdown = retry_interval
++            else:
++                retry_countdown -= 1
++
++        if seconds_to_shutdown < timeout:
++            mock_domain.info().AndReturn(
++                (libvirt_driver.VIR_DOMAIN_SHUTDOWN,) + info_tuple)
++
++        self.mox.ReplayAll()
++
++        def fake_lookup_by_name(instance_name):
++            return mock_domain
++
++        def fake_create_domain(**kwargs):
++            self.reboot_create_called = True
++
++        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
++        instance = {"name": "instancename", "id": "instanceid",
++                    "uuid": "875a8070-d0b9-4949-8b31-104d125c9a64"}
++        self.stubs.Set(conn, '_lookup_by_name', fake_lookup_by_name)
++        self.stubs.Set(conn, '_create_domain', fake_create_domain)
++        result = conn._clean_shutdown(instance, timeout, retry_interval)
++
++        self.assertEqual(succeeds, result)
++        self.assertEqual(shutdown_attempts, len(shutdown_count))
++
++    def test_clean_shutdown_first_time(self):
++        self._test_clean_shutdown(seconds_to_shutdown=2,
++                                  timeout=5,
++                                  retry_interval=3,
++                                  shutdown_attempts=1,
++                                  succeeds=True)
++
++    def test_clean_shutdown_with_retry(self):
++        self._test_clean_shutdown(seconds_to_shutdown=4,
++                                  timeout=5,
++                                  retry_interval=3,
++                                  shutdown_attempts=2,
++                                  succeeds=True)
++
++    def test_clean_shutdown_failure(self):
++        self._test_clean_shutdown(seconds_to_shutdown=6,
++                                  timeout=5,
++                                  retry_interval=3,
++                                  shutdown_attempts=2,
++                                  succeeds=False)
++
++    def test_clean_shutdown_no_wait(self):
++        self._test_clean_shutdown(seconds_to_shutdown=6,
++                                  timeout=0,
++                                  retry_interval=3,
++                                  shutdown_attempts=1,
++                                  succeeds=False)
++
+     def test_resume(self):
+         dummyxml = ("<domain type='kvm'><name>instance-0000000a</name>"
+                     "<devices>"
+diff --git a/nova/virt/baremetal/driver.py b/nova/virt/baremetal/driver.py
+index c1de148..b24e50a 100644
+--- a/nova/virt/baremetal/driver.py
++++ b/nova/virt/baremetal/driver.py
+@@ -399,8 +399,9 @@ class BareMetalDriver(driver.ComputeDriver):
+         """Cleanup after instance being destroyed."""
+         pass
+ 
+-    def power_off(self, instance, node=None):
++    def power_off(self, instance, timeout=0, retry_interval=0, node=None):
+         """Power off the specified instance."""
++        # TODO(PhilDay): Add support for timeout (clean shutdown)
+         if not node:
+             node = _get_baremetal_node_by_instance_uuid(instance['uuid'])
+         pm = get_power_manager(node=node, instance=instance)
+diff --git a/nova/virt/driver.py b/nova/virt/driver.py
+index 2fc95cc..2db2964 100644
+--- a/nova/virt/driver.py
++++ b/nova/virt/driver.py
+@@ -579,10 +579,13 @@ class ComputeDriver(object):
+         # TODO(Vek): Need to pass context in for access to auth_token
+         raise NotImplementedError()
+ 
+-    def power_off(self, instance):
++    def power_off(self, instance, timeout=0, retry_interval=0):
+         """Power off the specified instance. 
+ + :param instance: nova.objects.instance.Instance ++ :param timeout: time to wait for GuestOS to shutdown ++ :param retry_interval: How often to signal guest while ++ waiting for it to shutdown + """ + raise NotImplementedError() + +diff --git a/nova/virt/fake.py b/nova/virt/fake.py +index ea175cb..19d81a8 100644 +--- a/nova/virt/fake.py ++++ b/nova/virt/fake.py +@@ -179,7 +179,7 @@ class FakeDriver(driver.ComputeDriver): + block_device_info=None): + pass + +- def power_off(self, instance): ++ def power_off(self, instance, shutdown_timeout=0, shutdown_attempts=0): + pass + + def power_on(self, context, instance, network_info, block_device_info): +diff --git a/nova/virt/hyperv/driver.py b/nova/virt/hyperv/driver.py +index 566a9a2..e975cf7 100644 +--- a/nova/virt/hyperv/driver.py ++++ b/nova/virt/hyperv/driver.py +@@ -111,7 +111,8 @@ class HyperVDriver(driver.ComputeDriver): + def resume(self, context, instance, network_info, block_device_info=None): + self._vmops.resume(instance) + +- def power_off(self, instance): ++ def power_off(self, instance, timeout=0, retry_interval=0): ++ # TODO(PhilDay): Add support for timeout (clean shutdown) + self._vmops.power_off(instance) + + def power_on(self, context, instance, network_info, +diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py +index 43f4762..7cddad3 100644 +--- a/nova/virt/libvirt/driver.py ++++ b/nova/virt/libvirt/driver.py +@@ -45,6 +45,7 @@ import glob + import mmap + import os + import shutil ++import six + import socket + import sys + import tempfile +@@ -2157,8 +2158,85 @@ class LibvirtDriver(driver.ComputeDriver): + dom = self._lookup_by_name(instance['name']) + dom.resume() + +- def power_off(self, instance): ++ def _clean_shutdown(self, instance, timeout, retry_interval): ++ """Attempt to shutdown the instance gracefully. 
++ ++ :param instance: The instance to be shutdown ++ :param timeout: How long to wait in seconds for the instance to ++ shutdown ++ :param retry_interval: How often in seconds to signal the instance ++ to shutdown while waiting ++ ++ :returns: True if the shutdown succeeded ++ """ ++ ++ # List of states that represent a shutdown instance ++ SHUTDOWN_STATES = [power_state.SHUTDOWN, ++ power_state.CRASHED] ++ ++ try: ++ dom = self._lookup_by_name(instance["name"]) ++ except exception.InstanceNotFound: ++ # If the instance has gone then we don't need to ++ # wait for it to shutdown ++ return True ++ ++ (state, _max_mem, _mem, _cpus, _t) = dom.info() ++ state = LIBVIRT_POWER_STATE[state] ++ if state in SHUTDOWN_STATES: ++ LOG.info(_("Instance already shutdown."), ++ instance=instance) ++ return True ++ ++ LOG.debug("Shutting down instance from state %s", state, ++ instance=instance) ++ dom.shutdown() ++ retry_countdown = retry_interval ++ ++ for sec in six.moves.range(timeout): ++ ++ dom = self._lookup_by_name(instance["name"]) ++ (state, _max_mem, _mem, _cpus, _t) = dom.info() ++ state = LIBVIRT_POWER_STATE[state] ++ ++ if state in SHUTDOWN_STATES: ++ LOG.info(_("Instance shutdown successfully after %d " ++ "seconds."), sec, instance=instance) ++ return True ++ ++ # Note(PhilD): We can't assume that the Guest was able to process ++ # any previous shutdown signal (for example it may ++ # have still been startingup, so within the overall ++ # timeout we re-trigger the shutdown every ++ # retry_interval ++ if retry_countdown == 0: ++ retry_countdown = retry_interval ++ # Instance could shutdown at any time, in which case we ++ # will get an exception when we call shutdown ++ try: ++ LOG.debug("Instance in state %s after %d seconds - " ++ "resending shutdown", state, sec, ++ instance=instance) ++ dom.shutdown() ++ except libvirt.libvirtError: ++ # Assume this is because its now shutdown, so loop ++ # one more time to clean up. 
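# --- illustrative aside, not part of the patch above ---
# The _clean_shutdown() loop just shown reduces to this shape: signal the
# guest, then poll once per second for up to `timeout` seconds, re-sending
# the signal every `retry_interval` seconds in case it was missed. The
# is_running/send_shutdown callables below are stand-ins for the patch's
# dom.info()/dom.shutdown() calls.
import time

def wait_for_shutdown(is_running, send_shutdown, timeout, retry_interval):
    send_shutdown()
    countdown = retry_interval
    for _sec in range(timeout):
        if not is_running():
            return True              # guest reached a shutdown state
        if countdown == 0:
            countdown = retry_interval
            send_shutdown()          # re-signal, as the patch does
        else:
            countdown -= 1
        time.sleep(1)
    return False                     # caller then falls back to destroy()
# --- end aside ---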
++ LOG.debug("Ignoring libvirt exception from shutdown " ++ "request.", instance=instance) ++ continue ++ else: ++ retry_countdown -= 1 ++ ++ time.sleep(1) ++ ++ LOG.info(_("Instance failed to shutdown in %d seconds."), ++ timeout, instance=instance) ++ return False ++ ++ def power_off(self, instance, timeout=0, retry_interval=0): + """Power off the specified instance.""" ++ if timeout: ++ self._clean_shutdown(instance, timeout, retry_interval) + self._destroy(instance) + + def power_on(self, context, instance, network_info, +diff --git a/nova/virt/vmwareapi/driver.py b/nova/virt/vmwareapi/driver.py +index e514bbb..aedc5c3 100644 +--- a/nova/virt/vmwareapi/driver.py ++++ b/nova/virt/vmwareapi/driver.py +@@ -704,8 +704,9 @@ class VMwareVCDriver(VMwareESXDriver): + _vmops = self._get_vmops_for_compute_node(instance['node']) + _vmops.unrescue(instance) + +- def power_off(self, instance): ++ def power_off(self, instance, timeout=0, retry_interval=0): + """Power off the specified instance.""" ++ # TODO(PhilDay): Add support for timeout (clean shutdown) + _vmops = self._get_vmops_for_compute_node(instance['node']) + _vmops.power_off(instance) + +diff --git a/nova/virt/xenapi/driver.py b/nova/virt/xenapi/driver.py +index e7a0d1c..ccbe765 100644 +--- a/nova/virt/xenapi/driver.py ++++ b/nova/virt/xenapi/driver.py +@@ -325,8 +325,9 @@ class XenAPIDriver(driver.ComputeDriver): + """Unrescue the specified instance.""" + self._vmops.unrescue(instance) + +- def power_off(self, instance): ++ def power_off(self, instance, timeout=0, retry_interval=0): + """Power off the specified instance.""" ++ # TODO(PhilDay): Add support for timeout (clean shutdown) + self._vmops.power_off(instance) + + def power_on(self, context, instance, network_info, diff -Nru nova-2014.1.3/debian/patches/Clean-up-iSCSI-multipath-devices-in-Post-Live-Migrat.patch nova-2014.1.5/debian/patches/Clean-up-iSCSI-multipath-devices-in-Post-Live-Migrat.patch --- nova-2014.1.3/debian/patches/Clean-up-iSCSI-multipath-devices-in-Post-Live-Migrat.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/Clean-up-iSCSI-multipath-devices-in-Post-Live-Migrat.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,140 @@ +From 65e606faa159b4ff6124b60e9ba090833f93a48b Mon Sep 17 00:00:00 2001 +From: Jeegn Chen +Date: Fri, 15 Aug 2014 21:40:14 +0800 +Subject: [PATCH 3/4] Clean up iSCSI multipath devices in Post Live Migration + +When a volume is attached to a VM in the source compute node through +multipath, the related files in /dev/disk/by-path/ are like this + +stack@ubuntu-server12:~/devstack$ ls /dev/disk/by-path/*24 +/dev/disk/by-path/ip-192.168.3.50:3260-iscsi-iqn.1992-04.com.emc:cx. +fnm00124500890.a5-lun-24 +/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx. +fnm00124500890.b4-lun-24 + +The information on its corresponding multipath device is like this +stack@ubuntu-server12:~/devstack$ sudo multipath -l 3600601602ba034 +00921130967724e411 +3600601602ba03400921130967724e411 dm-3 DGC,VRAID +size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw +|-+- policy='round-robin 0' prio=-1 status=active +| `- 19:0:0:24 sdl 8:176 active undef running +`-+- policy='round-robin 0' prio=-1 status=enabled + `- 18:0:0:24 sdj 8:144 active undef running + +But when the VM is migrated to the destination, the related information is +like the following example since we CANNOT guarantee that all nodes are able +to access the same iSCSI portals and the same target LUN number. 
And the
+information is used to overwrite connection_info in the DB before the post
+live migration logic is executed.
+
+stack@ubuntu-server13:~/devstack$ ls /dev/disk/by-path/*24
+/dev/disk/by-path/ip-192.168.3.51:3260-iscsi-iqn.1992-04.com.emc:cx.
+fnm00124500890.b5-lun-100
+/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.
+fnm00124500890.b4-lun-100
+
+stack@ubuntu-server13:~/devstack$ sudo multipath -l 3600601602ba034
+00921130967724e411
+3600601602ba03400921130967724e411 dm-3 DGC,VRAID
+size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
+|-+- policy='round-robin 0' prio=-1 status=active
+| `- 19:0:0:100 sdf 8:176 active undef running
+`-+- policy='round-robin 0' prio=-1 status=enabled
+ `- 18:0:0:100 sdg 8:144 active undef running
+
+As a result, if post live migration in source side uses <ip>, <iqn> and
+<lun> to find the devices to clean up, it may use 192.168.3.51,
+iqn.1992-04.com.emc:cx.
+fnm00124500890.a5 and 100.
+However, the correct one should be 192.168.3.50, iqn.1992-04.com.emc:cx.
+fnm00124500890.a5 and 24.
+
+Similar philosophy in (https://bugs.launchpad.net/nova/+bug/1327497) can be
+used to fix it: Leverage the unchanged multipath_id to find correct devices
+to delete.
+
+Conflicts:
+	nova/tests/virt/libvirt/test_libvirt_volume.py
+
+NOTE(wolsen): Conflicts are due to additional tests not included in this
+cherry-pick.
+
+Change-Id: I875293c3ade9423caa2b8afe9eca25a74606d262
+Closes-Bug: #1357368
+(cherry picked from commit aa9104ccedb3ff13cc34a498b11f5e8ff100fd99)
+(cherry picked from commit 9c3ec16576e2f7c9d5aff6e4b620d708e6636568)
+---
+ nova/tests/virt/libvirt/test_libvirt_volume.py | 30 ++++++++++++++++++++++++++
+ nova/virt/libvirt/volume.py | 8 ++++++-
+ 2 files changed, 37 insertions(+), 1 deletion(-)
+
+diff --git a/nova/tests/virt/libvirt/test_libvirt_volume.py b/nova/tests/virt/libvirt/test_libvirt_volume.py
+index e068c01..187061b 100644
+--- a/nova/tests/virt/libvirt/test_libvirt_volume.py
++++ b/nova/tests/virt/libvirt/test_libvirt_volume.py
+@@ -351,6 +351,36 @@ class LibvirtVolumeTestCase(test.NoDBTestCase):
+                                           ['-f', 'fake-multipath-devname'],
+                                           check_exit_code=[0, 1])
+ 
++    def test_libvirt_iscsi_driver_multipath_id(self):
++        libvirt_driver = volume.LibvirtISCSIVolumeDriver(self.fake_conn)
++        libvirt_driver.use_multipath = True
++        self.stubs.Set(libvirt_driver, '_run_iscsiadm_bare',
++                       lambda x, check_exit_code: ('',))
++        self.stubs.Set(libvirt_driver, '_rescan_iscsi', lambda: None)
++        self.stubs.Set(libvirt_driver, '_get_host_device', lambda x: None)
++        self.stubs.Set(libvirt_driver, '_rescan_multipath', lambda: None)
++        fake_multipath_id = 'fake_multipath_id'
++        fake_multipath_device = '/dev/mapper/%s' % fake_multipath_id
++        self.stubs.Set(libvirt_driver, '_get_multipath_device_name',
++                       lambda x: fake_multipath_device)
++
++        def fake_disconnect_volume_multipath_iscsi(iscsi_properties,
++                                                   multipath_device):
++            if fake_multipath_device != multipath_device:
++                raise Exception('Invalid multipath_device.')
++
++        self.stubs.Set(libvirt_driver, '_disconnect_volume_multipath_iscsi',
++                       fake_disconnect_volume_multipath_iscsi)
++        with mock.patch.object(os.path, 'exists', return_value=True):
++            vol = {'id': 1, 'name': self.name}
++            connection_info = self.iscsi_connection(vol, self.location,
++                                                    self.iqn)
++            libvirt_driver.connect_volume(connection_info,
++                                          self.disk_info)
++            self.assertEqual(fake_multipath_id,
++                             connection_info['data']['multipath_id'])
++            libvirt_driver.disconnect_volume(connection_info, "fake")
++
+     def iser_connection(self, 
volume, location, iqn): + return { + 'driver_volume_type': 'iser', +diff --git a/nova/virt/libvirt/volume.py b/nova/virt/libvirt/volume.py +index 8e18b0e..a2e6b14 100644 +--- a/nova/virt/libvirt/volume.py ++++ b/nova/virt/libvirt/volume.py +@@ -350,6 +350,8 @@ class LibvirtISCSIVolumeDriver(LibvirtBaseVolumeDriver): + + if multipath_device is not None: + host_device = multipath_device ++ connection_info['data']['multipath_id'] = \ ++ multipath_device.split('/')[-1] + + conf.source_type = "block" + conf.source_path = host_device +@@ -362,7 +364,11 @@ class LibvirtISCSIVolumeDriver(LibvirtBaseVolumeDriver): + host_device = self._get_host_device(iscsi_properties) + multipath_device = None + if self.use_multipath: +- multipath_device = self._get_multipath_device_name(host_device) ++ if 'multipath_id' in iscsi_properties: ++ multipath_device = ('/dev/mapper/%s' % ++ iscsi_properties['multipath_id']) ++ else: ++ multipath_device = self._get_multipath_device_name(host_device) + + super(LibvirtISCSIVolumeDriver, + self).disconnect_volume(connection_info, disk_dev) +-- +1.9.1 + diff -Nru nova-2014.1.3/debian/patches/CVE-2014-3708.patch nova-2014.1.5/debian/patches/CVE-2014-3708.patch --- nova-2014.1.3/debian/patches/CVE-2014-3708.patch 2015-01-16 20:29:23.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2014-3708.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,189 +0,0 @@ -From b6a080bbdaf1a5d8534e8e0519e150f55c46d18c Mon Sep 17 00:00:00 2001 -From: Vishvananda Ishaya -Date: Mon, 22 Sep 2014 23:31:07 -0700 -Subject: [PATCH] Fixes DOS issue in instance list ip filter - -Converts the ip filtering to filter the list locally based -on the network info cache instead of making an extremely expensive -call over to nova network where it attempts to retrieve a list -of every instance in the system. 
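That local filtering is the heart of the fix (retained in 2014.1.5 via the upstream 2014.1.4 commit [b6a080b]). A toy version over a simplified instance shape, rather than nova's real objects and network info cache, looks like this:

import re

def ip_filter(instances, pattern):
    # Compile the requested pattern once, then test each instance's
    # cached fixed IPs locally instead of calling out to nova-network.
    ip_re = re.compile(pattern)
    return [inst for inst in instances
            if any(ip_re.match(ip) for ip in inst['fixed_ips'])]

insts = [{'id': 1, 'fixed_ips': ['192.168.0.10']},
         {'id': 2, 'fixed_ips': ['192.168.0.20']}]
assert [i['id'] for i in ip_filter(insts, r'.*10')] == [1]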
- -Change-Id: I455f6ab4acdecacc5152b11a183027f933dc4475 -Closes-bug: #1358583 -(cherry picked from commit 24c8cc53fd6a62fcad1287b2cdcf32d2ff0991d9) ---- - nova/compute/api.py | 30 +++++++++++---- - nova/tests/compute/test_compute.py | 75 ++++++++++++++++++++++++++++-------- - 2 files changed, 81 insertions(+), 24 deletions(-) - -Index: nova-2014.1.3/nova/compute/api.py -=================================================================== ---- nova-2014.1.3.orig/nova/compute/api.py 2015-01-16 15:29:19.226910479 -0500 -+++ nova-2014.1.3/nova/compute/api.py 2015-01-16 15:29:19.182910046 -0500 -@@ -1885,6 +1885,9 @@ - sort_key, sort_dir, limit=limit, marker=marker, - expected_attrs=expected_attrs) - -+ if 'ip6' in filters or 'ip' in filters: -+ inst_models = self._ip_filter(inst_models, filters) -+ - if want_objects: - return inst_models - -@@ -1895,18 +1898,29 @@ - - return instances - -+ @staticmethod -+ def _ip_filter(inst_models, filters): -+ ipv4_f = re.compile(str(filters.get('ip'))) -+ ipv6_f = re.compile(str(filters.get('ip6'))) -+ result_objs = [] -+ for instance in inst_models: -+ nw_info = compute_utils.get_nw_info_for_instance(instance) -+ for vif in nw_info: -+ for fixed_ip in vif.fixed_ips(): -+ address = fixed_ip.get('address') -+ if not address: -+ continue -+ version = fixed_ip.get('version') -+ if ((version == 4 and ipv4_f.match(address)) or -+ (version == 6 and ipv6_f.match(address))): -+ result_objs.append(instance) -+ continue -+ return instance_obj.InstanceList(objects=result_objs) -+ - def _get_instances_by_filters(self, context, filters, - sort_key, sort_dir, - limit=None, - marker=None, expected_attrs=None): -- if 'ip6' in filters or 'ip' in filters: -- res = self.network_api.get_instance_uuids_by_ip_filter(context, -- filters) -- # NOTE(jkoelker) It is possible that we will get the same -- # instance uuid twice (one for ipv4 and ipv6) -- uuids = set([r['instance_uuid'] for r in res]) -- filters['uuid'] = uuids -- - fields = ['metadata', 'system_metadata', 'info_cache', - 'security_groups'] - if expected_attrs: -Index: nova-2014.1.3/nova/tests/compute/test_compute.py -=================================================================== ---- nova-2014.1.3.orig/nova/tests/compute/test_compute.py 2015-01-16 15:29:19.226910479 -0500 -+++ nova-2014.1.3/nova/tests/compute/test_compute.py 2015-01-16 15:29:19.186910085 -0500 -@@ -60,6 +60,7 @@ - from nova.objects import block_device as block_device_obj - from nova.objects import instance as instance_obj - from nova.objects import instance_group as instance_group_obj -+from nova.objects import instance_info_cache as cache_obj - from nova.objects import migration as migration_obj - from nova.objects import quotas as quotas_obj - from nova.openstack.common.gettextutils import _ -@@ -83,7 +84,6 @@ - from nova.tests import matchers - from nova.tests.objects import test_flavor - from nova.tests.objects import test_migration --from nova.tests.objects import test_network - from nova import utils - from nova.virt import block_device as driver_block_device - from nova.virt import event -@@ -6726,6 +6726,35 @@ - self.assertIsNone(instance['task_state']) - return instance, instance_uuid - -+ def test_ip_filtering(self): -+ info = [{ -+ 'address': 'aa:bb:cc:dd:ee:ff', -+ 'id': 1, -+ 'network': { -+ 'bridge': 'br0', -+ 'id': 1, -+ 'label': 'private', -+ 'subnets': [{ -+ 'cidr': '192.168.0.0/24', -+ 'ips': [{ -+ 'address': '192.168.0.10', -+ 'type': 'fixed', -+ }] -+ }] -+ } -+ }] -+ -+ info1 = 
cache_obj.InstanceInfoCache(network_info=jsonutils.dumps(info)) -+ inst1 = instance_obj.Instance(id=1, info_cache=info1) -+ info[0]['network']['subnets'][0]['ips'][0]['address'] = '192.168.0.20' -+ info2 = cache_obj.InstanceInfoCache(network_info=jsonutils.dumps(info)) -+ inst2 = instance_obj.Instance(id=2, info_cache=info2) -+ instances = instance_obj.InstanceList(objects=[inst1, inst2]) -+ -+ instances = self.compute_api._ip_filter(instances, {'ip': '.*10'}) -+ self.assertEqual(len(instances), 1) -+ self.assertEqual(instances[0].id, 1) -+ - def test_create_with_too_little_ram(self): - # Test an instance type with too little memory. - -@@ -7530,33 +7559,47 @@ - db.instance_destroy(c, instance2['uuid']) - db.instance_destroy(c, instance3['uuid']) - -- @mock.patch('nova.db.network_get') -- @mock.patch('nova.db.fixed_ips_by_virtual_interface') -- def test_get_all_by_multiple_options_at_once(self, fixed_get, network_get): -+ def test_get_all_by_multiple_options_at_once(self): - # Test searching by multiple options at once. - c = context.get_admin_context() -- network_manager = fake_network.FakeNetworkManager(self.stubs) -- fixed_get.side_effect = ( -- network_manager.db.fixed_ips_by_virtual_interface) -- network_get.return_value = ( -- dict(test_network.fake_network, -- **network_manager.db.network_get(None, 1))) -- self.stubs.Set(self.compute_api.network_api, -- 'get_instance_uuids_by_ip_filter', -- network_manager.get_instance_uuids_by_ip_filter) -+ -+ def fake_network_info(ip): -+ info = [{ -+ 'address': 'aa:bb:cc:dd:ee:ff', -+ 'id': 1, -+ 'network': { -+ 'bridge': 'br0', -+ 'id': 1, -+ 'label': 'private', -+ 'subnets': [{ -+ 'cidr': '192.168.0.0/24', -+ 'ips': [{ -+ 'address': ip, -+ 'type': 'fixed', -+ }] -+ }] -+ } -+ }] -+ return jsonutils.dumps(info) - - instance1 = self._create_fake_instance({ - 'display_name': 'woot', - 'id': 1, -- 'uuid': '00000000-0000-0000-0000-000000000010'}) -+ 'uuid': '00000000-0000-0000-0000-000000000010', -+ 'info_cache': {'network_info': -+ fake_network_info('192.168.0.1')}}) - instance2 = self._create_fake_instance({ - 'display_name': 'woo', - 'id': 20, -- 'uuid': '00000000-0000-0000-0000-000000000020'}) -+ 'uuid': '00000000-0000-0000-0000-000000000020', -+ 'info_cache': {'network_info': -+ fake_network_info('192.168.0.2')}}) - instance3 = self._create_fake_instance({ - 'display_name': 'not-woot', - 'id': 30, -- 'uuid': '00000000-0000-0000-0000-000000000030'}) -+ 'uuid': '00000000-0000-0000-0000-000000000030', -+ 'info_cache': {'network_info': -+ fake_network_info('192.168.0.3')}}) - - # ip ends up matching 2nd octet here.. so all 3 match ip - # but 'name' only matches one diff -Nru nova-2014.1.3/debian/patches/CVE-2015-3241-1.patch nova-2014.1.5/debian/patches/CVE-2015-3241-1.patch --- nova-2014.1.3/debian/patches/CVE-2015-3241-1.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-3241-1.patch 2017-09-12 15:48:00.000000000 +0000 @@ -0,0 +1,335 @@ +From 8232a7c6d58fe24e74259557986b5af9655bfd31 Mon Sep 17 00:00:00 2001 +From: John Warren +Date: Wed, 11 Jun 2014 20:29:28 +0000 +Subject: [PATCH] Check for resize path on libvirt instance delete + +If an instance is deleted after the instance's disk image path has +been renamed by adding the "_resize" suffix to it but before the +resize operation completes, the libvirt driver will not delete the +orphaned files and manual intervention is needed to get them deleted. 
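
The fix described in the next paragraphs rests on one idea: claim the
directory with an atomic rename before deleting it. A standalone sketch of
that idea, with a hypothetical helper name and simplified error handling,
not the patch code itself:

    import os
    import shutil

    def delete_instance_dir(target):
        target_del = target + '_del'
        renamed = False
        # Two passes cover a concurrent rename sneaking in between the two
        # attempts; rename is atomic on the same filesystem, so there is no
        # window between an existence check and the removal.
        for _ in range(2):
            for path in (target, target + '_resize'):
                try:
                    os.rename(path, target_del)
                    renamed = True
                    break
                except OSError:
                    pass
            if renamed:
                break
        # A '_del' directory may also be left over from an interrupted
        # earlier delete, so remove it whether or not a rename succeeded.
        if os.path.exists(target_del):
            shutil.rmtree(target_del)
        return not (os.path.exists(target) or
                    os.path.exists(target + '_resize'))

For example, if only 'instance-0001_resize' exists, the first rename fails,
the second claims it, and the directory is removed in one pass.
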
+
+This fix addresses the issue by attempting to rename the instance path
+by adding a "_del" suffix and if that fails, renaming the instance path
+with the "_resize" suffix by replacing the "_resize" suffix with the
+"_del" suffix. If both renaming operations fail, the sequence is
+repeated, in case the disk image path initially had the "_resize"
+suffix and another thread removed it before the second rename operation
+was attempted. These rename operations are used in favor of checking
+for the existence of paths and deleting if found, because rename
+operations are atomic whereas another thread could rename the path
+between the existence check and the deletion.
+
+Regardless of the outcome of the renaming operations, the existence of
+the instance path with the "_del" suffix is verified and if it exists,
+it is deleted. This is done in case a prior delete operation that
+managed to create the "_del" path was subsequently interrupted before
+all instance files could be deleted.
+
+Note that the LibvirtConnTestCase.test_delete_instance_files test case
+was removed in order to eliminate redundancy.
+
+Closes-Bug: #1308565
+
+(cherry picked from commit 98e6891dfd4408c56644f55fe3cff88703beb4bf)
+
+Upstream-Juno: https://review.openstack.org/#/c/99472/
+Related: rhbz 1257789
+Related: CVE-2015-3241
+
+Change-Id: Ifcb2e18211347ccf3e5472779c5917a729a6eced
+Reviewed-on: https://code.engineering.redhat.com/gerrit/57492
+Tested-by: RHOS Jenkins
+Reviewed-by: Padraig Brady
+---
+ nova/tests/virt/libvirt/test_libvirt.py | 192 +++++++++++++++++++++++++-------
+ nova/virt/libvirt/driver.py | 50 +++++++--
+ 2 files changed, 194 insertions(+), 48 deletions(-)
+
+Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py
+===================================================================
+--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-12 11:47:57.420575903 -0400
++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-12 11:47:57.412575798 -0400
+@@ -5246,9 +5246,10 @@ class LibvirtConnTestCase(test.TestCase)
+ else:
+ libvirt_driver.LibvirtDriver.volume_driver_method(
+ mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg())
+- self.mox.StubOutWithMock(shutil, "rmtree")
+- shutil.rmtree(os.path.join(CONF.instances_path,
+- 'instance-%08x' % int(instance['id'])))
++ self.mox.StubOutWithMock(libvirt_driver.LibvirtDriver,
++ 'delete_instance_files')
++ (libvirt_driver.LibvirtDriver.delete_instance_files(mox.IgnoreArg()).
++ AndReturn(True)) + self.mox.StubOutWithMock(libvirt_driver.LibvirtDriver, '_cleanup_lvm') + libvirt_driver.LibvirtDriver._cleanup_lvm(instance) + +@@ -5327,44 +5328,6 @@ class LibvirtConnTestCase(test.TestCase) + self.stubs.Set(os.path, 'exists', fake_os_path_exists) + conn.destroy(self.context, instance, [], None, False) + +- def test_delete_instance_files(self): +- instance = {"name": "instancename", "id": "42", +- "uuid": "875a8070-d0b9-4949-8b31-104d125c9a64", +- "cleaned": 0, 'info_cache': None, 'security_groups': []} +- +- self.mox.StubOutWithMock(db, 'instance_get_by_uuid') +- self.mox.StubOutWithMock(os.path, 'exists') +- self.mox.StubOutWithMock(shutil, "rmtree") +- +- db.instance_get_by_uuid(mox.IgnoreArg(), mox.IgnoreArg(), +- columns_to_join=['info_cache', +- 'security_groups'], +- use_slave=False +- ).AndReturn(instance) +- os.path.exists(mox.IgnoreArg()).AndReturn(False) +- os.path.exists(mox.IgnoreArg()).AndReturn(True) +- shutil.rmtree(os.path.join(CONF.instances_path, instance['uuid'])) +- os.path.exists(mox.IgnoreArg()).AndReturn(True) +- os.path.exists(mox.IgnoreArg()).AndReturn(False) +- os.path.exists(mox.IgnoreArg()).AndReturn(True) +- shutil.rmtree(os.path.join(CONF.instances_path, instance['uuid'])) +- os.path.exists(mox.IgnoreArg()).AndReturn(False) +- self.mox.ReplayAll() +- +- def fake_obj_load_attr(self, attrname): +- if not hasattr(self, attrname): +- self[attrname] = {} +- +- conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) +- self.stubs.Set(instance_obj.Instance, 'fields', +- {'id': int, 'uuid': str, 'cleaned': int}) +- self.stubs.Set(instance_obj.Instance, 'obj_load_attr', +- fake_obj_load_attr) +- +- inst_obj = instance_obj.Instance.get_by_uuid(None, instance['uuid']) +- self.assertFalse(conn.delete_instance_files(inst_obj)) +- self.assertTrue(conn.delete_instance_files(inst_obj)) +- + def test_reboot_different_ids(self): + class FakeLoopingCall: + def start(self, *a, **k): +@@ -9310,6 +9273,153 @@ class LibvirtDriverTestCase(test.TestCas + instance = self._create_instance() + self.assertTrue(conn.instance_on_disk(instance)) + ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files(self, get_instance_path, exists, exe, ++ shutil): ++ lv = self.libvirtconnection ++ get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ ++ exists.side_effect = [False, False, True, False] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ exe.assert_called_with('mv', '/path', '/path_del') ++ shutil.assert_called_with('/path_del') ++ self.assertTrue(result) ++ ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files_resize(self, get_instance_path, exists, ++ exe, shutil): ++ lv = self.libvirtconnection ++ get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ ++ nova.utils.execute.side_effect = [Exception(), None] ++ exists.side_effect = [False, False, True, False] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ expected = [mock.call('mv', '/path', '/path_del'), ++ mock.call('mv', '/path_resize', '/path_del')] ++ 
self.assertEqual(expected, exe.mock_calls) ++ shutil.assert_called_with('/path_del') ++ self.assertTrue(result) ++ ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files_failed(self, get_instance_path, exists, exe, ++ shutil): ++ lv = self.libvirtconnection ++ get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ ++ exists.side_effect = [False, False, True, True] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ exe.assert_called_with('mv', '/path', '/path_del') ++ shutil.assert_called_with('/path_del') ++ self.assertFalse(result) ++ ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files_mv_failed(self, get_instance_path, exists, ++ exe, shutil): ++ lv = self.libvirtconnection ++ get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ ++ nova.utils.execute.side_effect = Exception() ++ exists.side_effect = [True, True] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ expected = [mock.call('mv', '/path', '/path_del'), ++ mock.call('mv', '/path_resize', '/path_del')] * 2 ++ self.assertEqual(expected, exe.mock_calls) ++ self.assertFalse(result) ++ ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files_resume(self, get_instance_path, exists, ++ exe, shutil): ++ lv = self.libvirtconnection ++ get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ ++ nova.utils.execute.side_effect = Exception() ++ exists.side_effect = [False, False, True, False] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ expected = [mock.call('mv', '/path', '/path_del'), ++ mock.call('mv', '/path_resize', '/path_del')] * 2 ++ self.assertEqual(expected, exe.mock_calls) ++ self.assertTrue(result) ++ ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files_none(self, get_instance_path, exists, ++ exe, shutil): ++ lv = self.libvirtconnection ++ get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ ++ nova.utils.execute.side_effect = Exception() ++ exists.side_effect = [False, False, False, False] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ expected = [mock.call('mv', '/path', '/path_del'), ++ mock.call('mv', '/path_resize', '/path_del')] * 2 ++ self.assertEqual(expected, exe.mock_calls) ++ self.assertEqual(0, len(shutil.mock_calls)) ++ self.assertTrue(result) ++ ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files_concurrent(self, get_instance_path, exists, ++ exe, shutil): ++ lv = self.libvirtconnection ++ 
get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ ++ nova.utils.execute.side_effect = [Exception(), Exception(), None] ++ exists.side_effect = [False, False, True, False] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ expected = [mock.call('mv', '/path', '/path_del'), ++ mock.call('mv', '/path_resize', '/path_del')] ++ expected.append(expected[0]) ++ self.assertEqual(expected, exe.mock_calls) ++ shutil.assert_called_with('/path_del') ++ self.assertTrue(result) ++ + + class LibvirtVolumeUsageTestCase(test.TestCase): + """Test for LibvirtDriver.get_all_volume_usage.""" +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-12 11:47:57.420575903 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-12 11:47:57.416575850 -0400 +@@ -5342,23 +5342,59 @@ class LibvirtDriver(driver.ComputeDriver + + def delete_instance_files(self, instance): + target = libvirt_utils.get_instance_path(instance) +- if os.path.exists(target): +- LOG.info(_('Deleting instance files %s'), target, ++ # A resize may be in progress ++ target_resize = target + '_resize' ++ # Other threads may attempt to rename the path, so renaming the path ++ # to target + '_del' (because it is atomic) and iterating through ++ # twice in the unlikely event that a concurrent rename occurs between ++ # the two rename attempts in this method. In general this method ++ # should be fairly thread-safe without these additional checks, since ++ # other operations involving renames are not permitted when the task ++ # state is not None and the task state should be set to something ++ # other than None by the time this method is invoked. ++ target_del = target + '_del' ++ for i in six.moves.range(2): ++ try: ++ utils.execute('mv', target, target_del) ++ break ++ except Exception: ++ pass ++ try: ++ utils.execute('mv', target_resize, target_del) ++ break ++ except Exception: ++ pass ++ # Either the target or target_resize path may still exist if all ++ # rename attempts failed. ++ remaining_path = None ++ for p in (target, target_resize): ++ if os.path.exists(p): ++ remaining_path = p ++ break ++ ++ # A previous delete attempt may have been interrupted, so target_del ++ # may exist even if all rename attempts during the present method ++ # invocation failed due to the absence of both target and ++ # target_resize. ++ if not remaining_path and os.path.exists(target_del): ++ LOG.info(_('Deleting instance files %s'), target_del, + instance=instance) ++ remaining_path = target_del + try: +- shutil.rmtree(target) ++ shutil.rmtree(target_del) + except OSError as e: + LOG.error(_('Failed to cleanup directory %(target)s: ' +- '%(e)s'), {'target': target, 'e': e}, ++ '%(e)s'), {'target': target_del, 'e': e}, + instance=instance) + + # It is possible that the delete failed, if so don't mark the instance + # as cleaned. 
+- if os.path.exists(target): +- LOG.info(_('Deletion of %s failed'), target, instance=instance) ++ if remaining_path and os.path.exists(remaining_path): ++ LOG.info(_('Deletion of %s failed'), remaining_path, ++ instance=instance) + return False + +- LOG.info(_('Deletion of %s complete'), target, instance=instance) ++ LOG.info(_('Deletion of %s complete'), target_del, instance=instance) + return True + + @property diff -Nru nova-2014.1.3/debian/patches/CVE-2015-3241-2.patch nova-2014.1.5/debian/patches/CVE-2015-3241-2.patch --- nova-2014.1.3/debian/patches/CVE-2015-3241-2.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-3241-2.patch 2017-09-12 12:09:00.000000000 +0000 @@ -0,0 +1,126 @@ +From 203d8803b786a2eaf73389f6c1209f720e1533dd Mon Sep 17 00:00:00 2001 +From: abhishekkekane +Date: Sat, 8 Aug 2015 02:28:50 -0700 +Subject: [PATCH] Sync process utils from oslo for execute callbacks + +------------------------------------------------ +The sync pulls in the following changes: + +Ifc23325 Add 2 callbacks to processutils.execute() +I22b2d7b processutils: ensure on_completion callback is always called +I59d5799 Let oslotest manage the six.move setting for mox +I245750f Remove `processutils` dependency on `log` +Ia5bb418 Fix exception message in openstack.common.processutils.execute +----------------------------------------------- + +Related-Bug: 1387543 +(cherry picked from commit bf23643e36c8764b4bd532546a2cc04385fe0cff) + +Upstream patch removes the six move from +nova/openstack/common/__init__.py. This backport leaves it there as +it doesn't seem to be related, and it upsets python 2.6. + +Upstream-Juno: https://review.openstack.org/#/c/208876/ +Related: rhbz 1257789 +Related: CVE-2015-3241 + +Change-Id: I22b2d7bde8797276f7670bc289d915dab5122481 +Reviewed-on: https://code.engineering.redhat.com/gerrit/57493 +Reviewed-by: Vladik Romanovsky +Tested-by: RHOS Jenkins +Reviewed-by: Matthew Booth +--- + nova/openstack/common/processutils.py | 59 ++++++++++++++++++++++++----------- + 1 file changed, 40 insertions(+), 19 deletions(-) + +diff --git a/nova/openstack/common/processutils.py b/nova/openstack/common/processutils.py +index cb787e2..4ad0a96 100644 +--- a/nova/openstack/common/processutils.py ++++ b/nova/openstack/common/processutils.py +@@ -112,6 +112,17 @@ def execute(*cmd, **kwargs): + :type shell: boolean + :param loglevel: log level for execute commands. + :type loglevel: int. (Should be logging.DEBUG or logging.INFO) ++ :param on_execute: This function will be called upon process creation ++ with the object as a argument. The Purpose of this ++ is to allow the caller of `processutils.execute` to ++ track process creation asynchronously. ++ :type on_execute: function(:class:`subprocess.Popen`) ++ :param on_completion: This function will be called upon process ++ completion with the object as a argument. The ++ Purpose of this is to allow the caller of ++ `processutils.execute` to track process completion ++ asynchronously. 
++ :type on_completion: function(:class:`subprocess.Popen`)
+ :returns: (stdout, stderr) from process execution
+ :raises: :class:`UnknownArgumentError` on
+ receiving unknown arguments
+ :raises: :class:`ProcessExecutionError`
+@@ -127,6 +138,8 @@ def execute(*cmd, **kwargs):
+ root_helper = kwargs.pop('root_helper', '')
+ shell = kwargs.pop('shell', False)
+ loglevel = kwargs.pop('loglevel', logging.DEBUG)
++ on_execute = kwargs.pop('on_execute', None)
++ on_completion = kwargs.pop('on_completion', None)
+
+ if isinstance(check_exit_code, bool):
+ ignore_exit_code = not check_exit_code
+@@ -135,8 +148,7 @@
+ check_exit_code = [check_exit_code]
+
+ if kwargs:
+- raise UnknownArgumentError(_('Got unknown keyword args '
+- 'to utils.execute: %r') % kwargs)
++ raise UnknownArgumentError(_('Got unknown keyword args: %r') % kwargs)
+
+ if run_as_root and hasattr(os, 'geteuid') and os.geteuid() != 0:
+ if not root_helper:
+@@ -168,23 +180,32 @@
+ close_fds=close_fds,
+ preexec_fn=preexec_fn,
+ shell=shell)
+- result = None
+- for _i in six.moves.range(20):
+- # NOTE(russellb) 20 is an arbitrary number of retries to
+- # prevent any chance of looping forever here.
+- try:
+- if process_input is not None:
+- result = obj.communicate(process_input)
+- else:
+- result = obj.communicate()
+- except OSError as e:
+- if e.errno in (errno.EAGAIN, errno.EINTR):
+- continue
+- raise
+- break
+- obj.stdin.close() # pylint: disable=E1101
+- _returncode = obj.returncode # pylint: disable=E1101
+- LOG.log(loglevel, _('Result was %s') % _returncode)
++
++ if on_execute:
++ on_execute(obj)
++
++ try:
++ result = None
++ for _i in six.moves.range(20):
++ # NOTE(russellb) 20 is an arbitrary number of retries to
++ # prevent any chance of looping forever here.
++ try:
++ if process_input is not None:
++ result = obj.communicate(process_input)
++ else:
++ result = obj.communicate()
++ except OSError as e:
++ if e.errno in (errno.EAGAIN, errno.EINTR):
++ continue
++ raise
++ break
++ obj.stdin.close() # pylint: disable=E1101
++ _returncode = obj.returncode # pylint: disable=E1101
++ LOG.log(loglevel, _('Result was %s') % _returncode)
++ finally:
++ if on_completion:
++ on_completion(obj)
++
+ if not ignore_exit_code and _returncode not in check_exit_code:
+ (stdout, stderr) = result
+ sanitized_stdout = strutils.mask_password(stdout)
diff -Nru nova-2014.1.3/debian/patches/CVE-2015-3241-3.patch nova-2014.1.5/debian/patches/CVE-2015-3241-3.patch
--- nova-2014.1.3/debian/patches/CVE-2015-3241-3.patch 1970-01-01 00:00:00.000000000 +0000
+++ nova-2014.1.5/debian/patches/CVE-2015-3241-3.patch 2017-09-12 15:51:05.000000000 +0000
@@ -0,0 +1,362 @@
+Backport of:
+
+From 70d2a051b057054676df663291885defb84a6dd6 Mon Sep 17 00:00:00 2001
+From: abhishekkekane
+Date: Mon, 6 Jul 2015 01:51:26 -0700
+Subject: [PATCH] libvirt: Kill rsync/scp processes before deleting instance
+
+In the resize operation, while files are being copied from the source
+to the destination compute node, scp/rsync processes are not aborted
+after the instance is deleted, because the Linux kernel doesn't
+physically delete instance files until all processes using the file
+handle are closed. Hence the rsync/scp process keeps running until it
+has transferred 100% of the file data.
+
+Added a new module, instancejobtracker, to the libvirt driver which
+will add, remove or terminate the processes running against particular
+instances.
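
The hooks this backport builds on, added to processutils.execute() by the
previous patch, reduce to bracketing a subprocess with two callbacks. An
illustrative standalone sketch, not the real oslo code; it assumes an
'echo' binary on PATH:

    import subprocess

    def execute(*cmd, **kwargs):
        on_execute = kwargs.pop('on_execute', None)
        on_completion = kwargs.pop('on_completion', None)
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        if on_execute:
            on_execute(proc)        # e.g. record proc.pid for the instance
        try:
            out, err = proc.communicate()
        finally:
            if on_completion:
                on_completion(proc) # e.g. forget proc.pid again
        return out, err

    pids = []
    out, _ = execute('echo', 'hello',
                     on_execute=lambda p: pids.append(p.pid),
                     on_completion=lambda p: pids.remove(p.pid))
    assert out.strip() == b'hello' and pids == []

The finally block guarantees the completion callback fires even if
communicate() raises, so no stale pid is left behind.
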
+Added callback methods to execute call which will store the pid of +scp/rsync process in cache as a key: value pair and to remove the +pid from the cache after process completion. Process id will be used to +kill the process if it is running while deleting the instance. Instance +uuid is used as a key in the cache and pid will be the value. + +Conflicts: + nova/tests/unit/virt/libvirt/test_driver.py + nova/tests/unit/virt/libvirt/test_utils.py + nova/virt/libvirt/driver.py + nova/virt/libvirt/utils.py + +Note: The required unit-tests are manually added to the below path, +as new path for unit-tests is not present in stable/juno release. +nova/tests/virt/libvirt/test_driver.py +nova/tests/virt/libvirt/test_utils.py + +SecurityImpact + +Closes-bug: #1387543 +(cherry picked from commit 7ab75d5b0b75fc3426323bef19bf436a258b9707) +(cherry picked from commit b5020a047fc487f35b76fc05f31e52665a1afda1) +(cherry picked from commit 539693e40388c4729c99a2c133b573896296df2a) + +Upstream-Juno: https://review.openstack.org/#/c/214528/ +Resolves: rhbz 1257789 +Resolves: CVE-2015-3241 + +Change-Id: Ie03acc00a7c904aec13c90ae6a53938d08e5e0c9 +Reviewed-on: https://code.engineering.redhat.com/gerrit/57494 +Reviewed-by: Vladik Romanovsky +Tested-by: RHOS Jenkins +Reviewed-by: Matthew Booth +--- + nova/tests/virt/libvirt/test_image_utils.py | 6 +- + nova/tests/virt/libvirt/test_libvirt.py | 40 +++++++++++ + nova/tests/virt/libvirt/test_libvirt_utils.py | 6 +- + nova/virt/libvirt/driver.py | 19 +++++- + nova/virt/libvirt/instancejobtracker.py | 97 +++++++++++++++++++++++++++ + nova/virt/libvirt/utils.py | 14 ++-- + 6 files changed, 172 insertions(+), 10 deletions(-) + create mode 100644 nova/virt/libvirt/instancejobtracker.py + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-12 11:48:41.737154733 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-12 11:48:41.729154628 -0400 +@@ -23,6 +23,7 @@ import mox + import os + import re + import shutil ++import signal + import tempfile + import uuid + +@@ -6709,6 +6710,15 @@ class LibvirtConnTestCase(test.TestCase) + self.mox.ReplayAll() + self.assertTrue(conn._is_storage_shared_with('foo', '/path')) + ++ def test_store_pid_remove_pid(self): ++ instance = self.create_instance_obj(self.context) ++ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) ++ popen = mock.Mock(pid=3) ++ drvr.job_tracker.add_job(instance, popen.pid) ++ self.assertIn(3, drvr.job_tracker.jobs[instance.uuid]) ++ drvr.job_tracker.remove_job(instance, popen.pid) ++ self.assertNotIn(instance.uuid, drvr.job_tracker.jobs) ++ + def test_create_domain_define_xml_fails(self): + """Tests that the xml is logged when defining the domain fails.""" + fake_xml = "this is a test" +@@ -8687,12 +8697,18 @@ class LibvirtDriverTestCase(test.TestCas + def fake_execute(*args, **kwargs): + pass + ++ def fake_copy_image(src, dest, host=None, receive=False, ++ on_execute=None, on_completion=None): ++ self.assertIsNotNone(on_execute) ++ self.assertIsNotNone(on_completion) ++ + self.stubs.Set(self.libvirtconnection, 'get_instance_disk_info', + fake_get_instance_disk_info) + self.stubs.Set(self.libvirtconnection, '_destroy', fake_destroy) + self.stubs.Set(self.libvirtconnection, 'get_host_ip_addr', + fake_get_host_ip_addr) + self.stubs.Set(utils, 'execute', fake_execute) ++ self.stubs.Set(libvirt_utils, 'copy_image', fake_copy_image) + + 
ins_ref = self._create_instance() + flavor = {'root_gb': 10, 'ephemeral_gb': 20} +@@ -9294,6 +9310,30 @@ class LibvirtDriverTestCase(test.TestCas + + @mock.patch('shutil.rmtree') + @mock.patch('nova.utils.execute') ++ @mock.patch('os.path.exists') ++ @mock.patch('os.kill') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ def test_delete_instance_files_kill_running( ++ self, get_instance_path, kill, exists, exe, shutil): ++ lv = self.libvirtconnection ++ get_instance_path.return_value = '/path' ++ params = dict(uuid='fake-uuid', id=1) ++ instance = self._create_instance(params) ++ lv.job_tracker.jobs[instance.uuid] = [3, 4] ++ ++ exists.side_effect = [False, False, True, False] ++ ++ result = lv.delete_instance_files(instance) ++ get_instance_path.assert_called_with(instance) ++ exe.assert_called_with('mv', '/path', '/path_del') ++ kill.assert_has_calls([mock.call(3, signal.SIGKILL), mock.call(3, 0), ++ mock.call(4, signal.SIGKILL), mock.call(4, 0)]) ++ shutil.assert_called_with('/path_del') ++ self.assertTrue(result) ++ self.assertNotIn(instance.uuid, lv.job_tracker.jobs) ++ ++ @mock.patch('shutil.rmtree') ++ @mock.patch('nova.utils.execute') + @mock.patch('os.path.exists') + @mock.patch('nova.virt.libvirt.utils.get_instance_path') + def test_delete_instance_files_resize(self, get_instance_path, exists, +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt_utils.py 2017-09-12 11:48:41.737154733 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_utils.py 2017-09-12 11:48:41.729154628 -0400 +@@ -218,7 +218,8 @@ blah BLAH: bb + mock_execute.assert_called_once_with('cp', 'src', 'dest') + + _rsync_call = functools.partial(mock.call, +- 'rsync', '--sparse', '--compress') ++ 'rsync', '--sparse', '--compress', ++ on_execute=None, on_completion=None) + + @mock.patch('nova.utils.execute') + def test_copy_image_rsync(self, mock_execute): +@@ -241,6 +242,7 @@ blah BLAH: bb + + mock_execute.assert_has_calls([ + self._rsync_call('--dry-run', 'src', 'host:dest'), +- mock.call('scp', 'src', 'host:dest'), ++ mock.call('scp', 'src', 'host:dest', ++ on_execute=None, on_completion=None), + ]) + self.assertEqual(2, mock_execute.call_count) +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-12 11:48:41.737154733 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-12 11:50:03.642222778 -0400 +@@ -104,6 +104,7 @@ from nova.virt.libvirt import config as + from nova.virt.libvirt import firewall as libvirt_firewall + from nova.virt.libvirt import imagebackend + from nova.virt.libvirt import imagecache ++from nova.virt.libvirt import instancejobtracker + from nova.virt.libvirt import utils as libvirt_utils + from nova.virt import netutils + from nova.virt import watchdog_actions +@@ -414,6 +415,8 @@ class LibvirtDriver(driver.ComputeDriver + + self._volume_api = volume.API() + ++ self.job_tracker = instancejobtracker.InstanceJobTracker() ++ + @property + def disk_cachemode(self): + if self._disk_cachemode is None: +@@ -5102,6 +5105,12 @@ class LibvirtDriver(driver.ComputeDriver + img_path = info['path'] + fname = os.path.basename(img_path) + from_path = os.path.join(inst_base_resize, fname) ++ ++ on_execute = lambda process: self.job_tracker.add_job( ++ instance, process.pid) ++ on_completion = lambda process: 
self.job_tracker.remove_job( ++ instance, process.pid) ++ + if info['type'] == 'qcow2' and info['backing_file']: + tmp_path = from_path + "_rbase" + # merge backing file +@@ -5111,11 +5120,15 @@ class LibvirtDriver(driver.ComputeDriver + if shared_storage: + utils.execute('mv', tmp_path, img_path) + else: +- libvirt_utils.copy_image(tmp_path, img_path, host=dest) ++ libvirt_utils.copy_image(tmp_path, img_path, host=dest, ++ on_execute=on_execute, ++ on_completion=on_completion) + utils.execute('rm', '-f', tmp_path) + + else: # raw or qcow2 with no backing file +- libvirt_utils.copy_image(from_path, img_path, host=dest) ++ libvirt_utils.copy_image(from_path, img_path, host=dest, ++ on_execute=on_execute, ++ on_completion=on_completion) + except Exception: + with excutils.save_and_reraise_exception(): + self._cleanup_remote_migration(dest, inst_base, +@@ -5377,6 +5390,8 @@ class LibvirtDriver(driver.ComputeDriver + # invocation failed due to the absence of both target and + # target_resize. + if not remaining_path and os.path.exists(target_del): ++ self.job_tracker.terminate_jobs(instance) ++ + LOG.info(_('Deleting instance files %s'), target_del, + instance=instance) + remaining_path = target_del +Index: nova-2014.1.5/nova/virt/libvirt/instancejobtracker.py +=================================================================== +--- /dev/null 1970-01-01 00:00:00.000000000 +0000 ++++ nova-2014.1.5/nova/virt/libvirt/instancejobtracker.py 2017-09-12 11:48:41.733154681 -0400 +@@ -0,0 +1,97 @@ ++# Copyright 2015 NTT corp. ++# All Rights Reserved. ++# Licensed under the Apache License, Version 2.0 (the "License"); you may ++# not use this file except in compliance with the License. You may obtain ++# a copy of the License at ++# ++# http://www.apache.org/licenses/LICENSE-2.0 ++# ++# Unless required by applicable law or agreed to in writing, software ++# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT ++# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the ++# License for the specific language governing permissions and limitations ++# under the License. ++ ++ ++import collections ++import errno ++import os ++import signal ++ ++from nova.openstack.common.gettextutils import _LE ++from nova.openstack.common.gettextutils import _LW ++from nova.openstack.common import log as logging ++ ++ ++LOG = logging.getLogger(__name__) ++ ++ ++class InstanceJobTracker(object): ++ def __init__(self): ++ self.jobs = collections.defaultdict(list) ++ ++ def add_job(self, instance, pid): ++ """Appends process_id of instance to cache. ++ ++ This method will store the pid of a process in cache as ++ a key: value pair which will be used to kill the process if it ++ is running while deleting the instance. Instance uuid is used as ++ a key in the cache and pid will be the value. ++ ++ :param instance: Object of instance ++ :param pid: Id of the process ++ """ ++ self.jobs[instance.uuid].append(pid) ++ ++ def remove_job(self, instance, pid): ++ """Removes pid of process from cache. ++ ++ This method will remove the pid of a process from the cache. ++ ++ :param instance: Object of instance ++ :param pid: Id of the process ++ """ ++ uuid = instance.uuid ++ if uuid in self.jobs and pid in self.jobs[uuid]: ++ self.jobs[uuid].remove(pid) ++ ++ # remove instance.uuid if no pid's remaining ++ if not self.jobs[uuid]: ++ self.jobs.pop(uuid, None) ++ ++ def terminate_jobs(self, instance): ++ """Kills the running processes for given instance. 
++ ++ This method is used to kill all running processes of the instance if ++ it is deleted in between. ++ ++ :param instance: Object of instance ++ """ ++ pids_to_remove = list(self.jobs.get(instance.uuid, [])) ++ for pid in pids_to_remove: ++ try: ++ # Try to kill the process ++ os.kill(pid, signal.SIGKILL) ++ except OSError as exc: ++ if exc.errno != errno.ESRCH: ++ LOG.error(_LE('Failed to kill process %(pid)s ' ++ 'due to %(reason)s, while deleting the ' ++ 'instance.'), {'pid': pid, 'reason': exc}, ++ instance=instance) ++ ++ try: ++ # Check if the process is still alive. ++ os.kill(pid, 0) ++ except OSError as exc: ++ if exc.errno != errno.ESRCH: ++ LOG.error(_LE('Unexpected error while checking process ' ++ '%(pid)s.'), {'pid': pid}, ++ instance=instance) ++ else: ++ # The process is still around ++ LOG.warn(_LW("Failed to kill a long running process " ++ "%(pid)s related to the instance when " ++ "deleting it."), {'pid': pid}, ++ instance=instance) ++ ++ self.remove_job(instance, pid) +Index: nova-2014.1.5/nova/virt/libvirt/utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/utils.py 2017-09-12 11:48:41.737154733 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/utils.py 2017-09-12 11:50:28.938552498 -0400 +@@ -481,12 +481,15 @@ def get_disk_backing_file(path, basename + return backing_file + + +-def copy_image(src, dest, host=None): ++def copy_image(src, dest, host=None, on_execute=None, ++ on_completion=None): + """Copy a disk image to an existing directory + + :param src: Source image + :param dest: Destination path + :param host: Remote host ++ :param on_execute: Callback method to store pid of process in cache ++ :param on_completion: Callback method to remove pid of process from cache + """ + + if not host: +@@ -505,11 +508,14 @@ def copy_image(src, dest, host=None): + # Do a relatively light weight test first, so that we + # can fall back to scp, without having run out of space + # on the destination for example. +- execute('rsync', '--sparse', '--compress', '--dry-run', src, dest) ++ execute('rsync', '--sparse', '--compress', '--dry-run', src, dest, ++ on_execute=on_execute, on_completion=on_completion) + except processutils.ProcessExecutionError: +- execute('scp', src, dest) ++ execute('scp', src, dest, on_execute=on_execute, ++ on_completion=on_completion) + else: +- execute('rsync', '--sparse', '--compress', src, dest) ++ execute('rsync', '--sparse', '--compress', src, dest, ++ on_execute=on_execute, on_completion=on_completion) + + + def write_to_file(path, contents, umask=None): diff -Nru nova-2014.1.3/debian/patches/CVE-2015-3280.patch nova-2014.1.5/debian/patches/CVE-2015-3280.patch --- nova-2014.1.3/debian/patches/CVE-2015-3280.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-3280.patch 2017-09-12 12:20:55.000000000 +0000 @@ -0,0 +1,218 @@ +From 38efa64f487ed644068b28ea050bb43f2e291208 Mon Sep 17 00:00:00 2001 +From: Rajesh Tailor +Date: Wed, 4 Mar 2015 05:05:19 -0800 +Subject: [PATCH] Delete orphaned instance files from compute nodes + +While resizing/revert-resizing instance, if instance gets deleted +in between, then instance files remains either on the source or +destination compute node. + +To address this issue, added a new periodic task +'_cleanup_incomplete_migrations' which takes care of deleting +instance files from source/destination compute nodes and then +mark migration record as failed so that it doesn't appear again +in the next periodic task run. 
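
Stripped of nova's object model, the periodic task just described comes down
to a few lines. A standalone sketch with plain dicts and a fake driver as
stand-ins (hypothetical names, not the patch code):

    def cleanup_incomplete_migrations(host, migrations, instances, driver):
        # 'migrations' holds error-status records touching this host;
        # 'instances' holds the already-deleted instances they refer to.
        by_uuid = dict((inst['uuid'], inst) for inst in instances)
        for migration in migrations:
            inst = by_uuid.get(migration['instance_uuid'])
            if inst is None or inst['host'] == host:
                # Files on instance.host were removed by the normal delete
                # path; only the "other" node can be holding orphans.
                continue
            driver.delete_instance_files(inst)  # drop the orphaned files
            migration['status'] = 'failed'      # keep it out of the next run

    class FakeDriver(object):
        def delete_instance_files(self, inst):
            print('cleaning %s' % inst['uuid'])

    migs = [{'instance_uuid': 'u1', 'status': 'error'}]
    insts = [{'uuid': 'u1', 'host': 'other-host'}]
    cleanup_incomplete_migrations('this-host', migs, insts, FakeDriver())
    assert migs[0]['status'] == 'failed'

Marking the migration 'failed' is what stops the same record from being
reprocessed on every later run.
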
+ +SecurityImpact + +(cherry picked from commit 18d6b5cc79973fc553daf7a92f22cce4dc0ca013) + +Conflicts: + nova/compute/manager.py + nova/tests/unit/compute/test_compute_mgr.py + +(cherry picked from commit fa72fb8b51d59e04913c871539cee98a3da79058) + +Conflicts: + nova/tests/compute/test_compute_mgr.py + nova/compute/manager.py + +Closes-Bug: 1392527 +Resolves: rhbz 1264278 +Resolves: rhbz 1264279 +Upstream-Juno: https://review.openstack.org/#/c/219301/ +Change-Id: I9866d8e32e99b9f907921f4b226edf7b62bd83a7 +Reviewed-on: https://code.engineering.redhat.com/gerrit/58740 +Tested-by: RHOS Jenkins +Reviewed-by: Nikola Dipanov +Reviewed-by: Jon Schlueter +--- + nova/compute/manager.py | 60 ++++++++++++++++++++++++++-- + nova/tests/compute/test_compute_mgr.py | 73 ++++++++++++++++++++++++++++++++++ + 2 files changed, 129 insertions(+), 4 deletions(-) + +Index: nova-2014.1.5/nova/compute/manager.py +=================================================================== +--- nova-2014.1.5.orig/nova/compute/manager.py 2017-09-12 08:20:52.683037454 -0400 ++++ nova-2014.1.5/nova/compute/manager.py 2017-09-12 08:20:52.675037354 -0400 +@@ -245,12 +245,18 @@ def errors_out_migration(function): + def decorated_function(self, context, *args, **kwargs): + try: + return function(self, context, *args, **kwargs) +- except Exception: ++ except Exception as ex: + with excutils.save_and_reraise_exception(): + migration = kwargs['migration'] +- status = migration.status +- if status not in ['migrating', 'post-migrating']: +- return ++ ++ # NOTE(rajesht): If InstanceNotFound error is thrown from ++ # decorated function, migration status should be set to ++ # 'error', without checking current migration status. ++ if not isinstance(ex, exception.InstanceNotFound): ++ status = migration.status ++ if status not in ['migrating', 'post-migrating']: ++ return ++ + migration.status = 'error' + try: + migration.save(context.elevated()) +@@ -3279,6 +3285,7 @@ class ComputeManager(manager.Manager): + @wrap_exception() + @reverts_task_state + @wrap_instance_event ++ @errors_out_migration + @wrap_instance_fault + def revert_resize(self, context, instance, migration, reservations): + """Destroys the new instance on the destination machine. +@@ -3333,6 +3340,7 @@ class ComputeManager(manager.Manager): + @wrap_exception() + @reverts_task_state + @wrap_instance_event ++ @errors_out_migration + @wrap_instance_fault + def finish_revert_resize(self, context, instance, reservations, migration): + """Finishes the second half of reverting a resize. +@@ -5834,3 +5842,47 @@ class ComputeManager(manager.Manager): + instance.cleaned = True + with utils.temporary_mutation(context, read_deleted='yes'): + instance.save(context) ++ ++ @periodic_task.periodic_task(spacing=CONF.instance_delete_interval) ++ def _cleanup_incomplete_migrations(self, context): ++ """Delete instance files on failed resize/revert-resize operation ++ ++ During resize/revert-resize operation, if that instance gets deleted ++ in-between then instance files might remain either on source or ++ destination compute node because of race condition. 
++ """ ++ LOG.debug('Cleaning up deleted instances with incomplete migration ') ++ migration_filters = {'host': CONF.host, ++ 'status': 'error'} ++ migrations = migration_obj.MigrationList.get_by_filters(context, ++ migration_filters) ++ ++ if not migrations: ++ return ++ ++ inst_uuid_from_migrations = set([migration.instance_uuid for migration ++ in migrations]) ++ ++ inst_filters = {'deleted': True, 'soft_deleted': False, ++ 'uuid': inst_uuid_from_migrations} ++ attrs = ['info_cache', 'security_groups', 'system_metadata'] ++ with utils.temporary_mutation(context, read_deleted='yes'): ++ instances = instance_obj.InstanceList.get_by_filters( ++ context, inst_filters, expected_attrs=attrs, use_slave=True) ++ ++ for instance in instances: ++ if instance.host != CONF.host: ++ for migration in migrations: ++ if instance.uuid == migration.instance_uuid: ++ # Delete instance files if not cleanup properly either ++ # from the source or destination compute nodes when ++ # the instance is deleted during resizing. ++ self.driver.delete_instance_files(instance) ++ try: ++ migration.status = 'failed' ++ migration.save(context.elevated()) ++ except exception.MigrationNotFound: ++ LOG.warning(_LW("Migration %s is not found."), ++ migration.id, context=context, ++ instance=instance) ++ break +Index: nova-2014.1.5/nova/tests/compute/test_compute_mgr.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/compute/test_compute_mgr.py 2017-09-12 08:20:52.683037454 -0400 ++++ nova-2014.1.5/nova/tests/compute/test_compute_mgr.py 2017-09-12 08:20:52.675037354 -0400 +@@ -870,6 +870,79 @@ class ComputeManagerUnitTestCase(test.No + self.assertFalse(c.cleaned) + self.assertEqual('1', c.system_metadata['clean_attempts']) + ++ @mock.patch.object(migration_obj.Migration, 'save') ++ @mock.patch.object(migration_obj.MigrationList, 'get_by_filters') ++ @mock.patch.object(instance_obj.InstanceList, 'get_by_filters') ++ def _test_cleanup_incomplete_migrations(self, inst_host, ++ mock_inst_get_by_filters, ++ mock_migration_get_by_filters, ++ mock_save): ++ def fake_inst(context, uuid, host): ++ inst = instance_obj.Instance(context) ++ inst.uuid = uuid ++ inst.host = host ++ return inst ++ ++ def fake_migration(uuid, status, inst_uuid, src_host, dest_host): ++ migration = migration_obj.Migration() ++ migration.uuid = uuid ++ migration.status = status ++ migration.instance_uuid = inst_uuid ++ migration.source_compute = src_host ++ migration.dest_compute = dest_host ++ return migration ++ ++ fake_instances = [fake_inst(self.context, '111', inst_host), ++ fake_inst(self.context, '222', inst_host)] ++ ++ fake_migrations = [fake_migration('123', 'error', '111', ++ 'fake-host', 'fake-mini'), ++ fake_migration('456', 'error', '222', ++ 'fake-host', 'fake-mini')] ++ ++ mock_migration_get_by_filters.return_value = fake_migrations ++ mock_inst_get_by_filters.return_value = fake_instances ++ ++ with mock.patch.object(self.compute.driver, 'delete_instance_files'): ++ self.compute._cleanup_incomplete_migrations(self.context) ++ ++ # Ensure that migration status is set to 'failed' after instance ++ # files deletion for those instances whose instance.host is not ++ # same as compute host where periodic task is running. 
++ for inst in fake_instances:
++ if inst.host != CONF.host:
++ for mig in fake_migrations:
++ if inst.uuid == mig.instance_uuid:
++ self.assertEqual('failed', mig.status)
++
++ def test_cleanup_incomplete_migrations_dest_node(self):
++ """Test to ensure instance files are deleted from destination node.
++
++ If an instance gets deleted during resizing/revert-resizing
++ operation, in that case instance files gets deleted from
++ instance.host (source host here), but there is possibility that
++ instance files could be present on destination node.
++
++ This test ensures that `_cleanup_incomplete_migration` periodic
++ task deletes orphaned instance files from destination compute node.
++ """
++ self.flags(host='fake-mini')
++ self._test_cleanup_incomplete_migrations('fake-host')
++
++ def test_cleanup_incomplete_migrations_source_node(self):
++ """Test to ensure instance files are deleted from source node.
++
++ If instance gets deleted during resizing/revert-resizing operation,
++ in that case instance files gets deleted from instance.host (dest
++ host here), but there is possibility that instance files could be
++ present on source node.
++
++ This test ensures that `_cleanup_incomplete_migration` periodic
++ task deletes orphaned instance files from source compute node.
++ """
++ self.flags(host='fake-host')
++ self._test_cleanup_incomplete_migrations('fake-mini')
++
+ def test_swap_volume_volume_api_usage(self):
+ # This test ensures that volume_id arguments are passed to volume_api
+ # and that volume states are OK
diff -Nru nova-2014.1.3/debian/patches/CVE-2015-5162-1.patch nova-2014.1.5/debian/patches/CVE-2015-5162-1.patch
--- nova-2014.1.3/debian/patches/CVE-2015-5162-1.patch 1970-01-01 00:00:00.000000000 +0000
+++ nova-2014.1.5/debian/patches/CVE-2015-5162-1.patch 2017-09-13 12:31:39.000000000 +0000
@@ -0,0 +1,370 @@
+From 994da57713461bb5524e641a93efc0ecd94ef329 Mon Sep 17 00:00:00 2001
+From: Victor Stinner
+Date: Fri, 14 Oct 2016 16:17:58 +0200
+Subject: [PATCH] Add prlimit parameter to execute()
+
+Add a new oslo_concurrency.prlimit module which is written to be used
+on the command line:
+
+ python -m oslo_concurrency.prlimit --rss=RSS -- program arg1 ...
+
+This module calls setrlimit() to restrict the resources and then
+executes the program. Its command line is written to be the same as
+the Linux prlimit system program.
+
+Add a new ProcessLimits class to processutils: resource limits on a
+process.
+
+Add an optional prlimit parameter to process_utils.execute(). If the
+parameter is used, wrap the command through the new oslo_concurrency
+prlimit wrapper.
+
+Linux provides a prlimit command line tool which implements the same
+feature (and even more), but it requires util-linux v2.21, and
+OpenStack targets other operating systems like Solaris and FreeBSD.
+
+Upstream-Liberty: https://review.openstack.org/#/c/327630/
+Resolves: rhbz#1382549
+
+NOTE(vstinner): The backport comes from oslo.concurrency of OSP 6; I
+edited the patch manually to adapt it to the old
+nova/openstack/common/ hierarchy and I created a new unit test file.
+The test_relative_path() unit test was not backported because
+execute() lacks the env_variables parameter required by the test.
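
The wrapper's effect can be approximated in a few lines of standalone,
POSIX-only Python. This sketch takes the preexec_fn shortcut for brevity;
the patch instead execs a separate Python wrapper process that calls
setrlimit() and then os.execv()s the real program, which avoids
preexec_fn's threading caveats:

    import resource
    import subprocess
    import sys

    def run_limited(cmd, address_space):
        # Cap RLIMIT_AS in the child just before it execs 'cmd'.
        def set_limits():
            resource.setrlimit(resource.RLIMIT_AS,
                               (address_space, address_space))
        return subprocess.call(cmd, preexec_fn=set_limits)

    # A 1 GiB address-space cap is ample for a trivial child process.
    rc = run_limited([sys.executable, '-c', 'pass'], 1024 ** 3)
    assert rc == 0

A child that tries to allocate past the cap gets a MemoryError (or a failed
mmap) instead of exhausting the host, which is the point of the qemu-img
hardening this series feeds into.
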
+ +Change-Id: Ib40aa62958ab9c157a2bd51d7ff3edb445556285 +Related-Bug: 1449062 +(cherry-pick from b2e78569c5cabc9582c02aacff1ce2a5e186c3ab) +(cherry picked from commit e33f64fc7920bc4c7051f35042237403fddf1f02) +Reviewed-on: https://code.engineering.redhat.com/gerrit/87169 +Tested-by: RHOS Jenkins +Reviewed-by: Kashyap Chamarthy +Tested-by: Victor Stinner +--- + nova/openstack/common/prlimit.py | 89 +++++++++++++++++ + nova/openstack/common/processutils.py | 49 +++++++++ + nova/tests/openstack_common/__init__.py | 0 + nova/tests/openstack_common/test_processutils.py | 122 +++++++++++++++++++++++ + 4 files changed, 260 insertions(+) + create mode 100644 nova/openstack/common/prlimit.py + create mode 100644 nova/tests/openstack_common/__init__.py + create mode 100644 nova/tests/openstack_common/test_processutils.py + +diff --git a/nova/openstack/common/prlimit.py b/nova/openstack/common/prlimit.py +new file mode 100644 +index 0000000..fa1ef68 +--- /dev/null ++++ b/nova/openstack/common/prlimit.py +@@ -0,0 +1,89 @@ ++# Copyright 2016 Red Hat. ++# All Rights Reserved. ++# ++# Licensed under the Apache License, Version 2.0 (the "License"); you may ++# not use this file except in compliance with the License. You may obtain ++# a copy of the License at ++# ++# http://www.apache.org/licenses/LICENSE-2.0 ++# ++# Unless required by applicable law or agreed to in writing, software ++# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT ++# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the ++# License for the specific language governing permissions and limitations ++# under the License. ++ ++from __future__ import print_function ++ ++import argparse ++import os ++import resource ++import sys ++ ++USAGE_PROGRAM = ('%s -m nova.openstack.common.prlimit' ++ % os.path.basename(sys.executable)) ++ ++RESOURCES = ( ++ # argparse argument => resource ++ ('as', resource.RLIMIT_AS), ++ ('nofile', resource.RLIMIT_NOFILE), ++ ('rss', resource.RLIMIT_RSS), ++) ++ ++ ++def parse_args(): ++ parser = argparse.ArgumentParser(description='prlimit', prog=USAGE_PROGRAM) ++ parser.add_argument('--as', type=int, ++ help='Address space limit in bytes') ++ parser.add_argument('--nofile', type=int, ++ help='Maximum number of open files') ++ parser.add_argument('--rss', type=int, ++ help='Maximum Resident Set Size (RSS) in bytes') ++ parser.add_argument('program', ++ help='Program (absolute path)') ++ parser.add_argument('program_args', metavar="arg", nargs='...', ++ help='Program parameters') ++ ++ args = parser.parse_args() ++ return args ++ ++ ++def main(): ++ args = parse_args() ++ ++ program = args.program ++ if not os.path.isabs(program): ++ # program uses a relative path: try to find the absolute path ++ # to the executable ++ if sys.version_info >= (3, 3): ++ import shutil ++ program_abs = shutil.which(program) ++ else: ++ import distutils.spawn ++ program_abs = distutils.spawn.find_executable(program) ++ if program_abs: ++ program = program_abs ++ ++ for arg_name, rlimit in RESOURCES: ++ value = getattr(args, arg_name) ++ if value is None: ++ continue ++ try: ++ resource.setrlimit(rlimit, (value, value)) ++ except ValueError as exc: ++ print("%s: failed to set the %s resource limit: %s" ++ % (USAGE_PROGRAM, arg_name.upper(), exc), ++ file=sys.stderr) ++ sys.exit(1) ++ ++ try: ++ os.execv(program, [program] + args.program_args) ++ except Exception as exc: ++ print("%s: failed to execute %s: %s" ++ % (USAGE_PROGRAM, program, exc), ++ file=sys.stderr) ++ sys.exit(1) ++ ++ ++if 
__name__ == "__main__": ++ main() +diff --git a/nova/openstack/common/processutils.py b/nova/openstack/common/processutils.py +index 4ad0a96..4a31171 100644 +--- a/nova/openstack/common/processutils.py ++++ b/nova/openstack/common/processutils.py +@@ -23,6 +23,7 @@ import os + import random + import shlex + import signal ++import sys + + from eventlet.green import subprocess + from eventlet import greenthread +@@ -81,6 +82,38 @@ def _subprocess_setup(): + signal.signal(signal.SIGPIPE, signal.SIG_DFL) + + ++class ProcessLimits(object): ++ """Resource limits on a process. ++ ++ Attributes: ++ ++ * address_space: Address space limit in bytes ++ * number_files: Maximum number of open files. ++ * resident_set_size: Maximum Resident Set Size (RSS) in bytes ++ ++ This object can be used for the *prlimit* parameter of :func:`execute`. ++ """ ++ ++ def __init__(self, **kw): ++ self.address_space = kw.pop('address_space', None) ++ self.number_files = kw.pop('number_files', None) ++ self.resident_set_size = kw.pop('resident_set_size', None) ++ if kw: ++ raise ValueError("invalid limits: %s" ++ % ', '.join(sorted(kw.keys()))) ++ ++ def prlimit_args(self): ++ """Create a list of arguments for the prlimit command line.""" ++ args = [] ++ if self.address_space: ++ args.append('--as=%s' % self.address_space) ++ if self.number_files: ++ args.append('--nofile=%s' % self.number_files) ++ if self.resident_set_size: ++ args.append('--rss=%s' % self.resident_set_size) ++ return args ++ ++ + def execute(*cmd, **kwargs): + """Helper method to shell out and execute a command through subprocess. + +@@ -123,10 +156,17 @@ def execute(*cmd, **kwargs): + `processutils.execute` to track process completion + asynchronously. + :type on_completion: function(:class:`subprocess.Popen`) ++ :param prlimit: Set resource limits on the child process. See ++ below for a detailed description. ++ :type prlimit: :class:`ProcessLimits` + :returns: (stdout, stderr) from process execution + :raises: :class:`UnknownArgumentError` on + receiving unknown arguments + :raises: :class:`ProcessExecutionError` ++ ++ The *prlimit* parameter can be used to set resource limits on the child ++ process. If this parameter is used, the child process will be spawned by a ++ wrapper process which will set limits before spawning the command. + """ + + process_input = kwargs.pop('process_input', None) +@@ -140,6 +180,7 @@ def execute(*cmd, **kwargs): + loglevel = kwargs.pop('loglevel', logging.DEBUG) + on_execute = kwargs.pop('on_execute', None) + on_completion = kwargs.pop('on_completion', None) ++ prlimit = kwargs.pop('prlimit', None) + + if isinstance(check_exit_code, bool): + ignore_exit_code = not check_exit_code +@@ -158,6 +199,14 @@ def execute(*cmd, **kwargs): + cmd = shlex.split(root_helper) + list(cmd) + + cmd = map(str, cmd) ++ ++ if prlimit: ++ args = [sys.executable, '-m', 'nova.openstack.common.prlimit'] ++ args.extend(prlimit.prlimit_args()) ++ args.append('--') ++ args.extend(cmd) ++ cmd = args ++ + sanitized_cmd = strutils.mask_password(' '.join(cmd)) + + while attempts > 0: +diff --git a/nova/tests/openstack_common/__init__.py b/nova/tests/openstack_common/__init__.py +new file mode 100644 +index 0000000..e69de29 +diff --git a/nova/tests/openstack_common/test_processutils.py b/nova/tests/openstack_common/test_processutils.py +new file mode 100644 +index 0000000..4822539 +--- /dev/null ++++ b/nova/tests/openstack_common/test_processutils.py +@@ -0,0 +1,122 @@ ++# Copyright 2011 OpenStack Foundation. ++# All Rights Reserved. 
++# ++# Licensed under the Apache License, Version 2.0 (the "License"); you may ++# not use this file except in compliance with the License. You may obtain ++# a copy of the License at ++# ++# http://www.apache.org/licenses/LICENSE-2.0 ++# ++# Unless required by applicable law or agreed to in writing, software ++# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT ++# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the ++# License for the specific language governing permissions and limitations ++# under the License. ++ ++from __future__ import print_function ++ ++import os ++import resource ++import sys ++ ++from nova.openstack.common import processutils ++from nova import test ++ ++ ++class PrlimitTestCase(test.TestCase): ++ # Simply program that does nothing and returns an exit code 0. ++ # Use Python to be portable. ++ SIMPLE_PROGRAM = [sys.executable, '-c', 'pass'] ++ ++ def soft_limit(self, res, substract, default_limit): ++ # Create a new soft limit for a resource, lower than the current ++ # soft limit. ++ soft_limit, hard_limit = resource.getrlimit(res) ++ if soft_limit < 0: ++ soft_limit = default_limit ++ else: ++ soft_limit -= substract ++ return soft_limit ++ ++ def memory_limit(self, res): ++ # Substract 1 kB just to get a different limit. Don't substract too ++ # much to avoid memory allocation issues. ++ # ++ # Use 1 GB by default. Limit high enough to be able to load shared ++ # libraries. Limit low enough to be work on 32-bit platforms. ++ return self.soft_limit(res, 1024, 1024 ** 3) ++ ++ def limit_address_space(self): ++ max_memory = self.memory_limit(resource.RLIMIT_AS) ++ return processutils.ProcessLimits(address_space=max_memory) ++ ++ def test_simple(self): ++ # Simple test running a program (/bin/true) with no parameter ++ prlimit = self.limit_address_space() ++ stdout, stderr = processutils.execute(*self.SIMPLE_PROGRAM, ++ prlimit=prlimit) ++ self.assertEqual(stdout.rstrip(), '') ++ self.assertEqual(stderr.rstrip(), '') ++ ++ def check_limit(self, prlimit, resource, value): ++ code = ';'.join(('import resource', ++ 'print(resource.getrlimit(resource.%s))' % resource)) ++ args = [sys.executable, '-c', code] ++ stdout, stderr = processutils.execute(*args, prlimit=prlimit) ++ expected = (value, value) ++ self.assertEqual(stdout.rstrip(), str(expected)) ++ ++ def test_address_space(self): ++ prlimit = self.limit_address_space() ++ self.check_limit(prlimit, 'RLIMIT_AS', prlimit.address_space) ++ ++ def test_resident_set_size(self): ++ max_memory = self.memory_limit(resource.RLIMIT_RSS) ++ prlimit = processutils.ProcessLimits(resident_set_size=max_memory) ++ self.check_limit(prlimit, 'RLIMIT_RSS', max_memory) ++ ++ def test_number_files(self): ++ nfiles = self.soft_limit(resource.RLIMIT_NOFILE, 1, 1024) ++ prlimit = processutils.ProcessLimits(number_files=nfiles) ++ self.check_limit(prlimit, 'RLIMIT_NOFILE', nfiles) ++ ++ def test_unsupported_prlimit(self): ++ self.assertRaises(ValueError, processutils.ProcessLimits, xxx=33) ++ ++ def test_execv_error(self): ++ prlimit = self.limit_address_space() ++ args = ['/missing_path/dont_exist/program'] ++ try: ++ processutils.execute(*args, prlimit=prlimit) ++ except processutils.ProcessExecutionError as exc: ++ self.assertEqual(exc.exit_code, 1) ++ self.assertEqual(exc.stdout, '') ++ expected = ('%s -m nova.openstack.common.prlimit: ' ++ 'failed to execute /missing_path/dont_exist/program: ' ++ % os.path.basename(sys.executable)) ++ self.assertIn(expected, exc.stderr) ++ else: ++ 
self.fail("ProcessExecutionError not raised") ++ ++ def test_setrlimit_error(self): ++ prlimit = self.limit_address_space() ++ ++ # trying to set a limit higher than the current hard limit ++ # with setrlimit() should fail. ++ higher_limit = prlimit.address_space + 1024 ++ ++ args = [sys.executable, '-m', 'nova.openstack.common.prlimit', ++ '--as=%s' % higher_limit, ++ '--'] ++ args.extend(self.SIMPLE_PROGRAM) ++ try: ++ processutils.execute(*args, prlimit=prlimit) ++ except processutils.ProcessExecutionError as exc: ++ self.assertEqual(exc.exit_code, 1) ++ self.assertEqual(exc.stdout, '') ++ expected = ('%s -m nova.openstack.common.prlimit: ' ++ 'failed to set the AS resource limit: ' ++ % os.path.basename(sys.executable)) ++ self.assertIn(expected, exc.stderr) ++ else: ++ self.fail("ProcessExecutionError not raised") diff -Nru nova-2014.1.3/debian/patches/CVE-2015-5162-2.patch nova-2014.1.5/debian/patches/CVE-2015-5162-2.patch --- nova-2014.1.3/debian/patches/CVE-2015-5162-2.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-5162-2.patch 2017-09-13 12:33:08.000000000 +0000 @@ -0,0 +1,201 @@ +From b64a7d38673b48c2c12f9fcfd249a3f57c02e8f4 Mon Sep 17 00:00:00 2001 +From: Victor Stinner +Date: Fri, 14 Oct 2016 16:38:36 +0200 +Subject: [PATCH] processutils: add support for missing process limits + +The original commit adding support for process limits only wired +up address space, max files and resident set size limits. This +is not sufficient to enable nova to protect qemu-img commands +against malicious images. + +This commit adds support for the remaining limits supported +by python: core file size, cpu time, data size, file size, +locked memory size, max processes and stack size. + +Upstream-Liberty: https://review.openstack.org/#/c/332222/ +Resolves: rhbz#1382549 + +Related-bug: #1449062 +Change-Id: I164c4b35e1357a0f80ed7fe00a7ae8f49df92e31 +(cherry picked from commit 8af826953d1ad2cab2ecf360e0c794de70a367c3) +(cherry picked from commit 5f417f8e9656e097070036daced26d8b0f3728c3) +(cherry picked from commit d65d931da8490576f0abf30f124ca3a032b481c7) +Reviewed-on: https://code.engineering.redhat.com/gerrit/87170 +Tested-by: RHOS Jenkins +Reviewed-by: Kashyap Chamarthy +Tested-by: Victor Stinner +--- + nova/openstack/common/prlimit.py | 21 +++++++++++++ + nova/openstack/common/processutils.py | 38 +++++++++++++++++------- + nova/tests/openstack_common/test_processutils.py | 37 ++++++++++++++++++++++- + 3 files changed, 85 insertions(+), 11 deletions(-) + +diff --git a/nova/openstack/common/prlimit.py b/nova/openstack/common/prlimit.py +index fa1ef68..a3dc8a7 100644 +--- a/nova/openstack/common/prlimit.py ++++ b/nova/openstack/common/prlimit.py +@@ -26,8 +26,15 @@ USAGE_PROGRAM = ('%s -m nova.openstack.common.prlimit' + RESOURCES = ( + # argparse argument => resource + ('as', resource.RLIMIT_AS), ++ ('core', resource.RLIMIT_CORE), ++ ('cpu', resource.RLIMIT_CPU), ++ ('data', resource.RLIMIT_DATA), ++ ('fsize', resource.RLIMIT_FSIZE), ++ ('memlock', resource.RLIMIT_MEMLOCK), + ('nofile', resource.RLIMIT_NOFILE), ++ ('nproc', resource.RLIMIT_NPROC), + ('rss', resource.RLIMIT_RSS), ++ ('stack', resource.RLIMIT_STACK), + ) + + +@@ -35,10 +42,24 @@ def parse_args(): + parser = argparse.ArgumentParser(description='prlimit', prog=USAGE_PROGRAM) + parser.add_argument('--as', type=int, + help='Address space limit in bytes') ++ parser.add_argument('--core', type=int, ++ help='Core file size limit in bytes') ++ parser.add_argument('--cpu', type=int, ++ help='CPU time 
limit in seconds') ++ parser.add_argument('--data', type=int, ++ help='Data size limit in bytes') ++ parser.add_argument('--fsize', type=int, ++ help='File size limit in bytes') ++ parser.add_argument('--memlock', type=int, ++ help='Locked memory limit in bytes') + parser.add_argument('--nofile', type=int, + help='Maximum number of open files') ++ parser.add_argument('--nproc', type=int, ++ help='Maximum number of processes') + parser.add_argument('--rss', type=int, + help='Maximum Resident Set Size (RSS) in bytes') ++ parser.add_argument('--stack', type=int, ++ help='Stack size limit in bytes') + parser.add_argument('program', + help='Program (absolute path)') + parser.add_argument('program_args', metavar="arg", nargs='...', +diff --git a/nova/openstack/common/processutils.py b/nova/openstack/common/processutils.py +index 4a31171..17508d7 100644 +--- a/nova/openstack/common/processutils.py ++++ b/nova/openstack/common/processutils.py +@@ -88,16 +88,36 @@ class ProcessLimits(object): + Attributes: + + * address_space: Address space limit in bytes +- * number_files: Maximum number of open files. ++ * core_file_size: Core file size limit in bytes ++ * cpu_time: CPU time limit in seconds ++ * data_size: Data size limit in bytes ++ * file_size: File size limit in bytes ++ * memory_locked: Locked memory limit in bytes ++ * number_files: Maximum number of open files ++ * number_processes: Maximum number of processes + * resident_set_size: Maximum Resident Set Size (RSS) in bytes ++ * stack_size: Stack size limit in bytes + + This object can be used for the *prlimit* parameter of :func:`execute`. + """ + ++ _LIMITS = { ++ "address_space": "--as", ++ "core_file_size": "--core", ++ "cpu_time": "--cpu", ++ "data_size": "--data", ++ "file_size": "--fsize", ++ "memory_locked": "--memlock", ++ "number_files": "--nofile", ++ "number_processes": "--nproc", ++ "resident_set_size": "--rss", ++ "stack_size": "--stack", ++ } ++ + def __init__(self, **kw): +- self.address_space = kw.pop('address_space', None) +- self.number_files = kw.pop('number_files', None) +- self.resident_set_size = kw.pop('resident_set_size', None) ++ for limit in self._LIMITS.keys(): ++ setattr(self, limit, kw.pop(limit, None)) ++ + if kw: + raise ValueError("invalid limits: %s" + % ', '.join(sorted(kw.keys()))) +@@ -105,12 +125,10 @@ class ProcessLimits(object): + def prlimit_args(self): + """Create a list of arguments for the prlimit command line.""" + args = [] +- if self.address_space: +- args.append('--as=%s' % self.address_space) +- if self.number_files: +- args.append('--nofile=%s' % self.number_files) +- if self.resident_set_size: +- args.append('--rss=%s' % self.resident_set_size) ++ for limit in self._LIMITS.keys(): ++ val = getattr(self, limit) ++ if val is not None: ++ args.append("%s=%s" % (self._LIMITS[limit], val)) + return args + + +diff --git a/nova/tests/openstack_common/test_processutils.py b/nova/tests/openstack_common/test_processutils.py +index 4822539..a10f68c 100644 +--- a/nova/tests/openstack_common/test_processutils.py ++++ b/nova/tests/openstack_common/test_processutils.py +@@ -32,7 +32,7 @@ class PrlimitTestCase(test.TestCase): + # Create a new soft limit for a resource, lower than the current + # soft limit. 
+ soft_limit, hard_limit = resource.getrlimit(res) +- if soft_limit < 0: ++ if soft_limit <= 0: + soft_limit = default_limit + else: + soft_limit -= substract +@@ -70,6 +70,31 @@ class PrlimitTestCase(test.TestCase): + prlimit = self.limit_address_space() + self.check_limit(prlimit, 'RLIMIT_AS', prlimit.address_space) + ++ def test_core_size(self): ++ size = self.soft_limit(resource.RLIMIT_CORE, 1, 1024) ++ prlimit = processutils.ProcessLimits(core_file_size=size) ++ self.check_limit(prlimit, 'RLIMIT_CORE', prlimit.core_file_size) ++ ++ def test_cpu_time(self): ++ time = self.soft_limit(resource.RLIMIT_CPU, 1, 1024) ++ prlimit = processutils.ProcessLimits(cpu_time=time) ++ self.check_limit(prlimit, 'RLIMIT_CPU', prlimit.cpu_time) ++ ++ def test_data_size(self): ++ max_memory = self.memory_limit(resource.RLIMIT_DATA) ++ prlimit = processutils.ProcessLimits(data_size=max_memory) ++ self.check_limit(prlimit, 'RLIMIT_DATA', max_memory) ++ ++ def test_file_size(self): ++ size = self.soft_limit(resource.RLIMIT_FSIZE, 1, 1024) ++ prlimit = processutils.ProcessLimits(file_size=size) ++ self.check_limit(prlimit, 'RLIMIT_FSIZE', prlimit.file_size) ++ ++ def test_memory_locked(self): ++ max_memory = self.memory_limit(resource.RLIMIT_MEMLOCK) ++ prlimit = processutils.ProcessLimits(memory_locked=max_memory) ++ self.check_limit(prlimit, 'RLIMIT_MEMLOCK', max_memory) ++ + def test_resident_set_size(self): + max_memory = self.memory_limit(resource.RLIMIT_RSS) + prlimit = processutils.ProcessLimits(resident_set_size=max_memory) +@@ -80,6 +105,16 @@ class PrlimitTestCase(test.TestCase): + prlimit = processutils.ProcessLimits(number_files=nfiles) + self.check_limit(prlimit, 'RLIMIT_NOFILE', nfiles) + ++ def test_number_processes(self): ++ nprocs = self.soft_limit(resource.RLIMIT_NPROC, 1, 65535) ++ prlimit = processutils.ProcessLimits(number_processes=nprocs) ++ self.check_limit(prlimit, 'RLIMIT_NPROC', nprocs) ++ ++ def test_stack_size(self): ++ max_memory = self.memory_limit(resource.RLIMIT_STACK) ++ prlimit = processutils.ProcessLimits(stack_size=max_memory) ++ self.check_limit(prlimit, 'RLIMIT_STACK', max_memory) ++ + def test_unsupported_prlimit(self): + self.assertRaises(ValueError, processutils.ProcessLimits, xxx=33) + diff -Nru nova-2014.1.3/debian/patches/CVE-2015-5162-3.patch nova-2014.1.5/debian/patches/CVE-2015-5162-3.patch --- nova-2014.1.3/debian/patches/CVE-2015-5162-3.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-5162-3.patch 2017-09-13 15:46:01.000000000 +0000 @@ -0,0 +1,245 @@ +Backport of: + +From 6bc37dcceca823998068167b49aec6def3112397 Mon Sep 17 00:00:00 2001 +From: "Daniel P. Berrange" +Date: Mon, 18 Apr 2016 16:32:19 +0000 +Subject: [PATCH] virt: set address space & CPU time limits when running + qemu-img + +This uses the new 'prlimit' parameter for oslo.concurrency execute +method, to set an address space limit of 1GB and CPU time limit +of 2 seconds, when running qemu-img. + +This is a re-implementation of the previously reverted commit + +commit da217205f53f9a38a573fb151898fbbeae41021d +Author: Tristan Cacqueray +Date: Wed Aug 5 17:17:04 2015 +0000 + + virt: Use preexec_fn to ulimit qemu-img info call + +NOTE (kchamart) [stable/liberty]: Add a check for the presence of +'ProcessLimits' attribute (which is only present in +oslo.concurrency>=2.6.1; and a conditional check for 'prlimit' parameter +in qemu_img_info() method. + +Upstream discussion[1][2] that led to merging this patch to +stable/liberty branch. 
+ +[1] http://lists.openstack.org/pipermail/openstack-dev/2016-September/104091.html +[2] http://lists.openstack.org/pipermail/openstack-dev/2016-September/104303.html + +Closes-Bug: #1449062 +Change-Id: I135b5242af1bfdcb0ea09a6fcda21fc03a6fbe7d +(cherry picked from commit 068d851561addfefb2b812d91dc2011077cb6e1d) +--- + nova/tests/unit/virt/libvirt/test_driver.py | 7 ++++-- + nova/tests/unit/virt/libvirt/test_utils.py | 27 ++++++++++++++-------- + nova/virt/images.py | 16 ++++++++++++- + .../apply-limits-to-qemu-img-8813f7a333ebdf69.yaml | 8 +++++++ + 4 files changed, 46 insertions(+), 12 deletions(-) + create mode 100644 releasenotes/notes/apply-limits-to-qemu-img-8813f7a333ebdf69.yaml + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 11:45:23.529581583 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 11:45:23.529581583 -0400 +@@ -4476,7 +4476,8 @@ class LibvirtConnTestCase(test.TestCase) + + self.mox.StubOutWithMock(utils, "execute") + utils.execute('env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', +- '/test/disk.local').AndReturn((ret, '')) ++ '/test/disk.local', prlimit=images.QEMU_IMG_LIMITS, ++ ).AndReturn((ret, '')) + + self.mox.ReplayAll() + conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) +@@ -4588,7 +4589,8 @@ class LibvirtConnTestCase(test.TestCase) + + self.mox.StubOutWithMock(utils, "execute") + utils.execute('env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', +- '/test/disk.local').AndReturn((ret, '')) ++ '/test/disk.local', prlimit=images.QEMU_IMG_LIMITS, ++ ).AndReturn((ret, '')) + + self.mox.ReplayAll() + conn_info = {'driver_volume_type': 'fake'} +@@ -8283,7 +8285,8 @@ class LibvirtUtilsTestCase(test.TestCase + rval = ('', '') + os.path.exists('/some/path').AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', '/some/path').AndReturn(rval) ++ 'qemu-img', 'info', '/some/path', ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn(rval) + utils.execute('qemu-img', 'create', '-f', 'qcow2', + '-o', 'backing_file=/some/path', + '/the/new/cow') +@@ -8320,7 +8323,7 @@ class LibvirtUtilsTestCase(test.TestCase + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists('/some/path').AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', +- '/some/path').AndReturn(('''image: 00000001 ++ '/some/path', prlimit=images.QEMU_IMG_LIMITS).AndReturn(('''image: 00000001 + file format: raw + virtual size: 4.4M (4592640 bytes) + disk size: 4.4M''', '')) +Index: nova-2014.1.5/nova/virt/images.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/images.py 2017-09-13 11:45:23.529581583 -0400 ++++ nova-2014.1.5/nova/virt/images.py 2017-09-13 11:45:23.529581583 -0400 +@@ -29,6 +29,8 @@ from nova.openstack.common import fileut + from nova.openstack.common.gettextutils import _ + from nova.openstack.common import imageutils + from nova.openstack.common import log as logging ++from nova.openstack.common import units ++from nova.openstack.common import processutils + from nova import utils + + LOG = logging.getLogger(__name__) +@@ -41,6 +43,16 @@ image_opts = [ + + CONF = cfg.CONF + CONF.register_opts(image_opts) ++QEMU_IMG_LIMITS = None ++ ++try: ++ QEMU_IMG_LIMITS = processutils.ProcessLimits( ++ cpu_time=2, ++ address_space=1 * units.Gi) ++except Exception: ++ LOG.error('Please upgrade to oslo.concurrency 
version ' ++ '2.6.1 -- this version has fixes for the ' ++ 'vulnerability CVE-2015-5162.') + + + def qemu_img_info(path): +@@ -50,8 +62,11 @@ def qemu_img_info(path): + if not os.path.exists(path) and CONF.libvirt.images_type != 'rbd': + return imageutils.QemuImgInfo() + +- out, err = utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path) ++ cmd = ('env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', path) ++ if QEMU_IMG_LIMITS is not None: ++ out, err = utils.execute(*cmd, prlimit=QEMU_IMG_LIMITS) ++ else: ++ out, err = utils.execute(*cmd) + return imageutils.QemuImgInfo(out) + + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_image_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_image_utils.py 2017-09-13 11:45:23.529581583 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_image_utils.py 2017-09-13 11:45:23.529581583 -0400 +@@ -50,7 +50,8 @@ disk size: 96K + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((output, '')) + self.mox.ReplayAll() + d_type = libvirt_utils.get_disk_type(path) + self.assertEqual(f, d_type) +@@ -71,7 +72,8 @@ disk size: 96K + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((output, '')) + self.mox.ReplayAll() + d_backing = libvirt_utils.get_disk_backing_file(path) + self.assertIsNone(d_backing) +@@ -97,7 +99,8 @@ disk size: 96K + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((output, '')) + self.mox.ReplayAll() + d_size = libvirt_utils.get_disk_size(path) + self.assertEqual(i, d_size) +@@ -111,7 +114,8 @@ disk size: 96K + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((output, '')) + self.mox.ReplayAll() + d_size = libvirt_utils.get_disk_size(path) + self.assertEqual(i, d_size) +@@ -130,7 +134,8 @@ blah BLAH: bb + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((example_output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((example_output, '')) + self.mox.ReplayAll() + image_info = images.qemu_img_info(path) + self.assertEqual('disk.config', image_info.image) +@@ -152,7 +157,8 @@ backing file: /var/lib/nova/a328c7998805 + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((example_output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((example_output, '')) + self.mox.ReplayAll() + image_info = images.qemu_img_info(path) + self.assertEqual('disk.config', image_info.image) +@@ -179,7 +185,8 @@ backing file: /var/lib/nova/a328c7998805 + 
self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((example_output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((example_output, '')) + self.mox.ReplayAll() + image_info = images.qemu_img_info(path) + self.assertEqual('disk.config', image_info.image) +@@ -207,7 +214,8 @@ junk stuff: bbb + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((example_output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((example_output, '')) + self.mox.ReplayAll() + image_info = images.qemu_img_info(path) + self.assertEqual('disk.config', image_info.image) +@@ -231,7 +239,8 @@ ID TAG VM SIZE + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((example_output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((example_output, '')) + self.mox.ReplayAll() + image_info = images.qemu_img_info(path) + self.assertEqual('disk.config', image_info.image) +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt_utils.py 2017-09-13 11:45:23.529581583 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_utils.py 2017-09-13 11:45:54.838008056 -0400 +@@ -22,6 +22,7 @@ from oslo.config import cfg + from nova.openstack.common import processutils + from nova import test + from nova import utils ++from nova.virt import images + from nova.virt.libvirt import utils as libvirt_utils + + CONF = cfg.CONF +@@ -41,7 +42,8 @@ blah BLAH: bb + self.mox.StubOutWithMock(utils, 'execute') + os.path.exists(path).AndReturn(True) + utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path).AndReturn((example_output, '')) ++ 'qemu-img', 'info', path, ++ prlimit=images.QEMU_IMG_LIMITS).AndReturn((example_output, '')) + self.mox.ReplayAll() + disk_type = libvirt_utils.get_disk_type(path) + self.assertEqual(disk_type, 'raw') diff -Nru nova-2014.1.3/debian/patches/CVE-2015-7548-1.patch nova-2014.1.5/debian/patches/CVE-2015-7548-1.patch --- nova-2014.1.3/debian/patches/CVE-2015-7548-1.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-7548-1.patch 2017-09-13 17:07:44.000000000 +0000 @@ -0,0 +1,277 @@ +Backport of: + +From 2cd7f611200021c1089f3258a16014be18eb7da9 Mon Sep 17 00:00:00 2001 +From: Matthew Booth +Date: Wed, 9 Dec 2015 15:36:32 +0000 +Subject: [PATCH] Fix format detection in libvirt snapshot + +The libvirt driver was using automatic format detection during +snapshot for disks stored on the local filesystem. This opened an +exploit if nova was configured to use local file storage, and +additionally to store those files in raw format by specifying +use_cow_images = False in nova.conf. An authenticated user could write +a qcow2 header to their guest image with a backing file on the host. +libvirt.utils.get_disk_type() would then misdetect the type of this +image as qcow2 and pass this to the Qcow2 image backend, whose +snapshot_extract method interprets the image as qcow2 and writes the +backing file to glance. The authenticated user can then download the +host file from glance. 
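Both the exploit and the fix come down to whether qemu-img is allowed to probe the image header. A minimal standalone sketch of the two call shapes (helper names hypothetical, not nova's API; relies only on the documented qemu-img info options -f and --output=json):

    import json
    import subprocess

    def probed_info(path):
        # UNSAFE for untrusted raw disks: without -f, qemu-img sniffs the
        # header, so a qcow2 header written *inside* a raw guest disk is
        # honoured, attacker-chosen backing file and all.
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', path])
        return json.loads(out)

    def pinned_info(path, fmt):
        # Safe: the caller states the format, so the header bytes are
        # treated as guest data and any embedded backing-file path is
        # never followed.
        out = subprocess.check_output(
            ['qemu-img', 'info', '--output=json', '-f', fmt, path])
        return json.loads(out)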
+ +This patch makes 2 principal changes. libvirt.utils.get_disk_type, +which ought to be removed entirely as soon as possible, is updated to +no longer do format detection if the format can't be determined from +the path. Its name is changed to get_disk_type_from_path to reflect +its actual function. + +libvirt.utils.find_disk is updated to return both the path and format +of the root disk, rather than just the path. This is the most reliable +source of this information, as it reflects the actual format in use. +The previous format detection function of get_disk_type is replaced by +the format taken from libvirt. + +We replace a call to get_disk_type in _rebase_with_qemu_img with an +explicit call to qemu_img_info, as the other behaviour of +get_disk_type was not relevant in this context. qemu_img_info is safe +from the backing file exploit when called on a file known to be a +qcow2 image. As the file in this context is a volume snapshot, this is +a safe use. + +(cherry picked from commit f228834204fd8bdcf62f67e00c49edf63662a7dd) + + Conflicts: + nova/tests/virt/libvirt/fake_libvirt_utils.py + nova/tests/virt/libvirt/test_image_utils.py + nova/virt/libvirt/driver.py + +Resolves: rhbz 1295730 +Resolves: rhbz 1295729 +Partial-Bug: #1524274 +Change-Id: I94c1c0d26215c061f71c3f95e1a6bf3a58fa19ea +Reviewed-on: https://code.engineering.redhat.com/gerrit/64913 +Tested-by: RHOS Jenkins +Reviewed-by: Matthew Booth +--- + nova/tests/virt/libvirt/fake_libvirt_utils.py | 11 ++++++++-- + nova/tests/virt/libvirt/test_image_utils.py | 29 ++++++--------------------- + nova/tests/virt/libvirt/test_libvirt_utils.py | 19 +++--------------- + nova/virt/libvirt/driver.py | 25 +++++++++++++++++------ + nova/virt/libvirt/utils.py | 26 +++++++++++++++++++----- + 5 files changed, 58 insertions(+), 52 deletions(-) + +Index: nova-2014.1.5/nova/tests/virt/libvirt/fake_libvirt_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/fake_libvirt_utils.py 2017-09-13 13:05:46.310514860 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/fake_libvirt_utils.py 2017-09-13 13:05:46.306514807 -0400 +@@ -94,7 +94,9 @@ def get_disk_backing_file(path): + return disk_backing_files.get(path, None) + + +-def get_disk_type(path): ++def get_disk_type_from_path(path): ++ if disk_type in ('raw', 'qcow2'): ++ return None + return disk_type + + +@@ -156,7 +158,12 @@ def file_open(path, mode=None): + + + def find_disk(virt_dom): +- return "filename" ++ if disk_type == 'lvm': ++ return ("/dev/nova-vg/lv", "raw") ++ elif disk_type in ['raw', 'qcow2']: ++ return ("filename", disk_type) ++ else: ++ return ("unknown_type_disk", None) + + + def load_file(path): +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_image_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_image_utils.py 2017-09-13 13:05:46.310514860 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_image_utils.py 2017-09-13 13:06:58.099472963 -0400 +@@ -22,40 +22,21 @@ from nova.virt.libvirt import utils as l + + + class ImageUtilsTestCase(test.NoDBTestCase): +- def test_disk_type(self): ++ def test_disk_type_from_path(self): + # Seems like lvm detection + # if its in /dev ?? 
+ for p in ['/dev/b', '/dev/blah/blah']: +- d_type = libvirt_utils.get_disk_type(p) ++ d_type = libvirt_utils.get_disk_type_from_path(p) + self.assertEqual('lvm', d_type) + + # Try rbd detection +- d_type = libvirt_utils.get_disk_type('rbd:pool/instance') ++ d_type = libvirt_utils.get_disk_type_from_path('rbd:pool/instance') + self.assertEqual('rbd', d_type) + + # Try the other types +- template_output = """image: %(path)s +-file format: %(format)s +-virtual size: 64M (67108864 bytes) +-cluster_size: 65536 +-disk size: 96K +-""" + path = '/myhome/disk.config' +- for f in ['raw', 'qcow2']: +- output = template_output % ({ +- 'format': f, +- 'path': path, +- }) +- self.mox.StubOutWithMock(os.path, 'exists') +- self.mox.StubOutWithMock(utils, 'execute') +- os.path.exists(path).AndReturn(True) +- utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path, +- prlimit=images.QEMU_IMG_LIMITS).AndReturn((output, '')) +- self.mox.ReplayAll() +- d_type = libvirt_utils.get_disk_type(path) +- self.assertEqual(f, d_type) +- self.mox.UnsetStubs() ++ d_type = libvirt_utils.get_disk_type_from_path(path) ++ self.assertIsNone(d_type) + + def test_disk_backing(self): + path = '/myhome/disk.config' +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt_utils.py 2017-09-13 13:05:46.310514860 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_utils.py 2017-09-13 13:07:24.239821827 -0400 +@@ -29,24 +29,10 @@ CONF = cfg.CONF + + + class LibvirtUtilsTestCase(test.NoDBTestCase): +- def test_get_disk_type(self): ++ def test_get_disk_type_from_path(self): + path = "disk.config" +- example_output = """image: disk.config +-file format: raw +-virtual size: 64M (67108864 bytes) +-cluster_size: 65536 +-disk size: 96K +-blah BLAH: bb +-""" +- self.mox.StubOutWithMock(os.path, 'exists') +- self.mox.StubOutWithMock(utils, 'execute') +- os.path.exists(path).AndReturn(True) +- utils.execute('env', 'LC_ALL=C', 'LANG=C', +- 'qemu-img', 'info', path, +- prlimit=images.QEMU_IMG_LIMITS).AndReturn((example_output, '')) +- self.mox.ReplayAll() +- disk_type = libvirt_utils.get_disk_type(path) +- self.assertEqual(disk_type, 'raw') ++ disk_type = libvirt_utils.get_disk_type_from_path(path) ++ self.assertIsNone(disk_type) + + def test_logical_volume_size(self): + executes = [] +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-13 13:05:46.310514860 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-13 13:05:46.306514807 -0400 +@@ -1505,10 +1505,23 @@ class LibvirtDriver(driver.ComputeDriver + snapshot_image_service, snapshot_image_id = _image_service + snapshot = snapshot_image_service.show(context, snapshot_image_id) + +- disk_path = libvirt_utils.find_disk(virt_dom) +- source_format = libvirt_utils.get_disk_type(disk_path) ++ # source_format is an on-disk format ++ # source_type is a backend type ++ disk_path, source_format = libvirt_utils.find_disk(virt_dom) ++ source_type = libvirt_utils.get_disk_type_from_path(disk_path) ++ ++ # We won't have source_type for raw or qcow2 disks, because we can't ++ # determine that from the path. We should have it from the libvirt ++ # xml, though. 
++ if source_type is None: ++ source_type = source_format ++ # For lxc instances we won't have it either from libvirt xml ++ # (because we just gave libvirt the mounted filesystem), or the path, ++ # so source_type is still going to be None. In this case, ++ # snapshot_backend is going to default to CONF.libvirt.images_type ++ # below, which is still safe. + +- image_format = CONF.libvirt.snapshot_image_format or source_format ++ image_format = CONF.libvirt.snapshot_image_format or source_type + + # NOTE(bfilippov): save lvm and rbd as raw + if image_format == 'lvm' or image_format == 'rbd': +@@ -1530,7 +1543,7 @@ class LibvirtDriver(driver.ComputeDriver + if self.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION, + MIN_QEMU_LIVESNAPSHOT_VERSION, + REQ_HYPERVISOR_LIVESNAPSHOT) \ +- and not source_format == "lvm" and not source_format == 'rbd': ++ and not source_type == "lvm" and not source_format == 'rbd': + live_snapshot = True + # Abort is an idempotent operation, so make sure any block + # jobs which may have failed are ended. This operation also +@@ -1561,7 +1574,7 @@ class LibvirtDriver(driver.ComputeDriver + virt_dom.managedSave(0) + + snapshot_backend = self.image_backend.snapshot(disk_path, +- image_type=source_format) ++ image_type=source_type) + + if live_snapshot: + LOG.info(_("Beginning live snapshot process"), +Index: nova-2014.1.5/nova/virt/libvirt/utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/utils.py 2017-09-13 13:05:46.310514860 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/utils.py 2017-09-13 13:05:46.310514860 -0400 +@@ -604,13 +604,20 @@ def find_disk(virt_dom): + """ + xml_desc = virt_dom.XMLDesc(0) + domain = etree.fromstring(xml_desc) ++ driver = None + if CONF.libvirt.virt_type == 'lxc': +- source = domain.find('devices/filesystem/source') ++ filesystem = domain.find('devices/filesystem') ++ driver = filesystem.find('driver') ++ ++ source = filesystem.find('source') + disk_path = source.get('dir') + disk_path = disk_path[0:disk_path.rfind('rootfs')] + disk_path = os.path.join(disk_path, 'disk') + else: +- source = domain.find('devices/disk/source') ++ disk = domain.find('devices/disk') ++ driver = disk.find('driver') ++ ++ source = disk.find('source') + disk_path = source.get('file') or source.get('dev') + if not disk_path and CONF.libvirt.images_type == 'rbd': + disk_path = source.get('name') +@@ -621,17 +628,26 @@ def find_disk(virt_dom): + raise RuntimeError(_("Can't retrieve root device path " + "from instance libvirt configuration")) + +- return disk_path ++ if driver is not None: ++ format = driver.get('type') ++ # This is a legacy quirk of libvirt/xen. Everything else should ++ # report the on-disk format in type. 
++ if format == 'aio': ++ format = 'raw' ++ else: ++ format = None ++ return (disk_path, format) + + +-def get_disk_type(path): ++def get_disk_type_from_path(path): + """Retrieve disk type (raw, qcow2, lvm) for given file.""" + if path.startswith('/dev'): + return 'lvm' + elif path.startswith('rbd:'): + return 'rbd' + +- return images.qemu_img_info(path).file_format ++ # We can't reliably determine the type from this path ++ return None + + + def get_fs_info(path): diff -Nru nova-2014.1.3/debian/patches/CVE-2015-7548-2.patch nova-2014.1.5/debian/patches/CVE-2015-7548-2.patch --- nova-2014.1.3/debian/patches/CVE-2015-7548-2.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-7548-2.patch 2017-09-13 17:09:42.000000000 +0000 @@ -0,0 +1,170 @@ +Backport of: + +From 26138c7a1e5c14b6084a83d563d4aa8883843726 Mon Sep 17 00:00:00 2001 +From: Matthew Booth +Date: Thu, 10 Dec 2015 16:34:19 +0000 +Subject: [PATCH] Fix format conversion in libvirt snapshot + +The libvirt driver was calling images.convert_image during snapshot to +convert snapshots to the intended output format. However, this +function does not take the input format as an argument, meaning it +implicitly does format detection. This opened an exploit for setups +using raw storage on the backend, including raw on filesystem, LVM, +and RBD (Ceph). An authenticated user could write a qcow2 header to +their instance's disk which specified an arbitrary backing file on the +host. When convert_image ran during snapshot, this would then write +the contents of the backing file to glance, which is then available to +the user. If the setup uses an LVM backend this conversion runs as +root, meaning the user can exfiltrate any file on the host, including +raw disks. + +This change adds an input format to convert_image. 
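Reduced to its essentials, the hardened conversion path simply refuses to let qemu-img guess the input format. A self-contained sketch (plain subprocess in place of nova's utils.execute; rootwrap handling omitted):

    import subprocess

    def convert_image(source, dest, in_format, out_format):
        # An explicit input format is mandatory: on raw backends the guest
        # controls the header bytes, so format detection is
        # attacker-influenced.
        if in_format is None:
            raise RuntimeError('convert_image without an input format '
                               'is a security risk')
        subprocess.check_call(['qemu-img', 'convert', '-f', in_format,
                               '-O', out_format, source, dest])

    # e.g. convert_image('/var/lib/nova/disk', 'snap.qcow2', 'raw', 'qcow2')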
+ +(cherry picked from commit 6e0b5d760afd86d439aaf6f34d6f031afdaf208c) + + Conflicts: + nova/tests/virt/libvirt/test_libvirt.py + nova/virt/libvirt/imagebackend.py + +Resolves: rhbz 1295729 +Resolves: rhbz 1295730 +Partial-Bug: #1524274 +Change-Id: If73e73718ecd5db262ed9904091024238f98dbc0 +Reviewed-on: https://code.engineering.redhat.com/gerrit/64914 +Tested-by: RHOS Jenkins +Reviewed-by: Matthew Booth +--- + nova/tests/virt/libvirt/test_libvirt.py | 7 ++++--- + nova/virt/images.py | 26 ++++++++++++++++++++++++-- + nova/virt/libvirt/imagebackend.py | 19 ++++++++++++++----- + 3 files changed, 42 insertions(+), 10 deletions(-) + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:09:38.757617032 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:09:38.753616979 -0400 +@@ -2829,7 +2829,7 @@ class LibvirtConnTestCase(test.TestCase) + libvirt_driver.utils.execute = self.fake_execute + self.stubs.Set(libvirt_driver.libvirt_utils, 'disk_type', 'raw') + +- def convert_image(source, dest, out_format): ++ def convert_image(source, dest, in_format, out_format): + libvirt_driver.libvirt_utils.files[dest] = '' + + self.stubs.Set(images, 'convert_image', convert_image) +@@ -2882,7 +2882,7 @@ class LibvirtConnTestCase(test.TestCase) + libvirt_driver.utils.execute = self.fake_execute + self.stubs.Set(libvirt_driver.libvirt_utils, 'disk_type', 'raw') + +- def convert_image(source, dest, out_format): ++ def convert_image(source, dest, in_format, out_format): + libvirt_driver.libvirt_utils.files[dest] = '' + + self.stubs.Set(images, 'convert_image', convert_image) +@@ -8531,7 +8531,8 @@ disk size: 4.4M''', '')) + target = 't.qcow2' + self.executes = [] + expected_commands = [('qemu-img', 'convert', '-O', 'raw', +- 't.qcow2.part', 't.qcow2.converted'), ++ 't.qcow2.part', 't.qcow2.converted', ++ '-f', 'qcow2'), + ('rm', 't.qcow2.part'), + ('mv', 't.qcow2.converted', 't.qcow2')] + images.fetch_to_raw(context, image_id, target, user_id, project_id, +Index: nova-2014.1.5/nova/virt/images.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/images.py 2017-09-13 13:09:38.757617032 -0400 ++++ nova-2014.1.5/nova/virt/images.py 2017-09-13 13:09:38.757617032 -0400 +@@ -70,9 +70,31 @@ def qemu_img_info(path): + return imageutils.QemuImgInfo(out) + + +-def convert_image(source, dest, out_format, run_as_root=False): ++def convert_image(source, dest, in_format, out_format, run_as_root=False): + """Convert image to other format.""" ++ if in_format is None: ++ raise RuntimeError("convert_image without input format is a security" ++ "risk") ++ _convert_image(source, dest, in_format, out_format, run_as_root) ++ ++ ++def convert_image_unsafe(source, dest, out_format, run_as_root=False): ++ """Convert image to other format, doing unsafe automatic input format ++ detection. Do not call this function. ++ """ ++ ++ # NOTE: there is only 1 caller of this function: ++ # imagebackend.Lvm.create_image. It is not easy to fix that without a ++ # larger refactor, so for the moment it has been manually audited and ++ # allowed to continue. Remove this function when Lvm.create_image has ++ # been fixed. 
++ _convert_image(source, dest, None, out_format, run_as_root) ++ ++ ++def _convert_image(source, dest, in_format, out_format, run_as_root): + cmd = ('qemu-img', 'convert', '-O', out_format, source, dest) ++ if in_format is not None: ++ cmd = cmd + ('-f', in_format) + utils.execute(*cmd, run_as_root=run_as_root) + + +@@ -128,7 +150,7 @@ def fetch_to_raw(context, image_href, pa + staged = "%s.converted" % path + LOG.debug("%s was %s, converting to raw" % (image_href, fmt)) + with fileutils.remove_path_on_error(staged): +- convert_image(path_tmp, staged, 'raw') ++ convert_image(path_tmp, staged, fmt, 'raw') + os.unlink(path_tmp) + + data = qemu_img_info(staged) +Index: nova-2014.1.5/nova/virt/libvirt/imagebackend.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/imagebackend.py 2017-09-13 13:09:38.757617032 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/imagebackend.py 2017-09-13 13:09:38.757617032 -0400 +@@ -356,7 +356,7 @@ class Raw(Image): + self.correct_format() + + def snapshot_extract(self, target, out_format): +- images.convert_image(self.path, target, out_format) ++ images.convert_image(self.path, target, self.driver_format, out_format) + + + class Qcow2(Image): +@@ -465,7 +465,16 @@ class Lvm(Image): + size = size if resize else base_size + libvirt_utils.create_lvm_image(self.vg, self.lv, + size, sparse=self.sparse) +- images.convert_image(base, self.path, 'raw', run_as_root=True) ++ # NOTE: by calling convert_image_unsafe here we're ++ # telling qemu-img convert to do format detection on the input, ++ # because we don't know what the format is. For example, ++ # we might have downloaded a qcow2 image, or created an ++ # ephemeral filesystem locally, we just don't know here. Having ++ # audited this, all current sources have been sanity checked, ++ # either because they're locally generated, or because they have ++ # come from images.fetch_to_raw. However, this is major code smell. ++ images.convert_image_unsafe(base, self.path, self.driver_format, ++ run_as_root=True) + if resize: + disk.resize2fs(self.path, run_as_root=True) + +@@ -492,8 +501,8 @@ class Lvm(Image): + libvirt_utils.remove_logical_volumes(path) + + def snapshot_extract(self, target, out_format): +- images.convert_image(self.path, target, out_format, +- run_as_root=True) ++ images.convert_image(self.path, target, self.driver_format, ++ out_format, run_as_root=True) + + + class RBDVolumeProxy(object): +@@ -686,7 +695,7 @@ class Rbd(Image): + self._resize(self.rbd_name, size) + + def snapshot_extract(self, target, out_format): +- images.convert_image(self.path, target, out_format) ++ images.convert_image(self.path, target, 'raw', out_format) + + @staticmethod + def is_shared_block_storage(): diff -Nru nova-2014.1.3/debian/patches/CVE-2015-7548-3.patch nova-2014.1.5/debian/patches/CVE-2015-7548-3.patch --- nova-2014.1.3/debian/patches/CVE-2015-7548-3.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-7548-3.patch 2017-09-13 17:09:47.000000000 +0000 @@ -0,0 +1,165 @@ +Backport of: + +From d3573cca0764e853c1b0cead26cb65710919ee43 Mon Sep 17 00:00:00 2001 +From: Matthew Booth +Date: Fri, 11 Dec 2015 13:40:54 +0000 +Subject: [PATCH] Fix backing file detection in libvirt live snapshot + +When doing a live snapshot, the libvirt driver creates an intermediate +qcow2 file with the same backing file as the original disk. However, +it calls qemu-img info without specifying the input format explicitly. 
+An authenticated user can write data to a raw disk which will cause +this code to misinterpret the disk as a qcow2 file with a +user-specified backing file on the host, and return an arbitrary host +file as the backing file. + +This bug does not appear to result in a data leak in this case, but +this is hard to verify. It certainly results in corrupt output. + +(cherry picked from commit fec5b15911f7d4a927633875d042c6a94171b8ae) + + Conflicts: + nova/tests/virt/libvirt/fake_libvirt_utils.py + nova/tests/virt/libvirt/test_libvirt.py + nova/virt/images.py + nova/virt/libvirt/driver.py + +Resolves: rhbz 1295730 +Resolves: rhbz 1295729 +Closes-Bug: #1524274 +Change-Id: I11485f077d28f4e97529a691e55e3e3c0bea8872 +Reviewed-on: https://code.engineering.redhat.com/gerrit/64915 +Tested-by: RHOS Jenkins +Reviewed-by: Matthew Booth +--- + nova/tests/virt/libvirt/fake_libvirt_utils.py | 6 +++++- + nova/tests/virt/libvirt/test_libvirt.py | 3 +-- + nova/virt/images.py | 8 +++++--- + nova/virt/libvirt/driver.py | 16 ++++++++++------ + nova/virt/libvirt/utils.py | 9 +++++---- + 5 files changed, 26 insertions(+), 16 deletions(-) + +Index: nova-2014.1.5/nova/tests/virt/libvirt/fake_libvirt_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/fake_libvirt_utils.py 2017-09-13 13:09:44.429692727 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/fake_libvirt_utils.py 2017-09-13 13:09:44.417692566 -0400 +@@ -90,7 +90,11 @@ def create_cow_image(backing_file, path) + pass + + +-def get_disk_backing_file(path): ++def get_disk_size(path, format=None): ++ return 0 ++ ++ ++def get_disk_backing_file(path, format=None): + return disk_backing_files.get(path, None) + + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:09:44.429692727 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:09:44.421692620 -0400 +@@ -7451,8 +7451,7 @@ class LibvirtConnTestCase(test.TestCase) + unplug.side_effect = test.TestingException + self.assertRaises(test.TestingException, + conn.cleanup, 'ctxt', fake_inst, 'netinfo') +- unplug.assert_called_once_with(fake_inst, 'netinfo', +- ignore_errors=True) ++ unplug.assert_called_once_with(fake_inst, 'netinfo', ignore_errors=True) + + @mock.patch('os.path.exists', return_value=True) + @mock.patch('tempfile.mkstemp') +Index: nova-2014.1.5/nova/virt/images.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/images.py 2017-09-13 13:09:44.429692727 -0400 ++++ nova-2014.1.5/nova/virt/images.py 2017-09-13 13:09:44.421692620 -0400 +@@ -55,7 +55,7 @@ except Exception: + 'vulnerability CVE-2015-5162.') + + +-def qemu_img_info(path): ++def qemu_img_info(path, format=None): + """Return an object containing the parsed output from qemu-img info.""" + # TODO(mikal): this code should not be referring to a libvirt specific + # flag. 
+@@ -63,6 +63,8 @@ def qemu_img_info(path): + return imageutils.QemuImgInfo() + + cmd = ('env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', path) ++ if format is not None: ++ cmd = cmd + ('-f', format) + if QEMU_IMG_LIMITS is not None: + out, err = utils.execute(*cmd, prlimit=QEMU_IMG_LIMITS) + else: +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-13 13:09:44.429692727 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-13 13:09:44.425692673 -0400 +@@ -1593,7 +1593,7 @@ class LibvirtDriver(driver.ComputeDriver + # NOTE(xqueralt): libvirt needs o+x in the temp directory + os.chmod(tmpdir, 0o701) + self._live_snapshot(virt_dom, disk_path, out_path, +- image_format) ++ source_format, image_format, metadata) + else: + snapshot_backend.snapshot_extract(out_path, image_format) + finally: +@@ -1651,7 +1651,8 @@ class LibvirtDriver(driver.ComputeDriver + else: + return True + +- def _live_snapshot(self, domain, disk_path, out_path, image_format): ++ def _live_snapshot(self, domain, disk_path, out_path, ++ source_format, image_format, image_meta): + """Snapshot an instance without downtime.""" + # Save a copy of the domain's running XML file + xml = domain.XMLDesc(0) +@@ -1667,9 +1668,11 @@ class LibvirtDriver(driver.ComputeDriver + # in QEMU 1.3. In order to do this, we need to create + # a destination image with the original backing file + # and matching size of the instance root disk. +- src_disk_size = libvirt_utils.get_disk_size(disk_path) ++ src_disk_size = libvirt_utils.get_disk_size(disk_path, ++ format=source_format) + src_back_path = libvirt_utils.get_disk_backing_file(disk_path, +- basename=False) ++ format=source_format, ++ basename=False) + disk_delta = out_path + '.delta' + libvirt_utils.create_cow_image(src_back_path, disk_delta, + src_disk_size) +Index: nova-2014.1.5/nova/virt/libvirt/utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/utils.py 2017-09-13 13:09:44.429692727 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/utils.py 2017-09-13 13:09:44.425692673 -0400 +@@ -457,24 +457,25 @@ def pick_disk_driver_name(hypervisor_ver + return None + + +-def get_disk_size(path): ++def get_disk_size(path, format=None): + """Get the (virtual) size of a disk image + + :param path: Path to the disk image ++ :param format: the on-disk format of path + :returns: Size (in bytes) of the given disk image as it would be seen + by a virtual machine. 
+ """ +- size = images.qemu_img_info(path).virtual_size ++ size = images.qemu_img_info(path, format).virtual_size + return int(size) + + +-def get_disk_backing_file(path, basename=True): ++def get_disk_backing_file(path, basename=True, format=None): + """Get the backing file of a disk image + + :param path: Path to the disk image + :returns: a path to the image's backing store + """ +- backing_file = images.qemu_img_info(path).backing_file ++ backing_file = images.qemu_img_info(path, format).backing_file + if backing_file and basename: + backing_file = os.path.basename(backing_file) + diff -Nru nova-2014.1.3/debian/patches/CVE-2015-7548-4.patch nova-2014.1.5/debian/patches/CVE-2015-7548-4.patch --- nova-2014.1.3/debian/patches/CVE-2015-7548-4.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-7548-4.patch 2017-09-13 17:09:50.000000000 +0000 @@ -0,0 +1,41 @@ +From d28d73214f07d8577747d2b2e70dc11f370e4465 Mon Sep 17 00:00:00 2001 +From: Matthew Booth +Date: Thu, 14 Apr 2016 17:13:37 +0100 +Subject: [PATCH] Disable live snapshot for rbd-backed instances + +The backport of change I11485f077d28f4e97529a691e55e3e3c0bea8872 +missed a use of source_format. After this change source_format +strictly contains a file format, and source_type contains the name of +the backend. Therefore, for rbd source_format is 'raw', and +source_type is 'rbd'. The test to enable live migration still expected +source_format for rbd to be 'rbd', which caused the exclusion to be +missed. + +This change is a fixup to the backport. The new line is in line with +upstream. + +Downstream-Only +Resolves: rhbz#1326489 + +Change-Id: I6dcbceb39a97b5fbe7bf42d367596afc4ea061e0 +Reviewed-on: https://code.engineering.redhat.com/gerrit/72226 +Reviewed-by: Lee Yarwood +Tested-by: RHOS Jenkins +Tested-by: Matthew Booth +--- + nova/virt/libvirt/driver.py | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-13 13:09:48.401745734 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-13 13:09:48.397745682 -0400 +@@ -1543,7 +1543,7 @@ class LibvirtDriver(driver.ComputeDriver + if self.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION, + MIN_QEMU_LIVESNAPSHOT_VERSION, + REQ_HYPERVISOR_LIVESNAPSHOT) \ +- and not source_type == "lvm" and not source_format == 'rbd': ++ and source_type not in ('lvm', 'rbd'): + live_snapshot = True + # Abort is an idempotent operation, so make sure any block + # jobs which may have failed are ended. This operation also diff -Nru nova-2014.1.3/debian/patches/CVE-2015-7713.patch nova-2014.1.5/debian/patches/CVE-2015-7713.patch --- nova-2014.1.3/debian/patches/CVE-2015-7713.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-7713.patch 2017-09-12 12:47:11.000000000 +0000 @@ -0,0 +1,117 @@ +From 6dfb9690b1c1d2a0836db48a735953a23a098470 Mon Sep 17 00:00:00 2001 +From: Matt Riedemann +Date: Wed, 9 Sep 2015 20:29:09 -0700 +Subject: [PATCH] Don't expect meta attributes in object_compat that aren't in + the db obj + +The object_compat decorator expects to get the Instance object with +'metadata' and 'system_metadata' attributes but if those aren't in the +db instance dict object, Instance._from_db_object will fail with a +KeyError. 
+ +In Kilo this happens per refresh_instance_security_rules because in the +compute API code, the instance passed to refresh_instance_security_rules +comes from the call to get the security group(s) which joins on the +instances column, but that doesn't join on the metadata/system_metadata +fields for the instances. So when the instances get to object_compat in +the compute manager and the db instance dict is converted to the +Instance object, it expects fields that aren't in the dict and we get +the KeyError. + +The refresh_instance_security_rules case is fixed in Liberty per commit +12fbe6f082ef9b70b89302e15daa12e851e507a7 - in that case the compute API +passes Instance objects to the compute manager so object_compat doesn't +have anything to do, _load_instance just sees instance_or_dict isn't a +dict and ignores it. + +We're making this change since (1) it's an obviously wrong assumption in +object_compat and should be fixed and (2) we need to backport this fix +to stable/kilo since it's an upgrade impact for users there. + +Closes-Bug: #1484738 +Resolves: rhbz#1272864 +Upstream-Liberty: https://review.openstack.org/222022 +Upstream-Kilo: https://review.openstack.org/222023 +Upstream-Juno: https://review.openstack.org/222026 + +Conflicts: + nova/tests/unit/compute/test_compute.py + +NOTE(mriedem): The conflict is due to the unit tests being moved in +kilo, otherwise this is unchanged. + +Change-Id: I36a954c095a9aa35879200784dc18e35edf689e6 +(cherry picked from commit 9369aab04e37b7818d49b00e65857be8b3564e9e) +(cherry picked from commit 08d1153d3be9f8d59aa0acc03eedd45a1697ed7b) +Reviewed-on: https://code.engineering.redhat.com/gerrit/61173 +Reviewed-by: RHOS Jenkins +Tested-by: RHOS Jenkins +Reviewed-by: Lee Yarwood +--- + nova/compute/manager.py | 6 +++++- + nova/tests/compute/test_compute.py | 21 +++++++++++++++++++++ + 2 files changed, 26 insertions(+), 1 deletion(-) + +Index: nova-2014.1.5/nova/compute/manager.py +=================================================================== +--- nova-2014.1.5.orig/nova/compute/manager.py 2017-09-12 08:47:08.890464733 -0400 ++++ nova-2014.1.5/nova/compute/manager.py 2017-09-12 08:47:08.882464635 -0400 +@@ -397,6 +397,11 @@ def object_compat(function): + def decorated_function(self, context, *args, **kwargs): + def _load_instance(instance_or_dict): + if isinstance(instance_or_dict, dict): ++ # try to get metadata and system_metadata for most cases but ++ # only attempt to load those if the db instance already has ++ # those fields joined ++ metas = [meta for meta in ('metadata', 'system_metadata') ++ if meta in instance_or_dict] + instance = instance_obj.Instance._from_db_object( + context, instance_obj.Instance(), instance_or_dict, + expected_attrs=metas) +@@ -404,7 +409,6 @@ def object_compat(function): + return instance + return instance_or_dict + +- metas = ['metadata', 'system_metadata'] + try: + kwargs['instance'] = _load_instance(kwargs['instance']) + except KeyError: +Index: nova-2014.1.5/nova/tests/compute/test_compute.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/compute/test_compute.py 2017-09-12 08:47:08.890464733 -0400 ++++ nova-2014.1.5/nova/tests/compute/test_compute.py 2017-09-12 08:47:08.882464635 -0400 +@@ -1203,6 +1203,24 @@ class ComputeTestCase(BaseTestCase): + def test_fn(_self, context, instance): + self.assertIsInstance(instance, instance_obj.Instance) + self.assertEqual(instance.uuid, db_inst['uuid']) ++ self.assertEqual(instance.metadata, db_inst['metadata']) 
++ self.assertEqual(instance.system_metadata, ++ db_inst['system_metadata']) ++ test_fn(None, self.context, instance=db_inst) ++ ++ def test_object_compat_no_metas(self): ++ # Tests that we don't try to set metadata/system_metadata on the ++ # instance object using fields that aren't in the db object. ++ db_inst = fake_instance.fake_db_instance() ++ db_inst.pop('metadata', None) ++ db_inst.pop('system_metadata', None) ++ ++ @compute_manager.object_compat ++ def test_fn(_self, context, instance): ++ self.assertIsInstance(instance, instance_obj.Instance) ++ self.assertEqual(instance.uuid, db_inst['uuid']) ++ self.assertNotIn('metadata', instance) ++ self.assertNotIn('system_metadata', instance) + test_fn(None, self.context, instance=db_inst) + + def test_object_compat_more_positional_args(self): +@@ -1212,6 +1230,9 @@ class ComputeTestCase(BaseTestCase): + def test_fn(_self, context, instance, pos_arg_1, pos_arg_2): + self.assertIsInstance(instance, instance_obj.Instance) + self.assertEqual(instance.uuid, db_inst['uuid']) ++ self.assertEqual(instance.metadata, db_inst['metadata']) ++ self.assertEqual(instance.system_metadata, ++ db_inst['system_metadata']) + self.assertEqual(pos_arg_1, 'fake_pos_arg1') + self.assertEqual(pos_arg_2, 'fake_pos_arg2') + diff -Nru nova-2014.1.3/debian/patches/CVE-2015-8749.patch nova-2014.1.5/debian/patches/CVE-2015-8749.patch --- nova-2014.1.3/debian/patches/CVE-2015-8749.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2015-8749.patch 2017-09-12 13:33:41.000000000 +0000 @@ -0,0 +1,48 @@ +Backport of: + +From ef1ccdaca9512b88878155f7d8c2c77853d91252 Mon Sep 17 00:00:00 2001 +From: Matt Riedemann +Date: Mon, 16 Nov 2015 13:11:09 -0800 +Subject: [PATCH] xen: mask passwords in volume connection_data dict + +The connection_data dict can have credentials in it, so we need to scrub +those before putting the stringified dict into the StorageError message +and raising that up and when logging the dict. + +Note that strutils.mask_password converts the dict to a string using +six.text_type so we don't have to do that conversion first. 
+ +SecurityImpact + +Change-Id: Ic5f4d4c26794550a92481bf2b725ef5eafa581b2 +Closes-Bug: #1516765 +(cherry picked from commit 8b289237ed6d53738c22878decf0c429301cf3d0) +(cherry picked from commit cf197ec2d682fb4da777df2291ca7ef101f73b77) +--- + nova/tests/unit/virt/xenapi/test_volume_utils.py | 17 +++++++++++++++-- + nova/tests/unit/virt/xenapi/test_volumeops.py | 16 ++++++++++++++++ + nova/virt/xenapi/volume_utils.py | 3 ++- + nova/virt/xenapi/volumeops.py | 6 +++++- + 4 files changed, 38 insertions(+), 4 deletions(-) + +Index: nova-2014.1.5/nova/virt/xenapi/volume_utils.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/xenapi/volume_utils.py 2017-09-12 09:32:54.906602924 -0400 ++++ nova-2014.1.5/nova/virt/xenapi/volume_utils.py 2017-09-12 09:33:35.547153637 -0400 +@@ -26,6 +26,7 @@ from oslo.config import cfg + + from nova.openstack.common.gettextutils import _ + from nova.openstack.common import log as logging ++from nova.openstack.common import strutils + + xenapi_volume_utils_opts = [ + cfg.IntOpt('introduce_vdi_retry_wait', +@@ -267,7 +268,7 @@ def parse_volume_info(connection_data): + target_host is None or + target_iqn is None): + raise StorageError(_('Unable to obtain target information' +- ' %s') % connection_data) ++ ' %s') % strutils.mask_password(connection_data)) + volume_info = {} + volume_info['id'] = volume_id + volume_info['target'] = target_host diff -Nru nova-2014.1.3/debian/patches/CVE-2016-2140-1.patch nova-2014.1.5/debian/patches/CVE-2016-2140-1.patch --- nova-2014.1.3/debian/patches/CVE-2016-2140-1.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2016-2140-1.patch 2017-09-13 17:52:14.000000000 +0000 @@ -0,0 +1,178 @@ +Backport of: + +From 48e30ff15efdf167ce5782b57ee3cf287c5b9049 Mon Sep 17 00:00:00 2001 +From: Lee Yarwood +Date: Wed, 24 Feb 2016 11:23:22 +0000 +Subject: [PATCH] libvirt: Always copy or recreate disk.info during a migration + +The disk.info file contains the path and format of any image, config or +ephermal disk associated with an instance. When using RAW images and migrating +an instance this file should always be copied or recreated. This avoids the Raw +imagebackend reinspecting the format of these disks when spawning the instance +on the destination host. + +By not copying or recreating this disk.info file, a malicious image written to +an instance disk on the source host will cause Nova to reinspect and record a +different format for the disk on the destination. This format then being used +incorrectly when finally spawning the instance on the destination. 
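The recreation step itself is compact; condensed from the hunks below into a standalone sketch (stdlib only, paths illustrative):

    import json
    import os

    def build_disk_info(instance_dir, disk_info):
        # Record the format the source host reported for each disk, so the
        # destination's Raw backend never re-probes guest-writable headers.
        image_disk_info = {}
        for info in disk_info:
            image_file = os.path.basename(info['path'])
            image_path = os.path.join(instance_dir, image_file)
            image_disk_info[image_path] = info['type']
        return json.dumps(image_disk_info)

    disks = [{'path': '/some/path/disk', 'type': 'raw'},
             {'path': '/some/path/disk.eph0', 'type': 'raw'}]
    print(build_disk_info('/dest/instance-dir', disks))
    # {"/dest/instance-dir/disk": "raw", "/dest/instance-dir/disk.eph0": "raw"}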
+ +Conflicts: + nova/tests/unit/virt/libvirt/test_driver.py + nova/virt/libvirt/driver.py + +Resolves:rhbz #1313655 +Resolves:rhbz #1313656 +SecurityImpact +Closes-bug: #1548450 +Change-Id: Idfc16f54049aaeab31ac1c1d8d79a129acc9fb87 +Reviewed-on: https://code.engineering.redhat.com/gerrit/69013 +Reviewed-by: Matthew Booth +Tested-by: RHOS Jenkins +--- + nova/tests/virt/libvirt/test_libvirt.py | 81 +++++++++++++++++++++++++++++++++ + nova/virt/libvirt/driver.py | 26 +++++++++++ + 2 files changed, 107 insertions(+) + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:51:13.969596813 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:51:59.014183154 -0400 +@@ -4433,6 +4433,43 @@ class LibvirtConnTestCase(test.TestCase) + conn.pre_live_migration(self.context, instance, block_device_info=None, + network_info=[], disk_info={}) + ++ def test_pre_live_migration_recreate_disk_info(self): ++ ++ migrate_data = {'is_shared_storage': False, ++ 'is_volume_backed': False, ++ 'block_migration': True, ++ 'instance_relative_path': '/some/path/'} ++ disk_info = [{'disk_size': 5368709120, 'type': 'raw', ++ 'virt_disk_size': 5368709120, ++ 'path': '/some/path/disk', ++ 'backing_file': '', 'over_committed_disk_size': 0}, ++ {'disk_size': 1073741824, 'type': 'raw', ++ 'virt_disk_size': 1073741824, ++ 'path': '/some/path/disk.eph0', ++ 'backing_file': '', 'over_committed_disk_size': 0}] ++ image_disk_info = {'/some/path/disk': 'raw', ++ '/some/path/disk.eph0': 'raw'} ++ ++ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) ++ instance = db.instance_create(self.context, self.test_instance) ++ instance_path = os.path.dirname(disk_info[0]['path']) ++ disk_info_path = os.path.join(instance_path, 'disk.info') ++ ++ with contextlib.nested( ++ mock.patch.object(os, 'mkdir'), ++ mock.patch.object(fake_libvirt_utils, 'write_to_file'), ++ mock.patch.object(drvr, '_create_images_and_backing') ++ ) as ( ++ mkdir, write_to_file, create_images_and_backing ++ ): ++ drvr.pre_live_migration(self.context, instance, ++ block_device_info=None, ++ network_info=[], ++ disk_info=disk_info, ++ migrate_data=migrate_data) ++ write_to_file.assert_called_with(disk_info_path, ++ jsonutils.dumps(image_disk_info)) ++ + def test_get_instance_disk_info_works_correctly(self): + # Test data + instance_ref = db.instance_create(self.context, self.test_instance) +@@ -8734,6 +8771,50 @@ class LibvirtDriverTestCase(test.TestCas + self.libvirtconnection.migrate_disk_and_power_off, + 'ctx', instance, '10.0.0.1', flavor, None) + ++ @mock.patch('nova.utils.execute') ++ @mock.patch('nova.virt.libvirt.utils.copy_image') ++ @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._destroy') ++ @mock.patch('nova.virt.libvirt.utils.get_instance_path') ++ @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' ++ '._is_storage_shared_with') ++ @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' ++ '.get_instance_disk_info') ++ def test_migrate_disk_and_power_off_resize_copy_disk_info(self, ++ mock_disk_info, ++ mock_shared, ++ mock_path, ++ mock_destroy, ++ mock_copy, ++ mock_execuate): ++ ++ instance = self._create_instance() ++ disk_info = jsonutils.dumps([{'disk_size': 1, 'type': 'qcow2', ++ 'virt_disk_size': 10737418240, 'path': '/test/disk', ++ 'backing_file': '/base/disk'}, ++ {'disk_size': 1, 'type': 'qcow2', ++ 'virt_disk_size': 536870912, 'path': '/test/disk.swap', 
++ 'backing_file': '/base/swap_512'}]) ++ disk_info_text = jsonutils.loads(disk_info) ++ instance_base = os.path.dirname(disk_info_text[0]['path']) ++ flavor = {'root_gb': 10, 'ephemeral_gb': 25} ++ flavor_object = flavor_obj.Flavor(**flavor) ++ ++ mock_disk_info.return_value = disk_info ++ mock_path.return_value = instance_base ++ mock_shared.return_value = False ++ ++ admin_ctx = context.get_admin_context() ++ self.libvirtconnection.migrate_disk_and_power_off(admin_ctx, instance, ++ mock.sentinel, ++ flavor_object, None) ++ ++ src_disk_info_path = os.path.join(instance_base + '_resize', ++ 'disk.info') ++ dst_disk_info_path = os.path.join(instance_base, 'disk.info') ++ mock_copy.assert_any_call(src_disk_info_path, dst_disk_info_path, ++ host=mock.sentinel, on_execute=mock.ANY, ++ on_completion=mock.ANY) ++ + def test_wait_for_running(self): + def fake_get_info(instance): + if instance['name'] == "not_found": +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-13 13:51:13.969596813 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-13 13:51:13.969596813 -0400 +@@ -4723,6 +4723,24 @@ class LibvirtDriver(driver.ComputeDriver + raise exception.DestinationDiskExists(path=instance_dir) + os.mkdir(instance_dir) + ++ # Recreate the disk.info file and in doing so stop the ++ # imagebackend from recreating it incorrectly by inspecting the ++ # contents of each file when using the Raw backend. ++ if disk_info: ++ image_disk_info = {} ++ for info in disk_info: ++ image_file = os.path.basename(info['path']) ++ image_path = os.path.join(instance_dir, image_file) ++ image_disk_info[image_path] = info['type'] ++ ++ LOG.debug('Creating disk.info with the contents: %s', ++ image_disk_info, instance=instance) ++ ++ image_disk_info_path = os.path.join(instance_dir, ++ 'disk.info') ++ libvirt_utils.write_to_file(image_disk_info_path, ++ jsonutils.dumps(image_disk_info)) ++ + # Ensure images and backing files are present. + self._create_images_and_backing(context, instance, instance_dir, + disk_info) +@@ -5145,6 +5163,14 @@ class LibvirtDriver(driver.ComputeDriver + libvirt_utils.copy_image(from_path, img_path, host=dest, + on_execute=on_execute, + on_completion=on_completion) ++ ++ # Ensure disk.info is written to the new path to avoid disks being ++ # reinspected and potentially changing format. 
++ src_disk_info_path = os.path.join(inst_base_resize, 'disk.info') ++ dst_disk_info_path = os.path.join(inst_base, 'disk.info') ++ libvirt_utils.copy_image(src_disk_info_path, dst_disk_info_path, ++ host=dest, on_execute=on_execute, ++ on_completion=on_completion) + except Exception: + with excutils.save_and_reraise_exception(): + self._cleanup_remote_migration(dest, inst_base, diff -Nru nova-2014.1.3/debian/patches/CVE-2016-2140-2.patch nova-2014.1.5/debian/patches/CVE-2016-2140-2.patch --- nova-2014.1.3/debian/patches/CVE-2016-2140-2.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2016-2140-2.patch 2017-09-13 17:10:03.000000000 +0000 @@ -0,0 +1,98 @@ +From e9ee2bd26b5f863099cda5f4aa89c4d567984d27 Mon Sep 17 00:00:00 2001 +From: Matthew Booth +Date: Wed, 9 Mar 2016 17:27:03 +0000 +Subject: [PATCH] Fix processing of libvirt disk.info in non-disk-image cases + +In Idfc16f54049aaeab31ac1c1d8d79a129acc9fb87 a change was made +that caused non-disk-image backends to fall over because of an +undefined variable because they skipped processing of the disk.info +file. This adds a check for that case to make sure we don't run +that path in the non-disk-image backend case. + +Conflicts: + nova/tests/virt/libvirt/test_libvirt.py + +Upstream-Kilo: https://review.openstack.org/#/c/290847 +Closes-Bug: #1555287 +Change-Id: I02f8a5f0e29816336e500a8fe8dcc9ece15968e9 +Reviewed-on: https://code.engineering.redhat.com/gerrit/69570 +Tested-by: RHOS Jenkins +Reviewed-by: Matthew Booth +--- + nova/tests/virt/libvirt/test_libvirt.py | 14 +++++++++++--- + nova/virt/libvirt/driver.py | 21 ++++++++++++--------- + 2 files changed, 23 insertions(+), 12 deletions(-) + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:10:01.537921037 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:10:01.533920984 -0400 +@@ -8804,12 +8804,20 @@ class LibvirtDriverTestCase(test.TestCas + mock_shared.return_value = False + + admin_ctx = context.get_admin_context() +- self.libvirtconnection.migrate_disk_and_power_off(admin_ctx, instance, +- mock.sentinel, +- flavor_object, None) + + src_disk_info_path = os.path.join(instance_base + '_resize', + 'disk.info') ++ ++ with mock.patch.object(os.path, 'exists', autospec=True) \ ++ as mock_exists: ++ # disk.info exists on the source ++ mock_exists.side_effect = \ ++ lambda path: path == src_disk_info_path ++ self.libvirtconnection.migrate_disk_and_power_off(admin_ctx, ++ instance, mock.sentinel, ++ flavor_object, None) ++ self.assertTrue(mock_exists.called) ++ + dst_disk_info_path = os.path.join(instance_base, 'disk.info') + mock_copy.assert_any_call(src_disk_info_path, dst_disk_info_path, + host=mock.sentinel, on_execute=mock.ANY, +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-13 13:10:01.537921037 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-13 13:10:01.533920984 -0400 +@@ -5134,17 +5134,18 @@ class LibvirtDriver(driver.ComputeDriver + if shared_storage: + dest = None + utils.execute('mkdir', '-p', inst_base) ++ ++ on_execute = lambda process: \ ++ self.job_tracker.add_job(instance, process.pid) ++ on_completion = lambda process: \ ++ self.job_tracker.remove_job(instance, process.pid) ++ + for info in disk_info: + # assume inst_base 
== dirname(info['path']) + img_path = info['path'] + fname = os.path.basename(img_path) + from_path = os.path.join(inst_base_resize, fname) + +- on_execute = lambda process: self.job_tracker.add_job( +- instance, process.pid) +- on_completion = lambda process: self.job_tracker.remove_job( +- instance, process.pid) +- + if info['type'] == 'qcow2' and info['backing_file']: + tmp_path = from_path + "_rbase" + # merge backing file +@@ -5167,10 +5168,12 @@ class LibvirtDriver(driver.ComputeDriver + # Ensure disk.info is written to the new path to avoid disks being + # reinspected and potentially changing format. + src_disk_info_path = os.path.join(inst_base_resize, 'disk.info') +- dst_disk_info_path = os.path.join(inst_base, 'disk.info') +- libvirt_utils.copy_image(src_disk_info_path, dst_disk_info_path, +- host=dest, on_execute=on_execute, +- on_completion=on_completion) ++ if os.path.exists(src_disk_info_path): ++ dst_disk_info_path = os.path.join(inst_base, 'disk.info') ++ libvirt_utils.copy_image(src_disk_info_path, ++ dst_disk_info_path, ++ host=dest, on_execute=on_execute, ++ on_completion=on_completion) + except Exception: + with excutils.save_and_reraise_exception(): + self._cleanup_remote_migration(dest, inst_base, diff -Nru nova-2014.1.3/debian/patches/CVE-2016-2140-3.patch nova-2014.1.5/debian/patches/CVE-2016-2140-3.patch --- nova-2014.1.3/debian/patches/CVE-2016-2140-3.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/CVE-2016-2140-3.patch 2017-09-13 17:10:08.000000000 +0000 @@ -0,0 +1,59 @@ +From a20a0e46b7841bb64e6bc17b9f0d255541739ea9 Mon Sep 17 00:00:00 2001 +From: Lee Yarwood +Date: Thu, 17 Mar 2016 16:36:08 +0000 +Subject: [PATCH] libvirt: Decode disk_info before use + +The fix for OSSA 2016-007 / CVE-2016-2140 in f302bf04 assumed that +disk_info is always a plain, decoded list. However prior to Liberty +when preforming a live block migration the compute manager populates +disk_info with an encoded JSON string when calling +self.driver.get_instance_disk_info. In the live migration case without +block migration disk_info is None. + +As a result we should always decode disk_info when a block migration +is called for to ensure that we can iterate over the disks and rebuild +the disk.info file. + +The following change removed the JSON encoding from +get_instance_disk_info and other methods within the libvirt driver for +Liberty. 
+ +libvirt: Remove unnecessary JSON conversions +https://review.openstack.org/#/c/177437/6 + +Closes-Bug: #1558697 +Change-Id: Icfe1f23cc3af2d0166dac82109111e341623fc4a +Reviewed-on: https://code.engineering.redhat.com/gerrit/70141 +Tested-by: RHOS Jenkins +Reviewed-by: Matthew Booth +--- + nova/tests/virt/libvirt/test_libvirt.py | 2 +- + nova/virt/libvirt/driver.py | 2 +- + 2 files changed, 2 insertions(+), 2 deletions(-) + +Index: nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py +=================================================================== +--- nova-2014.1.5.orig/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:10:06.361985415 -0400 ++++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2017-09-13 13:10:06.357985362 -0400 +@@ -4465,7 +4465,7 @@ class LibvirtConnTestCase(test.TestCase) + drvr.pre_live_migration(self.context, instance, + block_device_info=None, + network_info=[], +- disk_info=disk_info, ++ disk_info=jsonutils.dumps(disk_info), + migrate_data=migrate_data) + write_to_file.assert_called_with(disk_info_path, + jsonutils.dumps(image_disk_info)) +Index: nova-2014.1.5/nova/virt/libvirt/driver.py +=================================================================== +--- nova-2014.1.5.orig/nova/virt/libvirt/driver.py 2017-09-13 13:10:06.361985415 -0400 ++++ nova-2014.1.5/nova/virt/libvirt/driver.py 2017-09-13 13:10:06.357985362 -0400 +@@ -4728,7 +4728,7 @@ class LibvirtDriver(driver.ComputeDriver + # contents of each file when using the Raw backend. + if disk_info: + image_disk_info = {} +- for info in disk_info: ++ for info in jsonutils.loads(disk_info): + image_file = os.path.basename(info['path']) + image_path = os.path.join(instance_dir, image_file) + image_disk_info[image_path] = info['type'] diff -Nru nova-2014.1.3/debian/patches/Detach-iSCSI-latest-path-for-latest-disk.patch nova-2014.1.5/debian/patches/Detach-iSCSI-latest-path-for-latest-disk.patch --- nova-2014.1.3/debian/patches/Detach-iSCSI-latest-path-for-latest-disk.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/Detach-iSCSI-latest-path-for-latest-disk.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,119 @@ +From 030621f36e1508af779764b34baf9915f27e0d4b Mon Sep 17 00:00:00 2001 +From: Billy Olsen +Date: Thu, 21 Apr 2016 21:00:24 +0000 +Subject: [PATCH 4/4] Detach iSCSI latest path for latest disk +Forwarded: https://review.openstack.org/#/c/135382/ +Bug: https://bugs.launchpad.net/nova/+bug/1374999 + +The logic responsible to disconnect iscsi volumes wasn't clearing +latest path for the latest remaining disk. With this change, +latest disk path is removed right before iscsi disk is disconnected. + +Also, the device descriptor was not removed if the iqn are different +and multipath is enabled. 
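In outline, the change moves device-descriptor cleanup ahead of the session logout for the last remaining disk. A simplified sketch, with method names borrowed from LibvirtISCSIVolumeDriver but signatures reduced for illustration:

    def disconnect_multipath(driver, iscsi_properties, multipath_device,
                             remaining_devices, ips_iqns):
        # Sketch only: if no other multipath disks remain, remove the
        # /dev/mapper descriptor for this one *before* logging out of
        # the iSCSI portals, so no stale path survives the disconnect.
        if not remaining_devices:
            driver._remove_multipath_device_descriptor(multipath_device)
            driver._disconnect_mpath(iscsi_properties, ips_iqns)

The second hunk below also removes an early return, so the cleanup that follows the in-use check still runs when the iqns differ and multipath is enabled.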
+ +Conflicts: + nova/tests/virt/libvirt/test_libvirt_volume.py + +NOTE(wolsen): Conflicts are due to removing additional tests included +in stable/juno but not part of this cherry-pick + +Change-Id: Ib6f6cea40cc3a14a3a443b157d0decba5602bf13 +Closes-Bug: 1374999 +Closes-Bug: 1452032 + +(cherry picked from commit 768da20fab6f84a8c34a089767b87924045c905a) +(cherry picked from commit 092a88b534f133aaca5f969f69f77ac38b1878fa) +--- + nova/tests/virt/libvirt/test_libvirt_volume.py | 53 +++++++++++++++++++++++--- + nova/virt/libvirt/volume.py | 1 + + 2 files changed, 48 insertions(+), 6 deletions(-) + +--- a/nova/tests/virt/libvirt/test_libvirt_volume.py ++++ b/nova/tests/virt/libvirt/test_libvirt_volume.py +@@ -364,13 +364,10 @@ + self.stubs.Set(libvirt_driver, '_get_multipath_device_name', + lambda x: fake_multipath_device) + +- def fake_disconnect_volume_multipath_iscsi(iscsi_properties, +- multipath_device): +- if fake_multipath_device != multipath_device: +- raise Exception('Invalid multipath_device.') ++ fake_rm_mp_dev_desc = mock.MagicMock() + +- self.stubs.Set(libvirt_driver, '_disconnect_volume_multipath_iscsi', +- fake_disconnect_volume_multipath_iscsi) ++ self.stubs.Set(libvirt_driver, '_remove_multipath_device_descriptor', ++ fake_rm_mp_dev_desc) + with mock.patch.object(os.path, 'exists', return_value=True): + vol = {'id': 1, 'name': self.name} + connection_info = self.iscsi_connection(vol, self.location, +@@ -380,6 +377,50 @@ + self.assertEqual(fake_multipath_id, + connection_info['data']['multipath_id']) + libvirt_driver.disconnect_volume(connection_info, "fake") ++ fake_rm_mp_dev_desc.assert_called_with(fake_multipath_device) ++ ++ def test_disconnect_volume_multipath_iscsi_not_in_use(self): ++ libvirt_driver = volume.LibvirtISCSIVolumeDriver(self.fake_conn) ++ libvirt_driver.use_multipath = True ++ self.stubs.Set(libvirt_driver, '_run_iscsiadm_bare', ++ lambda x, check_exit_code: ('',)) ++ self.stubs.Set(libvirt_driver, '_rescan_iscsi', lambda: None) ++ self.stubs.Set(libvirt_driver, '_get_host_device', lambda x: None) ++ self.stubs.Set(libvirt_driver, '_rescan_multipath', lambda: None) ++ ++ fake_multipath_id = 'fake_multipath_id' ++ fake_multipath_device = '/dev/mapper/%s' % fake_multipath_id ++ ++ fake_remove_multipath_device_descriptor = mock.MagicMock() ++ fake_disconnect_mpath = mock.MagicMock() ++ ++ self.stubs.Set(libvirt_driver, '_get_multipath_device_name', ++ lambda x: fake_multipath_device) ++ ++ self.stubs.Set(libvirt_driver, ++ '_remove_multipath_device_descriptor', ++ fake_remove_multipath_device_descriptor) ++ ++ self.stubs.Set(libvirt_driver, ++ '_disconnect_mpath', fake_disconnect_mpath) ++ ++ self.stubs.Set(libvirt_driver, ++ '_get_target_portals_from_iscsiadm_output', ++ lambda x: [[self.location, self.iqn]]) ++ ++ with contextlib.nested( ++ mock.patch.object(os.path, 'exists', return_value=True), ++ mock.patch.object(libvirt_driver, '_connect_to_iscsi_portal') ++ ): ++ vol = {'id': 1, 'name': self.name} ++ connection_info = self.iscsi_connection(vol, self.location, ++ self.iqn) ++ libvirt_driver.connect_volume(connection_info, ++ self.disk_info) ++ libvirt_driver.disconnect_volume(connection_info, "fake") ++ ++ fake_remove_multipath_device_descriptor.assert_called_with( ++ fake_multipath_device) + + def iser_connection(self, volume, location, iqn): + return { +--- a/nova/virt/libvirt/volume.py ++++ b/nova/virt/libvirt/volume.py +@@ -460,6 +460,7 @@ + + if not devices: + # disconnect if no other multipath devices ++ 
self._remove_multipath_device_descriptor(multipath_device) + self._disconnect_mpath(iscsi_properties, ips_iqns) + return + +@@ -480,7 +481,6 @@ + if not in_use: + # disconnect if no other multipath devices with same iqn + self._disconnect_mpath(iscsi_properties, ips_iqns) +- return + elif multipath_device not in devices: + # delete the devices associated w/ the unused multipath + self._delete_mpath(iscsi_properties, multipath_device, ips_iqns) diff -Nru nova-2014.1.3/debian/patches/disable-websockify-tests.patch nova-2014.1.5/debian/patches/disable-websockify-tests.patch --- nova-2014.1.3/debian/patches/disable-websockify-tests.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/disable-websockify-tests.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,25 @@ +--- a/nova/tests/console/test_websocketproxy.py ++++ b/nova/tests/console/test_websocketproxy.py +@@ -16,8 +16,13 @@ + + + import mock ++import testtools ++ ++try: ++ from nova.console import websocketproxy ++except: ++ websocketproxy = None + +-from nova.console import websocketproxy + from nova import exception + from nova import test + from oslo.config import cfg +@@ -27,6 +32,7 @@ + + class NovaProxyRequestHandlerBaseTestCase(test.TestCase): + ++ @testtools.skipIf(websocketproxy is None, "websockify not available") + def setUp(self): + super(NovaProxyRequestHandlerBaseTestCase, self).setUp() + diff -Nru nova-2014.1.3/debian/patches/evacuate_error_vm.patch nova-2014.1.5/debian/patches/evacuate_error_vm.patch --- nova-2014.1.3/debian/patches/evacuate_error_vm.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/evacuate_error_vm.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,85 @@ +commit 4551896ced835b0cd89b9ff1ff17ef2bae2282f5 +Author: Chris Friesen +Date: Fri Mar 14 11:37:55 2014 -0600 + + Allow evacuate from vm_state=Error + + We currently allow reboot/rebuild/rescue for an instance in the Error state. + This commit allows "evacuate" as well, since it is essentially a "rebuild" + on a different compute node. + + This is useful in a number of cases, in particular if an initial evacuation + attempt fails. + + Change-Id: I3f513eb738c91fe71767308f57251629639efd6a + Closes-Bug: 1298061 + (cherry picked from commit 2f8dfc0da2fd7f13185c4638aa74013be617cf11) + +diff --git a/nova/compute/api.py b/nova/compute/api.py +index d939aaf..61dcdd0 100644 +--- a/nova/compute/api.py ++++ b/nova/compute/api.py +@@ -3040,7 +3040,8 @@ class API(base.Base): + host_name, block_migration=block_migration, + disk_over_commit=disk_over_commit) + +- @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED]) ++ @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, ++ vm_states.ERROR]) + def evacuate(self, context, instance, host, on_shared_storage, + admin_password=None): + """Running evacuate to target host. 
+diff --git a/nova/tests/compute/test_compute.py b/nova/tests/compute/test_compute.py +index e1297a9..6c2333e 100644 +--- a/nova/tests/compute/test_compute.py ++++ b/nova/tests/compute/test_compute.py +@@ -9234,9 +9234,9 @@ class ComputeAPITestCase(BaseTestCase): + instance.refresh() + self.assertEqual(instance['task_state'], task_states.MIGRATING) + +- def test_evacuate(self): ++ def _check_evacuate(self, instance_params=None): + instance = jsonutils.to_primitive(self._create_fake_instance( +- services=True)) ++ instance_params, services=True)) + instance_uuid = instance['uuid'] + instance = db.instance_get_by_uuid(self.context, instance_uuid) + self.assertIsNone(instance['task_state']) +@@ -9265,6 +9265,12 @@ class ComputeAPITestCase(BaseTestCase): + + db.instance_destroy(self.context, instance['uuid']) + ++ def test_evacuate(self): ++ self._check_evacuate() ++ ++ def test_error_evacuate(self): ++ self._check_evacuate({'vm_state': vm_states.ERROR}) ++ + def test_fail_evacuate_from_non_existing_host(self): + inst = {} + inst['vm_state'] = vm_states.ACTIVE +@@ -9333,9 +9339,7 @@ class ComputeAPITestCase(BaseTestCase): + jsonutils.to_primitive(self._create_fake_instance( + {'vm_state': vm_states.SOFT_DELETED})), + jsonutils.to_primitive(self._create_fake_instance( +- {'vm_state': vm_states.DELETED})), +- jsonutils.to_primitive(self._create_fake_instance( +- {'vm_state': vm_states.ERROR})) ++ {'vm_state': vm_states.DELETED})) + ] + + for instance in instances: +diff --git a/nova/tests/compute/test_compute_cells.py b/nova/tests/compute/test_compute_cells.py +index 55f500f..9045246 100644 +--- a/nova/tests/compute/test_compute_cells.py ++++ b/nova/tests/compute/test_compute_cells.py +@@ -148,6 +148,9 @@ class CellsComputeAPITestCase(test_compute.ComputeAPITestCase): + def test_evacuate(self): + self.skipTest("Test is incompatible with cells.") + ++ def test_error_evacuate(self): ++ self.skipTest("Test is incompatible with cells.") ++ + def test_delete_instance_no_cell(self): + cells_rpcapi = self.compute_api.cells_rpcapi + self.mox.StubOutWithMock(cells_rpcapi, diff -Nru nova-2014.1.3/debian/patches/fix-creating-bdm-for-failed-volume-attachment.patch nova-2014.1.5/debian/patches/fix-creating-bdm-for-failed-volume-attachment.patch --- nova-2014.1.3/debian/patches/fix-creating-bdm-for-failed-volume-attachment.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/fix-creating-bdm-for-failed-volume-attachment.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,363 @@ +From f2061f0b6e94cf364ff565366ab71dfcc68cd2ef Mon Sep 17 00:00:00 2001 +From: git-harry +Date: Mon, 4 Aug 2014 15:17:29 +0100 +Subject: [PATCH] Fix creating bdm for failed volume attachment + +This commit modifies the reserve_block_device_name method to return the +bdm object, when the corresponding keyword argument is True. This +ensures the correct bdm is destroyed if the attach fails. Currently the +code assumes only one bdm per volume and so retrieving it can cause the +incorrect db entry to be returned. 
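The caller-side effect is easiest to see in nova/compute/api.py. A simplified snippet, assuming the surrounding attach_volume() scope (the real code wraps the cleanup in excutils.save_and_reraise_exception()):

    # The RPC call now returns the reserved row itself, so there is no
    # second lookup by volume_id that could match a different mapping.
    volume_bdm = self.compute_rpcapi.reserve_block_device_name(
        context, instance, device, volume_id,
        disk_bus=disk_bus, device_type=device_type)
    try:
        volume = self.volume_api.get(context, volume_id)
        self.volume_api.check_attach(context, volume, instance=instance)
        self.volume_api.reserve_volume(context, volume_id)
    except Exception:
        volume_bdm.destroy(context)  # destroys exactly the reserved row
        raise
    return volume_bdm.device_name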
+ +Closes-Bug: #1349888 +(cherry picked from commit 339a97d0f2d17f531cfc79e09cd8b8bc75ce6e2a) + +Conflicts: + nova/compute/api.py + nova/compute/manager.py + nova/compute/rpcapi.py + nova/tests/compute/test_compute.py + nova/tests/compute/test_rpcapi.py + nova/tests/integrated/v3/test_extended_volumes.py + +Change-Id: I22a6db76d2044331d1a846eb4b6d7338c50270e2 +--- + nova/compute/api.py | 10 ++--- + nova/compute/manager.py | 10 +++-- + nova/compute/rpcapi.py | 45 +++++++++++++++++------ + nova/tests/compute/test_compute.py | 35 +++++++++--------- + nova/tests/compute/test_rpcapi.py | 20 +++++----- + nova/tests/integrated/test_api_samples.py | 6 ++- + nova/tests/integrated/v3/test_extended_volumes.py | 8 ++-- + 7 files changed, 80 insertions(+), 54 deletions(-) + +diff --git a/nova/compute/api.py b/nova/compute/api.py +index fd15df6..5605728 100644 +--- a/nova/compute/api.py ++++ b/nova/compute/api.py +@@ -2786,11 +2786,9 @@ class API(base.Base): + # the same time. When db access is removed from + # compute, the bdm will be created here and we will + # have to make sure that they are assigned atomically. +- device = self.compute_rpcapi.reserve_block_device_name( +- context, device=device, instance=instance, volume_id=volume_id, +- disk_bus=disk_bus, device_type=device_type) +- volume_bdm = block_device_obj.BlockDeviceMapping.get_by_volume_id( +- context, volume_id) ++ volume_bdm = self.compute_rpcapi.reserve_block_device_name( ++ context, instance, device, volume_id, disk_bus=disk_bus, ++ device_type=device_type) + try: + volume = self.volume_api.get(context, volume_id) + self.volume_api.check_attach(context, volume, instance=instance) +@@ -2801,7 +2799,7 @@ class API(base.Base): + with excutils.save_and_reraise_exception(): + volume_bdm.destroy(context) + +- return device ++ return volume_bdm.device_name + + @wrap_check_policy + @check_instance_lock +diff --git a/nova/compute/manager.py b/nova/compute/manager.py +index 6c49a64..aace5c2 100644 +--- a/nova/compute/manager.py ++++ b/nova/compute/manager.py +@@ -586,7 +586,7 @@ class ComputeVirtAPI(virtapi.VirtAPI): + class ComputeManager(manager.Manager): + """Manages the running instances from creation to destruction.""" + +- target = messaging.Target(version='3.23') ++ target = messaging.Target(version='3.35') + + # How long to wait in seconds before re-issuing a shutdown + # signal to a instance during power off. The overall +@@ -4226,7 +4226,8 @@ class ComputeManager(manager.Manager): + @reverts_task_state + @wrap_instance_fault + def reserve_block_device_name(self, context, instance, device, +- volume_id, disk_bus=None, device_type=None): ++ volume_id, disk_bus=None, device_type=None, ++ return_bdm_object=False): + # NOTE(ndipanov): disk_bus and device_type will be set to None if not + # passed (by older clients) and defaulted by the virt driver. Remove + # default values on the next major RPC version bump. 
+@@ -4249,7 +4250,10 @@ class ComputeManager(manager.Manager): + disk_bus=disk_bus, device_type=device_type) + bdm.create(context) + +- return device_name ++ if return_bdm_object: ++ return bdm ++ else: ++ return device_name + + return do_reserve() + +diff --git a/nova/compute/rpcapi.py b/nova/compute/rpcapi.py +index a1adfbf..2e39fd9 100644 +--- a/nova/compute/rpcapi.py ++++ b/nova/compute/rpcapi.py +@@ -22,6 +22,7 @@ from oslo import messaging + from nova import block_device + from nova import exception + from nova.objects import base as objects_base ++from nova.objects import block_device as block_device_obj + from nova.openstack.common.gettextutils import _ + from nova.openstack.common import jsonutils + from nova import rpc +@@ -241,6 +242,28 @@ class ComputeAPI(object): + 3.21 - Made rebuild take new-world BDM objects + 3.22 - Made terminate_instance take new-world BDM objects + 3.23 - Added external_instance_event() ++ build_and_run_instance was added in Havana and not used or ++ documented. ++ ++ ... Icehouse supports message version 3.23. So, any changes to ++ existing methods in 3.x after that point should be done such that they ++ can handle the version_cap being set to 3.23. ++ ++ 3.24 - Update rescue_instance() to take optional rescue_image_ref ++ 3.25 - Make detach_volume take an object ++ 3.26 - Make live_migration() and ++ rollback_live_migration_at_destination() take an object ++ ... Removed run_instance() ++ 3.27 - Make run_instance() accept a new-world object ++ 3.28 - Update get_console_output() to accept a new-world object ++ 3.29 - Make check_instance_shared_storage accept a new-world object ++ 3.30 - Make remove_volume_connection() accept a new-world object ++ 3.31 - Add get_instance_diagnostics ++ 3.32 - Add destroy_disks and migrate_data optional parameters to ++ rollback_live_migration_at_destination() ++ 3.33 - Make build_and_run_instance() take a NetworkRequestList object ++ 3.34 - Add get_serial_console method ++ 3.35 - Make reserve_block_device_name return a BDM object + ''' + + VERSION_ALIASES = { +@@ -795,22 +818,22 @@ class ComputeAPI(object): + + def reserve_block_device_name(self, ctxt, instance, device, volume_id, + disk_bus=None, device_type=None): +- version = '3.16' + kw = {'instance': instance, 'device': device, + 'volume_id': volume_id, 'disk_bus': disk_bus, +- 'device_type': device_type} +- +- if not self.client.can_send_version(version): +- # NOTE(russellb) Havana compat +- version = self._get_compat_version('3.0', '2.3') +- kw['instance'] = jsonutils.to_primitive( +- objects_base.obj_to_primitive(instance)) +- del kw['disk_bus'] +- del kw['device_type'] ++ 'device_type': device_type, 'return_bdm_object': True} ++ if self.client.can_send_version('3.35'): ++ version = '3.35' ++ else: ++ del kw['return_bdm_object'] ++ version = '3.16' + + cctxt = self.client.prepare(server=_compute_host(None, instance), + version=version) +- return cctxt.call(ctxt, 'reserve_block_device_name', **kw) ++ volume_bdm = cctxt.call(ctxt, 'reserve_block_device_name', **kw) ++ if not isinstance(volume_bdm, block_device_obj.BlockDeviceMapping): ++ volume_bdm = block_device_obj.BlockDeviceMapping.get_by_volume_id( ++ ctxt, volume_id) ++ return volume_bdm + + def backup_instance(self, ctxt, instance, image_id, backup_type, + rotation): +diff --git a/nova/tests/compute/test_compute.py b/nova/tests/compute/test_compute.py +index 9fd2603..1642fc5 100644 +--- a/nova/tests/compute/test_compute.py ++++ b/nova/tests/compute/test_compute.py +@@ -1731,7 +1731,8 @@ class 
ComputeTestCase(BaseTestCase): + + bdms = [] + +- def fake_rpc_reserve_block_device_name(self, context, **kwargs): ++ def fake_rpc_reserve_block_device_name(self, context, instance, device, ++ volume_id, **kwargs): + bdm = block_device_obj.BlockDeviceMapping( + **{'source_type': 'volume', + 'destination_type': 'volume', +@@ -1740,6 +1741,7 @@ class ComputeTestCase(BaseTestCase): + 'device_name': '/dev/vdc'}) + bdm.create(context) + bdms.append(bdm) ++ return bdm + + self.stubs.Set(cinder.API, 'get', fake_volume_get) + self.stubs.Set(cinder.API, 'check_attach', fake_check_attach) +@@ -8760,6 +8762,10 @@ class ComputeAPITestCase(BaseTestCase): + fake_bdm = fake_block_device.FakeDbBlockDeviceDict( + {'source_type': 'volume', 'destination_type': 'volume', + 'volume_id': 'fake-volume-id', 'device_name': '/dev/vdb'}) ++ bdm = block_device_obj.BlockDeviceMapping()._from_db_object( ++ self.context, ++ block_device_obj.BlockDeviceMapping(), ++ fake_bdm) + instance = self._create_fake_instance() + fake_volume = {'id': 'fake-volume-id'} + +@@ -8768,23 +8774,18 @@ class ComputeAPITestCase(BaseTestCase): + mock.patch.object(cinder.API, 'check_attach'), + mock.patch.object(cinder.API, 'reserve_volume'), + mock.patch.object(compute_rpcapi.ComputeAPI, +- 'reserve_block_device_name', return_value='/dev/vdb'), +- mock.patch.object(db, 'block_device_mapping_get_by_volume_id', +- return_value=fake_bdm), ++ 'reserve_block_device_name', return_value=bdm), + mock.patch.object(compute_rpcapi.ComputeAPI, 'attach_volume') + ) as (mock_get, mock_check_attach, mock_reserve_vol, mock_reserve_bdm, +- mock_bdm_get, mock_attach): ++ mock_attach): + + self.compute_api.attach_volume( + self.context, instance, 'fake-volume-id', + '/dev/vdb', 'ide', 'cdrom') + + mock_reserve_bdm.assert_called_once_with( +- self.context, device='/dev/vdb', instance=instance, +- volume_id='fake-volume-id', disk_bus='ide', +- device_type='cdrom') +- mock_bdm_get.assert_called_once_with( +- self.context, 'fake-volume-id', []) ++ self.context, instance, '/dev/vdb', 'fake-volume-id', ++ disk_bus='ide', device_type='cdrom') + self.assertEqual(mock_get.call_args, + mock.call(self.context, 'fake-volume-id')) + self.assertEqual(mock_check_attach.call_args, +@@ -8815,8 +8816,12 @@ class ComputeAPITestCase(BaseTestCase): + def fake_rpc_attach_volume(self, context, **kwargs): + called['fake_rpc_attach_volume'] = True + +- def fake_rpc_reserve_block_device_name(self, context, **kwargs): ++ def fake_rpc_reserve_block_device_name(self, context, instance, device, ++ volume_id, **kwargs): + called['fake_rpc_reserve_block_device_name'] = True ++ bdm = block_device_obj.BlockDeviceMapping() ++ bdm['device_name'] = '/dev/vdb' ++ return bdm + + self.stubs.Set(cinder.API, 'get', fake_volume_get) + self.stubs.Set(cinder.API, 'check_attach', fake_check_attach) +@@ -8828,17 +8833,11 @@ class ComputeAPITestCase(BaseTestCase): + self.stubs.Set(compute_rpcapi.ComputeAPI, 'attach_volume', + fake_rpc_attach_volume) + +- self.mox.StubOutWithMock(block_device_obj.BlockDeviceMapping, +- 'get_by_volume_id') +- block_device_obj.BlockDeviceMapping.get_by_volume_id( +- self.context, mox.IgnoreArg()).AndReturn('fake-bdm') +- self.mox.ReplayAll() +- + instance = self._create_fake_instance() + self.compute_api.attach_volume(self.context, instance, 1, device=None) + self.assertTrue(called.get('fake_check_attach')) + self.assertTrue(called.get('fake_reserve_volume')) +- self.assertTrue(called.get('fake_reserve_volume')) ++ self.assertTrue(called.get('fake_volume_get')) + 
self.assertTrue(called.get('fake_rpc_reserve_block_device_name')) + self.assertTrue(called.get('fake_rpc_attach_volume')) + +diff --git a/nova/tests/compute/test_rpcapi.py b/nova/tests/compute/test_rpcapi.py +index d4026ea..1a2b9f5 100644 +--- a/nova/tests/compute/test_rpcapi.py ++++ b/nova/tests/compute/test_rpcapi.py +@@ -24,6 +24,7 @@ from oslo.config import cfg + from nova.compute import rpcapi as compute_rpcapi + from nova import context + from nova import db ++from nova.objects import block_device as objects_block_dev + from nova.openstack.common import jsonutils + from nova import test + from nova.tests import fake_block_device +@@ -88,7 +89,13 @@ class ComputeRpcAPITestCase(test.TestCase): + rpc_mock, prepare_mock, csv_mock + ): + prepare_mock.return_value = rpcapi.client +- rpc_mock.return_value = 'foo' if rpc_method == 'call' else None ++ if 'return_bdm_object' in kwargs: ++ del kwargs['return_bdm_object'] ++ rpc_mock.return_value = objects_block_dev.BlockDeviceMapping() ++ elif rpc_method == 'call': ++ rpc_mock.return_value = 'foo' ++ else: ++ rpc_mock.return_value = None + csv_mock.side_effect = ( + lambda v: orig_prepare(version=v).can_send_version()) + +@@ -495,14 +502,9 @@ class ComputeRpcAPITestCase(test.TestCase): + + def test_reserve_block_device_name(self): + self._test_compute_api('reserve_block_device_name', 'call', +- instance=self.fake_instance, device='device', volume_id='id', +- disk_bus='ide', device_type='cdrom', version='3.16') +- +- # NOTE(russellb) Havana compat +- self.flags(compute='havana', group='upgrade_levels') +- self._test_compute_api('reserve_block_device_name', 'call', +- instance=self.fake_instance, device='device', volume_id='id', +- version='2.3') ++ instance=self.fake_instance, device='device', ++ volume_id='id', disk_bus='ide', device_type='cdrom', ++ version='3.35', return_bdm_object=True) + + def refresh_provider_fw_rules(self): + self._test_compute_api('refresh_provider_fw_rules', 'cast', +diff --git a/nova/tests/integrated/test_api_samples.py b/nova/tests/integrated/test_api_samples.py +index 3098aff..b2eb41b 100644 +--- a/nova/tests/integrated/test_api_samples.py ++++ b/nova/tests/integrated/test_api_samples.py +@@ -3855,13 +3855,15 @@ class VolumeAttachmentsSampleJsonTest(VolumeAttachmentsSampleBase): + extension_name = ("nova.api.openstack.compute.contrib.volumes.Volumes") + + def test_attach_volume_to_server(self): +- device_name = '/dev/vdd' + self.stubs.Set(cinder.API, 'get', fakes.stub_volume_get) + self.stubs.Set(cinder.API, 'check_attach', lambda *a, **k: None) + self.stubs.Set(cinder.API, 'reserve_volume', lambda *a, **k: None) ++ device_name = '/dev/vdd' ++ bdm = block_device_obj.BlockDeviceMapping() ++ bdm['device_name'] = device_name + self.stubs.Set(compute_manager.ComputeManager, + "reserve_block_device_name", +- lambda *a, **k: device_name) ++ lambda *a, **k: bdm) + self.stubs.Set(compute_manager.ComputeManager, + 'attach_volume', + lambda *a, **k: None) +diff --git a/nova/tests/integrated/v3/test_extended_volumes.py b/nova/tests/integrated/v3/test_extended_volumes.py +index 22e0479..9f24208 100644 +--- a/nova/tests/integrated/v3/test_extended_volumes.py ++++ b/nova/tests/integrated/v3/test_extended_volumes.py +@@ -78,20 +78,18 @@ class ExtendedVolumesSampleJsonTests(test_servers.ServersSampleBase): + self._verify_response('servers-detail-resp', subs, response, 200) + + def test_attach_volume(self): ++ bdm = block_device_obj.BlockDeviceMapping() + device_name = '/dev/vdd' +- disk_bus = 'ide' +- device_type = 'cdrom' ++ 
bdm['device_name'] = device_name + self.stubs.Set(cinder.API, 'get', fakes.stub_volume_get) + self.stubs.Set(cinder.API, 'check_attach', lambda *a, **k: None) + self.stubs.Set(cinder.API, 'reserve_volume', lambda *a, **k: None) + self.stubs.Set(compute_manager.ComputeManager, + "reserve_block_device_name", +- lambda *a, **k: device_name) ++ lambda *a, **k: bdm) + self.stubs.Set(compute_manager.ComputeManager, + 'attach_volume', + lambda *a, **k: None) +- self.stubs.Set(block_device_obj.BlockDeviceMapping, 'get_by_volume_id', +- classmethod(lambda *a, **k: None)) + + volume = fakes.stub_volume_get(None, context.get_admin_context(), + 'a26887c6-c47b-4654-abb5-dfadf7d3f803') +-- +1.9.1 + diff -Nru nova-2014.1.3/debian/patches/Fix-live-migrations-usage-of-the-wrong-connector-inf.patch nova-2014.1.5/debian/patches/Fix-live-migrations-usage-of-the-wrong-connector-inf.patch --- nova-2014.1.3/debian/patches/Fix-live-migrations-usage-of-the-wrong-connector-inf.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/Fix-live-migrations-usage-of-the-wrong-connector-inf.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,139 @@ +From 3ec5288e964a1eb187b016845738baeb4f03f81b Mon Sep 17 00:00:00 2001 +From: Anthony Lee +Date: Thu, 16 Jul 2015 13:02:00 -0700 +Subject: [PATCH 1/4] Fix live-migrations usage of the wrong connector + information + +During the post_live_migration step for the Nova libvirt driver +an incorrect assumption is being made about the connector +information being sent to _disconnect_volume. It is assumed that +the connection information on the source and destination is the +same but that is not always the case. The BDM, where the +connector information is being retrieved from only contains the +connection information for the destination. This will not work +when trying to disconnect volumes from the source during live +migration as the properties such as the target_lun and +initiator_target_map could be different. This ends up leaving +behind dangling LUNs and possibly removing the incorrect +volume's LUNs. + +The solution proposed here utilizes the connection_info that +can be retrieved for a host from Cinder's initialize_connection +API. This connection information contains the correct data for +the source host and allows volume LUNs to be removed properly. + +Conflicts: + nova/tests/unit/virt/libvirt/test_driver.py + +NOTE(mriedem): The conflicts are due to the tests being moved +in Kilo and 41f80226e0a1f73af76c7968617ebfda0aeb40b1 not being +in stable/juno (renamed conn var to drvr in libvirt tests). + +NOTE(wolsen): The conflicts in icehouse are due to the driver +invocation changing between icehouse and juno. 
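The shape of the fix is small; here is a simplified sketch of the disconnect loop the hunks below add to post_live_migration, assuming driver, volume_api, and block_device_mapping are in scope as in the patched method:

    connector = driver.get_volume_connector(instance)
    for vol in block_device_mapping:
        # The BDM holds the destination's view; only the serial (the
        # volume id) and the volume-scoped multipath_id are safe to
        # reuse when disconnecting on the source.
        connection_info = volume_api.initialize_connection(
            context, vol['connection_info']['serial'], connector)
        data = vol['connection_info']['data']
        if 'multipath_id' in data:
            connection_info['data']['multipath_id'] = data['multipath_id']
        disk_dev = vol['mount_device'].rpartition('/')[2]
        driver.volume_driver_method('disconnect_volume', connection_info,
                                    disk_dev)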
+ +Change-Id: I3dfb75eb58dfbc66b218bcee473af4c2ac282eb6 +Closes-Bug: #1475411 +Closes-Bug: #1288039 +Closes-Bug: #1423772 +(cherry picked from commit 587092c909e15e983f7aef31d7bc0862271a32c7) +(cherry picked from commit 9d2abbd9ab60ca873650759feaba98b4d8d35566) + +Conflicts: + nova/tests/virt/libvirt/test_libvirt.py +--- + nova/tests/virt/libvirt/test_libvirt.py | 31 +++++++++++++++++++++++++------ + nova/virt/libvirt/driver.py | 18 +++++++++++++++++- + 2 files changed, 42 insertions(+), 7 deletions(-) + +diff --git a/nova/tests/virt/libvirt/test_libvirt.py b/nova/tests/virt/libvirt/test_libvirt.py +index ce9914d..096fb60 100644 +--- a/nova/tests/virt/libvirt/test_libvirt.py ++++ b/nova/tests/virt/libvirt/test_libvirt.py +@@ -4496,10 +4496,22 @@ class LibvirtConnTestCase(test.TestCase): + + def test_post_live_migration(self): + vol = {'block_device_mapping': [ +- {'connection_info': 'dummy1', 'mount_device': '/dev/sda'}, +- {'connection_info': 'dummy2', 'mount_device': '/dev/sdb'}]} ++ {'connection_info': { ++ 'data': {'multipath_id': 'dummy1'}, ++ 'serial': 'fake_serial1'}, ++ 'mount_device': '/dev/sda', ++ }, ++ {'connection_info': { ++ 'data': {}, ++ 'serial': 'fake_serial2'}, ++ 'mount_device': '/dev/sdb', }]} ++ ++ def fake_initialize_connection(context, volume_id, connector): ++ return {'data': {}} ++ + conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + ++ fake_connector = {'host': 'fake'} + inst_ref = {'id': 'foo'} + cntx = context.get_admin_context() + +@@ -4507,17 +4519,24 @@ class LibvirtConnTestCase(test.TestCase): + with contextlib.nested( + mock.patch.object(driver, 'block_device_info_get_mapping', + return_value=vol['block_device_mapping']), ++ mock.patch.object(conn, "get_volume_connector", ++ return_value=fake_connector), ++ mock.patch.object(conn._volume_api, "initialize_connection", ++ side_effect=fake_initialize_connection), + mock.patch.object(conn, 'volume_driver_method') +- ) as (block_device_info_get_mapping, volume_driver_method): ++ ) as (block_device_info_get_mapping, get_volume_connector, ++ initialize_connection, volume_driver_method): + conn.post_live_migration(cntx, inst_ref, vol) + + block_device_info_get_mapping.assert_has_calls([ + mock.call(vol)]) ++ get_volume_connector.assert_has_calls([ ++ mock.call(inst_ref)]) + volume_driver_method.assert_has_calls([ + mock.call('disconnect_volume', +- v['connection_info'], +- v['mount_device'].rpartition("/")[2]) +- for v in vol['block_device_mapping']]) ++ {'data': {'multipath_id': 'dummy1'}}, 'sda'), ++ mock.call('disconnect_volume', ++ {'data': {}}, 'sdb')]) + + def test_get_instance_disk_info_excludes_volumes(self): + # Test data +diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py +index f7fd824..95792fc 100644 +--- a/nova/virt/libvirt/driver.py ++++ b/nova/virt/libvirt/driver.py +@@ -4735,8 +4735,24 @@ class LibvirtDriver(driver.ComputeDriver): + # Disconnect from volume server + block_device_mapping = driver.block_device_info_get_mapping( + block_device_info) ++ connector = self.get_volume_connector(instance) ++ volume_api = self._volume_api + for vol in block_device_mapping: +- connection_info = vol['connection_info'] ++ # Retrieve connection info from Cinder's initialize_connection API. ++ # The info returned will be accurate for the source server. ++ volume_id = vol['connection_info']['serial'] ++ connection_info = volume_api.initialize_connection(context, ++ volume_id, ++ connector) ++ ++ # Pull out multipath_id from the bdm information. 
The ++ # multipath_id can be placed into the connection info ++ # because it is based off of the volume and will be the ++ # same on the source and destination hosts. ++ if 'multipath_id' in vol['connection_info']['data']: ++ multipath_id = vol['connection_info']['data']['multipath_id'] ++ connection_info['data']['multipath_id'] = multipath_id ++ + disk_dev = vol['mount_device'].rpartition("/")[2] + self.volume_driver_method('disconnect_volume', + connection_info, +-- +1.9.1 + diff -Nru nova-2014.1.3/debian/patches/fix-requirements.patch nova-2014.1.5/debian/patches/fix-requirements.patch --- nova-2014.1.3/debian/patches/fix-requirements.patch 2014-11-17 19:55:19.000000000 +0000 +++ nova-2014.1.5/debian/patches/fix-requirements.patch 2016-09-09 09:41:48.000000000 +0000 @@ -4,13 +4,13 @@ --- a/requirements.txt +++ b/requirements.txt @@ -25,9 +25,8 @@ - python-neutronclient>=2.3.4,<3 - python-glanceclient>=0.9.0 - python-keystoneclient>=0.7.0 --six>=1.6.0 -+six>=1.5.2 + python-neutronclient>=2.3.4,<2.3.11 + python-glanceclient>=0.9.0,!=0.14.0,<=0.14.2 + python-keystoneclient>=0.7.0,<0.12.0 +-six>=1.6.0,<=1.9.0 ++six>=1.5.2,<=1.9.0 stevedore>=0.14 -websockify>=0.5.1,<0.6 wsgiref>=0.1.2 - oslo.config>=1.2.0 - oslo.rootwrap + oslo.config>=1.2.0,<1.5 + oslo.rootwrap<1.4 diff -Nru nova-2014.1.3/debian/patches/Fix-wrong-used-ProcessExecutionError-exception.patch nova-2014.1.5/debian/patches/Fix-wrong-used-ProcessExecutionError-exception.patch --- nova-2014.1.3/debian/patches/Fix-wrong-used-ProcessExecutionError-exception.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/Fix-wrong-used-ProcessExecutionError-exception.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,99 @@ +From 3a55f1422e81a642ae914e5b47f490639def34ea Mon Sep 17 00:00:00 2001 +From: Wangpan +Date: Thu, 17 Apr 2014 13:44:55 +0800 +Subject: [PATCH 2/4] Fix wrong used ProcessExecutionError exception + +This class has been moved to nova.openstack.common.processutils, +but a wrong usage is exists in nova.virt.libvirt.volume, +correct here. + +Conflicts: + nova/tests/virt/libvirt/test_libvirt_volume.py + +NOTE(wolsen): conflicts are due to test restructuring between +stable/juno and stable/icehouse. + +Closes-bug: #1308839 +Change-Id: I76f99b63dc5097b462dcff6ff63cbbb13d7580fb +(cherry picked from commit aa9383081230b92ecc7c1b176cb3eb62a237949c) +--- + nova/tests/virt/libvirt/test_libvirt_volume.py | 42 +++++++++++++++++++++++++- + nova/virt/libvirt/volume.py | 3 +- + 2 files changed, 42 insertions(+), 3 deletions(-) + +--- a/nova/tests/virt/libvirt/test_libvirt_volume.py ++++ b/nova/tests/virt/libvirt/test_libvirt_volume.py +@@ -14,14 +14,15 @@ + # under the License. 
+ + import contextlib ++import fixtures + import os + import time + +-import fixtures + import mock + from oslo.config import cfg + + from nova import exception ++from nova.openstack.common import processutils + from nova.storage import linuxscsi + from nova import test + from nova.tests.virt.libvirt import fake_libvirt_utils +@@ -311,6 +312,45 @@ + '/sys/block/%s/device/delete' % dev_name)] + self.assertEqual(self.executes, expected_commands) + ++ def test_libvirt_iscsi_driver_disconnect_multipath_error(self): ++ libvirt_driver = volume.LibvirtISCSIVolumeDriver(self.fake_conn) ++ devs = ['/dev/disk/by-path/ip-%s-iscsi-%s-lun-2' % (self.location, ++ self.iqn)] ++ with contextlib.nested( ++ mock.patch.object(os.path, 'exists', return_value=True), ++ mock.patch.object(self.fake_conn, 'get_all_block_devices', ++ return_value=devs), ++ mock.patch.object(libvirt_driver, '_rescan_multipath'), ++ mock.patch.object(libvirt_driver, '_run_multipath'), ++ mock.patch.object(libvirt_driver, '_get_multipath_device_name', ++ return_value='/dev/mapper/fake-multipath-devname'), ++ mock.patch.object(libvirt_driver, ++ '_get_target_portals_from_iscsiadm_output', ++ return_value=[('fake-ip', 'fake-portal')]), ++ mock.patch.object(libvirt_driver, '_get_multipath_iqn', ++ return_value='fake-portal'), ++ ) as (mock_exists, mock_devices, mock_rescan_multipath, ++ mock_run_multipath, mock_device_name, mock_get_portals, ++ mock_get_iqn): ++ mock_run_multipath.side_effect = processutils.ProcessExecutionError ++ name = 'volume-00000001' ++ vol = {'id': 1, 'name': self.name} ++ connection_info = self.iscsi_connection(vol, self.location, ++ self.iqn) ++ conf = libvirt_driver.connect_volume(connection_info, ++ self.disk_info) ++ tree = conf.format_dom() ++ dev_name = 'ip-%s-iscsi-%s-lun-1' % (self.location, self.iqn) ++ dev_str = '/dev/disk/by-path/%s' % dev_name ++ self.assertEqual('block', tree.get('type')) ++ self.assertEqual(dev_str, tree.find('./source').get('dev')) ++ ++ libvirt_driver.use_multipath = True ++ libvirt_driver.disconnect_volume(connection_info, "vde") ++ mock_run_multipath.assert_called_once_with( ++ ['-f', 'fake-multipath-devname'], ++ check_exit_code=[0, 1]) ++ + def iser_connection(self, volume, location, iqn): + return { + 'driver_volume_type': 'iser', +--- a/nova/virt/libvirt/volume.py ++++ b/nova/virt/libvirt/volume.py +@@ -399,7 +399,7 @@ + try: + self._run_multipath(['-f', disk_descriptor], + check_exit_code=[0, 1]) +- except exception.ProcessExecutionError as exc: ++ except processutils.ProcessExecutionError as exc: + # Because not all cinder drivers need to remove the dev mapper, + # here just logs a warning to avoid affecting those drivers in + # exceptional cases. diff -Nru nova-2014.1.3/debian/patches/libvirt-Handle-unsupported-host-capabilities.patch nova-2014.1.5/debian/patches/libvirt-Handle-unsupported-host-capabilities.patch --- nova-2014.1.3/debian/patches/libvirt-Handle-unsupported-host-capabilities.patch 2014-11-17 19:55:19.000000000 +0000 +++ nova-2014.1.5/debian/patches/libvirt-Handle-unsupported-host-capabilities.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,303 +0,0 @@ -Description: Fix exception when starting LXC containers with libvirt-lxc. -Author: Chuck Short -Forwarded: Not Needed. 
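The dropped distro patch carried a fake that mirrored libvirt-python's error surface so tests behave the same with or without the real bindings installed. A usage sketch of the convenience constructor it provided, assuming fakelibvirt defines the VIR_ERR_NO_SUPPORT and VIR_FROM_QEMU constants as the real bindings do:

    from nova.tests.virt.libvirt import fakelibvirt

    # err is a 9-tuple: (code, domain, message, level, str1, str2,
    # str3, int1, int2); make_libvirtError fills it in one call.
    exc = fakelibvirt.make_libvirtError(
        fakelibvirt.libvirtError, 'virConnectBaselineCPU not supported',
        error_code=fakelibvirt.VIR_ERR_NO_SUPPORT,
        error_domain=fakelibvirt.VIR_FROM_QEMU)
    assert exc.get_error_code() == fakelibvirt.VIR_ERR_NO_SUPPORT

Because self.err defaults to None, each get_* accessor degrades to returning None rather than raising, which keeps tests that only construct the exception from having to populate the full tuple.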
---- a/nova/tests/virt/libvirt/fakelibvirt.py -+++ b/nova/tests/virt/libvirt/fakelibvirt.py -@@ -172,18 +172,76 @@ - - - class libvirtError(Exception): -- def __init__(self, msg, -- error_code=VIR_ERR_INTERNAL_ERROR, -- error_domain=VIR_FROM_QEMU): -- self.error_code = error_code -- self.error_domain = error_domain -- Exception(self, msg) -+ """This class was copied and slightly modified from -+ `libvirt-python:libvirt-override.py`. -+ -+ Since a test environment will use the real `libvirt-python` version of -+ `libvirtError` if it's installed and not this fake, we need to maintain -+ strict compatability with the original class, including `__init__` args -+ and instance-attributes. -+ -+ To create a libvirtError instance you should: -+ -+ # Create an unsupported error exception -+ exc = libvirtError('my message') -+ exc.err = (libvirt.VIR_ERR_NO_SUPPORT,) -+ -+ self.err is a tuple of form: -+ (error_code, error_domain, error_message, error_level, str1, str2, -+ str3, int1, int2) -+ -+ Alternatively, you can use the `make_libvirtError` convenience function to -+ allow you to specify these attributes in one shot. -+ """ -+ def __init__(self, defmsg, conn=None, dom=None, net=None, pool=None, -+ vol=None): -+ Exception.__init__(self, defmsg) -+ self.err = None - - def get_error_code(self): -- return self.error_code -+ if self.err is None: -+ return None -+ return self.err[0] - - def get_error_domain(self): -- return self.error_domain -+ if self.err is None: -+ return None -+ return self.err[1] -+ -+ def get_error_message(self): -+ if self.err is None: -+ return None -+ return self.err[2] -+ -+ def get_error_level(self): -+ if self.err is None: -+ return None -+ return self.err[3] -+ -+ def get_str1(self): -+ if self.err is None: -+ return None -+ return self.err[4] -+ -+ def get_str2(self): -+ if self.err is None: -+ return None -+ return self.err[5] -+ -+ def get_str3(self): -+ if self.err is None: -+ return None -+ return self.err[6] -+ -+ def get_int1(self): -+ if self.err is None: -+ return None -+ return self.err[7] -+ -+ def get_int2(self): -+ if self.err is None: -+ return None -+ return self.err[8] - - - class NWFilter(object): -@@ -219,8 +277,10 @@ - try: - tree = etree.fromstring(xml) - except etree.ParseError: -- raise libvirtError("Invalid XML.", -- VIR_ERR_XML_DETAIL, VIR_FROM_DOMAIN) -+ raise make_libvirtError( -+ libvirtError, "Invalid XML.", -+ error_code=VIR_ERR_XML_DETAIL, -+ error_domain=VIR_FROM_DOMAIN) - - definition = {} - -@@ -369,7 +429,11 @@ - 123456789L] - - def migrateToURI(self, desturi, flags, dname, bandwidth): -- raise libvirtError("Migration always fails for fake libvirt!") -+ raise make_libvirtError( -+ libvirtError, -+ "Migration always fails for fake libvirt!", -+ error_code=VIR_ERR_INTERNAL_ERROR, -+ error_domain=VIR_FROM_QEMU) - - def attachDevice(self, xml): - disk_info = _parse_disk_info(etree.fromstring(xml)) -@@ -380,7 +444,11 @@ - def attachDeviceFlags(self, xml, flags): - if (flags & VIR_DOMAIN_AFFECT_LIVE and - self._state != VIR_DOMAIN_RUNNING): -- raise libvirtError("AFFECT_LIVE only allowed for running domains!") -+ raise make_libvirtError( -+ libvirtError, -+ "AFFECT_LIVE only allowed for running domains!", -+ error_code=VIR_ERR_INTERNAL_ERROR, -+ error_domain=VIR_FROM_QEMU) - self.attachDevice(xml) - - def detachDevice(self, xml): -@@ -533,9 +601,11 @@ - 'test:///default'] - - if uri not in uri_whitelist: -- raise libvirtError("libvirt error: no connection driver " -- "available for No connection for URI %s" % uri, -- 5, 0) -+ raise 
make_libvirtError( -+ libvirtError, -+ "libvirt error: no connection driver " -+ "available for No connection for URI %s" % uri, -+ error_code=5, error_domain=0) - - self.readonly = readonly - self._uri = uri -@@ -594,16 +664,20 @@ - def lookupByID(self, id): - if id in self._running_vms: - return self._running_vms[id] -- raise libvirtError('Domain not found: no domain with matching ' -- 'id %d' % id, -- VIR_ERR_NO_DOMAIN, VIR_FROM_QEMU) -+ raise make_libvirtError( -+ libvirtError, -+ 'Domain not found: no domain with matching id %d' % id, -+ error_code=VIR_ERR_NO_DOMAIN, -+ error_domain=VIR_FROM_QEMU) - - def lookupByName(self, name): - if name in self._vms: - return self._vms[name] -- raise libvirtError('Domain not found: no domain with matching ' -- 'name "%s"' % name, -- VIR_ERR_NO_DOMAIN, VIR_FROM_QEMU) -+ raise make_libvirtError( -+ libvirtError, -+ 'Domain not found: no domain with matching name "%s"' % name, -+ error_code=VIR_ERR_NO_DOMAIN, -+ error_domain=VIR_FROM_QEMU) - - def _emit_lifecycle(self, dom, event, detail): - if VIR_DOMAIN_EVENT_ID_LIFECYCLE not in self._event_callbacks: -@@ -904,14 +978,21 @@ - 'user': 26728850000000L, - 'iowait': 6121490000000L} - else: -- raise libvirtError("invalid argument: Invalid cpu number") -+ raise make_libvirtError( -+ libvirtError, -+ "invalid argument: Invalid cpu number", -+ error_code=VIR_ERR_INTERNAL_ERROR, -+ error_domain=VIR_FROM_QEMU) - - def nwfilterLookupByName(self, name): - try: - return self._nwfilters[name] - except KeyError: -- raise libvirtError("no nwfilter with matching name %s" % name, -- VIR_ERR_NO_NWFILTER, VIR_FROM_NWFILTER) -+ raise make_libvirtError( -+ libvirtError, -+ "no nwfilter with matching name %s" % name, -+ error_code=VIR_ERR_NO_NWFILTER, -+ error_domain=VIR_FROM_NWFILTER) - - def nwfilterDefineXML(self, xml): - nwfilter = NWFilter(self, xml) -@@ -964,6 +1045,24 @@ - pass - - -+def make_libvirtError(error_class, msg, error_code=None, -+ error_domain=None, error_message=None, -+ error_level=None, str1=None, str2=None, str3=None, -+ int1=None, int2=None): -+ """Convenience function for creating `libvirtError` exceptions which -+ allow you to specify arguments in constructor without having to manipulate -+ the `err` tuple directly. -+ -+ We need to pass in `error_class` to this function because it may be -+ `libvirt.libvirtError` or `fakelibvirt.libvirtError` depending on whether -+ `libvirt-python` is installed. -+ """ -+ exc = error_class(msg) -+ exc.err = (error_code, error_domain, error_message, error_level, -+ str1, str2, str3, int1, int2) -+ return exc -+ -+ - virDomain = Domain - - ---- a/nova/virt/libvirt/driver.py -+++ b/nova/virt/libvirt/driver.py -@@ -77,6 +77,7 @@ - from nova.openstack.common import excutils - from nova.openstack.common import fileutils - from nova.openstack.common.gettextutils import _ -+from nova.openstack.common.gettextutils import _LW - from nova.openstack.common import importutils - from nova.openstack.common import jsonutils - from nova.openstack.common import log as logging -@@ -2888,9 +2889,14 @@ - # this -1 checking should be removed later. 
- if features and features != -1: - self._caps.host.cpu.parse_str(features) -- except libvirt.VIR_ERR_NO_SUPPORT: -- # Note(yjiang5): ignore if libvirt has no support -- pass -+ except libvirt.libvirtError as ex: -+ error_code = ex.get_error_code() -+ if error_code == libvirt.VIR_ERR_NO_SUPPORT: -+ LOG.warn(_LW("URI %(uri)s does not support full set" -+ " of host capabilities: " "%(error)s"), -+ {'uri': self.uri(), 'error': ex}) -+ else: -+ raise - return self._caps - - def get_host_uuid(self): ---- a/nova/tests/virt/libvirt/test_libvirt.py -+++ b/nova/tests/virt/libvirt/test_libvirt.py -@@ -83,7 +83,7 @@ - try: - import libvirt - except ImportError: -- import nova.tests.virt.libvirt.fakelibvirt as libvirt -+ libvirt = fakelibvirt - libvirt_driver.libvirt = libvirt - - -@@ -887,6 +887,42 @@ - caps = conn.get_host_capabilities() - self.assertIn('aes', [x.name for x in caps.host.cpu.features]) - -+ def test_baseline_cpu_not_supported(self): -+ conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) -+ -+ # `mock` has trouble stubbing attributes that don't exist yet, so -+ # fallback to plain-Python attribute setting/deleting -+ cap_str = 'VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES' -+ if not hasattr(libvirt_driver.libvirt, cap_str): -+ setattr(libvirt_driver.libvirt, cap_str, True) -+ self.addCleanup(delattr, libvirt_driver.libvirt, cap_str) -+ -+ # Handle just the NO_SUPPORT error -+ not_supported_exc = fakelibvirt.make_libvirtError( -+ libvirt.libvirtError, -+ 'this function is not supported by the connection driver:' -+ ' virConnectBaselineCPU', -+ error_code=libvirt.VIR_ERR_NO_SUPPORT) -+ -+ with mock.patch.object(conn._conn, 'baselineCPU', -+ side_effect=not_supported_exc): -+ caps = conn.get_host_capabilities() -+ self.assertEqual(vconfig.LibvirtConfigCaps, type(caps)) -+ self.assertNotIn('aes', [x.name for x in caps.host.cpu.features]) -+ -+ # Clear cached result so we can test again... 
-+ conn._caps = None -+ -+ # Other errors should not be caught -+ other_exc = fakelibvirt.make_libvirtError( -+ libvirt.libvirtError, -+ 'other exc', -+ error_code=libvirt.VIR_ERR_NO_DOMAIN) -+ -+ with mock.patch.object(conn._conn, 'baselineCPU', -+ side_effect=other_exc): -+ self.assertRaises(libvirt.libvirtError, conn.get_host_capabilities) -+ - def test_lxc_get_host_capabilities_failed(self): - conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) - diff -Nru nova-2014.1.3/debian/patches/protect-against-upgrade-rpc-ver-mismatch.patch nova-2014.1.5/debian/patches/protect-against-upgrade-rpc-ver-mismatch.patch --- nova-2014.1.3/debian/patches/protect-against-upgrade-rpc-ver-mismatch.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/protect-against-upgrade-rpc-ver-mismatch.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,46 @@ +diff --git a/nova/compute/rpcapi.py b/nova/compute/rpcapi.py +index 2e39fd9..0b4eabe 100644 +--- a/nova/compute/rpcapi.py ++++ b/nova/compute/rpcapi.py +@@ -816,6 +816,25 @@ class ComputeAPI(object): + cctxt = self.client.prepare(server=host, version=version) + return cctxt.call(ctxt, 'get_host_uptime') + ++ def _reserve_block_device_name(self, ctxt, instance, device, volume_id, ++ disk_bus=None, device_type=None): ++ version = '3.16' ++ kw = {'instance': instance, 'device': device, ++ 'volume_id': volume_id, 'disk_bus': disk_bus, ++ 'device_type': device_type} ++ ++ if not self.client.can_send_version(version): ++ # NOTE(russellb) Havana compat ++ version = self._get_compat_version('3.0', '2.3') ++ kw['instance'] = jsonutils.to_primitive( ++ objects_base.obj_to_primitive(instance)) ++ del kw['disk_bus'] ++ del kw['device_type'] ++ ++ cctxt = self.client.prepare(server=_compute_host(None, instance), ++ version=version) ++ return cctxt.call(ctxt, 'reserve_block_device_name', **kw) ++ + def reserve_block_device_name(self, ctxt, instance, device, volume_id, + disk_bus=None, device_type=None): + kw = {'instance': instance, 'device': device, +@@ -829,7 +848,14 @@ class ComputeAPI(object): + + cctxt = self.client.prepare(server=_compute_host(None, instance), + version=version) +- volume_bdm = cctxt.call(ctxt, 'reserve_block_device_name', **kw) ++ try: ++ volume_bdm = cctxt.call(ctxt, 'reserve_block_device_name', **kw) ++ except messaging.rpc.client.RemoteError: ++ # NOTE(dosaboy): catch rpc api version mismatch (see bug 1506257) ++ volume_bdm = self._reserve_block_device_name(ctxt, instance, ++ device, volume_id, ++ disk_bus, device_type) ++ + if not isinstance(volume_bdm, block_device_obj.BlockDeviceMapping): + volume_bdm = block_device_obj.BlockDeviceMapping.get_by_volume_id( + ctxt, volume_id) diff -Nru nova-2014.1.3/debian/patches/remove_useless_state_check.patch nova-2014.1.5/debian/patches/remove_useless_state_check.patch --- nova-2014.1.3/debian/patches/remove_useless_state_check.patch 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/debian/patches/remove_useless_state_check.patch 2016-09-09 09:41:48.000000000 +0000 @@ -0,0 +1,114 @@ +commit 9f9ea6301ca27a1d9f15021e9495196aac92a91a +Author: Chris Yeoh +Date: Fri Mar 14 14:41:30 2014 +1030 + + Remove unnecessary passing of task_state to check_instance_state + + Remove cases where task_state=[None] was passed to check_instance_state + when that is essentially the default value anyway + + Change-Id: I49b6449b9ae43a5cfcf5a1ccac5ee9a64d2b3f3c + (cherry picked from commit e7cbb7a28c50a1e4deb3111ab80e7475d0eca4e1) + +diff --git a/nova/compute/api.py b/nova/compute/api.py +index 
fd15df6..d939aaf 100644 +--- a/nova/compute/api.py ++++ b/nova/compute/api.py +@@ -1775,8 +1775,7 @@ class API(base.Base): + @check_instance_lock + @check_instance_host + @check_instance_cell +- @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.ERROR], +- task_state=[None]) ++ @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.ERROR]) + def stop(self, context, instance, do_cast=True): + """Stop an instance.""" + self.force_stop(context, instance, do_cast) +@@ -2148,8 +2147,7 @@ class API(base.Base): + @check_instance_lock + @check_instance_cell + @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, +- vm_states.ERROR], +- task_state=[None]) ++ vm_states.ERROR]) + def rebuild(self, context, instance, image_href, admin_password, + files_to_inject=None, **kwargs): + """Rebuild the given instance with the provided attributes.""" +@@ -2385,8 +2383,7 @@ class API(base.Base): + @wrap_check_policy + @check_instance_lock + @check_instance_cell +- @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED], +- task_state=[None]) ++ @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED]) + def resize(self, context, instance, flavor_id=None, + **extra_instance_updates): + """Resize (ie, migrate) a running instance. +@@ -2486,8 +2483,7 @@ class API(base.Base): + @wrap_check_policy + @check_instance_lock + @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED, +- vm_states.PAUSED, vm_states.SUSPENDED], +- task_state=[None]) ++ vm_states.PAUSED, vm_states.SUSPENDED]) + def shelve(self, context, instance): + """Shelve an instance. + +@@ -2513,7 +2509,7 @@ class API(base.Base): + + @wrap_check_policy + @check_instance_lock +- @check_instance_state(vm_state=[vm_states.SHELVED], task_state=[None]) ++ @check_instance_state(vm_state=[vm_states.SHELVED]) + def shelve_offload(self, context, instance): + """Remove a shelved instance from the hypervisor.""" + instance.task_state = task_states.SHELVING_OFFLOADING +@@ -2524,7 +2520,7 @@ class API(base.Base): + @wrap_check_policy + @check_instance_lock + @check_instance_state(vm_state=[vm_states.SHELVED, +- vm_states.SHELVED_OFFLOADED], task_state=[None]) ++ vm_states.SHELVED_OFFLOADED]) + def unshelve(self, context, instance): + """Restore a shelved instance.""" + instance.task_state = task_states.UNSHELVING +@@ -2807,8 +2803,7 @@ class API(base.Base): + @check_instance_lock + @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, + vm_states.STOPPED, vm_states.RESIZED, +- vm_states.SOFT_DELETED], +- task_state=[None]) ++ vm_states.SOFT_DELETED]) + def attach_volume(self, context, instance, volume_id, device=None, + disk_bus=None, device_type=None): + """Attach an existing volume to an existing instance.""" +@@ -2836,8 +2831,7 @@ class API(base.Base): + @check_instance_lock + @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, + vm_states.STOPPED, vm_states.RESIZED, +- vm_states.SOFT_DELETED], +- task_state=[None]) ++ vm_states.SOFT_DELETED]) + def detach_volume(self, context, instance, volume): + """Detach a volume from an instance.""" + if volume['attach_status'] == 'detached': +@@ -2853,8 +2847,7 @@ class API(base.Base): + @check_instance_lock + @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, + vm_states.SUSPENDED, vm_states.STOPPED, +- vm_states.RESIZED, vm_states.SOFT_DELETED], +- task_state=[None]) ++ vm_states.RESIZED, vm_states.SOFT_DELETED]) + def swap_volume(self, context, instance, old_volume, new_volume): + """Swap volume attached to 
an instance.""" + if old_volume['attach_status'] == 'detached': +@@ -3047,8 +3040,7 @@ class API(base.Base): + host_name, block_migration=block_migration, + disk_over_commit=disk_over_commit) + +- @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED], +- task_state=[None]) ++ @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED]) + def evacuate(self, context, instance, host, on_shared_storage, + admin_password=None): + """Running evacuate to target host. diff -Nru nova-2014.1.3/debian/patches/series nova-2014.1.5/debian/patches/series --- nova-2014.1.3/debian/patches/series 2015-01-16 20:29:17.000000000 +0000 +++ nova-2014.1.5/debian/patches/series 2017-09-13 17:09:12.000000000 +0000 @@ -1,9 +1,32 @@ # Ubuntu specific patches below here. Note these can be dropped eventually. -block-device-mapping-config.patch +disable-websockify-tests.patch fix-requirements.patch skip_ipv6_test.patch arm-console-patch.patch update-run-tests.patch -libvirt-Handle-unsupported-host-capabilities.patch -cells-json-store.patch -CVE-2014-3708.patch +add-support-for-syslog-connect-retries.patch +clean-shutdown.patch +fix-creating-bdm-for-failed-volume-attachment.patch +protect-against-upgrade-rpc-ver-mismatch.patch +Fix-live-migrations-usage-of-the-wrong-connector-inf.patch +Fix-wrong-used-ProcessExecutionError-exception.patch +Clean-up-iSCSI-multipath-devices-in-Post-Live-Migrat.patch +Detach-iSCSI-latest-path-for-latest-disk.patch +remove_useless_state_check.patch +evacuate_error_vm.patch +CVE-2015-3241-1.patch +CVE-2015-3241-2.patch +CVE-2015-3241-3.patch +CVE-2015-3280.patch +CVE-2015-5162-1.patch +CVE-2015-5162-2.patch +CVE-2015-5162-3.patch +CVE-2015-7548-1.patch +CVE-2015-7548-2.patch +CVE-2015-7548-3.patch +CVE-2015-7548-4.patch +CVE-2015-7713.patch +CVE-2015-8749.patch +CVE-2016-2140-1.patch +CVE-2016-2140-2.patch +CVE-2016-2140-3.patch diff -Nru nova-2014.1.3/debian/patches/update-run-tests.patch nova-2014.1.5/debian/patches/update-run-tests.patch --- nova-2014.1.3/debian/patches/update-run-tests.patch 2014-11-17 19:55:19.000000000 +0000 +++ nova-2014.1.5/debian/patches/update-run-tests.patch 2016-09-09 09:41:48.000000000 +0000 @@ -1,20 +1,9 @@ -Description: Update run_tests.sh to show results and default the concurrency to 1. +Description: Update run_tests.sh to show results. Author: Chuck Short Forwarded: Not needed. -diff --git a/run_tests.sh b/run_tests.sh -index 1fecc4c..84537cb 100755 --- a/run_tests.sh +++ b/run_tests.sh -@@ -86,7 +86,7 @@ no_pep8=0 - coverage=0 - debug=0 - update=0 --concurrency=0 -+concurrency=4 - - LANG=en_US.UTF-8 - LANGUAGE=en_US:en -@@ -137,14 +137,7 @@ function run_tests { +@@ -137,14 +137,7 @@ ${wrapper} python setup.py egg_info fi echo "Running \`${wrapper} $TESTRTESTS\`" @@ -30,6 +19,3 @@ RESULT=$? 
     set -e
---
-1.9.0
-
diff -Nru nova-2014.1.3/nova/api/ec2/cloud.py nova-2014.1.5/nova/api/ec2/cloud.py
--- nova-2014.1.3/nova/api/ec2/cloud.py 2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/api/ec2/cloud.py 2015-06-18 22:25:39.000000000 +0000
@@ -1271,6 +1271,9 @@
             LOG.audit(_("Disassociate address %s"), public_ip,
                       context=context)
             self.network_api.disassociate_floating_ip(context, instance,
                                                       address=public_ip)
+        else:
+            msg = _('Floating ip is not associated.')
+            raise exception.InvalidAssociation(message=msg)
         return {'return': "true"}
 
     def run_instances(self, context, **kwargs):
diff -Nru nova-2014.1.3/nova/api/openstack/compute/contrib/floating_ips.py nova-2014.1.5/nova/api/openstack/compute/contrib/floating_ips.py
--- nova-2014.1.3/nova/api/openstack/compute/contrib/floating_ips.py 2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/api/openstack/compute/contrib/floating_ips.py 2015-06-18 22:25:39.000000000 +0000
@@ -183,7 +183,7 @@
         try:
             self.network_api.disassociate_and_release_floating_ip(
                 context, instance, floating_ip)
-        except exception.Forbidden:
+        except exception.NotAuthorized:
             raise webob.exc.HTTPForbidden()
         except exception.CannotDisassociateAutoAssignedFloatingIP:
             msg = _('Cannot disassociate auto assigned floating ip')
diff -Nru nova-2014.1.3/nova/api/openstack/compute/limits.py nova-2014.1.5/nova/api/openstack/compute/limits.py
--- nova-2014.1.3/nova/api/openstack/compute/limits.py 2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/api/openstack/compute/limits.py 2015-06-18 22:25:39.000000000 +0000
@@ -158,9 +158,11 @@
         self.water_level = 0
         self.capacity = self.unit
         self.request_value = float(self.capacity) / float(self.value)
-        msg = _("Only %(value)s %(verb)s request(s) can be "
-                "made to %(uri)s every %(unit_string)s.")
-        self.error_message = msg % self.__dict__
+        msg = (_("Only %(value)s %(verb)s request(s) can be "
+                 "made to %(uri)s every %(unit_string)s.") %
+               {'value': self.value, 'verb': self.verb, 'uri': self.uri,
+                'unit_string': self.unit_string})
+        self.error_message = msg
 
     def __call__(self, verb, url):
         """Represents a call to this limit from a relevant request.
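The limits.py hunk above is a small but instructive hardening: interpolating with `msg % self.__dict__` hands the format string every attribute of the object, so anything added to the instance later silently becomes reachable from the message. Binding an explicit mapping keeps the message's inputs closed. A reduced illustration with a hypothetical class (not nova's actual Limit):

    class Limit(object):
        def __init__(self, verb, uri, value, unit_string):
            self.verb = verb
            self.uri = uri
            self.value = value
            self.unit_string = unit_string
            self.secret_token = 'not-for-messages'  # never interpolated
            # Explicit mapping: only the four named fields are visible
            # to the format string, unlike "... % self.__dict__".
            self.error_message = (
                "Only %(value)s %(verb)s request(s) can be made to "
                "%(uri)s every %(unit_string)s." %
                {'value': self.value, 'verb': self.verb,
                 'uri': self.uri, 'unit_string': self.unit_string})

    print(Limit('GET', '*', 10, 'MINUTE').error_message)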
diff -Nru nova-2014.1.3/nova/api/openstack/wsgi.py nova-2014.1.5/nova/api/openstack/wsgi.py --- nova-2014.1.3/nova/api/openstack/wsgi.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/api/openstack/wsgi.py 2015-06-18 22:25:39.000000000 +0000 @@ -435,7 +435,9 @@ result.appendChild(node) else: # Type is atom - node = doc.createTextNode(str(data)) + if not isinstance(data, six.string_types): + data = six.text_type(data) + node = doc.createTextNode(data) result.appendChild(node) return result diff -Nru nova-2014.1.3/nova/CA/openssl.cnf.tmpl nova-2014.1.5/nova/CA/openssl.cnf.tmpl --- nova-2014.1.3/nova/CA/openssl.cnf.tmpl 2014-10-02 23:31:48.000000000 +0000 +++ nova-2014.1.5/nova/CA/openssl.cnf.tmpl 2015-06-18 22:25:30.000000000 +0000 @@ -34,7 +34,7 @@ unique_subject = no default_crl_days = 365 default_days = 365 -default_md = md5 +default_md = sha256 preserve = no email_in_dn = no nameopt = default_ca @@ -57,7 +57,7 @@ [ req ] default_bits = 1024 # Size of keys default_keyfile = key.pem # name of generated keys -default_md = md5 # message digest algorithm +default_md = sha256 # message digest algorithm string_mask = nombstr # permitted characters distinguished_name = req_distinguished_name diff -Nru nova-2014.1.3/nova/cells/state.py nova-2014.1.5/nova/cells/state.py --- nova-2014.1.3/nova/cells/state.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/cells/state.py 2015-06-18 22:25:39.000000000 +0000 @@ -152,10 +152,7 @@ cells_config = CONF.cells.cells_config if cells_config: - config_path = CONF.find_file(cells_config) - if not config_path: - raise cfg.ConfigFilesNotFoundError(config_files=[cells_config]) - return CellStateManagerFile(cell_state_cls, config_path) + return CellStateManagerFile(cell_state_cls) return CellStateManagerDB(cell_state_cls) @@ -450,8 +447,11 @@ class CellStateManagerFile(CellStateManager): - def __init__(self, cell_state_cls, cells_config_path): - self.cells_config_path = cells_config_path + def __init__(self, cell_state_cls=None): + cells_config = CONF.cells.cells_config + self.cells_config_path = CONF.find_file(cells_config) + if not self.cells_config_path: + raise cfg.ConfigFilesNotFoundError(config_files=[cells_config]) super(CellStateManagerFile, self).__init__(cell_state_cls) def _cell_data_sync(self, force=False): diff -Nru nova-2014.1.3/nova/compute/api.py nova-2014.1.5/nova/compute/api.py --- nova-2014.1.3/nova/compute/api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/compute/api.py 2015-06-18 22:25:39.000000000 +0000 @@ -67,6 +67,7 @@ from nova.openstack.common import log as logging from nova.openstack.common import strutils from nova.openstack.common import timeutils +from nova.openstack.common import units from nova.openstack.common import uuidutils import nova.policy from nova import quota @@ -596,7 +597,6 @@ def _check_requested_image(self, context, image_id, image, instance_type): if not image: - # Image checks don't apply when building from volume return if image['status'] != 'active': @@ -684,9 +684,7 @@ files_to_inject): self._check_metadata_properties_quota(context, metadata) self._check_injected_file_quota(context, files_to_inject) - if image_id is not None: - self._check_requested_image(context, image_id, - image, instance_type) + self._check_requested_image(context, image_id, image, instance_type) def _validate_and_build_base_options(self, context, instance_type, boot_meta, image_href, image_id, @@ -862,14 +860,34 @@ try: image_id = bdm['image_id'] image_meta = self.image_service.show(context, image_id) - 
return image_meta.get('properties', {}) + return image_meta except Exception: raise exception.InvalidBDMImage(id=image_id) elif bdm.get('volume_id'): try: volume_id = bdm['volume_id'] volume = self.volume_api.get(context, volume_id) - return volume.get('volume_image_metadata', {}) + + properties = volume.get('volume_image_metadata', {}) + image_meta = {'properties': properties} + # NOTE(yjiang5): restore the basic attributes + # NOTE(mdbooth): These values come from + # volume_glance_metadata in cinder. This is a simple + # key/value table, and all values are strings. We need to + # convert them to ints to avoid unexpected type errors. + image_meta['min_ram'] = int(properties.get('min_ram', 0)) + image_meta['min_disk'] = int(properties.get('min_disk', 0)) + # Volume size is no longer related to the original image + # size, so we take it from the volume directly. Cinder + # creates volumes in Gb increments, and stores size in Gb, + # whereas glance reports size in bytes. As we're returning + # glance metadata here, we need to convert it. + image_meta['size'] = volume.get('size', 0) * units.Gi + # NOTE(yjiang5): Always set the image status as 'active' + # and depends on followed volume_api.check_attach() to + # verify it. This hack should be harmless with that check. + image_meta['status'] = 'active' + return image_meta except Exception: raise exception.InvalidBDMVolume(id=volume_id) @@ -945,10 +963,8 @@ image_id, boot_meta = self._get_image(context, image_href) else: image_id = None - boot_meta = {} - boot_meta['properties'] = \ - self._get_bdm_image_metadata(context, - block_device_mapping, legacy_bdm) + boot_meta = self._get_bdm_image_metadata( + context, block_device_mapping, legacy_bdm) self._check_auto_disk_config(image=boot_meta, auto_disk_config=auto_disk_config) @@ -1885,6 +1901,9 @@ sort_key, sort_dir, limit=limit, marker=marker, expected_attrs=expected_attrs) + if 'ip6' in filters or 'ip' in filters: + inst_models = self._ip_filter(inst_models, filters) + if want_objects: return inst_models @@ -1895,18 +1914,29 @@ return instances + @staticmethod + def _ip_filter(inst_models, filters): + ipv4_f = re.compile(str(filters.get('ip'))) + ipv6_f = re.compile(str(filters.get('ip6'))) + result_objs = [] + for instance in inst_models: + nw_info = compute_utils.get_nw_info_for_instance(instance) + for vif in nw_info: + for fixed_ip in vif.fixed_ips(): + address = fixed_ip.get('address') + if not address: + continue + version = fixed_ip.get('version') + if ((version == 4 and ipv4_f.match(address)) or + (version == 6 and ipv6_f.match(address))): + result_objs.append(instance) + continue + return instance_obj.InstanceList(objects=result_objs) + def _get_instances_by_filters(self, context, filters, sort_key, sort_dir, limit=None, marker=None, expected_attrs=None): - if 'ip6' in filters or 'ip' in filters: - res = self.network_api.get_instance_uuids_by_ip_filter(context, - filters) - # NOTE(jkoelker) It is possible that we will get the same - # instance uuid twice (one for ipv4 and ipv6) - uuids = set([r['instance_uuid'] for r in res]) - filters['uuid'] = uuids - fields = ['metadata', 'system_metadata', 'info_cache', 'security_groups'] if expected_attrs: diff -Nru nova-2014.1.3/nova/compute/cells_api.py nova-2014.1.5/nova/compute/cells_api.py --- nova-2014.1.3/nova/compute/cells_api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/compute/cells_api.py 2015-06-18 22:25:39.000000000 +0000 @@ -554,10 +554,15 @@ # NOTE(danms): Currently cells does not support objects as # return values, 
so just convert the db-formatted service objects # to new-world objects here + + # NOTE(dheeraj): Use ServiceProxy here too. See johannes' + # note on service_get_all if db_service: - return service_obj.Service._from_db_object(context, - service_obj.Service(), - db_service) + cell_path, _id = cells_utils.split_cell_and_item(db_service['id']) + db_service['id'] = _id + ser_obj = service_obj.Service._from_db_object( + context, service_obj.Service(), db_service) + return ServiceProxy(ser_obj, cell_path) def service_update(self, context, host_name, binary, params_to_update): """Used to enable/disable a service. For compute services, setting to @@ -573,10 +578,15 @@ # NOTE(danms): Currently cells does not support objects as # return values, so just convert the db-formatted service objects # to new-world objects here + + # NOTE(dheeraj): Use ServiceProxy here too. See johannes' + # note on service_get_all if db_service: - return service_obj.Service._from_db_object(context, - service_obj.Service(), - db_service) + cell_path, _id = cells_utils.split_cell_and_item(db_service['id']) + db_service['id'] = _id + ser_obj = service_obj.Service._from_db_object( + context, service_obj.Service(), db_service) + return ServiceProxy(ser_obj, cell_path) def service_delete(self, context, service_id): """Deletes the specified service.""" diff -Nru nova-2014.1.3/nova/compute/manager.py nova-2014.1.5/nova/compute/manager.py --- nova-2014.1.3/nova/compute/manager.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/compute/manager.py 2015-06-18 22:25:39.000000000 +0000 @@ -120,6 +120,10 @@ cfg.IntOpt('network_allocate_retries', default=0, help="Number of times to retry network allocation on failures"), + cfg.IntOpt('block_device_allocate_retries', + default=180, + help='Number of times to retry block device' + ' allocation on failures') ] interval_opts = [ @@ -153,7 +157,11 @@ cfg.IntOpt('instance_delete_interval', default=300, help=('Interval in seconds for retrying failed instance file ' - 'deletes')) + 'deletes')), + cfg.IntOpt('block_device_allocate_retries_interval', + default=1, + help='Waiting time interval (seconds) between block' + ' device allocation retries on failures') ] timeout_opts = [ @@ -266,12 +274,21 @@ LOG.info(_("Task possibly preempted: %s") % e.format_message()) except Exception: with excutils.save_and_reraise_exception(): + wrapped_func = utils.get_wrapped_function(function) + keyed_args = safe_utils.getcallargs(wrapped_func, context, + *args, **kwargs) + # NOTE(mriedem): 'instance' must be in keyed_args because we + # have utils.expects_func_args('instance') decorating this + # method. + instance_uuid = keyed_args['instance']['uuid'] try: self._instance_update(context, - kwargs['instance']['uuid'], + instance_uuid, task_state=None) - except Exception: - pass + except Exception as e: + msg = _("Failed to revert task state for instance. 
" + "Error: %s") + LOG.warning(msg, e, instance_uuid=instance_uuid) return decorated_function @@ -612,16 +629,21 @@ self._resource_tracker_dict[nodename] = rt return rt + def _update_resource_tracker(self, context, instance): + """Let resource tracker know that instance has changed state.""" + + if (instance['host'] == self.host and + self.driver.node_is_available(instance['node'])): + rt = self._get_resource_tracker(instance.get('node')) + rt.update_usage(context, instance) + def _instance_update(self, context, instance_uuid, **kwargs): """Update an instance in the database using kwargs as value.""" instance_ref = self.conductor_api.instance_update(context, instance_uuid, **kwargs) - if (instance_ref['host'] == self.host and - self.driver.node_is_available(instance_ref['node'])): - rt = self._get_resource_tracker(instance_ref.get('node')) - rt.update_usage(context, instance_ref) + self._update_resource_tracker(context, instance_ref) return instance_ref @@ -897,11 +919,43 @@ instance.vm_state = vm_states.ACTIVE instance.save() + if instance.task_state == task_states.POWERING_OFF: + try: + LOG.debug(_("Instance in transitional state %s at start-up " + "retrying stop request"), + instance['task_state'], instance=instance) + self.stop_instance(context, instance) + except Exception: + # we don't want that an exception blocks the init_host + msg = _('Failed to stop instance') + LOG.exception(msg, instance=instance) + finally: + return + + if instance.task_state == task_states.POWERING_ON: + try: + LOG.debug(_("Instance in transitional state %s at start-up " + "retrying start request"), + instance['task_state'], instance=instance) + self.start_instance(context, instance) + except Exception: + # we don't want that an exception blocks the init_host + msg = _('Failed to start instance') + LOG.exception(msg, instance=instance) + finally: + return + net_info = compute_utils.get_nw_info_for_instance(instance) try: self.driver.plug_vifs(instance, net_info) except NotImplementedError as e: LOG.debug(e, instance=instance) + except exception.VirtualInterfacePlugException: + # we don't want an exception to block the init_host + LOG.exception(_("Vifs plug failed"), instance=instance) + self._set_instance_error_state(context, instance.uuid) + return + if instance.task_state == task_states.RESIZE_MIGRATING: # We crashed during resize/migration, so roll back for safety try: @@ -1135,24 +1189,21 @@ instance) return network_info - def _await_block_device_map_created(self, context, vol_id, max_tries=180, - wait_between=1): + def _await_block_device_map_created(self, context, vol_id): # TODO(yamahata): creating volume simultaneously # reduces creation time? # TODO(yamahata): eliminate dumb polling - # TODO(harlowja): make the max_tries configurable or dynamic? 
attempts = 0 start = time.time() - while attempts < max_tries: + while attempts < CONF.block_device_allocate_retries: volume = self.volume_api.get(context, vol_id) volume_status = volume['status'] if volume_status not in ['creating', 'downloading']: if volume_status != 'available': LOG.warn(_("Volume id: %s finished being created but was" " not set as 'available'"), vol_id) - # NOTE(harlowja): return how many attempts were tried return attempts + 1 - greenthread.sleep(wait_between) + greenthread.sleep(CONF.block_device_allocate_retries_interval) attempts += 1 # NOTE(harlowja): Should only happen if we ran out of attempts raise exception.VolumeNotCreated(volume_id=vol_id, @@ -1716,6 +1767,12 @@ block_device_info['swap']) return block_device_info + except exception.OverQuota: + msg = ('Failed to create block device for instance due to being ' + 'over volume resource quota') + LOG.debug(msg, instance=instance) + raise exception.InvalidBDM() + except Exception: LOG.exception(_('Instance failed block device setup'), instance=instance) @@ -1834,7 +1891,6 @@ # callers all pass objects already @wrap_exception() @reverts_task_state - @wrap_instance_event @wrap_instance_fault def build_and_run_instance(self, context, instance, image, request_spec, filter_properties, admin_password=None, @@ -1843,79 +1899,90 @@ node=None, limits=None): @utils.synchronized(instance.uuid) - def do_build_and_run_instance(context, instance, image, request_spec, - filter_properties, admin_password, injected_files, - requested_networks, security_groups, block_device_mapping, - node=None, limits=None): + def _locked_do_build_and_run_instance(*args, **kwargs): + self._do_build_and_run_instance(*args, **kwargs) - try: - LOG.audit(_('Starting instance...'), context=context, - instance=instance) - instance.vm_state = vm_states.BUILDING - instance.task_state = None - instance.save(expected_task_state= - (task_states.SCHEDULING, None)) - except exception.InstanceNotFound: - msg = _('Instance disappeared before build.') - LOG.debug(msg, instance=instance) - return - except exception.UnexpectedTaskStateError as e: - LOG.debug(e.format_message(), instance=instance) - return + # NOTE(danms): We spawn here to return the RPC worker thread back to + # the pool. Since what follows could take a really long time, we don't + # want to tie up RPC workers. 
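The note above is the heart of this refactor: the RPC handler must return quickly, so the build is handed off to a green thread and all fault handling has to live on that thread, which is why the @wrap_instance_event and related decorators move onto _do_build_and_run_instance. The hand-off that follows the note, reduced to its essence and assuming eventlet (nova's utils.spawn_n is a stubbable wrapper over this, as the test changes further down suggest):

    import eventlet

    def handle_build_request(do_build, *args, **kwargs):
        # Return the RPC worker to its pool immediately; spawn_n
        # discards the result, so errors in do_build can only be
        # observed (and instance state set) inside do_build itself.
        eventlet.spawn_n(do_build, *args, **kwargs)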
+ utils.spawn_n(_locked_do_build_and_run_instance, + context, instance, image, request_spec, + filter_properties, admin_password, injected_files, + requested_networks, security_groups, + block_device_mapping, node, limits) - # b64 decode the files to inject: - decoded_files = self._decode_files(injected_files) + @wrap_exception() + @reverts_task_state + @wrap_instance_event + @wrap_instance_fault + def _do_build_and_run_instance(self, context, instance, image, + request_spec, filter_properties, admin_password, injected_files, + requested_networks, security_groups, block_device_mapping, + node=None, limits=None): - if limits is None: - limits = {} + try: + LOG.audit(_('Starting instance...'), context=context, + instance=instance) + instance.vm_state = vm_states.BUILDING + instance.task_state = None + instance.save(expected_task_state= + (task_states.SCHEDULING, None)) + except exception.InstanceNotFound: + msg = _('Instance disappeared before build.') + LOG.debug(msg, instance=instance) + return + except exception.UnexpectedTaskStateError as e: + LOG.debug(e.format_message(), instance=instance) + return - if node is None: - node = self.driver.get_available_nodes()[0] - LOG.debug(_('No node specified, defaulting to %s'), node, - instance=instance) + # b64 decode the files to inject: + decoded_files = self._decode_files(injected_files) - try: - self._build_and_run_instance(context, instance, image, - decoded_files, admin_password, requested_networks, - security_groups, block_device_mapping, node, limits) - except exception.RescheduledException as e: - LOG.debug(e.format_message(), instance=instance) - # dhcp_options are per host, so if they're set we need to - # deallocate the networks and reallocate on the next host. - if self.driver.dhcp_options_for_instance(instance): - self._cleanup_allocated_networks(context, instance, - requested_networks) + if limits is None: + limits = {} - instance.task_state = task_states.SCHEDULING - instance.save() + if node is None: + node = self.driver.get_available_nodes()[0] + LOG.debug(_('No node specified, defaulting to %s'), node, + instance=instance) - self.compute_task_api.build_instances(context, [instance], - image, filter_properties, admin_password, - injected_files, requested_networks, security_groups, - block_device_mapping) - except (exception.InstanceNotFound, - exception.UnexpectedDeletingTaskStateError): - msg = _('Instance disappeared during build.') - LOG.debug(msg, instance=instance) - self._cleanup_allocated_networks(context, instance, - requested_networks) - except exception.BuildAbortException as e: - LOG.exception(e.format_message(), instance=instance) - self._cleanup_allocated_networks(context, instance, - requested_networks) - self._set_instance_error_state(context, instance.uuid) - except Exception: - # Should not reach here. - msg = _('Unexpected build failure, not rescheduling build.') - LOG.exception(msg, instance=instance) + try: + self._build_and_run_instance(context, instance, image, + decoded_files, admin_password, requested_networks, + security_groups, block_device_mapping, node, limits) + except exception.RescheduledException as e: + LOG.debug(e.format_message(), instance=instance) + # dhcp_options are per host, so if they're set we need to + # deallocate the networks and reallocate on the next host. 
+ if self.driver.dhcp_options_for_instance(instance): self._cleanup_allocated_networks(context, instance, requested_networks) - self._set_instance_error_state(context, instance.uuid) - do_build_and_run_instance(context, instance, image, request_spec, - filter_properties, admin_password, injected_files, - requested_networks, security_groups, block_device_mapping, - node, limits) + instance.task_state = task_states.SCHEDULING + instance.save() + + self.compute_task_api.build_instances(context, [instance], + image, filter_properties, admin_password, + injected_files, requested_networks, security_groups, + block_device_mapping) + except (exception.InstanceNotFound, + exception.UnexpectedDeletingTaskStateError): + msg = _('Instance disappeared during build.') + LOG.debug(msg, instance=instance) + self._cleanup_allocated_networks(context, instance, + requested_networks) + except exception.BuildAbortException as e: + LOG.exception(e.format_message(), instance=instance) + self._cleanup_allocated_networks(context, instance, + requested_networks) + self._set_instance_error_state(context, instance.uuid) + except Exception: + # Should not reach here. + msg = _('Unexpected build failure, not rescheduling build.') + LOG.exception(msg, instance=instance) + self._cleanup_allocated_networks(context, instance, + requested_networks) + self._set_instance_error_state(context, instance.uuid) def _build_and_run_instance(self, context, instance, image, injected_files, admin_password, requested_networks, security_groups, @@ -2232,6 +2299,7 @@ instance.task_state = None instance.terminated_at = timeutils.utcnow() instance.save() + self._update_resource_tracker(context, instance) system_meta = utils.instance_sys_meta(instance) db_inst = self.conductor_api.instance_destroy( context, obj_base.obj_to_primitive(instance)) @@ -2545,7 +2613,8 @@ attach_block_devices=self._prep_block_device, block_device_info=block_device_info, network_info=network_info, - preserve_ephemeral=preserve_ephemeral) + preserve_ephemeral=preserve_ephemeral, + recreate=recreate) try: self.driver.rebuild(**kwargs) except NotImplementedError: @@ -4479,8 +4548,11 @@ is_volume_backed = self.compute_api.is_volume_backed_instance(ctxt, instance) dest_check_data['is_volume_backed'] = is_volume_backed + block_device_info = self._get_instance_block_device_info( + ctxt, instance, refresh_conn_info=True) return self.driver.check_can_live_migrate_source(ctxt, instance, - dest_check_data) + dest_check_data, + block_device_info) @object_compat @wrap_exception() diff -Nru nova-2014.1.3/nova/console/websocketproxy.py nova-2014.1.5/nova/console/websocketproxy.py --- nova-2014.1.3/nova/console/websocketproxy.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/console/websocketproxy.py 2015-06-18 22:25:39.000000000 +0000 @@ -20,13 +20,22 @@ import Cookie import socket +import urlparse import websockify from nova.consoleauth import rpcapi as consoleauth_rpcapi from nova import context +from nova import exception from nova.openstack.common.gettextutils import _ from nova.openstack.common import log as logging +from oslo.config import cfg + +CONF = cfg.CONF +CONF.import_opt('novncproxy_base_url', 'nova.vnc') +CONF.import_opt('html5proxy_base_url', 'nova.spice', group='spice') +CONF.import_opt('vnc_enabled', 'nova.vnc') +CONF.import_opt('enabled', 'nova.spice', group='spice') LOG = logging.getLogger(__name__) @@ -37,6 +46,20 @@ target_cfg=None, ssl_target=None, *args, **kwargs) + def verify_origin_proto(self, console_type, origin_proto): + if console_type == 
'novnc': + expected_proto = \ + urlparse.urlparse(CONF.novncproxy_base_url).scheme + elif console_type == 'spice-html5': + expected_proto = \ + urlparse.urlparse(CONF.spice.html5proxy_base_url).scheme + else: + detail = _("Invalid Console Type for WebSocketProxy: '%s'") % \ + console_type + LOG.audit(detail) + raise exception.ValidationError(detail=detail) + return origin_proto == expected_proto + def new_client(self): """Called after a new WebSocket connection has been established.""" # Reopen the eventlet hub to make sure we don't share an epoll @@ -55,6 +78,28 @@ LOG.audit("Invalid Token: %s", token) raise Exception(_("Invalid Token")) + # Verify Origin + expected_origin_hostname = self.headers.getheader('Host') + if ':' in expected_origin_hostname: + e = expected_origin_hostname + expected_origin_hostname = e.split(':')[0] + origin_url = self.headers.getheader('Origin') + # missing origin header indicates non-browser client which is OK + if origin_url is not None: + origin = urlparse.urlparse(origin_url) + origin_hostname = origin.hostname + origin_scheme = origin.scheme + if origin_hostname == '' or origin_scheme == '': + detail = _("Origin header not valid.") + raise exception.ValidationError(detail=detail) + if expected_origin_hostname != origin_hostname: + detail = _("Origin header does not match this host.") + raise exception.ValidationError(detail=detail) + if not self.verify_origin_proto(connect_info['console_type'], + origin.scheme): + detail = _("Origin header protocol does not match this host.") + raise exception.ValidationError(detail=detail) + host = connect_info['host'] port = int(connect_info['port']) diff -Nru nova-2014.1.3/nova/db/sqlalchemy/api.py nova-2014.1.5/nova/db/sqlalchemy/api.py --- nova-2014.1.3/nova/db/sqlalchemy/api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/db/sqlalchemy/api.py 2015-06-18 22:25:39.000000000 +0000 @@ -1590,6 +1590,13 @@ context - request context object values - dict containing column values. """ + + # NOTE(rpodolyaka): create the default security group, if it doesn't exist. 
+ # This must be done in a separate transaction, so that this one is not + # aborted in case a concurrent one succeeds first and the unique constraint + # for security group names is violated by a concurrent INSERT + security_group_ensure_default(context) + values = values.copy() values['metadata'] = _metadata_refs( values.get('metadata'), models.InstanceMetadata) @@ -1610,7 +1617,7 @@ def _get_sec_group_models(session, security_groups): models = [] - default_group = security_group_ensure_default(context) + default_group = _security_group_ensure_default(context, session) if 'default' in security_groups: models.append(default_group) # Generate a new list, so we don't modify the original @@ -2234,6 +2241,7 @@ instance[metadata_type].append(newitem) +@_retry_on_deadlock def _instance_update(context, instance_uuid, values, copy_old_instance=False, columns_to_join=None): session = get_session() @@ -3749,8 +3757,21 @@ def security_group_ensure_default(context): """Ensure default security group exists for a project_id.""" - session = get_session() - with session.begin(): + + try: + return _security_group_ensure_default(context) + except exception.SecurityGroupExists: + # NOTE(rpodolyaka): a concurrent transaction has succeeded first, + # suppress the error and proceed + return security_group_get_by_name(context, context.project_id, + 'default') + + +def _security_group_ensure_default(context, session=None): + if session is None: + session = get_session() + + with session.begin(subtransactions=True): try: default_group = _security_group_get_by_names(context, session, diff -Nru nova-2014.1.3/nova/exception.py nova-2014.1.5/nova/exception.py --- nova-2014.1.3/nova/exception.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/exception.py 2015-06-18 22:25:39.000000000 +0000 @@ -156,6 +156,10 @@ "unique mac address failed") +class VirtualInterfacePlugException(NovaException): + msg_fmt = _("Virtual interface plugin failed") + + class GlanceConnectionFailed(NovaException): msg_fmt = _("Connection to glance host %(host)s:%(port)s failed: " "%(reason)s") @@ -1221,6 +1225,11 @@ "found.") +class InvalidAssociation(NotFound): + ec2_code = 'InvalidAssociationID.NotFound' + msg_fmt = _("Invalid association.") + + class NodeNotFound(NotFound): msg_fmt = _("Node %(node_id)s could not be found.") diff -Nru nova-2014.1.3/nova/network/linux_net.py nova-2014.1.5/nova/network/linux_net.py --- nova-2014.1.3/nova/network/linux_net.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/network/linux_net.py 2015-06-18 22:25:39.000000000 +0000 @@ -1320,7 +1320,7 @@ def delete_ovs_vif_port(bridge, dev): - _ovs_vsctl(['del-port', bridge, dev]) + _ovs_vsctl(['--', '--if-exists', 'del-port', bridge, dev]) delete_net_dev(dev) @@ -1638,29 +1638,14 @@ % (interface, address)) rules.append('OUTPUT -p ARP -o %s --arp-ip-src %s -j DROP' % (interface, address)) + rules.append('FORWARD -p IPv4 -i %s --ip-protocol udp ' + '--ip-destination-port 67:68 -j DROP' + % interface) + rules.append('FORWARD -p IPv4 -o %s --ip-protocol udp ' + '--ip-destination-port 67:68 -j DROP' + % interface) # NOTE(vish): the above is not possible with iptables/arptables ensure_ebtables_rules(rules) - # block dhcp broadcast traffic across the interface - ipv4_filter = iptables_manager.ipv4['filter'] - ipv4_filter.add_rule('FORWARD', - ('-m physdev --physdev-in %s -d 255.255.255.255 ' - '-p udp --dport 67 -j %s' - % (interface, CONF.iptables_drop_action)), - top=True) - ipv4_filter.add_rule('FORWARD', - ('-m physdev --physdev-out %s -d 
255.255.255.255 ' - '-p udp --dport 67 -j %s' - % (interface, CONF.iptables_drop_action)), - top=True) - # block ip traffic to address across the interface - ipv4_filter.add_rule('FORWARD', - ('-m physdev --physdev-in %s -d %s -j %s' - % (interface, address, CONF.iptables_drop_action)), - top=True) - ipv4_filter.add_rule('FORWARD', - ('-m physdev --physdev-out %s -s %s -j %s' - % (interface, address, CONF.iptables_drop_action)), - top=True) def remove_isolate_dhcp_address(interface, address): @@ -1670,38 +1655,14 @@ % (interface, address)) rules.append('OUTPUT -p ARP -o %s --arp-ip-src %s -j DROP' % (interface, address)) + rules.append('FORWARD -p IPv4 -i %s --ip-protocol udp ' + '--ip-destination-port 67:68 -j DROP' + % interface) + rules.append('FORWARD -p IPv4 -o %s --ip-protocol udp ' + '--ip-destination-port 67:68 -j DROP' + % interface) remove_ebtables_rules(rules) # NOTE(vish): the above is not possible with iptables/arptables - # block dhcp broadcast traffic across the interface - ipv4_filter = iptables_manager.ipv4['filter'] - - drop_actions = ['DROP'] - if CONF.iptables_drop_action != 'DROP': - drop_actions.append(CONF.iptables_drop_action) - - for drop_action in drop_actions: - ipv4_filter.remove_rule('FORWARD', - ('-m physdev --physdev-in %s ' - '-d 255.255.255.255 ' - '-p udp --dport 67 -j %s' - % (interface, drop_action)), - top=True) - ipv4_filter.remove_rule('FORWARD', - ('-m physdev --physdev-out %s ' - '-d 255.255.255.255 ' - '-p udp --dport 67 -j %s' - % (interface, drop_action)), - top=True) - - # block ip traffic to address across the interface - ipv4_filter.remove_rule('FORWARD', - ('-m physdev --physdev-in %s -d %s -j %s' - % (interface, address, drop_action)), - top=True) - ipv4_filter.remove_rule('FORWARD', - ('-m physdev --physdev-out %s -s %s -j %s' - % (interface, address, drop_action)), - top=True) def get_gateway_rules(bridge): diff -Nru nova-2014.1.3/nova/network/neutronv2/api.py nova-2014.1.5/nova/network/neutronv2/api.py --- nova-2014.1.3/nova/network/neutronv2/api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/network/neutronv2/api.py 2015-06-18 22:25:39.000000000 +0000 @@ -141,7 +141,7 @@ # Perform this check here rather than in validate_networks to # ensure the check is performed everytime allocate_for_instance # is invoked - if net.get('router:external'): + if net.get('router:external') and not net.get('shared'): raise exception.ExternalNetworkAttachForbidden( network_uuid=net['id']) diff -Nru nova-2014.1.3/nova/network/neutronv2/__init__.py nova-2014.1.5/nova/network/neutronv2/__init__.py --- nova-2014.1.3/nova/network/neutronv2/__init__.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/network/neutronv2/__init__.py 2015-06-18 22:25:39.000000000 +0000 @@ -17,25 +17,38 @@ from neutronclient.v2_0 import client as clientv20 from oslo.config import cfg -from nova.openstack.common import local +from nova.openstack.common import lockutils from nova.openstack.common import log as logging CONF = cfg.CONF LOG = logging.getLogger(__name__) -def _get_client(token=None): +class AdminTokenStore(object): + + _instance = None + + def __init__(self): + self.admin_auth_token = None + + @classmethod + def get(cls): + if cls._instance is None: + cls._instance = cls() + return cls._instance + + +def _get_client(token=None, admin=False): params = { 'endpoint_url': CONF.neutron_url, 'timeout': CONF.neutron_url_timeout, 'insecure': CONF.neutron_api_insecure, 'ca_cert': CONF.neutron_ca_certificates_file, 'auth_strategy': CONF.neutron_auth_strategy, 
+ 'token': token, } - if token: - params['token'] = token - else: + if admin: params['username'] = CONF.neutron_admin_username if CONF.neutron_admin_tenant_id: params['tenant_id'] = CONF.neutron_admin_tenant_id @@ -46,6 +59,38 @@ return clientv20.Client(**params) +class ClientWrapper(clientv20.Client): + '''A neutron client wrapper class. + Wraps the callable methods, executes it and updates the token, + as it might change when expires. + ''' + + def __init__(self, base_client): + # Expose all attributes from the base_client instance + self.__dict__ = base_client.__dict__ + self.base_client = base_client + + def __getattribute__(self, name): + obj = object.__getattribute__(self, name) + if callable(obj): + obj = object.__getattribute__(self, 'proxy')(obj) + return obj + + def proxy(self, obj): + def wrapper(*args, **kwargs): + ret = obj(*args, **kwargs) + new_token = self.base_client.get_auth_info()['auth_token'] + _update_token(new_token) + return ret + return wrapper + + +def _update_token(new_token): + with lockutils.lock('neutron_admin_auth_token_lock'): + token_store = AdminTokenStore.get() + token_store.admin_auth_token = new_token + + def get_client(context, admin=False): # NOTE(dprince): In the case where no auth_token is present # we allow use of neutron admin tenant credentials if @@ -53,16 +98,10 @@ # This is to support some services (metadata API) where # an admin context is used without an auth token. if admin or (context.is_admin and not context.auth_token): - # NOTE(dims): We need to use admin token, let us cache a - # thread local copy for re-using this client - # multiple times and to avoid excessive calls - # to neutron to fetch tokens. Some of the hackiness in this code - # will go away once BP auth-plugins is implemented. - # That blue print will ensure that tokens can be shared - # across clients as well - if not hasattr(local.strong_store, 'neutron_client'): - local.strong_store.neutron_client = _get_client(token=None) - return local.strong_store.neutron_client + with lockutils.lock('neutron_admin_auth_token_lock'): + orig_token = AdminTokenStore.get().admin_auth_token + client = _get_client(orig_token, admin=True) + return ClientWrapper(client) # We got a user token that we can use that as-is if context.auth_token: diff -Nru nova-2014.1.3/nova/openstack/common/db/sqlalchemy/session.py nova-2014.1.5/nova/openstack/common/db/sqlalchemy/session.py --- nova-2014.1.3/nova/openstack/common/db/sqlalchemy/session.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/openstack/common/db/sqlalchemy/session.py 2015-06-18 22:25:39.000000000 +0000 @@ -700,6 +700,25 @@ def commit(self, *args, **kwargs): return super(Session, self).commit(*args, **kwargs) + def begin(self, **kw): + trans = super(Session, self).begin(**kw) + trans.__class__ = SessionTransactionWrapper + return trans + + +class SessionTransactionWrapper(sqlalchemy.orm.session.SessionTransaction): + @property + def bind(self): + return self.session.bind + + @_wrap_db_error + def commit(self, *args, **kwargs): + return super(SessionTransactionWrapper, self).commit(*args, **kwargs) + + @_wrap_db_error + def rollback(self, *args, **kwargs): + return super(SessionTransactionWrapper, self).rollback(*args, **kwargs) + def get_maker(engine, autocommit=True, expire_on_commit=False): """Return a SQLAlchemy sessionmaker using the given engine.""" diff -Nru nova-2014.1.3/nova/openstack/common/log.py nova-2014.1.5/nova/openstack/common/log.py --- nova-2014.1.3/nova/openstack/common/log.py 2014-10-02 23:32:00.000000000 
+0000 +++ nova-2014.1.5/nova/openstack/common/log.py 2015-06-18 22:25:39.000000000 +0000 @@ -263,7 +263,12 @@ >>> mask_password("u'original_password' : u'aaaaa'") "u'original_password' : u'***'" """ - message = six.text_type(message) + try: + message = six.text_type(message) + except UnicodeDecodeError: + # NOTE(jecarey): Temporary fix to handle cases where message is a + # byte string. A better solution will be provided in Kilo. + pass # NOTE(ldbragst): Check to see if anything in message contains any key # specified in _SANITIZE_KEYS, if not then just return the message since diff -Nru nova-2014.1.3/nova/openstack/common/processutils.py nova-2014.1.5/nova/openstack/common/processutils.py --- nova-2014.1.3/nova/openstack/common/processutils.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/openstack/common/processutils.py 2015-06-18 22:25:39.000000000 +0000 @@ -237,7 +237,8 @@ def ssh_execute(ssh, cmd, process_input=None, addl_env=None, check_exit_code=True): - LOG.debug(_('Running cmd (SSH): %s'), cmd) + sanitized_cmd = strutils.mask_password(cmd) + LOG.debug(_('Running cmd (SSH): %s'), sanitized_cmd) if addl_env: raise InvalidArgumentError(_('Environment not supported over SSH')) @@ -251,7 +252,10 @@ # NOTE(justinsb): This seems suspicious... # ...other SSH clients have buffering issues with this approach stdout = stdout_stream.read() + sanitized_stdout = strutils.mask_password(stdout) stderr = stderr_stream.read() + sanitized_stderr = strutils.mask_password(stderr) + stdin_stream.close() exit_status = channel.recv_exit_status() @@ -261,8 +265,8 @@ LOG.debug(_('Result was %s') % exit_status) if check_exit_code and exit_status != 0: raise ProcessExecutionError(exit_code=exit_status, - stdout=stdout, - stderr=stderr, - cmd=cmd) + stdout=sanitized_stdout, + stderr=sanitized_stderr, + cmd=sanitized_cmd) - return (stdout, stderr) + return (sanitized_stdout, sanitized_stderr) diff -Nru nova-2014.1.3/nova/openstack/common/strutils.py nova-2014.1.5/nova/openstack/common/strutils.py --- nova-2014.1.3/nova/openstack/common/strutils.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/openstack/common/strutils.py 2015-06-18 22:25:39.000000000 +0000 @@ -271,7 +271,12 @@ >>> mask_password("u'original_password' : u'aaaaa'") "u'original_password' : u'***'" """ - message = six.text_type(message) + try: + message = six.text_type(message) + except UnicodeDecodeError: + # NOTE(jecarey): Temporary fix to handle cases where message is a + # byte string. A better solution will be provided in Kilo. 
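Both of the hunks above funnel command lines and process output through mask_password before they can reach logs or a ProcessExecutionError. The essential idea in toy form; the real strutils implementation is regex-table driven and handles many more quoting styles than this sketch:

    import re

    _SANITIZE_KEYS = ('password', 'admin_pass', 'admin_password')

    def mask_password(message, secret='***'):
        # Replace whatever follows a known secret key, e.g.
        # "--password=s3cret" -> "--password=***".
        for key in _SANITIZE_KEYS:
            message = re.sub(r'(%s\s*[=:]\s*)\S+' % key,
                             r'\g<1>' + secret, message)
        return message

    print(mask_password('mysql --user nova --password=s3cret'))
    # mysql --user nova --password=***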
+ pass # NOTE(ldbragst): Check to see if anything in message contains any key # specified in _SANITIZE_KEYS, if not then just return the message since diff -Nru nova-2014.1.3/nova/scheduler/filters/trusted_filter.py nova-2014.1.5/nova/scheduler/filters/trusted_filter.py --- nova-2014.1.3/nova/scheduler/filters/trusted_filter.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/scheduler/filters/trusted_filter.py 2015-06-18 22:25:39.000000000 +0000 @@ -42,15 +42,11 @@ https://github.com/OpenAttestation/OpenAttestation """ -import httplib -import socket -import ssl - from oslo.config import cfg +import requests from nova import context from nova import db -from nova.openstack.common.gettextutils import _ from nova.openstack.common import jsonutils from nova.openstack.common import log as logging from nova.openstack.common import timeutils @@ -74,6 +70,9 @@ cfg.IntOpt('attestation_auth_timeout', default=60, help='Attestation status cache valid period length'), + cfg.BoolOpt('attestation_insecure_ssl', + default=True, + help='Disable SSL cert verification for Attestation service') ] CONF = cfg.CONF @@ -82,37 +81,6 @@ CONF.register_opts(trusted_opts, group=trust_group) -class HTTPSClientAuthConnection(httplib.HTTPSConnection): - """Class to make a HTTPS connection, with support for full client-based - SSL Authentication - """ - - def __init__(self, host, port, key_file, cert_file, ca_file, timeout=None): - httplib.HTTPSConnection.__init__(self, host, - key_file=key_file, - cert_file=cert_file) - self.host = host - self.port = port - self.key_file = key_file - self.cert_file = cert_file - self.ca_file = ca_file - self.timeout = timeout - - def connect(self): - """Connect to a host on a given (SSL) port. - If ca_file is pointing somewhere, use it to check Server Certificate. - - Redefined/copied and extended from httplib.py:1105 (Python 2.6.x). - This is needed to pass cert_reqs=ssl.CERT_REQUIRED as parameter to - ssl.wrap_socket(), which forces SSL to check server certificate - against our client certificate. - """ - sock = socket.create_connection((self.host, self.port), self.timeout) - self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, - ca_certs=self.ca_file, - cert_reqs=ssl.CERT_REQUIRED) - - class AttestationService(object): # Provide access wrapper to attestation server to get integrity report. @@ -125,29 +93,36 @@ self.cert_file = None self.ca_file = CONF.trusted_computing.attestation_server_ca_file self.request_count = 100 + # If the CA file is not provided, let's check the cert if verification + # asked + self.verify = (not CONF.trusted_computing.attestation_insecure_ssl + and self.ca_file or True) + self.cert = (self.cert_file, self.key_file) def _do_request(self, method, action_url, body, headers): # Connects to the server and issues a request. 
         # :returns: result data
         # :raises: IOError if the request fails
-        action_url = "%s/%s" % (self.api_url, action_url)
+        action_url = "https://%s:%s%s/%s" % (self.host, self.port,
+                                             self.api_url, action_url)
         try:
-            c = HTTPSClientAuthConnection(self.host, self.port,
-                                          key_file=self.key_file,
-                                          cert_file=self.cert_file,
-                                          ca_file=self.ca_file)
-            c.request(method, action_url, body, headers)
-            res = c.getresponse()
-            status_code = res.status
-            if status_code in (httplib.OK,
-                               httplib.CREATED,
-                               httplib.ACCEPTED,
-                               httplib.NO_CONTENT):
-                return httplib.OK, res
+            res = requests.request(method, action_url, data=body,
+                                   headers=headers, cert=self.cert,
+                                   verify=self.verify)
+            status_code = res.status_code
+            # pylint: disable=E1101
+            if status_code in (requests.codes.OK,
+                               requests.codes.CREATED,
+                               requests.codes.ACCEPTED,
+                               requests.codes.NO_CONTENT):
+                try:
+                    return requests.codes.OK, jsonutils.loads(res.text)
+                except (TypeError, ValueError):
+                    return requests.codes.OK, res.text
             return status_code, None
-        except (socket.error, IOError):
+        except requests.exceptions.RequestException:
             return IOError, None
 
@@ -161,11 +136,7 @@
         if self.auth_blob:
             headers['x-auth-blob'] = self.auth_blob
         status, res = self._do_request(cmd, subcmd, cooked, headers)
-        if status == httplib.OK:
-            data = res.read()
-            return status, jsonutils.loads(data)
-        else:
-            return status, None
+        return status, res
 
     def do_attestation(self, hosts):
         """Attests compute nodes through OAT service.
@@ -203,11 +174,7 @@
         # host in the first round that scheduler invokes us.
         computes = db.compute_node_get_all(admin)
         for compute in computes:
-            service = compute['service']
-            if not service:
-                LOG.warn(_("No service for compute ID %s") % compute['id'])
-                continue
-            host = service['host']
+            host = compute['hypervisor_hostname']
             self._init_cache_entry(host)
 
     def _cache_valid(self, host):
@@ -284,7 +251,7 @@
         instance = filter_properties.get('instance_type', {})
         extra = instance.get('extra_specs', {})
         trust = extra.get('trust:trusted_host')
-        host = host_state.host
+        host = host_state.nodename
         if trust:
            return self.compute_attestation.is_trusted(host, trust)
        return True
diff -Nru nova-2014.1.3/nova/test.py nova-2014.1.5/nova/test.py
--- nova-2014.1.3/nova/test.py 2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/test.py 2015-06-18 22:25:39.000000000 +0000
@@ -31,6 +31,7 @@
 import shutil
 import sys
 import uuid
+import warnings
 
 import fixtures
 from oslo.config import cfg
@@ -314,6 +315,9 @@
         CONF.set_override('enabled', True, 'osapi_v3')
         CONF.set_override('force_dhcp_release', False)
         CONF.set_override('periodic_enable', False)
+        # We don't need to kill ourselves in deprecation floods. Give
+        # me a ping, Vasily. One ping only, please.
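The comment above, together with the simplefilter call that follows it, is the whole fix: with thousands of tests, a deprecated call site would otherwise print its warning on every single invocation. Python's warnings filter collapses that, for example:

    import warnings

    warnings.simplefilter("once", DeprecationWarning)

    def old_api():
        warnings.warn("old_api is deprecated", DeprecationWarning)

    for _ in range(1000):
        old_api()   # the warning is emitted a single time

The "once" action prints only the first occurrence of each matching warning, so deprecations remain visible in the test logs without flooding them.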
+ warnings.simplefilter("once", DeprecationWarning) def _restore_obj_registry(self): objects_base.NovaObject._obj_classes = self._base_test_obj_backup diff -Nru nova-2014.1.3/nova/tests/api/ec2/test_cloud.py nova-2014.1.5/nova/tests/api/ec2/test_cloud.py --- nova-2014.1.3/nova/tests/api/ec2/test_cloud.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/api/ec2/test_cloud.py 2015-06-18 22:25:39.000000000 +0000 @@ -368,9 +368,9 @@ 'pool': 'nova'}) self.cloud.allocate_address(self.context) self.cloud.describe_addresses(self.context) - result = self.cloud.disassociate_address(self.context, - public_ip=address) - self.assertEqual(result['return'], 'true') + self.assertRaises(exception.InvalidAssociation, + self.cloud.disassociate_address, + self.context, public_ip=address) db.floating_ip_destroy(self.context, address) def test_describe_security_groups(self): diff -Nru nova-2014.1.3/nova/tests/api/openstack/test_wsgi.py nova-2014.1.5/nova/tests/api/openstack/test_wsgi.py --- nova-2014.1.3/nova/tests/api/openstack/test_wsgi.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/api/openstack/test_wsgi.py 2015-06-18 22:25:39.000000000 +0000 @@ -221,6 +221,14 @@ result = result.replace('\n', '').replace(' ', '') self.assertEqual(result, expected_xml) + def test_xml_contains_unicode(self): + input_dict = dict(test=u'\u89e3\u7801') + expected_xml = '\xe8\xa7\xa3\xe7\xa0\x81' + serializer = wsgi.XMLDictSerializer() + result = serializer.serialize(input_dict) + result = result.replace('\n', '').replace(' ', '') + self.assertEqual(expected_xml, result) + class JSONDictSerializerTest(test.NoDBTestCase): def test_json(self): diff -Nru nova-2014.1.3/nova/tests/cells/test_cells_state_manager.py nova-2014.1.5/nova/tests/cells/test_cells_state_manager.py --- nova-2014.1.3/nova/tests/cells/test_cells_state_manager.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/cells/test_cells_state_manager.py 2015-06-18 22:25:39.000000000 +0000 @@ -16,12 +16,16 @@ Tests For CellStateManager """ +import mock +import six + from oslo.config import cfg from nova.cells import state from nova import db from nova.db.sqlalchemy import models from nova import exception +from nova.openstack.common import fileutils from nova import test @@ -78,6 +82,19 @@ state.CellStateManager) self.assertEqual(['no_such_file_exists.conf'], e.config_files) + @mock.patch.object(cfg.ConfigOpts, 'find_file') + @mock.patch.object(fileutils, 'read_cached_file') + def test_filemanager_returned(self, mock_read_cached_file, mock_find_file): + mock_find_file.return_value = "/etc/nova/cells.json" + mock_read_cached_file.return_value = (False, six.StringIO({})) + self.flags(cells_config='cells.json', group='cells') + self.assertIsInstance(state.CellStateManager(), + state.CellStateManagerFile) + + def test_dbmanager_returned(self): + self.assertIsInstance(state.CellStateManager(), + state.CellStateManagerDB) + def test_capacity_no_reserve(self): # utilize entire cell cap = self._capacity(0.0) diff -Nru nova-2014.1.3/nova/tests/compute/test_compute_api.py nova-2014.1.5/nova/tests/compute/test_compute_api.py --- nova-2014.1.3/nova/tests/compute/test_compute_api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/compute/test_compute_api.py 2015-06-18 22:25:39.000000000 +0000 @@ -1705,6 +1705,27 @@ self.compute_api.volume_snapshot_delete(self.context, volume_id, snapshot_id, {}) + def test_boot_volume_basic_property(self): + block_device_mapping = [{ + 'id': 1, + 'device_name': 'vda', + 'no_device': 
None, + 'virtual_name': None, + 'snapshot_id': None, + 'volume_id': '1', + 'delete_on_termination': False, + }] + fake_volume = {"volume_image_metadata": + {"min_ram": 256, "min_disk": 128, "foo": "bar"}} + with mock.patch.object(self.compute_api.volume_api, 'get', + return_value=fake_volume): + meta = self.compute_api._get_bdm_image_metadata( + self.context, block_device_mapping) + self.assertEqual(256, meta['min_ram']) + self.assertEqual(128, meta['min_disk']) + self.assertEqual('active', meta['status']) + self.assertEqual('bar', meta['properties']['foo']) + def _create_instance_with_disabled_disk_config(self, object=False): sys_meta = {"image_auto_disk_config": "Disabled"} params = {"system_metadata": sys_meta} diff -Nru nova-2014.1.3/nova/tests/compute/test_compute_mgr.py nova-2014.1.5/nova/tests/compute/test_compute_mgr.py --- nova-2014.1.3/nova/tests/compute/test_compute_mgr.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/compute/test_compute_mgr.py 2015-06-18 22:25:39.000000000 +0000 @@ -29,6 +29,7 @@ from nova import db from nova import exception from nova.network import model as network_model +from nova import objects from nova.objects import base as obj_base from nova.objects import block_device as block_device_obj from nova.objects import external_event as external_event_obj @@ -271,6 +272,31 @@ self.mox.VerifyAll() self.mox.UnsetStubs() + def test_init_instance_with_binding_failed_vif_type(self): + # this instance will plug a 'binding_failed' vif + instance = fake_instance.fake_instance_obj( + self.context, + uuid='fake-uuid', + info_cache=None, + power_state=power_state.RUNNING, + vm_state=vm_states.ACTIVE, + task_state=None, + expected_attrs=['info_cache']) + + with contextlib.nested( + mock.patch.object(context, 'get_admin_context', + return_value=self.context), + mock.patch.object(compute_utils, 'get_nw_info_for_instance', + return_value=network_model.NetworkInfo()), + mock.patch.object(self.compute.driver, 'plug_vifs', + side_effect=exception.VirtualInterfacePlugException( + "Unexpected vif_type=binding_failed")), + mock.patch.object(self.compute, '_set_instance_error_state') + ) as (get_admin_context, get_nw_info, plug_vifs, set_error_state): + self.compute._init_instance(self.context, instance) + set_error_state.assert_called_once_with(self.context, + instance.uuid) + def test_init_instance_failed_resume_sets_error(self): instance = fake_instance.fake_instance_obj( self.context, @@ -591,6 +617,54 @@ instance.power_state = power_state.RUNNING self._test_init_instance_cleans_reboot_state(instance) + def test_init_instance_retries_power_off(self): + instance = instance_obj.Instance(self.context) + instance.uuid = 'foo' + instance.id = 1 + instance.vm_state = vm_states.ACTIVE + instance.task_state = task_states.POWERING_OFF + with mock.patch.object(self.compute, 'stop_instance'): + self.compute._init_instance(self.context, instance) + call = mock.call(self.context, instance) + self.compute.stop_instance.assert_has_calls([call]) + + def test_init_instance_retries_power_on(self): + instance = instance_obj.Instance(self.context) + instance.uuid = 'foo' + instance.id = 1 + instance.vm_state = vm_states.ACTIVE + instance.task_state = task_states.POWERING_ON + with mock.patch.object(self.compute, 'start_instance'): + self.compute._init_instance(self.context, instance) + call = mock.call(self.context, instance) + self.compute.start_instance.assert_has_calls([call]) + + def test_init_instance_retries_power_on_silent_exception(self): + instance = 
instance_obj.Instance(self.context) + instance.uuid = 'foo' + instance.id = 1 + instance.vm_state = vm_states.ACTIVE + instance.task_state = task_states.POWERING_ON + with mock.patch.object(self.compute, 'start_instance', + return_value=Exception): + init_return = self.compute._init_instance(self.context, instance) + call = mock.call(self.context, instance) + self.compute.start_instance.assert_has_calls([call]) + self.assertIsNone(init_return) + + def test_init_instance_retries_power_off_silent_exception(self): + instance = instance_obj.Instance(self.context) + instance.uuid = 'foo' + instance.id = 1 + instance.vm_state = vm_states.ACTIVE + instance.task_state = task_states.POWERING_OFF + with mock.patch.object(self.compute, 'stop_instance', + return_value=Exception): + init_return = self.compute._init_instance(self.context, instance) + call = mock.call(self.context, instance) + self.compute.stop_instance.assert_has_calls([call]) + self.assertIsNone(init_return) + def test_get_instances_on_driver(self): fake_context = context.get_admin_context() @@ -930,14 +1004,20 @@ self.mox.StubOutWithMock(self.compute.compute_api, 'is_volume_backed_instance') + self.mox.StubOutWithMock(self.compute, + '_get_instance_block_device_info') self.mox.StubOutWithMock(self.compute.driver, 'check_can_live_migrate_source') instance_p = obj_base.obj_to_primitive(instance) self.compute.compute_api.is_volume_backed_instance( self.context, instance).AndReturn(is_volume_backed) + self.compute._get_instance_block_device_info( + self.context, instance, refresh_conn_info=True + ).AndReturn({'block_device_mapping': 'fake'}) self.compute.driver.check_can_live_migrate_source( - self.context, instance, expected_dest_check_data) + self.context, instance, expected_dest_check_data, + {'block_device_mapping': 'fake'}) self.mox.ReplayAll() @@ -1173,6 +1253,50 @@ destroy.assert_called_once_with(self.context, instance_2, None, {}, True) + def test_rebuild_default_impl(self): + def _detach(context, bdms): + pass + + def _attach(context, instance, bdms, do_check_attach=True): + return {'block_device_mapping': 'shared_block_storage'} + + def _spawn(context, instance, image_meta, injected_files, + admin_password, network_info=None, block_device_info=None): + self.assertEqual(block_device_info['block_device_mapping'], + 'shared_block_storage') + + with contextlib.nested( + mock.patch.object(self.compute.driver, 'destroy', + return_value=None), + mock.patch.object(self.compute.driver, 'spawn', + side_effect=_spawn), + mock.patch.object(objects.instance.Instance, 'save', + return_value=None) + ) as( + mock_destroy, + mock_spawn, + mock_save + ): + instance = fake_instance.fake_instance_obj(self.context) + instance.task_state = task_states.REBUILDING + instance.save(expected_task_state=[task_states.REBUILDING]) + self.compute._rebuild_default_impl(self.context, + instance, + None, + [], + admin_password='new_pass', + bdms=[], + detach_block_devices=_detach, + attach_block_devices=_attach, + network_info=None, + recreate=True, + block_device_info=None, + preserve_ephemeral=False) + + self.assertFalse(mock_destroy.called) + self.assertTrue(mock_save.called) + self.assertTrue(mock_spawn.called) + class ComputeManagerBuildInstanceTestCase(test.NoDBTestCase): def setUp(self): @@ -1230,7 +1354,9 @@ self.compute._notify_about_instance_usage(self.context, self.instance, event, **kwargs) - def test_build_and_run_instance_called_with_proper_args(self): + @mock.patch('nova.utils.spawn_n') + def 
test_build_and_run_instance_called_with_proper_args(self, mock_spawn): + mock_spawn.side_effect = lambda f, *a, **k: f(*a, **k) self.mox.StubOutWithMock(self.compute, '_build_and_run_instance') self.mox.StubOutWithMock(self.compute.conductor_api, 'action_event_start') @@ -1256,7 +1382,18 @@ block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits) - def test_build_abort_exception(self): + @mock.patch('nova.utils.spawn_n') + def test_build_abort_exception(self, mock_spawn): + def fake_spawn(f, *args, **kwargs): + # NOTE(danms): Simulate the detached nature of spawn so that + # we confirm that the inner task has the fault logic + try: + return f(*args, **kwargs) + except Exception: + pass + + mock_spawn.side_effect = fake_spawn + self.mox.StubOutWithMock(self.compute, '_build_and_run_instance') self.mox.StubOutWithMock(self.compute, '_cleanup_allocated_networks') self.mox.StubOutWithMock(self.compute, '_set_instance_error_state') @@ -1292,7 +1429,9 @@ block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits) - def test_rescheduled_exception(self): + @mock.patch('nova.utils.spawn_n') + def test_rescheduled_exception(self, mock_spawn): + mock_spawn.side_effect = lambda f, *a, **k: f(*a, **k) self.mox.StubOutWithMock(self.compute, '_build_and_run_instance') self.mox.StubOutWithMock(self.compute, '_set_instance_error_state') self.mox.StubOutWithMock(self.compute.compute_task_api, @@ -1327,7 +1466,9 @@ block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits) - def test_rescheduled_exception_do_not_deallocate_network(self): + @mock.patch('nova.utils.spawn_n') + def test_rescheduled_exception_do_not_deallocate_network(self, mock_spawn): + mock_spawn.side_effect = lambda f, *a, **k: f(*a, **k) self.mox.StubOutWithMock(self.compute, '_build_and_run_instance') self.mox.StubOutWithMock(self.compute, '_cleanup_allocated_networks') self.mox.StubOutWithMock(self.compute.compute_task_api, @@ -1362,7 +1503,10 @@ block_device_mapping=self.block_device_mapping, node=self.node, limits=self.limits) - def test_rescheduled_exception_deallocate_network_if_dhcp(self): + @mock.patch('nova.utils.spawn_n') + def test_rescheduled_exception_deallocate_network_if_dhcp( + self, mock_spawn): + mock_spawn.side_effect = lambda f, *a, **k: f(*a, **k) self.mox.StubOutWithMock(self.compute, '_build_and_run_instance') self.mox.StubOutWithMock(self.compute.driver, 'dhcp_options_for_instance') @@ -1430,7 +1574,9 @@ mox.IgnoreArg()) self.mox.ReplayAll() - self.compute.build_and_run_instance(self.context, self.instance, + with mock.patch('nova.utils.spawn_n') as mock_spawn: + mock_spawn.side_effect = lambda f, *a, **k: f(*a, **k) + self.compute.build_and_run_instance(self.context, self.instance, self.image, request_spec={}, filter_properties=[], injected_files=self.injected_files, admin_password=self.admin_pass, @@ -1572,7 +1718,9 @@ self.assertEqual(network_info, inst.info_cache.network_info) inst.save.assert_called_with(expected_task_state=task_states.SPAWNING) - def test_reschedule_on_resources_unavailable(self): + @mock.patch('nova.utils.spawn_n') + def test_reschedule_on_resources_unavailable(self, mock_spawn): + mock_spawn.side_effect = lambda f, *a, **k: f(*a, **k) reason = 'resource unavailable' exc = exception.ComputeResourcesUnavailable(reason=reason) diff -Nru nova-2014.1.3/nova/tests/compute/test_compute.py nova-2014.1.5/nova/tests/compute/test_compute.py --- nova-2014.1.3/nova/tests/compute/test_compute.py 2014-10-02 23:32:00.000000000 +0000 
+++ nova-2014.1.5/nova/tests/compute/test_compute.py 2015-06-18 22:25:39.000000000 +0000 @@ -34,6 +34,8 @@ import six from testtools import matchers as testtools_matchers +from eventlet import greenthread + import nova from nova import availability_zones from nova import block_device @@ -58,6 +60,7 @@ from nova.objects import block_device as block_device_obj from nova.objects import instance as instance_obj from nova.objects import instance_group as instance_group_obj +from nova.objects import instance_info_cache as cache_obj from nova.objects import migration as migration_obj from nova.objects import quotas as quotas_obj from nova.openstack.common.gettextutils import _ @@ -65,6 +68,7 @@ from nova.openstack.common import jsonutils from nova.openstack.common import log as logging from nova.openstack.common import timeutils +from nova.openstack.common import units from nova.openstack.common import uuidutils from nova import policy from nova import quota @@ -81,7 +85,6 @@ from nova.tests import matchers from nova.tests.objects import test_flavor from nova.tests.objects import test_migration -from nova.tests.objects import test_network from nova import utils from nova.virt import block_device as driver_block_device from nova.virt import event @@ -378,6 +381,8 @@ lambda *a, **kw: None) self.stubs.Set(self.compute.volume_api, 'check_attach', lambda *a, **kw: None) + self.stubs.Set(greenthread, 'sleep', + lambda *a, **kw: None) def store_cinfo(context, *args, **kwargs): self.cinfo = jsonutils.loads(args[-1].get('connection_info')) @@ -438,7 +443,9 @@ mock_get_by_id.assert_called_once_with(self.context, 'fake') self.assertTrue(mock_attach.called) - def test_await_block_device_created_to_slow(self): + def test_await_block_device_created_too_slow(self): + self.flags(block_device_allocate_retries=2) + self.flags(block_device_allocate_retries_interval=0.1) def never_get(context, vol_id): return { @@ -449,13 +456,15 @@ self.stubs.Set(self.compute.volume_api, 'get', never_get) self.assertRaises(exception.VolumeNotCreated, self.compute._await_block_device_map_created, - self.context, '1', max_tries=2, wait_between=0.1) + self.context, '1') def test_await_block_device_created_slow(self): c = self.compute + self.flags(block_device_allocate_retries=4) + self.flags(block_device_allocate_retries_interval=0.1) def slow_get(context, vol_id): - while self.fetched_attempts < 2: + if self.fetched_attempts < 2: self.fetched_attempts += 1 return { 'status': 'creating', @@ -467,9 +476,7 @@ } self.stubs.Set(c.volume_api, 'get', slow_get) - attempts = c._await_block_device_map_created(self.context, '1', - max_tries=4, - wait_between=0.1) + attempts = c._await_block_device_map_created(self.context, '1') self.assertEqual(attempts, 3) def test_boot_volume_serial(self): @@ -498,13 +505,21 @@ def volume_api_get(*args, **kwargs): if metadata: return { - 'volume_image_metadata': {'vol_test_key': 'vol_test_value'} + 'size': 1, + 'volume_image_metadata': {'vol_test_key': 'vol_test_value', + 'min_ram': u'128', + 'min_disk': u'256', + 'size': u'536870912' + }, } else: return {} self.stubs.Set(self.compute_api.volume_api, 'get', volume_api_get) + expected_no_metadata = {'min_disk': 0, 'min_ram': 0, 'properties': {}, + 'size': 0, 'status': 'active'} + block_device_mapping = [{ 'id': 1, 'device_name': 'vda', @@ -518,9 +533,13 @@ image_meta = self.compute_api._get_bdm_image_metadata( self.context, block_device_mapping) if metadata: - self.assertEqual(image_meta['vol_test_key'], 'vol_test_value') + 
self.assertEqual(image_meta['properties']['vol_test_key'], + 'vol_test_value') + self.assertEqual(128, image_meta['min_ram']) + self.assertEqual(256, image_meta['min_disk']) + self.assertEqual(units.Gi, image_meta['size']) else: - self.assertEqual(image_meta, {}) + self.assertEqual(expected_no_metadata, image_meta) # Test it with new-style BDMs block_device_mapping = [{ @@ -534,9 +553,13 @@ image_meta = self.compute_api._get_bdm_image_metadata( self.context, block_device_mapping, legacy_bdm=False) if metadata: - self.assertEqual(image_meta['vol_test_key'], 'vol_test_value') + self.assertEqual(image_meta['properties']['vol_test_key'], + 'vol_test_value') + self.assertEqual(128, image_meta['min_ram']) + self.assertEqual(256, image_meta['min_disk']) + self.assertEqual(units.Gi, image_meta['size']) else: - self.assertEqual(image_meta, {}) + self.assertEqual(expected_no_metadata, image_meta) def test_boot_volume_no_metadata(self): self.test_boot_volume_metadata(metadata=False) @@ -564,7 +587,8 @@ self.context, block_device_mapping, legacy_bdm=False) if metadata: - self.assertEqual(image_meta['img_test_key'], 'img_test_value') + self.assertEqual('img_test_value', + image_meta['properties']['img_test_key']) else: self.assertEqual(image_meta, {}) @@ -1031,6 +1055,27 @@ self.compute.volume_snapshot_delete, self.context, self.instance_object, 'fake_id', 'fake_id2', {}) + @mock.patch.object(cinder.API, 'create', + side_effect=exception.OverQuota(overs='volumes')) + def test_prep_block_device_over_quota_failure(self, mock_create): + instance = self._create_fake_instance() + bdms = [ + block_device.BlockDeviceDict({ + 'boot_index': 0, + 'guest_format': None, + 'connection_info': None, + 'device_type': u'disk', + 'source_type': 'image', + 'destination_type': 'volume', + 'volume_size': 1, + 'image_id': 1, + 'device_name': '/dev/vdb', + })] + self.assertRaises(exception.InvalidBDM, + compute_manager.ComputeManager()._prep_block_device, + self.context, instance, bdms) + self.assertEqual(1, mock_create.call_count) + class ComputeTestCase(BaseTestCase): def test_wrap_instance_fault(self): @@ -1509,6 +1554,30 @@ self._assert_state({'vm_state': vm_states.ERROR, 'task_state': None}) + @mock.patch('nova.compute.manager.ComputeManager._prep_block_device', + side_effect=exception.OverQuota(overs='volumes')) + def test_setup_block_device_over_quota_fail(self, mock_prep_block_dev): + """block device mapping over quota failure test. + + Make sure that when we are over volume quota according to the Cinder + client, the appropriate exception is raised and the instance ends up + in the ERROR state with no task state set. + """ + instance = self._create_fake_instance() + self.assertRaises(exception.OverQuota, self.compute.run_instance, + self.context, instance=instance, request_spec={}, + filter_properties={}, requested_networks=[], + injected_files=None, admin_password=None, + is_first_time=True, node=None, + legacy_bdm_in_spec=False) + # check that the state stays failed even after the periodic poll + self._assert_state({'vm_state': vm_states.ERROR, + 'task_state': None}) + self.compute.periodic_tasks(context.get_admin_context()) + self._assert_state({'vm_state': vm_states.ERROR, + 'task_state': None}) + self.assertEqual(1, mock_prep_block_dev.call_count) + def test_run_instance_spawn_fail(self): """spawn failure test. 
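The two over-quota tests above exercise the same behaviour from opposite ends: _prep_block_device must translate the volume service's OverQuota into InvalidBDM, and run_instance must then leave the instance in the ERROR state with no task state. A minimal, self-contained sketch of that translation pattern follows; the OverQuota and InvalidBDM classes and the create_volume callable are stand-ins for illustration, not Nova's real API:

    class OverQuota(Exception):
        """Stand-in for the volume service's quota error."""


    class InvalidBDM(Exception):
        """Stand-in for the block device mapping error shown to callers."""


    def prep_block_device(bdms, create_volume):
        # For every image->volume mapping, create the backing volume; a
        # quota failure is re-raised as a descriptive block-device error
        # so the caller can put the instance into ERROR with a clear fault.
        for bdm in bdms:
            if (bdm.get('source_type') == 'image' and
                    bdm.get('destination_type') == 'volume'):
                try:
                    bdm['volume_id'] = create_volume(bdm['volume_size'])
                except OverQuota:
                    raise InvalidBDM('volume quota exceeded for %s'
                                     % bdm.get('device_name'))
        return bdms

With a create_volume that raises OverQuota, prep_block_device raises InvalidBDM, which mirrors what test_prep_block_device_over_quota_failure asserts against the real compute manager.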
@@ -6720,6 +6789,35 @@ self.assertIsNone(instance['task_state']) return instance, instance_uuid + def test_ip_filtering(self): + info = [{ + 'address': 'aa:bb:cc:dd:ee:ff', + 'id': 1, + 'network': { + 'bridge': 'br0', + 'id': 1, + 'label': 'private', + 'subnets': [{ + 'cidr': '192.168.0.0/24', + 'ips': [{ + 'address': '192.168.0.10', + 'type': 'fixed', + }] + }] + } + }] + + info1 = cache_obj.InstanceInfoCache(network_info=jsonutils.dumps(info)) + inst1 = instance_obj.Instance(id=1, info_cache=info1) + info[0]['network']['subnets'][0]['ips'][0]['address'] = '192.168.0.20' + info2 = cache_obj.InstanceInfoCache(network_info=jsonutils.dumps(info)) + inst2 = instance_obj.Instance(id=2, info_cache=info2) + instances = instance_obj.InstanceList(objects=[inst1, inst2]) + + instances = self.compute_api._ip_filter(instances, {'ip': '.*10'}) + self.assertEqual(len(instances), 1) + self.assertEqual(instances[0].id, 1) + def test_create_with_too_little_ram(self): # Test an instance type with too little memory. @@ -7524,33 +7622,47 @@ db.instance_destroy(c, instance2['uuid']) db.instance_destroy(c, instance3['uuid']) - @mock.patch('nova.db.network_get') - @mock.patch('nova.db.fixed_ips_by_virtual_interface') - def test_get_all_by_multiple_options_at_once(self, fixed_get, network_get): + def test_get_all_by_multiple_options_at_once(self): # Test searching by multiple options at once. c = context.get_admin_context() - network_manager = fake_network.FakeNetworkManager(self.stubs) - fixed_get.side_effect = ( - network_manager.db.fixed_ips_by_virtual_interface) - network_get.return_value = ( - dict(test_network.fake_network, - **network_manager.db.network_get(None, 1))) - self.stubs.Set(self.compute_api.network_api, - 'get_instance_uuids_by_ip_filter', - network_manager.get_instance_uuids_by_ip_filter) + + def fake_network_info(ip): + info = [{ + 'address': 'aa:bb:cc:dd:ee:ff', + 'id': 1, + 'network': { + 'bridge': 'br0', + 'id': 1, + 'label': 'private', + 'subnets': [{ + 'cidr': '192.168.0.0/24', + 'ips': [{ + 'address': ip, + 'type': 'fixed', + }] + }] + } + }] + return jsonutils.dumps(info) instance1 = self._create_fake_instance({ 'display_name': 'woot', 'id': 1, - 'uuid': '00000000-0000-0000-0000-000000000010'}) + 'uuid': '00000000-0000-0000-0000-000000000010', + 'info_cache': {'network_info': + fake_network_info('192.168.0.1')}}) instance2 = self._create_fake_instance({ 'display_name': 'woo', 'id': 20, - 'uuid': '00000000-0000-0000-0000-000000000020'}) + 'uuid': '00000000-0000-0000-0000-000000000020', + 'info_cache': {'network_info': + fake_network_info('192.168.0.2')}}) instance3 = self._create_fake_instance({ 'display_name': 'not-woot', 'id': 30, - 'uuid': '00000000-0000-0000-0000-000000000030'}) + 'uuid': '00000000-0000-0000-0000-000000000030', + 'info_cache': {'network_info': + fake_network_info('192.168.0.3')}}) # ip ends up matching 2nd octet here.. 
so all 3 match ip # but 'name' only matches one @@ -10766,7 +10878,7 @@ self.instance_type['root_gb'] = 1 def test_no_image_specified(self): - self.compute_api._check_requested_image(self.context, None, {}, + self.compute_api._check_requested_image(self.context, None, None, self.instance_type) def test_image_status_must_be_active(self): diff -Nru nova-2014.1.3/nova/tests/compute/test_host_api.py nova-2014.1.5/nova/tests/compute/test_host_api.py --- nova-2014.1.3/nova/tests/compute/test_host_api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/compute/test_host_api.py 2015-06-18 22:25:39.000000000 +0000 @@ -400,24 +400,30 @@ self.mox.StubOutWithMock(self.host_api.cells_rpcapi, 'service_get_by_compute_host') + # Cells return services with full cell_path prepended to IDs + fake_service = dict(test_service.fake_service, id='cell1@1') + exp_service = fake_service.copy() + self.host_api.cells_rpcapi.service_get_by_compute_host(self.ctxt, - 'fake-host').AndReturn(test_service.fake_service) + 'fake-host').AndReturn(fake_service) self.mox.ReplayAll() result = self.host_api.service_get_by_compute_host(self.ctxt, 'fake-host') - self._compare_obj(result, test_service.fake_service) + self._compare_obj(result, exp_service) def test_service_update(self): host_name = 'fake-host' binary = 'nova-compute' params_to_update = dict(disabled=True) - service_id = 42 - expected_result = dict(test_service.fake_service, id=service_id) + service_id = 'cell1@42' # Cells prepend full cell path to ID + + update_result = dict(test_service.fake_service, id=service_id) + expected_result = update_result.copy() self.mox.StubOutWithMock(self.host_api.cells_rpcapi, 'service_update') self.host_api.cells_rpcapi.service_update( self.ctxt, host_name, - binary, params_to_update).AndReturn(expected_result) + binary, params_to_update).AndReturn(update_result) self.mox.ReplayAll() diff -Nru nova-2014.1.3/nova/tests/console/test_websocketproxy.py nova-2014.1.5/nova/tests/console/test_websocketproxy.py --- nova-2014.1.3/nova/tests/console/test_websocketproxy.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/console/test_websocketproxy.py 2015-06-18 22:25:39.000000000 +0000 @@ -0,0 +1,216 @@ +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +"""Tests for nova websocketproxy.""" + + +import mock + +from nova.console import websocketproxy +from nova import exception +from nova import test +from oslo.config import cfg + +CONF = cfg.CONF + + +class NovaProxyRequestHandlerBaseTestCase(test.TestCase): + + def setUp(self): + super(NovaProxyRequestHandlerBaseTestCase, self).setUp() + + self.wh = websocketproxy.NovaWebSocketProxy() + self.wh.socket = mock.MagicMock() + self.wh.msg = mock.MagicMock() + self.wh.do_proxy = mock.MagicMock() + self.wh.headers = mock.MagicMock() + CONF.set_override('novncproxy_base_url', + 'https://example.net:6080/vnc_auto.html') + CONF.set_override('html5proxy_base_url', + 'https://example.net:6080/vnc_auto.html', + 'spice') + + @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') + def test_new_client(self, check_token): + def _fake_getheader(header): + if header == 'cookie': + return 'token="123-456-789"' + elif header == 'Origin': + return 'https://example.net:6080' + elif header == 'Host': + return 'example.net:6080' + else: + return + + check_token.return_value = { + 'host': 'node1', + 'port': '10000', + 'console_type': 'novnc' + } + self.wh.socket.return_value = '' + self.wh.path = "http://127.0.0.1/?token=123-456-789" + self.wh.headers.getheader = _fake_getheader + + self.wh.new_client() + + check_token.assert_called_with(mock.ANY, token="123-456-789") + self.wh.socket.assert_called_with('node1', 10000, connect=True) + self.wh.do_proxy.assert_called_with('') + + @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') + def test_new_client_raises_with_invalid_origin(self, check_token): + def _fake_getheader(header): + if header == 'cookie': + return 'token="123-456-789"' + elif header == 'Origin': + return 'https://bad-origin-example.net:6080' + elif header == 'Host': + return 'example.net:6080' + else: + return + + check_token.return_value = { + 'host': 'node1', + 'port': '10000', + 'console_type': 'novnc' + } + self.wh.socket.return_value = '' + self.wh.path = "http://127.0.0.1/?token=123-456-789" + self.wh.headers.getheader = _fake_getheader + + self.assertRaises(exception.ValidationError, + self.wh.new_client) + + @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') + def test_new_client_raises_with_blank_origin(self, check_token): + def _fake_getheader(header): + if header == 'cookie': + return 'token="123-456-789"' + elif header == 'Origin': + return '' + elif header == 'Host': + return 'example.net:6080' + else: + return + + check_token.return_value = { + 'host': 'node1', + 'port': '10000', + 'console_type': 'novnc' + } + self.wh.socket.return_value = '' + self.wh.path = "http://127.0.0.1/?token=123-456-789" + self.wh.headers.getheader = _fake_getheader + + self.assertRaises(exception.ValidationError, + self.wh.new_client) + + @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') + def test_new_client_with_no_origin(self, check_token): + def _fake_getheader(header): + if header == 'cookie': + return 'token="123-456-789"' + elif header == 'Origin': + return None + elif header == 'Host': + return 'example.net:6080' + else: + return + + check_token.return_value = { + 'host': 'node1', + 'port': '10000', + 'console_type': 'novnc' + } + self.wh.socket.return_value = '' + self.wh.path = "http://127.0.0.1/?token=123-456-789" + self.wh.headers.getheader = _fake_getheader + + self.wh.new_client() + + check_token.assert_called_with(mock.ANY, token="123-456-789") + self.wh.socket.assert_called_with('node1', 10000, connect=True) + 
self.wh.do_proxy.assert_called_with('') + + @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') + def test_new_client_raises_with_wrong_proto_vnc(self, check_token): + def _fake_getheader(header): + if header == 'cookie': + return 'token="123-456-789"' + elif header == 'Origin': + return 'http://example.net:6080' + elif header == 'Host': + return 'example.net:6080' + else: + return + + check_token.return_value = { + 'host': 'node1', + 'port': '10000', + 'console_type': 'novnc' + } + self.wh.socket.return_value = '' + self.wh.path = "http://127.0.0.1/?token=123-456-789" + self.wh.headers.getheader = _fake_getheader + + self.assertRaises(exception.ValidationError, + self.wh.new_client) + + @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') + def test_raises_with_wrong_proto_spice(self, check_token): + def _fake_getheader(header): + if header == 'cookie': + return 'token="123-456-789"' + elif header == 'Origin': + return 'http://example.net:6080' + elif header == 'Host': + return 'example.net:6080' + else: + return + + check_token.return_value = { + 'host': 'node1', + 'port': '10000', + 'console_type': 'spice-html5' + } + self.wh.socket.return_value = '' + self.wh.path = "http://127.0.0.1/?token=123-456-789" + self.wh.headers.getheader = _fake_getheader + + self.assertRaises(exception.ValidationError, + self.wh.new_client) + + @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') + def test_raises_with_bad_console_type(self, check_token): + def _fake_getheader(header): + if header == 'cookie': + return 'token="123-456-789"' + elif header == 'Origin': + return 'https://example.net:6080' + elif header == 'Host': + return 'example.net:6080' + else: + return + + check_token.return_value = { + 'host': 'node1', + 'port': '10000', + 'console_type': 'bad-console-type' + } + self.wh.socket.return_value = '' + self.wh.path = "http://127.0.0.1/?token=123-456-789" + self.wh.headers.getheader = _fake_getheader + + self.assertRaises(exception.ValidationError, + self.wh.new_client) diff -Nru nova-2014.1.3/nova/tests/db/test_db_api.py nova-2014.1.5/nova/tests/db/test_db_api.py --- nova-2014.1.3/nova/tests/db/test_db_api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/db/test_db_api.py 2015-06-18 22:25:39.000000000 +0000 @@ -24,6 +24,7 @@ import types import uuid as stdlib_uuid +import mock import mox import netaddr from oslo.config import cfg @@ -1340,6 +1341,24 @@ self.ctxt.user_id) self.assertEqual(1, usage.in_use) + @mock.patch.object(db.sqlalchemy.api, '_security_group_get_by_names') + def test_security_group_ensure_default_called_concurrently(self, sg_mock): + # make sure NotFound is always raised here to trick Nova to insert the + # duplicate security group entry + sg_mock.side_effect = exception.NotFound + + # create the first db entry + self.ctxt.project_id = 1 + db.security_group_ensure_default(self.ctxt) + security_groups = db.security_group_get_by_project( + self.ctxt, + self.ctxt.project_id) + self.assertEqual(1, len(security_groups)) + + # create the second one and ensure the exception is handled properly + default_group = db.security_group_ensure_default(self.ctxt) + self.assertEqual('default', default_group.name) + def test_security_group_update(self): security_group = self._create_security_group({}) new_values = { diff -Nru nova-2014.1.3/nova/tests/db/test_migrations.py nova-2014.1.5/nova/tests/db/test_migrations.py --- nova-2014.1.3/nova/tests/db/test_migrations.py 2014-10-02 23:32:00.000000000 +0000 +++ 
nova-2014.1.5/nova/tests/db/test_migrations.py 2015-06-18 22:25:39.000000000 +0000 @@ -260,9 +260,9 @@ os.environ['PGUSER'] = user # note(boris-42): We must create and drop database, we can't # drop database which we have connected to, so for such - # operations there is a special database template1. + # operations there is a special database postgres. sqlcmd = ("psql -w -U %(user)s -h %(host)s -c" - " '%(sql)s' -d template1") + " '%(sql)s' -d postgres") sqldict = {'user': user, 'host': host} sqldict['sql'] = ("drop database if exists %s;") % database @@ -325,7 +325,7 @@ os.environ['PGUSER'] = user sqlcmd = ("psql -w -U %(user)s -h %(host)s -c" - " '%(sql)s' -d template1") + " '%(sql)s' -d postgres") sql = ("create database if not exists %s;") % database createtable = sqlcmd % {'user': user, 'host': host, 'sql': sql} diff -Nru nova-2014.1.3/nova/tests/matchers.py nova-2014.1.5/nova/tests/matchers.py --- nova-2014.1.3/nova/tests/matchers.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/matchers.py 2015-06-18 22:25:39.000000000 +0000 @@ -30,7 +30,8 @@ def describe(self): return ('Keys in d1 and not d2: %(d1only)s.' - ' Keys in d2 and not d1: %(d2only)s' % self.__dict__) + ' Keys in d2 and not d1: %(d2only)s' % + {'d1only': self.d1only, 'd2only': self.d2only}) def get_details(self): return {} @@ -44,7 +45,9 @@ def describe(self): return ("Dictionaries do not match at %(key)s." - " d1: %(d1_value)s d2: %(d2_value)s" % self.__dict__) + " d1: %(d1_value)s d2: %(d2_value)s" % + {'key': self.key, 'd1_value': self.d1_value, + 'd2_value': self.d2_value}) def get_details(self): return {} @@ -114,7 +117,7 @@ def describe(self): return ('Length mismatch: len(L1)=%(len1)d != ' - 'len(L2)=%(len2)d' % self.__dict__) + 'len(L2)=%(len2)d' % {'len1': self.len1, 'len2': self.len2}) def get_details(self): return {} @@ -222,7 +225,7 @@ self.actual = state.actual def describe(self): - return "%(path)s: XML does not match" % self.__dict__ + return "%(path)s: XML does not match" % {'path': self.path} def get_details(self): return { @@ -243,7 +246,10 @@ def describe(self): return ("%(path)s: XML tag mismatch at index %(idx)d: " "expected tag <%(expected_tag)s>; " - "actual tag <%(actual_tag)s>" % self.__dict__) + "actual tag <%(actual_tag)s>" % + {'path': self.path, 'idx': self.idx, + 'expected_tag': self.expected_tag, + 'actual_tag': self.actual_tag}) class XMLAttrKeysMismatch(XMLMismatch): @@ -257,7 +263,9 @@ def describe(self): return ("%(path)s: XML attributes mismatch: " "keys only in expected: %(expected_only)s; " - "keys only in actual: %(actual_only)s" % self.__dict__) + "keys only in actual: %(actual_only)s" % + {'path': self.path, 'expected_only': self.expected_only, + 'actual_only': self.actual_only}) class XMLAttrValueMismatch(XMLMismatch): @@ -272,7 +280,10 @@ def describe(self): return ("%(path)s: XML attribute value mismatch: " "expected value of attribute %(key)s: %(expected_value)r; " - "actual value: %(actual_value)r" % self.__dict__) + "actual value: %(actual_value)r" % + {'path': self.path, 'key': self.key, + 'expected_value': self.expected_value, + 'actual_value': self.actual_value}) class XMLTextValueMismatch(XMLMismatch): @@ -286,7 +297,9 @@ def describe(self): return ("%(path)s: XML text value mismatch: " "expected text value: %(expected_text)r; " - "actual value: %(actual_text)r" % self.__dict__) + "actual value: %(actual_text)r" % + {'path': self.path, 'expected_text': self.expected_text, + 'actual_text': self.actual_text}) class XMLUnexpectedChild(XMLMismatch): @@ -299,7 +312,8 
@@ def describe(self): return ("%(path)s: XML unexpected child element <%(tag)s> " - "present at index %(idx)d" % self.__dict__) + "present at index %(idx)d" % + {'path': self.path, 'tag': self.tag, 'idx': self.idx}) class XMLExpectedChild(XMLMismatch): @@ -312,7 +326,8 @@ def describe(self): return ("%(path)s: XML expected child element <%(tag)s> " - "not present at index %(idx)d" % self.__dict__) + "not present at index %(idx)d" % + {'path': self.path, 'tag': self.tag, 'idx': self.idx}) class XMLMatchState(object): diff -Nru nova-2014.1.3/nova/tests/network/test_linux_net.py nova-2014.1.5/nova/tests/network/test_linux_net.py --- nova-2014.1.3/nova/tests/network/test_linux_net.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/network/test_linux_net.py 2015-06-18 22:25:39.000000000 +0000 @@ -634,13 +634,9 @@ linux_net.IptablesManager()) self.stubs.Set(linux_net, 'binary_name', 'test') executes = [] - inputs = [] def fake_execute(*args, **kwargs): executes.append(args) - process_input = kwargs.get('process_input') - if process_input: - inputs.append(process_input) return "", "" self.stubs.Set(utils, 'execute', fake_execute) @@ -669,118 +665,26 @@ iface, '--arp-ip-src', dhcp, '-j', 'DROP'), ('ebtables', '-t', 'filter', '-I', 'OUTPUT', '-p', 'ARP', '-o', iface, '--arp-ip-src', dhcp, '-j', 'DROP'), + ('ebtables', '-t', 'filter', '-D', 'FORWARD', '-p', 'IPv4', '-i', + iface, '--ip-protocol', 'udp', '--ip-destination-port', '67:68', + '-j', 'DROP'), + ('ebtables', '-t', 'filter', '-I', 'FORWARD', '-p', 'IPv4', '-i', + iface, '--ip-protocol', 'udp', '--ip-destination-port', '67:68', + '-j', 'DROP'), + ('ebtables', '-t', 'filter', '-D', 'FORWARD', '-p', 'IPv4', '-o', + iface, '--ip-protocol', 'udp', '--ip-destination-port', '67:68', + '-j', 'DROP'), + ('ebtables', '-t', 'filter', '-I', 'FORWARD', '-p', 'IPv4', '-o', + iface, '--ip-protocol', 'udp', '--ip-destination-port', '67:68', + '-j', 'DROP'), ('iptables-save', '-c'), ('iptables-restore', '-c'), ('ip6tables-save', '-c'), ('ip6tables-restore', '-c'), ] self.assertEqual(executes, expected) - expected_inputs = [ - '-A test-FORWARD -m physdev --physdev-in %s ' - '-d 255.255.255.255 -p udp --dport 67 -j DROP' % iface, - '-A test-FORWARD -m physdev --physdev-out %s ' - '-d 255.255.255.255 -p udp --dport 67 -j DROP' % iface, - '-A test-FORWARD -m physdev --physdev-in %s ' - '-d 192.168.1.1 -j DROP' % iface, - '-A test-FORWARD -m physdev --physdev-out %s ' - '-s 192.168.1.1 -j DROP' % iface, - ] - for inp in expected_inputs: - self.assertIn(inp, inputs[0]) - - executes = [] - inputs = [] - - @staticmethod - def fake_remove(bridge, gateway): - return - - self.stubs.Set(linux_net.LinuxBridgeInterfaceDriver, - 'remove_bridge', fake_remove) - - driver.unplug(network) - expected = [ - ('ebtables', '-t', 'filter', '-D', 'INPUT', '-p', 'ARP', '-i', - iface, '--arp-ip-dst', dhcp, '-j', 'DROP'), - ('ebtables', '-t', 'filter', '-D', 'OUTPUT', '-p', 'ARP', '-o', - iface, '--arp-ip-src', dhcp, '-j', 'DROP'), - ('iptables-save', '-c'), - ('iptables-restore', '-c'), - ('ip6tables-save', '-c'), - ('ip6tables-restore', '-c'), - ] - self.assertEqual(executes, expected) - for inp in expected_inputs: - self.assertNotIn(inp, inputs[0]) - - def test_isolated_host_iptables_logdrop(self): - # Ensure that a different drop action for iptables doesn't change - # the drop action for ebtables. 
- self.flags(fake_network=False, - share_dhcp_address=True, - iptables_drop_action='LOGDROP') - - # NOTE(vish): use a fresh copy of the manager for each test - self.stubs.Set(linux_net, 'iptables_manager', - linux_net.IptablesManager()) - self.stubs.Set(linux_net, 'binary_name', 'test') - executes = [] - inputs = [] - - def fake_execute(*args, **kwargs): - executes.append(args) - process_input = kwargs.get('process_input') - if process_input: - inputs.append(process_input) - return "", "" - - self.stubs.Set(utils, 'execute', fake_execute) - - driver = linux_net.LinuxBridgeInterfaceDriver() - - @staticmethod - def fake_ensure(bridge, interface, network, gateway): - return bridge - - self.stubs.Set(linux_net.LinuxBridgeInterfaceDriver, - 'ensure_bridge', fake_ensure) - - iface = 'eth0' - dhcp = '192.168.1.1' - network = {'dhcp_server': dhcp, - 'bridge': 'br100', - 'bridge_interface': iface} - driver.plug(network, 'fakemac') - expected = [ - ('ebtables', '-t', 'filter', '-D', 'INPUT', '-p', 'ARP', '-i', - iface, '--arp-ip-dst', dhcp, '-j', 'DROP'), - ('ebtables', '-t', 'filter', '-I', 'INPUT', '-p', 'ARP', '-i', - iface, '--arp-ip-dst', dhcp, '-j', 'DROP'), - ('ebtables', '-t', 'filter', '-D', 'OUTPUT', '-p', 'ARP', '-o', - iface, '--arp-ip-src', dhcp, '-j', 'DROP'), - ('ebtables', '-t', 'filter', '-I', 'OUTPUT', '-p', 'ARP', '-o', - iface, '--arp-ip-src', dhcp, '-j', 'DROP'), - ('iptables-save', '-c'), - ('iptables-restore', '-c'), - ('ip6tables-save', '-c'), - ('ip6tables-restore', '-c'), - ] - self.assertEqual(executes, expected) - expected_inputs = [ - ('-A test-FORWARD -m physdev --physdev-in %s ' - '-d 255.255.255.255 -p udp --dport 67 -j LOGDROP' % iface), - ('-A test-FORWARD -m physdev --physdev-out %s ' - '-d 255.255.255.255 -p udp --dport 67 -j LOGDROP' % iface), - ('-A test-FORWARD -m physdev --physdev-in %s ' - '-d 192.168.1.1 -j LOGDROP' % iface), - ('-A test-FORWARD -m physdev --physdev-out %s ' - '-s 192.168.1.1 -j LOGDROP' % iface), - ] - for inp in expected_inputs: - self.assertIn(inp, inputs[0]) executes = [] - inputs = [] @staticmethod def fake_remove(bridge, gateway): @@ -795,14 +699,14 @@ iface, '--arp-ip-dst', dhcp, '-j', 'DROP'), ('ebtables', '-t', 'filter', '-D', 'OUTPUT', '-p', 'ARP', '-o', iface, '--arp-ip-src', dhcp, '-j', 'DROP'), - ('iptables-save', '-c'), - ('iptables-restore', '-c'), - ('ip6tables-save', '-c'), - ('ip6tables-restore', '-c'), + ('ebtables', '-t', 'filter', '-D', 'FORWARD', '-p', 'IPv4', '-i', + iface, '--ip-protocol', 'udp', '--ip-destination-port', '67:68', + '-j', 'DROP'), + ('ebtables', '-t', 'filter', '-D', 'FORWARD', '-p', 'IPv4', '-o', + iface, '--ip-protocol', 'udp', '--ip-destination-port', '67:68', + '-j', 'DROP'), ] self.assertEqual(executes, expected) - for inp in expected_inputs: - self.assertNotIn(inp, inputs[0]) def _test_initialize_gateway(self, existing, expected, routes=''): self.flags(fake_network=False) diff -Nru nova-2014.1.3/nova/tests/network/test_neutronv2.py nova-2014.1.5/nova/tests/network/test_neutronv2.py --- nova-2014.1.3/nova/tests/network/test_neutronv2.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/network/test_neutronv2.py 2015-06-18 22:25:39.000000000 +0000 @@ -13,6 +13,8 @@ # License for the specific language governing permissions and limitations # under the License. 
# + +import contextlib import copy import uuid @@ -33,7 +35,6 @@ from nova.network.neutronv2 import constants from nova.objects import instance as instance_obj from nova.openstack.common import jsonutils -from nova.openstack.common import local from nova import test from nova import utils @@ -146,6 +147,47 @@ neutronv2.get_client, my_context) + def test_reuse_admin_token(self): + self.flags(neutron_url='http://anyhost/') + self.flags(neutron_url_timeout=30) + token_store = neutronv2.AdminTokenStore.get() + token_store.admin_auth_token = 'new_token' + my_context = context.RequestContext('userid', 'my_tenantid', + auth_token='token') + with contextlib.nested( + mock.patch.object(client.Client, "list_networks", + side_effect=mock.Mock), + mock.patch.object(client.Client, 'get_auth_info', + return_value={'auth_token': 'new_token1'}), + ): + client1 = neutronv2.get_client(my_context, True) + client1.list_networks(retrieve_all=False) + self.assertEqual('new_token1', token_store.admin_auth_token) + client1 = neutronv2.get_client(my_context, True) + client1.list_networks(retrieve_all=False) + self.assertEqual('new_token1', token_store.admin_auth_token) + + def test_admin_token_updated(self): + self.flags(neutron_url='http://anyhost/') + self.flags(neutron_url_timeout=30) + token_store = neutronv2.AdminTokenStore.get() + token_store.admin_auth_token = 'new_token' + tokens = [{'auth_token': 'new_token1'}, {'auth_token': 'new_token'}] + my_context = context.RequestContext('userid', 'my_tenantid', + auth_token='token') + with contextlib.nested( + mock.patch.object(client.Client, "list_networks", + side_effect=mock.Mock), + mock.patch.object(client.Client, 'get_auth_info', + side_effect=tokens.pop), + ): + client1 = neutronv2.get_client(my_context, True) + client1.list_networks(retrieve_all=False) + self.assertEqual('new_token', token_store.admin_auth_token) + client1 = neutronv2.get_client(my_context, True) + client1.list_networks(retrieve_all=False) + self.assertEqual('new_token1', token_store.admin_auth_token) + class TestNeutronv2Base(test.TestCase): @@ -187,8 +229,11 @@ 'name': 'out-of-this-world', 'router:external': True, 'tenant_id': 'should-be-an-admin'}] + # A network that is both shared and external + self.nets6 = [{'id': 'net_id', 'name': 'net_name', + 'router:external': True, 'shared': True}] self.nets = [self.nets1, self.nets2, self.nets3, - self.nets4, self.nets5] + self.nets4, self.nets5, self.nets6] self.port_address = '10.0.1.2' self.port_data1 = [{'network_id': 'my_netid1', @@ -979,6 +1024,12 @@ api.allocate_for_instance, self.context, self.instance, requested_networks=requested_networks) + def test_allocate_for_instance_with_external_shared_net(self): + """Only one network is available, it's external and shared.""" + ctx = context.RequestContext('userid', 'my_tenantid') + api = self._stub_allocate_for_instance(net_idx=6) + api.allocate_for_instance(ctx, self.instance) + def _deallocate_for_instance(self, number, requested_networks=None): api = neutronapi.API() port_data = number == 1 and self.port_data1 or self.port_data2 @@ -2302,52 +2353,12 @@ class TestNeutronClientForAdminScenarios(test.TestCase): - def test_get_cached_neutron_client_for_admin(self): - self.flags(neutron_url='http://anyhost/') - self.flags(neutron_url_timeout=30) - my_context = context.RequestContext('userid', - 'my_tenantid', - auth_token='token') - - # Make multiple calls and ensure we get the same - # client back again and again - client = neutronv2.get_client(my_context, True) - client2 = 
neutronv2.get_client(my_context, True) - client3 = neutronv2.get_client(my_context, True) - self.assertEqual(client, client2) - self.assertEqual(client, client3) - - # clear the cache - local.strong_store.neutron_client = None - - # A new client should be created now - client4 = neutronv2.get_client(my_context, True) - self.assertNotEqual(client, client4) - - def test_get_neutron_client_for_non_admin(self): - self.flags(neutron_url='http://anyhost/') - self.flags(neutron_url_timeout=30) - my_context = context.RequestContext('userid', - 'my_tenantid', - auth_token='token') - - # Multiple calls should return different clients - client = neutronv2.get_client(my_context) - client2 = neutronv2.get_client(my_context) - self.assertNotEqual(client, client2) - - def test_get_neutron_client_for_non_admin_and_no_token(self): - self.flags(neutron_url='http://anyhost/') - self.flags(neutron_url_timeout=30) - my_context = context.RequestContext('userid', - 'my_tenantid') - - self.assertRaises(exceptions.Unauthorized, - neutronv2.get_client, - my_context) def _test_get_client_for_admin(self, use_id=False, admin_context=False): + def client_mock(*args, **kwargs): + client.Client.httpclient = mock.MagicMock() + self.flags(neutron_auth_strategy=None) self.flags(neutron_url='http://anyhost/') self.flags(neutron_url_timeout=30) @@ -2368,18 +2379,18 @@ 'auth_strategy': None, 'timeout': CONF.neutron_url_timeout, 'insecure': False, - 'ca_cert': None} + 'ca_cert': None, + 'token': None} if use_id: kwargs['tenant_id'] = CONF.neutron_admin_tenant_id else: kwargs['tenant_name'] = CONF.neutron_admin_tenant_name - client.Client.__init__(**kwargs).AndReturn(None) + client.Client.__init__(**kwargs).WithSideEffects(client_mock) self.mox.ReplayAll() - # clear the cache - if hasattr(local.strong_store, 'neutron_client'): - delattr(local.strong_store, 'neutron_client') - + #clean global + token_store = neutronv2.AdminTokenStore.get() + token_store.admin_auth_token = None if admin_context: # Note that the context does not contain a token but is # an admin context which will force an elevation to admin diff -Nru nova-2014.1.3/nova/tests/scheduler/filters/test_trusted_filters.py nova-2014.1.5/nova/tests/scheduler/filters/test_trusted_filters.py --- nova-2014.1.3/nova/tests/scheduler/filters/test_trusted_filters.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/scheduler/filters/test_trusted_filters.py 2015-06-18 22:25:39.000000000 +0000 @@ -0,0 +1,245 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import mock +from oslo.config import cfg +import requests + +from nova.openstack.common import jsonutils +from nova.openstack.common import timeutils +from nova.scheduler.filters import trusted_filter +from nova import test +from nova.tests.scheduler import fakes + +CONF = cfg.CONF + + +class AttestationServiceTestCase(test.NoDBTestCase): + + def setUp(self): + super(AttestationServiceTestCase, self).setUp() + self.api_url = '/OpenAttestationWebServices/V1.0' + self.host = 'localhost' + self.port = '8443' + self.statuses = (requests.codes.OK, requests.codes.CREATED, + requests.codes.ACCEPTED, requests.codes.NO_CONTENT) + + @mock.patch.object(requests, 'request') + def test_do_request_possible_statuses(self, request_mock): + """This test case checks if the '_do_request()' method returns the + appropriate status_code (200) and result (text converted to json), + while the status_code returned by the request is one of the four + eligible statuses. + """ + + for status_code in self.statuses: + request_mock.return_value.status_code = status_code + request_mock.return_value.text = '{"test": "test"}' + + attestation_service = trusted_filter.AttestationService() + status, result = attestation_service._do_request( + 'POST', 'PollHosts', {}, {}) + + self.assertEqual(requests.codes.OK, status) + self.assertEqual(jsonutils.loads(request_mock.return_value.text), + result) + + @mock.patch.object(requests, 'request') + def test_do_request_other_status(self, request_mock): + """This test case checks if the '_do_request()' method returns the + appropriate status (the one returned by the request method) and a + result of None, while the status_code returned by the request is + not one of the four eligible statuses. + """ + + request_mock.return_value.status_code = requests.codes.NOT_FOUND + request_mock.return_value.text = '{"test": "test"}' + + attestation_service = trusted_filter.AttestationService() + status, result = attestation_service._do_request( + 'POST', 'PollHosts', {}, {}) + + self.assertEqual(requests.codes.NOT_FOUND, status) + self.assertIsNone(result) + + @mock.patch.object(requests, 'request') + def test_do_request_unconvertible_text(self, request_mock): + for status_code in self.statuses: + # these unconvertible texts lead to TypeError and ValueError + # in jsonutils.loads(res.text) in the _do_request() method + for unconvertible_text in ({"test": "test"}, '{}{}'): + request_mock.return_value.status_code = status_code + request_mock.return_value.text = unconvertible_text + + attestation_service = trusted_filter.AttestationService() + status, result = attestation_service._do_request( + 'POST', 'PollHosts', {}, {}) + + self.assertEqual(requests.codes.OK, status) + self.assertEqual(unconvertible_text, result) + + +@mock.patch.object(trusted_filter.AttestationService, '_request') +class TestTrustedFilter(test.NoDBTestCase): + + def setUp(self): + super(TestTrustedFilter, self).setUp() + # TrustedFilter's constructor creates the attestation cache, which + # makes a call to get the list of all the compute nodes. 
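+ # Mocking nova.db.compute_node_get_all below keeps that call away + # from a real database while still letting the cache be primed.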
+ fake_compute_nodes = [ + {'hypervisor_hostname': 'node1', + 'service': {'host': 'host1'}, + } + ] + with mock.patch('nova.db.compute_node_get_all') as mocked: + mocked.return_value = fake_compute_nodes + self.filt_cls = trusted_filter.TrustedFilter() + + def test_trusted_filter_default_passes(self, req_mock): + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024}} + host = fakes.FakeHostState('host1', 'node1', {}) + self.assertTrue(self.filt_cls.host_passes(host, filter_properties)) + self.assertFalse(req_mock.called) + + def test_trusted_filter_trusted_and_trusted_passes(self, req_mock): + oat_data = {"hosts": [{"host_name": "node1", + "trust_lvl": "trusted", + "vtime": timeutils.isotime()}]} + req_mock.return_value = requests.codes.OK, oat_data + + extra_specs = {'trust:trusted_host': 'trusted'} + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024, + 'extra_specs': extra_specs}} + host = fakes.FakeHostState('host1', 'node1', {}) + self.assertTrue(self.filt_cls.host_passes(host, filter_properties)) + req_mock.assert_called_once_with("POST", "PollHosts", ["node1"]) + + def test_trusted_filter_trusted_and_untrusted_fails(self, req_mock): + oat_data = {"hosts": [{"host_name": "node1", + "trust_lvl": "untrusted", + "vtime": timeutils.isotime()}]} + req_mock.return_value = requests.codes.OK, oat_data + extra_specs = {'trust:trusted_host': 'trusted'} + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024, + 'extra_specs': extra_specs}} + host = fakes.FakeHostState('host1', 'node1', {}) + self.assertFalse(self.filt_cls.host_passes(host, filter_properties)) + + def test_trusted_filter_untrusted_and_trusted_fails(self, req_mock): + oat_data = {"hosts": [{"host_name": "node", + "trust_lvl": "trusted", + "vtime": timeutils.isotime()}]} + req_mock.return_value = requests.codes.OK, oat_data + extra_specs = {'trust:trusted_host': 'untrusted'} + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024, + 'extra_specs': extra_specs}} + host = fakes.FakeHostState('host1', 'node1', {}) + self.assertFalse(self.filt_cls.host_passes(host, filter_properties)) + + def test_trusted_filter_untrusted_and_untrusted_passes(self, req_mock): + oat_data = {"hosts": [{"host_name": "node1", + "trust_lvl": "untrusted", + "vtime": timeutils.isotime()}]} + req_mock.return_value = requests.codes.OK, oat_data + extra_specs = {'trust:trusted_host': 'untrusted'} + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024, + 'extra_specs': extra_specs}} + host = fakes.FakeHostState('host1', 'node1', {}) + self.assertTrue(self.filt_cls.host_passes(host, filter_properties)) + + def test_trusted_filter_update_cache(self, req_mock): + oat_data = {"hosts": [{"host_name": "node1", + "trust_lvl": "untrusted", + "vtime": timeutils.isotime()}]} + + req_mock.return_value = requests.codes.OK, oat_data + extra_specs = {'trust:trusted_host': 'untrusted'} + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024, + 'extra_specs': extra_specs}} + host = fakes.FakeHostState('host1', 'node1', {}) + + self.filt_cls.host_passes(host, filter_properties) # Fill the caches + + req_mock.reset_mock() + self.filt_cls.host_passes(host, filter_properties) + self.assertFalse(req_mock.called) + + req_mock.reset_mock() + + timeutils.set_time_override(timeutils.utcnow()) + timeutils.advance_time_seconds( + CONF.trusted_computing.attestation_auth_timeout 
+ 80) + self.filt_cls.host_passes(host, filter_properties) + self.assertTrue(req_mock.called) + + timeutils.clear_time_override() + + def test_trusted_filter_update_cache_timezone(self, req_mock): + oat_data = {"hosts": [{"host_name": "node1", + "trust_lvl": "untrusted", + "vtime": "2012-09-09T05:10:40-04:00"}]} + req_mock.return_value = requests.codes.OK, oat_data + extra_specs = {'trust:trusted_host': 'untrusted'} + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024, + 'extra_specs': extra_specs}} + host = fakes.FakeHostState('host1', 'node1', {}) + + timeutils.set_time_override( + timeutils.normalize_time( + timeutils.parse_isotime("2012-09-09T09:10:40Z"))) + + self.filt_cls.host_passes(host, filter_properties) # Fill the caches + + req_mock.reset_mock() + self.filt_cls.host_passes(host, filter_properties) + self.assertFalse(req_mock.called) + + req_mock.reset_mock() + timeutils.advance_time_seconds( + CONF.trusted_computing.attestation_auth_timeout - 10) + self.filt_cls.host_passes(host, filter_properties) + self.assertFalse(req_mock.called) + + timeutils.clear_time_override() + + def test_trusted_filter_combine_hosts(self, req_mock): + fake_compute_nodes = [ + {'hypervisor_hostname': 'node1', + 'service': {'host': 'host1'}, + }, + {'hypervisor_hostname': 'node2', + 'service': {'host': 'host2'}, + }, + ] + with mock.patch('nova.db.compute_node_get_all') as mocked: + mocked.return_value = fake_compute_nodes + self.filt_cls = trusted_filter.TrustedFilter() + oat_data = {"hosts": [{"host_name": "node1", + "trust_lvl": "untrusted", + "vtime": "2012-09-09T05:10:40-04:00"}]} + req_mock.return_value = requests.codes.OK, oat_data + extra_specs = {'trust:trusted_host': 'trusted'} + filter_properties = {'context': mock.sentinel.ctx, + 'instance_type': {'memory_mb': 1024, + 'extra_specs': extra_specs}} + host = fakes.FakeHostState('host1', 'node1', {}) + + self.filt_cls.host_passes(host, filter_properties) # Fill the caches + req_mock.assert_called_once_with("POST", "PollHosts", + ["node1", "node2"]) diff -Nru nova-2014.1.3/nova/tests/scheduler/test_host_filters.py nova-2014.1.5/nova/tests/scheduler/test_host_filters.py --- nova-2014.1.3/nova/tests/scheduler/test_host_filters.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/scheduler/test_host_filters.py 2015-06-18 22:25:39.000000000 +0000 @@ -15,19 +15,14 @@ Tests For Scheduler Host Filters. """ -import httplib - from oslo.config import cfg -import stubout from nova import context from nova import db from nova.openstack.common import jsonutils -from nova.openstack.common import timeutils from nova.pci import pci_stats from nova.scheduler import filters from nova.scheduler.filters import extra_specs_ops -from nova.scheduler.filters import trusted_filter from nova import servicegroup from nova import test from nova.tests.scheduler import fakes @@ -239,18 +234,8 @@ # the testing of the DB API code from the host-filter code. 
USES_DB = True - def fake_oat_request(self, *args, **kwargs): - """Stubs out the response from OAT service.""" - self.oat_attested = True - return httplib.OK, self.oat_data - def setUp(self): super(HostFiltersTestCase, self).setUp() - self.oat_data = '' - self.oat_attested = False - self.stubs = stubout.StubOutForTesting() - self.stubs.Set(trusted_filter.AttestationService, '_request', - self.fake_oat_request) self.context = context.RequestContext('fake', 'fake') self.json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], @@ -1282,124 +1267,6 @@ } self.assertTrue(filt_cls.host_passes(host, filter_properties)) - def test_trusted_filter_default_passes(self): - self._stub_service_is_up(True) - filt_cls = self.class_map['TrustedFilter']() - filter_properties = {'context': self.context.elevated(), - 'instance_type': {'memory_mb': 1024}} - host = fakes.FakeHostState('host1', 'node1', {}) - self.assertTrue(filt_cls.host_passes(host, filter_properties)) - - def test_trusted_filter_trusted_and_trusted_passes(self): - self.oat_data = {"hosts": [{"host_name": "host1", - "trust_lvl": "trusted", - "vtime": timeutils.isotime()}]} - self._stub_service_is_up(True) - filt_cls = self.class_map['TrustedFilter']() - extra_specs = {'trust:trusted_host': 'trusted'} - filter_properties = {'context': self.context.elevated(), - 'instance_type': {'memory_mb': 1024, - 'extra_specs': extra_specs}} - host = fakes.FakeHostState('host1', 'node1', {}) - self.assertTrue(filt_cls.host_passes(host, filter_properties)) - - def test_trusted_filter_trusted_and_untrusted_fails(self): - self.oat_data = {"hosts": [{"host_name": "host1", - "trust_lvl": "untrusted", - "vtime": timeutils.isotime()}]} - self._stub_service_is_up(True) - filt_cls = self.class_map['TrustedFilter']() - extra_specs = {'trust:trusted_host': 'trusted'} - filter_properties = {'context': self.context.elevated(), - 'instance_type': {'memory_mb': 1024, - 'extra_specs': extra_specs}} - host = fakes.FakeHostState('host1', 'node1', {}) - self.assertFalse(filt_cls.host_passes(host, filter_properties)) - - def test_trusted_filter_untrusted_and_trusted_fails(self): - self.oat_data = {"hosts": [{"host_name": "host1", - "trust_lvl": "trusted", - "vtime": timeutils.isotime()}]} - self._stub_service_is_up(True) - filt_cls = self.class_map['TrustedFilter']() - extra_specs = {'trust:trusted_host': 'untrusted'} - filter_properties = {'context': self.context.elevated(), - 'instance_type': {'memory_mb': 1024, - 'extra_specs': extra_specs}} - host = fakes.FakeHostState('host1', 'node1', {}) - self.assertFalse(filt_cls.host_passes(host, filter_properties)) - - def test_trusted_filter_untrusted_and_untrusted_passes(self): - self.oat_data = {"hosts": [{"host_name": "host1", - "trust_lvl": "untrusted", - "vtime": timeutils.isotime()}]} - self._stub_service_is_up(True) - filt_cls = self.class_map['TrustedFilter']() - extra_specs = {'trust:trusted_host': 'untrusted'} - filter_properties = {'context': self.context.elevated(), - 'instance_type': {'memory_mb': 1024, - 'extra_specs': extra_specs}} - host = fakes.FakeHostState('host1', 'node1', {}) - self.assertTrue(filt_cls.host_passes(host, filter_properties)) - - def test_trusted_filter_update_cache(self): - self.oat_data = {"hosts": [{"host_name": - "host1", "trust_lvl": "untrusted", - "vtime": timeutils.isotime()}]} - - filt_cls = self.class_map['TrustedFilter']() - extra_specs = {'trust:trusted_host': 'untrusted'} - filter_properties = {'context': self.context.elevated(), - 'instance_type': {'memory_mb': 1024, - 
'extra_specs': extra_specs}} - host = fakes.FakeHostState('host1', 'node1', {}) - - filt_cls.host_passes(host, filter_properties) # Fill the caches - - self.oat_attested = False - filt_cls.host_passes(host, filter_properties) - self.assertFalse(self.oat_attested) - - self.oat_attested = False - - timeutils.set_time_override(timeutils.utcnow()) - timeutils.advance_time_seconds( - CONF.trusted_computing.attestation_auth_timeout + 80) - filt_cls.host_passes(host, filter_properties) - self.assertTrue(self.oat_attested) - - timeutils.clear_time_override() - - def test_trusted_filter_update_cache_timezone(self): - self.oat_data = {"hosts": [{"host_name": "host1", - "trust_lvl": "untrusted", - "vtime": "2012-09-09T05:10:40-04:00"}]} - - filt_cls = self.class_map['TrustedFilter']() - extra_specs = {'trust:trusted_host': 'untrusted'} - filter_properties = {'context': self.context.elevated(), - 'instance_type': {'memory_mb': 1024, - 'extra_specs': extra_specs}} - host = fakes.FakeHostState('host1', 'node1', {}) - - timeutils.set_time_override( - timeutils.normalize_time( - timeutils.parse_isotime("2012-09-09T09:10:40Z"))) - - filt_cls.host_passes(host, filter_properties) # Fill the caches - - self.oat_attested = False - filt_cls.host_passes(host, filter_properties) - self.assertFalse(self.oat_attested) - - self.oat_attested = False - timeutils.advance_time_seconds( - CONF.trusted_computing.attestation_auth_timeout - 10) - filt_cls.host_passes(host, filter_properties) - self.assertFalse(self.oat_attested) - - timeutils.clear_time_override() - def test_core_filter_passes(self): filt_cls = self.class_map['CoreFilter']() filter_properties = {'instance_type': {'vcpus': 1}} diff -Nru nova-2014.1.3/nova/tests/test_versions.py nova-2014.1.5/nova/tests/test_versions.py --- nova-2014.1.3/nova/tests/test_versions.py 2014-10-02 23:31:48.000000000 +0000 +++ nova-2014.1.5/nova/tests/test_versions.py 2015-06-18 22:25:39.000000000 +0000 @@ -27,7 +27,8 @@ def test_version_string_with_package_is_good(self): """Ensure uninstalled code get version string.""" - self.stubs.Set(version.version_info, 'version', '5.5.5.5') + self.stubs.Set(version.version_info, 'version_string', + lambda: '5.5.5.5') self.stubs.Set(version, 'NOVA_PACKAGE', 'g9ec3421') self.assertEqual("5.5.5.5-g9ec3421", version.version_string_with_package()) diff -Nru nova-2014.1.3/nova/tests/test_wsgi.py nova-2014.1.5/nova/tests/test_wsgi.py --- nova-2014.1.3/nova/tests/test_wsgi.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/test_wsgi.py 2015-06-18 22:25:39.000000000 +0000 @@ -22,6 +22,7 @@ import eventlet import eventlet.wsgi +import mock import requests import nova.exception @@ -126,6 +127,17 @@ server.stop() server.wait() + def test_server_pool_waitall(self): + # test pools waitall method gets called while stopping server + server = nova.wsgi.Server("test_server", None, + host="127.0.0.1", port=4444) + server.start() + with mock.patch.object(server._pool, + 'waitall') as mock_waitall: + server.stop() + server.wait() + mock_waitall.assert_called_once_with() + def test_uri_length_limit(self): server = nova.wsgi.Server("test_uri_length_limit", None, host="127.0.0.1", max_url_len=16384) @@ -145,6 +157,21 @@ server.stop() server.wait() + def test_wsgi_keep_alive(self): + self.flags(wsgi_keep_alive=False) + + # mocking eventlet spawn method to check it is called with + # configured 'wsgi_keep_alive' value. 
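+ # Server.start() passes the flag through to eventlet.spawn as the + # 'keepalive' kwarg, so inspecting mock_spawn.call_args is enough + # to verify that the configured value took effect.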
+ with mock.patch.object(eventlet, + 'spawn') as mock_spawn: + server = nova.wsgi.Server("test_app", None, + host="127.0.0.1", port=0) + server.start() + _, kwargs = mock_spawn.call_args + self.assertEqual(CONF.wsgi_keep_alive, + kwargs['keepalive']) + server.stop() + class TestWSGIServerWithSSL(test.NoDBTestCase): """WSGI server with SSL tests.""" diff -Nru nova-2014.1.3/nova/tests/virt/hyperv/test_hostutils.py nova-2014.1.5/nova/tests/virt/hyperv/test_hostutils.py --- nova-2014.1.3/nova/tests/virt/hyperv/test_hostutils.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/virt/hyperv/test_hostutils.py 2015-06-18 22:25:39.000000000 +0000 @@ -0,0 +1,34 @@ +# Copyright 2014 Hewlett-Packard Development Company, L.P. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova import test +from nova.virt.hyperv import hostutils + + +class HostUtilsTestCase(test.NoDBTestCase): + """Unit tests for the Hyper-V hostutils class.""" + + def setUp(self): + self._hostutils = hostutils.HostUtils() + self._hostutils._conn_cimv2 = mock.MagicMock() + super(HostUtilsTestCase, self).setUp() + + @mock.patch('nova.virt.hyperv.hostutils.ctypes') + def test_get_host_tick_count64(self, mock_ctypes): + tick_count64 = "100" + mock_ctypes.windll.kernel32.GetTickCount64.return_value = tick_count64 + response = self._hostutils.get_host_tick_count64() + self.assertEqual(tick_count64, response) diff -Nru nova-2014.1.3/nova/tests/virt/hyperv/test_hypervapi.py nova-2014.1.5/nova/tests/virt/hyperv/test_hypervapi.py --- nova-2014.1.3/nova/tests/virt/hyperv/test_hypervapi.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/virt/hyperv/test_hypervapi.py 2015-06-18 22:25:39.000000000 +0000 @@ -17,6 +17,7 @@ """ import contextlib +import datetime import io import os import platform @@ -49,6 +50,7 @@ from nova.virt.hyperv import basevolumeutils from nova.virt.hyperv import constants from nova.virt.hyperv import driver as driver_hyperv +from nova.virt.hyperv import hostops from nova.virt.hyperv import hostutils from nova.virt.hyperv import livemigrationutils from nova.virt.hyperv import networkutils @@ -336,6 +338,13 @@ self.assertEqual(instances, fake_instances) + def test_get_host_uptime(self): + fake_host = "fake_host" + with mock.patch.object(self._conn._hostops, + "get_host_uptime") as mock_uptime: + self._conn._hostops.get_host_uptime(fake_host) + mock_uptime.assert_called_once_with(fake_host) + def test_get_info(self): self._instance_data = self._get_instance_data() @@ -581,15 +590,31 @@ constants.HYPERV_VM_STATE_DISABLED, constants.HYPERV_VM_STATE_DISABLED) - def test_power_on(self): + def _test_power_on(self, block_device_info): self._instance_data = self._get_instance_data() network_info = fake_network.fake_get_instance_nw_info(self.stubs) + vmutils.VMUtils.set_vm_state(mox.Func(self._check_instance_name), constants.HYPERV_VM_STATE_ENABLED) + if block_device_info: + self._mox.StubOutWithMock(volumeops.VolumeOps, + 'fix_instance_volume_disk_paths') + 
volumeops.VolumeOps.fix_instance_volume_disk_paths(
+                mox.Func(self._check_instance_name), block_device_info)
+
         self._mox.ReplayAll()
-        self._conn.power_on(self._context, self._instance_data, network_info)
+        self._conn.power_on(self._context, self._instance_data, network_info,
+                            block_device_info=block_device_info)
         self._mox.VerifyAll()
 
+    def test_power_on_having_block_devices(self):
+        block_device_info = db_fakes.get_fake_block_device_info(
+            self._volume_target_portal, self._volume_id)
+        self._test_power_on(block_device_info=block_device_info)
+
+    def test_power_on_without_block_devices(self):
+        self._test_power_on(block_device_info=None)
+
     def test_power_on_already_running(self):
         self._instance_data = self._get_instance_data()
         network_info = fake_network.fake_get_instance_nw_info(self.stubs)
@@ -1814,3 +1839,56 @@
         self.assertRaises(vmutils.HyperVException,
                           self.volumeops._get_free_controller_slot,
                           fake_scsi_controller_path)
+
+    def test_fix_instance_volume_disk_paths(self):
+        block_device_info = db_fakes.get_fake_block_device_info(
+            self._volume_target_portal, self._volume_id)
+
+        with contextlib.nested(
+            mock.patch.object(self.volumeops,
+                              '_get_mounted_disk_from_lun'),
+            mock.patch.object(self.volumeops._vmutils,
+                              'get_vm_scsi_controller'),
+            mock.patch.object(self.volumeops._vmutils,
+                              'set_disk_host_resource'),
+            mock.patch.object(self.volumeops,
+                              'ebs_root_in_block_devices')
+        ) as (mock_get_mounted_disk_from_lun,
+              mock_get_vm_scsi_controller,
+              mock_set_disk_host_resource,
+              mock_ebs_in_block_devices):
+
+            mock_ebs_in_block_devices.return_value = False
+            mock_get_mounted_disk_from_lun.return_value = "fake_mounted_path"
+            mock_get_vm_scsi_controller.return_value = "fake_controller_path"
+
+            self.volumeops.fix_instance_volume_disk_paths(
+                "test_vm_name",
+                block_device_info)
+
+            mock_get_mounted_disk_from_lun.assert_called_with(
+                'iqn.2010-10.org.openstack:volume-' + self._volume_id, 1, True)
+            mock_get_vm_scsi_controller.assert_called_with("test_vm_name")
+            mock_set_disk_host_resource.assert_called_with(
+                "test_vm_name", "fake_controller_path", 0, "fake_mounted_path")
+
+
+class HostOpsTestCase(HyperVAPIBaseTestCase):
+    """Unit tests for the Hyper-V hostops class."""
+
+    def setUp(self):
+        self._hostops = hostops.HostOps()
+        self._hostops._hostutils = mock.MagicMock()
+        self._hostops.time = mock.MagicMock()
+        super(HostOpsTestCase, self).setUp()
+
+    @mock.patch('nova.virt.hyperv.hostops.time')
+    def test_host_uptime(self, mock_time):
+        self._hostops._hostutils.get_host_tick_count64.return_value = 100
+        mock_time.strftime.return_value = "01:01:01"
+
+        result_uptime = "01:01:01 up %s, 0 users, load average: 0, 0, 0" % (
+            str(datetime.timedelta(
+                milliseconds=long(100))))
+        actual_uptime = self._hostops.get_host_uptime()
+        self.assertEqual(result_uptime, actual_uptime)
diff -Nru nova-2014.1.3/nova/tests/virt/hyperv/test_vmutils.py nova-2014.1.5/nova/tests/virt/hyperv/test_vmutils.py
--- nova-2014.1.3/nova/tests/virt/hyperv/test_vmutils.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/tests/virt/hyperv/test_vmutils.py	2015-06-18 22:25:39.000000000 +0000
@@ -172,7 +172,7 @@
         mock_vm.associators.assert_called_with(
             wmi_result_class=self._vmutils._VIRTUAL_SYSTEM_SETTING_DATA_CLASS)
         mock_vmsettings[0].associators.assert_called_with(
-            wmi_result_class=self._vmutils._STORAGE_ALLOC_SETTING_DATA_CLASS)
+            wmi_result_class=self._vmutils._RESOURCE_ALLOC_SETTING_DATA_CLASS)
         self.assertEqual([mock_rasds[0]], disks)
         self.assertEqual([mock_rasds[1]], volumes)
 
@@ -181,6 +181,9 @@
         mock_rasd1.ResourceSubType = self._vmutils._IDE_DISK_RES_SUB_TYPE
         mock_rasd1.HostResource = [self._FAKE_VHD_PATH]
         mock_rasd1.Connection = [self._FAKE_VHD_PATH]
+        mock_rasd1.Parent = self._FAKE_CTRL_PATH
+        mock_rasd1.Address = self._FAKE_ADDRESS
+        mock_rasd1.HostResource = [self._FAKE_VHD_PATH]
 
         mock_rasd2 = mock.MagicMock()
         mock_rasd2.ResourceSubType = self._vmutils._PHYS_DISK_RES_SUB_TYPE
@@ -567,3 +570,26 @@
     def _assert_remove_resources(self, mock_svc):
         getattr(mock_svc, self._REMOVE_RESOURCE).assert_called_with(
             [self._FAKE_RES_PATH], self._FAKE_VM_PATH)
+
+    def test_set_disk_host_resource(self):
+        self._lookup_vm()
+        mock_rasds = self._create_mock_disks()
+
+        self._vmutils._get_vm_disks = mock.MagicMock(
+            return_value=([mock_rasds[0]], [mock_rasds[1]]))
+        self._vmutils._modify_virt_resource = mock.MagicMock()
+        self._vmutils._get_disk_resource_address = mock.MagicMock(
+            return_value=self._FAKE_ADDRESS)
+
+        self._vmutils.set_disk_host_resource(
+            self._FAKE_VM_NAME,
+            self._FAKE_CTRL_PATH,
+            self._FAKE_ADDRESS,
+            mock.sentinel.fake_new_mounted_disk_path)
+        self._vmutils._get_disk_resource_address.assert_called_with(
+            mock_rasds[0])
+        self._vmutils._modify_virt_resource.assert_called_with(
+            mock_rasds[0], self._FAKE_VM_PATH)
+        self.assertEqual(
+            mock.sentinel.fake_new_mounted_disk_path,
+            mock_rasds[0].HostResource[0])
diff -Nru nova-2014.1.3/nova/tests/virt/libvirt/fakelibvirt.py nova-2014.1.5/nova/tests/virt/libvirt/fakelibvirt.py
--- nova-2014.1.3/nova/tests/virt/libvirt/fakelibvirt.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/tests/virt/libvirt/fakelibvirt.py	2015-06-18 22:25:39.000000000 +0000
@@ -172,18 +172,76 @@
 
 class libvirtError(Exception):
-    def __init__(self, msg,
-                 error_code=VIR_ERR_INTERNAL_ERROR,
-                 error_domain=VIR_FROM_QEMU):
-        self.error_code = error_code
-        self.error_domain = error_domain
-        Exception(self, msg)
+    """This class was copied and slightly modified from
+    `libvirt-python:libvirt-override.py`.
+
+    Since a test environment will use the real `libvirt-python` version of
+    `libvirtError` if it's installed and not this fake, we need to maintain
+    strict compatibility with the original class, including `__init__` args
+    and instance-attributes.
+
+    To create a libvirtError instance you should:
+
+        # Create an unsupported error exception
+        exc = libvirtError('my message')
+        exc.err = (libvirt.VIR_ERR_NO_SUPPORT,)
+
+    self.err is a tuple of form:
+        (error_code, error_domain, error_message, error_level, str1, str2,
+         str3, int1, int2)
+
+    Alternatively, you can use the `make_libvirtError` convenience function to
+    allow you to specify these attributes in one shot.
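+
+    For example, an illustrative one-call equivalent of the snippet above,
+    using the module-level helper defined further down in this file:
+
+        exc = make_libvirtError(libvirtError, 'my message',
+                                error_code=VIR_ERR_NO_SUPPORT)
+        exc.get_error_code()  # -> VIR_ERR_NO_SUPPORT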
+ """ + def __init__(self, defmsg, conn=None, dom=None, net=None, pool=None, + vol=None): + Exception.__init__(self, defmsg) + self.err = None def get_error_code(self): - return self.error_code + if self.err is None: + return None + return self.err[0] def get_error_domain(self): - return self.error_domain + if self.err is None: + return None + return self.err[1] + + def get_error_message(self): + if self.err is None: + return None + return self.err[2] + + def get_error_level(self): + if self.err is None: + return None + return self.err[3] + + def get_str1(self): + if self.err is None: + return None + return self.err[4] + + def get_str2(self): + if self.err is None: + return None + return self.err[5] + + def get_str3(self): + if self.err is None: + return None + return self.err[6] + + def get_int1(self): + if self.err is None: + return None + return self.err[7] + + def get_int2(self): + if self.err is None: + return None + return self.err[8] class NWFilter(object): @@ -219,8 +277,10 @@ try: tree = etree.fromstring(xml) except etree.ParseError: - raise libvirtError("Invalid XML.", - VIR_ERR_XML_DETAIL, VIR_FROM_DOMAIN) + raise make_libvirtError( + libvirtError, "Invalid XML.", + error_code=VIR_ERR_XML_DETAIL, + error_domain=VIR_FROM_DOMAIN) definition = {} @@ -369,7 +429,11 @@ 123456789L] def migrateToURI(self, desturi, flags, dname, bandwidth): - raise libvirtError("Migration always fails for fake libvirt!") + raise make_libvirtError( + libvirtError, + "Migration always fails for fake libvirt!", + error_code=VIR_ERR_INTERNAL_ERROR, + error_domain=VIR_FROM_QEMU) def attachDevice(self, xml): disk_info = _parse_disk_info(etree.fromstring(xml)) @@ -380,7 +444,11 @@ def attachDeviceFlags(self, xml, flags): if (flags & VIR_DOMAIN_AFFECT_LIVE and self._state != VIR_DOMAIN_RUNNING): - raise libvirtError("AFFECT_LIVE only allowed for running domains!") + raise make_libvirtError( + libvirtError, + "AFFECT_LIVE only allowed for running domains!", + error_code=VIR_ERR_INTERNAL_ERROR, + error_domain=VIR_FROM_QEMU) self.attachDevice(xml) def detachDevice(self, xml): @@ -533,9 +601,11 @@ 'test:///default'] if uri not in uri_whitelist: - raise libvirtError("libvirt error: no connection driver " - "available for No connection for URI %s" % uri, - 5, 0) + raise make_libvirtError( + libvirtError, + "libvirt error: no connection driver " + "available for No connection for URI %s" % uri, + error_code=5, error_domain=0) self.readonly = readonly self._uri = uri @@ -594,16 +664,20 @@ def lookupByID(self, id): if id in self._running_vms: return self._running_vms[id] - raise libvirtError('Domain not found: no domain with matching ' - 'id %d' % id, - VIR_ERR_NO_DOMAIN, VIR_FROM_QEMU) + raise make_libvirtError( + libvirtError, + 'Domain not found: no domain with matching id %d' % id, + error_code=VIR_ERR_NO_DOMAIN, + error_domain=VIR_FROM_QEMU) def lookupByName(self, name): if name in self._vms: return self._vms[name] - raise libvirtError('Domain not found: no domain with matching ' - 'name "%s"' % name, - VIR_ERR_NO_DOMAIN, VIR_FROM_QEMU) + raise make_libvirtError( + libvirtError, + 'Domain not found: no domain with matching name "%s"' % name, + error_code=VIR_ERR_NO_DOMAIN, + error_domain=VIR_FROM_QEMU) def _emit_lifecycle(self, dom, event, detail): if VIR_DOMAIN_EVENT_ID_LIFECYCLE not in self._event_callbacks: @@ -904,14 +978,21 @@ 'user': 26728850000000L, 'iowait': 6121490000000L} else: - raise libvirtError("invalid argument: Invalid cpu number") + raise make_libvirtError( + libvirtError, + "invalid argument: 
Invalid cpu number", + error_code=VIR_ERR_INTERNAL_ERROR, + error_domain=VIR_FROM_QEMU) def nwfilterLookupByName(self, name): try: return self._nwfilters[name] except KeyError: - raise libvirtError("no nwfilter with matching name %s" % name, - VIR_ERR_NO_NWFILTER, VIR_FROM_NWFILTER) + raise make_libvirtError( + libvirtError, + "no nwfilter with matching name %s" % name, + error_code=VIR_ERR_NO_NWFILTER, + error_domain=VIR_FROM_NWFILTER) def nwfilterDefineXML(self, xml): nwfilter = NWFilter(self, xml) @@ -964,6 +1045,24 @@ pass +def make_libvirtError(error_class, msg, error_code=None, + error_domain=None, error_message=None, + error_level=None, str1=None, str2=None, str3=None, + int1=None, int2=None): + """Convenience function for creating `libvirtError` exceptions which + allow you to specify arguments in constructor without having to manipulate + the `err` tuple directly. + + We need to pass in `error_class` to this function because it may be + `libvirt.libvirtError` or `fakelibvirt.libvirtError` depending on whether + `libvirt-python` is installed. + """ + exc = error_class(msg) + exc.err = (error_code, error_domain, error_message, error_level, + str1, str2, str3, int1, int2) + return exc + + virDomain = Domain diff -Nru nova-2014.1.3/nova/tests/virt/libvirt/test_libvirt.py nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py --- nova-2014.1.3/nova/tests/virt/libvirt/test_libvirt.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt.py 2015-06-18 22:25:39.000000000 +0000 @@ -24,6 +24,7 @@ import re import shutil import tempfile +import uuid from eventlet import greenthread from lxml import etree @@ -83,7 +84,7 @@ try: import libvirt except ImportError: - import nova.tests.virt.libvirt.fakelibvirt as libvirt + libvirt = fakelibvirt libvirt_driver.libvirt = libvirt @@ -887,6 +888,42 @@ caps = conn.get_host_capabilities() self.assertIn('aes', [x.name for x in caps.host.cpu.features]) + def test_baseline_cpu_not_supported(self): + conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) + + # `mock` has trouble stubbing attributes that don't exist yet, so + # fallback to plain-Python attribute setting/deleting + cap_str = 'VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES' + if not hasattr(libvirt_driver.libvirt, cap_str): + setattr(libvirt_driver.libvirt, cap_str, True) + self.addCleanup(delattr, libvirt_driver.libvirt, cap_str) + + # Handle just the NO_SUPPORT error + not_supported_exc = fakelibvirt.make_libvirtError( + libvirt.libvirtError, + 'this function is not supported by the connection driver:' + ' virConnectBaselineCPU', + error_code=libvirt.VIR_ERR_NO_SUPPORT) + + with mock.patch.object(conn._conn, 'baselineCPU', + side_effect=not_supported_exc): + caps = conn.get_host_capabilities() + self.assertEqual(vconfig.LibvirtConfigCaps, type(caps)) + self.assertNotIn('aes', [x.name for x in caps.host.cpu.features]) + + # Clear cached result so we can test again... 
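+        # (get_host_capabilities memoizes the parsed capabilities in
+        # conn._caps, so the second call would otherwise never reach
+        # baselineCPU)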
+ conn._caps = None + + # Other errors should not be caught + other_exc = fakelibvirt.make_libvirtError( + libvirt.libvirtError, + 'other exc', + error_code=libvirt.VIR_ERR_NO_DOMAIN) + + with mock.patch.object(conn._conn, 'baselineCPU', + side_effect=other_exc): + self.assertRaises(libvirt.libvirtError, conn.get_host_capabilities) + def test_lxc_get_host_capabilities_failed(self): conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) @@ -1408,7 +1445,8 @@ self.assertEqual("none", cfg.devices[7].action) - def test_get_guest_config_with_watchdog_action_through_flavor(self): + def _test_get_guest_config_with_watchdog_action_flavor(self, + hw_watchdog_action="hw:watchdog_action"): self.flags(virt_type='kvm', group='libvirt') conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) @@ -1418,7 +1456,7 @@ db.flavor_extra_specs_update_or_create( self.context, flavor['flavorid'], - {'hw_watchdog_action': 'none'}) + {hw_watchdog_action: 'none'}) instance_ref = db.instance_create(self.context, self.test_instance) @@ -1429,7 +1467,7 @@ db.flavor_extra_specs_delete(self.context, flavor['flavorid'], - 'hw_watchdog_action') + hw_watchdog_action) self.assertEqual(8, len(cfg.devices)) self.assertIsInstance(cfg.devices[0], @@ -1451,6 +1489,16 @@ self.assertEqual("none", cfg.devices[7].action) + def test_get_guest_config_with_watchdog_action_through_flavor(self): + self._test_get_guest_config_with_watchdog_action_flavor() + + # TODO(pkholkin): the test accepting old property name 'hw_watchdog_action' + # should be removed in L release + def test_get_guest_config_with_watchdog_action_through_flavor_no_scope( + self): + self._test_get_guest_config_with_watchdog_action_flavor( + hw_watchdog_action="hw_watchdog_action") + def test_get_guest_config_with_watchdog_action_meta_overrides_flavor(self): self.flags(virt_type='kvm', group='libvirt') @@ -3970,7 +4018,7 @@ self.mox.StubOutWithMock(conn, "_assert_dest_node_has_enough_disk") conn._assert_dest_node_has_enough_disk( self.context, instance_ref, dest_check_data['disk_available_mb'], - False) + False, None) self.mox.ReplayAll() conn.check_can_live_migrate_source(self.context, instance_ref, @@ -4076,7 +4124,8 @@ self.mox.StubOutWithMock(conn, "get_instance_disk_info") conn.get_instance_disk_info(instance_ref["name"]).AndReturn( '[{"virt_disk_size":2}]') - conn.get_instance_disk_info(instance_ref["name"]).AndReturn( + conn.get_instance_disk_info(instance_ref["name"], + block_device_info=None).AndReturn( '[{"virt_disk_size":2}]') dest_check_data = {"filename": "file", @@ -4277,6 +4326,33 @@ conn.pre_live_migration, c, inst_ref, vol, None, None, {'is_shared_storage': False}) + @mock.patch('nova.virt.driver.block_device_info_get_mapping', + return_value=()) + @mock.patch('nova.virt.configdrive.required_by', + return_value=True) + def test_pre_live_migration_block_with_config_drive_mocked_with_vfat( + self, mock_required_by, block_device_info_get_mapping): + self.flags(config_drive_format='vfat') + # Creating testdata + vol = {'block_device_mapping': [ + {'connection_info': 'dummy', 'mount_device': '/dev/sda'}, + {'connection_info': 'dummy', 'mount_device': '/dev/sdb'}]} + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + + self.test_instance['name'] = 'fake' + self.test_instance['kernel_id'] = None + + res_data = drvr.pre_live_migration( + self.context, self.test_instance, vol, [], None, + {'is_shared_storage': False}) + block_device_info_get_mapping.assert_called_once_with( + {'block_device_mapping': [ + {'connection_info': 'dummy', 
'mount_device': '/dev/sda'}, + {'connection_info': 'dummy', 'mount_device': '/dev/sdb'} + ]} + ) + self.assertIsNone(res_data) + def test_pre_live_migration_vol_backed_works_correctly_mocked(self): # Creating testdata, using temp dir. with utils.tempdir() as tmpdir: @@ -7308,6 +7384,33 @@ unplug.assert_called_once_with(fake_inst, 'netinfo', ignore_errors=True) + @mock.patch('os.path.exists', return_value=True) + @mock.patch('tempfile.mkstemp') + @mock.patch('os.close', return_value=None) + def test_check_instance_shared_storage_local_raw(self, + mock_close, + mock_mkstemp, + mock_exists): + instance_uuid = str(uuid.uuid4()) + self.flags(images_type='raw', group='libvirt') + self.flags(instances_path='/tmp') + mock_mkstemp.return_value = (-1, + '/tmp/{0}/file'.format(instance_uuid)) + driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + instance = fake_instance.fake_instance_obj(self.context) + temp_file = driver.check_instance_shared_storage_local(self.context, + instance) + self.assertEqual('/tmp/{0}/file'.format(instance_uuid), + temp_file['filename']) + + def test_check_instance_shared_storage_local_rbd(self): + self.flags(images_type='rbd', group='libvirt') + driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + instance = fake_instance.fake_instance_obj(self.context) + self.assertIsNone(driver. + check_instance_shared_storage_local(self.context, + instance)) + class HostStateTestCase(test.TestCase): @@ -7406,18 +7509,33 @@ def filterDefineXMLMock(self, xml): class FakeNWFilterInternal: - def __init__(self, parent, name, xml): + def __init__(self, parent, name, u, xml): self.name = name + self.uuid = u self.parent = parent self.xml = xml + def XMLDesc(self, flags): + return self.xml + def undefine(self): del self.parent.filters[self.name] - pass + tree = etree.fromstring(xml) name = tree.get('name') + u = tree.find('uuid') + if u is None: + u = uuid.uuid4().hex + else: + u = u.text if name not in self.filters: - self.filters[name] = FakeNWFilterInternal(self, name, xml) + self.filters[name] = FakeNWFilterInternal(self, name, u, xml) + else: + if self.filters[name].uuid != u: + raise libvirt.libvirtError( + "Mismatching name '%s' with uuid '%s' vs '%s'" + % (name, self.filters[name].uuid, u)) + self.filters[name].xml = xml return True @@ -7958,6 +8076,26 @@ db.instance_destroy(admin_ctxt, instance_ref['uuid']) + def test_redefining_nwfilters(self): + fakefilter = NWFilterFakes() + self.fw._conn.nwfilterDefineXML = fakefilter.filterDefineXMLMock + self.fw._conn.nwfilterLookupByName = fakefilter.nwfilterLookupByName + + instance_ref = self._create_instance() + inst_id = instance_ref['id'] + inst_uuid = instance_ref['uuid'] + + self.security_group = self.setup_and_return_security_group() + + db.instance_add_security_group(self.context, inst_uuid, + self.security_group['id']) + + instance = db.instance_get(self.context, inst_id) + + network_info = _fake_network_info(self.stubs, 1) + self.fw.setup_basic_filtering(instance, network_info) + self.fw.setup_basic_filtering(instance, network_info) + def test_nwfilter_parameters(self): admin_ctxt = context.get_admin_context() @@ -9066,6 +9204,17 @@ 'detach_interface', power_state.SHUTDOWN, expected_flags=(libvirt.VIR_DOMAIN_AFFECT_CONFIG)) + def test_instance_on_disk(self): + conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + instance = self._create_instance() + self.assertFalse(conn.instance_on_disk(instance)) + + def test_instance_on_disk_rbd(self): + self.flags(images_type='rbd', group='libvirt') + conn = 
libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + instance = self._create_instance() + self.assertTrue(conn.instance_on_disk(instance)) + class LibvirtVolumeUsageTestCase(test.TestCase): """Test for LibvirtDriver.get_all_volume_usage.""" diff -Nru nova-2014.1.3/nova/tests/virt/libvirt/test_libvirt_volume.py nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_volume.py --- nova-2014.1.3/nova/tests/virt/libvirt/test_libvirt_volume.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/virt/libvirt/test_libvirt_volume.py 2015-06-18 22:25:39.000000000 +0000 @@ -13,10 +13,12 @@ # License for the specific language governing permissions and limitations # under the License. -import fixtures +import contextlib import os import time +import fixtures +import mock from oslo.config import cfg from nova import exception @@ -238,6 +240,12 @@ } } + def test_rescan_multipath(self): + libvirt_driver = volume.LibvirtISCSIVolumeDriver(self.fake_conn) + libvirt_driver._rescan_multipath() + expected_multipath_cmd = ('multipath', '-r') + self.assertIn(expected_multipath_cmd, self.executes) + def test_libvirt_iscsi_driver(self): # NOTE(vish) exists is to make driver assume connecting worked self.stubs.Set(os.path, 'exists', lambda x: True) @@ -480,6 +488,9 @@ connection_info['data']['device_path'] = mpdev_filepath target_portals = ['fake_portal1', 'fake_portal2'] libvirt_driver._get_multipath_device_name = lambda x: mpdev_filepath + iscsi_devs = ['ip-%s-iscsi-%s-lun-0' % (self.location, self.iqn)] + self.stubs.Set(libvirt_driver, '_get_iscsi_devices', + lambda: iscsi_devs) self.stubs.Set(libvirt_driver, '_get_target_portals_from_iscsiadm_output', lambda x: [[self.location, self.iqn]]) @@ -491,6 +502,46 @@ expected_multipath_cmd = ('multipath', '-f', 'foo') self.assertIn(expected_multipath_cmd, self.executes) + def test_libvirt_kvm_volume_with_multipath_connecting(self): + libvirt_driver = volume.LibvirtISCSIVolumeDriver(self.fake_conn) + ip_iqns = [[self.location, self.iqn], + ['10.0.2.16:3260', self.iqn], + [self.location, + 'iqn.2010-10.org.openstack:volume-00000002']] + + with contextlib.nested( + mock.patch.object(os.path, 'exists', return_value=True), + mock.patch.object(libvirt_driver, '_run_iscsiadm_bare'), + mock.patch.object(libvirt_driver, + '_get_target_portals_from_iscsiadm_output', + return_value=ip_iqns), + mock.patch.object(libvirt_driver, '_connect_to_iscsi_portal'), + mock.patch.object(libvirt_driver, '_rescan_iscsi'), + mock.patch.object(libvirt_driver, '_get_host_device', + return_value='fake-device'), + mock.patch.object(libvirt_driver, '_rescan_multipath'), + mock.patch.object(libvirt_driver, '_get_multipath_device_name', + return_value='/dev/mapper/fake-mpath-devname') + ) as (mock_exists, mock_run_iscsiadm_bare, mock_get_portals, + mock_connect_iscsi, mock_rescan_iscsi, mock_host_device, + mock_rescan_multipath, mock_device_name): + vol = {'id': 1, 'name': self.name} + connection_info = self.iscsi_connection(vol, self.location, + self.iqn) + libvirt_driver.use_multipath = True + libvirt_driver.connect_volume(connection_info, self.disk_info) + + # Verify that the supplied iqn is used when it shares the same + # iqn between multiple portals. 
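+            # connect_volume should then log in to every portal with the
+            # original target_iqn: props1/props2 below differ only in
+            # 'target_portal', never in 'target_iqn'.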
+            connection_info = self.iscsi_connection(vol, self.location,
+                                                    self.iqn)
+            props1 = connection_info['data'].copy()
+            props2 = connection_info['data'].copy()
+            props2['target_portal'] = '10.0.2.16:3260'
+            expected_calls = [mock.call(props1), mock.call(props2),
+                              mock.call(props1)]
+            self.assertEqual(expected_calls, mock_connect_iscsi.call_args_list)
+
     def test_libvirt_kvm_volume_with_multipath_still_in_use(self):
         name = 'volume-00000001'
         location = '10.0.2.15:3260'
@@ -540,6 +591,66 @@
         self.mox.ReplayAll()
         libvirt_driver.disconnect_volume(connection_info, 'vde')
 
+    def test_libvirt_kvm_volume_with_multipath_disconnected(self):
+        libvirt_driver = volume.LibvirtISCSIVolumeDriver(self.fake_conn)
+        volumes = [{'name': self.name,
+                    'location': self.location,
+                    'iqn': self.iqn,
+                    'mpdev_filepath': '/dev/mapper/disconnect'},
+                   {'name': 'volume-00000002',
+                    'location': '10.0.2.15:3260',
+                    'iqn': 'iqn.2010-10.org.openstack:volume-00000002',
+                    'mpdev_filepath': '/dev/mapper/donotdisconnect'}]
+        iscsi_devs = ['ip-%s-iscsi-%s-lun-1' % (volumes[0]['location'],
+                                                volumes[0]['iqn']),
+                      'ip-%s-iscsi-%s-lun-1' % (volumes[1]['location'],
+                                                volumes[1]['iqn'])]
+
+        def _get_multipath_device_name(path):
+            if '%s-lun-1' % volumes[0]['iqn'] in path:
+                return volumes[0]['mpdev_filepath']
+            else:
+                return volumes[1]['mpdev_filepath']
+
+        def _get_multipath_iqn(mpdev):
+            if volumes[0]['mpdev_filepath'] == mpdev:
+                return volumes[0]['iqn']
+            else:
+                return volumes[1]['iqn']
+
+        with contextlib.nested(
+            mock.patch.object(os.path, 'exists', return_value=True),
+            mock.patch.object(self.fake_conn, 'get_all_block_devices',
+                              return_value=[volumes[1]['mpdev_filepath']]),
+            mock.patch.object(libvirt_driver, '_get_multipath_device_name',
+                              _get_multipath_device_name),
+            mock.patch.object(libvirt_driver, '_get_multipath_iqn',
+                              _get_multipath_iqn),
+            mock.patch.object(libvirt_driver, '_get_iscsi_devices',
+                              return_value=iscsi_devs),
+            mock.patch.object(libvirt_driver,
+                              '_get_target_portals_from_iscsiadm_output',
+                              return_value=[[volumes[0]['location'],
+                                             volumes[0]['iqn']],
+                                            [volumes[1]['location'],
+                                             volumes[1]['iqn']]]),
+            mock.patch.object(libvirt_driver, '_disconnect_mpath')
+        ) as (mock_exists, mock_devices, mock_device_name, mock_get_iqn,
+              mock_iscsi_devices, mock_get_portals, mock_disconnect_mpath):
+            vol = {'id': 1, 'name': volumes[0]['name']}
+            connection_info = self.iscsi_connection(vol,
+                                                    volumes[0]['location'],
+                                                    volumes[0]['iqn'])
+            connection_info['data']['device_path'] =\
+                volumes[0]['mpdev_filepath']
+            libvirt_driver.use_multipath = True
+            libvirt_driver.disconnect_volume(connection_info, 'vde')
+            # Ensure that the mpath device is disconnected.
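+            # Only the ip/iqn pair belonging to the first volume's multipath
+            # device may be passed to _disconnect_mpath; the second volume's
+            # device must stay connected.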
+ ips_iqns = [] + ips_iqns.append([volumes[0]['location'], volumes[0]['iqn']]) + mock_disconnect_mpath.assert_called_once_with( + connection_info['data'], ips_iqns) + def test_libvirt_kvm_volume_with_multipath_getmpdev(self): self.flags(iscsi_use_multipath=True, group='libvirt') self.stubs.Set(os.path, 'exists', lambda x: True) @@ -585,6 +696,9 @@ } target_portals = ['fake_portal1', 'fake_portal2'] libvirt_driver._get_multipath_device_name = lambda x: mpdev_filepath + iscsi_devs = ['ip-%s-iscsi-%s-lun-0' % (location, iqn)] + self.stubs.Set(libvirt_driver, '_get_iscsi_devices', + lambda: iscsi_devs) self.stubs.Set(libvirt_driver, '_get_target_portals_from_iscsiadm_output', lambda x: [[location, iqn]]) diff -Nru nova-2014.1.3/nova/tests/virt/vmwareapi/test_driver_api.py nova-2014.1.5/nova/tests/virt/vmwareapi/test_driver_api.py --- nova-2014.1.3/nova/tests/virt/vmwareapi/test_driver_api.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/virt/vmwareapi/test_driver_api.py 2015-06-18 22:25:39.000000000 +0000 @@ -1441,6 +1441,46 @@ None, self.destroy_disks) self.assertFalse(mock_destroy.called) + def _destroy_instance_without_vm_ref(self, resize_exists=False, + task_state=None): + + def fake_vm_ref_from_name(session, vm_name): + if resize_exists: + return 'fake-ref' + + self._create_instance() + with contextlib.nested( + mock.patch.object(vm_util, 'get_vm_ref_from_name', + fake_vm_ref_from_name), + mock.patch.object(self.conn._session, + '_call_method'), + mock.patch.object(self.conn._vmops, + '_destroy_instance') + ) as (mock_get, mock_call, mock_destroy): + self.instance.task_state = task_state + self.conn.destroy(self.context, self.instance, + self.network_info, + None, True) + if resize_exists: + if task_state == task_states.RESIZE_REVERTING: + expected = 1 + else: + expected = 2 + else: + expected = 1 + self.assertEqual(expected, mock_destroy.call_count) + self.assertFalse(mock_call.called) + + def test_destroy_instance_without_vm_ref(self): + self._destroy_instance_without_vm_ref() + + def test_destroy_instance_without_vm_ref_with_resize(self): + self._destroy_instance_without_vm_ref(resize_exists=True) + + def test_destroy_instance_without_vm_ref_with_resize_revert(self): + self._destroy_instance_without_vm_ref(resize_exists=True, + task_state=task_states.RESIZE_REVERTING) + def _rescue(self, config_drive=False): def fake_attach_disk_to_vm(vm_ref, instance, adapter_type, disk_type, vmdk_path=None, diff -Nru nova-2014.1.3/nova/tests/volume/test_cinder.py nova-2014.1.5/nova/tests/volume/test_cinder.py --- nova-2014.1.3/nova/tests/volume/test_cinder.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/tests/volume/test_cinder.py 2015-06-18 22:25:40.000000000 +0000 @@ -16,6 +16,7 @@ import mock from cinderclient import exceptions as cinder_exception +import mock from nova import context from nova import exception @@ -86,6 +87,17 @@ self.assertRaises(exception.InvalidInput, self.api.create, self.ctx, 1, '', '') + @mock.patch('nova.volume.cinder.cinderclient') + def test_create_over_quota_failed(self, mock_cinderclient): + mock_cinderclient.return_value.volumes.create.side_effect = ( + cinder_exception.OverLimit(413)) + self.assertRaises(exception.OverQuota, self.api.create, self.ctx, + 1, '', '') + mock_cinderclient.return_value.volumes.create.assert_called_once_with( + 1, user_id=None, imageRef=None, availability_zone=None, + volume_type=None, display_description='', snapshot_id=None, + display_name='', project_id=None, metadata=None) + def test_get_all(self): 
cinder.cinderclient(self.ctx).AndReturn(self.cinderclient) cinder._untranslate_volume_summary_view(self.ctx, @@ -110,22 +122,46 @@ def test_check_attach_availability_zone_differs(self): volume = {'status': 'available'} volume['attach_status'] = "detached" - instance = {'availability_zone': 'zone1'} - volume['availability_zone'] = 'zone2' - cinder.CONF.set_override('cinder_cross_az_attach', False) - self.assertRaises(exception.InvalidVolume, - self.api.check_attach, self.ctx, volume, instance) - volume['availability_zone'] = 'zone1' - self.assertIsNone(self.api.check_attach(self.ctx, volume, instance)) - cinder.CONF.reset() + instance = {'availability_zone': 'zone1', 'host': 'fakehost'} + + with mock.patch.object(cinder.az, 'get_instance_availability_zone', + side_effect=lambda context, + instance: 'zone1') as mock_get_instance_az: + + cinder.CONF.set_override('cinder_cross_az_attach', False) + volume['availability_zone'] = 'zone1' + self.assertIsNone(self.api.check_attach(self.ctx, + volume, instance)) + mock_get_instance_az.assert_called_once_with(self.ctx, instance) + mock_get_instance_az.reset_mock() + volume['availability_zone'] = 'zone2' + self.assertRaises(exception.InvalidVolume, + self.api.check_attach, self.ctx, volume, instance) + mock_get_instance_az.assert_called_once_with(self.ctx, instance) + mock_get_instance_az.reset_mock() + del instance['host'] + volume['availability_zone'] = 'zone1' + self.assertIsNone(self.api.check_attach( + self.ctx, volume, instance)) + mock_get_instance_az.assert_not_called() + volume['availability_zone'] = 'zone2' + self.assertRaises(exception.InvalidVolume, + self.api.check_attach, self.ctx, volume, instance) + mock_get_instance_az.assert_not_called() + cinder.CONF.reset() def test_check_attach(self): volume = {'status': 'available'} volume['attach_status'] = "detached" volume['availability_zone'] = 'zone1' - instance = {'availability_zone': 'zone1'} + instance = {'availability_zone': 'zone1', 'host': 'fakehost'} cinder.CONF.set_override('cinder_cross_az_attach', False) - self.assertIsNone(self.api.check_attach(self.ctx, volume, instance)) + + with mock.patch.object(cinder.az, 'get_instance_availability_zone', + side_effect=lambda context, instance: 'zone1'): + self.assertIsNone(self.api.check_attach( + self.ctx, volume, instance)) + cinder.CONF.reset() def test_check_detach(self): diff -Nru nova-2014.1.3/nova/virt/driver.py nova-2014.1.5/nova/virt/driver.py --- nova-2014.1.3/nova/virt/driver.py 2014-10-02 23:32:00.000000000 +0000 +++ nova-2014.1.5/nova/virt/driver.py 2015-06-18 22:25:40.000000000 +0000 @@ -748,8 +748,8 @@ """ raise NotImplementedError() - def check_can_live_migrate_source(self, ctxt, instance_ref, - dest_check_data): + def check_can_live_migrate_source(self, context, instance_ref, + dest_check_data, block_device_info=None): """Check if it is possible to execute live migration. 

         This checks if the live migration can succeed, based on the
@@ -758,6 +758,7 @@
         :param context: security context
         :param instance_ref: nova.db.sqlalchemy.models.Instance
         :param dest_check_data: result of check_can_live_migrate_destination
+        :param block_device_info: result of _get_instance_block_device_info
         :returns: a dict containing migration info (hypervisor-dependent)
         """
         raise NotImplementedError()
diff -Nru nova-2014.1.3/nova/virt/fake.py nova-2014.1.5/nova/virt/fake.py
--- nova-2014.1.3/nova/virt/fake.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/fake.py	2015-06-18 22:25:40.000000000 +0000
@@ -395,7 +395,7 @@
         return {}
 
     def check_can_live_migrate_source(self, ctxt, instance_ref,
-                                      dest_check_data):
+                                      dest_check_data, block_device_info=None):
         return
 
     def finish_migration(self, context, migration, instance, disk_info,
diff -Nru nova-2014.1.3/nova/virt/hyperv/driver.py nova-2014.1.5/nova/virt/hyperv/driver.py
--- nova-2014.1.3/nova/virt/hyperv/driver.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/hyperv/driver.py	2015-06-18 22:25:40.000000000 +0000
@@ -116,7 +116,13 @@
     def power_on(self, context, instance, network_info,
                  block_device_info=None):
-        self._vmops.power_on(instance)
+        self._vmops.power_on(instance, block_device_info)
+
+    def resume_state_on_host_boot(self, context, instance, network_info,
+                                  block_device_info=None):
+        """Resume guest state when a host is booted."""
+        self._vmops.resume_state_on_host_boot(context, instance, network_info,
+                                              block_device_info)
 
     def live_migration(self, context, instance_ref, dest, post_method,
                        recover_method, block_migration=False,
@@ -159,7 +165,7 @@
             ctxt, dest_check_data)
 
     def check_can_live_migrate_source(self, ctxt, instance_ref,
-                                      dest_check_data):
+                                      dest_check_data, block_device_info=None):
         return self._livemigrationops.check_can_live_migrate_source(
             ctxt, instance_ref, dest_check_data)
 
@@ -212,5 +218,8 @@
     def get_host_ip_addr(self):
         return self._hostops.get_host_ip_addr()
 
+    def get_host_uptime(self, host):
+        return self._hostops.get_host_uptime()
+
     def get_rdp_console(self, context, instance):
         return self._rdpconsoleops.get_rdp_console(instance)
diff -Nru nova-2014.1.3/nova/virt/hyperv/hostops.py nova-2014.1.5/nova/virt/hyperv/hostops.py
--- nova-2014.1.3/nova/virt/hyperv/hostops.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/hyperv/hostops.py	2015-06-18 22:25:40.000000000 +0000
@@ -16,8 +16,10 @@
 """
 Management class for host operations.
 """
+import datetime
 import os
 import platform
+import time
 
 from oslo.config import cfg
 
@@ -178,3 +180,23 @@
         host_ip = self._hostutils.get_local_ips()[0]
         LOG.debug(_("Host IP address is: %s"), host_ip)
         return host_ip
+
+    def get_host_uptime(self):
+        """Returns the host uptime."""
+
+        tick_count64 = self._hostutils.get_host_tick_count64()
+
+        # Format the string to match the libvirt driver's uptime output.
+        # Libvirt uptime returns a combination of the following:
+        # - current host time
+        # - time since the host is up
+        # - number of logged in users
+        # - cpu load
+        # Since the Windows function GetTickCount64 returns only
+        # the time since the host is up, return 0s for cpu load
+        # and number of logged in users.
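+        # (e.g. "04:37:09 up 3 days, 4:20:16, 0 users,
+        # load average: 0, 0, 0" -- an illustrative value only)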
+        # This is done to ensure the format of the returned
+        # value is the same as in libvirt
+        return "%s up %s, 0 users, load average: 0, 0, 0" % (
+              str(time.strftime("%H:%M:%S")),
+              str(datetime.timedelta(milliseconds=long(tick_count64))))
diff -Nru nova-2014.1.3/nova/virt/hyperv/hostutils.py nova-2014.1.5/nova/virt/hyperv/hostutils.py
--- nova-2014.1.3/nova/virt/hyperv/hostutils.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/hyperv/hostutils.py	2015-06-18 22:25:40.000000000 +0000
@@ -75,3 +75,6 @@
         # Returns IPv4 and IPv6 addresses, ordered by protocol family
         addr_info.sort()
         return [a[4][0] for a in addr_info]
+
+    def get_host_tick_count64(self):
+        return ctypes.windll.kernel32.GetTickCount64()
diff -Nru nova-2014.1.3/nova/virt/hyperv/vmops.py nova-2014.1.5/nova/virt/hyperv/vmops.py
--- nova-2014.1.3/nova/virt/hyperv/vmops.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/hyperv/vmops.py	2015-06-18 22:25:40.000000000 +0000
@@ -416,10 +416,15 @@
         self._set_vm_state(instance["name"],
                            constants.HYPERV_VM_STATE_DISABLED)
 
-    def power_on(self, instance):
+    def power_on(self, instance, block_device_info=None):
         """Power on the specified instance."""
         LOG.debug(_("Power on instance"), instance=instance)
-        self._set_vm_state(instance["name"],
+
+        if block_device_info:
+            self._volumeops.fix_instance_volume_disk_paths(instance['name'],
+                                                           block_device_info)
+
+        self._set_vm_state(instance['name'],
                            constants.HYPERV_VM_STATE_ENABLED)
 
     def _set_vm_state(self, vm_name, req_state):
@@ -433,3 +438,8 @@
         LOG.error(_("Failed to change vm state of %(vm_name)s"
                     " to %(req_state)s"),
                   {'vm_name': vm_name, 'req_state': req_state})
+
+    def resume_state_on_host_boot(self, context, instance, network_info,
+                                  block_device_info=None):
+        """Resume guest state when a host is booted."""
+        self.power_on(instance, block_device_info)
diff -Nru nova-2014.1.3/nova/virt/hyperv/vmutils.py nova-2014.1.5/nova/virt/hyperv/vmutils.py
--- nova-2014.1.3/nova/virt/hyperv/vmutils.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/hyperv/vmutils.py	2015-06-18 22:25:40.000000000 +0000
@@ -28,7 +28,7 @@
 from oslo.config import cfg
 
 from nova import exception
-from nova.openstack.common.gettextutils import _
+from nova.openstack.common.gettextutils import _, _LW
 from nova.openstack.common import log as logging
 from nova.virt.hyperv import constants
 
@@ -81,6 +81,7 @@
     _AFFECTED_JOB_ELEMENT_CLASS = "Msvm_AffectedJobElement"
 
     _VIRTUAL_SYSTEM_CURRENT_SETTINGS = 3
+    _AUTOMATIC_STARTUP_ACTION_NONE = 0
 
     _vm_power_states_map = {constants.HYPERV_VM_STATE_ENABLED: 2,
                             constants.HYPERV_VM_STATE_DISABLED: 3,
@@ -253,6 +254,8 @@
     def _create_vm_obj(self, vs_man_svc, vm_name, notes):
         vs_gs_data = self._conn.Msvm_VirtualSystemGlobalSettingData.new()
         vs_gs_data.ElementName = vm_name
+        # Don't start automatically on host boot
+        vs_gs_data.AutomaticStartupAction = self._AUTOMATIC_STARTUP_ACTION_NONE
 
         (vm_path,
          job_path,
@@ -386,6 +389,33 @@
             diskdrive.HostResource = [mounted_disk_path]
             self._add_virt_resource(diskdrive, vm.path_())
 
+    def _get_disk_resource_address(self, disk_resource):
+        return disk_resource.Address
+
+    def set_disk_host_resource(self, vm_name, controller_path, address,
+                               mounted_disk_path):
+        disk_found = False
+        vm = self._lookup_vm_check(vm_name)
+        (disk_resources, volume_resources) = self._get_vm_disks(vm)
+        for disk_resource in disk_resources + volume_resources:
+            if (disk_resource.Parent == controller_path and
+                    self._get_disk_resource_address(disk_resource) ==
+                    str(address)):
+                if (disk_resource.HostResource and
+                        disk_resource.HostResource[0] != mounted_disk_path):
+                    LOG.debug('Updating disk host resource "%(old)s" to '
+                              '"%(new)s"' %
+                              {'old': disk_resource.HostResource[0],
+                               'new': mounted_disk_path})
+                    disk_resource.HostResource = [mounted_disk_path]
+                    self._modify_virt_resource(disk_resource, vm.path_())
+                disk_found = True
+                break
+        if not disk_found:
+            LOG.warn(_LW('Disk not found on controller "%(controller_path)s" '
+                         'with address "%(address)s"'),
+                     {'controller_path': controller_path, 'address': address})
+
     def set_nic_connection(self, vm_name, nic_name, vswitch_conn_data):
         nic_data = self._get_nic_data_by_name(nic_name)
         nic_data.Connection = [vswitch_conn_data]
@@ -454,6 +484,12 @@
                 r.ResourceSubType in
                 [self._IDE_DISK_RES_SUB_TYPE,
                  self._IDE_DVD_RES_SUB_TYPE]]
+
+        if (self._RESOURCE_ALLOC_SETTING_DATA_CLASS !=
+                self._STORAGE_ALLOC_SETTING_DATA_CLASS):
+            rasds = vmsettings[0].associators(
+                wmi_result_class=self._RESOURCE_ALLOC_SETTING_DATA_CLASS)
+
         volume_resources = [r for r in rasds if
                             r.ResourceSubType == self._PHYS_DISK_RES_SUB_TYPE]
diff -Nru nova-2014.1.3/nova/virt/hyperv/vmutilsv2.py nova-2014.1.5/nova/virt/hyperv/vmutilsv2.py
--- nova-2014.1.3/nova/virt/hyperv/vmutilsv2.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/hyperv/vmutilsv2.py	2015-06-18 22:25:40.000000000 +0000
@@ -57,6 +57,8 @@
     _ETHERNET_PORT_ALLOCATION_SETTING_DATA_CLASS = \
         'Msvm_EthernetPortAllocationSettingData'
 
+    _AUTOMATIC_STARTUP_ACTION_NONE = 2
+
     _vm_power_states_map = {constants.HYPERV_VM_STATE_ENABLED: 2,
                             constants.HYPERV_VM_STATE_DISABLED: 3,
                             constants.HYPERV_VM_STATE_REBOOT: 11,
@@ -91,6 +93,9 @@
         vs_data.ElementName = vm_name
         vs_data.Notes = notes
 
+        # Don't start automatically on host boot
+        vs_data.AutomaticStartupAction = self._AUTOMATIC_STARTUP_ACTION_NONE
+
         (job_path,
          vm_path,
          ret_val) = vs_man_svc.DefineSystem(ResourceSettings=[],
@@ -159,6 +164,9 @@
 
         self._add_virt_resource(diskdrive, vm.path_())
 
+    def _get_disk_resource_address(self, disk_resource):
+        return disk_resource.AddressOnParent
+
     def create_scsi_controller(self, vm_name):
         """Create a SCSI controller ready to mount volumes."""
         scsicontrl = self._get_new_resource_setting_data(
diff -Nru nova-2014.1.3/nova/virt/hyperv/volumeops.py nova-2014.1.5/nova/virt/hyperv/volumeops.py
--- nova-2014.1.3/nova/virt/hyperv/volumeops.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/hyperv/volumeops.py	2015-06-18 22:25:40.000000000 +0000
@@ -250,3 +250,22 @@
 
     def get_target_from_disk_path(self, physical_drive_path):
         return self._volutils.get_target_from_disk_path(physical_drive_path)
+
+    def fix_instance_volume_disk_paths(self, instance_name, block_device_info):
+        mapping = driver.block_device_info_get_mapping(block_device_info)
+
+        if self.ebs_root_in_block_devices(block_device_info):
+            mapping = mapping[1:]
+
+        disk_address = 0
+        for vol in mapping:
+            data = vol['connection_info']['data']
+            target_lun = data['target_lun']
+            target_iqn = data['target_iqn']
+
+            mounted_disk_path = self._get_mounted_disk_from_lun(
+                target_iqn, target_lun, True)
+            ctrller_path = self._vmutils.get_vm_scsi_controller(instance_name)
+            self._vmutils.set_disk_host_resource(
+                instance_name, ctrller_path, disk_address, mounted_disk_path)
+            disk_address += 1
diff -Nru nova-2014.1.3/nova/virt/libvirt/driver.py nova-2014.1.5/nova/virt/libvirt/driver.py
--- nova-2014.1.3/nova/virt/libvirt/driver.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/libvirt/driver.py	2015-06-18 22:25:40.000000000 +0000
@@ -77,6 +77,7 @@
 from nova.openstack.common import excutils
 from nova.openstack.common import fileutils
 from nova.openstack.common.gettextutils import _
+from nova.openstack.common.gettextutils import _LW
 from nova.openstack.common import importutils
 from nova.openstack.common import jsonutils
 from nova.openstack.common import log as logging
@@ -2888,9 +2889,14 @@
             # this -1 checking should be removed later.
             if features and features != -1:
                 self._caps.host.cpu.parse_str(features)
-        except libvirt.VIR_ERR_NO_SUPPORT:
-            # Note(yjiang5): ignore if libvirt has no support
-            pass
+        except libvirt.libvirtError as ex:
+            error_code = ex.get_error_code()
+            if error_code == libvirt.VIR_ERR_NO_SUPPORT:
+                LOG.warn(_LW("URI %(uri)s does not support full set"
+                             " of host capabilities: " "%(error)s"),
+                         {'uri': self.uri(), 'error': ex})
+            else:
+                raise
         return self._caps
 
     def get_host_uuid(self):
@@ -3435,8 +3441,16 @@
         raise exception.PciDeviceUnsupportedHypervisor(
             type=CONF.libvirt.virt_type)
 
-        watchdog_action = flavor.extra_specs.get('hw_watchdog_action',
-                                                 'disabled')
+        if 'hw_watchdog_action' in flavor.extra_specs:
+            LOG.warn(_LW('Old property name "hw_watchdog_action" is now '
+                         'deprecated and will be removed in L release. '
+                         'Use updated property name '
+                         '"hw:watchdog_action" instead'))
+        # TODO(pkholkin): accepting old property name 'hw_watchdog_action'
+        # should be removed in L release
+        watchdog_action = (flavor.extra_specs.get('hw_watchdog_action') or
+                           flavor.extra_specs.get('hw:watchdog_action')
+                           or 'disabled')
         if (image_meta is not None and
                 image_meta.get('properties', {}).get('hw_watchdog_action')):
             watchdog_action = image_meta['properties']['hw_watchdog_action']
@@ -4193,6 +4207,22 @@
         return stats
 
     def check_instance_shared_storage_local(self, context, instance):
+        """Check if instance files are located on shared storage.
+
+        This runs a check on the destination host, and then calls
+        back to the source host to check the results.
+
+        :param context: security context
+        :param instance: nova.db.sqlalchemy.models.Instance
+        :returns:
+            :tempfile: A dict containing the tempfile info on the destination
+                       host
+            :None: 1. If the instance path does not exist.
+                   2. If the image backend is a shared block storage type.
+        """
+        if self.image_backend.backend().is_shared_block_storage():
+            return None
+
         dirpath = libvirt_utils.get_instance_path(instance)
 
         if not os.path.exists(dirpath):
@@ -4259,7 +4289,8 @@
             self._cleanup_shared_storage_test_file(filename)
 
     def check_can_live_migrate_source(self, context, instance,
-                                      dest_check_data):
+                                      dest_check_data,
+                                      block_device_info=None):
         """Check if it is possible to execute live migration.

         This checks if the live migration can succeed, based on the
@@ -4268,6 +4299,7 @@
         :param context: security context
         :param instance: nova.db.sqlalchemy.models.Instance
         :param dest_check_data: result of check_can_live_migrate_destination
+        :param block_device_info: result of _get_instance_block_device_info
         :returns: a dict containing migration info
         """
         # Checking shared storage connectivity
@@ -4288,7 +4320,8 @@
                 raise exception.InvalidLocalStorage(reason=reason, path=source)
             self._assert_dest_node_has_enough_disk(context, instance,
                                     dest_check_data['disk_available_mb'],
-                                    dest_check_data['disk_over_commit'])
+                                    dest_check_data['disk_over_commit'],
+                                    block_device_info)
 
         elif not shared and (not is_volume_backed or has_local_disks):
             reason = _("Live migration can not be used "
@@ -4306,7 +4339,8 @@
         return dest_check_data
 
     def _assert_dest_node_has_enough_disk(self, context, instance,
-                                          available_mb, disk_over_commit):
+                                          available_mb, disk_over_commit,
+                                          block_device_info=None):
         """Checks if the destination has enough disk for block migration."""
         # Libvirt supports the qcow2 disk format, which is usually compressed
         # on compute nodes.
@@ -4322,7 +4356,8 @@
 
         if available_mb:
             available = available_mb * units.Mi
 
-        ret = self.get_instance_disk_info(instance['name'])
+        ret = self.get_instance_disk_info(instance['name'],
+                                          block_device_info=block_device_info)
         disk_infos = jsonutils.loads(ret)
 
         necessary = 0
@@ -4570,11 +4605,13 @@
         instance_relative_path = migrate_data.get('instance_relative_path')
 
         if not is_shared_storage:
-            # NOTE(mikal): block migration of instances using config drive is
-            # not supported because of a bug in libvirt (read only devices
-            # are not copied by libvirt). See bug/1246201
-            if configdrive.required_by(instance):
-                raise exception.NoBlockMigrationForConfigDriveInLibVirt()
+            # NOTE(dims): Using a config drive with iso format does not work
+            # because of a bug in libvirt with read only devices. However,
+            # one can use vfat as the config_drive_format, which works fine.
+            # Please see bug/1246201 for details on the libvirt bug.
+            if CONF.config_drive_format != 'vfat':
+                if configdrive.required_by(instance):
+                    raise exception.NoBlockMigrationForConfigDriveInLibVirt()
 
             # NOTE(mikal): this doesn't use libvirt_utils.get_instance_path
             # because we are ensuring that the same instance directory name
@@ -5183,7 +5220,13 @@
         # ensure directories exist and are writable
         instance_path = libvirt_utils.get_instance_path(instance)
         LOG.debug(_('Checking instance files accessibility %s'), instance_path)
-        return os.access(instance_path, os.W_OK)
+        shared_instance_path = os.access(instance_path, os.W_OK)
+        # NOTE(flwang): In the shared block storage scenario, the file system
+        # is not really shared by the two hosts, but the volume of the
+        # evacuated instance is still reachable.
+        shared_block_storage = (self.image_backend.backend().
+                                is_shared_block_storage())
+        return shared_instance_path or shared_block_storage
 
     def inject_network_info(self, instance, nw_info):
         self.firewall_driver.setup_basic_filtering(instance, nw_info)
diff -Nru nova-2014.1.3/nova/virt/libvirt/firewall.py nova-2014.1.5/nova/virt/libvirt/firewall.py
--- nova-2014.1.3/nova/virt/libvirt/firewall.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/libvirt/firewall.py	2015-06-18 22:25:40.000000000 +0000
@@ -15,6 +15,9 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
+import uuid
+
+from lxml import etree
 from oslo.config import cfg
 
 from nova.cloudpipe import pipelib
@@ -59,31 +62,30 @@
         return self._libvirt_get_connection()
     _conn = property(_get_connection)
 
-    @staticmethod
-    def nova_no_nd_reflection_filter():
+    def nova_no_nd_reflection_filter(self):
         """This filter protects false positives on IPv6 Duplicate Address
         Detection(DAD).
         """
+        uuid = self._get_filter_uuid('nova-no-nd-reflection')
         return '''<filter name='nova-no-nd-reflection' chain='ipv6'>
                   <!-- no nd reflection -->
                   <!-- drop if destination mac is v6 mcast mac addr and
                        we sent it. -->
+                  <uuid>%s</uuid>
                   <rule action='drop' direction='in'>
                       <mac dstmacaddr='33:33:00:00:00:00'
                            dstmacmask='ff:ff:00:00:00:00' srcmacaddr='$MAC'/>
                   </rule>
-                  </filter>'''
+                  </filter>''' % uuid
 
-    @staticmethod
-    def nova_dhcp_filter():
+    def nova_dhcp_filter(self):
         """The standard allow-dhcp-server filter is an <ip> one, so it uses
            ebtables to allow traffic through. Without a corresponding rule in
            iptables, it'll get blocked anyway.
         """
+        uuid = self._get_filter_uuid('nova-allow-dhcp-server')
         return '''<filter name='nova-allow-dhcp-server' chain='ipv4'>
-                    <uuid>891e4787-e5c0-d59b-cbd6-41bc3c6b36fc</uuid>
+                    <uuid>%s</uuid>
                     <rule action='accept' direction='out'
                           priority='100'>
                       <udp srcipaddr='0.0.0.0'
                            dstipaddr='255.255.255.255'
                            srcportstart='68'
                            dstportend='67'/>
                     </rule>
                     <rule action='accept' direction='in'
                           priority='100'>
                       <udp srcipaddr='$DHCPSERVER'
                            srcportstart='67'
                            dstportend='68'/>
                     </rule>
-                  </filter>'''
+                  </filter>''' % uuid
 
     def setup_basic_filtering(self, instance, network_info):
         """Set up basic filtering (MAC, IP, and ARP spoofing protection)."""
@@ -172,7 +174,9 @@
             nic_id = vif['address'].replace(':', '')
             instance_filter_name = self._instance_filter_name(instance, nic_id)
             parameters = self._get_instance_filter_parameters(vif)
+            uuid = self._get_filter_uuid(instance_filter_name)
             xml = '''<filter name='%s' chain='root'>''' % instance_filter_name
+            xml += '<uuid>%s</uuid>' % uuid
             for f in filters:
                 xml += '''<filterref filter='%s'>''' % f
             xml += ''.join(parameters)
@@ -210,23 +214,40 @@
         filter_set = ['no-mac-spoofing',
                       'no-ip-spoofing',
                       'no-arp-spoofing']
-        self._define_filter(self.nova_no_nd_reflection_filter)
+
+        self._define_filter(self.nova_no_nd_reflection_filter())
         filter_set.append('nova-no-nd-reflection')
         self._define_filter(self._filter_container('nova-nodhcp', filter_set))
         filter_set.append('allow-dhcp-server')
         self._define_filter(self._filter_container('nova-base', filter_set))
         self._define_filter(self._filter_container('nova-vpn',
                                                    ['allow-dhcp-server']))
-        self._define_filter(self.nova_dhcp_filter)
+        self._define_filter(self.nova_dhcp_filter())
 
         self.static_filters_configured = True
 
     def _filter_container(self, name, filters):
-        xml = '''<filter name='%s' chain='root'>%s</filter>''' % (
-            name,
+        uuid = self._get_filter_uuid(name)
+        xml = '''<filter name='%s' chain='root'>
+                 <uuid>%s</uuid>
+                 %s
+                 </filter>''' % (name, uuid,
             ''.join(["<filterref filter='%s'/>" % (f,) for f in filters]))
         return xml
 
+    def _get_filter_uuid(self, name):
+        try:
+            flt = self._conn.nwfilterLookupByName(name)
+            xml = flt.XMLDesc(0)
+            doc = etree.fromstring(xml)
+            u = doc.find("./uuid").text
+        except Exception as e:
+            LOG.debug("Cannot find UUID for filter '%s': '%s'" % (name, e))
+            u = uuid.uuid4().hex
+
+        LOG.debug("UUID for filter '%s' is '%s'" % (name, u))
+        return u
+
     def _define_filter(self, xml):
         if callable(xml):
             xml = xml()
diff -Nru nova-2014.1.3/nova/virt/libvirt/imagebackend.py nova-2014.1.5/nova/virt/libvirt/imagebackend.py
--- nova-2014.1.3/nova/virt/libvirt/imagebackend.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/libvirt/imagebackend.py	2015-06-18 22:25:40.000000000 +0000
@@ -304,6 +304,11 @@
             raise exception.DiskInfoReadWriteFail(reason=unicode(e))
         return driver_format
 
+    @staticmethod
+    def is_shared_block_storage():
+        """True if the backend puts images on a shared block storage."""
+        return False
+
 
 class Raw(Image):
     def __init__(self, instance=None, disk_name=None, path=None):
@@ -683,6 +688,10 @@
     def snapshot_extract(self, target, out_format):
         images.convert_image(self.path, target, out_format)
 
+    @staticmethod
+    def is_shared_block_storage():
+        return True
+
 
 class Backend(object):
     def __init__(self, use_cow):
diff -Nru nova-2014.1.3/nova/virt/libvirt/vif.py nova-2014.1.5/nova/virt/libvirt/vif.py
--- nova-2014.1.3/nova/virt/libvirt/vif.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/libvirt/vif.py	2015-06-18 22:25:40.000000000 +0000
@@ -592,7 +592,7 @@
                           'vif': vif})
 
         if vif_type is None:
-            raise exception.NovaException(
+            raise exception.VirtualInterfacePlugException(
                 _("vif_type parameter must be present "
                   "for this vif_driver implementation"))
         elif vif_type == network_model.VIF_TYPE_BRIDGE:
@@ -612,7 +612,7 @@
         elif vif_type == network_model.VIF_TYPE_MIDONET:
             self.plug_midonet(instance, vif)
         else:
-            raise exception.NovaException(
+            raise exception.VirtualInterfacePlugException(
                 _("Unexpected vif_type=%s") % vif_type)
 
     def unplug_bridge(self, instance, vif):
diff -Nru nova-2014.1.3/nova/virt/libvirt/volume.py nova-2014.1.5/nova/virt/libvirt/volume.py
--- nova-2014.1.3/nova/virt/libvirt/volume.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/libvirt/volume.py	2015-06-18 22:25:40.000000000 +0000
@@ -281,10 +281,29 @@
                                            check_exit_code=[0, 255])[0] \
             or ""
 
-        for ip, iqn in self._get_target_portals_from_iscsiadm_output(out):
+        # There are two types of iSCSI multipath devices. One shares the
+        # same iqn between multiple portals, and the other uses different
+        # iqns on different portals. Try to identify the type by checking
+        # in the iscsiadm output whether the iqn is used by multiple
+        # portals. If it is, it's the former, so use the supplied iqn.
+        # Otherwise, it's the latter, so try the ip,iqn combinations to
+        # find the targets which constitute the multipath device.
+        ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
+        same_portal = False
+        all_portals = set()
+        match_portals = set()
+        for ip, iqn in ips_iqns:
+            all_portals.add(ip)
+            if iqn == iscsi_properties['target_iqn']:
+                match_portals.add(ip)
+        if len(all_portals) == len(match_portals):
+            same_portal = True
+
+        for ip, iqn in ips_iqns:
             props = iscsi_properties.copy()
-            props['target_portal'] = ip
-            props['target_iqn'] = iqn
+            props['target_portal'] = ip.split(",")[0]
+            if not same_portal:
+                props['target_iqn'] = iqn
             self._connect_to_iscsi_portal(props)
 
         self._rescan_iscsi()
@@ -415,7 +434,23 @@
                                           check_exit_code=[0, 255])[0] \
             or ""
 
-        ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
+        # Extract targets for the current multipath device.
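+        # Each /dev/disk/by-path entry is named
+        # "ip-<portal>-iscsi-<iqn>-lun-<n>"; keep only the (ip, iqn) pairs
+        # whose by-path entry resolves to this multipath device.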
@@ -415,7 +434,23 @@
                                    check_exit_code=[0, 255])[0] \
             or ""
 
-        ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
+        # Extract targets for the current multipath device.
+        ips_iqns = []
+        entries = self._get_iscsi_devices()
+        for ip, iqn in self._get_target_portals_from_iscsiadm_output(out):
+            ip_iqn = "%s-iscsi-%s" % (ip.split(",")[0], iqn)
+            for entry in entries:
+                entry_ip_iqn = entry.split("-lun-")[0]
+                if entry_ip_iqn[:3] == "ip-":
+                    entry_ip_iqn = entry_ip_iqn[3:]
+                if (ip_iqn != entry_ip_iqn):
+                    continue
+                entry_real_path = os.path.realpath("/dev/disk/by-path/%s" %
+                                                   entry)
+                entry_mpdev = self._get_multipath_device_name(entry_real_path)
+                if entry_mpdev == multipath_device:
+                    ips_iqns.append([ip, iqn])
+                    break
 
         if not devices:
             # disconnect if no other multipath devices
@@ -599,7 +634,7 @@
                                  check_exit_code=[0, 1, 21, 255])
 
     def _rescan_multipath(self):
-        self._run_multipath('-r', check_exit_code=[0, 1, 21])
+        self._run_multipath(['-r'], check_exit_code=[0, 1, 21])
 
     def _get_host_device(self, iscsi_properties):
         return ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s" %
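
The disconnect path above narrows the discovered (ip, iqn) pairs down to the ones whose `/dev/disk/by-path` entry really belongs to the multipath device being removed, so unrelated sessions stay connected. A sketch of that matching with the device resolution stubbed out; `targets_of` and `mpdev_of` are hypothetical names, and Nova resolves entries through `os.path.realpath` and `_get_multipath_device_name` instead of a stub:

    def targets_of(ips_iqns, entries, multipath_device, mpdev_of):
        """Keep only the (ip, iqn) pairs whose by-path entry maps to
        the given multipath device."""
        result = []
        for ip, iqn in ips_iqns:
            ip_iqn = "%s-iscsi-%s" % (ip.split(",")[0], iqn)
            for entry in entries:
                entry_ip_iqn = entry.split("-lun-")[0]
                if entry_ip_iqn.startswith("ip-"):
                    entry_ip_iqn = entry_ip_iqn[3:]
                if ip_iqn != entry_ip_iqn:
                    continue
                if mpdev_of(entry) == multipath_device:
                    result.append((ip, iqn))
                    break
        return result


    entries = ["ip-192.168.0.1:3260-iscsi-iqn.example:tgt-lun-1"]
    # Stubbed resolver: every entry belongs to /dev/mapper/mpatha here.
    print(targets_of([("192.168.0.1:3260,1", "iqn.example:tgt")],
                     entries, "/dev/mapper/mpatha",
                     mpdev_of=lambda e: "/dev/mapper/mpatha"))
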
diff -Nru nova-2014.1.3/nova/virt/vmwareapi/driver.py nova-2014.1.5/nova/virt/vmwareapi/driver.py
--- nova-2014.1.3/nova/virt/vmwareapi/driver.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/vmwareapi/driver.py	2015-06-18 22:25:40.000000000 +0000
@@ -977,7 +977,7 @@
         try:
             task_info = self._call_method(vim_util, "get_dynamic_property",
                                           task_ref, "Task", "info")
-            task_name = task_info.name
+            task_name = getattr(task_info, 'name', '')
             if task_info.state in ['queued', 'running']:
                 return
             elif task_info.state == 'success':
diff -Nru nova-2014.1.3/nova/virt/vmwareapi/vmops.py nova-2014.1.5/nova/virt/vmwareapi/vmops.py
--- nova-2014.1.3/nova/virt/vmwareapi/vmops.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/vmwareapi/vmops.py	2015-06-18 22:25:40.000000000 +0000
@@ -325,8 +325,7 @@
 
         # Set the vnc configuration of the instance, vnc port starts from 5900
         if CONF.vnc_enabled:
-            vnc_port = vm_util.get_vnc_port(self._session)
-            self._set_vnc_config(client_factory, instance, vnc_port)
+            self._get_and_set_vnc_config(client_factory, instance)
 
         def _create_virtual_disk(virtual_disk_path, file_size_in_kb):
             """Create a virtual disk of the size of flat vmdk file."""
@@ -1044,6 +1043,10 @@
         instance_name = instance['uuid']
         try:
             vm_ref = vm_util.get_vm_ref_from_name(self._session, instance_name)
+            if vm_ref is None:
+                LOG.warning(_('Instance does not exist on backend'),
+                            instance=instance)
+                return
             lst_properties = ["config.files.vmPathName",
                               "runtime.powerState", "datastore"]
             props = self._session._call_method(vim_util,
@@ -1129,6 +1132,21 @@
             self._destroy_instance(instance, network_info,
                                    destroy_disks=destroy_disks,
                                    instance_name=rescue_name)
+        # NOTE(arnaud): Destroy the uuid-orig and uuid VMs iff this is not
+        # triggered by the revert resize api call. This prevents the
+        # uuid-orig VM from being deleted before it can be re-associated.
+        if instance['task_state'] != task_states.RESIZE_REVERTING:
+            # When VM deletion is triggered in the middle of a VM resize,
+            # before the VM reaches the RESIZED state, the uuid-orig VM must
+            # be deleted to avoid a VM leak. _destroy_instance checks that
+            # the vmref exists before attempting the deletion.
+            resize_orig_vmname = instance['uuid'] + self._migrate_suffix
+            vm_orig_ref = vm_util.get_vm_ref_from_name(self._session,
+                                                       resize_orig_vmname)
+            if vm_orig_ref:
+                self._destroy_instance(instance, network_info,
+                                       destroy_disks=destroy_disks,
+                                       instance_name=resize_orig_vmname)
         self._destroy_instance(instance, network_info,
                                destroy_disks=destroy_disks)
         LOG.debug(_("Instance destroyed"), instance=instance)
@@ -1580,8 +1598,10 @@
         LOG.debug(_("Reconfigured VM instance to set the machine id"),
                   instance=instance)
 
-    def _set_vnc_config(self, client_factory, instance, port):
+    @utils.synchronized('vmware.get_and_set_vnc_port')
+    def _get_and_set_vnc_config(self, client_factory, instance):
         """Set the vnc configuration of the VM."""
+        port = vm_util.get_vnc_port(self._session)
         vm_ref = vm_util.get_vm_ref(self._session, instance)
 
         vnc_config_spec = vm_util.get_vnc_config_spec(
diff -Nru nova-2014.1.3/nova/virt/vmwareapi/vm_util.py nova-2014.1.5/nova/virt/vmwareapi/vm_util.py
--- nova-2014.1.3/nova/virt/vmwareapi/vm_util.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/vmwareapi/vm_util.py	2015-06-18 22:25:40.000000000 +0000
@@ -28,7 +28,6 @@
 from nova.openstack.common.gettextutils import _
 from nova.openstack.common import log as logging
 from nova.openstack.common import units
-from nova import utils
 from nova.virt.vmwareapi import error_util
 from nova.virt.vmwareapi import vim_util
@@ -674,7 +673,6 @@
     return virtual_machine_config_spec
 
 
-@utils.synchronized('vmware.get_vnc_port')
 def get_vnc_port(session):
     """Return VNC port for a VM or None if there is no available port."""
     min_port = CONF.vmware.vnc_port
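
Moving the `synchronized` lock from `vm_util.get_vnc_port` up to `_get_and_set_vnc_config` in the hunks above widens the critical section: the port must stay locked from the moment it is picked until it is recorded on the VM, otherwise two concurrent spawns can both observe the same port as free. A toy illustration of that lock-scope point, using `threading.Lock` as a stand-in for Nova's lock utility; every name below is hypothetical:

    import threading

    _vnc_lock = threading.Lock()
    _allocated = set()


    def _pick_free_port():
        # Stand-in for vm_util.get_vnc_port(): scan the configured range.
        for port in range(5900, 6000):
            if port not in _allocated:
                return port


    def get_and_set_vnc_config(instance):
        # The lock spans both picking the port and recording it; locking
        # only the picker (as the old code did) lets two spawns read the
        # same free port before either one records it.
        with _vnc_lock:
            port = _pick_free_port()
            _allocated.add(port)  # stand-in for reconfiguring the VM
            return port


    print(get_and_set_vnc_config('instance-1'))  # 5900
    print(get_and_set_vnc_config('instance-2'))  # 5901
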
diff -Nru nova-2014.1.3/nova/virt/xenapi/driver.py nova-2014.1.5/nova/virt/xenapi/driver.py
--- nova-2014.1.3/nova/virt/xenapi/driver.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/virt/xenapi/driver.py	2015-06-18 22:25:40.000000000 +0000
@@ -514,7 +514,7 @@
         pass
 
     def check_can_live_migrate_source(self, ctxt, instance_ref,
-                                      dest_check_data):
+                                      dest_check_data, block_device_info=None):
         """Check if it is possible to execute live migration.
 
         This checks if the live migration can succeed, based on the
@@ -524,6 +524,7 @@
         :param instance_ref: nova.db.sqlalchemy.models.Instance
         :param dest_check_data: result of check_can_live_migrate_destination
                                 includes the block_migration flag
+        :param block_device_info: result of _get_instance_block_device_info
         """
         return self._vmops.check_can_live_migrate_source(ctxt, instance_ref,
                                                          dest_check_data)
diff -Nru nova-2014.1.3/nova/volume/cinder.py nova-2014.1.5/nova/volume/cinder.py
--- nova-2014.1.3/nova/volume/cinder.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/volume/cinder.py	2015-06-18 22:25:40.000000000 +0000
@@ -26,6 +26,7 @@
 from cinderclient.v1 import client as cinder_client
 from oslo.config import cfg
 
+from nova import availability_zones as az
 from nova import exception
 from nova.openstack.common.gettextutils import _
 from nova.openstack.common import log as logging
@@ -230,7 +231,14 @@
                 msg = _("already attached")
                 raise exception.InvalidVolume(reason=msg)
         if instance and not CONF.cinder_cross_az_attach:
-            if instance['availability_zone'] != volume['availability_zone']:
+            # NOTE(sorrison): If the instance is on a host we match against
+            #                 its AZ, else we check the intended AZ
+            if instance.get('host'):
+                instance_az = az.get_instance_availability_zone(
+                    context, instance)
+            else:
+                instance_az = instance['availability_zone']
+            if instance_az != volume['availability_zone']:
                 msg = _("Instance and volume not in same availability_zone")
                 raise exception.InvalidVolume(reason=msg)
@@ -302,6 +310,8 @@
         try:
             item = cinderclient(context).volumes.create(size, **kwargs)
             return _untranslate_volume_summary_view(context, item)
+        except cinder_exception.OverLimit:
+            raise exception.OverQuota(overs='volumes')
         except cinder_exception.BadRequest as e:
             raise exception.InvalidInput(reason=unicode(e))
diff -Nru nova-2014.1.3/nova/volume/encryptors/luks.py nova-2014.1.5/nova/volume/encryptors/luks.py
--- nova-2014.1.3/nova/volume/encryptors/luks.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/volume/encryptors/luks.py	2015-06-18 22:25:40.000000000 +0000
@@ -103,4 +103,5 @@
         """Closes the device (effectively removes the dm-crypt mapping)."""
         LOG.debug(_("closing encrypted volume %s"), self.dev_path)
         utils.execute('cryptsetup', 'luksClose', self.dev_name,
-                      run_as_root=True, check_exit_code=True)
+                      run_as_root=True, check_exit_code=True,
+                      attempts=3)
diff -Nru nova-2014.1.3/nova/wsgi.py nova-2014.1.5/nova/wsgi.py
--- nova-2014.1.3/nova/wsgi.py	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/nova/wsgi.py	2015-06-18 22:25:40.000000000 +0000
@@ -69,6 +69,10 @@
                     "max_header_line may need to be increased when using "
                     "large tokens (typically those generated by the "
                     "Keystone v3 API with big service catalogs)."),
+    cfg.BoolOpt('wsgi_keep_alive',
+                default=True,
+                help="If False, closes the client socket connection "
+                     "explicitly."),
     ]
 CONF = cfg.CONF
 CONF.register_opts(wsgi_opts)
@@ -134,7 +138,8 @@
                 raise
 
         (self.host, self.port) = self._socket.getsockname()[0:2]
-        LOG.info(_("%(name)s listening on %(host)s:%(port)s") % self.__dict__)
+        LOG.info(_("%(name)s listening on %(host)s:%(port)s"),
+                 {'name': self.name, 'host': self.host, 'port': self.port})
 
     def start(self):
         """Start serving a WSGI application.
@@ -193,7 +198,9 @@
             except Exception:
                 with excutils.save_and_reraise_exception():
                     LOG.error(_("Failed to start %(name)s on %(host)s"
-                                ":%(port)s with SSL support") % self.__dict__)
+                                ":%(port)s with SSL support"),
+                              {'name': self.name, 'host': self.host,
+                               'port': self.port})
 
         wsgi_kwargs = {
             'func': eventlet.wsgi.server,
@@ -203,7 +210,8 @@
             'custom_pool': self._pool,
             'log': self._wsgi_logger,
             'log_format': CONF.wsgi_log_format,
-            'debug': False
+            'debug': False,
+            'keepalive': CONF.wsgi_keep_alive
         }
 
         if self._max_url_len:
@@ -237,6 +245,7 @@
         """
         try:
             if self._server is not None:
+                self._pool.waitall()
                 self._server.wait()
         except greenlet.GreenletExit:
             LOG.info(_("WSGI server has stopped."))
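
The new `wsgi_keep_alive` option above is handed straight to eventlet as its `keepalive` keyword. A minimal sketch of an eventlet WSGI server with keep-alive disabled, assuming only the eventlet library; the handler and port are placeholders:

    import eventlet
    import eventlet.wsgi


    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'Hello\n']


    sock = eventlet.listen(('127.0.0.1', 8080))
    # keepalive=False closes each client socket once its request has been
    # served, which is what setting wsgi_keep_alive=False enables above.
    eventlet.wsgi.server(sock, app, keepalive=False)
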
diff -Nru nova-2014.1.3/nova.egg-info/pbr.json nova-2014.1.5/nova.egg-info/pbr.json
--- nova-2014.1.3/nova.egg-info/pbr.json	1970-01-01 00:00:00.000000000 +0000
+++ nova-2014.1.5/nova.egg-info/pbr.json	2015-06-18 22:33:13.000000000 +0000
@@ -0,0 +1 @@
+{"is_release": true, "git_version": "58ae4a6"}
\ No newline at end of file
diff -Nru nova-2014.1.3/nova.egg-info/PKG-INFO nova-2014.1.5/nova.egg-info/PKG-INFO
--- nova-2014.1.3/nova.egg-info/PKG-INFO	2014-10-02 23:38:45.000000000 +0000
+++ nova-2014.1.5/nova.egg-info/PKG-INFO	2015-06-18 22:33:13.000000000 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: nova
-Version: 2014.1.3
+Version: 2014.1.5
 Summary: Cloud computing fabric controller
 Home-page: http://www.openstack.org/
 Author: OpenStack
diff -Nru nova-2014.1.3/nova.egg-info/requires.txt nova-2014.1.5/nova.egg-info/requires.txt
--- nova-2014.1.3/nova.egg-info/requires.txt	2014-10-02 23:38:45.000000000 +0000
+++ nova-2014.1.5/nova.egg-info/requires.txt	2015-06-18 22:33:13.000000000 +0000
@@ -1,35 +1,35 @@
 pbr>=0.6,<1.0
 SQLAlchemy>=0.7.8,!=0.9.5,<=0.9.99
 amqplib>=0.6.1
-anyjson>=0.3.3
+anyjson>=0.3.3,<=0.3.3
 argparse
-boto>=2.12.0,!=2.13.0
-eventlet>=0.13.0
-Jinja2
-kombu>=2.4.8
-lxml>=2.3
+boto>=2.12.0,!=2.13.0,<2.35.0
+eventlet>=0.13.0,<0.16.0
+Jinja2<=2.7.3
+kombu>=2.5.0,<=3.0.7
+lxml>=2.3,<=3.4.2
 Routes>=1.12.3,!=2.0
-WebOb>=1.2.3
-greenlet>=0.3.2
-PasteDeploy>=1.5.0
+WebOb>=1.2.3,<=1.4
+greenlet>=0.3.2,<=0.4.5
+PasteDeploy>=1.5.0,<=1.5.2
 Paste
-sqlalchemy-migrate>=0.8.2,!=0.8.4,!=0.9.2
-netaddr>=0.7.6
-suds>=0.4
-paramiko>=1.9.0
-pyasn1
-Babel>=1.3
-iso8601>=0.1.9
+sqlalchemy-migrate>=0.8.2,!=0.8.4,<=0.9.1
+netaddr>=0.7.6,<=0.7.14
+suds==0.4
+paramiko>=1.9.0,<=1.15.2
+pyasn1<=0.1.7
+Babel>=1.3,<=1.3
+iso8601>=0.1.9,<=0.1.10
 jsonschema>=2.0.0,<3.0.0
-python-cinderclient>=1.0.6
-python-neutronclient>=2.3.4,<3
-python-glanceclient>=0.9.0
-python-keystoneclient>=0.7.0
-six>=1.6.0
+python-cinderclient>=1.0.6,<=1.1.1
+python-neutronclient>=2.3.4,<2.3.11
+python-glanceclient>=0.9.0,!=0.14.0,<=0.14.2
+python-keystoneclient>=0.7.0,<0.12.0
+six>=1.6.0,<=1.9.0
 stevedore>=0.14
 websockify>=0.5.1,<0.6
 wsgiref>=0.1.2
-oslo.config>=1.2.0
-oslo.rootwrap
-pycadf>=0.4.1
-oslo.messaging>=1.3.0
+oslo.config>=1.2.0,<1.5
+oslo.rootwrap<1.4
+pycadf>=0.4.1,<0.6.0
+oslo.messaging>=1.3.0,<1.5
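
Every loosened specifier above gained an upper bound so that stable/icehouse keeps installing dependency versions it was actually tested against. How such a capped specifier behaves can be checked with setuptools' `pkg_resources`; the versions below are example values only:

    import pkg_resources

    req = pkg_resources.Requirement.parse("eventlet>=0.13.0,<0.16.0")
    print("0.15.2" in req)  # True: within the tested range
    print("0.16.0" in req)  # False: excluded by the new upper bound
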
diff -Nru nova-2014.1.3/nova.egg-info/SOURCES.txt nova-2014.1.5/nova.egg-info/SOURCES.txt
--- nova-2014.1.3/nova.egg-info/SOURCES.txt	2014-10-02 23:38:45.000000000 +0000
+++ nova-2014.1.5/nova.egg-info/SOURCES.txt	2015-06-18 22:33:13.000000000 +0000
@@ -1217,6 +1217,7 @@
 nova.egg-info/dependency_links.txt
 nova.egg-info/entry_points.txt
 nova.egg-info/not-zip-safe
+nova.egg-info/pbr.json
 nova.egg-info/requires.txt
 nova.egg-info/top_level.txt
 nova/CA/.gitignore
@@ -2102,6 +2103,7 @@
 nova/tests/console/__init__.py
 nova/tests/console/test_console.py
 nova/tests/console/test_rpcapi.py
+nova/tests/console/test_websocketproxy.py
 nova/tests/consoleauth/__init__.py
 nova/tests/consoleauth/test_consoleauth.py
 nova/tests/consoleauth/test_rpcapi.py
@@ -3351,6 +3353,8 @@
 nova/tests/scheduler/test_scheduler_options.py
 nova/tests/scheduler/test_scheduler_utils.py
 nova/tests/scheduler/test_weights.py
+nova/tests/scheduler/filters/__init__.py
+nova/tests/scheduler/filters/test_trusted_filters.py
 nova/tests/servicegroup/__init__.py
 nova/tests/servicegroup/test_db_servicegroup.py
 nova/tests/servicegroup/test_mc_servicegroup.py
@@ -3396,6 +3400,7 @@
 nova/tests/virt/hyperv/__init__.py
 nova/tests/virt/hyperv/db_fakes.py
 nova/tests/virt/hyperv/fake.py
+nova/tests/virt/hyperv/test_hostutils.py
 nova/tests/virt/hyperv/test_hypervapi.py
 nova/tests/virt/hyperv/test_migrationops.py
 nova/tests/virt/hyperv/test_networkutilsv2.py
diff -Nru nova-2014.1.3/PKG-INFO nova-2014.1.5/PKG-INFO
--- nova-2014.1.3/PKG-INFO	2014-10-02 23:38:46.000000000 +0000
+++ nova-2014.1.5/PKG-INFO	2015-06-18 22:33:14.000000000 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: nova
-Version: 2014.1.3
+Version: 2014.1.5
 Summary: Cloud computing fabric controller
 Home-page: http://www.openstack.org/
 Author: OpenStack
diff -Nru nova-2014.1.3/requirements.txt nova-2014.1.5/requirements.txt
--- nova-2014.1.3/requirements.txt	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/requirements.txt	2015-06-18 22:25:40.000000000 +0000
@@ -1,35 +1,35 @@
 pbr>=0.6,<1.0
 SQLAlchemy>=0.7.8,!=0.9.5,<=0.9.99
 amqplib>=0.6.1
-anyjson>=0.3.3
+anyjson>=0.3.3,<=0.3.3
 argparse
-boto>=2.12.0,!=2.13.0
-eventlet>=0.13.0
-Jinja2
-kombu>=2.4.8
-lxml>=2.3
+boto>=2.12.0,!=2.13.0,<2.35.0
+eventlet>=0.13.0,<0.16.0
+Jinja2<=2.7.3
+kombu>=2.5.0,<=3.0.7
+lxml>=2.3,<=3.4.2
 Routes>=1.12.3,!=2.0
-WebOb>=1.2.3
-greenlet>=0.3.2
-PasteDeploy>=1.5.0
+WebOb>=1.2.3,<=1.4
+greenlet>=0.3.2,<=0.4.5
+PasteDeploy>=1.5.0,<=1.5.2
 Paste
-sqlalchemy-migrate>=0.8.2,!=0.8.4,!=0.9.2
-netaddr>=0.7.6
-suds>=0.4
-paramiko>=1.9.0
-pyasn1
-Babel>=1.3
-iso8601>=0.1.9
+sqlalchemy-migrate>=0.8.2,!=0.8.4,<=0.9.1
+netaddr>=0.7.6,<=0.7.14
+suds==0.4
+paramiko>=1.9.0,<=1.15.2
+pyasn1<=0.1.7
+Babel>=1.3,<=1.3
+iso8601>=0.1.9,<=0.1.10
 jsonschema>=2.0.0,<3.0.0
-python-cinderclient>=1.0.6
-python-neutronclient>=2.3.4,<3
-python-glanceclient>=0.9.0
-python-keystoneclient>=0.7.0
-six>=1.6.0
+python-cinderclient>=1.0.6,<=1.1.1
+python-neutronclient>=2.3.4,<2.3.11
+python-glanceclient>=0.9.0,!=0.14.0,<=0.14.2
+python-keystoneclient>=0.7.0,<0.12.0
+six>=1.6.0,<=1.9.0
 stevedore>=0.14
 websockify>=0.5.1,<0.6
 wsgiref>=0.1.2
-oslo.config>=1.2.0
-oslo.rootwrap
-pycadf>=0.4.1
-oslo.messaging>=1.3.0
+oslo.config>=1.2.0,<1.5
+oslo.rootwrap<1.4
+pycadf>=0.4.1,<0.6.0
+oslo.messaging>=1.3.0,<1.5
diff -Nru nova-2014.1.3/setup.cfg nova-2014.1.5/setup.cfg
--- nova-2014.1.3/setup.cfg	2014-10-02 23:38:46.000000000 +0000
+++ nova-2014.1.5/setup.cfg	2015-06-18 22:33:14.000000000 +0000
@@ -1,6 +1,6 @@
 [metadata]
 name = nova
-version = 2014.1.3
+version = 2014.1.5
 summary = Cloud computing fabric controller
 description-file = README.rst
diff -Nru nova-2014.1.3/test-requirements.txt nova-2014.1.5/test-requirements.txt
--- nova-2014.1.3/test-requirements.txt	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/test-requirements.txt	2015-06-18 22:25:40.000000000 +0000
@@ -1,15 +1,15 @@
 hacking>=0.8.0,<0.9
-coverage>=3.6
-discover
+coverage>=3.6,<=3.7.1
+discover<=0.4.0
 feedparser
-fixtures>=0.3.14
-mock>=1.0
-mox>=0.5.3
-MySQL-python
+fixtures>=0.3.14,<=1.0.0
+mock>=1.0,<=1.0.1
+mox>=0.5.3,<=0.5.3
+MySQL-python<=1.2.5
 psycopg2
 pylint==0.25.2
-python-subunit>=0.0.18
+python-subunit>=0.0.18,<=1.1.0
 sphinx>=1.1.2,<1.1.999
-oslosphinx
-testrepository>=0.0.18
-testtools>=0.9.34
+oslosphinx<=2.5.0
+testrepository>=0.0.18,<=0.0.20
+testtools>=0.9.34,!=1.2.0,!=1.4.0,<=1.7.1
diff -Nru nova-2014.1.3/tox.ini nova-2014.1.5/tox.ini
--- nova-2014.1.3/tox.ini	2014-10-02 23:32:00.000000000 +0000
+++ nova-2014.1.5/tox.ini	2015-06-18 22:25:40.000000000 +0000
@@ -14,6 +14,7 @@
 deps = -r{toxinidir}/requirements.txt
        -r{toxinidir}/test-requirements.txt
 commands =
+  find . -type f -name "*.pyc" -delete
   python -m nova.openstack.common.lockutils python setup.py test --slowest --testr-args='{posargs}'
 
 [tox:jenkins]
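
The added tox `commands` step deletes stale `.pyc` files before every test run so that bytecode left behind by renamed or deleted modules cannot shadow the current sources. For illustration only, a Python equivalent of that find/delete step:

    import os

    # Walk the tree and remove compiled bytecode, mirroring
    # `find . -type f -name "*.pyc" -delete`.
    for dirpath, dirnames, filenames in os.walk('.'):
        for name in filenames:
            if name.endswith('.pyc'):
                os.remove(os.path.join(dirpath, name))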