Merge lp:~yamahata/nova/boot-from-volume into lp:~hudson-openstack/nova/trunk
Status: | Work in progress |
---|---|
Proposed branch: | lp:~yamahata/nova/boot-from-volume |
Merge into: | lp:~hudson-openstack/nova/trunk |
Diff against target: |
605 lines (+246/-75) 8 files modified
nova/api/ec2/apirequest.py (+11/-5)
nova/api/ec2/cloud.py (+5/-1)
nova/api/openstack/servers.py (+50/-14)
nova/compute/api.py (+46/-20)
nova/compute/manager.py (+33/-3)
nova/virt/driver.py (+1/-1)
nova/virt/libvirt.xml.template (+33/-6)
nova/virt/libvirt_conn.py (+67/-25)
To merge this branch: | bzr merge lp:~yamahata/nova/boot-from-volume |
Related bugs: | |
Related blueprints: | Boot From Volume (High); Snapshot, Clone and Boot from volumes (Undefined) |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Dan Prince (community) | Needs Fixing | ||
Vish Ishaya (community) | Needs Fixing | ||
Review via email: mp+58021@code.launchpad.net |
Commit message
Description of the change
This branch implements the first step of boot-from-volume support.
With the --block-device-mapping (-b) option, for example:
euca-run-instances ami-XXXX -k mykey -t m1.tiny -b /dev/
In fact, since creating an EC2 snapshot/volume from a volume/snapshot isn't
supported yet, volume_id is used in place of snapshot_id.
This is a first step, intended to start discussion.
Code details:
- Enhanced the argument parser to interpret multi-dot-separated arguments
  like BlockDeviceMapping.1.DeviceName.
- Pass the block device mapping down from nova-api to compute-api;
  compute-api changes the volume status to in-use to tell compute-manager
  which volumes to attach.
- compute-manager passes that information to the virt driver, and the
  libvirt_conn driver interprets it.
TODO:
- error recovery
- ephemeral device / no device
- AMI support, which needs a db schema change
- support for EC2 snapshot/clone
- a native API in addition to the EC2 API?
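Before a mapped volume is marked in-use, the compute API validates the guest device name (the diff below adds a `_check_volume_device` method for this). A minimal standalone sketch of just the name check — the free function here is a hypothetical extraction, omitting the volume-state lookup the real method also performs:

```python
import re


def check_volume_device(device):
    # Same pattern the patch uses to validate guest device names
    # such as /dev/vdb or /dev/sdc before a volume may be attached.
    if not re.match(r"^/dev/[a-z]d[a-z]+$", device):
        raise ValueError("Invalid device specified: %s. "
                         "Example device: /dev/vdb" % device)
    return device


check_volume_device("/dev/vdb")  # accepted
```

Anything that doesn't look like `/dev/<x>d<y>` (e.g. a bare `vdb`) is rejected before any DB state is touched.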
- 994. By Yoshiaki Tamura
-
Fix parameter mismatch calling to_xml() from spawn() in libvirt_conn.py
- 995. By termie
-
The change to utils.execute's call style missed this call somehow, this should get libvirt snapshots working again.
Jay Pipes (jaypipes) wrote:
Hi! Looks like a great contribution indeed, and I agree with Masanori's points about discussing at the summit. I wanted to add one more note, though, that we would want to see some unit tests that stress the new code paths. Let us know if you need assistance creating those tests. :)
Cheers!
jay
- 996. By Vish Ishaya
-
Fixes nova-manage image convert when the source directory is the same one that local image service uses.
- 997. By Jason Kölker
-
Change '== None' to 'is None'
- 998. By Mark Washenberger
-
Support admin password when specified in server create requests.
- 999. By Eldar Nugaev
-
Fix logging in server creation in OpenStack API 1.0
- 1000. By Jason Kölker
-
use 'is not None' instead of '!= None'
- 1001. By Jason Kölker
-
Remove zope.interface from the requires file since it is not used anywhere.
- 1002. By Jason Kölker
-
pep8 fixes
- 1003. By Anne Gentle
-
Adding projectname username to the nova-manage project commands to fix a doc bug, plus some edits and elimination of a few doc todos.
Isaku Yamahata (yamahata) wrote:
On Mon, Apr 18, 2011 at 03:48:14PM -0000, Masanori Itoh wrote:
> Hi Isaku,
>
> Great contribution!
> Actually we were also discussing developing this feature internally.
>
> BTW, this is a POC code of a not-approved-yet feature, isn't it?
> New feature will not be merged into trunk till the blueprint is approved and discussed at Design Summit in the OpenStack world.
Yes. I just didn't know what I should do with the new code.
I'm learning the new rules as a newcomer... (sometimes by making mistakes)
> I guess that Adam Johnson of Midokura will have a session on the blueprint of this feature
> at the upcoming Diablo Design Summit. So, what about holding the branch somewhere outside
> the trunk for a while?
Great idea. I'm looking forward to it.
>
> Thanks,
> Masanori
>
>
> --
> https:/
> You are the owner of lp:~yamahata/nova/boot-from-volume.
>
--
yamahata
- 1004. By Dan Prince
-
Implement quotas for the new v1.1 server metadata controller.
Created a new _check_metadata_properties_quota method in the compute API that is used when creating instances and when updating server metadata. In doing so I modified the compute API so that metadata is a dict (not an array) to ensure we are using unique key values for metadata (which is implied by the API specs) and makes more sense with JSON request formats anyway. Additionally this branch enables and fixes the integration test to create servers with metadata.
- 1005. By Josh Kearney
-
Round 1 of pylint cleanup.
- 1006. By termie
-
attempts to make the docstring rules clearer
- 1007. By termie
-
Docstring cleanup and formatting. Minor style fixes as well.
- 1008. By Naveed Massjouni
-
Added an option to run_tests.sh so you can run just pep8. So now you can:
./run_tests.sh --just-pep8
or
./run_tests.sh -p
- 1009. By Josh Kearney
-
Another small round of pylint clean-up.
- 1013. By Yoshiaki Tamura
-
If volumes exist in an instance, get paths to the volumes and convert
them to xml format to let libvirt see them upon booting.
- 1014. By Yoshiaki Tamura
-
Extend create() to accept a volume, and update the DB to reserve the volume
before passing it to the manager.
- 1015. By Yoshiaki Tamura
-
Add a parameter to specify a volume upon creating instances.
Vish Ishaya (vishvananda) wrote:
This is looking pretty good.
I like the direction that it is going and I think merging step-by-step is the right way. We definitely need some tests to cover this feature. Some sort of unit tests are a must. It would also be great to get a smoketest for this feature so that we can be sure that it is working against a real deployment.
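As a sketch of the kind of unit coverage being asked for, the compute-manager's classification of volume device paths into libvirt disk info could be tested in isolation. The free function below is a hypothetical extraction of the loop in the manager hunk of the diff (the `rbd:` example value is illustrative only):

```python
def classify_volume(dev_path, mountpoint):
    # Mirrors the compute-manager logic: a local block device becomes a
    # 'block' disk; a 'protocol:name' path becomes a 'network' disk.
    if dev_path.startswith('/dev/'):
        return {'type': 'block',
                'device_path': dev_path,
                'mount_device': mountpoint}
    elif ':' in dev_path:
        protocol, name = dev_path.split(':')
        return {'type': 'network',
                'protocol': protocol,
                'name': name,
                'mount_device': mountpoint}
    raise ValueError('unrecognized device path: %s' % dev_path)


# A couple of the assertions such a test would make:
assert classify_volume('/dev/sdc', '/dev/vdb') == {
    'type': 'block', 'device_path': '/dev/sdc', 'mount_device': '/dev/vdb'}
assert classify_volume('rbd:mypool/myvol', '/dev/vdb')['protocol'] == 'rbd'
```

Extracting the branch into a pure function like this would also make the error path (an unrecognized device path) straightforward to cover.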
Dan Prince (dan-prince) wrote:
This branch no longer merges cleanly w/ trunk.
Needs fixing.
Unmerged revisions
- 1015. By Yoshiaki Tamura
-
Add a parameter to specify a volume upon creating instances.
- 1014. By Yoshiaki Tamura
-
Extend create() to accept a volume, and update the DB to reserve the volume
before passing it to the manager.
- 1013. By Yoshiaki Tamura
-
If volumes exist in an instance, get paths to the volumes and convert
them to xml format to let libvirt see them upon booting.
- 1012. By Isaku Yamahata
-
ebs boot: compute node (kvm and libvirt) support for ebs boot
This patch teaches the kvm/libvirt compute node to boot from EBS.
- 1011. By Isaku Yamahata
-
ebs boot: add parser for ebs boot argument
This patch adds the parser for the ebs boot argument and stores that
info into the Volume table for the compute node.
- 1010. By Isaku Yamahata
-
api/ec2/api: teach multi dot-separated arguments
nova.api.ec2.apirequest.APIRequest knows only single dot-separated
arguments. EBS boot uses multi dot-separated arguments like
BlockDeviceMapping.1.DeviceName=snap-id
This patch teaches the parser those arguments in preparation for ebs boot
support.
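The parser change (the first hunk of the diff below) folds such multi-dot query arguments into nested dicts. A minimal standalone sketch of that folding — the function name and the sample values are hypothetical:

```python
def parse_ec2_args(params):
    # Fold 'A.B.C=v' style EC2 query parameters into nested dicts,
    # as the patched APIRequest does.
    args = {}
    for full_key, value in params.items():
        parts = full_key.split('.')
        if len(parts) > 1:
            d = args.setdefault(parts[0], {})
            for k in parts[1:-1]:
                d = d.setdefault(k, {})
            d[parts[-1]] = value
        else:
            args[full_key] = value
    return args


parsed = parse_ec2_args({
    'InstanceType': 'm1.tiny',
    'BlockDeviceMapping.1.DeviceName': '/dev/vdb',
    'BlockDeviceMapping.1.Ebs.SnapshotId': 'snap-12345678',
})
# parsed['BlockDeviceMapping'] is now
#   {'1': {'DeviceName': '/dev/vdb', 'Ebs': {'SnapshotId': 'snap-12345678'}}}
```

Note that the real parser additionally turns numeric dict keys (the `'1'` above) into lists in a later pass, per the NOTE(vish) comment visible in the diff.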
Preview Diff
1 | === modified file 'nova/api/ec2/apirequest.py' | |||
2 | --- nova/api/ec2/apirequest.py 2011-04-18 20:53:09 +0000 | |||
3 | +++ nova/api/ec2/apirequest.py 2011-04-21 04:09:46 +0000 | |||
4 | @@ -133,11 +133,17 @@ | |||
5 | 133 | # NOTE(vish): Automatically convert strings back | 133 | # NOTE(vish): Automatically convert strings back |
6 | 134 | # into their respective values | 134 | # into their respective values |
7 | 135 | value = _try_convert(value) | 135 | value = _try_convert(value) |
13 | 136 | if len(parts) > 1: | 136 | |
14 | 137 | d = args.get(key, {}) | 137 | if len(parts) > 1: |
15 | 138 | d[parts[1]] = value | 138 | d = args.get(key, {}) |
16 | 139 | value = d | 139 | args[key] = d |
17 | 140 | args[key] = value | 140 | for k in parts[1:-1]: |
18 | 141 | v = d.get(k, {}) | ||
19 | 142 | d[k] = v | ||
20 | 143 | d = v | ||
21 | 144 | d[parts[-1]] = value | ||
22 | 145 | else: | ||
23 | 146 | args[key] = value | ||
24 | 141 | 147 | ||
25 | 142 | for key in args.keys(): | 148 | for key in args.keys(): |
26 | 143 | # NOTE(vish): Turn numeric dict keys into lists | 149 | # NOTE(vish): Turn numeric dict keys into lists |
27 | 144 | 150 | ||
28 | === modified file 'nova/api/ec2/cloud.py' | |||
29 | --- nova/api/ec2/cloud.py 2011-04-20 23:21:37 +0000 | |||
30 | +++ nova/api/ec2/cloud.py 2011-04-21 04:09:46 +0000 | |||
31 | @@ -822,6 +822,9 @@ | |||
32 | 822 | if kwargs.get('ramdisk_id'): | 822 | if kwargs.get('ramdisk_id'): |
33 | 823 | ramdisk = self._get_image(context, kwargs['ramdisk_id']) | 823 | ramdisk = self._get_image(context, kwargs['ramdisk_id']) |
34 | 824 | kwargs['ramdisk_id'] = ramdisk['id'] | 824 | kwargs['ramdisk_id'] = ramdisk['id'] |
35 | 825 | for bdm in kwargs.get('block_device_mapping', []): | ||
36 | 826 | volume_id = ec2utils.ec2_id_to_id(bdm['Ebs']['SnapshotId']) | ||
37 | 827 | bdm['Ebs']['SnapshotId'] = volume_id | ||
38 | 825 | instances = self.compute_api.create(context, | 828 | instances = self.compute_api.create(context, |
39 | 826 | instance_type=instance_types.get_instance_type_by_name( | 829 | instance_type=instance_types.get_instance_type_by_name( |
40 | 827 | kwargs.get('instance_type', None)), | 830 | kwargs.get('instance_type', None)), |
41 | @@ -836,7 +839,8 @@ | |||
42 | 836 | user_data=kwargs.get('user_data'), | 839 | user_data=kwargs.get('user_data'), |
43 | 837 | security_group=kwargs.get('security_group'), | 840 | security_group=kwargs.get('security_group'), |
44 | 838 | availability_zone=kwargs.get('placement', {}).get( | 841 | availability_zone=kwargs.get('placement', {}).get( |
46 | 839 | 'AvailabilityZone')) | 842 | 'AvailabilityZone'), |
47 | 843 | block_device_mapping=kwargs.get('block_device_mapping', {})) | ||
48 | 840 | return self._format_run_instances(context, | 844 | return self._format_run_instances(context, |
49 | 841 | instances[0]['reservation_id']) | 845 | instances[0]['reservation_id']) |
50 | 842 | 846 | ||
51 | 843 | 847 | ||
52 | === modified file 'nova/api/openstack/servers.py' | |||
53 | --- nova/api/openstack/servers.py 2011-04-19 17:46:43 +0000 | |||
54 | +++ nova/api/openstack/servers.py 2011-04-21 04:09:46 +0000 | |||
55 | @@ -52,7 +52,7 @@ | |||
56 | 52 | "attributes": { | 52 | "attributes": { |
57 | 53 | "server": ["id", "imageId", "name", "flavorId", "hostId", | 53 | "server": ["id", "imageId", "name", "flavorId", "hostId", |
58 | 54 | "status", "progress", "adminPass", "flavorRef", | 54 | "status", "progress", "adminPass", "flavorRef", |
60 | 55 | "imageRef"], | 55 | "imageRef", "volume"], |
61 | 56 | "link": ["rel", "type", "href"], | 56 | "link": ["rel", "type", "href"], |
62 | 57 | }, | 57 | }, |
63 | 58 | "dict_collections": { | 58 | "dict_collections": { |
64 | @@ -129,15 +129,27 @@ | |||
65 | 129 | key_data = key_pair['public_key'] | 129 | key_data = key_pair['public_key'] |
66 | 130 | 130 | ||
67 | 131 | requested_image_id = self._image_id_from_req_data(env) | 131 | requested_image_id = self._image_id_from_req_data(env) |
77 | 132 | try: | 132 | |
78 | 133 | image_id = common.get_image_id_from_image_hash(self._image_service, | 133 | volume = self._volume_from_req_data(env) |
79 | 134 | context, requested_image_id) | 134 | |
80 | 135 | except: | 135 | # image_id or volume must be set to create |
81 | 136 | msg = _("Can not find requested image") | 136 | if not requested_image_id and not volume: |
82 | 137 | return faults.Fault(exc.HTTPBadRequest(msg)) | 137 | msg = _("imageRef or volume must be specified") |
83 | 138 | 138 | return exc.HTTPBadRequest(msg) | |
84 | 139 | kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image( | 139 | |
85 | 140 | req, image_id) | 140 | image_id = None |
86 | 141 | kernel_id = None | ||
87 | 142 | ramdisk_id = None | ||
88 | 143 | if requested_image_id: | ||
89 | 144 | try: | ||
90 | 145 | image_id = common.get_image_id_from_image_hash( | ||
91 | 146 | self._image_service, context, requested_image_id) | ||
92 | 147 | except: | ||
93 | 148 | msg = _("Can not find requested image") | ||
94 | 149 | return faults.Fault(exc.HTTPBadRequest(msg)) | ||
95 | 150 | |||
96 | 151 | kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image( | ||
97 | 152 | req, image_id) | ||
98 | 141 | 153 | ||
99 | 142 | personality = env['server'].get('personality') | 154 | personality = env['server'].get('personality') |
100 | 143 | injected_files = [] | 155 | injected_files = [] |
101 | @@ -154,6 +166,12 @@ | |||
102 | 154 | self._validate_server_name(name) | 166 | self._validate_server_name(name) |
103 | 155 | name = name.strip() | 167 | name = name.strip() |
104 | 156 | 168 | ||
105 | 169 | # FIXME(yamahata): convert volume into EC2 block device mapping | ||
106 | 170 | block_device_mapping=[] | ||
107 | 171 | if volume: | ||
108 | 172 | block_device_mapping.append({'DeviceName': volume['mountpoint'], | ||
109 | 173 | 'Ebs': {'SnapshotId': volume['id']}}) | ||
110 | 174 | |||
111 | 157 | try: | 175 | try: |
112 | 158 | inst_type = \ | 176 | inst_type = \ |
113 | 159 | instance_types.get_instance_type_by_flavor_id(flavor_id) | 177 | instance_types.get_instance_type_by_flavor_id(flavor_id) |
114 | @@ -163,12 +181,14 @@ | |||
115 | 163 | image_id, | 181 | image_id, |
116 | 164 | kernel_id=kernel_id, | 182 | kernel_id=kernel_id, |
117 | 165 | ramdisk_id=ramdisk_id, | 183 | ramdisk_id=ramdisk_id, |
118 | 184 | volume=volume, | ||
119 | 166 | display_name=name, | 185 | display_name=name, |
120 | 167 | display_description=name, | 186 | display_description=name, |
121 | 168 | key_name=key_name, | 187 | key_name=key_name, |
122 | 169 | key_data=key_data, | 188 | key_data=key_data, |
123 | 170 | metadata=env['server'].get('metadata', {}), | 189 | metadata=env['server'].get('metadata', {}), |
125 | 171 | injected_files=injected_files) | 190 | injected_files=injected_files, |
126 | 191 | block_device_mapping=block_device_mapping) | ||
127 | 172 | except quota.QuotaError as error: | 192 | except quota.QuotaError as error: |
128 | 173 | self._handle_quota_error(error) | 193 | self._handle_quota_error(error) |
129 | 174 | 194 | ||
130 | @@ -585,7 +605,10 @@ | |||
131 | 585 | 605 | ||
132 | 586 | class ControllerV10(Controller): | 606 | class ControllerV10(Controller): |
133 | 587 | def _image_id_from_req_data(self, data): | 607 | def _image_id_from_req_data(self, data): |
135 | 588 | return data['server']['imageId'] | 608 | try: |
136 | 609 | return data['server']['imageId'] | ||
137 | 610 | except KeyError: | ||
138 | 611 | return | ||
139 | 589 | 612 | ||
140 | 590 | def _flavor_id_from_req_data(self, data): | 613 | def _flavor_id_from_req_data(self, data): |
141 | 591 | return data['server']['flavorId'] | 614 | return data['server']['flavorId'] |
142 | @@ -612,13 +635,26 @@ | |||
143 | 612 | 635 | ||
144 | 613 | class ControllerV11(Controller): | 636 | class ControllerV11(Controller): |
145 | 614 | def _image_id_from_req_data(self, data): | 637 | def _image_id_from_req_data(self, data): |
148 | 615 | href = data['server']['imageRef'] | 638 | try: |
149 | 616 | return common.get_id_from_href(href) | 639 | href = data['server']['imageRef'] |
150 | 640 | return common.get_id_from_href(href) | ||
151 | 641 | except KeyError: | ||
152 | 642 | return | ||
153 | 617 | 643 | ||
154 | 618 | def _flavor_id_from_req_data(self, data): | 644 | def _flavor_id_from_req_data(self, data): |
155 | 619 | href = data['server']['flavorRef'] | 645 | href = data['server']['flavorRef'] |
156 | 620 | return common.get_id_from_href(href) | 646 | return common.get_id_from_href(href) |
157 | 621 | 647 | ||
158 | 648 | def _volume_from_req_data(self, data): | ||
159 | 649 | try: | ||
160 | 650 | volume = { | ||
161 | 651 | 'id': common.get_id_from_href(data['server']['volume']['id']), | ||
162 | 652 | 'mountpoint': data['server']['volume']['mountpoint'] | ||
163 | 653 | } | ||
164 | 654 | return volume | ||
165 | 655 | except KeyError: | ||
166 | 656 | return | ||
167 | 657 | |||
168 | 622 | def _get_view_builder(self, req): | 658 | def _get_view_builder(self, req): |
169 | 623 | base_url = req.application_url | 659 | base_url = req.application_url |
170 | 624 | flavor_builder = nova.api.openstack.views.flavors.ViewBuilderV11( | 660 | flavor_builder = nova.api.openstack.views.flavors.ViewBuilderV11( |
171 | 625 | 661 | ||
172 | === modified file 'nova/compute/api.py' | |||
173 | --- nova/compute/api.py 2011-04-20 19:11:14 +0000 | |||
174 | +++ nova/compute/api.py 2011-04-21 04:09:46 +0000 | |||
175 | @@ -124,13 +124,37 @@ | |||
176 | 124 | LOG.warn(msg) | 124 | LOG.warn(msg) |
177 | 125 | raise quota.QuotaError(msg, "MetadataLimitExceeded") | 125 | raise quota.QuotaError(msg, "MetadataLimitExceeded") |
178 | 126 | 126 | ||
179 | 127 | def _get_kernel_and_ramdisk(self, context, image_id, | ||
180 | 128 | kernel_id=None, ramdisk_id=None): | ||
181 | 129 | if not image_id: | ||
182 | 130 | return (None, None, None) | ||
183 | 131 | |||
184 | 132 | image = self.image_service.show(context, image_id) | ||
185 | 133 | |||
186 | 134 | os_type = None | ||
187 | 135 | if 'properties' in image and 'os_type' in image['properties']: | ||
188 | 136 | os_type = image['properties']['os_type'] | ||
189 | 137 | |||
190 | 138 | if kernel_id is None: | ||
191 | 139 | kernel_id = image['properties'].get('kernel_id', None) | ||
192 | 140 | if ramdisk_id is None: | ||
193 | 141 | ramdisk_id = image['properties'].get('ramdisk_id', None) | ||
194 | 142 | # FIXME(sirp): is there a way we can remove null_kernel? | ||
195 | 143 | # No kernel and ramdisk for raw images | ||
196 | 144 | if kernel_id == str(FLAGS.null_kernel): | ||
197 | 145 | kernel_id = None | ||
198 | 146 | ramdisk_id = None | ||
199 | 147 | LOG.debug(_("Creating a raw instance")) | ||
200 | 148 | |||
201 | 149 | return (kernel_id, ramdisk_id, os_type) | ||
202 | 150 | |||
203 | 127 | def create(self, context, instance_type, | 151 | def create(self, context, instance_type, |
204 | 128 | image_id, kernel_id=None, ramdisk_id=None, | 152 | image_id, kernel_id=None, ramdisk_id=None, |
205 | 129 | min_count=1, max_count=1, | 153 | min_count=1, max_count=1, |
206 | 130 | display_name='', display_description='', | 154 | display_name='', display_description='', |
207 | 131 | key_name=None, key_data=None, security_group='default', | 155 | key_name=None, key_data=None, security_group='default', |
208 | 132 | availability_zone=None, user_data=None, metadata={}, | 156 | availability_zone=None, user_data=None, metadata={}, |
210 | 133 | injected_files=None): | 157 | injected_files=None, block_device_mapping=[]): |
211 | 134 | """Create the number and type of instances requested. | 158 | """Create the number and type of instances requested. |
212 | 135 | 159 | ||
213 | 136 | Verifies that quota and other arguments are valid. | 160 | Verifies that quota and other arguments are valid. |
214 | @@ -152,22 +176,10 @@ | |||
215 | 152 | self._check_metadata_properties_quota(context, metadata) | 176 | self._check_metadata_properties_quota(context, metadata) |
216 | 153 | self._check_injected_file_quota(context, injected_files) | 177 | self._check_injected_file_quota(context, injected_files) |
217 | 154 | 178 | ||
234 | 155 | image = self.image_service.show(context, image_id) | 179 | (kernel_id, ramdisk_id, os_type) = \ |
235 | 156 | 180 | self._get_kernel_and_ramdisk(context, image_id, | |
236 | 157 | os_type = None | 181 | kernel_id, ramdisk_id) |
237 | 158 | if 'properties' in image and 'os_type' in image['properties']: | 182 | |
222 | 159 | os_type = image['properties']['os_type'] | ||
223 | 160 | |||
224 | 161 | if kernel_id is None: | ||
225 | 162 | kernel_id = image['properties'].get('kernel_id', None) | ||
226 | 163 | if ramdisk_id is None: | ||
227 | 164 | ramdisk_id = image['properties'].get('ramdisk_id', None) | ||
228 | 165 | # FIXME(sirp): is there a way we can remove null_kernel? | ||
229 | 166 | # No kernel and ramdisk for raw images | ||
230 | 167 | if kernel_id == str(FLAGS.null_kernel): | ||
231 | 168 | kernel_id = None | ||
232 | 169 | ramdisk_id = None | ||
233 | 170 | LOG.debug(_("Creating a raw instance")) | ||
238 | 171 | # Make sure we have access to kernel and ramdisk (if not raw) | 183 | # Make sure we have access to kernel and ramdisk (if not raw) |
239 | 172 | logging.debug("Using Kernel=%s, Ramdisk=%s" % | 184 | logging.debug("Using Kernel=%s, Ramdisk=%s" % |
240 | 173 | (kernel_id, ramdisk_id)) | 185 | (kernel_id, ramdisk_id)) |
241 | @@ -215,7 +227,7 @@ | |||
242 | 215 | 'locked': False, | 227 | 'locked': False, |
243 | 216 | 'metadata': metadata, | 228 | 'metadata': metadata, |
244 | 217 | 'availability_zone': availability_zone, | 229 | 'availability_zone': availability_zone, |
246 | 218 | 'os_type': os_type} | 230 | 'os_type': os_type or ''} |
247 | 219 | elevated = context.elevated() | 231 | elevated = context.elevated() |
248 | 220 | instances = [] | 232 | instances = [] |
249 | 221 | LOG.debug(_("Going to run %s instances..."), num_instances) | 233 | LOG.debug(_("Going to run %s instances..."), num_instances) |
250 | @@ -234,6 +246,17 @@ | |||
251 | 234 | instance_id, | 246 | instance_id, |
252 | 235 | security_group_id) | 247 | security_group_id) |
253 | 236 | 248 | ||
254 | 249 | # tell vm driver to attach volume at boot time by | ||
255 | 250 | # updating the volume table if volume_id is attachable | ||
256 | 251 | # TODO(yoshi) Change mount point to be bootable | ||
257 | 252 | # TODO(yamahata): eliminate EC2 dependency | ||
258 | 253 | for bdm in block_device_mapping: | ||
259 | 254 | device = bdm['DeviceName'] | ||
260 | 255 | volume_id = bdm['Ebs']['SnapshotId'] | ||
261 | 256 | self._check_volume_device(context, volume_id, device) | ||
262 | 257 | self.db.volume_attached(context, volume_id, instance_id, | ||
263 | 258 | device) | ||
264 | 259 | |||
265 | 237 | # Set sane defaults if not specified | 260 | # Set sane defaults if not specified |
266 | 238 | updates = dict(hostname=self.hostname_factory(instance_id)) | 261 | updates = dict(hostname=self.hostname_factory(instance_id)) |
267 | 239 | if (not hasattr(instance, 'display_name') or | 262 | if (not hasattr(instance, 'display_name') or |
268 | @@ -672,12 +695,15 @@ | |||
269 | 672 | """Inject network info for the instance.""" | 695 | """Inject network info for the instance.""" |
270 | 673 | self._cast_compute_message('inject_network_info', context, instance_id) | 696 | self._cast_compute_message('inject_network_info', context, instance_id) |
271 | 674 | 697 | ||
274 | 675 | def attach_volume(self, context, instance_id, volume_id, device): | 698 | def _check_volume_device(self, context, volume_id, device): |
273 | 676 | """Attach an existing volume to an existing instance.""" | ||
275 | 677 | if not re.match("^/dev/[a-z]d[a-z]+$", device): | 699 | if not re.match("^/dev/[a-z]d[a-z]+$", device): |
276 | 678 | raise exception.ApiError(_("Invalid device specified: %s. " | 700 | raise exception.ApiError(_("Invalid device specified: %s. " |
277 | 679 | "Example device: /dev/vdb") % device) | 701 | "Example device: /dev/vdb") % device) |
278 | 680 | self.volume_api.check_attach(context, volume_id=volume_id) | 702 | self.volume_api.check_attach(context, volume_id=volume_id) |
279 | 703 | |||
280 | 704 | def attach_volume(self, context, instance_id, volume_id, device): | ||
281 | 705 | """Attach an existing volume to an existing instance.""" | ||
282 | 706 | self._check_volume_device(context, volume_id, device) | ||
283 | 681 | instance = self.get(context, instance_id) | 707 | instance = self.get(context, instance_id) |
284 | 682 | host = instance['host'] | 708 | host = instance['host'] |
285 | 683 | rpc.cast(context, | 709 | rpc.cast(context, |
286 | 684 | 710 | ||
287 | === modified file 'nova/compute/manager.py' | |||
288 | --- nova/compute/manager.py 2011-04-19 18:29:16 +0000 | |||
289 | +++ nova/compute/manager.py 2011-04-21 04:09:46 +0000 | |||
290 | @@ -224,6 +224,29 @@ | |||
291 | 224 | self.network_manager.setup_compute_network(context, | 224 | self.network_manager.setup_compute_network(context, |
292 | 225 | instance_id) | 225 | instance_id) |
293 | 226 | 226 | ||
294 | 227 | # setup volume: | ||
295 | 228 | # Now here ebs is specified by volume id. | ||
296 | 229 | # TODO: | ||
297 | 230 | # When snapshot is supported, create volume from snapshot here. | ||
298 | 231 | block_device_mapping = [] | ||
299 | 232 | for volume in instance_ref['volumes']: | ||
300 | 233 | dev_path = self.volume_manager.setup_compute_volume(context, | ||
301 | 234 | volume['id']) | ||
302 | 235 | if dev_path.startswith('/dev/'): | ||
303 | 236 | info = {'type': 'block', | ||
304 | 237 | 'device_path': dev_path, | ||
305 | 238 | 'mount_device': volume['mountpoint']} | ||
306 | 239 | elif ':' in dev_path: | ||
307 | 240 | (protocol, name) = dev_path.split(':') | ||
308 | 241 | info = {'type': 'network', | ||
309 | 242 | 'protocol': protocol, | ||
310 | 243 | 'name': name, | ||
311 | 244 | 'mount_device': volume['mountpoint']} | ||
312 | 245 | block_device_mapping.append(info) | ||
313 | 246 | |||
314 | 247 | # FIXME:XXX determine when to cdboot. | ||
315 | 248 | #boot = 'cdrom' if instance_ref['image_id'] and volume_info else 'hd' | ||
316 | 249 | |||
317 | 227 | # TODO(vish) check to make sure the availability zone matches | 250 | # TODO(vish) check to make sure the availability zone matches |
318 | 228 | self.db.instance_set_state(context, | 251 | self.db.instance_set_state(context, |
319 | 229 | instance_id, | 252 | instance_id, |
320 | @@ -231,7 +254,9 @@ | |||
321 | 231 | 'spawning') | 254 | 'spawning') |
322 | 232 | 255 | ||
323 | 233 | try: | 256 | try: |
325 | 234 | self.driver.spawn(instance_ref) | 257 | self.driver.spawn(instance_ref, |
326 | 258 | block_device_mapping=block_device_mapping, | ||
327 | 259 | boot=boot) | ||
328 | 235 | now = datetime.datetime.utcnow() | 260 | now = datetime.datetime.utcnow() |
329 | 236 | self.db.instance_update(context, | 261 | self.db.instance_update(context, |
330 | 237 | instance_id, | 262 | instance_id, |
331 | @@ -243,6 +268,10 @@ | |||
332 | 243 | self.db.instance_set_state(context, | 268 | self.db.instance_set_state(context, |
333 | 244 | instance_id, | 269 | instance_id, |
334 | 245 | power_state.SHUTDOWN) | 270 | power_state.SHUTDOWN) |
335 | 271 | for volume in instance_ref['volumes']: | ||
336 | 272 | self.volume_manager.remove_compute_volume(context, | ||
337 | 273 | volume['id']) | ||
338 | 274 | self.db.volume_detached(context, volume['id']) | ||
339 | 246 | 275 | ||
340 | 247 | self._update_state(context, instance_id) | 276 | self._update_state(context, instance_id) |
341 | 248 | 277 | ||
342 | @@ -963,8 +992,9 @@ | |||
343 | 963 | 992 | ||
344 | 964 | # Detaching volumes. | 993 | # Detaching volumes. |
345 | 965 | try: | 994 | try: |
348 | 966 | for vol in self.db.volume_get_all_by_instance(ctxt, instance_id): | 995 | for volume in self.db.volume_get_all_by_instance(ctxt, |
349 | 967 | self.volume_manager.remove_compute_volume(ctxt, vol['id']) | 996 | instance_id): |
350 | 997 | self.volume_manager.remove_compute_volume(ctxt, volume['id']) | ||
351 | 968 | except exception.NotFound: | 998 | except exception.NotFound: |
352 | 969 | pass | 999 | pass |
353 | 970 | 1000 | ||
354 | 971 | 1001 | ||
355 | === modified file 'nova/virt/driver.py' | |||
356 | --- nova/virt/driver.py 2011-03-30 00:35:24 +0000 | |||
357 | +++ nova/virt/driver.py 2011-04-21 04:09:46 +0000 | |||
358 | @@ -61,7 +61,7 @@ | |||
359 | 61 | """Return a list of InstanceInfo for all registered VMs""" | 61 | """Return a list of InstanceInfo for all registered VMs""" |
360 | 62 | raise NotImplementedError() | 62 | raise NotImplementedError() |
361 | 63 | 63 | ||
363 | 64 | def spawn(self, instance, network_info=None): | 64 | def spawn(self, instance, network_info=None, block_device_mapping=[]): |
364 | 65 | """Launch a VM for the specified instance""" | 65 | """Launch a VM for the specified instance""" |
365 | 66 | raise NotImplementedError() | 66 | raise NotImplementedError() |
366 | 67 | 67 | ||
367 | 68 | 68 | ||
368 | === modified file 'nova/virt/libvirt.xml.template' | |||
369 | --- nova/virt/libvirt.xml.template 2011-03-30 05:59:13 +0000 | |||
370 | +++ nova/virt/libvirt.xml.template 2011-04-21 04:09:46 +0000 | |||
371 | @@ -39,7 +39,7 @@ | |||
372 | 39 | <initrd>${ramdisk}</initrd> | 39 | <initrd>${ramdisk}</initrd> |
373 | 40 | #end if | 40 | #end if |
374 | 41 | #else | 41 | #else |
376 | 42 | <boot dev="hd" /> | 42 | <boot dev='${boot}' /> |
377 | 43 | #end if | 43 | #end if |
378 | 44 | #end if | 44 | #end if |
379 | 45 | #end if | 45 | #end if |
380 | @@ -67,11 +67,24 @@ | |||
381 | 67 | <target dev='${disk_prefix}b' bus='${disk_bus}'/> | 67 | <target dev='${disk_prefix}b' bus='${disk_bus}'/> |
382 | 68 | </disk> | 68 | </disk> |
383 | 69 | #else | 69 | #else |
389 | 70 | <disk type='file'> | 70 | |
390 | 71 | <driver type='${driver_type}'/> | 71 | ## FIXME: allow no device |
391 | 72 | <source file='${basepath}/disk'/> | 72 | #if $getVar('disk', False) |
392 | 73 | <target dev='${disk_prefix}a' bus='${disk_bus}'/> | 73 | #if $boot == 'cdrom' |
393 | 74 | </disk> | 74 | <disk type='file' device='cdrom'> |
394 | 75 | <driver type='${driver_type}'/> | ||
395 | 76 | <source file='${basepath}/disk'/> | ||
396 | 77 | <target dev='hdc' bus='ide'/> | ||
397 | 78 | </disk> | ||
398 | 79 | #else if not ($getVar('ebs_root', False)) | ||
399 | 80 | <disk type='file' device='disk'> | ||
400 | 81 | <driver type='${driver_type}'/> | ||
401 | 82 | <source file='${basepath}/disk'/> | ||
402 | 83 | <target dev='${disk_prefix}a' bus='${disk_bus}'/> | ||
403 | 84 | </disk> | ||
404 | 85 | #end if | ||
405 | 86 | #end if | ||
406 | 87 | |||
407 | 75 | #if $getVar('local', False) | 88 | #if $getVar('local', False) |
408 | 76 | <disk type='file'> | 89 | <disk type='file'> |
409 | 77 | <driver type='${driver_type}'/> | 90 | <driver type='${driver_type}'/> |
410 | @@ -79,6 +92,20 @@ | |||
411 | 79 | <target dev='${disk_prefix}b' bus='${disk_bus}'/> | 92 | <target dev='${disk_prefix}b' bus='${disk_bus}'/> |
412 | 80 | </disk> | 93 | </disk> |
413 | 81 | #end if | 94 | #end if |
414 | 95 | #for $volume in $volumes | ||
415 | 96 | #if varExists('volume.type') | ||
416 | 97 | #set $volume.type = 'block' | ||
417 | 98 | #end if | ||
418 | 99 | <disk type='${volume.type}'> | ||
419 | 100 | <driver type='raw'/> | ||
420 | 101 | #if $volume.type == 'network' | ||
421 | 102 | <source protocol='${volume.protocol}' name='${volume.name}'/> | ||
422 | 103 | #else | ||
423 | 104 | <source dev='${volume.device_path}'/> | ||
424 | 105 | #end if | ||
425 | 106 | <target dev='${volume.mount_device}' bus='${disk_bus}'/> | ||
426 | 107 | </disk> | ||
427 | 108 | #end for | ||
428 | 82 | #end if | 109 | #end if |
429 | 83 | #end if | 110 | #end if |
430 | 84 | 111 | ||
431 | 85 | 112 | ||
=== modified file 'nova/virt/libvirt_conn.py'
--- nova/virt/libvirt_conn.py	2011-04-18 23:40:03 +0000
+++ nova/virt/libvirt_conn.py	2011-04-21 04:09:46 +0000
@@ -39,6 +39,7 @@
 import multiprocessing
 import os
 import random
+import re
 import shutil
 import subprocess
 import sys
@@ -202,6 +203,10 @@
         network_info.append((network, mapping))
     return network_info
 
+# TODO: clean up open coding like
+# volume['mountpoint'] = volume['mountpoint'].rpartition("/")[2]
+def _strip_dev(mount_path):
+    return re.sub(r'^/dev/', '', mount_path)
 
 class LibvirtConnection(driver.ComputeDriver):
 
@@ -608,15 +613,19 @@
     # NOTE(ilyaalekseyev): Implementation like in multinics
     # for xenapi(tr3buchet)
     @exception.wrap_exception
-    def spawn(self, instance, network_info=None):
-        xml = self.to_xml(instance, False, network_info)
+    def spawn(self, instance, network_info=None,
+              block_device_mapping=[], boot=None):
+        xml = self.to_xml(instance, False, network_info=network_info,
+                          block_device_mapping=block_device_mapping,
+                          boot=boot)
         db.instance_set_state(context.get_admin_context(),
                               instance['id'],
                               power_state.NOSTATE,
                               'launching')
         self.firewall_driver.setup_basic_filtering(instance, network_info)
         self.firewall_driver.prepare_instance_filter(instance, network_info)
-        self._create_image(instance, xml, network_info)
+        self._create_image(instance, xml, network_info=network_info,
+                           block_device_mapping=block_device_mapping)
         domain = self._create_new_domain(xml)
         LOG.debug(_("instance %s: is running"), instance['name'])
         self.firewall_driver.apply_instance_filter(instance)
@@ -797,7 +806,7 @@
     # TODO(vish): should we format disk by default?
 
     def _create_image(self, inst, libvirt_xml, suffix='', disk_images=None,
-                      network_info=None):
+                      network_info=None, block_device_mapping=[]):
         if not network_info:
             network_info = _get_network_info(inst)
 
@@ -851,25 +860,28 @@
                           user=user,
                           project=project)
 
-        root_fname = '%08x' % int(disk_images['image_id'])
-        size = FLAGS.minimum_root_size
-
         inst_type_id = inst['instance_type_id']
         inst_type = instance_types.get_instance_type(inst_type_id)
-        if inst_type['name'] == 'm1.tiny' or suffix == '.rescue':
-            size = None
-            root_fname += "_sm"
-
-        self._cache_image(fn=self._fetch_image,
-                          target=basepath('disk'),
-                          fname=root_fname,
-                          cow=FLAGS.use_cow_images,
-                          image_id=disk_images['image_id'],
-                          user=user,
-                          project=project,
-                          size=size)
-
-        if inst_type['local_gb']:
+
+        if (disk_images['image_id'] and
+            not self._volume_in_mapping(self.root_mount_device,
+                                        block_device_mapping)):
+            root_fname = '%08x' % int(disk_images['image_id'])
+            size = FLAGS.minimum_root_size
+            if inst_type['name'] == 'm1.tiny' or suffix == '.rescue':
+                size = None
+                root_fname += "_sm"
+            self._cache_image(fn=self._fetch_image,
+                              target=basepath('disk'),
+                              fname=root_fname,
+                              cow=FLAGS.use_cow_images,
+                              image_id=disk_images['image_id'],
+                              user=user,
+                              project=project,
+                              size=size)
+
+        if inst_type['local_gb'] and not self._volume_in_mapping(
+            self.local_mount_device, block_device_mapping):
             self._cache_image(fn=self._create_local,
                               target=basepath('disk.local'),
                               fname="local_%s" % inst_type['local_gb'],
@@ -994,7 +1006,18 @@
 
         return result
 
-    def to_xml(self, instance, rescue=False, network_info=None):
+    root_mount_device = 'vda'  # FIXME for now. it's hard coded.
+    local_mount_device = 'vdb'  # FIXME for now. it's hard coded.
+    def _volume_in_mapping(self, mount_device, block_device_mapping):
+        mount_device_ = _strip_dev(mount_device)
+        for vol in block_device_mapping:
+            vol_mount_device = _strip_dev(vol['mount_device'])
+            if vol_mount_device == mount_device_:
+                return True
+        return False
+
+    def to_xml(self, instance, rescue=False,
+               network_info=None, block_device_mapping=[], boot='hd'):
         # TODO(termie): cache?
         LOG.debug(_('instance %s: starting toXML method'), instance['name'])
 
@@ -1007,6 +1030,7 @@
         for (network, mapping) in network_info:
             nics.append(self._get_nic_for_xml(network,
                                               mapping))
+
         # FIXME(vish): stick this in db
         inst_type_id = instance['instance_type_id']
         inst_type = instance_types.get_instance_type(inst_type_id)
@@ -1016,6 +1040,20 @@
         else:
             driver_type = 'raw'
 
+        #for volume in block_device_mapping:
+        #    volume['mountpoint'] = volume['mountpoint'].rpartition("/")[2]
+        for vol in block_device_mapping:
+            vol['mount_device'] = _strip_dev(vol['mount_device'])
+
+        ebs_root = self._volume_in_mapping(self.root_mount_device,
+                                           block_device_mapping)
+        if self._volume_in_mapping(self.local_mount_device,
+                                   block_device_mapping):
+            local_gb = False
+        else:
+            local_gb = inst_type['local_gb']
+
         xml_info = {'type': FLAGS.libvirt_type,
                     'name': instance['name'],
                     'basepath': os.path.join(FLAGS.instances_path,
@@ -1023,9 +1061,12 @@
                     'memory_kb': inst_type['memory_mb'] * 1024,
                     'vcpus': inst_type['vcpus'],
                     'rescue': rescue,
-                    'local': inst_type['local_gb'],
+                    'local': local_gb,
+                    'boot': boot,
                     'driver_type': driver_type,
-                    'nics': nics}
+                    'nics': nics,
+                    'ebs_root': ebs_root,
+                    'volumes': block_device_mapping}
 
         if FLAGS.vnc_enabled:
             if FLAGS.libvirt_type != 'lxc':
@@ -1037,7 +1078,8 @@
         if instance['ramdisk_id']:
             xml_info['ramdisk'] = xml_info['basepath'] + "/ramdisk"
 
-        xml_info['disk'] = xml_info['basepath'] + "/disk"
+        if instance['image_id']:
+            xml_info['disk'] = xml_info['basepath'] + "/disk"
 
         xml = str(Template(self.libvirt_xml, searchList=[xml_info]))
         LOG.debug(_('instance %s: finished toXML method'),
Hi Isaku,
Great contribution!
We were actually discussing developing this feature internally as well.
By the way, this is proof-of-concept code for a feature that has not yet been approved, isn't it?
In the OpenStack world, a new feature is not merged into trunk until its blueprint has been discussed and approved at the Design Summit.
I understand that Adam Johnson of Midokura will have a session on the blueprint for this feature
at the upcoming Diablo Design Summit. So, how about keeping the branch somewhere outside
the trunk for a while?
Thanks,
Masanori