Merge lp:~yamahata/nova/boot-from-volume into lp:~hudson-openstack/nova/trunk

Proposed by Isaku Yamahata
Status: Work in progress
Proposed branch: lp:~yamahata/nova/boot-from-volume
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 605 lines (+246/-75)
8 files modified
nova/api/ec2/apirequest.py (+11/-5)
nova/api/ec2/cloud.py (+5/-1)
nova/api/openstack/servers.py (+50/-14)
nova/compute/api.py (+46/-20)
nova/compute/manager.py (+33/-3)
nova/virt/driver.py (+1/-1)
nova/virt/libvirt.xml.template (+33/-6)
nova/virt/libvirt_conn.py (+67/-25)
To merge this branch: bzr merge lp:~yamahata/nova/boot-from-volume
Reviewer Review Type Date Requested Status
Dan Prince (community) Needs Fixing
Vish Ishaya (community) Needs Fixing
Review via email: mp+58021@code.launchpad.net

Description of the change

This branch implements the first step for boot from volume.
With the --block-device-mapping option of euca-run-instances, the VM boots with the specified volumes attached.
For example:
 euca-run-instances ami-XXXX -k mykey -t m1.tiny -b /dev/vdb=vol-00000003::false
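The -b value follows the EC2 device=id:size:delete-on-termination pattern (with a volume id standing in for a snapshot id here). A minimal sketch of splitting such an argument (hypothetical helper, not code from this branch):

```python
# Hypothetical helper (not part of this branch): split one
# "device=vol-id:size:delete_on_termination" argument into its fields.
def parse_block_device_arg(arg):
    device, _, spec = arg.partition('=')
    volume_id, _, rest = spec.partition(':')
    size, _, delete_on_term = rest.partition(':')
    return {'device': device,
            'volume_id': volume_id,
            'size': size or None,
            'delete_on_termination': delete_on_term == 'true'}
```

For the example above, '/dev/vdb=vol-00000003::false' yields device /dev/vdb, volume vol-00000003, no explicit size, and delete-on-termination disabled.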

In fact, since creating an ec2 snapshot/volume from a volume/snapshot isn't supported yet,
I used volume_id instead of snapshot_id.
This is a first step, meant to start the discussion.

Code details:
- Enhanced the argument parser to interpret multi-dot-separated arguments
  like BlockDeviceMapping.1.DeviceName=snap-id.
- Pass the block device mapping ids down from nova-api to compute-api.
  compute-api changes the volume status to in-use to tell compute-manager which volumes to attach.
- compute-manager passes that information to the virt driver, and the libvirt_conn driver interprets it.
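The multi-dot parsing in apirequest.py turns each dotted key into a path of nested dict keys; a standalone sketch of that logic (simplified: the numeric-key-to-list conversion that APIRequest also performs is omitted):

```python
# Sketch of the nested-key parsing added to nova/api/ec2/apirequest.py:
# each dotted key becomes a path of nested dict keys, so repeated
# prefixes (e.g. BlockDeviceMapping.1) share one sub-dict.
def parse_dotted_args(pairs):
    args = {}
    for dotted_key, value in pairs:
        parts = dotted_key.split('.')
        key = parts[0]
        if len(parts) > 1:
            d = args.setdefault(key, {})
            for part in parts[1:-1]:
                d = d.setdefault(part, {})
            d[parts[-1]] = value
        else:
            args[key] = value
    return args

args = parse_dotted_args([
    ('BlockDeviceMapping.1.DeviceName', '/dev/vdb'),
    ('BlockDeviceMapping.1.Ebs.SnapshotId', 'snap-01'),
    ('InstanceType', 'm1.tiny')])
```

Both BlockDeviceMapping.1 entries end up under args['BlockDeviceMapping']['1'], which is what the compute layers below consume.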

TODO:
- error recovery
- ephemeral device/no device
- AMI support, which needs a db schema change
- support for ec2 snapshot/clone
- a native API in addition to the ec2 API?

lp:~yamahata/nova/boot-from-volume updated
994. By Yoshiaki Tamura

Fix parameter mismatch calling to_xml() from spawn() in libvirt_conn.py

995. By termie

The change to utils.execute's call style missed this call somehow; this should get libvirt snapshots working again.

Revision history for this message
Masanori Itoh (itohm) wrote :

Hi Isaku,

Great contribution!
Actually we were also discussing developing this feature internally.

BTW, this is POC code for a not-yet-approved feature, isn't it?
New features will not be merged into trunk until the blueprint is approved and discussed at the Design Summit in the OpenStack world.
I guess that Adam Johnson of Midokura will have a session on the blueprint of this feature
at the upcoming Diablo Design Summit. So, what about holding the branch somewhere outside
the trunk for a while?

Thanks,
Masanori

Revision history for this message
Jay Pipes (jaypipes) wrote :

Hi! Looks like a great contribution indeed, and I agree with Masanori's points about discussing at the summit. I wanted to add one more note, though, that we would want to see some unit tests that stress the new code paths. Let us know if you need assistance creating those tests. :)

Cheers!
jay

lp:~yamahata/nova/boot-from-volume updated
996. By Vish Ishaya

Fixes nova-manage image convert when the source directory is the same one that the local image service uses.

997. By Jason Kölker

Change '== None' to 'is None'

998. By Mark Washenberger

Support admin password when specified in server create requests.

999. By Eldar Nugaev

Fix logging in server creation in OpenStack API 1.0

1000. By Jason Kölker

use 'is not None' instead of '!= None'

1001. By Jason Kölker

Remove zope.interface from the requires file since it is not used anywhere.

1002. By Jason Kölker

pep8 fixes

1003. By Anne Gentle

Adding projectname username to the nova-manage project commands to fix a doc bug, plus some edits and elimination of a few doc todos.

Revision history for this message
Isaku Yamahata (yamahata) wrote :

On Mon, Apr 18, 2011 at 03:48:14PM -0000, Masanori Itoh wrote:
> Hi Isaku,
>
> Great contribution!
> Actually we were also discussing developing this feature internally.
>
> BTW, this is a POC code of a not-approved-yet feature, isn't it?
> New feature will not be merged into trunk till the blueprint is approved and discussed at Design Summit in the OpenStack world.

Yes. I just didn't know what I should do with the new code.
I'm learning the new rules as a newcomer... (sometimes by making mistakes)

> I guess that Adam Johnson of Midokura will have a session on the blueprint of this feature
> at the upcoming Diablo Design Summit. So, what about holding the branch somewhere outside
> the trunk for a while?

Great idea. I'm looking forward to it.

>
> Thanks,
> Masanori
>
>
> --
> https://code.launchpad.net/~yamahata/nova/boot-from-volume/+merge/58021
> You are the owner of lp:~yamahata/nova/boot-from-volume.
>

--
yamahata

lp:~yamahata/nova/boot-from-volume updated
1004. By Dan Prince

Implement quotas for the new v1.1 server metadata controller.

Created a new _check_metadata_properties_quota method in the compute API that is used when creating instances and when updating server metadata. In doing so I modified the compute API so that metadata is a dict (not an array) to ensure we are using unique key values for metadata (which is implied by the API specs) and makes more sense with JSON request formats anyway.

Additionally this branch enables and fixes the integration test to create servers with metadata.

1005. By Josh Kearney

Round 1 of pylint cleanup.

1006. By termie

attempts to make the docstring rules clearer

1007. By termie

Docstring cleanup and formatting. Minor style fixes as well.

1008. By Naveed Massjouni

Added an option to run_tests.sh so you can run just pep8. So now you can:
    ./run_tests.sh --just-pep8
or
    ./run_tests.sh -p

1009. By Josh Kearney

Another small round of pylint clean-up.

1013. By Yoshiaki Tamura

If volumes exist in the instance, get the paths to the volumes and convert
them to the XML format to let libvirt see them upon booting.

1014. By Yoshiaki Tamura

Extend create() to accept a volume, and update the DB to reserve the volume
before passing it to the manager.

1015. By Yoshiaki Tamura

Add a parameter to specify a volume upon creating instances.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

This is looking pretty good.

I like the direction that it is going and I think merging step-by-step is the right way. We definitely need some tests to cover this feature. Some sort of unit tests are a must. It would also be great to get a smoketest for this feature so that we can be sure that it is working against a real deployment.

review: Needs Fixing
Revision history for this message
Dan Prince (dan-prince) wrote :

This branch no longer merges cleanly w/ trunk.

Needs fixing.

review: Needs Fixing

Unmerged revisions

1015. By Yoshiaki Tamura

Add a parameter to specify a volume upon creating instances.

1014. By Yoshiaki Tamura

Extend create() to accept a volume, and update the DB to reserve the volume
before passing it to the manager.

1013. By Yoshiaki Tamura

If volumes exist in the instance, get the paths to the volumes and convert
them to the XML format to let libvirt see them upon booting.

1012. By Isaku Yamahata

ebs boot: compute node (kvm and libvirt) support for ebs boot

This patch teaches the kvm and libvirt compute node to boot from EBS.

1011. By Isaku Yamahata

ebs boot: add a parser for the ebs boot argument

This patch adds the parser for the ebs boot argument and stores that
information in the Volume table for the compute node.

1010. By Isaku Yamahata

api/ec2/api: teach multi dot-separated arguments

nova.api.ec2.apirequest.APIRequest knows only single dot-separated
arguments.
EBS boot uses multi dot-separated arguments like
BlockDeviceMapping.1.DeviceName=snap-id
This patch teaches the parser to handle those arguments, in preparation
for ebs boot support.

Preview Diff

=== modified file 'nova/api/ec2/apirequest.py'
--- nova/api/ec2/apirequest.py	2011-04-18 20:53:09 +0000
+++ nova/api/ec2/apirequest.py	2011-04-21 04:09:46 +0000
@@ -133,11 +133,17 @@
                 # NOTE(vish): Automatically convert strings back
                 #             into their respective values
                 value = _try_convert(value)
-            if len(parts) > 1:
-                d = args.get(key, {})
-                d[parts[1]] = value
-                value = d
-            args[key] = value
+
+            if len(parts) > 1:
+                d = args.get(key, {})
+                args[key] = d
+                for k in parts[1:-1]:
+                    v = d.get(k, {})
+                    d[k] = v
+                    d = v
+                d[parts[-1]] = value
+            else:
+                args[key] = value
 
         for key in args.keys():
             # NOTE(vish): Turn numeric dict keys into lists

=== modified file 'nova/api/ec2/cloud.py'
--- nova/api/ec2/cloud.py	2011-04-20 23:21:37 +0000
+++ nova/api/ec2/cloud.py	2011-04-21 04:09:46 +0000
@@ -822,6 +822,9 @@
         if kwargs.get('ramdisk_id'):
             ramdisk = self._get_image(context, kwargs['ramdisk_id'])
             kwargs['ramdisk_id'] = ramdisk['id']
+        for bdm in kwargs.get('block_device_mapping', []):
+            volume_id = ec2utils.ec2_id_to_id(bdm['Ebs']['SnapshotId'])
+            bdm['Ebs']['SnapshotId'] = volume_id
         instances = self.compute_api.create(context,
             instance_type=instance_types.get_instance_type_by_name(
                 kwargs.get('instance_type', None)),
@@ -836,7 +839,8 @@
             user_data=kwargs.get('user_data'),
             security_group=kwargs.get('security_group'),
             availability_zone=kwargs.get('placement', {}).get(
-                'AvailabilityZone'))
+                'AvailabilityZone'),
+            block_device_mapping=kwargs.get('block_device_mapping', {}))
         return self._format_run_instances(context,
             instances[0]['reservation_id'])

=== modified file 'nova/api/openstack/servers.py'
--- nova/api/openstack/servers.py	2011-04-19 17:46:43 +0000
+++ nova/api/openstack/servers.py	2011-04-21 04:09:46 +0000
@@ -52,7 +52,7 @@
         "attributes": {
             "server": ["id", "imageId", "name", "flavorId", "hostId",
                        "status", "progress", "adminPass", "flavorRef",
-                       "imageRef"],
+                       "imageRef", "volume"],
             "link": ["rel", "type", "href"],
         },
         "dict_collections": {
@@ -129,15 +129,27 @@
             key_data = key_pair['public_key']
 
         requested_image_id = self._image_id_from_req_data(env)
-        try:
-            image_id = common.get_image_id_from_image_hash(self._image_service,
-                context, requested_image_id)
-        except:
-            msg = _("Can not find requested image")
-            return faults.Fault(exc.HTTPBadRequest(msg))
-
-        kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image(
-            req, image_id)
+
+        volume = self._volume_from_req_data(env)
+
+        # image_id or volume must be set to create
+        if not requested_image_id and not volume:
+            msg = _("imageRef or volume must be specified")
+            return exc.HTTPBadRequest(msg)
+
+        image_id = None
+        kernel_id = None
+        ramdisk_id = None
+        if requested_image_id:
+            try:
+                image_id = common.get_image_id_from_image_hash(
+                    self._image_service, context, requested_image_id)
+            except:
+                msg = _("Can not find requested image")
+                return faults.Fault(exc.HTTPBadRequest(msg))
+
+            kernel_id, ramdisk_id = self._get_kernel_ramdisk_from_image(
+                req, image_id)
 
         personality = env['server'].get('personality')
         injected_files = []
@@ -154,6 +166,12 @@
             self._validate_server_name(name)
             name = name.strip()
 
+        # FIXME(yamahata): convert volume into EC2 block device mapping
+        block_device_mapping = []
+        if volume:
+            block_device_mapping.append({'DeviceName': volume['mountpoint'],
+                                         'Ebs': {'SnapshotId': volume['id']}})
+
         try:
             inst_type = \
                 instance_types.get_instance_type_by_flavor_id(flavor_id)
@@ -163,12 +181,14 @@
                 image_id,
                 kernel_id=kernel_id,
                 ramdisk_id=ramdisk_id,
+                volume=volume,
                 display_name=name,
                 display_description=name,
                 key_name=key_name,
                 key_data=key_data,
                 metadata=env['server'].get('metadata', {}),
-                injected_files=injected_files)
+                injected_files=injected_files,
+                block_device_mapping=block_device_mapping)
         except quota.QuotaError as error:
             self._handle_quota_error(error)
 
@@ -585,7 +605,10 @@
 
 class ControllerV10(Controller):
     def _image_id_from_req_data(self, data):
-        return data['server']['imageId']
+        try:
+            return data['server']['imageId']
+        except KeyError:
+            return
 
     def _flavor_id_from_req_data(self, data):
         return data['server']['flavorId']
@@ -612,13 +635,26 @@
 
 class ControllerV11(Controller):
     def _image_id_from_req_data(self, data):
-        href = data['server']['imageRef']
-        return common.get_id_from_href(href)
+        try:
+            href = data['server']['imageRef']
+            return common.get_id_from_href(href)
+        except KeyError:
+            return
 
     def _flavor_id_from_req_data(self, data):
         href = data['server']['flavorRef']
         return common.get_id_from_href(href)
 
+    def _volume_from_req_data(self, data):
+        try:
+            volume = {
+                'id': common.get_id_from_href(data['server']['volume']['id']),
+                'mountpoint': data['server']['volume']['mountpoint']
+            }
+            return volume
+        except KeyError:
+            return
+
     def _get_view_builder(self, req):
         base_url = req.application_url
         flavor_builder = nova.api.openstack.views.flavors.ViewBuilderV11(

=== modified file 'nova/compute/api.py'
--- nova/compute/api.py	2011-04-20 19:11:14 +0000
+++ nova/compute/api.py	2011-04-21 04:09:46 +0000
@@ -124,13 +124,37 @@
             LOG.warn(msg)
             raise quota.QuotaError(msg, "MetadataLimitExceeded")
 
+    def _get_kernel_and_ramdisk(self, context, image_id,
+                                kernel_id=None, ramdisk_id=None):
+        if not image_id:
+            return (None, None, None)
+
+        image = self.image_service.show(context, image_id)
+
+        os_type = None
+        if 'properties' in image and 'os_type' in image['properties']:
+            os_type = image['properties']['os_type']
+
+        if kernel_id is None:
+            kernel_id = image['properties'].get('kernel_id', None)
+        if ramdisk_id is None:
+            ramdisk_id = image['properties'].get('ramdisk_id', None)
+        # FIXME(sirp): is there a way we can remove null_kernel?
+        # No kernel and ramdisk for raw images
+        if kernel_id == str(FLAGS.null_kernel):
+            kernel_id = None
+            ramdisk_id = None
+            LOG.debug(_("Creating a raw instance"))
+
+        return (kernel_id, ramdisk_id, os_type)
+
     def create(self, context, instance_type,
                image_id, kernel_id=None, ramdisk_id=None,
                min_count=1, max_count=1,
                display_name='', display_description='',
                key_name=None, key_data=None, security_group='default',
                availability_zone=None, user_data=None, metadata={},
-               injected_files=None):
+               injected_files=None, block_device_mapping=[]):
         """Create the number and type of instances requested.
 
         Verifies that quota and other arguments are valid.
@@ -152,22 +176,10 @@
         self._check_metadata_properties_quota(context, metadata)
         self._check_injected_file_quota(context, injected_files)
 
-        image = self.image_service.show(context, image_id)
-
-        os_type = None
-        if 'properties' in image and 'os_type' in image['properties']:
-            os_type = image['properties']['os_type']
-
-        if kernel_id is None:
-            kernel_id = image['properties'].get('kernel_id', None)
-        if ramdisk_id is None:
-            ramdisk_id = image['properties'].get('ramdisk_id', None)
-        # FIXME(sirp): is there a way we can remove null_kernel?
-        # No kernel and ramdisk for raw images
-        if kernel_id == str(FLAGS.null_kernel):
-            kernel_id = None
-            ramdisk_id = None
-            LOG.debug(_("Creating a raw instance"))
+        (kernel_id, ramdisk_id, os_type) = \
+            self._get_kernel_and_ramdisk(context, image_id,
+                                         kernel_id, ramdisk_id)
+
         # Make sure we have access to kernel and ramdisk (if not raw)
         logging.debug("Using Kernel=%s, Ramdisk=%s" %
                       (kernel_id, ramdisk_id))
@@ -215,7 +227,7 @@
                    'locked': False,
                    'metadata': metadata,
                    'availability_zone': availability_zone,
-                   'os_type': os_type}
+                   'os_type': os_type or ''}
         elevated = context.elevated()
         instances = []
         LOG.debug(_("Going to run %s instances..."), num_instances)
@@ -234,6 +246,17 @@
                                                       instance_id,
                                                       security_group_id)
 
+            # tell vm driver to attach volume at boot time by
+            # updating the volume table if volume_id is attachable
+            # TODO(yoshi) Change mount point to be bootable
+            # TODO(yamahata): eliminate EC2 dependency
+            for bdm in block_device_mapping:
+                device = bdm['DeviceName']
+                volume_id = bdm['Ebs']['SnapshotId']
+                self._check_volume_device(context, volume_id, device)
+                self.db.volume_attached(context, volume_id, instance_id,
+                                        device)
+
             # Set sane defaults if not specified
             updates = dict(hostname=self.hostname_factory(instance_id))
             if (not hasattr(instance, 'display_name') or
@@ -672,12 +695,15 @@
         """Inject network info for the instance."""
         self._cast_compute_message('inject_network_info', context, instance_id)
 
-    def attach_volume(self, context, instance_id, volume_id, device):
-        """Attach an existing volume to an existing instance."""
+    def _check_volume_device(self, context, volume_id, device):
         if not re.match("^/dev/[a-z]d[a-z]+$", device):
             raise exception.ApiError(_("Invalid device specified: %s. "
                                        "Example device: /dev/vdb") % device)
         self.volume_api.check_attach(context, volume_id=volume_id)
+
+    def attach_volume(self, context, instance_id, volume_id, device):
+        """Attach an existing volume to an existing instance."""
+        self._check_volume_device(context, volume_id, device)
         instance = self.get(context, instance_id)
         host = instance['host']
         rpc.cast(context,

=== modified file 'nova/compute/manager.py'
--- nova/compute/manager.py	2011-04-19 18:29:16 +0000
+++ nova/compute/manager.py	2011-04-21 04:09:46 +0000
@@ -224,6 +224,29 @@
             self.network_manager.setup_compute_network(context,
                                                        instance_id)
 
+        # setup volume:
+        # Now here ebs is specified by volume id.
+        # TODO:
+        # When snapshot is supported, create volume from snapshot here.
+        block_device_mapping = []
+        for volume in instance_ref['volumes']:
+            dev_path = self.volume_manager.setup_compute_volume(context,
+                                                                volume['id'])
+            if dev_path.startswith('/dev/'):
+                info = {'type': 'block',
+                        'device_path': dev_path,
+                        'mount_device': volume['mountpoint']}
+            elif ':' in dev_path:
+                (protocol, name) = dev_path.split(':')
+                info = {'type': 'network',
+                        'protocol': protocol,
+                        'name': name,
+                        'mount_device': volume['mountpoint']}
+            block_device_mapping.append(info)
+
+        # FIXME:XXX determine when to cdboot ('cdrom' vs 'hd').
+        boot = 'hd'
+
         # TODO(vish) check to make sure the availability zone matches
         self.db.instance_set_state(context,
                                    instance_id,
@@ -231,7 +254,9 @@
                                    'spawning')
 
         try:
-            self.driver.spawn(instance_ref)
+            self.driver.spawn(instance_ref,
+                              block_device_mapping=block_device_mapping,
+                              boot=boot)
             now = datetime.datetime.utcnow()
             self.db.instance_update(context,
                                     instance_id,
@@ -243,6 +268,10 @@
             self.db.instance_set_state(context,
                                        instance_id,
                                        power_state.SHUTDOWN)
+            for volume in instance_ref['volumes']:
+                self.volume_manager.remove_compute_volume(context,
+                                                          volume['id'])
+                self.db.volume_detached(context, volume['id'])
 
         self._update_state(context, instance_id)
 
@@ -963,8 +992,9 @@
 
         # Detaching volumes.
         try:
-            for vol in self.db.volume_get_all_by_instance(ctxt, instance_id):
-                self.volume_manager.remove_compute_volume(ctxt, vol['id'])
+            for volume in self.db.volume_get_all_by_instance(ctxt,
+                                                             instance_id):
+                self.volume_manager.remove_compute_volume(ctxt, volume['id'])
         except exception.NotFound:
             pass
 

=== modified file 'nova/virt/driver.py'
--- nova/virt/driver.py	2011-03-30 00:35:24 +0000
+++ nova/virt/driver.py	2011-04-21 04:09:46 +0000
@@ -61,7 +61,7 @@
         """Return a list of InstanceInfo for all registered VMs"""
         raise NotImplementedError()
 
-    def spawn(self, instance, network_info=None):
+    def spawn(self, instance, network_info=None, block_device_mapping=[]):
         """Launch a VM for the specified instance"""
         raise NotImplementedError()
 

=== modified file 'nova/virt/libvirt.xml.template'
--- nova/virt/libvirt.xml.template	2011-03-30 05:59:13 +0000
+++ nova/virt/libvirt.xml.template	2011-04-21 04:09:46 +0000
@@ -39,7 +39,7 @@
                 <initrd>${ramdisk}</initrd>
             #end if
         #else
-            <boot dev="hd" />
+            <boot dev='${boot}' />
         #end if
     #end if
 #end if
@@ -67,11 +67,24 @@
             <target dev='${disk_prefix}b' bus='${disk_bus}'/>
         </disk>
     #else
-        <disk type='file'>
-            <driver type='${driver_type}'/>
-            <source file='${basepath}/disk'/>
-            <target dev='${disk_prefix}a' bus='${disk_bus}'/>
-        </disk>
+
+        ## FIXME: allow no device
+        #if $getVar('disk', False)
+            #if $boot == 'cdrom'
+        <disk type='file' device='cdrom'>
+            <driver type='${driver_type}'/>
+            <source file='${basepath}/disk'/>
+            <target dev='hdc' bus='ide'/>
+        </disk>
+            #else if not ($getVar('ebs_root', False))
+        <disk type='file' device='disk'>
+            <driver type='${driver_type}'/>
+            <source file='${basepath}/disk'/>
+            <target dev='${disk_prefix}a' bus='${disk_bus}'/>
+        </disk>
+            #end if
+        #end if
+
         #if $getVar('local', False)
             <disk type='file'>
                 <driver type='${driver_type}'/>
@@ -79,6 +92,20 @@
                 <target dev='${disk_prefix}b' bus='${disk_bus}'/>
             </disk>
         #end if
+        #for $volume in $volumes
+            #if not $varExists('volume.type')
+                #set $volume.type = 'block'
+            #end if
+        <disk type='${volume.type}'>
+            <driver type='raw'/>
+            #if $volume.type == 'network'
+            <source protocol='${volume.protocol}' name='${volume.name}'/>
+            #else
+            <source dev='${volume.device_path}'/>
+            #end if
+            <target dev='${volume.mount_device}' bus='${disk_bus}'/>
+        </disk>
+        #end for
     #end if
 #end if

=== modified file 'nova/virt/libvirt_conn.py'
--- nova/virt/libvirt_conn.py	2011-04-18 23:40:03 +0000
+++ nova/virt/libvirt_conn.py	2011-04-21 04:09:46 +0000
@@ -39,6 +39,7 @@
 import multiprocessing
 import os
 import random
+import re
 import shutil
 import subprocess
 import sys
@@ -202,6 +203,10 @@
         network_info.append((network, mapping))
     return network_info
 
+# TODO: clean up open coding like
+# volume['mountpoint'] = volume['mountpoint'].rpartition("/")[2]
+def _strip_dev(mount_path):
+    return re.sub(r'^/dev/', '', mount_path)
 
 class LibvirtConnection(driver.ComputeDriver):
 
@@ -608,15 +613,19 @@
     # NOTE(ilyaalekseyev): Implementation like in multinics
     #                      for xenapi(tr3buchet)
     @exception.wrap_exception
-    def spawn(self, instance, network_info=None):
-        xml = self.to_xml(instance, False, network_info)
+    def spawn(self, instance, network_info=None,
+              block_device_mapping=[], boot=None):
+        xml = self.to_xml(instance, False, network_info=network_info,
+                          block_device_mapping=block_device_mapping,
+                          boot=boot)
         db.instance_set_state(context.get_admin_context(),
                               instance['id'],
                               power_state.NOSTATE,
                               'launching')
         self.firewall_driver.setup_basic_filtering(instance, network_info)
         self.firewall_driver.prepare_instance_filter(instance, network_info)
-        self._create_image(instance, xml, network_info)
+        self._create_image(instance, xml, network_info=network_info,
+                           block_device_mapping=block_device_mapping)
         domain = self._create_new_domain(xml)
         LOG.debug(_("instance %s: is running"), instance['name'])
         self.firewall_driver.apply_instance_filter(instance)
@@ -797,7 +806,7 @@
         # TODO(vish): should we format disk by default?
 
     def _create_image(self, inst, libvirt_xml, suffix='', disk_images=None,
-                      network_info=None):
+                      network_info=None, block_device_mapping=[]):
         if not network_info:
             network_info = _get_network_info(inst)
 
@@ -851,25 +860,28 @@
                               user=user,
                               project=project)
 
-        root_fname = '%08x' % int(disk_images['image_id'])
-        size = FLAGS.minimum_root_size
-
         inst_type_id = inst['instance_type_id']
         inst_type = instance_types.get_instance_type(inst_type_id)
-        if inst_type['name'] == 'm1.tiny' or suffix == '.rescue':
-            size = None
-            root_fname += "_sm"
-
-        self._cache_image(fn=self._fetch_image,
-                          target=basepath('disk'),
-                          fname=root_fname,
-                          cow=FLAGS.use_cow_images,
-                          image_id=disk_images['image_id'],
-                          user=user,
-                          project=project,
-                          size=size)
-
-        if inst_type['local_gb']:
+
+        if (disk_images['image_id'] and
+            not self._volume_in_mapping(self.root_mount_device,
+                                        block_device_mapping)):
+            root_fname = '%08x' % int(disk_images['image_id'])
+            size = FLAGS.minimum_root_size
+            if inst_type['name'] == 'm1.tiny' or suffix == '.rescue':
+                size = None
+                root_fname += "_sm"
+            self._cache_image(fn=self._fetch_image,
+                              target=basepath('disk'),
+                              fname=root_fname,
+                              cow=FLAGS.use_cow_images,
+                              image_id=disk_images['image_id'],
+                              user=user,
+                              project=project,
+                              size=size)
+
+        if inst_type['local_gb'] and not self._volume_in_mapping(
+                self.local_mount_device, block_device_mapping):
             self._cache_image(fn=self._create_local,
                               target=basepath('disk.local'),
                               fname="local_%s" % inst_type['local_gb'],
@@ -994,7 +1006,18 @@
 
         return result
 
-    def to_xml(self, instance, rescue=False, network_info=None):
+    root_mount_device = 'vda'  # FIXME for now. it's hard coded.
+    local_mount_device = 'vdb'  # FIXME for now. it's hard coded.
+    def _volume_in_mapping(self, mount_device, block_device_mapping):
+        mount_device_ = _strip_dev(mount_device)
+        for vol in block_device_mapping:
+            vol_mount_device = _strip_dev(vol['mount_device'])
+            if vol_mount_device == mount_device_:
+                return True
+        return False
+
+    def to_xml(self, instance, rescue=False,
+               network_info=None, block_device_mapping=[], boot='hd'):
         # TODO(termie): cache?
         LOG.debug(_('instance %s: starting toXML method'), instance['name'])
 
@@ -1007,6 +1030,7 @@
         for (network, mapping) in network_info:
             nics.append(self._get_nic_for_xml(network,
                                               mapping))
+
         # FIXME(vish): stick this in db
         inst_type_id = instance['instance_type_id']
         inst_type = instance_types.get_instance_type(inst_type_id)
@@ -1016,6 +1040,20 @@
         else:
             driver_type = 'raw'
 
+        #for volume in block_device_mapping:
+        #    volume['mountpoint'] = volume['mountpoint'].rpartition("/")[2]
+        for vol in block_device_mapping:
+            vol['mount_device'] = _strip_dev(vol['mount_device'])
+
+
+        ebs_root = self._volume_in_mapping(self.root_mount_device,
+                                           block_device_mapping)
+        if self._volume_in_mapping(self.local_mount_device,
+                                   block_device_mapping):
+            local_gb = False
+        else:
+            local_gb = inst_type['local_gb']
+
         xml_info = {'type': FLAGS.libvirt_type,
                     'name': instance['name'],
                     'basepath': os.path.join(FLAGS.instances_path,
@@ -1023,9 +1061,12 @@
                     'memory_kb': inst_type['memory_mb'] * 1024,
                     'vcpus': inst_type['vcpus'],
                     'rescue': rescue,
-                    'local': inst_type['local_gb'],
+                    'local': local_gb,
+                    'boot': boot,
                     'driver_type': driver_type,
-                    'nics': nics}
+                    'nics': nics,
+                    'ebs_root': ebs_root,
+                    'volumes': block_device_mapping}
 
         if FLAGS.vnc_enabled:
             if FLAGS.libvirt_type != 'lxc':
@@ -1037,7 +1078,8 @@
         if instance['ramdisk_id']:
             xml_info['ramdisk'] = xml_info['basepath'] + "/ramdisk"
 
-        xml_info['disk'] = xml_info['basepath'] + "/disk"
+        if instance['image_id']:
+            xml_info['disk'] = xml_info['basepath'] + "/disk"
 
         xml = str(Template(self.libvirt_xml, searchList=[xml_info]))
         LOG.debug(_('instance %s: finished toXML method'),