This article walks through how Nova creates a virtual machine, tracing the code path from the initial API request all the way down to the hypervisor spawning the instance.
Overview:
1. The create-instance API
As usual, let's start with the API request:
REQ: curl -i 'http://ubuntu80:8774/v2/0e962df9db3f4469b3d9bfbc5ffdaf7e/servers' \
  -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "User-Agent: python-novaclient" \
  -H "X-Auth-Project-Id: admin" \
  -H "X-Auth-Token: {SHA1}e87219521f61238b143fbb323b962930380ce022" \
  -d '{"server": {"name": "ubuntu_test", "imageRef": "cde1d850-65bb-48f6-8ee9-b990c7ccf158", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "cfa25cef-96c3-46f1-8522-d9518eb5a451"}]}}'
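For readers who prefer Python over curl, the request body can be assembled as below. This is an illustrative helper only (the function name is made up, and the token handling and actual HTTP send are omitted):

```python
import json

def build_boot_request(name, image_ref, flavor_ref, network_uuid, count=1):
    """Build the JSON body for POST /v2/{tenant_id}/servers."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "max_count": count,
            "min_count": count,
            "networks": [{"uuid": network_uuid}],
        }
    }

body = build_boot_request("ubuntu_test",
                          "cde1d850-65bb-48f6-8ee9-b990c7ccf158",
                          "2",
                          "cfa25cef-96c3-46f1-8522-d9518eb5a451")
payload = json.dumps(body)  # what goes into curl's -d argument
```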
The request is again handled by a Controller; the exact location is:
nova.api.openstack.compute.servers.Controller.create
Note that this method carries the decorator @wsgi.response(202). Under HTTP, status 202 (Accepted) means the server has accepted the request but has not yet processed it, which tells us that instance creation is an asynchronous task.
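A minimal sketch of what such a decorator does (illustrative only, not nova's actual wsgi module): it tags the handler with the success status code for the WSGI layer to pick up, so the handler itself only returns the body.

```python
def response(code):
    """Attach the HTTP status to use when the handler succeeds."""
    def decorator(func):
        func.wsgi_code = code
        return func
    return decorator

@response(202)
def create(req, body):
    # The handler returns only the body; the framework reads
    # create.wsgi_code to build the "202 Accepted" response.
    return {"server": {"status": "BUILD"}}
```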
The method ultimately calls self.compute_api.create(...), where self.compute_api = compute.API() is assigned in __init__(...).
compute.API() thus resolves to nova.compute.api.API.create(...), which in turn calls nova.compute.api.API._create_instance(...).
Inside nova.compute.api.API._create_instance(...) is where things get interesting.
2. The task state changes to SCHEDULING for the first time
Inside nova.compute.api.API._create_instance(...) there is this call:
instances = self._provision_instances(
    context, instance_type, min_count, max_count, base_options,
    boot_meta, security_groups, block_device_mapping,
    shutdown_terminate, instance_group, check_server_group_quota)
This method lives at nova.compute.api.API._provision_instances and internally makes the following call:
instance = self.create_db_entry_for_new_instance(...)
Inside nova.compute.api.API.create_db_entry_for_new_instance (i.e. self.create_db_entry_for_new_instance(...)), there is the call:
self._populate_instance_for_create(context, instance, image, index, security_group, instance_type)
which corresponds to nova.compute.api.API._populate_instance_for_create, where the task state is set to scheduling for the first time:
instance.vm_state = vm_states.BUILDING
instance.task_state = task_states.SCHEDULING
Back in _provision_instances, the rest of the work is mainly reserving quota.
3. From nova-api to nova-conductor
Inside nova.compute.api.API._create_instance(...) there is another call:
self.compute_task_api.build_instances(context,
    instances=instances, image=boot_meta,
    filter_properties=filter_properties,
    admin_password=admin_password,
    injected_files=injected_files,
    requested_networks=requested_networks,
    security_groups=security_groups,
    block_device_mapping=block_device_mapping,
    legacy_bdm=False)
From this point the flow leaves nova-api: nova-api invokes methods in nova-conductor, nova-scheduler, and nova-compute.
@property
def compute_task_api(self):
    if self._compute_task_api is None:
        # TODO(alaski): Remove calls into here from conductor manager so
        # that this isn't necessary. #1180540
        from nova import conductor
        self._compute_task_api = conductor.ComputeTaskAPI()
    return self._compute_task_api
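The lazy-initialization pattern used by this property can be sketched in isolation (the class names below are stand-ins, not Nova's real ones): the expensive client is built on first access and cached, and in Nova the import is also deferred to break an import cycle.

```python
class ConductorTaskAPI:
    """Illustrative stand-in for conductor.ComputeTaskAPI()."""
    instances_created = 0

    def __init__(self):
        ConductorTaskAPI.instances_created += 1

class ComputeAPI:
    def __init__(self):
        self._compute_task_api = None

    @property
    def compute_task_api(self):
        # Deferred construction: built once, on first access, then cached.
        if self._compute_task_api is None:
            self._compute_task_api = ConductorTaskAPI()
        return self._compute_task_api

api = ComputeAPI()
first = api.compute_task_api
second = api.compute_task_api  # same cached object, no second construction
```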
4. nova-conductor calls nova-scheduler and nova-compute
We have now reached the conductor side, at nova.conductor.ComputeTaskAPI:
def ComputeTaskAPI(*args, **kwargs):
    use_local = kwargs.pop('use_local', False)
    if oslo.config.cfg.CONF.conductor.use_local or use_local:
        api = conductor_api.LocalComputeTaskAPI
    else:
        api = conductor_api.ComputeTaskAPI
    return api(*args, **kwargs)
Here use_local defaults to False, so in a standard deployment (with CONF.conductor.use_local unset) the selected class is the RPC-based conductor_api.ComputeTaskAPI, and the request travels over the message bus to the nova-conductor service. When use_local is enabled, conductor_api.LocalComputeTaskAPI is chosen instead, whose constructor (__init__(...)) instantiates manager.ComputeTaskManager directly. Either way, the call ends up in nova.conductor.manager.ComputeTaskManager.
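The selection logic condenses to a small runnable sketch (class names here are illustrative stand-ins for the real conductor_api classes):

```python
class LocalComputeTaskAPI:
    """Stand-in: drives ComputeTaskManager in-process."""

class RPCComputeTaskAPI:
    """Stand-in: talks to the nova-conductor service over the message bus."""

def compute_task_api(use_local=False, conf_use_local=False):
    # Mirrors the branch in nova.conductor.ComputeTaskAPI:
    # either flag forces the local (in-process) implementation.
    cls = LocalComputeTaskAPI if (conf_use_local or use_local) \
        else RPCComputeTaskAPI
    return cls()
```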
That class's build_instances method (nova.conductor.manager.ComputeTaskManager.build_instances(...)) drives the next steps.
nova-conductor builds a request_spec dict inside build_instances():
request_spec = scheduler_utils.build_request_spec(...)
which contains the detailed instance information. nova-scheduler uses it to select the best host for the instance:
hosts = self.scheduler_client.select_destinations(..., request_spec, ...)
nova-conductor then calls nova-compute over RPC to build the instance:
self.compute_rpcapi.build_and_run_instance(context,
    instance=instance, host=host['host'], image=image,
    request_spec=request_spec,
    filter_properties=local_filter_props,
    admin_password=admin_password,
    injected_files=injected_files,
    requested_networks=requested_networks,
    security_groups=security_groups,
    block_device_mapping=bdms, node=host['nodename'],
    limits=host['limits'])
This call goes through nova.compute.rpcapi.ComputeAPI.build_and_run_instance.
There you can see the RPC method name 'build_and_run_instance' being invoked via cctxt.cast(...), an asynchronous remote call that returns without waiting for a result (see the oslo.messaging documentation for details):
cctxt.cast(ctxt, 'build_and_run_instance',
    instance=instance, image=image, request_spec=request_spec,
    filter_properties=filter_properties,
    admin_password=admin_password,
    injected_files=injected_files,
    requested_networks=requested_networks,
    security_groups=security_groups,
    block_device_mapping=block_device_mapping,
    node=node, limits=limits)
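To see why cast is "asynchronous" compared with call, here is a toy model of the two semantics, mimicking oslo.messaging's behaviour without the real library (everything below is illustrative, not oslo code):

```python
import queue
import threading

class FakeRPCClient:
    """Toy RPC client: cast is fire-and-forget, call waits for a reply."""

    def __init__(self):
        self._q = queue.Queue()
        threading.Thread(target=self._server, daemon=True).start()

    def _server(self):
        # Stand-in for the remote service consuming the message queue.
        while True:
            method, kwargs, reply = self._q.get()
            result = f"ran {method}"
            if reply is not None:
                reply.put(result)

    def cast(self, ctxt, method, **kwargs):
        # Asynchronous: enqueue the message and return immediately.
        self._q.put((method, kwargs, None))

    def call(self, ctxt, method, **kwargs):
        # Synchronous: block until the server posts a reply.
        reply = queue.Queue()
        self._q.put((method, kwargs, reply))
        return reply.get()

client = FakeRPCClient()
client.cast(None, 'build_and_run_instance')   # returns at once, no result
result = client.call(None, 'ping')            # blocks until "ran ping"
```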
The cast is handled by nova.compute.manager.ComputeManager.build_and_run_instance(...), which invokes _do_build_and_run_instance(...) (note: via a spawn, i.e. in a separate greenthread).
The main call inside _do_build_and_run_instance(...) is the _build_and_run_instance function (nova.compute.manager.ComputeManager._build_and_run_instance(...)):
5. Building and running the instance
Drilling into nova.compute.manager.ComputeManager._build_and_run_instance(...), we find the following code:
def _build_and_run_instance(self, context, instance, image, injected_files,
        admin_password, requested_networks, security_groups,
        block_device_mapping, node, limits, filter_properties):
    image_name = image.get('name')
    self._notify_about_instance_usage(context, instance, 'create.start',
            extra_usage_info={'image_name': image_name})
    try:
        # Resource tracker
        rt = self._get_resource_tracker(node)
        with rt.instance_claim(context, instance, limits) as inst_claim:
            # NOTE(russellb) It's important that this validation be done
            # *after* the resource tracker instance claim, as that is where
            # the host is set on the instance.
            self._validate_instance_group_policy(context, instance,
                    filter_properties)
            # Allocate resources (network and storage); inside this, the
            # task state goes from task_states.SCHEDULING to
            # task_states.NETWORKING and then to
            # task_states.BLOCK_DEVICE_MAPPING
            with self._build_resources(context, instance,
                    requested_networks, security_groups, image,
                    block_device_mapping) as resources:
                instance.vm_state = vm_states.BUILDING
                # Task state becomes "spawning"
                instance.task_state = task_states.SPAWNING
                instance.numa_topology = inst_claim.claimed_numa_topology
                instance.save(expected_task_state=
                        task_states.BLOCK_DEVICE_MAPPING)
                block_device_info = resources['block_device_info']
                network_info = resources['network_info']
                # Call the underlying virt driver to spawn the instance
                self.driver.spawn(context, instance, image,
                        injected_files, admin_password,
                        network_info=network_info,
                        block_device_info=block_device_info)
    except ...:
        # (error handling elided in this excerpt)
        ...

    # NOTE(alaski): This is only useful during reschedules, remove it now.
    instance.system_metadata.pop('network_allocated', None)
    # Read back the instance's power state
    instance.power_state = self._get_power_state(context, instance)
    # The instance is now up and running
    instance.vm_state = vm_states.ACTIVE
    # Clear the task state
    instance.task_state = None
    # Record the launch time
    instance.launched_at = timeutils.utcnow()
    try:
        instance.save(expected_task_state=task_states.SPAWNING)
    except (exception.InstanceNotFound,
            exception.UnexpectedDeletingTaskStateError) as e:
        with excutils.save_and_reraise_exception():
            self._notify_about_instance_usage(context, instance,
                    'create.end', fault=e)
    # Signal that the creation process has finished
    self._notify_about_instance_usage(context, instance, 'create.end',
            extra_usage_info={'message': _('Success')},
            network_info=network_info)
The first step is to obtain a resource tracker (RT). Note that RTs come in two kinds, the claim tracker (Claim RT) and the periodic tracker (Periodic RT), and you can also plug in your own as an extensible tracker (Extensible RT).
As the name suggests, the RT obtained in _build_and_run_instance is a claim tracker: it verifies the resources on the compute node and raises an exception if the claim cannot be satisfied.
rt = self._get_resource_tracker(node)
with rt.instance_claim(context, instance, limits) as inst_claim:
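The claim-as-context-manager idea can be illustrated with a toy tracker (the signature below is simplified; nova's real instance_claim takes a context, an instance, and limits): entering the with-block reserves resources or raises, and a failure inside the block releases them again.

```python
class InsufficientResources(Exception):
    pass

class Claim:
    """Toy claim: reserve on entry, release if the build fails."""

    def __init__(self, tracker, amount):
        self.tracker, self.amount = tracker, amount

    def __enter__(self):
        if self.tracker.free < self.amount:
            raise InsufficientResources()
        self.tracker.free -= self.amount
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            # Build failed inside the with-block: give resources back
            self.tracker.free += self.amount
        return False  # never swallow the exception

class Tracker:
    def __init__(self, free):
        self.free = free

    def instance_claim(self, amount):
        return Claim(self, amount)

rt = Tracker(free=4096)
with rt.instance_claim(1024):
    pass  # build succeeds; 1024 stays reserved

try:
    with rt.instance_claim(4096):   # more than the 3072 still free
        pass
except InsufficientResources:
    pass  # claim rejected, nothing was reserved
```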
Also note the _build_resources function: inside it, instance.task_state
goes from task_states.SCHEDULING to task_states.NETWORKING and then to task_states.BLOCK_DEVICE_MAPPING:
self._build_resources(context, instance, requested_networks, security_groups, image, block_device_mapping)
Once resource allocation is complete, the task state moves from task_states.BLOCK_DEVICE_MAPPING to task_states.SPAWNING:
instance.task_state = task_states.SPAWNING
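The overall task_state progression described in this section can be modelled as a simple linear state machine (the state names mirror nova.compute.task_states; the code itself is only an illustration):

```python
# SCHEDULING -> NETWORKING -> BLOCK_DEVICE_MAPPING -> SPAWNING -> done
TRANSITIONS = {
    'scheduling': 'networking',
    'networking': 'block_device_mapping',
    'block_device_mapping': 'spawning',
    'spawning': None,   # task finished; vm_state becomes ACTIVE
}

def advance(state):
    """Return the next task_state, refusing unknown states."""
    if state not in TRANSITIONS:
        raise ValueError(f"unexpected task_state: {state}")
    return TRANSITIONS[state]

state = 'scheduling'
path = [state]
while state is not None:
    state = advance(state)
    path.append(state)
```

This "expected current state" check is the same idea behind instance.save(expected_task_state=...) in the code above: a save only succeeds if the task state is the one the caller expects.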
When everything is ready, self.driver.spawn is called to spawn the instance; underneath, the Libvirt driver carries out the actual spawn:
self.driver.spawn(context, instance, image, injected_files,
        admin_password, network_info=network_info,
        block_device_info=block_device_info)
All that remains is powering the instance on, setting the timestamps, and emitting the creation-finished notification. Success!