Troubleshooting cloud instance creation failures in OpenStack

Goal: resize the memory and disk of an existing OpenStack instance. When we chose "Resize Instance" on the instance, no error was reported, but the instance's memory and disk did not change. /var/log/nova/nova-compute.log showed the following:

[instance: c63d94-a239-8b4eb0343a13] Setting instance back to ACTIVE after: Instance rollback performed due to: Resize error: not able to execute ssh command: Unexpected error while running command.
Command: ssh 192.168.10.247 mkdir -p /var/lib/nova/instances/c63d94-a239-8b4eb0343a13
Exit code: 255
Stdout: u''
Stderr: u'Host key verification failed.\r\n'

The cause: changing an instance's flavor in OpenStack is effectively a migration of the instance between hypervisors, so the compute nodes must be able to reach each other over SSH without a password. Because the Nova component manages the instances, the passwordless access has to be set up for the nova user. With two compute nodes, compute1 and compute2:

a. On both nodes, edit /etc/passwd and give nova a login shell, changing
   nova:x:110:116::/var/lib/nova:/bin/false
   to
   nova:x:110:116::/var/lib/nova:/bin/sh
b. Run passwd nova to set a password for the nova user (on every compute node).
c. Run su - nova, then ssh-keygen to generate a key pair, then ssh-copy-id compute2; on the other node, run ssh-copy-id compute1 the same way.

Once the two compute nodes can log in to each other without a password, resizing the instance works. A consolidated sketch of these commands follows below.
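A minimal sketch of steps a-c in one place, assuming two compute nodes named compute1 and compute2 as above (usermod is just the scripted equivalent of editing /etc/passwd by hand; run the mirrored ssh-copy-id on the other node):

# On each compute node, as root:
usermod -s /bin/sh nova                      # give nova a login shell, same effect as the /etc/passwd edit
passwd nova                                  # temporary password, only needed for ssh-copy-id
su - nova
# Now as the nova user (shown for compute1; swap the host name on compute2):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa     # empty passphrase so nova can ssh unattended
ssh-copy-id compute2
ssh compute2 true                            # must return without any password or host-key prompt

The first ssh also records the peer's host key in ~/.ssh/known_hosts, which is exactly what the "Host key verification failed" error above was complaining about.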
Problem 1: creating a user fails with HTTP 401.

[root@controller ~]# openstack user create --password-prompt neutron
User Password:
Repeat User Password:
The request you have made requires authentication. (HTTP 401) (Request-ID: req-b0-4f07-89fa-

Fix: remove the admin-token environment variable:

[root@controller ~]# echo $OS_TOKEN
d55a8891f4adb5796f32
[root@controller ~]# unset OS_TOKEN
[root@controller ~]# echo $OS_TOKEN

[root@controller ~]# openstack user create --password-prompt neutron
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | c                                |
| name      | neutron                          |
+-----------+----------------------------------+

With OS_TOKEN still set, users can be listed and deleted but not created; the reason is still to be investigated.

Problem 2: creating an instance fails, and scheduler.log on the controller node shows "Filter AggregateRamFilter returned 0 hosts". Zero hosts returned means there is not enough free memory; add memory.

Problem 3: creating volumes and instances fails, on a deployment where Ceph is already integrated with OpenStack. The cinder scheduler log shows (several "Arguments dropped when creating context" warnings around these lines omitted):

17:53:42.042 1618 WARNING cinder.scheduler.filters.capacity_filter [req-d88-4f46-9684-42def152f89f cfb18f549d654be69cc4ba0ff146df89 6addc301cbc226ec303eeb1 - - -] Insufficient free space for volume provision allocated 10093 GB, allow provisioned 8331.0 GB
17:53:42.049 1618 ERROR cinder.scheduler.flows.create_volume [req-d88-4f46-9684-42def152f89f cfb18f549d654be69cc4ba0ff146df89 6addc301cbc226ec303eeb1 - - -] Failed to schedule_create_volume: No valid host was found.

At first glance it looked like the Ceph cluster was full. Checking the actual usage:

[root@node-11 ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    16748G     16601G         146G          0.87
POOLS:
    NAME            ID     USED       %USED     MAX AVAIL     OBJECTS
    data            0           0         0         8295G           0
    metadata        1           0         0         8295G           0
    rbd             2           0         0         8295G           0
    images          3      28535M      0.17         8295G        3587
    volumes         4      37022M      0.22         8295G        9473
    volumes_ssd     5           0         0         8295G           0
    compute         6           0         0         8295G           0

And the replica count:

[root@node-12 ~]# ceph osd pool get rbd size
size: 2

With two replicas, the usable Ceph capacity works out to roughly 8300 GB, of which only about 70 GB is actually written. But adding up the sizes of all existing volumes in the dashboard showed that more than 8000 GB had already been provisioned, which is why creating another volume failed ("allocated 10093 GB, allow provisioned 8331.0 GB"). See the capacity check sketched below.
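The capacity filter compares provisioned (allocated) sizes, not bytes actually written to Ceph. A hypothetical way to total the provisioned volume sizes from the CLI instead of clicking through the dashboard (the Size column's field index can differ between cinder releases, so adjust $5 if needed):

[root@controller ~]# cinder list --all-tenants | awk -F'|' 'NR>3 && $5 ~ /[0-9]/ {sum += $5} END {print sum " GB provisioned"}'

If over-provisioning thin RBD volumes is intentional, releases that support it let you raise max_over_subscription_ratio in cinder.conf; otherwise, delete unused volumes to free allocated capacity.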
Problem 4: a customer could not create a 4-vCPU / 16 GB instance, while creating two 2-vCPU / 8 GB instances worked fine. /var/log/nova/scheduler.log shows:

01:30:28.286 6368 INFO nova.scheduler.filter_scheduler [req-13b090f9-abae-414f-a2f7-44aabe45e0ae412dd1c2d3c26 cd92e494a4954baab828a4da2ac657f4] Attempting to build 1 instance(s) uuids: [u'7814aea6-13ef-41c7-a69d-01b8cf567e07']
01:30:28.427 6368 INFO nova.filters [req-13b090f9-abae-414f-a2f7-44aabe45e0ae412dd1c2d3c26 cd92e494a4954baab828a4da2ac657f4] Filter AggregateRamFilter returned 0 hosts
01:30:28.428 6368 WARNING nova.scheduler.driver [req-13b090f9-abae-414f-a2f7-44aabe45e0ae412dd1c2d3c26 cd92e494a4954baab828a4da2ac657f4] [instance: 7814aea6-13ef-41c7-a69d-01b8cf567e07] Setting instance to ERROR state.

Looking at the compute nodes' resources explained it: some nodes had spare CPU but not enough memory, while others had spare memory but not enough CPU. Two 2C/8G instances can be spread across such nodes, but no single node has both 4 free vCPUs and 16 GB of free RAM, so the one 4C/16G instance cannot be scheduled. The per-node check is sketched below.
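To see this fragmentation directly, the free RAM and vCPUs of each hypervisor can be inspected one node at a time; a hypothetical check with the nova CLI of that era (host names are placeholders):

[root@controller ~]# nova hypervisor-list
[root@controller ~]# nova hypervisor-show compute1 | egrep 'vcpus|memory_mb|free_ram_mb'
[root@controller ~]# nova hypervisor-stats      # cluster-wide totals

The cluster-wide totals from hypervisor-stats can show plenty of free CPU and RAM overall even when every single node is short of one or the other, which is exactly why the per-node view matters here.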
Problem 5: instance creation fails because of libvirtd. With nova debug logging turned on, an error showed up (the log screenshot from the original post is not preserved). The first suspicion was that the CPU did not support virtualization, but the hypervisor is a physical machine, so that idea was dropped. Next, the state of the libvirtd service:

systemctl status -l libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 19:28:16 CST; 1 day 20h ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 8360 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─8360 /usr/sbin/libvirtd --listen

Sep 22 13:51:07 zp2cp010 libvirtd[8360]: Activation of org.freedesktop.machine1 timed out
Sep 22 13:55:45 zp2cp010 libvirtd[8360]: Activation of org.freedesktop.machine1 timed out
(the same "Activation of org.freedesktop.machine1 timed out" message repeats through the afternoon)
Sep 22 15:51:57 zp2cp010 libvirtd[29250]: internal error: Cannot probe for supported suspend types
Sep 22 15:51:57 zp2cp010 libvirtd[29250]: Failed to get host power management capabilities
Sep 22 15:52:22 zp2cp010 libvirtd[29250]: error from service: GetMachineByPID: Activation of org.freedesktop.machine1 timed out
Sep 22 15:52:27 zp2cp010 libvirtd[29250]: End of file while reading data: Input/output error
Sep 22 15:52:47 zp2cp010 libvirtd[29250]: error from service: GetMachineByPID: Activation of org.freedesktop.machine1 timed out
Sep 22 15:52:48 zp2cp010 libvirtd[29250]: internal error: Cannot probe for supported suspend types
Sep 22 15:52:48 zp2cp010 libvirtd[29250]: Failed to get host power management capabilities
Sep 22 15:53:54 zp2cp010 libvirtd[29250]: Activation of org.freedesktop.machine1 timed out

So the libvirt service itself was unhealthy. The obvious move, restarting libvirtd, did not help. A search turned up the actual fix: restart the systemd-machined D-Bus service first, then libvirtd:

systemctl restart dbus-org.freedesktop.machine1.service
systemctl restart libvirtd.service

libvirtd's status afterwards:

[root@node-1 ~]# systemctl status libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 16:51:53 CST; 3s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 1494879 (libvirtd)
   Memory: 17.1M
   CGroup: /system.slice/libvirtd.service
           ├─   3813 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libv...
           ├─   3814 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libv...
           └─1494879 /usr/sbin/libvirtd

Dec 29 16:51:53 node-1 systemd[1]: Starting Virtualization daemon...
Dec 29 16:51:53 node-1 libvirtd[1494879]: libvirt version: 1.2.17, package: 13.el7_2.4 (CentOS BuildSystem <http://bugs.cent...os.org>)
Dec 29 16:51:53 node-1 libvirtd[1494879]: Module /usr/lib64/libvirt/connection-driver/libvirt_driver_lxc.so not accessible
Dec 29 16:51:53 node-1 systemd[1]: Started Virtualization daemon.
Dec 29 16:51:53 node-1 dnsmasq[3813]: read /etc/hosts - 2 addresses
Dec 29 16:51:53 node-1 dnsmasq[3813]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Hint: Some lines were ellipsized, use -l to show in full.
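The "Activation of org.freedesktop.machine1 timed out" messages mean libvirtd could not reach systemd-machined over D-Bus. As an extra verification step (not in the original post), machinectl talks to that same service, so getting an answer from it confirms bus activation works again:

[root@node-1 ~]# machinectl list                             # queries org.freedesktop.machine1; lists machines libvirt has registered
[root@node-1 ~]# systemctl status systemd-machined.service   # should come up once a machine is registered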
Problem solved!

Problem 6: instance creation fails for network reasons. /var/log/nova/nova-compute.log:

11:23:52.283 52305 INFO nova.virt.libvirt.driver [req-cfdcf014-98c9-45ab-9fcb-e8ce68211a00 efc3374a0eff4fb6a0bddb6 9a42f3ccbeaf264182bce6f] [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261] Creating image
11:23:52.451 52305 ERROR nova.compute.manager [req-cfdcf014-98c9-45ab-9fcb-e8ce68211a00 efc3374a0eff4fb6a0bddb6 9a42f3ccbeaf264182bce6f] [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261] Instance failed to spawn
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261] Traceback (most recent call last):
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1853, in _spawn
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2464, in spawn
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3865, in to_xml
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3654, in get_guest_config
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py", line 384, in get_config
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261]     _("Unexpected vif_type=%s") % vif_type)
11:23:52.451 52305 TRACE nova.compute.manager [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261] NovaException: Unexpected vif_type=binding_failed
11:23:52.452 52305 WARNING nova.compute.resource_tracker [req-cfdcf014-98c9-45ab-9fcb-e8ce68211a00 efc3374a0eff4fb6a0bddb6 9a42f3ccbeaf264182bce6f] 'list' object has no attribute 'get'
11:23:52.528 52305 INFO nova.compute.manager [req-cfdcf014-98c9-45ab-9fcb-e8ce68211a00 efc3374a0eff4fb6a0bddb6 9a42f3ccbeaf264182bce6f] [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261] Terminating instance
11:23:53.071 52305 ERROR nova.virt.libvirt.driver [-] [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261] During wait destroy, instance disappeared.
11:23:53.266 52305 ERROR nova.compute.manager [req-cfdcf014-98c9-45ab-9fcb-e8ce68211a00 efc3374a0eff4fb6a0bddb6 9a42f3ccbeaf264182bce6f] [instance: 687a4dd2-cc63-424d-a1c2-4a25b5fbb261] Error: Unexpected vif_type=binding_failed

An "Unexpected vif_type=binding_failed" error generally points to a network problem, so the next stop was the neutron-linuxbridge-agent on the compute node: the service was not running. Starting it by hand fixed instance creation. A sketch of the check follows.
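A minimal sketch of the check and fix, assuming a systemd-based compute node and the usual agent service name:

[root@node-1 ~]# systemctl status neutron-linuxbridge-agent
[root@node-1 ~]# systemctl start neutron-linuxbridge-agent
[root@node-1 ~]# systemctl enable neutron-linuxbridge-agent   # so the agent survives reboots
[root@node-1 ~]# neutron agent-list | grep -i linuxbridge     # the agent should show as alive (:-)) once registered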
Problem 7: in an OpenStack HA deployment, previewing an image's resources in the dashboard fails. The troubleshooting approach:

1. Check whether the glance CLI can fetch the image list:

[root@node-1 share]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| d7bfbd76-2796-48bc-a0e9-c            | cirros |
+--------------------------------------+--------+

2. Check whether nova can fetch the image list:

[root@node-1 share]# nova image-list

If nova cannot, point the glance API at the right registry host:

[root@node-1 share]# vim /etc/glance/glance-api.conf
registry_host = 192.168.11.63    # fill in the VIP here, i.e. the management network's virtual IP
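After editing glance-api.conf, the API service needs a restart for the new registry_host to take effect; a sketch assuming the usual RDO/CentOS service names:

[root@node-1 ~]# systemctl restart openstack-glance-api openstack-glance-registry
[root@node-1 ~]# nova image-list    # should now return the same cirros image that glance lists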