
OpenStack Mitaka (M release) Deployment

Published: 2020-07-21 12:13:33  Source: web  Reads: 2479  Author: facinglife  Category: Databases


 

I. Environment Requirements

1. Network interfaces


Host         em1 (management)   em2 (storage)   em3    em4
controller1  172.16.16.1        172.16.17.1     none   none
controller2  172.16.16.2        172.16.17.2     none   none
compute1     172.16.16.3        172.16.17.3     none   none
compute2     172.16.16.4        172.16.17.4     none   none
compute3     172.16.16.5        172.16.17.5     none   none
…





 

2. Message queue

RabbitMQ in mirrored-queue mode; for detailed deployment steps, see the RabbitMQ cluster deployment document on ZenTao.

 

3. Database

MariaDB + InnoDB + Galera, version 10.0.18 or later; for detailed deployment steps, see the MariaDB Galera cluster deployment document on ZenTao.

 

4. Middleware

memcached, not clustered; edit /etc/sysconfig/memcached and change 127.0.0.1 to the local hostname (or IP).
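The /etc/sysconfig/memcached edit described above can be scripted with sed. A minimal sketch, run here against a scratch copy of the file rather than the real one; "controller1" is this deployment's hostname, so adjust it per node:

```shell
# Build a sample sysconfig file like the CentOS default, then point the
# listen address at this host instead of the loopback.
CONF=/tmp/memcached.sysconfig
cat > "$CONF" <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"
EOF
sed -i 's/127\.0\.0\.1/controller1/' "$CONF"   # swap loopback for the hostname
grep OPTIONS "$CONF"
```

On the real node, run the same sed against /etc/sysconfig/memcached and restart memcached.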

 

II. Deployment Plan

This deployment uses controller1 as the authentication host name.

All service passwords follow the pattern $MODULE + "manager", e.g. novamanager, glancemanager.

Database passwords follow the pattern dftc + $MODULE (shown below as the scrubbed placeholder DB_PASS).

IP plan: 172.16.16.0/24 is the management network; 172.16.17.0/24 is the storage network; 172.16.18.0/23 is the external network.

Before starting, set a helper variable: MYIP=`ip addr show em1 | grep inet | head -1 | awk '{print $2}' | awk -F'/' '{print $1}'`

This document uses the flat + vxlan networking model; if you need a different model, adapt the steps accordingly.
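The MYIP pipeline above extracts the first inet address of em1. The same pipeline can be exercised against a canned `ip addr show` line, which is what this sketch does so that it runs anywhere:

```shell
# Sample line in the format `ip addr show em1` emits; field 2 is addr/prefix.
sample='    inet 172.16.16.1/24 brd 172.16.16.255 scope global em1'
MYIP=$(echo "$sample" | grep inet | head -1 | awk '{print $2}' | awk -F'/' '{print $1}')
echo "$MYIP"
```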

 

1. database

mysql -uroot -p****** -e "create database keystone;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database glance;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database nova;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database nova_api;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database neutron;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database cinder;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "FLUSH PRIVILEGES;"
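Since the per-service database and grant statements all follow one pattern, they can be generated in a loop. A sketch (nova_api, which grants to the 'nova' user, is the one exception and would still need its own statements; DB_PASS stands in for the real scrubbed passwords):

```shell
# Build one SQL script covering the five per-service databases.
SQL=""
for db in keystone glance nova neutron cinder; do
  SQL="$SQL
CREATE DATABASE IF NOT EXISTS $db;
GRANT ALL PRIVILEGES ON $db.* TO '$db'@'localhost' IDENTIFIED BY 'DB_PASS';
GRANT ALL PRIVILEGES ON $db.* TO '$db'@'%' IDENTIFIED BY 'DB_PASS';"
done
# Feed it to mysql in one shot, e.g.: echo "$SQL" | mysql -uroot -p******
echo "$SQL" | grep -c GRANT
```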

 

2. keystone

### Install packages
yum install openstack-keystone httpd mod_wsgi

 

### 修改配置文件

openstack-config--set /etc/keystone/keystone.conf DEFAULT admin_token 749d6ead6be998642461

openstack-config--set /etc/keystone/keystone.conf database connectionmysql+pymysql://keystone:DB_PASS@controller1/keystone

openstack-config--set /etc/keystone/keystone.conf token provider fernet

 

### Sync the database and generate the Fernet keys
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

 

### Create /etc/httpd/conf.d/wsgi-keystone.conf
cat > /etc/httpd/conf.d/wsgi-keystone.conf <<EOF

Listen 5000

Listen 35357

 

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public

    WSGIApplicationGroup %{GLOBAL}

    WSGIPassAuthorization On

    ErrorLogFormat "%{cu}t %M"

    ErrorLog /var/log/httpd/keystone-error.log

    CustomLog /var/log/httpd/keystone-access.log combined

 

    <Directory /usr/bin>

        Require all granted

    </Directory>

</VirtualHost>

 

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

    WSGIProcessGroup keystone-admin

    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

    WSGIApplicationGroup %{GLOBAL}

    WSGIPassAuthorization On

    ErrorLogFormat "%{cu}t %M"

    ErrorLog /var/log/httpd/keystone-error.log

    CustomLog /var/log/httpd/keystone-access.log combined

 

    <Directory /usr/bin>

        Require all granted

    </Directory>

</VirtualHost>

EOF
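A quick check of the heredoc-to-file pattern used above: `cat > file <<EOF` writes the heredoc body into the file, whereas a bare `echo <<EOF` discards it (echo ignores stdin), which is why the virtual-host config must be written with cat. A runnable sketch against a scratch file:

```shell
# Write a two-line snippet via heredoc, then confirm it landed on disk.
cat > /tmp/wsgi-demo.conf <<'EOF'
Listen 5000
Listen 35357
EOF
wc -l < /tmp/wsgi-demo.conf
```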

 

 

#### Enable and start httpd
systemctl enable httpd.service && systemctl start httpd.service

 

 

 

### Export the bootstrap credentials
export OS_TOKEN=749d6ead6be998642461
export OS_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3

 

 

 

openstack service create --name keystone --description "DFTCIAAS Identity" identity

 

openstack endpoint create --region scxbxxzx identity public http://controller1:5000/v3
openstack endpoint create --region scxbxxzx identity internal http://controller1:5000/v3
openstack endpoint create --region scxbxxzx identity admin http://controller1:35357/v3

 

 

openstack domain create --description "Default Domain" default

 

openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin
######## create role, project, and user

 

openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo

echo '######## create glance service and endpoint'
openstack role create user
openstack role add --project demo --user demo user
sed -i "/^pipeline/ s#admin_token_auth##g" /etc/keystone/keystone-paste.ini
unset OS_TOKEN OS_URL
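Once the bootstrap token is unset, subsequent openstack commands authenticate as the admin user, and keeping those exports in a small openrc file saves retyping. A sketch written to a scratch path; the password "adminmanager" follows this document's $MODULE + "manager" convention but is an assumed value:

```shell
# Minimal admin credentials file for this deployment (hypothetical password).
cat > /tmp/admin-openrc <<'EOF'
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminmanager
export OS_AUTH_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF
. /tmp/admin-openrc        # source it before running openstack commands
echo "$OS_USERNAME"
```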

 

openstack user create --domain default --password-prompt glance

 

echo '######## create image service and endpoint'
openstack role add --project service --user glance admin
openstack service create --name glance --description "DFTCIAAS Image" image
openstack endpoint create --region scxbxxzx image public http://controller1:9292
openstack endpoint create --region scxbxxzx image internal http://controller1:9292
openstack endpoint create --region scxbxxzx image admin http://controller1:9292
openstack user create --domain default --password-prompt nova

 

echo '######## create compute service and endpoint'
openstack role add --project service --user nova admin
openstack service create --name nova --description "DFTCIAAS Compute" compute
openstack endpoint create --region scxbxxzx compute public http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx compute internal http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx compute admin http://controller1:8774/v2.1/%\(tenant_id\)s
openstack user create --domain default --password-prompt neutron

 

echo '######## create network service and endpoint'
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "DFTCIAAS Networking" network
openstack endpoint create --region scxbxxzx network public http://controller1:9696
openstack endpoint create --region scxbxxzx network internal http://controller1:9696
openstack endpoint create --region scxbxxzx network admin http://controller1:9696
openstack user create --domain default --password-prompt cinder

echo '######## create volume service and endpoint'
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "DFTCIAAS Block Storage" volume
openstack service create --name cinderv2 --description "DFTCIAAS Block Storage" volumev2
openstack endpoint create --region scxbxxzx volume public http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volume internal http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volume admin http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volumev2 public http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s
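Every service gets the same three endpoint interfaces, so the repeated calls can be generated in a loop. This sketch only prints the commands for the volumev2 service instead of running them, which keeps it safe to try anywhere:

```shell
# Emit one `openstack endpoint create` per interface type.
for iface in public internal admin; do
  echo "openstack endpoint create --region scxbxxzx volumev2 $iface http://controller1:8776/v2/%(tenant_id)s"
done > /tmp/endpoint-cmds.txt
cat /tmp/endpoint-cmds.txt
```

Dropping the `echo` (or piping the file through `sh`) would execute the real commands.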

 

 

 

3. glance

#### Install packages
yum install openstack-glance

 

 

#### Edit the configuration files
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:DB_PASS@controller1/glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glancemanager

openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

 

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:DB_PASS@controller1/glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glancemanager

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

 

 

### Sync the database
su -s /bin/sh -c "glance-manage db_sync" glance

 

 

### Start services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

 

4. nova

4.1 Controller node

# Install packages
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

 

# Edit the configuration file
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:DB_PASS@controller1/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:DB_PASS@controller1/nova

 

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password ******

 

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password novamanager

 

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MYIP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

 

openstack-config --set /etc/nova/nova.conf vnc vncserver_listen $MYIP
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $MYIP

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
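The keystone_authtoken block is identical for every service except the username and password, so it can be applied by a small helper. In this sketch `openstack-config` is stubbed out by a shell function (named `openstack_config`, since hyphens are not valid in function names) that only records what would be set, which keeps it runnable without the real tool:

```shell
# Stub: log each would-be `openstack-config --set` call instead of editing files.
openstack_config() { echo "$*" >> /tmp/authtoken-calls.log; }

# Apply the standard keystone_authtoken block to a config file.
# $1 = config file, $2 = service user, $3 = password
set_authtoken() {
  for kv in "auth_uri http://controller1:5000/v3" \
            "auth_url http://controller1:35357/v3" \
            "memcached_servers controller1:11211" \
            "auth_type password" \
            "project_domain_name default" \
            "user_domain_name default" \
            "project_name service" \
            "username $2" \
            "password $3"; do
    openstack_config --set "$1" keystone_authtoken $kv   # $kv splits into key + value
  done
}

: > /tmp/authtoken-calls.log
set_authtoken /etc/nova/nova.conf nova novamanager
wc -l < /tmp/authtoken-calls.log
```

Replacing the stub with the real `openstack-config` binary would apply the nine settings for any service in one call.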

 

 

# Sync the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

 

# Start services
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

 

 

4.2 Compute nodes

# Install packages
yum install openstack-nova-compute

 

# Edit the configuration file
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password ******

 

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password novamanager

 

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MYIP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

 

openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller1:6080/vnc_auto.html

 

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
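virt_type is set to qemu above; the usual way to decide whether kvm could be used instead is to count the vmx/svm flags in /proc/cpuinfo. Sketched here against a canned flags line (with no virtualization flags) so it runs anywhere:

```shell
# Sample /proc/cpuinfo flags line; a real host would `grep -Ec '(vmx|svm)' /proc/cpuinfo`.
flags='flags : fpu vme de pse tsc msr pae mce'
if [ "$(echo "$flags" | grep -Ec '(vmx|svm)')" -eq 0 ]; then
  VIRT_TYPE=qemu   # no hardware acceleration: fall back to full emulation
else
  VIRT_TYPE=kvm    # hardware virtualization available
fi
echo "$VIRT_TYPE"
```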

 

# Start services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

 

5. neutron

5.1 Controller node

# Install packages
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

 

 

# Edit neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:DB_PASS@controller1/neutron

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password dftcpass

 

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutronmanager

 

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name scxbxxzx
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password novamanager

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

 

 

 

## Edit ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks public
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True

 

 

 

## Edit linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings default:em3,public:em3

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $MYIP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

 

 

## Edit l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge `echo ' '`   # leave external_network_bridge empty

 

 

## Edit dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True

 

 

## Edit metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller1
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret metadatamanager

 

 

## Edit nova.conf so that nova uses the networking service
openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name scxbxxzx
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutronmanager
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret metadatamanager

 

 

# Create the plugin symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

 

# Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

 

# Start services
systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-l3-agent.service

 

 

 

5.2 Compute nodes

## Install packages
yum install openstack-neutron-linuxbridge ebtables ipset

 

 

## Edit neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password dftcpass

 

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutronmanager

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

 

 

## Edit linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings default:em3,public:em4

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $MYIP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name scxbxxzx
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutronmanager

 

 

# Start services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

 

 

6. dashboard

## Install packages
yum install openstack-dashboard

 

## Edit /etc/openstack-dashboard/local_settings and set the following

 

OPENSTACK_HOST = "controller1"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller1:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Chongqing"

 

7. cinder

## Edit the configuration file
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $MYIP

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:DB_PASS@controller1/cinder

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cindermanager
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password dftcpass

 


 

8. ceph

### Clean any old Ceph configuration and packages

ceph-deploy purge controller1 compute1 compute2 compute3
ceph-deploy purgedata controller1 compute1 compute2 compute3
ceph-deploy forgetkeys

ssh compute1 sudo rm -rf /osd/osd0/*
ssh compute2 sudo rm -rf /osd/osd1/*
ssh compute3 sudo rm -rf /osd/osd2/*

 

 

### Install a new Ceph cluster
su - dftc
mkdir cluster
cd cluster

# Initialize the mon node
ceph-deploy new controller1

 

## Edit ceph.conf
echo "osd pool default size = 2" >> ceph.conf
echo "public network = 172.16.16.0/24" >> ceph.conf
echo "cluster network = 172.16.17.0/24" >> ceph.conf
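The three appends above build up ceph.conf. This sketch writes the same snippet to a scratch file and checks that the keys landed; the network values are this document's management and storage subnets:

```shell
# Same ceph.conf fragment, written to a scratch path for verification.
cat > /tmp/ceph-snippet.conf <<'EOF'
osd pool default size = 2
public network = 172.16.16.0/24
cluster network = 172.16.17.0/24
EOF
grep -c network /tmp/ceph-snippet.conf
```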

 

## Install Ceph on all nodes
###  ceph.x86_64 1:10.2.5-0.el7          ceph-base.x86_64 1:10.2.5-0.el7
###  ceph-common.x86_64 1:10.2.5-0.el7   ceph-mds.x86_64 1:10.2.5-0.el7
###  ceph-mon.x86_64 1:10.2.5-0.el7      ceph-osd.x86_64 1:10.2.5-0.el7
###  ceph-radosgw.x86_64 1:10.2.5-0.el7  ceph-selinux.x86_64 1:10.2.5-0.el7
ceph-deploy install controller1 compute1 compute2 compute3

 

## Initialize ceph-mon
ceph-deploy mon create-initial

 

########### Error message

[compute3][DEBUG ] detect platform information from remote host
[compute3][DEBUG ] detect machine type
[compute3][DEBUG ] find the location of an executable
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] compute1
[ceph_deploy.mon][ERROR ] compute3
[ceph_deploy.mon][ERROR ] compute2

 

######## Resolution
Copied the remote configuration file back to the local host and compared the two; the contents were identical, so it was safe to continue to the next step.

 

 

 

## Initialize the OSDs
ceph-deploy osd prepare compute1:/osd/osd0 compute2:/osd/osd1 compute3:/osd/osd2

ceph-deploy osd activate compute1:/osd/osd0 compute2:/osd/osd1 compute3:/osd/osd2

ceph-deploy admin controller1 compute1 compute2 compute3

 

 

chmod +r /etc/ceph/ceph.client.admin.keyring

 

 

####

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

 

 

 

####

ceph auth get-or-create client.glance | ssh controller1 sudo tee /etc/ceph/ceph.client.glance.keyring
ssh controller1 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

ceph auth get-or-create client.cinder-backup | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

 

### run the following commands on the controller node #########################

ceph auth get-key client.cinder | ssh compute1 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute2 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute3 tee client.cinder.key

 

### run as the dftc user on each compute node ################

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>c2ad36f3-f184-48b3-81c3-49411cc6566f</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret c2ad36f3-f184-48b3-81c3-49411cc6566f --base64 AQAhhXhYL3ApHhAAYO5wYNEdz63pNxermCgjFg== && rm client.cinder.key secret.xml
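Since nova and cinder on every compute node must reference the same libvirt secret, the secret.xml above has to be reproduced with an identical UUID on each host. A hedged sketch (UUID and hostnames are the ones used in this deployment; the virsh step is left as a comment because it requires libvirt and the client.cinder.key copied earlier):

```shell
# Generate one secret.xml per compute node, all sharing the same UUID.
SECRET_UUID="c2ad36f3-f184-48b3-81c3-49411cc6566f"
WORKDIR="$(mktemp -d)"
for host in compute1 compute2 compute3; do
  cat > "$WORKDIR/secret-$host.xml" <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$SECRET_UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
  # On the real node (requires libvirt):
  #   ssh $host "sudo virsh secret-define --file secret.xml && \
  #     sudo virsh secret-set-value --secret $SECRET_UUID --base64 \$(cat client.cinder.key)"
done
ls "$WORKDIR"
```

Keeping the UUID fixed in one variable avoids the classic mistake of `virsh secret-define` generating a different UUID on each host.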

 

 

######

virsh secret-set-value --secret c2ad36f3-f184-48b3-81c3-49411cc6566f --base64 AQAhhXhYL3ApHhAAYO5wYNEdz63pNxermCgjFg==


 

 

##### OLD VERSION
openstack-config --set /etc/glance/glance-api.conf DEFAULT default_store rbd
##### NEW VERSION
openstack-config --set /etc/glance/glance-api.conf glance_store default_store rbd

 

 

openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
openstack-config --set /etc/glance/glance-api.conf glance_store stores rbd
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
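As issue ID: 006 later in this document shows, putting default_store under the wrong section makes glance-api fail in confusing ways. A small sanity check; the sample file below stands in for /etc/glance/glance-api.conf, so point CONF at the real path on a deployed node:

```shell
# Print which INI section each default_store line actually lives in.
CONF="$(mktemp)"
printf '[DEFAULT]\nshow_image_direct_url = True\n[glance_store]\ndefault_store = rbd\n' > "$CONF"
RESULT="$(awk '/^\[/{sec=$0} /^default_store/{print sec, $0}' "$CONF")"
echo "$RESULT"
```

On a Mitaka-era glance the expected answer is `[glance_store] default_store = rbd`; seeing `[DEFAULT]` instead means the old-version command was used.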

 

## Image properties

### Recommended image properties:
###   hw_scsi_model=virtio-scsi: add a virtio-scsi controller for better performance and discard support;
###   hw_disk_bus=scsi: attach all cinder block devices to this controller;
###   hw_qemu_guest_agent=yes: enable the QEMU guest agent;
###   os_require_quiesce=yes: send fs-freeze/thaw calls through the QEMU guest agent.

 

 

 

 

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2

 

 

openstack-config --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf ceph rbd_pool volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
openstack-config --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
openstack-config --set /etc/cinder/cinder.conf ceph rbd_store_chunk_size 4
openstack-config --set /etc/cinder/cinder.conf ceph rados_connect_timeout -1
openstack-config --set /etc/cinder/cinder.conf ceph glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf ceph rbd_user cinder
openstack-config --set /etc/cinder/cinder.conf ceph rbd_secret_uuid c2ad36f3-f184-48b3-81c3-49411cc6566f

 

 

openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_user cinder-backup
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_chunk_size 134217728
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_pool backups
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_stripe_unit 0
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_stripe_count 0
openstack-config --set /etc/cinder/cinder.conf DEFAULT restore_discard_excess_bytes true

 

openstack-config --set /etc/nova/nova.conf libvirt rbd_user cinder
openstack-config --set /etc/nova/nova.conf libvirt rbd_secret_uuid c2ad36f3-f184-48b3-81c3-49411cc6566f
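The rbd_secret_uuid in nova.conf must match the one in cinder.conf (and the libvirt secret defined earlier); a mismatch is a common cause of volume-attach failures. A hedged consistency check, using sample files that stand in for /etc/cinder/cinder.conf and /etc/nova/nova.conf on a real node:

```shell
# Compare rbd_secret_uuid across the two config files.
TMP="$(mktemp -d)"
printf '[ceph]\nrbd_secret_uuid = c2ad36f3-f184-48b3-81c3-49411cc6566f\n' > "$TMP/cinder.conf"
printf '[libvirt]\nrbd_secret_uuid = c2ad36f3-f184-48b3-81c3-49411cc6566f\n' > "$TMP/nova.conf"
get_uuid() { awk -F'= *' '/^rbd_secret_uuid/ {print $2}' "$1"; }
if [ "$(get_uuid "$TMP/cinder.conf")" = "$(get_uuid "$TMP/nova.conf")" ]; then
  echo "rbd_secret_uuid match"
else
  echo "rbd_secret_uuid MISMATCH" >&2
fi
```

Run the same check against the real paths after any edit to either file.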

 

 

 

 

 

############### ceph.conf [client] section (compute nodes)

[client]
        rbd cache = true
        rbd cache writethrough until flush = true
        admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/qemu/qemu-guest-$pid.log
        rbd concurrent management ops = 20
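The admin socket and log file paths referenced in this [client] section must exist and be writable by the qemu process, or instances fail to boot. A sketch using a temporary PREFIX so it can be dry-run; on a real compute node drop the prefix, and note that the qemu user/group in the ownership line is an assumption to adapt to your distro:

```shell
# Create the directories the admin socket / log file settings expect.
PREFIX="$(mktemp -d)"          # use PREFIX="" on a real node
mkdir -p "$PREFIX/var/run/ceph/guests" "$PREFIX/var/log/qemu"
# sudo chown qemu:libvirt "$PREFIX/var/run/ceph/guests" "$PREFIX/var/log/qemu"
ls -d "$PREFIX/var/run/ceph/guests" "$PREFIX/var/log/qemu"
```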

 

 

 

 

 

 

 

 

 

 

 

Note:Addopen issues that you identify while writing or reviewing this document to theopen issues section.  As you resolveissues, move them to the closed issues section and keep the issue ID the same.  Include an explanation of the resolution.

When this deliverable is complete, any open issues should be transferred to theproject- or process-level Risk and Issue Log (PJM.CR.040) and managed using aproject level Risk and Issue Form (PJM.CR.040). In addition, the open items should remain in the open issues section ofthis deliverable, but flagged in the resolution column as being transferred.


 

Open and Resolved Issues


 

Open Issues

 

ID: 001

Issue: implement DVR

Resolution:

Tips: once east-west traffic is handled by openstack-openvswitch, DVR can be implemented;

 

ID: 002

Issue: implement HA

Resolution:

Tips: use keepalived to provide the virtual IP, and haproxy for load balancing and port forwarding;

 

ID: 003

Issue: with the glance module behind the virtual IP, the port is unreachable and images cannot be uploaded; nova and neutron have the same problem

Resolution:

None yet.

 

……

Resolved Issues

 

ID: 001

Issue: the keystone database needs to be reset

Resolution:

#### clear the old database and old data ########
mysql -uroot -p**** -e "drop database keystone;"
mysql -uroot -p**** -e "create database keystone;"
mysql -uroot -p**** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p**** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'DB_PASS';"
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 749d6ead6be998642461
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:DB_PASS@controller1/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet

### sync the database and set up fernet keys #######
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

 

ID: 002

Issue: a module's CLI reports "auth failed"!

Resolution:

Reset all of the module's users, services and endpoints: recreate the user and add it to the admin role, then recreate the module's service and its endpoints.

 

 

ID: 003

Issue: the VNC console cannot be opened

Resolution:

Run on each compute node:

MYIP=`ip add show em1|grep inet|head -1|awk '{print $2}'|awk -F'/' '{print $1}'`

openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://$MYIP:6080/

 

ID: 004

Issue: glance falls back to local storage and images cannot be uploaded

Resolution:

Check whether port 9292 is up and reachable via telnet. The port would not start; re-checking the configuration file showed that the ceph integration must not use the "virt_type" option, because ceph itself uses the rbd format to tag and manage all objects uniformly;
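The port check described here can be done without telnet using bash's /dev/tcp. A small helper used when chasing this kind of symptom (host and port are whatever service you are debugging, e.g. glance on 9292 or neutron on 9696):

```shell
# Return success if HOST:PORT accepts a TCP connection within 2 seconds.
port_open() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}
port_open 127.0.0.1 9292 && echo "9292 open" || echo "9292 closed"
```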

 

ID: 005

Issue: creating a VM fails; the UI reports that the connection to http://controller:9696 failed

Resolution:

Port 9696 is up and working, but the local hostname is actually controller1. Update /etc/nova/nova.conf:

[neutron]

url = http://controller1:9696, i.e. a hostname that resolves correctly;

 

ID: 006

Issue: glance-api reports the service as running, but the port only appears once every 10 seconds and cannot be connected to; the api log shows nothing unusual, while systemctl status throws a python exception: ERROR: Store for schema file not found

Resolution:

In the ceph integration, default_store is placed under the [DEFAULT] section in older versions but under the [glance_store] section in newer versions; after moving it, everything works:

default_store = rbd

 

 

ID: 007

Issue: openstack-nova-compute.service never finishes starting; it simply hangs

Resolution: checked the configuration file; the service could not reach the message queue because the rabbitmq port had not been updated in an earlier edit. After correcting it to port 5672, the service started normally.

 

ID: 008

Issue: openstack-nova-api.service fails to start with: ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN

Resolution:

The rabbitmq message-queue settings were wrong: the username was misspelled; after correcting it, the service started normally.

 

 

ID: 009

Issue: ceph-deploy cannot be installed because of a broken dependency:

Processing Dependency: python-distribute for package: ceph-deploy-1.5.34-0.noarch
Package python-setuptools-0.9.8-4.el7.noarch is obsoleted by python2-setuptools-22.0.5-1.el7.noarch which is already installed --> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.34-0.noarch (ceph-noarch) Requires: python-distribute Available: python-setuptools-0.9.8-4.el7.noarch (base) python-distribute = 0.9.8-4.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

The package conflict: $ rpm -qa | grep setuptools shows python2-setuptools-22.0.5-1.el7.noarch, which would otherwise have to be uninstalled.

Resolution:

Install ceph-deploy with pip instead:

yum install python-pip
pip install ceph-deploy

 

ID: 010

Issue: after configuring the dashboard, the web UI cannot be accessed

Resolution:

memcached was not bound to the hostname's port, so the dashboard could not reach it!

 

ID: 011

Issue: the dashboard keeps throwing error messages?
When clicking around the openstack dashboard, error prompts pop up in the top-right corner, but disappear on the next refresh.

Resolution:

After MariaDB is installed, the default maximum connection count is 100, which is far from enough once traffic grows even slightly.

1. Edit the mariadb configuration file and raise the maximum connections to 1500:

echo "max_connections=1500" >> /etc/my.cnf.d/server.cnf

2. Restart the database:
service mariadb restart
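Step 1 above appends blindly, so running the fix twice leaves duplicate max_connections lines. An idempotent variant; CNF points at a sample file here, so substitute /etc/my.cnf.d/server.cnf on the real host and restart mariadb afterwards:

```shell
# Append max_connections only if it is not already set.
CNF="$(mktemp)"
printf '[mysqld]\n' > "$CNF"
set_max_conn() {
  grep -q '^max_connections=' "$CNF" || echo "max_connections=1500" >> "$CNF"
}
set_max_conn
set_max_conn   # second call is a no-op
grep -c '^max_connections=' "$CNF"   # prints 1
```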

……

