OpenStack Installation Documents



OpenStack Compute Survey and Installation

Contents
1. OpenStack Survey
  1.1 Introduction to Virtualization
    1.1.1 Software-only Virtualization
    1.1.2 Full Virtualization
  1.2 Introduction to OpenStack Compute
    1.2.1 Nova Concepts
    1.2.2 Nova Service Architecture
    1.2.3 How Nova Runs
2. OpenStack Compute Installation
  2.1 Test Environment
  2.2 Nova Installation Procedure
  2.3 Installing the Web-based OpenStack Management Console
  2.5 Nova Storage Management
  2.6 Problems and Possible Solutions

1. OpenStack Survey

1.1 Introduction to Virtualization

Virtualization is a broad term. It means that computing elements run on a virtual rather than a physical foundation; it is a solution for simplifying management and optimizing resource use.

Virtualization can expand the effective capacity of hardware and simplify the process of reconfiguring software.

The virtualization techniques in common use today fall into two broad categories: software-only virtualization and full virtualization.

1.1.1 Software-only Virtualization

In a software-only virtualization solution, the VMM (Virtual Machine Monitor) sits where the operating system traditionally sits in the software stack, and the guest operating system sits where applications traditionally sit. In most cases the guest operating system communicates with the hardware through the VMM, and the VMM arbitrates all access by the virtual machines on the system.

Software-only virtualization works as follows. A virtual machine is an abstraction and simulation of a real computing environment; the VMM allocates a set of data structures for each virtual machine to manage its state, and when the VMM schedules a virtual machine it restores part of that state onto the host system.

The host processor executes the guest OS's machine instructions directly. Because the guest OS runs at a low privilege level, any access to the host's privileged state fails for lack of privilege, the host processor raises an exception, and control automatically returns to the VMM.

In addition, the arrival of an external interrupt also transfers control to the VMM.

The VMM may then need to write the virtual machine's current state back into its state data structures, analyze why the virtual machine was suspended, and perform the corresponding privileged operation on behalf of the guest OS.

OpenStack Installation Manual


1. OpenStack Basic Install

Table of Contents
Introduction
Architecture
Requirements
Controller Node: Introduction, Common services, Keystone, Glance, Nova, Cinder, Quantum, Dashboard (Horizon)
Network Node: Introduction, Common services, Network Services, Virtual Networking
Compute Node: Introduction, Common services, Hypervisor, Nova, Quantum
Create your first VM
Conclusion

Introduction

This document helps anyone who wants to deploy OpenStack Folsom for development purposes with Ubuntu 12.04 LTS (using the Ubuntu Cloud Archive). We are going to install a three-node setup with one controller, one network and one compute node. Of course, you can set up as many compute nodes as you want. This document is a good start for beginners in OpenStack who want to install a testing infrastructure.

Architecture

A standard Quantum setup has up to four distinct physical data center networks:

- Management network. Used for internal communication between OpenStack components. The IP addresses on this network should be reachable only within the data center.
- Data network. Used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the Quantum plugin in use.
- External network. Used to provide VMs with Internet access in some deployment scenarios. The IP addresses on this network should be reachable by anyone on the Internet.
- API network. Exposes all OpenStack APIs, including the Quantum API, to tenants. The IP addresses on this network should be reachable by anyone on the Internet. This may be the same network as the external network, as it is possible to create a Quantum subnet for the external network whose allocation ranges use less than the full range of IP addresses in an IP block.

Requirements

You need at least three machines (virtual or physical) with Ubuntu 12.04 (LTS) installed.

Table 1.1. Architecture and node information

                            controller           network                compute
  Hostname                  folsom-controller    folsom-network         folsom-compute
  Services                  MySQL, RabbitMQ,     quantum-l3-agent,      nova-compute, KVM,
                            Nova, Cinder,        quantum-dhcp-agent,    nova-api, Quantum
                            Glance, Keystone,    Quantum agent with     agent with
                            Quantum              Open vSwitch           Open vSwitch
  Minimum number of disks   2                    1                      1
  External + API network    7.7.7.7/24           7.7.7.8/24             -
  Management network        192.168.0.1/24       192.168.0.2/24         192.168.0.3/24
  Data network              -                    10.10.10.1/24          10.10.10.2/24
  Total number of NICs      2                    3                      2

Controller Node

Introduction

The Controller node will provide:
- Databases (with MySQL)
- Queues (with RabbitMQ)
- Keystone
- Glance
- Nova (without nova-compute)
- Cinder
- Quantum Server (with Open vSwitch plugin)
- Dashboard (with Horizon)

Common services

Operating System

1. Install Ubuntu with these parameters:
   - Time zone: UTC
   - Hostname: folsom-controller
   - Packages: OpenSSH-Server
   After the OS installation, reboot the server.

2. Since Ubuntu 12.04 LTS ships OpenStack Essex by default, we are going to use the Ubuntu Cloud Archive for Folsom:

     apt-get install ubuntu-cloud-keyring

   Edit /etc/apt/sources.list.d/cloud-archive.list:

     deb /ubuntu precise-updates/folsom main

   Upgrade the system (and reboot if you need):

     apt-get update && apt-get upgrade

3. Configure the network:
   - Edit the /etc/network/interfaces file:

       # Management Network
       auto eth0
       iface eth0 inet static
           address 192.168.0.1
           netmask 255.255.255.0
           gateway 192.168.0.254
           dns-nameservers 8.8.8.8

       # API + Public Network
       auto eth1
       iface eth1 inet static
           address 7.7.7.7
           netmask 255.255.255.0

   - Edit /etc/sysctl.conf:

       net.ipv4.conf.all.rp_filter = 0
       net.ipv4.conf.default.rp_filter = 0

     Then restart the networking service:

       service networking restart

   - Edit the /etc/hosts file and add the folsom-controller, folsom-network and folsom-compute hostnames with the correct IPs.

4. Install & configure NTP:
   - Install the package:

       apt-get install -y ntp

   - Configure the /etc/ntp.conf file:

       server iburst
       server 127.127.1.0
       fudge 127.127.1.0 stratum 10

   - Restart the service:

       service ntp restart

MySQL Database Service

1. Install the packages:

     apt-get install mysql-server python-mysqldb

2. Allow connections from the network:

     sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

3. Restart the service:

     service mysql restart

4. Create the databases, users and rights:

     mysql -u root -ppassword <<EOF
     CREATE DATABASE nova;
     GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'password';
     GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'192.168.0.1' IDENTIFIED BY 'password';
     GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'192.168.0.2' IDENTIFIED BY 'password';
     GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'192.168.0.3' IDENTIFIED BY 'password';
     CREATE DATABASE cinder;
     GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'password';
     CREATE DATABASE glance;
     GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'password';
     CREATE DATABASE keystone;
     GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'password';
     CREATE DATABASE quantum;
     GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'localhost' IDENTIFIED BY 'password';
     GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'192.168.0.2' IDENTIFIED BY 'password';
     GRANT ALL PRIVILEGES ON quantum.* TO 'quantum'@'192.168.0.3' IDENTIFIED BY 'password';
     FLUSH PRIVILEGES;
     EOF

RabbitMQ Messaging Service

1. Install the packages:

     apt-get install rabbitmq-server

2. Change the default password:

     rabbitmqctl change_password guest password

Keystone

1. Install the packages:

     apt-get install keystone python-keystone python-keystoneclient

2. Edit /etc/keystone/keystone.conf:

     [DEFAULT]
     admin_token = password
     bind_host = 0.0.0.0
     public_port = 5000
     admin_port = 35357
     compute_port = 8774
     verbose = True
     debug = True
     log_file = keystone.log
     log_dir = /var/log/keystone
     log_config = /etc/keystone/logging.conf

     [sql]
     connection = mysql://keystone:password@localhost:3306/keystone
     idle_timeout = 200

     [identity]
     driver = keystone.identity.backends.sql.Identity

     [catalog]
     driver = keystone.catalog.backends.sql.Catalog
     (...)

3. Restart Keystone and create the tables in the database:

     service keystone restart
     keystone-manage db_sync

4. Load environment variables:
   - Create a novarc file:

       export OS_TENANT_NAME=admin
       export OS_USERNAME=admin
       export OS_PASSWORD=password
       export OS_AUTH_URL="http://localhost:5000/v2.0/"
       export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
       export SERVICE_TOKEN=password

   - Export the variables:

       source novarc
       echo "source novarc" >> .bashrc

5. Download the data script and fill the Keystone database with data (users, tenants, services):

     ./keystone-data.sh

6. Download the endpoint script and create the endpoints (for projects):

     ./keystone-endpoints.sh

   If the IP address of the management network on the controller node differs from this example, use:

     ./keystone-endpoints.sh -K <ip address of the management network>

Glance

1. Install the packages:

     apt-get install glance glance-api python-glanceclient glance-common

2. Configure Glance:
   - Edit the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files and modify:

       sql_connection = mysql://glance:password@localhost/glance
       admin_tenant_name = service
       admin_user = glance
       admin_password = password

     For glance-api.conf, also modify:

       notifier_strategy = rabbit
       rabbit_password = password

   - Restart the Glance services:

       service glance-api restart && service glance-registry restart

   - Create the Glance tables in the database:

       glance-manage db_sync

   - Download and import the Ubuntu 12.04 LTS UEC image:

       glance image-create \
         --location /releases/12.04/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img \
         --is-public true --disk-format qcow2 --container-format bare --name "Ubuntu"

   - Check that the image has been registered in the index:

       glance image-list
       +--------------------------------------+--------+-------------+------------------+-----------+--------+
       | ID                                   | Name   | Disk Format | Container Format | Size      | Status |
       +--------------------------------------+--------+-------------+------------------+-----------+--------+
       | 0d2664d3-cda9-4937-95b2-909ecf8ea362 | Ubuntu | qcow2       | bare             | 233701376 | active |
       +--------------------------------------+--------+-------------+------------------+-----------+--------+

   - You can also install the Glance Replicator (new in Folsom).
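Before moving on to Nova, it is worth confirming that Keystone and Glance answer with the credentials loaded from novarc. A minimal check, assuming the users and tenants created by the keystone-data.sh script above:

     # Both listings should succeed without authentication errors
     keystone tenant-list
     keystone user-list
     # Request a token using the admin credentials from novarc
     keystone token-get
     # Glance should return the "Ubuntu" image registered above
     glance image-list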
More information about it is available in the official documentation.

Nova

1. Install the packages:

     apt-get install nova-api nova-cert nova-common \
       nova-scheduler python-nova python-novaclient nova-consoleauth novnc \
       nova-novncproxy

2. Configure Nova:
   - Edit the /etc/nova/api-paste.ini file and modify:

       admin_tenant_name = service
       admin_user = nova
       admin_password = password

     Since we are going to use Cinder for volumes, we should also delete every part concerning "nova-volume":

       [composite:osapi_volume]
       use = call:nova.api.openstack.urlmap:urlmap_factory
       /: osvolumeversions
       /v1: openstack_volume_api_v1

       [composite:openstack_volume_api_v1]
       use = call:nova.api.auth:pipeline_factory
       noauth = faultwrap sizelimit noauth ratelimit osapi_volume_app_v1
       keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_volume_app_v1
       keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1

       [app:osapi_volume_app_v1]
       paste.app_factory = nova.api.openstack.volume:APIRouter.factory

       [pipeline:osvolumeversions]
       pipeline = faultwrap osvolumeversionapp

       [app:osvolumeversionapp]
       paste.app_factory = nova.api.openstack.volume.versions:Versions.factory

   - Edit the /etc/nova/nova.conf file and modify:

       [DEFAULT]
       # MySQL Connection #
       sql_connection=mysql://nova:password@192.168.0.1/nova

       # nova-scheduler #
       rabbit_password=password
       scheduler_driver=nova.scheduler.simple.SimpleScheduler

       # nova-api #
       cc_host=192.168.0.1
       auth_strategy=keystone
       s3_host=192.168.0.1
       ec2_host=192.168.0.1
       nova_url=http://192.168.0.1:8774/v1.1/
       ec2_url=http://192.168.0.1:8773/services/Cloud
       keystone_ec2_url=http://192.168.0.1:5000/v2.0/ec2tokens
       api_paste_config=/etc/nova/api-paste.ini
       allow_admin_api=true
       use_deprecated_auth=false
       ec2_private_dns_show_ip=True
       dmz_cidr=169.254.169.254/32
       ec2_dmz_host=192.168.0.1
       metadata_host=192.168.0.1
       metadata_listen=0.0.0.0
       enabled_apis=ec2,osapi_compute,metadata

       # Networking #
       network_api_class=nova.network.quantumv2.api.API
       quantum_url=http://192.168.0.1:9696
       quantum_auth_strategy=keystone
       quantum_admin_tenant_name=service
       quantum_admin_username=quantum
       quantum_admin_password=password
       quantum_admin_auth_url=http://192.168.0.1:35357/v2.0
       libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
       linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
       firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

       # Cinder #
       volume_api_class=nova.volume.cinder.API

       # Glance #
       glance_api_servers=192.168.0.1:9292
       image_service=nova.image.glance.GlanceImageService

       # novnc #
       novnc_enable=true
       novncproxy_base_url=http://192.168.0.1:6080/vnc_auto.html
       vncserver_proxyclient_address=127.0.0.1
       vncserver_listen=0.0.0.0

       # Misc #
       logdir=/var/log/nova
       state_path=/var/lib/nova
       lock_path=/var/lock/nova
       root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
       verbose=true

   - Create the Nova tables in the database:

       nova-manage db sync

   - Restart the Nova services:

       service nova-api restart
       service nova-cert restart
       service nova-consoleauth restart
       service nova-scheduler restart
       service nova-novncproxy restart

Cinder

1. Install the packages:

     apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget \
       open-iscsi iscsitarget-dkms python-cinderclient linux-headers-`uname -r`

2. Since there is a bug in tgtadm, we have to modify /etc/tgt/targets.conf:

     # include /etc/tgt/conf.d/*.conf
     include /etc/tgt/conf.d/cinder_tgt.conf

3. Configure and start the iSCSI services:

     sed -i 's/false/true/g' /etc/default/iscsitarget
     service iscsitarget start
     service open-iscsi start

4. Configure Cinder:
   - Edit the /etc/cinder/cinder.conf file and modify:

       [DEFAULT]
       sql_connection = mysql://cinder:password@localhost:3306/cinder
       rabbit_password = password

   - Edit the /etc/cinder/api-paste.ini file and modify:

       admin_tenant_name = service
       admin_user = cinder
       admin_password = password

   - Create the volume group (on the second disk):

       fdisk /dev/sdb
       [Create a Linux partition]
       pvcreate /dev/sdb1
       vgcreate cinder-volumes /dev/sdb1

   - Create the Cinder tables in the database:

       cinder-manage db sync

   - Restart the services:

       service cinder-api restart
       service cinder-scheduler restart
       service cinder-volume restart

Quantum

1. Install the packages:

     apt-get install quantum-server

2. Configure the Quantum services:
   - Edit the /etc/quantum/quantum.conf file and modify:

       core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
       auth_strategy = keystone
       fake_rabbit = False
       rabbit_password = password

   - Edit the /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file and modify:

       [DATABASE]
       sql_connection = mysql://quantum:password@localhost:3306/quantum

       [OVS]
       tenant_network_type = gre
       tunnel_id_ranges = 1:1000
       enable_tunneling = True

     Note: It's handier to choose tunnel mode since you don't have to configure your physical switches for VLANs.

   - Edit the /etc/quantum/api-paste.ini file and modify:

       admin_tenant_name = service
       admin_user = quantum
       admin_password = password

3. Start the services:

     service quantum-server restart

Dashboard (Horizon)

Install the packages:

     apt-get install apache2 libapache2-mod-wsgi openstack-dashboard \
       memcached python-memcache

The OpenStack Dashboard is now available at http://<controller_node>/horizon. We can log in with the admin / password or demo / password credentials.

Network Node

Introduction

The Network node will provide:
- Virtual bridging (Open vSwitch + Quantum agent) with tunneling
- DHCP server (Quantum DHCP agent)
- Virtual routing (Quantum L3 agent)

Common services

Operating System

1. Install Ubuntu with these parameters:
   - Time zone: UTC
   - Hostname: folsom-network
   - Packages: OpenSSH-Server
   After the OS installation, reboot the server.

2. Since Ubuntu 12.04 LTS ships OpenStack Essex by default, we are going to use the Cloud Archive for Folsom:

     apt-get install ubuntu-cloud-keyring

   Edit /etc/apt/sources.list.d/cloud-archive.list:

     deb /ubuntu precise-updates/folsom main

   Upgrade the system (and reboot if you need):

     apt-get update && apt-get upgrade

3. Configure the network:
   - Edit the /etc/network/interfaces file:

       # Management Network
       auto eth0
       iface eth0 inet static
           address 192.168.0.2
           netmask 255.255.255.0
           gateway 192.168.0.254
           dns-nameservers 8.8.8.8

       # Data Network
       auto eth1
       iface eth1 inet static
           address 10.10.10.1
           netmask 255.255.255.0

       # Public Bridge
       auto eth2
       iface eth2 inet manual
           up ifconfig $IFACE 0.0.0.0 up
           up ip link set $IFACE promisc on
           down ifconfig $IFACE down

   - Edit /etc/sysctl.conf:

       net.ipv4.ip_forward=1
       net.ipv4.conf.all.rp_filter = 0
       net.ipv4.conf.default.rp_filter = 0

     Then restart the networking service:

       service networking restart

   - Edit the /etc/hosts file and add the folsom-controller, folsom-network and folsom-compute hostnames with the correct IPs.

4. Install & configure NTP:
   - Install the package:

       apt-get install -y ntp

   - Configure the /etc/ntp.conf file:

       server 192.168.0.1

   - Restart the service:

       service ntp restart

Network Services

Open vSwitch

1. Install the packages:

     apt-get install quantum-plugin-openvswitch-agent \
       quantum-dhcp-agent quantum-l3-agent

2. Start Open vSwitch:

     service openvswitch-switch start

3. Create the virtual bridges:

     ovs-vsctl add-br br-int
     ovs-vsctl add-br br-ex
     ovs-vsctl add-port br-ex eth2
     ip link set up br-ex

Quantum

Configure the Quantum services:

- Edit the /etc/quantum/l3_agent.ini file and modify:

    auth_url = http://192.168.0.1:35357/v2.0
    admin_tenant_name = service
    admin_user = quantum
    admin_password = password
    metadata_ip = 192.168.0.1
    use_namespaces = False

- Edit the /etc/quantum/api-paste.ini file and modify:

    auth_host = 192.168.0.1
    admin_tenant_name = service
    admin_user = quantum
    admin_password = password

- Edit the /etc/quantum/quantum.conf file and modify:

    core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
    auth_strategy = keystone
    fake_rabbit = False
    rabbit_host = 192.168.0.1
    rabbit_password = password

- Edit the /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file and modify:

    [DATABASE]
    sql_connection = mysql://quantum:password@192.168.0.1:3306/quantum

    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    enable_tunneling = True
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 10.10.10.1

  Note: It's handier to choose tunnel mode since you don't have to configure your physical switches for VLANs.

- Edit the /etc/quantum/dhcp_agent.ini file and add:

    use_namespaces = False

Start the services:

     service quantum-plugin-openvswitch-agent start
     service quantum-dhcp-agent restart
     service quantum-l3-agent restart

Virtual Networking

Create the virtual networking:

1. Load environment variables:
   - Create a novarc file:

       export OS_TENANT_NAME=admin
       export OS_USERNAME=admin
       export OS_PASSWORD=password
       export OS_AUTH_URL="http://192.168.0.1:5000/v2.0/"
       export SERVICE_ENDPOINT="http://192.168.0.1:35357/v2.0"
       export SERVICE_TOKEN=password

   - Export the variables:

       source novarc
       echo "source novarc" >> .bashrc

2. Download the Quantum script. We are using the "Per-tenant Routers with Private Networks" use case.

3. Edit the script to match your networking (public network, floating IPs).

4. Execute the script. (A sketch of the Quantum calls such a script typically performs is shown below.)
L3 Configuration

- Copy the external network ID:

    quantum net-list

- Edit /etc/quantum/l3_agent.ini and paste the ID:

    gateway_external_network_id = ID

- Copy the provider router ID:

    quantum router-list

- Edit /etc/quantum/l3_agent.ini and paste the ID:

    router_id = ID

- Restart the L3 agent:

    service quantum-l3-agent restart

Compute Node

Introduction

The Compute node will provide:
- Hypervisor (KVM)
- nova-compute
- Quantum OVS agent

Common services

1. Install Ubuntu with these parameters:
   - Time zone: UTC
   - Hostname: folsom-compute
   - Packages: OpenSSH-Server
   After the OS installation, reboot the server.

2. Since Ubuntu 12.04 LTS ships OpenStack Essex by default, we are going to use the Cloud Archive for Folsom:

     apt-get install ubuntu-cloud-keyring

   Edit /etc/apt/sources.list.d/cloud-archive.list:

     deb /ubuntu precise-updates/folsom main

   Upgrade the system (and reboot if you need):

     apt-get update && apt-get upgrade

3. Configure the network:
   - Edit the /etc/network/interfaces file:

       # Management Network
       auto eth0
       iface eth0 inet static
           address 192.168.0.3
           netmask 255.255.255.0
           gateway 192.168.0.254
           dns-nameservers 8.8.8.8

       # Data Network
       auto eth1
       iface eth1 inet static
           address 10.10.10.2
           netmask 255.255.255.0

   - Edit /etc/sysctl.conf:

       net.ipv4.conf.all.rp_filter = 0
       net.ipv4.conf.default.rp_filter = 0

     Then restart the networking service:

       service networking restart

   - Edit the /etc/hosts file and add the folsom-controller, folsom-network and folsom-compute hostnames with the correct IPs.

4. Install & configure NTP:
   - Install the package:

       apt-get install -y ntp

   - Configure the /etc/ntp.conf file:

       server 192.168.0.1

   - Restart the service:

       service ntp restart

Hypervisor

1. Install the packages that we need:

     apt-get install -y kvm libvirt-bin pm-utils

2. Configure libvirt:
   - Edit the /etc/libvirt/qemu.conf file and add:

       cgroup_device_acl = [
           "/dev/null", "/dev/full", "/dev/zero",
           "/dev/random", "/dev/urandom",
           "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
           "/dev/rtc", "/dev/hpet", "/dev/net/tun"
       ]

   - Disable the KVM default virtual bridge to avoid any confusion:

       virsh net-destroy default
       virsh net-undefine default

   - Allow live migrations:

     Edit the /etc/libvirt/libvirtd.conf file:

       listen_tls = 0
       listen_tcp = 1
       auth_tcp = "none"

     Modify the libvirtd_opts variable in the /etc/init/libvirt-bin.conf file:

       env libvirtd_opts="-d -l"

     Edit the /etc/default/libvirt-bin file:

       libvirtd_opts="-d -l"

3. Restart libvirt:

     service libvirt-bin restart

Nova

1. Install the packages:

     apt-get install nova-compute-kvm

2. Configure Nova:
   - Edit the /etc/nova/api-paste.ini file and modify:

       admin_tenant_name = service
       admin_user = nova
       admin_password = password

   - Edit the /etc/nova/nova-compute.conf file and modify:

       [DEFAULT]
       libvirt_type=kvm
       libvirt_ovs_bridge=br-int
       libvirt_vif_type=ethernet
       libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
       libvirt_use_virtio_for_bridges=True

   - Edit the /etc/nova/nova.conf file and modify:

       [DEFAULT]
       # MySQL Connection #
       sql_connection=mysql://nova:password@192.168.0.1/nova

       # nova-scheduler #
       rabbit_host=192.168.0.1
       rabbit_password=password
       scheduler_driver=nova.scheduler.simple.SimpleScheduler

       # nova-api #
       cc_host=192.168.0.1
       auth_strategy=keystone
       s3_host=192.168.0.1
       ec2_host=192.168.0.1
       nova_url=http://192.168.0.1:8774/v1.1/
       ec2_url=http://192.168.0.1:8773/services/Cloud
       keystone_ec2_url=http://192.168.0.1:5000/v2.0/ec2tokens
       api_paste_config=/etc/nova/api-paste.ini
       allow_admin_api=true
       use_deprecated_auth=false
       ec2_private_dns_show_ip=True
       dmz_cidr=169.254.169.254/32
       ec2_dmz_host=192.168.0.1
       metadata_host=192.168.0.1
       metadata_listen=0.0.0.0
       enabled_apis=metadata

       # Networking #
       network_api_class=nova.network.quantumv2.api.API
       quantum_url=http://192.168.0.1:9696
       quantum_auth_strategy=keystone
       quantum_admin_tenant_name=service
       quantum_admin_username=quantum
       quantum_admin_password=password
       quantum_admin_auth_url=http://192.168.0.1:35357/v2.0
       libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
       linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
       firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

       # Compute #
       compute_driver=libvirt.LibvirtDriver
       connection_type=libvirt

       # Cinder #
       volume_api_class=nova.volume.cinder.API

       # Glance #
       glance_api_servers=192.168.0.1:9292
       image_service=nova.image.glance.GlanceImageService

       # novnc #
       novnc_enable=true
       novncproxy_base_url=http://192.168.0.1:6080/vnc_auto.html
       vncserver_proxyclient_address=127.0.0.1
       vncserver_listen=0.0.0.0

       # Misc #
       logdir=/var/log/nova
       state_path=/var/lib/nova
       lock_path=/var/lock/nova
       root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
       verbose=true

   - Restart the Nova services:

       service nova-api-metadata restart
       service nova-compute restart

Quantum

Open vSwitch

1. Install the packages:

     apt-get install -y openvswitch-switch

2. Start the Open vSwitch service:

     service openvswitch-switch start

3. Configure the virtual bridging:

     ovs-vsctl add-br br-int

Quantum

1. Install the packages:

     apt-get install -y quantum-plugin-openvswitch-agent

2. Edit the /etc/quantum/quantum.conf file and modify:

     core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
     auth_strategy = keystone
     fake_rabbit = False
     rabbit_host = 192.168.0.1
     rabbit_password = password

3. Edit the /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file and modify:

     [DATABASE]
     sql_connection = mysql://quantum:password@192.168.0.1:3306/quantum

     [OVS]
     tenant_network_type = gre
     tunnel_id_ranges = 1:1000
     integration_bridge = br-int
     tunnel_bridge = br-tun
     local_ip = 10.10.10.2
     enable_tunneling = True

4. Start the agent:

     service quantum-plugin-openvswitch-agent restart

Create your first VM

1. You can now use the OpenStack API or the Dashboard to manage your own IaaS: http://192.168.0.1/horizon with the demo / password credentials.

2. Edit the "default" security group to allow ICMP and SSH.

3. Create a personal keypair.

4. In the Dashboard, go to "Instances" and click "Launch Instance" to spawn a new VM.

5. Since Horizon does not manage L3 in the Folsom release, we have to configure the floating IP from the Quantum CLI (using the demo tenant). To do that, you need to get the ext_net ID and the port_id of your VM:

     quantum net-list -- --router:external True
     quantum port-list -- --device_id <vm-uuid>

6. Now we are going to create a floating IP attached to the virtual port of our VM and routed to the external network:

     quantum floatingip-create --port_id <port_id> <ext_net_id>

7. That's it! You should be able to ping your VM using the floating IP.
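The same launch can also be driven entirely from the CLI. A minimal sketch, assuming the demo credentials, the "Ubuntu" image registered earlier, and an illustrative keypair and instance name:

     # Create a keypair and launch an instance (names are illustrative)
     nova keypair-add demo-key > demo-key.pem && chmod 600 demo-key.pem
     nova boot --flavor m1.small --image Ubuntu --key_name demo-key demo-vm
     # Watch the instance reach the ACTIVE state
     nova list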
Conclusion

We have built a basic architecture suitable for advanced testing purposes. This kind of architecture is close to production, minus High Availability (HA) and some services such as those for running OpenStack Object Storage. You can of course add as many compute nodes as you want. If you need more specific help, please read the official documentation of each project or write a post to an OpenStack mailing list.

OpenStack Installation and Deployment Manual


Havana release

Contents
1. Environment
2. Overall component structure
3. Environment preparation
  3.1. NIC configuration
  3.2. Changing the hostname
  3.3. Installing the MySQL database
4. Installing the OpenStack packages
  4.1. Installing the OpenStack unit packages
  4.2. Installing the messaging server
5. Installing the Keystone identity service
  5.1. Creating the connection between Keystone and the database
  5.2. Defining an authorization token
  5.3. Creating the keys and certificates
  5.4. Starting Keystone
  5.5. Defining users, tenants and roles
  5.6. Creating services and defining API endpoints
6. Configuring Glance
  6.1. Installing the Glance components
  6.2. Creating the Glance database connection
  6.3. Defining a user named glance in Keystone
  6.4. Adding the glance roles
  6.5. Configuring authentication for the image service
  6.6. Adding credentials to the /etc/glance/glance-api-paste.ini and
  6.7. glance/glance-registry-paste.ini files
  6.8. Creating the glance service in Keystone
  6.9. Starting the Glance services
  6.10. Verifying the Glance service
7. Installing the Nova components
  7.1. Configuring the Nova database connection
  7.2. Creating the nova user in Keystone
  7.3. Adding roles
  7.4. Configuring authentication for the compute service
  7.5. Creating the nova service in Keystone
  7.6. Creating the endpoint
  7.7. Starting the Nova services
  7.8. Verifying the Nova services
8. Installing nova-network
  8.1. Installing a local data store
  8.2. Starting nova-network
  8.3. Creating a VLAN
  8.4. Opening the security rules
  8.5. Verifying that the services are healthy
9. Installing the Dashboard
  9.1. Changing the cache settings
  9.2. Editing /etc/openstack-dashboard/local_settings
  9.3. Starting the Dashboard
  9.4. Verifying the installation
10. Building a VM .img file for Glance
  10.1. Creating the image disk
  10.2. Creating the VM with virt-manager
  10.3. Fixing a few VM configuration issues after installation
  10.4. Building the image for Glance
11. Creating flavors
  11.1. Inspecting the existing flavors
  11.2. Creating a new flavor
12. Creating a virtual machine

1. Environment

2. Overall component structure

PS: Since this environment has only one physical machine, the host acts both as the management node and as a compute node, so besides the controller components listed above, the nova-compute and nova-network services must also be installed on it.

OpenStack Installation, Configuration and Testing Manual


Contents
I. Test environment
II. Test topology
III. Installing the controller node
  3.1 System configuration
  3.2 Installing the NTP service
  3.3 Installing and configuring MySQL
  3.4 Installing and configuring Qpid
  3.5 Installing the OpenStack utility packages
  3.6 Installing and configuring Keystone (initializing Keystone; defining Users, Tenants and Roles; defining Services and API Endpoints)
  3.7 Installing and configuring Glance (initializing Glance; creating the User, defining Services and API Endpoints; configuring the Glance services; testing Glance)
  3.8 Installing and configuring Nova (initializing Nova; creating the User, defining Services and API Endpoints; configuring the Nova services)
  3.9 Installing and configuring Horizon
  3.10 Installing and configuring Neutron (initializing Neutron; creating the User, defining Services and API Endpoints; configuring the network services)
  3.11 Installing and configuring Cinder (initializing Cinder; creating the User, defining Services and API Endpoints; configuring the Cinder services)
  3.12 Installing and configuring Swift (initializing Swift; creating the User, defining Services and API Endpoints; configuring Swift)
IV. Installing the compute node
  4.1 System configuration
  4.2 Setting up time synchronization
  4.3 Configuring the libvirtd service
  4.4 Installing and configuring Neutron (initializing Neutron-openvswitch; configuring the Neutron services)
  4.5 Installing and configuring Nova (initializing Nova-compute; configuring the Nova services)
V. Testing

I. Test environment

1. Hardware: one HP DL380 G5 server.
2. Software: CentOS 6.4 x86_64, OpenStack, ESXi 5.5.

II. Test topology

III. Installing the controller node

3.1 System configuration

1. Import the third-party repositories:

     # rpm -Uvh /pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
     # rpm -Uvh /rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
     # yum install /openstack/openstack-havana/rdo-release-havana-7.noarch.rpm

2. Adjust the yum repository:

     [root@controller ~]# cat /etc/yum.repos.d/rdo-release.repo
     [openstack-havana]
     name=OpenStack Havana Repository
     baseurl=https:///repos/openstack/EOL/openstack-havana/epel-6/
     enabled=1
     gpgcheck=0
     priority=1

3. Configure the /etc/hosts file.
4. Configure the network.
5. Disable SELinux.
6. Adjust the /etc/sysctl.conf parameters and apply them:

     # sysctl -p

7. Update the system:

     # yum -y update

8. Reboot the machine:

     # reboot

3.2 Installing the NTP service

1. Install the NTP time synchronization server:

     # yum install -y ntp

2. Edit /etc/ntp.conf.
3. Start the ntp service and enable it at boot:

     # service ntpd start
     # chkconfig ntpd on

3.3 Installing and configuring MySQL

1. Install MySQL:

     # yum install -y mysql mysql-server MySQL-python

2. Adjust the mysql startup file.
3. Start the MySQL service and enable it at boot:

     # service mysqld start
     # chkconfig mysqld on

4. Set the root password to openstack:

     # mysqladmin -uroot password 'openstack'; history -c

3.4 Installing and configuring Qpid

1. Install qpid:

     # yum install -y qpid-cpp-server memcached

2. Edit the /etc/qpidd.conf configuration file and set auth to no.
3. Start the qpid service and enable it at boot:

     # service qpidd start
     # chkconfig qpidd on

3.5 Installing the OpenStack utility packages

     # yum install -y openstack-utils

3.6 Installing and configuring Keystone

3.6.1 Initializing Keystone

1. Install keystone:

     # yum install -y openstack-keystone

2. Create the keystone database and point the configuration file's database connection at it:

     # openstack-db --init --service keystone
     # openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:keystone@localhost/keystone

3. Generate a random token with openssl and store it in the configuration file:

     # export SERVICE_TOKEN=$(openssl rand -hex 10)      // generate a random SERVICE_TOKEN value
     # export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0
     # mkdir /root/work
     # echo $SERVICE_TOKEN > /root/work/ks_admin_token
     # cat /root/work/ks_admin_token
     # openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN

   Note: the generated SERVICE_TOKEN value is written to a file for later use; wherever a SERVICE_TOKEN value is needed below, it is taken from the ks_admin_token file.
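This copy of the document breaks off before section 3.6.2. For orientation, defining users, tenants and roles with the Havana keystone client typically looks like the following sketch (the names and passwords are illustrative, not prescribed by this manual):

     # Run with SERVICE_TOKEN and SERVICE_ENDPOINT exported as above
     keystone user-create --name=admin --pass=openstack
     keystone tenant-create --name=admin --description="Admin Tenant"
     keystone role-create --name=admin
     keystone user-role-add --user=admin --tenant=admin --role=admin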

openstack (Kilo) Installation and Configuration Document


A note before starting: when copying any command from this document, watch for spaces and line breaks; the document layout may introduce spurious line wrapping.

I. Test environment

Three virtual machines on one physical host play the roles of controller node, network node and compute node.

We use the VMware virtualization software: first build a template VM with the basic components covered in part V installed, then clone it three times to produce the controller, network and compute nodes.

The guest operating system is Ubuntu 14.04.3 and the OpenStack release is Kilo.

All required components are installed automatically with apt-get.

II. Two common ways to give the OpenStack nodes Internet access

1. On top of the three NICs from the official documentation, add a fourth NIC for Internet access (either NAT or bridged mode works), with the other three NICs in host-only mode.

2. Use the management network itself for Internet access.

This document uses the second approach: the management network is configured as an IP range that can reach the Internet in your own environment (either NAT or host-only works). Attempts with the first approach eventually ran into VMs that could not ping the outside world, so it is not recommended.

For details see /thread-13508-1-1.html.

III. Network configuration of each node

Once the nodes are built, configure their networks following the openstack-install-guide-apt-kilo official document.

This example uses OpenStack Networking (neutron) for the network configuration.

The sample architecture with OpenStack Networking (neutron) requires one controller node, one network node and at least one compute node.

The controller node has one network interface, on the management network.

The network node has three network interfaces: one on the management network, one on the instance tunnel network and one on the external network.

The compute node has two network interfaces: one on the management network and one on the instance tunnel network.

You therefore need to add the corresponding virtual networks in the VM manager. Since we use the second approach from part II (Internet access over the management network), the networks are assigned as follows:

- Management network: 192.168.109.0/24, gateway 192.168.109.2
- Instance tunnel network: 10.0.1.0/24, no gateway
- External network: 192.168.109.0/24, gateway 192.168.109.2

First configure the management network for NAT access. VMware uses VMnet8 for NAT mode by default; open the host system's network adapter settings to see VMnet8's configuration. This IP address varies from person to person; in a different network or environment it is very likely to change.
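As a concrete reference, the network node's /etc/network/interfaces on Ubuntu 14.04 could look like the sketch below. The interface names and the host addresses ending in .21 are illustrative assumptions for this address plan, not values fixed by the document:

     # Management network (also provides Internet access via the 192.168.109.2 gateway)
     auto eth0
     iface eth0 inet static
         address 192.168.109.21
         netmask 255.255.255.0
         gateway 192.168.109.2
         dns-nameservers 8.8.8.8

     # Instance tunnel network (no gateway)
     auto eth1
     iface eth1 inet static
         address 10.0.1.21
         netmask 255.255.255.0

     # External network (unnumbered, brought up manually)
     auto eth2
     iface eth2 inet manual
         up ip link set dev $IFACE up
         down ip link set dev $IFACE down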

Chapter 3 - OpenStack Installation and Deployment

Install the operating system (Ubuntu) and set the root password, then update the OS and install the basic tools:

     apt-get update
     apt-get install vim
     apt-get install openssh-client
     apt-get install openssh-server
     apt-get install git
Verifying the OpenStack environment
Access the OpenStack Dashboard: log in from a browser at http://172.16.0.2 or http://10.20.0.3. Fuel provides a health-check function for the OpenStack environment, and Fuel also allows the current OpenStack environment to be edited.
3.3 Manual OpenStack Installation and Configuration
Automated installation and configuration tool: xCAT

How xCAT works
Automated installation and configuration tool: xCAT

The xCAT workflow:

1. The user issues an xCAT command from the client machine to the server.
2. The xCAT daemon (xcatd) running on the server-side management node receives the task and parses out the command name, its arguments, the user who issued it, the client host's IP address, and the range of nodes the command will affect.
3. xcatd on the management node checks, via its ACLs, whether the issuer is allowed to run this xCAT task; if the user has permission, the task is placed in the run queue to await execution.
4. After the task executes, the server returns the result to the client, where it is displayed on the issuer's terminal, completing the execution of the task.
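For example, a typical task following this flow is a power-control command. A sketch, assuming a node named compute01 is already defined in the xCAT tables:

     # Issued on the client; executed by xcatd on the management node after the ACL check
     rpower compute01 stat    # query the node's power state
     rpower compute01 on      # power the node on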
Chapter 3: OpenStack Installation and Deployment

OpenStack installation options

all-in-one: installs both the management functions and the compute functions on a single node.

openstack Detailed Installation and Configuration Manual (single node)


Contents
I. Controller node system and environment installation
  1: Preparing the installation media
  2: Installing the OS
  3: Setting root privileges
  4: Configuring the network
  5: Installing the bridge
  6: Setting up NTP
  7: Setting up iSCSI
  8: Installing rabbitmq
II. Installing MySQL and creating the databases
  1: Installing mysql
  2: Installing phpmyadmin
  3: Creating the databases
III. Installing and configuring keystone
  1: Installing keystone
  2: Configuring keystone; creating the services
  3: Verifying the installation
IV. Installing and configuring glance
  1: Installing the packages
  2-5: Configuring the files under /etc/glance/
  6: Synchronizing the database
  7: Verifying that the glance service works
  8: Downloading an image and uploading it
V. Installing and configuring nova
  1: Installing the nova components
  2-3: Configuring the files under /etc/nova/
  4: Stopping and restarting the nova services
  5: Synchronizing the database
  6: Checking the nova services
VI. Installing and configuring the Dashboard
  1: Installing the dashboard
  2: Configuring /etc/openstack-dashboard/
  3: Restarting the services
Appendix A: Using nova-volume
Appendix B: Storing glance images in swift
Appendix C: Multiple compute node configuration

I. Controller node system and environment installation

1: Preparing the installation media
2: Installing the OS

The server has two disks; a dedicated partition is set aside for nova-volume (nova-volume is generally not used).

The system is installed with the minimal option; only the SSH server needs to be selected.

After the OS is installed, refresh the package lists and upgrade the system:

     apt-get update
     apt-get upgrade
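Step 5 of the outline ("installing the bridge") is truncated in this copy; on Ubuntu it normally amounts to installing the bridge utilities and adding a bridge stanza to /etc/network/interfaces. A sketch, with the bridge name and member interface as illustrative assumptions:

     apt-get install bridge-utils
     # Example /etc/network/interfaces stanza for a bridge br100 over eth1:
     # auto br100
     # iface br100 inet static
     #     bridge_ports eth1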

OpenStack Installation Manual


Contents
I. Installation environment
  1. Example architecture
  2. Networking
  3. Security
  4. Host network configuration
  5. NTP
  6. Installing the OpenStack packages
  7. Installing the database
  8. Message queue
  9. Caching tokens
II. Identity service (configured on the controller node)

  1. Prerequisites
  2. Configuring the Apache server
  3. Creating a domain, project, user and role
  4. Verifying the operation
III. Image service
  1. Prerequisites
  2. Installing and configuring the components
  5. Verifying the operation
IV. Compute service
  1. Installing and configuring the controller node (installing and configuring the components; finishing the installation)
  3. Installing and configuring the compute node (installing and configuring the components; verifying the operation)

I. Installation environment

1. Example architecture

Following the official documentation, the architecture in this document uses one controller node and one compute node.

(The example architecture requires at least two nodes (hosts) to launch a basic virtual machine or instance.)

The controller node runs the Identity service, the Image service, the management portions of the Compute and Networking services, the various network agents, and the Dashboard, plus supporting services such as the database, the message queue and NTP.

The compute node runs the hypervisor portion of the Compute service, which manages the instances. By default, the Compute service uses KVM. The compute node also runs a networking agent that connects instances to the virtual networks and provides firewalling to instances via security groups.

2. Networking

- Public (provider) networks. The provider networks option deploys the OpenStack Networking service in the simplest way possible, with layer-2 (bridging/switching) services and VLAN segmentation of networks. In essence, it bridges virtual networks to physical networks and relies on the physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP addresses to instances.

- Private (self-service) networks. The self-service networks option augments the provider networks option with layer-3 (routing) services, using an approach such as VXLAN.
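With the provider option, creating the shared external network later reduces to a couple of neutron calls. A sketch, assuming a flat provider network named "provider" and illustrative documentation-range addressing:

     # Run on the controller with admin credentials
     neutron net-create --shared --provider:physical_network provider \
       --provider:network_type flat provider
     neutron subnet-create --name provider \
       --allocation-pool start=203.0.113.101,end=203.0.113.250 \
       --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 provider 203.0.113.0/24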

openstack Installation Manual (partially translated edition)


Translator's note: because terms, commands and scripts are easily confused once translated, making it impossible to match them against the actual installation environment, this document leaves such content untranslated as far as possible.

In fact, reading the original directly, or at least studying and working with the original at hand, avoids many problems introduced by translation.

光头猪猪

1. OpenStack Basic Install

Introduction

If you want to deploy the OpenStack Folsom platform with Ubuntu 12.04 LTS (using the Ubuntu Cloud Archive) for development and testing, this document will help you.

We will complete a three-node installation: one controller, one network node and one compute node.

Of course, you can also install as many compute nodes as you need.

For OpenStack beginners who want to set up a test infrastructure, this document is a good starting point.

Architecture

A standard Quantum installation includes up to four physically separate data center networks:

- Management network. Used for internal communication between OpenStack components. IP addresses on this network should be reachable only inside the data center.
- Data network. Used for VM data traffic inside the deployed cloud. The IP addressing requirements of this network depend on the Quantum plugin in use.
- External network. Used in some deployment scenarios to give VMs Internet access. IP addresses on this network should be reachable by anyone on the Internet.
- API network. Exposes all OpenStack APIs, including the Quantum API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. This network may be the same as the external network, since you can carve a Quantum subnet for the external network out of part of the full IP allocation range.

Requirements

You need at least three machines (virtual or physical) with Ubuntu 12.04 (LTS) installed.

Table 1.1. Architecture and node information

Controller node

Introduction

The controller node will provide:
- Databases (with MySQL)
- Queues (with RabbitMQ)
- Keystone
- Glance
- Nova (without nova-compute)
- Cinder
- Quantum Server (with Open vSwitch plugin)
- Dashboard (with Horizon)

Common services

Operating system

1. Install Ubuntu with these parameters:
   - Time zone: UTC
   - Hostname: folsom-controller
   - Packages: OpenSSH-Server

After the OS installation completes, reboot the server.

OpenStack Installation Document


OpenStack Nova Installation Manual
Author: yz
Date: 2011-11-27
Version: v0.3

Contents
Test environment
Deployment architecture
Server OS installation
Controller node installation
  NTP time service installation
  MySQL database service installation
  RabbitMQ message queue service installation
  Nova service installation
  Glance image storage service installation
  Dependencies for Keystone, noVNC and the Dashboard
  Keystone identity service installation
  OPENSTACK.COMPUTE extension library installation
  OPENSTACKX extension library installation
  PYTHON-NOVACLIENT extension library installation
  QUANTUM module installation
  OPENSTACK-DASHBOARD control panel installation
  noVNC service installation
  Nova service configuration
  Glance image storage service configuration
  noVNC service configuration
Compute node installation
  NTP time synchronization configuration
  Nova service installation
  Nova service configuration
Dashboard usage basics
  Creating keypairs
  Creating security groups
  Launching an instance
  Connecting to an instance over VNC
  Assigning a public IP to an instance

Test environment

Hardware:
  DELL R410 (1): Intel(R) Xeon(R) CPU E5620 @ 2.40GHz x 2, 16GB RAM, 300GB disk, Broadcom NetXtreme II BCM5716 Gigabit Ethernet x 2
  DELL R710 (1): Intel(R) Xeon(R) CPU E5606 @ 2.13GHz x 2, 32GB RAM, 250GB disk, Broadcom NetXtreme II BCM5709 Gigabit Ethernet x 4
OS: Ubuntu Server 11.04 x64
OpenStack version: Diablo 4 release (2011.3)

Deployment architecture

  Model/hostname        Public IP       Private IP    Role
  R410/r410-control1    60.12.206.111   192.168.1.2   controller node
  R710/r710-compute1    60.12.206.99    192.168.1.3   compute node 1

The instance network is 10.0.0.0/24, the floating IP is 60.12.206.114, the instance network is bridged onto the private NIC, and the network mode is FlatDHCP.

Server OS installation

1. Ubuntu Server 11.04 x64, default installation.
2. The servers use eth0 for the public network.
3. The servers use eth1 for the private network.
4. Except for Apache and noVNC, all services listen on the private IP.

Controller node installation

NTP time service installation

1. Install the NTP time synchronization server:

     apt-get install -y ntp ntpdate

2. Synchronize the time:

     /etc/init.d/ntp stop
     ntpdate ntp.api.bz

3. Edit /etc/ntp.conf, replacing its content with the following (the restrict line limits which network may adjust the time; 192.168.1.0/24 is the private VM segment):

     restrict 127.0.0.1
     restrict 192.168.1.0 mask 255.255.255.0 nomodify
     server ntp.api.bz
     server 127.127.1.0 # local clock
     fudge 127.127.1.0 stratum 10
     driftfile /var/lib/ntp/drift

4. Restart the ntp service:

     /etc/init.d/ntp restart

MySQL database service installation

1. Preseed the MySQL root password to openstack:

     cat << MYSQL_PASSWORD | debconf-set-selections
     mysql-server-5.1 mysql-server/root_password password openstack
     mysql-server-5.1 mysql-server/root_password_again password openstack
     MYSQL_PASSWORD

2. Install the MySQL database service:

     apt-get install -y mysql-server

3. Make the MySQL service listen on the private NIC IP:

     sed -i '/bind-address/s/127.0.0.1/192.168.1.2/g' /etc/mysql/my.cnf

4. Restart the MySQL service:

     /etc/init.d/mysql restart

5. Check that the service started properly: use netstat -ltunp to verify that TCP port 3306 is listening. If it did not start, check the logs under /var/log/mysql.

RabbitMQ message queue service installation

1. Install the RabbitMQ message queue service:

     apt-get install -y rabbitmq-server

2. Change the default password of the guest user to openstack:

     rabbitmqctl change_password guest openstack

Nova service installation

1. Add the required repository:

     echo 'deb /openstack-release/2011.3/ubuntu natty main' >> /etc/apt/sources.list

2. Import the repository key:

     apt-key adv --keyserver --recv-keys 94CA80414F1043F6495425C37D21C2EC3D1B4472

3. Refresh the APT package lists:

     apt-get update

4. Install the nova-api, nova-network, nova-objectstore and nova-scheduler services:

     apt-get install -y nova-api nova-network nova-objectstore nova-scheduler

Glance image storage service installation

1. Install glance:

     apt-get install -y glance

Dependencies for Keystone, noVNC and the Dashboard

1. Install the related packages with APT:

     apt-get install -y python-dev libxml2-dev libxslt1-dev libsasl2-dev libldap2-dev libsqlite3-dev libssl-dev python-pip swig git python-dateutil apache2 libapache2-mod-wsgi python-numpy

2. Install the related packages with pip:

     pip install passlib sqlalchemy-migrate prettytable glance python-cloudfiles nose==1.0.0 Django==1.3 django-nose==0.1.2 django-registration==0.7 django-mailer mox nosexcover

Keystone identity service installation

1. Download the keystone identity service source:

     cd /opt
     git clone https:///cloudbuilders/keystone.git
     cd keystone
     git checkout diablo
     cd ..

2. Install the keystone identity service:

     cd keystone
     python setup.py install
     python setup.py develop

3. Create the keystone database:

     mysql -uroot -popenstack -e 'create database keystone'

4. Create the database user for the keystone service:

     mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on keystone.* to keystone@'localhost' identified by 'keystone'"

5. Create the user that the keystone service runs as:

     useradd -s /bin/bash -g nogroup -m -d /var/log/keystone keystone

6. Create the directory for the keystone configuration files:

     mkdir /etc/keystone

7. Generate the keystone configuration file:

     cp /opt/keystone/etc/keystone.conf /etc/keystone/

   Edit /etc/keystone/keystone.conf and change:

     default_store = sqlite
     service_host = 0.0.0.0
     admin_host = 0.0.0.0
     sql_connection = sqlite:///keystone.db

   to:

     #default_store = sqlite
     service_host = 192.168.1.2
     admin_host = 192.168.1.2
     sql_connection = mysql://keystone:keystone@localhost/keystone

8. Generate the keystone service data. Edit /etc/keystone/keystone_data.sh and add the following:

     #!/bin/bash
     # Create a tenant named admin
     keystone-manage $* tenant add admin
     # Create a user named admin with password openstack, belonging to the admin tenant
     keystone-manage $* user add admin openstack admin
     # Create the administrator role
     keystone-manage $* role add Admin
     # Create the keystone administrator role
     keystone-manage $* role add KeystoneAdmin
     # Create the keystone service administrator role
     keystone-manage $* role add KeystoneServiceAdmin
     # Grant the administrator role to the admin user
     keystone-manage $* role grant Admin admin
     # Grant the keystone administrator role to the admin user
     keystone-manage $* role grant KeystoneAdmin admin
     # Grant the keystone service administrator role to the admin user
     keystone-manage $* role grant KeystoneServiceAdmin admin
     # Add the nova compute service
     keystone-manage $* service add nova compute "Nova Compute Service"
     # Add the glance image service
     keystone-manage $* service add glance image "Glance Image Service"
     # Add the keystone identity service
     keystone-manage $* service add keystone identity "Keystone Identity Service"
     # Add the nova compute endpoint
     keystone-manage $* endpointTemplates add RegionOne nova http://192.168.1.2:8774/v1.1/%tenant_id% http://192.168.1.2:8774/v1.1/%tenant_id% http://192.168.1.2:8774/v1.1/%tenant_id% 1 1
     # Add the glance image endpoint
     keystone-manage $* endpointTemplates add RegionOne glance http://192.168.1.2:9292/v1.1/%tenant_id% http://192.168.1.2:9292/v1.1/%tenant_id% http://192.168.1.2:9292/v1.1/%tenant_id% 1 1
     # Add the keystone identity endpoint
     keystone-manage $* endpointTemplates add RegionOne keystone http://192.168.1.2:5000/v2.0 http://192.168.1.2:35357/v2.0 http://192.168.1.2:5000/v2.0 1 1
     # For the admin tenant and admin user, create a token named openstack expiring at 2015-02-05 00:00
     keystone-manage $* token add openstack admin admin 2015-02-05T00:00
     # For the admin tenant and admin user, create an EC2 credential whose key and secret are the admin user's username and password
     keystone-manage $* credentials add admin EC2 'admin' 'openstack' admin

9. Create the upstart configuration for the keystone service. Create a file named keystone.conf under /etc/init/ with the following content:

     description "Keystone API server"
     author "Soren Hansen <soren@linux2go.dk>"
     start on (local-filesystems and net-device-up IFACE!=lo)
     stop on runlevel [016]
     respawn
     exec su -c "keystone --config-file=/etc/keystone/keystone.conf --log-dir=/var/log/keystone --log-file=keystone.log" keystone

10. Create the keystone init script:

     ln -sv /lib/init/upstart-job /etc/init.d/keystone

11. Start the keystone identity service:

     /etc/init.d/keystone start

12. Verify that keystone started properly: use netstat -ltunp to check for listeners on TCP ports 5000 and 35357; if they are absent, check the logs under /var/log/keystone.

OPENSTACK.COMPUTE extension library installation

1. Download the openstack.compute extension library:

     cd /opt
     git clone https:///jacobian/openstack.compute.git
     cd openstack.compute
     git checkout master
     cd ..

2. Install the openstack.compute extension library:

     cd openstack.compute
     python setup.py install
     python setup.py develop

OPENSTACKX extension library installation

1. Download the openstackx extension library:

     cd /opt
     git clone https:///cloudbuilders/openstackx.git
     cd openstackx
     git checkout diablo
     cd ..

2. Install the openstackx extension library:

     cd openstackx
     python setup.py install
     python setup.py develop

PYTHON-NOVACLIENT extension library installation

1. Download the python-novaclient extension library:

     cd /opt
     git clone https:///cloudbuilders/python-novaclient.git
     cd python-novaclient
     git checkout diablo
     cd ..

2. Install the python-novaclient extension library:

     cd python-novaclient
     python setup.py install
     python setup.py develop

QUANTUM module installation

1. Download the quantum extension library:

     cd /opt
     git clone https:///openstack/quantum.git
     cd quantum
     git checkout stable/diablo
     cd ..

2. Install the quantum extension library:

     cd quantum
     python setup.py install
     python setup.py develop

OPENSTACK-DASHBOARD control panel installation

1. Download the openstack-dashboard control panel:

     cd /opt
     git clone https:///openstack/openstack-dashboard.git
     cd openstack-dashboard
     git checkout master
     cd ..

2. Install the openstack-dashboard control panel:

     cd openstack-dashboard/django-openstack
     python setup.py install
     python setup.py develop
     cd ..
     cd openstack-dashboard
     python setup.py install
     python setup.py develop

3. Create the openstack-dashboard database:

     mysql -uroot -popenstack -e 'create database dashboard'

4. Create the database user for the dashboard:

     mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on dashboard.* to dashboard@'localhost' identified by 'dashboard'"

5. Configure the openstack-dashboard control panel:

     cd /opt/openstack-dashboard/openstack-dashboard/local
     cp local_settings.py.example local_settings.py

   Edit local_settings.py and change:

     DATABASES = {
         'default': {
             'ENGINE': 'django.db.backends.sqlite3',
             'NAME': os.path.join(LOCAL_PATH, 'dashboard_openstack.sqlite3'),
         },
     }

   to:

     DATABASES = {
         'default': {
             'ENGINE': 'django.db.backends.mysql',
             'NAME': 'dashboard',
             'USER': 'dashboard',
             'PASSWORD': 'dashboard',
             'HOST': 'localhost',
             'PORT': '3306',
         },
     }

   and change:

     OPENSTACK_KEYSTONE_URL = "http://localhost:5000/v2.0/"
     OPENSTACK_KEYSTONE_ADMIN_URL = "http://localhost:35357/v2.0"
     OPENSTACK_ADMIN_TOKEN = "999888777666"

   to:

     OPENSTACK_KEYSTONE_URL = "http://192.168.1.2:5000/v2.0/"
     OPENSTACK_KEYSTONE_ADMIN_URL = "http://192.168.1.2:35357/v2.0"
     OPENSTACK_ADMIN_TOKEN = "openstack"

6. Configure apache:

     mkdir /opt/openstack-dashboard/.blackhole
     chown -R www-data:www-data /opt/openstack-dashboard

   Edit the /etc/apache2/sites-available/default file, replacing its content with:

     <VirtualHost *:80>
         WSGIScriptAlias / /opt/openstack-dashboard/openstack-dashboard/dashboard/wsgi/django.wsgi
         WSGIDaemonProcess dashboard user=www-data group=www-data processes=3 threads=10
         SetEnv APACHE_RUN_USER www-data
         SetEnv APACHE_RUN_GROUP www-data
         WSGIProcessGroup dashboard
         DocumentRoot /opt/openstack-dashboard/.blackhole/
         Alias /media /opt/openstack-dashboard/openstack-dashboard/media
         <Directory />
             Options FollowSymLinks
             AllowOverride None
         </Directory>
         <Directory /opt/openstack-dashboard/>
             Options Indexes FollowSymLinks MultiViews
             AllowOverride None
             Order allow,deny
             allow from all
         </Directory>
         ErrorLog /var/log/apache2/error.log
         LogLevel warn
         CustomLog /var/log/apache2/access.log combined
     </VirtualHost>

7. Create the dashboard database schema:

     /opt/openstack-dashboard/openstack-dashboard/dashboard/manage.py syncdb

8. Restart the apache service:

     /etc/init.d/apache2 restart

9. Verify the dashboard: first check with netstat -ltunp that port 80 is listening, then browse to the web service and confirm the login page appears. If it does not, check the error logs under /var/log/apache2/.

noVNC service installation

1. Download the noVNC service:

     cd /opt
     git clone https:///cloudbuilders/noVNC.git
     git checkout diablo
     cd ..

Nova service configuration

1. Create the nova database:

     mysql -uroot -popenstack -e 'create database nova'

2. Create the database user for the nova service:

     mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on nova.* to nova@'192.168.1.%' identified by 'nova'"

3. Configure the nova service:

     cp /opt/keystone/examples/paste/nova-api-paste.ini /etc/nova/api-paste.ini

   Edit /etc/nova/api-paste.ini and change:

     service_host = 127.0.0.1
     auth_host = 127.0.0.1
     auth_uri = http://127.0.0.1:5000/
     admin_token = 999888777666

   to:

     service_host = 192.168.1.2
     auth_host = 192.168.1.2
     auth_uri = http://192.168.1.2:5000/
     admin_token = openstack

   Edit /etc/nova/nova.conf, replacing its content with:

     #general
     --logdir=/var/log/nova
     --state_path=/var/lib/nova
     --lock_path=/var/lock/nova
     --verbose=True
     --use_syslog=False
     #nova-objectstore
     --use_s3=True
     --s3_host=192.168.1.2
     --s3_port=3333
     #rabbit
     --rabbit_host=192.168.1.2
     --rabbit_port=5672
     --rabbit_password=openstack
     #ec2
     --ec2_listen=192.168.1.2
     --ec2_listen_port=8773
     --osapi_listen=192.168.1.2
     --osapi_listen_port=8774
     --osapi_extensions_path=/opt/openstackx/extensions
     --api_paste_config=/etc/nova/api-paste.ini
     #db
     --sql_connection=mysql://nova:nova@192.168.1.2/nova
     --sql_idle_timeout=600
     --sql_max_retries=3
     --sql_retry_interval=3
     #glance
     --glance_host=192.168.1.2
     --glance_api_servers=192.168.1.2:9292
     --image_service=nova.image.glance.GlanceImageService
     #nova-network
     --dhcpbridge_flagfile=/etc/nova/nova.conf
     --dhcpbridge=/usr/bin/nova-dhcpbridge
     --network_manager=nova.network.manager.FlatDHCPManager
     --linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver

4. Create the nova database schema:

     nova-manage db sync

5. Create an instance network named private with IP range 10.0.0.0/24, network ID 1, 256 hosts, bridged on eth1 with bridge name br1, and enable multi-host nova-network:

     nova-manage network create private 10.0.0.0/24 1 256 --bridge=br1 --bridge_interface=eth1 --multi_host='T'

6. Create the allocatable floating IP:

     nova-manage floating create 60.12.206.114

7. Restart the related services:

     /etc/init.d/nova-api restart
     /etc/init.d/nova-network restart
     /etc/init.d/nova-objectstore restart
     /etc/init.d/nova-scheduler restart

8. Check that the services started properly.

   The bottom of /var/log/nova/nova-api.log should show:

     2011-11-28 00:44:29,390 INFO nova.wsgi [-] Started ec2 on 192.168.1.2:8773
     2011-11-28 00:44:29,390 INFO nova.wsgi [-] Started osapi on 192.168.1.2:8774

   and netstat -ltunp should show listeners on TCP ports 8773 and 8774.

   The bottom of /var/log/nova/nova-network.log should show:

     2011-11-28 00:46:05,519 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
     2011-11-28 00:46:05,520 DEBUG nova [-] Creating Consumer connection for Service network from (pid=7592) start /usr/lib/python2.7/dist-packages/nova/service.py:153

   nova-manage service list should include:

     nova-network r410-control1 nova enabled :-) 2011-11-27 16:48:36

   The bottom of /var/log/nova/nova-objectstore.log should show:

     2011-11-28 00:46:46,017 INFO nova.wsgi [-] Started S3 Objectstore on 192.168.1.2:3333

   and netstat -ltunp should show a listener on TCP port 3333.

   The bottom of /var/log/nova/nova-scheduler.log should show:

     2011-11-28 00:47:59,789 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
     2011-11-28 00:47:59,790 DEBUG nova [-] Creating Consumer connection for Service scheduler from (pid=7805) start /usr/lib/python2.7/dist-packages/nova/service.py:153

   nova-manage service list should include:

     nova-scheduler r410-control1 nova enabled :-) 2011-11-27 16:48:40

   If any of the above services failed to start, check the corresponding logs under /var/log/nova.

Glance image storage service configuration

1. Create the glance database:

     mysql -uroot -popenstack -e 'create database glance'

2. Create the database user for the glance service:

     mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on glance.* to glance@'localhost' identified by 'glance'"

3. Configure the glance image storage service:

     cp /opt/keystone/examples/paste/glance-api.conf /etc/glance/glance-api.conf
     cp /opt/keystone/examples/paste/glance-registry.conf /etc/glance/glance-registry.conf

   Edit /etc/glance/glance-api.conf and change:

     bind_host = 0.0.0.0
     registry_host = 0.0.0.0
     rabbit_password = guest
     service_host = 127.0.0.1
     auth_host = 127.0.0.1
     auth_uri = http://127.0.0.1:5000/
     admin_token = 999888777666

   to:

     bind_host = 192.168.1.2
     registry_host = 192.168.1.2
     rabbit_password = openstack
     service_host = 192.168.1.2
     auth_host = 192.168.1.2
     auth_uri = http://192.168.1.2:5000/
     admin_token = openstack

   Edit /etc/glance/glance-registry.conf and change:

     bind_host = 0.0.0.0
     sql_connection = sqlite:///glance.sqlite
     service_host = 127.0.0.1
     auth_host = 127.0.0.1
     auth_uri = http://127.0.0.1:5000/
     admin_token = 999888777666

   to:

     bind_host = 192.168.1.2
     sql_connection = mysql://glance:glance@localhost/glance
     service_host = 192.168.1.2
     auth_host = 192.168.1.2
     auth_uri = http://192.168.1.2:5000/
     admin_token = openstack

4. Restart the related services:

     /etc/init.d/glance-api restart
     /etc/init.d/glance-registry restart

5. Check that the services started: netstat -ltunp should show listeners on TCP ports 9191 and 9292; if not, check the logs under /var/log/glance.

6. Upload an image with glance:

     glance add -H 192.168.1.2 -A openstack name=win2k3 is_public=true < win2k3.img

noVNC service configuration

1. Configure the noVNC service by adding the following to /etc/nova/nova.conf:

     #nova-vncproxy
     --vnc_enabled=True
     --vncproxy_url=http://60.12.206.111:6080
     --vncproxy_wwwroot=/opt/noVNC
     --vncproxy_manager=nova.vnc.auth.VNCProxyAuthManager

   Also add the compute node's IP-to-hostname mapping to the /etc/hosts file.

2. Create a symlink for the noVNC startup program:

     ln -sv /opt/noVNC/utils/nova-wsproxy.py /usr/bin/nova-wsproxy

3. Create the upstart configuration for the noVNC service. Create a file named nova-vncproxy.conf under /etc/init/ with the following content:

     description "Nova VNC proxy"
     author "Vishvananda Ishaya <vishvananda@>"
     start on (filesystem and net-device-up IFACE!=lo)
     stop on runlevel [016]
     exec su -c "nova-wsproxy 6080 --web /opt/noVNC --flagfile=/etc/nova/nova.conf" nova

4. Create the noVNC init script:

     ln -sv /lib/init/upstart-job /etc/init.d/nova-vncproxy

5. Restart the related services:

     /etc/init.d/nova-api restart
     /etc/init.d/nova-vncproxy start

6. Check that the service started: netstat -ltunp should show a listener on TCP port 6080. If it did not start, run it in the foreground and investigate.

Compute node installation

NTP time synchronization configuration

1. Install the NTP command packages:

     apt-get install -y ntpdate

   Synchronize the time with the controller node and write it to the hardware clock:

     ntpdate 192.168.1.2
     hwclock -w

2. Add the time synchronization to cron:

     echo '30 8 * * * root /usr/sbin/ntpdate 192.168.1.2; hwclock -w' >> /etc/crontab

Nova service installation

1. Add the required repository:

     echo 'deb /openstack-release/2011.3/ubuntu natty main' >> /etc/apt/sources.list

2. Import the repository key:

     apt-key adv --keyserver --recv-keys 94CA80414F1043F6495425C37D21C2EC3D1B4472

3. Refresh the APT package lists:

     apt-get update

4. Install the nova-network and nova-compute services:

     apt-get install -y nova-network nova-compute

Nova service configuration

1. Configure the nova service. Edit /etc/nova/nova.conf, replacing its content with:

     #general
     --logdir=/var/log/nova
     --state_path=/var/lib/nova
     --lock_path=/var/lock/nova
     --verbose=True
     --use_syslog=False
     #nova-objectstore
     --use_s3=True
     --s3_host=192.168.1.2
     --s3_port=3333
     #rabbit
     --rabbit_host=192.168.1.2
     --rabbit_port=5672
     --rabbit_password=openstack
     #ec2
     --ec2_host=192.168.1.2
     --ec2_port=8773
     --ec2_url=http://192.168.1.2:8773/services/Cloud
     #osapi
     --osapi_host=192.168.1.2
     --osapi_port=8774
     #db
     --sql_connection=mysql://nova:nova@192.168.1.2/nova
     --sql_idle_timeout=600
     --sql_max_retries=3
     --sql_retry_interval=3
     #glance
     --glance_host=192.168.1.2
     --glance_api_servers=192.168.1.2:9292
     --image_service=nova.image.glance.GlanceImageService
     #libvirt
     --connection_type=libvirt
     --libvirt_type=kvm
     --snapshot_image_format=qcow2
     --use_cow_image=True
     --libvirt_use_virtio_for_bridges=True
     #nova-scheduler
     --scheduler_driver=nova.scheduler.multi.MultiScheduler
     --max_cores=48
     --start_guests_on_host_boot=True
     --resume_guests_state_on_host_boot=True
     #nova-network
     --dhcpbridge_flagfile=/etc/nova/nova.conf
     --dhcpbridge=/usr/bin/nova-dhcpbridge
     --network_manager=nova.network.manager.FlatDHCPManager
     --linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
     --fixed_range=10.0.0.0/24
     --flat_interface=eth1
     --flat_network_bridge=br1
     --flat_network_dhcp_start=10.0.0.2
     --floating_range=60.12.206.114
     --multi_host=true
     --public_interface=eth0
     --force_dhcp_release=true
     --use_ipv6=False

2. Start the related services:

     /etc/init.d/nova-network restart
     /etc/init.d/nova-compute restart

3. Check that the services started. netstat -ntap should show connection states similar to:

     tcp 0 0 192.168.1.3:26342 192.168.1.2:5672 ESTABLISHED 29466/python
     tcp 0 0 192.168.1.3:19757 192.168.1.2:3306 ESTABLISHED 29466/python
     tcp 0 0 192.168.1.3:27483 192.168.1.2:5672 ESTABLISHED 29510/python
     tcp 0 0 192.168.1.3:4423  192.168.1.2:3306 ESTABLISHED 29510/python
     tcp 0 0 192.168.1.3:9542  192.168.1.2:3306 ESTABLISHED 29510/python
     tcp 0 0 192.168.1.3:4422  192.168.1.2:3306 TIME_WAIT   -
     tcp 0 0 192.168.1.3:26340 192.168.1.2:5672 ESTABLISHED 29510/python
     tcp 0 0 192.168.1.3:4424  192.168.1.2:3306 ESTABLISHED 29510/python
     tcp 0 0 192.168.1.3:26328 192.168.1.2:5672 ESTABLISHED 29466/python

   The bottom of /var/log/nova/nova-network.log should show:

     2011-11-28 00:46:05,519 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
     2011-11-28 00:46:05,520 DEBUG nova [-] Creating Consumer connection for Service network from (pid=7592) start /usr/lib/python2.7/dist-packages/nova/service.py:153

   The bottom of /var/log/nova/nova-compute.log should show:

     2011-11-28 17:06:24,491 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
     2011-11-28 17:06:24,492 DEBUG nova [-] Creating Consumer connection for Service compute from (pid=31197) start /usr/lib/python2.7/dist-packages/nova/service.py:153

   Running nova-manage service list on the controller node should show (note the compute node entries):

     Binary          Host           Zone  Status   State  Updated_At
     nova-scheduler  r410-control1  nova  enabled  :-)    2011-11-28 09:07:21
     nova-network    r410-control1  nova  enabled  :-)    2011-11-28 09:07:21
     nova-compute    r710-compute1  nova  enabled  :-)    2011-11-28 09:07:14
     nova-network    r710-compute1  nova  enabled  :-)    2011-11-28 09:07:22

   Log in to the dashboard as the administrator; under the SYSTEM PANEL, the Services tab on the left should list the compute node's nova-compute and nova-network services in green. If any service failed to start, check the corresponding logs under /var/log/nova.

Dashboard usage basics

Creating keypairs

In the USER DASHBOARD panel, open the Keypairs tab on the left and click Add New Keypair. Enter a keypair name (here, openstack) and click the Add Keypair button. You are then prompted to download a .pem file, which you can use to log in to the systems you launch.

Creating security groups

In the USER DASHBOARD panel, open the Security Groups tab and click Create Security Group. Enter test for both Name and Description and click Create Security Group. After creation you are returned to the Security Groups tab, where the new test group is listed. Click Edit Rules on the new group. We allow everything by default, with the following rules:

     Ip protocol: tcp,  From port: 0,  To port: 65535, Cidr: 0/0
     Ip protocol: udp,  From port: 0,  To port: 65535, Cidr: 0/0
     Ip protocol: icmp, From port: -1, To port: -1,    Cidr: 0/0

Launching an instance

In the USER DASHBOARD panel, open the Images tab and click Launch next to an uploaded image. Enter the instance name (here, first instance), pick the instance configuration from the Flavor drop-down, pick your existing keypair from the Key Name drop-down, select the test security group we created in the Security Group list, and click Launch Instance. The new instance then appears under the Instances tab, initially in the Build status. Once the status becomes Active, you can connect over VNC.

Connecting to an instance over VNC

In the USER DASHBOARD panel, open the Instances tab, find the first instance we launched, and follow the VNC Console link under Actions. A new window opens through which you can access the running instance.

Assigning a public IP to an instance

In the USER DASHBOARD panel, open the Floating IPs tab and click the Allocate IP button; an available public IP appears. Click the Associate Floating IP link; the Floating IP field shows the IP to assign, the Instance drop-down selects which instance receives it, and clicking Associate IP completes the association. Afterwards the Floating IPs tab shows the completed assignment, and you can connect to your instance over SSH.
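Connecting then looks like the following sketch, assuming the openstack keypair created above and the floating IP from this guide; the login account name (here ubuntu) depends on the image and is an illustrative assumption:

     chmod 600 openstack.pem
     ssh -i openstack.pem ubuntu@60.12.206.114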

OpenStack Complete Installation Manual


Server OS installation

1. CentOS 6.2 x64, minimal installation.
2. The servers use eth0 for the public network.
3. The servers use eth1 for the private network.
4. All services listen on 0.0.0.0.
Controller node installation

Prerequisites

1. Import the third-party repositories:

     rpm -Uvh /pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
     rpm -Uvh /rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

3. Restart the ntp service:

     /etc/init.d/ntpd start

MySQL database service installation

1. Install the MySQL database service:

     yum install -y mysql-server

2. Make the MySQL service listen on the private NIC IP:

     sed -i '/symbolic-links=0/a bind-address = 192.168.1.2' /etc/my.cnf
Test environment

Hardware:
  DELL R710 (1): Intel(R) Xeon(R) CPU E5620 @ 2.40GHz x 2, 48GB RAM, 300GB disk, Broadcom NetXtreme II BCM5716 Gigabit Ethernet x 4
  DELL R410 (1): Intel(R) Xeon(R) CPU E5606 @ 2.13GHz x 2, 8GB RAM, 1TB x 4 disks, Broadcom NetXtreme II BCM5709 Gigabit Ethernet x 4
OS: CentOS 6.2 x64
OpenStack version: Essex release (2012.1)

OpenStack Installation Guide


11. Adding the Telemetry module

Contents
The Telemetry module
Installing and configuring the controller node
Configuring the Compute service
Configuring the Image service
Configuring the Block Storage service
Configuring the Object Storage service
Verifying the Telemetry installation
Next steps

Telemetry provides a framework for monitoring and metering OpenStack; it is also known as the Ceilometer project.

The Telemetry module

The Telemetry module performs the following functions:

- Efficiently polls metering data related to the OpenStack services.
- Collects event and metering data by monitoring the notifications sent by services.
- Publishes the collected data to various data stores and message queues.
- Creates alarms when the collected data breaks defined rules.

The Telemetry module contains the following components:

- A compute agent (ceilometer-agent-compute). Runs on each compute node and polls for resource utilization statistics on those nodes. There may be other types of agent in the future; the emphasis for now is the compute agent.
- A central agent (ceilometer-agent-central). Runs on a central management server and polls for utilization statistics of resources that are not tied to instances or compute nodes. Multiple agents can be started to scale the service horizontally.
- A notification agent (ceilometer-agent-notification). Runs on a central management server and consumes messages from the message queue to build event and metering data.
- A collector (ceilometer-collector). Runs on one or more central management servers and dispatches the collected telemetry data to a data store or to an external consumer, without modification.

- An alarm evaluator (ceilometer-alarm-evaluator). Runs on one or more central management servers and determines which alarms fire, an alarm firing when an associated statistic crosses a threshold over a sliding time window.
- An alarm notifier (ceilometer-alarm-notifier). Runs on one or more central management servers and allows alarms to be set based on the threshold evaluation of a collection of samples.
- An API server (ceilometer-api). Runs on one or more central management servers and provides access to the data in the data store.

These services communicate using the OpenStack messaging bus. Only the collector and the API server have access to the data store.

Installing and configuring the controller node

This section describes how to install and configure the Telemetry module, code-named Ceilometer, on the controller node. The Telemetry module uses separate agents to collect measurements from each OpenStack service in your environment.

Prerequisites

Before you install and configure the Telemetry module, you must install MongoDB and create a MongoDB database, service credentials, and API endpoints.
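The steps themselves are cut off in this copy. For orientation, the prerequisites usually reduce to a few commands like the sketch below; the host name "controller" and the two placeholder passwords are illustrative assumptions:

     # Create the MongoDB database and its user
     mongo --host controller --eval '
       db = db.getSiblingDB("ceilometer");
       db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS",
                   roles: [ "readWrite", "dbAdmin" ]})'
     # Create the service credentials and API endpoint in Keystone
     keystone user-create --name=ceilometer --pass=CEILOMETER_PASS
     keystone user-role-add --user=ceilometer --tenant=service --role=admin
     keystone service-create --name=ceilometer --type=metering --description="Telemetry"
     keystone endpoint-create \
       --service-id=$(keystone service-list | awk '/ metering / {print $2}') \
       --publicurl=http://controller:8777 \
       --internalurl=http://controller:8777 \
       --adminurl=http://controller:8777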

openstack and ceph Integration Installation Guide


Contents
1 Overview
2 Version compatibility table
3 System architecture
  3.1 Physical layout
  3.2 Logical layout
  3.3 openstack installation
  3.4 ceph installation
    3.4.1 IP plan
    3.4.2 Installation steps
  3.5 Installing the ceph client on the controller and compute nodes
  3.6 Configuring glance on the controller node to use ceph
  3.7 Configuring cinder on the controller node to use ceph
  3.8 Configuring nova on the compute node to use ceph

1 Overview

This document describes how to configure the glance, cinder and nova components of openstack to use ceph as their storage backend.

2 Version compatibility table

3 System architecture

3.1 Physical layout

Ceph node1, Ceph node2, Ceph node3

3.2 Logical layout

3.3 openstack installation

Installed with 赵子顾's automated deployment, as a three-node deployment.

3.4 ceph installation

3.4.1 IP plan

3.4.2 Installation steps

1. Rename the three machines to ceph148, ceph149 and ceph150.

2. Edit /etc/hosts on all three machines to contain:

     192.168.1.148 ceph148
     192.168.1.149 ceph149
     192.168.1.150 ceph150

3. Copy ceph.zip to the /home/ceph directory and unpack it, producing the ceph and deploy directories.

4.编辑/etc/yum.repos.d/ceph.repo文件内容如下:[ceph-noarch]name=Ceph noarch packagesbaseurl=file:///home/ceph/cephenabled=1gpgcheck=0[ceph-deply]name=Ceph deploy packagesbaseurl=file:///home/ceph/deployenabled=1gpgcheck=05.三个节点增加相互信任:ceph148上执行:ssh-keygenssh-copy-id ceph148ssh-copy-id ceph149ssh-copy-id ceph150ceph149上执行:ssh-keygenssh-copy-id ceph148ssh-copy-id ceph150ceph150上执行:ssh-keygenssh-copy-id ceph148ssh-copy-id ceph1496.三个节点均关闭selinux和防火墙:service iptables stopchkconfig iptables off将/etc/sysconfig/selinux中SELINUX= enforcing改为SELINUX=disabled重启机器reboot7.安装ceph,三台机器均执行如下命令:yum install ceph -y8.在ceph148上执行如下命令安装ceph-deploy:yum install ceph-deploy -y9.执行如下命令:cd /etc/cephceph-deploy new ceph148 ceph149 ceph15010.部署mon节点,执行如下命令:ceph-deploy mon create ceph148 ceph149 ceph150ceph-deploy gatherkeys ceph148 //收集密钥11.部署osd节点,执行如下命令:ceph-deploy osd prepare ceph148:/dev/sdb ceph148:/dev/sdc ceph149:/dev/sdb ceph149:/dev/sdc ceph150:/dev/sdb ceph150:/dev/sdc12.如果有需要,部署mds,执行如下命令:ceph-deploy mds create ceph148 ceph149 ceph15013.重启服务/etc/init.d/ceph -a restart14.查看ceph状态是否正常:ceph -s显示如下:cluster 4fa8cb32-fea1-4d68-a341-ebddab2f3e0fhealth HEALTH_WARN clock skew detected on mon.ceph150monmap e2: 3 mons at {ceph148=192.168.1.148:6789/0,ceph149=192.168.1.149:6789/0,ceph150=192.168.1.150:6 789/0}, election epoch 8, quorum 0,1,2 ceph148,ceph149,ceph150osdmap e41: 6 osds: 6 up, 6 inpgmap v76: 192 pgs, 3 pools, 0 bytes data, 0 objects215 MB used, 91878 MB / 92093 MB avail192 active+clean15.配置148为ntp的server,其他节点定时向148同步时间3.5controller节点和compute节点安ceph客户端(不需要,在openstack上执行ceph --version能看到版本表示ceph已经安装)1.执行如下命令rpm --import 'https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc'rpm --import 'https:///git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'2.增加如下文件:vi /etc/yum.repos.d/ceph-extras内容如下:[ceph-extras]name=Ceph Extras Packagesbaseurl=/packages/ceph-extras/rpm/centos6/$basearchenabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc[ceph-extras-noarch]name=Ceph Extras noarchbaseurl=/packages/ceph-extras/rpm/centos6/noarchenabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc[ceph-extras-source]name=Ceph Extras Sourcesbaseurl=/packages/ceph-extras/rpm/centos6/SRPMSenabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc 3.添加ceph库rpm -Uvh /rpms/el6/noarch/ceph-release-1-0.el6.noarch.rpm 4.添加epel库rpm -Uvh /pub/epel/6/x86_64/epel-release-6-8.noarch.rpm 5.安装cephyum update -yyum install ceph -y3.6controller节点配置glance使用ceph1.将ceph148节点/etc/ceph目录下的两个文件拷贝到controller节点和compute节点cd /etc/ceph/scp ceph.conf ceph.client.admin.keyring 192.168.1.142:/etc/ceph/scp ceph.conf ceph.client.admin.keyring 192.168.1.140:/etc/ceph/2.修改ceph.client.admin.keyring的权限chmod +r /etc/ceph/ceph.client.admin.keyring3.在ceph148上创建glance的存储池rados mkpool glance4.编辑140上glance的配置文件/etc/glance/glance-api.conf中如下配置项rbd_store_ceph_conf = /etc/ceph/ceph.confdefault_store = rbdrbd_store_user = adminrbd_store_pool = glance5.重启glance-api进程/etc/init.d/openstack-glance-api restart6.测试上传本地镜像,首先将测试镜像cirros-0.3.2-x86_64-disk.img放到140的/home/,然后执行如下上传命令:glance image-create --name "cirros-0.3.2-x86_64-10" --disk-format qcow2 --container-format bare --is-public True --progress </home/cirros-0.3.2-x86_64-disk.img显示如下:[=============================>] 100%+------------------+--------------------------------------+| Property | Value |+------------------+--------------------------------------+| checksum | 
3.7 Configuring Cinder on the Controller Node to Use Ceph

1. On ceph148, create the storage pool for cinder:

rados mkpool cinder

2. On node 140, set the following options in /etc/cinder/cinder.conf:

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool=cinder
rbd_user=admin
rbd_ceph_conf=/etc/ceph/ceph.conf

3. Restart the openstack-cinder-volume process:

/etc/init.d/openstack-cinder-volume restart

4. Create a 1 GB volume from the command line:

cinder create --display-name dev1 1

Output:

+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2014-09-16T08:48:50.367976           |
| display_description | None                                 |
| display_name        | dev1                                 |
| encrypted           | False                                |
| id                  | 1d8f3416-fb15-44a9-837f-7724a9034b1e |
| metadata            | {}                                   |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+

5. Check the status of the new volume:

cinder list

Output:

+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+
| 1d8f3416-fb15-44a9-837f-7724a9034b1e | creating | dev1         | 1    | None        | false    |             |
+--------------------------------------+----------+--------------+------+-------------+----------+-------------+

6. Create a 2 GB volume (dev2) from the web UI, then check the status of both volumes:

cinder list

Output:

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 1d8f3416-fb15-44a9-837f-7724a9034b1e | available | dev1         | 1    | None        | false    |             |
| e53efe68-5d3b-438d-84c1-fa4c68bd9582 | available | dev2         | 2    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

7. List the objects in the cinder pool:

rbd ls cinder

Output:

volume-1d8f3416-fb15-44a9-837f-7724a9034b1e
volume-e53efe68-5d3b-438d-84c1-fa4c68bd9582
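The same cross-check works for volumes; a short sketch (IDs taken from the listings above, admin keyring assumed readable on the node where it runs):

cinder show dev1                                               # cinder's view of the volume
rbd info cinder/volume-1d8f3416-fb15-44a9-837f-7724a9034b1e    # the RBD image backing it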
3.8 Configuring Nova on the Compute Node to Use Ceph

1. Upgrade libvirt to 1.1.0; see "qemu-libvirt更新步骤.doct".
2. Build qemu 1.6.1; see the same document.

3. On ceph148, create the storage pool for nova:

rados mkpool nova

4. Generate a UUID:

uuidgen

Output:

c245e1ef-d340-4d02-9dcf-fd091cd1fe47

5. Define a libvirt secret using that UUID:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>c245e1ef-d340-4d02-9dcf-fd091cd1fe47</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
virsh secret-define --file secret.xml

Output:

Secret c245e1ef-d340-4d02-9dcf-fd091cd1fe47 created

6. Read the admin key:

cat /etc/ceph/ceph.client.admin.keyring

Output:

[client.admin]
key = AQAXrRdU8O7uHRAAvYit51h4Dgiz6jkAtq8GLA==

7. Put the key value into a temporary file:

echo "AQAXrRdU8O7uHRAAvYit51h4Dgiz6jkAtq8GLA==" > key

8. Set the secret value:

virsh secret-set-value --secret c245e1ef-d340-4d02-9dcf-fd091cd1fe47 --base64 $(cat key)

9. On node 142, set the following options in /etc/nova/nova.conf:

images_type=rbd
images_rbd_pool=nova
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=admin
rbd_secret_uuid=c245e1ef-d340-4d02-9dcf-fd091cd1fe47
cpu_mode=none

10. Restart the openstack-nova-compute process:

/etc/init.d/openstack-nova-compute restart

11. Create a VM from the web UI, then check its status on node 142:

nova list

Output:

+--------------------------------------+-------+--------+------------+-------------+--------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| 445e9242-628a-4178-bb10-2d4fd82d042f | adaaa | ACTIVE | -          | Running     | intnet=10.10.10.15 |
+--------------------------------------+-------+--------+------------+-------------+--------------------+

12. List the objects in the nova pool:

rbd ls nova

Output:

445e9242-628a-4178-bb10-2d4fd82d042f_disk

4 Operation Test Screenshots

4.1 Creating a volume snapshot from volume dev3
4.2 Creating a volume from the volume snapshot
4.3 Attaching the volume created from the snapshot
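The three GUI tests in section 4 also have command-line equivalents; a hedged sketch using the same-era cinder/nova clients (the dev3 volume comes from the screenshots, and the bracketed IDs are placeholders to be looked up first):

cinder list                                                        # find the ID of volume dev3
cinder snapshot-create --display-name dev3-snap <dev3-id>          # 4.1: snapshot the volume
cinder create --snapshot-id <snapshot-id> --display-name dev4 1    # 4.2: volume from the snapshot
nova volume-attach adaaa <new-volume-id> auto                      # 4.3: attach it to the test instance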

OpenStack Pike Deployment Manual

OpenStack installation and deployment document (Pike)

I. Environment Preparation

All installation and deployment in this document was done on CentOS 7.4. The controller and storage nodes here use two NICs; the network and compute nodes use three NICs.

Note: the yum repositories can be switched to a mirror inside China.

Note: some command lines in this document are missing spaces between arguments; check them carefully when copying.

1. Node topology and host naming

eth0: management network; eth1: data/tunnel network

Controller node: eth0: 10.0.2.15/24, eth1: 192.168.56.101/24
Network node: eth0: 10.0.2.5/24, eth1: 192.168.56.102/24, eth2: no IP assigned
Compute node: eth0: 10.0.2.4/24, eth1: 192.168.56.103/24, eth2: no IP assigned
Storage node: eth0: 10.0.2.6/24, eth1: 192.168.56.104/24

$ vim /etc/hosts
# controller
192.168.56.101 controller
# compute
192.168.56.103 compute
# network
192.168.56.102 network
# block storage
192.168.56.104 block

2. NIC configuration: use the legacy NIC naming scheme (optional)

Edit /etc/default/grub, add "net.ifnames=0" to the kernel command line, and regenerate the grub configuration:

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

[NOTE] See the reference link for details.

3. Disable the firewall and the NetworkManager service on every node

# service NetworkManager stop
# chkconfig NetworkManager off
# systemctl stop firewalld.service
# systemctl disable firewalld.service
# /usr/sbin/setenforce 0

Set SELinux to disabled:

# vim /etc/sysconfig/selinux
SELINUX=disabled

4. Install the NTP service

1) Install chrony on all nodes:

$ yum install chrony

2) Configure /etc/chrony.conf on the controller node; change the relevant part:

$ vim /etc/chrony.conf
...
allow 10.0.0.0/8

Restart the chrony service on the server:

# systemctl enable chronyd.service
# systemctl start chronyd.service

3) Configure the NTP clients (network, compute and storage nodes); change the relevant part:

$ vim /etc/chrony.conf
...
server controller iburst
...

Start the NTP service:

# systemctl enable chronyd.service
# systemctl start chronyd.service

4) Verify on all nodes:

$ chronyc sources

5. Install the OpenStack packages (all nodes)

# yum install centos-release-openstack-pike
# yum upgrade
# yum install python-openstackclient
# yum install openstack-selinux

6. Install the MariaDB SQL database

1) Controller node: install mariadb-server

# yum install mariadb mariadb-server python2-PyMySQL

Edit the MariaDB configuration:

# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.56.101
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the MariaDB service and enable it at boot:

# systemctl enable mariadb.service
# systemctl start mariadb.service
# mysql_secure_installation

Set the password to 1235456 and answer Yes to all the remaining prompts.

7. Install the message queue (RabbitMQ, controller node)

# yum install rabbitmq-server

Restart the RabbitMQ service:

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

Add the rabbitmq user and set its permissions:

# rabbitmqctl add_user openstack openstack123
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

8. Install Memcached (controller node)

Install the packages.
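The source document breaks off at this step. A minimal sketch of the usual remainder, following the standard Pike installation guide (package names and the listener setting are assumptions, not taken from this document):

# yum install memcached python-memcached
# vi /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"    # listen on loopback and the management address (assumption)
# systemctl enable memcached.service
# systemctl start memcached.service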

OpenStack Nova Installation Manual

Author: yz
Date: 2011-11-27
Version: v0.3

Contents

Experiment environment
Architecture deployment
Server OS installation
Controller node installation
  NTP time service installation
  MySQL database service installation
  RabbitMQ message queue service installation
  Nova services installation
  Glance image service installation
  Keystone, noVNC and Dashboard dependency installation
  Keystone identity service installation
  openstack.compute extension library installation
  openstackx extension library installation
  python-novaclient extension library installation
  Quantum module installation
  openstack-dashboard control panel installation
  noVNC service installation
  Nova services configuration
  Glance image service configuration
  noVNC service configuration
Compute node installation
  NTP time sync configuration
  Nova services installation
  Nova services configuration
Dashboard usage basics
  Creating keypairs
  Creating security groups
  Launching an instance
  Connecting to an instance over VNC
  Assigning a public IP to an instance

Experiment environment

Hardware:

DELL R410 (one unit)
CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz * 2
Memory: 16GB
Disk: 300GB
NIC: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet * 2

DELL R710 (one unit)
CPU: Intel(R) Xeon(R) CPU E5606 @ 2.13GHz * 2
Memory: 32GB
Disk: 250GB
NIC: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet * 4

OS: Ubuntu Server 11.04 x64
OpenStack version: Diablo 4 release (2011.3)

Architecture deployment

Model/hostname        Public IP       Private IP    Role
R410/r410-control1    60.12.206.111   192.168.1.2   Controller node
R710/r710-compute1    60.12.206.99    192.168.1.3   Compute node 1

The instance network is 10.0.0.0/24 and the floating IP is 60.12.206.114. The instance network is bridged onto the private NIC, and the network mode is FlatDHCP.

Server OS installation

1. Ubuntu Server 11.04 x64 installed with the default options
2. The public network uses eth0
3. The private network uses eth1
4. Except for Apache and noVNC, all services listen on the private IP

Controller node installation

NTP time service installation

1. Install the NTP time sync server:
apt-get install -y ntp ntpdate
2. Sync the time:
/etc/init.d/ntp stop
ntpdate ntp.api.bz
3. Edit /etc/ntp.conf, replacing its content with:
restrict 127.0.0.1
restrict 192.168.1.0 mask 255.255.255.0 nomodify    # this subnet may sync but not modify the server's time
server ntp.api.bz
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
4. Restart the ntp service:
/etc/init.d/ntp restart

MySQL database service installation

1. Preseed the MySQL root password to openstack:
cat << MYSQL_PASSWORD | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password openstack
mysql-server-5.1 mysql-server/root_password_again password openstack
MYSQL_PASSWORD
2. Install the MySQL database service:
apt-get install -y mysql-server
3. Make MySQL listen on the private NIC IP:
sed -i '/bind-address/s/127.0.0.1/192.168.1.2/g' /etc/mysql/my.cnf
4. Restart the MySQL service:
/etc/init.d/mysql restart
5. Check that the service started correctly:
Use netstat -ltunp to confirm a listener on tcp port 3306; if the service did not start, check the logs under /var/log/mysql.

RabbitMQ message queue service installation

1. Install the RabbitMQ message queue service:
apt-get install -y rabbitmq-server
2. Change the default password of the guest user to openstack:
rabbitmqctl change_password guest openstack

Nova services installation

1. Add the required package source:
echo 'deb http://ppa.launchpad.net/openstack-release/2011.3/ubuntu natty main' >> /etc/apt/sources.list
2. Import the repository key:
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 94CA80414F1043F6495425C37D21C2EC3D1B4472
3. Update the APT package lists:
apt-get update
4. Install the nova-api, nova-network, nova-objectstore and nova-scheduler services:
apt-get install -y nova-api nova-network nova-objectstore nova-scheduler

Glance image service installation

1. Install glance:
apt-get install -y glance

Keystone, noVNC and Dashboard dependency installation

1. Install the APT packages:
apt-get install -y python-dev libxml2-dev libxslt1-dev libsasl2-dev libldap2-dev libsqlite3-dev libssl-dev python-pip swig git python-dateutil apache2 libapache2-mod-wsgi python-numpy
2. Install the pip packages:
pip install passlib sqlalchemy-migrate prettytable glance python-cloudfiles nose==1.0.0 Django==1.3 django-nose==0.1.2 django-registration==0.7 django-mailer mox nosexcover
Keystone identity service installation

1. Download the keystone identity service source:
cd /opt
git clone https://github.com/cloudbuilders/keystone.git
cd keystone
git checkout diablo
cd ..
2. Install the keystone identity service:
cd keystone
python setup.py install
python setup.py develop
3. Create the keystone database:
mysql -uroot -popenstack -e 'create database keystone'
4. Create the database user for keystone:
mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on keystone.* to keystone@'localhost' identified by 'keystone'"
5. Create the user that runs the keystone service:
useradd -s /bin/bash -g nogroup -m -d /var/log/keystone keystone
6. Create the directory for the keystone configuration files:
mkdir /etc/keystone
7. Generate the keystone configuration file:
cp /opt/keystone/etc/keystone.conf /etc/keystone/
Edit /etc/keystone/keystone.conf and change:
default_store = sqlite
service_host = 0.0.0.0
admin_host = 0.0.0.0
sql_connection = sqlite:///keystone.db
to:
#default_store = sqlite
service_host = 192.168.1.2
admin_host = 192.168.1.2
sql_connection = mysql://keystone:keystone@localhost/keystone
8. Generate the keystone service data. Edit /etc/keystone/keystone_data.sh and add the following content:
#!/bin/bash
# create a tenant named admin
keystone-manage $* tenant add admin
# create a user named admin with password openstack in the admin tenant
keystone-manage $* user add admin openstack admin
# create the administrator role
keystone-manage $* role add Admin
# create the keystone administrator role
keystone-manage $* role add KeystoneAdmin
# create the keystone service administrator role
keystone-manage $* role add KeystoneServiceAdmin
# grant the administrator role to the admin user
keystone-manage $* role grant Admin admin
# grant the keystone administrator role to the admin user
keystone-manage $* role grant KeystoneAdmin admin
# grant the keystone service administrator role to the admin user
keystone-manage $* role grant KeystoneServiceAdmin admin
# add the nova compute service
keystone-manage $* service add nova compute "Nova Compute Service"
# add the glance image service
keystone-manage $* service add glance image "Glance Image Service"
# add the keystone identity service
keystone-manage $* service add keystone identity "Keystone Identity Service"
# add the nova compute endpoint
keystone-manage $* endpointTemplates add RegionOne nova http://192.168.1.2:8774/v1.1/%tenant_id% http://192.168.1.2:8774/v1.1/%tenant_id% http://192.168.1.2:8774/v1.1/%tenant_id% 1 1
# add the glance image endpoint
keystone-manage $* endpointTemplates add RegionOne glance http://192.168.1.2:9292/v1.1/%tenant_id% http://192.168.1.2:9292/v1.1/%tenant_id% http://192.168.1.2:9292/v1.1/%tenant_id% 1 1
# add the keystone identity endpoint
keystone-manage $* endpointTemplates add RegionOne keystone http://192.168.1.2:5000/v2.0 http://192.168.1.2:35357/v2.0 http://192.168.1.2:5000/v2.0 1 1
# create a token named openstack for the admin tenant and admin user, expiring at 00:00 on 2015-02-05
keystone-manage $* token add openstack admin admin 2015-02-05T00:00
# create an EC2-type credential for the admin tenant and admin user whose key and secret are the admin user's name and password
keystone-manage $* credentials add admin EC2 'admin' 'openstack' admin
9. Create the upstart configuration for the keystone service. Create /etc/init/keystone.conf with the following content:
description "Keystone API server"
author "Soren Hansen <soren@linux2go.dk>"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [016]
respawn
exec su -c "keystone --config-file=/etc/keystone/keystone.conf --log-dir=/var/log/keystone --log-file=keystone.log" keystone
10. Create the keystone init script:
ln -sv /lib/init/upstart-job /etc/init.d/keystone
11. Start the keystone service:
/etc/init.d/keystone start
12. Verify that keystone started correctly:
Use netstat -ltunp to check for listeners on tcp ports 5000 and 35357; if they are missing, check the logs under /var/log/keystone.

openstack.compute extension library installation

1. Download the openstack.compute extension library:
cd /opt
git clone https://github.com/jacobian/openstack.compute.git
cd openstack.compute
git checkout master
cd ..
2. Install the openstack.compute extension library:
cd openstack.compute
python setup.py install
python setup.py develop

openstackx extension library installation

1. Download the openstackx extension library:
cd /opt
git clone https://github.com/cloudbuilders/openstackx.git
cd openstackx
git checkout diablo
cd ..
2. Install the openstackx extension library:
cd openstackx
python setup.py install
python setup.py develop

python-novaclient extension library installation

1. Download the python-novaclient extension library:
cd /opt
git clone https://github.com/cloudbuilders/python-novaclient.git
cd python-novaclient
git checkout diablo
cd ..
2. Install the python-novaclient extension library:
cd python-novaclient
python setup.py install
python setup.py develop

Quantum module installation

1. Download the quantum module:
cd /opt
git clone https://github.com/openstack/quantum.git
cd quantum
git checkout stable/diablo
cd ..
2. Install the quantum module:
cd quantum
python setup.py install
python setup.py develop
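Before pointing the dashboard at keystone in the next section, a quick hedged check that the identity API answers on both ports verified in step 12 above (the exact JSON body varies by release):

curl -s http://192.168.1.2:5000/v2.0/ ; echo     # public endpoint should return version metadata
curl -s http://192.168.1.2:35357/v2.0/ ; echo    # admin endpoint, same idea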
openstack-dashboard control panel installation

1. Download the openstack-dashboard control panel:
cd /opt
git clone https://github.com/openstack/openstack-dashboard.git
cd openstack-dashboard
git checkout master
cd ..
2. Install the openstack-dashboard control panel:
cd openstack-dashboard/django-openstack
python setup.py install
python setup.py develop
cd ..
cd openstack-dashboard
python setup.py install
python setup.py develop
3. Create the dashboard database:
mysql -uroot -popenstack -e 'create database dashboard'
4. Create the database user for the dashboard:
mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on dashboard.* to dashboard@'localhost' identified by 'dashboard'"
5. Configure the dashboard:
cd /opt/openstack-dashboard/openstack-dashboard/local
cp local_settings.py.example local_settings.py
Edit local_settings.py and change:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(LOCAL_PATH, 'dashboard_openstack.sqlite3'),
},
}
to:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'dashboard',
'USER': 'dashboard',
'PASSWORD': 'dashboard',
'HOST': 'localhost',
'PORT': '3306',
},
}
and change:
OPENSTACK_KEYSTONE_URL = "http://localhost:5000/v2.0/"
OPENSTACK_KEYSTONE_ADMIN_URL = "http://localhost:35357/v2.0"
OPENSTACK_ADMIN_TOKEN = "999888777666"
to:
OPENSTACK_KEYSTONE_URL = "http://192.168.1.2:5000/v2.0/"
OPENSTACK_KEYSTONE_ADMIN_URL = "http://192.168.1.2:35357/v2.0"
OPENSTACK_ADMIN_TOKEN = "openstack"
6. Configure apache:
mkdir /opt/openstack-dashboard/.blackhole
chown -R www-data:www-data /opt/openstack-dashboard
Edit /etc/apache2/sites-available/default and replace its content with:
<VirtualHost *:80>
WSGIScriptAlias / /opt/openstack-dashboard/openstack-dashboard/dashboard/wsgi/django.wsgi
WSGIDaemonProcess dashboard user=www-data group=www-data processes=3 threads=10
SetEnv APACHE_RUN_USER www-data
SetEnv APACHE_RUN_GROUP www-data
WSGIProcessGroup dashboard
DocumentRoot /opt/openstack-dashboard/.blackhole/
Alias /media /opt/openstack-dashboard/openstack-dashboard/media
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /opt/openstack-dashboard/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined
</VirtualHost>
7. Create the dashboard database schema:
/opt/openstack-dashboard/openstack-dashboard/dashboard/manage.py syncdb
8. Restart the apache service:
/etc/init.d/apache2 restart
9. Verify the dashboard:
First use netstat -ltunp to confirm a listener on port 80, then open the web service in a browser and check that the dashboard login page appears. If not, check the error logs under /var/log/apache2.

noVNC service installation

1. Download the noVNC service:
cd /opt
git clone https://github.com/cloudbuilders/noVNC.git
cd noVNC
git checkout diablo
cd ..
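A quick smoke test of the virtual host configured above (run on the controller; any 200 or redirect response means apache and mod_wsgi are serving the dashboard):

apache2ctl -t                               # syntax-check the apache configuration
curl -sI http://192.168.1.2/ | head -n 1    # expect HTTP 200 or a redirect to the login page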
Nova services configuration

1. Create the nova database:
mysql -uroot -popenstack -e 'create database nova'
2. Create the database user for nova:
mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on nova.* to nova@'192.168.1.%' identified by 'nova'"
3. Configure the nova services:
cp /opt/keystone/examples/paste/nova-api-paste.ini /etc/nova/api-paste.ini
Edit /etc/nova/api-paste.ini and change:
service_host = 127.0.0.1
auth_host = 127.0.0.1
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
to:
service_host = 192.168.1.2
auth_host = 192.168.1.2
auth_uri = http://192.168.1.2:5000/
admin_token = openstack
Edit /etc/nova/nova.conf, replacing its content with:
#general
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose=True
--use_syslog=False
#nova-objectstore
--use_s3=True
--s3_host=192.168.1.2
--s3_port=3333
#rabbit
--rabbit_host=192.168.1.2
--rabbit_port=5672
--rabbit_password=openstack
#ec2
--ec2_listen=192.168.1.2
--ec2_listen_port=8773
--osapi_listen=192.168.1.2
--osapi_listen_port=8774
--osapi_extensions_path=/opt/openstackx/extensions
--api_paste_config=/etc/nova/api-paste.ini
#db
--sql_connection=mysql://nova:nova@192.168.1.2/nova
--sql_idle_timeout=600
--sql_max_retries=3
--sql_retry_interval=3
#glance
--glance_host=192.168.1.2
--glance_api_servers=192.168.1.2:9292
--image_service=nova.image.glance.GlanceImageService
#nova-network
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--network_manager=nova.network.manager.FlatDHCPManager
--linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
4. Create the nova database schema:
nova-manage db sync
5. Create the instance network named private with range 10.0.0.0/24, network id 1, 256 hosts, bridged on eth1 with bridge name br1, and enable multi-host nova-network:
nova-manage network create private 10.0.0.0/24 1 256 --bridge=br1 --bridge_interface=eth1 --multi_host='T'
6. Create the allocatable floating IP:
nova-manage floating create 60.12.206.114
7. Restart the related services:
/etc/init.d/nova-api restart
/etc/init.d/nova-network restart
/etc/init.d/nova-objectstore restart
/etc/init.d/nova-scheduler restart
8. Check that the services started.
The end of /var/log/nova/nova-api.log should contain:
2011-11-28 00:44:29,390 INFO nova.wsgi [-] Started ec2 on 192.168.1.2:8773
2011-11-28 00:44:29,390 INFO nova.wsgi [-] Started osapi on 192.168.1.2:8774
and netstat -ltunp should show listeners on tcp ports 8773 and 8774.
The end of /var/log/nova/nova-network.log should contain:
2011-11-28 00:46:05,519 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
2011-11-28 00:46:05,520 DEBUG nova [-] Creating Consumer connection for Service network from (pid=7592) start /usr/lib/python2.7/dist-packages/nova/service.py:153
and nova-manage service list should show:
nova-network r410-control1 nova enabled :-) 2011-11-27 16:48:36
The end of /var/log/nova/nova-objectstore.log should contain:
2011-11-28 00:46:46,017 INFO nova.wsgi [-] Started S3 Objectstore on 192.168.1.2:3333
and netstat -ltunp should show a listener on tcp port 3333.
The end of /var/log/nova/nova-scheduler.log should contain:
2011-11-28 00:47:59,789 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
2011-11-28 00:47:59,790 DEBUG nova [-] Creating Consumer connection for Service scheduler from (pid=7805) start /usr/lib/python2.7/dist-packages/nova/service.py:153
and nova-manage service list should show:
nova-scheduler r410-control1 nova enabled :-) 2011-11-27 16:48:40
If any of these services failed to start, check the corresponding logs under /var/log/nova.
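Two further hedged checks that steps 5 and 6 registered the instance network and the floating address (both nova-manage subcommands existed in this era, but output formats vary):

nova-manage network list     # the private 10.0.0.0/24 network on bridge br1 should be listed
nova-manage floating list    # 60.12.206.114 should appear, not yet allocated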
Glance image service configuration

1. Create the glance database:
mysql -uroot -popenstack -e 'create database glance'
2. Create the database user for glance:
mysql -uroot -popenstack -e "grant select,insert,update,delete,create,drop,index,alter on glance.* to glance@'localhost' identified by 'glance'"
3. Configure the glance image service:
cp /opt/keystone/examples/paste/glance-api.conf /etc/glance/glance-api.conf
cp /opt/keystone/examples/paste/glance-registry.conf /etc/glance/glance-registry.conf
Edit /etc/glance/glance-api.conf and change:
bind_host = 0.0.0.0
registry_host = 0.0.0.0
rabbit_password = guest
service_host = 127.0.0.1
auth_host = 127.0.0.1
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
to:
bind_host = 192.168.1.2
registry_host = 192.168.1.2
rabbit_password = openstack
service_host = 192.168.1.2
auth_host = 192.168.1.2
auth_uri = http://192.168.1.2:5000/
admin_token = openstack
Edit /etc/glance/glance-registry.conf and change:
bind_host = 0.0.0.0
sql_connection = sqlite:///glance.sqlite
service_host = 127.0.0.1
auth_host = 127.0.0.1
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
to:
bind_host = 192.168.1.2
sql_connection = mysql://glance:glance@localhost/glance
service_host = 192.168.1.2
auth_host = 192.168.1.2
auth_uri = http://192.168.1.2:5000/
admin_token = openstack
4. Restart the related services:
/etc/init.d/glance-api restart
/etc/init.d/glance-registry restart
5. Check that the services started:
Use netstat -ltunp to confirm listeners on tcp ports 9191 and 9292; if they are missing, check the logs under /var/log/glance.
6. Upload an image through glance:
glance add -H 192.168.1.2 -A openstack name=win2k3 is_public=true < win2k3.img

noVNC service configuration

1. Configure the noVNC service. Add the following to /etc/nova/nova.conf:
#nova-vncproxy
--vnc_enabled=True
--vncproxy_url=http://60.12.206.111:6080
--vncproxy_wwwroot=/opt/noVNC
--vncproxy_manager=nova.vnc.auth.VNCProxyAuthManager
Also add the compute node's IP/hostname mapping to /etc/hosts.
2. Symlink the noVNC launcher:
ln -sv /opt/noVNC/utils/nova-wsproxy.py /usr/bin/nova-wsproxy
3. Create the upstart configuration for the noVNC service. Create /etc/init/nova-vncproxy.conf with the following content:
description "Nova VNC proxy"
author "Vishvananda Ishaya <vishvananda@>"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [016]
exec su -c "nova-wsproxy 6080 --web /opt/noVNC --flagfile=/etc/nova/nova.conf" nova
4. Create the noVNC init script:
ln -sv /lib/init/upstart-job /etc/init.d/nova-vncproxy
5. Restart the related services:
/etc/init.d/nova-api restart
/etc/init.d/nova-vncproxy start
6. Check that the service started:
Use netstat -ltunp to confirm a listener on tcp port 6080; if it did not start, run it in the foreground to find the problem.
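Before installing nova-compute on the compute node in the next section, it is worth a quick check that the host supports hardware virtualization, since the nova.conf below uses --libvirt_type=kvm; a minimal sketch using standard Linux commands:

egrep -c '(vmx|svm)' /proc/cpuinfo    # a count greater than 0 means VT-x/AMD-V is available
lsmod | grep kvm                      # kvm and kvm_intel/kvm_amd should be loaded once KVM is set up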
Compute node installation

NTP time sync configuration

1. Install the NTP utility packages:
apt-get install -y ntpdate
Sync time with the controller node and write it to the hardware clock:
ntpdate 192.168.1.2
hwclock -w
2. Add the time sync to the scheduled tasks:
echo '30 8 * * * root /usr/sbin/ntpdate 192.168.1.2;hwclock -w' >> /etc/crontab

Nova services installation

1. Add the required package source:
echo 'deb http://ppa.launchpad.net/openstack-release/2011.3/ubuntu natty main' >> /etc/apt/sources.list
2. Import the repository key:
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 94CA80414F1043F6495425C37D21C2EC3D1B4472
3. Update the APT package lists:
apt-get update
4. Install the nova-network and nova-compute services:
apt-get install -y nova-network nova-compute

Nova services configuration

1. Configure the nova services. Edit /etc/nova/nova.conf, replacing its content with:
#general
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose=True
--use_syslog=False
#nova-objectstore
--use_s3=True
--s3_host=192.168.1.2
--s3_port=3333
#rabbit
--rabbit_host=192.168.1.2
--rabbit_port=5672
--rabbit_password=openstack
#ec2
--ec2_host=192.168.1.2
--ec2_port=8773
--ec2_url=http://192.168.1.2:8773/services/Cloud
#osapi
--osapi_host=192.168.1.2
--osapi_port=8774
#db
--sql_connection=mysql://nova:nova@192.168.1.2/nova
--sql_idle_timeout=600
--sql_max_retries=3
--sql_retry_interval=3
#glance
--glance_host=192.168.1.2
--glance_api_servers=192.168.1.2:9292
--image_service=nova.image.glance.GlanceImageService
#libvirt
--connection_type=libvirt
--libvirt_type=kvm
--snapshot_image_format=qcow2
--use_cow_image=True
--libvirt_use_virtio_for_bridges=True
#nova-scheduler
--scheduler_driver=nova.scheduler.multi.MultiScheduler
--max_cores=48
--start_guests_on_host_boot=True
--resume_guests_state_on_host_boot=True
#nova-network
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--network_manager=nova.network.manager.FlatDHCPManager
--linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
--fixed_range=10.0.0.0/24
--flat_interface=eth1
--flat_network_bridge=br1
--flat_network_dhcp_start=10.0.0.2
--floating_range=60.12.206.114
--multi_host=true
--public_interface=eth0
--force_dhcp_release=true
--use_ipv6=False
2. Start the related services:
/etc/init.d/nova-network restart
/etc/init.d/nova-compute restart
3. Check that the services started. netstat -ntap should show connections similar to:
tcp 0 0 192.168.1.3:26342 192.168.1.2:5672 ESTABLISHED 29466/python
tcp 0 0 192.168.1.3:19757 192.168.1.2:3306 ESTABLISHED 29466/python
tcp 0 0 192.168.1.3:27483 192.168.1.2:5672 ESTABLISHED 29510/python
tcp 0 0 192.168.1.3:4423 192.168.1.2:3306 ESTABLISHED 29510/python
tcp 0 0 118.26.228.117:59878 211.101.24.8:56527 ESTABLISHED 29817/2
tcp 0 0 192.168.1.3:9542 192.168.1.2:3306 ESTABLISHED 29510/python
tcp 0 0 192.168.1.3:4422 192.168.1.2:3306 TIME_WAIT -
tcp 0 0 192.168.1.3:26340 192.168.1.2:5672 ESTABLISHED 29510/python
tcp 0 0 192.168.1.3:4424 192.168.1.2:3306 ESTABLISHED 29510/python
tcp 0 0 192.168.1.3:26328 192.168.1.2:5672 ESTABLISHED 29466/python
The end of /var/log/nova/nova-network.log should contain:
2011-11-28 00:46:05,519 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
2011-11-28 00:46:05,520 DEBUG nova [-] Creating Consumer connection for Service network from (pid=7592) start /usr/lib/python2.7/dist-packages/nova/service.py:153
The end of /var/log/nova/nova-compute.log should contain:
2011-11-28 17:06:24,491 INFO nova.rpc [-] Connected to AMQP server on 192.168.1.2:5672
2011-11-28 17:06:24,492 DEBUG nova [-] Creating Consumer connection for Service compute from (pid=31197) start /usr/lib/python2.7/dist-packages/nova/service.py:153
Running nova-manage service list on the controller node should now include the compute node's services:
Binary Host Zone Status State Updated_At
nova-scheduler r410-control1 nova enabled :-) 2011-11-28 09:07:21
nova-network r410-control1 nova enabled :-) 2011-11-28 09:07:21
nova-compute r710-compute1 nova enabled :-) 2011-11-28 09:07:14
nova-network r710-compute1 nova enabled :-) 2011-11-28 09:07:22
Logging into the dashboard as the administrator, the SYSTEM PANEL's Services tab should list the compute node's nova-compute and nova-network services in green.
If any service failed to start, check the corresponding logs under /var/log/nova.

Dashboard usage basics

Creating keypairs

In the USER DASHBOARD panel, open the Keypairs tab on the left and click Add New Keypair. Enter a keypair name (openstack in this example) and click the Add Keypair button. You will then be prompted to download a .pem file, which can be used to log into launched instances.

Creating security groups

In the USER DASHBOARD panel, open the Security Groups tab on the left and click Create Security Group. Enter test for both Name and Description, then click Create Security Group. After creation, the page returns to the Security Groups tab, where the new security group test is listed.

Click Edit Rules on the new security group. Here all traffic is allowed; the rules are:
Ip protocol: tcp, From port: 0, To port: 65535, Cidr: 0/0
Ip protocol: udp, From port: 0, To port: 65535, Cidr: 0/0
Ip protocol: icmp, From port: -1, To port: -1, Cidr: 0/0

Launching an instance

In the USER DASHBOARD panel, open the Images tab on the left and click Launch next to an uploaded image. Enter an instance name (first instance in this example), pick the instance flavor from the Flavor drop-down, pick your keypair from the Key Name drop-down, select the security group test in the Security Group list, and click Launch Instance.

The new instance then appears under the Instances tab on the left; its status is Build right after launch. Once the status becomes Active, the instance can be reached over VNC.

Connecting to an instance over VNC

In the USER DASHBOARD panel, open the Instances tab, find the first instance instance, and follow the VNC Console link under Actions. A new window opens through which the instance can be accessed.

Assigning a public IP to an instance

In the USER DASHBOARD panel, open the Floating IPs tab and click the Allocate IP button; an available public IP appears. Click the Associate Floating IP link; Floating IP is the address to be assigned, and the Instance drop-down selects the instance that receives it. Click Associate IP. On success the page returns to the Floating IPs tab showing the completed assignment, and you can now ssh into your instance.
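A hedged example of that final ssh login using the keypair created earlier (the login user name is image-dependent; ubuntu is shown only as a common default for Ubuntu cloud images):

chmod 600 openstack.pem                     # the downloaded key must not be group/world readable
ssh -i openstack.pem ubuntu@60.12.206.114   # the floating IP associated above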
