Warning
NOTICE: THIS DOCUMENTATION SITE HAS BEEN SUPERSEDED.
For the current documentation site, go to: http://docs.cloudstack.apache.org
KVM is included in a variety of Linux-based operating systems. Although not required, we recommend the following distributions:
The main requirement for KVM hypervisors is the version of libvirt and Qemu. Whichever Linux distribution you use, make sure the following requirements are met:
By default, CloudStack uses the native Linux bridge module for network connectivity. Optionally, OpenVswitch can be used instead; its requirements are as follows:
In addition, the hardware requirements are as follows:
If you want to use the KVM hypervisor to run guest virtual machines, install KVM on the hosts in your cloud. This chapter does not duplicate the KVM installation documentation; it covers only the KVM-specific steps required to prepare a host to work with CloudStack.
Warning
Before we begin, make sure all hosts have the latest updates installed.
Warning
It is not recommended to run services unrelated to CloudStack on the hosts.
The procedure for installing a KVM host:
The host's operating system must be prepared to run the CloudStack Agent and KVM instances.
Log in to the operating system as root.
Check the fully qualified hostname (FQDN).
$ hostname --fqdn
This command should return the fully qualified hostname, such as "kvm1.lab.example.org". If it does not, edit /etc/hosts.
Make sure the machine can reach the Internet.
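If the FQDN is missing, a minimal /etc/hosts entry such as the following can provide it (the IP address and hostname here are only examples; use your host's actual management IP and name):

192.168.42.11   kvm1.lab.example.org   kvm1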
$ ping www.cloudstack.org
Enable the NTP service to keep the time synchronized.
Note
NTP keeps the clocks of all servers in the cloud synchronized. Clocks that are out of sync can cause unexpected problems.
Install NTP.
On RHEL/CentOS:
$ yum install ntp
On Ubuntu:
$ apt-get install openntpd
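After installing, make sure the daemon is started now and at boot. On SysV-init systems such as CentOS 6 this can be done as follows (on Ubuntu the service is named openntpd; on systemd-based systems you would use systemctl instead):

$ service ntpd start
$ chkconfig ntpd on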
Repeat these steps on every host.
CloudStack uses an Agent to manage KVM instances. The Agent communicates with the Management Server and controls all the virtual machines on the host.
First we install the Agent.
On RHEL/CentOS:
$ yum install cloudstack-agent
On Ubuntu:
$ apt-get install cloudstack-agent
The host is now ready to be added to a cluster. This is covered in a later section; see Adding a Host. It is strongly recommended that you read that section before adding the host.
If you're using a non-root user to add the KVM host, add the user to the sudoers file:
cloudstack ALL=NOPASSWD: /usr/bin/cloudstack-setup-agent
defaults:cloudstack !requiretty
In addition, the CloudStack Agent allows the host administrator to control the CPU model exposed to KVM instances. By default, the CPU model of a KVM instance is "QEMU Virtual CPU version xxx" with only a few CPU features. There are several reasons to specify the CPU model:
For the most part it will be sufficient for the host administrator to specify the guest CPU config in the per-host configuration file (/etc/cloudstack/agent/agent.properties). This is achieved by introducing the following configuration parameters:
guest.cpu.mode=custom|host-model|host-passthrough
guest.cpu.model=<model name from /usr/share/libvirt/cpu_map.xml> (only valid when guest.cpu.mode=custom)
guest.cpu.features=vmx ept aes smx mmx ht (space separated list of cpu flags to apply)
There are three choices for the CPU model:
Here are some examples:
custom
guest.cpu.mode=custom
guest.cpu.model=SandyBridge
host-model
guest.cpu.mode=host-model
host-passthrough
guest.cpu.mode=host-passthrough
guest.cpu.features=vmx
Note
host-passthrough may lead to migration failures; if you run into this problem, use host-model or custom instead. guest.cpu.features will force the CPU features as a required policy, so make sure to list only features that are provided by the host CPU.
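Putting this together, a minimal sketch of the CPU-related part of /etc/cloudstack/agent/agent.properties could look like the following (the SandyBridge model and the aes feature are example values; choose ones your host CPU actually provides):

# pin guests to a fixed CPU model so all hosts in the cluster stay migration-compatible
guest.cpu.mode=custom
guest.cpu.model=SandyBridge
# optional: force extra flags as a required policy
guest.cpu.features=aes

The agent reads this file at startup, so restart the cloudstack-agent service after changing it.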
CloudStack uses libvirt to manage virtual machines, so configuring libvirt correctly is essential. The CloudStack Agent depends on libvirt, so it should already be installed.
For live migration to work, libvirt has to listen for unsecured TCP connections. We also need to turn off libvirt's attempts to advertise itself over multicast DNS. Both settings live in /etc/libvirt/libvirtd.conf.
Set the following parameters:
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"
mdns_adv = 0
In addition to enabling "listen_tcp" in libvirtd.conf, we also have to modify the parameters in /etc/sysconfig/libvirtd.
On RHEL or CentOS, modify /etc/sysconfig/libvirtd:
Uncomment the following line:
#LIBVIRTD_ARGS="--listen"
On Ubuntu 14.04: modify /etc/default/libvirt-bin
Add "-l" to the following line:
libvirtd_opts="-d"
so it looks like this:
libvirtd_opts="-d -l"
And modify /etc/init/libvirt-bin.conf
Add "-l" to the following line:
env libvirtd_opts="-d"
so it looks like this:
env libvirtd_opts="-d -l"
On Ubuntu 16.04: just modify /etc/init/libvirt-bin.conf
Add "-l" to the following line:
env libvirtd_opts="-d"
so it looks like this:
env libvirtd_opts="-d -l"
Restart the libvirt service.
On RHEL/CentOS:
$ service libvirtd restart
On Ubuntu:
$ service libvirt-bin restart
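After the restart, you can verify that libvirtd is listening on the TCP port configured above (16509). One way, assuming the net-tools package is installed, is:

$ netstat -tln | grep 16509

If nothing is listed, re-check libvirtd.conf and the --listen/-l argument.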
CloudStack can be blocked by security mechanisms such as AppArmor and SELinux. These must be disabled or relaxed to ensure the Agent has the permissions it requires.
Configure SELinux (RHEL and CentOS):
Check whether SELinux is installed on your machine. If it is not, skip this section.
On RHEL or CentOS, SELinux is installed and enabled by default. You can verify this with:
$ rpm -qa | grep selinux
Set the SELINUX variable in /etc/selinux/config to "permissive". This ensures the setting persists across reboots.
On RHEL/CentOS:
$ vi /etc/selinux/config
Find the line
SELINUX=enforcing
and change it to
SELINUX=permissive
Then put SELinux into permissive mode immediately, without rebooting:
$ setenforce permissive
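You can confirm the change with getenforce, which should now report "Permissive":

$ getenforce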
Configure AppArmor (Ubuntu):
Check whether AppArmor is installed on your machine. If it is not, skip this section.
AppArmor is installed and enabled by default on Ubuntu. Verify this with:
$ dpkg --list 'apparmor'
Disable the libvirt profiles in AppArmor:
$ ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
$ ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
$ apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
$ apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
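You can verify that the libvirt profiles are no longer loaded by listing the active profiles (aa-status is part of the apparmor-utils package):

$ sudo aa-status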
Warning
This section is very important; make sure you thoroughly understand it.
Note
This section details how to configure bridged networking using the native Linux bridging software. If you want to use OpenVswitch, see the next section.
CloudStack uses the network bridges in conjunction with KVM to connect the guest instances to each other and the outside world. They also are used to connect the System VMs to your infrastructure.
By default these bridges are called cloudbr0, cloudbr1, and so on, but the names can be changed to something more descriptive.
警告
It is essential that you keep the configuration consistent across all of your hypervisors.
There are many ways to configure your networking, even within the scope of a given network mode. Below are a few simple examples.
In Basic networking, all of the guests in a given pod will be on the same VLAN/subnet. It is common to use the native (untagged) VLAN for the private/management network, so in this example we have two VLANs: one (native) for the private/management network and one for the guest network.
We assume that the hypervisor has one NIC (eth0) with one tagged VLAN trunked from the switch:
In the following example we give the hypervisor the IP address 192.168.42.11/24 with the gateway 192.168.42.1.
Note
The Hypervisor and Management server don’t have to be in the same subnet
How you configure this depends on the distribution; examples for RHEL/CentOS and Ubuntu are given below.
Note
The goal of this section is to configure two bridges named 'cloudbr0' and 'cloudbr1'. This is only guidance; the actual settings depend on your network layout.
The software needed for bridging was installed along with libvirt, so we can proceed to configuring the network.
First configure eth0:
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
Make sure it looks like this:
DEVICE=eth0
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
BRIDGE=cloudbr0
We now have to configure the VLAN interfaces:
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0.200
DEVICE=eth0.200
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr1
Now that we have the VLAN interfaces configured we can add the bridges on top of them.
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
Now we configure cloudbr0 and include the management IP of the hypervisor.
Note
The management IP of the hypervisor doesn't have to be in the same subnet/VLAN as the management network, but it is quite common.
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
IPADDR=192.168.42.11
GATEWAY=192.168.42.1
NETMASK=255.255.255.0
STP=yes
We configure cloudbr1 as a plain bridge without an IP address
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes
After the configuration is done, restart the network. Then reboot to check that everything comes back up correctly.
Warning
In case of a configuration error or network failure, make sure you can still reach the server through an alternative channel such as IPMI or ILO.
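On RHEL/CentOS with the initscripts-based network service (as on CentOS 6/7), the network can be restarted with:

$ service network restart

If the host is managed by NetworkManager instead, the restart procedure differs.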
The other required software was installed when libvirt was installed, so we only need to configure the network.
$ vi /etc/network/interfaces
Modify the interfaces file so it looks like this:
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet manual
auto eth0.200
iface eth0.200 inet manual
# management network
auto cloudbr0
iface cloudbr0 inet static
bridge_ports eth0
bridge_fd 5
bridge_stp off
bridge_maxwait 1
address 192.168.42.11
netmask 255.255.255.0
gateway 192.168.42.1
dns-nameservers 8.8.8.8 8.8.4.4
dns-domain lab.example.org
# guest network
auto cloudbr1
iface cloudbr1 inet manual
bridge_ports eth0.200
bridge_fd 5
bridge_stp off
bridge_maxwait 1
After the configuration is done, restart the network. Then reboot to check that everything comes back up correctly.
Warning
In case of a configuration error or network failure, make sure you can still reach the server through an alternative channel such as IPMI or ILO.
In Advanced networking mode it is most common to have (at least) two physical interfaces. In this example the hypervisor management interface again lives on cloudbr0 on the untagged (native) VLAN, but now we add a bridge on top of an additional interface (eth1) for public and guest traffic, with no VLANs applied by us: CloudStack will add the VLANs as required.
We again give the hypervisor the IP address 192.168.42.11/24 with the gateway 192.168.42.1.
Note
The Hypervisor and Management server don’t have to be in the same subnet
How you configure this depends on the distribution; examples for RHEL/CentOS and Ubuntu are given below.
Note
The goal of this section is to configure two bridges named 'cloudbr0' and 'cloudbr1'. This is only guidance; the actual settings depend on your network layout.
The software needed for bridging was installed along with libvirt, so we can proceed to configuring the network.
First configure eth0:
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
Make sure it looks like this:
DEVICE=eth0
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
BRIDGE=cloudbr0
We now have to configure the second interface:
$ vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
BRIDGE=cloudbr1
Now that the interfaces are configured, we can add the bridges on top of them.
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
Now we configure cloudbr0 and include the management IP of the hypervisor.
Note
The management IP of the hypervisor doesn't have to be in the same subnet/VLAN as the management network, but it is quite common.
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
IPADDR=192.168.42.11
GATEWAY=192.168.42.1
NETMASK=255.255.255.0
STP=yes
We configure cloudbr1 as a plain bridge without an IP address
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes
After the configuration is done, restart the network. Then reboot to check that everything comes back up correctly.
Warning
In case of a configuration error or network failure, make sure you can still reach the server through an alternative channel such as IPMI or ILO.
The other required software was installed when libvirt was installed, so we only need to configure the network.
$ vi /etc/network/interfaces
Modify the interfaces file so it looks like this:
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet manual
# The second network interface
auto eth1
iface eth1 inet manual
# management network
auto cloudbr0
iface cloudbr0 inet static
bridge_ports eth0
bridge_fd 5
bridge_stp off
bridge_maxwait 1
address 192.168.42.11
netmask 255.255.255.0
gateway 192.168.42.1
dns-nameservers 8.8.8.8 8.8.4.4
dns-domain lab.example.org
# guest network
auto cloudbr1
iface cloudbr1 inet manual
bridge_ports eth1
bridge_fd 5
bridge_stp off
bridge_maxwait 1
After the configuration is done, restart the network. Then reboot to check that everything comes back up correctly.
Warning
In case of a configuration error or network failure, make sure you can still reach the server through an alternative channel such as IPMI or ILO.
Warning
This section is very important; make sure you thoroughly understand it.
To forward traffic to your instances you need at least two bridges: public and private.
By default these bridges are called cloudbr0 and cloudbr1, but you do have to make sure they are available on each hypervisor.
The most important factor is that the configuration is kept consistent across all hypervisors.
Blacklist the native Linux bridge module to make sure it does not conflict with the openvswitch module. Refer to the modprobe documentation of your distribution for how to blacklist a module. Make sure the module is not loaded at boot, or unload the bridge module before proceeding.
The network configuration below depends on the ifup-ovs and ifdown-ovs scripts, which are provided when openvswitch is installed. They are installed in /etc/sysconfig/network-scripts/.
There are many ways to configure your network. In Basic networking mode you should have two VLANs, one for your private network and one for the public network.
We assume that the hypervisor has one NIC (eth0) with three tagged VLANs:
On VLAN 100 we give the hypervisor the IP address 192.168.42.11/24 with the gateway 192.168.42.1.
Note
The Hypervisor and Management server don’t have to be in the same subnet
How you configure these files depends on your distribution; an example for RHEL/CentOS is given below.
Note
The goal of this section is to set up three bridges named 'mgmt0', 'cloudbr0' and 'cloudbr1'. This is only guidance; the actual settings depend on your network layout.
Use the ovs-vsctl command to create OpenVswitch-based network interfaces. This command configures the interface and stores the information in the OpenVswitch database.
First we create a main bridge connected to the eth0 interface. Then we create three fake bridges, each connected to a specific VLAN.
# ovs-vsctl add-br cloudbr
# ovs-vsctl add-port cloudbr eth0
# ovs-vsctl set port cloudbr trunks=100,200,300
# ovs-vsctl add-br mgmt0 cloudbr 100
# ovs-vsctl add-br cloudbr0 cloudbr 200
# ovs-vsctl add-br cloudbr1 cloudbr 300
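The resulting bridge layout can be inspected at any time (assuming the openvswitch daemon is running):

# ovs-vsctl show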
The required packages were installed when openvswitch and libvirt were installed, so we can continue with configuring the network.
First configure eth0:
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
Make sure it looks like this:
DEVICE=eth0
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
The main bridge must be configured as a trunk:
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr
DEVICE=cloudbr
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
DEVICETYPE=ovs
TYPE=OVSBridge
Now configure the three VLAN bridges:
$ vi /etc/sysconfig/network-scripts/ifcfg-mgmt0
DEVICE=mgmt0
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=static
DEVICETYPE=ovs
TYPE=OVSBridge
IPADDR=192.168.42.11
GATEWAY=192.168.42.1
NETMASK=255.255.255.0
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
DEVICETYPE=ovs
TYPE=OVSBridge
$ vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1
DEVICE=cloudbr1
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=OVSBridge
DEVICETYPE=ovs
After the configuration is done, restart the network. Then reboot to check that everything comes back up correctly.
Warning
In case of a configuration error or network failure, make sure you can still reach the server through an alternative channel such as IPMI or ILO.
The hypervisors need to communicate with each other and with the Management Server.
To achieve this we have to open the following TCP ports (if you are using a firewall):
How to open these ports depends on your distribution. Examples for RHEL/CentOS and Ubuntu are given below.
RHEL and CentOS use iptables as their firewall. Execute the following iptables commands to open the ports:
$ iptables -I INPUT -p tcp -m tcp --dport 22 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 1798 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 16509 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 5900:6100 -j ACCEPT
$ iptables -I INPUT -p tcp -m tcp --dport 49152:49216 -j ACCEPT
These iptables settings are not persistent and will be lost after a reboot, so we have to save them manually:
$ iptables-save > /etc/sysconfig/iptables
The default firewall on Ubuntu is UFW (Uncomplicated FireWall), a Python wrapper around iptables.
To open the required ports, execute the following commands:
$ ufw allow proto tcp from any to any port 22
$ ufw allow proto tcp from any to any port 1798
$ ufw allow proto tcp from any to any port 16509
$ ufw allow proto tcp from any to any port 5900:6100
$ ufw allow proto tcp from any to any port 49152:49216
Note
By default UFW is not enabled on Ubuntu. Executing these commands while the firewall is disabled does not enable the firewall.
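If you do want the firewall active, UFW can be enabled afterwards. Enabling it starts enforcing the rules immediately, so make sure the SSH rule above has been added first:

$ ufw enable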
New in 4.11 is the ability to bypass storing a template on secondary storage and instead directly download a 'template' from an alternate remote location. To facilitate this, the Aria2 (https://aria2.github.io/) package must be installed on all of your KVM hosts.
As this package is often not available in standard distribution repos, you will need to install it from your preferred source.
CloudStack uses qemu-img to perform live migrations. In CentOS > 6.3, the qemu-img supplied by RedHat/CentOS ceased to include the '-s' switch, which performs snapshots. The '-s' switch has been restored in the latest CentOS/RHEL 7.x versions.
In order to be able to perform live migrations on CentOS 6.x (greater than 6.3) you must replace your version of qemu-img with one which has been patched to include the ‘-s’ switch.
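A quick way to check whether the installed qemu-img advertises the '-s' switch is to look at the convert usage line in its help output (the exact wording varies between versions):

$ qemu-img --help | grep convert

If the usage shown for convert includes "-s snapshot_name", a suitable binary is in place.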