OpenStack实验手册
第一部分 基础环境准备
1.1 VMware Workstation的安装
安装VMware Workstation 12 Pro版本,一路保持默认设置即可。安装过程即将完成时,输入激活码进行激活,在此不再赘述。
1.2 NAT IP地址段的设置
OpenStack外部网络使用NAT模式的网卡,因此需要配置NAT网卡的IP地址段,配置方法如下:
依次点击【编辑】/【虚拟网络编辑器】,如下图所示
选择【VMnet8】对应的行,在【子网】选项中输入【192.168.10.0】,在【子网掩码】选项中输入【255.255.255.0】,勾选【NAT模式(与虚拟机共享主机的IP地址)】,最后点击【确定】按钮,如下图所示:
1.3 CentOS系统最小化安装
在本小节中,我们在VMware Workstation上最小化安装CentOS 7.5操作系统,命名为template。将template作为模板机,使用它克隆出多台虚拟机做实验,避免手工安装多台CentOS虚拟机,节省时间。
详细的安装步骤如下:
1.3.1 VMware Workstation的配置
打开VMware Workstation,依次点击【文件】/【新建虚拟机】,如下图所示:
保持默认设置,点击【下一步】,如下图所示:
选择【稍后安装操作系统】,点击【下一步】,如下图所示:
客户机操作系统选择【Linux】,版本选择【CentOS64位】,点击【下一步】,如下图所示:
输入虚拟机名称,本例中设置为【template】,并设置虚拟机文件的存放路径,本例中为【D:\template】,点击【下一步】,如下图所示:
在【最大磁盘大小】选项中,输入【50G】,然后勾选【将虚拟磁盘存储为单个文件】,点击【下一步】,如下图所示:
点击【完成】,如下图所示:
点击屏幕左侧的【template】标签,然后点击【编辑虚拟机设置】,如下图所示:
依次选择【打印机】/【移除】,【声卡】/【移除】,将虚拟打印机和声卡移除,避免占用虚拟机资源,如下图所示:
在【CD/DVD(IDE)】标签中,勾选【使用ISO映像文件】,并且指定该ISO镜像的位置,最后点击【确定】,如下图所示:
本例中使用的ISO镜像文件为CentOS-7-x86_64-Everything-1804.iso
1.3.2CentOS操作系统的最小化安装
点击【开启虚拟机】,如下图所示
将鼠标点击到虚拟机黑屏中,使键盘能够控制虚拟机,使用【上下键】选择【Install CentOS 7】,如下图所示:
然后按【Tab】键,在末尾加上net.ifnames=0 biosdevname=0,然后按【回车键】,如下图所示:
在此界面保持默认语言设置,使用英文作为CentOS默认语言,点击【Continue】按钮,如下图所示:
系统默认时区为【Americas/New York timezone】,需要修改为【Asia/Shanghai】,点击【DATE & TIME】,如下图所示:
在下图界面中,【Region】选择【Asia】,【City】选择【Shanghai】,然后点击【Done】按钮,如下图所示:
在下图界面中,点击【INSTALLATION DESTINATION】,如下图所示:
在下图界面中,保持默认选项,点击【Done】,如下图所示:
在下图界面中,选择【Begin Installation】,开始安装:
安装过程中,点击【ROOT PASSWORD】来设置root用户密码:
在下图界面中,输入两次root密码,点击【Done】按钮,如下图所示:
注意:如果密码复杂度不符合要求,需要点击两次【Done】按钮
待操作系统安装完成后,点击【Reboot】按钮,如下图所示:
1.3.3模板机template的基本配置
1.3.3.1更新软件包
在模板机template上执行yum update -y命令,将软件包升级到最新:
[root@localhost ~]# yum update -y
升级完成后,执行reboot命令,重启主机:
[root@localhost ~]# reboot
1.3.3.2关闭不必要的服务
重启完成后,执行下面的命令,依次关闭防火墙、SELinux、NetworkManager和postfix:
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl disable NetworkManager
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable postfix
[root@localhost ~]# systemctl stop postfix
1.3.3.3设置主机名
[root@localhost ~]# hostnamectl set-hostname template
1.3.3.4模板机关机
[root@localhost ~]# init 0
1.4controller和network虚拟机的克隆
接下来我们使用模板机template克隆出一台虚拟机controller,克隆步骤如下:
右击【template】,点击【管理】和【克隆】,如下图所示:
点击【下一步】,如下图所示:
选择【虚拟机中的当前状态】,点击【下一步】,如下图所示:
选择【创建完整克隆】,点击【下一步】,如下图所示:
设置虚拟机名称和虚拟机文件的存储路径,本例中依次为【controller】和【C:\controller】,然后点击【完成】,如下图所示:
在同一台Win7主机上,使用同样方式克隆出network节点。
1.5将template虚拟机迁移到另外一台Win7主机
1.5.1迁移原因描述
由于本实验环境中单台Win7主机的内存仅为4GB,无法满足实验需求,因此需要使用2台相同的Win7主机,并将template虚拟机迁移到另外一台Win7主机,再用迁移后的template模板机,在另外一台Win7主机上克隆出compute1和compute2虚拟机。
注意:在进行template模板机迁移之前,先要对另外一台Win7主机的VMware Workstation进行配置,配置请参考1.1小节和1.2小节。
1.5.2Template模板机迁移步骤
1.5.2.1虚拟机目录拷贝迁移
将整个目录【D:\template】拷贝到另外一台Win7主机的D盘根目录,并将该虚拟机目录导入到VMware Workstation中,导入步骤如下:
1.5.2.2虚拟机导入
打开另外一台Win7主机的VMware Workstation,依次点击【文件】/【打开】,如下图所示:
定位到template目录【本地磁盘(D:)->template】,选中【template.vmx】,点击【打开】,如下图所示:
1.5.2.3删除锁文件
如果拷贝过来的目录包含【template.vmx.lck】目录,需要将该目录删除。进入【D:\template】目录,右击【template.vmx.lck】目录,点击【删除】,如下图所示:
至此template模板机迁移完成。
1.6compute1和compute2的克隆
参考1.4小节,在另外一台Win7主机上,使用模板机template克隆出compute1和compute2两个虚拟机,过程完全相同,在此不再赘述。
1.7虚拟机配置设置
点击【controller】标签,点击【编辑虚拟机设置】,如下图所示:
选中【处理器】,在【处理器数量(P)】中,通过下拉菜单选择【2】,如下图所示:
然后选中【网络适配器】,然后勾选【NAT模式(N),用于共享主机的IP地址】,如下图所示:
然后点击【内存】,将虚拟机的内存设置为【2560MB】,如下图所示
最后点击【添加】,在弹出的对话框中选中【网络适配器】,点击【下一步】,如下图所示:
在下一步对话框中,选中【桥接模式(R):直接连接到物理网络】和【复制物理网络连接状态(P)】,点击【完成】,如下图所示:
返回到上一级对话框,点击【确定】,如下图所示:
至此,controller虚拟机的配置已经完成。
使用同样的方法,设置network节点以及另外一台Win7主机上的两台虚拟机compute1和compute2。network节点内存设置为【1024MB】,compute1和compute2的内存均设置为【1544MB】,网卡配置与controller节点完全一致,在此不再赘述。
注意:compute1和compute2的虚拟CPU数量,一定要选择【2】,如果使用默认参数【1】,在计算节点就无法创建OpenStack虚拟机。
1.8IP地址设置
点击【controller】标签,点击【开启此虚拟机】,启动controller虚拟机,如下图所示:
待虚拟机启动完成后,输入用户名root和密码,然后对eth0网卡进行IP地址的配置,输入vi /etc/sysconfig/network-scripts/ifcfg-eth0,如下图所示:
将eth0配置文件改为如下所示:
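下面给出一份eth0配置文件的参考示例(仅为示意:其中controller节点的IP地址192.168.10.60和网关192.168.10.2为假设值,VMware NAT模式的网关通常是网段的.2地址,请以实际规划和截图为准):
TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.10.60
NETMASK=255.255.255.0
GATEWAY=192.168.10.2
DNS1=192.168.10.2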
按ESC键退出插入模式,然后按:wq保存退出。
接下来需要配置eth1网卡。由于eth1网卡没有配置文件,所以需要将eth0的网卡配置文件拷贝为eth1,然后修改eth1的网卡配置文件:
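参考命令如下(示例):
# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
# vi /etc/sysconfig/network-scripts/ifcfg-eth1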
将eth1配置文件改为如下所示:
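eth1配置文件的参考示例如下(仅为示意:IP地址192.168.2.60是假设的controller节点业务网地址,掩码参考本节下文表格中的255.255.252.0,该网卡不配置网关):
TYPE=Ethernet
BOOTPROTO=static
NAME=eth1
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.2.60
NETMASK=255.255.252.0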
按ESC键退出插入模式,然后按:wq保存退出。
最后重启网络进程,使上述配置生效:
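参考命令如下(CentOS 7中网络服务名为network):
# systemctl restart network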
使用ping命令,测试外网是否连通:
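例如ping一个公网域名,能正常解析并收到回复,即说明外网连通(此处以阿里云镜像站为例,仅为示意):
# ping -c 4 mirrors.aliyun.com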
使用同样方式配置network虚拟机,以及另外一台Win7主机上的compute1虚拟机和compute2虚拟机,各虚拟机的网卡配置如下表所示:
主机名      网卡名   IP地址          掩码
network     eth0     192.168.10.61   255.255.255.0
network     eth1     192.168.2.61    255.255.252.0
compute1    eth0     192.168.10.62   255.255.255.0
compute1    eth1     192.168.2.62    255.255.252.0
compute2    eth0     192.168.10.63   255.255.255.0
compute2    eth1     192.168.2.63    255.255.252.0
network、compute1和compute2设置完IP地址后,在controller节点上进行网络连通性测试:
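例如在controller节点上依次ping其余三个节点的业务网IP(示例):
# ping -c 4 192.168.2.61
# ping -c 4 192.168.2.62
# ping -c 4 192.168.2.63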
1.9修改主机名
使用以下命令,分别将controller节点、network节点、compute1节点和compute2节点的主机名进行修改:
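以controller节点为例,参考命令如下(其余节点分别将主机名替换为network、compute1和compute2):
[root@localhost ~]# hostnamectl set-hostname controller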
然后按ctrl+d退出,重新登录后,主机名就修改过来了。
本例仅以controller节点为例,network节点、compute1节点和compute2节点请参考本小节的步骤进行修改。
1.10制作快照
由于本实验步骤太多,操作过程中难免出现操作错误,一旦操作错误,就需要从头重新开始,非常耗费时间。因此,在做完每一大步时,将虚拟机的当前状态做成快照,一旦在下一步实验过程中,出现无法回退的操作错误,我们就可以使用快照,将虚拟机恢复到上一次快照的状态。
快照的操作步骤如下:
右击【controller】标签,点击【快照】/【拍摄快照】,如下图所示:
在【名称】和【描述】中输入信息,便于自己恢复快照时查看,点击【拍摄快照】,如下图所示:
至此,快照拍摄完成。
本小节仅以controller节点为例,network、compute1和compute2节点请参考本小节自行制作快照。
如果实验中出现无法回退的错误,使用以下操作进行快照恢复:
将虚拟机关机,右击【controller】标签,点击【快照】/【恢复到快照】,如下图所示:
然后重新打开虚拟机,controller节点就恢复到上次拍摄快照的状态了。
1.11 配置Base yum源
本小节的配置,需要在controller节点、network节点、compute1节点和compute2节点执行:
[root@controller ~]# cd /etc/yum.repos.d/
[root@controller yum.repos.d]# rm -rf *.repo
[root@controller yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
如果有本地yum源,建议直接在这一步配置本地yum源;否则,由于CentOS版本不同,可能会对后面的yum安装造成影响:
[root@controller ~]# cd /etc/yum.repos.d/
[root@controller yum.repos.d]# rm -rf *.repo
[root@controller yum.repos.d]# cat >> local.repo << EOF
[LOCAL]
name=Local
baseurl=http://192.168.0.167/OpenStack_Q_repo/
enabled=1
gpgcheck=0
EOF
1.12安装基础工具包
本小节的配置,需要在controller节点、compute1节点和compute2节点执行:
[root@controller ~]# yum install -y net-tools wget lrzsz
1.13配置本地hosts文件
本小节的配置,需要在controller节点、network节点、compute1节点和compute2节点执行:
[root@controller ~]# vi /etc/hosts
配置文件末尾添加如下内容:
192.168.2.60 controller
192.168.2.61 network
192.168.2.62 compute1
192.168.2.63 compute2
第二部分 OpenStack基础组件安装
2.1配置时间服务
2.1.1控制节点配置
本小节的内容,只需要在controller节点上执行。
2.1.1.1安装时间服务组件
建议时间服务使用业务网络,因为network节点配置Neutron之后,外部网络将无法连通,时间久了NTP时间同步就会不准。
[root@controller ~]# yum install chrony ntpdate -y
[root@controller ~]# vi /etc/chrony.conf
将如下四行注释掉:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
添加如下两行:
server ntp7.aliyun.com iburst
allow 192.168.2.0/24
2.1.1.2强制同步时间
[root@controller ~]# ntpdate ntp7.aliyun.com
 9 Oct 23:07:01 ntpdate[10288]: adjust time server 203.107.6.88 offset 0.000440 sec
2.1.1.3启动chronyd服务并查看服务状态
[root@controller ~]# systemctl start chronyd
[root@controller ~]# systemctl enable chronyd
[root@controller ~]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-10-09 23:02:19 EDT; 6min ago
[root@controller ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
==========================================================================
^* 203.107.6.88                  2   6    17    13  -1206us[-3593us] +/-   18ms
2.1.2其它节点配置
本小节的内容,需要在network、compute1和compute2节点上执行。
2.1.2.1安装时间服务组件
[root@compute1 ~]# yum install chrony ntpdate -y
[root@compute1 ~]# vi /etc/chrony.conf
将如下四行注释掉:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
添加如下一行:
server controller iburst
2.1.2.2强制同步时间
[root@compute1 ~]# ntpdate controller
10 Oct 21:32:17 ntpdate[1081]: adjust time server 192.168.0.210 offset 0.000235 sec
2.1.2.3启动chronyd服务并查看服务状态
[root@compute1 ~]# systemctl start chronyd
[root@compute1 ~]# systemctl enable chronyd
[root@compute1 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-10-09 23:10:14 EDT; 14min ago
[root@compute1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6    17     3    +77ns[  +25us] +/- 8788us
2.2配置OpenStackyum源
2.2小节所有步骤,需要在controller、network、compute1和compute2节点上执行:
# yum install centos-release-openstack-queens -y
# vi /etc/yum.repos.d/CentOS-QEMU-EV.repo
将[centos-qemu-ev]标签下的$contentdir修改为centos,修改后如下所示:
baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/
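如果不想手工编辑,也可以用sed直接替换(示例,该命令会把文件中所有的$contentdir替换为centos,请先确认这样替换不会影响文件中的其它仓库段):
# sed -i 's#\$contentdir#centos#g' /etc/yum.repos.d/CentOS-QEMU-EV.repo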
2.3安装OpenStack客户端和SELinux
2.3小节所有步骤,需要在controller、network、compute1和compute2节点上执行
2.3.1安装OpenStack客户端
# yum install python-openstackclient -y
2.3.2安装SELinux
# yum install openstack-selinux -y
2.4安装并配置MySQL
MySQL只需要在控制节点上配置,因此2.4小节的所有操作,只需要在controller节点上执行。
2.4.1安装MySQL软件
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL -y
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.4.2修改MySQL配置文件
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
在文件中添加以下内容:
[mysqld]
bind-address=192.168.2.60
default-storage-engine=innodb
innodb_file_per_table=on
max_connections=4096
collation-server=utf8_general_ci
character-set-server=utf8
2.4.3启动MySQL并设置开机启动
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# systemctl status mariadb.service
● mariadb.service - MariaDB 10.1 database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 01:03:04 EDT; 15s ago
2.4.4执行MySQL安全设置
[root@controller ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):    #直接敲回车
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] Y    #输入Y,表示要重新设置MySQL密码
New password:    #输入新密码admin123
Re-enter new password:    #再次输入新密码admin123
Password updated successfully!
Reloading privilege tables..
 ... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] Y    #输入Y,表示要删除MySQL的匿名用户
 ... Success!
Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] Y    #输入Y,表示禁止MySQL的远程登录
 ... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] Y    #输入Y,表示删除test数据库
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] Y    #输入Y,表示重新载入权限
 ... Success!
Cleaning up...
All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
2.4.5MySQL登录测试
[root@controller ~]# mysql -u root -p
Enter password:    #输入刚才设置的密码
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
2.5安装并配置RabbitMQ
RabbitMQ只需要在控制节点上配置,因此2.5小节的所有操作,只需要在controller节点上执行。
2.5.1安装RabbitMQ软件
[root@controller ~]# yum install rabbitmq-server -y
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.5.2启动RabbitMQ并设置开机启动
[root@controller ~]# systemctl start rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server
2.5.3检查RabbitMQ状态:
[root@controller ~]# systemctl status rabbitmq-server
● rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 20:05:33 EDT; 1h 43min ago
2.5.4创建openstack用户并分配权限
[root@controller ~]# rabbitmqctl add_user openstack admin123
Creating user "openstack" ...
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
#将读、写和访问权限赋予openstack用户
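可以用下面的命令确认用户和权限已经创建成功(示例):
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions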
2.6安装并配置Memcache
Memcache只需要在控制节点上配置,因此2.6小节的所有操作,只需要在controller节点上执行。
2.6.1安装Memcache软件
[root@controller ~]# yum install memcached python-memcached -y
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.6.2修改Memcache配置文件
[root@controller ~]# vi /etc/sysconfig/memcached
将OPTIONS一行改为下面内容:
OPTIONS="-l 127.0.0.1,::1,controller"
2.6.3启动Memcache服务,并检查服务状态
[root@controller ~]# systemctl start memcached
[root@controller ~]# systemctl enable memcached
[root@controller ~]# systemctl status memcached
● memcached.service - memcached daemon
   Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 01:28:39 EDT; 6s ago
2.7安装并配置etcd
etcd只需要在控制节点上配置,因此2.7小节的所有操作,只需要在controller节点上执行。
2.7.1安装etcd软件
[root@controller ~]# yum install etcd -y
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.7.2修改etcd配置文件
[root@controller ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@controller ~]# vi /etc/etcd/etcd.conf
修改配置项到如下状态:
#[Member]
ETCD_LISTEN_PEER_URLS="http://192.168.2.60:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.2.60:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.2.60:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.2.60:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.2.60:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
2.7.3启动etcd服务,并检查服务状态
[root@controller ~]# systemctl start etcd
[root@controller ~]# systemctl enable etcd
[root@controller ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 01:41:43 EDT; 8s ago
到此为止,OpenStack etcd组件已经安装完成。我们在controller节点上做一个名为【etcd完成】的快照,防止后续配置错误,便于回退。
2.8安装并配置keystone
keystone只需要在控制节点上配置,因此2.8小节的所有操作,只需要在controller节点上执行。
2.8.1创建keystone数据库并授权
[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.17 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
2.8.2安装keystone软件
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.8.3修改keystone配置文件
[root@controller ~]# mv /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
[root@controller ~]# vi /etc/keystone/keystone.conf
添加如下配置:
[database]
connection=mysql+pymysql://keystone:admin123@controller/keystone
[token]
provider=fernet
2.8.4修改配置文件权限
[root@controller ~]# chgrp keystone /etc/keystone/keystone.conf
[root@controller ~]# chmod 640 /etc/keystone/keystone.conf
2.8.5初始化keystone数据库
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
正常情况下,这条命令没有输出信息
[root@controller ~]# echo $?
0
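如果想进一步确认,可以登录数据库查看keystone库中是否已生成数据表(示例,-p后紧跟前面为keystone用户设置的密码):
[root@controller ~]# mysql -u keystone -padmin123 -e "use keystone; show tables;"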
2.8.6初始化Fernet key库(生成token)
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# echo $?
0
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# echo $?
0
正常情况下,这两条命令没有输出信息
2.8.7引导认证服务
[root@controller ~]# keystone-manage bootstrap --bootstrap-password admin123 --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne
[root@controller ~]# echo $?
0
2.8.8配置ApacheHTTP服务
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
将#ServerName www.example.com:80一行改为:
ServerName controller
2.8.9创建软连接
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
2.8.10启动HTTP服务并检查服务状态
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd
[root@controller ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 02:18:16 EDT; 44s ago
2.8.11创建环境变量
[root@controller ~]# export OS_USERNAME=admin
[root@controller ~]# export OS_PASSWORD=admin123
[root@controller ~]# export OS_PROJECT_NAME=admin
[root@controller ~]# export OS_USER_DOMAIN_NAME=Default
[root@controller ~]# export OS_PROJECT_DOMAIN_NAME=Default
[root@controller ~]# export OS_AUTH_URL=http://controller:35357/v3
[root@controller ~]# export OS_IDENTITY_API_VERSION=3
2.8.12创建名为service的project
[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|ServiceProject|
|domain_id|default|
|enabled|True|
|id|dda14cc3ed4a491abb19031e0aebd803|
|is_domain|False|
|name|service|
|parent_id|default|
|tags|[]|
+-------------+----------------------------------+
2.8.13创建名为demo的project
[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|DemoProject|
|domain_id|default|
|enabled|True|
|id|a86824861eb94de39e532df77a3b4f52|
|is_domain|False|
|name|demo|
|parent_id|default|
|tags|[]|
+-------------+----------------------------------+
2.8.14创建demo用户
[root@controller ~]# openstack user create --domain default --password-prompt demo
User Password:    #密码设置为admin123
Repeat User Password:
+---------------------+----------------------------------+
|Field|Value|
+---------------------+----------------------------------+
|domain_id|default|
|enabled|True|
|id|e719eb60c8634aa8bcf8749cd927e79f|
|name|demo|
|options|{}|
|password_expires_at|None|
+---------------------+----------------------------------+
2.8.15创建user角色
[root@controller ~]# openstack role create user
+-----------+----------------------------------+
|Field|Value|
+-----------+----------------------------------+
|domain_id|None|
|id|bdb5fb03c0ad42e7968337d1d0703c06|
|name|user|
+-----------+----------------------------------+
2.8.16把user角色赋予demoproject中的demo用户
[root@controller ~]# openstack role add --project demo --user demo user
2.8.17取消环境变量
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
2.8.18使用admin用户申请认证token
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:    #这里输入密码admin123
+------------+-------------------------------------------------------------------------------------------------------------------------------------+
|Field|Value|
+------------+-------------------------------------------------------------------------------------------------------------------------------------+
|expires|2018-10-10T08:27:09+0000|
|id|gAAAAABbvanNe9LCt6b3GqaPCiJoUhHVYnZzRha-hm-ogtF2RX-iG4koC6njXgrwfovynbd6lPz9asH1eA95DJ6YinO8EsnBrRnDsgQuANOnC21H8XWsMS-zwm8HeENZd8SZgsV2en3ftkeJQamkflv4Pme-bQrs_OcKa2Aw-RNfm-m2ZZfji-8|
|project_id|d3ac61f1746b4fd682cc90aff7c4b598|
|user_id|01a67deebdf2447cb9a6407412895073|
+------------+--------------------------------------------------------------------------------------------------------------------------------+
2.8.19使用demo用户申请认证token
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
Password:    #这里输入密码admin123
+------------+-------------------------------------------------------------------------------------------------------------------------------------+
|Field|Value|
+------------+--------------------------------------------------------------------------------------------------------------------------------+
|expires|2018-10-10T08:28:59+0000|
|id|gAAAAABbvao7UEcQCXcy07p9EN3WM9SOc3V8KjZ_CnrrHCZLnCQBxhnRh7j-Hrwdc-wjOqbq2gTzP5jXoy9VhMHQdWH8nGixkse9P061vrS5V5GwbLsKSCdnI7HRrLDhCdqaAoXiu52YJTLx1x1b4mzi0hhtXAYMzj_vZFTe1pRjoMpKogIgtmo|
|project_id|a86824861eb94de39e532df77a3b4f52|
|user_id|e719eb60c8634aa8bcf8749cd927e79f|
+------------+--------------------------------------------------------------------------------------------------------------------------------+
2.8.20编写admin用户环境变量脚本
[root@controller ~]# vi admin-openrc
写入以下内容:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2.8.21编写demo用户环境变量脚本
[root@controller ~]# vi demo-openrc
写入以下内容:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=admin123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2.8.22验证脚本
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack token issue
+------------+--------------------------------------------------------------------------------------------------------------------------------+
|Field|Value|
+------------+--------------------------------------------------------------------------------------------------------------------------------+
|expires|2018-10-10T08:33:01+0000|
|id|gAAAAABbvasta4O7dbuCvmSK08cIpwR3MGDr-6JFINPgSvKwp9AiyHbkq3vbIXPFBdUWZtqmgXkMBO8nzm4A3ojOl8w_Ifm2m68H0Argt9Vw4QiSwiu5vTtZOVqiqITboRHxGGpzID-NCijJFwgSk_yF98CN43M5mUSp0Z4zG5_hXDc7N_r_yBM|
|project_id|d3ac61f1746b4fd682cc90aff7c4b598|
|user_id|01a67deebdf2447cb9a6407412895073|
+------------+--------------------------------------------------------------------------------------------------------------------------------+
出现上述信息,说明admin用户脚本执行成功
[root@controller ~]# . demo-openrc
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
+------------+-------------------------------------------------------------------------------------------------------------------------------------+
|Field|Value|
+------------+-------------------------------------------------------------------------------------------------------------------------------------+
|expires|2018-10-10T08:33:54+0000|
|id|gAAAAABbvatiygjFEYtrJmP7XB0tp5soK44PAtQUt2-Zrdq43iaRiBA0w0S507WPMJI2Qrf7oyVUS2TfX6k2dn8O7wMeOhp3unfWnjPEX6BYsHq8x3QvoEv835-hnGfknuvOGzSbvkX2Z2oLIeHEfLkqjRbgpHNglwcdMO3AkmbEXRP5hCIlhkc|
|project_id|a86824861eb94de39e532df77a3b4f52|
|user_id|e719eb60c8634aa8bcf8749cd927e79f|
+------------+-------------------------------------------------------------------------------------------------------------------------------------+
出现上述信息,说明demo用户脚本执行成功
到此为止,OpenStack keystone组件已经安装完成。我们在controller节点上做一个名为【keystone完成】的快照,防止后续配置错误,便于回退。
2.9安装并配置glance
glance只需要在控制节点上配置,因此2.9小节的所有操作,只需要在controller节点上执行。
2.9.1创建glance数据库并授权
[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 13
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.12 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
2.9.2创建glance用户
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:    #输入密码admin123
Repeat User Password:    #再次输入密码admin123
+---------------------+----------------------------------+
|Field|Value|
+---------------------+----------------------------------+
|domain_id|default|
|enabled|True|
|id|0ea8e67078474ac783ad34c793ea19ba|
|name|glance|
|options|{}|
|password_expires_at|None|
+---------------------+----------------------------------+
2.9.3为glance用户分配admin角色
[root@controller ~]# openstack role add --project service --user glance admin
2.9.4创建glance服务
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|OpenStackImage|
|enabled|True|
|id|364a9915172c44dda35b50942fff5afd|
|name|glance|
|type|image|
+-------------+----------------------------------+
2.9.5为glance服务创建三个接口
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|c3cdf4a7ed9c4eb6aa18426c6a535194|
|interface|public|
|region|RegionOne|
|region_id|RegionOne|
|service_id|364a9915172c44dda35b50942fff5afd|
|service_name|glance|
|service_type|image|
|url|http://controller:9292|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|5ad6872bbe344e3095ce68164b7437c4|
|interface|internal|
|region|RegionOne|
|region_id|RegionOne|
|service_id|364a9915172c44dda35b50942fff5afd|
|service_name|glance|
|service_type|image|
|url|http://controller:9292|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|de694aa17ecb4afa983552e54fc53452|
|interface|admin|
|region|RegionOne|
|region_id|RegionOne|
|service_id|364a9915172c44dda35b50942fff5afd|
|service_name|glance|
|service_type|image|
|url|http://controller:9292|
+--------------+----------------------------------+
2.9.6安装glance软件
[root@controller ~]# yum install -y openstack-glance
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.9.7修改glance-api配置文件
[root@controller ~]# mv /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
[root@controller ~]# vi /etc/glance/glance-api.conf
写入以下内容:
[database]
connection=mysql+pymysql://glance:admin123@controller/glance
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:5000
memcached_servers=controller:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=glance
password=admin123
[paste_deploy]
flavor=keystone
[glance_store]
stores=file,http
default_store=file
filesystem_store_datadir=/var/lib/glance/images/
2.9.8修改配置文件权限
[root@controller ~]# chgrp glance /etc/glance/glance-api.conf
[root@controller ~]# chmod 640 /etc/glance/glance-api.conf
2.9.9修改glance-registry配置文件
[root@controller ~]# mv /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
[root@controller ~]# vi /etc/glance/glance-registry.conf
写入以下内容:
[database]
connection=mysql+pymysql://glance:admin123@controller/glance
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:5000
memcached_servers=controller:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=glance
password=admin123
[paste_deploy]
flavor=keystone
2.9.10修改配置文件权限
[root@controller ~]# chgrp glance /etc/glance/glance-registry.conf
[root@controller ~]# chmod 640 /etc/glance/glance-registry.conf
2.9.11同步glance数据库
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
执行完成后,末尾行出现以下信息,说明数据库同步成功:
Database is synced successfully.
2.9.12启动glance服务,并检查服务状态
[root@controller ~]# systemctl start openstack-glance-api openstack-glance-registry
[root@controller ~]# systemctl enable openstack-glance-api openstack-glance-registry
[root@controller ~]# systemctl status openstack-glance-api openstack-glance-registry
● openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:22:50 EDT; 13s ago
● openstack-glance-registry.service - OpenStack Image Service (code-named Glance) Registry server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-registry.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:22:50 EDT; 13s ago
2.9.13下载测试镜像
[root@controller ~]# . admin-openrc
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
--2018-10-10 04:24:19--  http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 64.90.42.85, 2607:f298:6:a036::bd6:a72a
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13267968 (13M) [text/plain]
Saving to: ‘cirros-0.3.5-x86_64-disk.img’
100%[======================================>] 13,267,968   842KB/s   in 17s
2018-10-10 04:24:42 (773 KB/s) - ‘cirros-0.3.5-x86_64-disk.img’ saved [13267968/13267968]
如果cirros镜像也放在本地HFS服务器上,使用如下命令:
[root@controller ~]# wget http://192.168.0.167/packages-after-repo2/cirros-0.3.5-x86_64-disk.img
2.9.14上传镜像
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
|Field|Value|
+------------------+-----------------------------------------------------+
|checksum|f8ab98ff5e73ebab884d80c9dc9c7290|
|container_format|bare|
|created_at|2018-10-10T08:25:28Z|
|disk_format|qcow2|
|file|/v2/images/e07bc4e4-1b7d-42ec-b973-df3222aae8ae/file|
|id|e07bc4e4-1b7d-42ec-b973-df3222aae8ae|
|min_disk|0|
|min_ram|0|
|name|cirros|
|owner|d3ac61f1746b4fd682cc90aff7c4b598|
|protected|False|
|schema|/v2/schemas/image|
|size|13267968|
|status|active|
|tags||
|updated_at|2018-10-10T08:25:28Z|
|virtual_size|None|
|visibility|public|
+------------------+-----------------------------------------------------+
2.9.15查询上传结果
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
|ID|Name|Status|
+--------------------------------------+--------+--------+
|e07bc4e4-1b7d-42ec-b973-df3222aae8ae|cirros|active|
+--------------------------------------+--------+--------+
测试成功。
到此为止,OpenStack glance组件已经安装完成。我们在controller节点上做一个名为【glance完成】的快照,防止后续配置错误,便于回退。
2.10安装并配置nova
nova服务需要在controller节点、compute1节点和compute2节点上安装并配置,并且在控制节点上验证是否安装成功。下面先从控制节点安装开始。2.10.1小节是在控制节点安装并配置,2.10.2小节是在计算节点安装并配置,2.10.3小节是在控制节点检验安装是否成功。
2.10.1控制节点安装
2.10.1.1创建数据库并授权
[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 22
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.01 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.04 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.02 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
2.10.1.2创建nova用户
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:    #输入密码admin123
Repeat User Password:    #再次输入密码admin123
+---------------------+----------------------------------+
|Field|Value|
+---------------------+----------------------------------+
|domain_id|default|
|enabled|True|
|id|688257ae157e4408966f69735bc9c2c2|
|name|nova|
|options|{}|
|password_expires_at|None|
+---------------------+----------------------------------+
2.10.1.3为nova用户分配admin角色
[root@controller ~]# openstack role add --project service --user nova admin
2.10.1.4创建nova服务
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|OpenStackCompute|
|enabled|True|
|id|79d9ce689d2546a59b0bf111665a9d29|
|name|nova|
|type|compute|
+-------------+----------------------------------+
2.10.1.5为nova服务创建三个接口
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|e78224af0b73417ba7f0359102530778|
|interface|public|
|region|RegionOne|
|region_id|RegionOne|
|service_id|79d9ce689d2546a59b0bf111665a9d29|
|service_name|nova|
|service_type|compute|
|url|http://controller:8774/v2.1|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|f8842169641049358fdf04b0cb1d4754|
|interface|internal|
|region|RegionOne|
|region_id|RegionOne|
|service_id|79d9ce689d2546a59b0bf111665a9d29|
|service_name|nova|
|service_type|compute|
|url|http://controller:8774/v2.1|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|f8e3ea60e18f47dbaac5b6357bdd73fa|
|interface|admin|
|region|RegionOne|
|region_id|RegionOne|
|service_id|79d9ce689d2546a59b0bf111665a9d29|
|service_name|nova|
|service_type|compute|
|url|http://controller:8774/v2.1|
+--------------+----------------------------------+
2.10.1.6创建placement用户
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:    #输入密码admin123
Repeat User Password:    #再次输入密码admin123
+---------------------+----------------------------------+
|Field|Value|
+---------------------+----------------------------------+
|domain_id|default|
|enabled|True|
|id|dcbad0733cc74e06ba9b6ec8da873ae6|
|name|placement|
|options|{}|
|password_expires_at|None|
+---------------------+----------------------------------+
2.10.1.7为placement用户分配admin角色
[root@controller ~]# openstack role add --project service --user placement admin
2.10.1.8创建placement服务
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|PlacementAPI|
|enabled|True|
|id|755e41710e7345ef8e7a5d438c76151a|
|name|placement|
|type|placement|
+-------------+----------------------------------+
2.10.1.9为placement服务创建三个接口
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|8543e34401fa40ffa307ebd84225491b|
|interface|public|
|region|RegionOne|
|region_id|RegionOne|
|service_id|755e41710e7345ef8e7a5d438c76151a|
|service_name|placement|
|service_type|placement|
|url|http://controller:8778|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|89ecdb36fd514f52811bfc30dc0255f3|
|interface|internal|
|region|RegionOne|
|region_id|RegionOne|
|service_id|755e41710e7345ef8e7a5d438c76151a|
|service_name|placement|
|service_type|placement|
|url|http://controller:8778|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|dd00ca493af84e6faf9c6ae063c0b8f8|
|interface|admin|
|region|RegionOne|
|region_id|RegionOne|
|service_id|755e41710e7345ef8e7a5d438c76151a|
|service_name|placement|
|service_type|placement|
|url|http://controller:8778|
+--------------+----------------------------------+
2.10.1.10安装nova组件
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.10.1.11修改nova配置文件
[root@controller ~]# mv /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@controller ~]# vi /etc/nova/nova.conf
写入如下内容:
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:admin123@controller
my_ip=192.168.2.60
use_neutron=True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[api_database]
connection=mysql+pymysql://nova:admin123@controller/nova_api
[database]
connection=mysql+pymysql://nova:admin123@controller/nova
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_url=http://controller:5000/v3
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=nova
password=admin123
[vnc]
enabled=true
server_listen=$my_ip
server_proxyclient_address=$my_ip
[glance]
api_servers=http://controller:9292
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name=RegionOne
project_domain_name=Default
project_name=service
auth_type=password
user_domain_name=Default
auth_url=http://controller:5000/v3
username=placement
password=admin123
[scheduler]
discover_hosts_in_cells_interval=300
2.10.1.12修改配置文件权限
[root@controller ~]# chgrp nova /etc/nova/nova.conf
[root@controller ~]# chmod 640 /etc/nova/nova.conf
2.10.1.13修改placement配置文件
[root@controller ~]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
在文件末尾增加以下行:
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
2.10.1.14重启httpd服务并检查其状态
[root@controller ~]# systemctl restart httpd
[root@controller ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:51:45 EDT; 5s ago
2.10.1.15同步nova-api数据库
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
遇到以下报错可以忽略:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
[root@controller ~]# echo $?
0
2.10.1.16同步Cell0数据库
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
遇到以下报错可以忽略:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
[root@controller ~]# echo $?
0
2.10.1.17创建cell1单元格
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
遇到以下报错可以忽略:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
dff763c6-bd15-4bb5-a1f2-83270d52183e
[root@controller ~]# echo $?
0
2.10.1.18同步nova数据库
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
遇到以下报错可以忽略:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be
  result = self._query(query)
[root@controller ~]# echo $?
0
2.10.1.19验证novacell0和cell1是否正确注册
[root@controller ~]# nova-manage cell_v2 list_cells
遇到以下报错可以忽略:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
显示如下信息,说明注册成功:
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
|Name|UUID|TransportURL|DatabaseConnection|
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
|cell0|00000000-0000-0000-0000-000000000000|none:/|mysql+pymysql://nova:****@controller/nova_cell0|
|cell1|dff763c6-bd15-4bb5-a1f2-83270d52183e|rabbit://openstack:****@controller|mysql+pymysql://nova:****@controller/nova|
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
2.10.1.20启动nova服务,并检查服务状态
[root@controller ~]# systemctl start openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
[root@controller ~]# systemctl enable openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
[root@controller ~]# systemctl status openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
● openstack-nova-api.service - OpenStack Nova API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:58:56 EDT; 1min 0s ago
● openstack-nova-consoleauth.service - OpenStack Nova VNC console auth Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:58:53 EDT; 1min 4s ago
● openstack-nova-scheduler.service - OpenStack Nova Scheduler Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:58:54 EDT; 1min 3s ago
● openstack-nova-conductor.service - OpenStack Nova Conductor Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-conductor.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:58:54 EDT; 1min 3s ago
● openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 04:58:38 EDT; 1min 19s ago
到此为止,OpenStack nova组件在控制节点上已经安装完成。我们在controller节点、compute1节点和compute2节点上,都做一个名为【nova完成】的快照,防止后续配置错误,便于回退。
2.10.2计算节点安装
注意:本节的操作需要同时在compute1和compute2上进行操作。
2.10.2.1检查计算节点是否支持嵌套虚拟化
[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
结果为0表示不支持虚拟化,或者虚拟化开关没有打开
2.10.2.2打开嵌套虚拟化开关
将compute1和compute2关机,点击【compute1】标签,点击【编辑虚拟机设置】,
在弹出的对话框中,点击【处理器】,在屏幕右侧勾选【虚拟化IntelVT-x/EPT或AMD-V/RVI(V)】和【虚拟化CPU性能计数器(U)】,然后点击下方的【确定】按钮。
同样的方式配置compute2节点,配置完成后将两台虚拟机开机,再次使用相同的命令查看是否支持嵌套虚拟化,返回结果不为0:
[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
1
2.10.2.3安装nova软件
[root@compute1 ~]# yum install openstack-nova-compute -y
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.10.2.4修改nova配置文件
[root@compute1 ~]# mv /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@compute1 ~]# vi /etc/nova/nova.conf
写入如下内容:
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:admin123@controller
my_ip=192.168.2.62    #compute2节点改为192.168.2.63
use_neutron=True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy=keystone
[keystone_authtoken]
auth_url=http://controller:5000/v3
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=nova
password=admin123
[vnc]
enabled=True
server_listen=0.0.0.0
server_proxyclient_address=$my_ip
novncproxy_base_url=http://192.168.2.60:6080/vnc_auto.html
[glance]
api_servers=http://controller:9292
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name=RegionOne
project_domain_name=Default
project_name=service
auth_type=password
user_domain_name=Default
auth_url=http://controller:5000/v3
username=placement
password=admin123
[libvirt]
virt_type=qemu
2.10.2.5修改nova配置文件权限
[root@compute1 ~]# chgrp nova /etc/nova/nova.conf
[root@compute1 ~]# chmod 640 /etc/nova/nova.conf
2.10.2.6启动nova服务,并检查服务状态
[root@compute1 ~]# systemctl start libvirtd openstack-nova-compute
[root@compute1 ~]# systemctl enable libvirtd openstack-nova-compute
[root@compute1 ~]# systemctl status libvirtd openstack-nova-compute
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-10-10 05:25:33 EDT; 13s ago
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 05:25:43 EDT; 4s ago
2.10.3控制节点验证
2.10.3.1计算节点加入Cell数据库
[root@controller ~]# . admin-openrc
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': dff763c6-bd15-4bb5-a1f2-83270d52183e
Checking host mapping for compute host 'compute2': bec45c3e-569f-4680-9e92-b772957b5cbe
Creating host mapping for compute host 'compute2': bec45c3e-569f-4680-9e92-b772957b5cbe
Found 1 unmapped computes in cell: dff763c6-bd15-4bb5-a1f2-83270d52183e
[root@controller~]#echo$?
0
2.10.3.2验证安装是否成功
[root@controller ~]# . admin-openrc
#列出计算相关服务
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
|ID|Binary|Host|Zone|Status|State|UpdatedAt|
+----+------------------+------------+----------+---------+-------+----------------------------+
|1|nova-consoleauth|controller|internal|enabled|up|2018-10-10T09:31:59.000000|
|2|nova-scheduler|controller|internal|enabled|up|2018-10-10T09:31:59.000000|
|3|nova-conductor|controller|internal|enabled|up|2018-10-10T09:31:59.000000|
|6|nova-compute|compute1|nova|enabled|up|2018-10-10T09:31:58.000000|
|7|nova-compute|compute2|nova|enabled|up|2018-10-10T09:32:04.000000|
+----+------------------+------------+----------+---------+-------+----------------------------+
#列出服务接口
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
|Name|Type|Endpoints|
+-----------+-----------+-----------------------------------------+
|keystone|identity|RegionOne|
|||admin:http://controller:5000/v3/|
|||RegionOne|
|||internal:http://controller:5000/v3/|
|||RegionOne|
|||public:http://controller:5000/v3/|
||||
|glance|image|RegionOne|
|||internal:http://controller:9292|
|||RegionOne|
|||public:http://controller:9292|
|||RegionOne|
|||admin:http://controller:9292|
||||
|placement|placement|RegionOne|
|||public:http://controller:8778|
|||RegionOne|
|||internal:http://controller:8778|
|||RegionOne|
|||admin:http://controller:8778|
||||
|nova|compute|RegionOne|
|||public:http://controller:8774/v2.1|
|||RegionOne|
|||internal:http://controller:8774/v2.1|
|||RegionOne|
|||admin:http://controller:8774/v2.1|
||||
+-----------+-----------+-----------------------------------------+
[root@controller ~]# openstack image list    #列出所有镜像
+--------------------------------------+--------+--------+
|ID|Name|Status|
+--------------------------------------+--------+--------+
|e07bc4e4-1b7d-42ec-b973-df3222aae8ae|cirros|active|
+--------------------------------------+--------+--------+
#验证cells和placement API是否正常
[root@controller ~]# nova-status upgrade check
出现以下错误请忽略:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
出现如下结果,说明cells和placement API服务正常:
+--------------------------------+
|UpgradeCheckResults|
+--------------------------------+
|Check:Cellsv2|
|Result:Success|
|Details:None|
+--------------------------------+
|Check:PlacementAPI|
|Result:Success|
|Details:None|
+--------------------------------+
|Check:ResourceProviders|
|Result:Success|
|Details:None|
+--------------------------------+
|Check:IronicFlavorMigration|
|Result:Success|
|Details:None|
+--------------------------------+
|Check:APIServiceVersion|
|Result:Success|
|Details:None|
+--------------------------------+
[root@controller ~]# echo $?
0
2.11安装并配置horizon组件
horizon只需要在控制节点上配置,因此2.11小节的所有操作,只需要在controller节点上执行。
2.11.1安装horizon软件
[root@controller ~]# yum install openstack-dashboard -y
注意:安装完成后,末行会出现Complete!关键字,如果没有出现该关键字,说明没有安装成功,或者只安装了部分组件。需要重新执行安装命令。
2.11.2修改horizon配置文件
[root@controller ~]# vi /etc/openstack-dashboard/local_settings
将OPENSTACK_HOST参数的值修改为:
OPENSTACK_HOST = "192.168.2.60"
将ALLOWED_HOSTS参数的值修改为:
ALLOWED_HOSTS = ['*']
修改OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT的值:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
修改OPENSTACK_API_VERSIONS的值(或者将原有的值注释掉,增加如下几行):
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
修改OPENSTACK_KEYSTONE_DEFAULT_DOMAIN的值(即去掉注释):
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
修改OPENSTACK_KEYSTONE_DEFAULT_ROLE的值:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
修改TIME_ZONE的值:
TIME_ZONE = "Asia/Shanghai"
修改完成后,按ESC键,然后敲:wq保存退出。
2.11.3配置会话存储
继续编辑/etc/openstack-dashboard/local_settings,在其中配置如下内容:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
2.11.4重启httpd服务和memcached服务,并检查服务状态
[root@controller ~]# systemctl restart httpd memcached
[root@controller ~]# systemctl status httpd memcached
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/httpd.service.d
           └─openstack-dashboard.conf
   Active: active (running) since Wed 2018-10-10 23:17:44 EDT; 29s ago
● memcached.service - memcached daemon
   Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-10-10 23:17:20 EDT; 53s ago
2.11.5登录测试
浏览器输入如下地址:http://192.168.2.60/dashboard,即可进入登录界面
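如果浏览器无法打开该页面,可以先在controller节点上用curl确认dashboard是否有响应(示例,正常情况下会返回HTTP响应头):
[root@controller ~]# curl -I http://192.168.2.60/dashboard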
【Domain】字段输入default,【用户名】输入admin或者demo,【密码】输入之前设定的密码admin123,点击【连接】按钮登录,如下图所示。
登录成功后界面如下:
到此为止,OpenStack基础组件已经安装完成。我们在三台虚拟机上都做一个名为【基础组件完成】的快照,防止后续配置错误,便于回退。
第三部分 OpenStack网络组件安装
OpenStack的网络组件为neutron,该组件需要在控制节点、网络节点和计算节点上进行配置。3.1小节是在控制节点安装并配置,3.2小节是在网络节点安装并配置,3.3小节是在控制节点检验安装是否成功。下面先从控制节点安装开始。
3.1控制节点配置
3.1.1创建Neutron数据库
[root@controller ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'admin123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
3.1.2创建neutron用户
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:    #输入密码admin123
Repeat User Password:    #输入密码admin123
+---------------------+----------------------------------+
|Field|Value|
+---------------------+----------------------------------+
|domain_id|default|
|enabled|True|
|id|342bb41464dd4c42bbd2b1ed5485b404|
|name|neutron|
|options|{}|
|password_expires_at|None|
+---------------------+----------------------------------+
3.1.3为neutron用户分配admin角色
[root@controller ~]# openstack role add --project service --user neutron admin
3.1.4创建neutron服务,服务类型为network
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|OpenStackNetworking|
|enabled|True|
|id|b919fdf862fb4d9b9b3b2519ba786103|
|name|neutron|
|type|network|
+-------------+----------------------------------+
3.1.5为neutron服务创建三个接口
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|58ab4f52c3264c97bc7541b73ca493dd|
|interface|public|
|region|RegionOne|
|region_id|RegionOne|
|service_id|b919fdf862fb4d9b9b3b2519ba786103|
|service_name|neutron|
|service_type|network|
|url|http://controller:9696|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|e02f22c23b9e4d3cb5fa7301c7c5e6ef|
|interface|internal|
|region|RegionOne|
|region_id|RegionOne|
|service_id|b919fdf862fb4d9b9b3b2519ba786103|
|service_name|neutron|
|service_type|network|
|url|http://controller:9696|
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
|Field|Value|
+--------------+----------------------------------+
|enabled|True|
|id|48b2d09d329149b3af8f44c7a9b134ab|
|interface|admin|
|region|RegionOne|
|region_id|RegionOne|
|service_id|b919fdf862fb4d9b9b3b2519ba786103|
|service_name|neutron|
|service_type|network|
|url|http://controller:9696|
+--------------+----------------------------------+
3.1.6安装neutron软件包
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 ebtables -y
3.1.7修改neutron配置文件
[root@controller ~]# mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# vi /etc/neutron/neutron.conf
[database]
connection=mysql+pymysql://neutron:admin123@controller/neutron
[DEFAULT]
core_plugin=ml2
service_plugins=router
transport_url=rabbit://openstack:admin123@controller
auth_strategy=keystone
notify_nova_on_port_status_changes=True
notify_nova_on_port_data_changes=True
dhcp_agent_notification=True
allow_overlapping_ips=True
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=neutron
password=admin123
[nova]
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=nova
password=admin123
[oslo_concurrency]
lock_path=/var/lib/neutron/tmp
3.1.8修改配置文件权限
[root@controller ~]# chmod 640 /etc/neutron/neutron.conf
[root@controller ~]# chgrp neutron /etc/neutron/neutron.conf
3.1.9修改ML2配置文件
[root@controller ~]# mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers=flat,vlan,vxlan
tenant_network_types=vxlan,flat
mechanism_drivers=openvswitch,l2population
extension_drivers=port_security
[securitygroup]
enable_ipset=true
enable_security_group=True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ml2_type_flat]
flat_networks=external
[ml2_type_vxlan]
vni_ranges=1:1000
3.1.10修改ML2配置文件权限
[root@controller ~]# chmod 640 /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# chgrp neutron /etc/neutron/plugins/ml2/ml2_conf.ini
3.1.11添加ML2配置文件软链接
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
3.1.12修改metadata配置文件
[root@controller~]#mv/etc/neutron/metadata_agent.ini/etc/neutron/metadata_agent.ini.bak
[root@controller~]#vi/etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host=controller
metadata_proxy_shared_secret=admin123
[cache]
memcache_servers=controller:11211
3.1.13修改metadata配置文件权限
[root@controller~]#chmod640/etc/neutron/metadata_agent.ini
[root@controller~]#chgrpneutron/etc/neutron/metadata_agent.ini
3.1.14修改nova配置文件
[root@controller~]#cp-a/etc/nova/nova.conf/etc/nova/nova.conf.bak1
[root@controller~]#vi/etc/nova/nova.conf
[DEFAULT]
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
[neutron]
url=http://controller:9696
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=neutron
password=admin123
service_metadata_proxy=true
metadata_proxy_shared_secret=admin123
3.1.15同步neutron数据库
[root@controller~]#su-s/bin/sh-c"neutron-db-manage--config-file/etc/neutron/neutron.conf--config-file/etc/neutron/plugins/ml2/ml2_conf.iniupgradehead"neutron
INFO[alembic.runtime.migration]Runningupgrade97c25b0d2353->2e0d7a8a1586,AddbindingindextoRouterL3AgentBinding
INFO[alembic.runtime.migration]Runningupgrade2e0d7a8a1586->5c85685d616d,Removeavailabilityranges.
OK
出现OK,表示数据库导入成功。
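As an optional sanity check (a sketch, using the neutron database credentials configured above), you can confirm that the migration actually created the tables; a non-empty table list means the schema is in place:
[root@controller ~]# mysql -uneutron -padmin123 -e 'show tables' neutron | head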
3.1.16重启nova服务,并检查服务状态
[root@controller~]#systemctlrestartopenstack-nova-api
[root@controller~]#systemctlstatusopenstack-nova-api
●openstack-nova-api.service-OpenStackNovaAPIServer
Loaded:loaded(/usr/lib/systemd/system/openstack-nova-api.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1100:24:08EDT;7sago
3.1.17启动neutron服务,并检查服务状态
[root@controller~]#systemctlstartneutron-serverneutron-metadata-agent
[root@controller~]#systemctlenableneutron-serverneutron-metadata-agent
[root@controller~]#systemctlstatusneutron-serverneutron-metadata-agent
●neutron-server.service-OpenStackNeutronServer
Loaded:loaded(/usr/lib/systemd/system/neutron-server.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1100:25:28EDT;8sago
●neutron-metadata-agent.service-OpenStackNeutronMetadataAgent
Loaded:loaded(/usr/lib/systemd/system/neutron-metadata-agent.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1100:25:11EDT;25sago
3.2网络节点配置
3.2.1安装neutron软件
[root@network~]#yuminstall-yopenstack-neutronopenstack-neutron-ml2openstack-neutron-openvswitch
3.2.2修改neutron配置文件
[root@network~]#mv/etc/neutron/neutron.conf/etc/neutron/neutron.conf.bak
[root@network~]#vi/etc/neutron/neutron.conf
[DEFAULT]
core_plugin=ml2
service_plugins=router
auth_strategy=keystone
allow_overlapping_ips=True
transport_url=rabbit://openstack:admin123@controller
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=neutron
password=admin123
[oslo_concurrency]
lock_path=/var/lib/neutron/tmp
3.2.3修改neutron配置文件权限
[root@network~]#chmod640/etc/neutron/neutron.conf
[root@network~]#chgrpneutron/etc/neutron/neutron.conf
3.2.4修改L3代理配置文件
[root@network~]#mv/etc/neutron/l3_agent.ini/etc/neutron/l3_agent.ini.bak
[root@network~]#vi/etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge=br-eth0
3.2.5修改L3代理配置文件权限
[root@network~]#chmod640/etc/neutron/l3_agent.ini
[root@network~]#chgrpneutron/etc/neutron/l3_agent.ini
3.2.6修改DHCP代理配置文件
[root@network~]#mv/etc/neutron/dhcp_agent.ini/etc/neutron/dhcp_agent.ini.bak
[root@network~]#vi/etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata=true
3.2.7修改DHCP代理配置文件权限
[root@network~]#chmod640/etc/neutron/dhcp_agent.ini
[root@network~]#chgrpneutron/etc/neutron/dhcp_agent.ini
3.2.8修改metadata配置文件
[root@network~]#mv/etc/neutron/metadata_agent.ini/etc/neutron/metadata_agent.ini.bak
[root@network~]#vi/etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host=controller
metadata_proxy_shared_secret=admin123
[cache]
memcache_servers=controller:11211
3.2.9修改Metadata配置文件权限
[root@network~]#chmod640/etc/neutron/metadata_agent.ini
[root@network~]#chgrpneutron/etc/neutron/metadata_agent.ini
3.2.10修改ML2配置文件
[root@network~]#mv/etc/neutron/plugins/ml2/ml2_conf.ini/etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@network~]#vi/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers=flat,vlan,vxlan
tenant_network_types=vxlan,flat
mechanism_drivers=openvswitch,l2population
extension_drivers=port_security
[securitygroup]
enable_security_group=True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_ipset=True
[ml2_type_flat]
flat_networks=external
[ml2_type_vxlan]
vni_ranges=1:1000
3.2.11修改ML2配置文件权限
[root@network~]#chmod640/etc/neutron/plugins/ml2/ml2_conf.ini
[root@network~]#chgrpneutron/etc/neutron/plugins/ml2/ml2_conf.ini
3.2.12将ML2配置文件做软链接
[root@network~]#ln-s/etc/neutron/plugins/ml2/ml2_conf.ini/etc/neutron/plugin.ini
3.2.13修改OpenvSwitch配置文件
[root@network~]#mv/etc/neutron/plugins/ml2/openvswitch_agent.ini/etc/neutron/plugins/ml2/openvswitch_agent.ini.bak
[root@network~]#vi/etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
tunnel_types=vxlan
l2_population=True
prevent_arp_spoofing=True
[ovs]
local_ip=192.168.2.61
bridge_mappings=external:br-eth0
3.2.14修改OpenvSwitch配置文件权限
[root@network~]#chmod640/etc/neutron/plugins/ml2/openvswitch_agent.ini
[root@network~]#chgrpneutron/etc/neutron/plugins/ml2/openvswitch_agent.ini
3.2.15启动OpenvSwitch服务,并检查服务状态
[root@network~]#systemctlstartopenvswitch
[root@network~]#systemctlenableopenvswitch
[root@network~]#systemctlstatusopenvswitch
●openvswitch.service-OpenvSwitch
Loaded:loaded(/usr/lib/systemd/system/openvswitch.service;enabled;vendorpreset:disabled)
Active:active(exited)sinceThu2018-10-1101:27:32EDT;8sago
#OpenvSwitch服务的状态与其它服务不同,当它的状态是active(exited)时,服务就正常了
3.2.16创建网桥br-int
[root@network~]#ovs-vsctladd-brbr-int
[root@network~]#ovs-vsctlshow
7d681201-c2b6-49a9-b4fa-d42f0f041c42
Bridgebr-int
Portbr-int
Interfacebr-int
type:internal
ovs_version:"2.9.0"
3.2.17创建网桥br-eth0,并将网卡eth0绑定到该网桥
[root@network~]#ovs-vsctladd-brbr-eth0
[root@network~]#ovs-vsctlshow
7d681201-c2b6-49a9-b4fa-d42f0f041c42
Bridge"br-eth0"
Port"br-eth0"
Interface"br-eth0"
type:internal
Bridgebr-int
Portbr-int
Interfacebr-int
type:internal
ovs_version:"2.9.0"
[root@network~]#ovs-vsctladd-portbr-eth0eth0
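Note that once eth0 has been attached to br-eth0, external traffic flows through the bridge rather than eth0 itself, so any IP address configured directly on eth0 will stop working. A quick way to confirm the port was attached (an optional check):
[root@network ~]# ovs-vsctl list-ports br-eth0
eth0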
3.2.18启动Neutron服务,并检查服务状态
[root@network~]#systemctlstartneutron-dhcp-agent
[root@network~]#systemctlstartneutron-l3-agent
[root@network~]#systemctlstartneutron-metadata-agent
[root@network~]#systemctlstartneutron-openvswitch-agent
[root@network~]#systemctlenableneutron-dhcp-agent
[root@network~]#systemctlenableneutron-l3-agent
[root@network~]#systemctlenableneutron-metadata-agent
[root@network~]#systemctlenableneutron-openvswitch-agent
[root@network~]#systemctlstatusneutron-dhcp-agentneutron-l3-agentneutron-metadata-agentneutron-openvswitch-agent
●neutron-dhcp-agent.service-OpenStackNeutronDHCPAgent
Loaded:loaded(/usr/lib/systemd/system/neutron-dhcp-agent.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1101:39:43EDT;1min35sago
●neutron-l3-agent.service-OpenStackNeutronLayer3Agent
Loaded:loaded(/usr/lib/systemd/system/neutron-l3-agent.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1101:39:43EDT;1min35sago
●neutron-metadata-agent.service-OpenStackNeutronMetadataAgent
Loaded:loaded(/usr/lib/systemd/system/neutron-metadata-agent.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1101:39:43EDT;1min35sago
●neutron-openvswitch-agent.service-OpenStackNeutronOpenvSwitchAgent
Loaded:loaded(/usr/lib/systemd/system/neutron-openvswitch-agent.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1101:39:44EDT;1min35sago
3.3计算节点配置
3.3.1安装neutron软件包
[root@compute1~]#yuminstall-yopenstack-neutronopenstack-neutron-ml2openstack-neutron-openvswitchebtablesipset
3.3.2修改neutron配置文件
[root@compute1~]#mv/etc/neutron/neutron.conf/etc/neutron/neutron.conf.bak
[root@compute1~]#vi/etc/neutron/neutron.conf
[DEFAULT]
core_plugin=ml2
service_plugins=router
auth_strategy=keystone
allow_overlapping_ips=True
transport_url=rabbit://openstack:admin123@controller
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=neutron
password=admin123
[oslo_concurrency]
lock_path=/var/lib/neutron/tmp
3.3.3修改Neutron配置文件权限
[root@compute1~]#chmod640/etc/neutron/neutron.conf
[root@compute1~]#chgrpneutron/etc/neutron/neutron.conf
3.3.4修改ML2配置文件
[root@compute1~]#mv/etc/neutron/plugins/ml2/ml2_conf.ini/etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@compute1~]#vi/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers=flat,vlan,vxlan
tenant_network_types=vxlan,flat
mechanism_drivers=openvswitch,l2population
extension_drivers=port_security
[securitygroup]
enable_security_group=true
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_ipset=True
[ml2_type_flat]
flat_networks=external
[ml2_type_vxlan]
vni_ranges=1:1000
3.3.5修改ML2配置文件权限
[root@compute1~]#chmod640/etc/neutron/plugins/ml2/ml2_conf.ini
[root@compute1~]#chgrpneutron/etc/neutron/plugins/ml2/ml2_conf.ini
3.3.6将ML2配置文件做软链接
[root@compute1~]#ln-s/etc/neutron/plugins/ml2/ml2_conf.ini/etc/neutron/plugin.ini
3.3.7修改nova配置文件
[root@computer1~]#cp/etc/nova/nova.conf/etc/nova/nova.conf.bak1
[root@computer1~]#vi/etc/nova/nova.conf
[DEFAULT]
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
vif_plugging_is_fatal=True
vif_plugging_timeout=300
[neutron]
url=http://controller:9696
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=neutron
password=admin123
service_metadata_proxy=True
metadata_proxy_shared_secret=admin123
3.3.8修改OpenvSwitch配置文件
[root@compute1~]#mv/etc/neutron/plugins/ml2/openvswitch_agent.ini/etc/neutron/plugins/ml2/openvswitch_agent.ini.bak
[root@compute1~]#vi/etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
tunnel_types=vxlan
l2_population=True
prevent_arp_spoofing=True
[ovs]
local_ip=192.168.2.62    # management IP of this compute node; on compute2 set this to 192.168.2.63
3.3.9修改OpenvSwitch配置文件权限
[root@compute1~]#chmod640/etc/neutron/plugins/ml2/openvswitch_agent.ini
[root@compute1~]#chgrpneutron/etc/neutron/plugins/ml2/openvswitch_agent.ini
3.3.10启动OpenvSwitch,并检查服务状态
[root@compute1~]#systemctlstartopenvswitch
[root@compute1~]#systemctlenableopenvswitch
[root@compute1~]#systemctlstatusopenvswitch
●openvswitch.service-OpenvSwitch
Loaded:loaded(/usr/lib/systemd/system/openvswitch.service;enabled;vendorpreset:disabled)
Active:active(exited)sinceThu2018-10-1102:02:50EDT;7sago
3.3.11创建网桥br-int
[root@compute1~]#ovs-vsctladd-brbr-int
[root@compute1~]#ovs-vsctlshow
92d333af-a6c2-4139-92d7-f54c019622b0
Bridgebr-int
Portbr-int
Interfacebr-int
type:internal
ovs_version:"2.9.0"
3.3.12重启nova服务,并检查服务状态
[root@compute1~]#systemctlrestartopenstack-nova-compute
[root@compute1~]#systemctlstatusopenstack-nova-compute
●openstack-nova-compute.service-OpenStackNovaComputeServer
Loaded:loaded(/usr/lib/systemd/system/openstack-nova-compute.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1102:06:25EDT;30sago
3.3.13启动OpenvSwitch服务,并检查服务状态
[root@compute1~]#systemctlstartneutron-openvswitch-agent
[root@compute1~]#systemctlenableneutron-openvswitch-agent
[root@compute1~]#systemctlstatusneutron-openvswitch-agent
●neutron-openvswitch-agent.service-OpenStackNeutronOpenvSwitchAgent
Loaded:loaded(/usr/lib/systemd/system/neutron-openvswitch-agent.service;enabled;vendorpreset:disabled)
Active:active(running)sinceThu2018-10-1102:08:32EDT;12sago
Note: the configuration of compute2 is almost identical to that of compute1. The only difference is in the OpenvSwitch configuration file /etc/neutron/plugins/ml2/openvswitch_agent.ini, where local_ip under the [ovs] section must be set to compute2's management IP, 192.168.2.63.
In a lab environment with only three nodes, the virtual machines cannot reach the external network.
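Before moving on to Part 4, it is worth verifying on the controller that all Neutron agents have registered (a suggested check, not part of the original steps). The list should show the metadata, L3, DHCP and Open vSwitch agents of the network node plus an Open vSwitch agent for each compute node, all with Alive = :-) and State = UP:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack network agent list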
第四部分OpenStack网络验证
第四部分的配置都是在horizon组件提供的web管理界面中完成的。在浏览器中输入http://192.168.2.60/dashboard即可登录web管理界面。
4.1创建网络和路由器
4.1.1创建外部网络external
使用admin账号登录OpenStack的web管理界面,依次点击【管理员】\【网络】\【网络】\【创建网络】,如下图所示:
然后按照下图中的信息输入,输入完成后点击【下一步】
4.1.2为外部网络创建子网
在随后弹出的对话框中,参照如下截图进行配置,配置完成后点击【下一步】,为外网external创建一个名为external_subnet的子网:
4.1.3为外部网络配置DHCP
在随后弹出的对话框中,参照如下截图进行配置,配置完成后点击【已创建】,为external创建外网地址池,地址池起始地址和结束地址之间,用逗号隔开:
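For reference, an allocation-pool entry in this dialog is written as start,end on a single line, for example (placeholder values; use addresses from your own external subnet):
192.168.0.230,192.168.0.250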
外网external创建完成之后,会显示如下信息:
4.1.4创建内部网络internal
仍然使用admin账号登录OpenStack的web管理界面,依次点击【管理员】\【网络】\【网络】\【创建网络】,然后按照下图中的信息输入,输入完成后点击【下一步】,如下图所示:
4.1.5为内部网络创建子网
在随后弹出的对话框中,参照如下截图进行配置,配置完成后点击【下一步】,为内网internal创建一个名为internal_subnet的子网:
4.1.6为内部网络配置DHCP
在随后弹出的对话框中,参照如下截图进行配置,配置完成后点击【已创建】,为internal创建内网地址池,并指定DNS服务器。地址池起始地址和结束地址之间,用逗号隔开:
After the internal network internal has been created, the following information is displayed:
4.1.7创建路由器,并绑定外部网络external
仍然使用admin账号登录OpenStack的web管理界面,依次点击【项目】\【网络】\【路由】\【新建路由】,如下图所示:
按照下图中的信息输入,输入完成后点击【新建路由】,如下图所示:
路由器router创建完成之后,会显示如下信息:
4.1.8将内部网络绑定到路由器
使用admin账号登录OpenStack的web管理界面,依次点击【项目】\【网络】\【路由】\【router】,如下图所示:
在随后的界面中点击【接口】标签,然后点击【增加接口】按钮,如下图所示:
按照下图中的信息输入,输入完成后点击【提交】,如下图所示:
配置完成之后,会显示如下信息:
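The same networks and router can also be created from the command line. The following is only a sketch, not the exact dialog steps above; the subnet ranges, allocation pool, gateway and DNS server are placeholders that must match your environment, and the physical network name external corresponds to flat_networks=external in ml2_conf.ini:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack network create --external --provider-network-type flat --provider-physical-network external external
[root@controller ~]# openstack subnet create --network external --subnet-range 192.168.0.0/24 --allocation-pool start=192.168.0.230,end=192.168.0.250 --gateway 192.168.0.1 external_subnet
[root@controller ~]# openstack network create internal
[root@controller ~]# openstack subnet create --network internal --subnet-range 10.0.0.0/24 --dns-nameserver 114.114.114.114 internal_subnet
[root@controller ~]# openstack router create router
[root@controller ~]# openstack router set --external-gateway external router
[root@controller ~]# openstack router add subnet router internal_subnet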
4.2创建虚拟机
4.2.1创建实例类型
使用admin账号登录OpenStack的web管理界面,依次点击【管理员】\【计算】\【实例类型】\【创建实例类型】,如下图所示:
按照下图中的信息输入,输入完成后点击【创建实例类型】,如下图所示:
配置完成之后,会显示如下信息:
4.2.2创建虚拟机
使用admin账号登录OpenStack的web管理界面,依次点击【项目】\【计算】\【实例】\【创建实例】,如下图所示:
On the【详情】(Details) tab of the next screen, enter the instance name in the【实例名称】field, as shown below:
在【源】标签中,将cirros一行上移到【已分配】下面,然后点击【下一项】,如下图所示:
在【实例类型】标签中,将test一行上移到【已分配】下面,然后点击【下一项】,如下图所示:
在【网络】标签中,将internal一行上移到【已分配】下面,然后点击【创建实例】,如下图所示:
Create a second instance named cirros-vm2 in the same way. After both instances have been created, the list looks as follows:
Click the【cirros-vm1】link in the figure above to open the instance detail page, switch to the【控制台】(Console) tab, and log in with user name cirros and password cubswin:), as shown below:
After logging in, switch to the root account with the sudo su - root command:
OpenStack的控制台有时候会卡住,这个是OpenStack的一个bug。如果出现这种情况,我们可以用如下方法登录控制台:
[root@controller~]#.admin-openrc
[root@controller~]#openstackserverlist
+--------------------------------------+------------+--------+---------------------+--------+--------+
|ID|Name|Status|Networks|Image|Flavor|
+--------------------------------------+------------+--------+---------------------+--------+------+
|8b08cc73-5001-4305-a968-80a168dc8f48|cirros-vm2|ACTIVE|internal=10.0.0.104|cirros|test|
|63a3e361-f059-4426-8787-3c33a8d8102b|cirros-vm1|ACTIVE|internal=10.0.0.121|cirros|test|
+--------------------------------------+------------+--------+---------------------+--------+------+
[root@controller~]#novaget-vnc-console63a3e361-f059-4426-8787-3c33a8d8102bnovnc
+-------+-----------------------------------------------------------------------------------+
|Type|Url|
+-------+-----------------------------------------------------------------------------------+
|novnc|http://192.168.2.60:6080/vnc_auto.html?token=e1a8109a-7e7d-4cd3-83e2-b6beebeafc86|
+-------+-----------------------------------------------------------------------------------+
将绿色的URL复制粘贴到浏览器中,就可以打开cirros-vm1的控制台了。
4.2.3测试网络连通性
使用命令ipa查看所获得的内网地址:
内网地址10.0.0.121/24在internal_subnet指定的范围内,由DHCP分配获得。
使用ping命令测试外网连通性:
使用ping命令测试到内网虚拟机cirros-vm2的连通性:
Note: because of the limitations of this lab environment, the operations in sections 4.2.4, 4.2.5 and 4.2.6 cannot be demonstrated here. This is caused by the small memory of our Win7 hosts, which forces the experiment to be split across two Win7 hosts. With a single 8 GB Win7 host running all four nodes, these steps work exactly as described.
The remaining steps are written out below for students who have a suitable environment.
4.2.4设置安全组规则
使用admin账号登录OpenStack的web管理界面,依次点击【项目】\【网络】\【安全组】\【管理规则】,如下图所示:
在新弹出的界面中点击【添加规则】,在弹出的对话框中,【规则】字段选择【定制TCP规则】,【方向】字段选择【入口】,在打开端口栏位下,【端口】字段输入【22】,允许从外网访问虚拟机的SSH端口,最后点击【添加】按钮。
同样方式添加允许ICMP协议访问虚拟机的条目,配置方式如下:
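The equivalent rules can also be added from the CLI (a sketch; it assumes the instances use the project's default security group):
[root@controller ~]# openstack security group rule create --ingress --protocol tcp --dst-port 22 default
[root@controller ~]# openstack security group rule create --ingress --protocol icmp default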
4.2.5虚拟机绑定浮动IP
使用admin账号登录OpenStack的web管理界面,依次点击【项目】\【计算】\【实例】,然后点击虚拟机cirros-vm1对应的【创建快照】旁边的三角箭头,然后在下拉菜单中选择【绑定浮动IP】选项,如下图所示:
在弹出的对话框中,点击【+】符号,如下图所示:
在弹出的分配浮动IP的对话框中,点击【分配IP】按钮。
返回到上一级界面后,会发现IP地址192.168.0.243已经被分配给虚拟机cirros-vm1了,点击【关联】按钮,如下图所示:
绑定成功后,就可以看到IP地址192.168.0.243已经成功的被分配给了虚拟机cirros-vm1,如下图所示:
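The same association can be done from the CLI (a sketch, using the floating IP shown above):
[root@controller ~]# openstack floating ip create external
[root@controller ~]# openstack server add floating ip cirros-vm1 192.168.0.243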
4.2.6从外网访问虚拟机
从外部网络可以ping通虚拟机cirros-vm1的外网IP,如下图所示:
使用SecureCRT可以从22端口连接到虚拟机cirros-vm1,如下图所示:
登录之后,通过ipa命令可以看到IP地址是10.0.0.121,是cirros-vm1的内网IP。
第五部分Cinder部署
本次实验将控制节点作为存储节点,进行Cinder的安装。
5.1控制节点配置
5.1.1创建cinder数据库
[root@controller ~]# mysql -uroot -padmin123
WelcometotheMariaDBmonitor.Commandsendwith;or\g.
YourMariaDBconnectionidis37
Serverversion:10.1.20-MariaDBMariaDBServer
Copyright(c)2000,2016,Oracle,MariaDBCorporationAbandothers.
Type'help;'or'\h'forhelp.Type'\c'toclearthecurrentinputstatement.
MariaDB[(none)]>createdatabasecinder;
QueryOK,1rowaffected(0.00sec)
MariaDB[(none)]>grantallprivilegesoncinder.*tocinder@'localhost'identifiedby'admin123';
QueryOK,0rowsaffected(0.00sec)
MariaDB[(none)]>grantallprivilegesoncinder.*tocinder@'%'identifiedby'admin123';
QueryOK,0rowsaffected(0.00sec)
MariaDB[(none)]>flushprivileges;
QueryOK,0rowsaffected(0.00sec)
5.1.2创建cinder用户
[root@controller~]#.admin-openrc
[root@controller~]#openstackusercreate--domaindefault--projectservice--passwordadmin123cinder
+---------------------+----------------------------------+
|Field|Value|
+---------------------+----------------------------------+
|default_project_id|76775785168945d4b7fc02b248dc0594|
|domain_id|default|
|enabled|True|
|id|96b1ecd28eb947c1896fc70b0673d5fd|
|name|cinder|
|options|{}|
|password_expires_at|None|
+---------------------+----------------------------------+
5.1.3为cinder用户分配admin角色
[root@controller~]#openstackroleadd--projectservice--usercinderadmin
5.1.4创建cinder服务
[root@controller~]#openstackservicecreate--namecinderv2--description"OpenStackBlockStorage"volumev2
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|OpenStackBlockStorage|
|enabled|True|
|id|7e50e381bb5b4e9d8c9270125a2162aa|
|name|cinderv2|
|type|volumev2|
+-------------+----------------------------------+
[root@controller~]#openstackservicecreate--namecinderv3--description"OpenStackBlockStorage"volumev3
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|OpenStackBlockStorage|
|enabled|True|
|id|cdb578019e984f6dab4c79bf54bf5d98|
|name|cinderv3|
|type|volumev3|
+-------------+----------------------------------+
5.1.5为cinder服务创建三个接口
[root@controller~]#openstackendpointcreate--regionRegionOnevolumev2publichttp://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
|Field|Value|
+--------------+-----------------------------------------+
|enabled|True|
|id|e584048d47e64749ae4ad1f564f95308|
|interface|public|
|region|RegionOne|
|region_id|RegionOne|
|service_id|7e50e381bb5b4e9d8c9270125a2162aa|
|service_name|cinderv2|
|service_type|volumev2|
|url|http://controller:8776/v2/%(tenant_id)s|
+--------------+-----------------------------------------+
[root@controller~]#openstackendpointcreate--regionRegionOnevolumev2internalhttp://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
|Field|Value|
+--------------+-----------------------------------------+
|enabled|True|
|id|58b694cbfcdc480c9e76b507a7ecaeff|
|interface|internal|
|region|RegionOne|
|region_id|RegionOne|
|service_id|7e50e381bb5b4e9d8c9270125a2162aa|
|service_name|cinderv2|
|service_type|volumev2|
|url|http://controller:8776/v2/%(tenant_id)s|
+--------------+-----------------------------------------+
[root@controller~]#openstackendpointcreate--regionRegionOnevolumev2adminhttp://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
|Field|Value|
+--------------+-----------------------------------------+
|enabled|True|
|id|d8ea543f5a524846afe020fa93f348aa|
|interface|admin|
|region|RegionOne|
|region_id|RegionOne|
|service_id|7e50e381bb5b4e9d8c9270125a2162aa|
|service_name|cinderv2|
|service_type|volumev2|
|url|http://controller:8776/v2/%(tenant_id)s|
+--------------+-----------------------------------------+
[root@controller~]#openstackendpointcreate--regionRegionOnevolumev3publichttp://controller:8776/v3/%\(tenant_id\)s
+--------------+-----------------------------------------+
|Field|Value|
+--------------+-----------------------------------------+
|enabled|True|
|id|dc845b808c1c486480c5e922ae84c129|
|interface|public|
|region|RegionOne|
|region_id|RegionOne|
|service_id|cdb578019e984f6dab4c79bf54bf5d98|
|service_name|cinderv3|
|service_type|volumev3|
|url|http://controller:8776/v3/%(tenant_id)s|
+--------------+-----------------------------------------+
[root@controller~]#openstackendpointcreate--regionRegionOnevolumev3internalhttp://controller:8776/v3/%\(tenant_id\)s
+--------------+-----------------------------------------+
|Field|Value|
+--------------+-----------------------------------------+
|enabled|True|
|id|e4df2cb3cca74e72b07e5e4aec246684|
|interface|internal|
|region|RegionOne|
|region_id|RegionOne|
|service_id|cdb578019e984f6dab4c79bf54bf5d98|
|service_name|cinderv3|
|service_type|volumev3|
|url|http://controller:8776/v3/%(tenant_id)s|
+--------------+-----------------------------------------+
[root@controller~]#openstackendpointcreate--regionRegionOnevolumev3adminhttp://controller:8776/v3/%\(tenant_id\)s
+--------------+-----------------------------------------+
|Field|Value|
+--------------+-----------------------------------------+
|enabled|True|
|id|cba7077e64d44cfd82f265d43f2a4077|
|interface|admin|
|region|RegionOne|
|region_id|RegionOne|
|service_id|cdb578019e984f6dab4c79bf54bf5d98|
|service_name|cinderv3|
|service_type|volumev3|
|url|http://controller:8776/v3/%(tenant_id)s|
+--------------+-----------------------------------------+
5.1.6安装cinder软件包
[root@controller~]#yuminstall-yopenstack-cinderpython-keystonetargetcli
5.1.7系统是否支持LVM
[root@controller~]#rpm-qa|greplvm2
lvm2-2.02.177-4.el7.x86_64
lvm2-libs-2.02.177-4.el7.x86_64
实验环境使用的CentOS7版本操作系统默认自带LVM软件包,因此不需再进行安装。
5.1.8添加硬盘
在VMware的【VMwareWorkstation】中,右击界面左侧栏中的虚拟机controller,在菜单中选择【设置】项。如下图所示:
弹出【虚拟机设置】窗口,点击窗口下方的【添加】按钮,如图所示:
弹出【添加硬件向导】窗口,选中左侧栏中的【硬盘】选项,然后点击【下一步】按钮,如下图所示:
【添加硬件向导】窗口切换到【选择硬盘类型】内容,【虚拟硬盘类型】选择默认的SCSI(S)(推荐)项,点击【下一步】按钮,如下图所示:
【添加硬件向导】窗口切换到【选择磁盘】内容,【磁盘】选择第一个选项,即创建新虚拟硬盘(V)选项,点击【下一步】按钮,如下图所示:
【添加硬件向导】窗口切换到【指定磁盘容量】内容,【最大磁盘大小】保持默认的20GB大小即可(若有需要也可根据实际情况进行修改),然后选中下面的将虚拟磁盘存储为单个文件选项,点击【下一步】按钮,如下图所示:
以上操作完成后,即为虚拟机controller创建了一个新的硬盘,在【虚拟机设置】窗口的左侧栏中可查看到,点击【确定】按钮,完成添加硬盘的所有操作,如下图所示:
以上已经创建好另一个硬盘,但在虚拟机controller中命令查看时,未查看此硬盘,因此,重启虚拟机controller使新硬盘生效。
重启后,查看虚拟机controller的硬盘信息,sdb即新硬盘。
[root@controller~]#ls/dev/sd*
/dev/sda/dev/sda1/dev/sda2/dev/sdb
5.1.9将新硬盘进行PV、VG设置
[root@controller~]#pvcreate/dev/sdb
Physicalvolume"/dev/sdb"successfullycreated.
[root@controller~]#pvdisplay
---Physicalvolume---
PVName/dev/sda2
VGNamecentos
PVSize<49.00GiB/notusable3.00MiB
Allocatableyes
PESize4.00MiB
TotalPE12543
FreePE1
AllocatedPE12542
PVUUIDxn0rZD-Q203-LEE3-xbZv-fj78-fLY1-AAq83q
"/dev/sdb"isanewphysicalvolumeof"20.00GiB"
---NEWPhysicalvolume---
PVName/dev/sdb
VGName
PVSize20.00GiB
AllocatableNO
PESize0
TotalPE0
FreePE0
AllocatedPE0
PVUUID1rmWQz-12BO-yKjZ-Nsxl-8LAw-ykmr-dFceze
[root@controller~]#vgcreatecinder-volumes/dev/sdb
Volumegroup"cinder-volumes"successfullycreated
[root@controller~]#vgdisplaycinder-volumes
---Volumegroup---
VGNamecinder-volumes
SystemID
Formatlvm2
MetadataAreas1
MetadataSequenceNo1
VGAccessread/write
VGStatusresizable
MAXLV0
CurLV0
OpenLV0
MaxPV0
CurPV1
ActPV1
VGSize<20.00GiB
PESize4.00MiB
TotalPE5119
AllocPE/Size0/0
FreePE/Size5119/<20.00GiB
VGUUIDrKegvd-2bH9-2Dyd-KPrY-NAvn-yRrR-yRWojv
[root@controller~]#pvdisplay/dev/sdb
---Physicalvolume---
PVName/dev/sdb
VGNamecinder-volumes
PVSize20.00GiB/notusable4.00MiB
Allocatableyes
PESize4.00MiB
TotalPE5119
FreePE5119
AllocatedPE0
PVUUID1rmWQz-12BO-yKjZ-Nsxl-8LAw-ykmr-dFceze
[root@controller~]#vgs
VG#PV#LV#SNAttrVSizeVFree
centos120wz--n-<49.00g4.00m
cinder-volumes100wz--n-<20.00g<20.00g
5.1.10配置计算服务使用块存储
[root@controller~]#vi/etc/nova/nova.conf
[cinder]
os_region_name=RegionOne
5.1.11重启openstack-nova-api服务
[root@controller ~]# systemctl restart openstack-nova-api
5.1.12修改cinder配置文件
[root@controller~]#mv/etc/cinder/cinder.conf/etc/cinder/cinder.conf.bak
[root@controller~]#vi/etc/cinder/cinder.conf
[DEFAULT]
my_ip=172.16.0.210
log_dir=/var/log/cinder
auth_strategy=keystone
transport_url=rabbit://openstack:admin123@controller
glance_api_servers=http://controller:9292
enabled_backends=lvm    # if this option is not set, openstack-cinder-volume fails to start
[lvm]
iscsi_helper=lioadm
volume_group=cinder-volumes
iscsi_ip_address=172.16.0.214
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi
volume_backend_name=lvmm
[database]
connection=mysql+pymysql://cinder:admin123@controller/cinder
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
username=cinder
password=admin123
[oslo_concurrency]
lock_path=/var/lib/cinder/tmp
5.1.13修改cinder配置文件权限
[root@controller~]#chmod640/etc/cinder/cinder.conf
[root@controller~]#chgrpcinder/etc/cinder/cinder.conf
5.1.14同步cinder数据库
[root@controller~]#su-s/bin/bashcinder-c"cinder-managedbsync"
Option"logdir"fromgroup"DEFAULT"isdeprecated.Useoption"log-dir"fromgroup"DEFAULT".
[root@controller~]#echo$?
0
5.1.15启动cinder服务,并检查服务状态
[root@controller~]#systemctlstartopenstack-cinder-apiopenstack-cinder-scheduleropenstack-cinder-volumetarget
[root@controller~]#systemctlenableopenstack-cinder-apiopenstack-cinder-scheduleropenstack-cinder-volumetarget
[root@controller~]#systemctlstatusopenstack-cinder-apiopenstack-cinder-scheduleropenstack-cinder-volumetarget
●openstack-cinder-api.service-OpenStackCinderAPIServer
Loaded:loaded(/usr/lib/systemd/system/openstack-cinder-api.service;enabled;vendorpreset:disabled)
Active:active(running)sinceWed2018-10-1709:10:10CST;19sago
●openstack-cinder-scheduler.service-OpenStackCinderSchedulerServer
Loaded:loaded(/usr/lib/systemd/system/openstack-cinder-scheduler.service;enabled;vendorpreset:disabled)
Active:active(running)sinceWed2018-10-1709:10:10CST;18sago
●openstack-cinder-volume.service-OpenStackCinderVolumeServer
Loaded:loaded(/usr/lib/systemd/system/openstack-cinder-volume.service;enabled;vendorpreset:disabled)
Active:active(running)sinceWed2018-11-2816:28:09CST;14msago
●target.service-RestoreLIOkerneltargetconfiguration
Loaded:loaded(/usr/lib/systemd/system/target.service;enabled;vendorpreset:disabled)
Active: active (exited) since Wed 2018-11-28 16:25:41 CST; 27min ago
[root@controller ~]# openstack volume service list
Note: to remove cinder-volume services that stay in the down state, run (replace <host> with the Host column shown by openstack volume service list):
cinder-manage service remove cinder-volume <host>
5.1.16验证cinder状态
[root@controller~]#.admin-openrc
[root@controller~]#openstackvolumeservicelist
+------------------+-----------------+------+---------+-------+----------------------------+
|Binary|Host|Zone|Status|State|UpdatedAt|
+------------------+-----------------+------+---------+-------+----------------------------+
|cinder-scheduler|controller|nova|enabled|up|2018-11-29T06:28:54.000000|
|cinder-volume|controller@lvm0|nova|enabled|up|2018-11-29T06:28:50.000000|
+------------------+-----------------+------+---------+-------+----------------------------+
5.1.17LVM存储后端验证
使用admin用户登录Openstack图形化界面,访问地址:http://控制节点IP/dashboard
登录成功,点击界面左侧栏的【管理员】\【卷】\【卷类型】选项,在右侧可查看到卷类型列表,目前无任何卷类型,下面点击右侧栏里的【创建卷类型】按钮,开始创建LVM卷类型,如下图所示:
弹出【创建卷类型】窗口,在该窗口中的【名称】栏里输入卷类型名称lvm,然后点击右下角的【创建卷类型】按钮,如下图所示:
创建完成,在【管理员】\【卷】\【卷类型】的右侧栏【卷类型】列表中可看到创建好的卷类型lvm,点击lvm行右侧的【】按钮,选择【查看扩展规格】选项,如下图所示:
弹出【卷类型扩展规格】窗口,点击【已创建】按钮,如下图所示:
弹出【创建卷类型扩展规格】窗口,在【键】栏中输入volume_backend_name,在【值】栏中输入lvmm,用于为卷类型lvm指定存储后端(注意:volume_backend_name是cinder.conf文件中的lvm的参数名称,lvmm是volume_backend_name的参数值,此处要与cinder.conf文件的内容一致,否则将导致后面创建卷失败。),点击【已创建】按钮,如下图所示:
点击图形化界面左侧栏的【项目】\【卷】\【卷】项,在右侧栏可查看卷列表,点击右侧栏的【创建卷】按钮,如下图所示:
弹出【创建卷】窗口,在【名称】栏输入卷名称lvm-volume1(可自定义),【卷来源】保持默认选项即可,【类型】栏选择lvm,【大小】栏默认即可,点击右下角的【创建卷】按钮,如下图所示:
创建完成,在卷列表中可查到创建的卷,状态可用即正常,如下图所示:
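For reference, the same volume type and volume can be created from the CLI (a sketch; the type name lvm, the backend name lvmm and the volume name lvm-volume1 mirror the values used above):
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack volume type create lvm
[root@controller ~]# openstack volume type set --property volume_backend_name=lvmm lvm
[root@controller ~]# openstack volume create --type lvm --size 1 lvm-volume1
[root@controller ~]# openstack volume list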
5.1.18创建虚拟机挂载LVM卷
点击【项目】\【计算】\【实例】创建一个虚拟机vm,点击虚拟机vm行的【】按钮下的【连接卷】选项,如下图所示:
弹出【连接卷】窗口,选择刚刚创建的LVM卷lvm_v1,点击【连接卷】按钮,如下图所示:
连接成功,可以点击【项目】\【计算】\【实例】页面的虚拟机vm,在其页面【概况】栏的末尾处可查看虚拟机与卷已连接,如下图所示:
也可以点击左侧栏的【项目】\【卷】\【卷】项,在右侧卷列表的【连接到】列中,查看到卷lvm_v1已经与虚拟机vm连接,如下图所示:
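Attaching the volume can also be done from the CLI (a sketch; vm and lvm_v1 are the instance and volume names used above). After the attach, the volume status in the list changes to in-use:
[root@controller ~]# openstack server add volume vm lvm_v1
[root@controller ~]# openstack volume list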
第六部分配置NFS共享存储
6.1NFS服务器配置(网络节点)
本实验使用网络节点作为NFS服务器
6.1.1安装NFS软件包
[root@nfs~]#yuminstall-ynfs-utils
6.1.2创建NFS用户
[root@controller ~]# useradd cinder -u 165
useradd: user 'cinder' already exists
[root@controller ~]# id cinder
uid=165(cinder) gid=1001(cinder) groups=1001(cinder)
[root@controller ~]# useradd nova -u 162
useradd: user 'nova' already exists
[root@network opt]# id nova
uid=162(nova) gid=162(nova) groups=162(nova)
6.1.3创建NFS目录并分配权限
[root@controller ~]# mkdir /opt/cinder-nfs
[root@controller ~]# chown 165:165 /opt/cinder-nfs -R
[root@network ~]# ll /opt/cinder-nfs/ -d
drwxr-xr-x 2 cinder 165 6 Dec 1 15:37 /opt/cinder-nfs/
[root@controller ~]# mkdir /var/lib/nova/instances
[root@controller ~]# chown 162:162 /var/lib/nova/instances -R
[root@network ~]# ll /var/lib/nova/instances -d
drwxr-xr-x 2 nova 162 6 Dec 1 17:22 /var/lib/nova/instances
6.1.4修改NFS配置文件
[root@nfs ~]# vi /etc/exports
/opt/cinder-nfs 192.168.2.0/24(rw,no_root_squash)
/var/lib/nova/instances 192.168.2.0/24(rw,no_root_squash)
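If the nfs-server service is already running when /etc/exports is edited, the exports can be re-read without restarting the service (a convenience note):
[root@nfs ~]# exportfs -ra
[root@nfs ~]# exportfs -v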
6.1.5启动NFS服务,并检查服务状态
[root@nfs~]#systemctlstartrpcbindnfs-server
[root@nfs~]#systemctlenablerpcbindnfs-server
[root@nfs~]#systemctlstatusrpcbindnfs-server
●rpcbind.service-RPCbindservice
Loaded:loaded(/usr/lib/systemd/system/rpcbind.service;enabled;vendorpreset:enabled)
Active:active(running)sinceWed2018-10-1713:51:53CST;33sago
●nfs-server.service-NFSserverandservices
Loaded:loaded(/usr/lib/systemd/system/nfs-server.service;enabled;vendorpreset:disabled)
Drop-In:/run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active:active(exited)sinceWed2018-10-1713:51:53CST;32sago
6.2NFS客户端配置(控制节点)
在控制节点中按照如下步骤配置NFS客户端。
6.2.1安装NFS软件包
[root@cinder~]#yuminstall-ynfs-utils
6.2.2查看NFS服务器共享目录
[root@controller~]#showmount-e192.168.2.61
Exportlistfor192.168.2.61:
/opt/cinder-nfs192.168.2.0/24
6.2.3创建NFS挂载点目录
[root@cinder~]#mkdir/var/lib/cinder/mnt_nfs
[root@cinder~]#chowncinder:cinder/var/lib/cinder/mnt_nfs/
6.2.4创建NFS配置文件并设置权限
[root@cinder~]#vi/etc/cinder/nfs_shares
192.168.2.61:/opt/cinder-nfs    # 192.168.2.61 is the IP of the NFS server, i.e. the network node
[root@cinder~]#chmod640/etc/cinder/nfs_shares
[root@cinder~]#chgrpcinder/etc/cinder/nfs_shares
6.2.5配置cinder配置文件
[root@controller~]#vi/etc/cinder/cinder.conf
[DEFAULT]
enabled_backends=lvm,nfs
[nfs]
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares
nfs_mount_point_base=/var/lib/cinder/mnt_nfs
volume_backend_name=nfss
6.2.6重启cinder服务并检查服务状态
[root@cinder~]#systemctlrestartopenstack-cinder-volume
[root@cinder~]#systemctlstatusopenstack-cinder-volume
●openstack-cinder-volume.service-OpenStackCinderVolumeServer
Loaded:loaded(/usr/lib/systemd/system/openstack-cinder-volume.service;enabled;vendorpreset:disabled)
Active:active(running)sinceWed2018-10-1714:10:04CST;9msago
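As an optional check, the volume service list should now contain a second cinder-volume entry for the NFS backend (a host of the form controller@nfs) alongside the existing LVM backend entry:
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack volume service list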
6.2.7挂载NFS目录并设置开机自动挂载
[root@controllermnt_nfs]#df-h
FilesystemSizeUsedAvailUse%Mountedon
/dev/mapper/centos-root47G2.6G45G6%/
devtmpfs1.4G01.4G0%/dev
tmpfs1.4G9.5M1.4G1%/run
tmpfs1.4G01.4G0%/sys/fs/cgroup
192.168.2.61:/opt/cinder-nfs47G1.4G46G3%/var/lib/cinder/mnt_nfs/1682d1554e32cf01748de9d4efde9b57
[root@compute1home]#df-h
FilesystemSizeUsedAvailUse%Mountedon
/dev/mapper/centos-root47G1.8G46G4%/
devtmpfs731M0731M0%/dev
tmpfs743M0743M0%/dev/shm
tmpfs743M9.6M733M2%/run
tmpfs743M0743M0%/sys/fs/cgroup
/dev/sda11014M161M854M16%/boot
tmpfs149M0149M0%/run/user/0
controller:/opt/cinder-nfs 47G 2.6G 45G 6% /var/lib/nova/instances
[root@compute1 home]# vi /etc/fstab
# append the following line at the end of the file:
controller:/opt/cinder-nfs /home nfs defaults 0 0
6.3NFS客户端配置(计算节点)
两个计算节点中按照如下步骤配置NFS客户端。
6.3.1修改Nova配置文件
[root@compute1nova]#vi/etc/nova/nova.conf
[libvirt]
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
6.3.2重启openstack-nova-compute服务
[root@compute1nova]#systemctlrestartopenstack-nova-compute
6.3.3修改/etc/sysconfig/libvirtd文件
[root@compute1nova]#vi/etc/sysconfig/libvirtd
#取消以下内容的注释
LIBVIRTD_CONFIG=/etc/libvirt/libvirtd.conf
LIBVIRTD_ARGS="--listen"
6.3.4修改/etc/sysconfig/libvirtd文件
[root@compute1nova]#vi/etc/libvirt/libvirtd.conf
listen_tls=0
listen_tcp=1
auth_tcp="none"
6.3.5重启libvirtd服务,并检查服务状态
[root@compute1nova]#systemctlrestartlibvirtd
[root@compute1nova]#systemctlstatuslibvirtd
●libvirtd.service-Virtualizationdaemon
Loaded:loaded(/usr/lib/systemd/system/libvirtd.service;enabled;vendorpreset:enabled)
Active:active(running)sinceSat2018-12-0117:50:38CST;11sago
6.3.6查看计算节点的默认虚拟机存储路径,并将文件移动到其他路径中
[root@compute1nova]#mvinstances/*/home/
[root@compute1nova]#cdinstances/
[root@compute1nova]#ll
total0
6.3.7计算节点挂载到NFS,计算节点的挂载点授权
[root@compute1instances]#mount-tnfs192.168.2.61:/var/lib/nova/instances/var/lib/nova/instances
[root@compute1instances]#df-h
FilesystemSizeUsedAvailUse%Mountedon
/dev/mapper/centos-root47G1.8G46G4%/
devtmpfs731M0731M0%/dev
tmpfs743M0743M0%/dev/shm
tmpfs743M9.6M733M2%/run
tmpfs743M0743M0%/sys/fs/cgroup
/dev/sda11014M161M854M16%/boot
tmpfs149M0149M0%/run/user/0
192.168.2.61:/var/lib/nova/instances47G1.5G46G3%/var/lib/nova/instances
[root@compute1instances]#cd/var/lib/nova/
[root@compute1instances]#chown-Rnova:novainstances
[root@compute1instances]#vi/etc/fstab
192.168.2.61:/var/lib/nova/instances/var/lib/nova/instancesnfsdefaults00
6.3.8将之前移走的文件移回默认虚拟机存储路径中
[root@compute1nova]#mv/home/*instances/
[root@compute1nova]#cdinstances/
[root@compute1instances]#ls
_basecompute_nodesf8cdfaa7-d8e2-4feb-9d10-4669e0b68e57locks
6.4验证虚拟机状态
使用admin用户,登录OpenStack的图形化界面http://控制节点ip/dashboard
点击【项目】\【计算】\【实例】中的vm2,如下图所示:
在右侧栏中点击【控制台】tab页,仍能正常显示虚拟机登录页面,即虚拟机正常,如下图所示:
6.5迁移虚拟机
点击左侧栏的【管理员】\【计算】\【实例】,然后点击迁移的虚拟机vm2行的【】按钮下的【实例热迁移】,如下图所示:
弹出【热迁移】窗口,【新主机】栏选择compute2,如下图所示:
迁移成功,虚拟机vm2所在的主机变成compute2,如下图所示:
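The migration can also be triggered and verified from the CLI (a sketch; vm2 and compute2 are the names used above). The host field should change to compute2 once the migration has finished:
[root@controller ~]# nova live-migration vm2 compute2
[root@controller ~]# openstack server show vm2 -c OS-EXT-SRV-ATTR:host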
第七部分Swift部署
7.1配置控制节点
7.1.1创建用户
[root@controller~]#.admin-openrc
[root@controller~]#openstackusercreate--domaindefault--projectservice--passwordadmin123swift
+---------------------+----------------------------------+
|Field|Value|
+---------------------+----------------------------------+
|default_project_id|76775785168945d4b7fc02b248dc0594|
|domain_id|default|
|enabled|True|
|id|cb05ca355b4042338f45ba27f383054e|
|name|swift|
|options|{}|
|password_expires_at|None|
+---------------------+----------------------------------+
7.1.2为用户swift分配admin角色
[root@controller~]#openstackroleadd--projectservice--userswiftadmin
7.1.3创建swift服务,服务类型为object-store
[root@controller ~]# openstack service create --name swift --description "OpenStack Object Storage" object-store
+-------------+----------------------------------+
|Field|Value|
+-------------+----------------------------------+
|description|OpenStack Object Storage|
|enabled|True|
|id|c8e1fe810e79453f8775c459534c32fe|
|name|swift|
|type|object-store|
+-------------+----------------------------------+
7.1.4为swift服务创建三个接口
[root@controller~]#openstackendpointcreate--regionRegionOneobject-storepublichttp://192.168.2.60:8080/v1/AUTH_%\(project_id\)s
+--------------+-------------------------------------------------+
|Field|Value|
+--------------+-------------------------------------------------+
|enabled|True|
|id|3c15a05284c5433f9f6448bc972b4dd5|
|interface|public|
|region|RegionOne|
|region_id|RegionOne|
|service_id|c8e1fe810e79453f8775c459534c32fe|
|service_name|swift|
|service_type|object-store|
|url|http://192.168.2.60:8080/v1/AUTH_%(project_id)s|
+--------------+-------------------------------------------------+
[root@controller~]#openstackendpointcreate--regionRegionOneobject-storeinternalhttp://192.168.2.60:8080/v1/AUTH_%\(project_id\)s
+--------------+-------------------------------------------------+
|Field|Value|
+--------------+-------------------------------------------------+
|enabled|True|
|id|f64c40f2735b465994b5a92625f657a6|
|interface|internal|
|region|RegionOne|
|region_id|RegionOne|
|service_id|c8e1fe810e79453f8775c459534c32fe|
|service_name|swift|
|service_type|object-store|
|url|http://192.168.2.60:8080/v1/AUTH_%(project_id)s|
+--------------+-------------------------------------------------+
[root@controller~]#openstackendpointcreate--regionRegionOneobject-storeadminhttp://192.168.2.60:8080/v1/AUTH_%\(project_id\)s
+--------------+-------------------------------------------------+
|Field|Value|
+--------------+-------------------------------------------------+
|enabled|True|
|id|8f5c0e2d704c4ef49b32dd3cf9c65a3d|
|interface|admin|
|region|RegionOne|
|region_id|RegionOne|
|service_id|c8e1fe810e79453f8775c459534c32fe|
|service_name|swift|
|service_type|object-store|
|url|http://192.168.2.60:8080/v1/AUTH_%(project_id)s|
+--------------+-------------------------------------------------+
7.1.5安装Swift相关软件包
[root@controller~]#yuminstallopenstack-swift-proxypython-swiftclientpython-keystoneclientpython-keystonemiddleware-y
7.1.6修改proxy-server配置文件
[root@controller~]#cp-a/etc/swift/proxy-server.conf/etc/swift/proxy-server.conf.bak
[root@controller~]#>/etc/swift/proxy-server.conf
[root@controller~]#vi/etc/swift/proxy-server.conf
[DEFAULT]
bind_port=8080
user=swift
swift_dir=/etc/swift
[pipeline:main]
pipeline=catch_errorsgatekeeperhealthcheckproxy-loggingcachecontainer_syncbulkratelimitauthtokenkeystoneauthcontainer-quotasaccount-quotasslodloversioned_writesproxy-loggingproxy-server
[app:proxy-server]
use=egg:swift#proxy
account_autocreate=True
[filter:tempauth]
use=egg:swift#tempauth
user_admin_admin=admin.admin.reseller_admin
user_test_tester=testing.admin
user_test2_tester2=testing2.admin
user_test_tester3=testing3
user_test5_tester5=testing5service
[filter:authtoken]
paste.filter_factory=keystonemiddleware.auth_token:filter_factory
auth_uri=http://192.168.2.60:5000
auth_url=http://192.168.2.60:35357
memcached_servers=192.168.2.60:11211
auth_type=password
project_domain_id=default
user_domain_id=default
project_name=service
username=swift
password=admin123
delay_auth_decision=True
[filter:keystoneauth]
use=egg:swift#keystoneauth
operator_roles=admin,user
[filter:healthcheck]
use=egg:swift#healthcheck
[filter:cache]
use=egg:swift#memcache
memcache_servers=192.168.2.60:11211
[filter:ratelimit]
use=egg:swift#ratelimit
[filter:domain_remap]
use=egg:swift#domain_remap
[filter:catch_errors]
use=egg:swift#catch_errors
[filter:cname_lookup]
use=egg:swift#cname_lookup
[filter:staticweb]
use=egg:swift#staticweb
[filter:tempurl]
use=egg:swift#tempurl
[filter:formpost]
use=egg:swift#formpost
[filter:name_check]
use=egg:swift#name_check
[filter:list-endpoints]
use=egg:swift#list_endpoints
[filter:proxy-logging]
use=egg:swift#proxy_logging
[filter:bulk]
use=egg:swift#bulk
[filter:slo]
use=egg:swift#slo
[filter:dlo]
use=egg:swift#dlo
[filter:container-quotas]
use=egg:swift#container_quotas
[filter:account-quotas]
use=egg:swift#account_quotas
[filter:gatekeeper]
use=egg:swift#gatekeeper
[filter:container_sync]
use=egg:swift#container_sync
[filter:xprofile]
use=egg:swift#xprofile
[filter:versioned_writes]
use=egg:swift#versioned_writes
[filter:copy]
use=egg:swift#copy
[filter:keymaster]
use=egg:swift#keymaster
encryption_root_secret=admin123
[filter:encryption]
use=egg:swift#encryption
7.1.7修改swift配置文件
[root@controller~]#cp-a/etc/swift/swift.conf/etc/swift/swift.conf_back
[root@controller~]#>/etc/swift/swift.conf
[root@controller~]#vi/etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix=admin123
swift_hash_path_prefix=admin123
[storage-policy:0]
name=Policy-0
default=yes
aliases=yellow,orange
[swift-constraints]
7.1.8配置Swift环文件
[root@controller~]#cd/etc/swift
[root@controller swift]# swift-ring-builder /etc/swift/account.builder create 12 3 1
[root@controller swift]# swift-ring-builder /etc/swift/container.builder create 12 3 1
[root@controller swift]# swift-ring-builder /etc/swift/object.builder create 12 3 1
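For reference, the general form of this command is:
swift-ring-builder <builder_file> create <part_power> <replicas> <min_part_hours>
The values 12 3 1 used here mean 2^12 = 4096 partitions, 3 replicas of each object, and a minimum of 1 hour between moves of the same partition, which matches the ring summaries shown in section 7.1.9.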
# Add the first storage server, swift1, to the rings
[root@controller ~]# swift-ring-builder /etc/swift/account.builder add r0z0-192.168.2.65:6202/device0 100
Device d0r0z0-192.168.2.65:6202R192.168.2.65:6202/device0_"" with 100.0 weight got id 0
[root@controller ~]# swift-ring-builder /etc/swift/container.builder add r0z0-192.168.2.65:6201/device0 100
Device d0r0z0-192.168.2.65:6201R192.168.2.65:6201/device0_"" with 100.0 weight got id 0
[root@controller ~]# swift-ring-builder /etc/swift/object.builder add r0z0-192.168.2.65:6200/device0 100
Device d0r0z0-192.168.2.65:6200R192.168.2.65:6200/device0_"" with 100.0 weight got id 0
# Add the second storage server, swift2, to the rings
[root@controller ~]# swift-ring-builder /etc/swift/account.builder add r0z0-192.168.2.66:6202/device1 100
Device d1r0z0-192.168.2.66:6202R192.168.2.66:6202/device1_"" with 100.0 weight got id 1
[root@controller ~]# swift-ring-builder /etc/swift/container.builder add r0z0-192.168.2.66:6201/device1 100
Device d1r0z0-192.168.2.66:6201R192.168.2.66:6201/device1_"" with 100.0 weight got id 1
[root@controller ~]# swift-ring-builder /etc/swift/object.builder add r0z0-192.168.2.66:6200/device1 100
Device d1r0z0-192.168.2.66:6200R192.168.2.66:6200/device1_"" with 100.0 weight got id 1
# Add the third storage server, swift3, to the rings
[root@controller ~]# swift-ring-builder /etc/swift/account.builder add r0z0-192.168.2.64:6202/device2 100
Device d2r0z0-192.168.2.64:6202R192.168.2.64:6202/device2_"" with 100.0 weight got id 2
[root@controller ~]# swift-ring-builder /etc/swift/container.builder add r0z0-192.168.2.64:6201/device2 100
Device d2r0z0-192.168.2.64:6201R192.168.2.64:6201/device2_"" with 100.0 weight got id 2
[root@controller ~]# swift-ring-builder /etc/swift/object.builder add r0z0-192.168.2.64:6200/device2 100
Device d2r0z0-192.168.2.64:6200R192.168.2.64:6200/device2_"" with 100.0 weight got id 2
7.1.9确认环内容
[root@controllerswift]#ll
total60
-rw-r--r--1rootroot9050Dec411:01account.builder
drwxr-xr-x2rootroot109Dec411:00backups
-rw-r--r--1rootroot9050Dec411:01container.builder
-rw-r-----1rootswift1415Feb172018container-reconciler.conf
-rw-r--r--1rootroot9050Dec411:01object.builder
-rw-r-----1rootswift291Feb172018object-expirer.conf
drwxr-xr-x2rootroot6Feb172018proxy-server
-rw-r-----1rootswift2033Dec410:36proxy-server.conf
-rw-r-----1rootswift2868Feb172018proxy-server.conf.bak
-rw-r-----1rootswift176Dec410:59swift.conf
-rw-r-----1rootswift63Feb172018swift.conf_back
[root@controllerswift]#swift-ring-builderaccount.builder
account.builder,buildversion4,id5c10fffe4e69467896e5682fbadb0e8a
4096partitions,3.000000replicas,1regions,1zones,3devices,0.00balance,0.00dispersion
Theminimumnumberofhoursbeforeapartitioncanbereassignedis1(0:58:42remaining)
Theoverloadfactoris0.00%(0.000000)
Ringfileaccount.ring.gzisup-to-date
Devices:idregionzoneipaddress:portreplicationip:portnameweightpartitionsbalanceflagsmeta
200192.168.2.64:6202192.168.2.64:6202device2100.0040960.00
000192.168.2.65:6202192.168.2.65:6202device0100.0040960.00
100192.168.2.66:6202192.168.2.66:6202device1100.0040960.00
[root@controllerswift]#swift-ring-buildercontainer.builder
container.builder,buildversion4,id14b9b5bfe82347df9fa07a7532609d1d
4096partitions,3.000000replicas,1regions,1zones,3devices,0.00balance,0.00dispersion
Theminimumnumberofhoursbeforeapartitioncanbereassignedis1(0:58:39remaining)
Theoverloadfactoris0.00%(0.000000)
Ringfilecontainer.ring.gzisup-to-date
Devices:idregionzoneipaddress:portreplicationip:portnameweightpartitionsbalanceflagsmeta
200192.168.2.64:6201192.168.2.64:6201device2100.0040960.00
000192.168.2.65:6201192.168.2.65:6201device0100.0040960.00
100192.168.2.66:6201192.168.2.66:6201device1100.0040960.00
[root@controllerswift]#swift-ring-builderobject.builder
object.builder,buildversion4,idd38f7758b223400aa19b5e8c8f3f4656
4096partitions,3.000000replicas,1regions,1zones,3devices,0.00balance,0.00dispersion
Theminimumnumberofhoursbeforeapartitioncanbereassignedis1(0:58:39remaining)
Theoverloadfactoris0.00%(0.000000)
Ringfileobject.ring.gzisup-to-date
Devices:idregionzoneipaddress:portreplicationip:portnameweightpartitionsbalanceflagsmeta
200192.168.2.64:6200192.168.2.64:6200device2100.0040960.00
000192.168.2.65:6200192.168.2.65:6200device0100.0040960.00
100192.168.2.66:6200192.168.2.66:6200device1100.0040960.00
7.1.10重平衡环
[root@controllerswift]#swift-ring-builder/etc/swift/account.builderrebalance
Reassigned12288(300.00%)partitions.Balanceisnow0.00.Dispersionisnow0.00
[root@controllerswift]#swift-ring-builder/etc/swift/container.builderrebalance
Reassigned12288(300.00%)partitions.Balanceisnow0.00.Dispersionisnow0.00
[root@controllerswift]#swift-ring-builder/etc/swift/object.builderrebalance
Reassigned12288(300.00%)partitions.Balanceisnow0.00.Dispersionisnow0.00
[root@controllerswift]#ll
total144
-rw-r--r--1rootroot33957Dec411:02account.builder
-rw-r--r--1rootroot290Dec411:02account.ring.gz
drwxr-xr-x2rootroot315Dec411:02backups
-rw-r--r--1rootroot33957Dec411:02container.builder
-rw-r-----1rootswift1415Feb172018container-reconciler.conf
-rw-r--r--1rootroot292Dec411:02container.ring.gz
-rw-r--r--1rootroot33957Dec411:02object.builder
-rw-r-----1rootswift291Feb172018object-expirer.conf
-rw-r--r--1rootroot289Dec411:02object.ring.gz
drwxr-xr-x2rootroot6Feb172018proxy-server
-rw-r-----1rootswift2033Dec410:36proxy-server.conf
-rw-r-----1rootswift2868Feb172018proxy-server.conf.bak
-rw-r-----1rootswift176Dec410:59swift.conf
-rw-r-----1rootswift63Feb172018swift.conf_back
7.1.11修改*.ring.gz文件的所有者
[root@controllerswift]#chownswift./etc/swift/*.gz
7.1.12启动swift服务,且允许开机自启动
[root@controllerswift]#systemctlstartopenstack-swift-proxy
[root@controllerswift]#systemctlenableopenstack-swift-proxy
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-proxy.serviceto/usr/lib/systemd/system/openstack-swift-proxy.service.
[root@controllerswift]#systemctlstatusopenstack-swift-proxy
●openstack-swift-proxy.service-OpenStackObjectStorage(swift)-ProxyServer
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-proxy.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0411:14:03CST;4sago
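At this point the storage nodes are not configured yet, so object requests will still fail, but the proxy itself can be checked through the healthcheck middleware in its pipeline (an optional check; curl must be installed). A healthy proxy answers with OK:
[root@controller swift]# curl http://192.168.2.60:8080/healthcheck
OK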
7.2配置对象存储节点
对象存储节点:swift1、swift2和swift3。
7.2.1配置OpenStackyum源
[root@swift1~]#yuminstall-ycentos-release-openstack-queens
7.2.2添加硬盘
Add one 10 GB disk to each object storage node.
The detailed steps for adding a disk are the same as in section 5.1.8 of Part 5 (Cinder deployment) and are not repeated here.
After the disk has been added, the guest OS does not detect it until the virtual machine is rebooted, so reboot each node.
[root@swift1~]#ls/dev/sd*
/dev/sda/dev/sda1/dev/sda2/dev/sdb
7.2.3安装swift软件包
[root@swift1yum.repos.d]#yuminstall-yopenstack-swift-accountopenstack-swift-containeropenstack-swift-objectxfsprogsrsyncopenssh-clients
7.2.4配置硬盘
[root@swift1~]#mkfs.xfs-isize=1024-ssize=4096/dev/sdb
meta-data=/dev/sdbisize=1024agcount=4,agsize=655360blks
=sectsz=4096attr=2,projid32bit=1
=crc=1finobt=0,sparse=0
data=bsize=4096blocks=2621440,imaxpct=25
=sunit=0swidth=0blks
naming=version2bsize=4096ascii-ci=0ftype=1
log=internallogbsize=4096blocks=2560,version=2
=sectsz=4096sunit=1blks,lazy-count=1
realtime=noneextsz=4096blocks=0,rtextents=0
7.2.5创建挂载目录
[root@swift1~]#mkdir-p/srv/node/device0
注意:swift1对应目录/srv/node/device0;swift2对应目录/srv/node/device1;swift3对应目录/srv/node/device2。
[root@swift1~]#mount-onoatime,nodiratime,nobarrier/dev/sdb/srv/node/device0
[root@swift1~]#df-h
FilesystemSizeUsedAvailUse%Mountedon
/dev/mapper/centos-root47G1.2G46G3%/
devtmpfs476M0476M0%/dev
tmpfs488M0488M0%/dev/shm
tmpfs488M7.6M480M2%/run
tmpfs488M0488M0%/sys/fs/cgroup
/dev/sda11014M159M856M16%/boot
tmpfs98M098M0%/run/user/0
/dev/sdb10G33M10G1%/srv/node/device0
7.2.6修改目录/srv/node的所有者
[root@swift1~]#chown-Rswift./srv/node
7.2.7设置开机自动挂载
[root@swift1~]#vi/etc/fstab
/dev/sdb/srv/node/device0xfsnoatime,nodiratime,nobarrier00
[root@swift1 ~]# mount -a
7.2.8将控制节点的环文件拷贝到本地存储节点
[root@swift1swift]#scp192.168.2.60:/etc/swift/*.gz/etc/swift/
Theauthenticityofhost'192.168.2.60(192.168.2.60)'can'tbeestablished.
ECDSAkeyfingerprintisSHA256:d06aNFNQ00PKaGWGvEJ8lK5fIcWu6lw0Ypj+wqAeQ9I.
ECDSAkeyfingerprintisMD5:9f:eb:be:7a:74:d8:99:85:f0:30:fb:46:3f:29:6c:40.
Areyousureyouwanttocontinueconnecting(yes/no)?yes
Warning:Permanentlyadded'192.168.2.60'(ECDSA)tothelistofknownhosts.
root@192.168.2.60'spassword:
account.ring.gz100%29034.3KB/s00:00
container.ring.gz100%29229.2KB/s00:00
object.ring.gz
7.2.9修改配置文件所有者
[root@swift1swift]#chownswift./etc/swift/*.gz
7.2.10修改swift配置文件
[root@swift1swift]#cp-a/etc/swift/swift.conf/etc/swift/swift.conf.bak
[root@swift1swift]#>/etc/swift/swift.conf
[root@swift1swift]#vi/etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix=admin123
swift_hash_path_prefix=admin123
[storage-policy:0]
name=Policy-0
default=yes
aliases=yellow,orange
[swift-constraints]
7.2.11修改account-server配置文件
[root@swift1 swift]# cp -a /etc/swift/account-server.conf /etc/swift/account-server.conf.bak
[root@swift1swift]#>/etc/swift/account-server.conf
[root@swift1swift]#vi/etc/swift/account-server.conf
[DEFAULT]
bind_ip=0.0.0.0
bind_port=6202
[pipeline:main]
pipeline=healthcheckreconaccount-server
[app:account-server]
use=egg:swift#account
[filter:healthcheck]
use=egg:swift#healthcheck
[filter:recon]
use=egg:swift#recon
[account-replicator]
[account-auditor]
[account-reaper]
[filter:xprofile]
use=egg:swift#xprofile
7.2.12修改container-server配置文件
[root@swift1swift]#cp-a/etc/swift/container-server.conf/etc/swift/container-server.conf.bak
[root@swift1swift]#>/etc/swift/container-server.conf
[root@swift1swift]#vi/etc/swift/container-server.conf
[DEFAULT]
bind_ip=0.0.0.0
bind_port=6201
[pipeline:main]
pipeline=healthcheckreconcontainer-server
[app:container-server]
use=egg:swift#container
[filter:healthcheck]
use=egg:swift#healthcheck
[filter:recon]
use=egg:swift#recon
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
[filter:xprofile]
use=egg:swift#xprofile
7.2.13修改object-server配置文件
[root@swift1swift]#cp-a/etc/swift/object-server.conf/etc/swift/object-server.conf.bak
[root@swift1swift]#>/etc/swift/object-server.conf
[root@swift1swift]#vi/etc/swift/object-server.conf
[DEFAULT]
bind_ip=0.0.0.0
bind_port=6200
[pipeline:main]
pipeline=healthcheckreconobject-server
[app:object-server]
use=egg:swift#object
[filter:healthcheck]
use=egg:swift#healthcheck
[filter:recon]
use=egg:swift#recon
[object-replicator]
[object-reconstructor]
[object-updater]
[object-auditor]
[filter:xprofile]
use=egg:swift#xprofile
7.2.14修改rsync配置文件
[root@swift1swift]#cp-a/etc/rsyncd.conf/etc/rsyncd.conf.bak
[root@swift1swift]#>/etc/rsyncd.conf
[root@swift1swift]#vi/etc/rsyncd.conf
pidfile=/var/run/rsyncd.pid
logfile=/var/log/rsyncd.log
uid=swift
gid=swift
address=192.168.2.65    # IP of this object storage node; use each node's own IP
[account]
path=/srv/node
readonly=false
writeonly=no
list=yes
incomingchmod=0644
outgoingchmod=0644
maxconnections=25
lockfile=/var/lock/account.lock
[container]
path=/srv/node
readonly=false
writeonly=no
list=yes
incomingchmod=0644
outgoingchmod=0644
maxconnections=25
lockfile=/var/lock/container.lock
[object]
path=/srv/node
readonly=false
writeonly=no
list=yes
incomingchmod=0644
outgoingchmod=0644
maxconnections=25
lockfile=/var/lock/object.lock
[swift_server]
path=/etc/swift
readonly=true
writeonly=no
list=yes
incomingchmod=0644
outgoingchmod=0644
maxconnections=5
lockfile=/var/lock/swift_server.lock
7.2.15启动rsyncd服务
[root@swift1swift]#systemctlstartrsyncd
[root@swift1swift]#systemctlenablersyncd
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/rsyncd.serviceto/usr/lib/systemd/system/rsyncd.service.
[root@swift1swift]#systemctlstatusrsyncd
●rsyncd.service-fastremotefilecopyprogramdaemon
Loaded:loaded(/usr/lib/systemd/system/rsyncd.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0422:52:18CST;17sago
7.2.16启动swift服务
[root@swift1swift]#systemctlstartopenstack-swift-accountopenstack-swift-account-replicatoropenstack-swift-account-auditoropenstack-swift-account-reaperopenstack-swift-containeropenstack-swift-container-replicatoropenstack-swift-container-updateropenstack-swift-container-auditoropenstack-swift-objectopenstack-swift-object-replicatoropenstack-swift-object-updateropenstack-swift-object-auditor
[root@swift1swift]#systemctlenableopenstack-swift-accountopenstack-swift-account-replicatoropenstack-swift-account-auditoropenstack-swift-account-reaperopenstack-swift-containeropenstack-swift-container-replicatoropenstack-swift-container-updateropenstack-swift-container-auditoropenstack-swift-objectopenstack-swift-object-replicatoropenstack-swift-object-updateropenstack-swift-object-auditor
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-account.serviceto/usr/lib/systemd/system/openstack-swift-account.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-account-replicator.serviceto/usr/lib/systemd/system/openstack-swift-account-replicator.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-account-auditor.serviceto/usr/lib/systemd/system/openstack-swift-account-auditor.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-account-reaper.serviceto/usr/lib/systemd/system/openstack-swift-account-reaper.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-container.serviceto/usr/lib/systemd/system/openstack-swift-container.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-container-replicator.serviceto/usr/lib/systemd/system/openstack-swift-container-replicator.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-container-updater.serviceto/usr/lib/systemd/system/openstack-swift-container-updater.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-container-auditor.serviceto/usr/lib/systemd/system/openstack-swift-container-auditor.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-object.serviceto/usr/lib/systemd/system/openstack-swift-object.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-object-replicator.serviceto/usr/lib/systemd/system/openstack-swift-object-replicator.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-object-updater.serviceto/usr/lib/systemd/system/openstack-swift-object-updater.service.
Createdsymlinkfrom/etc/systemd/system/multi-user.target.wants/openstack-swift-object-auditor.serviceto/usr/lib/systemd/system/openstack-swift-object-auditor.service.
[root@swift1swift]#systemctlstatusopenstack-swift-accountopenstack-swift-account-replicatoropenstack-swift-account-auditoropenstack-swift-account-reaperopenstack-swift-containeropenstack-swift-container-replicatoropenstack-swift-container-updateropenstack-swift-container-auditoropenstack-swift-objectopenstack-swift-object-replicatoropenstack-swift-object-updateropenstack-swift-object-auditor
●openstack-swift-account.service-OpenStackObjectStorage(swift)-AccountServer
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-account.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:46CST;40sago
●openstack-swift-account-replicator.service-OpenStackObjectStorage(swift)-AccountReplicator
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-account-replicator.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:46CST;40sago
●openstack-swift-account-auditor.service-OpenStackObjectStorage(swift)-AccountAuditor
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-account-auditor.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:46CST;40sago
●openstack-swift-account-reaper.service-OpenStackObjectStorage(swift)-AccountReaper
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-account-reaper.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:46CST;40sago
●openstack-swift-container.service-OpenStackObjectStorage(swift)-ContainerServer
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-container.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:46CST;40sago
●openstack-swift-container-replicator.service-OpenStackObjectStorage(swift)-ContainerReplicator
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-container-replicator.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:46CST;40sago
●openstack-swift-container-updater.service-OpenStackObjectStorage(swift)-ContainerUpdater
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-container-updater.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:47CST;40sago
●openstack-swift-container-auditor.service-OpenStackObjectStorage(swift)-ContainerAuditor
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-container-auditor.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:47CST;40sago
●openstack-swift-object.service-OpenStackObjectStorage(swift)-ObjectServer
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-object.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:47CST;40sago
●openstack-swift-object-replicator.service-OpenStackObjectStorage(swift)-ObjectReplicator
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-object-replicator.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:47CST;40sago
●openstack-swift-object-updater.service-OpenStackObjectStorage(swift)-ObjectUpdater
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-object-updater.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:47CST;40sago
●openstack-swift-object-auditor.service-OpenStackObjectStorage(swift)-ObjectAuditor
Loaded:loaded(/usr/lib/systemd/system/openstack-swift-object-auditor.service;enabled;vendorpreset:disabled)
Active:active(running)sinceTue2018-12-0423:04:47CST;40sago
7.3验证Swift组件
使用admin用户登录OpenStack图形化界面,地址:http://控制节点ip/dashboard
点击左侧栏【项目】\【对象存储】\【容器】项,然后点击右侧栏的【容器】按钮,如下图所示:
A [Create Container] dialog pops up. Enter testC as the container name, set [Container Access] to [Public], and click the [Submit] button, as shown below:
容器创建完成,可查看到容器,如下图所示:
点击容器testC,查看到该容器的信息,然后,点击当前页面的【】按钮,如下图所示:
弹出【上传文件】窗口,点击【浏览】按钮,选择要上传的文件,文件名默认与上传文件名称一致,也可自行修改,这里修改为testImg,点击【上传文件】按钮,如下图所示:
上传成功,在列表中查看到上传文件。然后,点击文件后面的【下载】按钮,即可下载该文件,如下图所示:
点击要删除文件testImg后面的【】按钮,选择【删除】选项,弹出【在testC中删除文件】提示框,点击提示框中的【删除】按钮,如下图所示:
删除成功,点击【OK】按钮,如下图所示:
Kubernetes实验手册
7.3 Preparing the Installation Environment
在开始Kubernetes的安装前,首先要准备好安装环境。
7.3.1创建虚拟机
使用克隆的方式,创建3台虚拟机,主机名分别为master、node1、node2。
3台虚拟机的资源分配如下表所示:
Hostname    IP address      CPU    Memory    Location
master      192.168.2.84    2      1.5G      Host machine 1
node1       192.168.2.81    2      1.5G      Host machine 1
node2       192.168.2.82    2      1.5G      Host machine 2
7.3.2设置各主机免密码登录
Installing Kubernetes requires passwordless SSH login between the nodes.
1.设置主机名映射
在每一个节点的/etc/hosts文件写入以下内容:
192.168.2.84 master
192.168.2.81 node1
192.168.2.82 node2
2.创建密钥
在每一个节点,以root用户执行以下命令创建密钥:
#ssh-keygen
3.复制密钥
Run the following command to copy the key to every node, including the local node:
# ssh-copy-id hostname
For example:
# ssh-copy-id master
After the key has been copied on each node, you can log in to any host without a password:
# ssh hostname
For example:
# ssh master
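If you prefer to script this step, a minimal sketch is shown below; it assumes the hostnames master, node1 and node2 from the table above and simply loops over them (each password prompt is still answered interactively). Run the same loop on every node so that each host can reach all of the others:
# for h in master node1 node2; do ssh-copy-id root@$h; done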
7.3.3 Disabling the Firewall and SELinux
(1) Run the following commands on every node to disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld
(2).在各节点进行以下操作,关闭SELinux
编辑/etc/selinux/config文件,把其中的
SELINUX=enforcing
改为
SELINUX=disabled
然后重启主机。
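If you prefer not to edit the file by hand, the same change can be made non-interactively; the sketch below assumes the stock /etc/selinux/config layout and also turns SELinux off for the current session:
# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
# setenforce 0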
7.4安装Docker
docker-ce版本有多种安装方式,本次实验采用下载二进制安装包的方式进行安装。
在浏览器中输入以下地址:
https://download.docker.com/linux/static/stable/x86_64/
登录后的页面如下图所示。
图1登录下载页面
下载docker-18.09.0.tgz到node节点的/home/software目录,不需要解压。
编辑文件install-docker.sh,写入以下内容:
#!/bin/sh
usage() {
    echo "Usage: $0 FILE_NAME_DOCKER_CE_TAR_GZ"
    echo "       $0 docker-17.09.0-ce.tgz"
    echo "Get docker-ce binary from: https://download.docker.com/linux/static/stable/x86_64/"
    echo "eg: wget https://download.docker.com/linux/static/stable/x86_64/docker-17.09.0-ce.tgz"
    echo ""
}
SYSTEMDDIR=/usr/lib/systemd/system
SERVICEFILE=docker.service
DOCKERDIR=/usr/bin
DOCKERBIN=docker
SERVICENAME=docker

if [ $# -ne 1 ]; then
    usage
    exit 1
else
    FILETARGZ="$1"
fi

if [ ! -f ${FILETARGZ} ]; then
    echo "Docker binary tgz file does not exist, please check it"
    echo "Get docker-ce binary from: https://download.docker.com/linux/static/stable/x86_64/"
    echo "eg: wget https://download.docker.com/linux/static/stable/x86_64/docker-17.09.0-ce.tgz"
    exit 1
fi

echo "## unzip: tar xvpf ${FILETARGZ}"
tar xvpf ${FILETARGZ}
echo
echo "## binary: ${DOCKERBIN} copy to ${DOCKERDIR}"
cp -p ${DOCKERBIN}/* ${DOCKERDIR} >/dev/null 2>&1
which ${DOCKERBIN}

echo "## systemd service: ${SERVICEFILE}"
echo "## docker.service: create docker systemd file"
cat > ${SYSTEMDDIR}/${SERVICEFILE} << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker.socket
[Service]
Type=notify
EnvironmentFile=-/run/flannel/docker
WorkingDirectory=/usr/local/bin
ExecStart=/usr/bin/dockerd \
    -H tcp://0.0.0.0:4243 \
    -H unix:///var/run/docker.sock \
    --selinux-enabled=false \
    --log-opt max-size=1g
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
# TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

echo ""
systemctl daemon-reload
echo "## Service status: ${SERVICENAME}"
systemctl status ${SERVICENAME}
echo "## Service restart: ${SERVICENAME}"
systemctl restart ${SERVICENAME}
echo "## Service status: ${SERVICENAME}"
systemctl status ${SERVICENAME}
echo "## Service enabled: ${SERVICENAME}"
systemctl enable ${SERVICENAME}
echo "## docker version"
docker version
注意:如果是在Windows下编辑这个文件并上传到node节点的/home/software目录,在上传的时候需要用文本方式,不要用二进制方式,如下图所示:
图2选择用文本方式上传
After uploading, make the file executable:
# chmod 755 install-docker.sh
or, equivalently:
# chmod +x install-docker.sh
Then run the following command to install Docker, passing the package downloaded above:
# ./install-docker.sh docker-18.09.0.tgz
执行时,会自动启动Docker服务,如图3所示:
图3查看Docker服务是否已经启动
Run the following command to check the Docker version:
# docker version
输出结果如图4所示:
图4查看Docker版本信息
7.5创建CA证书
7.5.1下载程序文件
In the /home/software directory, download the tools directly with wget. If wget is not installed, install it first:
# yum install -y wget
Then download the two tools with the following commands:
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
After the download finishes, rename the files:
# mv cfssl_linux-amd64 cfssl
# mv cfssljson_linux-amd64 cfssljson
Then make both tools executable:
# chmod +x cfssl cfssljson
Copy the two tools to the /usr/bin directory:
# cp cfssl cfssljson /usr/bin
Then copy them to the /usr/bin directory of the other two nodes:
# scp cfssl cfssljson node1:/usr/bin
# scp cfssl cfssljson node2:/usr/bin
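As an optional sanity check, confirm on every node that the tools are executable and on the PATH:
# cfssl version
# which cfssljson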
7.5.2创建密钥和证书
Create the directory that will hold the certificate and key files:
# mkdir -p /etc/kubernetes/ssl
Enter this directory and run the following commands:
# cfssl print-defaults config > config.json
# cfssl print-defaults csr > csr.json
Rename config.json to ca-config.json and edit it with vi so that it contains the following:
{
  "signing": {
    "default": {
      "expiry": "43824h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "43824h"
      }
    }
  }
}
把csr.json改名为ca-csr.json,并使用VI编辑器,把文件修改为以下内容:
{
"CN":"kubernetes",
"key":{
"algo":"rsa",
"size":4096
},
"names":[
{
"C":"CN",
"ST":"SD",
"L":"QD",
"O":"k8s",
"OU":"System"
}
]
}
Run the following command to generate the key and certificate files:
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
生成密钥和证书文件的过程如图5所示:
图5生成密钥和证书文件
命令生成了ca.csr、ca-key.pem、ca.pem三个文件。其中ca.csr是证书请求文件,ca-key.pem是私钥文件,ca.pem是证书文件。
Then copy these three files to the same directory on node1 and node2:
# scp ca.csr ca-key.pem ca.pem node1:/etc/kubernetes/ssl
# scp ca.csr ca-key.pem ca.pem node2:/etc/kubernetes/ssl
The keys and certificates of all later components are created on the basis of these files.
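Before moving on, it can be worth inspecting the generated CA certificate. A quick check with openssl (assumed to be available; install it with yum if it is not) prints the subject and validity period:
# openssl x509 -in /etc/kubernetes/ssl/ca.pem -noout -subject -dates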
7.6安装和配置Etcd
7.6.1 Downloading the Software
到https://github.com/coreos/etcd/releases下载etcd-v3.3.8-linux-amd64.tar.gz,如图6所示:
图6下载Etcd软件包
After downloading, put the etcd-v3.3.8-linux-amd64.tar.gz file into the /home/software directory and extract it:
# tar xzvf etcd-v3.3.8-linux-amd64.tar.gz
Enter the extracted directory:
# cd etcd-v3.3.8-linux-amd64
Copy etcd and etcdctl to the /usr/bin directory of the master node, and to the same directory on node1 and node2:
# cp etcd etcdctl /usr/bin
# scp etcd etcdctl node1:/usr/bin
# scp etcd etcdctl node2:/usr/bin
Then run the following commands on every node:
# etcd --version
# etcdctl --version
输出结果如图7所示:
图7查看版本信息
7.6.2 Creating the Certificates
On every node, create the /etc/etcd/ssl directory and enter it. This directory holds the files needed to create the certificates as well as the generated certificates:
# mkdir -p /etc/etcd/ssl
# cd /etc/etcd/ssl
在这个目录下使用VI编辑器,编辑名为etcd-csr.json的文件,文件内容如下:
{
"CN":"etcd",
"hosts":[
"127.0.0.1",
"192.168.2.84",
"192.168.2.81",
"192.168.2.82"
],
"key":{
"algo":"rsa",
"size":4096
},
"names":[
{
"C":"CN",
"ST":"SD",
"L":"QD",
"O":"k8s",
"OU":"System"
}
]
}
Then create the key and certificate with the following command; the certificate is generated on the basis of the key and certificate in /etc/kubernetes/ssl:
# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
    -ca-key=/etc/kubernetes/ssl/ca-key.pem \
    -config=/etc/kubernetes/ssl/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
执行过程如图8所示:
图8创建密钥和证书文件
On the two node machines, create the directory with the following command:
# mkdir -p /etc/etcd/ssl
The command above generated three files: etcd.csr, etcd-key.pem and etcd.pem. Copy them to the other two nodes:
# scp etcd.csr etcd-key.pem etcd.pem node1:/etc/etcd/ssl
# scp etcd.csr etcd-key.pem etcd.pem node2:/etc/etcd/ssl
7.6.3 Editing the Etcd Configuration File
Log in to the master node, open a terminal, and in the /etc/etcd directory use the vi editor to create the etcd.conf file with the following content:
#[member]
ETCD_NAME="master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.84:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.84:2379,https://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.84:2380"
ETCD_INITIAL_CLUSTER="master=https://192.168.2.84:2380,node1=https://192.168.2.81:2380,node2=https://192.168.2.82:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.84:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
In this file, the ETCD_INITIAL_CLUSTER line must be identical on all three nodes (master, node1 and node2 are followed by the corresponding node addresses); every other IP address in the file must be changed to the address of the node the file is on.
After saving the file, copy etcd.conf to the same directory on the other two nodes:
# scp /etc/etcd/etcd.conf node1:/etc/etcd
# scp /etc/etcd/etcd.conf node2:/etc/etcd
Then adjust each copy according to the node it runs on.
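Instead of editing each copy by hand, a sketch such as the following can be run on node1 (with the obvious substitutions on node2); it rewrites the node name and the listen/advertise addresses while leaving the ETCD_INITIAL_CLUSTER line untouched. Double-check the resulting file before starting the service:
# sed -i 's#^ETCD_NAME="master"#ETCD_NAME="node1"#' /etc/etcd/etcd.conf
# sed -i 's#192.168.2.84:2380"$#192.168.2.81:2380"#; s#192.168.2.84:2379#192.168.2.81:2379#' /etc/etcd/etcd.conf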
On node1 the file content is as follows:
#[member]
ETCD_NAME="node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.81:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.81:2379,https://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.81:2380"
ETCD_INITIAL_CLUSTER="master=https://192.168.2.84:2380,node1=https://192.168.2.81:2380,node2=https://192.168.2.82:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.81:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
On node2 the file content is as follows:
#[member]
ETCD_NAME="node2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.82:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.82:2379,https://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.82:2380"
ETCD_INITIAL_CLUSTER="master=https://192.168.2.84:2380,node1=https://192.168.2.81:2380,node2=https://192.168.2.82:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.82:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
7.6.4 Creating the Etcd Service
在/usr/lib/systemd/system目录下,使用VI编辑器,创建etcd.service文件,写入以下内容:
[Unit]
Description=Etcd Server
After=network.target
[Service]
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Type=notify
[Install]
WantedBy=multi-user.target
After editing, copy the file to the same directory on the other two nodes:
# scp /usr/lib/systemd/system/etcd.service node1:/usr/lib/systemd/system
# scp /usr/lib/systemd/system/etcd.service node2:/usr/lib/systemd/system
Run the following command on every node to create the Etcd working directory:
# mkdir -p /var/lib/etcd
7.6.5 Starting the Service and the Cluster
Run the following two commands on every node to start the Etcd service and make sure it starts automatically after a reboot:
# systemctl start etcd
# systemctl enable etcd
Then check the status of the Etcd service:
# systemctl status etcd
如果状态为active,说明启动成功,如图9所示:
图9查看Etcd服务状态
Then check the cluster status on the master node:
# etcdctl --endpoints=https://192.168.2.84:2379 \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/etcd/ssl/etcd.pem \
    --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
如果可以看到如图10所示的输出,说明集群状态正常:
图10查看集群状态
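Besides cluster-health, the member list can be queried with the same certificate options; this is an optional check and should show all three members:
# etcdctl --endpoints=https://192.168.2.84:2379 \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/etcd/ssl/etcd.pem \
    --key-file=/etc/etcd/ssl/etcd-key.pem member list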
7.7配置master节点
7.7.1下载软件
使用如下地址,登录github:
https://github.com/kubernetes/kubernetes/releases
登录后,可以看到Kubernetes发布的各个版本的软件,本次实验使用v1.10.1版,如图11所示:
图11下载界面
Click CHANGELOG-1.10.md in the line that reads "See kubernetes-announce@ and CHANGELOG-1.10.md for details"; this opens the download page. Installing Kubernetes requires downloading the Server, Node and Client packages.
如图12所示,点击下载kubernetes-server-linux-amd64.tar.gz软件包
图12Server软件包下载界面
如图13所示,下载kubernetes-node-linux-amd64.tar.gz软件包。
图13Node软件包下载界面
如图14所示,下载kubernetes-client-linux-amd64.tar.gz软件包。
图14Client软件包下载界面
把这三个软件包下载到/home/software目录下。
7.7.2安装并配置master节点
1.解压软件
In the /home/software directory, extract the packages with the following commands:
# tar xzvf kubernetes-server-linux-amd64.tar.gz
# tar xzvf kubernetes-node-linux-amd64.tar.gz
# tar xzvf kubernetes-client-linux-amd64.tar.gz
After extraction, enter the kubernetes directory; it contains a kubernetes-src.tar.gz file, which is extracted as follows:
# cd ./kubernetes
# tar xzvf kubernetes-src.tar.gz
Copy kube-apiserver, kube-controller-manager and kube-scheduler from /home/software/kubernetes/server/bin to the /usr/bin directory:
# cp kube-apiserver kube-controller-manager kube-scheduler /usr/bin
2. Creating the keys and certificate
进入/etc/kubernetes/ssl目录,使用VI编辑器,编辑kubernetes-csr.json并写入以下内容:
{
"CN":"kubernetes",
"hosts":[
"127.0.0.1",
"192.168.2.84",
"80.1.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key":{
"algo":"rsa",
"size":4096
},
"names":[
{
"C":"CN",
"ST":"SD",
"L":"QD",
"O":"k8s",
"OU":"System"
}
]
}
注意:这里的80.1.0.1是clusterIP,后续的配置文件里用到clusterIP的地方写入这个地址
Create the key and certificate with the following command:
# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
This generates three files: kubernetes.csr, kubernetes-key.pem and kubernetes.pem. Copy kubernetes-key.pem and kubernetes.pem to the same directory on the other two nodes:
# scp kubernetes-key.pem kubernetes.pem node1:/etc/kubernetes/ssl
# scp kubernetes-key.pem kubernetes.pem node2:/etc/kubernetes/ssl
3. Creating the client token file used by kube-apiserver
Log in to the master node as root and run the following command to generate a token:
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
fb197b00040d993afed1367db4f9ef00
The command produces a token, i.e. a string: fb197b00040d993afed1367db4f9ef00
Then, in the /etc/kubernetes/ssl directory, edit the bootstrap-token.csv file with vi and write the following content:
fb197b00040d993afed1367db4f9ef00,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
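The two steps above can also be combined into a single small sketch that generates the token and writes the csv file in one go (shown only for convenience; the result is equivalent):
# TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/ssl/bootstrap-token.csv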
4. Creating the basic username/password authentication file
在/etc/kubernetes/ssl目录下,使用VI编辑器编辑basic-auth.csv文件,写入以下内容,用于Kubernetes的基础用户认证:
admin,admin,1
readonly,readonly,2
5. Deploying the Kubernetes API Server
在/usr/lib/systemd/system/目录下编辑kube-apiserver.service文件,并写入以下内容:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/bin/kube-apiserver\
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction\
--bind-address=192.168.2.84\
--insecure-bind-address=127.0.0.1\
--authorization-mode=Node,RBAC\
--runtime-config=rbac.authorization.k8s.io/v1\
--kubelet-https=true\
--anonymous-auth=false\
--basic-auth-file=/etc/kubernetes/ssl/basic-auth.csv\
--enable-bootstrap-token-auth\
--token-auth-file=/etc/kubernetes/ssl/bootstrap-token.csv\
--service-cluster-ip-range=80.1.0.0/16\
--service-node-port-range=20000-40000\
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem\
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem\
--client-ca-file=/etc/kubernetes/ssl/ca.pem\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem\
--etcd-cafile=/etc/kubernetes/ssl/ca.pem\
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem\
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem\
--etcd-servers=https://192.168.2.84:2379,https://192.168.2.81:2379,https://192.168.2.82:2379\
--enable-swagger-ui=true\
--allow-privileged=true\
--audit-log-maxage=30\
--audit-log-maxbackup=3\
--audit-log-maxsize=100\
--audit-log-path=/var/kubernetes/log/api-audit.log\
--event-ttl=1h\
--v=2\
--logtostderr=false\
--log-dir=/var/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
注意:--service-cluster-ip-range=80.1.0.0/16一行中,80.1.0.0/16是分配给cluster的网段。
After saving the file, create the directory that holds the api-server logs:
# mkdir -p /var/kubernetes/log
Then run the following commands to start the kube-apiserver service and make sure it starts automatically after a reboot:
# systemctl start kube-apiserver
# systemctl enable kube-apiserver
Check the service status with the following command:
# systemctl status kube-apiserver
正常状态如图15所示:
图14查看服务状态
6. Deploying the Controller Manager service
在/usr/lib/systemd/system/目录下编辑kube-controller-manager.service文件,并写入以下内容:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-controller-manager\
--address=127.0.0.1\
--master=http://127.0.0.1:8080\
--allocate-node-cidrs=true\
--service-cluster-ip-range=80.1.0.0/16\
--cluster-cidr=80.2.0.0/16\
--cluster-name=kubernetes\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem\
--root-ca-file=/etc/kubernetes/ssl/ca.pem\
--leader-elect=true\
--v=2\
--logtostderr=false\
--log-dir=/var/kubernetes/log
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Note: --service-cluster-ip-range=80.1.0.0/16 is the cluster IP network segment, and --cluster-cidr=80.2.0.0/16 is the Pod IP network segment.
Run the following commands in a terminal to start the kube-controller-manager service and make sure it starts automatically after a reboot:
# systemctl start kube-controller-manager
# systemctl enable kube-controller-manager
Check the service status with the following command:
# systemctl status kube-controller-manager
正常状态如图15所示:
图15查看服务状态
7. Deploying the Kubernetes Scheduler
在/usr/lib/systemd/system/目录下编辑kube-scheduler.service文件,并写入以下内容:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-scheduler\
--address=127.0.0.1\
--master=http://127.0.0.1:8080\
--leader-elect=true\
--v=2\
--logtostderr=false\
--log-dir=/var/kubernetes/log
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Run the following commands in a terminal to start the kube-scheduler service and make sure it starts automatically after a reboot:
# systemctl start kube-scheduler
# systemctl enable kube-scheduler
Check the service status with the following command:
# systemctl status kube-scheduler
正常状态如图16所示:
图16查看服务状态
8. Deploying the kubectl command-line tool
(1).复制文件
Copy the kubectl binary to the /usr/bin directory:
# cp /home/software/kubernetes/client/bin/kubectl /usr/bin
(2).创建密钥和证书
进入/etc/kubernetes/ssl目录,使用VI编辑器,编辑admin-csr.json文件,用于创建证书和密钥。在这个文件写入以下内容:
{
"CN":"admin",
"hosts":[],
"key":{
"algo":"rsa",
"size":4096
},
"names":[
{
"C":"CN",
"ST":"SD",
"L":"QD",
"O":"system:masters",
"OU":"System"
}
]
}
Create the admin user's key and certificate with the following command:
# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
这个命令会生成admin.csr、admin-key.pem、admin.pem三个文件。
(3).设置集群参数
Set the parameters of the kubernetes cluster with the following command:
# kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --embed-certs=true --server=https://192.168.2.84:6443
If the command succeeds, it prints:
Cluster "kubernetes" set.
(4).设置客户端认证参数
Set the client credentials for the admin user with the following command:
# kubectl config set-credentials admin \
    --client-certificate=/etc/kubernetes/ssl/admin.pem \
    --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem
If the command succeeds, it prints:
User "admin" set.
(5).设置上下文参数
Set the context parameters, i.e. associate the kubernetes cluster with the admin user:
# kubectl config set-context kubernetes --cluster=kubernetes --user=admin
If the command succeeds, it prints:
Context "kubernetes" created.
(6).设置默认上下文
Set the default context, i.e. start using the association that was just created:
# kubectl config use-context kubernetes
If the command succeeds, it prints:
Switched to context "kubernetes".
(7).使用kubectl工具
Verify the configuration with the following command:
# kubectl get cs
如果配置成功,会有如图17所示的输出:
图17验证master节点的配置
如果可以看到这个输出,说明master节点配置成功。
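Two further optional checks can be run at this point with the kubectl configuration that was just created:
# kubectl cluster-info
# kubectl version --short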
7.8配置node节点
配置两个node节点前,首先要完成master节点的相关配置,然后再切换到node节点进行配置操作。
7.8.1在master节点的配置
1.复制文件
Enter the /home/software/kubernetes/server/bin directory and copy kubelet to the /usr/bin directory of the two node machines:
# scp kubelet node1:/usr/bin
# scp kubelet node2:/usr/bin
Copy kube-proxy to the /usr/bin directory of the master node and of the two node machines:
# cp kube-proxy /usr/bin
# scp kube-proxy node1:/usr/bin
# scp kube-proxy node2:/usr/bin
The kubelet service only needs to be deployed on the node machines; the kube-proxy service is deployed on all nodes.
2. Creating the role binding
Run the following command in a master terminal to bind kubelet-bootstrap into the cluster:
# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
If the binding succeeds, it prints:
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created
3. Setting the cluster parameters
Run the following command in the terminal to create the kubelet bootstrapping kubeconfig file and set the cluster parameters:
# kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --embed-certs=true --server=https://192.168.2.84:6443 \
    --kubeconfig=bootstrap.kubeconfig
If the command succeeds, it prints:
Cluster "kubernetes" set.
4. Setting the client authentication parameters
Set the client authentication parameters with the following command; the token value is the string created while configuring the master node, stored in /etc/kubernetes/ssl/bootstrap-token.csv:
# kubectl config set-credentials kubelet-bootstrap \
    --token=fb197b00040d993afed1367db4f9ef00 \
    --kubeconfig=bootstrap.kubeconfig
If the command succeeds, it prints:
User "kubelet-bootstrap" set.
5. Setting the context association between the cluster and the user
Set the context parameters that associate the kubernetes cluster with the kubelet-bootstrap user:
# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=bootstrap.kubeconfig
If the command succeeds, it prints:
Context "default" created.
6. Selecting the default context
Make the association that was just created the default context:
# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
If the command succeeds, it prints:
Switched to context "default".
Create the /etc/kubernetes/cfg directory on all three nodes:
# mkdir -p /etc/kubernetes/cfg
Copy the bootstrap.kubeconfig file to the same directory on every node:
# cp bootstrap.kubeconfig /etc/kubernetes/cfg
# scp bootstrap.kubeconfig node1:/etc/kubernetes/cfg
# scp bootstrap.kubeconfig node2:/etc/kubernetes/cfg
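To confirm the file was written as intended, it can be inspected on any node (the embedded certificate data is not printed in full):
# kubectl config view --kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig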
7.8.2在node节点上部署Kubelet
1.部署CNI并设置其支持kubelet
On node1, create the CNI working directory and enter it:
# mkdir -p /etc/cni/net.d
# cd /etc/cni/net.d
在这个目录下使用VI编辑器编辑10-default.conf文件,并写入以下内容:
{
"name":"flannel",
"type":"flannel",
"delegate":{
"bridge":"docker0",
"isDefaultGateway":true,
"mtu":1400
}
}
This configuration file makes kubelet use the Flannel CNI plugin, which delegates to the docker0 bridge.
在node2节点上也执行相同的操作。
2. Creating the kubelet service
Log in to node1 and create the kubelet working directory and the log directory:
# mkdir /var/lib/kubelet
# mkdir -p /var/kubernetes/log
在/usr/lib/systemd/system/目录编辑kubelet.service,写入以下内容:
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet\
--address=192.168.2.81\
--hostname-override=192.168.2.81\
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0\
--experimental-bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig\
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig\
--cert-dir=/etc/kubernetes/ssl\
--network-plugin=cni\
--cni-conf-dir=/etc/cni/net.d\
--cni-bin-dir=/usr/bin/\
--cluster-dns=80.1.0.2\
--cluster-domain=cluster.local.\
--hairpin-modehairpin-veth\
--allow-privileged=true\
--fail-swap-on=false\
--logtostderr=true\
--v=2\
--logtostderr=false\
--log-dir=/var/kubernetes/log
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
After editing, start the service and make sure it starts automatically after a reboot (repeat the same steps on node2, changing --address and --hostname-override to 192.168.2.82):
# systemctl start kubelet
# systemctl enable kubelet
如果服务可以正常启动,那么kubelet服务会在/etc/kubernetes/ssl目录下生成kubelet-client.key、kubelet.crt、kubelet.key3个文件,用于系统对kubelet服务的认证。
使用如下命令查看服务状态,状态正常,输出如图18所示:
图18查看服务状态
3. Viewing and approving the csr request
Note: the following operations are performed on the master node.
Log in to a terminal on the master node and run the following command:
# kubectl get csr
The output shows the request in Pending state, as shown in Figure 19:
Figure 19 Viewing the csr request state
Approve the request with the following command:
# kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve
执行成功,会有如图20所示的输出
图20执行结果
Check the request state again with the following command:
# kubectl get csr
As shown in Figure 21, the state changes to Approved,Issued, which means the request has been approved:
图21查看请求批准后的状态
7.8.3在node节点上部署Kube-proxy
1.配置kube-proxy使用LVS
First install the LVS packages on every node machine; they are used to implement load balancing:
# yum install -y ipvsadm ipset conntrack
2. Creating the kube-proxy certificate request file
在master节点的/etc/kubernetes/ssl目录下编写证书请求文件kube-proxy-csr.json,写入以下内容:
{
"CN":"system:kube-proxy",
"hosts":[],
"key":{
"algo":"rsa",
"size":4096
},
"names":[
{
"C":"CN",
"ST":"SD",
"L":"QD",
"O":"k8s",
"OU":"System"
}
]
}
3. Generating the key and certificate files
In a master terminal, generate the key and certificate files with the following command:
# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
    -ca-key=/etc/kubernetes/ssl/ca-key.pem \
    -config=/etc/kubernetes/ssl/ca-config.json \
    -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
This generates three files: kube-proxy.csr, kube-proxy-key.pem and kube-proxy.pem.
4. Copying the key and certificate to all node machines
In a master terminal, copy the generated key and certificate files to the /etc/kubernetes/ssl directory of the two node machines:
# scp kube-proxy.pem kube-proxy-key.pem node1:/etc/kubernetes/ssl
# scp kube-proxy.pem kube-proxy-key.pem node2:/etc/kubernetes/ssl
5. Creating the kube-proxy configuration file
In a master terminal, create the kube-proxy configuration file with the following command:
# kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://192.168.2.84:6443 \
    --kubeconfig=kube-proxy.kubeconfig
If the command succeeds, it prints:
Cluster "kubernetes" set.
6. Creating the kube-proxy user
In a master terminal, create the kube-proxy user with the following command:
# kubectl config set-credentials kube-proxy \
    --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
    --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
If the command succeeds, it prints:
User "kube-proxy" set.
7. Setting the user's context association
In a master terminal, set the context association for the kube-proxy user:
# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
If the command succeeds, it prints:
Context "default" created.
8. Making this the default context
In a master terminal, make the kube-proxy association the default context:
# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
If the command succeeds, it prints:
Switched to context "default".
9. Copying the configuration file to the node machines
In a master terminal, copy kube-proxy.kubeconfig to the /etc/kubernetes/cfg directory of the node machines and of the master:
# scp kube-proxy.kubeconfig node1:/etc/kubernetes/cfg
# scp kube-proxy.kubeconfig node2:/etc/kubernetes/cfg
# cp kube-proxy.kubeconfig /etc/kubernetes/cfg
10. Creating the kube-proxy service
Run the following command on every node to create the kube-proxy working directory:
# mkdir /var/lib/kube-proxy
在所有节点的/usr/lib/systemd/system目录编辑kube-proxy.service文件,写入以下内容:
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/bin/kube-proxy\
--bind-address=192.168.2.81\
--hostname-override=192.168.2.81\
--kubeconfig=/etc/kubernetes/cfg/kube-proxy.kubeconfig\
--masquerade-all\
--feature-gates=SupportIPVSProxyMode=true\
--proxy-mode=ipvs\
--ipvs-min-sync-period=5s\
--ipvs-sync-period=5s\
--ipvs-scheduler=rr\
--logtostderr=true\
--v=2\
--log-dir=/var/kubernetes/log
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start the kube-proxy service and make sure it starts automatically after a reboot; on each node, --bind-address and --hostname-override must first be set to that node's own IP address:
# systemctl start kube-proxy
# systemctl enable kube-proxy
Check the service status with the following command:
# systemctl status kube-proxy
服务如果能正常启动,会有如图22所示的输出:
图22查看服务状态
11. Checking the LVS state
In a terminal on any node, check the LVS state with the following command:
# ipvsadm -L -n
如果状态正常,会有如图23所示的输出
图23LVS的输出
12. Checking that the node deployment has taken effect
Run the following command on the master node to check whether the node deployment has taken effect:
# kubectl get node
If the deployment is working, the output is as shown in Figure 24:
图24查看Node节点部署情况
如果输出正常,说明node节点部署已经正确完成。
7.9部署Flannel网络
7.9.1生成Flannel的密钥和证书
在master节点的终端,进入/etc/kubernetes/ssl目录,编辑flanneld-csr.json文件,并写入以下内容:
{
"CN":"flanneld",
"hosts":[],
"key":{
"algo":"rsa",
"size":4096
},
"names":[
{
"C":"CN",
"ST":"SD",
"L":"QD",
"O":"k8s",
"OU":"System"
}
]
}
In a master terminal, generate the key and certificate with the following command:
# cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
    -ca-key=/etc/kubernetes/ssl/ca-key.pem \
    -config=/etc/kubernetes/ssl/ca-config.json \
    -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
If the command succeeds, it generates three files: flanneld.csr, flanneld-key.pem and flanneld.pem.
In a master terminal, copy flanneld-key.pem and flanneld.pem to the /etc/kubernetes/ssl directory of the node machines:
# scp flanneld-key.pem flanneld.pem node1:/etc/kubernetes/ssl
# scp flanneld-key.pem flanneld.pem node2:/etc/kubernetes/ssl
7.9.2下载软件
On the master node, create the download directory and enter it:
# mkdir /home/software/flannel
# cd /home/software/flannel
Download the software with the following command:
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
When the download finishes, the current directory contains a file named flannel-v0.10.0-linux-amd64.tar.gz; extract it:
# tar xzvf flannel-v0.10.0-linux-amd64.tar.gz
This yields two files, flanneld and mk-docker-opts.sh. Copy them to the /usr/bin directory of every node:
# cp flanneld mk-docker-opts.sh /usr/bin
# scp flanneld mk-docker-opts.sh node1:/usr/bin
# scp flanneld mk-docker-opts.sh node2:/usr/bin
Then enter the /home/software/kubernetes/cluster/centos/node/bin directory and copy the remove-docker0.sh file in it to the /usr/bin directory of the master node and of the two node machines:
# cp remove-docker0.sh /usr/bin
# scp remove-docker0.sh node1:/usr/bin
# scp remove-docker0.sh node2:/usr/bin
Create a cni directory under /home/software:
# mkdir /home/software/cni
Then enter this directory and run the following command to download the CNI plugins:
# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
7.9.3配置Flannel
1.编辑配置文件
In the /etc/kubernetes/cfg directory, edit the flannel file and write the following content:
FLANNEL_ETCD="-etcd-endpoints=https://192.168.2.84:2379,https://192.168.2.81:2379,https://192.168.2.82:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/etc/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/etc/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/etc/kubernetes/ssl/flanneld-key.pem"
Then copy the flannel file to the same directory on the two node machines:
# scp flannel node1:/etc/kubernetes/cfg
# scp flannel node2:/etc/kubernetes/cfg
2. Creating the service
在/usr/lib/systemd/system编辑flannel.service文件,写入以下内容:
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service
[Service]
EnvironmentFile=-/etc/kubernetes/cfg/flannel
ExecStartPre=/usr/bin/remove-docker0.sh
ExecStart=/usr/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/usr/bin/mk-docker-opts.sh-d/run/flannel/docker
Type=notify
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
After editing, copy flannel.service to the same directory on the two node machines:
# scp /usr/lib/systemd/system/flannel.service node1:/usr/lib/systemd/system
# scp /usr/lib/systemd/system/flannel.service node2:/usr/lib/systemd/system
7.9.4集成Flannel和CNI
Enter the /home/software/cni directory and extract the previously downloaded cni-plugins-amd64-v0.7.1.tgz file:
# tar xzvf cni-plugins-amd64-v0.7.1.tgz
Move the cni-plugins-amd64-v0.7.1.tgz file to the parent directory, then copy all files in the current directory to the /usr/bin directory of this node and of the other two node machines:
# cp * /usr/bin
# scp * node1:/usr/bin
# scp * node2:/usr/bin
On the master node, create the /kubernetes/network directory:
# mkdir /kubernetes/network
Then run the following command to write the Flannel network configuration into Etcd:
# etcdctl --ca-file /etc/kubernetes/ssl/ca.pem \
    --cert-file /etc/kubernetes/ssl/flanneld.pem \
    --key-file /etc/kubernetes/ssl/flanneld-key.pem \
    --no-sync -C https://192.168.2.84:2379,https://192.168.2.81:2379,https://192.168.2.82:2379 \
    mk /kubernetes/network/config '{"Network":"80.2.0.0/16","Backend":{"Type":"vxlan","VNI":1}}' >/dev/null 2>&1
On every node, start the flannel service and make sure it starts automatically after a reboot:
# systemctl start flannel
# systemctl enable flannel
Check the service status with the following command:
# systemctl status flannel
如果服务状态正常,会有如图24所示的输出:
图24查看服务状态
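If the service is running, each node should now have a flannel.1 interface with an address from the 80.2.0.0/16 range, and the options file written by mk-docker-opts.sh should exist; a quick check:
# ip -4 addr show flannel.1
# cat /run/flannel/docker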
7.9.5配置Docker使用Flannel
Docker使用Flannel,需要修改两个Node节点/usr/lib/systemd/system目录下的docker.service文件。
修改完的文件内容如下:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service
[Service]
Type=notify
EnvironmentFile=-/run/flannel/docker
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
# TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
On every node, reload the Docker service file and restart the Docker service:
# systemctl daemon-reload
# systemctl restart docker
Then check the status of the Docker service:
# systemctl status docker
如果状态正常,输出如图25所示:
图25查看服务状态
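To confirm that Docker is really using the Flannel network, compare the docker0 address with the flannel.1 address on the same node; both should fall inside 80.2.0.0/16:
# ip -4 addr show docker0
# ip -4 addr show flannel.1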
到此安装完毕。
7.10使用Kubectl命令
Before starting, load the required images into the system (for the experiment the images can be exported from an existing system and imported here, or downloaded directly from the Internet).
7.10.1上传镜像文件
1.上传文件
Create the /home/images directory on the two node machines:
# mkdir /home/images
There are four image files: centssh.tar, nginx.tar, pause-amd64.tar and centos.tar. If they are stored on a Windows system, upload them to the two node machines with WinSCP; if they are stored on a Linux system, send them with scp.
2. Importing the images
Then import the images with the following commands:
# docker load < centssh.tar
# docker load < nginx.tar
# docker load < pause-amd64.tar
# docker load < centos.tar
After the import finishes, list the images with the following command:
# docker images
输出如图26所示:
图26查看镜像
7.10.2常用子命令
1.run
In a terminal on the master node, create a Deployment with the following command:
# kubectl run mynginx --image=nginx --replicas=2
If the command is entered correctly, it prints:
deployment.apps "mynginx" created
2. get
In a master terminal, list the Deployments that have been created:
# kubectl get deployment
As shown in Figure 27, the Deployment mynginx has been created.
Figure 27 Creating the Deployment
The Pods can be listed with the following command:
# kubectl get pod
The output is shown in Figure 28:
Figure 28 Listing the Pods
Detailed information about the Deployment can also be shown:
# kubectl get deployment mynginx -o wide
The output is shown in Figure 29:
Figure 29 Detailed information about the Deployment
Detailed information about the Pods can be shown as well:
# kubectl get pod -o wide
The output is shown in Figure 30:
Figure 30 Detailed information about the Pods
3. describe
Show detailed information about the Deployment with the following command:
# kubectl describe deployment mynginx
Or show detailed information about a Pod:
# kubectl describe pod mynginx-7f77c9fb4c-vp677
The output is too long to reproduce here; refer to the output produced during the experiment.
4. expose
A service can be exposed on top of the existing Deployment with the following command:
# kubectl expose deployment mynginx --port=80
When it completes, it prints:
service "mynginx" exposed
Then list the newly created Service:
# kubectl get service
The output is shown in Figure 31:
图31查看Service
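Because mynginx is a plain ClusterIP Service, it can also be exercised with curl from any node; <CLUSTER-IP> below is a placeholder for the address shown by kubectl get service (curl is assumed to be installed):
# curl http://<CLUSTER-IP>:80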
5. delete
Delete the Service with the delete subcommand:
# kubectl delete service mynginx
When it completes, it prints:
service "mynginx" deleted
Then list the Services again with the get subcommand:
# kubectl get service
Delete the Deployment with the delete subcommand:
# kubectl delete deployment mynginx
The command prints:
deployment.extensions "mynginx" deleted
Listing again with the get subcommand then shows:
No resources found.
6. create
Create the /home/yaml directory and, in it, create the ngin-pod.yaml file with the following content:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myng
  name: myng
spec:
  containers:
  - image: nginx
    name: myng
    ports:
    - containerPort: 80
      protocol: TCP
Then create the Pod with the following command:
# kubectl create -f ngin-pod.yaml
The output is:
pod "myng" created
Then list the Pods:
# kubectl get pod
The output is shown in Figure 32:
Figure 32 Listing the Pods
Delete this Pod with the following command:
# kubectl delete pod myng
The output is:
pod "myng" deleted
7.10.3管理Pod
1.ExecAction方式使用探针
编写ExecProbe.yaml文件,写入以下内容:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myng
  name: myng
spec:
  containers:
  - image: nginx
    name: myng
    args:
    - /bin/bash
    - -c
    - echo alive > /tmp/liveness; sleep 10; rm -rf /tmp/liveness; sleep 30
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/liveness
      initialDelaySeconds: 15
      periodSeconds: 5
Then create the Pod with the following command:
# kubectl create -f ExecProbe.yaml
If creation succeeds, it prints:
pod "myng" created
The LivenessProbe in this example runs cat /tmp/liveness to decide whether the container is healthy. After the Pod starts, it creates /tmp/liveness, deletes the file 10 seconds later, and then sleeps for 30 seconds. The probe only starts checking 15 seconds after the container starts, so it finds the file missing and the check fails (Fail); kubelet therefore kills the container and restarts it. As shown in Figure 33, the Pod is restarted three times within about three minutes.
图33Pod被重启
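The restarts can also be watched directly from the master node; the describe output ends with the probe-failure events:
# kubectl get pod myng -w
# kubectl describe pod myng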
2. Using a probe with TCPSocketAction
编辑TCPProbe.yaml文件,写入以下内容:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myng
  name: myng
spec:
  containers:
  - image: nginx
    name: myng
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 20
      periodSeconds: 10
编辑TCPProbe1.yaml文件,写入以下内容:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myng1
  name: myng1
spec:
  containers:
  - image: nginx
    name: myng1
    livenessProbe:
      tcpSocket:
        port: 90
      initialDelaySeconds: 20
      periodSeconds: 10
Create the two Pods with the following commands:
# kubectl create -f TCPProbe1.yaml
# kubectl create -f TCPProbe.yaml
Because the probe cannot connect to port 90 (nginx only listens on port 80), Pod myng1 is restarted repeatedly, as shown in Figure 34:
图34查看Pod
7.10.4管理Service
在进行Service负载均衡验证的时候,使用之前导入的centssh镜像。
在Master节点编辑NodePort.yaml文件,写入以下内容:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myssh
  labels:
    app: myssh
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myssh
  template:
    metadata:
      labels:
        app: myssh
    spec:
      containers:
      - name: myssh
        image: centssh:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 2022
        command:
        - /usr/sbin/sshd
        - -D
After saving the file, create the Deployment in a terminal on the master node:
# kubectl create -f NodePort.yaml
If creation succeeds, it prints:
deployment.extensions "myssh" created
Then check the result with the following commands:
# kubectl get deployment
# kubectl get pod -o wide
The result is shown in Figure 35:
图35查看创建结果
然后创建ssh-svc.yaml文件,写入以下内容:
apiVersion: v1
kind: Service
metadata:
  name: myssh
spec:
  type: NodePort
  selector:
    app: myssh
  ports:
  - protocol: TCP
    port: 2022
    targetPort: 2022
    nodePort: 32022
Then create the Service with the following command:
# kubectl create -f ssh-svc.yaml
If creation succeeds, it prints:
service "myssh" created
Check the result with the following command:
# kubectl get service
As shown in Figure 36, the externally reachable port is mapped to 32022:
图36查看创建结果
Log in to the containers repeatedly with the following command, exiting after each login:
# ssh 192.168.2.84 -p 32022
After several consecutive logins, you can see that the logins alternate between the two containers, as shown in Figure 37:
图37测试登录
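The round-robin behaviour can also be observed on the LVS side; on the node you connected to, the following check lists the two Pod endpoints behind the NodePort:
# ipvsadm -L -n -t 192.168.2.84:32022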