2. Deploying Ceph with ceph-deploy

1. Lab environment
OS version: Ubuntu 18.04.5 LTS
Kernel version: 4.15.0-112-generic
Ceph version: pacific/16.2.5
Host layout:
#Deployment server: ceph-deploy
172.168.32.101/10.0.0.101 ceph-deploy
#Two ceph-mgr management servers
172.168.32.102/10.0.0.102 ceph-mgr01
172.168.32.103/10.0.0.103 ceph-mgr02
#Three servers as Ceph mon (monitor) nodes; each can reach the cluster network

172.168.32.104/10.0.0.104 ceph-mon01 ceph-mds01
172.168.32.105/10.0.0.105 ceph-mon02 ceph-mds02
172.168.32.106/10.0.0.106 ceph-mon03 ceph-mds03
#Four servers as Ceph OSD storage nodes; each has two networks (public for client access, cluster for internal management and data replication) and three or more data disks
172.168.32.107/10.0.0.107 ceph-node01
172.168.32.108/10.0.0.108 ceph-node02
172.168.32.109/10.0.0.109 ceph-node03
172.168.32.110/10.0.0.110 ceph-node04
#Disk layout (per OSD node)
#/dev/sdb /dev/sdc /dev/sdd /dev/sde #50G each
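Before deployment it is worth confirming that each OSD node actually exposes the data disks listed above; a quick check (device names per the layout above) might be:

```shell
# List the data disks; sdb-sde should each show up as 50G disks of TYPE disk.
lsblk -d -o NAME,SIZE,TYPE /dev/sdb /dev/sdc /dev/sdd /dev/sde
```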
2. System environment initialization
1) Switch all nodes to the Tsinghua mirror
cat >/etc/apt/sources.list<<EOF
# Source-package entries are commented out to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
EOF
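After rewriting the source list, refresh the package index so the new mirror takes effect (requires root and network access):

```shell
# Reload apt metadata from the newly configured mirror.
apt update
```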
2) Install common packages on all nodes
apt install -y iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev gcc openssh-server
3) Kernel parameters on all nodes
cat >/etc/sysctl.conf <<EOF
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Controls the default maximum size of a message queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
# TCP kernel parameters
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
# socket buffer
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 20480
net.core.optmem_max = 81920
# TCP conn
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
# tcp conn reuse
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1
# keepalive conn
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001 65000
# swap
vm.overcommit_memory = 0
vm.swappiness = 10
#net.ipv4.conf.eth1.rp_filter = 0
#net.ipv4.conf.lo.arp_ignore = 1
#net.ipv4.conf.lo.arp_announce = 2
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2
EOF
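The settings above do not take effect until loaded. A minimal way to apply them (as root, assuming the file was written as shown) is:

```shell
# The net.bridge.bridge-nf-call-* keys only exist once br_netfilter is
# loaded; without it, sysctl reports "No such file or directory" for them.
modprobe br_netfilter
# Validate and load all parameters from the file.
sysctl -p /etc/sysctl.conf
```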
4) File descriptor and process limits on all nodes
cat > /etc/security/limits.conf <<EOF
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000
EOF
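limits.conf applies only to new login sessions; after logging in again, the effective values can be checked with the ulimit builtin:

```shell
ulimit -n   # max open files (nofile)
ulimit -u   # max user processes (nproc)
ulimit -l   # max locked memory in KiB (memlock)
```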
5) Time synchronization on all nodes
#Install cron and start it
apt install cron -y
systemctl status cron.service
#Sync the clock once; the source omits the NTP server, so substitute a reachable one for <NTP_SERVER>
/usr/sbin/ntpdate <NTP_SERVER> &> /dev/null && hwclock -w
#Re-sync every 5 minutes
echo "*/5 * * * * /usr/sbin/ntpdate <NTP_SERVER> &> /dev/null && hwclock -w" >> /var/spool/cron/crontabs/root
6) /etc/hosts configuration on all nodes
cat >>/etc/hosts<<EOF
172.168.32.101 ceph-deploy
172.168.32.102 ceph-mgr01
172.168.32.103 ceph-mgr02
172.168.32.104 ceph-mon01 ceph-mds01
172.168.32.105 ceph-mon02 ceph-mds02
172.168.32.106 ceph-mon03 ceph-mds03
172.168.32.107 ceph-node01
172.168.32.108 ceph-node02
172.168.32.109 ceph-node03
172.168.32.110 ceph-node04
EOF
7) Install Python 2 on all nodes
Ceph initialization requires Python 2.7:
apt install python2.7 -y
ln -sv /usr/bin/python2.7 /usr/bin/python2
3. Ceph deployment
1) Configure the Ceph apt repository on all nodes and import the release key
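The original does not show the repository commands. A sketch for Ubuntu 18.04 (bionic) and the pacific release, assuming the Tsinghua mirror used elsewhere in this document (adjust the URL for a different mirror), might look like:

```shell
# Import the Ceph release key (assumes outbound HTTPS access).
wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | apt-key add -
# Add the pacific repository for bionic and refresh the package index.
echo "deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic main" > /etc/apt/sources.list.d/ceph.list
apt update
```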
2) Create a deployment user on all nodes and allow it to run privileged commands:
A dedicated regular user is recommended for deploying and running the Ceph cluster; it only needs to run privileged commands non-interactively through sudo. Recent ceph-deploy releases accept any user that can run commands, root included, but a regular user such as ceph, cephuser or cephadmin is still preferred for managing the cluster.

#The ceph-common package (installed later) changes the home directory of the ceph user, so use a different user such as cephadmin
groupadd -r -g 2021 cephadmin && useradd -r -m -s /bin/bash -u 2021 -g 2021 cephadmin && echo cephadmin:123456 | chpasswd
#Allow the cephadmin user to run privileged commands
echo "cephadmin ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
3) Configure passwordless SSH login:
On the ceph-deploy node, enable non-interactive login to every node/mon/mgr host: generate a key pair as the cephadmin user on the ceph-deploy node, then push the public key to the cephadmin user on each managed node.

#(1) Generate an SSH key pair
cephadmin@ceph-deploy:/tmp$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Oq0Vh0Do/VUVklh3U58XNgNNkIfCIPAiXFw+ztEhbqM cephadmin@ceph-deploy
The key's randomart image is:
+---[RSA 2048]----+
| .+++ oooo=@O+|
| . oo+ + ooo=.+B|
| + o.O . .. ..o|
| o B.+.. .|
| E +S.. |
| o.o |
| o o |
| + |
| . |
+----[SHA256]-----+
#(2) Install sshpass
cephadmin@ceph-deploy:/tmp$ sudo apt install sshpass
#(3) Key-distribution script, run as cephadmin on the ceph-deploy node
cat >/tmp/ssh_fenfa.sh<<'EOF'
#!/bin/bash
#Target host list
IP="
172.168.32.101
172.168.32.102
172.168.32.103
172.168.32.104
172.168.32.105
172.168.32.106
172.168.32.107
172.168.32.108
172.168.32.109
172.168.32.110"
for node in ${IP};do
sshpass -p 123456 ssh-copy-id -o StrictHostKeyChecking=no cephadmin@${node} &> /dev/null
if [ $? -eq 0 ];then
echo "${node} ----> key distribution succeeded"
else
echo "${node} ----> key distribution failed"
fi
done
EOF
#(4) Run the script to distribute the keys
cephadmin@ceph-deploy:/tmp$ bash ssh_fenfa.sh
172.168.32.101 ----> key distribution succeeded
172.168.32.102 ----> key distribution succeeded
172.168.32.103 ----> key distribution succeeded
172.168.32.104 ----> key distribution succeeded
172.168.32.105 ----> key distribution succeeded
172.168.32.106 ----> key distribution succeeded
172.168.32.107 ----> key distribution succeeded
172.168.32.108 ----> key distribution succeeded
172.168.32.109 ----> key distribution succeeded
172.168.32.110 ----> key distribution succeeded
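To confirm the keys really work, a quick non-interactive spot check (hostnames as configured in /etc/hosts; add the remaining hosts as needed) could be:

```shell
# BatchMode=yes makes ssh fail immediately instead of prompting
# for a password if key authentication is broken.
for host in ceph-mon01 ceph-node01; do
  ssh -o BatchMode=yes cephadmin@${host} hostname
done
```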
4) Install the ceph-deploy tool on the ceph-deploy node
cephadmin@ceph-deploy:~# sudo apt-cache madison ceph-deploy
ceph-deploy | 2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main amd64 Packages
ceph-deploy | 2.0.1 | https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific bionic/main i386 Packages
ceph-deploy | 1.5.38-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe amd64 Packages
ceph-deploy | 1.5.38-0ubuntu1 | https://mirrors.tuna.tsinghua.edu.cn/ubuntu bionic/universe i386 Packages
cephadmin@ceph-deploy:~# sudo apt install ceph-deploy
5) Initialize the mon node
Initialize the mon node from the admin node.
cephadmin@ceph-deploy:~$ mkdir ceph-cluster #holds the cluster initialization files
cephadmin@ceph-deploy:~$ cd ceph-cluster/
cephadmin@ceph-deploy:~/ceph-cluster$
Only ceph-mon01 is initialized at this stage; ceph-mon02 and ceph-mon03 are added manually after the cluster deployment is complete.
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy new --cluster-network 10.0.0.0/16 --public-network 172.168.0.0/16 ceph-mon01
#The initialization output for ceph-mon01 follows
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new --cluster-network 10.0.0.0/16 --public-network 172.168.0.0/16 ceph-mon01
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f25a37afe10>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['ceph-mon01']
[ceph_deploy.cli][INFO ] func : <function new at 0x7f25a0a64ad0>
[ceph_deploy.cli][INFO ] public_network : 172.168.0.0/16
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : 10.0.0.0/16
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-mon01][DEBUG ] connected to host: ceph-deploy
[ceph-mon01][INFO ] Running command: ssh -CT -o BatchMode=yes ceph-mon01
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO ] will connect again with password prompt
The authenticity of host 'ceph-mon01 (172.168.32.104)' can't be established.
ECDSA key fingerprint is SHA256:AIDN3qa9QKjViElHDrtTXhJ5EpTXdWj5Sc3tiy91E4Y.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-mon01' (ECDSA) to the list of known hosts.
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph_deploy.new][INFO ] adding public keys to authorized_keys
[ceph-mon01][DEBUG ] append contents to file
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph-mon01][INFO ] Running command: sudo /bin/ip link show
[ceph-mon01][INFO ] Running command: sudo /bin/ip addr show
[ceph-mon01][DEBUG ] IP addresses found: [u'172.168.32.104', u'10.0.0.104']
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon01
[ceph_deploy.new][DEBUG ] Monitor ceph-mon01 at 172.168.32.104
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-mon01']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'172.168.32.104']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
Verify the initialization result:
cephadmin@ceph-deploy:~/ceph-cluster$ ll
total 16
drwxrwxr-x 2 cephadmin cephadmin 75 Aug 30 10:48 ./
drwxr-xr-x 6 cephadmin cephadmin 157 Aug 30 10:25 ../
-rw-rw-r-- 1 cephadmin cephadmin 264 Aug 30 10:48 ceph.conf #generated configuration file
-rw-r--r-- 1 cephadmin cephadmin 6207 Aug 30 10:48 ceph-deploy-ceph.log #initialization log
-rw------- 1 cephadmin cephadmin 73 Aug 30 10:48 ceph.mon.keyring #keyring used for internal mon authentication
cephadmin@ceph-deploy:~/ceph-cluster$ cat ceph.conf
[global]
fsid = c31ea2e3-47f7-4247-9d12-c0bf8f1dfbfb #Ceph cluster ID
public_network = 172.168.0.0/16
cluster_network = 10.0.0.0/16
mon_initial_members = ceph-mon01 #additional mon nodes can be added, separated by commas
mon_host = 172.168.32.104
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
6) Initialize the ceph-node nodes
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node01 ceph-node02 ceph-node03 ceph-node04
#This installs the required Ceph packages on the listed nodes one server at a time; with --no-adjust-repos the repositories configured earlier are used as-is
#......
[ceph-node03][DEBUG ] The following additional packages will be installed:
[ceph-node03][DEBUG ] ceph-base ceph-common ceph-mgr ceph-mgr-modules-core libaio1 libbabeltrace1
[ceph-node03][DEBUG ] libcephfs2 libdw1 libgoogle-perftools4 libibverbs1 libjaeger libjs-jquery
[ceph-node03][DEBUG ] libleveldb1v5 liblttng-ust-ctl4 liblttng-ust0 liblua5.3-0 libnl-route-3-200
[ceph-node03][DEBUG ] liboath0 librabbitmq4 librados2 libradosstriper1 librbd1 librdkafka1
[ceph-node03][DEBUG ] librdmacm1 librgw2 libsnappy1v5 libtcmalloc-minimal4 liburcu6
[ceph-node03][DEBUG ] python-pastedeploy-tpl python3-bcrypt python3-bs4 python3-ceph-argparse
[ceph-node03][DEBUG ] python3-ceph-common python3-cephfs python3-cherrypy3 python3-dateutil
[ceph-node03][DEBUG ] python3-distutils python3-jwt python3-lib2to3 python3-logutils python3-mako
[ceph-node03][DEBUG ] python3-markupsafe python3-paste python3-pastedeploy python3-pecan
[ceph-node03][DEBUG ] python3-prettytable python3-rados python3-rbd python3-rgw
[ceph-node03][DEBUG ] python3-simplegeneric python3-singledispatch python3-tempita
[ceph-node03][DEBUG ] python3-waitress python3-webob python3-webtest python3-werkzeug
[ceph-node03][DEBUG ] Suggested packages:
[ceph-node03][DEBUG ] python3-influxdb python3-crypto python3-beaker python-mako-doc httpd-wsgi
[ceph-node03][DEBUG ] libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[ceph-node03][DEBUG ] python-waitress-doc python-webob-doc python-webtest-doc ipython3
[ceph-node03][DEBUG ] python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[ceph-node03][DEBUG ] Recommended packages:
[ceph-node03][DEBUG ] ntp | time-daemon ceph-fuse ceph-mgr-dashboard ceph-mgr-diskprediction-local
[ceph-node03][DEBUG ] ceph-mgr-k8sevents ceph-mgr-cephadm nvme-cli smartmontools ibverbs-providers
[ceph-node03][DEBUG ] javascript-common python3-lxml python3-routes python3-simplejson
[ceph-node03][DEBUG ] python3-pastescript python3-pyinotify
[ceph-node03][DEBUG ] The following NEW packages will be installed:
[ceph-node03][DEBUG ] ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mgr-modules-core ceph-mon
[ceph-node03][DEBUG ] ceph-osd libaio1 libbabeltrace1 libcephfs2 libdw1 libgoogle-perftools4
[ceph-node03][DEBUG ] libibverbs1 libjaeger libjs-jquery libleveldb1v5 liblttng-ust-ctl4
[ceph-node03][DEBUG ] liblttng-ust0 liblua5.3-0 libnl-route-3-200 liboath0 librabbitmq4 librados2
[ceph-node03][DEBUG ] libradosstriper1 librbd1 librdkafka1 librdmacm1 librgw2 libsnappy1v5
[ceph-node03][DEBUG ] libtcmalloc-minimal4 liburcu6 python-pastedeploy-tpl python3-bcrypt
[ceph-node03][DEBUG ] python3-bs4 python3-ceph-argparse python3-ceph-common python3-cephfs
[ceph-node03][DEBUG ] python3-cherrypy3 python3-dateutil python3-distutils python3-jwt
[ceph-node03][DEBUG ] python3-lib2to3 python3-logutils python3-mako python3-markupsafe
[ceph-node03][DEBUG ] python3-paste python3-pastedeploy python3-pecan python3-prettytable
[ceph-node03][DEBUG ] python3-rados python3-rbd python3-rgw python3-simplegeneric
[ceph-node03][DEBUG ] python3-singledispatch python3-tempita python3-waitress python3-webob
[ceph-node03][DEBUG ] python3-webtest python3-werkzeug radosgw
#......
7) Configure the mon nodes and generate/distribute keys
Install the ceph-mon package on each mon node, then initialize the mon nodes; additional mon nodes can still be added later to scale out for HA.

root@ceph-mon01:~# apt install ceph-mon
root@ceph-mon02:~# apt install ceph-mon
root@ceph-mon03:~# apt install ceph-mon
Initialize the mon nodes from the ceph-deploy node
cephadmin@ceph-deploy:~/ceph-cluster$ ceph-deploy mon create-initial
#The mon node initialization output follows
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbee5b13fa0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fbee5af7ad0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon01
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon01 ...
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Ubuntu 18.04 bionic
[ceph-mon01][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon01][DEBUG ] get remote short hostname
[ceph-mon01][DEBUG ] deploying mon to ceph-mon01
[ceph-mon01][DEBUG ] get remote short hostname
[ceph-mon01][DEBUG ] remote hostname: ceph-mon01
[ceph-mon01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon01][DEBUG ] create the mon path if it does not exist
[ceph-mon01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon01/done
[ceph-mon01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon01/done
[ceph-mon01][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon01.mon.keyring
[ceph-mon01][DEBUG ] create the monitor keyring file
[ceph-mon01][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph-mon01 --keyring /var/lib/ceph/tmp/ceph-ceph-mon01.mon.keyring --setuser 64045 --setgroup 64045
[ceph-mon01][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon01.mon.keyring
[ceph-mon01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon01][DEBUG ] create the init path if it does not exist
[ceph-mon01][INFO ] Running command: sudo systemctl enable ceph.target
[ceph-mon01][INFO ] Running command: sudo systemctl enable ceph-mon@ceph-mon01
[ceph-mon01][WARNIN] Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon01.service → /lib/systemd/system/ceph-mon@.service.
[ceph-mon01][INFO ] Running command: sudo systemctl start ceph-mon@ceph-mon01
[ceph-mon01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph-mon01][DEBUG ] ********************************************************************************
[ceph-mon01][DEBUG ] status for monitor: mon.ceph-mon01
[ceph-mon01][DEBUG ] {
[ceph-mon01][DEBUG ] "election_epoch": 3,
[ceph-mon01][DEBUG ] "extra_probe_peers": [],
[ceph-mon01][DEBUG ] "feature_map": {
[ceph-mon01][DEBUG ] "mon": [
[ceph-mon01][DEBUG ] {
[ceph-mon01][DEBUG ] "features": "0x3f01cfb9fffdffff",
[ceph-mon01][DEBUG ] "num": 1,
[ceph-mon01][DEBUG ] "release": "luminous"
[ceph-mon01][DEBUG ] }
[ceph-mon01][DEBUG ] ]
[ceph-mon01][DEBUG ] },
[ceph-mon01][DEBUG ] "features": {
[ceph-mon01][DEBUG ] "quorum_con": "4540138297136906239",
[ceph-mon01][DEBUG ] "quorum_mon": [
[ceph-mon01][DEBUG ] "kraken",
[ceph-mon01][DEBUG ] "luminous",
[ceph-mon01][DEBUG ] "mimic",
[ceph-mon01][DEBUG ] "osdmap-prune",
[ceph-mon01][DEBUG ] "nautilus",
[ceph-mon01][DEBUG ] "octopus",
[ceph-mon01][DEBUG ] "pacific",
[ceph-mon01][DEBUG ] "elector-pinging"
[ceph-mon01][DEBUG ] ],
[ceph-mon01][DEBUG ] "required_con": "2449958747317026820",
[ceph-mon01][DEBUG ] "required_mon": [
[ceph-mon01][DEBUG ] "kraken",
[ceph-mon01][DEBUG ] "luminous",
[ceph-mon01][DEBUG ] "mimic",
[ceph-mon01][DEBUG ] "osdmap-prune",
[ceph-mon01][DEBUG ] "nautilus",
[ceph-mon01][DEBUG ] "octopus",
[ceph-mon01][DEBUG ] "pacific",
[ceph-mon01][DEBUG ] "elector-pinging"
[ceph-mon01][DEBUG ] ]
[ceph-mon01][DEBUG ] },
[ceph-mon01][DEBUG ] "monmap": {
[ceph-mon01][DEBUG ] "created": "2021-08-30T03:27:46.534560Z",
[ceph-mon01][DEBUG ] "disallowed_leaders: ": "",
[ceph-mon01][DEBUG ] "election_strategy": 1,
[ceph-mon01][DEBUG ] "epoch": 1,
[ceph-mon01][DEBUG ] "features": {
[ceph-mon01][DEBUG ] "optional": [],
[ceph-mon01][DEBUG ] "persistent": [
[ceph-mon01][DEBUG ] "kraken",
[ceph-mon01][DEBUG ] "luminous",
[ceph-mon01][DEBUG ] "mimic",
[ceph-mon01][DEBUG ] "osdmap-prune",
[ceph-mon01][DEBUG ] "nautilus",
[ceph-mon01][DEBUG ] "octopus",
[ceph-mon01][DEBUG ] "pacific",
[ceph-mon01][DEBUG ] "elector-pinging"
[ceph-mon01][DEBUG ] ]
[ceph-mon01][DEBUG ] },
[ceph-mon01][DEBUG ] "fsid": "c31ea2e3-47f7-4247-9d12-c0bf8f1dfbfb",
[ceph-mon01][DEBUG ] "min_mon_release": 16,
[ceph-mon01][DEBUG ] "min_mon_release_name": "pacific",
[ceph-mon01][DEBUG ] "modified": "2021-08-30T03:27:46.534560Z",
[ceph-mon01][DEBUG ] "mons": [
[ceph-mon01][DEBUG ] {
[ceph-mon01][DEBUG ] "addr": "172.168.32.104:6789/0",
[ceph-mon01][DEBUG ] "crush_location": "{}",
[ceph-mon01][DEBUG ] "name": "ceph-mon01",
[ceph-mon01][DEBUG ] "priority": 0,
[ceph-mon01][DEBUG ] "public_addr": "172.168.32.104:6789/0",
[ceph-mon01][DEBUG ] "public_addrs": {
[ceph-mon01][DEBUG ] "addrvec": [
[ceph-mon01][DEBUG ] {
[ceph-mon01][DEBUG ] "addr": "172.168.32.104:3300",
[ceph-mon01][DEBUG ] "nonce": 0,
[ceph-mon01][DEBUG ] "type": "v2"
[ceph-mon01][DEBUG ] },
[ceph-mon01][DEBUG ] {
[ceph-mon01][DEBUG ] "addr": "172.168.32.104:6789",
[ceph-mon01][DEBUG ] "nonce": 0,
[ceph-mon01][DEBUG ] "type": "v1"
[ceph-mon01][DEBUG ] }
[ceph-mon01][DEBUG ] ]
[ceph-mon01][DEBUG ] },
[ceph-mon01][DEBUG ] "rank": 0,
[ceph-mon01][DEBUG ] "weight": 0
[ceph-mon01][DEBUG ] }
[ceph-mon01][DEBUG ] ],
[ceph-mon01][DEBUG ] "stretch_mode": false
[ceph-mon01][DEBUG ] },
[ceph-mon01][DEBUG ] "name": "ceph-mon01",
[ceph-mon01][DEBUG ] "outside_quorum": [],
[ceph-mon01][DEBUG ] "quorum": [
[ceph-mon01][DEBUG ] 0
[ceph-mon01][DEBUG ] ],
[ceph-mon01][DEBUG ] "quorum_age": 2,
[ceph-mon01][DEBUG ] "rank": 0,
[ceph-mon01][DEBUG ] "state": "leader",
[ceph-mon01][DEBUG ] "stretch_mode": false,
[ceph-mon01][DEBUG ] "sync_provider": []
[ceph-mon01][DEBUG ] }
[ceph-mon01][DEBUG ] ********************************************************************************
[ceph-mon01][INFO ] monitor: mon.ceph-mon01 is running
[ceph-mon01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph-mon01
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] find the location of an executable
[ceph-mon01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph-mon01 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpCb4OB1
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph-mon01][DEBUG ] get remote short hostname
[ceph-mon01][DEBUG ] fetch remote file
[ceph-mon01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-mon01.asok mon_status
[ceph-mon01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.admin
[ceph-mon01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-mds
[ceph-mon01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-mgr
[ceph-mon01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-osd
[ceph-mon01][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-mon01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpCb4OB1
8) Verify the mon node
Verify that the ceph-mon service has been installed and started automatically on the mon node. The initialization directory on the ceph-deploy node now also contains bootstrap keyring files for the mds/mgr/osd/rgw services; these keyrings carry the highest privileges on the cluster, so keep them safe.

root@ceph-mon01:~# ps -ef|grep ceph-mon
ceph 8304 1 0 22:43 ? 00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon01 --setuser ceph --setgroup ceph
9) Distribute the admin key to the node hosts
From the ceph-deploy node, copy the configuration file and admin keyring to every node in the cluster that will run ceph management commands, so that the keyring does not have to be specified on every subsequent ceph command.
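The section breaks off here; the usual commands for this step (host names as used above) are roughly the following sketch:

```shell
# Push ceph.conf and the admin keyring to the target nodes.
ceph-deploy admin ceph-node01 ceph-node02 ceph-node03 ceph-node04
# On each node, allow the cephadmin user to read the keyring,
# which is root-only by default (requires the acl package;
# alternatively adjust ownership with chown).
sudo setfacl -m u:cephadmin:r /etc/ceph/ceph.client.admin.keyring
```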
