Ceph Installation and Deployment Documentation


OpenStack and Ceph Integration Installation Guide


Contents
1 Overview
2 Version Compatibility Table
3 System Architecture
3.1 Physical architecture
3.2 Logical architecture
3.3 OpenStack installation
3.4 Ceph installation
3.4.1 IP plan
3.4.2 Installation steps
3.5 Installing the Ceph client on the controller and compute nodes
3.6 Configuring glance on the controller node to use Ceph
3.7 Configuring cinder on the controller node to use Ceph
3.8 Configuring nova on the compute node to use Ceph

1 Overview
This document describes how to configure the glance, cinder, and nova components of OpenStack to use Ceph as their storage backend.

2 Version Compatibility Table

3 System Architecture
3.1 Physical architecture (figure: Ceph node1, Ceph node2, Ceph node3)
3.2 Logical architecture (figure)
3.3 OpenStack installation
OpenStack is installed with Zhao Zigu's automated deployment, as a 3-node deployment.

3.4 Ceph installation
3.4.1 IP plan
3.4.2 Installation steps
1. Set the hostnames of the three machines to ceph148, ceph149, and ceph150 respectively.
2. Edit /etc/hosts on all three machines with the following content:
192.168.1.148 ceph148
192.168.1.149 ceph149
192.168.1.150 ceph150
3. Copy ceph.zip to /home/ceph and unzip it, which produces the ceph and deploy directories.

4.编辑/etc/yum.repos.d/ceph.repo文件内容如下:[ceph-noarch]name=Ceph noarch packagesbaseurl=file:///home/ceph/cephenabled=1gpgcheck=0[ceph-deply]name=Ceph deploy packagesbaseurl=file:///home/ceph/deployenabled=1gpgcheck=05.三个节点增加相互信任:ceph148上执行:ssh-keygenssh-copy-id ceph148ssh-copy-id ceph149ssh-copy-id ceph150ceph149上执行:ssh-keygenssh-copy-id ceph148ssh-copy-id ceph150ceph150上执行:ssh-keygenssh-copy-id ceph148ssh-copy-id ceph1496.三个节点均关闭selinux和防火墙:service iptables stopchkconfig iptables off将/etc/sysconfig/selinux中SELINUX= enforcing改为SELINUX=disabled重启机器reboot7.安装ceph,三台机器均执行如下命令:yum install ceph -y8.在ceph148上执行如下命令安装ceph-deploy:yum install ceph-deploy -y9.执行如下命令:cd /etc/cephceph-deploy new ceph148 ceph149 ceph15010.部署mon节点,执行如下命令:ceph-deploy mon create ceph148 ceph149 ceph150ceph-deploy gatherkeys ceph148 //收集密钥11.部署osd节点,执行如下命令:ceph-deploy osd prepare ceph148:/dev/sdb ceph148:/dev/sdc ceph149:/dev/sdb ceph149:/dev/sdc ceph150:/dev/sdb ceph150:/dev/sdc12.如果有需要,部署mds,执行如下命令:ceph-deploy mds create ceph148 ceph149 ceph15013.重启服务/etc/init.d/ceph -a restart14.查看ceph状态是否正常:ceph -s显示如下:cluster 4fa8cb32-fea1-4d68-a341-ebddab2f3e0fhealth HEALTH_WARN clock skew detected on mon.ceph150monmap e2: 3 mons at {ceph148=192.168.1.148:6789/0,ceph149=192.168.1.149:6789/0,ceph150=192.168.1.150:6 789/0}, election epoch 8, quorum 0,1,2 ceph148,ceph149,ceph150osdmap e41: 6 osds: 6 up, 6 inpgmap v76: 192 pgs, 3 pools, 0 bytes data, 0 objects215 MB used, 91878 MB / 92093 MB avail192 active+clean15.配置148为ntp的server,其他节点定时向148同步时间3.5controller节点和compute节点安ceph客户端(不需要,在openstack上执行ceph --version能看到版本表示ceph已经安装)1.执行如下命令rpm --import 'https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc'rpm --import 'https:///git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'2.增加如下文件:vi /etc/yum.repos.d/ceph-extras内容如下:[ceph-extras]name=Ceph Extras Packagesbaseurl=/packages/ceph-extras/rpm/centos6/$basearchenabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc[ceph-extras-noarch]name=Ceph Extras noarchbaseurl=/packages/ceph-extras/rpm/centos6/noarchenabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc[ceph-extras-source]name=Ceph Extras Sourcesbaseurl=/packages/ceph-extras/rpm/centos6/SRPMSenabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc 3.添加ceph库rpm -Uvh /rpms/el6/noarch/ceph-release-1-0.el6.noarch.rpm 4.添加epel库rpm -Uvh /pub/epel/6/x86_64/epel-release-6-8.noarch.rpm 5.安装cephyum update -yyum install ceph -y3.6controller节点配置glance使用ceph1.将ceph148节点/etc/ceph目录下的两个文件拷贝到controller节点和compute节点cd /etc/ceph/scp ceph.conf ceph.client.admin.keyring 192.168.1.142:/etc/ceph/scp ceph.conf ceph.client.admin.keyring 192.168.1.140:/etc/ceph/2.修改ceph.client.admin.keyring的权限chmod +r /etc/ceph/ceph.client.admin.keyring3.在ceph148上创建glance的存储池rados mkpool glance4.编辑140上glance的配置文件/etc/glance/glance-api.conf中如下配置项rbd_store_ceph_conf = /etc/ceph/ceph.confdefault_store = rbdrbd_store_user = adminrbd_store_pool = glance5.重启glance-api进程/etc/init.d/openstack-glance-api restart6.测试上传本地镜像,首先将测试镜像cirros-0.3.2-x86_64-disk.img放到140的/home/,然后执行如下上传命令:glance image-create --name "cirros-0.3.2-x86_64-10" --disk-format qcow2 --container-format bare --is-public True --progress </home/cirros-0.3.2-x86_64-disk.img显示如下:[=============================>] 100%+------------------+--------------------------------------+| Property | Value |+------------------+--------------------------------------+| checksum | 
64d7c1cd2b6f60c92c14662941cb7913 || container_format | bare || created_at | 2014-09-16T08:15:46 || deleted | False || deleted_at | None || disk_format | qcow2 || id | 49a71de0-0842-4a7a-b756-edfcb0b86153 || is_public | True || min_disk | 0 || min_ram | 0 || name | cirros-0.3.2-x86_64-10 || owner | 3636a6e92daf4991beb64643bc145fab || protected | False || size | 13167616 || status | active || updated_at | 2014-09-16T08:15:51 || virtual_size | None |+------------------+--------------------------------------+7.查看上传的镜像glance image-list显示如下:+--------------------------------------+------------------------+-------------+------------------+----------+--------+| ID | Name | Disk Format | Container Format | Size | Status |+--------------------------------------+------------------------+-------------+------------------+----------+--------+| 49a71de0-0842-4a7a-b756-edfcb0b86153 | cirros-0.3.2-x86_64-10 | qcow2 | bare | 13167616 | active |+--------------------------------------+------------------------+-------------+------------------+----------+--------+8.测试网页上传镜像,在网页上传一个镜像,然后查看镜像文件glance image-list显示如下:+--------------------------------------+------------------------+-------------+------------------+----------+--------+| ID | Name | Disk Format | Container Format | Size | Status |+--------------------------------------+------------------------+-------------+------------------+----------+--------+| da28a635-2336-4603-a596-30879f4716f4 | asdadada | qcow2 | bare | 13167616 | active || 49a71de0-0842-4a7a-b756-edfcb0b86153 | cirros-0.3.2-x86_64-10 | qcow2 | bare | 13167616 | active |+--------------------------------------+------------------------+-------------+------------------+----------+--------+9.查看ceph中glance池中的对象:rbd ls glance显示如下:49a71de0-0842-4a7a-b756-edfcb0b86153da28a635-2336-4603-a596-30879f4716f43.7controller节点配置cinder使用ceph1.在ceph148上创建cinder的存储池rados mkpool cinder2.编辑140上cinder的配置文件/etc/cinder/cinder.conf中如下配置项volume_driver = cinder.volume.drivers.rbd.RBDDriverrbd_pool=cinderrbd_user=adminrbd_ceph_conf=/etc/ceph/ceph.conf3.重启/etc/init.d/openstack-cinder-volume进程/etc/init.d/openstack-cinder-volume restart4.命令行创建一个1G的磁盘cinder create --display-name dev1 1显示如下:cinderlist+---------------------+--------------------------------------+| Property | Value |+---------------------+--------------------------------------+| attachments | [] || availability_zone | nova || bootable | false || created_at | 2014-09-16T08:48:50.367976 || display_description | None || display_name | dev1 || encrypted | False || id | 1d8f3416-fb15-44a9-837f-7724a9034b1e || metadata | {} || size | 1 || snapshot_id | None || source_volid | None || status | creating || volume_type | None |+---------------------+--------------------------------------+5.查看创建的磁盘状态cinder list显示如下:+--------------------------------------+----------+--------------+------+-------------+----------+-------------+| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |+--------------------------------------+----------+--------------+------+-------------+----------+-------------+| 1d8f3416-fb15-44a9-837f-7724a9034b1e | creating | dev1 | 1 | None | false | |+--------------------------------------+----------+--------------+------+-------------+----------+-------------+界面创建一个2G磁盘6.查看创建的磁盘状态cinder list显示如下:+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to 
|+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+| 1d8f3416-fb15-44a9-837f-7724a9034b1e | available | dev1 | 1 | None | false | || e53efe68-5d3b-438d-84c1-fa4c68bd9582 | available | dev2 | 2 | None | false | |+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+10.查看ceph中cinder池中的对象:rbd ls cinder显示如下:volume-1d8f3416-fb15-44a9-837f-7724a9034b1evolume-e53efe68-5d3b-438d-84c1-fa4c68bd95823.8compute节点配置nova使用ceph1.升级libvirt1.1.0,参考《qemu-libvirt更新步骤.doct》2.编译qemu-1.6.1,参考《qemu-libvirt更新步骤.doct》3.在ceph148上创建nova的存储池rados mkpool nova4.生成一个uuiduuidgen显示如下:c245e1ef-d340-4d02-9dcf-fd091cd1fe475.执行如下命令cat > secret.xml <<EOF<secret ephemeral='no' private='no'><uuid>c245e1ef-d340-4d02-9dcf-fd091cd1fe47</uuid><usage type='ceph'><name>client.cinder secret</name></usage></secret>EOFvirsh secret-define --file secret.xml显示如下:Secret c245e1ef-d340-4d02-9dcf-fd091cd1fe47 created6.执行如下命令:cat /etc/ceph/ceph.client.admin.keyring显示如下:[client.admin]key = AQAXrRdU8O7uHRAAvYit51h4Dgiz6jkAtq8GLA==7.将“AQAXrRdU8O7uHRAAvYit51h4Dgiz6jkAtq8GLA==”放到一个临时文件echo "AQAXrRdU8O7uHRAAvYit51h4Dgiz6jkAtq8GLA==" > key8.执行如下命令:virsh secret-set-value --secret c245e1ef-d340-4d02-9dcf-fd091cd1fe47 --base64 $(cat key)9.编辑142上nova的配置文件/etc/nova/nova.conf中如下配置项images_type=rbdimages_rbd_pool=novaimages_rbd_ceph_conf=/etc/ceph/ceph.confrbd_user=adminrbd_secret_uuid=c245e1ef-d340-4d02-9dcf-fd091cd1fe47cpu_mode=none7.重启/etc/init.d/openstack-nova-compute进程/etc/init.d/openstack-nova-compute restart10.界面上创建虚拟机,在142上执行如下命令查看虚拟机状态nova list显示如下:+--------------------------------------+-------+--------+------------+-------------+--------------------+| ID | Name | Status | Task State | Power State | Networks |+--------------------------------------+-------+--------+------------+-------------+--------------------+| 445e9242-628a-4178-bb10-2d4fd82d042f | adaaa | ACTIVE | - | Running | intnet=10.10.10.15 |+--------------------------------------+-------+--------+------------+-------------+--------------------+11.查看ceph中nova池中的对象:rbd ls nova显示如下:445e9242-628a-4178-bb10-2d4fd82d042f_disk4操作测试截图4.1云硬盘快照从云硬盘dev3创建云硬盘快照4.2云硬盘快照创建云硬盘4.3挂载快照创建出来的云硬盘。
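As a quick cross-check after section 3.8, the commands below can confirm that an instance's disk really lives in the Ceph nova pool. This is a minimal sketch, not part of the original procedure; the instance ID 445e9242-628a-4178-bb10-2d4fd82d042f is taken from the nova list output above, and the admin keyring is assumed to be present on the compute node.

rbd ls nova                                                # list objects in the nova pool
rbd info nova/445e9242-628a-4178-bb10-2d4fd82d042f_disk    # size, order, block_name_prefix of the instance disk
rados df                                                   # per-pool usage for the glance/cinder/nova pools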

II. Deploying Ceph with ceph-deploy


1. Lab environment
OS version: Ubuntu 18.04.5 LTS
Kernel: 4.15.0-112-generic
Ceph version: pacific/16.2.5
Host allocation:
# deployment server (ceph-deploy)
172.168.32.101/10.0.0.101 ceph-deploy
# two ceph-mgr management servers
172.168.32.102/10.0.0.102 ceph-mgr01
172.168.32.103/10.0.0.103 ceph-mgr02
# three servers act as the Ceph cluster Mon (monitor) servers; each of them can reach the cluster network of the Ceph cluster.

172.168.32.104/10.0.0.104 ceph-mon01 ceph-mds01172.168.32.105/10.0.0.105 ceph-mon02 ceph-mds02172.168.32.106/10.0.0.106 ceph-mon03 ceph-mds03#四台服务器作为ceph 集群OSD 存储服务器,每台服务器⽀持两个⽹络,public ⽹络针对客户端访问,cluster ⽹络⽤于集群管理及数据同步,每台三块或以上的磁盘172.168.32.107/10.0.0.107 ceph-node01172.168.32.108/10.0.0.108 ceph-node02172.168.32.109/10.0.0.109 ceph-node03172.168.32.110/10.0.0.110 ceph-node04#磁盘划分#/dev/sdb /dev/sdc /dev/sdd /dev/sde #50G2、系统环境初始化1)所有节点更换为清华源cat >/etc/apt/source.list<<EOF# 默认注释了源码镜像以提⾼ apt update 速度,如有需要可⾃⾏取消注释deb https:///ubuntu/ bionic main restricted universe multiverse# deb-src https:///ubuntu/ bionic main restricted universe multiversedeb https:///ubuntu/ bionic-updates main restricted universe multiverse# deb-src https:///ubuntu/ bionic-updates main restricted universe multiversedeb https:///ubuntu/ bionic-backports main restricted universe multiverse# deb-src https:///ubuntu/ bionic-backports main restricted universe multiversedeb https:///ubuntu/ bionic-security main restricted universe multiverse# deb-src https:///ubuntu/ bionic-security main restricted universe multiverseEOF2)所有节点安装常⽤软件apt install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev ntpdate tcpdump telnet traceroute gcc openssh-server lrzsz tree openssl libssl-dev libpcre3 libp 3)所有节点的内核配置cat >/etc/sysctl.conf <<EOF# Controls source route verificationnet.ipv4.conf.default.rp_filter = 1net.ipv4.ip_nonlocal_bind = 1net.ipv4.ip_forward = 1# Do not accept source routingnet.ipv4.conf.default.accept_source_route = 0# Controls the System Request debugging functionality of the kernelkernel.sysrq = 0# Controls whether core dumps will append the PID to the core filename.# Useful for debugging multi-threadedapplications. 
kernel.core_uses_pid = 1# Controls the use of TCP syncookiesnet.ipv4.tcp_syncookies = 1# Disable netfilter on bridges.net.bridge.bridge-nf-call-ip6tables = 0net.bridge.bridge-nf-call-iptables = 0net.bridge.bridge-nf-call-arptables = 0# Controls the default maxmimum size of a mesage queuekernel.msgmnb = 65536# # Controls the maximum size of a message, in byteskernel.msgmax = 65536# Controls the maximum shared segment size, in byteskernel.shmmax = 68719476736# # Controls the maximum number of shared memory segments, in pageskernel.shmall = 4294967296# TCP kernel paramaternet.ipv4.tcp_mem = 786432 1048576 1572864net.ipv4.tcp_rmem = 4096 87380 4194304net.ipv4.tcp_wmem = 4096 16384 4194304 net.ipv4.tcp_window_scaling = 1net.ipv4.tcp_sack = 1# socket buffernet.core.wmem_default = 8388608net.core.rmem_default = 8388608net.core.rmem_max = 16777216net.core.wmem_max = 16777216dev_max_backlog = 262144net.core.somaxconn = 20480net.core.optmem_max = 81920# TCP connnet.ipv4.tcp_max_syn_backlog = 262144net.ipv4.tcp_syn_retries = 3net.ipv4.tcp_retries1 = 3net.ipv4.tcp_retries2 = 15# tcp conn reusenet.ipv4.tcp_timestamps = 0net.ipv4.tcp_tw_reuse = 0net.ipv4.tcp_tw_recycle = 0net.ipv4.tcp_fin_timeout = 1net.ipv4.tcp_max_tw_buckets = 20000net.ipv4.tcp_max_orphans = 3276800net.ipv4.tcp_synack_retries = 1net.ipv4.tcp_syncookies = 1# keepalive connnet.ipv4.tcp_keepalive_time = 300net.ipv4.tcp_keepalive_intvl = 30net.ipv4.tcp_keepalive_probes = 3net.ipv4.ip_local_port_range = 10001 65000# swapvm.overcommit_memory = 0vm.swappiness = 10#net.ipv4.conf.eth1.rp_filter = 0#net.ipv4.conf.lo.arp_ignore = 1#net.ipv4.conf.lo.arp_announce = 2#net.ipv4.conf.all.arp_ignore = 1#net.ipv4.conf.all.arp_announce = 2EOF4)所有节点的⽂件权限配置cat > /etc/security/limits.conf <<EOFroot soft core unlimitedroot hard core unlimitedroot soft nproc 1000000root hard nproc 1000000root soft nofile 1000000root hard nofile 1000000root soft memlock 32000root hard memlock 32000root soft msgqueue 8192000root hard msgqueue 8192000* soft core unlimited* hard core unlimited* soft nproc 1000000* hard nproc 1000000* soft nofile 1000000* hard nofile 1000000* soft memlock 32000* hard memlock 32000* soft msgqueue 8192000* hard msgqueue 8192000EOF5)所有节点的时间同步配置#安装cron并启动apt install cron -ysystemctl status cron.service#同步时间/usr/sbin/ntpdate &> /dev/null && hwclock -w#每5分钟同步⼀次时间echo "*/5 * * * * /usr/sbin/ntpdate &> /dev/null && hwclock -w" >> /var/spool/cron/crontabs/root6)所有节点/etc/hosts配置cat >>/etc/hosts<<EOF172.168.32.101 ceph-deploy172.168.32.102 ceph-mgr01172.168.32.103 ceph-mgr02172.168.32.104 ceph-mon01 ceph-mds01172.168.32.105 ceph-mon02 ceph-mds02172.168.32.106 ceph-mon03 ceph-mds03172.168.32.107 ceph-node01172.168.32.108 ceph-node02172.168.32.109 ceph-node03172.168.32.110 ceph-node04EOF7)所有节点安装python2做ceph初始化时,需要python2.7apt install python2.7 -yln -sv /usr/bin/python2.7 /usr/bin/python23、ceph部署1)所有节点配置ceph yum 仓库,并导⼊key2)所有节点创建ceph⽤户,并允许ceph ⽤户以执⾏特权命令:推荐使⽤指定的普通⽤户部署和运⾏ceph 集群,普通⽤户只要能以⾮交互⽅式执⾏命令执⾏⼀些特权命令即可,新版的ceph-deploy 可以指定包含root 的在内只要可以执⾏命令的⽤户,不过仍然推荐使⽤普通⽤户,⽐如ceph、cephuser、cephadmin 这样的⽤户去管理ceph 集群。
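Step 2) above recommends a dedicated non-root deployment user, but the commands themselves are cut off here. A minimal sketch of what that step usually looks like on Ubuntu 18.04, assuming the user is named cephadmin with a throw-away password (both the name and the password are assumptions, not taken from the original):

useradd -m -s /bin/bash cephadmin
echo 'cephadmin:ceph123' | chpasswd                                  # placeholder password, change it
echo 'cephadmin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/cephadmin  # non-interactive sudo for ceph-deploy
chmod 0440 /etc/sudoers.d/cephadmin

Run this on every node, then distribute the cephadmin user's SSH public key from the ceph-deploy node so that ceph-deploy can log in non-interactively.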

Ceph Installation Document 1


ceph安装文档[root@server231 ~]# cat /etc/hosts172.16.7.231 server231vi ceph.sh#!/bin/bashifeval "rpm -q ceph"; thenecho "ceph is install"elseecho "ceph is install"fiyum install ceph -yrm -rf /var/lib/ceph/mon/*rm -rf /var/lib/ceph/osd/*mkdir -p /var/lib/ceph/mon/ceph-server231export hosts=`ifconfig ens192 |awk NR==2 |grep 'inet'|awk '{print $2}'` echo $hostsexport a=`uuidgen`echo $a# 修改配置文件tee /etc/ceph/ceph.conf<< EOFfsid = $amon initial members = server231mon host = $hostsauth cluster required = cephxauth service required = cephxauth client required = cephxosd journal size = 1024file store xattr use omap = trueosd pool default size = 2osd pool default min size = 1osd pool default pgnum = 128osd pool default pgpnum = 128osd crush chooseleaf type = 1mon_pg_warn_max_per_osd = 1000 #(消除ceph-warn的配置)mon clock drift allowed = 2mon clock drift warn backoff = 30[osd.0]host = server231#addr = 172.16.7.xxx:6789osd data = /var/lib/ceph/osd/ceph-0keyring = /var/lib/ceph/osd/ceph-0/keyring[osd.1]host = server231#addr = 172.16.7.xxx:6789osd data = /var/lib/ceph/osd/ceph-1keyring = /var/lib/ceph/osd/ceph-1/keyring[mds.server231]host = server231EOF# 生成密钥ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *' # 创建adminceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin--set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'# 加入密钥ceph.mon.keyringceph-authtool/etc/ceph/ceph.mon.keyring --import-keyring/etc/ceph/ceph.client.admin.keyring# 创建monitor mapmonmaptool --create --add server231 $hosts --fsid $a --clobber /etc/ceph/monmap# 初始化monceph-mon --mkfs -i server231 --monmap /etc/ceph/monmap --keyring/etc/ceph/ceph.mon.keyring# 创建系统启动文件touch /var/lib/ceph/mon/ceph-server231/donetouch /var/lib/ceph/mon/ceph-server231/sysvinit# 启动mon/etc/init.d/ceph start mon.server231# 创建idcephosd createmkdir -p /var/lib/ceph/osd/ceph-0#mkdir -p /var/lib/ceph/osd/ceph-1# 初始化OSDceph-osd -i 0 --mkfs --mkkey# 注册此OSDcephauth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring #节点加入CRUSH MAPcephosd crush add-bucket server231 host#节点放入default根cephosd crush move server231 root=default# 分配权重、重新编译、注入集群cephosd crush add osd.0 1.0 host=server231# 创建系统启动文件touch /var/lib/ceph/osd/ceph-0/sysvinit# 启动osd/etc/init.d/ceph start osd.0# osd1# 创建idcephosd create#echo $bmkdir -p /var/lib/ceph/osd/ceph-1# 初始化OSDceph-osd -i 1 --mkfs --mkkey# 注册此OSDcephauth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring #节点加入CRUSH MAPcephosd crush add-bucket server231 host#节点放入default根cephosd crush move server231 root=default# 分配权重、重新编译、注入集群cephosd crush add osd.1 1.0 host=server231# 创建系统启动文件touch /var/lib/ceph/osd/ceph-1/sysvinit# 启动osd/etc/init.d/ceph start osd.1#创建mds目录mkdir -p /var/lib/ceph/mds/ceph-server231#注册MDScephauth get-or-create mds.server231 mds 'allow' osd 'allow rwx' mon 'allow profile mds' -o/var/lib/ceph/mds/ceph-server231/keyring#启动mds并设置系统自动启动touch /var/lib/ceph/mds/ceph-server231/sysvinit/etc/init.d/ceph start mds.server2312.ceph 挂载磁盘的方法mount -t ceph server101,server102:6789:/ /cephtest -o name=admin,secret=AQA/lH FZbI/dIxAA974lXbpOiopB3k0ilHEtAA==。
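The script above brings up one monitor, two OSDs, and an MDS on server231. A short verification sketch that can be run after the script completes; these are standard Ceph commands, not part of the original script:

ceph -s         # overall cluster status and health
ceph osd tree   # osd.0 and osd.1 should both show up under host server231
ceph mds stat   # the MDS registered for server231
ceph df         # pool and raw capacity usage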

Ceph 0.94 Installation Manual


Ceph094.6安装-2016一、安装环境4台虚拟机,1台deploy,1台mon,2台osd,操作系统rhel71Ceph-adm 192.168.16.180Ceph-mon 192.168.16.181Ceph-osd1 192.168.16.182Ceph-osd2 192.168.16.183二、安装预环境1、配置主机名/ip地址(所有主机单独执行)hostnamectl set-hostname 主机名修改/etc/sysconfig/network-scripts/ifcfg-eno*IPADDR/NETMASK/GATEWAY2、adm节点上生成节点列表,/etc/ceph/cephlist.txt192.168.16.180192.168.16.181192.168.16.182192.168.16.1833、在adm上编辑/etc/hosts4、adm利用脚本同步所有节点的/etc/hosts[root@ceph-adm ceph]# cat /etc/ceph/rsync_hosts.shWorkDir=/etc/cephfor ip in $(cat ${WorkDir}/cephlist.txt);do echo -----$ip-----;rsync -avp --delete /etc/hosts $ip:/etc/;done5、所有主机生成ssh-key,并所有主机都执行id的copyssh-keygen -t rsassh-copy-id root@ceph-admssh-copy-id root@ceph-monssh-copy-id root@ceph-osd1ssh-copy-id root@ceph-osd26、adm上执行A、同步创建/etc/ceph目录[root@ceph-adm ceph]# cat mkdir_workdir.shWorkDir=/etc/cephfor ip in $(cat ${WorkDir}/cephlist.txt);do echo -----$ip-----;ssh root@$ip mkdir -p /etc/ceph ;doneb、同步关闭防火墙[root@ceph-adm ceph]# cat close_firewall.sh#!/bin/shset -xWorkDir=/etc/cephfor ip in $(cat ${WorkDir}/cephlist.txt)do echo -----$ip-----ssh root@$ip "sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"ssh root@$ip setenforce 0ssh root@$ip "firewall-cmd --zone=public --add-port=6789/tcp --permanent"ssh root@$ip "firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent"ssh root@$ip "firewall-cmd --reload"donec、所有脚本执行系统优化,打开文件限制[root@ceph-adm ceph]# cat system_optimization.sh#!/bin/shset -xWorkDir=/etc/cephfor ip in $(cat ${WorkDir}/cephlist.txt)do echo -----$ip-----ssh root@$ip "sed -i 's/4096/102400/' /etc/security/limits.d/20-nproc.conf"ssh root@$ip "cat /etc/rc.local | grep "ulimit -SHn 102400" || echo "ulimit -SHn 102400" >>/etc/rc.local"doned、编辑wty_project.repo和wty_rhel7.repo文件,并同步到所有节点[root@ceph-adm ceph]# cat rsync_repo.shWorkDir=/etc/cephfor ip in $(cat ${WorkDir}/cephlist.txt);do echo -----$ip-----;rsync -avp --delete /etc/ceph/*.repo$ip:/etc/yum.repos.d/;donee、安装ceph以及必要的rpm包[root@ceph-adm ceph]# cat ceph_install.sh#!/bin/shset -xWorkDir=/etc/cephfor ip in $(cat ${WorkDir}/cephlist.txt)do echo -----$ip-----ssh root@$ip "yum install redhat-lsb -y"ssh root@$ip "yum install ceph -y"done三、deploy安装,在adm节点1、deploy安装cd /etc/ceph/yum install ceph-deploy -y2、初始化集群[root@ceph-adm ceph]# ceph-deploy new ceph-mon3、安装集群ceph软件包(与二e的那一步有点重复,但是还是有需要的包要安装例如fcgi/ceph-radosgw)Ceph-deploy Install ceph-adm ceph-mon ceph-osd1 ceph-osd24、添加初始monitor节点和收集秘钥[root@ceph-adm ceph]# ceph-deploy mon create-initial5、osd节点操作A、osd1/osd2各增加2块100G硬盘B、adm节点操作ceph-deploy disk zap ceph-osd1:sdb ceph-osd1:sdc ceph-osd2:sdb ceph-osd2:sdcceph-deploy osd create ceph-osd1:sdb ceph-osd1:sdc ceph-osd2:sdb ceph-osd2:sdcceph –sceph osd tree检测正常*disk zap操作对硬盘进行zero操作,*osd create操作合并了osd prepare/osd activate操作,但是挂载目录不能指定/var/lib/ceph/osd/ceph-X6、将所有节点加为admin,使节点可以运行ceph的所有命令Ceph-deploy admin ceph-adm ceph-mon ceph-osd1 ceph-osd2。
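The note under step 5 points out that `ceph-deploy osd create` merges the prepare and activate stages and always mounts the data under /var/lib/ceph/osd/ceph-X. As a hedged sketch (hammer-era ceph-deploy syntax; the directory path is an example, not from the original), the two stages can also be run separately, and a pre-mounted directory can be used instead of a raw disk when you need to control where the data lives:

ceph-deploy disk zap ceph-osd1:sdb
ceph-deploy osd prepare ceph-osd1:sdb            # or: ceph-deploy osd prepare ceph-osd1:/ceph/osd0
ceph-deploy osd activate ceph-osd1:/dev/sdb1     # or: ceph-deploy osd activate ceph-osd1:/ceph/osd0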

Ceph Deployment Document V1.0


Contents
1. Hardware configuration
2. Deployment plan
   Standard-performance cluster
   High-performance cluster
3. Operating system installation
   Scenario
   Prerequisites
   Procedure
   System configuration
4. Preparing the installation environment
5. Installing Ceph - standard-performance cluster
   Installing the Monitor
   Installing the OSDs
   Initialization
   Adding & removing a Monitor (example)
   Adding & removing an OSD (example)
6. Installing Ceph - high-performance cluster
   Installing the Monitor
   Installing the OSDs
   Initialization
7. Common Ceph commands
   Checking status
   Object operations
   Snapshot operations
   Setting the number of replicas

1. Hardware configuration
One group of 10 servers has the following configuration:
Model: PowerEdge R730
CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz x2
Memory: 256 GB
Disks: 2 x 300 GB SAS, 6 x 400 GB SSD.

The other group of 10 servers has the following configuration:
Model: PowerEdge R730
CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz x2
Memory: 256 GB
Disks: 2 x 300 GB SAS, 1 x 400 GB SSD, 9 x 1 TB SATA.

2. Deployment plan
This plan groups the servers by disk performance: the 10 servers with nine 1 TB SATA disks each form one group, and the 10 servers with six SSDs each form another. Two separate Ceph clusters are deployed on them, referred to as the standard-performance cluster and the high-performance cluster.

Standard-performance cluster
1) The two 300 GB SAS disks are configured as RAID1 and used as the system disk.
2) The single 400 GB SSD is configured in non-RAID mode and split into nine 40 GB partitions with a GPT label; the partitions are not formatted with a filesystem and serve as the Ceph journal partitions.
3) The nine 1 TB SATA disks are each configured as a single-disk RAID0 and used as Ceph data partitions, again with a GPT label and no filesystem.

Each server therefore provides 9 OSDs.
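Step 2) above calls for splitting the 400 GB journal SSD into nine 40 GB GPT partitions without creating a filesystem. A minimal sketch of how that could be scripted, assuming the SSD shows up as /dev/sdb (the device name is an assumption, not from the original):

parted -s /dev/sdb mklabel gpt
for i in $(seq 0 8); do
    # each journal partition is 40 GB and is left unformatted
    parted -s /dev/sdb mkpart journal-$i $((i*40))GB $(((i+1)*40))GB
done
parted -s /dev/sdb print    # verify the nine partitions; adjust the first start value if parted warns about alignment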

1. 云柜 Ceph Test Environment Installation Document


Ceph is a new-generation free-software distributed file system originally designed by Sage Weil (co-founder of DreamHost) at the University of California, Santa Cruz, for his doctoral thesis.

After graduating in 2007, Sage began working on Ceph full time to make it suitable for production environments.

Ceph's main goal is to be a POSIX-based distributed file system with no single point of failure, so that data is fault-tolerant and replicated seamlessly.

In March 2010, Linus Torvalds merged the Ceph client into kernel 2.6.34.

For a detailed introduction to Ceph, see "Ceph: a Linux petabyte-scale distributed file system". This document is a detailed guide to deploying Ceph on CentOS 6.7.

Some websites have reposted this article; reposting is fine, but please include a link to the original and respect the author's work.

First, the deployment environment:
Hostname   IP            Role
tkvm01     10.1.166.75   test node
tkvm02     10.1.166.76   test node
dceph66    10.1.166.66   admin
dceph87    10.1.166.87   mon
dceph88    10.1.166.88   osd
dceph89    10.1.166.89   osd
dceph90    10.1.166.90   osd

The Ceph file system is mounted as a directory under /cephfs on the client cephclient and can be used like any ordinary directory.

1. 安装前准备(root用户)参考文档:/ceph/ceph4e2d658765876863/ceph-1/installation30105feb901f5b8988c53011/preflig ht3010988468c030111.1 在每台机添加hosts修改文件/etc/hosts(或者/etc/sysconfig/network),添加以下内容:10.201.26.121 ceph0110.201.26.122 ceph0210.201.26.123 ceph031.2 每个Ceph节点上创建一个用户# adduser ceph# passwd ceph密码统一设为:ceph1.3 在每个Ceph节点中为用户增加root 权限# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph# chmod 0440 /etc/sudoers.d/ceph1.4 关闭防火墙等服务# service iptables stop# chkconfig iptables off //关闭开机启动防火墙每台机器节点都需要修改/etc/selinux/config 文件将SELINUX=enforcing改为SELINUX=disabled2. CEPH部署设置(root用户操作)增加Ceph资料库至ceph-deploy 管理节点. 之后,安装ceph-deploy安装EPEL 软件源(单台机操作即可):# rpm -Uvh https:///pub/epel/6/x86_64/epel-release-6-8.noarch.rpm# yum update -y安装ceph依赖# rpm -Uvh /rpm-hammer/el6/noarch/ceph-release-1-0.el6.noarch.rpm# yum install ceph-deploy -y否则会出现以下问题:3. 数据节点磁盘挂载(root用户)在ceph01、ceph02、ceph03上分别挂载了一块20g大小的磁盘作为ceph的数据存储测试使用。
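The preflight above stops after preparing the repositories and the 20 GB data disks. A hedged sketch of how the actual ceph-deploy run typically continues from the admin node; the host names ceph01 to ceph03 come from the /etc/hosts step above, the device name /dev/sdb is an assumption, and hammer-era ceph-deploy syntax is assumed:

ceph-deploy new ceph01                        # ceph01 as the initial monitor
ceph-deploy install ceph01 ceph02 ceph03
ceph-deploy mon create-initial
ceph-deploy osd prepare ceph01:/dev/sdb ceph02:/dev/sdb ceph03:/dev/sdb
ceph-deploy osd activate ceph01:/dev/sdb1 ceph02:/dev/sdb1 ceph03:/dev/sdb1
ceph-deploy admin ceph01 ceph02 ceph03
ceph health                                   # should eventually report HEALTH_OK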

Ceph Installation and Configuration Notes


1. Environment
Note: when preparing the system environment, set the hostname on each node and disable iptables and SELinux (important).

Related packages:
ceph-0.61.2.tar.tar
libedit0-3.0-1.20090722cvs.el6.x86_64.rpm
libedit-devel-3.0-1.20090722cvs.el6.x86_64.rpm
snappy-1.0.5-1.el6.rf.x86_64.rpm
snappy-devel-1.0.5-1.el6.rf.x86_64.rpm
leveldb-1.7.0-2.el6.x86_64.rpm
leveldb-devel-1.7.0-2.el6.x86_64.rpm
btrfs-progs-0.19.11.tar.bz2
$src is the directory that holds the installation packages.

2. Kernel build and configuration:
cp /boot/config-2.6.32-279.el6.x86_64 /usr/src/linux-2.6.34.2/.config
make menuconfig       # build ceph as a module and enable the btrfs filesystem
make all              # on a multi-core machine, make -j8 speeds up the build
make modules_install
make install
Edit /etc/grub.conf and make the newly built linux-2.6.34.2 kernel the default boot kernel.

3. Ceph installation and configuration:
Install the dependency packages first:
rpm -ivh libedit0-3.0-1.20090722cvs.el6.x86_64.rpm --force
rpm -ivh libedit-devel-3.0-1.20090722cvs.el6.x86_64.rpm
rpm -ivh snappy-1.0.5-1.el6.rf.x86_64.rpm
rpm -ivh snappy-devel-1.0.5-1.el6.rf.x86_64.rpm
rpm -ivh leveldb-1.7.0-2.el6.x86_64.rpm
rpm -ivh leveldb-devel-1.7.0-2.el6.x86_64.rpm
Build and install Ceph:
./autogen.sh
./configure --without-tcmalloc --without-libatomic-ops
make
make install
Configure Ceph:
cp $src/ceph-0.61.2/src/sample.ceph.conf /usr/local/etc/ceph/ceph.conf
cp $src/ceph-0.61.2/src/init-ceph /etc/init.d/ceph
mkdir /var/log/ceph    # directory for Ceph logs
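The document stops after copying the sample ceph.conf and the init script. A hedged sketch of how the cluster would typically be started and checked once ceph.conf has been filled in for this host; these are standard commands for this Ceph generation, not taken from the original:

/etc/init.d/ceph -a start    # start the mon/osd/mds daemons defined in ceph.conf
ceph -s                      # cluster status
ceph health                  # should eventually report HEALTH_OK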

Ceph Installation and Deployment


Introduction to Ceph
Whether you want to provide Ceph object storage and/or Ceph block devices to a cloud platform, deploy a Ceph file system, or use Ceph for some other purpose, every Ceph storage cluster deployment starts with setting up the individual Ceph nodes, the network, and the Ceph storage cluster itself.

A Ceph storage cluster requires at least one Ceph Monitor and two OSD daemons.

When Ceph file system clients are also run, a Metadata Server (MDS) is required as well.

Ceph OSDs: a Ceph OSD daemon stores data, handles data replication, recovery, backfilling and rebalancing, and provides monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons.

When the storage cluster is configured with 2 replicas, at least 2 OSD daemons are needed for the cluster to reach the active+clean state (Ceph defaults to 3 replicas, but the replica count can be adjusted).
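As a hedged illustration of that last point, the replica count of an existing pool can be inspected and changed with the standard commands below; the pool name rbd is just an example, not taken from the original:

ceph osd pool get rbd size          # current replica count
ceph osd pool set rbd size 2        # keep two copies of every object
ceph osd pool set rbd min_size 1    # allow I/O with a single surviving copy

The defaults for new pools come from the osd pool default size / osd pool default min size settings in ceph.conf.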

Monitors: a Ceph Monitor maintains the maps that describe the cluster state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map.

Ceph keeps a history (called an epoch) of every state change that happens to the Monitors, OSDs, and PGs.

MDSs: a Ceph Metadata Server (MDS) stores metadata for the Ceph file system (that is, Ceph block devices and Ceph object storage do not use the MDS).

元数据服务器使得 POSIX ⽂件系统的⽤户们,可以在不对 Ceph 存储集群造成负担的前提下,执⾏诸如 ls、find 等基本命令Ceph组件:osdOSD 守护进程,⾄少两个⽤于存储数据、处理数据拷贝、恢复、回滚、均衡通过⼼跳程序向monitor提供部分监控信息⼀个ceph集群中⾄少需要两个OSD守护进程Ceph组件:mon维护集群的状态映射信息包括monitor、OSD、placement Group(PG)还维护了monitor、OSD和PG的状态改变历史信息Ceph组件:mgr(新功能)负责ceph集群管理,如pg map对外提供集群性能指标(如cpeh -s 下IO信息)具有web界⾯的监控系统(dashboard)ceph逻辑结构数据通过ceph的object存储到PG,PG在存储到osd daemon,osd对应diskobject只能对应⼀个pg⼀个raid可以对应⼀个osd⼀整块硬盘可以对应⼀个osd⼀个分区可以对应⼀个osdmonitor:奇数个 osd : ⼏⼗到上万,osd越多性能越好pg概念副本数crush规则(pg怎么找到osd acting set)⽤户及权限epoach:单调递增的版本号acting set: osd列表,第⼀个为primary osd,replicated osdup set :acting set过去的版本pg tmp:临时pg组osd状态:默认每2秒汇报⾃⼰给mon(同时监控组内osd,如300秒没有给mon汇报状态,则会把这个osd踢出pg组) up 可以提供iodown 挂掉了in 有数据out 没数据了ceph应⽤场景:通过tgt⽀持iscsi挂载公司内部⽂件共享海量⽂件,⼤流量,⾼并发需要⾼可⽤、⾼性能⽂件系统传统单服务器及NAS共享难以满⾜需求,如存储容量,⾼可⽤ceph⽣产环境推荐存储集群采⽤全万兆⽹络集群⽹络(不对外)与公共⽹络分离(使⽤不同⽹卡)mon、mds与osd分离部署在不同机器上journal推荐使⽤PCI SSD,⼀般企业级IOPS可达40万以上OSD使⽤SATA亦可根据容量规划集群⾄强E5 2620 V3或以上cpu,64GB或更⾼内存最后,集群主机分散部署,避免机柜故障(电源、⽹络)ceph安装环境由于机器较少,使⽤3台机器,充当mon与osd,⽣产环境不建议,⽣产环境⾄少3个mon独⽴主机IP⾓⾊配置ceph-0eth0:192.168.0.150(Public)eth1:172.16.1.100(Cluster)mon、osd、mgrDISK 0 15G(OS)DISK 110G(Journal)DISK 2 10G(OSD)DISK 3 10G(OSD)ceph-1eth0:192.168.0.151(Public)eth1:172.16.1.101(Cluster)mon、osd、mgrDISK 0 15G(OS)DISK 110G(Journal)DISK 2 10G(OSD)DISK 3 10G(OSD)ceph-2eth0:192.168.0.152(Public)eth1:172.16.1.102(Cluster)mon、osd、mgrDISK 0 15G(OS)DISK 110G(Journal)DISK 2 10G(OSD)DISK 3 10G(OSD)⼀、系统设置1.绑定主机名由于后续安装及配置都涉及到主机名,故此需先绑定依次在三个节点上执⾏以下命令完成hosts绑定[root@ceph-node0 ~]# cat /etc/hosts127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4::1 localhost localhost.localdomain localhost6 localhost6.localdomain6192.168.0.150 ceph-node0192.168.0.151 ceph-node1192.168.0.152 ceph-node22.ssh-keygen信任3. 每台关闭防⽕墙systemctl stop firewalld4.时间同步yum install -y ntpdate//ntpdate 5.安装epel源与ceph-deploy本步骤要在每⼀个节点上执⾏安装epel源wget -O /etc/yum.repos.d/epel.repo /repo/epel-7.repo安装ceph-deployrpm -ivh https:///ceph/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm 替换 ceph.repo 服务器sed -i 's#htt.*://#https:///ceph#g' /etc/yum.repos.d/ceph.repo 或直接复制下⽅⽂本内容替换 /etc/yum.repos.d/ceph.repo[Ceph]name=Ceph packages for $basearchbaseurl=https:///ceph/rpm-luminous/el7/$basearchenabled=1gpgcheck=1type=rpm-mdgpgkey=https:///ceph/keys/release.asc[Ceph-noarch]name=Ceph noarch packagesbaseurl=https:///ceph/rpm-luminous/el7/noarchenabled=1gpgcheck=1type=rpm-mdgpgkey=https:///ceph/keys/release.asc[ceph-source]name=Ceph source packagesbaseurl=https:///ceph/rpm-luminous/el7/SRPMSenabled=1gpgcheck=1type=rpm-mdgpgkey=https:///ceph/keys/release.asc6.安装ceph使⽤ yum 安装 ceph-deploy[root@ceph-node0 ~]# yum install -y ceph-deploy创建 ceph-install ⽬录并进⼊,安装时产⽣的⽂件都将在这个⽬录[root@ceph-node0 ~]# mkdir ceph-install && cd ceph-install[root@ceph-node0 ceph-install]#⼆、准备硬盘1.Journal磁盘本步骤要在每⼀个节点上执⾏在每个节点上为Journal磁盘分区, 分别为 sdb1, sdb2, 各⾃对应本机的2个OSD, journal磁盘对应osd的⼤⼩为25%使⽤ parted 命令进⾏创建分区操作[root@ceph-node0 ~]# parted /dev/sdbmklabel gptmkpart primary xfs 0% 50%mkpart primary xfs 50% 100%q2.OSD磁盘对于OSD磁盘我们不做处理,交由ceph-deploy进⾏操作三、安装ceph1.使⽤ceph-deploy安装ceph,以下步骤只在ceph-depoly管理节点执⾏创建⼀个ceph集群,也就是Mon,三台都充当mon[root@ceph-node0 ceph-install]# ceph-deploy new ceph-node0 ceph-node1 ceph-node2在全部节点上安装ceph[root@ceph-node0 ceph-install]# ceph-deploy install ceph-node0 ceph-node1 ceph-node2或在每个节点上⼿动执⾏ yum install -y ceph创建和初始化监控节点并收集所有的秘钥[root@ceph-node0 ceph-install]# ceph-deploy mon create-initial此时可以在osd节点查看mon端⼝创建OSD存储节点[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node0 --data /dev/sdc --journal /dev/sdb1[root@ceph-node0 ceph-install]# 
ceph-deploy osd create ceph-node0 --data /dev/sdd --journal /dev/sdb2[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node1 --data /dev/sdc --journal /dev/sdb1[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node1 --data /dev/sdd --journal /dev/sdb2[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node2 --data /dev/sdc --journal /dev/sdb1[root@ceph-node0 ceph-install]# ceph-deploy osd create ceph-node2 --data /dev/sdd --journal /dev/sdb2把配置⽂件和admin 秘钥到管理节点和ceph节点[root@ceph-0 ceph-install]# ceph-deploy --overwrite-conf admin ceph-node0 ceph-node1 ceph-node2使⽤ ceph -s 命令查看集群状态[root@ceph-node0 ceph-install]# ceph -scluster:id: e103fb71-c0a9-488e-ba42-98746a55778ahealth: HEALTH_WARNno active mgrservices:mon: 3 daemons, quorum ceph-node0,ceph-node1,ceph-node2mgr: no daemons activeosd: 6 osds: 6 up, 6indata:pools: 0 pools, 0 pgsobjects: 0 objects, 0Busage: 0B used, 0B / 0B availpgs:如集群正常则显⽰ health HEALTH_OK如OSD未全部启动,则使⽤下⽅命令重启相应节点, @ 后⾯为 OSD IDsystemctl start ceph-osd@02. 部署mgrluminous 版本需要启动 mgr, 否则 ceph -s 会有 no active mgr 提⽰官⽅⽂档建议在每个 monitor 上都启动⼀个 mgr[root@ceph-node0 ceph-install]# ceph-deploy mgr create ceph-node0:ceph-node0 ceph-node1:ceph-node1 ceph-node2:ceph-node2再次查看ceph状态[root@ceph-node0 ceph-install]# ceph -scluster:id: e103fb71-c0a9-488e-ba42-98746a55778ahealth: HEALTH_OKservices:mon: 3 daemons, quorum ceph-node0,ceph-node1,ceph-node2mgr: ceph-node0(active), standbys: ceph-node1, ceph-node2osd: 6 osds: 6 up, 6 indata:pools: 0 pools, 0 pgsobjects: 0 objects, 0Busage: 6.02GiB used, 54.0GiB / 60.0GiB availpgs:3.清除操作安装过程中如遇到奇怪的错误,可以通过以下步骤清除操作从头再来[root@ceph-node0 ceph-install]# ceph-deploy purge ceph-node0 ceph-node1 ceph-node2[root@ceph-node0 ceph-install]# ceph-deploy purgedata ceph-node0 ceph-node1 ceph-node2[root@ceph-node0 ceph-install]# ceph-deploy forgetkeys四、配置1. 为何要分离⽹络性能OSD 为客户端处理数据复制,复制多份时 OSD 间的⽹络负载势必会影响到客户端和 ceph 集群的通讯,包括延时增加、产⽣性能问题;恢复和重均衡也会显著增加公共⽹延时。
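The final subsection above explains why the public and cluster networks should be separated. A minimal ceph.conf sketch of that separation, using the address plan from the environment table earlier in this section; the /24 masks are assumptions based on those addresses:

[global]
public network  = 192.168.0.0/24    # client-facing traffic (eth0)
cluster network = 172.16.1.0/24     # replication and recovery traffic (eth1)

After changing ceph.conf, push it out with `ceph-deploy --overwrite-conf config push ceph-node0 ceph-node1 ceph-node2` and restart the daemons.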

Ceph 0.87 Installation and Usage Manual


Ceph0.87安装使用手册(v0.1)2015年3月28日目录第一部分:环境准备...................................................................................................................- 1 -一、准备.......................................................................................................................- 1 -二、在admin节点上安装ceph-deploy......................................................................- 1 -三、在每个机器上安装ntp ........................................................................................- 2 -四、在每个机器上安装openssh-server .....................................................................- 2 -五、在每个机器上创建user并给予sudo权限 ........................................................- 2 -六、设置从admin节点到其他三个节点的免密码登录...........................................- 3 -七、一些注意事项.......................................................................................................- 5 -第二部分:安装...........................................................................................................................- 5 -一、在wx-ceph-admin节点上创建目录....................................................................- 5 -二、清空配置(purge configuration) ............................................................................- 5 -三、安装.......................................................................................................................- 6 -四、配置OSD(目录做OSD)...................................................................................- 7 -五、配置OSD(硬盘做OSD)...................................................................................- 8 -六、配置文件拷贝.......................................................................................................- 9 -七、安装MDS........................................................................................................... - 10 -八、Ceph运行 .......................................................................................................... - 10 -第三部分:使用........................................................................................................................ - 11 -一、集群监控............................................................................................................ - 11 -二、用户管理............................................................................................................ - 14 -三、Pool(池)管理 ...................................................................................................... - 15 -第四部分:Cephfs..................................................................................................................... - 15 -一、创建ceph文件系统.......................................................................................... - 15 -二、挂载(mount) ceph文件系统............................................................................. - 16 -第一部分:环境准备一、准备预备1:修改每个机器里边的/etc/hosts文件,添加这些机器的ip例如:在wx-ceph-admin机器的/etc/hosts文件中,添加:172.16.100.46 wx-ceph-admin172.16.100.42 wx-ceph01172.16.100.44 wx-ceph02172.16.100.45 wx-ceph03二、在admin节点上安装ceph-deploy1、Add the release key:执行:wget -q -O- 'https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add –2、Add the Ceph packages to your repository. Replace {ceph-stable-release} with a stable Ceph release (e.g., cuttlefish, dumpling, emperor, firefly, etc.). 
For example:echo deb /debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list执行:echo deb /debian-giant/ $(lsb_release -sc) main | sudo tee/etc/apt/sources.list.d/ceph.list3、Update your repository and install ceph-deploy:执行:sudo apt-get update && sudo apt-get install ceph-deployNote:You can also use the EU mirror for downloading your packages. Simply replace / by /三、在每个机器上安装ntpWe recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift.On Debian / Ubuntu, 执行:sudo apt-get install ntp四、在每个机器上安装openssh-serversudo apt-get install openssh-server五、在每个机器上创建user并给予sudo权限1、在每个机器上创建user格式:ssh user@ceph-serversudo useradd -d /home/{username} -m {username}sudo passwd {username}实际操作:登录到每台机器,然后执行:sudo useradd -d /home/ceph -m cephsudo passwd ceph2、For the user you added to each Ceph node, ensure that the user has sudo privileges.格式:echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}sudo chmod 0440 /etc/sudoers.d/{username}实际操作:登录到每台机器,然后执行:echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephsudo chmod 0440 /etc/sudoers.d/ceph六、设置从admin节点到其他三个节点的免密码登录1、以新创建的用户登录到admin节点在admin节点上执行:su ceph2、执行ssh-keygen,一路回车。
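This section ends after generating the key pair on the admin node; the next step is normally copying the public key to the other nodes. A hedged sketch using the host names and the ceph user defined earlier in this manual:

ssh-copy-id ceph@wx-ceph01
ssh-copy-id ceph@wx-ceph02
ssh-copy-id ceph@wx-ceph03
ssh ceph@wx-ceph01 true      # should log in without prompting for a password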

Detailed Ceph Installation and Deployment Tutorial (Multiple Monitor Nodes)


1. Installation environment
OS: CentOS 6.5
Machines: 1 admin-node (ceph-deploy), 1 monitor, 2 OSD nodes
2. Disable the firewall and SELinux on all nodes, then reboot:

service iptables stop
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
chkconfig iptables off
3. Edit the Ceph yum repository on the admin-node:
vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=/rpm/el6/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https:///git/?p=ceph.git;a=blob_plain;f=keys/release.asc
4. Install the Sohu EPEL repository:
rpm -ivh /fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
5. Refresh the yum metadata on the admin-node:
yum clean all
yum update -y
6. Create a cluster directory on the admin-node:
mkdir /ceph
cd /ceph
7. Install the Ceph deployment tool on the admin-node:
yum install ceph-deploy -y
8. Configure the hosts file on the admin-node:
vi /etc/hosts
10.240.240.210 admin-node
10.240.240.211 node1
10.240.240.212 node2
10.240.240.213 node3
II. Configure passwordless login from ceph-deploy to every Ceph node
1. Install an SSH server on every Ceph node:
[ceph@node3 ~]$ yum install openssh-server -y
2. Configure passwordless SSH access from the admin-node to each Ceph node.
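Step 2 is where this excerpt ends. A hedged sketch of the usual key distribution and of creating the multi-monitor cluster that the tutorial's title refers to; three monitors is an assumption based on the node1 to node3 host list above (the environment list mentions only one monitor):

ssh-keygen                             # on the admin-node, accept the defaults
ssh-copy-id ceph@node1
ssh-copy-id ceph@node2
ssh-copy-id ceph@node3
cd /ceph
ceph-deploy new node1 node2 node3      # declare all three hosts as initial monitors
ceph-deploy install admin-node node1 node2 node3
ceph-deploy mon create-initial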

Ceph Distributed Storage Platform Deployment Manual


Contents
1. Ceph architecture overview
2. Overall plan for Ceph-backed cloud storage in OpenStack
3. Installing the Ceph cluster on Ubuntu 12.04
   3.1 Configuring the Ceph repository
   3.2 Adjusting ceph.conf as needed
   3.3 Setting up the hosts file
   3.4 Setting up passwordless access between cluster nodes
   3.5 Creating directories
   3.6 Creating and mounting partitions
   3.7 Running the initialization
   3.8 Starting Ceph
   3.9 Ceph health check
4. Installing the Ceph cluster on CentOS 6.4
   4.1 Installing the update sources
   4.2 Installing Ceph 0.67.4 with rpm
   4.3 Adjusting ceph.conf as needed
   4.4 Setting up the hosts file
   4.5 Setting up passwordless access between cluster nodes
   4.6 Creating directories
   4.7 Running the initialization
   4.8 Starting Ceph
   4.9 Ceph health check
5. Configuring OpenStack Glance to use the Ceph cluster
   5.1 Creating the volume and image pools
   5.2 Increasing the replication level of the two pools
   5.3 Creating a Ceph client and keyring for the pools
   5.4 Applying the keyring on the compute node
       5.4.1 Creating the libvirt secret
       5.4.2 Installing Ceph on the compute node
   5.5 Updating your glance-api configuration file
6. Configuring OpenStack volumes (Cinder) to use the Ceph cluster
   6.1 Installing Ceph on the compute node
   6.2 Creating a temporary secret.xml file
   6.3 Telling libvirt to use the key above
   6.4 Updating the cinder configuration
       6.4.1 Changes to cinder.conf
       6.4.2 Changes to the cinder startup script configuration
       6.4.3 Changes to /etc/nova/nova.conf
       6.4.4 Restarting the cinder services
   6.5 Verifying cinder-volume
   6.6 Verifying RBD volume creation
7. Mounting CephFS
   7.1 Configuring /etc/fstab
   7.2 Mounting the VM instance directory
8. FAQ

1. Ceph architecture overview
The main Ceph components are the client (ceph client, the data consumer), the metadata server (mds, which caches and synchronizes the distributed metadata), an object storage cluster (osd, which stores data and metadata as objects and performs other key functions), and the cluster monitors (mon, which perform the monitoring functions).
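Sections 5.1 to 5.3 of the outline cover creating the volume and image pools and their client keyrings. A hedged sketch of what those steps typically look like; the pool names volumes/images and the client names glance/cinder are common conventions, not taken from this document:

ceph osd pool create images 128
ceph osd pool create volumes 128
ceph osd pool set images size 2        # section 5.2: raise the replication level of both pools
ceph osd pool set volumes size 2
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/ceph.client.cinder.keyring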

Manual Deployment of Ceph 0.94 on CentOS 7


CentOS7手动部署Ceph0.94--------------------All-nodes--------------------1- 各节点设置 hosts# vim /etc/hosts192.168.121.25 monmds01192.168.121.26 storage01192.168.121.27 storage022- 设置源这里我的环境将 {ceph-release} 替换为 hammer 将 {distro} 替换为 el7# vim /etc/yum.repos.d/ceph.repo[ceph]name=Ceph packages for $basearchbaseurl=/rpm-{ceph-release}/{distro}/$basearch enabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///keys/release.asc[ceph-noarch]name=Ceph noarch packagesbaseurl=/rpm-{ceph-release}/{distro}/noarch enabled=1priority=2gpgcheck=1type=rpm-mdgpgkey=https:///keys/release.asc[ceph-source]name=Ceph source packagesbaseurl=/rpm-{ceph-release}/{distro}/SRPMS enabled=0priority=2gpgcheck=1type=rpm-mdgpgkey=https:///keys/release.asc# yum update && sudo yum install yum-plugin-priorities -y # yum install ceph -y--------------------Monitor--------------------1- 确定 fsid# uuidgen7172833a-6d3a-42fe-b146-e3389d6845982- 配置文件# vim /etc/ceph/ceph.conf[global]fsid = 7172833a-6d3a-42fe-b146-e3389d684598mon initial members = monmds01mon host = 192.168.121.25public network = 192.168.121.0/24auth cluster required = cephxauth service required = cephxauth client required = cephxosd journal size = 1024filestore xattr use omap = trueosd pool default size = 2 #设置两个副本osd pool default min size = 1osd pool default pg num = 128 # PGS = (Total_number_of_OSD * 100) / max_replication_count 得出数值较近的2的指数osd pool default pgp num = 128osd crush chooseleaf type = 13- 创建 keyring# ceph-authtool --create-keyring /tmp/ceph.mon.keyring \ > --gen-key -n mon. --cap mon 'allow *'4- 生成管理员 keyring# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \> --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' \> --cap osd 'allow *' --cap mds 'allow'5- 将 client.admin key 导入 ceph.mon.keyring# ceph-authtool /tmp/ceph.mon.keyring \> --import-keyring /etc/ceph/ceph.client.admin.keyring6- 生成 monitor map# monmaptool --create --add monmds01 192.168.121.25 \> --fsid 7172833a-6d3a-42fe-b146-e3389d684598 /tmp/monmap7- 创建 monitor 数据目录 ceph-{hostname}# mkdir /var/lib/ceph/mon/ceph-monmds018- 导入 monitor map 和 keyring 信息# ceph-mon --mkfs -i monmds01 \> --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring9- 创建两个空文件# touch /var/lib/ceph/mon/ceph-monmds01/done# touch /var/lib/ceph/mon/ceph-monmds01/sysvinit10- 启动 monitor# service ceph start mon.monmds01=== mon.monmds01 ===Starting Ceph mon.monmds01 on monmds01...Running as unit run-5055.service.Starting ceph-create-keys on monmds01...检查一下# ceph -scluster 7172833a-6d3a-42fe-b146-e3389d684598health HEALTH_ERR64 pgs stuck inactive64 pgs stuck uncleanno osdsmonmap e1: 1 mons at {monmds01=192.168.121.25:6789/0} election epoch 2, quorum 0 monmds01osdmap e1: 0 osds: 0 up, 0 inpgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects0 kB used, 0 kB / 0 kB avail64 creating# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.monmds01.asok mon_status11- 将 keyring 和 ceph.conf 拷贝到各个 osd 节点# scp /etc/ceph/ceph.conf storage-x:/etc/ceph/# scp /etc/ceph/ceph.client.admin.keyring storage-x:/etc/ceph/--------------------Osds--------------------1- Create OSD# uuidgen4cd76113-25d3-4dfd-a2fc-8ef34b87499c# uuidgene2c49f20-fbeb-449d-9ff1-49324b66da26--storage01--# ceph osd create 4cd76113-25d3-4dfd-a2fc-8ef34b87499c0 #返回值0表示osd-number=0--storage02--# ceph osd create e2c49f20-fbeb-449d-9ff1-49324b66da26 12- 创建数据存储目录--storage01--# mkdir -p /data/ceph/osd/ceph-0 #目录名为{cluster-name}-{osd-number}# ln -s /data/ceph/osd/ceph-0 /var/lib/ceph/osd/存储挂的磁阵,这里要把挂的硬盘分区、格式化,环境里使用了OpenStack云硬盘对接后端磁阵# fdisk /dev/vdb# mkfs.ext4 /dev/vdb1挂载# 
mount -o defaults,_netdev /dev/vdb1 /var/lib/ceph/osd/ceph-0写入分区表# vim /etc/fstab/dev/vdb1 /var/lib/ceph/osd/ceph-0 ext4 defaults,_netdev 0 0--storage02--# mkdir -p /data/ceph/osd/ceph-1# ln -s /data/ceph/osd/ceph-1 /var/lib/ceph/osd/# fdisk /dev/vdb# mkfs.ext4 /dev/vdb1# mount -o defaults,_netdev /dev/vdb1 /var/lib/ceph/osd/ceph-1# vim /etc/fstab/dev/vdb1 /var/lib/ceph/osd/ceph-1 ext4 defaults,_netdev 0 03- 初始化 OSD 数据目录--storage01--# ceph-osd -i 0 --mkfs --mkjournal --mkkey \> --osd-uuid 4cd76113-25d3-4dfd-a2fc-8ef34b87499c \> --cluster ceph \> --osd-data=/data/ceph/osd/ceph-0 \> --osd-journal=/data/ceph/osd/ceph-0/journal2015-11-11 13:51:21.950261 7fd64b796880 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway2015-11-11 13:51:22.461867 7fd64b796880 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway2015-11-11 13:51:22.463339 7fd64b796880 -1 filestore(/data/ceph/osd/ceph-0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file ordirectory2015-11-11 13:51:22.971198 7fd64b796880 -1 created object store /data/ceph/osd/ceph-0 journal /data/ceph/osd/ceph-0/journal for osd.0 fsid 7172833a-6d3a-42fe-b146-e3389d6845982015-11-11 13:51:22.971377 7fd64b796880 -1 auth: error reading file: /data/ceph/osd/ceph-0/keyring: can't open /data/ceph/osd/ceph-0/keyring: (2) No such file or directory 2015-11-11 13:51:22.971653 7fd64b796880 -1 created new key in keyring /data/ceph/osd/ceph-0/keyring--storage02--# ceph-osd -i 1 --mkfs --mkjournal --mkkey \> --osd-uuid e2c49f20-fbeb-449d-9ff1-49324b66da26 \> --cluster ceph \> --osd-data=/data/ceph/osd/ceph-1 \> --osd-journal=/data/ceph/osd/ceph-1/journal2015-11-09 21:42:24.012339 7f6caebdb7c0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway2015-11-09 21:42:24.126837 7f6caebdb7c0 -1 journal FileJournal::_open: disabling aio for non-block journal. 
Use journal_force_aio to force use of aio anyway2015-11-09 21:42:24.127570 7f6caebdb7c0 -1 filestore(/data/ceph/osd/ceph-1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory2015-11-09 21:42:24.221463 7f6caebdb7c0 -1 created object store /data/ceph/osd/ceph-1 journal /data/ceph/osd/ceph-1/journal for osd.1 fsid 7172833a-6d3a-42fe-b146-e3389d6845982015-11-09 21:42:24.221540 7f6caebdb7c0 -1 auth: error reading file: /data/ceph/osd/ceph-1/keyring: can't open /data/ceph/osd/ceph-1/keyring: (2) No such file or directory 2015-11-09 21:42:24.221690 7f6caebdb7c0 -1 created new key in keyring /data/ceph/osd/ceph-1/keyring4- 注册 OSD authentication key--storage01--# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /data/ceph/osd/ceph-0/keyringadded key for osd.0--storage02--# ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /data/ceph/osd/ceph-1/keyringadded key for osd.15- 将 ceph 节点加入 CRUSH map--storage01--# ceph osd crush add-bucket storage01 hostadded bucket storage01 type host to crush map# ceph osd crush move storage01 root=defaultmoved item id -2 name 'storage01' to location {root=default} in crush map--storage02--# ceph osd crush add-bucket storage02 hostadded bucket storage02 type host to crush map# ceph osd crush move storage02 root=defaultmoved item id -3 name 'storage02' to location {root=default} in crush map6- 将 OSD 加入 CRUSH map 此处设置权重为1.0--storage01--# ceph osd crush add osd.0 1.0 host=storage01add item id 0 name 'osd.0' weight 1 at location {host=storage01} to crush map--storage02--# ceph osd crush add osd.1 1.0 host=storage02add item id 1 name 'osd.1' weight 1 at location {host=storage02} to crush map7- 创建初始化文件--storage01--# touch /var/lib/ceph/osd/ceph-0/sysvinit--storage02--# touch /var/lib/ceph/osd/ceph-1/sysvinit8- 启动服务--storage01--# service ceph start osd.0=== osd.0 ===create-or-move updated item name 'osd.0' weight 0.96 at location {host=storage01,root=default} to crush map Starting Ceph osd.0 on storage01...Running as unit run-27621.service.--storage02--# service ceph start osd.1=== osd.1 ===create-or-move updated item name 'osd.1' weight 0.05 at location {host=storage02,root=default} to crush map Starting Ceph osd.1 on storage02...Running as unit run-2312.service.9- 检查# ceph -scluster 7172833a-6d3a-42fe-b146-e3389d684598health HEALTH_OKmonmap e1: 1 mons at {monmds01=192.168.121.25:6789/0} election epoch 2, quorum 0 monmds01osdmap e13: 2 osds: 2 up, 2 inpgmap v33: 64 pgs, 1 pools, 0 bytes data, 0 objects6587 MB used, 974 GB / 1033 GB avail64 active+clean# ceph osd treeID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY-1 2.00000 root default-2 1.00000 host storage010 1.00000 osd.0 up 1.00000 1.00000-3 1.00000 host storage021 1.00000 osd.1 up 1.00000 1.00000--------------------MDS--------------------mds 的坑耽误了很久,官网里只有 ceph-deploy 一笔带过,一下手动过程基本靠摸索,此外启动后是看不到状态的,要创建了 pool 和fs 以后,才能观察到 mds 状态,比较奇特1- 创建mds数据目录# mkdir /var/lib/ceph/mds/ceph-monmds012- 创建keyring# ceph auth get-or-create mds.monmds01 \> mon 'allow rwx' osd 'allow *' \> mds 'allow *' -o /var/lib/ceph/mds/ceph-monmds01/keyring3- 创建初始化文件# touch /var/lib/ceph/mds/ceph-monmds01/sysvinit# touch /var/lib/ceph/mds/ceph-monmds01/done4- 启动mds# service ceph start mds.monmds015- 查看状态,此时还看不到mds信息,要在创建好pool以后才能看到# ceph -scluster 342311f7-c486-479e-9c36-71adf326693ehealth HEALTH_OKmonmap e1: 1 mons at {monmds01=192.168.121.25:6789/0} election epoch 2, quorum 0 monmds01osdmap e13: 2 osds: 2 up, 2 inpgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects2202 MB used, 1866 GB / 1968 
GB avail64 active+clean--------------------CephFS--------------------1- 初始状态# ceph -scluster 342311f7-c486-479e-9c36-71adf326693ehealth HEALTH_OKmonmap e1: 1 mons at {monmds01=192.168.121.25:6789/0} election epoch 2, quorum 0 monmds01osdmap e13: 2 osds: 2 up, 2 inpgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects2202 MB used, 1866 GB / 1968 GB avail64 active+clean2- 创建pool# ceph osd pool create cephfs_data 100# ceph osd pool create cephfs_metadata 1003- 查看此时状态# ceph osd lspools0 rbd,1 cephfs_data,2 cephfs_metadata,# ceph -scluster 342311f7-c486-479e-9c36-71adf326693ehealth HEALTH_OKmonmap e1: 1 mons at {monmds01=192.168.121.25:6789/0} election epoch 2, quorum 0 monmds01osdmap e17: 2 osds: 2 up, 2 inpgmap v52: 264 pgs, 3 pools, 0 bytes data, 0 objects2206 MB used, 1866 GB / 1968 GB avail264 active+clean4- 创建fs# ceph fs new cephfs cephfs_metadata cephfs_data5- 现在观察状态已经显示mds现状# ceph -scluster 342311f7-c486-479e-9c36-71adf326693ehealth HEALTH_OKmonmap e1: 1 mons at {monmds01=192.168.121.25:6789/0} election epoch 2, quorum 0 monmds01mdsmap e5: 1/1/1 up {0=monmds01=up:active}osdmap e18: 2 osds: 2 up, 2 inpgmap v56: 264 pgs, 3 pools, 1962 bytes data, 20 objects2207 MB used, 1866 GB / 1968 GB avail264 active+clean# ceph mds state5: 1/1/1 up {0=monmds01=up:active}# ceph fs lsname: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]6- 挂载CephFS# df -hFilesystem Size Used Avail Use% Mounted on/dev/vda1 40G 14G 25G 36% /devtmpfs 2.0G 0 2.0G 0% /devtmpfs 2.0G 220K 2.0G 1% /dev/shmtmpfs 2.0G 17M 2.0G 1% /runtmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup# mkdir /mnt/mycephfs# mount -t ceph monmds01:6789:/ \> /mnt/mycephfs \> -o name=admin,secret=AQCky1ZWYtdHKRAACe+Nk6gZ6rJerMrlOheG8Q==# df -hFilesystem Size Used Avail Use% Mounted on/dev/vda1 40G 14G 25G 36% /devtmpfs 2.0G 0 2.0G 0% /devtmpfs 2.0G 220K 2.0G 1% /dev/shmtmpfs 2.0G 17M 2.0G 1% /runtmpfs 2.0G 0 2.0G 0% /sys/fs/cgroupmonmds01:6789:/ 2.0T 103G 1.9T 6% /mnt/mycephfs挂载命令中的密码可以在monmds01节点查询# ceph auth listclient.adminkey: AQCky1ZWYtdHKRAACe+Nk6gZ6rJerMrlOheG8Q==auid: 0caps: [mds] allowcaps: [mon] allow *caps: [osd] allow *7- 持久挂载CentOS7中fstab挂载启动过早,网络还没通,会影响系统启动,需要加上'_netdev' 参数# vim /etc/fstabmonmds01:6789:/ /mnt/mycephfs ceph name=admin,secret=AQCky1ZWYtdHKRAACe+Nk6gZ6rJerMrlO heG8Q==,noatime,_netdev 0 2。

Ceph Installation, Deployment, Testing, and Tuning


Objectives
1. Understand the basic principles and architecture of Ceph storage.
2. Master how to install and deploy a Ceph cluster.
3. Master the common performance testing and tuning methods for Ceph.

Contents
1. Basic concepts and architecture
2. Installation and deployment
3. Testing and tuning

Ceph is a unified distributed storage system with high scalability, high reliability, and high performance. It is built on RADOS (reliable, autonomous, distributed object store) and exposes three interfaces: object storage, block storage, and file system storage.
RADOS: the core of a Ceph cluster; it implements cluster operations such as data placement and failover for the user.

LIBRADOS: librados is the library that exposes RADOS; the higher layers RBD, RGW, and CephFS all access the cluster through LIBRADOS. Bindings are currently available for PHP, Ruby, Java, Python, C, and C++.

RBD: RBD (RADOS block device) is the block device service that Ceph provides to clients.

RGW: RGW (RADOS gateway) is the object storage service that Ceph provides, with an interface compatible with S3 and Swift.

CephFS: CephFS (Ceph File System) is the file system service that Ceph provides.
OSD: the Ceph OSD daemon reads and writes data, handles replication, recovery, backfilling and rebalancing, and reports monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons.

Monitor: the cluster management daemon; it maintains the maps that describe the cluster state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map.

MDS: the Ceph metadata server; it stores metadata for the Ceph file system (Ceph block storage and Ceph object storage do not use the MDS).

Ceph storage cluster. Object: the lowest-level storage unit in Ceph is the Object; each Object contains metadata and the raw data.

PG: PG stands for Placement Groups, the logical grouping in which objects are placed; one PG can map to multiple OSDs.
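As a hedged illustration of the object to PG to OSD mapping described above, the commands below write one test object and ask the cluster where it lives; the pool name rbd and the object name test-object are examples, not taken from the original:

rados -p rbd put test-object /etc/hosts     # store one object in the rbd pool
ceph osd map rbd test-object                # prints the PG id and the acting set of OSDs holding it
rados -p rbd rm test-object                 # clean up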

StarWind Software Ceph All-in-One Cluster Deployment Guide


One Stop Virtualization Shop StarWind® Ceph all-in-one ClusterHow to deploy Ceph all-in-one ClusterJUNE 2017TECHINCAL PAPERTrademarks“StarWind”, “StarWind Software” and the StarWind and the StarWind Software logos are registered trademarks of StarWind Software. “StarWind LSFS” is a trademark of StarWind Software which may be registered in some jurisdictions. All other trademarks are owned by their respective owners.ChangesThe material in this document is for information only and is subject to change without notice. While reasonable efforts have been made in the preparation of this document to assure its accuracy, StarWind Software assumes no liability resulting from errors or omissions in this document, or from the use of the information contained herein. StarWind Software reserves the right to make changes in the product design without reservation and without notification to its users.Technical Support and ServicesIf you have questions about installing or using this software, check this and other documents first - you will find answers to most of your questions on the Technical Papers webpage or in StarWind Forum. If you need further assistance, please contact us.In 2016, Gartner named StarWind “Cool Vendor for Compute Platforms”.Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.About StarWindStarWind is a pioneer in virtualization and a company that participated in the development of this technology from its earliest days. Now the company is among the leading vendors of software and hardware hyper-converged solutions. The company’s core product is the years-proven StarWind Virtual SAN, which allows SMB and ROBO to benefit from cost-efficient hyperconverged IT infrastructure. Having earned a reputation of reliability, StarWind created a hardware product line and is actively tapping into hyperconverged and storage appliances market. In 2016, Gartner namedSta rWind “Cool Vendor for Compute Platforms” following the success and popularity of StarWind HyperConverged Appliance. StarWind partners with world-known companies: Microsoft, VMware, Veeam, Intel, Dell, Mellanox, Citrix, Western Digital, etc.Copyright ©2009-2017 StarWind Software Inc.No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of StarWind Software.ContentsIntroduction (4)Before you begin (5)Virtual Machine Deployment and OS installation (5)Virtual Machine configuration (19)Ceph Deployment (21)Conclusion (22)Contacts (23)IntroductionThis guidance will show you how to deploy a Ceph all-in-one cluster. The paper will walk you through the Ceph cluster configuration process and describe how to create a Ceph monitor and Ceph OSD.Ceph is an open-source project, which provides unified software solution for storing blocks, files, and objects. 
The main idea of the project is to provide a high-performing distributed storage system which can provide an ability to perform a massive storage scale-out and will have no single points of failure. It has become one of the most popular Software-Defined Storage technologies.Ceph becomes more attractive to the storage industry due to its openness, scalability, and reliability. Cloud computing and IaaS era requires a system which must be Software-Defined and ready for cloud technologies. Ceph injects here more than perfect, regardless the environment where it is going to be used (public, private, or hybrid cloud).This guide is intended for experienced IT and Storage administrators and professionals who would like to deploy the Ceph all-in-one cluster to check out all the benefits of Ceph object storage.A full set of up-to-date technical documentation can always be found here, or by pressing the Help button in the StarWind Management Console.For any technical inquiries please visit our online community, Frequently Asked Questions page, or use the support form to contact our technical support department.Before you beginThis guide describes the installation and configuration of the Ceph all-in-one cluster, which means that we are going to build the Ceph cluster using only one VM. We are going to deploy the ESXi VM and install Debian 8 on it.You can download our pre-deployed OVF template or follow these steps:Virtual Machine Deployment and OS installation1.Download Debian 8 ISO for OS installation here:https:///cdimage/archive/8.8.0/amd64/iso-cd/debian-8.8.0-amd64-netinst.iso2.Create the ESXi VM with following settings:3.Mount the ISO image to the VM and boot from it.4.Choose Graphical install option5.Choose an eligible language for the installation process6.Select your location, which is going to be used to set your time zone.7.Configure the keyboard (choose American English)8.Enter the hostname9.Configure your network.10.Set up a password for ‘root’account11.Create a user account which is going to be used instead of the root account for non-administrative activities12.Set up a password for the newly created account13.Select the desired time zone for you14.Partition the disks15.Write changes to the disks16.Configure the package manager17.S elect a Debian Archive mirror18.Enter proxy information if you need to use HTTP proxy.19.Configure popularity contest20.Select software needed21.Install the GRUB boot loader22.Finish the installationVirtual Machine configuration23.Add a Virtual Disk with a desirable size to the VM. This Virtual Disk will be used by OSDDaemon.24.Boot the VM into the recently installed OS and log in to it using the root account. 
UpdateDebian using the following command: apt-get -y update25.Install packages and configure NTP.apt-get install -y sudo python python-pip ntp;systemctl enable ntp;systemctl start ntp;26.Add user you have created to sudoers (where %USERNAME% is the user account youhave created during OS installation):usermod -aG sudo %USERNAME%;echo "%USERNAME% ALL = (root) NOPASSWD:ALL" | sudo tee/etc/sudoers.d/%USERNAME%;chmod 0440 /etc/sudoers.d/%USERNAME%;27.Connect to the VM via SSH and log in using your user account.28.Configure SSH:Generate the ssh keys for %USERNAME%user:ssh-keygenLeave passphrase as blank/empty.Edit file id_rsa.pub and remove "%USERNAME%@host" (name of your user) at the end of the stringnano /home/%USERNAME%/.ssh/id_rsa.pubcp /home/%USERNAME%/.ssh/id_rsa.pub/home/%USERNAME%/.ssh/authorized_key29.Add to /etc/hosts host ip (eth0) and a hostnameCeph Deployment30.Deploy Ceph "all-in-one":•Create directory "Ceph-all-in-one":mkdir ~/Ceph-all-in-one;cd ~/Ceph-all-in-one;•Install Ceph-deploy:sudo pip install Ceph-deploy•Create new config:sCeph-deploy new Ceph-all-in-one;echo "[osd]" >> /home/%USERNAME%/Ceph-all-in-one/Ceph.conf;echo "osd pool default size = 1" >> /home/sw/Ceph-all-in-one/Ceph.conf;echo "osd crush chooseleaf type = 0" >> /home/%USERNAME%/Ceph-all-in-one/Ceph.conf;31.Install Ceph and add mon role to nodeCeph-deploy install Ceph-all-in-one; ("Ceph-all-in-one" our hostname)Ceph-deploy mon create-initial;Ceph-deploy osd create Ceph-all-in-one:sdb; ("Ceph-all-in-one" our hostname, sdb name of the disk we have added in the Virtual Machine configurationsection)32.Change Ceph rbd pool size:sudo Ceph osd pool set rbd size 133.After deployment:Check cluster status: sudo Ceph -sNOTE: Please keep in mind that we have deployed Ceph cluster without the replication. It is not recommended to use this scenario in production.ConclusionBy following these instructions, you have deployed Debian VM and configured it for creating Ceph all-in-one cluster. We have configured the VM as a Ceph monitor and created an OSD and Ceph pool. As a result, you can create RBD device, format it and mount to store your data.Contacts1-617-449-7717 1-617-507-5845 +44 20 3769 1857 (UK)+49 302 1788 849 (Germany) +33 097 7197 857 (France) +7 495 975 94 39 (Russian Federation and CIS) 1-866-790-2646Customer Support Portal:Support Forum:Sales: General Information: https:///support https:///forums ***********************************StarWind Software, Inc. 35 Village Rd., Suite 100, Middleton, MA 01949 USA ©2017, StarWind Software Inc. All rights reserved.。
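As a follow-up to the Conclusion above, which notes that you can now create an RBD device, format it, and mount it to store data, here is a hedged sketch of those steps on the all-in-one VM. The image name test-image and mount point /mnt/rbd are assumptions; the feature-disable line is only needed if the kernel client rejects newer image features:

sudo rbd create test-image --size 1024 --pool rbd
sudo rbd feature disable rbd/test-image deep-flatten fast-diff object-map exclusive-lock   # only if rbd map complains
sudo rbd map rbd/test-image        # typically appears as /dev/rbd0
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/rbd
sudo mount /dev/rbd0 /mnt/rbd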

Ceph Installation, Deployment, and Performance Tuning

Objectives:
1. Understand the basic principles and architecture of Ceph storage.
2. Master the installation and deployment of a Ceph cluster.
3. Master common Ceph performance testing and tuning methods.
Contents:
1. Basic concepts and architecture
2. Installation and deployment
3. Testing and tuning
Ceph is a unified distributed storage system with high scalability, high reliability, and high performance. It is built on RADOS (Reliable Autonomic Distributed Object Store) and provides three interfaces: object storage, block device storage, and file system storage.
RADOS: the core of a Ceph cluster; it handles data placement, failover, and other cluster operations on behalf of the user.

LIBRADOS: librados is the library that provides access to RADOS; the upper-layer RBD, RGW, and CephFS all access the cluster through librados. Bindings are currently available for PHP, Ruby, Java, Python, C, and C++.
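To illustrate direct RADOS object access (independent of RBD, RGW, or CephFS), a minimal sketch with the rados CLI might look like this; the pool and object names are arbitrary examples and assume a working cluster and admin keyring:
# create a test pool, store one object in it, then list and read it back
ceph osd pool create libradostest 64
echo "hello rados" > /tmp/obj.txt
rados -p libradostest put hello-object /tmp/obj.txt
rados -p libradostest ls
rados -p libradostest get hello-object /tmp/obj.out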

RBD: RBD stands for RADOS Block Device, the block device service that Ceph exposes to clients.

RGW: RGW stands for RADOS Gateway, the object storage service that Ceph exposes to clients; its interface is compatible with S3 and Swift.
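Assuming an RGW instance has already been deployed (gateway deployment is not covered in this document), an S3-style user is typically created with radosgw-admin; the uid and display name below are arbitrary examples:
# create a user; the command prints an access_key and secret_key for S3 clients
radosgw-admin user create --uid=testuser --display-name="Test User"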

CephFS: CephFS stands for Ceph File System, the file system service that Ceph exposes to clients.
OSD: the Ceph OSD daemon. It is responsible for reading and writing data; it handles data replication, recovery, backfilling, and rebalancing, and it provides monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons.

Monitor: the cluster management daemon. It maintains the maps that describe the cluster state, including the monitor map, the OSD map, the placement group (PG) map, and the CRUSH map.

MDS: the Ceph Metadata Server, which stores metadata for the Ceph File System (in other words, Ceph block storage and Ceph object storage do not use an MDS).
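As a hedged sketch (assuming ceph-deploy is in use and the Ceph release supports the "ceph fs new" command), enabling CephFS on an existing cluster involves deploying an MDS and creating the data and metadata pools; the node name, pool names, PG counts, and placeholders in braces are examples:
ceph-deploy mds create node1
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
# mount from a client using the kernel driver; {mon-ip} and {admin-key} are placeholders
mount -t ceph {mon-ip}:6789:/ /mnt/cephfs -o name=admin,secret={admin-key}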

Ceph storage cluster
Object: the object is the lowest-level storage unit in Ceph; each object contains metadata and the raw data.

PG: PG stands for Placement Group, a logical grouping in which objects are placed; one PG can map to multiple OSDs.
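For example, the PG count is specified explicitly when a pool is created; the pool name and PG number below are illustrative (a common rule of thumb is roughly 100 PGs per OSD divided by the replica count, rounded to a power of two):
# create a replicated pool with 128 placement groups (pg_num and pgp_num)
ceph osd pool create mypool 128 128
# inspect the pool's PG setting and how PGs map onto OSDs
ceph osd pool get mypool pg_num
ceph pg dump | head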

Ceph Installation and Deployment Manual 1.0

1. Requirements
1.1 Hardware environment
Three PCs running CentOS 7; create a primary partition /dev/sda4 on each. Set the IP addresses to 192.168.0.2, 192.168.0.3, and 192.168.0.4.
Note: the MON and OSD roles do not have to be on the same node.
1.2 Software environment
Packages: centos, ceph, ceph-deploy
2. Preparation
2.1 Disable the firewall (all nodes)
1. systemctl stop firewalld
   systemctl disable firewalld
2. vi /etc/sysconfig/selinux => SELINUX=disabled
2.2 Host configuration (all nodes)
[root@node1 ~]# vi /etc/hosts
2.3 Software installation
1. Install vsftpd:
[root@node1 ~]# rpm -ivh /var/ftp/centos/Packages/vsftpd-3.0.2-9.el7.x86_64.rpm
2. Configure the yum repositories for centos, ceph, and ceph-deploy (a sketch of a local repo file follows this section).
3. Start vsftpd.
4. Install (all nodes): install ceph and ceph-deploy on every node.
Note: it is recommended to install the NTP service on all Ceph nodes (especially the Ceph monitor nodes) to avoid failures caused by clock drift.
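The manual does not show the repo files themselves; as an assumption-laden sketch, a local repository served over FTP by the vsftpd instance above could be described like this (the baseurl paths are hypothetical and must match wherever the centos, ceph, and ceph-deploy packages were actually copied):
# /etc/yum.repos.d/local.repo (hypothetical paths)
[local-centos]
name=Local CentOS packages
baseurl=ftp://192.168.0.2/centos
enabled=1
gpgcheck=0
[local-ceph]
name=Local Ceph packages
baseurl=ftp://192.168.0.2/ceph
enabled=1
gpgcheck=0
[local-ceph-deploy]
name=Local ceph-deploy packages
baseurl=ftp://192.168.0.2/ceph-deploy
enabled=1
gpgcheck=0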

Install NTP and the Ceph packages:
# yum install ntp
# yum install ceph ceph-deploy
3. Building Ceph
1. Enter the /etc/ceph directory and make sure all of the following operations are run from it:
[root@node1 ~]# cd /etc/ceph/
2. Create the cluster:
[root@node1 ceph]# ceph-deploy new node1
3. Create a monitor:
[root@node1 ceph]# ceph-deploy mon create node1
4. Gather the keys:
[root@node1 ceph]# ceph-deploy gatherkeys node1
5. Verify that the monitor is running.
3.1 Adding monitors
Run on node1:
1. Configuration (edit ceph.conf under /etc/ceph and make sure the fsid matches the one shown above):
[global]
fsid = e23ed29d-abb3-4050-bd40-ed2a07e70a7d
mon initial members = node1,node2,node3
2. Copy the keyrings and the configuration file from node1 to node2 and node3.
3. Add two Ceph Monitors to the cluster.
4. Verify.
3.2 Adding OSDs (all nodes)
1. Generate a UUID for the OSD:
[root@node1 ~]# uuidgen
2. Create the OSD:
[root@node1 ~]# ceph osd create f898402c-e395-4f57-9e33-89c6dd57edf6
3. Create the default directory on your new OSD:
[root@node1 ~]# mkdir /var/lib/ceph/osd/ceph-0
4. Prepare a device for use with Ceph and mount it to the directory you just created:
[root@node1 ~]# mkfs -t xfs -f /dev/sda4
[root@node1 ~]# mount /dev/sda4 /var/lib/ceph/osd/ceph-0/
5. Initialize the OSD data directory:
[root@node1 ~]# ceph-osd -i 0 --mkfs --mkkey --osd-uuid f898402c-e395-4f57-9e33-89c6dd57edf6
6. Register the OSD authentication key:
[root@node1 ~]# ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-0/keyring
7. Add the Ceph node to the CRUSH map:
[root@node1 ~]# ceph osd crush add-bucket node1 host
8. Place the Ceph node under the root default:
[root@node1 ~]# ceph osd crush move node1 root=default
9. Add the device as an item in the host, assign it a weight, recompile, and set it:
[root@node1 ~]# ceph osd crush add osd.0 1.0 host=node1
Run on one node only:
10. Configuration (add the following to /etc/ceph/ceph.conf and synchronize it to node2 and node3):
# vi /etc/ceph/ceph.conf
[osd]
osd mkfs type = xfs
[osd.0]
host = node1
devs = /dev/sda4
[osd.1]
host = node2
devs = /dev/sda4
[osd.2]
host = node3
devs = /dev/sda4
#scp /etc/ceph/ceph.conf ************.0.12:/etc/ceph/
# scp /etc/ceph/ceph.conf ************.0.13:/etc/ceph/
11. Start the new OSD:
[root@node1 ~]# /etc/init.d/ceph start osd.0
12. Verify by viewing the OSD tree. The remaining OSDs (osd.1 on node2, osd.2 on node3) are added with the same sequence, as sketched below.
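The manual only walks through osd.0 on node1; as a hedged sketch, the same sequence on node2 would create osd.1 (node3 follows identically for osd.2). The OSD id 1 is an assumption here and must match whatever "ceph osd create" actually returns:
# on node2
UUID=$(uuidgen)
ceph osd create $UUID          # assumed to return id 1
mkdir /var/lib/ceph/osd/ceph-1
mkfs -t xfs -f /dev/sda4
mount /dev/sda4 /var/lib/ceph/osd/ceph-1/
ceph-osd -i 1 --mkfs --mkkey --osd-uuid $UUID
ceph auth add osd.1 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-1/keyring
ceph osd crush add-bucket node2 host
ceph osd crush move node2 root=default
ceph osd crush add osd.1 1.0 host=node2
/etc/init.d/ceph start osd.1
# verify from any node
ceph osd tree
ceph -s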


Ceph Installation and Deployment Document
Contents
2: Deployment environment
Hostname   Public IP (eth0)   Private IP (eth1)   Bond (bond0)   Services   OS / kernel   Notes
anode1     172.16.100.35                                         mon, osd   CentOS 6.7    primary node
anode2     172.16.100.36                                         mon, osd   CentOS 6.7
anode3     172.16.100.37                                         mon, osd   CentOS 6.7
3: Cluster configuration preparation
3.1: Generate SSH keys and connect the nodes
1) Change the hostname on all nodes:
vim /etc/sysconfig/network
2) Install SSH (primary node):
sudo yum install openssh-server
3) SSH login keys (primary node):
ssh-keygen
Copy the generated key to the other servers:
ssh-copy-id {other-node-username}@{other-node-IP}
Example: ssh-copy-id root@anode2
4) Create and edit the ~/.ssh/config file and add the other hosts:
Host {Hostname}
    Hostname {}
    User {Username}
Example:
Host anode1
    Hostname 172.16.100.35
    User root
Host anode2
    Hostname 172.16.100.36
    User root
Host anode3
    Hostname 172.16.100.37
    User root
3.2: Build the IP address list and modify the hosts file
1) Create a working directory and build the IP address list to prepare for file transfers (run on the primary node):
mkdir /workspace/
cd /workspace/
vim cephlist.txt
Write the host list into it:
anode1
anode2
anode3
2) Modify the hosts file:
vim /etc/hosts
Append the following:
172.16.100.35 anode1
172.16.100.36 anode2
172.16.100.37 anode3
Push the hosts file to the other hosts:
for ip in $(cat /workspace/cephlist.txt);do echo -----$ip-----;rsync -avp /etc/hosts $ip:/etc/;done
3.3: Network and port settings
Check the network settings and make sure they are persistent, so they do not change after a reboot.

(1) Network settings, run on all nodes:
vim /etc/sysconfig/network-scripts/ifcfg-{iface}
Confirm that ONBOOT is YES. BOOTPROTO is usually NONE for a static IP address. If IPv6 is to be used, set the IPV6{opt} options to YES.
(2) Firewall settings (iptables), run on all nodes:
a) Port 6789: Monitors need this port to communicate with OSDs, so it must be open on all Monitor nodes.
b) Ports 6800:7300: used for OSD communication.

Each OSD on a Ceph node needs three ports: one for communicating with clients and Monitors, one for transferring data to other OSDs, and one for heartbeats.

If a Ceph node has 4 OSDs, open 12 (= 3 x 4) ports.

sudo iptables -I INPUT 1 -i eth0 -p tcp -s 172.16.100.35/255.255.255.0 --dport 6789 -j ACCEPT
sudo iptables -I INPUT 1 -i eth0 -p tcp -s 172.16.100.35/255.255.255.0 --dport 6800:6809 -j ACCEPT
After configuring iptables, make sure the changes on each node are persistent so they survive a reboot:

/sbin/service iptables save
(3) tty settings, run on all nodes:
sudo visudo
Find the line Defaults requiretty (around line 50) and change it to Defaults:{User} !requiretty, or simply comment the original line out.

This ensures that ceph-deploy does not fail with an error.

(4) SELinux, run on all nodes:
sudo setenforce 0
This keeps the cluster from running into errors before the configuration is complete.

To make the change permanent, edit /etc/selinux/config; a one-line sketch follows.
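For example, a non-interactive way to make the same change (assuming the default SELINUX= line is present in the file) could be:
# permanently disable SELinux (takes effect after the next reboot)
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# confirm the resulting setting
grep '^SELINUX=' /etc/selinux/config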

3.4: Install the CentOS yum repository packages => install on all nodes
(1) Copy the .repo files from the folder containing this document into /etc/yum.repos.d/.
(2) Push the yum repository files to the other nodes (--delete removes files in DST that are not present in SRC):
for ip in $(cat /workspace/cephlist.txt);do echo -----$ip-----;rsync -avp --delete /etc/yum.repos.d $ip:/etc/;done
(3) Refresh the yum metadata (run on all nodes):
yum makecache
3.5: Add a scheduled time synchronization task
(1) Install the NTP package on all nodes:
yum install ntp
After the installation, configure the NTP service to start automatically:
chkconfig ntpd on
chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Before configuring, synchronize the time manually with ntpdate first, so that the gap between this host and the external time server is not too large for ntpd to synchronize normally.

# ntpdate -u
(2) Configure the internal time server, NTP-Server (172.16.100.35). The core of the NTPD service configuration is the /etc/ntp.conf file; only the marked lines are changed (shown in red in the original document), everything else is left at the default.

# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
# Hosts on local network are less restricted.
# Allow other hosts on the internal network to synchronize time
restrict 172.16.100.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the project.
# Please consider joining the pool (/join.html).
# The most active public time servers for this region: /zone/cn
server  prefer  # National Time Service Center
server  # .
server  # 0.as
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# allow update time by the upper server
# Allow the upstream time servers to adjust the local clock
restrict nomodify notrap noquery
restrict nomodify notrap noquery
restrict nomodify notrap noquery
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
# When the external time servers are unavailable, serve the local time
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
# Enable public key cryptography.
#crypto
includefile /etc/ntp/crypto/pw
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
Enable the services so the change persists:
chkconfig ntpd on
chkconfig ntpdate on
(3) Configure the other nodes to synchronize with the local time server:
yum install ntp
...
chkconfig ntpd on
vim /etc/ntp.conf (replace the original file entirely)
driftfile /var/lib/ntp/drift
restrict 127.0.0.1
restrict -6 ::1
# use the local server as the time source
server 172.16.100.35
restrict 172.16.100.35 nomodify notrap noquery
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
Synchronize the time manually with ntpdate:
ntpdate -u 192.168.0.135
22 Dec 17:09:57 ntpdate[6439]: adjust time server 172.16.100.35 offset 0.004882 sec
The synchronization may fail here; usually this is because the local NTPD server has not fully started yet, and it generally takes a few minutes before synchronization can begin.
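As a quick check (a hedged suggestion, not part of the original procedure), the synchronization state can be verified on any node once ntpd has been running for a few minutes:
# list the peers ntpd is using; the internal server should be marked with '*' once selected
ntpq -p
# check the service state and whether it is enabled at boot
service ntpd status
chkconfig --list ntpd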
