Ceph 0.87 Installation and Usage Guide
(v0.1)
March 28, 2021
Table of Contents
Part 1: Environment Preparation
    1. Preparation
    2. Install ceph-deploy on the admin node
    3. Install ntp on every machine
    4. Install openssh-server on every machine
    5. Create a user on every machine and grant it sudo privileges
    6. Set up passwordless login from the admin node to the other three nodes
    7. Notes
Part 2: Installation
    1. Create a directory on the wx-ceph-admin node
    2. Purge configuration
    3. Install
    4. Configure OSDs (using directories as OSDs)
    5. Configure OSDs (using disks as OSDs)
    6. Copy the configuration files
    7. Install the MDS
    8. Running Ceph
Part 3: Usage
    1. Cluster monitoring
    2. User management
    3. Pool management
Part 4: CephFS
    1. Create a Ceph file system
    2. Mount the Ceph file system
Part 1: Environment Preparation
1. Preparation
Preparation step 1: Edit the /etc/hosts file on every machine and add the IP addresses of all the machines.
For example, in the /etc/hosts file on the wx-ceph-admin machine,
add:
172.16.100.46 wx-ceph-admin
172.16.100.42 wx-ceph01
172.16.100.44 wx-ceph02
172.16.100.45 wx-ceph03
2. Install ceph-deploy on the admin node
1) Add the release key:
Run:
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
2) Add the Ceph packages to your repository. Replace {ceph-stable-release} with a stable Ceph release (e.g., cuttlefish, dumpling, emperor, firefly, etc.). For example:
echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
Run:
echo deb http://eu.ceph.com/debian-giant/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
3) Update your repository and install ceph-deploy:
Run:
sudo apt-get update && sudo apt-get install ceph-deploy
Note: You can also use the EU mirror eu.ceph.com for downloading your packages. Simply replace ceph.com/ with eu.ceph.com/.
3. Install ntp on every machine
We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift.
On Debian / Ubuntu, run:
sudo apt-get install ntp
4. Install openssh-server on every machine
sudo apt-get install openssh-server
5. Create a user on every machine and grant it sudo privileges
1) Create a user on every machine
Format:
ssh user@ceph-server
sudo useradd -d /home/{username} -m {username}
sudo passwd {username}
Actual commands:
Log in to each machine and run:
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
2) For the user you added to each Ceph node, ensure that the user has sudo privileges.
Format:
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}
Actual commands:
Log in to each machine and run:
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
6. Set up passwordless login from the admin node to the other three nodes
1) Log in to the admin node as the newly created user
On the admin node, run: su ceph
2) Run ssh-keygen and press Enter at every prompt.
3) Copy the key to each Ceph node, replacing {username} with the user name you created in Create a Ceph User.
Format:
ssh-copy-id {username}@node1
ssh-copy-id {username}@node2
ssh-copy-id {username}@node3
Run:
ssh-copy-id ceph@wx-ceph01
ssh-copy-id ceph@wx-ceph02
ssh-copy-id ceph@wx-ceph03
Verify that you can now log in to each node without being prompted for a password; for example:
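A quick check, using the hostnames from the /etc/hosts example above; run the following from the admin node:
ssh ceph@wx-ceph01
exit
ssh ceph@wx-ceph02
exit
ssh ceph@wx-ceph03
exit
Each ssh command should open a shell on the target node without asking for a password.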
4) (Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace {username} with the user name you created:
Host node1
Hostname node1
User {username}
Host node2
Hostname node2
User {username}
Host node3
Hostname node3
User {username}
Run:
On the admin node: sudo vi ~/.ssh/config
Host wx-ceph01
Hostname wx-ceph01
User ceph
Host wx-ceph02
Hostname wx-ceph02
User ceph
Host wx-ceph03
Hostname wx-ceph03
User ceph
7. Notes
1) To make installation easier, make sure the firewall allows the ports Ceph needs; for simplicity, it is recommended to disable the firewall entirely.
2) On Red Hat and CentOS, SELinux must also be disabled (see the sketch below).
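As a rough sketch of how to apply these two notes (the exact commands depend on your distribution and firewall tool):
On Ubuntu: sudo ufw disable
On Red Hat / CentOS 6: sudo service iptables stop && sudo chkconfig iptables off
Disable SELinux temporarily: sudo setenforce 0
Disable SELinux permanently: edit /etc/selinux/config, set SELINUX=disabled, then reboot.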
Part 2: Installation
Before installing, make sure the environment preparation in Part 1 has been completed. The steps below perform a quick Ceph cluster installation and deployment from the admin node.
(If you haven’t completed your Preflight Checklist, do that first. This Quick Start sets up a Ceph Storage Cluster using ceph-deploy on your admin node. Create a three Ceph Node cluster so you can explore Ceph functionality.)
1. Create a directory on the wx-ceph-admin node
Note: For best results, create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster.
Run:
mkdir my-cluster
cd my-cluster
Note: The ceph-deploy utility will output files to the current directory. Ensure you are in this directory when executing ceph-deploy.
Important: Do not call ceph-deploy with sudo or run it as root if you are logged in as a different user, because it will not issue sudo commands needed on the remote host.
2. Purge configuration
If at any point you run into trouble and you want to start over, execute the following to purge the configuration.
To purge the Ceph packages, execute:
Format:
ceph-deploy purge {ceph-node} [{ceph-node}]
Run:
ceph-deploy purge wx-ceph-admin wx-ceph01 wx-ceph02 wx-ceph03
To purge the data and forget the keys, execute:
Format:
ceph-deploy purgedata {ceph-node} [{ceph-node}]
Run:
ceph-deploy purgedata wx-ceph-admin wx-ceph01 wx-ceph02 wx-ceph03
ceph-deploy forgetkeys
If you execute purge, you must re-install Ceph.
3. Install
On your admin node from the directory you created for holding your configuration details, perform the following steps using ceph-deploy.
1) Deploy the initial monitor(s)
Format:
ceph-deploy new {initial-monitor-node(s)}
Run:
ceph-deploy new wx-ceph01 wx-ceph02 wx-ceph03
Check the output of ceph-deploy with ls and cat in the current directory. You should see a Ceph configuration file, a monitor secret keyring, and a log file for the new cluster. See ceph-deploy new -h for additional details.
2) Change the default number of replicas
Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs. Add the following line under the [global] section:
osd pool default size = 2
3) If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file. See the Network Configuration Reference (docs.ceph.com/docs/master/rados/configuration/network-config-ref/) for details.
public network = {ip-address}/{netmask}
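Taken together, the [global] section of the generated ceph.conf (in the my-cluster directory) might end up looking roughly like the sketch below; the fsid is generated by ceph-deploy new, and the public network value here is only an assumption based on the 172.16.100.x addresses used earlier:
[global]
fsid = <generated by ceph-deploy new>
mon initial members = wx-ceph01, wx-ceph02, wx-ceph03
mon host = 172.16.100.42,172.16.100.44,172.16.100.45
osd pool default size = 2
public network = 172.16.100.0/24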
4) Install Ceph.
Format:
ceph-deploy install {ceph-node}[{ceph-node} ...]
Run:
ceph-deploy install --release giant wx-ceph-admin wx-ceph01 wx-ceph02 wx-ceph03
The ceph-deploy utility will install Ceph on each node.
NOTE: If you use ceph-deploy purge, you must re-execute this step to re-install Ceph.
5) Add the initial monitor(s) and gather the keys:
ceph-deploy mon create wx-ceph01 wx-ceph02 wx-ceph03
ceph-deploy gatherkeys wx-ceph01 wx-ceph02 wx-ceph03
Once you complete the process, your local directory should have the following keyrings:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
{cluster-name}.bootstrap-rgw.keyring
Note: The bootstrap-rgw keyring is only created during installation of clusters running Hammer or newer.
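For reference, listing the my-cluster directory at this point should show something along these lines (a Giant cluster, so no bootstrap-rgw keyring); the exact file names are typical rather than guaranteed:
ls
ceph.conf  ceph.mon.keyring  <ceph-deploy log file>
ceph.client.admin.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring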
4. Configure OSDs (using directories as OSDs)
Add two OSDs. For fast setup, this quick start uses a directory rather than an entire disk per Ceph OSD Daemon. See ceph-deploy osd for details on using separate disks/partitions for OSDs and journals.
1) Use directories as OSDs (quick-deployment approach; production systems must use disks, see docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/ for details)
Login to the Ceph Nodes and create a directory for the Ceph OSD Daemon.
ssh wx-ceph02
sudo mkdir /var/local/osd0
exit
ssh wx-ceph03
sudo mkdir /var/local/osd1
exit
2) Prepare the OSDs
Then, from your admin node, use ceph-deploy to prepare the OSDs.
Format:
ceph-deploy osd prepare {ceph-node}:/path/to/directory
Run:
ceph-deploy osd prepare wx-ceph02:/var/local/osd0 wx-ceph03:/var/local/osd1
3) Activate the OSDs
Finally, activate the OSDs.
Format:
ceph-deploy osd activate {ceph-node}:/path/to/directory
Run:
ceph-deploy osd activate wx-ceph02:/var/local/osd0 wx-ceph03:/var/local/osd1
5. Configure OSDs (using disks as OSDs)
Adding and removing Ceph OSD Daemons to your cluster may involve a few more steps when compared to adding and removing other Ceph daemons. Ceph OSD Daemons write data to the disk and to journals. So you need to provide a disk for the OSD and a path to the journal partition (i.e., this is the most common configuration, but you may configure your system to your own needs).
In Ceph v0.60 and later releases, Ceph supports dm-crypt on disk encryption. You may specify the --dmcrypt argument when preparing an OSD to tell ceph-deploy that you want to use encryption. You may also specify the --dmcrypt-key-dir argument to specify the location of dm-crypt encryption keys.
You should test various drive configurations to gauge their throughput before building out a large cluster. See Data Storage for additional details.
1) LIST DISKS
To list the disks on a node, execute the following command:
ceph-deploy disk list {node-name [node-name]...}
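For example, to list the disks on one of the OSD nodes used earlier in this guide:
ceph-deploy disk list wx-ceph02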
2) ZAP DISKS
To zap a disk (delete its partition table) in preparation for use with Ceph, execute the following:
ceph-deploy disk zap {osd-server-name}:{disk-name}
ceph-deploy disk zap osdserver1:sdb
Important: This will delete all data.
3) PREPARE OSDS
Once you create a cluster, install Ceph packages, and gather keys, you may prepare the OSDs and deploy them to the OSD node(s). If you need to identify a disk or zap it prior to preparing it for use as an OSD, see List Disks and Zap Disks.
ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
ceph-deploy osd prepare osdserver1:sdb:/dev/ssd
ceph-deploy osd prepare osdserver1:sdc:/dev/ssd
Note: When running multiple Ceph OSD daemons on a single node, and sharing a partitioned journal with each OSD daemon, you should consider the entire node the minimum failure domain for CRUSH purposes, because if the SSD drive fails, all of the Ceph OSD daemons that journal to it will fail too.
4) ACTIVATE OSDS
Once you prepare an OSD you may activate it with the following command.
ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]
ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1
ceph-deploy osd activate osdserver1:/dev/sdc1:/dev/ssd2
The activate command will cause your OSD to come up and be placed in the cluster. The activate command uses the path to the partition created when running the prepare command.
Alternatively, steps 3 and 4 can be combined into a single step:
3&4) CREATE OSDS
You may prepare OSDs, deploy them to the OSD node(s) and activate them in one step with the create command. The create command is a convenience method for executing the prepare and activate command sequentially.
ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
ceph-deploy osd create osdserver1:sdb:/dev/ssd1
6. Copy the configuration files
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
1) Format:
ceph-deploy admin {admin-node} {ceph-node}
Run:
ceph-deploy admin wx-ceph-admin wx-ceph01 wx-ceph02 wx-ceph03
When ceph-deploy is talking to the local admin host (admin-node), it must be reachable by its hostname. If necessary, modify /etc/hosts to add the name of the admin host.
2) Ensure that you have the correct permissions for the ceph.client.admin.keyring.
Run on all nodes:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
3) Check your cluster's health.
Run:
ceph health
Your cluster should return an active + clean state when it has finished peering.
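For reference, on a healthy cluster ceph health prints simply:
HEALTH_OK
If it still reports HEALTH_WARN, wait a moment and check again; the warning normally clears once peering completes.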
7. Install the MDS
To use CephFS, you need at least one metadata server. Execute the following to create a metadata server:
Format:
ceph-deploy mds create {ceph-node}
Run:
ceph-deploy mds create wx-ceph01
Note: Currently Ceph runs in production with one metadata server only. You may use more, but there is currently no commercial support for a cluster with multiple metadata servers.
8. Running Ceph
(Reference:)
When deploying Ceph Cuttlefish and beyond with ceph-deploy on Ubuntu, you may start and stop Ceph daemons on a Ceph Node using the event-based Upstart. Upstart does not require you to define daemon instances in the Ceph configuration file.
1) Check the running status of Ceph daemons
To list the Ceph Upstart jobs and instances on a node, execute:
sudo initctl list | grep ceph
See initctl (manpages.ubuntu.com/manpages/raring/en/man8/initctl.8.html) for additional details.
2) Start all Ceph daemons
To start all daemons on a Ceph Node (irrespective of type), execute the following:
sudo start ceph-all
3) Stop all Ceph daemons
To stop all daemons on a Ceph Node (irrespective of type), execute the following:
sudo stop ceph-all
4) Start all Ceph daemons of a given type
To start all daemons of a particular type on a Ceph Node, execute one of the following:
sudo start ceph-osd-all
sudo start ceph-mon-all
sudo start ceph-mds-all
5) Stop all Ceph daemons of a given type
To stop all daemons of a particular type on a Ceph Node, execute one of the following:
sudo stop ceph-osd-all
sudo stop ceph-mon-all
sudo stop ceph-mds-all
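Upstart can also manage a single daemon instance by id; as a sketch (the id values below are illustrative):
sudo start ceph-osd id=0
sudo stop ceph-osd id=0
sudo start ceph-mon id=wx-ceph01
sudo stop ceph-mon id=wx-ceph01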
Part 3: Usage
1. Cluster monitoring
1) Interactive mode
To run the ceph tool in interactive mode, type ceph at the command line with no arguments. For example:
ceph
ceph> health
ceph> status
ceph> quorum_status
ceph> mon_status
ceph> q (type q to quit interactive mode)
2) Check cluster health
After you start your cluster, and before you start reading and/or writing data, check your cluster’s health first. You can check on the health of your Ceph cluster with the following:
ceph health
3) WATCHING A CLUSTER
To watch the cluster’s ongoing events, open a new terminal. Then, enter:
ceph -w
4) CHECKING A CLUSTER'S USAGE STATS
To check a cluster’s data usage and data distribution among pools, you can use the df option. It is similar to Linux df. Execute the following:
ceph df
5) CHECKING A CLUSTER'S STATUS
To check a cluster’s status, execute the following:
ceph status
Or:
ceph -s
In interactive mode, type status and press Enter.
ceph> status
6) CHECKING OSD STATUS
You can check OSDs to ensure they are up and in by executing:
ceph osd stat
Or:
ceph osd dump
You can also view OSDs according to their position in the CRUSH map:
ceph osd tree
To check placement group (PG) states, refer to:
7) CHECKING MONITOR STATUS
If your cluster has multiple monitors (likely), you should check the monitor quorum status after you start the cluster and before reading and/or writing data. A quorum must be present when multiple monitors are running. You should also check monitor status periodically to ensure that they are running.
To display the monitor map, execute the following:
ceph mon stat
Or:
ceph mon dump
To check the quorum status for the monitor cluster, execute the following:
ceph quorum_status
8) CHECKING MDS STATUS
Metadata servers provide metadata services for Ceph FS. Metadata servers have two sets of states:
up | down and active | inactive. To ensure your metadata servers are up and active, execute the following:
ceph mds stat
To display details of the metadata cluster, execute the following:
ceph mds dump
2. User management
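Users and their keys are managed with the ceph auth subcommands; a minimal sketch, in which the client name, pool name and capabilities are illustrative:
ceph auth list
ceph auth get-or-create client.john mon 'allow r' osd 'allow rw pool=mypool'
ceph auth get client.john
ceph auth del client.john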
3. Pool management
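Pools are managed with the ceph osd pool subcommands; a minimal sketch, in which the pool name and PG count are illustrative:
ceph osd lspools
ceph osd pool create mypool 64
ceph osd pool set mypool size 2
ceph osd pool get mypool pg_num
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it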
Part 4: CephFS
1. Create a Ceph file system
Perform the following on the MDS node (wx-ceph01).
Reference:
Tip: The ceph fs new command was introduced in Ceph 0.84. Prior to this release, no manual steps are required to create a filesystem, and pools named data and metadata exist by default.
The Ceph command line now includes commands for creating and removing filesystems, but at present only one filesystem may exist at a time.
A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata. When configuring these pools, you might consider:
(1)Using a higher replication level for the metadata pool, as any data loss in this pool can render the whole filesystem inaccessible.
(2)Using lower-latency storage such as SSDs for the metadata pool, as this will directly affect the observed latency of filesystem operations on clients.
Refer to Pools to learn more about managing pools. For example, to create two pools with default settings for use with a filesystem, you might run the following commands:
格式:
$ ceph osd pool create cephfs_data <pg_num>
$ ceph osd pool create cephfs_metadata <pg_num>
Once the pools are created, you may enable the filesystem using the fs new command:
1) First run ceph pg stat to check the PG status and see how many PGs the cluster already has, so that the PG counts chosen here do not push the total number of PGs too high.
2) Run:
ceph osd pool create cephfs_data 32
ceph osd pool create cephfs_metadata 16
Note: the PG counts 32 and 16 above are optional fields.
3) Create the file system
Format:
$ ceph fs new <fs_name> <metadata> <data>
For example:
执行:
$ ceph fs new cephfs cephfs_metadata cephfs_data
$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
4) Once a filesystem has been created, your MDS(s) will be able to enter an active state. For example, in a single MDS system:
$ ceph mds stat
e5: 1/1/1 up {0=a=up:active}
Once the filesystem is created and the MDS is active, you are ready to mount the filesystem.
2. Mount the Ceph file system
Perform the following on the admin node, or:
(1) Create a new virtual machine, assuming its hostname is ceph-client01
(2) On the admin node, run ceph-deploy install ceph-client01
and then perform the steps below on ceph-client01.
There are two mount methods: the kernel client (supported since Linux kernel 2.6.34) and ceph-fuse. The kernel client runs in kernel space, while ceph-fuse runs in user space; the kernel client is more efficient, so it is the recommended method.
1) In the /etc/ceph/ directory, find the file named ceph.client.admin.keyring.
2) Open ceph.client.admin.keyring; it contains a line of the form key = xxx.
3) Save the key into a file named admin.secret in the directory from which you will run the mount command.
The file content is:
AQCImhZVyFAkDhAAKvs7RQNYmsQnmr830M9Osg==
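One way to produce the admin.secret file is to extract the key field directly from the keyring (a sketch; adjust the path if your keyring lives elsewhere):
sudo grep key /etc/ceph/ceph.client.admin.keyring | awk '{print $3}' > admin.secret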
4) To mount the Ceph file system you may use the mount command if you know the monitor host IP address(es), or use the mount.ceph utility to resolve the monitor host name(s) into IP address(es) for you. For example:
Run:
sudo mount -t ceph 172.16.100.42:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret
5) To unmount the Ceph file system, you may use the umount command. For example:
sudo umount /mnt/mycephfs
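For the ceph-fuse method mentioned earlier, a rough equivalent (assuming the ceph-fuse package is installed and /etc/ceph/ contains ceph.conf and the admin keyring) would be:
sudo mkdir -p /mnt/mycephfs
sudo ceph-fuse -m 172.16.100.42:6789 /mnt/mycephfs
and to unmount:
sudo umount /mnt/mycephfs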