Ceph Installation and Configuration

Environment

5 virtual machines
OS: CentOS Linux release 7.4.1708 (Core) (the install logs below show some nodes still on 7.2.1511)

10.143.248.200 t2-ceph-test0 #ceph_deploy, mon
10.143.248.202 t2-ceph-test1 #osd,mon,mds
10.143.248.203 t2-ceph-test2 #osd,mon
10.143.248.204 t2-ceph-test3 #osd
10.143.248.205 t2-ceph-test4 #client
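
These entries are in /etc/hosts format; they are assumed to be present in /etc/hosts on every node so that the short hostnames ceph-deploy uses resolve, e.g.:

$ sudo tee -a /etc/hosts <<'EOF'
10.143.248.200 t2-ceph-test0
10.143.248.202 t2-ceph-test1
10.143.248.203 t2-ceph-test2
10.143.248.204 t2-ceph-test3
10.143.248.205 t2-ceph-test4
EOF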

Installation and Configuration

Preparation before installation; run the following on every machine.

# Configure dependencies
$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
$ sudo yum install --nogpgcheck -y epel-release
$ sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
$ sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

# Add the ceph.repo repo file
$ sudo vim /etc/yum.repos.d/ceph.repo
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
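
Note that the priority=1 line only takes effect when yum-plugin-priorities is installed; ceph-deploy handles this itself, as the `yum -y install yum-plugin-priorities` step in the install log further down shows.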

$ sudo yum update
# Install the deployment tool ceph-deploy on t2-ceph-test0
$ sudo yum -y install ceph-deploy

# Install NTP and the OpenSSH server
$ sudo yum -y install ntp ntpdate ntp-doc openssh-server

# Create a deployment user. Do not use "ceph" as the username: from the Infernalis release on, the username "ceph" is reserved for the Ceph daemons. If a "ceph" user already exists on a node, it must be removed before upgrading.
$ sudo useradd -d /home/cephuser -m cephuser
(/bin/echo "123456" ;sleep 1;/bin/echo "123456") | sudo passwd cephuser
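
On CentOS the same thing can be done non-interactively with passwd --stdin, so an equivalent one-liner is:

$ echo "123456" | sudo passwd --stdin cephuser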

# Make sure cephuser has passwordless sudo rights
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser

# Passwordless SSH login setup is omitted here; a typical version is sketched below
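(For reference, the omitted step usually looks like this, run as cephuser on the admin node; the hostnames are from this cluster:)

$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
$ for h in t2-ceph-test1 t2-ceph-test2 t2-ceph-test3 t2-ceph-test4; do ssh-copy-id cephuser@$h; done
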
# Disable SELinux
sudo setenforce 0 && sudo sed -i '/^SELINUX=/ s/=.*$/=disabled/g' /etc/selinux/config
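
A quick sanity check that the runtime switch took effect (it should report Permissive until the next reboot; the config edit keeps it off afterwards):

$ getenforce
Permissive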

# Disable requiretty
sudo visudo
Append at the end:
Defaults:cephuser !requiretty

# Set up the SSH config file
cat /home/cephuser/.ssh/config
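
The file contents are not reproduced above; a minimal example of the kind of entries ceph-deploy relies on (hostnames from this cluster, user cephuser assumed):

Host t2-ceph-test1
    Hostname t2-ceph-test1
    User cephuser
Host t2-ceph-test2
    Hostname t2-ceph-test2
    User cephuser
Host t2-ceph-test3
    Hostname t2-ceph-test3
    User cephuser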


# Make sure the config file permissions are 644
-rw-r--r--. 1 cephuser cephuser 232 May 9 11:46 /home/cephuser/.ssh/config

Prepare the admin node t2-ceph-test0. The admin node logs in to the other nodes over SSH to run the deployment commands.

# Important: if you log in as a different unprivileged user, do not run ceph-deploy with sudo or as root, because it will not invoke the sudo commands it needs on the remote hosts.

# Create a directory on t2-ceph-test0 to hold the configuration files and keys that ceph-deploy generates
# Note: all of the following commands must be run inside the my-cluster directory, otherwise you get the error below
# “[ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No such file or directory: 'ceph.conf'; has `ceph-deploy new` been run in this directory?”
$ su - cephuser
$ mkdir my-cluster
$ cd my-cluster

# Create the cluster
$ ceph-deploy new t2-ceph-test0
$ ll 
total 12
-rw-rw-r--. 1 cephuser cephuser 263 May 9 16:40 ceph.conf
-rw-rw-r--. 1 cephuser cephuser 3145 May 9 16:37 ceph-deploy-ceph.log
-rw-------. 1 cephuser cephuser 73 May 9 16:37 ceph.mon.keyring

# Add the default replica count and the public network
$ vim ceph.conf
[global]
fsid = c74124f5-21e2-48ed-b723-fb750a9d4e83
mon_initial_members = t2-ceph-test0
mon_host = 10.143.248.200
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 10.143.248.0/24

Install Ceph on all the nodes via the admin node

$ ceph-deploy install t2-ceph-test0 t2-ceph-test1 t2-ceph-test2 t2-ceph-test3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy install t2-ceph-test0 t2-ceph-test1 t2-ceph-test2 t2-ceph-test3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x242c248>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f411d1e7c80>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['t2-ceph-test0', 't2-ceph-test1', 't2-ceph-test2', 't2-ceph-test3']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts t2-ceph-test0 t2-ceph-test1 t2-ceph-test2 t2-ceph-test3
[ceph_deploy.install][DEBUG ] Detecting platform for host t2-ceph-test0 ...
[t2-ceph-test0][DEBUG ] connection detected need for sudo
[t2-ceph-test0][DEBUG ] connected to host: t2-ceph-test0
[t2-ceph-test0][DEBUG ] detect platform information from remote host
[t2-ceph-test0][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.4.1708 Core
[t2-ceph-test0][INFO ] installing Ceph on t2-ceph-test0
[t2-ceph-test0][INFO ] Running command: sudo yum clean all
[t2-ceph-test0][WARNIN] Repository base is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository updates is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository extras is listed more than once in the configuration
[t2-ceph-test0][DEBUG ] Cleaning repos: Ceph Ceph-noarch base bdp ceph-source epel extras updates
[t2-ceph-test0][DEBUG ] Cleaning up everything
[t2-ceph-test0][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[t2-ceph-test0][INFO ] Running command: sudo yum -y install epel-release
[t2-ceph-test0][WARNIN] Repository base is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository updates is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository extras is listed more than once in the configuration
[t2-ceph-test0][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[t2-ceph-test0][DEBUG ] Nothing to do
[t2-ceph-test0][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[t2-ceph-test0][WARNIN] Repository base is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository updates is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository extras is listed more than once in the configuration
[t2-ceph-test0][DEBUG ] Package yum-plugin-priorities-1.1.31-42.el7.noarch already installed and latest version
[t2-ceph-test0][DEBUG ] Nothing to do
[t2-ceph-test0][DEBUG ] Configure Yum priorities to include obsoletes
[t2-ceph-test0][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[t2-ceph-test0][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[t2-ceph-test0][INFO ] Running command: sudo yum remove -y ceph-release
[t2-ceph-test0][WARNIN] Repository base is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository updates is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository extras is listed more than once in the configuration
[t2-ceph-test0][DEBUG ] Resolving Dependencies
[t2-ceph-test0][DEBUG ] --> Running transaction check
[t2-ceph-test0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[t2-ceph-test0][DEBUG ] --> Finished Dependency Resolution
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Dependencies Resolved
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] ================================================================================
[t2-ceph-test0][DEBUG ] Package Arch Version Repository Size
[t2-ceph-test0][DEBUG ] ================================================================================
[t2-ceph-test0][DEBUG ] Removing:
[t2-ceph-test0][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Transaction Summary
[t2-ceph-test0][DEBUG ] ================================================================================
[t2-ceph-test0][DEBUG ] Remove 1 Package
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Installed size: 535
[t2-ceph-test0][DEBUG ] Downloading packages:
[t2-ceph-test0][DEBUG ] Running transaction check
[t2-ceph-test0][DEBUG ] Running transaction test
[t2-ceph-test0][DEBUG ] Transaction test succeeded
[t2-ceph-test0][DEBUG ] Running transaction
[t2-ceph-test0][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[t2-ceph-test0][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[t2-ceph-test0][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Removed:
[t2-ceph-test0][DEBUG ] ceph-release.noarch 0:1-1.el7
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Complete!
[t2-ceph-test0][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[t2-ceph-test0][WARNIN] Repository base is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository updates is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository extras is listed more than once in the configuration
[t2-ceph-test0][DEBUG ] Examining /var/tmp/yum-root-6rNNaT/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[t2-ceph-test0][DEBUG ] Marking /var/tmp/yum-root-6rNNaT/ceph-release-1-0.el7.noarch.rpm to be installed
[t2-ceph-test0][DEBUG ] Resolving Dependencies
[t2-ceph-test0][DEBUG ] --> Running transaction check
[t2-ceph-test0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[t2-ceph-test0][DEBUG ] --> Finished Dependency Resolution
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Dependencies Resolved
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] ================================================================================
[t2-ceph-test0][DEBUG ] Package Arch Version Repository Size
[t2-ceph-test0][DEBUG ] ================================================================================
[t2-ceph-test0][DEBUG ] Installing:
[t2-ceph-test0][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Transaction Summary
[t2-ceph-test0][DEBUG ] ================================================================================
[t2-ceph-test0][DEBUG ] Install 1 Package
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Total size: 535
[t2-ceph-test0][DEBUG ] Installed size: 535
[t2-ceph-test0][DEBUG ] Downloading packages:
[t2-ceph-test0][DEBUG ] Running transaction check
[t2-ceph-test0][DEBUG ] Running transaction test
[t2-ceph-test0][DEBUG ] Transaction test succeeded
[t2-ceph-test0][DEBUG ] Running transaction
[t2-ceph-test0][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[t2-ceph-test0][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Installed:
[t2-ceph-test0][DEBUG ] ceph-release.noarch 0:1-1.el7
[t2-ceph-test0][DEBUG ]
[t2-ceph-test0][DEBUG ] Complete!
[t2-ceph-test0][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[t2-ceph-test0][WARNIN] altered ceph.repo priorities to contain: priority=1
[t2-ceph-test0][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[t2-ceph-test0][WARNIN] Repository base is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository updates is listed more than once in the configuration
[t2-ceph-test0][WARNIN] Repository extras is listed more than once in the configuration
[t2-ceph-test0][DEBUG ] Package 1:ceph-10.2.10-0.el7.x86_64 already installed and latest version
[t2-ceph-test0][DEBUG ] Package 1:ceph-radosgw-10.2.10-0.el7.x86_64 already installed and latest version
[t2-ceph-test0][DEBUG ] Nothing to do
[t2-ceph-test0][INFO ] Running command: sudo ceph --version
[t2-ceph-test0][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
[ceph_deploy.install][DEBUG ] Detecting platform for host t2-ceph-test1 ...
[t2-ceph-test1][DEBUG ] connection detected need for sudo
[t2-ceph-test1][DEBUG ] connected to host: t2-ceph-test1
[t2-ceph-test1][DEBUG ] detect platform information from remote host
[t2-ceph-test1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[t2-ceph-test1][INFO ] installing Ceph on t2-ceph-test1
[t2-ceph-test1][INFO ] Running command: sudo yum clean all
[t2-ceph-test1][DEBUG ] Cleaning repos: base bdp epel extras updates
[t2-ceph-test1][DEBUG ] Cleaning up everything
[t2-ceph-test1][INFO ] Running command: sudo yum -y install epel-release
[t2-ceph-test1][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[t2-ceph-test1][DEBUG ] Nothing to do
[t2-ceph-test1][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[t2-ceph-test1][DEBUG ] Package yum-plugin-priorities-1.1.31-42.el7.noarch already installed and latest version
[t2-ceph-test1][DEBUG ] Nothing to do
[t2-ceph-test1][DEBUG ] Configure Yum priorities to include obsoletes
[t2-ceph-test1][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[t2-ceph-test1][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[t2-ceph-test1][INFO ] Running command: sudo yum remove -y ceph-release
[t2-ceph-test1][WARNIN] No Match for argument: ceph-release
[t2-ceph-test1][DEBUG ] No Packages marked for removal
[t2-ceph-test1][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[t2-ceph-test1][DEBUG ] Examining /var/tmp/yum-root-iH75qP/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[t2-ceph-test1][DEBUG ] Marking /var/tmp/yum-root-iH75qP/ceph-release-1-0.el7.noarch.rpm to be installed
[t2-ceph-test1][DEBUG ] Resolving Dependencies
[t2-ceph-test1][DEBUG ] --> Running transaction check
[t2-ceph-test1][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[t2-ceph-test1][DEBUG ] --> Finished Dependency Resolution
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Dependencies Resolved
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] ================================================================================
[t2-ceph-test1][DEBUG ] Package Arch Version Repository Size
[t2-ceph-test1][DEBUG ] ================================================================================
[t2-ceph-test1][DEBUG ] Installing:
[t2-ceph-test1][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Transaction Summary
[t2-ceph-test1][DEBUG ] ================================================================================
[t2-ceph-test1][DEBUG ] Install 1 Package
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Total size: 535
[t2-ceph-test1][DEBUG ] Installed size: 535
[t2-ceph-test1][DEBUG ] Downloading packages:
[t2-ceph-test1][DEBUG ] Running transaction check
[t2-ceph-test1][DEBUG ] Running transaction test
[t2-ceph-test1][DEBUG ] Transaction test succeeded
[t2-ceph-test1][DEBUG ] Running transaction
[t2-ceph-test1][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[t2-ceph-test1][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Installed:
[t2-ceph-test1][DEBUG ] ceph-release.noarch 0:1-1.el7
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Complete!
[t2-ceph-test1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[t2-ceph-test1][WARNIN] altered ceph.repo priorities to contain: priority=1
[t2-ceph-test1][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[t2-ceph-test1][DEBUG ] Resolving Dependencies
[t2-ceph-test1][DEBUG ] --> Running transaction check
[t2-ceph-test1][DEBUG ] ---> Package ceph.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] --> Processing Dependency: ceph-mon = 1:10.2.10-0.el7 for package: 1:ceph-10.2.10-0.el7.x86_64
[t2-ceph-test1][DEBUG ] --> Processing Dependency: ceph-mds = 1:10.2.10-0.el7 for package: 1:ceph-10.2.10-0.el7.x86_64
[t2-ceph-test1][DEBUG ] --> Processing Dependency: ceph-osd = 1:10.2.10-0.el7 for package: 1:ceph-10.2.10-0.el7.x86_64
[t2-ceph-test1][DEBUG ] ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] --> Processing Dependency: ceph-common = 1:10.2.10-0.el7 for package: 1:ceph-radosgw-10.2.10-0.el7.x86_64
[t2-ceph-test1][DEBUG ] --> Processing Dependency: ceph-selinux = 1:10.2.10-0.el7 for package: 1:ceph-radosgw-10.2.10-0.el7.x86_64
[t2-ceph-test1][DEBUG ] --> Running transaction check
[t2-ceph-test1][DEBUG ] ---> Package ceph-common.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] ---> Package ceph-mds.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] --> Processing Dependency: ceph-base = 1:10.2.10-0.el7 for package: 1:ceph-mds-10.2.10-0.el7.x86_64
[t2-ceph-test1][DEBUG ] ---> Package ceph-mon.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] ---> Package ceph-osd.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] ---> Package ceph-selinux.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] --> Running transaction check
[t2-ceph-test1][DEBUG ] ---> Package ceph-base.x86_64 1:10.2.10-0.el7 will be installed
[t2-ceph-test1][DEBUG ] --> Finished Dependency Resolution
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Dependencies Resolved
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] ================================================================================
[t2-ceph-test1][DEBUG ] Package Arch Version Repository Size
[t2-ceph-test1][DEBUG ] ================================================================================
[t2-ceph-test1][DEBUG ] Installing:
[t2-ceph-test1][DEBUG ] ceph x86_64 1:10.2.10-0.el7 Ceph 3.0 k
[t2-ceph-test1][DEBUG ] ceph-radosgw x86_64 1:10.2.10-0.el7 Ceph 266 k
[t2-ceph-test1][DEBUG ] Installing for dependencies:
[t2-ceph-test1][DEBUG ] ceph-base x86_64 1:10.2.10-0.el7 Ceph 4.2 M
[t2-ceph-test1][DEBUG ] ceph-common x86_64 1:10.2.10-0.el7 Ceph 17 M
[t2-ceph-test1][DEBUG ] ceph-mds x86_64 1:10.2.10-0.el7 Ceph 2.8 M
[t2-ceph-test1][DEBUG ] ceph-mon x86_64 1:10.2.10-0.el7 Ceph 2.8 M
[t2-ceph-test1][DEBUG ] ceph-osd x86_64 1:10.2.10-0.el7 Ceph 9.1 M
[t2-ceph-test1][DEBUG ] ceph-selinux x86_64 1:10.2.10-0.el7 Ceph 20 k
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Transaction Summary
[t2-ceph-test1][DEBUG ] ================================================================================
[t2-ceph-test1][DEBUG ] Install 2 Packages (+6 Dependent packages)
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Total download size: 36 M
[t2-ceph-test1][DEBUG ] Installed size: 134 M
[t2-ceph-test1][DEBUG ] Downloading packages:
[t2-ceph-test1][DEBUG ] --------------------------------------------------------------------------------
[t2-ceph-test1][DEBUG ] Total 1.3 MB/s | 36 MB 00:26
[t2-ceph-test1][DEBUG ] Running transaction check
[t2-ceph-test1][DEBUG ] Running transaction test
[t2-ceph-test1][DEBUG ] Transaction test succeeded
[t2-ceph-test1][DEBUG ] Running transaction
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-common-10.2.10-0.el7.x86_64 1/8
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-base-10.2.10-0.el7.x86_64 2/8
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-selinux-10.2.10-0.el7.x86_64 3/8
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-mds-10.2.10-0.el7.x86_64 4/8
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-mon-10.2.10-0.el7.x86_64 5/8
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-osd-10.2.10-0.el7.x86_64 6/8
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-10.2.10-0.el7.x86_64 7/8
[t2-ceph-test1][DEBUG ] Installing : 1:ceph-radosgw-10.2.10-0.el7.x86_64 8/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-mds-10.2.10-0.el7.x86_64 2/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-selinux-10.2.10-0.el7.x86_64 3/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-base-10.2.10-0.el7.x86_64 4/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-common-10.2.10-0.el7.x86_64 5/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-10.2.10-0.el7.x86_64 6/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-mon-10.2.10-0.el7.x86_64 7/8
[t2-ceph-test1][DEBUG ] Verifying : 1:ceph-osd-10.2.10-0.el7.x86_64 8/8
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Installed:
[t2-ceph-test1][DEBUG ] ceph.x86_64 1:10.2.10-0.el7 ceph-radosgw.x86_64 1:10.2.10-0.el7
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Dependency Installed:
[t2-ceph-test1][DEBUG ] ceph-base.x86_64 1:10.2.10-0.el7 ceph-common.x86_64 1:10.2.10-0.el7
[t2-ceph-test1][DEBUG ] ceph-mds.x86_64 1:10.2.10-0.el7 ceph-mon.x86_64 1:10.2.10-0.el7
[t2-ceph-test1][DEBUG ] ceph-osd.x86_64 1:10.2.10-0.el7 ceph-selinux.x86_64 1:10.2.10-0.el7
[t2-ceph-test1][DEBUG ]
[t2-ceph-test1][DEBUG ] Complete!
[t2-ceph-test1][INFO ] Running command: sudo ceph --version
[t2-ceph-test1][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
[ceph_deploy.install][DEBUG ] Detecting platform for host t2-ceph-test2 ...
Output for t2-ceph-test2 and t2-ceph-test3 omitted

Configure the initial monitor(s) and gather all the keys. This sets up t2-ceph-test0 as the mon node; monitor nodes place very low demands on hardware. A mon node tracks the running state of the entire cluster, based on state reported by the daemons on the cluster's nodes. The Ceph monitor maintains the OSD map, PG map, MDS map, CRUSH map and so on; together these are called the cluster map.

$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1962fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x195c5f0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts t2-ceph-test0
[ceph_deploy.mon][DEBUG ] detecting platform for host t2-ceph-test0 ...
[t2-ceph-test0][DEBUG ] connection detected need for sudo
[t2-ceph-test0][DEBUG ] connected to host: t2-ceph-test0
[t2-ceph-test0][DEBUG ] detect platform information from remote host
[t2-ceph-test0][DEBUG ] detect machine type
[t2-ceph-test0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.4.1708 Core
[t2-ceph-test0][DEBUG ] determining if provided host has same hostname in remote
[t2-ceph-test0][DEBUG ] get remote short hostname
[t2-ceph-test0][DEBUG ] deploying mon to t2-ceph-test0
[t2-ceph-test0][DEBUG ] get remote short hostname
[t2-ceph-test0][DEBUG ] remote hostname: t2-ceph-test0
[t2-ceph-test0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[t2-ceph-test0][DEBUG ] create the mon path if it does not exist
[t2-ceph-test0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-t2-ceph-test0/done
[t2-ceph-test0][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-t2-ceph-test0/done
[t2-ceph-test0][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-t2-ceph-test0.mon.keyring
[t2-ceph-test0][DEBUG ] create the monitor keyring file
[t2-ceph-test0][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i t2-ceph-test0 --keyring /var/lib/ceph/tmp/ceph-t2-ceph-test0.mon.keyring --setuser 1000 --setgroup 1000
[t2-ceph-test0][DEBUG ] ceph-mon: renaming mon.noname-a 10.143.248.200:6789/0 to mon.t2-ceph-test0
[t2-ceph-test0][DEBUG ] ceph-mon: set fsid to c74124f5-21e2-48ed-b723-fb750a9d4e83
[t2-ceph-test0][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-t2-ceph-test0 for mon.t2-ceph-test0
[t2-ceph-test0][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-t2-ceph-test0.mon.keyring
[t2-ceph-test0][DEBUG ] create a done file to avoid re-doing the mon deployment
[t2-ceph-test0][DEBUG ] create the init path if it does not exist
[t2-ceph-test0][INFO ] Running command: sudo systemctl enable ceph.target
[t2-ceph-test0][INFO ] Running command: sudo systemctl enable ceph-mon@t2-ceph-test0
[t2-ceph-test0][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@t2-ceph-test0.service to /usr/lib/systemd/system/ceph-mon@.service.
[t2-ceph-test0][INFO ] Running command: sudo systemctl start ceph-mon@t2-ceph-test0
[t2-ceph-test0][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.t2-ceph-test0.asok mon_status
[t2-ceph-test0][DEBUG ] ********************************************************************************
[t2-ceph-test0][DEBUG ] status for monitor: mon.t2-ceph-test0
[t2-ceph-test0][DEBUG ] {
[t2-ceph-test0][DEBUG ] "election_epoch": 3,
[t2-ceph-test0][DEBUG ] "extra_probe_peers": [],
[t2-ceph-test0][DEBUG ] "monmap": {
[t2-ceph-test0][DEBUG ] "created": "2018-05-09 16:55:50.647593",
[t2-ceph-test0][DEBUG ] "epoch": 1,
[t2-ceph-test0][DEBUG ] "fsid": "c74124f5-21e2-48ed-b723-fb750a9d4e83",
[t2-ceph-test0][DEBUG ] "modified": "2018-05-09 16:55:50.647593",
[t2-ceph-test0][DEBUG ] "mons": [
[t2-ceph-test0][DEBUG ] {
[t2-ceph-test0][DEBUG ] "addr": "10.143.248.200:6789/0",
[t2-ceph-test0][DEBUG ] "name": "t2-ceph-test0",
[t2-ceph-test0][DEBUG ] "rank": 0
[t2-ceph-test0][DEBUG ] }
[t2-ceph-test0][DEBUG ] ]
[t2-ceph-test0][DEBUG ] },
[t2-ceph-test0][DEBUG ] "name": "t2-ceph-test0",
[t2-ceph-test0][DEBUG ] "outside_quorum": [],
[t2-ceph-test0][DEBUG ] "quorum": [
[t2-ceph-test0][DEBUG ] 0
[t2-ceph-test0][DEBUG ] ],
[t2-ceph-test0][DEBUG ] "rank": 0,
[t2-ceph-test0][DEBUG ] "state": "leader",
[t2-ceph-test0][DEBUG ] "sync_provider": []
[t2-ceph-test0][DEBUG ] }
[t2-ceph-test0][DEBUG ] ********************************************************************************
[t2-ceph-test0][INFO ] monitor: mon.t2-ceph-test0 is running
[t2-ceph-test0][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.t2-ceph-test0.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.t2-ceph-test0
[t2-ceph-test0][DEBUG ] connection detected need for sudo
[t2-ceph-test0][DEBUG ] connected to host: t2-ceph-test0
[t2-ceph-test0][DEBUG ] detect platform information from remote host
[t2-ceph-test0][DEBUG ] detect machine type
[t2-ceph-test0][DEBUG ] find the location of an executable
[t2-ceph-test0][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.t2-ceph-test0.asok mon_status
[ceph_deploy.mon][INFO ] mon.t2-ceph-test0 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpsxutCO
[t2-ceph-test0][DEBUG ] connection detected need for sudo
[t2-ceph-test0][DEBUG ] connected to host: t2-ceph-test0
[t2-ceph-test0][DEBUG ] detect platform information from remote host
[t2-ceph-test0][DEBUG ] detect machine type
[t2-ceph-test0][DEBUG ] get remote short hostname
[t2-ceph-test0][DEBUG ] fetch remote file
[t2-ceph-test0][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.t2-ceph-test0.asok mon_status
[t2-ceph-test0][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-t2-ceph-test0/keyring auth get client.admin
[t2-ceph-test0][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-t2-ceph-test0/keyring auth get client.bootstrap-mds
[t2-ceph-test0][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-t2-ceph-test0/keyring auth get client.bootstrap-mgr
[t2-ceph-test0][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-t2-ceph-test0/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[t2-ceph-test0][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-t2-ceph-test0/keyring auth get client.bootstrap-osd
[t2-ceph-test0][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-t2-ceph-test0/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpsxutCO
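
After gatherkeys finishes, the keyrings named in the log above sit in the my-cluster directory alongside the files created by ceph-deploy new:

$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring
ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring   ceph.conf
ceph-deploy-ceph.log        ceph.mon.keyring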

List the disks available on the nodes

$ ceph-deploy disk list t2-ceph-test2 t2-ceph-test3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy disk list t2-ceph-test2 t2-ceph-test3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1323fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x1319ed8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('t2-ceph-test2', None, None), ('t2-ceph-test3', None, None)]
[t2-ceph-test2][DEBUG ] connection detected need for sudo
[t2-ceph-test2][DEBUG ] connected to host: t2-ceph-test2
[t2-ceph-test2][DEBUG ] detect platform information from remote host
[t2-ceph-test2][DEBUG ] detect machine type
[t2-ceph-test2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Listing disks on t2-ceph-test2...
[t2-ceph-test2][DEBUG ] find the location of an executable
[t2-ceph-test2][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[t2-ceph-test2][DEBUG ] /dev/dm-0 swap, swap
[t2-ceph-test2][DEBUG ] /dev/dm-1 other, xfs, mounted on /
[t2-ceph-test2][DEBUG ] /dev/sr0 other, unknown
[t2-ceph-test2][DEBUG ] /dev/xvda :
[t2-ceph-test2][DEBUG ] /dev/xvda2 other, LVM2_member
[t2-ceph-test2][DEBUG ] /dev/xvda1 other, xfs, mounted on /boot
[t2-ceph-test2][DEBUG ] /dev/xvdb :
[t2-ceph-test2][DEBUG ] /dev/xvdb1 other, xfs
Output for t2-ceph-test3 omitted

Zap the disks on the nodes. Warning: this step destroys all data on the disks.

$ ceph-deploy disk zap t2-ceph-test2:/dev/xvdb t2-ceph-test3:/dev/xvdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy disk zap t2-ceph-test2:/dev/xvdb t2-ceph-test3:/dev/xvdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1b03fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x1af9ed8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('t2-ceph-test2', '/dev/xvdb', None), ('t2-ceph-test3', '/dev/xvdb', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/xvdb on t2-ceph-test2
[t2-ceph-test2][DEBUG ] connection detected need for sudo
[t2-ceph-test2][DEBUG ] connected to host: t2-ceph-test2
[t2-ceph-test2][DEBUG ] detect platform information from remote host
[t2-ceph-test2][DEBUG ] detect machine type
[t2-ceph-test2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[t2-ceph-test2][DEBUG ] zeroing last few blocks of device
[t2-ceph-test2][DEBUG ] find the location of an executable
[t2-ceph-test2][INFO ] Running command: sudo /usr/sbin/ceph-disk zap /dev/xvdb
[t2-ceph-test2][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[t2-ceph-test2][WARNIN] backup header from main header.
[t2-ceph-test2][WARNIN]
[t2-ceph-test2][WARNIN] Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
[t2-ceph-test2][WARNIN] on the recovery & transformation menu to examine the two tables.
[t2-ceph-test2][WARNIN]
[t2-ceph-test2][WARNIN] Warning! One or more CRCs don't match. You should repair the disk!
[t2-ceph-test2][WARNIN]
[t2-ceph-test2][DEBUG ] ****************************************************************************
[t2-ceph-test2][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[t2-ceph-test2][DEBUG ] verification and recovery are STRONGLY recommended.
[t2-ceph-test2][DEBUG ] ****************************************************************************
[t2-ceph-test2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[t2-ceph-test2][DEBUG ] other utilities.
[t2-ceph-test2][DEBUG ] Creating new GPT entries.
[t2-ceph-test2][DEBUG ] The operation has completed successfully.
Output for t2-ceph-test3 omitted

Prepare the OSDs. Either a directory or a whole disk can be used; here we use the whole disk /dev/xvdb. There is no need to create a filesystem first: in this step Ceph automatically splits xvdb into a ceph data partition and a ceph journal partition and formats them as XFS, in this case 45 GB for data and 5 GB for the journal, with the journal symlinked from the root of the data directory. When a client talks to the cluster, it first fetches the cluster map from a mon node and then performs I/O directly with the OSDs; a write lands in the journal created here first and is only then applied to the data partition.

$ ceph-deploy osd prepare t2-ceph-test2:/dev/xvdb t2-ceph-test3:/dev/xvdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy osd prepare t2-ceph-test2:/dev/xvdb t2-ceph-test3:/dev/xvdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] disk : [('t2-ceph-test2', '/dev/xvdb', None), ('t2-ceph-test3', '/dev/xvdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1b09248>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x1afbe60>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks t2-ceph-test2:/dev/xvdb: t2-ceph-test3:/dev/xvdb:
[t2-ceph-test2][DEBUG ] connection detected need for sudo
[t2-ceph-test2][DEBUG ] connected to host: t2-ceph-test2
[t2-ceph-test2][DEBUG ] detect platform information from remote host
[t2-ceph-test2][DEBUG ] detect machine type
[t2-ceph-test2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to t2-ceph-test2
[t2-ceph-test2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host t2-ceph-test2 disk /dev/xvdb journal None activate False
[t2-ceph-test2][DEBUG ] find the location of an executable
[t2-ceph-test2][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/xvdb
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] set_type: Will colocate journal with data on /dev/xvdb
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] ptype_tobe_for_name: name = journal
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:e6eabaf6-9e79-4366-a74a-b2883a3768c5 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/xvdb
[t2-ceph-test2][DEBUG ] The operation has completed successfully.
[t2-ceph-test2][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb2 uuid path is /sys/dev/block/202:18/dm/uuid
[t2-ceph-test2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e6eabaf6-9e79-4366-a74a-b2883a3768c5
[t2-ceph-test2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e6eabaf6-9e79-4366-a74a-b2883a3768c5
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] set_data_partition: Creating osd partition on /dev/xvdb
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] ptype_tobe_for_name: name = data
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:8b8d569a-7e60-4977-b9e5-f393216a5e03 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/xvdb
[t2-ceph-test2][DEBUG ] The operation has completed successfully.
[t2-ceph-test2][WARNIN] update_partition: Calling partprobe on created device /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb1 uuid path is /sys/dev/block/202:17/dm/uuid
[t2-ceph-test2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/xvdb1
[t2-ceph-test2][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/xvdb1
[t2-ceph-test2][DEBUG ] meta-data=/dev/xvdb1 isize=2048 agcount=4, agsize=2949055 blks
[t2-ceph-test2][WARNIN] mount: Mounting /dev/xvdb1 on /var/lib/ceph/tmp/mnt.iZSkVJ with options noatime,inode64
[t2-ceph-test2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/xvdb1 /var/lib/ceph/tmp/mnt.iZSkVJ
[t2-ceph-test2][DEBUG ] = crc=0 finobt=0
[t2-ceph-test2][DEBUG ] data = bsize=4096 blocks=11796219, imaxpct=25
[t2-ceph-test2][DEBUG ] = sunit=0 swidth=0 blks
[t2-ceph-test2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
[t2-ceph-test2][DEBUG ] log =internal log bsize=4096 blocks=5759, version=2
[t2-ceph-test2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[t2-ceph-test2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.iZSkVJ
[t2-ceph-test2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.iZSkVJ
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.iZSkVJ/ceph_fsid.2008.tmp
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.iZSkVJ/ceph_fsid.2008.tmp
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.iZSkVJ/fsid.2008.tmp
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.iZSkVJ/fsid.2008.tmp
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.iZSkVJ/magic.2008.tmp
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.iZSkVJ/magic.2008.tmp
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.iZSkVJ/journal_uuid.2008.tmp
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.iZSkVJ/journal_uuid.2008.tmp
[t2-ceph-test2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.iZSkVJ/journal -> /dev/disk/by-partuuid/e6eabaf6-9e79-4366-a74a-b2883a3768c5
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.iZSkVJ
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.iZSkVJ
[t2-ceph-test2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.iZSkVJ
[t2-ceph-test2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.iZSkVJ
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb uuid path is /sys/dev/block/202:16/dm/uuid
[t2-ceph-test2][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/xvdb
[t2-ceph-test2][DEBUG ] Warning: The kernel is still using the old partition table.
[t2-ceph-test2][DEBUG ] The new table will be used at the next reboot.
[t2-ceph-test2][DEBUG ] The operation has completed successfully.
[t2-ceph-test2][WARNIN] update_partition: Calling partprobe on prepared device /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/flock -s /dev/xvdb /sbin/partprobe /dev/xvdb
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match xvdb1
[t2-ceph-test2][INFO ] checking OSD status...
[t2-ceph-test2][DEBUG ] find the location of an executable
[t2-ceph-test2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host t2-ceph-test2 is now ready for osd use.
Output for t2-ceph-test3 omitted

Confirm on t2-ceph-test2 or t2-ceph-test3:

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
xvda 202:0 0 20G 0 disk
├─xvda1 202:1 0 500M 0 part /boot
└─xvda2 202:2 0 19.5G 0 part
 ├─centos-swap 253:0 0 2G 0 lvm [SWAP]
 └─centos-root 253:1 0 17.5G 0 lvm /
xvdb 202:16 0 50G 0 disk
├─xvdb1 202:17 0 45G 0 part /var/lib/ceph/osd/ceph-6
└─xvdb2 202:18 0 5G 0 part

Activate the OSDs. Once activated, they show up in the cluster.

$ ceph-deploy osd activate t2-ceph-test2:/dev/xvdb1 t2-ceph-test3:/dev/xvdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy osd activate t2-ceph-test2:/dev/xvdb1 t2-ceph-test3:/dev/xvdb1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1baf248>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x1ba1e60>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('t2-ceph-test2', '/dev/xvdb1', None), ('t2-ceph-test3', '/dev/xvdb1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks t2-ceph-test2:/dev/xvdb1: t2-ceph-test3:/dev/xvdb1:
[t2-ceph-test2][DEBUG ] connection detected need for sudo
[t2-ceph-test2][DEBUG ] connected to host: t2-ceph-test2
[t2-ceph-test2][DEBUG ] detect platform information from remote host
[t2-ceph-test2][DEBUG ] detect machine type
[t2-ceph-test2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host t2-ceph-test2 disk /dev/xvdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[t2-ceph-test2][DEBUG ] find the location of an executable
[t2-ceph-test2][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/xvdb1
[t2-ceph-test2][WARNIN] main_activate: path = /dev/xvdb1
[t2-ceph-test2][WARNIN] get_dm_uuid: get_dm_uuid /dev/xvdb1 uuid path is /sys/dev/block/202:17/dm/uuid
[t2-ceph-test2][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/xvdb1
[t2-ceph-test2][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/xvdb1
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[t2-ceph-test2][WARNIN] mount: Mounting /dev/xvdb1 on /var/lib/ceph/tmp/mnt.oVUDsv with options noatime,inode64
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/xvdb1 /var/lib/ceph/tmp/mnt.oVUDsv
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.oVUDsv
[t2-ceph-test2][WARNIN] activate: Cluster uuid is c74124f5-21e2-48ed-b723-fb750a9d4e83
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[t2-ceph-test2][WARNIN] activate: Cluster name is ceph
[t2-ceph-test2][WARNIN] activate: OSD uuid is d64b57f4-e007-47c7-8efe-92ab6e7b3011
[t2-ceph-test2][WARNIN] activate: OSD id is 6
[t2-ceph-test2][WARNIN] activate: Marking with init system systemd
[t2-ceph-test2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.oVUDsv/systemd
[t2-ceph-test2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.oVUDsv/systemd
[t2-ceph-test2][WARNIN] activate: ceph osd.6 data dir is ready at /var/lib/ceph/tmp/mnt.oVUDsv
[t2-ceph-test2][WARNIN] mount_activate: ceph osd.6 already mounted in position; unmounting ours.
[t2-ceph-test2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.oVUDsv
[t2-ceph-test2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.oVUDsv
[t2-ceph-test2][WARNIN] start_daemon: Starting ceph osd.6...
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@6
[t2-ceph-test2][WARNIN] Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@6.service.
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@6 --runtime
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@6
[t2-ceph-test2][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@6.service to /usr/lib/systemd/system/ceph-osd@.service.
[t2-ceph-test2][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@6
[t2-ceph-test2][INFO ] checking OSD status...
[t2-ceph-test2][DEBUG ] find the location of an executable
[t2-ceph-test2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[t2-ceph-test2][INFO ] Running command: sudo systemctl enable ceph.target
Output for t2-ceph-test3 omitted

Copy the configuration file and admin keyring to the admin node and the other nodes, so you don't have to specify the monitor address and ceph.client.admin.keyring every time you run the Ceph CLI.

$ ceph-deploy admin t2-ceph-test0 t2-ceph-test1 t2-ceph-test2 t2-ceph-test3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy admin t2-ceph-test0 t2-ceph-test1 t2-ceph-test2 t2-ceph-test3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x16c6c68>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['t2-ceph-test0', 't2-ceph-test1', 't2-ceph-test2', 't2-ceph-test3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f9200c898c0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to t2-ceph-test0
[t2-ceph-test0][DEBUG ] connection detected need for sudo
[t2-ceph-test0][DEBUG ] connected to host: t2-ceph-test0
[t2-ceph-test0][DEBUG ] detect platform information from remote host
[t2-ceph-test0][DEBUG ] detect machine type
[t2-ceph-test0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to t2-ceph-test1
[t2-ceph-test1][DEBUG ] connection detected need for sudo
[t2-ceph-test1][DEBUG ] connected to host: t2-ceph-test1
[t2-ceph-test1][DEBUG ] detect platform information from remote host
[t2-ceph-test1][DEBUG ] detect machine type
[t2-ceph-test1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to t2-ceph-test2
[t2-ceph-test2][DEBUG ] connection detected need for sudo
[t2-ceph-test2][DEBUG ] connected to host: t2-ceph-test2
[t2-ceph-test2][DEBUG ] detect platform information from remote host
[t2-ceph-test2][DEBUG ] detect machine type
[t2-ceph-test2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to t2-ceph-test3
[t2-ceph-test3][DEBUG ] connection detected need for sudo
[t2-ceph-test3][DEBUG ] connected to host: t2-ceph-test3
[t2-ceph-test3][DEBUG ] detect platform information from remote host
[t2-ceph-test3][DEBUG ] detect machine type
[t2-ceph-test3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Make sure the keyring is readable:
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

A newly created cluster has a single default pool named rbd. I deleted it; note that the pool name has to be typed twice and the --yes-i-really-really-mean-it flag is required.

$ ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
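
To confirm the pool is gone (and, after the next step, that the new pools exist), list the pools:

$ ceph osd lspools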

A Ceph filesystem needs at least two pools (RADOS pools): one for data and one for metadata. Two things to consider when configuring them: give the metadata pool a higher replication level, since losing any data in it can make the whole filesystem inaccessible; and put it on low-latency storage (such as SSDs), since metadata latency directly affects observed client operation latency.

$ ceph osd pool create cephfs_data 64
$ ceph osd pool create cephfs_metadata 64
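
The fsmap line in the ceph -s output further down shows an active MDS on t2-ceph-test1 (matching the mds role in the host list above), so an MDS daemon and a filesystem were evidently also created, even though those steps are not shown here. They would look roughly like this (the filesystem name cephfs is assumed):

$ ceph-deploy mds create t2-ceph-test1
$ ceph fs new cephfs cephfs_metadata cephfs_data
$ ceph fs ls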

At this point the cluster has a single mon node; production clusters generally run three, so add t2-ceph-test1 and t2-ceph-test2 as mon nodes as well.

$ ceph-deploy mon create t2-ceph-test1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephuser/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create t2-ceph-test1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1bbef80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['t2-ceph-test1']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x1bb85f0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts t2-ceph-test1
[ceph_deploy.mon][DEBUG ] detecting platform for host t2-ceph-test1 ...
[t2-ceph-test1][DEBUG ] connection detected need for sudo
[t2-ceph-test1][DEBUG ] connected to host: t2-ceph-test1
[t2-ceph-test1][DEBUG ] detect platform information from remote host
[t2-ceph-test1][DEBUG ] detect machine type
[t2-ceph-test1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.2.1511 Core
[t2-ceph-test1][DEBUG ] determining if provided host has same hostname in remote
[t2-ceph-test1][DEBUG ] get remote short hostname
[t2-ceph-test1][DEBUG ] deploying mon to t2-ceph-test1
[t2-ceph-test1][DEBUG ] get remote short hostname
[t2-ceph-test1][DEBUG ] remote hostname: t2-ceph-test1
[t2-ceph-test1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[t2-ceph-test1][DEBUG ] create the mon path if it does not exist
[t2-ceph-test1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-t2-ceph-test1/done
[t2-ceph-test1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-t2-ceph-test1/done
[t2-ceph-test1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-t2-ceph-test1.mon.keyring
[t2-ceph-test1][DEBUG ] create the monitor keyring file
[t2-ceph-test1][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i t2-ceph-test1 --keyring /var/lib/ceph/tmp/ceph-t2-ceph-test1.mon.keyring --setuser 1000 --setgroup 1000
[t2-ceph-test1][DEBUG ] ceph-mon: set fsid to c74124f5-21e2-48ed-b723-fb750a9d4e83
[t2-ceph-test1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-t2-ceph-test1 for mon.t2-ceph-test1
[t2-ceph-test1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-t2-ceph-test1.mon.keyring
[t2-ceph-test1][DEBUG ] create a done file to avoid re-doing the mon deployment
[t2-ceph-test1][DEBUG ] create the init path if it does not exist
[t2-ceph-test1][INFO  ] Running command: sudo systemctl enable ceph.target
[t2-ceph-test1][INFO  ] Running command: sudo systemctl enable ceph-mon@t2-ceph-test1
[t2-ceph-test1][INFO  ] Running command: sudo systemctl start ceph-mon@t2-ceph-test1
[t2-ceph-test1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.t2-ceph-test1.asok mon_status
[t2-ceph-test1][DEBUG ] ********************************************************************************
[t2-ceph-test1][DEBUG ] status for monitor: mon.t2-ceph-test1
[t2-ceph-test1][DEBUG ] {
[t2-ceph-test1][DEBUG ]   "election_epoch": 0,
[t2-ceph-test1][DEBUG ]   "extra_probe_peers": [
[t2-ceph-test1][DEBUG ]     "10.143.248.200:6789/0"
[t2-ceph-test1][DEBUG ]   ],
[t2-ceph-test1][DEBUG ]   "monmap": {
[t2-ceph-test1][DEBUG ]     "created": "2018-05-09 16:55:50.647593",
[t2-ceph-test1][DEBUG ]     "epoch": 1,
[t2-ceph-test1][DEBUG ]     "fsid": "c74124f5-21e2-48ed-b723-fb750a9d4e83",
[t2-ceph-test1][DEBUG ]     "modified": "2018-05-09 16:55:50.647593",
[t2-ceph-test1][DEBUG ]     "mons": [
[t2-ceph-test1][DEBUG ]       {
[t2-ceph-test1][DEBUG ]         "addr": "10.143.248.200:6789/0",
[t2-ceph-test1][DEBUG ]         "name": "t2-ceph-test0",
[t2-ceph-test1][DEBUG ]         "rank": 0
[t2-ceph-test1][DEBUG ]       }
[t2-ceph-test1][DEBUG ]     ]
[t2-ceph-test1][DEBUG ]   },
[t2-ceph-test1][DEBUG ]   "name": "t2-ceph-test1",
[t2-ceph-test1][DEBUG ]   "outside_quorum": [],
[t2-ceph-test1][DEBUG ]   "quorum": [],
[t2-ceph-test1][DEBUG ]   "rank": -1,
[t2-ceph-test1][DEBUG ]   "state": "probing",
[t2-ceph-test1][DEBUG ]   "sync_provider": []
[t2-ceph-test1][DEBUG ] }
[t2-ceph-test1][DEBUG ] ********************************************************************************
[t2-ceph-test1][INFO  ] monitor: mon.t2-ceph-test1 is currently at the state of probing
[t2-ceph-test1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.t2-ceph-test1.asok mon_status
[t2-ceph-test1][WARNIN] t2-ceph-test1 is not defined in `mon initial members`

# Output for t2-ceph-test2 omitted
# (The `mon initial members` warning above is expected: ceph.conf lists only t2-ceph-test0 there, but the new mon still joins the quorum.)

$ ceph -s
    cluster c74124f5-21e2-48ed-b723-fb750a9d4e83
     health HEALTH_OK
     monmap e3: 3 mons at {t2-ceph-test0=10.143.248.200:6789/0,t2-ceph-test1=10.143.248.202:6789/0,t2-ceph-test2=10.143.248.203:6789/0}
            election epoch 8, quorum 0,1,2 t2-ceph-test0,t2-ceph-test1,t2-ceph-test2
      fsmap e5: 1/1/1 up {0=t2-ceph-test1=up:active}
     osdmap e63: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v139: 128 pgs, 2 pools, 2068 bytes data, 20 objects
            71640 kB used, 92043 MB / 92112 MB avail
                 128 active+clean

Add one more OSD by bringing t2-ceph-test1:/dev/xvdb into the cluster too, following the same steps as above on t2-ceph-test0.

$ ceph-deploy disk zap t2-ceph-test1:/dev/xvdb
$ ceph-deploy osd prepare t2-ceph-test1:/dev/xvdb
$ ceph-deploy osd activate t2-ceph-test1:/dev/xvdb1

Check the final state of the cluster:

$ ceph -s
    cluster c74124f5-21e2-48ed-b723-fb750a9d4e83
     health HEALTH_OK
     monmap e3: 3 mons at {t2-ceph-test0=10.143.248.200:6789/0,t2-ceph-test1=10.143.248.202:6789/0,t2-ceph-test2=10.143.248.203:6789/0}
            election epoch 8, quorum 0,1,2 t2-ceph-test0,t2-ceph-test1,t2-ceph-test2
      fsmap e5: 1/1/1 up {0=t2-ceph-test1=up:active}
     osdmap e68: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v151: 128 pgs, 2 pools, 2068 bytes data, 20 objects
            105 MB used, 134 GB / 134 GB avail
                 128 active+clean
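
Besides ceph -s, a few other standard commands are handy for spot checks:

$ ceph health
$ ceph osd tree
$ ceph df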
