Installing a Ceph Cluster with cephadm
Three CentOS 7.4 hosts
Ceph Octopus 15.2.3
Python 3.6
Docker: the latest available version
Note: each OSD disk must be larger than 5 GB
Official documentation:
https://docs.ceph.com/docs/master/cephadm/install/
Preparation:
-
Set the hostname on each Ceph node:
hostnamectl set-hostname ceph1
hostnamectl set-hostname ceph2
hostnamectl set-hostname ceph3
-
Configure /etc/hosts on every node (IP first, then hostname):
10.1.6.90 ceph1
10.1.6.91 ceph2
10.1.6.92 ceph3
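These lines can be appended on each node, for example:
cat >> /etc/hosts <<EOF
10.1.6.90 ceph1
10.1.6.91 ceph2
10.1.6.92 ceph3
EOF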
-
Configure time synchronization on all three hosts:
Install and run the chrony service on all Ceph nodes, especially the monitor nodes, so that clock drift does not cause failures.
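A minimal setup, assuming the stock CentOS 7 chrony package and its default pool servers:
yum -y install chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources #verify that time sources are reachable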
-
Install dependencies:
yum -y install python3 yum-utils
-
Install the Docker service
cephadm runs all Ceph components in containers, so docker or podman must be installed on every node; this guide uses docker.
Configure the Aliyun yum repo:
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Or use the Tsinghua University mirror:
yum-config-manager \
--add-repo \
https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
for i in `tail -n 3 /etc/hosts | awk '{print $1}'`; do ssh $i yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo ;done
for i in `tail -n 3 /etc/hosts | awk '{print $1}'`; do ssh $i "yum clean all && yum makecache" ;done
Install Docker Engine (Community Edition) and containerd
yum install docker-ce docker-ce-cli containerd.io -y
for i in `tail -n 3 /etc/hosts | awk '{print $1}'`; do ssh $i yum install docker-ce docker-ce-cli containerd.io -y ;done
Configure a Docker registry mirror
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
]
}
EOF
Docker's official China mirror:
https://registry.docker-cn.com
NetEase:
http://hub-mirror.c.163.com
University of Science and Technology of China:
https://docker.mirrors.ustc.edu.cn
for i in `tail -n 3 /etc/hosts | awk '{print $1}'`; do scp /etc/docker/daemon.json $i:/etc/docker/daemon.json ;done
Start the docker service
systemctl start docker
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
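To confirm the mirror configuration took effect (the exact label may vary across Docker versions):
docker info | grep -A 3 "Registry Mirrors"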
- Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
Disable SELinux (setenforce 0 takes effect immediately; the sed edit makes the change persist across reboots)
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
- Configure passwordless SSH among the three hosts
ssh-keygen
ssh-copy-id root@ceph1
ssh-copy-id root@ceph2
ssh-copy-id root@ceph3
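A quick check that key-based login works everywhere (each command should print the remote hostname without prompting for a password):
for h in ceph1 ceph2 ceph3; do ssh root@$h hostname ;done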
- Update the yum repos
//Install wget to download the new repo files.
yum install wget -y
//Back up the original repo files
cd /etc/yum.repos.d/
mkdir backup
mv C* backup
//Download the new repo files with wget.
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
//Configure the Ceph repo
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-octopus/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
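The Ceph repo file can be pushed to the other nodes the same way as the docker config:
for i in ceph2 ceph3; do scp /etc/yum.repos.d/ceph.repo $i:/etc/yum.repos.d/ ;done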
If yum fails with "Another app is currently holding the yum lock; waiting for it to exit...", remove the stale lock:
rm -rf /var/run/yum.pid
Install cephadm
wget https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
Or download it directly from GitHub:
curl https://raw.githubusercontent.com/ceph/ceph/v15.2.13/src/cephadm/cephadm -o cephadm
Failing that, paste the script's contents into a file yourself and make it executable: chmod a+x cephadm && cp cephadm /usr/bin/cephadm
cephadm is really just a Python 3 script, so it can be run directly without being installed.
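As a quick sanity check that the script runs (note that this may pull the default ceph container image on first use):
cephadm version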
Bootstrap the cluster
mkdir -p /etc/ceph
cephadm bootstrap --mon-ip 10.1.6.90
--mon-ip is the IP address of the first monitor daemon.
This command does the following:
- Creates a monitor and manager daemon for the new cluster on the local host.
- Generates a new SSH key for the Ceph cluster and adds it to root's /root/.ssh/authorized_keys file.
- Writes a copy of the public key to /etc/ceph/ceph.pub.
- Writes a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
- Writes a copy of the client.admin administrative (privileged) secret key to /etc/ceph/ceph.client.admin.keyring.
- Adds the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.
Note: the bootstrap output includes a generated dashboard password; save it so you can log in to the dashboard later.
......
URL: https://ceph01:8443/
User: admin
Password: soricu721m
INFO:cephadm:You can access the Ceph CLI with:
sudo /usr/sbin/cephadm shell --fsid ff2b5380-b5eb-11ea-85a5-000c29177e91 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
INFO:cephadm:Bootstrap complete.
[root@ceph1 ~]# ll /etc/ceph/
total 12
-rw------- 1 root root 63 Mar 14 23:01 ceph.client.admin.keyring
-rw-r--r-- 1 root root 179 Mar 14 23:01 ceph.conf
-rw-r--r-- 1 root root 595 Mar 14 23:02 ceph.pub
[root@ceph1 ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
ceph/ceph v15 5553b0cb212c 2 months ago 943MB
ceph/ceph-grafana 6.6.2 a0dce381714a 9 months ago 509MB
prom/prometheus v2.18.1 de242295e225 10 months ago 140MB
prom/alertmanager v0.20.0 0881eb8f169f 15 months ago 52.1MB
prom/node-exporter v0.18.1 e5a616e4b9cf 21 months ago 22.9MB
[root@ceph1 ~]# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bef4209bd542 prom/node-exporter:v0.18.1 "/bin/node_exporter …" 2 minutes ago Up 2 minutes ceph-12837782-84d6-11eb-a474-00505622b20c-node-exporter.ceph1
d062d4b9fc13 ceph/ceph-grafana:6.6.2 "/bin/sh -c 'grafana…" 2 minutes ago Up 2 minutes ceph-12837782-84d6-11eb-a474-00505622b20c-grafana.ceph1
3b81b1145e40 prom/alertmanager:v0.20.0 "/bin/alertmanager -…" 2 minutes ago Up 2 minutes ceph-12837782-84d6-11eb-a474-00505622b20c-alertmanager.ceph1
6442a8ae6fe8 prom/prometheus:v2.18.1 "/bin/prometheus --c…" 2 minutes ago Up 2 minutes ceph-12837782-84d6-11eb-a474-00505622b20c-prometheus.ceph1
839e0a93f289 ceph/ceph:v15 "/usr/bin/ceph-crash…" 2 minutes ago Up 2 minutes ceph-12837782-84d6-11eb-a474-00505622b20c-crash.ceph1
fe8d6084aad4 ceph/ceph:v15 "/usr/bin/ceph-mgr -…" 3 minutes ago Up 3 minutes ceph-12837782-84d6-11eb-a474-00505622b20c-mgr.ceph1.usdqnp
180e32d34660 ceph/ceph:v15 "/usr/bin/ceph-mon -…" 3 minutes ago Up 3 minutes ceph-12837782-84d6-11eb-a474-00505622b20c-mon.ceph1
PS: at this point the following components are running:
ceph-mgr: Ceph manager daemon
ceph-monitor: Ceph monitor daemon
ceph-crash: crash-report collection module
prometheus: Prometheus monitoring component
grafana: dashboard for visualizing monitoring data
alertmanager: Prometheus alerting component
node_exporter: Prometheus node metrics collector
Enable the ceph CLI
By default the host itself does not support basic ceph commands; you must run cephadm shell to enter a dedicated shell (leave it with exit).
Enter the shell:
cephadm shell
[root@ceph1 ~]# cephadm shell
Inferring fsid 12837782-84d6-11eb-a474-00505622b20c
Inferring config /var/lib/ceph/12837782-84d6-11eb-a474-00505622b20c/mon.ceph1/config
Using recent ceph image ceph/ceph@sha256:37939a3739e4e037dcf1b1f5828058d721d8c6de958212609f9e7d920b9c62bf
[ceph: root@ceph1 /]# ceph -s
cluster:
id: 12837782-84d6-11eb-a474-00505622b20c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum ceph1 (age 42m)
mgr: ceph1.usdqnp(active, since 41m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
[ceph: root@ceph1 /]# exit
exit
[root@ceph1 ~]#
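You can also run a single command without staying inside the shell, for example:
cephadm shell -- ceph -s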
Install the ceph-common package so the host itself supports basic ceph commands: cephadm install ceph-common
[root@ceph1 ~]# cephadm install ceph-common
Installing packages ['ceph-common']...
[root@ceph1 ~]# ceph -s
cluster:
id: 12837782-84d6-11eb-a474-00505622b20c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum ceph1 (age 48m)
mgr: ceph1.usdqnp(active, since 47m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
Add hosts to the cluster
Distribute the cluster's public SSH key to the other Ceph nodes: ssh-copy-id -f -i /etc/ceph/ceph.pub root@Hostname
[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph2 (192.168.80.246)' can't be established.
ECDSA key fingerprint is SHA256:2Eo2WLWyofiltEAs4nLUFLOcXLFD6YvsuPSDlEDUZGk.
ECDSA key fingerprint is MD5:3c:b0:5f:a8:af:6a:15:45:eb:a9:2a:b0:20:21:65:04.
Are you sure you want to continue connecting (yes/no)? yes
root@ceph2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph2'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
The authenticity of host 'ceph3 (192.168.80.247)' can't be established.
ECDSA key fingerprint is SHA256:2Eo2WLWyofiltEAs4nLUFLOcXLFD6YvsuPSDlEDUZGk.
ECDSA key fingerprint is MD5:3c:b0:5f:a8:af:6a:15:45:eb:a9:2a:b0:20:21:65:04.
Are you sure you want to continue connecting (yes/no)? yes
root@ceph3's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@ceph3'"
and check to make sure that only the key(s) you wanted were added.
Add the nodes to the cluster:
[root@ceph1 ~]# ceph orch host add ceph2
Added host 'ceph2'
[root@ceph1 ~]# ceph orch host add ceph3
Added host 'ceph3'
[root@ceph1 ~]# ceph orch host ls #verify: list all hosts managed by Ceph
HOST ADDR LABELS STATUS
ceph1 ceph1
ceph2 ceph2
ceph3 ceph3
[root@ceph1 ~]# ceph -s
cluster:
id: 12837782-84d6-11eb-a474-00505622b20c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services: #note that, counting the first node, three mon daemons and two mgr daemons have already been deployed
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 50s)
mgr: ceph1.usdqnp(active, since 60m), standbys: ceph2.qopzlo
osd: 0 osds: 0 up, 0 in
task status:
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
Once the hosts are added, Ceph automatically extends the monitor and manager daemons to the other nodes (this may take a while; be patient). You can check progress with ceph -s or on the Ceph Dashboard.
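If you prefer to pin the monitors to specific hosts instead of relying on automatic placement, ceph orch accepts the same --placement syntax used for the mds and rgw deployments later in this guide (a sketch):
ceph orch apply mon --placement="3 ceph1 ceph2 ceph3"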
Deploy OSDs
Run lsblk on each node to confirm the name of the device that will back its OSD.
[root@ceph1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part /
sdb 8:16 0 100G 0 disk
[root@ceph2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part /
sdb 8:16 0 100G 0 disk
[root@ceph3 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part /
sdb 8:16 0 100G 0 disk
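cephadm itself can also report which devices it considers eligible for OSDs:
ceph orch device ls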
Once confirmed, add each device with: ceph orch daemon add osd Hostname:/dev/sdx
[root@ceph1 ~]# ceph orch daemon add osd ceph1:/dev/sdb
Created osd(s) 0 on host 'ceph1'
[root@ceph1 ~]# ceph orch daemon add osd ceph2:/dev/sdb
Created osd(s) 1 on host 'ceph2'
[root@ceph1 ~]# ceph orch daemon add osd ceph3:/dev/sdb
Created osd(s) 2 on host 'ceph3'
[root@ceph1 ~]# ceph -s
cluster:
id: 12837782-84d6-11eb-a474-00505622b20c
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 6m)
mgr: ceph2.qopzlo(active, since 6m), standbys: ceph1.usdqnp
osd: 3 osds: 3 up (since 11s), 3 in (since 11s) #shortly afterwards the newly added OSDs all show up and in, meaning they took effect correctly
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 297 GiB / 300 GiB avail
pgs: 1 active+clean
Check the cluster health: ceph -s #health showing HEALTH_OK means the base Ceph deployment is complete
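The three add commands can equally be scripted (a sketch, assuming every node contributes /dev/sdb):
for host in ceph1 ceph2 ceph3; do ceph orch daemon add osd $host:/dev/sdb ;done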
Deploy MDS to provide CephFS
# Create a pool to hold CephFS data
[root@ceph1 ~]# ceph osd pool create cephfs_data 64 64
pool 'cephfs_data' created
# Create a pool to hold CephFS metadata
[root@ceph1 ~]# ceph osd pool create cephfs_metadata 32 32
pool 'cephfs_metadata' created
# Create the CephFS filesystem, with cephfs_metadata storing metadata and cephfs_data storing the actual data
[root@ceph1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
# List CephFS filesystems
[root@ceph1 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
# Deploy MDS daemons on ceph1, ceph2, ceph3
[root@ceph1 ~]# ceph orch apply mds cephfs --placement="3 ceph1 ceph2 ceph3"
Scheduled mds.cephfs update...
# Check whether the MDS daemons started
[root@ceph1 ~]# ceph orch ps --daemon-type mds
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
mds.cephfs.ceph1.lmrpri ceph01 running (4m) 4m ago 4m 15.2.3 docker.io/ceph/ceph:v15 d72755c420bc bcbc19c43089
mds.cephfs.ceph2.ggapal ceph02 running (4m) 4m ago 4m 15.2.3 docker.io/ceph/ceph:v15 d72755c420bc 987aa0d80bd9
mds.cephfs.ceph3.eelzdg ceph03 running (4m) 4m ago 4m 15.2.3 docker.io/ceph/ceph:v15 d72755c420bc f3e2de2d5817
# List all pools in the cluster
[root@ceph1 ~]# ceph osd lspools
1 device_health_metrics
2 cephfs_data
3 cephfs_metadata
Deploy RGW
# Create a realm
[root@ceph1 ~]# radosgw-admin realm create --rgw-realm=rgw01 --default
{
"id": "e8f17788-0adf-47fd-a920-4d3be7faf604",
"name": "rgw01",
"current_period": "436ef150-e9ef-4748-83e9-b572a8812465",
"epoch": 1
}
# Create a zonegroup
[root@ceph1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
{
"id": "aa0c224d-5894-40f9-88ce-6d65fef851d1",
"name": "default",
"api_name": "default",
"is_master": "true",
"endpoints": [],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "",
"zones": [],
"placement_targets": [],
"default_placement": "",
"realm_id": "e8f17788-0adf-47fd-a920-4d3be7faf604",
"sync_policy": {
"groups": []
}
}
# Create a zone
[root@ceph1 ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-hangzho --master --default
{
"id": "e0cd3501-b5bc-4015-832e-6b6b02708c02",
"name": "cn-hangzho",
"domain_root": "cn-hangzho.rgw.meta:root",
"control_pool": "cn-hangzho.rgw.control",
"gc_pool": "cn-hangzho.rgw.log:gc",
"lc_pool": "cn-hangzho.rgw.log:lc",
"log_pool": "cn-hangzho.rgw.log",
"intent_log_pool": "cn-hangzho.rgw.log:intent",
"usage_log_pool": "cn-hangzho.rgw.log:usage",
"roles_pool": "cn-hangzho.rgw.meta:roles",
"reshard_pool": "cn-hangzho.rgw.log:reshard",
"user_keys_pool": "cn-hangzho.rgw.meta:users.keys",
"user_email_pool": "cn-hangzho.rgw.meta:users.email",
"user_swift_pool": "cn-hangzho.rgw.meta:users.swift",
"user_uid_pool": "cn-hangzho.rgw.meta:users.uid",
"otp_pool": "cn-hangzho.rgw.otp",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "cn-hangzho.rgw.buckets.index",
"storage_classes": {
"STANDARD": {
"data_pool": "cn-hangzho.rgw.buckets.data"
}
},
"data_extra_pool": "cn-hangzho.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"realm_id": "e8f17788-0adf-47fd-a920-4d3be7faf604"
}
# Deploy RGW daemons on ceph1, ceph2, ceph3
[root@ceph1 ~]# ceph orch apply rgw rgw01 cn-hangzho --placement="3 ceph1 ceph2 ceph3"
Scheduled rgw.rgw01.cn-hangzho update...
# Check whether the RGW daemons started on each node
[root@ceph1 ~]# ceph orch ps --daemon-type rgw
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
rgw.rgw01.cn-hangzho.ceph1.uaglvv ceph1 running (25s) 23s ago 25s 15.2.3 docker.io/ceph/ceph:v15 d72755c420bc d7e90e8d4d1b
rgw.rgw01.cn-hangzho.ceph2.szkeqj ceph2 running (27s) 21s ago 27s 15.2.3 docker.io/ceph/ceph:v15 d72755c420bc 95171c8f0e4b
rgw.rgw01.cn-hangzho.ceph3.zphone ceph3 running (26s) 20s ago 26s 15.2.3 docker.io/ceph/ceph:v15 d72755c420bc 19804d391100
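To actually exercise the gateway, the usual next step is to create an S3 user; the uid and display name below are arbitrary examples:
radosgw-admin user create --uid=testuser --display-name="Test User"
The output includes an access_key and secret_key that S3 clients can use against the RGW endpoints.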
Test RBD
# Create a pool for RBD
[root@ceph1 ~]# ceph osd pool create rbd 16
# Enable the rbd application on the pool
[root@ceph1 ~]# ceph osd pool application enable rbd rbd
# Create an RBD image, 10 GB in size
[root@ceph1 ~]# rbd create rbd1 --size 10240
# Show the image's details
[root@ceph1 ~]# rbd --image rbd1 info
rbd image 'rbd1':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 39e7c066a17a
block_name_prefix: rbd_data.39e7c066a17a
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Wed Jun 24 16:52:53 2020
access_timestamp: Wed Jun 24 16:52:53 2020
modify_timestamp: Wed Jun 24 16:52:53 2020
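# The next two commands lower the CRUSH tunables profile to hammer and recompute CRUSH weights, presumably for compatibility with older kernel RBD clients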
[root@ceph1 ~]# ceph osd crush tunables hammer
adjusted tunables profile to hammer
[root@ceph1 ~]# ceph osd crush reweight-all
reweighted crush hierarchy
# Disable the features that the kernel client does not support by default
[root@ceph1 ~]# rbd feature disable rbd1 exclusive-lock object-map fast-diff deep-flatten
# Verify that the features are disabled
[root@ceph1 ~]# rbd --image rbd1 info | grep features
features: layering
op_features:
# Map the image to the client (run on whichever client will mount it)
[root@ceph1 ~]# rbd map --image rbd1
/dev/rbd0
# Show the mappings
[root@ceph1 ~]# rbd showmapped
id pool namespace image snap device
0 rbd rbd1 - /dev/rbd0
# Format the device
[root@ceph1 ~]# mkfs.xfs /dev/rbd0
# Create a mount point and mount the RBD device there
[root@ceph1 ~]# mkdir /mnt/rbd
[root@ceph1 ~]# mount /dev/rbd0 /mnt/rbd/
# Verify the mount
[root@ceph1 ~]# df -hl | grep rbd
/dev/rbd0 10G 33M 10G 1% /mnt/rbd
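When the test is finished, the mapping can be torn down again:
umount /mnt/rbd
rbd unmap /dev/rbd0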
Test CephFS
# Create a user for client access to CephFS
[root@ceph1 ~]# ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r, allow rw path=/' osd 'allow rw pool=cephfs_data' -o ceph.client.cephfs.keyring
# Get the user's key
[root@ceph1 ~]# ceph auth get-key client.cephfs
AQAyGfNeTXKzDhAAkQPKLA72/Rriy9qpczzp8A==
# Create a mount point and mount CephFS there. This is the kernel-driver mount method; the filesystem can also be exported as an NFS server via NFS Ganesha
[root@ceph1 ~]# mkdir /mnt/cephfs/
[root@ceph1 ~]# mount -t ceph ceph1:/ /mnt/cephfs/ -o name=cephfs,secret=AQAyGfNeTXKzDhAAkQPKLA72/Rriy9qpczzp8A==
# Verify the mount
[root@ceph1 ~]# mount | grep cephfs
10.1.6.90:/ on /mnt/cephfs type ceph (rw,relatime,name=cephfs,secret=<hidden>,acl)
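To keep the key off the command line (and out of shell history), mount.ceph also accepts a secretfile option; a sketch, assuming the key is saved to /etc/ceph/cephfs.secret:
echo "AQAyGfNeTXKzDhAAkQPKLA72/Rriy9qpczzp8A==" > /etc/ceph/cephfs.secret
mount -t ceph ceph1:/ /mnt/cephfs/ -o name=cephfs,secretfile=/etc/ceph/cephfs.secret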