rook-ceph-tools
Using the ceph command directly on the operating system requires installing extra ceph packages, and on some operating systems even building from source. Instead, rook-ceph-tools can be used to operate ceph.
Default manifest: https://github.com/rook/rook/blob/master/deploy/examples/toolbox.yaml
The file usually needs a few modifications before it is applied; a sketch of the modified file is shown below.
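A rough, trimmed-down sketch of the Deployment follows. The image line is the field most commonly edited (pointed at an internal mirror or pinned to a fixed tag); the image value here is a placeholder, and the fields marked in comments should be taken from the upstream toolbox.yaml rather than from this sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      containers:
        - name: rook-ceph-tools
          # the image is typically what gets changed; the value below is a placeholder
          image: quay.io/ceph/ceph:v17.2.6
          # keep the pod running with an interactive shell
          command: ["/bin/bash"]
          tty: true
          # env, volumeMounts, securityContext, tolerations: keep as in the upstream example

Once adjusted, it can be deployed with kubectl create -f toolbox.yaml; the pod then appears as rook-ceph-tools-... in the rook-ceph namespace.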
Usage
Enter the container and check the cluster status
[root@test-61 ~]# kubectl exec -it -n rook-ceph rook-ceph-tools-84d9889d64-wlm6x -- bash
[root@test-62 /]# ceph -s
  cluster:
    id:     546a216f-2c8e-4a9d-acf4-3041857a127a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2w)
    mgr: a(active, since 4h), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 11d), 3 in (since 2w)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 81 pgs
    objects: 1.68k objects, 5.0 GiB
    usage:   18 GiB used, 882 GiB / 900 GiB avail
    pgs:     81 active+clean

  io:
    client: 1.2 KiB/s rd, 2.0 KiB/s wr, 2 op/s rd, 0 op/s wr
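Besides ceph -s, a few other status commands are often run from the toolbox; these are standard ceph CLI commands, not part of the session captured above:

ceph osd status        # per-OSD up/in state and usage
ceph osd df            # OSD capacity and PG distribution
ceph df                # cluster-wide and per-pool usage
rados df               # per-pool object and space statistics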
Using rbd
[root@test-62 /]# rbd create replicapool/test --size 10
[root@test-62 /]# rbd info replicapool/test
rbd image 'test':
size 10 MiB in 3 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 249d8df433415b
block_name_prefix: rbd_data.249d8df433415b
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Mon Aug 14 02:36:22 2023
access_timestamp: Mon Aug 14 02:36:22 2023
modify_timestamp: Mon Aug 14 02:36:22 2023
[root@test-62 /]# rbd feature disable replicapool/test fast-diff deep-flatten object-map exclusive-lock
[root@test-62 /]# rbd map replicapool/test
/dev/rbd2
[root@test-62 /]# lsblk | grep rbd
rbd0 251:0 0 8G 0 disk
rbd1 251:16 0 8G 0 disk
rbd2 251:32 0 10M 0 disk
[root@test-62 /]# mkfs.ext4 -m0 /dev/rbd2 # format only on first use; skip this for an image that is already in use, e.g. rbd0
mke2fs 1.45.6 (20-Mar-2020)
Suggestion: Use Linux kernel >= 3.18 for improved stability of the metadata and journal checksum features.
Discarding device blocks: done
Creating filesystem with 10240 1k blocks and 2560 inodes
Filesystem UUID: 3e880522-712f-4a5e-ae2e-becc940ee973
Superblock backups stored on blocks:
8193
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
[root@test-62 /]# mkdir /tmp/rook-volume
[root@test-62 /]# mount /dev/rbd2 /tmp/rook-volume
[root@test-62 /]# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 500G 47G 453G 10% /
tmpfs 118G 0 118G 0% /sys/fs/cgroup
devtmpfs 118G 0 118G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/mapper/centos_172--20--50--245-root 295G 58G 238G 20% /usr/lib/modules
/dev/mapper/containerd-testpool 500G 47G 453G 10% /etc/hostname
tmpfs 235G 4.0K 235G 1% /var/lib/rook-ceph-mon
tmpfs 235G 12K 235G 1% /run/secrets/kubernetes.io/serviceaccount
/dev/rbd2 8.7M 172K 8.4M 2% /tmp/rook-volume
[root@test-62 /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
rbd0 251:0 0 8G 0 disk
rbd1 251:16 0 8G 0 disk
rbd2 251:32 0 10M 0 disk /tmp/rook-volume
vda 252:0 0 300G 0 disk
|-vda1 252:1 0 1G 0 part
`-vda2 252:2 0 299G 0 part
|-centos_172--20--50--245-root 253:0 0 295G 0 lvm /dev/termination-log
`-centos_172--20--50--245-swap 253:1 0 4G 0 lvm
vdb 252:16 0 100G 0 disk
vdc 252:32 0 500G 0 disk
`-containerd-testpool 253:4 0 500G 0 lvm /etc/resolv.conf
vdd 252:48 0 1T 0 disk
`-hadoop-testpool 253:3 0 1024G 0 lvm
vde 252:64 0 300G 0 disk
vdf 252:80 0 300G 0 disk
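To cross-check the image and its kernel mapping from inside the toolbox, something along these lines can be used (a hypothetical follow-up, not part of the captured session):

rbd ls replicapool           # list images in the pool
rbd du replicapool/test      # provisioned vs. actually used space of the image
rbd showmapped               # kernel-mapped rbd devices on this node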
Unmounting rbd
[root@test-62 /]# umount /tmp/rook-volume
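After unmounting, the kernel mapping can be released and the test image removed if it is no longer needed; a typical follow-up, not shown in the captured output:

rbd unmap /dev/rbd2          # release the kernel mapping
rbd rm replicapool/test      # delete the test image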
Using cephfs (created in the previous chapter)
mkdir /tmp/registry
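The directory then serves as the mount point. A rough sketch of a kernel-client mount run from inside the toolbox; the monitor address, the filesystem name myfs, and the secret are placeholders that have to be read from the actual cluster (e.g. via ceph mon dump and ceph auth get-key client.admin):

# placeholders: monitor endpoint, admin key, cephfs name
mount -t ceph 10.96.0.10:6789:/ /tmp/registry \
  -o name=admin,secret=AQDexampleKey==,mds_namespace=myfs
df -h /tmp/registry          # verify the mount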
For other usage, refer to the official Ceph website.