k8s HA Cluster Binary Installation and Upgrade

Environment

You can use the deployment method from GitHub; a binary installation is slightly more involved.

Prerequisites (keepalived + haproxy, etc.): refer to the prerequisite steps in the kubeadm HA installation.

CentOS 7, x86_64, public-network hosts, no duplicate hostnames. (keepalived + haproxy)

Three master nodes

  1. imwl-175 66.42.99.175
  2. imwl-219 66.42.110.219
  3. imwl-124 149.248.6.124

Three worker nodes

  1. imwl-53 45.32.21.53
  2. imwl-244 207.148.89.244
  3. imwl-181 45.32.254.181

Files

  1. kubernetes v1.20.5 download page:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#downloads-for-v1205

On that page, search (Ctrl+F) for etcd: the supported version is 3.4.13. (Try to keep the versions matched; alternatively, upgrade k8s first and etcd afterwards.)

https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz
https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
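Assuming the layout used later in this document (binaries in /opt/kube/bin), fetching and staging the two releases might look like this:

```shell
# Download the pinned releases listed above
wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

# Unpack both archives; the k8s server binaries live under kubernetes/server/bin
tar xzf kubernetes-server-linux-amd64.tar.gz
tar xzf etcd-v3.4.13-linux-amd64.tar.gz

# Stage the binaries where the systemd units below expect them
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /opt/kube/bin/
cp etcd-v3.4.13-linux-amd64/{etcd,etcdctl} /opt/kube/bin/
```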

Certificates

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
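A quick sanity check that the cfssl toolchain is installed and on PATH:

```shell
# All three binaries should resolve; fail loudly if any is missing
for bin in cfssl cfssljson cfssl-certinfo; do
  command -v "$bin" >/dev/null || { echo "$bin missing"; exit 1; }
done
cfssl version
```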
  1. Create the certificate config file ca-config.json
[root@dong ssl]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Field descriptions (note: JSON does not allow comments, so keep the file itself comment-free)

ca-config.json: multiple profiles can be defined, each with its own expiry, usages, and other parameters; a specific profile is selected later when signing certificates.

signing: this certificate can be used to sign other certificates; the generated ca.pem carries CA=TRUE.

server auth: a client may use this CA to verify certificates presented by servers.

client auth: a server may use this CA to verify certificates presented by clients.

expiry: validity period (87600h = 10 years).

  2. Create the CA certificate signing request file ca-csr.json
[root@k8s01 ssl]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}

Field descriptions

CN: Common Name. kube-apiserver extracts this field from the certificate as the request's User Name; browsers use it to verify whether a site is legitimate.
O: Organization. kube-apiserver extracts this field as the Group the requesting user belongs to.

  3. Generate the certificate files from ca-csr.json
[root@test-196 catest]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2021/04/06 17:38:51 [INFO] generating a new CA key and certificate from CSR
2021/04/06 17:38:51 [INFO] generate received request
2021/04/06 17:38:51 [INFO] received CSR
2021/04/06 17:38:51 [INFO] generating key: rsa-2048
2021/04/06 17:38:51 [INFO] encoded CSR
2021/04/06 17:38:51 [INFO] signed certificate with serial number 521813437196023415968870323165871682871497777129

[root@test-196 catest]# ll
total 20
-rw-r--r-- 1 root root 473 Apr 6 17:31 ca-config.json
-rw-r--r-- 1 root root 1001 Apr 6 17:38 ca.csr
-rw-r--r-- 1 root root 257 Apr 6 17:38 ca-csr.json
-rw------- 1 root root 1675 Apr 6 17:38 ca-key.pem
-rw-r--r-- 1 root root 1359 Apr 6 17:38 ca.pem
  1. ca.pem : the CA certificate; the root CA that the Kubernetes components use later
  2. ca-key.pem : the CA's private key
  3. ca.csr : the certificate signing request
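The freshly generated CA can be inspected to confirm the 10-year expiry and the CA=TRUE basic constraint:

```shell
# Dump subject, validity and constraints of the new root CA
cfssl-certinfo -cert ca.pem

# Equivalent check with openssl
openssl x509 -in ca.pem -noout -subject -dates
```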

Upgrade

For the exact parameters, refer to the detailed configuration files below.

e.g.:

[root@imwl-53 ~]# systemctl status etcd -l
● etcd.service - Etcd Server
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-04-01 02:19:52 UTC; 3h 47min ago
Docs: https://github.com/coreos
Main PID: 8517 (etcd)
Tasks: 11
Memory: 97.6M
CGroup: /system.slice/etcd.service
└─8517 /opt/kube/bin/etcd --name=etcd1 --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem --peer-cert-file=/etc/etcd/ssl/etcd.pem --peer-key-file=/etc/etcd/ssl/etcd-key.pem --trusted-ca-file=/etc/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem --initial-advertise-peer-urls=https://45.32.21.53:2380 --listen-peer-urls=https://45.32.21.53:2380 --listen-client-urls=https://45.32.21.53:2379,http://127.0.0.1:2379 --advertise-client-urls=https://45.32.21.53:2379 --initial-cluster-token=etcd-cluster-0 --initial-cluster=etcd1=https://45.32.21.53:2380,etcd2=https://207.148.89.244:2380,etcd3=https://45.32.254.181:2380 --initial-cluster-state=new --data-dir=/var/lib/etcd --snapshot-count=50000 --auto-compaction-retention=1 --max-request-bytes=10485760 --auto-compaction-mode=periodic --quota-backend-bytes=8589934592

Key configuration values needed for the backup

CACERT='/etc/kubernetes/ssl/ca.pem'
CERT='/etc/etcd/ssl/etcd.pem'
KEY='/etc/etcd/ssl/etcd-key.pem'
DATA_DIR='/var/lib/etcd'
ETCDCTL_PATH='/opt/kube/bin/'

Upgrade etcd

  1. Back up the etcd data. Every member holds a complete copy of the data, so backing up a single node is enough.

Run etcd_backup.sh:

#!/bin/bash
set -e # exit immediately on any error

# Variables that should not need changing
script_abs=$(readlink -f "$0")        # absolute path of this script
script_dir=$(dirname "$script_abs")   # directory containing this script
script_name=${script_abs##*/}         # file name of this script
cd "$script_dir"                      # work from the script's directory
USER=$(whoami)                        # user running the script

gainAddress(){
    # cache the peer IPs in etcd_ip.txt; prompt on first run
    if [ ! -f "./etcd_ip.txt" ];then
        read -p $'input ADDRESS : \n' ADDRESS
        echo "$ADDRESS" > etcd_ip.txt
    fi
    ADDRESS=$(cat etcd_ip.txt)        # read back the etcd peer IPs
}

CLIENTPORT=2379
PEERPORT=2380
DATA_DIR='/var/lib/etcd'              # note: shell variable names cannot contain '-'
ETCDBIN_DIR='/opt/kube/bin/'
CACERT='/etc/kubernetes/ssl/ca.pem'
CERT='/etc/etcd/ssl/etcd.pem'
KEY='/etc/etcd/ssl/etcd-key.pem'
BACKUP_DIR='/root/backup/etcd'
ENDPOINT='https://45.32.21.53:2379'   # snapshot save must target a single member

SNAPSHOTNAME=$BACKUP_DIR/snapshot-$(date +%Y%m%d-%H%M%S).db

gainAddress

etcdctl version
etcd --version

[ -d "$BACKUP_DIR" ] || mkdir -p "$BACKUP_DIR"
export ETCDCTL_API=3
${ETCDBIN_DIR}etcdctl --endpoints=$ENDPOINT --cacert=$CACERT --cert=$CERT --key=$KEY snapshot save "$SNAPSHOTNAME"

for ip in $ADDRESS;do
    ssh "$ip" "mkdir -p $BACKUP_DIR"
    scp "$SNAPSHOTNAME" "$USER@$ip:$BACKUP_DIR/"
done
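Before proceeding, the snapshot file itself can be verified with etcdctl; the path below matches the script's naming scheme (adjust the timestamp):

```shell
export ETCDCTL_API=3
# Prints hash, revision, total keys and size; a corrupt snapshot errors out
/opt/kube/bin/etcdctl snapshot status /root/backup/etcd/snapshot-20210401-054755.db --write-out=table
```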

  2. Stop etcd on each node in turn, clean out the data directory, and replace the binaries:
systemctl stop etcd
rm -rf /var/lib/etcd # this path is also the service's data-dir
for ip in $ADDRESS;do
    ssh root@$ip "systemctl stop etcd && rm -rf $DATA_DIR"
    scp ./etcd* root@$ip:$ETCDBIN_DIR
done
  3. Restore the data and start etcd.

This step has to be executed on each node by hand; adjust the values as needed.

# $SNAPSHOTNAME: path of the snapshot copied to this node in step 1
# --data-dir: restore straight into the service's data directory (removed in step 2)
etcdctl snapshot restore $SNAPSHOTNAME \
--name etcd1 \
--data-dir /var/lib/etcd \
--initial-cluster etcd1=https://45.32.21.53:2380,etcd2=https://207.148.89.244:2380,etcd3=https://45.32.254.181:2380 \
--initial-advertise-peer-urls https://45.32.21.53:2380
systemctl restart etcd
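After all three members are restored and restarted, cluster health can be checked from any node using the same certificates as the backup step:

```shell
export ETCDCTL_API=3
/opt/kube/bin/etcdctl \
  --endpoints=https://45.32.21.53:2379,https://207.148.89.244:2379,https://45.32.254.181:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health
```

`member list` with the same flags confirms all three peers rejoined, and `endpoint status` additionally shows the new server version.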

etcd notes

With version 3.4.13, snapshot save refuses a multi-node endpoint list, so the endpoint was changed to a single node:

[root@imwl-53 ~]# etcdctl --endpoints=45.32.21.53:2379,45.32.254.181:2379,207.148.89.244:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem  snapshot save /root/backup/etcd//snapshot-20210401-054755.db
Error: snapshot must be requested to one selected node, not multiple [45.32.21.53:2379 45.32.254.181:2379 207.148.89.244:2379]


[root@imwl-53 ~]# etcdctl --endpoints=45.32.21.53:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem snapshot save /root/backup/etcd/snapshot-20210401-054755.db
{"level":"info","ts":1617257504.110205,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/root/backup/etcd/snapshot-20210401-054755.db.part"}
{"level":"info","ts":"2021-04-01T06:11:44.119Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1617257504.1191013,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"45.32.21.53:2379"}
{"level":"info","ts":"2021-04-01T06:11:44.156Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1617257504.1682398,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"45.32.21.53:2379","size":"3.9 MB","took":0.057855651}
{"level":"info","ts":1617257504.1684163,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/root/backup/etcd/snapshot-20210401-054755.db"}
Snapshot saved at /root/backup/etcd/snapshot-20210401-054755.db

Configuration file backup

/etc/systemd/system/kubelet.service (essentially identical on every node)

[root@imwl-124 ~]# cat /etc/systemd/system/kubelet.service   
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpu/podruntime.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuacct/podruntime.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/podruntime.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/podruntime.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/podruntime.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/systemd/podruntime.slice

ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpu/system.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuacct/system.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/systemd/system.slice

ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/podruntime.slice
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice
ExecStart=/opt/kube/bin/kubelet \
--config=/var/lib/kubelet/config.yaml \
--cni-bin-dir=/opt/kube/bin \
--cni-conf-dir=/etc/cni/net.d \
--hostname-override=149.248.6.124 \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--network-plugin=cni \
--pod-infra-container-image=easzlab/pause-amd64:3.2 \
--root-dir=/var/lib/kubelet \
--v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
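Upgrading the kubelet binary itself on a node is not spelled out above; under this layout it could be sketched as follows (node name taken from --hostname-override in the unit file; drain flags per kubectl 1.20):

```shell
NODE=149.248.6.124   # node being upgraded, matches --hostname-override above

# Move workloads off the node first
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# Swap the binary and restart the service
systemctl stop kubelet
cp kubernetes/server/bin/kubelet /opt/kube/bin/kubelet
systemctl start kubelet

# Put the node back into rotation and confirm the new kubelet version
kubectl uncordon "$NODE"
kubectl get node "$NODE" -o wide
```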

/etc/systemd/system/kube-proxy.service (essentially identical on every node)

[root@imwl-124 ~]# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
# kube-proxy uses --cluster-cidr to tell in-cluster traffic from external traffic; when --cluster-cidr or --masquerade-all is set, kube-proxy SNATs requests to Service IPs
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kube/bin/kube-proxy \
--bind-address=149.248.6.124 \
--cluster-cidr=172.20.0.0/16 \
--hostname-override=149.248.6.124 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--logtostderr=true \
--proxy-mode=ipvs \
--metrics-bind-address=149.248.6.124
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
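Since the unit above runs the proxy in ipvs mode, its effect on a node can be observed directly (ipvsadm must be installed; 10256 is kube-proxy's default health port):

```shell
# List the ipvs virtual servers kube-proxy programs for Services
ipvsadm -Ln

# Probe the proxy's built-in health endpoint
curl -s http://127.0.0.1:10256/healthz
```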

/etc/systemd/system/kube-apiserver.service (master nodes)

[root@imwl-124 ~]# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kube/bin/kube-apiserver \
--advertise-address=149.248.6.124 \
--allow-privileged=true \
--anonymous-auth=false \
--api-audiences=api,istio-ca \
--authorization-mode=Node,RBAC \
--token-auth-file=/etc/kubernetes/ssl/basic-auth.csv \
--bind-address=149.248.6.124 \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--endpoint-reconciler-type=lease \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://45.32.21.53:2379,https://207.148.89.244:2379,https://45.32.254.181:2379 \
--kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/admin.pem \
--kubelet-client-key=/etc/kubernetes/ssl/admin-key.pem \
--service-account-issuer=kubernetes.default.svc \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca.pem \
--service-cluster-ip-range=10.68.0.0/16 \
--service-node-port-range=20000-40000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
--requestheader-allowed-names= \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/etc/kubernetes/ssl/aggregator-proxy.pem \
--proxy-client-key-file=/etc/kubernetes/ssl/aggregator-proxy-key.pem \
--enable-aggregator-routing=true \
--v=2
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
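After replacing the kube-apiserver binary, the restarted unit can be probed locally. No --secure-port is set above, so the default 6443 applies; and because anonymous-auth is disabled, a client certificate is needed (admin.pem is reused here since the unit already references it):

```shell
systemctl restart kube-apiserver
systemctl status kube-apiserver --no-pager

# A healthy apiserver answers "ok"
curl -s --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/admin.pem \
     --key /etc/kubernetes/ssl/admin-key.pem \
     https://149.248.6.124:6443/healthz
```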

/etc/systemd/system/kube-controller-manager.service (master nodes)

[root@imwl-124 ~]# cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kube/bin/kube-controller-manager \
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=172.20.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--node-cidr-mask-size=24 \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-cluster-ip-range=10.68.0.0/16 \
--use-service-account-credentials=true \
--v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

/etc/systemd/system/kube-scheduler.service (master nodes)

[root@imwl-124 ~]# cat /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kube/bin/kube-scheduler \
--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

etcd

Node 45.32.21.53

[root@imwl-53 ~]# cat  /etc/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/kube/bin/etcd \
--name=etcd1 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://45.32.21.53:2380 \
--listen-peer-urls=https://45.32.21.53:2380 \
--listen-client-urls=https://45.32.21.53:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://45.32.21.53:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd1=https://45.32.21.53:2380,etcd2=https://207.148.89.244:2380,etcd3=https://45.32.254.181:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd \
--snapshot-count=50000 \
--auto-compaction-retention=1 \
--max-request-bytes=10485760 \
--auto-compaction-mode=periodic \
--quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Node 207.148.89.244

[root@imwl-244 ~]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/kube/bin/etcd \
--name=etcd2 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://207.148.89.244:2380 \
--listen-peer-urls=https://207.148.89.244:2380 \
--listen-client-urls=https://207.148.89.244:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://207.148.89.244:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd1=https://45.32.21.53:2380,etcd2=https://207.148.89.244:2380,etcd3=https://45.32.254.181:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd \
--snapshot-count=50000 \
--auto-compaction-retention=1 \
--max-request-bytes=10485760 \
--auto-compaction-mode=periodic \
--quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Node 45.32.254.181

[root@imwl-181 ~]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/kube/bin/etcd \
--name=etcd3 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://45.32.254.181:2380 \
--listen-peer-urls=https://45.32.254.181:2380 \
--listen-client-urls=https://45.32.254.181:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://45.32.254.181:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd1=https://45.32.21.53:2380,etcd2=https://207.148.89.244:2380,etcd3=https://45.32.254.181:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd \
--snapshot-count=50000 \
--auto-compaction-retention=1 \
--max-request-bytes=10485760 \
--auto-compaction-mode=periodic \
--quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Configuration shared by all nodes

Switch to the directory /etc/etcd/ssl/ and run:

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem -ca-key=/etc/kubernetes/ssl/ca-key.pem -config=/etc/kubernetes/ssl/ca-config.json -profile=kubernetes /etc/etcd/ssl/etcd-csr.json | cfssljson -bare etcd

[root@imwl-181 ssl]# cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "45.32.21.53",
    "207.148.89.244",
    "45.32.254.181",
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "XS",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
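After running the cfssl gencert command above, the resulting certificate pair has to be present on every etcd member; a possible distribution step (run from imwl-181) is:

```shell
cd /etc/etcd/ssl/

# The signing step produced these files
ls etcd.pem etcd-key.pem etcd.csr

# Copy the certificate pair to the other two members
for ip in 45.32.21.53 207.148.89.244; do
  scp etcd.pem etcd-key.pem root@$ip:/etc/etcd/ssl/
done

# Confirm the SANs cover all three peer IPs
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
```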