Installing Kubernetes v1.14.3 with kubeadm

#kubernetes

Change the validity period of the certificates that kubeadm generates. The following uses v1.14.3 as an example; kubeadm must already be installed.

# Check out the kubernetes source code
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
# Switch to the corresponding release (the v1.14.3 tag)
git checkout -b release-1.14.3 v1.14.3
# Edit the validity settings in the certificate generation code
vi staging/src/k8s.io/client-go/util/cert/cert.go
vi cmd/kubeadm/app/util/pkiutil/pki_helpers.go
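
For reference, the edit usually boils down to multiplying the one-year constants in those files. A minimal sketch, assuming the v1.14 tree still names them duration365d (CA lifetime in cert.go) and CertificateValidity (leaf-certificate lifetime referenced by pki_helpers.go); confirm the exact lines in your own checkout first:

# Locate the validity constants (the identifier names below are assumptions, verify them)
grep -n "duration365d" staging/src/k8s.io/client-go/util/cert/cert.go
grep -n "CertificateValidity" cmd/kubeadm/app/util/pkiutil/pki_helpers.go cmd/kubeadm/app/constants/constants.go
# Then, for example, change "duration365d * 10" to "duration365d * 100" for the CA,
# and "time.Hour * 24 * 365" to "time.Hour * 24 * 365 * 100" for the leaf certificates,
# giving roughly 100-year validity.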

Install the Go toolchain (1.13.3 here), which is used to rebuild kubeadm.

wget https://dl.google.com/go/go1.13.3.linux-amd64.tar.gz
tar -zxf go1.13.3.linux-amd64.tar.gz -C /usr/local/
export PATH=$PATH:/usr/local/go/bin
# Return to the kubernetes source directory
cd kubernetes
# Rebuild kubeadm from the modified source
make WHAT=cmd/kubeadm

Back up the old kubeadm binary, then replace it with the newly compiled one, either by copying it or via a symlink.

mv /usr/bin/kubeadm /usr/bin/kubeadm_backup
ln -s /usr/src/kubernetes/_output/bin/kubeadm /usr/bin/kubeadm
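
A quick check that the replacement took effect (optional):

ls -l /usr/bin/kubeadm		# should point at the rebuilt binary under _output/bin
kubeadm version -o short	# should still report the 1.14.3 version (possibly with a -dirty suffix because of the local edits)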

For v1.23 the Go version must also be updated accordingly, and the files to edit are:

vi staging/src/k8s.io/client-go/util/cert/cert.go
vi ./cmd/kubeadm/app/constants/constants.go

Rebuild:

yum install -y gcc

cd kubernetes
make all WHAT=cmd/kubeadm GOFLAGS=-v
mv /usr/bin/kubeadm /usr/bin/kubeadm_backup
ln -s /usr/src/kubernetes/_output/bin/kubeadm /usr/bin/kubeadm
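
After initializing a cluster with the rebuilt kubeadm, the extended validity can be verified on the generated certificates (paths assume the default kubeadm layout):

openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -enddate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate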

1 Master + 1 etcd + 2 Node installation


Kubernetes version: v1.14.3

1. Perform the following on all nodes

swapoff -a		# Temporarily disable the swap partition
vi /etc/fstab 	# Edit fstab to stop the swap partition from being mounted at boot
setenforce 0	# Temporarily disable SELinux (permissive mode)
vi /etc/selinux/config	# Edit the SELinux config so it stays disabled after reboot
vi /etc/hosts	# Add the hostnames and IPs of every server in the cluster to the hosts file
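
The same edits can be made non-interactively; a sketch assuming a standard CentOS 7 layout, with the host/IP pairs taken from the examples used later in this article:

sed -i '/ swap / s/^/#/' /etc/fstab				# comment out the swap mount
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
cat >> /etc/hosts << EOF
172.23.210.22 test-01
172.23.210.23 test-02
172.23.210.24 test-03
EOF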

Add the yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Install docker, kubectl, kubeadm, and kubelet

yum remove docker docker-common docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce kubelet kubeadm kubectl --disableexcludes=kubernetes

Configure a Docker registry mirror so images can be pulled from inside China

echo '{"registry-mirrors":["https://registry.docker-cn.com"]}' > /etc/docker/daemon.json

Enable and start the services

systemctl enable kubelet 
systemctl enable docker
systemctl start kubelet
systemctl start docker

Master node

Create the master node initialization config file

vi init-config.yaml
	apiVersion: kubeadm.k8s.io/v1beta1
	imageRepository: registry.aliyuncs.com/google_containers	# Aliyun mirror of the official k8s images
	kind: ClusterConfiguration
	kubernetesVersion: v1.14.0			# k8s version to deploy
	networking:
	  podSubnet: "172.7.0.0/16"			# pod network CIDR
	  serviceSubnet: "10.10.0.0/16"		# service network CIDR

Pre-pull the images the cluster needs; run this on every node

kubeadm config images pull --config=init-config.yaml 

Letting iptables see bridged traffic

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
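
If these sysctls are rejected because the bridge keys do not exist yet, the br_netfilter kernel module usually has to be loaded first (a common prerequisite not listed in the original steps):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf	# load it on boot as well
sysctl --system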

Initialize the master node

kubeadm init --config=init-config.yaml	# Initialize the master node

Note the lines printed at the end of the initialization output; run the following on the master node to set up kubectl access:

mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# The join command below is run on the NODE machines; save it for now. Join the nodes only after the network plugin has been installed.
kubeadm join 172.23.210.22:6443 --token 6hwq85.kurhalhv2h8ucm39 \
	--discovery-token-ca-cert-hash sha256:fde510e51caa21146b36c64ab4b4edc7e87ff6f35cea4db079503849188b0f00 
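
If the token is lost or has expired by the time the nodes are joined, a fresh join command can be printed on the master at any time:

kubeadm token create --print-join-command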

Install the flannel network plugin on the master node:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install the plugin before joining any nodes: wait until kubectl get pods --all-namespaces shows the coredns-* pods as Running before adding nodes to the cluster; otherwise the coredns-* containers will be stuck in a CrashLoopBackOff loop.

Another commonly used network plugin is calico; choose whichever you prefer.

curl https://docs.projectcalico.org/v3.7/manifests/calico.yaml -O
kubectl apply -f calico.yaml

NODE machines

vi join-config.yaml
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: JoinConfiguration
	discovery:
	  bootstrapToken:
		apiServerEndpoint: 172.23.210.22:6443
		token: 6hwq85.kurhalhv2h8ucm39	# the token generated when the master node finished initializing
		unsafeSkipCAVerification: true
	  tlsBootstrapToken: 6hwq85.kurhalhv2h8ucm39 # same as above

Join NODE 1 and NODE 2 to the cluster

kubeadm join --config=join-config.yaml

To list all nodes in the cluster, run the following on the master node:

kubectl get nodes

Removing a node (test-03 in this example): first drain all pods from the node to be removed. On the master node:

kubectl drain test-03 --delete-local-data --ignore-daemonsets	# Drain all pods from test-03; --ignore-daemonsets skips the DaemonSet-managed component pods (see the note below), which cannot be evicted.
kubectl delete node test-03	# Remove the test-03 node

The removed node then needs its old cluster state cleared. On test-03:

kubeadm reset	# Run on test-03 to wipe the previous cluster state
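
kubeadm reset does not clean everything; as its own output points out, CNI configuration and iptables rules may need to be removed manually (adjust to the network plugin in use):

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X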

Note: DaemonSet-managed component pods such as the flannel network plugin cannot be evicted; pass --ignore-daemonsets to skip them, otherwise the drain fails with:

error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-flannel-ds-amd64-7x8jb, kube-system/kube-proxy-4vw7m

3 Master + 2 Node installation (adding more masters works the same way, just repeat the master join step; the same goes for NODE machines)


Kubernetes version: v1.14.3

1. Perform the following on all nodes

swapoff -a		# Temporarily disable the swap partition
vi /etc/fstab 	# Edit fstab to stop the swap partition from being mounted at boot
setenforce 0	# Temporarily disable SELinux (permissive mode)
vi /etc/selinux/config	# Edit the SELinux config so it stays disabled after reboot
vi /etc/hosts	# Add the hostnames and IPs of every server in the cluster to the hosts file

Add the yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Install docker, kubectl, kubeadm, and kubelet

yum remove docker docker-common docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce kubelet kubeadm kubectl --disableexcludes=kubernetes

Configure a Docker registry mirror so images can be pulled from inside China

echo '{"registry-mirrors":["https://registry.docker-cn.com"]}' > /etc/docker/daemon.json

Enable and start the services

systemctl enable kubelet 
systemctl enable docker
systemctl start kubelet
systemctl start docker

2. For high availability, also install haproxy+keepalived (or nginx+keepalived) to provide load balancing and failover.

The demonstration below uses haproxy+keepalived. The haproxy configuration is identical on every master; the keepalived configuration is mostly the same, apart from the settings that distinguish the master from the backups.

Install haproxy and keepalived

yum install -y haproxy keepalived

haproxy configuration

vi /etc/haproxy/haproxy.cfg
	listen  stats   0.0.0.0:12345                   # haproxy status port; open it in a browser to view the load-balancing status
		mode                    http
		log                     global
		maxconn                 10
		stats                   enable
		stats                   hide-version
		stats                   refresh 30s
		stats                   show-node
		stats                   auth admin:p@ssw0rd # credentials for the status page
		stats                   uri /stats          # URL path of the status page; note the directive is uri, not url

	frontend    kube-api-https
		bind    0.0.0.0:12567						# externally exposed kube-apiserver port
		mode    tcp
		default_backend kube-api-server

	backend kube-api-server
		balance roundrobin
		mode    tcp
		server  test-01   172.23.210.22:6443 check	# master1
		server  test-02   172.23.210.23:6443 check	# master2
		server  test-03   172.23.210.24:6443 check	# master3

keepalived configuration

vi /etc/keepalived/keepalived.conf
	global_defs {
	   router_id test-01                # server identifier; must be different on each of the three servers
	}

	vrrp_script chk_ha {                # haproxy health-check script
		script "/root/check_haproxy.sh" # script path
		interval 2                      # check interval in seconds
	}
	vrrp_instance VI_1 {
		state SLAVE                     # role, MASTER or SLAVE. When the master fails, the backup takes over the VIP; once the master recovers it normally has the higher priority and reclaims the VIP. Alternatively, set both nodes to SLAVE with the same priority so a restarted master does not preempt the VIP and traffic only moves when the node currently serving actually fails.
		interface ens192				# network interface, check with ip ad
		virtual_router_id 51
		priority 99                     # priority; the higher value wins. Here the backup is set to 99 and the master to 100.
		advert_int 1
		authentication {
			auth_type PASS
			auth_pass 1111
		}
		virtual_ipaddress {
			172.23.210.26				# virtual IP, floats between the servers
		}
		 track_script {                 # reference the health-check script defined above
			chk_ha
		}
	}

Contents of check_haproxy.sh: if the haproxy process is not found, try to restart it; if it is still missing after the restart, kill keepalived so the backup node takes over. Make sure the script is executable.

cat > check_haproxy.sh << 'EOF'
#!/bin/bash
# If haproxy is not running, try to restart it; if it is still not running
# after the restart, stop keepalived so the VIP fails over to the backup server.
run=$(ps -C haproxy --no-header | wc -l)
if [ "$run" -eq 0 ]
then
	systemctl restart haproxy
	sleep 2
	run=$(ps -C haproxy --no-header | wc -l)	# re-check after the restart attempt
	if [ "$run" -eq 0 ]
	then
		killall keepalived
	fi
fi
EOF
chmod +x check_haproxy.sh	# keepalived calls the script at /root/check_haproxy.sh

Enable and start the services

systemctl enable haproxy
systemctl enable keepalived
systemctl start haproxy
systemctl start keepalived
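
With both services running, a quick sanity check (interface name, VIP, port, and credentials are the ones from the example configs above):

ip addr show ens192 | grep 172.23.210.26			# the VIP should sit on exactly one master at a time
curl -su admin:p@ssw0rd http://127.0.0.1:12345/stats | head -n 5	# the haproxy status page should answer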

Letting iptables see bridged traffic

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

3. Initialize the masters. Before initializing, to keep the certificates from expiring after one year, refer to the section at the top on changing the generated certificate lifetime.

On master1, create the initialization config file

vi kubeadm-config.yaml
	apiVersion: kubeadm.k8s.io/v1beta1
	imageRepository: registry.aliyuncs.com/google_containers	# use the Aliyun mirror in case the official images cannot be downloaded
	kind: ClusterConfiguration
	kubernetesVersion: v1.14.0			# k8s version, used to pull the matching images
	controlPlaneEndpoint: "172.23.210.26:12567"	# kube-apiserver endpoint; haproxy load balances in front, so this is the keepalived virtual IP plus the haproxy port 12567
	networking:
	  podSubnet: "172.7.0.0/16"		# pod network CIDR
	  serviceSubnet: "10.10.0.0/16"	# service network CIDR

Pull the k8s images

kubeadm config images pull --config=kubeadm-config.yaml 
	[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.14.0
	[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0
	[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.14.0
	[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.14.0
	[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
	[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.10
	[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.3.1

Initialize master1. The --experimental-upload-certs flag is specific to HA deployments: it uploads the certificates that must be shared between control-plane nodes into the cluster, stores them as a Secret, and encrypts them with a certificate key. The Secret expires after two hours and can be regenerated later. This flag requires k8s 1.14 or newer.

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs
	.....
	Your Kubernetes control-plane has initialized successfully!

	To start using your cluster, you need to run the following as a regular user:

	  mkdir -p $HOME/.kube
	  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	  sudo chown $(id -u):$(id -g) $HOME/.kube/config

	You should now deploy a pod network to the cluster.
	Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	  https://kubernetes.io/docs/concepts/cluster-administration/addons/

	You can now join any number of the control-plane node running the following command on each as root:
	# The following command is used to join master2 to the cluster
	  kubeadm join 172.23.210.26:12567 --token tfm6qd.ibzoeaorwnyqlcow \
		--discovery-token-ca-cert-hash sha256:af32363c4993ca00b7322876cac036bc21816efc1ed0de2a662d606451c60cce \
		--experimental-control-plane --certificate-key f6599b60661b3b5c2dbd17fd6487e50eba0eb13d7926b8c55a1a1bd7f869f7f9

	Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
	As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use 
	"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

	Then you can join any number of worker nodes by running the following on each as root:
	# The following command is used to join worker nodes to the cluster
	kubeadm join 172.23.210.26:12567 --token tfm6qd.ibzoeaorwnyqlcow \
		--discovery-token-ca-cert-hash sha256:af32363c4993ca00b7322876cac036bc21816efc1ed0de2a662d606451c60cce 

To start using kubectl, run the following first; as a regular user, prefix the last two commands with sudo.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Install the flannel network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Wait until the network plugin has finished installing and its containers are running before moving on to master2; otherwise, as shown below, coredns gets stuck in a CrashLoopBackOff loop and never starts.

kubectl get po,no --all-namespaces
NAMESPACE     NAME                                  READY   STATUS             RESTARTS   AGE
kube-system   pod/coredns-6897bd7b5-7mcnm           0/1     CrashLoopBackOff   1          8m4s
kube-system   pod/coredns-6897bd7b5-g8bqx           0/1     CrashLoopBackOff   1          8m4s
......

Another commonly used network plugin is calico; choose whichever you prefer.

curl https://docs.projectcalico.org/v3.7/manifests/calico.yaml -O
kubectl apply -f calico.yaml

The healthy state looks like this:

kubectl get po,no --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-6897bd7b5-4zpnz       1/1     Running   0          30s
kube-system   pod/coredns-6897bd7b5-gdf54       1/1     Running   0          30s
......

master2 and master3: on K8S versions before 1.15, wait until all networking and coredns pods on MASTER1 are running normally before joining master2 and master3.

Copy the command printed when master1 finished initializing and run it on master2 to join it to the cluster.

kubeadm join 172.23.210.26:12567 --token tfm6qd.ibzoeaorwnyqlcow \
	--discovery-token-ca-cert-hash sha256:af32363c4993ca00b7322876cac036bc21816efc1ed0de2a662d606451c60cce \
	--experimental-control-plane --certificate-key f6599b60661b3b5c2dbd17fd6487e50eba0eb13d7926b8c55a1a1bd7f869f7f9

Regarding the warning "[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/":

Edit the docker systemd unit file and append --exec-opt native.cgroupdriver=systemd to the "ExecStart" line:

#vi /usr/lib/systemd/system/docker.service
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd

# Or do the replacement in one go with the sed command below
sed -i "s/ExecStart=\/usr\/bin\/dockerd -H fd:\/\/ --containerd=\/run\/containerd\/containerd.sock/ExecStart=\/usr\/bin\/dockerd -H fd:\/\/ --containerd=\/run\/containerd\/containerd.sock --exec-opt native.cgroupdriver=systemd/" /usr/lib/systemd/system/docker.service

Then restart the service

systemctl daemon-reload
systemctl restart docker

docker info should now show "Cgroup Driver: systemd" instead of "Cgroup Driver: cgroupfs".
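
An alternative to editing the unit file is to set the driver in /etc/docker/daemon.json. Note that this file was already created earlier for the registry mirror, so keep both keys; a sketch, not from the original text:

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker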

The following message indicates the join succeeded:

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

If you also want to use kubectl on master2, run the same three commands above. Joining NODE machines works exactly as before, so it is not repeated here. Below is the final state once every cluster component has started.

[root@TEST-01 ~]# kubectl get po,no -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
pod/coredns-68b57596f6-bdv45          1/1     Running   0          3m3s
pod/coredns-68b57596f6-rn8pm          1/1     Running   0          3m3s
pod/etcd-test-01                      1/1     Running   0          2m4s
pod/etcd-test-02                      1/1     Running   0          2m4s
pod/etcd-test-03                      1/1     Running   0          46s
pod/kube-apiserver-test-01            1/1     Running   0          2m1s
pod/kube-apiserver-test-02            1/1     Running   0          2m5s
pod/kube-controller-manager-test-01   1/1     Running   1          113s
pod/kube-controller-manager-test-02   1/1     Running   0          2m5s
pod/kube-controller-manager-test-03   1/1     Running   0          44s
pod/kube-flannel-ds-amd64-4qjp4       1/1     Running   0          46s
pod/kube-flannel-ds-amd64-7gp2c       1/1     Running   0          45s
pod/kube-flannel-ds-amd64-cbvmh       1/1     Running   0          2m5s
pod/kube-flannel-ds-amd64-d4njb       1/1     Running   0          2m46s
pod/kube-flannel-ds-amd64-drs59       1/1     Running   0          46s
pod/kube-proxy-42pff                  1/1     Running   0          46s
pod/kube-proxy-5bb65                  1/1     Running   0          45s
pod/kube-proxy-hrnks                  1/1     Running   0          2m5s
pod/kube-proxy-kc2bb                  1/1     Running   0          3m3s
pod/kube-proxy-zslnb                  1/1     Running   0          46s
pod/kube-scheduler-test-01            1/1     Running   1          2m4s
pod/kube-scheduler-test-02            1/1     Running   0          2m5s
pod/kube-scheduler-test-03            1/1     Running   0          36s

NAME           STATUS   ROLES    AGE     VERSION
node/test-01   Ready    master   3m22s   v1.14.3
node/test-02   Ready    master   2m5s    v1.14.3
node/test-03   Ready    master   46s     v1.14.3
node/test-04   Ready    <none>   46s     v1.14.3
node/test-05   Ready    <none>   46s     v1.14.3

Note: the tokens used to join the cluster are time-limited for both MASTER and NODE. The certificate key for joining masters is valid for two hours, and the node join token is valid for 24 hours; check them with kubeadm token list. Expired tokens can be regenerated. The CA certificate hash, however, does not expire, so save it once the cluster is initialized for later use.

List the existing tokens

[root@TEST-01 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
ay3tyz.peeqmomkayvivxhy   1h        2019-06-12T17:56:33+08:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>
tfm6qd.ibzoeaorwnyqlcow   23h       2019-06-13T15:56:33+08:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token

Regenerate a token for joining NODE machines

[root@TEST-01 ~]# kubeadm token create
bfklrw.shyi4zofcj7hnjx8		# the new token for joining nodes

[root@TEST-01 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
ay3tyz.peeqmomkayvivxhy   1h        2019-06-12T17:56:33+08:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>
bfklrw.shyi4zofcj7hnjx8   23h       2019-06-13T16:50:17+08:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
tfm6qd.ibzoeaorwnyqlcow   23h       2019-06-13T15:56:33+08:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token

Regenerate the certificate key for joining MASTER nodes

[root@TEST-01 ~]# kubeadm init phase upload-certs --experimental-upload-certs
I0612 16:52:00.385590    7773 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0612 16:52:00.385775    7773 version.go:97] falling back to the local client version: v1.14.3
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
5ce76c3c4fc95e8ab7bf5d0abc1abe8232fd3d39a5ff9c49a65612ecbcc6cb3e		# the new certificate-key

[root@TEST-01 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
6zm2x5.p7dwn0q9xkcyah5v   1h        2019-06-12T18:52:00+08:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>		# the new token for joining masters
ay3tyz.peeqmomkayvivxhy   1h        2019-06-12T17:56:33+08:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>
bfklrw.shyi4zofcj7hnjx8   23h       2019-06-13T16:50:17+08:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
tfm6qd.ibzoeaorwnyqlcow   23h       2019-06-13T15:56:33+08:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
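
Putting the pieces together, a new control-plane join command combines the fresh token, the unchanged CA hash, and the new certificate key (the values below are the examples shown above):

kubeadm join 172.23.210.26:12567 --token 6zm2x5.p7dwn0q9xkcyah5v \
	--discovery-token-ca-cert-hash sha256:af32363c4993ca00b7322876cac036bc21816efc1ed0de2a662d606451c60cce \
	--experimental-control-plane --certificate-key 5ce76c3c4fc95e8ab7bf5d0abc1abe8232fd3d39a5ff9c49a65612ecbcc6cb3e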

For v1.21.1 the flag has been renamed to --upload-certs:

[root@TEST-01 k8s-init]# kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f46155cffdc9e8eb7b026255881a9fd290b35607047907a4ad66dce02d421aa6

Get the sha256 hash of the CA certificate

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

When the cluster certificates have expired, kubectl commands fail with the following error

Unable to connect to the server: x509: certificate has expired or is not yet valid

Renewing the certificates: first back up everything under /etc/kubernetes.

cp -r /etc/kubernetes/pki /etc/kubernetes/.pki_backup	# Back up the old certificates
kubeadm config view > cluster.yaml						# Dump the current cluster config, used when renewing the certificates
kubeadm alpha certs renew all --config cluster.yaml		# "renew all" renews every certificate using the given config; to renew just one, replace "all" with the corresponding name from the list below.

all
apiserver
apiserver-etcd-client
apiserver-kubelet-client
etcd-healthcheck-client
etcd-peer
etcd-server
front-proxy-client

Copy the renewed certificates under /etc/kubernetes/pki to the same directory on the other master nodes.
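
The kubeconfig files under /etc/kubernetes embed certificates as well, and the control-plane components only serve the new certificates after they are restarted. If kubectl still complains after the renewal, refreshing the local copy usually helps (a hedged follow-up, not in the original steps):

cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config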

Check certificate validity

openssl x509 -in ca.crt -noout -text | grep Not

kubeadm alpha certs check-expiration	# older kubeadm releases
kubeadm certs check-expiration			# newer kubeadm releases (the command left alpha in v1.20)

Fixing the default one-year certificate validity in kubernetes

https://blog.51cto.com/11889458/2323328

Or refer to [[Kubernetes/修改kubeadm证书有效时长]] and change the certificate generation duration in advance.


The NodePort range for k8s Services defaults to 30000-32767. To allow a different range:

Edit /etc/kubernetes/manifests/kube-apiserver.yaml on every master node, add --service-node-port-range=1-65535 to the kube-apiserver command arguments, then restart the kubelet service so the apiserver static pod is recreated with the new flag.
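
A quick way to confirm the flag is active once the apiserver pod has restarted (a simple sanity check, not from the original text):

ps -ef | grep "[s]ervice-node-port-range"	# should show the running kube-apiserver with the new flag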


In a local environment without an ingress, to reach services directly via their Cluster IP, just add a static route for the ClusterIP subnet on the network gateway, with the next hop set to the physical NIC IP of any K8S node.
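
For example, with the service CIDR used in this article (10.10.0.0/16) and test-02 as the next hop, the route on a Linux gateway would look like:

ip route add 10.10.0.0/16 via 172.23.210.23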
