
Deploying k8s-1.22 with kubeadm


I. Environment Preparation

Role       Address          OS          Cluster Version  Installed Software
Master01   192.168.89.164   CentOS 7.9  1.22.2           kubeadm, kubelet, kubectl, docker, nginx, keepalived
Master02   192.168.89.165   CentOS 7.9  1.22.2           kubeadm, kubelet, kubectl, docker, nginx, keepalived
Master03   192.168.89.166   CentOS 7.9  1.22.2           kubeadm, kubelet, kubectl, docker, nginx, keepalived
Node-1     192.168.89.167   CentOS 7.9  1.22.2           kubeadm, kubelet, kubectl, docker
Node-2     192.168.89.168   CentOS 7.9  1.22.2           kubeadm, kubelet, kubectl, docker
VIP        192.168.89.10    -           -                floating IP (managed by keepalived)

II. Preparation

1. Configure a static IP on all machines.

2. Disable the firewall and SELinux on all machines.
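
A minimal sketch of what this step means on CentOS 7 (run on every machine):

# systemctl disable --now firewalld
# setenforce 0        # temporary, until reboot
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config        # permanent, takes effect after reboot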

3. Turn off the swap partition:

# swapoff -a        # temporary (until reboot)
# sed -i 's/.*swap.*/#&/' /etc/fstab        # permanent (comments out the swap entries)
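
To verify, the Swap line of free should read all zeros:

# free -h | grep -i swap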

4. Install Docker on all machines; see:

https://blog.****.net/YJLZ0821/article/details/142794190?sharetype=blogdetail&sharerId=142794190&sharerefer=PC&sharesource=YJLZ0821&spm=1011.2480.3001.8118

5. Synchronize time on all machines:

# crontab -e

0 * * * * /usr/sbin/ntpdate ntp.ntsc.ac.cn        # sync once every hour
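
The cron job only fires at the top of each hour, so it is worth syncing once right away too:

# ntpdate ntp.ntsc.ac.cn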

III. Install kubeadm and kubelet on All Nodes

1. Configure the yum repository:

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install version 1.22:

# yum install -y kubelet-1.22.2-0.x86_64 kubeadm-1.22.2-0.x86_64 kubectl-1.22.2-0.x86_64 ipvsadm ipset
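
A quick sanity check that the pinned 1.22.2 packages landed:

# kubeadm version -o short
# kubelet --version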

3. Load the IPVS-related kernel modules:

# cat << EOF >> /etc/rc.local

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4

EOF

# chmod +x /etc/rc.local
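
rc.local only runs at boot; to load the modules in the current session as well, run the file once by hand:

# bash /etc/rc.local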

4. Configure forwarding-related kernel parameters (net.ipv4.ip_forward = 1 is included because kubeadm's preflight checks require it):

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

# sysctl --system
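
If sysctl --system reports "No such file or directory" for the net.bridge.* keys, the br_netfilter module is not loaded yet; load it now and persist it alongside the IPVS modules:

# modprobe br_netfilter
# echo 'modprobe br_netfilter' >> /etc/rc.local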

5. Verify that the kernel modules loaded successfully:

# lsmod | grep ip_vs
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 141092  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133387  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

6. Start kubelet

1. Configure kubelet to use the same cgroup driver as Docker and the pause image.
Detect Docker's cgroup driver:
# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | awk 'NR==1 {print $3}')

2. Configure kubelet's cgroups (the unquoted EOF is intentional, so $DOCKER_CGROUPS expands here):
# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.5"
EOF

3. Start kubelet and enable it at boot:

# systemctl daemon-reload

# systemctl enable --now kubelet

IV. Deploy the Load Balancer

1. Install nginx and keepalived on the master nodes:

Note the quoted 'EOF': it stops the shell from expanding $releasever and $basearch, so they reach the repo file literally.

# cat <<'EOF' > /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF

# yum install -y nginx keepalived

2. Configure nginx to proxy the masters' kube-apiserver:

Add a stream block at the top level of /etc/nginx/nginx.conf, as a sibling of the existing http block (shown below for orientation), never inside it:

# vim /etc/nginx/nginx.conf
...
stream {
    upstream apiserver {
        server 192.168.89.164:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.89.165:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.89.166:6443 weight=5 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8443;
        proxy_pass apiserver;
    }
}

http {
...
# mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.bak

# systemctl enable --now nginx
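
Before relying on the proxy, it is worth confirming the syntax is valid and the stream listener is up:

# nginx -t
# ss -lntp | grep 8443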

3. Configure keepalived for high availability

1. On master01:

# vim /etc/keepalived/keepalived.conf 

! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.89.10/24
    }
}

# systemctl enable --now keepalived

2. Apply the same configuration on master02 and master03, changing state to BACKUP and priority to 90.

# systemctl enable --now keepalived
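
The VIP should now be bound on the MASTER node only (assuming ens33 is the interface named in keepalived.conf):

# ip addr show ens33 | grep 192.168.89.10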

V. Pull the Kubernetes Component Images

Pulling images straight from k8s.gcr.io may time out, so instead pull the equivalents from Aliyun's registry and re-tag them to the names kubeadm expects.

# vim pull.sh 
#!/usr/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5

# vim tag.sh 
#!/usr/bin/bash
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.2 k8s.gcr.io/kube-controller-manager:v1.22.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.2 k8s.gcr.io/kube-proxy:v1.22.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.2 k8s.gcr.io/kube-apiserver:v1.22.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.2 k8s.gcr.io/kube-scheduler:v1.22.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5

# bash pull.sh && bash tag.sh        # run both scripts
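
After both scripts finish, the re-tagged images kubeadm expects should all be present locally:

# docker images | grep k8s.gcr.io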

VI. Configure the Kubernetes Cluster

1. Run the initialization on master01:

# kubeadm init --kubernetes-version=v1.22.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.89.164 --control-plane-endpoint=192.168.89.10:8443 --ignore-preflight-errors=Swap

2. Copy the admin kubeconfig into ~/.kube:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

3. Join master02 and master03 to the cluster

1. Create the certificate directories on master02 and master03:

[root@master02 ~]# mkdir /etc/kubernetes/pki/etcd -p
[root@master03 ~]# mkdir /etc/kubernetes/pki/etcd -p

2. On master01, copy master01's certificates to master02 and master03 (a loop-based sketch follows the per-host commands):

[root@master01 ~]# scp -rp /etc/kubernetes/pki/ca.*  192.168.89.165:/etc/kubernetes/pki/  
[root@master01 ~]# scp -rp /etc/kubernetes/pki/sa.* 192.168.89.165:/etc/kubernetes/pki/   
[root@master01 ~]# scp -rp /etc/kubernetes/pki/front-proxy-ca.*  192.168.89.165:/etc/kubernetes/pki/   
[root@master01 ~]# scp -rp /etc/kubernetes/pki/etcd/ca.* 192.168.89.165:/etc/kubernetes/pki/etcd/
[root@master01 ~]# scp -rp /etc/kubernetes/admin.conf  192.168.89.165:/etc/kubernetes/

----------------------------------------------------------------------------------------------------------------------
[root@master01 ~]# scp -rp /etc/kubernetes/pki/ca.*  192.168.89.166:/etc/kubernetes/pki/  
[root@master01 ~]# scp -rp /etc/kubernetes/pki/sa.* 192.168.89.166:/etc/kubernetes/pki/   
[root@master01 ~]# scp -rp /etc/kubernetes/pki/front-proxy-ca.*  192.168.89.166:/etc/kubernetes/pki/   
[root@master01 ~]# scp -rp /etc/kubernetes/pki/etcd/ca.* 192.168.89.166:/etc/kubernetes/pki/etcd/
[root@master01 ~]# scp -rp /etc/kubernetes/admin.conf  192.168.89.166:/etc/kubernetes/
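
Equivalently, a small loop (a sketch, assuming passwordless SSH to both backup masters) copies the same files in one pass:

for host in 192.168.89.165 192.168.89.166; do
    scp -rp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* $host:/etc/kubernetes/pki/
    scp -rp /etc/kubernetes/pki/etcd/ca.* $host:/etc/kubernetes/pki/etcd/
    scp -rp /etc/kubernetes/admin.conf $host:/etc/kubernetes/
done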

3. Using the join command printed by the successful init above, paste and run it on master02 and master03:
[root@master02 ~]#  kubeadm join 192.168.89.10:8443 --token b9pc48.kmytbkm8gj9r64su \
        --discovery-token-ca-cert-hash sha256:59507f9ef2c2a503eb116bacea4b04cf23e611653cd8a4de22c1a0695f97c439 \
        --control-plane 

[root@master02 ~]# mkdir -p $HOME/.kube
[root@master02 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

-------------------------------------------------------------------------------------------------------------------------

[root@master03 ~]#  kubeadm join 192.168.89.10:8443 --token b9pc48.kmytbkm8gj9r64su \
        --discovery-token-ca-cert-hash sha256:59507f9ef2c2a503eb116bacea4b04cf23e611653cd8a4de22c1a0695f97c439 \
        --control-plane 

[root@master03 ~]# mkdir -p $HOME/.kube
[root@master03 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master03 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

4. Join Node-1 and Node-2 to the cluster

The command is the same one the masters used to join, minus the --control-plane flag:

[root@node-1 ~]# kubeadm join 192.168.89.10:8443 --token b9pc48.kmytbkm8gj9r64su \
        --discovery-token-ca-cert-hash sha256:59507f9ef2c2a503eb116bacea4b04cf23e611653cd8a4de22c1a0695f97c439 

-------------------------------------------------------------------------------------------------------------------------

[root@node-2 ~]# kubeadm join 192.168.89.10:8443 --token b9pc48.kmytbkm8gj9r64su \
        --discovery-token-ca-cert-hash sha256:59507f9ef2c2a503eb116bacea4b04cf23e611653cd8a4de22c1a0695f97c439 

5. Check the cluster:

# kubectl get nodes

# kubectl config view

All five machines should show up, each with status NotReady (no network plugin is installed yet).

6. Operations relevant to cluster deployment

1. Drain the pods from the master01 node:

# kubectl drain master01 --delete-emptydir-data --force --ignore-daemonsets        # --delete-local-data is the deprecated pre-1.20 spelling of --delete-emptydir-data

2. Delete a node:

# kubectl delete node master02

3. Reset a node:

# kubeadm reset

4. After the reset, remove the leftover config files on the master:

# rm -rf /var/lib/cni/ $HOME/.kube/config

Note: to tear down a fully built cluster, run all of the steps above in order. If kubeadm init itself failed partway, step 3 (kubeadm reset) alone is enough.

5. Adding nodes to the cluster after the kubeadm-generated token has expired

# kubeadm token create        # generate a new token

# kubeadm token create --print-join-command        # print a ready-to-use join command

# kubeadm token list        # list existing tokens

# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'        # compute the sha256 hash of the CA certificate
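
The token and the CA hash from the commands above slot into the same join command used earlier (angle-bracket values are placeholders for the actual output):

# kubeadm join 192.168.89.10:8443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-hash>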

VII. Configure the Network Plugin

1. Configure a registry mirror, then pull the flannel images and YAML manifest

This only needs to be done on one master node.

# cd ~ && mkdir flannel && cd flannel

# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# vim /etc/docker/daemon.json
{
    "registry-mirrors": [ "https://3197780b4ae540e2a7e0a7403f87dcf4.mirror.swr.myhuaweicloud.com" ]
}

# systemctl daemon-reload
# systemctl restart docker
# docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
# docker pull docker.io/flannel/flannel:v0.25.7

# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1        # pull the flannel image whose version matches the image referenced in the yaml
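
To see exactly which image tags the downloaded manifest references (and therefore which tags need to be pulled), grep the manifest:

# grep 'image:' ~/flannel/kube-flannel.yml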

2. Deploy:

# kubectl apply -f ~/flannel/kube-flannel.yml        # wait a little while after applying

# kubectl get nodes

All five machines should now report a status of Ready.
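
If a node stays NotReady, check that the flannel pods themselves are running (depending on the manifest version they live in either the kube-system or kube-flannel namespace):

# kubectl get pods -A -o wide | grep flannel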

3. Notes

# If a node has multiple NICs, see https://github.com/kubernetes/kubernetes/issues/39701.
# You currently need to pass --iface in kube-flannel.yml to name the host's internal NIC;
# otherwise DNS may fail to resolve and containers may be unable to communicate.
# Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
    containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33
        - --iface=eth0
        
⚠️ The value of --iface=ens33 must be your actual NIC name; several --iface flags can be given for multiple NICs.

# Since kubeadm 1.12, an extra taint is placed on nodes that are not yet ready: node.kubernetes.io/not-ready:NoSchedule.
# This is easy to understand: a node accepts no scheduling before it is Ready. But a node cannot become Ready
# until the network plugin is deployed, so edit kube-flannel.yml to add a toleration for that taint:
              - matchExpressions:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready   # add these three lines (around line 165 of kube-flannel.yml)
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel