
Simple Steps: Deploying a Kubernetes v1.28 Cluster on Debian with containerd


Goal

Deploy Kubernetes v1.28 using the following environment:

  • OS: Debian 12.0
  • Kernel: 6.1.0-7-amd64
  • Container runtime: containerd (CRI)


Getting Started

This part is based on the official Kubernetes documentation:
kubernetes.io/zh-cn/docs/…

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based Linux distributions, as well as distributions without a package manager.
  • 2 GB or more of RAM per machine (less will leave little room for your applications).
  • 2 or more CPU cores.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node.

Install Kubernetes with kubeadm

Host Configuration

Prepare the virtual machine environment

IP Address Hostname CPU Memory Storage OS Release Role
10.2.102.241 k8s-master 8C 8G 1024GB Debian 12 Master
10.2.102.242 k8s-node1 8C 16G 1024GB Debian 12 Worker
10.2.102.243 k8s-node2 8C 16G 1024GB Debian 12 Worker

Confirm basic host information

For more details, see the companion article "A Shell One-Liner Roundup: Grabbing System Info with the Three Musketeers (grep, sed, awk)".

# Check the IP address (configure a static address)
ip addr show ens192 | awk '/inet /{split($2, ip, "/"); print ip[1]}'

# Check the MAC address (it must be unique across nodes)
ip link | awk '/state UP/ {getline; print $2}'

# Check the host UUID (product_uuid must be unique across nodes)
cat /sys/class/dmi/id/product_uuid

# Check the kernel version
uname -r

# Check the OS release
cat /etc/os-release

# Check CPU information
lscpu -p | grep -v "^#" | wc -l

# Check memory (DIMM) information
free -h | awk '/Mem/{print $2}'

# Check disk information
lsblk
pvs

Set hostnames and update the /etc/hosts file

Set the system hostname:

# Run on the control-plane (master) node
hostnamectl set-hostname "k8s-master"

# Run on the worker nodes
hostnamectl set-hostname "k8s-node1"
hostnamectl set-hostname "k8s-node2"

Set up the local name resolution file:

cat > /etc/hosts << EOF
127.0.0.1        localhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# Hostname to IP address mappings
10.2.102.241    k8s-master
10.2.102.242    k8s-node1
10.2.102.243    k8s-node2
EOF

Configure DNS resolution:

echo "nameserver 8.8.8.8" | tee /etc/resolv.conf

Set the time zone and configure time synchronization

Set the system time zone:

# Set the system time zone
timedatectl set-timezone Asia/Shanghai

Set up the clock synchronization service:

# Install chrony
apt-get install -y chrony

# Switch to the Aliyun NTP source
sed -i '/pool 2.debian.pool.ntp.org iburst/ s/^/#/' /etc/chrony/chrony.conf && \
sed -i '/pool 2.debian.pool.ntp.org iburst/ a\server ntp.aliyun.com iburst' /etc/chrony/chrony.conf

# Enable and immediately start the chrony service
systemctl enable --now chrony

# List the time sources chrony is synchronizing with
chronyc sources -v

# Show tracking information between the system clock and the chrony time source
chronyc tracking

# Force the system clock to step-sync with the chrony servers
chronyc -a makestep

Configure the package sources

# Aliyun mirror sources for Debian 12 (codename Bookworm)
cat > /etc/apt/sources.list << EOF
deb https://mirrors.aliyun.com/debian/ bookworm main non-free non-free-firmware contrib
deb-src https://mirrors.aliyun.com/debian/ bookworm main non-free non-free-firmware contrib

deb https://mirrors.aliyun.com/debian-security/ bookworm-security main
deb-src https://mirrors.aliyun.com/debian-security/ bookworm-security main

deb https://mirrors.aliyun.com/debian/ bookworm-updates main non-free non-free-firmware contrib
deb-src https://mirrors.aliyun.com/debian/ bookworm-updates main non-free non-free-firmware contrib

deb https://mirrors.aliyun.com/debian/ bookworm-backports main non-free non-free-firmware contrib
deb-src https://mirrors.aliyun.com/debian/ bookworm-backports main non-free non-free-firmware contrib

# This system was installed using small removable media
# (e.g. netinst, live or single CD). The matching "deb cdrom"
# entries were disabled at the end of the installation process.
# For information about how to configure apt package sources,
# see the sources.list(5) manual.
EOF

# Clear apt's package cache
apt clean

# Remove outdated packages from apt's cache
apt autoclean

# Refresh the package lists
apt update

Tune kernel parameters

# Create a kernel configuration file named kubernetes.conf with the following settings
cat > /etc/sysctl.d/kubernetes.conf << EOF
# Pass bridged IPv6 traffic to ip6tables (has no effect if the firewall is disabled or iptables is not used)
net.bridge.bridge-nf-call-ip6tables = 1

# Pass bridged IPv4 traffic to iptables (has no effect if the firewall is disabled or iptables is not used)
net.bridge.bridge-nf-call-iptables = 1

# Enable IPv4 packet forwarding
net.ipv4.ip_forward = 1

# Disable sending ICMP redirect messages
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Raise the maximum number of tracked connections
net.netfilter.nf_conntrack_max = 1000000

# Raise the timeout for established connections in the conntrack table
net.netfilter.nf_conntrack_tcp_timeout_established = 86400

# Raise the listen queue size
net.core.somaxconn = 1024

# Mitigate SYN flood attacks
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2

# Raise the file descriptor limit
fs.file-max = 65536

# Set swappiness to 0 to minimize swapping to disk
vm.swappiness = 0
EOF

# Load the br_netfilter kernel module, which provides the netfilter hooks required for bridged traffic
modprobe br_netfilter

# Verify that the module was loaded successfully
lsmod | grep br_netfilter

# Read the parameters from the file and apply them to the running system
sysctl -p /etc/sysctl.d/kubernetes.conf
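
A quick spot-check that the values took effect:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables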

Install ipset and ipvsadm

The roles of ipset and ipvsadm in Kubernetes:

  • ipset mainly supports Service load balancing and network policies. It enables high-performance packet filtering and forwarding, and fast matching of IP addresses and ports.
  • ipvsadm is mainly used to configure and manage the IPVS load balancer that implements Service load balancing.

# Install online
apt-get install -y ipset ipvsadm

# Confirm that they are installed
dpkg -l ipset ipvsadm
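
Optionally, confirm up front that the IPVS modules can be loaded and that ipvsadm works (the virtual server table stays empty until kube-proxy populates it):

# Load the IPVS modules and list the (currently empty) virtual server table
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
ipvsadm -Ln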

Kernel module configuration

# Define the kernel modules to load automatically at boot
cat > /etc/modules-load.d/kubernetes.conf << EOF
# /etc/modules-load.d/kubernetes.conf

# Linux bridge netfilter support
br_netfilter

# IPVS load balancer
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh

# Connection tracking (on kernels >= 4.19, nf_conntrack_ipv4 was merged into nf_conntrack)
nf_conntrack

# IP tables rules
ip_tables
EOF

# Add executable permission (not strictly required; modules-load.d files only need to be readable)
chmod a+x /etc/modules-load.d/kubernetes.conf
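
To load these modules immediately instead of waiting for the next boot, restart systemd's module-loading service and confirm:

# systemd-modules-load picks up everything under /etc/modules-load.d/
systemctl restart systemd-modules-load.service
lsmod | grep -E 'br_netfilter|ip_vs|nf_conntrack'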

Disable the swap partition

# Show the swap partitions currently in use
swapon --show

# Turn off all active swap partitions
swapoff -a

# Prevent swap partitions from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable the security policy service

# Stop the AppArmor service
systemctl stop apparmor.service

# Disable the AppArmor service
systemctl disable apparmor.service

Disable the firewall service

# Disable the Uncomplicated Firewall (ufw)
ufw disable

# Stop the ufw service
systemctl stop ufw.service

# Disable the ufw service
systemctl disable ufw.service

[Supplement] The CentOS family

Note that configuration file paths and syntax may differ slightly between distributions.

  • Disable SELinux

# Temporarily disable SELinux
setenforce 0

# Permanently disable SELinux after a reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

  • Flush firewall rules and set the default forwarding policy

# Flush and delete iptables rules
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

# Set the default policy of the FORWARD chain to ACCEPT
iptables -P FORWARD ACCEPT

# Stop and disable the firewalld service
systemctl stop firewalld && systemctl disable firewalld

Install the Container Runtime

About container runtimes

One diagram to understand the relationship and differences between Docker and Kubernetes: www.processon.com/view/654fbf…

As of v1.24, Dockershim has been removed from the Kubernetes project. The CRI endpoints currently supported on Linux are listed below:

Container runtime                 Unix domain socket                            Notes
containerd                        unix:///var/run/containerd/containerd.sock   We use containerd as the container runtime for our production K8S clusters.
CRI-O                             unix:///var/run/crio/crio.sock               -
Docker Engine (via cri-dockerd)   unix:///var/run/cri-dockerd.sock             Docker is very popular as a standalone engine, but the cri-dockerd project has comparatively few stars; still, it is worth watching.

Note: well-known products built on containerd include Docker, Kubernetes, and Rancher, while products built on CRI-O include Red Hat's OpenShift, among others.

Download the latest containerd release package

# Download the cri-containerd archive from GitHub
wget https://github.com/containerd/containerd/releases/download/v1.7.8/cri-containerd-1.7.8-linux-amd64.tar.gz

# Extract the archive into the root directory
tar xf cri-containerd-1.7.8-linux-amd64.tar.gz -C /

Modify the containerd configuration file

# Create the directory that will hold the containerd configuration file
mkdir /etc/containerd

# Generate a default containerd configuration file
containerd config default > /etc/containerd/config.toml

# Change the sandbox (pause) image version used in the configuration
sed -i '/sandbox_image/s/3.8/3.9/' /etc/containerd/config.toml

# Make the container runtime (containerd + CRI) use the systemd cgroup driver when creating containers
sed -i '/SystemdCgroup/s/false/true/' /etc/containerd/config.toml

Start containerd and enable it at boot

# Enable and immediately start the containerd service
systemctl enable --now containerd.service

# Check the current status of the containerd service
systemctl status containerd.service

Verify that the container runtime environment works

Confirm the installation by checking the versions of the following three components:

# Check the containerd version
containerd --version

# Command-line tool for interacting with CRI (Container Runtime Interface) compatible runtimes
crictl --version

# Runtime for running OCI (Open Container Initiative) compliant containers
runc --version
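
If crictl later warns about an unset runtime endpoint, it can be pointed at the containerd socket explicitly via /etc/crictl.yaml; a minimal example:

cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
EOF

# Verify that crictl can talk to containerd
crictl info | head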

Deploy the K8S Cluster

Add the Kubernetes package repository

# Update the apt package index and install the packages needed to use the Kubernetes apt repository
apt-get install -y gnupg gnupg2 curl software-properties-common

# Download the Google Cloud GPG key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmour -o /etc/apt/trusted.gpg.d/cgoogle.gpg

# Add the official Kubernetes repository to the system's apt sources
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

# Update the apt package index
apt-get update

Install the cluster packages

Install kubelet, kubeadm, and kubectl, and pin their versions.

# Install the required packages
root@k8s-master:~# apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables ethtool kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 9 newly installed, 0 to remove and 80 not upgraded.
Need to get 87.3 MB of archives.
After this operation, 337 MB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/debian bookworm/main amd64 conntrack amd64 1:1.4.7-1+b2 [35.2 kB]
Get:4 https://mirrors.aliyun.com/debian bookworm/main amd64 ebtables amd64 2.0.11-5 [86.5 kB]
Get:8 https://mirrors.aliyun.com/debian bookworm/main amd64 ethtool amd64 1:6.1-1 [197 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.26.0-00 [18.9 MB]
Get:9 https://mirrors.aliyun.com/debian bookworm/main amd64 socat amd64 1.7.4.4-2 [375 kB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 1.2.0-00 [27.6 MB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.28.2-00 [19.5 MB]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.28.2-00 [10.3 MB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.28.2-00 [10.3 MB]
Fetched 87.3 MB in 8s (11.2 MB/s)
Selecting previously unselected package conntrack.
(Reading database ... 28961 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.7-1+b2_amd64.deb ...
Unpacking conntrack (1:1.4.7-1+b2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.26.0-00_amd64.deb ...
Unpacking cri-tools (1.26.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-5_amd64.deb ...
Unpacking ebtables (2.0.11-5) ...
Selecting previously unselected package ethtool.
Preparing to unpack .../3-ethtool_1%3a6.1-1_amd64.deb ...
Unpacking ethtool (1:6.1-1) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../4-kubernetes-cni_1.2.0-00_amd64.deb ...
Unpacking kubernetes-cni (1.2.0-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../5-socat_1.7.4.4-2_amd64.deb ...
Unpacking socat (1.7.4.4-2) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../6-kubelet_1.28.2-00_amd64.deb ...
Unpacking kubelet (1.28.2-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../7-kubectl_1.28.2-00_amd64.deb ...
Unpacking kubectl (1.28.2-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../8-kubeadm_1.28.2-00_amd64.deb ...
Unpacking kubeadm (1.28.2-00) ...
Setting up conntrack (1:1.4.7-1+b2) ...
Setting up kubectl (1.28.2-00) ...
Setting up ebtables (2.0.11-5) ...
Setting up socat (1.7.4.4-2) ...
Setting up cri-tools (1.26.0-00) ...
Setting up kubernetes-cni (1.2.0-00) ...
Setting up ethtool (1:6.1-1) ...
Setting up kubelet (1.28.2-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.28.2-00) ...
Processing triggers for man-db (2.9.4-2) ...

# Hold the package versions to prevent automatic upgrades
root@k8s-master:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

# Check the installed package versions
root@k8s-master:~# dpkg -l kubelet kubeadm kubectl
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-=====================================
hi  kubeadm        1.28.2-00    amd64        Kubernetes Cluster Bootstrapping Tool
hi  kubectl        1.28.2-00    amd64        Kubernetes Command Line Tool
hi  kubelet        1.28.2-00    amd64        Kubernetes Node Agent

Configure kubelet

# Use /etc/default/kubelet to pass extra arguments to kubelet
cat > /etc/default/kubelet << EOF
# Tell kubelet to use systemd as the cgroup driver for the container runtime
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

# Enable kubelet at boot for now (it will keep restarting until kubeadm init/join provides its configuration)
systemctl enable kubelet
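
For reference, kubeadm v1.28 already defaults the kubelet's cgroupDriver to systemd in the KubeletConfiguration it generates, so the flag above is mostly a safety net; the default can be checked with:

kubeadm config print init-defaults --component-configs KubeletConfiguration | grep cgroupDriver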

Initialize the Kubernetes cluster

Initialize the K8S cluster on the Master node:

1. Pull the images
root@k8s-master:~# kubeadm config images pull
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.3
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.3
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.3
[config/images] Pulled registry.k8s.io/kube-proxy:v1.28.3
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.9-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1

2. List the images
root@k8s-master:~# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

3. Initialize the cluster

kubeadm init performs the following, among other things:

- Generates certificates and keys for the API server, etcd, and other components.
- Creates kubeconfig files containing the configuration needed to access the cluster.
- Creates the static Pod manifest for etcd.
- Creates static Pod manifests for kube-apiserver, kube-controller-manager, and kube-scheduler.
- Starts the kubelet.
- Marks the node k8s-master as a control-plane node and adds the corresponding labels and taints.
- Configures the bootstrap token, RBAC roles, the cluster-info ConfigMap, and more.

# Specify a custom Pod network CIDR for the cluster; 10.244.0.0/16 is a common default choice
root@k8s-master:~# kubeadm init --control-plane-endpoint=k8s-master --kubernetes-version=v1.28.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.2.102.241 --cri-socket unix://var/run/containerd/containerd.sock
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.102.241]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.2.102.241 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.2.102.241 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.001810 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: nrd1gc.itd7fmgzfpznt1zx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token nrd1gc.itd7fmgzfpznt1zx \
	--discovery-token-ca-cert-hash sha256:3fa47c723879848c7ad77a4605569e9524914fa329cccbf4f6e20968c8bb67b2 \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token nrd1gc.itd7fmgzfpznt1zx \
	--discovery-token-ca-cert-hash sha256:3fa47c723879848c7ad77a4605569e9524914fa329cccbf4f6e20968c8bb67b2

4. Configure access to the Kubernetes cluster
root@k8s-master:~# mkdir -p $HOME/.kube
root@k8s-master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-master:~# export KUBECONFIG=/etc/kubernetes/admin.conf

5. Get node and cluster information
root@k8s-master:~# kubectl get nodes -o wide
NAME         STATUS     ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
k8s-master   NotReady   control-plane   3m40s   v1.28.2   10.2.102.241   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-7-amd64    containerd://1.7.8

root@k8s-master:~# kubectl cluster-info
Kubernetes control plane is running at https://k8s-master:6443
CoreDNS is running at https://k8s-master:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

6. List all CRI containers; they should all be in the Running state
root@k8s-master:~# crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
b9ce7283ea12b       c120fed2beb84       2 minutes ago       Running             kube-proxy                0                   dbe51de138e00       kube-proxy-qjkfp
86347ca767e8c       7a5d9d67a13f6       3 minutes ago       Running             kube-scheduler            0                   5ac0fb9aa591f       kube-scheduler-k8s-master
c4602ab9c2a32       cdcab12b2dd16       3 minutes ago       Running             kube-apiserver            0                   35c1b0320b68f       kube-apiserver-k8s-master
b9c2ec66a3580       55f13c92defb1       3 minutes ago       Running             kube-controller-manager   0                   40d312589fdfe       kube-controller-manager-k8s-master
668707e9ab707       73deb9a3f7025       3 minutes ago       Running             etcd                      0                   ea104d6e8cef7       etcd-k8s-master

Join all Worker nodes to the K8S cluster:

1. Test connectivity to the API server port
root@k8s-node1:~# nmap -p 6443 -Pn 10.2.102.241
Starting Nmap 7.93 ( https://nmap.org ) at 2023-11-13 02:33 CST
Nmap scan report for k8s-master (10.2.102.241)
Host is up (0.00026s latency).

PORT     STATE SERVICE
6443/tcp open  sun-sr-https
MAC Address: 00:50:56:80:16:51 (VMware)

Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

2. Copy the following command from the output of "kubeadm init"
root@k8s-node1:~# kubeadm join k8s-master:6443 --token nrd1gc.itd7fmgzfpznt1zx --discovery-token-ca-cert-hash sha256:3fa47c723879848c7ad77a4605569e9524914fa329cccbf4f6e20968c8bb67b2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Verify on the Master node that the cluster nodes are available:

root@k8s-master:~# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   14m     v1.28.2
k8s-node1    NotReady   <none>          3m37s   v1.28.2
k8s-node2    NotReady   <none>          25s     v1.28.2

root@k8s-master:~# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-5dd5756b68-2whrm             0/1     Pending   0          18m
coredns-5dd5756b68-wftr8             0/1     Pending   0          18m
etcd-k8s-master                      1/1     Running   0          18m
kube-apiserver-k8s-master            1/1     Running   0          18m
kube-controller-manager-k8s-master   1/1     Running   0          18m
kube-proxy-289pg                     1/1     Running   0          7m16s
kube-proxy-qjkfp                     1/1     Running   0          18m
kube-proxy-rnpkw                     1/1     Running   0          4m4s
kube-scheduler-k8s-master            1/1     Running   0          18m

Set Up the Pod Network with the Calico Plugin

Calico is one of the most mature open-source pure layer-3 networking frameworks available today: a widely adopted, battle-tested networking and network security solution for Kubernetes, virtual machines, and bare-metal workloads. Calico provides two major services for cloud-native applications: network connectivity between workloads and network security policy between workloads.

Calico documentation: projectcalico.docs.tigera.io/about/about…

  1. Install the Tigera Calico operator
root@k8s-master:~# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml

root@k8s-master:~# kubectl get ns
NAME              STATUS   AGE
default           Active   28m
kube-node-lease   Active   28m
kube-public       Active   28m
kube-system       Active   28m
tigera-operator   Active   43s

root@k8s-master:~# kubectl get pods -n tigera-operator
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-597bf4ddf6-l4j6n   1/1     Running   0          110s

  2. Install Calico by creating the necessary custom resources

# Download the custom resources file
root@k8s-master:~# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml

# Change the IP pool; it must match the Pod CIDR used at initialization
root@k8s-master:~# sed -i 's/192.168.0.0/10.244.0.0/' custom-resources.yaml

# Install Calico
root@k8s-master:~# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

root@k8s-master:~# kubectl get ns
NAME              STATUS   AGE
calico-system     Active   20s
default           Active   33m
kube-node-lease   Active   33m
kube-public       Active   33m
kube-system       Active   33m
tigera-operator   Active   5m8s

  3. Check the status
root@k8s-master:~# kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6c8fd5c4d4-tnkkj   1/1     Running   0          26m
calico-node-gjcdb                          1/1     Running   0          26m
calico-node-mhqz8                          1/1     Running   0          26m
calico-node-wxv7j                          1/1     Running   0          26m
calico-typha-65b978b6f9-v9wpr              1/1     Running   0          26m
calico-typha-65b978b6f9-xkczl              1/1     Running   0          26m
csi-node-driver-fd6kr                      2/2     Running   0          26m
csi-node-driver-lswnw                      2/2     Running   0          26m
csi-node-driver-xsljx                      2/2     Running   0          26m

root@k8s-master:~# kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
coredns-5dd5756b68-2whrm             1/1     Running   0          59m   10.244.169.130   k8s-node2    <none>           <none>
coredns-5dd5756b68-wftr8             1/1     Running   0          59m   10.244.169.132   k8s-node2    <none>           <none>
etcd-k8s-master                      1/1     Running   0          59m   10.2.102.241     k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          59m   10.2.102.241     k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          59m   10.2.102.241     k8s-master   <none>           <none>
kube-proxy-289pg                     1/1     Running   0          48m   10.2.102.242     k8s-node1    <none>           <none>
kube-proxy-qjkfp                     1/1     Running   0          59m   10.2.102.241     k8s-master   <none>           <none>
kube-proxy-rnpkw                     1/1     Running   0          45m   10.2.102.243     k8s-node2    <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          59m   10.2.102.241     k8s-master   <none>           <none>

  4. DNS resolution test
root@k8s-master:~# apt install -y dnsutils

# Get the IP of the `kube-dns` Service in the Kubernetes cluster
root@k8s-master:~# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   63m

# Use dig against that DNS server IP to resolve a specific domain name
root@k8s-master:~# dig -t a www.baidu.com @10.96.0.10

; <<>> DiG 9.18.19-1~deb12u1-Debian <<>> -t a www.baidu.com @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56133
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: bef7a82bf5f44839 (echoed)
;; QUESTION SECTION:
;www.baidu.com.			IN	A

;; ANSWER SECTION:
www.baidu.com.		30	IN	CNAME	www.a.shifen.com.
www.a.shifen.com.	30	IN	A	39.156.66.18
www.a.shifen.com.	30	IN	A	39.156.66.14

;; Query time: 12 msec
;; SERVER: 10.96.0.10#53(10.96.0.10) (UDP)
;; WHEN: Mon Nov 13 03:26:09 CST 2023
;; MSG SIZE  rcvd: 161

Test the Kubernetes Cluster Installation

1. Create a Deployment
root@k8s-master:~# kubectl create deployment nginx-app --image=nginx --replicas 2
deployment.apps/nginx-app created

2. Expose the Deployment as a Service
root@k8s-master:~# kubectl expose deployment nginx-app --name=nginx-web-svc --type NodePort --port 80 --target-port 80
service/nginx-web-svc exposed

3. Get the Service details
root@k8s-master:~# kubectl describe svc nginx-web-svc
Name:                     nginx-web-svc
Namespace:                default
Labels:                   app=nginx-app
Annotations:              <none>
Selector:                 app=nginx-app
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.129.21
IPs:                      10.103.129.21
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31517/TCP
Endpoints:                10.244.169.134:80,10.244.36.67:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

4. Access it using any worker node's hostname
root@k8s-master:~# curl http://k8s-node1:31517
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

5. Check the Pod IPs
root@k8s-master:~# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
nginx-app-5777b5f95-lmrnk   1/1     Running   0          11m   10.244.169.134   k8s-node2   <none>           <none>
nginx-app-5777b5f95-pvkj2   1/1     Running   0          11m   10.244.36.67     k8s-node1   <none>           <none>

6. Access it via the Pod IP directly
root@k8s-master:~# nmap -p 80 -Pn 10.244.169.134
Starting Nmap 7.93 ( https://nmap.org ) at 2023-11-13 12:21 CST
Nmap scan report for 10.244.169.134
Host is up (0.00029s latency).

PORT   STATE SERVICE
80/tcp open  http

Nmap done: 1 IP address (1 host up) scanned in 0.25 seconds

# Enable kubectl shell completion for the current and future shells
root@k8s-master:~# echo "source <(kubectl completion bash)" >> ~/.bashrc
root@k8s-master:~# source ~/.bashrc

Install the Helm Package Manager

Helm is a package manager for Kubernetes, much like Apt or Yum on Linux. It helps developers and system administrators deploy, upgrade, and uninstall applications on a Kubernetes cluster more conveniently.

The three main concepts in Helm:

Concept      Description
Chart        A package of all the resource definitions needed to deploy an application on a Kubernetes cluster
Release      An instance of a Chart deployed on a Kubernetes cluster
Repository   Where Charts are stored and shared, similar to a software repository
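
To make the three concepts concrete, a typical Helm workflow looks roughly like the following (the bitnami repository and nginx chart are purely illustrative; this deployment does not need them):

# Repository: where charts are published
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Chart: the package; Release: one named, installed instance of that chart
helm search repo bitnami/nginx
helm install my-nginx bitnami/nginx     # "my-nginx" is the release name
helm list                               # list deployed releases
helm uninstall my-nginx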

1. Add Helm's official GPG key
root@k8s-master:~# curl https://baltocdn.com/helm/signing.asc | gpg --dearmor -o /usr/share/keyrings/helm-keyring.gpg

2. Add Helm's official APT repository
root@k8s-master:~# echo "deb [signed-by=/usr/share/keyrings/helm-keyring.gpg] https://baltocdn.com/helm/stable/debian/ all main" | tee /etc/apt/sources.list.d/helm-stable-debian.list

3. Update the apt sources
root@k8s-master:~# apt-get update

4. Install Helm
root@k8s-master:~# apt-get install -y helm

5. Verify that Helm is installed correctly
root@k8s-master:~# helm version
version.BuildInfo{Version:"v3.13.3", GitCommit:"c8b948945e52abba22ff885446a1486cb5fd3474", GitTreeState:"clean", GoVersion:"go1.20.11"}

Install and Configure MetalLB

In a standard bare-metal Kubernetes cluster, Services of type LoadBalancer normally cannot be used, because Kubernetes itself cannot allocate and manage external IP addresses without an external load balancer (such as the ones cloud providers offer). To solve this, we can use an open-source tool called MetalLB. MetalLB is a load balancer designed for bare-metal Kubernetes clusters; it lets a standard cluster allocate and manage external IP addresses for LoadBalancer-type Services.

  1. Install MetalLB with Helm

1. Add the MetalLB chart repository to Helm
root@k8s-master:~# helm repo add metallb https://metallb.github.io/metallb
"metallb" has been added to your repositories

2. Update the Helm chart index
root@k8s-master:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "metallb" chart repository
Update Complete. ⎈Happy Helming!⎈

3. Install MetalLB into the metallb-system namespace
root@k8s-master:~# helm install metallb metallb/metallb --namespace metallb-system --create-namespace
NAME: metallb
LAST DEPLOYED: Thu Dec 28 01:18:01 2023
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.

4. Check that the MetalLB Pods are running
root@k8s-master:~# kubectl get pods -n metallb-system
NAME                                  READY   STATUS    RESTARTS   AGE
metallb-controller-5f9bb77dcd-m6n4r   1/1     Running   0          32s
metallb-speaker-7s7m6                 4/4     Running   0          32s
metallb-speaker-7tbbp                 4/4     Running   0          32s
metallb-speaker-dmsng                 4/4     Running   0          32s

  2. Create a metallb-config.yaml file:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.100-192.168.1.250

---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-mode-config
  namespace: metallb-system
spec:
  ipAddressPools:
  - ip-pool

In this configuration, addresses defines the range of IP addresses MetalLB may hand out; adjust it to match your network environment. The L2Advertisement object indicates that L2 is the mode being used.

  3. Apply the configuration file:

# Create the MetalLB configuration objects (IPAddressPool and L2Advertisement)
root@k8s-master:~# kubectl apply -f metallb-config.yaml
ipaddresspool.metallb.io/ip-pool created
l2advertisement.metallb.io/l2-mode-config created

# Check the status of these resources
root@k8s-master:~# kubectl get ipaddresspool -n metallb-system
NAME      AGE
ip-pool   59s
root@k8s-master:~# kubectl get l2advertisement -n metallb-system
NAME             AGE
l2-mode-config   64s

At this point, MetalLB should be running in your cluster and ready to assign IP addresses to your LoadBalancer-type Services.

For more on how to use it, see: metallb.universe.tf/usage/

If you want a Service to request an IP from your address pool, configure the Service like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: ip-pool
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Of course, you can also request a specific IP address explicitly:

metadata:
  name: nginx
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.1.111
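
Either way, once the Service is created MetalLB assigns an address from the pool, and it shows up in the EXTERNAL-IP column; a quick way to confirm (the output below is only indicative):

kubectl get svc nginx
# NAME    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
# nginx   LoadBalancer   10.x.x.x       192.168.1.100   80:3xxxx/TCP   10s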

Install and Deploy Longhorn

Longhorn is a cloud-native distributed storage system that provides persistent storage for Kubernetes workloads. Once Longhorn is enabled in a Kubernetes cluster, it manages storage automatically, including dynamically creating PVs for PVCs and handling failure recovery and data replication in the underlying storage.

As a user, you only need to create and manage storage through native Kubernetes PVCs, without dealing with PVs or the underlying storage directly.

Install Longhorn with Helm
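
The command below assumes the Longhorn chart repository has already been added; if not, add it first (the repository URL comes from the Longhorn documentation):

helm repo add longhorn https://charts.longhorn.io
helm repo update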

helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace --set defaultDataPath=/data/longhorn

Before using it, it is worth reading the official Longhorn documentation to understand more of the details of deploying and operating Longhorn.
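
As a minimal usage sketch (assuming the default StorageClass created by the chart is named longhorn), a workload only needs to declare a PVC and Longhorn provisions the PV behind it:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF

# Watch the claim get bound to a dynamically provisioned volume
kubectl get pvc demo-pvc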

Network Troubleshooting

# To test from within a given namespace, you can spin up a temporary test Pod using the curlimages/curl image, a lightweight image that ships curl and DNS tools:
kubectl run -n <namespace> --rm -i --tty test --image=curlimages/curl --restart=Never -- /bin/sh
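
Inside that test Pod, typical checks are cluster DNS resolution and direct access to a Service (my-svc and <namespace> below are placeholders):

# Resolve the API server's built-in Service via cluster DNS
nslookup kubernetes.default.svc.cluster.local

# Try the target Service directly by its DNS name
curl -sv http://my-svc.<namespace>.svc.cluster.local:80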

Deploy Kubernetes with Rancher

To be updated ...