Quick Guide: Building a Local Kubernetes Certification Environment - A Mock Deployment Tutorial for CKA and CKS
Kubernetes Certification - CKA and CKS Mock Environment Installation and Deployment
王先森 2023-12-18
Environment Preparation
Prepare three Linux machines (this article uses Ubuntu 23.10) that can reach each other over the network.
The three Ubuntu 23.10 machines used in this article:
hostname | IP | memory
---|---|---
k8s-master | 10.1.1.20 | 4GB
k8s-node1 | 10.1.1.30 | 2GB
k8s-node2 | 10.1.1.40 | 2GB
System Initialization
Run the following on each of k8s-master, k8s-node1 and k8s-node2; operating as the root user is recommended.
Grant the regular user (work) passwordless sudo
visudo
# Allow members of group sudo to execute any command
# Add the following line
work ALL=(ALL) NOPASSWD:ALL
Set the timezone to Shanghai
timedatectl set-timezone Asia/Shanghai
apt-get install -y ntpdate >/dev/null 2>&1
ntpdate ntp.aliyun.com
Disable swap
sed -i '/swap/d' /etc/fstab
swapoff -a
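To confirm swap is really off (the kubelet refuses to start with swap enabled by default), a quick check:

```shell
# Should print nothing if no swap device is active
swapon --show

# The Swap line should show 0B for total, used, and free
free -h | grep -i swap
```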
Disable the firewall
systemctl disable --now ufw >/dev/null 2>&1
Load kernel modules and enable traffic forwarding
cat >>/etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
cat >>/etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system >/dev/null 2>&1
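A quick sanity check that the modules are loaded and the sysctl settings took effect:

```shell
# Both modules should appear in the loaded-module list
lsmod | grep -E 'overlay|br_netfilter'

# All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```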
Install containerd, kubeadm, kubelet, kubectl
Install containerd
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get -qq update >/dev/null 2>&1
apt-get install -qq -y containerd.io >/dev/null 2>&1
containerd config default >/etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
systemctl enable containerd
systemctl restart containerd
Install kubeadm, kubelet, kubectl
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - > /dev/null 2>&1
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list > /dev/null 2>&1
apt-get -qq update >/dev/null 2>&1
apt-get install -y kubeadm=1.28.0-00 kubelet=1.28.0-00 kubectl=1.28.0-00
Check the kubeadm, kubelet and kubectl installations; if each command prints a version number, the installation succeeded.
kubeadm version
kubelet --version
kubectl version --client
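Optionally (an extra step not in the original instructions), pin the three packages so a routine `apt-get upgrade` cannot move the cluster onto an unintended version:

```shell
# Prevent apt from upgrading the Kubernetes components automatically
sudo apt-mark hold kubeadm kubelet kubectl

# To upgrade deliberately later: sudo apt-mark unhold kubeadm kubelet kubectl
```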
Initialize the Master Node
All of the following operations are performed on the master node.
Pull the images the cluster needs from the Alibaba Cloud mirror
sudo kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
If the pull succeeds, you will see output similar to:
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
Initialize with kubeadm
- `--apiserver-advertise-address`: the local IP address the master uses to communicate with the other nodes
- `--pod-network-cidr`: the pod network address space
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=10.1.1.20 --pod-network-cidr=10.244.0.0/16
Save the last part of the output; it tells you which configuration steps come next.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
# Prepare .kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
# Deploy a pod network solution
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
# Add worker nodes
kubeadm join 10.1.1.20:6443 --token hd3cjk.sk5co35ml64kw2wo \
--discovery-token-ca-cert-hash sha256:05b42f0a81350227d45f7005c6f2dc664f75d70e0b5e5e8dbfb65705425a859c
Shell Autocompletion (Bash)
More information can be found at https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-autocomplete
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
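The same cheatsheet also suggests a short alias that keeps completion working, which saves keystrokes during a timed exam:

```shell
# Alias kubectl to k and wire bash completion up to the alias
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```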
Deploy a Pod Network Solution
Pick a network addon at https://kubernetes.io/docs/concepts/cluster-administration/addons/ and deploy it following the linked instructions.
Here we choose an overlay solution named Calico.
Deploy it as follows:
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml -o calico-custom-resources.yaml
$ cat calico-custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16 # change to the pod network address space passed to kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
$ kubectl apply -f calico-custom-resources.yaml
Add Worker Nodes
Adding a worker node is simple: run the join command on the worker node, taking care to use the right `--token`.
kubeadm join 10.1.1.20:6443 --token hd3cjk.sk5co35ml64kw2wo \
--discovery-token-ca-cert-hash sha256:05b42f0a81350227d45f7005c6f2dc664f75d70e0b5e5e8dbfb65705425a859c
Note: what if you forgot the join `--token` and `--discovery-token-ca-cert-hash`?
The token can be retrieved with `kubeadm token list`, for example `0pdoeh.wrqchegv3xm3k1ow`:
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
hd3cjk.sk5co35ml64kw2wo 23h 2023-12-19T03:41:11Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
The `discovery-token-ca-cert-hash` can be recomputed with:
openssl x509 -in /etc/kubernetes/pki/ca.crt -pubkey -noout |
openssl pkey -pubin -outform DER |
openssl dgst -sha256
The result looks like `SHA2-256(stdin)= 05b42f0a81350227d45f7005c6f2dc664f75d70e0b5e5e8dbfb65705425a859c`
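Alternatively, kubeadm can create a fresh token and print a complete, ready-to-use join command in one step:

```shell
# Run on the master node; prints a full "kubeadm join ..." line
# with a new token and the correct CA cert hash
sudo kubeadm token create --print-join-command
```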
Finally, check the nodes on the master node (here we have two worker nodes).
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 7m v1.28.0
k8s-node1 Ready <none> 50s v1.28.0
k8s-node2 Ready <none> 21s v1.28.0
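The `<none>` under ROLES is only cosmetic; if you prefer the workers to display a role, you can label them (optional, not required for the exams):

```shell
# Set the conventional worker role label on both nodes
kubectl label node k8s-node1 node-role.kubernetes.io/worker=
kubectl label node k8s-node2 node-role.kubernetes.io/worker=
```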
Cluster Verification
Create a Pod
Create an nginx pod and verify that it reaches the Running state.
$ kubectl run web --image nginx
pod/web created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web 1/1 Running 0 5s
Create a Service
Create a service for the nginx pod, then verify that its cluster IP responds to curl.
$ kubectl expose pod web --port=80 --name=web-service
service/web-service exposed
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 65m
web-service ClusterIP 10.96.95.185 <none> 80/TCP 4s
$ curl 10.96.95.185
...
<title>Welcome to nginx!</title>
...
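Beyond the cluster IP, you can also check that the service name resolves through cluster DNS from inside a throwaway pod (assuming the `curlimages/curl` image is reachable from your nodes):

```shell
# Launch a one-off pod, curl the service by name, then clean it up
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- curl -s http://web-service
```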
Environment Cleanup
$ kubectl delete service web-service
$ kubectl delete pod web