Easy to Follow! A Tutorial on Building a Kubernetes Cluster with the containerd Runtime on CentOS 7.9
最编程
2024-08-10 21:30:36
...
[root@node1 ~]# kubeadm join 192.168.1.92:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:d87669b0c3630a0c5f566097cedee190764712ee0c8d41fc2db00521fcf9f680
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
node2:
[root@node2 ~]# kubeadm join 192.168.1.92:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:637dedda374472a68d5e3f58701a50527692ab281d50181a7d516751333ea8e8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
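If the bootstrap token printed by kubeadm init has already expired (by default tokens are valid for 24 hours), a fresh join command can be generated on the control-plane node. A minimal sketch:

# Run on the master: prints a complete 'kubeadm join ...' command with a newly created token
[root@master ~]# kubeadm token create --print-join-command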
6.6 Check the cluster status:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   2m57s   v1.25.0
node1    NotReady   <none>          47s     v1.25.0
node2    NotReady   <none>          29s     v1.25.0
7. Install a Network Plugin
As shown above, all nodes are in the NotReady state. This is because no network plugin has been installed yet: you must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other, and the cluster DNS (CoreDNS) will not start until a network is installed. Next, install a network plugin. You can pick one from either of the two links below (I used the second one); here we install Calico. A quick way to confirm the CoreDNS dependency is sketched after the links.
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
- https://projectcalico.docs.tigera.io/archive/v3.23/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico
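Before the CNI is installed you can see the dependency described above for yourself: the CoreDNS pods cannot be scheduled because no Pod network exists yet. A minimal check, assuming the standard k8s-app=kube-dns label that kubeadm-deployed CoreDNS uses:

# CoreDNS pods should stay in Pending until a CNI plugin is installed
[root@master ~]# kubectl get pod -n kube-system -l k8s-app=kube-dns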
7.1 Download the Calico manifest
[root@master ~]# curl https://projectcalico.docs.tigera.io/archive/v3.23/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  226k  100  226k    0     0   278k      0 --:--:-- --:--:-- --:--:--  278k
7.2 Edit the calico.yaml file:
- Note: the default Pod CIDR in the file is 192.168.0.0/16.
- name: CALICO_IPV4POOL_CIDR    # kubeadm init was run with a 172.x Pod network, so this value must be changed
  value: "172.16.0.0/16"
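After saving the file, it is worth confirming the change before applying the manifest. A quick check (a sketch; in the v3.23 manifest these two lines ship commented out, so make sure the leading '#' was removed and the indentation matches the surrounding env entries):

# Show the CIDR setting and the line after it; expect the 172.16.0.0/16 value
[root@master ~]# grep -A 1 "CALICO_IPV4POOL_CIDR" calico.yaml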
7.3 Install the Calico network plugin
[root@master ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
7.4 Check the pod status (refreshing once per second)
[root@master ~]# watch -n 1 kubectl get pod -n kube-system
[root@master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS      AGE
calico-kube-controllers-d8b9b6478-2khtq   1/1     Running   0             110s
calico-node-f4t6r                         1/1     Running   0             110s
calico-node-f6xfz                         1/1     Running   0             110s
calico-node-mck5r                         1/1     Running   0             110s
coredns-7f8cbcb969-2ddsl                  1/1     Running   0             4d15h
coredns-7f8cbcb969-pm5s8                  1/1     Running   0             4d15h
etcd-master                               1/1     Running   1             4d15h
kube-apiserver-master                     1/1     Running   1             4d15h
kube-controller-manager-master            1/1     Running   1 (70s ago)   4d15h
kube-proxy-2hzkf                          1/1     Running   0             4d15h
kube-proxy-grx5m                          1/1     Running   0             4d15h
kube-proxy-klklc                          1/1     Running   0             4d15h
kube-scheduler-master                     1/1     Running   2 (73s ago)   4d15h
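Instead of watching manually, you can also block until the Calico DaemonSet pods report Ready. A minimal sketch, assuming the k8s-app=calico-node label used by the manifest above and a 5-minute timeout:

# Returns once every calico-node pod is Ready, or fails after the timeout
[root@master ~]# kubectl wait --for=condition=Ready pod -l k8s-app=calico-node -n kube-system --timeout=300s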
7.5 Check the cluster status
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   4d15h   v1.25.0
node1    Ready    <none>          4d15h   v1.25.0
node2    Ready    <none>          4d15h   v1.25.0
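Since this cluster uses containerd, you can also verify which container runtime each node reports. A quick check (not part of the original steps):

# The CONTAINER-RUNTIME column should show containerd://<version> on every node
[root@master ~]# kubectl get nodes -o wide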
8. Testing
- Start a Deployment resource on the cluster
[root@master ~]# vim deploy-nginx.yaml
[root@master ~]# cat deploy-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3   # Tell the Deployment to run 3 Pods matching this template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
[root@master ~]# kubectl apply -f deploy-nginx.yaml
deployment.apps/nginx-deployment created
[root@master ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fb96c846b-48h24   1/1     Running   0          14s
nginx-deployment-7fb96c846b-ms7c9   1/1     Running   0          14s
nginx-deployment-7fb96c846b-zpsf7   1/1     Running   0          14s
Check the status of all pods
[root@master ~]# kubectl get pod -A -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS       AGE     IP               NODE     NOMINATED NODE   READINESS GATES
default       nginx-deployment-7fb96c846b-48h24         1/1     Running   0              61s     172.16.104.3     node2    <none>           <none>
default       nginx-deployment-7fb96c846b-ms7c9         1/1     Running   0              61s     172.16.166.130   node1    <none>           <none>
default       nginx-deployment-7fb96c846b-zpsf7         1/1     Running   0              61s     172.16.166.131   node1    <none>           <none>
kube-system   calico-kube-controllers-d8b9b6478-2khtq   1/1     Running   0              6m46s   172.16.166.129   node1    <none>           <none>
kube-system   calico-node-f4t6r                         1/1     Running   0              6m46s   192.168.1.93     node1    <none>           <none>
kube-system   calico-node-f6xfz                         1/1     Running   0              6m46s   192.168.1.92     master   <none>           <none>
kube-system   calico-node-mck5r                         1/1     Running   0              6m46s   192.168.1.94     node2    <none>           <none>
kube-system   coredns-7f8cbcb969-2ddsl                  1/1     Running   0              4d15h   172.16.104.2     node2    <none>           <none>
kube-system   coredns-7f8cbcb969-pm5s8                  1/1     Running   0              4d15h   172.16.104.1     node2    <none>           <none>
kube-system   etcd-master                               1/1     Running   1              4d15h   192.168.1.92     master   <none>           <none>
kube-system   kube-apiserver-master                     1/1     Running   1              4d15h   192.168.1.92     master   <none>           <none>
kube-system   kube-controller-manager-master            1/1     Running   1 (6m6s ago)   4d15h   192.168.1.92     master   <none>           <none>
kube-system   kube-proxy-2hzkf                          1/1     Running   0              4d15h   192.168.1.94     node2    <none>           <none>
kube-system   kube-proxy-grx5m                          1/1     Running   0              4d15h   192.168.1.92     master   <none>           <none>
kube-system   kube-proxy-klklc                          1/1     Running   0              4d15h   192.168.1.93     node1    <none>           <none>
kube-system   kube-scheduler-master                     1/1     Running   2 (6m9s ago)   4d15h   192.168.1.92     master   <none>           <none>
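As an optional follow-up test (not in the original steps), the Deployment can be exposed through a NodePort Service and reached from outside the cluster. The Service name nginx-svc and the node IP below are illustrative:

# Create a NodePort Service in front of the nginx Pods
[root@master ~]# kubectl expose deployment nginx-deployment --name=nginx-svc --port=80 --type=NodePort
# Look up the node port allocated to the Service ...
[root@master ~]# kubectl get svc nginx-svc
# ... then request the nginx welcome page through any node IP, e.g. node1
[root@master ~]# curl http://192.168.1.93:<node-port>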