
Deploying an Elasticsearch Cluster on Kubernetes with ECK


1. ECK Overview

Elastic Cloud on Kubernetes (ECK) is the official extension for simplifying the deployment, management, and operation of the Elastic Stack (including Elasticsearch and Kibana) in a Kubernetes cluster.

ECK is a Kubernetes Operator that manages and automates the lifecycle of the Elastic Stack. With ECK you can quickly accomplish the following in a Kubernetes environment:

  1. Deploy and manage Elasticsearch and Kibana instances, including creation, deletion, scaling, and upgrades.
  2. Configure and tune Elastic Stack components to meet specific requirements.
  3. Handle failure detection, recovery, and backups automatically.
  4. Secure Elasticsearch clusters through security configuration, certificate management, and encrypted communication.
  5. Monitor Elastic Stack performance and resource usage to optimize cluster performance.

Official documentation: www.elastic.co/guide/en/cl…

2. Version Notes

ECK version: 2.8.0

Supported Kubernetes versions: 1.24 to 1.27 (this article uses 1.27.2)

Supported Elasticsearch and Kibana versions: 6.8+, 7.1+, 8+ (this article deploys Elasticsearch and Kibana 8.8.0)

3. Deploying ECK

3.1 Create the CRDs required by ECK

kubectl create -f https://download.elastic.co/downloads/eck/2.8.0/crds.yaml

Output

customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearchautoscalers.autoscaling.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/stackconfigpolicies.stackconfigpolicy.k8s.elastic.co created
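
You can optionally confirm that the CRDs were registered (a quick check; the grep pattern simply matches the Elastic API groups):

kubectl get crd | grep k8s.elastic.co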

3.2 Create the ECK operator

kubectl apply -f https://download.elastic.co/downloads/eck/2.8.0/operator.yaml

Output

namespace/elastic-system created
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created

The ECK operator runs in the elastic-system namespace. For production workloads, use a dedicated namespace rather than elastic-system or the default namespace.
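
For example (a minimal sketch, assuming a namespace name of your own choosing such as elastic-demo), you can create a dedicated namespace up front and pass it with -n when applying the Elasticsearch and Kibana manifests created later in this article:

kubectl create namespace elastic-demo
kubectl apply -n elastic-demo -f es.yaml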

Check the ECK operator pod

kubectl get pods -n elastic-system

Output

NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          13m
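
If the operator pod is not Running, its logs are the first place to look (this follows the logs of the operator StatefulSet):

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator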

4. Deploying an Elasticsearch Cluster with ECK

The Kubernetes cluster must have at least one node with 2 GiB of free memory; otherwise the Pods will be stuck in the Pending state.
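
A quick way to check how much memory each node can actually allocate (a sketch using kubectl's custom-columns output):

kubectl get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory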

4.1 Create the es.yaml manifest for the Elasticsearch cluster

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.8.0
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
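        # emptyDir means the data is not persisted across Pod restarts; section 6 switches to persistent volumes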
        volumes:
        - name: elasticsearch-data
          emptyDir: {}

Create the Elasticsearch cluster

kubectl apply -f es.yaml

4.2 Check the Elasticsearch cluster information

Check the Elasticsearch cluster status

kubectl get elasticsearch

Output

NAME         HEALTH    NODES   VERSION   PHASE             AGE
quickstart   unknown           8.8.0     ApplyingChanges   2m4s

The status shows unknown at this point, most likely because the cluster is still being created.
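
If the status stays unknown or the phase does not progress, describing the resource usually shows what the operator is waiting for (quickstart is the cluster name from es.yaml):

kubectl describe elasticsearch quickstart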

After waiting a few minutes it should normally show:

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    3       8.8.0     Ready   18m

Check the Elasticsearch cluster pods

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'

Output

NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          19m
quickstart-es-default-1   1/1     Running   0          19m
quickstart-es-default-2   1/1     Running   0          19m

4.3 Access the Elasticsearch cluster

A ClusterIP service for the cluster is created automatically by default

kubectl get service quickstart-es-http

Output

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.105.188.20   <none>        9200/TCP   33m

Get the password of the elastic user and query the cluster:

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"

Output

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "hPaILve1QCe2ig25RPErcg",
  "version" : {
    "number" : "8.8.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "c01029875a091076ed42cdb3a41c10b1a9a5a20f",
    "build_date" : "2023-05-23T17:16:07.179039820Z",
    "build_snapshot" : false,
    "lucene_version" : "9.6.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
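
The curl above assumes it runs somewhere the service name resolves (for example inside a pod in the cluster). From a local machine you can port-forward the service first and query localhost instead (a sketch; run the two commands in separate terminals):

kubectl port-forward service/quickstart-es-http 9200
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"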

5. Deploying Kibana

  1. Create the file kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.8.0
  count: 1
  elasticsearchRef:
    name: quickstart

  2. Apply the manifest
kubectl apply -f kibana.yaml

  3. Check the Kibana status
kubectl get kibana

Output

NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       8.8.0     10m

  4. Check the Kibana pod
kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'

Output

NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-74f84886f4-qb5nd   1/1     Running   0          10m

  5. Start port forwarding so the local machine can access Kibana from a browser
kubectl port-forward --address 0.0.0.0 service/quickstart-kb-http 5601

  6. Get the elastic user's password
kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo

Output

1rBa2h6yPuG6dFk72z9mQ694

  7. Log in to Kibana at https://localhost:5601 with the username elastic and the password 1rBa2h6yPuG6dFk72z9mQ694 (by default ECK serves Kibana over HTTPS with a self-signed certificate, so the browser will show a warning)
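
ECK also creates a ClusterIP service for Kibana, named <kibana-name>-kb-http; you can check it before port-forwarding:

kubectl get service quickstart-kb-http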

6. Using Persistent Volumes

This article uses NFS for persistence; in production you should use distributed storage such as Ceph.

6.1 Install and configure NFS

Install the NFS client on all nodes

# The Kubernetes nodes in this article run RockyLinux 9.2
yum install -y nfs-utils

6.2 Create RBAC for the NFS provisioner

Create the file nfs-rbac.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the RBAC manifest

kubectl apply -f nfs-rbac.yaml

6.3 Create the NFS provisioner

Create the file nfs-provisioner.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: nfs-udp
      port: 2049
      protocol: UDP
    - name: nlockmgr
      port: 32803
    - name: nlockmgr-udp
      port: 32803
      protocol: UDP
    - name: mountd
      port: 20048
    - name: mountd-udp
      port: 20048
      protocol: UDP
    - name: rquotad
      port: 875
    - name: rquotad-udp
      port: 875
      protocol: UDP
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
    - name: statd
      port: 662
    - name: statd-udp
      port: 662
      protocol: UDP
  selector:
    app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate 
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          # image: registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8
          image: k8s.dockerproxy.com/sig-storage/nfs-provisioner:v4.0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: nfs-udp
              containerPort: 2049
              protocol: UDP
            - name: nlockmgr
              containerPort: 32803
            - name: nlockmgr-udp
              containerPort: 32803
              protocol: UDP
            - name: mountd
              containerPort: 20048
            - name: mountd-udp
              containerPort: 20048
              protocol: UDP
            - name: rquotad
              containerPort: 875
            - name: rquotad-udp
              containerPort: 875
              protocol: UDP
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
            - name: statd
              containerPort: 662
            - name: statd-udp
              containerPort: 662
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=tiga.cc/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          hostPath:
            path: /data/nfs

Create the nfs-provisioner

kubectl apply -f nfs-provisioner.yaml

Check the nfs-provisioner status

kubectl get pods --selector='app=nfs-provisioner'

Output

NAME                               READY   STATUS    RESTARTS   AGE
nfs-provisioner-7d997c56c5-jhl2x   1/1     Running   0          15h

6.4 Create the StorageClass

Create the file nfs-class.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: tiga-nfs
provisioner: tiga.cc/nfs
mountOptions:
  - vers=4.1

Create the NFS StorageClass

kubectl apply -f nfs-class.yaml
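
Confirm that the StorageClass is registered and uses the expected provisioner:

kubectl get storageclass tiga-nfs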

6.5 Deploy the Elasticsearch cluster with persistent volumes

Create the file es-cluster-nfs.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart-nfs
spec:
  version: 8.8.0
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
      node.roles: ["master", "data"] 
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          # CPU and memory belong on the container, not on the volume claim
          resources:
            requests:
              memory: 4Gi
              cpu: 2
            limits:
              memory: 4Gi
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            # a PersistentVolumeClaim only accepts a storage request
            storage: 2Gi
        storageClassName: tiga-nfs

Create the Elasticsearch cluster

kubectl apply -f es-cluster-nfs.yaml

6.6 Verify Elasticsearch

PASSWORD=$(kubectl get secret quickstart-nfs-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
curl -u "elastic:$PASSWORD" -k "https://quickstart-nfs-es-http:9200/_cat/health"

Output

1685585424 02:10:24 quickstart-nfs green 3 3 0 0 0 0 0 0 - 100.0%

Check the PVCs

kubectl get pvc

Output

NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-quickstart-nfs-es-default-0   Bound    pvc-cba1fef2-fa9c-46d5-9d54-53101666b98a   2Gi        RWO            tiga-nfs       6m23s
elasticsearch-data-quickstart-nfs-es-default-1   Bound    pvc-b7015556-4840-4504-ba4d-c16138e17db0   2Gi        RWO            tiga-nfs       6m22s
elasticsearch-data-quickstart-nfs-es-default-2   Bound    pvc-870aa53d-b7ae-4b99-865e-edb8b97cce6c   2Gi        RWO            tiga-nfs       6m22s
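
To see where the data actually lands, list the export directory backing the provisioner on the node where the nfs-provisioner pod is scheduled (this is the hostPath /data/nfs from the Deployment above; the provisioner creates one sub-directory per PersistentVolume):

ls /data/nfs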