Elasticsearch 8 on Docker hands-on (single node, scale-out, cluster, security, and Kibana all in one)
Welcome to my GitHub
All of Xinchen's original articles (with companion source code) are categorized and indexed here: github.com/zq2599/blog…
Overview
- This article documents the process of deploying ElasticSearch 8 with Docker, along with some basic ES operations (such as creating an index and querying), both as a reference for my own repeated use and for readers who are setting up their environments
- Note the applicable scenario for running ElasticSearch in Docker: I only use it during development; whether this approach suits production is debatable, so weigh it carefully before using it there
- Running the elasticsearch image in Docker is actually simple, just a one-line command, but in real development a single-node setup is not always enough. Other requirements include:
- an HTTPS security certificate
- username/password authentication
- a cluster environment
- running commands from Kibana
- and all of the above should remain simple to use
- So let's get hands-on in this article, working through everything from the simplest single node to a fairly complete cluster plus Kibana
- Overall, this article has two major parts:
- The first part is Docker-based: installing a single node and adding a node, focusing on understanding the basic operations
- The second part is docker-compose-based: installing the ES cluster plus Kibana in one go, focusing on simple and efficient cluster deployment
Environment
- The environment used in this walkthrough, for reference:
- OS: macOS Monterey (MacBook Pro with an M1 Pro chip, 16 GB RAM)
- Docker: Docker Desktop 4.7.1 (77678)
- ElasticSearch: 8.2.2
- Kibana: 8.2.2
Preparations
- First, configure a Docker registry mirror; downloading images without one is a poor experience. I use Qiniu's mirror (reg-mirror.qiniu.com); feel free to configure whichever you prefer
- If your environment is Linux, be sure to do the following, or ES may fail to start
- Open the file /etc/sysctl.conf with an editor
- Append the line vm.max_map_count = 262144 (if it already exists, modify it; the value must not be lower than 262144)
- Save the change, then run sudo sysctl -p to make it take effect immediately
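The three steps above can be sketched as a shell snippet. To keep the illustration safe it edits a copy of the file; on a real host you would target /etc/sysctl.conf itself with sudo and finish with sudo sysctl -p:

```shell
# Work on a copy of /etc/sysctl.conf for illustration; on a real host, edit the
# actual file with sudo and then run: sudo sysctl -p
conf=$(mktemp)
cp /etc/sysctl.conf "$conf" 2>/dev/null || true

# Raise the setting in place if present, otherwise append it.
if grep -q '^vm.max_map_count' "$conf"; then
  sed -i.bak 's/^vm.max_map_count.*/vm.max_map_count = 262144/' "$conf"
else
  printf 'vm.max_map_count = 262144\n' >> "$conf"
fi

result=$(grep '^vm.max_map_count' "$conf")
echo "$result"
```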
Deploying ES
- First, create a Docker network
docker network create elastic
- Create the ES container (if the image is not present locally, it is downloaded automatically). To save memory, I limit the Java process inside the ES container to 1024 MB; adjust this to your own machine
docker run \
--name es01 \
--net elastic \
-p 9200:9200 \
-e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" \
-idt elasticsearch:8.2.2
- Enter the container
docker exec -it es01 /bin/bash
- Reset the password
bin/elasticsearch-reset-password -u elastic
- When asked whether to reset, enter y; the console then prints the new password. Remember it, you will need it shortly
Password for the [elastic] user successfully reset.
New value: 3_J35UWr2sIUkyxxxxxx
- Now verify that elastic responds properly
- Enter https://localhost:9200 in Chrome's address bar
- The browser shows a security warning, as below. Do not click anywhere; just type thisisunsafe on the keyboard and press Enter
- A login page then appears, as below. Enter the username elastic and the password returned on the console just now
- If you see the following information, ES has started successfully
- If you have the ElasticSearch Head plugin installed in Chrome (yes, a Chrome browser extension), you can already access the ES service, as below
- ES is ready; next up is Kibana
Deploying and using Kibana
- Deployment is a single command
docker run \
--name kibana \
--net elastic \
-p 5601:5601 \
-idt kibana:8.2.2
- Generate the token Kibana needs to connect to ES
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
- The console prints a long string; save it, as it is the token Kibana will use shortly to connect to ES
- Open http://localhost:5601/ in the browser; an input dialog pops up, as below. Paste the token you just generated into the text box, then click the Configure Elastic button
- A page then pops up asking for a verification code
- Run the following command in the console to get the code
docker exec -it kibana bin/kibana-verification-code
- Enter the code on the web page and the initialization screen appears
- What follows is a normal login, as shown below; enter the ES username and password to sign in
- After logging in, choose Explore on my own on the right of the screen below
- eshead shows some newly added indices that Kibana creates for its own use
Scaling out the cluster
- The ES service is currently a single node. Sometimes you need to scale it out, adding machines to improve performance, storage, and availability. Docker makes this very convenient; let's try it
- Just as Kibana needed authorization to access ES, a new machine joining the current ES service also needs an enrollment token. Generate one with the following command in the console
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
- With the token printed on the console (valid for 30 minutes), run the following command to create a new ES container that forms a cluster with the original one; existing data is preserved. Be sure to replace xxxxxx with the token you just generated
docker run \
-e ENROLLMENT_TOKEN="xxxxxx" \
-e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" \
--name es02 \
--net elastic \
-idt elasticsearch:8.2.2
- The newly added node shows up in eshead
- At this point both the ES cluster and Kibana are deployed. Next come some basic CRUD operations; newcomers can use them to get up to speed with ES quickly, while veterans can skip ahead
Hands-on ES operations: the command line
- Let's start with the most basic operations from the command line, then move on to Kibana
- Since ES exposes an HTTPS service, first export the certificate from the container; the curl requests that follow all need to specify it (alternatively, curl -k skips certificate verification, at the cost of security)
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
- A file named http_ca.crt now sits in the current directory; that is the security certificate. Check that it works with the following command, replacing xxxxxx with your own password
curl --cacert http_ca.crt -u elastic:xxxxxx https://localhost:9200
- The console prints the following, proving that external access to ES works
❯ curl --cacert http_ca.crt -u elastic:xxxxxx https://localhost:9200
{
"name" : "279acdab6c7f",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "whfRDTzCQym_jwx2OrMgKg",
"version" : {
"number" : "8.2.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "9876968ef3c745186b94fdabd4483e01499224ef",
"build_date" : "2022-05-25T15:47:06.259735307Z",
"build_snapshot" : false,
"lucene_version" : "9.1.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
- Now create an index as a test. It is named my-book and has six fields
curl -X PUT "https://localhost:9200/my-book?pretty" \
--cacert http_ca.crt \
-u elastic:xxxxxx \
-H 'Content-Type: application/json' \
-d'
{
"settings": {
"number_of_shards": 1
},
"mappings": {
"properties": {
"line_id": {
"type": "long"
},
"line_number": {
"type": "keyword"
},
"play_name": {
"type": "keyword"
},
"speaker": {
"type": "keyword"
},
"speech_number": {
"type": "long"
},
"text_entry": {
"type": "text"
}
}
}
}
'
- The response arrives
{
"acknowledged" : true,
"shards_acknowledged" : true,
"index" : "my-book"
}
- The eshead plugin also shows that the index was created
- Fetch the index information with a GET request, as below; it matches expectations
❯ curl -X GET \
https://localhost:9200/my-book\?pretty \
--cacert http_ca.crt \
-u elastic:m9ZRFl9wCIiVkLudRopy
{
"my-book" : {
"aliases" : { },
"mappings" : {
"properties" : {
"line_id" : {
"type" : "long"
},
"line_number" : {
"type" : "keyword"
},
"play_name" : {
"type" : "keyword"
},
"speaker" : {
"type" : "keyword"
},
"speech_number" : {
"type" : "long"
},
"text_entry" : {
"type" : "text"
}
}
},
"settings" : {
"index" : {
"routing" : {
"allocation" : {
"include" : {
"_tier_preference" : "data_content"
}
}
},
"number_of_shards" : "1",
"provided_name" : "my-book",
"creation_date" : "1653811101586",
"number_of_replicas" : "1",
"uuid" : "zX8kWS_IQ-ymdI7vYLOjew",
"version" : {
"created" : "8020299"
}
}
}
}
}
- Now let's try a bulk import. Download the data file from this address: raw.githubusercontent.com/zq2599/blog…
- After the download finishes, run the following command to start the bulk import
curl -H 'Content-Type: application/x-ndjson' \
--cacert http_ca.crt \
-u elastic:m9ZRFl9wCIiVkLudRopy \
-s -XPOST https://localhost:9200/_bulk \
--data-binary @shakespeare_only_one_type.json
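For context, the x-ndjson content type is required because the _bulk body is newline-delimited JSON: each document is an action line followed by a source line. A minimal hand-built sample (the field names mirror the my-book mapping above; the values are made up for illustration):

```shell
# Build a tiny bulk payload by hand: one action line plus one source line per document.
cat > bulk_sample.ndjson <<'EOF'
{"index":{"_index":"my-book"}}
{"line_id":1,"line_number":"1.1.1","play_name":"Henry IV","speaker":"KING HENRY IV","speech_number":1,"text_entry":"So shaken as we are, so wan with care"}
{"index":{"_index":"my-book"}}
{"line_id":2,"line_number":"1.1.2","play_name":"Henry IV","speaker":"KING HENRY IV","speech_number":1,"text_entry":"Find we a time for frighted peace to pant"}
EOF

# Two documents -> four lines; the payload must also end with a newline.
lines=$(wc -l < bulk_sample.ndjson)
echo "$lines"
```

Sending it would use the same curl as above with --data-binary @bulk_sample.ndjson; note --data-binary rather than -d, so the newlines survive.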
- Once the import succeeds, eshead shows the details of each record
- Next, let's try Kibana
Using Kibana
- In the Kibana UI, click the spot marked in red below to open the query page
- Running a query looks like this
- Check how much data was just imported: about 110,000 records, as shown below
- With that, the Docker-based deployment of ElasticSearch 8 and Kibana 8 is done and the basic operations are covered. The next focus is efficiency: quickly deploying a cluster environment with docker-compose
Fast deployment with docker-compose
- With docker-compose, installing the ES cluster plus Kibana is simplified even further. The streamlined steps are shown below; it can hardly get any leaner...
- We will run through this flow twice: first deploying the secure version with certificates and credentials, then a version with no security checks at all, ready to use as soon as it is installed
Writing the configuration files
- To restate the goal: use docker-compose to quickly deploy an ES cluster plus Kibana, with security checks enabled (self-signed certificate plus username/password)
- In a clean directory, create a file named .env with the following content. This is the configuration file docker-compose will use; every entry is annotated in detail
# Password for the elastic account (at least six characters)
ELASTIC_PASSWORD=123456
# Password for the kibana_system account (at least six characters); this account is only for Kibana's internal setup and cannot be used to query ES
KIBANA_PASSWORD=abcdef
# Version of ES and Kibana
STACK_VERSION=8.2.2
# Cluster name
CLUSTER_NAME=docker-cluster
# x-pack security setting; basic is chosen here. If you choose trial, it expires after 30 days
LICENSE=basic
#LICENSE=trial
# Port ES maps to on the host
ES_PORT=9200
# Port Kibana maps to on the host
KIBANA_PORT=5601
# Memory limit for the ES containers; adjust to your hardware
MEM_LIMIT=1073741824
# Project namespace, used as the prefix of container names
COMPOSE_PROJECT_NAME=demo
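As a side note, the .env file is plain KEY=value pairs that docker-compose substitutes into the ${...} references in docker-compose.yaml. You can mimic the substitution in a shell to sanity-check the values (demo.env is a hypothetical file name used here so the real .env is not touched):

```shell
# Write a two-entry sample in the same format as the .env above, then source it.
cat > demo.env <<'EOF'
STACK_VERSION=8.2.2
ES_PORT=9200
EOF

set -a          # export every variable assigned while sourcing
. ./demo.env
set +a

# This mirrors how ${STACK_VERSION} and ${ES_PORT} resolve in docker-compose.yaml.
image="elasticsearch:${STACK_VERSION}"
echo "$image will listen on host port $ES_PORT"
```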
- Next is the docker-compose.yaml file, which references the .env file created above. It defines five containers: a setup container, three ES nodes forming the cluster, and one Kibana (for the record: this is the official script, safe to use)
version: "2.2"
services:
setup:
image: elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: es02\n"\
" dns:\n"\
" - es02\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: es03\n"\
" dns:\n"\
" - es03\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
es01:
depends_on:
setup:
condition: service_healthy
image: elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es02,es03
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es02:
depends_on:
- es01
image: elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata02:/usr/share/elasticsearch/data
environment:
- node.name=es02
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es03
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es02/es02.key
- xpack.security.http.ssl.certificate=certs/es02/es02.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es02/es02.key
- xpack.security.transport.ssl.certificate=certs/es02/es02.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es03:
depends_on:
- es02
image: elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata03:/usr/share/elasticsearch/data
environment:
- node.name=es03
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es02
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es03/es03.key
- xpack.security.http.ssl.certificate=certs/es03/es03.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es03/es03.key
- xpack.security.transport.ssl.certificate=certs/es03/es03.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
es01:
condition: service_healthy
es02:
condition: service_healthy
es03:
condition: service_healthy
image: kibana:${STACK_VERSION}
volumes:
- certs:/usr/share/kibana/config/certs
- kibanadata:/usr/share/kibana/data
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
mem_limit: ${MEM_LIMIT}
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
certs:
driver: local
esdata01:
driver: local
esdata02:
driver: local
esdata03:
driver: local
kibanadata:
driver: local
- Note: the .env and docker-compose.yaml files must be in the same directory
Starting the stack
- In the directory containing docker-compose.yaml, run docker-compose up -d to start all the containers
❯ docker-compose up -d
Creating network "demo_default" with the default driver
Pulling setup (elasticsearch:8.2.2)...
8.2.2: Pulling from library/elasticsearch
Digest: sha256:8c666cb1e76650306655b67644a01663f9c7a5422b2c51dd570524267f11ce3d
Status: Downloaded newer image for elasticsearch:8.2.2
Pulling kibana (kibana:8.2.2)...
8.2.2: Pulling from library/kibana
Digest: sha256:cf34801f36a2e79c834b3cdeb0a3463ff34b8d8588c3ccdd47212c4e0753f8a5
Status: Downloaded newer image for kibana:8.2.2
Creating demo_setup_1 ... done
Creating demo_es01_1 ... done
Creating demo_es02_1 ... done
Creating demo_es03_1 ... done
Creating demo_kibana_1 ... done
- Check the container status: demo_setup_1, which handled the initialization, has exited; the others are running normally
❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8ce010cddfc kibana:8.2.2 "/bin/tini -- /usr/l…" 20 minutes ago Up 20 minutes (healthy) 0.0.0.0:5601->5601/tcp demo_kibana_1
78662d44ae31 elasticsearch:8.2.2 "/bin/tini -- /usr/l…" 21 minutes ago Up 21 minutes (healthy) 9200/tcp, 9300/tcp demo_es03_1
7e96273872cb elasticsearch:8.2.2 "/bin/tini -- /usr/l…" 21 minutes ago Up 21 minutes (healthy) 9200/tcp, 9300/tcp demo_es02_1
8b8be1d645ba elasticsearch:8.2.2 "/bin/tini -- /usr/l…" 21 minutes ago Up 21 minutes (healthy) 0.0.0.0:9200->9200/tcp, 9300/tcp demo_es01_1
c48ffb724ca2 elasticsearch:8.2.2 "/bin/tini -- /usr/l…" 21 minutes ago Exited (0) 20 minutes ago demo_setup_1
- The logs of demo_setup_1 show the startup went smoothly
❯ docker logs demo_setup_1
Setting file permissions
Waiting for Elasticsearch availability
Setting kibana_system password
All done!
- To send requests to ES with curl, first copy the crt file out of the container
docker cp demo_es01_1:/usr/share/elasticsearch/config/certs/es01/es01.crt .
Verification
- Now let's verify that the ES cluster and Kibana work properly
- Open https://localhost:9200/ in the browser (note: https); the following warning page appears
- Type thisisunsafe and press Enter; a credentials prompt appears. Per the configuration above, the username is elastic and the password is 123456
- The browser shows the following, proving ES responded successfully
- If the eshead plugin is installed in Chrome, you can now inspect the cluster (note: use https, not http, in the plugin's address bar). As shown below, there are three nodes; the star in front of es02 marks it as the master node
- The ES cluster looks healthy; next, check whether Kibana is usable
- Open http://localhost:5601/; username elastic, password 123456
- Click the spot marked in red below to open the command page
- As shown below, enter an index-creation command on the left, click the button in the red box, and the result appears on the right
- Bulk-write two records
- Finally, a query
Cleanup
- To remove ES, run docker-compose down; it deletes the containers. Note that it does not delete the data: after the next docker-compose up -d, the new cluster still contains the test001 index created earlier, data included
- That is because docker-compose.yaml uses volumes to store the cluster's key data, and those volumes live on the host's disk
❯ docker volume ls
DRIVER VOLUME NAME
local demo_certs
local demo_esdata01
local demo_esdata02
local demo_esdata03
local demo_kibanadata
- Run docker volume rm demo_certs demo_esdata01 demo_esdata02 demo_esdata03 to remove them completely (docker-compose down -v does the same in one step, deleting the named volumes along with the containers)
- That is the entire process of quickly deploying an ES cluster plus Kibana. Simple, isn't it?
A cluster without passwords
- Sometimes security is not needed, e.g. in development, or in an environment where a firewall already blocks outside access. The deployment above is then more than necessary; what we want is an even simpler cluster that is usable the moment it is up. Let's build one
- In a clean directory, create a file named .env with the following content; compared with the secure version, the entries that are no longer needed have been removed
# Password for the kibana_system account (at least six characters); this account is only for Kibana's internal setup and cannot be used to query ES
KIBANA_PASSWORD=abcdef
# Version of ES and Kibana
STACK_VERSION=8.2.2
# Cluster name
CLUSTER_NAME=docker-cluster
# Port ES maps to on the host
ES_PORT=9200
# Port Kibana maps to on the host
KIBANA_PORT=5601
# Memory limit for the ES containers; adjust to your hardware
MEM_LIMIT=1073741824
# Project namespace, used as the prefix of container names
COMPOSE_PROJECT_NAME=demo
- Next is the docker-compose.yaml file, which references the .env file above. Compared with the secure version, the setup container is gone and all security-related configuration and scripts are removed
version: "2.2"
services:
es01:
image: elasticsearch:${STACK_VERSION}
volumes:
- esdata01:/usr/share/elasticsearch/data
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es02,es03
- bootstrap.memory_lock=true
- xpack.security.enabled=false
- xpack.security.http.ssl.enabled=false
- xpack.security.transport.ssl.enabled=false
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
es02:
depends_on:
- es01
image: elasticsearch:${STACK_VERSION}
volumes:
- esdata02:/usr/share/elasticsearch/data
environment:
- node.name=es02
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es03
- bootstrap.memory_lock=true
- xpack.security.enabled=false
- xpack.security.http.ssl.enabled=false
- xpack.security.transport.ssl.enabled=false
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
es03:
depends_on:
- es02
image: elasticsearch:${STACK_VERSION}
volumes:
- esdata03:/usr/share/elasticsearch/data
environment:
- node.name=es03
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es02
- bootstrap.memory_lock=true
- xpack.security.enabled=false
- xpack.security.http.ssl.enabled=false
- xpack.security.transport.ssl.enabled=false
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
kibana:
image: kibana:${STACK_VERSION}
volumes:
- kibanadata:/usr/share/kibana/data
ports:
- ${KIBANA_PORT}:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=http://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
mem_limit: ${MEM_LIMIT}
volumes:
esdata01:
driver: local
esdata02:
driver: local
esdata03:
driver: local
kibanadata:
driver: local
- Note: the .env and docker-compose.yaml files must be in the same directory
Start and verify
- Before starting, stop and clean up the secure version deployed earlier
- In the directory containing docker-compose.yaml, run docker-compose up -d to start all the containers; after a short wait, they are all ready
❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
11663375288d elasticsearch:8.2.2 "/bin/tini -- /usr/l…" 4 minutes ago Up 4 minutes 9200/tcp, 9300/tcp demo_es03_1
ad6f0390b9cf elasticsearch:8.2.2 "/bin/tini -- /usr/l…" 4 minutes ago Up 4 minutes 9200/tcp, 9300/tcp demo_es02_1
5080709e5358 kibana:8.2.2 "/bin/tini -- /usr/l…" 4 minutes ago Up 4 minutes 0.0.0.0:5601->5601/tcp demo_kibana_1
4b1e576fbfd3 elasticsearch:8.2.2 "/bin/tini -- /usr/l…" 4 minutes ago Up 4 minutes 0.0.0.0:9200->9200/tcp, 9300/tcp demo_es01_1
- Open http://localhost:9200/ in the browser (note: http); ES responds
- Chrome's eshead plugin also reads the cluster information normally
- Open Kibana at http://localhost:5601/ (note: http); it works normally, and the screenshot below shows an index being created successfully
- The docker-compose deployment of the ES cluster plus Kibana is complete. With some practiced copy-and-paste, spinning up an ES cluster is as easy as it gets; if you need one quickly, I hope this article is a useful reference
- That concludes the hands-on tour of elasticsearch 8 on Docker: we can do a basic deployment with docker alone, and efficiently build an ES 8 cluster with docker-compose, choosing freely whether to enable credential checks. I hope this article helps you build the environment you need quickly