[TOC]
## Installation

Install with docker-compose:
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
sudo docker-compose up -d
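Once the containers are up, it is worth checking that the stack actually answers before moving on. A minimal sketch, assuming the stock docker-elk compose file that publishes Elasticsearch on host port 9200 and Kibana on 5601, and bootstraps the elastic user with the default password changeme (adjust host, ports and password to your setup):

```bash
# Elasticsearch: cluster health should eventually report "yellow" or "green".
curl -u elastic:changeme 'http://localhost:9200/_cluster/health?pretty'

# Kibana: should answer on port 5601 once it has connected to Elasticsearch.
curl -I 'http://localhost:5601'
```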
## Configuration

### Set passwords

Run the following command to generate passwords for the Elasticsearch built-in users:
sudo docker-compose exec -T elasticsearch bin/elasticsearch-setup-passwords auto --batch
Example output from my run:
Changed password for user apm_system
PASSWORD apm_system = mkiFZAiUFtYusw0hKidx
Changed password for user kibana_system
PASSWORD kibana_system = 8KnPbnm5doTZBBfk5lxd
Changed password for user kibana
PASSWORD kibana = 8KnPbnm5doTZBBfk5lxd
Changed password for user logstash_system
PASSWORD logstash_system = IZILJ4Z9cqC6oARU2DJb
Changed password for user beats_system
PASSWORD beats_system = ZwZV65JUoQPq4Drwx79V
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = yGD4EK9o9fDDFpPGaC34
Changed password for user elastic
PASSWORD elastic = XCkZUSgrTHOF0krUsXgJ
Edit the Kibana config file /path/to/docker-elk/kibana/config/kibana.yml and update the credentials it uses:
elasticsearch.username: kibana_system
elasticsearch.password: 8KnPbnm5doTZBBfk5lxd
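Before restarting anything, you can confirm that the new kibana_system password is accepted by Elasticsearch. A quick check, assuming port 9200 is published on the host (substitute the password generated on your machine):

```bash
# Returns a JSON description of the authenticated user when the password is correct,
# and a 401 error otherwise. The same check works for logstash_system and elastic.
curl -u kibana_system:8KnPbnm5doTZBBfk5lxd \
  'http://localhost:9200/_security/_authenticate?pretty'
```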
Edit the Logstash config file /path/to/docker-elk/logstash/config/logstash.yml and update the credentials:
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: IZILJ4Z9cqC6oARU2DJb
Edit /path/to/docker-elk/logstash/pipeline/logstash.conf and update the Elasticsearch output settings:
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "XCkZUSgrTHOF0krUsXgJ"
    ecs_compatibility => disabled
  }
}
Finally, restart Kibana and Logstash:
sudo docker-compose restart kibana logstash
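After the restart, a quick end-to-end smoke test confirms that Logstash accepts input and can write to Elasticsearch with the new credentials. A rough sketch, assuming the stock docker-elk pipeline still has a TCP input (port 5000 in older checkouts, 50000 in newer ones; check the input block of logstash.conf and the ports in docker-compose.yml) and writes to the default logstash-* indices:

```bash
# Push one test event into the Logstash TCP input (GNU netcat shown; adjust flags for your nc).
echo '{"message": "hello from docker-elk"}' | nc -q1 localhost 5000

# A few seconds later the event should be searchable in a logstash-* index.
curl -u elastic:XCkZUSgrTHOF0krUsXgJ \
  'http://localhost:9200/logstash-*/_search?q=message:hello&pretty'
```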
## IK analyzer plugin

### Installation
- Method 1

  This is the recommended way to install the plugin, since it makes it easier to customize the tokenizer dictionaries later on. Unzip the plugin into a folder named analysis-ik, then copy that folder into /path/to/docker-elk/elasticsearch/plugins. Edit the Dockerfile /path/to/docker-elk/elasticsearch/Dockerfile and add the following line:

  COPY plugins plugins

  Then build and restart (an end-to-end sketch follows this list):

  sudo docker-compose build
  sudo docker-compose up -d
- Method 2 (not recommended)

  Edit the Elasticsearch Dockerfile /path/to/docker-elk/elasticsearch/Dockerfile and add:

  RUN elasticsearch-plugin install -b http://192.168.79.1:8000/elasticsearch-analysis-ik-7.11.2.zip

  Note: replace the plugin package URL with your own.

  Then build and restart:

  sudo docker-compose build
  sudo docker-compose up -d
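Putting method 1 together end to end: the sketch below downloads the IK release that matches the Elasticsearch version, unpacks it into the plugins folder, and verifies after the rebuild that Elasticsearch actually loaded it. The download URL follows the usual naming of the medcl/elasticsearch-analysis-ik GitHub releases and is an assumption here; double-check it for your version.

```bash
cd /path/to/docker-elk

# Fetch and unpack the IK build matching ES 7.11.2 into plugins/analysis-ik.
# URL assumed from the usual GitHub release naming; verify before use.
mkdir -p elasticsearch/plugins/analysis-ik
curl -L -o /tmp/ik.zip \
  'https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.11.2/elasticsearch-analysis-ik-7.11.2.zip'
unzip /tmp/ik.zip -d elasticsearch/plugins/analysis-ik

# With "COPY plugins plugins" added to elasticsearch/Dockerfile (method 1),
# rebuild the image, restart, and confirm the plugin is listed.
sudo docker-compose build
sudo docker-compose up -d
sudo docker-compose exec elasticsearch bin/elasticsearch-plugin list   # should print analysis-ik
```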
### ik_smart
GET _analyze
{
  "analyzer": "ik_smart",
  "text": "vue.js 教程"
}
Output:
{
  "tokens" : [
    {
      "token" : "vue.js",
      "start_offset" : 0,
      "end_offset" : 6,
      "type" : "LETTER",
      "position" : 0
    },
    {
      "token" : "教程",
      "start_offset" : 7,
      "end_offset" : 9,
      "type" : "CN_WORD",
      "position" : 1
    }
  ]
}
### ik_max_word
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "vue.js 教程"
}
Output:
{
  "tokens" : [
    {
      "token" : "vue.js",
      "start_offset" : 0,
      "end_offset" : 6,
      "type" : "LETTER",
      "position" : 0
    },
    {
      "token" : "vue",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "ENGLISH",
      "position" : 1
    },
    {
      "token" : "js",
      "start_offset" : 4,
      "end_offset" : 6,
      "type" : "ENGLISH",
      "position" : 2
    },
    {
      "token" : "教程",
      "start_offset" : 7,
      "end_offset" : 9,
      "type" : "CN_WORD",
      "position" : 3
    }
  ]
}
### Custom dictionary

IK does not recognize some common personal names and proper nouns, for example:
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "阮一峰的网络日志"
}
// Output:
{
  "tokens" : [
    {
      "token" : "阮",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "一",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "TYPE_CNUM",
      "position" : 1
    },
    {
      "token" : "峰",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "的",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 3
    },
    {
      "token" : "网络日志",
      "start_offset" : 4,
      "end_offset" : 8,
      "type" : "CN_WORD",
      "position" : 4
    },
    {
      "token" : "网络",
      "start_offset" : 4,
      "end_offset" : 6,
      "type" : "CN_WORD",
      "position" : 5
    },
    {
      "token" : "日志",
      "start_offset" : 6,
      "end_offset" : 8,
      "type" : "CN_WORD",
      "position" : 6
    }
  ]
}
As you can see, the name 阮一峰 is not tokenized as a single word, so we need a custom dictionary with this name added to it.
First, create a custom dictionary file /path/to/docker-elk/elasticsearch/plugins/analysis-ik/config/boli.dic. Its content is simply one entry per line (UTF-8 encoded):
阮一峰
Then edit the IK configuration file and point it at the custom dictionary:
vim elasticsearch/plugins/analysis-ik/config/IKAnalyzer.cfg.xml
Set the ext_dict entry:
<!-- Users can configure their own extension dictionaries here -->
<entry key="ext_dict">boli.dic</entry>
Then build and restart Elasticsearch:
sudo docker-compose build
sudo docker-compose up -d
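Before re-running the test, you can optionally confirm that the rebuilt image contains the dictionary and that IK picked it up. A hedged check; IK normally logs the extension dictionaries it loads at startup, so if nothing shows up, re-check the path in IKAnalyzer.cfg.xml:

```bash
# The dictionary file should be baked into the rebuilt image.
sudo docker-compose exec elasticsearch cat plugins/analysis-ik/config/boli.dic

# Look for dictionary-loading messages from the IK plugin in the ES startup logs.
sudo docker-compose logs elasticsearch | grep -i -E 'boli\.dic|analysis-ik|Dict Loading'
```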
Test again:
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "阮一峰的网络日志"
}
// Output:
{
  "tokens" : [
    {
      "token" : "阮一峰",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "一",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "TYPE_CNUM",
      "position" : 1
    },
    {
      "token" : "峰",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "的",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "CN_CHAR",
      "position" : 3
    },
    {
      "token" : "网络日志",
      "start_offset" : 4,
      "end_offset" : 8,
      "type" : "CN_WORD",
      "position" : 4
    },
    {
      "token" : "网络",
      "start_offset" : 4,
      "end_offset" : 6,
      "type" : "CN_WORD",
      "position" : 5
    },
    {
      "token" : "日志",
      "start_offset" : 6,
      "end_offset" : 8,
      "type" : "CN_WORD",
      "position" : 6
    }
  ]
}
As you can see, the text is now tokenized correctly, with 阮一峰 recognized as a single word.
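Beyond _analyze tests, the usual pattern is to index text with ik_max_word (fine-grained, exhaustive tokenization) and analyze queries with ik_smart (coarser tokenization). A minimal usage sketch against the same cluster; the index name articles and the field body are made up for illustration, and the JSON bodies can equally be run from Kibana Dev Tools:

```bash
# Create an index whose "body" field is analyzed with IK.
curl -u elastic:XCkZUSgrTHOF0krUsXgJ -X PUT 'http://localhost:9200/articles' \
  -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "body": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      }
    }
  }
}'

# Index a document and search for the custom dictionary entry.
curl -u elastic:XCkZUSgrTHOF0krUsXgJ -X POST 'http://localhost:9200/articles/_doc?refresh=true' \
  -H 'Content-Type: application/json' -d '{"body": "阮一峰的网络日志"}'

curl -u elastic:XCkZUSgrTHOF0krUsXgJ 'http://localhost:9200/articles/_search?pretty' \
  -H 'Content-Type: application/json' -d '{"query": {"match": {"body": "阮一峰"}}}'
```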
## Cluster
TODO