ogstash/config/log4j2.properties. Using default config which logs to console
The stdin plugin is now waiting for input:
00:00:19.669 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
00:00:19.688 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
00:00:19.802 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
2018-02-06T16:00:20.050Z localhost.localdomain 112
However, notice that there is a warning:
Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings.
It says logstash.yml is missing; this settings file was introduced in Logstash 5.0, and the full list of settings is documented on the official site.
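To silence the warning, you can create a minimal /etc/logstash/logstash.yml. The sketch below uses illustrative values only (the setting names are from the official settings reference; the paths are assumptions for an RPM install):

```yaml
# Minimal sketch of /etc/logstash/logstash.yml (values are illustrative)
path.data: /var/lib/logstash
path.logs: /var/log/logstash
pipeline.workers: 1
http.host: "127.0.0.1"
http.port: 9600
```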
After starting it this way, Logstash still uses a lot of memory, so let's see how to shrink its footprint. The heap size is set in jvm.options under /etc/logstash:
vi /etc/logstash/jvm.options
-Xms128m
-Xmx256m
Try these values first; increase them if they turn out to be too small.
As you can see, running Logstash is awkward right now: we have to cd into its directory first every time. Let's fix that with the following command:
ln -s /usr/share/logstash/bin/logstash /usr/bin/logstash
Now the logstash command works from anywhere.
Step 4: Install Kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
Install the package, then edit the configuration file at /etc/kibana/kibana.yml:
server.port: 5601
server.host: 0.0.0.0
elasticsearch.url: "http://192.168.2.178:9200"
Now Kibana is ready to start, but as before, let's create a symlink first:
ln -s /usr/share/kibana/bin/kibana /usr/bin/kibana
Now you can start it with the kibana command.
At this point, our ELK stack is fully installed.
Step 5: Install Filebeat
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
Install it, then create the symlink:
ln -s /usr/share/filebeat/bin/filebeat /usr/bin/filebeat
Next, let's wire Filebeat up to Logstash.
First, create a directory for custom grok patterns: /usr/local/elk/app/logstash-5.1.1/patterns
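The NGINXACCESS pattern used below is not shipped with Logstash; it must be defined in a file inside this patterns directory. Here is a sketch only, built from standard grok primitives and the field names the filter expects (logtime, url, size, responsetime, upstreamtime) -- you must adapt it to match your actual nginx log_format:

```
# /usr/local/elk/app/logstash-5.1.1/patterns/nginx -- illustrative sketch,
# adjust to your real log_format before use
NGINXACCESS %{IPORHOST:clientip} \[%{HTTPDATE:logtime}\] "%{WORD:method} %{NOTSPACE:url}" %{NUMBER:status} %{NUMBER:size} %{NUMBER:responsetime} %{NUMBER:upstreamtime}
```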
Then create the Logstash pipeline configuration file:
vi /etc/logstash/conf.d/pro-log.conf
input {
    beats {
        port => 5044
    }
}
filter {
    if [fields][logIndex] == "nginx" {
        grok {
            patterns_dir => "/usr/local/elk/app/logstash-5.1.1/patterns"
            match => {
                "message" => "%{NGINXACCESS}"
            }
        }
        urldecode {
            charset => "UTF-8"
            field => "url"
        }
        if [upstreamtime] == "" or [upstreamtime] == "null" {
            mutate {
                update => { "upstreamtime" => "0" }
            }
        }
        date {
            match => ["logtime", "dd/MMM/yyyy:HH:mm:ss Z"]
            target => "@timestamp"
        }
        mutate {
            convert => {
                "responsetime" => "float"
                "upstreamtime" => "float"
                "size" => "integer"
            }
            remove_field => ["port","logtime","message"]
        }
    }
}
output {
    elasticsearch {
        hosts => "192.168.2.178:9200"
        manage_template => false
        index => "%{[fields][logIndex]}-%{+YYYY.MM.dd}"
        document_type => "%{[fields][docType]}"
    }
}
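The [fields][logIndex] and [fields][docType] references above are custom fields that Filebeat must attach to each event. A sketch of the matching /etc/filebeat/filebeat.yml in 5.x syntax (the log path is an assumption for this setup):

```yaml
# Sketch of /etc/filebeat/filebeat.yml matching the pipeline above (5.x syntax)
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/nginx/access.log      # assumed log location
    fields:
      logIndex: nginx                  # read as [fields][logIndex] in Logstash
      docType: nginx-access            # read as [fields][docType]
output.logstash:
  hosts: ["192.168.2.178:5044"]        # the beats input port configured above
```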
Let's try it out with nginx's access_log. First, look at the nginx configuration:
log_format logstash '$http_host $server_addr $remote_addr [$time_local] "$visit_flag" "$jsession_id" "$login_name" "
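As an aside, the date filter pattern "dd/MMM/yyyy:HH:mm:ss Z" in the config above corresponds to nginx's $time_local format. A quick Python sketch (purely illustrative, not part of the setup) shows the same parse:

```python
# Parse an nginx $time_local value the way the Logstash date filter does
# with the pattern "dd/MMM/yyyy:HH:mm:ss Z" (strptime equivalents shown).
from datetime import datetime

logtime = "06/Feb/2018:16:00:20 +0000"   # typical nginx $time_local value
ts = datetime.strptime(logtime, "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())  # 2018-02-06T16:00:20+00:00
```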