...
```
input {
  kafka {
    topic_id => "local_all_logs"
    zk_connect => "127.0.0.1:2181"
    auto_offset_reset => "smallest"
    type => "all_logs"
  }
  kafka {
    topic_id => "local_apiserver_logs"
    zk_connect => "127.0.0.1:2181"
    auto_offset_reset => "smallest"
    type => "apiserver_logs"
  }
  kafka {
    topic_id => "local_gfac_logs"
    zk_connect => "127.0.0.1:2181"
    auto_offset_reset => "smallest"
    type => "gfac_logs"
  }
  kafka {
    topic_id => "local_orchestrator_logs"
    zk_connect => "127.0.0.1:2181"
    auto_offset_reset => "smallest"
    type => "orchestrator_logs"
  }
  kafka {
    topic_id => "local_credentialstore_logs"
    zk_connect => "127.0.0.1:2181"
    auto_offset_reset => "smallest"
    type => "credentialstore_logs"
  }
}

filter {
  mutate { add_field => { "[@metadata][level]" => "%{[level]}" } }
  mutate { lowercase => ["[@metadata][level]"] }
  mutate { gsub => ["level", "LOG_", ""] }
  mutate { add_tag => ["local", "CoreOS-899.13.0"] }
  ruby {
    code => "
      begin
        t = Time.iso8601(event['timestamp'])
      rescue ArgumentError => e
        # drop the event if the timestamp format is invalid
        event.cancel
        return
      end
      event['timestamp_usec'] = t.usec % 1000
      event['timestamp'] = t.utc.strftime('%FT%T.%LZ')
    "
  }
}

output {
  stdout { codec => rubydebug }
  if [type] == "apiserver_logs" {
    elasticsearch {
      hosts => ["elasticsearch.us-east-1.aws.found.io:9200"]
      user => "admin"
      password => "adminpassword"
      index => "local-apiserver-logs-logstash-%{+YYYY.MM.dd}"
    }
  } else if [type] == "gfac_logs" {
    elasticsearch {
      hosts => ["elasticsearch.us-east-1.aws.found.io:9200"]
      user => "admin"
      password => "adminpassword"
      index => "local-gfac-logs-logstash-%{+YYYY.MM.dd}"
    }
  } else if [type] == "orchestrator_logs" {
    elasticsearch {
      hosts => ["elasticsearch.us-east-1.aws.found.io:9200"]
      user => "admin"
      password => "adminpassword"
      index => "local-orchestrator-logs-logstash-%{+YYYY.MM.dd}"
    }
  } else if [type] == "credentialstore_logs" {
    elasticsearch {
      hosts => ["elasticsearch.us-east-1.aws.found.io:9200"]
      user => "admin"
      password => "adminpassword"
      index => "local-credentialstore-logs-logstash-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["elasticsearch.us-east-1.aws.found.io:9200"]
      user => "admin"
      password => "adminpassword"
      index => "local-airavata-logs-logstash-%{+YYYY.MM.dd}"
    }
  }
}
```
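The ruby filter in the config above normalizes each event's timestamp: it parses the ISO 8601 value, keeps the sub-millisecond microseconds in a separate `timestamp_usec` field, and rewrites `timestamp` truncated to millisecond precision in UTC. That logic can be exercised as plain Ruby outside Logstash; a minimal sketch (the sample timestamp value is hypothetical):

```ruby
require 'time'

# Parse an ISO 8601 timestamp the same way the filter does.
ts = "2016-04-05T12:34:56.789123Z"   # hypothetical sample value
t = Time.iso8601(ts)

# Sub-millisecond remainder of the microseconds field...
timestamp_usec = t.usec % 1000

# ...and the timestamp itself truncated to millisecond precision, in UTC.
timestamp = t.utc.strftime('%FT%T.%LZ')

puts timestamp_usec   # 123
puts timestamp        # 2016-04-05T12:34:56.789Z
```

If `Time.iso8601` raises `ArgumentError` (an unparseable timestamp), the filter calls `event.cancel` so malformed events never reach the outputs.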
The easiest and fastest way to use Elasticsearch is the hosted version from a cloud provider; several companies offer Elasticsearch as a service, so you can set up an Elasticsearch cluster with a few clicks. However, most of these services charge based on your load. If you have a very low load and require a relatively short TTL for your logs, it can be efficient and financially sensible to use an ES cluster from one of these providers. If you need a relatively long TTL for your logs, setting up your own cluster is also an option; to set up your own Elasticsearch cluster and Kibana, follow the last few links below. If you want to secure Kibana, you can use another Elastic product called Shield to add security to your ES cluster and Kibana.
http://kafka.apache.org/documentation.html
https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html
https://www.elastic.co/cloud/as-a-service/signup
https://www.elastic.co/guide/en/kibana/current/production.html
https://www.elastic.co/guide/en/shield/shield-1.0/marvel.html
...