With a central syslog server you get all the logs on a single machine. But how will you keep track of them, or look up a particular entry? Will you grep or awk through flat files? That is time-consuming and tedious.
What if we had a web interface where we could browse all the logs and query them for specific patterns? That would be awesome, right? In this post we will see how to achieve this using logstash, elasticsearch, Kibana and Redis.
The following are the most important components in the logstash central log server setup:
- A shipper on every client system, which sends the logs to the broker.
- A broker, which receives log event data from the different agents (the remote servers) and queues it. We will be using Redis for this.
- An indexer (elasticsearch), which handles the searching and storage of the log data.
- A web interface (Kibana served by nginx) for querying and viewing the logs.
Server IP: 54.165.44.141
Client IP: 54.85.56.235
We are using Ubuntu instances. Please find the instance details below.
ubuntu@ip-172-30-0-222:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
ubuntu@ip-172-30-0-222:~$
Both the server and client instances are spawned from the same AMI.
We will start with the server configuration:
First of all, we will update the apt-get package index:
sudo apt-get update
Logstash runs on Java, so we have to install a JDK:
sudo apt-get install openjdk-7-jdk
ubuntu@ip-172-30-0-222:~$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
Logstash is very scalable: the components can be installed on separate nodes and scaled independently. In this example, however, we will configure everything on a single instance.
The following directories are not strictly required, but to keep things organized we will create separate directories for the software, the configuration files and the logs.
sudo mkdir /opt/logstash/
sudo mkdir /etc/logstash
sudo mkdir /var/log/logstash
Download the latest logstash package from the website. Previously logstash was distributed as a single jar file, but to make installation easier and reduce dependencies they now provide archive files. Go to the logstash software directory and download the tarball:
cd /opt/logstash
sudo wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
sudo tar xvzf logstash-1.4.2.tar.gz
Installing the Redis server (broker)
As discussed, we will use a Redis server as the broker, to receive the logs from the other systems. Install it with apt-get:
sudo apt-get install redis-server
sudo vim /etc/redis/redis.conf
sudo /etc/init.d/redis-server restart
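On Ubuntu the packaged redis.conf binds Redis to 127.0.0.1 only, so remote shippers cannot reach it. A minimal sketch of the change to make while editing (also open port 6379 in your firewall or security group, and be aware this exposes Redis to the network):

```
# /etc/redis/redis.conf
# default is: bind 127.0.0.1
bind 0.0.0.0
```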
Redis server will be listening on port 6379
ubuntu@ip-172-30-0-222:/opt/logstash/logstash-1.4.2$ netstat -nplaut
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
You can also test the Redis server using telnet:
ubuntu@ip-172-30-0-222:/opt/logstash/logstash-1.4.2$ telnet localhost 6379
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
ping
+PONG
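The telnet exchange above can also be scripted. Here is a minimal sketch over a raw socket, speaking the Redis protocol directly (host and port are whatever your broker uses):

```python
import socket

def redis_ping(host="127.0.0.1", port=6379, timeout=5):
    """Return True if the Redis server answers PING with +PONG."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"PING\r\n")   # inline command, just like typing in telnet
        reply = sock.recv(64)       # a healthy server replies b"+PONG\r\n"
    return reply.startswith(b"+PONG")
```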
Now we have logstash and Redis. It is time to get elasticsearch. We will download the latest Debian package:
ubuntu@ip-172-30-0-222:/usr/local/src$ sudo wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.deb
Installing elasticsearch:
ubuntu@ip-172-30-0-222:/usr/local/src$ sudo dpkg -i elasticsearch-1.3.4.deb
Starting elasticsearch:
ubuntu@ip-172-30-0-222:/usr/local/src$ sudo /etc/init.d/elasticsearch start
sudo: unable to resolve host ip-172-30-0-222
* Starting Elasticsearch Server [ OK ]
ubuntu@ip-172-30-0-222:/usr/local/src$
By default elasticsearch runs as a cluster, even when there is only one node. This is for scalability: later we can add more nodes to the cluster without changing the entire architecture.
You can set the cluster and node names in its configuration file. Here I have set the cluster name to logstash and the node name to Randeep.
ubuntu@ip-172-30-0-222:/usr/local/src$ sudo vim /etc/elasticsearch/elasticsearch.yml
cluster.name: logstash
node.name: "Randeep"
Edit the configuration file and restart.
ubuntu@ip-172-30-0-222:/usr/local/src$ sudo /etc/init.d/elasticsearch restart
sudo: unable to resolve host ip-172-30-0-222
* Stopping Elasticsearch Server [ OK ]
* Starting Elasticsearch Server [ OK ]
ubuntu@ip-172-30-0-222:/usr/local/src$
Elasticsearch will be listening on port 9200. We have to open this port in firewalls.
ubuntu@ip-172-30-0-222:/usr/local/src$ netstat -ntplau | grep 9200
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp6 0 0 :::9200 :::* LISTEN -
ubuntu@ip-172-30-0-222:/usr/local/src$
If you access this port from a browser, you will see the node details:
http://54.165.44.141:9200/
{
"status" : 200,
"name" : "Randeep",
"version" : {
"number" : "1.3.4",
"build_hash" : "a70f3ccb52200f8f2c87e9c370c6597448eb3e45",
"build_timestamp" : "2014-09-30T09:07:17Z",
"build_snapshot" : false,
"lucene_version" : "4.9"
},
"tagline" : "You Know, for Search"
}
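The same check can be done from a script instead of a browser. A small sketch that parses the root-endpoint JSON shown above (the urlopen line is only illustrative; substitute your own server's address):

```python
import json
from urllib.request import urlopen  # only needed for a live check

def es_info(body):
    """Parse the Elasticsearch root response; return (status, node name, version)."""
    doc = json.loads(body)
    return doc["status"], doc["name"], doc["version"]["number"]

# Live check against a running node (address from this setup):
# status, name, version = es_info(urlopen("http://54.165.44.141:9200/").read())
```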
Now we configure logstash itself. We have started the broker (the Redis server) and we have started elasticsearch, which will do the indexing and storage. But logstash still doesn't know about them; we put these details in the logstash configuration file.
We will create a configuration file for logstash and pass its path when we start logstash.
ubuntu@ip-172-30-0-222:/opt/logstash/logstash-1.4.2$ cat /etc/logstash/server.conf
input {
  redis {
    host => "127.0.0.1"
    type => "redis"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { }
  elasticsearch {
    cluster => "logstash"
  }
}
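How does the hand-off work? Each shipper serializes its events to JSON and pushes them onto the Redis list named by `key` ("logstash"); the server config above pops events off the same list and hands them to elasticsearch. A rough sketch of one queued event (field names follow the logstash 1.x event schema; the exact set varies by input, and the values here are made up):

```python
import json

# Approximate shape of one event sitting in the Redis "logstash" list.
event = {
    "@timestamp": "2014-10-04T12:00:00.000Z",
    "@version": "1",
    "type": "syslog",                   # set by the shipper's input config
    "host": "ip-172-30-0-242",          # the client that produced the line
    "path": "/var/log/syslog",
    "message": "Oct  4 12:00:00 ip-172-30-0-242 sshd[1234]: Accepted publickey for ubuntu",
}
payload = json.dumps(event)  # this JSON string is what Redis actually queues
```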
Before starting logstash, verify the configuration:
cd /opt/logstash/logstash-1.4.2
$ sudo bin/logstash --configtest -f /etc/logstash/server.conf --log /var/log/logstash/server.log
Sending logstash logs to /var/log/logstash/server.log.
Using milestone 2 input plugin 'redis'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Configuration OK
ubuntu@ip-172-30-0-222:/opt/logstash/logstash-1.4.2$
Now start logstash:
$ sudo bin/logstash --verbose -f /etc/logstash/server.conf --log /var/log/logstash/server.log &
Sending logstash logs to /var/log/logstash/server.log.
Using milestone 2 input plugin 'redis'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
We have installed logstash, Redis and elasticsearch. Now, how do we see the logs in a web interface? Logstash ships with an embedded Kibana by default, but we will serve Kibana with nginx instead.
Install nginx using apt-get:
ubuntu@ip-172-30-0-222:~$ sudo apt-get install nginx
Download the latest Kibana package into nginx's directory and unzip it:
$ sudo wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.zip
ubuntu@ip-172-30-0-222:/usr/share/nginx$ ls
html kibana-3.1.1.zip
ubuntu@ip-172-30-0-222:/usr/share/nginx$ sudo unzip kibana-3.1.1.zip
Now we need to tell nginx where Kibana lives by editing the document root in its configuration file:
sudo vim /etc/nginx/sites-available/default
Change the root directive accordingly:
root /usr/share/nginx/kibana-3.1.1;
Restart the nginx:
ubuntu@ip-172-30-0-222:/usr/share/nginx/kibana-3.1.1$ sudo /etc/init.d/nginx restart
sudo: unable to resolve host ip-172-30-0-222
* Restarting nginx nginx [ OK ]
ubuntu@ip-172-30-0-222:/usr/share/nginx/kibana-3.1.1$
That's it, the server part is complete! Now we need a client, and the client needs to send its logs to the logstash server.
Client configuration:
First of all update the apt-get repositories:
sudo apt-get update
On the client we are also installing the logstash agent, so we need Java to run logstash:
sudo apt-get install openjdk-7-jdk
ubuntu@ip-172-30-0-242:~$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
Create the configuration, software and log directories on the client system:
sudo mkdir /opt/logstash/
sudo mkdir /etc/logstash
sudo mkdir /var/log/logstash
Download the latest logstash version:
cd /opt/logstash
sudo wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
sudo tar xvzf logstash-1.4.2.tar.gz
Create a configuration file for the client. In this file we specify the broker information:
ubuntu@ip-172-30-0-242:/opt/logstash/logstash-1.4.2$ sudo vim /etc/logstash/shipper.conf
ubuntu@ip-172-30-0-242:~$ cat /etc/logstash/shipper.conf
input {
  file {
    type => "syslog"
    path => ["/var/log/auth.log", "/var/log/syslog"]
    exclude => ["*.gz", "shipper.log"]
  }
}
output {
  stdout { }
  redis {
    host => "54.165.44.141"
    data_type => "list"
    key => "logstash"
  }
}
Start the logstash agent with the configuration and log file paths:
$ sudo bin/logstash --verbose -f /etc/logstash/shipper.conf --log /var/log/logstash/client.log &
Sending logstash logs to /var/log/logstash/client.log.
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Now open your browser and check Kibana for the logs.
In the query bar you can search for any words that appear in the logs; Kibana will show all the log entries containing that particular string.
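Kibana 3's query bar accepts the Lucene query-string syntax, so you can go beyond plain words. A few illustrative examples (the field values here are hypothetical; adjust to your own hosts and paths):

```
message:"Failed password"                   phrase match on the log line
type:syslog AND host:"ip-172-30-0-242"      all syslog events from one client
path:"/var/log/auth.log" AND message:sshd   sshd lines from the auth log
```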
That's it. Now sit back and relax. All your logs are just a query away!