In this blog post you will learn:
- How to build a custom Docker image for Logstash
- How to install and configure the Filebeat service
- How to make Filebeat cooperate with the ELK stack
- How to do basic log event filtering in Kibana
Building custom Docker image for Logstash
Why do we have to build a custom Docker image for Logstash? Isn’t the one we pulled down from Elastic enough? Those questions might pop up in your mind. You see, not all services work out of the box the way we want them to after installation. We have to make minor configuration file changes to get them working the way we imagined. In this case we have to tell Logstash where to put the log events that come in from Filebeat.
How to make those Logstash configuration changes?
I would suggest that you run the basic ELK stack on Docker first and log in to the Logstash Docker container. This is just so you can see what the Logstash config file looks like and where it is placed inside the Docker container. To log in to the Logstash Docker container, or any other Docker container, you would type: sudo docker exec -u 0 -it container_name /bin/bash
After logging into the Logstash Docker container you should see results like in Picture 1 below. I have opened the directory where the Logstash config resides and shown the contents of the config file in that picture as well, so you won’t be confused.
As you were able to see in Picture 1 above, the Logstash config file has two parts, input and output. What we will be changing is the output part. We won’t be making that change inside the Docker container. We will just copy the logstash.conf content and save it in a file with the same name, logstash.conf, but outside the Docker container. I have created a special directory outside the container, named it Logstash, saved logstash.conf inside it and changed the output. Your final logstash.conf should look like Picture 2 below.
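For reference, a minimal logstash.conf along these lines could look like the sketch below. The beats input on port 5044 matches the default image config; the hosts value and the filebeat-* index naming pattern are assumptions you should adapt to your own setup:

```conf
input {
  # Listen for events shipped by Filebeat (default Beats port)
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    # Assumes Elasticsearch is reachable on the same host
    hosts => ["localhost:9200"]
    # Custom index name; one index per day, e.g. filebeat-2019.04.01
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
```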
What do these parts in the output mean? In layman’s terms, we have told Logstash to send log events to Elasticsearch and we have set a custom name for our Elasticsearch index, which will appear in Kibana later on. I could do a deep dive here, but then the blog post would be way too long.
Now you have logstash.conf updated and saved in the special Logstash directory. What next? Next we will create a file named Dockerfile in the same directory where you saved logstash.conf. That file will help us build a custom Logstash Docker image.
To create the Dockerfile, type the command: sudo vim Dockerfile. I will assume that you know how to work with the vim editor. The Dockerfile content should look like Picture 3 below. With the commands in it we are basically telling the Docker service to pull the original Logstash image, remove the existing default logstash.conf from the image and replace it with the version we modified above.
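As a sketch, a Dockerfile along those lines could look like this. It assumes the logstash:6.6.0 base image used earlier in this series and the default pipeline config path inside the official image:

```dockerfile
# Start from the stock Logstash image
FROM logstash:6.6.0

# Drop the default pipeline config shipped with the image
RUN rm -f /usr/share/logstash/pipeline/logstash.conf

# Copy in our modified logstash.conf from the build directory
ADD logstash.conf /usr/share/logstash/pipeline/
```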
Now that you have typed this into the Dockerfile and saved it, you are ready to build your very own custom Logstash Docker image by typing the following command:
sudo docker build -t ubuntu/logstash:v1 .
After the image build, please type the command sudo docker images to make sure that your custom image exists. See Picture 4 below.
Now, setting up the entire ELK stack is the same as in the prior article, with one minor change: instead of logstash:6.6.0 you would type ubuntu/logstash:v1. Before you run the ELK stack with the modified Logstash image you have to stop the old Logstash Docker container by typing: sudo docker stop container_name. The commands to run the ELK stack with the modified Logstash image are below. You don’t have to use all of them; if the rest of the stack is already running, you can just use the one for Logstash.
sudo docker run -d -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" elasticsearch:6.6.0
sudo docker run -d --net=host -e "ELASTICSEARCH_URL=http://localhost:9200" ubuntu/logstash:v1
sudo docker run -d --net=host -e "ELASTICSEARCH_URL=http://localhost:9200" kibana:6.6.0
Your slightly modified ELK stack should now be up and running. You can double-check by typing public_host_IP:5601 in your browser. Also, on the command line you can type sudo docker ps; you should see the change in the Logstash image name. See Picture 5 below.
Now that we have the ELK stack up and running, we can go play with the Filebeat service. But before that, please do take a break if you need one. This has been a longer post and there is more to digest with Filebeat.
Install and configure Filebeat
Filebeat is a service that ships log events to Logstash before they reach Elasticsearch and Kibana. You can imagine it as a big boat full of logs.
To download Filebeat you would type the following command:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.2-amd64.deb
To install Filebeat you would type the following command:
sudo dpkg -i filebeat-6.6.2-amd64.deb
Now you just have to start and enable Filebeat. To do that, type the following commands:
- sudo systemctl enable filebeat
- sudo systemctl start filebeat
- sudo systemctl status filebeat (this one is an addition, to see the status of Filebeat and whether it is sending log events to Logstash; see Picture 6 below). Be aware that Filebeat won’t send log events right away; we have to tell it to do that first. More about that in the next chapter.
The installation of Filebeat is done. In the next chapter we will make it collaborate with Logstash and the rest of the ELK stack.
Make Filebeat cooperate with the ELK stack
By default, Filebeat does not send any log events. We have to tell it to do that by modifying its config file, a process similar to what we did for Logstash. To modify Filebeat’s config file, type the following command:
sudo vim /etc/filebeat/filebeat.yml
I will assume that you know how to work with the vim text editor. If not, please type vimtutor on the command line before editing any config file; it will give you a good intro to vim. I should have mentioned that before.
After you have opened the Filebeat config file, you should modify the parts which I have extracted in the pictures below.
Digression: before you do a deep dive into the Filebeat config change, please install the Apache web server, then start and enable it. We will use it in this example to see how Filebeat sends the log events that reside inside Apache’s access.log file.
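For orientation, the relevant filebeat.yml changes usually boil down to something like the fragment below. The Apache access log path is Ubuntu’s default, and the Logstash address assumes Filebeat runs on the same host as the ELK containers; adjust both for your setup:

```yaml
filebeat.inputs:
  - type: log
    # Inputs are disabled by default; this switch is what makes
    # Filebeat actually start sending log events
    enabled: true
    paths:
      - /var/log/apache2/access.log

# Comment out the default Elasticsearch output...
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# ...and point Filebeat at Logstash instead (default Beats port)
output.logstash:
  hosts: ["localhost:5044"]
```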
That is it regarding the Filebeat configuration. Now you just have to restart the Filebeat service with the command sudo systemctl restart filebeat, and you can type sudo systemctl status filebeat to check whether it has started to send log events to Logstash.
Now we can go and see the results in Kibana. I suggest that you open two tabs in your browser. One tab will serve for refreshing your Apache server on port 80, i.e. Apache’s default welcome page. So, type server_public_IP:80 and hit the refresh button several times. That will fill up Apache’s access log file, and Filebeat will be able to send log events to Logstash. In the other browser tab, open Kibana by typing server_public_IP:5601. Don’t worry if you are prompted with a message asking which template you would like to use; just pick the option to explore on your own. After Kibana has let you in, click on the Discover tab in the menu on your left. You should see results similar to Picture 10 below.
As you can see in Picture 10, I have already typed filebeat-* in the index pattern field. Remember that we gave our index a custom name in the Logstash output? This is the result you now see in Picture 10. After typing filebeat-* you just have to click the Next step button, and in the next dialog box (see Picture 11 below) you have to pick a Time filter field name, which is @timestamp. You can skip it, but I highly suggest that you choose @timestamp and hit the Create index pattern button. This will help Elasticsearch track log events by date and will ease your life big time.
Now that our index pattern has been created, we can again click on the Discover tab in the Kibana menu on your left, and you should be able to see log events, including the Apache ones. To be fair, we didn’t exclude the rest of the system logs in the Filebeat config file, so you will see those too. To make it easier to check whether your Apache log events have made it to Elasticsearch and Kibana, we will do a bit of filtering in the next chapter.
How to filter log events in Kibana
To start filtering, you have to click on the Add filter option and pick the fields you want to filter on. And of course you have to add a value as well. Picture 12 below shows an example of filtering on Apache’s access log file. You can try to do the same.
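If you prefer typing a query instead of using the Add filter dialog, a search like the one below in the Discover search bar narrows things down to the Apache access log. It assumes Filebeat’s default source field, which holds the path of the originating log file:

```
source: "/var/log/apache2/access.log"
```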
In Picture 13 you can see that Kibana has filtered out the Apache log events. Feel free to click on each one and see the details.
I could have done a deep dive into Elasticsearch index patterns and more, but as I said, this would then turn into a huge blog post. In the future I might do a deep dive, but in a different format than blog posts, one that will be much easier for you to digest.
So, in this blog post you have learned:
- How to create custom Logstash Docker image
- How to install and configure Filebeat and make it work with the ELK stack
- How to do basic filtering in Kibana
In the next blog post I will share tips and tricks on how to troubleshoot the ELK stack, and maybe I will add a neat ELK stack cheat sheet for you to download.
As always, feel free to comment and share this blog post. Also, feel free to subscribe to my blog so you can get a notification via email every time I publish a new blog post. 🙂