Ship MySQL Logs to Elasticsearch Effortlessly with Filebeat

Alibaba Cloud
4 min read · Feb 18, 2021

Written by Liu Xiaoguo, an advocate of the China Elasticsearch Community

Edited by Lettie and Dayu

Released by ELK Geek

This article describes in detail how to use Filebeat to ship MySQL log data to Elasticsearch.

Prepare the Environment

  1. Prepare the CentOS v7.4 Elastic Compute Service (ECS) environment and disable Security-Enhanced Linux (SELinux) and the firewall (a quick verification sketch follows this list).
  2. Prepare the Alibaba Cloud Elasticsearch 6.7 environment, use the user name and password you created to log on to Kibana, and import the data into Elasticsearch.
  3. Install Filebeat 6.7.0.
  4. Install MySQL 5.6.48.
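
A quick sanity check of the prepared environment might look like the following (a minimal sketch; the exact output depends on your installation):

# getenforce                        # should report Disabled or Permissive
# systemctl is-active firewalld     # should report inactive
# mysql --version
# filebeat version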

Install MySQL

You need to install MySQL by running the following commands:

# yum install mysql-server
# systemctl start mysqld
# systemctl status mysqld
# Use mysqladmin to set the root password
# mysqladmin -u root password "123456"
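
To confirm that MySQL is running and the password works, you can log in and check the version (a simple sanity check; adjust the password to whatever you set above):

# mysql -u root -p123456 -e "SELECT VERSION();"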

Next, configure an error log file and a slow query log file in my.cnf. Both logs are disabled by default and must be enabled manually. You can also enable the slow query log temporarily at runtime, as shown after the configuration below.

# vim /etc/my.cnf
[mysqld]
log_queries_not_using_indexes = 1
slow_query_log=on
slow_query_log_file=/var/log/mysql/slow-mysql-query.log
long_query_time=0
[mysqld_safe]
log-error=/var/log/mysql/mysqld.log
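
If you prefer not to edit my.cnf, the same slow query log settings can be enabled temporarily at runtime from the mysql client; they revert when MySQL restarts (a minimal sketch using standard MySQL system variables):

mysql> SET GLOBAL slow_query_log = 'ON';
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/slow-mysql-query.log';
mysql> SET GLOBAL long_query_time = 0;
mysql> SET GLOBAL log_queries_not_using_indexes = 1;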

Note: MySQL does not create these log files automatically, so you need to create them manually and grant read and write permissions, for example with chmod 777 slow-mysql-query.log. After you modify my.cnf, restart MySQL so that the new settings take effect.
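
A minimal sketch of the corresponding commands, assuming the log paths configured in my.cnf above (the world-writable permissions follow the article; chown mysql:mysql is a tighter alternative):

# mkdir -p /var/log/mysql
# touch /var/log/mysql/slow-mysql-query.log /var/log/mysql/mysqld.log
# chmod 777 /var/log/mysql/slow-mysql-query.log /var/log/mysql/mysqld.log
# systemctl restart mysqld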

Configure Filebeat

To install Filebeat on CentOS, log on to the Elasticsearch console, go to the Kibana console, click Home, and click Add log data.
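
For reference, the RPM-based installation that the Add log data page walks you through typically looks like the following (the exact commands shown in your console may differ; the download URL follows the standard Elastic artifacts pattern):

# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-x86_64.rpm
# sudo rpm -vi filebeat-6.7.0-x86_64.rpm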

Install Filebeat and follow the instructions on the page step by step. When you modify the filebeat.yml file, note the following steps:

1. Configure dynamic loading of Filebeat modules.

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: /etc/filebeat/modules.d/mysql.yml
  # Set to true to enable config reloading
  reload.enabled: true
  # Period on which files under path should be checked for changes
  reload.period: 1s

The Filebeat MySQL module handles error logs and slow logs separately, so you need to specify the path of each log in the module configuration. Because modules are loaded dynamically, the path setting above tells Filebeat where to find the module configuration file.

2. Enter a Kibana address.

setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "https://es-cn-0p11111000zvqku.kibana.elasticsearch.aliyuncs.com:5601"

3. Enter an Elasticsearch address and a port number.

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["es-cn-0p11111000zvqku.elasticsearch.aliyuncs.com:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "elastic"
  password: "elastic@333"
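
Before starting Filebeat, you can optionally verify the configuration file and the connection to the Elasticsearch output with Filebeat's built-in test subcommands:

# sudo filebeat test config
# sudo filebeat test output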

4. Enable and configure the MySQL module.

# sudo filebeat modules enable mysql
# vim /etc/filebeat/modules.d/mysql.yml
- module: mysql
  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/mysql/mysqld.log"]
  # Slow logs
  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/mysql/slow-mysql-query.log"]
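
To confirm that the MySQL module is now enabled, you can list Filebeat's modules:

# sudo filebeat modules list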

Run the following commands to load the index template, ingest pipelines, and dashboards into Elasticsearch and Kibana, and then start Filebeat.

# sudo filebeat setup
# sudo service filebeat start
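
Once Filebeat is running, you can check that events are arriving by looking for the Filebeat index in Elasticsearch (this uses the host and credentials from filebeat.yml above; filebeat-* is the default index name pattern):

# curl -u elastic:elastic@333 "http://es-cn-0p11111000zvqku.elasticsearch.aliyuncs.com:9200/_cat/indices/filebeat-*?v"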

Kibana Dashboard

Restart MySQL and run a few queries against it so that entries are written to the slow query log and the error log.
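
For example, the following queries produce slow log entries (a minimal illustration; with long_query_time set to 0, virtually any statement is logged as slow), and restarting MySQL with systemctl restart mysqld also writes startup messages to the error log:

# mysql -u root -p123456
mysql> SELECT SLEEP(2);
mysql> SHOW DATABASES;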

Part of the slow query log

Part of the error log

Go to the Kibana dashboard [Filebeat MySQL] Overview ECS to view the collected data. Here, you can see all of the collected MySQL information, including slow queries and error logs.

Summary

As shown in this tutorial, Filebeat is an excellent log shipping solution for MySQL databases and Elasticsearch clusters. Filebeat 6.7.0 is very lightweight and ships log events efficiently. It supports compression and is easily configured in a YAML file. Filebeat lets you manage log files with little effort, track what has been read through its registry, add custom fields for fine-grained filtering and discovery in logs, and use Kibana visualizations to gain immediate insight into your log data.

Declaration: This article is an authorized adaptation of "Beats: How to Use Filebeat to Ship MySQL Logs to Elasticsearch," adapted for the Alibaba Cloud service environment.

Original Source:

