How To Move the MySQL Data Directory to a Separate Partition

Prerequisite: A free partition that will serve as a dedicated MySQL partition.

Note: These instructions assume that the partition you wish to mount is /dev/sdc1

  1. Back up all MySQL databases
    Code:
    mysqldump --opt --all-databases | gzip > /home/alldatabases.sql.gz
  2. Stop tailwatchd and MySQL (tailwatchd monitors services, so stop it first to prevent it from prematurely restarting MySQL)
    Code:
    /scripts/restartsrv_tailwatchd --stop
    /scripts/restartsrv_mysql --stop
  3. Back up the MySQL data directory in case something goes awry
    Code:
    mv /var/lib/mysql /var/lib/mysql.backup
  4. Create the new mount point
    Code:
    mkdir /var/lib/mysql
  5. Configure /etc/fstab so that the new partition is mounted when the server boots (adjust values as necessary)
    Code:
    echo "/dev/sdc1     /var/lib/mysql     ext3     defaults,usrquota    0 1" >> /etc/fstab
  6. Mount the new partition. The following command will mount everything in /etc/fstab:
    Code:
    mount -a
  7. Change the ownership of the mount point so that it is accessible to the user “mysql”
    Code:
    chown mysql:mysql /var/lib/mysql
  8. Ensure that the permissions of the mount point are correct
    Code:
    chmod 711 /var/lib/mysql
  9. Start mysql and tailwatchd
    Code:
    /scripts/restartsrv_mysql --start
    /scripts/restartsrv_tailwatchd --start
  10. Ensure that the MySQL data directory is mounted correctly:
    Code:
    mount | grep /var/lib/mysql
  11. You should see a line that looks like this:
    /dev/sdc1 on /var/lib/mysql type ext3 (rw,usrquota)
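    As an additional check, df will also show which device is mounted at the data directory (a quick sanity check; output varies with your layout):
    Code:
    df -h /var/lib/mysql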

How To Back Up Your MySQL Databases

MySQL is an open source relational database management system (DBMS) deployed in a wide assortment of contexts, most frequently as part of the LAMP stack. The database system is easy to use, highly portable, and, in the context of many applications, extremely efficient. As MySQL is often a centralized data store for large amounts of mission-critical data, making regular backups of your MySQL database is one of the most important disaster recovery tasks a system administrator can perform. This guide addresses a number of distinct methods for creating backups of your database, as well as restoring databases from backups.

 

Backup Methodology

Most backups of MySQL databases in this guide are performed using the mysqldump tool, which is distributed with the default MySQL server installation. We recommend that you use mysqldump whenever possible because it is often the easiest and most efficient way to take database backups. Other methods detailed in this guide are provided for situations when you do not have access to the mysqldump tool, as in a recovery environment like Finnix or in situations where the local instance of the MySQL server will not start.

Nevertheless, this guide provides a mere overview of the mysqldump tool, as there are many options for and uses of mysqldump that fall beyond the scope of this document. We encourage you to become familiar with all of the procedures covered in this document, and to continue your exploration of mysqldump beyond the cases described here. Be sure to note the following:

  • The *.sql files created with mysqldump can be restored at any time. You can even edit the database .sql files manually (with great care!) using your favorite text editor.
  • If your databases only make use of the MyISAM storage engine, you can substitute the mysqldump command with the faster mysqlhotcopy (see the sketch below).
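
A minimal mysqlhotcopy sketch, assuming a MyISAM-only database named exampledb, a backup directory of /var/backups/mysql, and placeholder credentials (adjust these to your setup):

mysqlhotcopy --user=root --password=PASSWORD exampledb /var/backups/mysql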

 

Creating Backups of the Entire Database Management System (DBMS)

It is often necessary to take a backup (or “dump”) of an entire database management system along with all databases and tables, including the system databases which hold the users, permissions, and passwords.

Option 1: Create Backups of an Entire Database Management System Using the mysqldump Utility

The most straightforward method for creating a single coherent backup of the entire MySQL database management system uses the mysqldump utility from the command line. The syntax for creating a database dump with a current timestamp is as follows:

mysqldump --all-databases > dump-$( date '+%Y-%m-%d_%H-%M-%S' ).sql -u root -p

This command will prompt you for a password before beginning the backup. The dump file is created in the current directory, and the process can take anywhere from a few seconds to a few hours depending on the size of your databases.

Automate this process by adding a line to crontab:

0 1 * * * /usr/bin/mysqldump --all-databases > dump-$( date '+\%Y-\%m-\%d_\%H-\%M-\%S' ).sql -u root -pPASSWORD

For the example above, use which mysqldump to confirm the correct path to the command, replace root with the MySQL user you would like to run backups as, and replace PASSWORD with that user's password.

In the crontab example, ensure that there is no space between the -p flag and your password, and note that percent signs are special characters in crontab entries and must be escaped as \%.

Option 2: Create Backups of an Entire DBMS Using Copies of the MySQL Data Directory

While the mysqldump tool is the preferred backup method, there are a couple of cases that require a different approach. mysqldump only works when the database server is accessible and running. If the database cannot be started or the host system is inaccessible, we can copy MySQL’s data directory directly. This method is often necessary in situations where you only have access to a recovery environment like Finnix with your system’s disks mounted in that file system. If you’re attempting this method on your system itself, ensure that the database is not running. Issue a command that resembles the following:

/etc/init.d/mysqld stop

On most distributions, the MySQL data directory is located in the /var/lib/mysql/ directory. If this directory doesn’t exist, examine the /etc/mysql/my.cnf file for the path to the data directory. Alternatively, you can search your file system for the data directory by issuing the following command:

find / -name mysql
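
To check the configuration file directly for an explicit data directory setting (the my.cnf path shown is an assumption and varies by distribution):

grep -i datadir /etc/mysql/my.cnf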

Once you have located your MySQL data directory you can copy it to a backup location. The following example assumes that the MySQL data directory is located at /var/lib/mysql/:

cp -R /var/lib/mysql/* /opt/database/backup-1266871069/

In this case, we have recursively copied the contents of the data directory (e.g. /var/lib/mysql/) to a directory within the /opt/ hierarchy (e.g. /opt/database/backup-1266871069/). This directory must exist before initiating the copy operation. Consider the following sequence of operations:

/etc/init.d/mysql stop
mkdir -p /opt/database/backup-1266872202/
cp -R /var/lib/mysql/* /opt/database/backup-1266872202/

These commands begin by stopping the MySQL server daemon, then creating a directory named /opt/database/backup-1266872202/, and performing a recursive copy of the data directory. Note that we’ve chosen to use the backup-[time_t] naming convention for our examples. Substitute the paths above with your preferred organization and naming scheme. The cp command does not produce output and can take some time to complete depending on the size of your database. Do not be alarmed if it takes a while to complete. When the copy operation is finished, you may want to archive the data directory into a “tar” archive to make it easier to manage and move between machines. Issue the following commands to create the archive:

cd /opt/database/backup-1266872202
tar -czvf /opt/mysqlBackup-1266872202.tar.gz *

Once the tarball is created, you can easily transfer the file in the manner that is most convenient for you. Don’t forget to restart the MySQL server daemon again if needed:

/etc/init.d/mysql start
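
To move the archive to another machine, a copy tool such as scp works well; the remote host and destination path below are placeholders:

scp /opt/mysqlBackup-1266872202.tar.gz user@backup-host:/srv/backups/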

Creating Backups of a Single Database

In many cases, creating a backup of the entire database server isn’t required. In some cases, such as upgrading a web application, the installer may recommend making a backup of the database in case the upgrade adversely affects the database. Similarly, if you want to create a “dump” of a specific database to move that database to a different server, you might consider the following method.

When possible, use the mysqldump tool to export a “dump” of a single database. This command will resemble the following:

mysqldump -u squire -ps3cr1t -h localhost danceLeaders > 1266861650-danceLeaders.sql

The above example is like the example in the previous section, except rather than using the --all-databases option, this example specifies a particular database name. In this case we create a backup of the danceLeaders database. The form of this command, in plainer notation, is as follows:

mysqldump -u [username] -p[password] -h [host] [databaseName] > [backup-name].sql

For an additional example, we will back up the database named customer using the root database account by issuing the following command:

mysqldump -u root -p -h localhost customer > customerBackup.sql

You will be prompted for a password before mysqldump begins its backup process. As always, the backup file, in this case customerBackup.sql, is created in the directory where you issue this command. The mysqldump command can complete in a few seconds or a few hours depending on the size of the database and the load on the host when running the backup.

Creating Backups of a Single Table

Option 1: Create Backups of a Single Table Using the mysqldump Utility

This operation, like previous uses of the mysqldump utility in this document, allows you to create a backup of a single database table. Continuing our earlier examples, the following command backs up the table squireRecords in the danceLeaders database:

mysqldump -u squire -ps3cr1t -h localhost danceLeaders squireRecords > 1266861650-danceLeaders-squireRecords.sql

The above example is like the example in the previous section, except that we’ve added a table name specification to the command to specify the name of the table that we want to back up. The form of this command in a more plain notation is as follows:

mysqldump -u [username] -p[password] -h [host] [databaseName] [tableName] > [backup-name].sql

For an additional example, we will back up the table named “order” from the database named customer using the root database account by issuing the following command:

mysqldump -u root -p -h localhost customer order > customerBackup-order.sql

You will be prompted for a password before mysqldump begins its backup process. As always, the backup file (in this case customerBackup-order.sql) is created in the directory where you issue this command. The mysqldump command can complete in a few seconds or a few hours depending on the size of the database and the load on the host when running the backup.

Option 2: Create Backups of a Single Table Using the MySQL Client and an OUTFILE Statement

The MySQL client itself has some backup capability. It is useful when you are already logged in and you do not want to exit the current session. If you are using a live system and cannot afford down time, you should consider temporarily locking the table you’re backing up.

Do be aware that when backing up a single table using the MySQL client, that table’s structure is not maintained in the backup. Only the data itself is saved when using this method.
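
If you also need the table definition, one option is to capture it separately with SHOW CREATE TABLE before taking the data-only backup; the database and table names below follow the examples used later in this section:

SHOW CREATE TABLE customer.`order`;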

  1. Before we begin, we recommend performing a LOCK TABLES on the tables you intend to back up, followed by FLUSH TABLES to ensure that the database is in a consistent state during the backup operation. You only need a read lock. This allows other clients to continue to query the tables while you are making a copy of the files in the MySQL data directory. For a “read” lock, the syntax of LOCK TABLES looks like the following:
    LOCK TABLES tableName READ;
    

    To perform a LOCK TABLES on the order table of the customer database, issue the following command:

    mysql -u root -p -h localhost
    

    You will then be prompted for the root password. Once you have entered the database credentials, you will arrive at the mysql client prompt. Issue the following command to lock the order table in the customer database (the trailing ; is required for MySQL commands):

    USE customer;
    LOCK TABLES `order` READ;
    FLUSH TABLES;
    
  2. We can now begin the backup operation. To create a backup of a single table using the MySQL client, you will need to be logged in to your MySQL DBMS. If you are not currently logged in you may log in with the following command:
     mysql -u root -p -h localhost
    

    You will be prompted for a password. Once you have entered the correct password and are at the MySQL client prompt, you can use a SELECT * INTO OUTFILE statement. The syntax of this statement looks like the following:

     SELECT * INTO OUTFILE 'file_name' FROM tbl_name;
    

    In this example, we will create a backup of the data from the order table of the customer database. Issue the following command to begin the backup procedure (the trailing ; is required for MySQL commands):

     USE customer;
     LOCK TABLES `order` READ;
     FLUSH TABLES;
     SELECT * INTO OUTFILE 'customerOrderBackup.sql' FROM `order`;
     UNLOCK TABLES;
    

    The customerOrderBackup.sql file will be created in the appropriate database sub-directory within MySQL’s data directory. The MySQL data directory is commonly /var/lib/mysql/. In this example, the OUTFILE will be /var/lib/mysql/customer/customerOrderBackup.sql. The location of this directory and file can, however, vary between Linux distributions. If you cannot find your backup file, you can search for it with the following command:

    find / -name customerOrderBackup.sql
    
  3. Once you have completed the backup operation, you will want to unlock the tables using the following command in the MySQL client. This will return your database to its normal operation. Log in to the MySQL client with the first command if you are not presently logged in and then issue the second command:
    mysql -uroot -p -h localhost
    
    UNLOCK TABLES;
    

You can continue using your database as normal from this point.

Considerations for an Effective Backup Strategy

Creating backups of your MySQL database should be a regular and scheduled task. You might like to consider scheduling periodic backups using cron, mysqldump, and/or mail. Consider our documentation for more information regarding cron. Implementing an automated backup solution may help minimize downtime in a disaster recovery situation.
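
As one possible pattern (the backup path and option file below are placeholders), a nightly crontab entry can compress the dump and read credentials from an option file instead of placing the password on the command line; note again that percent signs must be escaped in crontab entries:

0 2 * * * /usr/bin/mysqldump --defaults-extra-file=/root/.my.backup.cnf --all-databases | gzip > /var/backups/mysql/all-$( date '+\%Y-\%m-\%d' ).sql.gz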

You do not need to log in as root when backing up databases. A MySQL user with read (e.g. SELECT) permission is able to use both the mysqldump and mysql (e.g. the MySQL client) tools to take backups, as described above. As a matter of common practice, we recommend avoiding the MySQL root user whenever possible to minimize security risks.
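
For example, a dedicated backup account could be created as follows; the username and password are placeholders, and LOCK TABLES is included because mysqldump generally requires it in addition to SELECT:

CREATE USER 'backup'@'localhost' IDENTIFIED BY 's3cr1t';
GRANT SELECT, LOCK TABLES ON *.* TO 'backup'@'localhost';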

You may want to consider incremental backups as part of a long-term database backup plan. While this process is not covered here, we recommend that you consider the MySQL Database Backup Methods resource for more information.

Restoring an Entire DBMS From Backup

A backup that cannot be restored is of minimal value. We recommend testing your backups regularly to ensure that they can be restored when needed. The method you use to restore your MySQL database depends on the method you used to create the backup in question.

Option 1: Restoring an Entire DBMS Using the MySQL Client and Backups Created by mysqldump

Before beginning the restoration process, this section assumes your system is running a newly installed version of MySQL without any existing databases or tables. If you already have databases and tables in your MySQL DBMS, please make a backup before proceeding as this process will overwrite current MySQL data.

You can easily restore your entire DBMS using the mysql command. The syntax for this will resemble the following:

mysql -u [username] -p[password] < backupFile.sql

In this case we’re simply restoring the entire DBMS. The command will look like the following:

mysql -u root -p < 1266861650-backup-all.sql

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, the restoration process will begin. Since this process restores an entire DBMS, it can take anywhere from a few seconds to many hours.

Option 2: Restoring an Entire DBMS Using MySQL Data Files Copied Directly from MySQL’s Data Directory

Before beginning the restoration process, this section assumes your system is running a newly installed version of MySQL without any existing databases or tables. If you already have databases and tables in your MySQL DBMS, please make a backup before proceeding as this process will overwrite current MySQL data.

  1. If you have a complete backup of your MySQL data directory (commonly /var/lib/mysql), you can restore it from the command line. To ensure a successful restore, you must first stop the MySQL server daemon and delete the current data in the MySQL data directory.

    /etc/init.d/mysql stop
    rm -R /var/lib/mysql/*

  2. In the following example, the MySQL data directory backup is located in the /opt/database/backup-1266872202 directory. If you made a tarball of the data directory when you backed up your DBMS, you will need to copy the tarball to the data directory and extract it with the following commands:

    cp mysqlBackup-1266872202.tar.gz /var/lib/mysql/
    cd /var/lib/mysql
    tar xzvf mysqlBackup-1266872202.tar.gz

  3. Before we can restart the MySQL database process, we must ensure that the permissions are set correctly on the /var/lib/mysql/ directory. For this example, we assume the MySQL server daemon runs as the user mysql with the group mysql. To change the permissions on the data directory issue the following command:

    chown -R mysql:mysql /var/lib/mysql

  4. Alter the mysql:mysql portion of this command if your MySQL instance runs with different user and group permissions. The form of this argument is [user]:[group]. Finally we can start the MySQL server daemon with the following command:
     /etc/init.d/mysql start
    

    If you receive an error similar to the following:

     /usr/bin/mysqladmin: connect to server at 'localhost' failed
         error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)'
    

    You’ll need to find the old debian-sys-maint user’s password in the /etc/mysql/debian.cnf file and then change the new debian-sys-maint user’s password to it. You can view the old password using cat:

     cat /etc/mysql/debian.cnf | grep password
    

    Copy (or remember) the password. Then you’ll need to change the new debian-sys-maint user’s password. You can do this by logging in as the MySQL root user and issuing the following command (where <password> is the password of the old debian-sys-maint user):

     GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY '<password>' WITH GRANT OPTION;
    
  5. You’ll then need to restart MySQL with the following command:
    /etc/init.d/mysql restart
    

After the MySQL server has started successfully, you will want to test your MySQL DBMS and ensure that all databases and tables were restored properly. We also recommend that you audit your logs for potential errors, as in some cases MySQL can start successfully despite database errors.
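
One quick way to scan for problems is to search the MySQL error log; the path below is an assumption and varies by distribution:

grep -i error /var/log/mysql/error.log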

Restoring a Single Database from Backup

In cases where you have only created a backup for one database, or only need to restore a single database, the restoration process is somewhat different.

Before beginning the restoration process, this section assumes your system is running a newly installed version of MySQL without any existing databases or tables. If you already have databases and tables in your MySQL DBMS, please make a backup before proceeding as this process will overwrite current MySQL data.

  1. To restore a single database using the mysql command, first prepare the destination database. Log in to your (new) MySQL database server using the MySQL client:
     mysql -u root -p -h localhost
    
  2. You will be prompted for the root MySQL user’s password. After you have provided the correct credentials, you must create the destination database. In this case, the customer database will be restored:
     CREATE DATABASE customer;
    
  3. As with all MySQL statements, do not omit the final semi-colon (e.g. ;) at the conclusion of each command. Depending on your deployment, you may need to create a new MySQL user or recreate a previous user with access to the newly created database. The command for creating a new MySQL user takes the following form:
     CREATE USER '[username]'@'[host]' IDENTIFIED BY '[password]';
    
  4. In the next example, we will create a user named customeradmin:
    CREATE USER 'customeradmin'@'localhost' IDENTIFIED BY 's3cr1t';
    
  5. Now we will give customeradmin privileges to access the customer database. The command for granting privileges to a database for a specific user takes the following form:
     GRANT [privilegeType] ON [databaseName].[tableName] TO '[username]'@'[host]';
    
  6. For the purposes of the following example, we will give customeradmin full access to the customer database. Issue the following command in the MySQL client:
     GRANT ALL ON customer.* TO 'customeradmin'@'localhost';
    
  7. You may need to specify different access grants depending on the demands of your deployment. Consult the official documentation for MySQL’s GRANT statement. Once the destination database and MySQL user have been created, you can close the MySQL client with the following command:
     quit
    
  8. You can now use the mysql command to restore your SQL file. The form of this command resembles the following:
     mysql -u [username] -p[password] -h [host] [databaseName] < [filename].sql
    

In the following example, we will restore the customer database from a SQL backup file named customerBackup.sql (pay special attention to the < symbol in this command):

mysql -u root -p -h localhost customer < customerBackup.sql

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, the restoration process will begin. The duration of this operation depends on your system’s load and the size of the database that you are restoring. It may complete in a few seconds, or it may take many hours.

Restoring a Single Table from Backup

Option 1: Restoring a Single Table Using the MySQL Client and Backups Created by mysqldump

Before beginning the restoration process, we assume that your MySQL instance already has an existing database that can receive the table you wish to restore. If your MySQL instance does not have the required database, we’ll need to create it before proceeding. First, log into your MySQL instance with the following command:

mysql -u root -p -h localhost

You will be prompted for the root MySQL user’s password. After you have provided the correct credentials, you must create the destination database. For the purpose of this example we will create the customer database and exit the mysql prompt by issuing the following statements:

CREATE DATABASE customer;
quit

If you already have the required database, you can safely skip the above step. To continue with the table restoration, issue a command in the following form:

mysql -u [username] -p[password] -h [host] [databaseName] < [filename].sql

For the following example, we will restore the order table into the existing customer database from an SQL backup file named customerOrderBackup.sql. Be very careful to use the < operator in the following command:

mysql -u root -p -h localhost customer < customerOrderBackup.sql

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, the restoration process will begin. The duration of this operation depends on your system’s load and the size of the table that you are restoring. It may complete in a few seconds, or it may take many hours.

Option 2: Restoring a Single Table Using the MySQL Client and an INFILE Statement for Backups Created with OUTFILE

Before beginning the restoration process, we assume that your MySQL instance already has an existing database that can receive the table you wish to restore. If your MySQL instance does not have the required database, we’ll need to create it before proceeding. First, log into your MySQL instance with the following command:

mysql -u root -p -h localhost

You will be prompted for the root MySQL user’s password. After you have provided the correct credentials, you must create the destination database. For the purpose of this example we will create the customer database and exit the mysql prompt by issuing the following statements:

CREATE DATABASE customer;
quit

The data backup used in this case was created using the SELECT * INTO OUTFILE 'backupFile.sql' FROM tableName command. This type of backup only retains the data itself so the table structure must be recreated. To restore a single table from within the MySQL client, you must first prepare the destination database and table. Log in to your (new) MySQL instance using the MySQL client:

mysql -u root -p -h localhost

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, you must create the destination database. In this case, we will create the customer database. Issue the following statement:

CREATE DATABASE customer;

Remember that the semi-colons (e.g. ;) following each statement are required. Now you must create the destination table with the correct structure. The data types of the fields of the table must mirror those of the table where the backup originated. In this example, we will restore the order table of the customer database. There are two fields in the order table, custNum with data type INT and orderName with data type VARCHAR(20); your table structure will be different:

USE customer;
CREATE TABLE `order` (custNum INT, orderName VARCHAR(20));

Depending on your deployment, you may need to create a new MySQL user or recreate a previous user with access to the newly created database. The command for creating a new MySQL user takes the following form:

CREATE USER '[username]'@'[host]' IDENTIFIED BY '[password]';

In the next example, we will create a user named customeradmin:

CREATE USER 'customeradmin'@'localhost' IDENTIFIED BY 's3cr1t';

Now we will give customeradmin privileges to access the customer database. The command for granting privileges to a database for a specific user takes the following form:

GRANT [privilegeType] ON [databaseName].[tableName] TO '[username]'@'[host]';

For the purposes of the following example, we will give customeradmin full access to the customer database. Issue the following command in the MySQL client:

GRANT ALL ON customer.* TO 'customeradmin'@'localhost';

You may need to specify different access grants depending on the demands of your deployment. Consult the official documentation for MySQL’s GRANT statement. Once the table and user have been created, we can import the backup data from the backup file using the LOAD DATA command. The syntax resembles the following:

LOAD DATA INFILE '[filename]' INTO TABLE [tableName];

In the following example, we will restore data for a table from a file named customerOrderBackup.sql. When the MySQL client is given a filename after INFILE with no leading path, it looks in the database directory of the default database. If the filename customerOrderBackup.sql was given while using the customer database, the path would be /var/lib/mysql/customer/customerOrderBackup.sql. Ensure that the file you are trying to restore from exists, especially if MySQL generates File not found errors.

To import the data from the customerOrderBackup.sql file located in /var/lib/mysql/customer/, issue the following command:

LOAD DATA INFILE 'customerOrderBackup.sql' INTO TABLE `order`;

The duration of this operation depends on your system’s load and the size of the table that you are restoring; it may complete in a few seconds, or it may take many hours. After you have verified that your data was imported successfully, you can log out:

quit

How To Use MySQL Query Profiling

What is the MySQL slow query log?

The MySQL slow query log is a log to which MySQL writes slow, potentially problematic queries. This logging functionality comes with MySQL but is turned off by default. What queries are logged is determined by customizable server variables that allow for query profiling based on an application’s performance requirements. Generally, the queries that are logged are queries that take longer than a specified amount of time to execute or queries that do not properly hit indexes.

Setting up profiling variables

The primary server variables for setting up the MySQL slow query log are:

slow_query_log			G 
slow_query_log_file			G 
long_query_time			G / S
log_queries_not_using_indexes	G
min_examined_row_limit		G / S

NOTE: (G) global variable, (S) session variable

slow_query_log – Boolean for turning the slow query log on and off.

slow_query_log_file – The absolute path for the query log file. The file’s directory should be owned by the mysqld user and have the correct permissions to be read from and written to. The mysql daemon will likely be running as `mysql` but to verify run the following in the Linux terminal:

 ps -ef | grep bin/mysqld | cut -d' ' -f1

The output will likely display the current user as well as the mysqld user. An example of setting the directory path /var/log/mysql:

cd /var/log
mkdir mysql
chmod 755 mysql
chown mysql:mysql mysql

long_query_time – The time, in seconds, beyond which a query is considered slow. With a value of 5, any query taking longer than 5 seconds to execute would be logged.

log_queries_not_using_indexes – Boolean value determining whether to log queries that do not hit indexes. When doing query analysis, it is important to log queries that are not hitting indexes.

min_examined_row_limit – Sets a lower limit on how many rows must be examined for a query to be logged. A value of 1000 would ignore any query that examines fewer than 1000 rows.

The MySQL server variables can be set in the MySQL conf file or dynamically via a MySQL GUI or MySQL command line. If the variables are set in the conf file, they will be persisted when the server restarts but will also require a server restart to become active. The MySQL conf file is usually located in `/etc or /usr`, typically `/etc/my.cnf` or `/etc/mysql/my.cnf`. To find the conf file (may have to broaden search to more root directories):

find /etc -name my.cnf
find /usr -name my.cnf

Once the conf file has been found, simply append the desired values under the [mysqld] heading:

[mysqld]
….
slow-query-log = 1
slow-query-log-file = /var/log/mysql/localhost-slow.log
long_query_time = 1
log-queries-not-using-indexes

Again, the changes will not take effect until after a server restart, so if the changes are needed immediately then set the variables dynamically:

mysql> SET GLOBAL slow_query_log = 'ON';
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/localhost-slow.log';
mysql> SET GLOBAL log_queries_not_using_indexes = 'ON';
mysql> SET SESSION long_query_time = 1;
mysql> SET SESSION min_examined_row_limit = 100;

To check the variable values:

mysql> SHOW GLOBAL VARIABLES LIKE 'slow_query_log';
mysql> SHOW SESSION VARIABLES LIKE 'long_query_time';

One drawback to setting MySQL variables dynamically is that the variables will be lost upon server restart. It is advisable to add any important variables that you need to be persisted to the MySQL conf file.

NOTE: The syntax for setting variables dynamically via SET and placing them into the conf file are slightly different, e.g. `slow_query_log` vs. `slow-query-log`. View MySQL’s dynamic system variables page for the different syntaxes. The Option-File Format is the format for the conf file and System Variable Name is the variable name for setting the variables dynamically.

Generating query profile data

Now that the MySQL slow query log configurations have been outlined, it is time to generate some query data for profiling. This example was written on a running MySQL instance with no prior slow log configurations set. The example’s queries can be run via a MySQL GUI or through the MySQL command prompt. When monitoring the slow query log, it is useful to have two connection windows open to the server: one connection for writing the MySQL statements and one connection for watching the query log.

In the MySQL console tab, log into the MySQL server with a user who has the SUPER privilege. To start, create a test database and table, add some dummy data, and turn on the slow query log. This example should be run in a development environment, ideally with no other applications using MySQL, to help avoid cluttering the query log as it is being monitored:

$> mysql -u <user_name> -p

mysql> CREATE DATABASE profile_sampling;
mysql> USE profile_sampling;
mysql> CREATE TABLE users ( id TINYINT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(255) );
mysql> INSERT INTO users (name) VALUES ('Walter'),('Skyler'),('Jesse'),('Hank'),('Walter Jr.'),('Marie'),('Saul'),('Gustavo'),('Hector'),('Mike');
mysql> SET GLOBAL slow_query_log = 1;
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/localhost-slow.log';
mysql> SET GLOBAL log_queries_not_using_indexes = 1;
mysql> SET long_query_time = 10;
mysql> SET min_examined_row_limit = 0;

There is now a test database and table with a small amount of test data. The slow query log was turned on, but the query time threshold was intentionally set high and the minimum examined row limit was left at zero. In the console tab for viewing the log:

cd /var/log/mysql
ls -l

There should be no slow query log in the folder yet, as no queries have been run. If there is, that means that the slow query log has been turned on and configured in the past, which may skew some of this example’s results. Back in the MySQL tab, run the following SQL:

mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE id = 1;

The query executed was a simple select using the Primary Key index from the table. This query was fast and used an index, so there will be no entries in the slow query log for this query. Look back in the query log directory and verify that no log was created. Now back in your MySQL window run:

mysql> SELECT * FROM users WHERE name = 'Jesse';

This query was run on a non indexed column – name. At this point there will be a query in the log with the following info (may not be exactly the same):

/var/log/mysql/localhost-slow.log

 # Time: 140322 13:54:58
# User@Host: root[root] @ localhost []
# Query_time: 0.000303  Lock_time: 0.000090 Rows_sent: 1  Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = 'Jesse';

The query has been successfully logged. One more example. Raise the minimum examined row limit and run a similar query:

mysql> SET min_examined_row_limit = 100;
mysql> SELECT * FROM users WHERE name = 'Walter';

No data will be added to the log because the minimum of 100 rows was not analyzed.

NOTE: If there is no data being populated into the log, there are a couple of things that can be checked. First, check the permissions of the directory in which the log is being created. The owner/group should be the same as the mysqld user (see above for an example) and the directory should have the correct permissions (chmod 755 to be sure). Second, there may have been existing slow query variable configurations that are interfering with the example. Reset the defaults by removing any slow query variables from the conf file and restarting the server, or set the global variables dynamically back to their default values. If the changes are made dynamically, log out and log back into MySQL to ensure the global updates take effect.
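
For example, to return the slow-log settings used in this example to their usual MySQL 5.x defaults dynamically:

mysql> SET GLOBAL slow_query_log = 'OFF';
mysql> SET GLOBAL log_queries_not_using_indexes = 'OFF';
mysql> SET GLOBAL long_query_time = 10;
mysql> SET GLOBAL min_examined_row_limit = 0;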

 

Analyzing query profile information

Looking at the query profile data from the above example:

 # Time: 140322 13:54:58
# User@Host: root[root] @ localhost []
# Query_time: 0.000303  Lock_time: 0.000090 Rows_sent: 1  Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = 'Jesse';

The entry displays:

  • Time at which the query was run
  • Who ran it
  • How long the query took
  • Length of the lock
  • How many rows were returned
  • How many rows were examined

This is useful because any query that violates the performance requirements specified with the server variables will end up in the log. This allows a developer, or admin, to have MySQL alert them when a query is not performing as well as it should (as opposed to reading through source code and trying to find poorly written queries). Also, the query profiling data can be useful when it is collected over a period of time, which can help determine what circumstances are contributing to poor application performance.

Using mysqldumpslow

In a more realistic example, profiling would be enabled on a database driven application, providing a moderate stream of data to profile against. The log would be continually getting written to, likely more frequently than anybody would be watching. As the log size grows, it becomes difficult to parse through all the data and problematic queries easily get lost in the log. MySQL offers another tool, mysqldumpslow, that helps avoid this problem by breaking down the slow query log. The binary is bundled with MySQL (on Linux) so to use it simply run the command and pass in the log path:

mysqldumpslow -t 5 -s at /var/log/mysql/localhost-slow.log

There are various parameters that can be used with the command to help customize output. In the above example the top 5 queries sorted by the average query time will be displayed. The resulting rows are more readable as well as grouped by query (this output is different from the example to demonstrate high values):

 

Count: 2  Time=68.34s (136s)  Lock=0.00s (0s)  Rows=39892974.5 (79785949), root[root]@localhost
  SELECT PL.pl_title, P.page_title
  FROM page P
  INNER JOIN pagelinks PL
  ON PL.pl_namespace = P.page_namespace
  WHERE P.page_namespace = N
…

The data being displayed:

  • Count – How many times the query has been logged
  • Time – Both the average time and the total time (in parentheses)
  • Lock – Table lock time
  • Rows – Number of rows returned

The command abstracts numbers and strings, so the same queries with different WHERE clauses will be counted as the same query (notice the page_namespace = N). Having a tool like mysqldumpslow prevents the need to constantly watch the slow query log, instead allowing for periodic or automated checks. The parameters to the mysqldumpslow command allow for some complex expression matching which help drill down into the various queries in the log.
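
For example, the -g option limits the report to statements matching a grep-style pattern; the pattern below is only illustrative:

mysqldumpslow -g 'users' /var/log/mysql/localhost-slow.log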

There are also 3rd party log analysis tools available that offer different data views, a popular one being pt-query-digest.

Query breakdown

One last profiling tool to be aware of is MySQL’s built-in query profiler, which allows for a detailed breakdown of a single query. A good use case for the tool is grabbing a problematic query from the slow query log and running it directly in MySQL. First profiling must be turned on, then the query is run:

mysql> SET SESSION profiling = 1;
mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE name = 'Jesse';
mysql> SHOW PROFILES;

After profiling has been turned on, SHOW PROFILES will show a table linking a Query_ID to a SQL statement. Find the Query_ID corresponding to the query you ran and run the following query (replace # with your Query_ID):

mysql> SELECT * FROM INFORMATION_SCHEMA.PROFILING WHERE QUERY_ID=#;

Sample Output:

SEQ STATE DURATION
1 starting 0.000046
2 checking permissions 0.000005
3 opening tables 0.000036

The STATE is the “step” in the process of executing the query, and the DURATION is how long that step took to complete, in seconds. This isn’t an overly useful tool, but it is interesting and can help determine what part of the query execution is causing the most latency.

For a detailed outline of the various columns: http://dev.mysql.com/doc/refman/5.5/en/profiling-table.html

For a detailed overview of the various “steps”: http://dev.mysql.com/doc/refman/5.5/en/general-thread-states.html

NOTE: This tool should NOT be used in a production environment; rather, it is for analyzing specific queries.

Slow query log performance

One last question to address is how the slow query log will affect performance. In general it is safe to run the slow query log in a production environment; neither the CPU nor the I/O load should be a concern ¹ ². However, there should be some strategy for monitoring the log size to ensure the log file size does not get too big for the file system. Also, a good rule of thumb when running the slow query log in a production environment is to leave long_query_time at 1s or higher.
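
One common approach is a logrotate rule for the slow log; the stanza below is a minimal sketch that assumes the log path used earlier in this guide. The copytruncate directive lets mysqld keep writing to the same file handle while the copied log is compressed and archived:

/var/log/mysql/localhost-slow.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}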

IMPORTANT: It is not a good idea to use the profiling tool, SET profiling=1, nor to log all queries, i.e. the general_log variable, in a production, high workload environment.

Conclusion

The slow query log is extremely helpful in singling out problematic queries and profiling overall query performance. When query profiling with the slow query log, a developer can get an in-depth understanding of how an application’s MySQL queries are performing. Using a tool such as mysqldumpslow, monitoring and evaluating the slow query log becomes manageable and can easily be incorporated into the development process. Now that problematic queries have been identified, the next step is to tune the queries for maximum performance.