
Quickly Import and Restore Large MySQL SQL Backup Files

Here, 'large' refers to files over 100GB. Patience is your best friend during the restoration process. For SQL backup files of several hundred gigabytes, it is not recommended to transfer them with GUI tools like MobaXterm or Xshell, as they can lead to inexplicable failures with files this size. Instead, use the native `scp` command for the transfer. The `scp` command securely copies files and directories between Linux and Unix systems. For details on the `scp` command, you can refer to this article: [https://bmzhp.com/knowledge/174](https://bmzhp.com/knowledge/174)

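A minimal sketch of the transfer, assuming the dump sits at /backup/data.sql on the source machine and the database host is reachable as user@dbserver (both are placeholder names):

# -C compresses the stream in transit; optional, but often useful on slower links
scp -C /backup/data.sql user@dbserver:/data/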

Once the SQL file is copied to the Linux server, you can optimize MySQL to speed up the restoration process using the following strategies:

1. Adjust the Buffer Pool

MySQL uses memory to cache table and index data, which reduces disk I/O during the import. If you have more available memory, consider increasing the following buffer parameters (a my.cnf sketch follows the list):

  • innodb_buffer_pool_size: If using the InnoDB storage engine, you might want to set this parameter to 70% to 80% of your available memory.
  • key_buffer_size: If you are using the MyISAM storage engine, you may want to increase this parameter to enhance performance.
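As a rough sketch, assuming a dedicated database server with 32GB of RAM (the 24G value is simply 75% of that assumed memory, not a figure from the original setup), the my.cnf entries could look like this:

[mysqld]
# InnoDB: cache table and index data in memory (roughly 70%-80% of RAM on a dedicated server)
innodb_buffer_pool_size = 24G
# MyISAM only: size of the index key cache; irrelevant if all tables use InnoDB
key_buffer_size = 1G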

2. Enable Batch Mode

MySQL supports batch import mode, which can speed up the import of SQL files. To do this, you should use the following command when importing the SQL file:

mysql -u username -p --default-character-set=utf8 dbname < data.sql

Here, --default-character-set=utf8 can be adjusted according to the character set of your SQL file. Additionally, you can enable batch mode using the following options:

mysql -u username -p --default-character-set=utf8 --max_allowed_packet=512M --net_buffer_length=8M dbname < data.sql

In this command, max_allowed_packet sets the maximum size of a single MySQL packet on the client side, while net_buffer_length sets the initial size of the client's network buffer. Adjust both according to your hardware resources to improve import speed; note that the server's own max_allowed_packet must be at least as large, or oversized statements will still be rejected.
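A sketch of the matching server-side change (the 512M value simply mirrors the client option above; changing a global variable requires sufficient privileges, and the new limit applies to connections opened after the change):

-- raise the server-side packet limit for this import; resets on server restart
SET GLOBAL max_allowed_packet = 512 * 1024 * 1024;
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';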

3. Disable Logging

When executing SQL files, MySQL writes each change to the binary log for replication and point-in-time recovery. If you do not need binary logging during the restore, you can disable it to speed up the import. To do this, add the following line to your MySQL configuration file and restart the server:

[mysqld]
skip-log-bin
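If a server restart is not convenient, a session-level alternative (an assumption here, and one that requires elevated privileges such as SUPER) is to suppress binary logging only for the importing session, for example from the mysql client:

SET sql_log_bin = 0;
SOURCE data.sql;
SET sql_log_bin = 1;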

4. Disable Foreign Key Checks

While executing SQL files, MySQL also verifies foreign key constraints to ensure data consistency. If your SQL file already satisfies foreign key constraints, you can disable this feature to speed up the import. To do this, add the following statement at the beginning of your SQL file:

SET foreign_key_checks = 0;

And add the following statement at the end of your SQL file:

SET foreign_key_checks = 1;

This way, foreign key checks are temporarily disabled and then re-enabled afterwards. For a dump too large to edit comfortably, the same switches can be wrapped around the import from the shell, as sketched below.
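A minimal shell sketch of that wrapping (user, database, and file name are the same placeholders used earlier; unique_checks and autocommit are additional, optional switches in the same spirit):

# wrap the session switches around the dump instead of editing a 100GB file
( echo "SET foreign_key_checks=0; SET unique_checks=0; SET autocommit=0;"; \
  cat data.sql; \
  echo "COMMIT; SET unique_checks=1; SET foreign_key_checks=1;" ) \
  | mysql -u username -p --default-character-set=utf8 dbname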

Finally, keep your MySQL server updated to the latest version and apply all necessary security patches and fixes. This helps keep your MySQL database efficient, secure, and stable.

Licensed under CC BY-NC-SA 4.0