This document lists frequently asked questions (FAQs) about Backup & Restore (BR) and their solutions.
What should I do if the error message `could not read local://...:download sst failed` is returned during data restoration?
When you restore data, each node must have access to all backup files (SST files). By default, if local storage is used, you cannot restore data directly, because the backup files are scattered among different nodes. Therefore, you have to copy the backup files of each TiKV node to all the other TiKV nodes.
It is recommended to mount an NFS disk as a backup disk during backup. For details, see Back up a single table to a network disk.
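As a rough sketch of that recommendation, you might mount the same NFS export on every TiKV node and the BR node, then point BR at the shared path. The NFS server address, mount point, and PD address below are placeholders, not values from this document:

```shell
# Mount the shared NFS export at the same path on every TiKV node and the BR node
# (192.168.0.100:/br_data and /br_data are placeholder values)
mount -t nfs 192.168.0.100:/br_data /br_data

# Back up the whole cluster to the shared path (placeholder PD address)
br backup full \
    --pd "192.168.0.1:2379" \
    --storage "local:///br_data"
```

Because every node sees the same path, the SST files no longer need to be copied among TiKV nodes before a restore.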
When you use the `oltp_read_only` scenario of sysbench to back up to a disk (make sure the backup disk and the service disk are different) at full rate, the cluster QPS decreases by 15%-25%. The exact impact on the cluster depends on the table schema.
To reduce the impact on the cluster, you can use the `--ratelimit` parameter to limit the backup rate.
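For example, a rate-limited backup might look like the following. This is a sketch with placeholder addresses and paths; the value `128` is a hypothetical per-node cap in MiB/s:

```shell
# Cap the backup speed per TiKV node with --ratelimit (placeholder values throughout)
br backup full \
    --pd "192.168.0.1:2379" \
    --storage "local:///br_data" \
    --ratelimit 128
```

A lower limit lengthens the backup but reduces contention with foreground traffic.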
The system schemas (`mysql`) are filtered out during full backup. For more details, refer to the Backup Principle.
Because these system schemas do not exist in the backup files, no conflict occurs among system tables during data restoration.
What should I do if the `Permission denied` error occurs even when I run BR as root?
You need to confirm whether TiKV has access to the backup directory. To back up data, confirm whether TiKV has the write permission. To restore data, confirm whether it has the read permission.
Running BR as root might still fail due to disk permissions, because the backup files (SST files) are saved by the TiKV processes, not by BR itself.
Almost all of these problems are system call errors that occur when TiKV writes data to the disk. You can check the mounting method and the file system of the backup directory, and try to back up data to another folder or another hard disk.
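As a hedged sketch of that check, you can inspect the ownership of the backup directory and, if needed, hand it to the user that TiKV runs as. The path `/br_data` and the `tikv` user/group are placeholders for your deployment:

```shell
# Inspect who owns the backup directory (placeholder path)
ls -ld /br_data

# If TiKV runs as the "tikv" user (common but deployment-specific),
# give it ownership so it can write/read SST files there
sudo chown -R tikv:tikv /br_data
```

Remember that during restore the same directory must be readable by TiKV on every node.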
For example, you might encounter the `Code: 22(invalid argument)` error when backing up data to a network disk.
When you use local storage, `backupmeta` is generated on the node where BR is running, and backup files are generated on the Leader nodes of each Region.
During data backup, backup files are generated on the Leader node of each Region. The size of the backup equals the size of the data, with no redundant replicas. Therefore, the total backup size is approximately the total amount of TiKV data divided by the number of replicas.
However, if you want to restore data from local storage, the number of replicas is equal to that of the TiKV nodes, because each TiKV must have access to all backup files.
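As a back-of-envelope check, the estimate above is simple arithmetic. The data volume and replica count below are hypothetical numbers, not values from this document:

```shell
# Backup size ≈ total TiKV data / replica count (hypothetical numbers)
total_tikv_data_gib=3000   # sum of data volume across all TiKV nodes
replicas=3                 # Region replica count (3 is the common default)
backup_size_gib=$(( total_tikv_data_gib / replicas ))
echo "${backup_size_gib} GiB"
```

For a local-storage restore, by contrast, each of the N TiKV nodes needs its own full copy of the backup files.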
The data restored using BR cannot be replicated to the downstream. This is because BR directly imports SST files but the downstream cluster currently cannot obtain these files from the upstream.
DDL jobs generated during the BR restore might cause unexpected DDL executions in Drainer. Therefore, if you need to perform restore on the upstream cluster of TiCDC/Drainer, add all tables restored using BR to the Drainer block list.
You can use `syncer.ignore-table` to configure the block list for Drainer.
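As a sketch, the relevant fragment of the Drainer configuration file might look like the following. The database and table names are placeholders; list one entry per table restored by BR:

```toml
# drainer.toml fragment: skip tables restored by BR (placeholder names)
[[syncer.ignore-table]]
db-name = "br_restored_db"
tbl-name = "br_restored_table"
```

Tables matching these entries are excluded from replication, so the SST-imported data does not trigger unexpected DDL or DML downstream.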