# Back up Data to PV Using BR
This document describes how to back up the data of a TiDB cluster in Kubernetes to Persistent Volumes (PVs). BR is used to perform the backup of the TiDB cluster, and the backup data is then sent to PVs.

The PVs in this document can be of any Kubernetes-supported persistent volume type. NFS is used as an example PV type throughout this document.
## Ad-hoc backup

Ad-hoc backup supports both full backup and incremental backup. You describe an ad-hoc backup by creating a `Backup` Custom Resource (CR) object. TiDB Operator performs the specific backup operation based on this `Backup` object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle the error manually.

This document provides an example in which the data of the `demo1` TiDB cluster in the `test1` Kubernetes namespace is backed up to NFS.
### Prerequisites for ad-hoc backup

1. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

    ```shell
    kubectl apply -f backup-rbac.yaml -n test1
    ```

2. Create the `backup-demo1-tidb-secret` secret, which stores the root account and password needed to access the TiDB cluster:

    ```shell
    kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=<password> --namespace=test1
    ```

3. Ensure that the NFS server is accessible from your Kubernetes cluster, and that TiKV is configured to mount the same NFS server directory to the same local path as in backup jobs. To mount NFS for TiKV, refer to the configuration below (a verification sketch follows this list):

    ```yaml
    spec:
      tikv:
        additionalVolumes:
        # specify volume types that are supported by Kubernetes, Ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
        - name: nfs
          nfs:
            server: 192.168.0.2
            path: /nfs
        additionalVolumeMounts:
        # this must match `name` in `additionalVolumes`
        - name: nfs
          mountPath: /nfs
    ```
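After TiKV reloads with this configuration, you can optionally confirm that the volume mount was applied. The Pod name `demo1-tikv-0` below is an assumption based on the example cluster name; adjust it to your deployment:

```shell
# List the mount paths declared in the TiKV Pod spec; /nfs should appear among them
kubectl get pod demo1-tikv-0 -n test1 -o jsonpath='{.spec.containers[*].volumeMounts[*].mountPath}'
```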
### Required database account privileges

- The `SELECT` and `UPDATE` privileges of the `mysql.tidb` table: before and after the backup, the `Backup` CR needs a database account with these privileges to adjust the GC time.
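If you prefer not to store the root password in the secret, you can grant these privileges to a dedicated account instead. The sketch below assumes a hypothetical `backup_user` account and a reachable TiDB endpoint; adjust the host, port, and account to your environment, and store this account's user name and password in the secret and `from` configuration accordingly:

```shell
# Hypothetical example: grant the privileges that the Backup CR needs to adjust the GC time
mysql -h ${tidb_host} -P 4000 -u root -p -e "GRANT SELECT, UPDATE ON mysql.tidb TO 'backup_user'@'%';"
```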
### Process of ad-hoc backup

1. Create the `Backup` CR, and back up cluster data to NFS as described below:

    ```shell
    kubectl apply -f backup-nfs.yaml
    ```

    The content of `backup-nfs.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-backup-nfs
      namespace: test1
    spec:
      # backupType: full
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # from:
      #   host: ${tidb-host}
      #   port: ${tidb-port}
      #   user: ${tidb-user}
      #   secretName: backup-demo1-tidb-secret
      br:
        cluster: demo1
        clusterNamespace: test1
        # logLevel: info
        # statusAddr: ${status-addr}
        # concurrency: 4
        # rateLimit: 0
        # checksum: true
        # options:
        # - --lastbackupts=420134118382108673
      local:
        prefix: backup-nfs
        volume:
          name: nfs
          nfs:
            server: ${nfs_server_ip}
            path: /nfs
        volumeMount:
          name: nfs
          mountPath: /nfs
    ```

    In the example above, `spec.local` refers to the configuration related to PVs. For more information about PV configuration, refer to Local storage fields.

    In the example above, some parameters in `spec.br` can be ignored, such as `logLevel`, `statusAddr`, `concurrency`, `rateLimit`, `checksum`, and `timeAgo`. For more information about BR configuration, refer to BR fields.

    Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp `--lastbackupts` in `spec.br.options`. For the limitations of incremental backup, refer to Use BR to Back up and Restore Data.

    For more information about the `Backup` CR fields, refer to Backup CR fields.

    This example backs up all data in the TiDB cluster to NFS.

2. After creating the `Backup` CR, use the following command to check the backup status:

    ```shell
    kubectl get bk -n test1 -owide
    ```
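If the summary columns are not enough, you can inspect the `Backup` CR directly with standard kubectl commands, for example:

```shell
# Show the detailed status of the Backup CR, including conditions and events
kubectl describe bk demo1-backup-nfs -n test1

# Or dump the full object, including the status section, as YAML
kubectl get bk demo1-backup-nfs -n test1 -o yaml
```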
### Backup CR examples

#### Back up data of all clusters
```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-nfs
  namespace: test1
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  from:
    host: ${tidb-host}
    port: ${tidb-port}
    user: ${tidb-user}
    secretName: backup-demo1-tidb-secret
  br:
    cluster: demo1
    clusterNamespace: test1
  local:
    prefix: backup-nfs
    volume:
      name: nfs
      nfs:
        server: ${nfs_server_ip}
        path: /nfs
    volumeMount:
      name: nfs
      mountPath: /nfs
```
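After the backup completes, the data is written under the `prefix` directory on the mounted volume. Assuming the NFS export path shown above, the backup files should be visible on the NFS server at the path below (this path is an inference from the example configuration; adjust it to your export layout):

```shell
# Run on the NFS server: /nfs is the exported directory and backup-nfs is the prefix
ls -lh /nfs/backup-nfs
```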
#### Back up data of a single database

The following example backs up data of the `db1` database.
```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-nfs
  namespace: test1
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  from:
    host: ${tidb-host}
    port: ${tidb-port}
    user: ${tidb-user}
    secretName: backup-demo1-tidb-secret
  tableFilter:
  - "db1.*"
  br:
    cluster: demo1
    clusterNamespace: test1
  local:
    prefix: backup-nfs
    volume:
      name: nfs
      nfs:
        server: ${nfs_server_ip}
        path: /nfs
    volumeMount:
      name: nfs
      mountPath: /nfs
```
#### Back up data of a single table

The following example backs up data of the `db1.table1` table.
```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-nfs
  namespace: test1
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  from:
    host: ${tidb-host}
    port: ${tidb-port}
    user: ${tidb-user}
    secretName: backup-demo1-tidb-secret
  tableFilter:
  - "db1.table1"
  br:
    cluster: demo1
    clusterNamespace: test1
  local:
    prefix: backup-nfs
    volume:
      name: nfs
      nfs:
        server: ${nfs_server_ip}
        path: /nfs
    volumeMount:
      name: nfs
      mountPath: /nfs
```
#### Back up data of multiple tables using the table filter

The following example backs up data of the `db1.table1` table and the `db1.table2` table.
```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-nfs
  namespace: test1
spec:
  # backupType: full
  # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
  from:
    host: ${tidb-host}
    port: ${tidb-port}
    user: ${tidb-user}
    secretName: backup-demo1-tidb-secret
  tableFilter:
  - "db1.table1"
  - "db1.table2"
  br:
    cluster: demo1
    clusterNamespace: test1
  local:
    prefix: backup-nfs
    volume:
      name: nfs
      nfs:
        server: ${nfs_server_ip}
        path: /nfs
    volumeMount:
      name: nfs
      mountPath: /nfs
```
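`tableFilter` accepts the table filter rules used by BR, which also allow exclusion rules prefixed with `!`. The following fragment is an illustrative sketch (not taken from the original examples); it would back up every table in `db1` except `db1.table1`:

```yaml
spec:
  tableFilter:
  - "db1.*"
  - "!db1.table1"
```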
## Scheduled full backup

You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid an excessive number of backup items. A scheduled full backup is described by a custom `BackupSchedule` CR object. A full backup is triggered at each backup time point. Its underlying implementation is the ad-hoc full backup.
### Prerequisites for scheduled full backup

The prerequisites for scheduled full backup are the same as the prerequisites for ad-hoc backup.
### Process of scheduled full backup

1. Create the `BackupSchedule` CR, and back up cluster data as described below:

    ```shell
    kubectl apply -f backup-schedule-nfs.yaml
    ```

    The content of `backup-schedule-nfs.yaml` is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: demo1-backup-schedule-nfs
      namespace: test1
    spec:
      #maxBackups: 5
      #pause: true
      maxReservedTime: "3h"
      schedule: "*/2 * * * *"
      backupTemplate:
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        # from:
        #   host: ${tidb_host}
        #   port: ${tidb_port}
        #   user: ${tidb_user}
        #   secretName: backup-demo1-tidb-secret
        br:
          cluster: demo1
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status-addr}
          # concurrency: 4
          # rateLimit: 0
          # checksum: true
        local:
          prefix: backup-nfs
          volume:
            name: nfs
            nfs:
              server: ${nfs_server_ip}
              path: /nfs
          volumeMount:
            name: nfs
            mountPath: /nfs
    ```

2. After creating the scheduled full backup, use the following command to check the backup status:

    ```shell
    kubectl get bks -n test1 -owide
    ```

    Use the following command to check all the backup items:

    ```shell
    kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-nfs -n test1
    ```
From the example above, you can see that the `backupSchedule` configuration consists of two parts. One is the unique configuration of `backupSchedule`, and the other is `backupTemplate`.

`backupTemplate` specifies the configuration related to the cluster and remote storage, which is the same as the `spec` configuration of the `Backup` CR. For the unique configuration of `backupSchedule`, refer to BackupSchedule CR fields.
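The example above also shows the optional `pause` field. As a sketch of how you might temporarily suspend scheduled backups without deleting the CR (assuming the `bks` short name used in the commands above), you can patch the field in place:

```shell
# Pause the schedule; set "pause": false to resume it later
kubectl patch bks demo1-backup-schedule-nfs -n test1 --type merge -p '{"spec":{"pause":true}}'
```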
## Delete the backup CR

Refer to Delete the Backup CR.
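As a quick sketch, deleting the ad-hoc `Backup` CR created earlier looks like the command below; note that whether the backup data on the PV is also removed depends on the CR's clean policy, which is covered in the linked document:

```shell
kubectl delete bk demo1-backup-nfs -n test1
```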
## Troubleshooting
If you encounter any problem during the backup process, refer to Common Deployment Failures.