Restore Data from S3-Compatible Storage Using TiDB Lightning

This document describes how to restore TiDB cluster data that was backed up using TiDB Operator in Kubernetes.

The restore method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator v1.1 or later versions. The underlying implementation uses the TiDB-backend of TiDB Lightning to perform the restore.

TiDB Lightning supports three backends: Importer-backend, Local-backend, and TiDB-backend. For the differences between these backends and how to choose one, see TiDB Lightning Backends. To import data using Importer-backend or Local-backend, see Import Data.

Prerequisites

  1. Download backup-rbac.yaml and execute the following command to create the role-based access control (RBAC) resources in the test2 namespace:

    kubectl apply -f backup-rbac.yaml -n test2
    
  2. Grant permissions to the remote storage.

    To grant permissions to access S3-compatible remote storage, refer to AWS account permissions.

    If you use Ceph as the backend storage for testing, you can grant permissions by using AccessKey and SecretKey, as shown in the example after this list.

  3. Create the restore-demo2-tidb-secret secret which stores the root account and password needed to access the TiDB cluster:

    kubectl create secret generic restore-demo2-tidb-secret --from-literal=password=${password} --namespace=test2
    
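If you grant permissions by using AccessKey and SecretKey, the restore.yaml examples in the restore process below reference an s3-secret secret that stores these credentials. The following is a minimal sketch of creating it, assuming the credentials are read from the access_key and secret_key keys of the secret:

kubectl create secret generic s3-secret --from-literal=access_key=${access_key} --from-literal=secret_key=${secret_key} --namespace=test2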

Required database account privileges

Privileges   Scope
SELECT       Tables
INSERT       Tables
UPDATE       Tables
DELETE       Tables
CREATE       Databases, tables
DROP         Databases, tables
ALTER        Tables
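
For example, you can grant these privileges to the account used for the restore with a statement like the following. This is a sketch only: it assumes the ${tidb_user} account referenced in restore.yaml already exists, and it grants the privileges globally; narrow the scope to specific databases if needed.

mysql -h ${tidb_host} -P ${tidb_port} -u root -p -e "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON *.* TO '${tidb_user}'@'%';"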

Restore process

  • Create the Restore CR, and restore the cluster data from Ceph by importing AccessKey and SecretKey to grant permissions:

    kubectl apply -f restore.yaml
    

    The content of restore.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore
      namespace: test2
    spec:
      backupType: full
      to:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: restore-demo2-tidb-secret
      s3:
        provider: ceph
        endpoint: ${endpoint}
        secretName: s3-secret
        path: s3://${backup_path}
      # storageClassName: local-storage
      storageSize: 1Gi
    
  • Create the Restore CR, and restore the cluster data from Amazon S3 by importing AccessKey and SecretKey to grant permissions:

    kubectl apply -f restore.yaml
    

    The restore.yaml file has the following content:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore
      namespace: test2
    spec:
      backupType: full
      to:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: restore-demo2-tidb-secret
      s3:
        provider: aws
        region: ${region}
        secretName: s3-secret
        path: s3://${backup_path}
      # storageClassName: local-storage
      storageSize: 1Gi
    
  • Create the Restore CR, and restore the cluster data from Amazon S3 by binding IAM with Pod to grant permissions:

    kubectl apply -f restore.yaml
    

    The content of restore.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore
      namespace: test2
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
    spec:
      backupType: full
      to:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: restore-demo2-tidb-secret
      s3:
        provider: aws
        region: ${region}
        path: s3://${backup_path}
      # storageClassName: local-storage
      storageSize: 1Gi
    
  • Create the Restore CR, and restore the cluster data from Amazon S3 by binding IAM with ServiceAccount to grant permissions:

    kubectl apply -f restore.yaml
    

    The content of restore.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore
      namespace: test2
    spec:
      backupType: full
      serviceAccount: tidb-backup-manager
      to:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: restore-demo2-tidb-secret
      s3:
        provider: aws
        region: ${region}
        path: s3://${backup_path}
      # storageClassName: local-storage
      storageSize: 1Gi
    

After creating the Restore CR, execute the following command to check the restore status:

kubectl get rt -n test2 -owide
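
To view the detailed status and events of a specific Restore CR, you can also describe it. For example, assuming the demo2-restore CR created above:

kubectl describe rt demo2-restore -n test2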

The example above restores data from the spec.s3.path path on S3-compatible storage to the spec.to.host TiDB cluster. For more information about S3-compatible storage configuration, refer to S3 storage fields.
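
To verify the restored data, you can connect to the target TiDB cluster with a MySQL client. This is a minimal sketch, assuming ${tidb_host} and ${tidb_port} are reachable from where you run the client:

mysql -h ${tidb_host} -P ${tidb_port} -u ${tidb_user} -p -e "SHOW DATABASES;"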

For more information about the Restore CR fields, refer to Restore CR fields.

Troubleshooting

If you encounter any problem during the restore process, refer to Common Deployment Failures.
