Maintain TiDB Binlog
This document describes how to maintain TiDB Binlog of a TiDB cluster in Kubernetes.
Prerequisites
- Deploy TiDB Operator.
- Install Helm and configure it with the official PingCAP chart repository (an example of adding the repository follows this list).
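If the PingCAP repository has not been added yet, the following sketch shows one way to do it, assuming Helm is already installed and using the public PingCAP chart repository URL:

```shell
# Add the official PingCAP chart repository and refresh the local index.
helm repo add pingcap https://charts.pingcap.org/
helm repo update
```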
Enable TiDB Binlog of a TiDB cluster
TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster with TiDB Binlog enabled, or enable TiDB Binlog in an existing TiDB cluster:
Modify the `values.yaml` file as described below (a combined example fragment follows the list of downstream storage types):

- Set `binlog.pump.create` to `true`.
- Set `binlog.drainer.create` to `true`.
- Set `binlog.pump.storageClassName` and `binlog.drainer.storageClassName` to an available `storageClass` in your Kubernetes cluster.
- Set `binlog.drainer.destDBType` to your desired downstream storage as needed, which is explained in detail below.

TiDB Binlog supports three types of downstream storage:
- PersistentVolume: the default downstream storage. You can configure a large PV for `drainer` (by modifying `binlog.drainer.storage`) in this case.
- MySQL compatible databases: enabled by setting `binlog.drainer.destDBType` to `mysql`. Meanwhile, you must configure the address and credentials of the target database in `binlog.drainer.mysql`.
- Apache Kafka: enabled by setting `binlog.drainer.destDBType` to `kafka`. Meanwhile, you must configure the ZooKeeper address and Kafka address of the target cluster in `binlog.drainer.kafka`.
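The following is a minimal sketch of the binlog-related fragment of `values.yaml`, assuming a `local-storage` storage class and a MySQL downstream reachable at the hypothetical address `downstream-mysql`; check the exact fields under `binlog.drainer.mysql` against the chart's default `values.yaml` before using it:

```yaml
# Illustrative values only; substitute names that exist in your environment.
binlog:
  pump:
    create: true
    storageClassName: local-storage   # an available storageClass in your cluster
  drainer:
    create: true
    storageClassName: local-storage
    destDBType: mysql                 # "mysql" or "kafka"; keep the chart default to use PV storage
    mysql:
      host: "downstream-mysql"        # hypothetical downstream address
      user: "root"
      password: ""
      port: 3306
```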
Set affinity and anti-affinity for TiDB and the Pump component:
By default, TiDB's affinity is set to `{}`, so currently each TiDB instance does not have a corresponding Pump instance. When TiDB Binlog is enabled, if Pump and TiDB are deployed on separate nodes and network isolation occurs while `ignore-error` is enabled, TiDB loses binlogs. In this situation, it is recommended to deploy a TiDB instance and a Pump instance on the same node using the affinity feature, and to spread Pump instances across different nodes using the anti-affinity feature. Each node requires only one Pump instance. (A quick way to verify the resulting placement is sketched after the affinity examples below.)

Note:

`<release-name>` needs to be replaced with the Helm release name of the target `tidb-cluster` chart.
Configure `tidb.affinity` as follows:

```yaml
tidb:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: "app.kubernetes.io/component"
            operator: In
            values:
            - "pump"
          - key: "app.kubernetes.io/managed-by"
            operator: In
            values:
            - "tidb-operator"
          - key: "app.kubernetes.io/name"
            operator: In
            values:
            - "tidb-cluster"
          - key: "app.kubernetes.io/instance"
            operator: In
            values:
            - <release-name>
        topologyKey: kubernetes.io/hostname
```
Configure `binlog.pump.affinity` as follows:

```yaml
binlog:
  pump:
    affinity:
      podAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: "app.kubernetes.io/component"
                operator: In
                values:
                - "tidb"
              - key: "app.kubernetes.io/managed-by"
                operator: In
                values:
                - "tidb-operator"
              - key: "app.kubernetes.io/name"
                operator: In
                values:
                - "tidb-cluster"
              - key: "app.kubernetes.io/instance"
                operator: In
                values:
                - <release-name>
            topologyKey: kubernetes.io/hostname
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: "app.kubernetes.io/component"
                operator: In
                values:
                - "pump"
              - key: "app.kubernetes.io/managed-by"
                operator: In
                values:
                - "tidb-operator"
              - key: "app.kubernetes.io/name"
                operator: In
                values:
                - "tidb-cluster"
              - key: "app.kubernetes.io/instance"
                operator: In
                values:
                - <release-name>
            topologyKey: kubernetes.io/hostname
```
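Once the Pods are scheduled, you can check whether Pump and TiDB actually landed on the same nodes and whether Pump instances are spread out. This is a minimal sketch that reuses the labels shown in the affinity rules above; `<namespace>` and `<release-name>` are the same placeholders used elsewhere in this document:

```shell
# List TiDB and Pump Pods with their nodes; the NODE column should show each
# Pump co-located with a TiDB instance and Pump Pods on distinct nodes.
kubectl get pods -n <namespace> -o wide \
  -l 'app.kubernetes.io/instance=<release-name>,app.kubernetes.io/component in (tidb,pump)'
```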
Create a new TiDB cluster or update an existing cluster:
Create a new TiDB cluster with TiDB Binlog enabled:
```shell
helm install pingcap/tidb-cluster --name=<release-name> --namespace=<namespace> --version=<chart-version> -f <values-file>
```

Update an existing TiDB cluster to enable TiDB Binlog:
Note:
If you have set affinity for TiDB and its components, updating the existing TiDB cluster causes a rolling update of the TiDB components in the cluster.
```shell
helm upgrade <release-name> pingcap/tidb-cluster --version=<chart-version> -f <values-file>
```
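After the install or upgrade, it can help to watch the rolling update and confirm that the Pump and TiDB Pods come up. This is only a sketch; the label selector assumes the standard chart labels shown in the affinity examples above:

```shell
# Watch the Pods of this release until the Pump and TiDB Pods are all Running.
kubectl get pods -n <namespace> -l app.kubernetes.io/instance=<release-name> -w
```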
Deploy multiple drainers
By default, only one downstream drainer is created. You can install the tidb-drainer Helm chart to deploy more drainers for a TiDB cluster, as described below:
Make sure that the PingCAP Helm repository is up to date:
```shell
helm repo update
helm search tidb-drainer -l
```

Get the default `values.yaml` file to facilitate customization:
```shell
helm inspect values pingcap/tidb-drainer --version=<chart-version> > values.yaml
```
Modify the `values.yaml` file to specify the source TiDB cluster and the downstream database of the drainer. Here is an example:

```yaml
clusterName: example-tidb
clusterVersion: v3.0.0
storageClassName: local-storage
storage: 10Gi
config: |
  detect-interval = 10
  [syncer]
  worker-count = 16
  txn-batch = 20
  disable-dispatch = false
  ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"
  safe-mode = false
  db-type = "tidb"
  [syncer.to]
  host = "slave-tidb"
  user = "root"
  password = ""
  port = 4000
```
The `clusterName` and `clusterVersion` must match the desired source TiDB cluster.

For complete configuration details, refer to TiDB Binlog Drainer Configurations in Kubernetes.
Deploy the drainer:
```shell
helm install pingcap/tidb-drainer --name=<release-name> --namespace=<namespace> --version=<chart-version> -f values.yaml
```
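To confirm the new drainer is running, a quick check like the following can be used; it assumes the drainer Pod name contains "drainer", as is typical for the StatefulSet created by this chart:

```shell
# Look for the drainer Pod of the new release and confirm it reaches the Running state.
kubectl get pods -n <namespace> | grep drainer
```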