Deploy a TiDB Cluster Using TiUP
TiUP is a cluster operation and maintenance tool introduced in TiDB 4.0. TiUP provides TiUP cluster, a cluster management component written in Golang. By using TiUP cluster, you can easily perform daily database operations, including deploying, starting, stopping, destroying, scaling, and upgrading a TiDB cluster, and manage TiDB cluster parameters.
TiUP supports deploying TiDB, TiFlash, TiDB Binlog, TiCDC, and the monitoring system. This document introduces how to deploy TiDB clusters of different topologies.
Step 1: Prerequisites and precheck
Make sure that you have read the following documents:
Step 2: Install TiUP on the control machine
Log in to the control machine using a regular user account (take the tidb user as an example). All the following TiUP installation and cluster management operations can be performed by the tidb user.
Install TiUP by executing the following command:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
Set the TiUP environment variables. First, redeclare the global environment variables:
source .bash_profile
Then confirm whether TiUP is installed:
which tiup
Install the TiUP cluster component:
tiup cluster
If TiUP is already installed, update the TiUP cluster component to the latest version:
tiup update --self && tiup update cluster
Expected output includes "Update successfully!".
Verify the current version of your TiUP cluster:
tiup --binary cluster
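If the tiup command cannot be found at this point, the PATH change made by the install script may not be active in your current shell. As a rough sketch, assuming the default installation path under the tidb user's home directory, you can check and re-export it manually:
# Assumed default location of the TiUP binary; adjust if your install path differs
echo $PATH | grep -q "$HOME/.tiup/bin" || export PATH="$HOME/.tiup/bin:$PATH"
which tiup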
Step 3: Edit the initialization configuration file
According to the intended cluster topology, you need to manually create and edit the cluster initialization configuration file.
The following examples cover the most common scenarios. You need to create a YAML configuration file (named topology.yaml, for example) according to the topology description and templates in the corresponding links; a minimal example is sketched after the list of scenarios below. For other scenarios, edit the configuration accordingly.
The following topology documents provide a cluster configuration template for each of the following common scenarios:
Minimal deployment topology
This is the basic cluster topology, including tidb-server, tikv-server, and pd-server. It is suitable for OLTP applications.
TiFlash deployment topology
This is to deploy TiFlash along with the minimal cluster topology. TiFlash is a columnar storage engine that is gradually becoming a standard part of the cluster topology. It is suitable for real-time HTAP applications.
TiCDC deployment topology
This is to deploy TiCDC along with the minimal cluster topology. TiCDC is a tool for replicating the incremental data of TiDB, introduced in TiDB 4.0. It supports multiple downstream platforms, such as TiDB, MySQL, and MQ. Compared with TiDB Binlog, TiCDC has lower latency and native high availability. After the deployment, start TiCDC and create the replication task using cdc cli.
TiDB Binlog deployment topology
This is to deploy TiDB Binlog along with the minimal cluster topology. TiDB Binlog is a widely used component for replicating incremental data. It provides near real-time backup and replication.
TiSpark deployment topology
This is to deploy TiSpark along with the minimal cluster topology. TiSpark is a component built for running Apache Spark on top of TiDB/TiKV to answer OLAP queries. Currently, TiUP cluster's support for TiSpark is still experimental.
Hybrid deployment topology
This is to deploy multiple instances on a single machine. You need to add extra configurations for the directory, port, resource ratio, and label.
Geo-distributed deployment topology
This topology takes the typical architecture of three data centers in two cities as an example. It introduces the geo-distributed deployment architecture and the key configuration that requires attention.
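For reference, the following is a minimal sketch of a topology.yaml for the basic (minimal) cluster topology above. The hosts, ports, and directories are placeholders that you must replace with your own values; the full set of options is in the corresponding topology template.
# Minimal topology sketch; replace the example hosts with your own machines
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 10.0.1.1
tidb_servers:
  - host: 10.0.1.2
tikv_servers:
  - host: 10.0.1.3
monitoring_servers:
  - host: 10.0.1.4
grafana_servers:
  - host: 10.0.1.4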
Step 4: Execute the deployment command
tiup cluster deploy tidb-test v4.0.16 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
In the above command:
- The name of the deployed TiDB cluster is tidb-test.
- The version of the TiDB cluster is v4.0.16. You can see other supported versions by running tiup list tidb.
- The initialization configuration file is topology.yaml.
- --user root: Log in to the target machine through the root key to complete the cluster deployment, or you can use other users with ssh and sudo privileges to complete the deployment.
- [-i] and [-p]: optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. [-i] is the private key of the root user (or other users specified by --user) that has access to the target machine. [-p] is used to input the user password interactively. See the examples after this list.
- If you need to specify the user group name to be created on the target machine, see this example.
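For example, assuming the topology file created in Step 3 is in the current directory, either of the following invocations deploys the same cluster; the first prompts for the password interactively, and the second uses a private key (the key path below is only an example):
tiup cluster deploy tidb-test v4.0.16 ./topology.yaml --user root -p
tiup cluster deploy tidb-test v4.0.16 ./topology.yaml --user root -i /home/root/.ssh/gcp_rsa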
At the end of the output log, you will see Deployed cluster `tidb-test` successfully. This indicates that the deployment is successful.
Step 5: Check the clusters managed by TiUP
tiup cluster list
TiUP supports managing multiple TiDB clusters. The command above outputs information of all the clusters currently managed by TiUP, including the name, deployment user, version, and secret key information:
Starting /home/tidb/.tiup/components/cluster/v1.0.0/cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
tidb-test tidb v4.0.16 /home/tidb/.tiup/storage/cluster/clusters/tidb-test /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
Step 6: Check the status of the deployed TiDB cluster
For example, execute the following command to check the status of the tidb-test cluster:
tiup cluster display tidb-test
Expected output includes the instance ID, role, host, listening port, status (because the cluster is not started yet, the status is Down/inactive), and directory information.
Step 7: Start the TiDB cluster
tiup cluster start tidb-test
If the output log includes Started cluster `tidb-test` successfully, the start is successful.
Step 8: Verify the running status of the TiDB cluster
Check the TiDB cluster status using TiUP:
tiup cluster display tidb-test
If the Status is Up in the output, the cluster status is normal.
Log in to the database by running the following command:
mysql -u root -h 10.0.1.4 -P 4000
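To run a quick sanity check without an interactive session, you can also execute a simple query directly (a sketch assuming the same host as above; a freshly deployed cluster typically has an empty root password, so adjust the credentials if you have changed them):
mysql -u root -h 10.0.1.4 -P 4000 -e "SELECT tidb_version();"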
In addition, you need to verify the status of the monitoring system, TiDB Dashboard, and the execution of simple SQL commands. For the specific operations, see Verify Cluster Status.
What's next
If you have deployed TiFlash along with the TiDB cluster, see the following documents:
If you have deployed TiCDC along with the TiDB cluster, see the following documents: