Production Deployment from Binary Tarball
This guide provides installation instructions from a binary tarball on Linux. A complete TiDB cluster contains PD, TiKV, and TiDB. Start the components in the order PD -> TiKV -> TiDB, and stop them in the reverse order: TiDB -> TiKV -> PD.
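For example, assuming the components were started as background processes the way this guide shows later, a minimal shutdown sketch that respects this order could look like the following (the process names match the binaries shipped in the tarball; this is an illustration, not a prescribed step):

# Stop the cluster in the reverse order: TiDB -> TiKV -> PD.
pkill tidb-server    # on the TiDB node
pkill tikv-server    # on each TiKV node
pkill pd-server      # on each PD node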
See also local deployment and testing environment deployment.
Prepare
Before you start, see TiDB architecture and Software and Hardware Recommendations. Make sure the following requirements are satisfied:
Operating system
It is recommended to use RHEL/CentOS 7.3 or later as the operating system. In addition, review the following items:
Network and firewall
Operating system parameters
Database running user settings
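The authoritative values are in Software and Hardware Recommendations; purely as an illustrative sketch, commonly tuned kernel parameters and resource limits for TiDB/TiKV nodes look like this (run as root; the values below are assumptions to adjust for your environment):

# Kernel parameters (illustrative values)
echo "fs.file-max = 1000000" >> /etc/sysctl.conf
echo "net.core.somaxconn = 32768" >> /etc/sysctl.conf
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p

# Open-file limits for the database running user (illustrative values)
cat >> /etc/security/limits.conf <<EOF
tidb  soft  nofile  1000000
tidb  hard  nofile  1000000
EOF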
TiDB components and default ports
Before you deploy a TiDB cluster, see the required components and optional components.
TiDB database components (required)
See the following table for the default ports of the TiDB components:

Component    Default Port    Description
TiDB         4000            the communication port for the application and DBA tools
TiDB         10080           the communication port to report TiDB status
TiKV         20160           the TiKV communication port
TiKV         20180           the communication port to report TiKV status
PD           2379            the communication port between TiDB and PD (PD client port)
PD           2380            the inter-node communication port within the PD cluster
TiDB database components (optional)
See the following table for the default ports of the optional TiDB components:

Component        Default Port    Description
Prometheus       9090            the communication port for the Prometheus service
Pushgateway      9091            the aggregation and report port for TiDB, TiKV, and PD monitoring data
Node_exporter    9100            the communication port to report the system information of every TiDB cluster node
Grafana          3000            the port for the external Web monitoring service and client (browser) access
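If firewalld is enabled on RHEL/CentOS, the sketch below shows one way to open the default ports between cluster nodes. It is an illustration rather than a prescribed step; restrict the rules to your cluster's subnet according to your own security policy:

firewall-cmd --permanent --add-port=2379/tcp    # PD client port
firewall-cmd --permanent --add-port=2380/tcp    # PD peer port
firewall-cmd --permanent --add-port=20160/tcp   # TiKV port
firewall-cmd --permanent --add-port=20180/tcp   # TiKV status port
firewall-cmd --permanent --add-port=4000/tcp    # TiDB port
firewall-cmd --permanent --add-port=10080/tcp   # TiDB status port
firewall-cmd --reload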
Create a database running user account
Log in to the machine using the root user account and create a database running user account (tidb) using the following command:

useradd tidb -m

Switch the user from root to tidb by using the following command. You can use this tidb user account to deploy your TiDB cluster.

su - tidb
Download the official binary package
# Download the package.
$ wget https://download.pingcap.org/tidb-{version}-linux-amd64.tar.gz
$ wget https://download.pingcap.org/tidb-{version}-linux-amd64.sha256
# Check the file integrity. If the result is OK, the file is correct.
$ sha256sum -c tidb-{version}-linux-amd64.sha256
# Extract the package.
$ tar -xzf tidb-{version}-linux-amd64.tar.gz
$ cd tidb-{version}-linux-amd64
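To confirm that the extracted binaries are the version you expect, you can print each component's version; the -V flag below is the version flag these binaries typically accept:

$ ./bin/pd-server -V
$ ./bin/tikv-server -V
$ ./bin/tidb-server -V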
Multi-node cluster deployment
For the production environment, a multi-node cluster deployment is recommended. Before you begin, see Software and Hardware Recommendations.
Assuming that you have six nodes, you can deploy 3 PD instances, 3 TiKV instances, and 1 TiDB instance. See the following table for details:

Name     Host IP             Services
Node1    192.168.199.113     PD1, TiDB
Node2    192.168.199.114     PD2
Node3    192.168.199.115     PD3
Node4    192.168.199.116     TiKV1
Node5    192.168.199.117     TiKV2
Node6    192.168.199.118     TiKV3
Follow the steps below to start PD, TiKV, and TiDB:
Start PD on Node1, Node2, and Node3 in sequence.
# On Node1 (192.168.199.113):
./bin/pd-server --name=pd1 \
                --data-dir=pd \
                --client-urls="http://192.168.199.113:2379" \
                --peer-urls="http://192.168.199.113:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                -L "info" \
                --log-file=pd.log &

# On Node2 (192.168.199.114):
./bin/pd-server --name=pd2 \
                --data-dir=pd \
                --client-urls="http://192.168.199.114:2379" \
                --peer-urls="http://192.168.199.114:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                -L "info" \
                --log-file=pd.log &

# On Node3 (192.168.199.115):
./bin/pd-server --name=pd3 \
                --data-dir=pd \
                --client-urls="http://192.168.199.115:2379" \
                --peer-urls="http://192.168.199.115:2380" \
                --initial-cluster="pd1=http://192.168.199.113:2380,pd2=http://192.168.199.114:2380,pd3=http://192.168.199.115:2380" \
                -L "info" \
                --log-file=pd.log &
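Before moving on to TiKV, you can optionally confirm that the three PD instances have formed a cluster by querying PD's HTTP API; any of the PD client URLs works. This check is an addition to the original steps:

curl http://192.168.199.113:2379/pd/api/v1/members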
Start TiKV on Node4, Node5, and Node6.

# On Node4 (192.168.199.116):
./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.116:20160" \
                  --status-addr="192.168.199.116:20180" \
                  --data-dir=tikv \
                  --log-file=tikv.log &

# On Node5 (192.168.199.117):
./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.117:20160" \
                  --status-addr="192.168.199.117:20180" \
                  --data-dir=tikv \
                  --log-file=tikv.log &

# On Node6 (192.168.199.118):
./bin/tikv-server --pd="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --addr="192.168.199.118:20160" \
                  --status-addr="192.168.199.118:20180" \
                  --data-dir=tikv \
                  --log-file=tikv.log &
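Optionally, before starting TiDB, confirm that the three TiKV stores have registered with PD; in the output of the following PD API call, each store should report the "Up" state. This check is an addition to the original steps:

curl http://192.168.199.113:2379/pd/api/v1/stores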
Start TiDB on Node1.

# On Node1 (192.168.199.113):
./bin/tidb-server --store=tikv \
                  --path="192.168.199.113:2379,192.168.199.114:2379,192.168.199.115:2379" \
                  --log-file=tidb.log &

Use the MySQL client to connect to TiDB.
mysql -h 192.168.199.113 -P 4000 -u root -D test
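As a quick sanity check, you can also run a statement non-interactively; tidb_version() is a built-in TiDB function that reports the server build:

mysql -h 192.168.199.113 -P 4000 -u root -e "SELECT tidb_version();"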
For the deployment and use of TiDB monitoring services, see Monitor a TiDB Cluster.