Deploy a TiFlash Cluster

This document introduces the environment requirements for deploying a TiFlash cluster and the deployment methods in different scenarios.

Recommended hardware configuration

This section provides hardware configuration recommendations based on different TiFlash deployment methods.

TiFlash standalone deployment

  • Minimum configuration: 32 VCore, 64 GB RAM, 1 SSD + n HDD
  • Recommended configuration: 48 VCore, 128 GB RAM, 1 NVMe SSD + n SSD

There is no limit on the number of deployment machines (at least one is required). A single machine can use multiple disks, but deploying multiple instances on a single machine is not recommended.

It is recommended to use an SSD disk to buffer the real-time data being replicated and written to TiFlash. The performance of this disk must not be lower than that of the hard disks used by TiKV. A higher-performance NVMe SSD is recommended, and its capacity should be no less than 10% of the node's total capacity; otherwise, this disk might become the bottleneck for the amount of data the node can handle.

For the other hard disks, you can use multiple HDDs or regular SSDs; better hard disks bring better performance.

TiFlash supports multi-disk deployment, so there is no need to use RAID.

TiFlash and TiKV are deployed on the same node

See Hardware recommendations for TiKV server, and increase the memory capacity and the number of CPU cores as needed.

It is not recommended to deploy TiFlash and TiKV on the same disk to prevent mutual interference.

Hard disk selection criteria are the same as for the TiFlash standalone deployment. The total capacity of the hard disks is roughly: the to-be-replicated data capacity of the entire TiKV cluster / the number of TiKV replicas * the number of TiFlash replicas.

For example, if the overall planned capacity of TiKV is 1 TB, each Region in TiKV has 3 replicas, and each Region in TiFlash has 2 replicas, then the recommended capacity of TiFlash is 1024 GB / 3 * 2 ≈ 683 GB. You can choose to replicate only part of the tables instead of all of them.
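
A minimal shell sketch of the same sizing arithmetic, using the example figures above (shell integer division truncates, so round up when provisioning):

    # recommended TiFlash capacity = TiKV planned capacity / TiKV replicas * TiFlash replicas
    echo $((1024 / 3 * 2))   # prints 682 (GB); provision about 683 GB or more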

TiDB version requirements

Currently, TiFlash is tested against the related components of TiDB 3.1 (including TiDB, PD, TiKV, and TiFlash). For how to download TiDB 3.1, refer to the following installation and deployment steps.

Install and deploy TiFlash

This section describes how to install and deploy TiFlash in the following scenarios:

Fresh TiFlash deployment

For fresh TiFlash deployment, it is recommended to deploy TiFlash by downloading an offline installation package. The steps are as follows:

  1. Download TiDB Ansible with the specified tag corresponding to the TiDB 3.1 version:

    git clone -b $tag https://github.com/pingcap/tidb-ansible.git
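
    For example, if the tag corresponding to your target release were v3.1.0 (a hypothetical value; check the repository's tags for the exact name), the command would be:

    git clone -b v3.1.0 https://github.com/pingcap/tidb-ansible.git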
  2. Download the binary files of TiDB, TiKV, PD, and other components:

    ansible-playbook local_prepare.yml
  3. Edit the inventory.ini configuration file. In addition to configuring for TiDB cluster deployment, you also need to specify the IPs of your TiFlash servers under the [tiflash_servers] section (currently only IPs are supported; domain names are not supported).

    If you want to customize the deployment directory, configure the data_dir parameter. If you want multi-disk deployment, separate the deployment directories with commas (note that the tidb user needs write permissions on the parent directory of each data_dir directory; see the sketch below). For example:

    [tiflash_servers]
    192.168.1.1 data_dir=/data1/tiflash/data,/data2/tiflash/data
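
    To grant those write permissions, the setup might look like this (a minimal sketch, assuming the deployment user is tidb and the data disks are mounted at the hypothetical points /data1 and /data2):

    sudo mkdir -p /data1/tiflash/data /data2/tiflash/data
    sudo chown -R tidb:tidb /data1/tiflash /data2/tiflash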
  4. Complete the remaining steps of the TiDB Ansible deployment process.

  5. To verify that TiFlash has been successfully deployed:

    1. Execute the store command in pd-ctl, connecting to PD with the -u http://your-pd-address option (resources/bin in the tidb-ansible directory includes the pd-ctl binary file).
    2. Observe that the status of the deployed TiFlash instance is "Up".
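
    For example, using pd-ctl in single-command (detach) mode, run from the tidb-ansible directory (a sketch; 192.168.1.100:2379 is a hypothetical PD address):

    ./resources/bin/pd-ctl -u http://192.168.1.100:2379 -d store

    In the JSON output, TiFlash stores carry the engine: tiflash label; confirm that the state_name field of each deployed TiFlash store is "Up".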

Add TiFlash component to an existing TiDB cluster

  1. First, confirm that your current TiDB version supports TiFlash. Otherwise, you need to upgrade your TiDB cluster to 3.1 RC or later according to the TiDB Upgrade Guide.

  2. Execute the config set enable-placement-rules true command in pd-ctl (resources/bin in the tidb-ansible directory includes the pd-ctl binary file) to enable PD's Placement Rules feature.
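
    As with the store check described later, pd-ctl can run this in single-command (detach) mode from the tidb-ansible directory (a sketch; 192.168.1.100:2379 is a hypothetical PD address):

    ./resources/bin/pd-ctl -u http://192.168.1.100:2379 -d config set enable-placement-rules true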

  3. Edit the inventory.ini configuration file. You need to specify the IPs of your TiFlash servers under the [tiflash_servers] section (currently only IPs are supported; domain names are not supported).

    If you want to customize the deployment directory, configure the data_dir parameter. If you want multi-disk deployment, separate the deployment directories with commas (note that the tidb user needs write permissions on the parent directory of each data_dir directory, as sketched in the fresh-deployment steps above). For example:

    [tiflash_servers]
    192.168.1.1 data_dir=/data1/tiflash/data,/data2/tiflash/data
  4. Execute the following ansible-playbook commands to deploy TiFlash:

    ansible-playbook local_prepare.yml &&
    ansible-playbook -t tiflash deploy.yml &&
    ansible-playbook -t tiflash start.yml &&
    ansible-playbook rolling_update_monitor.yml
  5. To verify that TiFlash has been successfully deployed:

    1. Execute the store command in pd-ctl, connecting to PD with the -u http://your-pd-address option (resources/bin in the tidb-ansible directory includes the pd-ctl binary file).
    2. Observe that the status of the deployed TiFlash instance is "Up".