Software and Hardware Recommendations

As an open-source distributed SQL database with high performance, TiDB can be deployed on Intel architecture servers, ARM architecture servers, and major virtualization environments, and runs well. TiDB supports most major hardware networks and mainstream Linux operating systems.

OS and platform requirements

| Operating systems | Supported CPU architectures |
| --- | --- |
| Red Hat Enterprise Linux 8.4 or a later 8.x version | x86_64; ARM 64 |
| Red Hat Enterprise Linux 7.3 or a later 7.x version; CentOS 7.3 or a later 7.x version | x86_64; ARM 64 |
| Amazon Linux 2 | x86_64; ARM 64 |
| Kylin Euler V10 SP1/SP2 | x86_64; ARM 64 |
| UnionTech OS (UOS) V20 | x86_64; ARM 64 |
| openEuler 22.03 LTS SP1 | x86_64 |
| macOS 12 (Monterey) or later | x86_64; ARM 64 |
| Oracle Enterprise Linux 7.3 or a later 7.x version | x86_64 |
| Ubuntu LTS 18.04 or later | x86_64 |
| CentOS 8 Stream | x86_64; ARM 64 |
| Debian 9 (Stretch) or later | x86_64 |
| Fedora 35 or later | x86_64 |
| openSUSE Leap later than v15.3 (not including Tumbleweed) | x86_64 |
| SUSE Linux Enterprise Server 15 | x86_64 |

Libraries required for compiling and running TiDB

| Libraries required for compiling and running TiDB | Version |
| --- | --- |
| Golang | 1.20 or later |
| Rust | nightly-2022-07-31 or later |
| GCC | 7.x |
| LLVM | 13.0 or later |

Library required for running TiDB: glibc (version 2.28-151.el8)

Docker image dependencies

The following CPU architectures are supported:
  • x86_64
  • ARM 64

Software recommendations

Control machine

| Software | Version |
| --- | --- |
| sshpass | 1.06 or later |
| TiUP | 1.5.0 or later |

Target machines

| Software | Version |
| --- | --- |
| sshpass | 1.06 or later |
| numa | 2.0.12 or later |
| tar | any |

Server recommendations

You can deploy and run TiDB on 64-bit generic hardware server platforms with the Intel x86-64 architecture or the ARM architecture. The requirements and recommendations for server hardware configuration (excluding the resources occupied by the operating system itself) in development, test, and production environments are as follows:

Development and test environments

| Component | CPU | Memory | Local Storage | Network | Number of Instances (Minimum Requirement) |
| --- | --- | --- | --- | --- | --- |
| TiDB | 8 core+ | 16 GB+ | No special requirements | Gigabit network card | 1 (can be deployed on the same machine with PD) |
| PD | 4 core+ | 8 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB) |
| TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3 |
| TiFlash | 32 core+ | 64 GB+ | SSD, 200 GB+ | Gigabit network card | 1 |
| TiCDC | 8 core+ | 16 GB+ | SAS, 200 GB+ | Gigabit network card | 1 |

Production environment

| Component | CPU | Memory | Hard Disk Type | Network | Number of Instances (Minimum Requirement) |
| --- | --- | --- | --- | --- | --- |
| TiDB | 16 core+ | 48 GB+ | SSD | 10 Gigabit network card (2 preferred) | 2 |
| PD | 8 core+ | 16 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 |
| TiKV | 16 core+ | 64 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 |
| TiFlash | 48 core+ | 128 GB+ | 1 or more SSDs | 10 Gigabit network card (2 preferred) | 2 |
| TiCDC | 16 core+ | 64 GB+ | SSD | 10 Gigabit network card (2 preferred) | 2 |
| Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1 |

Before you deploy TiFlash, note the following items:

  • TiFlash can be deployed on multiple disks.
  • It is recommended to use a high-performance SSD as the first disk of the TiFlash data directory to buffer the real-time replication of TiKV data. This disk should perform no worse than the TiKV disk, for example, a PCI-E SSD. Its capacity should be no less than 10% of the total capacity; otherwise, it might become the bottleneck of this node. You can deploy ordinary SSDs for the other disks, but note that a better PCI-E SSD brings better performance.
  • It is recommended to deploy TiFlash on nodes different from TiKV. If you must deploy TiFlash and TiKV on the same node, increase the number of CPU cores and memory, and try to deploy TiFlash and TiKV on different disks to avoid interfering with each other.
  • The total capacity of the TiFlash disks is calculated as follows: the data volume of the entire TiKV cluster to be replicated / the number of TiKV replicas * the number of TiFlash replicas. For example, if the overall planned capacity of TiKV is 1 TB, the number of TiKV replicas is 3, and the number of TiFlash replicas is 2, then the recommended total capacity of TiFlash is 1024 GB / 3 * 2. You can replicate only the data of some tables; in that case, determine the TiFlash capacity according to the data volume of the tables to be replicated.
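The replication-capacity formula above can be checked with a quick calculation. This is only an illustrative sketch: the function name and the example figures are for demonstration, not part of any TiDB tooling.

```python
def tiflash_capacity_gb(tikv_data_gb, tikv_replicas, tiflash_replicas):
    """Recommended total TiFlash capacity:
    TiKV data volume to replicate / TiKV replicas * TiFlash replicas."""
    return tikv_data_gb / tikv_replicas * tiflash_replicas

# The example from the text: 1 TB (1024 GB) of TiKV data,
# 3 TiKV replicas, and 2 TiFlash replicas.
print(round(tiflash_capacity_gb(1024, 3, 2)))  # 683 (GB, rounded)
```

If you replicate only some tables, pass the data volume of those tables instead of the whole cluster's.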

Before you deploy TiCDC, note that it is recommended to deploy TiCDC on PCIe-SSD disks larger than 1 TB.

Network requirements

As an open-source distributed SQL database, TiDB requires the following network port configuration to run. Based on how TiDB is deployed in the actual environment, the administrator can open the relevant ports on the network side and the host side.

| Component | Default Port | Description |
| --- | --- | --- |
| TiDB | 4000 | the communication port for the application and DBA tools |
| TiDB | 10080 | the communication port to report TiDB status |
| TiKV | 20160 | the TiKV communication port |
| TiKV | 20180 | the communication port to report TiKV status |
| PD | 2379 | the communication port between TiDB and PD |
| PD | 2380 | the inter-node communication port within the PD cluster |
| TiFlash | 9000 | the TiFlash TCP service port |
| TiFlash | 3930 | the TiFlash RAFT and Coprocessor service port |
| TiFlash | 20170 | the TiFlash Proxy service port |
| TiFlash | 20292 | the port for Prometheus to pull TiFlash Proxy metrics |
| TiFlash | 8234 | the port for Prometheus to pull TiFlash metrics |
| Pump | 8250 | the Pump communication port |
| Drainer | 8249 | the Drainer communication port |
| TiCDC | 8300 | the TiCDC communication port |
| Monitoring | 9090 | the communication port for the Prometheus service |
| Monitoring | 12020 | the communication port for the NgMonitoring service |
| Node_exporter | 9100 | the communication port to report the system information of every TiDB cluster node |
| Blackbox_exporter | 9115 | the Blackbox_exporter communication port, used to monitor the ports in the TiDB cluster |
| Grafana | 3000 | the port for the external Web monitoring service and client (browser) access |
| Alertmanager | 9093 | the port for the alert web service |
| Alertmanager | 9094 | the alert communication port |
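On hosts that use firewalld, the port list can be turned into open-port commands. The sketch below only prints the commands rather than running them; the port-to-component mapping is copied from the table above, and in practice you would keep only the entries for the components each host actually runs, then apply the output and run `firewall-cmd --reload`.

```python
# Default TiDB cluster ports, copied from the table above.
PORTS = {
    "TiDB": [4000, 10080],
    "TiKV": [20160, 20180],
    "PD": [2379, 2380],
    "TiFlash": [9000, 3930, 20170, 20292, 8234],
    "Pump": [8250],
    "Drainer": [8249],
    "TiCDC": [8300],
    "Monitoring": [9090, 12020],
    "Node_exporter": [9100],
    "Blackbox_exporter": [9115],
    "Grafana": [3000],
    "Alertmanager": [9093, 9094],
}

# Print one firewall-cmd invocation per port.
for component, ports in PORTS.items():
    for port in ports:
        print(f"firewall-cmd --permanent --add-port={port}/tcp  # {component}")
```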

Disk space requirements

| Component | Disk space requirement | Healthy disk usage |
| --- | --- | --- |
| TiDB | At least 30 GB for the log disk. Starting from v6.5.0, Fast Online DDL (controlled by the tidb_ddl_enable_fast_reorg variable) is enabled by default to accelerate DDL operations, such as adding indexes. If DDL operations on large objects exist in your application, it is highly recommended to prepare additional SSD disk space for TiDB (100 GB or more). For detailed configuration instructions, see Set a temporary space for a TiDB instance. | Lower than 90% |
| PD | At least 20 GB each for the data disk and the log disk | Lower than 90% |
| TiKV | At least 100 GB each for the data disk and the log disk | Lower than 80% |
| TiFlash | At least 100 GB for the data disk and at least 30 GB for the log disk | Lower than 80% |
| TiUP | Control machine: no more than 1 GB is required to deploy a TiDB cluster of a single version; more space is required if TiDB clusters of multiple versions are deployed. Deployment servers (machines where the TiDB components run): TiFlash occupies about 700 MB and other components (such as PD, TiDB, and TiKV) occupy about 200 MB each; during cluster deployment, TiUP requires less than 1 MB of temporary space (the /tmp directory) to store temporary files. | N/A |
| NgMonitoring | Conprof: 3 x 1 GB x number of components (each component occupies about 1 GB per day, 3 days in total) + 20 GB reserved space. Top SQL: 30 x 50 MB x number of components (each component occupies about 50 MB per day, 30 days in total). Conprof and Top SQL share the reserved space. | N/A |
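The NgMonitoring sizing rules above can be combined into a rough disk estimate. This is an illustrative sketch only; the function name and the example component count are hypothetical.

```python
def ngmonitoring_space_gb(num_components):
    """Rough NgMonitoring disk estimate from the sizing rules above:
    Conprof: ~1 GB per component per day, kept for 3 days, plus 20 GB reserved;
    Top SQL: ~50 MB per component per day, kept for 30 days.
    Conprof and Top SQL share the reserved space, so it is counted once."""
    conprof_gb = 3 * 1 * num_components + 20      # days * GB/day * components + reserve
    top_sql_gb = 30 * 50 * num_components / 1024  # days * MB/day * components, in GB
    return conprof_gb + top_sql_gb

# For example, a cluster with 8 monitored components:
print(round(ngmonitoring_space_gb(8), 1))  # 55.7 (GB)
```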

Web browser requirements

TiDB relies on Grafana to provide visualization of database metrics. A recent version of Internet Explorer, Chrome, or Firefox with JavaScript enabled is sufficient.