TiDB FAQ

This document lists the Most Frequently Asked Questions about TiDB.

About TiDB

TiDB introduction and architecture

What is TiDB?

TiDB is a distributed SQL database that features in horizontal scalability, high availability and consistent distributed transactions. It also enables you to use MySQL's SQL syntax and protocol to manage and retrieve data.

What is TiDB's architecture?

The TiDB cluster has three components: the TiDB server, the PD (Placement Driver) server, and the TiKV server. For more details, see TiDB architecture.

Is TiDB based on MySQL?

No. TiDB supports MySQL syntax and protocol, but it is a new open source database that is developed and maintained by PingCAP, Inc.

What is the respective responsibility of TiDB, TiKV and PD (Placement Driver)?

  • TiDB works as the SQL computing layer, mainly responsible for parsing SQL, generating the query plan, and producing the executor.
  • TiKV works as a distributed Key-Value storage engine, used to store the real data. In short, TiKV is the storage engine of TiDB.
  • PD works as the cluster manager of TiDB, which manages TiKV metadata, allocates timestamps, and makes decisions for data placement and load balancing.

Is it easy to use TiDB?

Yes, it is. When all the required services are started, you can use TiDB as easily as a MySQL server. You can replace MySQL with TiDB to power your applications without changing a single line of code in most cases. You can also manage TiDB using the popular MySQL management tools.

How is TiDB compatible with MySQL?

Currently, TiDB supports the majority of MySQL 5.7 syntax, but does not support triggers, stored procedures, user-defined functions, or foreign keys. For more details, see Compatibility with MySQL.

If you use the MySQL 8.0 client and it fails to connect to TiDB, try to add the default-auth and default-character-set options:

mysql -h 127.0.0.1 -u root -P 4000 --default-auth=mysql_native_password --default-character-set=utf8

This problem occurs because MySQL 8.0 changes the default authentication plugin from the one used in MySQL 5.7. To solve it, add the options above to specify the old authentication method.

How is TiDB highly available?

TiDB is self-healing. All of the three components, TiDB, TiKV and PD, can tolerate failures of some of their instances. With its strong consistency guarantee, whether it’s data machine failures or even downtime of an entire data center, your data can be recovered automatically. For more information, see TiDB architecture.

How is TiDB strongly consistent?

TiDB implements Snapshot Isolation consistency, which it advertises as REPEATABLE-READ for compatibility with MySQL. Data is redundantly copied between TiKV nodes using the Raft consensus algorithm to ensure recoverability should a node failure occur.

At the bottom layer, TiKV uses a model of replication log + State Machine to replicate data. For the write requests, the data is written to a Leader and the Leader then replicates the command to its Followers in the form of log. When the majority of nodes in the cluster receive this log, this log is committed and can be applied into the State Machine.

Does TiDB support distributed transactions?

Yes. TiDB distributes transactions across your cluster, whether it is a few nodes in a single location or many nodes across multiple datacenters.

Inspired by Google's Percolator, the transaction model in TiDB is mainly a two-phase commit protocol with some practical optimizations. This model relies on a timestamp allocator to assign the monotone increasing timestamp for each transaction, so conflicts can be detected. PD works as the timestamp allocator in a TiDB cluster.

What programming language can I use to work with TiDB?

Any language supported by MySQL client or driver.

Can I use other Key-Value storage engines with TiDB?

Yes. TiKV and TiDB support many popular standalone storage engines, such as GolevelDB and BoltDB. If the storage engine is a KV engine that supports transactions and it provides a client that meets the interface requirement of TiDB, then it can connect to TiDB.

The architecture of TiDB guarantees that it fully supports geo-distribution and multi-active deployment. Your data and applications are always on. Outages are transparent to your applications, and your data recovers automatically. Operation depends on network latency and stability; it is recommended to keep the latency within 5 ms. Currently, we already have similar use cases. For details, contact info@pingcap.com.

Does TiDB provide any other knowledge resource besides the documentation?

Currently, TiDB documentation is the most important and timely way to get knowledge of TiDB. In addition, we also have some technical communication groups. If you have any needs, contact info@pingcap.com.

What are the MySQL variables that TiDB is compatible with?

See System Variables.

The order of results is different from MySQL when ORDER BY is omitted

It is not a bug. The default order of records depends on various situations without any guarantee of consistency.

The order of results in MySQL might appear stable because queries are executed in a single thread. However, it is common that query plans can change when upgrading to new versions. It is recommended to use ORDER BY whenever an order of results is desired.

The reference can be found in ISO/IEC 9075:1992, Database Language SQL- July 30, 1992, which states as follows:

If an <order by clause> is not specified, then the table specified by the <cursor specification> is T and the ordering of rows in T is implementation-dependent. In the following two queries, both results are considered legal:

> select * from t;
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    2 |    2 |
+------+------+
2 rows in set (0.00 sec)

> select * from t; -- the order of results is not guaranteed
+------+------+
| a    | b    |
+------+------+
|    2 |    2 |
|    1 |    1 |
+------+------+
2 rows in set (0.00 sec)

A statement is also considered non-deterministic if the list of columns used in the ORDER BY is non-unique. In the following example, the column a has duplicate values. Thus, only ORDER BY a, b would be guaranteed deterministic:

> select * from t order by a;
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    2 |    1 |
|    2 |    2 |
+------+------+
3 rows in set (0.00 sec)

> select * from t order by a; -- the order of column a is guaranteed, but b is not
+------+------+
| a    | b    |
+------+------+
|    1 |    1 |
|    2 |    2 |
|    2 |    1 |
+------+------+
3 rows in set (0.00 sec)

Does TiDB support SELECT FOR UPDATE?

Yes. When using pessimistic locking (the default since TiDB v3.0), the SELECT FOR UPDATE execution behaves similarly to MySQL.

When using optimistic locking, SELECT FOR UPDATE does not lock data when the transaction is started, but checks conflicts when the transaction is committed. If the check reveals conflicts, the committing transaction rolls back.
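
For example, a minimal sketch (the orders table, its columns, and the values are hypothetical) of where each mode reacts to a conflict:

BEGIN;
SELECT * FROM orders WHERE id = 1 FOR UPDATE;   -- pessimistic mode locks the row here, blocking other writers
UPDATE orders SET status = 'paid' WHERE id = 1;
COMMIT;   -- optimistic mode only checks for write conflicts here; the transaction rolls back if one is found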

Can the codec of TiDB guarantee that the UTF-8 string is memcomparable? Is there any coding suggestion if our key needs to support UTF-8?

The character sets of TiDB use UTF-8 by default and currently only support UTF-8. The string of TiDB uses the memcomparable format.

What is the length limit for the TiDB user name?

32 characters at most.

What is the maximum number of statements in a transaction?

5000 at most.

Does TiDB support XA?

No. The JDBC driver of TiDB is MySQL Connector/J. When using Atomikos, set the data source to type="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource". TiDB does not support the connection with MySQL JDBC XADataSource. MySQL JDBC XADataSource only works for MySQL (for example, using DML to modify the redo log).

After you configure the two data sources of Atomikos, set the JDBC drivers to XA. When Atomikos operates TM and RM (DB), Atomikos sends commands including XA to the JDBC layer. Taking MySQL as an example, when XA is enabled in the JDBC layer, JDBC sends a series of XA logic operations to InnoDB, including using DML to change the redo log. This is the two-phase commit operation. The current TiDB version does not support the upper application layer JTA/XA and does not parse XA operations sent by Atomikos.

As a standalone database, MySQL can only implement cross-database transactions using XA. TiDB, by contrast, supports distributed transactions using the Google Percolator transaction model, with better performance and stability than XA, so TiDB does not support XA and has no need to.

Does show processlist display the system process ID?

The display content of TiDB show processlist is almost the same as that of MySQL show processlist. TiDB show processlist does not display the system process ID. The ID that it displays is the current session ID. The differences between TiDB show processlist and MySQL show processlist are as follows:

  • As TiDB is a distributed database, the tidb-server instance is a stateless engine for parsing and executing the SQL statements (for details, see TiDB architecture). show processlist displays the session list executed in the tidb-server instance that the user logs in to from the MySQL client, not the list of all the sessions running in the cluster. But MySQL is a standalone database and its show processlist displays all the SQL statements executed in MySQL.
  • The State column in TiDB is not continually updated during query execution. As TiDB supports parallel query, each statement may be in multiple states at once, and thus it is difficult to simplify to a single value.

How to modify the user password and privilege?

To modify the user password in TiDB, it is recommended to use set password for 'root'@'%' = '0101001'; or alter user, rather than updating mysql.user directly, which might lead to the password on other nodes not being refreshed in time.

It is recommended to use the official standard statements when modifying the user password and privilege. For details, see TiDB user account management.
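
For example, either of the following works (the user name and password are placeholders):

SET PASSWORD FOR 'dba_user'@'%' = 'new_password';
ALTER USER 'dba_user'@'%' IDENTIFIED BY 'new_password';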

Why is the auto-increment ID of the later inserted data smaller than that of the earlier inserted data in TiDB?

The auto-increment ID feature in TiDB only guarantees that IDs are incremental and unique; it does not guarantee that they are allocated sequentially. Currently, TiDB allocates IDs in batches, so if data is inserted into multiple TiDB servers simultaneously, the allocated IDs are not sequential. When multiple threads concurrently insert data into multiple tidb-server instances, the auto-increment ID of the later inserted data may be smaller. TiDB allows specifying AUTO_INCREMENT for an integer field, but allows only one AUTO_INCREMENT field in a single table. For details, see MySQL Compatibility.

How do I modify the sql_mode in TiDB?

TiDB supports modifying sql_mode as a system variable, as in MySQL. Currently, TiDB does not permit setting sql_mode in a configuration file, but system variable changes made with SET GLOBAL propagate to all TiDB servers in the cluster and persist across restarts.
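
For example, a minimal sketch (the mode list shown is only an example):

SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';
SELECT @@GLOBAL.sql_mode;   -- verify the new value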

Does TiDB support modifying the MySQL version string of the server to a specific one that is required by the security vulnerability scanning tool?

Since v3.0.8, TiDB supports modifying the version string of the server by modifying server-version in the configuration file. When you deploy TiDB using TiDB Ansible, you can also specify the proper version string by configuring server-version in the conf/tidb.yml configuration file to avoid the failure of security vulnerability scan.
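
A sketch of the setting in the TOML form of the TiDB configuration file (the version string is only an example):

# tidb.toml
server-version = "5.7.25-TiDB-v3.0.8"   # example value; set it to the string your scanner expects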

What authentication protocols does TiDB support? What's the process?

  • Like MySQL, TiDB supports the SASL protocol for user login authentication and password processing.

  • When the client connects to TiDB, the challenge-response authentication mode starts. The process is as follows:

    1. The client connects to the server.
    2. The server sends a random string challenge to the client.
    3. The client sends the username and response to the server.
    4. The server verifies the response.

TiDB techniques

TiKV for data storage

See TiDB Internal (I) - Data Storage.

TiDB for data computing

See TiDB Internal (II) - Computing.

PD for scheduling

See TiDB Internal (III) - Scheduling.

Install, deploy and upgrade

Prepare

Operating system version requirements

Linux OS Platform        | Version
Red Hat Enterprise Linux | 7.3 or later
CentOS                   | 7.3 or later
Oracle Enterprise Linux  | 7.3 or later

As an open source distributed NewSQL database with high performance, TiDB can be deployed on Intel architecture servers and in major virtualization environments, and runs well. TiDB supports most major hardware networks and Linux operating systems. For details, see Software and Hardware Requirements for deploying TiDB.

Server requirements

You can deploy and run TiDB on the 64-bit generic hardware server platform in the Intel x86-64 architecture. The requirements and recommendations about server hardware configuration for development, testing and production environments are as follows:

Development and testing environments
Component | CPU     | Memory | Local Storage | Network              | Instance Number (Minimum Requirement)
TiDB      | 8 core+ | 16 GB+ | SAS, 200 GB+  | Gigabit network card | 1 (can be deployed on the same machine with PD)
PD        | 8 core+ | 16 GB+ | SAS, 200 GB+  | Gigabit network card | 1 (can be deployed on the same machine with TiDB)
TiKV      | 8 core+ | 32 GB+ | SAS, 200 GB+  | Gigabit network card | 3
Total Server Number: 4

Production environment
Component | CPU      | Memory | Hard Disk Type | Network                               | Instance Number (Minimum Requirement)
TiDB      | 16 core+ | 48 GB+ | SAS            | 10 Gigabit network card (2 preferred) | 2
PD        | 8 core+  | 16 GB+ | SSD            | 10 Gigabit network card (2 preferred) | 3
TiKV      | 16 core+ | 48 GB+ | SSD            | 10 Gigabit network card (2 preferred) | 3
Monitor   | 8 core+  | 16 GB+ | SAS            | Gigabit network card                  | 1
Total Server Number: 9

What's the purpose of using two 10-gigabit network cards?

As a distributed cluster, TiDB has a high demand on time synchronization, especially for PD, because PD needs to distribute unique timestamps. If the time on the PD servers is not consistent, switching the PD server takes a longer wait. Bonding two network cards guarantees the stability of data transmission, and 10 gigabit guarantees the transmission speed. Gigabit network cards easily become a bottleneck, therefore it is strongly recommended to use 10-gigabit network cards.

Is it feasible if we don't use RAID for SSD?

If the resources are adequate, it is recommended to use RAID 10 for SSD. If the resources are inadequate, it is acceptable not to use RAID for SSD.

  • TiDB has a high requirement on CPU and memory. If you need to enable binlog, the local disk space should be increased based on the estimated service volume and the time required for the GC operation. But an SSD disk is not a must.
  • PD stores the cluster metadata and has frequent Read and Write requests. It demands a high I/O disk. A disk of low performance will affect the performance of the whole cluster. It is recommended to use SSD disks. In addition, a larger number of Regions has a higher requirement on CPU and memory.
  • TiKV has a high requirement on CPU, memory and disk. It is required to use SSD.

For details, see Software and Hardware Recommendations.

Install and deploy

See Ansible Deployment.

Why does the modified toml configuration for TiKV/PD not take effect?

You need to set the --config parameter in TiKV/PD to make the toml configuration effective. TiKV/PD does not read the configuration by default. Currently, this issue only occurs when deploying using Binary. For TiKV, edit the configuration and restart the service. For PD, the configuration file is only read when PD is started for the first time, after which you can modify the configuration using pd-ctl. For details, see PD Control User Guide.
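
For example, when starting the binaries manually, pass --config explicitly (the paths below are assumptions):

./bin/pd-server --config=conf/pd.toml        # other startup flags omitted
./bin/tikv-server --config=conf/tikv.toml    # other startup flags omitted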

It is recommended to deploy the monitoring machine on a standalone server, with an 8-core+ CPU, 16 GB+ memory, and a 500 GB+ hard disk.

Why can't the monitor display all metrics?

Check the time difference between the machine time of the monitor and the time within the cluster. If it is large, you can correct the time and the monitor will display all the metrics.

What is the function of supervise/svc/svstat service?
  • supervise: the daemon process, to manage the processes
  • svc: to start and stop the service
  • svstat: to check the process status
Description of inventory.ini variables
Variable              | Description
cluster_name          | the name of a cluster, adjustable
tidb_version          | the version of TiDB, configured by default in TiDB Ansible branches
deployment_method     | the method of deployment, binary by default, Docker optional
process_supervision   | the supervision way of processes, systemd by default, supervise optional
timezone              | the timezone of the managed node, adjustable, Asia/Shanghai by default, used with the set_timezone variable
set_timezone          | to edit the timezone of the managed node, True by default; False means closing
enable_elk            | currently not supported
enable_firewalld      | to enable the firewall, closed by default
enable_ntpd           | to monitor the NTP service of the managed node, True by default; do not close it
machine_benchmark     | to monitor the disk IOPS of the managed node, True by default; do not close it
set_hostname          | to edit the hostname of the managed node based on the IP, False by default
enable_binlog         | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the zookeeper_addrs variable
zookeeper_addrs       | the ZooKeeper address of the binlog Kafka cluster
enable_slow_query_log | to record the slow query log of TiDB into a separate file ({{ deploy_dir }}/log/tidb_slow_query.log), False by default (recorded into the TiDB log)
deploy_without_tidb   | the Key-Value mode, deploy only PD, TiKV and the monitoring service, not TiDB; set the IP of the tidb_servers host group to null in the inventory.ini file

Deploy TiDB offline using TiDB Ansible

It is not recommended to deploy TiDB offline using TiDB Ansible. If the Control Machine has no access to external network, you can deploy TiDB offline using TiDB Ansible. For details, see Offline Deployment Using TiDB Ansible.

How to deploy TiDB quickly using Docker Compose on a single machine?

You can use Docker Compose to build a TiDB cluster locally, including the cluster monitoring components. You can also customize the version and number of instances for each component, and customize the configuration file. You can only use this deployment method for testing and development environments. For details, see Building the Cluster Using Docker Compose.

How to separately record the slow query log in TiDB? How to locate the slow query SQL statement?

  1. The slow query definition for TiDB is in the conf/tidb.yml configuration file of tidb-ansible. The slow-threshold: 300 parameter is used to configure the threshold value of the slow query (unit: millisecond).

    The slow query log is recorded in tidb.log by default. If you want to generate a slow query log file separately, set enable_slow_query_log in the inventory.ini configuration file to True.

    Then run ansible-playbook rolling_update.yml --tags=tidb to perform a rolling update on the tidb-server instance. After the update is finished, the tidb-server instance will record the slow query log in tidb_slow_query.log. (A combined sketch of these settings follows this list.)

  2. If a slow query occurs, you can locate the tidb-server instance where the slow query is and the slow query time point using Grafana and find the SQL statement information recorded in the log on the corresponding node.

  3. In addition to the log, you can also view the slow query using the admin show slow command. For details, see admin show slow command.
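
Putting the steps above together, a minimal sketch of the settings involved (the exact file layout may differ between tidb-ansible versions):

# inventory.ini: write the slow query log into a separate file
enable_slow_query_log = True

# conf/tidb.yml: slow query threshold, in milliseconds
slow-threshold: 300

# apply the change to the tidb-server instances
ansible-playbook rolling_update.yml --tags=tidb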

How to add the label configuration if the label of TiKV was not configured when I deployed the TiDB cluster for the first time?

The label configuration of TiDB is related to the cluster deployment architecture. It is important because it is the basis for PD to perform global management and scheduling. If you did not configure labels when deploying the cluster previously, adjust the deployment structure by manually adding the location-labels information using the PD management tool pd-ctl, for example, config set location-labels "zone,rack,host" (configure it based on your actual label hierarchy).

For the usage of pd-ctl, see PD Control Instruction.
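
Note that each TiKV instance also reports its own labels, which PD matches against location-labels; a hedged sketch of the startup flag (label names and values are placeholders):

tikv-server --labels zone=z1,rack=r1,host=h1    # other startup flags omitted; values are examples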

Why does the dd command for the disk test use the oflag=direct option?

The Direct mode wraps the Write request into the I/O command and sends this command to the disk to bypass the file system cache and directly test the real I/O Read/Write performance of the disk.

How to use the fio command to test the disk performance of the TiKV instance?

  • Random Read test:

    ./fio -ioengine=psync -bs=32k -fdatasync=1 -thread -rw=randread -size=10G -filename=fio_randread_test.txt -name='fio randread test' -iodepth=4 -runtime=60 -numjobs=4 -group_reporting --output-format=json --output=fio_randread_result.json
  • The mix test of sequential Write and random Read:

    ./fio -ioengine=psync -bs=32k -fdatasync=1 -thread -rw=randrw -percentage_random=100,0 -size=10G -filename=fio_randread_write_test.txt -name='fio mixed randread and sequential write test' -iodepth=4 -runtime=60 -numjobs=4 -group_reporting --output-format=json --output=fio_randread_write_test.json

Error UNREACHABLE! "msg": "Failed to connect to the host via ssh: " when deploying TiDB using TiDB Ansible

Two possible reasons and solutions:

  • The SSH mutual trust is not configured as required. It’s recommended to follow the steps described in the official document and check whether it is successfully configured using ansible -i inventory.ini all -m shell -a 'whoami' -b.
  • If it involves the scenario where a single server is assigned multiple roles, for example, the mixed deployment of multiple components or multiple TiKV instances are deployed on a single server, this error might be caused by the SSH reuse mechanism. You can use the option of ansible … -f 1 to avoid this error.

Upgrade

How to perform rolling updates using TiDB Ansible?

  • Apply rolling updates to the TiKV node (only update the TiKV service).

    ansible-playbook rolling_update.yml --tags=tikv
  • Apply rolling updates to all services.

    ansible-playbook rolling_update.yml

How are the rolling updates done?

When you apply rolling updates to the TiDB services, the running application is not affected. You need to configure the minimum cluster topology (TiDB * 2, PD * 3, TiKV * 3). If the Pump/Drainer service is involved in the cluster, it is recommended to stop Drainer before rolling updates. When you update TiDB, Pump is also updated.

How to upgrade when I deploy TiDB using Binary?

It is not recommended to deploy TiDB using Binary. The support for upgrading using Binary is not as friendly as using TiDB Ansible. It is recommended to deploy TiDB using TiDB Ansible.

Should I upgrade TiKV or all components generally?

Generally you should upgrade all components, because the whole version is tested together. Upgrade a single component only when an emergent issue occurs and you need to upgrade this component.

What causes "Timeout when waiting for search string 200 OK" when starting or upgrading a cluster? How to deal with it?

Possible reasons:

  • The process did not start normally.
  • The port is occupied.
  • The process did not stop normally.
  • You use rolling_update.yml to upgrade the cluster when the cluster is stopped (operation error).

Solution:

  • Log into the node to check the status of the process or port.
  • Correct the incorrect operation procedure.

Manage the cluster

Daily management

What are the common operations?

Job                                                | Playbook
Start the cluster                                  | ansible-playbook start.yml
Stop the cluster                                   | ansible-playbook stop.yml
Destroy the cluster                                | ansible-playbook unsafe_cleanup.yml (if the deployment directory is a mount point, an error is reported, but the results remain unaffected)
Clean data (for test)                              | ansible-playbook unsafe_cleanup_data.yml
Apply rolling updates                              | ansible-playbook rolling_update.yml
Apply rolling updates to TiKV                      | ansible-playbook rolling_update.yml --tags=tikv
Apply rolling updates to components except PD      | ansible-playbook rolling_update.yml --skip-tags=pd
Apply rolling updates to the monitoring components | ansible-playbook rolling_update_monitor.yml

How to log into TiDB?

You can log into TiDB like logging into MySQL. For example:

mysql -h 127.0.0.1 -uroot -P4000

How to modify the system variables in TiDB?

Similar to MySQL, TiDB includes static and solid parameters. You can directly modify static parameters using SET GLOBAL xxx = n, but note that the new value of a parameter is only effective within the life cycle of this instance.
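
For example (the variable name and value are placeholders):

SET GLOBAL tidb_distsql_scan_concurrency = 10;
SHOW GLOBAL VARIABLES LIKE 'tidb_distsql_scan_concurrency';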

Where and what are the data directories in TiDB (TiKV)?

TiKV data is located in the --data-dir, which includes four directories: backup, db, raft, and snap, used to store backup data, data, Raft data, and snapshot data respectively.

What are the system tables in TiDB?

Similar to MySQL, TiDB includes system tables as well, used to store the information required by the server when it runs.

Where are the TiDB/PD/TiKV logs?

By default, TiDB/PD/TiKV outputs its logs to standard error. If a log file is specified by --log-file at startup, the log is output to the specified file and rotated daily.

How to safely stop TiDB?

If the cluster is deployed using TiDB Ansible, you can use the ansible-playbook stop.yml command to stop the TiDB cluster. If the cluster is not deployed using TiDB Ansible, kill all the services directly; the TiDB components shut down gracefully.

Can kill be executed in TiDB?

  • You can kill DML statements. First use show processlist to find the ID corresponding with the session, and then run kill tidb [session id].
  • You can kill DDL statements. First use admin show ddl jobs to find the ID of the DDL job you need to kill, and then run admin cancel ddl jobs 'job_id' [, 'job_id'] .... For more details, see the ADMIN statement.
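
A sketch combining the commands above (the session ID and job ID are examples taken from the corresponding output):

show processlist;
kill tidb 2;                  -- 2 is an example session ID from show processlist
admin show ddl jobs;
admin cancel ddl jobs 5;      -- 5 is an example DDL job ID from admin show ddl jobs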

Does TiDB support session timeout?

TiDB supports the MySQL configuration settings wait_timeout and interactive_timeout. By configuring these settings, it is possible to have sessions automatically timeout after an idle period.
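
For example (3600 seconds is an arbitrary value):

SET GLOBAL wait_timeout = 3600;
SET GLOBAL interactive_timeout = 3600;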

What is the TiDB version management strategy for production environment? How to avoid frequent upgrade?

Currently, TiDB has a standard management of various versions. Each release contains a detailed change log and release notes. Whether it is necessary to upgrade in the production environment depends on the application system. It is recommended to learn the details about the functional differences between the previous and later versions before upgrading.

Take Release Version: v1.0.3-1-ga80e796 as an example of version number description:

  • v1.0.3 indicates the standard GA version.
  • -1 indicates the current version has one commit.
  • ga80e796 indicates the version git-hash.

What's the difference between various TiDB master versions? How to avoid using the wrong TiDB Ansible version?

The TiDB community is highly active. After the 1.0 GA release, the engineers have kept optimizing TiDB and fixing bugs. Therefore, the TiDB version is updated quite fast. If you want to keep informed of the latest version, see TiDB Weekly update.

It is recommended to deploy the TiDB cluster using the latest version of TiDB Ansible, which will also be updated along with the TiDB version. TiDB has a unified management of the version number after the 1.0 GA release. You can view the version number using the following two methods:

  • select tidb_version()
  • tidb-server -V

Is there a graphical deployment tool for TiDB?

Currently no.

How to scale TiDB horizontally?

As your business grows, your database might face the following three bottlenecks:

  • Lack of storage resources which means that the disk space is not enough.

  • Lack of computing resources such as high CPU occupancy.

  • Not enough write and read capacity.

You can scale TiDB as your business grows.

  • If the disk space is not enough, you can increase the capacity simply by adding more TiKV nodes. When the new node is started, PD will migrate the data from other nodes to the new node automatically.

  • If the computing resources are not enough, check the CPU consumption situation first before adding more TiDB nodes or TiKV nodes. When a TiDB node is added, you can configure it in the Load Balancer.

  • If the capacity is not enough, you can add both TiDB nodes and TiKV nodes.

Why does TiDB use gRPC instead of Thrift? Is it because Google uses it?

Not really. We need some good features of gRPC, such as flow control, encryption and streaming.

What does the 92 indicate in like(bindo.customers.name, jason%, 92)?

The 92 indicates the escape character, which is ASCII 92 by default.

Why does the data length shown by information_schema.tables.data_length differ from the store size on the TiKV monitoring panel?

Two reasons:

  • The two results are calculated in different ways. information_schema.tables.data_length is an estimated value by calculating the averaged length of each row, while the store size on the TiKV monitoring panel sums up the length of the data files (the SST files of RocksDB) in a single TiKV instance.
  • information_schema.tables.data_length is a logical value, while the store size is a physical value. The redundant data generated by multiple versions of the transaction is not included in the logical value, while the redundant data is compressed by TiKV in the physical value.

Manage the PD server

The TiKV cluster is not bootstrapped message is displayed when I access PD

Most of the APIs of PD are available only after the TiKV cluster is initialized. This message is displayed if PD is accessed when a new cluster is being deployed and PD has started but TiKV has not. If this message is displayed, start the TiKV cluster; once TiKV is initialized, PD becomes accessible.

The etcd cluster ID mismatch message is displayed when starting PD

This is because the --initial-cluster in the PD startup parameter contains a member that doesn't belong to this cluster. To solve this problem, check the corresponding cluster of each member, remove the wrong member, and then restart PD.

What's the maximum tolerance for time synchronization error of PD?

PD can tolerate any synchronization error, but a larger error value means a larger gap between the timestamp allocated by the PD and the physical time, which will affect functions such as read of historical versions.

How does the client connection find PD?

The client connection can only access the cluster through TiDB. TiDB connects to PD and TiKV, both of which are transparent to the client. When TiDB connects to any PD, the PD tells TiDB who the current leader is. If this PD is not the leader, TiDB reconnects to the leader PD.

What is the difference between the leader-schedule-limit and region-schedule-limit scheduling parameters in PD?

  • The leader-schedule-limit scheduling parameter is used to balance the Leader number of different TiKV servers, affecting the load of query processing.
  • The region-schedule-limit scheduling parameter is used to balance the replica number of different TiKV servers, affecting the data amount of different nodes.

Is the number of replicas in each region configurable? If yes, how to configure it?

Yes. Currently, you can only update the global number of replicas. When started for the first time, PD reads the configuration file (conf/pd.yml) and uses the max-replicas configuration in it. If you want to update the number later, use the pd-ctl configuration command config set max-replicas $num and view the enabled configuration using config show all. The updating does not affect the applications and is configured in the background.

Make sure that the total number of TiKV instances is always greater than or equal to the number of replicas you set. For example, 3 replicas need 3 TiKV instances at least. Additional storage requirements need to be estimated before increasing the number of replicas. For more information about pd-ctl, see PD Control User Guide.

How to check the health status of the whole cluster when lacking command line cluster management tools?

You can determine the general status of the cluster using the pd-ctl tool. For detailed cluster status, you need to use the monitoring system.

How to delete the monitoring data of a cluster node that is offline?

The offline node usually indicates the TiKV node. You can determine whether the offline process is finished by the pd-ctl or the monitor. After the node is offline, perform the following steps:

  1. Manually stop the relevant services on the offline node.
  2. Delete the node_exporter data of the corresponding node from the Prometheus configuration file.
  3. Delete the data of the corresponding node from Ansible inventory.ini.

Why couldn't I connect to the PD server using 127.0.0.1 when I was using the PD Control?

If your TiDB cluster is deployed using TiDB Ansible, the PD external service port is not bound to 127.0.0.1, so PD Control does not recognize 127.0.0.1 and you can only connect to it using the local IP address.
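
For example, connect using the node's deploy IP instead (the IP and port below are placeholders):

./pd-ctl -u http://192.168.0.10:2379 store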

Manage the TiDB server

How to set the lease parameter in TiDB?

The lease parameter (--lease=60) is set from the command line when starting a TiDB server. The value of the lease parameter impacts the Database Schema Changes (DDL) speed of the current session. In testing environments, you can set the value to 1s to speed up the testing cycle. But in production environments, it is recommended to set the value to minutes (for example, 60) to ensure DDL safety.

What is the processing time of a DDL operation?

The processing time is different for different scenarios. Generally, you can consider the following three scenarios:

  1. The Add Index operation with a relatively small number of rows in the corresponding data table: about 3s
  2. The Add Index operation with a relatively large number of rows in the corresponding data table: the processing time depends on the specific number of rows and the QPS at that time (the Add Index operation has a lower priority than ordinary SQL operations)
  3. Other DDL operations: about 1s

If the TiDB server instance that receives the DDL request is the same TiDB server instance that the DDL owner is in, the first and third scenarios above may cost only dozens to hundreds of milliseconds.

Why is it sometimes very slow to run DDL statements?

Possible reasons:

  • If you run multiple DDL statements together, the last few DDL statements might run slowly. This is because the DDL statements are executed serially in the TiDB cluster.
  • After you start the cluster successfully, the first DDL operation may take a longer time to run, usually around 30s. This is because the TiDB cluster is electing the leader that processes DDL statements.
  • The processing time of DDL statements in the first ten minutes after starting TiDB would be much longer than the normal case if you meet the following conditions: 1) TiDB cannot communicate with PD as usual when you are stopping TiDB (including the case of power failure); 2) TiDB fails to clean up the registration data from PD in time because TiDB is stopped by the kill -9 command. If you run DDL statements during this period, for the state change of each DDL, you need to wait for 2 * lease (lease = 45s).
  • If a communication issue occurs between a TiDB server and a PD server in the cluster, the TiDB server cannot get or update the version information from the PD server in time. In this case, you need to wait for 2 * lease for the state processing of each DDL.

Can I use S3 as the backend storage engine in TiDB?

No. Currently, TiDB only supports the distributed storage engine and the Goleveldb/RocksDB/BoltDB engine.

Can the Information_schema support more real information?

As part of MySQL compatibility, TiDB supports a number of INFORMATION_SCHEMA tables. Many of these tables also have a corresponding SHOW command. For more information, see Information Schema.

What's the explanation of the TiDB Backoff type scenario?

In the communication process between the TiDB server and the TiKV server, the Server is busy or backoff.maxsleep 20000ms log message is displayed when processing a large volume of data. This is because the system is busy while the TiKV server processes data. At this time, usually you can view that the TiKV host resources usage rate is high. If this occurs, you can increase the server capacity according to the resources usage.

What's the maximum number of concurrent connections that TiDB supports?

By default, there is no limit on the maximum number of connections per TiDB server. A limit can be enforced by setting max-server-connections in the config.toml file. If excessive concurrency leads to an increase in response time, it is recommended to increase the capacity by adding TiDB nodes.
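
A hedged sketch, assuming the item sits at the top level of config.toml (4096 is only an example value; 0 means no limit):

# config.toml
max-server-connections = 4096   # example value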

How to view the creation time of a table?

The create_time of tables in the information_schema is the creation time.
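
For example (the schema and table names are placeholders):

SELECT create_time FROM information_schema.tables WHERE table_schema = 'test' AND table_name = 't1';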

What is the meaning of EXPENSIVE_QUERY in the TiDB log?

When TiDB executes a SQL statement, the query is regarded as an EXPENSIVE_QUERY if an operator is estimated to process over 10000 rows of data. You can modify the tidb-server configuration parameter to adjust the threshold and then restart the tidb-server.

How to control or change the execution priority of SQL commits?

TiDB supports changing the priority on a per-session, global or individual statement basis. Priority has the following meaning:

  • HIGH_PRIORITY: this statement has a high priority, that is, TiDB gives priority to this statement and executes it first.

  • LOW_PRIORITY: this statement has a low priority, that is, TiDB reduces the priority of this statement during the execution period.

You can combine the above two parameters with the DML of TiDB to use them. For example:

  1. Adjust the priority by writing SQL statements in the database:

    select HIGH_PRIORITY | LOW_PRIORITY count(*) from table_name;
    insert HIGH_PRIORITY | LOW_PRIORITY into table_name insert_values;
    delete HIGH_PRIORITY | LOW_PRIORITY from table_name;
    update HIGH_PRIORITY | LOW_PRIORITY table_reference set assignment_list where where_condition;
    replace HIGH_PRIORITY | LOW_PRIORITY into table_name;
  2. The full table scan statement automatically adjusts itself to a low priority. analyze has a low priority by default.

What's the trigger strategy for auto analyze in TiDB?

Trigger strategy: auto analyze is automatically triggered when the number of pieces of data in a new table reaches 1000 and this table has no write operation within one minute.

When the ratio of modified rows to the total number of rows is larger than tidb_auto_analyze_ratio, the analyze statement is automatically triggered. The default value of tidb_auto_analyze_ratio is 0.5, indicating that this feature is enabled by default. To ensure safety, its minimum value is 0.3 when the feature is enabled, and it must be smaller than pseudo-estimate-ratio whose default value is 0.8; otherwise pseudo statistics will be used for a period of time. It is recommended to set tidb_auto_analyze_ratio to 0.5.
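
For example, to check or adjust the threshold (0.5 is the default mentioned above):

SHOW GLOBAL VARIABLES LIKE 'tidb_auto_analyze_ratio';
SET GLOBAL tidb_auto_analyze_ratio = 0.5;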

Can I use hints to override the optimizer behavior?

TiDB supports multiple hints to override the default query optimizer behavior. The basic usage is similar to MySQL, with several TiDB specific extensions:

SELECT column_name FROM table_name USE INDEX(index_name) WHERE where_condition;

Why is the Information schema is changed error reported?

TiDB handles a SQL statement using the schema in effect at the time and supports online asynchronous DDL changes. A DML statement and a DDL statement might be executed at the same time, and you must ensure that each statement is executed using a consistent schema. Therefore, when a DML operation meets an ongoing DDL operation, the Information schema is changed error might be reported. Some improvements have been made to prevent too many of these errors from being reported during DML operations.

Now, there are still a few reasons for this error reporting (the latter two are unrelated to tables):

  • Some tables involved in the DML operation are the same tables involved in the ongoing DDL operation.
  • The DML operation goes on for a long time. During this period, many DDL statements have been executed, which causes more than 1024 schema version changes. This value is set to 100 in TiDB before v3.0.5. Since v3.0.5, the default value is 1024 which can be modified by modifying the tidb_max_delta_schema_count variable.
  • The TiDB server that accepts the DML request is not able to load schema information for a long time (possibly caused by the connection failure between TiDB and PD or TiKV). During this period, many DDL statements have been executed, which causes more than 100 schema version changes.

What are the causes of the Information schema is out of date error?

When executing a DML statement, if TiDB fails to load the latest schema within a DDL lease (45s by default), the Information schema is out of date error might occur. Possible causes are:

  • The TiDB instance that executed this DML was killed, and the transaction execution corresponding to this DML statement took longer than a DDL lease. When the transaction was committed, the error occurred.
  • TiDB failed to connect to PD or TiKV while executing this DML statement. As a result, TiDB failed to load schema within a DDL lease or disconnected from PD due to the keepalive setting.

Why is an error reported when executing DDL statements under high concurrency?

When you execute DDL statements (such as creating tables in batches) under high concurrency, a very few of these statements might fail because of key conflicts during the concurrent execution.

It is recommended to keep the number of concurrent DDL statements under 20. Otherwise, you need to retry the failed statements from the client.

Manage the TiKV server

3 replicas for each Region is sufficient for a testing environment. However, you should never operate a TiKV cluster with under 3 nodes in a production scenario. Depending on infrastructure, workload, and resiliency needs, you may wish to increase this number.

The cluster ID mismatch message is displayed when starting TiKV

This is because the cluster ID stored in local TiKV is different from the cluster ID specified by PD. When a new PD cluster is deployed, PD generates random cluster IDs. TiKV gets the cluster ID from PD and stores the cluster ID locally when it is initialized. The next time when TiKV is started, it checks the local cluster ID with the cluster ID in PD. If the cluster IDs don't match, the cluster ID mismatch message is displayed and TiKV exits.

If you previously deployed a PD cluster, but then removed the PD data and deployed a new PD cluster, this error occurs because TiKV uses the old data to connect to the new PD cluster.

The duplicated store address message is displayed when starting TiKV

This is because the address in the startup parameter has been registered in the PD cluster by another TiKV instance. This error occurs when there is no data folder under the directory that TiKV --store specifies, but you use the previous parameters to restart TiKV.

To solve this problem, use the store delete function to delete the previous store and then restart TiKV.

TiKV leader replicas and follower replicas use the same compression algorithm. Why is the amount of occupied disk space different?

TiKV stores data in the LSM tree, in which each layer has a different compression algorithm. If two replicas of the same data are located in different layers in two TiKV nodes, the two replicas might occupy different space.

What are the features of TiKV block cache?

TiKV implements the Column Family (CF) feature of RocksDB. By default, the KV data is eventually stored in the 3 CFs (default, write and lock) within RocksDB.

  • The default CF stores real data and the corresponding parameter is in [rocksdb.defaultcf]. The write CF stores the data version information (MVCC) and index-related data, and the corresponding parameter is in [rocksdb.writecf]. The lock CF stores the lock information and the system uses the default parameter.
  • The Raft RocksDB instance stores Raft logs. The default CF mainly stores Raft logs and the corresponding parameter is in [raftdb.defaultcf].
  • All CFs have a shared block-cache to cache data blocks and improve RocksDB read speed. The size of block-cache is controlled by the block-cache-size parameter. A larger value of the parameter means more hot data can be cached and is more favorable to read operation. At the same time, it consumes more system memory.
  • Each CF has an individual write-buffer and the size is controlled by the write-buffer-size parameter.
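
A hedged sketch of where these parameters live in the TiKV configuration file (the sizes are example values, not recommendations):

# tikv.toml
[rocksdb.defaultcf]
block-cache-size = "1GB"       # example value
write-buffer-size = "128MB"    # example value

[rocksdb.writecf]
block-cache-size = "512MB"     # example value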

Why does the "TiKV channel full" error occur?

  • The Raftstore thread is too slow or blocked by I/O. You can view the CPU usage status of Raftstore.
  • TiKV is too busy (CPU, disk I/O, etc.) and cannot keep up with requests.

Why does TiKV frequently switch Region leader?

  • Leaders cannot reach followers, for example, because of a network problem or node failure.
  • Leader balancing from PD, for example, PD wants to transfer leaders from a hotspot node to others.

If a node is down, will the service be affected? If yes, how long?

TiKV uses Raft to replicate data among multiple replicas (by default 3 replicas for each Region). If one replica goes wrong, the other replicas can guarantee data safety. Based on the Raft protocol, if a single leader fails as the node goes down, a follower in another node is soon elected as the Region leader after a maximum of 2 * lease time (lease time is 10 seconds).

What are the TiKV scenarios that take up high I/O, memory, CPU, and exceed the parameter configuration?

Writing or reading a large volume of data in TiKV takes up high I/O, memory and CPU. Executing very complex queries costs a lot of memory and CPU resources, such as the scenario that generates large intermediate result sets.

Does TiKV support SAS/SATA disks or mixed deployment of SSD/SAS disks?

No. For OLTP scenarios, TiDB requires high I/O disks for data access and operation. As a distributed database with strong consistency, TiDB has some write amplification such as replica replication and bottom layer storage compaction. Therefore, it is recommended to use NVMe SSD as the storage disks in TiDB best practices. Mixed deployment of TiKV and PD is not supported.

Is the Range of the Key data table divided before data access?

No. It differs from the table splitting rules of MySQL. In TiKV, the table Range is dynamically split based on the size of Region.

How does Region split?

Region is not divided in advance, but it follows a Region split mechanism. When the Region size exceeds the value of the region-max-size or region-max-keys parameters, split is triggered. After the split, the information is reported to PD.
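
These thresholds are TiKV configuration items; a sketch, assuming the [coprocessor] section (the values shown are illustrative only):

# tikv.toml
[coprocessor]
region-max-size = "144MB"    # example value
region-max-keys = 1440000    # example value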

Does TiKV have the innodb_flush_log_trx_commit parameter like MySQL, to guarantee the security of data?

Yes. Currently, the standalone storage engine uses two RocksDB instances. One instance is used to store the raft-log. When the sync-log parameter in TiKV is set to true, each commit is mandatorily flushed to the raft-log. If a crash occurs, you can restore the KV data using the raft-log.
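
A sketch of the corresponding TiKV setting, assuming the [raftstore] section of the configuration file:

# tikv.toml
[raftstore]
sync-log = true    # each commit is flushed to the raft-log, trading some write performance for safety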

WAL is sequential writing, and currently we do not apply a separate configuration to it. The recommended configuration is as follows:

  • SSD
  • RAID 10 preferred
  • Cache strategy of RAID card and I/O scheduling strategy of the operating system: currently no specific best practices; you can use the default configuration in Linux 7 or later
  • NUMA: no specific suggestion; for memory allocation strategy, you can use interleave = all
  • File system: ext4

How is the write performance in the most strict data available mode (sync-log = true)?

Generally, enabling sync-log reduces write performance by about 30%. For write performance when sync-log is set to false, see Performance test result for TiDB using Sysbench.

Can Raft + multiple replicas in the TiKV architecture achieve absolute data safety? Is it necessary to apply the most strict mode (sync-log = true) to a standalone storage?

Data is redundantly replicated between TiKV nodes using the Raft consensus algorithm to ensure recoverability should a node failure occur. Only when the data has been written into more than 50% of the replicas will the application return ACK (two out of three nodes). However, theoretically, two nodes might crash. Therefore, except for scenarios with less strict requirement on data safety but extreme requirement on performance, it is strongly recommended that you enable the sync-log mode.

As an alternative to using sync-log, you may also consider having five replicas instead of three in your Raft group. This would allow for the failure of two replicas, while still providing data safety.

For a standalone TiKV node, it is still recommended to enable the sync-log mode. Otherwise, the last write might be lost in case of a node failure.

Since TiKV uses the Raft protocol, multiple network roundtrips occur during data writing. What is the actual write delay?

Theoretically, TiDB has a write delay of 4 more network roundtrips than standalone databases.

Does TiDB have an InnoDB memcached plugin like MySQL, which can directly use the KV interface without needing an independent cache?

TiKV supports calling its interface separately. Theoretically, you can take an instance as a cache. But because TiDB is a distributed relational database, we do not support using TiKV separately.

What is the Coprocessor component used for?

  • Reduce the data transmission between TiDB and TiKV
  • Make full use of the distributed computing resources of TiKV to execute computing pushdown

The error message IO error: No space left on device While appending to file is displayed

This is because the disk space is not enough. You need to add nodes or enlarge the disk space.

Why does the OOM (Out of Memory) error occur frequently in TiKV?

The memory usage of TiKV mainly comes from the block-cache of RocksDB, which is 40% of the system memory size by default. When the OOM error occurs frequently in TiKV, you should check whether the value of block-cache-size is set too high. In addition, when multiple TiKV instances are deployed on a single machine, you need to explicitly configure the parameter to prevent multiple instances from using too much system memory that results in the OOM error.

Can both TiDB data and RawKV data be stored in the same TiKV cluster?

No. TiDB (or data created from the transactional API) relies on a specific key format. It is not compatible with data created from RawKV API (or data from other RawKV-based services).

TiDB test

What is the performance test result for TiDB using Sysbench?

At the beginning, many users tend to do a benchmark test or a comparison test between TiDB and MySQL. We have also done a similar official test and found the results largely consistent, although the test data has some bias. Because the architecture of TiDB differs greatly from MySQL, it is hard to find a direct benchmark point. The suggestions are described in the following FAQs.

What's the relationship between the TiDB cluster capacity (QPS) and the number of nodes? How does TiDB compare to MySQL?

  • Within 10 nodes, the relationship between TiDB write capacity (Insert TPS) and the number of nodes is roughly 40% linear increase. Because MySQL uses single-node write, its write capacity cannot be scaled.
  • In MySQL, the read capacity can be increased by adding replicas, but the write capacity cannot be increased except using sharding, which has many problems.
  • In TiDB, both the read and write capacity can be easily increased by adding more nodes.

The performance test of MySQL and TiDB by our DBA shows that the performance of a standalone TiDB is not as good as MySQL

TiDB is designed for scenarios where sharding is used because the capacity of a MySQL standalone is limited, and where strong consistency and complete distributed transactions are required. One of the advantages of TiDB is pushing down computing to the storage nodes to execute concurrent computing.

TiDB is not suitable for small tables (for example, below the ten-million-row level), because its strength in concurrency cannot show with a small amount of data and limited Regions. A typical example is the counter table, in which a few rows are updated frequently. In TiDB, these rows become several Key-Value pairs in the storage engine and settle into a Region located on a single node. The overhead of background replication to guarantee strong consistency and the operations from TiDB to TiKV lead to poorer performance than standalone MySQL.

Backup and restore

How to back up data in TiDB?

Currently, for the backup of a large volume of data, the preferred method is using BR. Otherwise, the recommended tool is Dumpling. Although the official MySQL tool mysqldump is also supported in TiDB to back up and restore data, its performance is worse than BR and it needs much more time to back up and restore large volumes of data.

Migrate the data and traffic

Full data export and import

Mydumper

See Mydumper Instructions.

Loader

See Loader Instructions.

How to migrate an application running on MySQL to TiDB?

Because TiDB supports most MySQL syntax, you can generally migrate your applications to TiDB without changing a single line of code.

If I accidentally import the MySQL user table into TiDB, or forget the password and cannot log in, how to deal with it?

Restart the TiDB service with the skip-grant-table=true option added in the configuration file. Log into the cluster without a password and recreate the user, or recreate the mysql.user table using the following statements:

DROP TABLE IF EXISTS mysql.user;

CREATE TABLE if not exists mysql.user (
    Host                  CHAR(64),
    User                  CHAR(16),
    Password              CHAR(41),
    Select_priv           ENUM('N','Y') NOT NULL DEFAULT 'N',
    Insert_priv           ENUM('N','Y') NOT NULL DEFAULT 'N',
    Update_priv           ENUM('N','Y') NOT NULL DEFAULT 'N',
    Delete_priv           ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_priv           ENUM('N','Y') NOT NULL DEFAULT 'N',
    Drop_priv             ENUM('N','Y') NOT NULL DEFAULT 'N',
    Process_priv          ENUM('N','Y') NOT NULL DEFAULT 'N',
    Grant_priv            ENUM('N','Y') NOT NULL DEFAULT 'N',
    References_priv       ENUM('N','Y') NOT NULL DEFAULT 'N',
    Alter_priv            ENUM('N','Y') NOT NULL DEFAULT 'N',
    Show_db_priv          ENUM('N','Y') NOT NULL DEFAULT 'N',
    Super_priv            ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_tmp_table_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
    Lock_tables_priv      ENUM('N','Y') NOT NULL DEFAULT 'N',
    Execute_priv          ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_view_priv      ENUM('N','Y') NOT NULL DEFAULT 'N',
    Show_view_priv        ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_routine_priv   ENUM('N','Y') NOT NULL DEFAULT 'N',
    Alter_routine_priv    ENUM('N','Y') NOT NULL DEFAULT 'N',
    Index_priv            ENUM('N','Y') NOT NULL DEFAULT 'N',
    Create_user_priv      ENUM('N','Y') NOT NULL DEFAULT 'N',
    Event_priv            ENUM('N','Y') NOT NULL DEFAULT 'N',
    Trigger_priv          ENUM('N','Y') NOT NULL DEFAULT 'N',
    PRIMARY KEY (Host, User));
INSERT INTO mysql.user VALUES ("%", "root", "", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y");

Can TiDB provide services while Loader is running?

TiDB can provide services while Loader is running because Loader inserts the data logically. But do not perform the related DDL operations.

How to export the data in TiDB?

Currently, TiDB does not support select into outfile. You can export data from TiDB using tools such as Mydumper (see Mydumper Instructions above), or by saving query results to a file from the MySQL client.

How to migrate from DB2 or Oracle to TiDB?

To migrate all the data or migrate incrementally from DB2 or Oracle to TiDB, see the following solution:

  • Use the official migration tool of Oracle, such as OGG, Gateway, CDC (Change Data Capture).
  • Develop a program for importing and exporting data.
  • Export Spool as text file, and import data using Load infile.
  • Use a third-party data migration tool.

Currently, it is recommended to use OGG.

Error: java.sql.BatchUpdateExecption:statement count 5001 exceeds the transaction limitation while using Sqoop to write data into TiDB in batches

In Sqoop, --batch means committing 100 statements in each batch, but by default each statement contains 100 SQL statements. So, 100 * 100 = 10000 SQL statements, which exceeds 5000, the maximum number of statements allowed in a single TiDB transaction.

Two solutions:

  • Add the -Dsqoop.export.records.per.statement=10 option as follows:

    sqoop export \
        -Dsqoop.export.records.per.statement=10 \
        --connect jdbc:mysql://mysql.example.com/sqoop \
        --username sqoop ${user} \
        --password ${passwd} \
        --table ${tab_name} \
        --export-dir ${dir} \
        --batch
  • You can also increase the limited number of statements in a single TiDB transaction, but this will consume more memory.

Does TiDB have a function like the Flashback Query in Oracle? Does it support DDL?

Yes, it does. And it supports DDL as well. For details, see how TiDB reads data from history versions.

Migrate the data online

Syncer

Syncer user guide

See Syncer User Guide.

How to monitor the Syncer status?

Download the Syncer JSON dashboard and import it into Grafana. Then edit the Prometheus configuration file and add the following content:

- job_name: 'syncer_ops'            # job name
  static_configs:
    - targets: ['10.10.1.1:10096']  # Syncer monitoring address and port; Prometheus pulls the Syncer data from this address

Restart Prometheus.

Is there a current solution to replicating data from TiDB to other databases like HBase and Elasticsearch?

No. Currently, the data replication depends on the application itself.

Does Syncer support replicating only some of the tables when Syncer is replicating data?

Yes. For details, see Syncer User Guide.

Do frequent DDL operations affect the replication speed of Syncer?

Frequent DDL operations may affect the replication speed. For Syncer, DDL operations are executed serially. When DDL operations are executed during data replication, data is replicated serially as well, so the replication speed is slowed down.

If the machine where Syncer runs is broken and the directory of the syncer.meta file is lost, what should I do?

When you replicate data using Syncer GTID, the syncer.meta file is constantly updated during the replication process. The current version of Syncer does not contain the design for high availability. The syncer.meta configuration file of Syncer is directly stored on the hard disks, which is similar to other tools in the MySQL ecosystem, such as Mydumper.

Two solutions:

  • Put the syncer.meta file on a relatively safe disk. For example, use disks with RAID 1.
  • Restore the location information of history replication according to the monitoring data that Syncer reports to Prometheus regularly. But the location information might be inaccurate due to the delay when a large amount of data is replicated.

If the downstream TiDB data is not consistent with the MySQL data during the replication process of Syncer, will DML operations cause exits?

  • If the data exists in the upstream MySQL but does not exist in the downstream TiDB, when the upstream MySQL performs the UPDATE or DELETE operation on this row of data, Syncer will not report an error and the replication process will not exit, but this row of data still does not exist in the downstream.
  • If a conflict exists in the primary key indexes or the unique indexes in the downstream, performing the UPDATE operation will cause an exit and performing the INSERT operation will not cause an exit.

Migrate the traffic

How to migrate the traffic quickly?

It is recommended to build a multi-source MySQL -> TiDB real-time replication environment using the Syncer tool. You can migrate the read and write traffic in batches by editing the network configuration as needed. Deploy a stable network LB (HAProxy, LVS, F5, DNS, and so on) on the upper layer, so that you can implement seamless migration by directly editing the network configuration.

Is there a limit for the total write and read capacity in TiDB?

The total read capacity has no limit. You can increase the read capacity by adding more TiDB servers. Generally the write capacity has no limit as well. You can increase the write capacity by adding more TiKV nodes.

The error message transaction too large is displayed

As distributed transactions need to conduct two-phase commit and the bottom layer performs Raft replication, if a transaction is very large, the commit process would be quite slow, and the following Raft replication flow is thus stuck. To avoid this problem, we limit the transaction size:

  • A transaction is limited to 5000 SQL statements (by default)
  • Each Key-Value entry is no more than 6MB
  • The total number of Key-Value entries is no more than 300,000 rows
  • The total size of Key-Value entries is no more than 100MB

There are similar limits on Google Cloud Spanner.

How to import data in batches?

When you import data, insert in batches and keep the number of rows within 10,000 for each batch.
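
For example, a minimal sketch of batched inserts; the table t(id, c) below is hypothetical and only for illustration:

-- t is a hypothetical example table.
-- Insert many rows per statement and keep each batch within 10,000 rows.
INSERT INTO t (id, c) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');
-- ... continue with the next batch of at most 10,000 rows ...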

Does TiDB release space immediately after deleting data?

None of the DELETE, TRUNCATE and DROP operations release space immediately. For the TRUNCATE and DROP operations, after the TiDB GC (Garbage Collection) time (10 minutes by default), the data is deleted and the space is released. For the DELETE operation, the data is deleted but the space is not released immediately after GC; the space is reused when subsequent data is written into RocksDB and a compaction is executed.
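
To see the GC settings that control when space is reclaimed, you can query the mysql.tidb table. This is a minimal sketch; the exact set of tikv_gc_* rows may differ slightly between versions:

-- View GC-related settings and status, such as tikv_gc_life_time and tikv_gc_run_interval.
SELECT VARIABLE_NAME, VARIABLE_VALUE FROM mysql.tidb WHERE VARIABLE_NAME LIKE "tikv_gc%";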

Can I execute DDL operations on the target table when loading data?

No. None of the DDL operations can be executed on the target table when you load data, otherwise the data fails to be loaded.

Does TiDB support the replace into syntax?

Yes. But LOAD DATA does not support the REPLACE INTO syntax.
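
A minimal illustration of REPLACE INTO, assuming a hypothetical table t with a primary key id:

-- t is a hypothetical example table.
-- Inserts the row if the key does not exist; otherwise removes the old row and inserts the new one.
REPLACE INTO t (id, c) VALUES (1, 'a');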

Why does the query speed become slow after deleting data?

Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the Region Merge feature is in development, which is expected to solve this problem. For details, see the deleting data section in TiDB Best Practices.

What is the most efficient way of deleting data?

When deleting a large amount of data, it is recommended to use DELETE FROM t WHERE xx LIMIT 5000;. Delete in a loop and use Affected Rows == 0 as the condition to end the loop, so as not to exceed the limit on the transaction size. With the prerequisite of meeting business filtering logic, it is recommended to add a strong filter index column or directly use the primary key to select the range, such as id >= 5000*n+m and id < 5000*(n+1)+m.

If the amount of data that needs to be deleted at a time is very large, this loop method will get slower and slower because each deletion traverses backward. After deleting the previous data, lots of deleted flags remain for a short period (then all will be processed by Garbage Collection) and influence the following Delete statement. If possible, it is recommended to refine the Where condition. See details in TiDB Best Practices.
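
The following is a minimal sketch of the range-based batch deletion described above, assuming a hypothetical table t with an integer primary key id; run such statements in a loop in the application and stop when Affected Rows is 0:

-- t is a hypothetical example table; each statement deletes one id range of at most 5000 rows.
DELETE FROM t WHERE id >= 5000*0 AND id < 5000*1 LIMIT 5000;
DELETE FROM t WHERE id >= 5000*1 AND id < 5000*2 LIMIT 5000;
-- ... continue with the next id ranges ...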

How to improve the data loading speed in TiDB?

  • The Lightning tool is developed for distributed data import. Note that, for performance reasons, the data import process does not go through a complete transaction process, so the ACID constraints of the data being imported cannot be guaranteed while the import is in progress; they can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or performing a full backup and restore (truncate the original table and then import data).
  • Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics.

What should I do if it is slow to reclaim storage space after deleting data?

You can configure concurrent GC to increase the speed of reclaiming storage space. The default concurrency is 1, and you can modify it to at most 50% of the number of TiKV instances using the following command:

update mysql.tidb set VARIABLE_VALUE="3" where VARIABLE_NAME="tikv_gc_concurrency";
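
To confirm the change, you can query the same row back:

SELECT VARIABLE_VALUE FROM mysql.tidb WHERE VARIABLE_NAME="tikv_gc_concurrency";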

SQL optimization

TiDB execution plan description

See Understand the Query Execution Plan.

Statistics collection

See Introduction to Statistics.

How to optimize select count(1)?

The count(1) statement counts the total number of rows in a table. Improving the degree of concurrency can significantly improve the speed. To modify the concurrency, refer to the document. But it also depends on the CPU and I/O resources. TiDB accesses TiKV in every query. When the amount of data is small, all MySQL data is in memory, while TiDB still needs to conduct a network access to TiKV.
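
As an illustration, the concurrency is adjusted through a session variable. The following sketch assumes tidb_distsql_scan_concurrency is the relevant variable and t is a hypothetical table; check the document referenced above for the exact variable name in your version:

-- Raise the scan concurrency for the current session, then run the count.
SET SESSION tidb_distsql_scan_concurrency = 50;
SELECT COUNT(1) FROM t;  -- t is a hypothetical example table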

Recommendations:

  1. Improve the hardware configuration. See Software and Hardware Requirements.
  2. Improve the concurrency. The default value is 10. You can increase it to 50 and have a try, but usually the speed improvement is 2-4 times over the default value.
  3. Test the count in scenarios with a large amount of data.
  4. Optimize the TiKV configuration. See Performance Tuning for TiKV.

How to view the progress of the current DDL job?

You can use admin show ddl to view the progress of the current DDL job. The operation is as follows:

admin show ddl;
*************************** 1. row ***************************
  SCHEMA_VER: 140
       OWNER: 1a1c4174-0fcd-4ba0-add9-12d08c4077dc
RUNNING_JOBS: ID:121, Type:add index, State:running, SchemaState:write reorganization, SchemaID:1, TableID:118, RowCount:77312, ArgLen:0, start time: 2018-12-05 16:26:10.652 +0800 CST, Err:<nil>, ErrCount:0, SnapshotVersion:404749908941733890
     SELF_ID: 1a1c4174-0fcd-4ba0-add9-12d08c4077dc

From the above results, you can see that an add index operation is currently being processed. From the RowCount field of the RUNNING_JOBS column, you can also see that the add index operation has already added indexes for 77312 rows.

How to view the DDL job?

  • admin show ddl: to view the running DDL job
  • admin show ddl jobs: to view all the results in the current DDL job queue (including tasks that are running and waiting to run) and the last ten results in the completed DDL job queue
  • admin show ddl job queries 'job_id' [, 'job_id'] ...: to view the original SQL statement of the DDL task corresponding to the job_id; the job_id only searches the running DDL job and the last ten results in the DDL history job queue

Does TiDB support CBO (Cost-Based Optimization)? If yes, to what extent?

Yes. TiDB uses the cost-based optimizer. The cost model and statistics are constantly optimized. TiDB also supports join algorithms like hash join and sort-merge join.
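
For illustration, the join algorithm chosen by the optimizer can be influenced with optimizer hints. The following sketch assumes the TIDB_SMJ and TIDB_HJ hint names and two hypothetical tables t1 and t2; check the optimizer hints documentation for the hints available in your version:

-- t1 and t2 are hypothetical example tables.
-- Suggest a sort-merge join between t1 and t2.
SELECT /*+ TIDB_SMJ(t1, t2) */ * FROM t1 JOIN t2 ON t1.id = t2.id;

-- Suggest a hash join instead.
SELECT /*+ TIDB_HJ(t1, t2) */ * FROM t1 JOIN t2 ON t1.id = t2.id;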

How to determine whether I need to execute analyze on a table?

View the Healthy field using show stats_healthy and generally you need to execute analyze on a table when the field value is smaller than 60.
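
A minimal sketch, assuming a hypothetical table t:

-- Check the health of the statistics.
SHOW STATS_HEALTHY;

-- If the Healthy value of t is below 60, refresh its statistics.
ANALYZE TABLE t;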

What is the ID rule when a query plan is presented as a tree? What is the execution order for this tree?

No rule exists for these IDs, but the IDs are unique: a counter increments by one for each plan node that is generated. The execution order has nothing to do with the ID. The whole query plan is a tree; the execution process starts from the root node and the data is returned to the upper level continuously. For details about the query plan, see Understanding the TiDB Query Execution Plan.

In the TiDB query plan, cop tasks are in the same root. Are they executed concurrently?

Currently the computing tasks of TiDB belong to two different types of tasks: cop task and root task.

cop task is the computing task which is pushed down to the KV end for distributed execution; root task is the computing task for single point execution on the TiDB end.

Generally the input data of the root task comes from the cop tasks; while the root task processes data, the cop tasks on TiKV can process data at the same time, waiting for the root task on TiDB to pull it. Therefore, cop tasks can be considered as executed concurrently, but their data has an upstream and downstream relationship. During the execution process, they are executed concurrently during part of the time. For example, the first cop task is processing the data in [100, 200] while the second cop task is processing the data in [1, 100]. For details, see Understanding the TiDB Query Plan.

Database optimization

TiDB

Edit TiDB options

See The TiDB Command Options.

How to scatter the hotspots?

In TiDB, data is divided into Regions for management. Generally, the TiDB hotspot means the Read/Write hotspot in a Region. In TiDB, for the table whose primary key (PK) is not an integer or which has no PK, you can properly break Regions by configuring SHARD_ROW_ID_BITS to scatter the Region hotspots. For details, see the introduction of SHARD_ROW_ID_BITS in TiDB Specific System Variables and Syntax.
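
A minimal sketch of configuring SHARD_ROW_ID_BITS, assuming a hypothetical table t without an integer primary key:

-- t is a hypothetical example table.
-- Scatter the implicit row IDs into 2^4 = 16 shards to avoid write hotspots.
CREATE TABLE t (c VARCHAR(255)) SHARD_ROW_ID_BITS = 4;

-- Or adjust it for an existing table.
ALTER TABLE t SHARD_ROW_ID_BITS = 4;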

TiKV

Tune TiKV performance

See Tune TiKV Performance.

Monitor

Prometheus monitoring framework

See Overview of the Monitoring Framework.

Key metrics of monitoring

See Key Metrics.

Is there a better way of monitoring the key metrics?

The monitoring system of TiDB consists of Prometheus and Grafana. From the dashboard in Grafana, you can monitor various running metrics of TiDB which include the monitoring metrics of system resources, of client connection and SQL operation, of internal communication and Region scheduling. With these metrics, the database administrator can better understand the system running status, running bottlenecks and so on. In the practice of monitoring these metrics, we list the key metrics of each TiDB component. Generally you only need to pay attention to these common metrics. For details, see Key Metrics.

The Prometheus monitoring data is deleted every 15 days by default. Could I set it to two months or delete the monitoring data manually?

Yes. Find the startup script on the machine where Prometheus is started, edit the following startup parameter, and restart Prometheus:

--storage.tsdb.retention="60d"

Region Health monitor

In TiDB 2.0, Region health is monitored in the PD metric monitoring page, in which the Region Health monitoring item shows the statistics of all the Region replica status. miss means a shortage of replicas and extra means extra replicas exist. In addition, Region Health also shows the isolation level by label. level-1 means the Region replicas are physically isolated at the first label level. All the Regions are in level-0 when the location label is not configured.

What is the meaning of selectsimplefull in Statement Count monitor?

It means full table scan but the table might be a small system table.

What is the difference between QPS and Statement OPS in the monitor?

The QPS statistics cover all SQL statements, including use database, load data, begin, commit, set, show, insert and select.

The Statement OPS statistics only cover application-related SQL statements, including select, update and insert, so the Statement OPS statistics match applications better.

Deployment on the cloud

Public cloud

What cloud vendors are currently supported by TiDB?

TiDB supports deployment on Google GKE, AWS EKS and Alibaba Cloud ACK.

In addition, TiDB is currently available on JD Cloud and UCloud, and has the first-level database entries on them.

Troubleshoot

TiDB custom error messages

ERROR 8005 (HY000): Write Conflict, txnStartTS is stale

Check whether tidb_disable_txn_auto_retry is set to on. If so, set it to off; if it is already off, increase the value of tidb_retry_limit until the error no longer occurs.
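
A minimal sketch of adjusting these variables; the value 20 is only an example:

-- 0 means off: allow automatic retries of write conflicts.
SET GLOBAL tidb_disable_txn_auto_retry = 0;

-- If retries still run out, raise the retry limit (example value).
SET GLOBAL tidb_retry_limit = 20;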

ERROR 9001 (HY000): PD Server Timeout

A PD request timeout. Check the status, monitoring data and log of the PD server, and the network between the TiDB server and the PD server.

ERROR 9002 (HY000): TiKV Server Timeout

A TiKV request timeout. Check the status, monitoring data and log of the TiKV server, and the network between the TiDB server and the TiKV server.

ERROR 9003 (HY000): TiKV Server is Busy

The TiKV server is busy. This usually occurs when the database load is very high. Check the status, monitoring data and log of the TiKV server.

ERROR 9004 (HY000): Resolve Lock Timeout

A lock resolving timeout. This usually occurs when a large number of transaction conflicts exist. Check the application code to see whether lock contention exists in the database.

ERROR 9005 (HY000): Region is unavailable

The accessed Region is not available. A Raft Group is not available, with possible reasons like an inadequate number of replicas. This usually occurs when the TiKV server is busy or the TiKV node is shut down. Check the status, monitoring data and log of the TiKV server.

ERROR 9006 (HY000): GC life time is shorter than transaction duration

The interval of GC Life Time is too short. The data that should have been read by long transactions might be deleted. You can increase the GC life time using the following statement:

update mysql.tidb set variable_value='30m' where variable_name='tikv_gc_life_time';

ERROR 9007 (HY000): Write Conflict

Check whether tidb_disable_txn_auto_retry is set to on. If so, set it to off; if it is already off, increase the value of tidb_retry_limit until the error no longer occurs.

MySQL native error messages

ERROR 2013 (HY000): Lost connection to MySQL server during query

  • Check whether panic is in the log.
  • Check whether OOM exists in dmesg using dmesg -T | grep -i oom.
  • A long period without access might also lead to this error; it is usually caused by a TCP timeout. If a TCP connection is not used for a long time, the operating system kills it.

ERROR 1105 (HY000): other error: unknown error Wire Error(InvalidEnumValue(4004))

This error usually occurs when the version of TiDB does not match the version of TiKV. To avoid version mismatch, upgrade all components when you upgrade the version.

ERROR 1148 (42000): the used command is not allowed with this TiDB version

When you execute the LOAD DATA LOCAL statement but the MySQL client does not allow executing this statement (the value of the local_infile option is 0), this error occurs.

The solution is to use the --local-infile=1 option when you start the MySQL client. For example, use command like mysql --local-infile=1 -u root -h 127.0.0.1 -P 4000. The default value of local-infile is different in different versions of MySQL client, therefore you need to configure it in some MySQL clients and do not need to configure it in some others.

ERROR 9001 (HY000): PD server timeout start timestamp may fall behind safe point

This error occurs when TiDB fails to access PD. A worker in the TiDB background continuously queries the safepoint from PD, and this error occurs if it fails to get a response within 100s. Generally, it is because the PD disk is slow or busy, or the network between TiDB and PD fails. For the details of common errors, see Error Number and Fault Diagnosis.

TiDB log error messages

EOF error

When the client or proxy disconnects from TiDB, TiDB does not immediately notice that the connection has been disconnected. Instead, TiDB can only notice the disconnection when it begins to return data to the connection. At this time, the log prints an EOF error.