How to Test TiDB Using Sysbench
It is recommended to use Sysbench 1.0 or later, which can be downloaded from the Sysbench GitHub repository.
Test plan
TiDB configuration
A higher log level means fewer logs are printed, which positively affects TiDB performance. Specifically, you can add the following setting in the TiUP configuration file:
```yaml
server_configs:
  tidb:
    log.level: "error"
```
It is also recommended to make sure that tidb_enable_prepared_plan_cache is enabled and that you allow sysbench to use prepared statements by setting --db-ps-mode=auto. See SQL Prepared Execution Plan Cache for documentation about what the SQL plan cache does and how to monitor it.
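For example, you can check and enable the plan cache from a MySQL client. This assumes a TiDB version in which tidb_enable_prepared_plan_cache is a system variable (v6.1 and later); in earlier versions it is controlled by the prepared-plan-cache.enabled configuration item instead:

```sql
-- Check whether the prepared plan cache is enabled
SHOW VARIABLES LIKE 'tidb_enable_prepared_plan_cache';
-- Enable it for new sessions if it is OFF
SET GLOBAL tidb_enable_prepared_plan_cache = ON;
```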
TiKV configuration
A higher log level also means better performance for TiKV.
There are multiple Column Families on the TiKV cluster, which are mainly used to store different types of data, including Default CF, Write CF, and Lock CF. For the Sysbench test, you only need to focus on Default CF and Write CF. The proportion of imported data stored in each Column Family is constant among TiDB clusters:
Default CF : Write CF = 4 : 1
Configuring the block cache of RocksDB on TiKV should be based on the machine's memory size, in order to make full use of the memory. For example, to deploy a TiKV cluster on a virtual machine with 40 GB of memory, you can devote about 30 GB to the block cache and split it 4 : 1 between Default CF (24 GB) and Write CF (6 GB), matching the proportion above:
```yaml
server_configs:
  tikv:
    log-level: "error"
    rocksdb.defaultcf.block-cache-size: "24GB"
    rocksdb.writecf.block-cache-size: "6GB"
```
You can also configure TiKV to use a shared block cache:

```yaml
server_configs:
  tikv:
    storage.block-cache.capacity: "30GB"
```
For more detailed information on TiKV performance tuning, see Tune TiKV Performance.
Test process
Sysbench configuration
This is an example of the Sysbench configuration file:
```txt
mysql-host={TIDB_HOST}
mysql-port=4000
mysql-user=root
mysql-password=password
mysql-db=sbtest
time=600
threads={8, 16, 32, 64, 128, 256}
report-interval=10
db-driver=mysql
```
The above parameters can be adjusted according to actual needs. Among them, TIDB_HOST is the IP address of the TiDB server (the configuration file cannot include multiple addresses), and threads is the number of concurrent connections in the test, which can be chosen from 8, 16, 32, 64, 128, and 256. When importing data, it is recommended to set threads = 8 or 16. After adjusting threads, save the file under the name config.
See the following as a sample config file:
```txt
mysql-host=172.16.30.33
mysql-port=4000
mysql-user=root
mysql-password=password
mysql-db=sbtest
time=600
threads=16
report-interval=10
db-driver=mysql
```
Data import
Before importing the data, it is necessary to adjust some TiDB settings. Execute the following command in the MySQL client:

```sql
set global tidb_disable_txn_auto_retry = off;
```
Then exit the client.
Restart the MySQL client and execute the following SQL statement to create the database sbtest:

```sql
create database sbtest;
```
Adjust the order in which Sysbench scripts create indexes. Sysbench imports data in the order of "Build Table -> Insert Data -> Create Index", which causes data import to take longer on TiDB. You can adjust the order to speed up the import of data. Suppose that you use Sysbench version 1.0.20. You can adjust the order in either of the following two ways:
- Download the modified oltp_common.lua file for TiDB and overwrite the /usr/share/sysbench/oltp_common.lua file with it.
- In /usr/share/sysbench/oltp_common.lua, move lines 235-240 to be right behind line 198.
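For the second way, the reshuffle can be scripted with sed. A minimal sketch, assuming the sysbench 1.0.20 line numbers above (a backup copy is kept next to the original):

```shell
# move_index_block FILE: move lines 235-240 of FILE to just after line 198,
# so that indexes are created before the data is inserted (sysbench 1.0.20).
move_index_block() {
  src="$1"
  cp "$src" "$src.bak"                                # keep a backup
  sed -n '235,240p' "$src.bak" > /tmp/index_block.lua # extract the index-creation block
  # delete the block from its old position, then re-insert it after line 198
  sed '235,240d' "$src.bak" | sed '198r /tmp/index_block.lua' > "$src"
}
# Usage: move_index_block /usr/share/sysbench/oltp_common.lua
```

Because line 198 comes before the deleted range, its line number is unchanged after the deletion, so the two sed passes compose correctly.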
At the command line, enter the following command to start importing data. The config file is the one configured in the previous step:
```shell
sysbench --config-file=config oltp_point_select --tables=32 --table-size=10000000 prepare
```
Warming data and collecting statistics
To warm the data, load it from disk into memory's block cache. Warmed data significantly improves the overall performance of the system. It is recommended to warm the data once after restarting the cluster.
```shell
sysbench --config-file=config oltp_point_select --tables=32 --table-size=10000000 prewarm
```
Point select test command
```shell
sysbench --config-file=config oltp_point_select --tables=32 --table-size=10000000 --db-ps-mode=auto --rand-type=uniform run
```
Update index test command
```shell
sysbench --config-file=config oltp_update_index --tables=32 --table-size=10000000 --db-ps-mode=auto --rand-type=uniform run
```
Read-only test command
```shell
sysbench --config-file=config oltp_read_only --tables=32 --table-size=10000000 --db-ps-mode=auto --rand-type=uniform run
```
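To sweep all the concurrency levels listed in the test plan without editing config each time, you can generate one run command per level in a small script. This is only a sketch; it assumes that a --threads value given on the command line takes precedence over the one in the config file:

```shell
# Write one sysbench run command per concurrency level to run_all.sh.
for t in 8 16 32 64 128 256; do
  echo sysbench --config-file=config oltp_point_select \
    --tables=32 --table-size=10000000 \
    --threads="$t" --db-ps-mode=auto --rand-type=uniform run
done > run_all.sh
```

Inspect run_all.sh, then execute it with `sh run_all.sh`, redirecting each run's output to a log file as needed.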
Common issues
TiDB and TiKV are both properly configured under high concurrency, why is the overall performance still low?
This issue is often related to the use of a proxy. You can apply pressure to a single TiDB server, sum up the results, and compare the total with the result obtained through the proxy.
Take HAProxy as an example. The nbproc parameter can increase the maximum number of processes it starts. Later versions of HAProxy also support nbthread and cpu-map. All of these can mitigate the negative impact of proxy use on performance.
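As a sketch, the global section of haproxy.cfg might use these options as follows. Note that nbproc was removed in HAProxy 2.5, where nbthread and cpu-map are the supported tuning knobs, so check which options your version accepts:

```txt
global
    # Pre-2.5 HAProxy only: start multiple worker processes
    # nbproc 2
    # 1.8+: run multiple threads in one process and pin them to CPUs 0-7
    nbthread 8
    cpu-map auto:1/1-8 0-7
```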
Under high concurrency, why is the CPU utilization rate of TiKV still low?
Although the overall CPU utilization rate is low for TiKV, the CPU utilization rate of some modules in the cluster might be high.
The maximum concurrency limits of some TiKV modules, such as storage readpool, coprocessor, and gRPC, can be adjusted through the TiKV configuration file. The actual CPU usage can be observed through Grafana's TiKV Thread CPU monitoring panel. If one of these modules is the bottleneck, increase its concurrency accordingly.
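As a sketch, the relevant concurrency knobs can be raised in the same TiUP configuration file. The item names and values below are illustrative and vary across TiKV versions, so verify them against the configuration reference for your release:

```yaml
server_configs:
  tikv:
    server.grpc-concurrency: 6             # gRPC network threads
    readpool.unified.max-thread-count: 10  # unified read pool (storage + coprocessor)
```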
Given that TiKV has not yet reached the CPU usage bottleneck under high concurrency, why is TiDB's CPU utilization rate still low?
Some high-end hardware uses CPUs with a NUMA architecture, where cross-node access to remote memory greatly reduces performance. By default, TiDB uses all CPUs of the server, and goroutine scheduling inevitably leads to cross-node memory access.
Therefore, it is recommended to deploy n TiDB instances on a NUMA server (where n is the number of NUMA nodes), and set the TiDB parameter max-procs to the number of CPU cores in each NUMA node.
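In a TiUP topology file, this amounts to binding each TiDB instance to one NUMA node and capping max-procs accordingly. A sketch only; the host, ports, and core count below are hypothetical, and numa_node requires numactl to be installed on the target machine:

```yaml
server_configs:
  tidb:
    performance.max-procs: 16  # cores per NUMA node (hypothetical)
tidb_servers:
  - host: 172.16.30.33
    numa_node: "0"             # first instance bound to NUMA node 0
  - host: 172.16.30.33
    port: 4001
    status_port: 10081
    numa_node: "1"             # second instance bound to NUMA node 1
```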